CI / CD Pipeline Patterns For JavaScript (Part 2)

Django Shelton
4 min read · May 12, 2017
Kiran and me pretending to work on Continuous Delivery

Part one of this three-part series covered the pattern for introducing Continuous Integration into your JavaScript development flow. This part extends that approach to introduce Continuous Delivery, while part three will focus on the other types of testing you could extend your pipelines to include.

Continuous Delivery

“Continuous Delivery is the natural extension of Continuous Integration, an approach in which teams ensure that every change to the system is releasable, and release any version with the push of a button. Continuous Delivery aims to make releases boring, so that we can deliver frequently and get quick feedback on what users care about.” — Thoughtworks

Now that the quality of any code reaching the later stages of our pipeline has been ensured, we can move on to deploying that code to our environments automatically. Your steps will differ between deploying to a test environment and deploying to production, as there will likely be different hoops to jump through or different build steps to take, but in general you now want to build and deploy your app, so these are the steps we will implement next.

//Checkout, Prepare, Unit test stages…
stage("Build") {
  sh './jenkins/build.sh'
}
stage("Deploy") {
  sh './jenkins/deploy.sh'
}

Again, we store the implementation details of the stages inside scripts. The build stage should carry out the tasks needed to take your app from source code to a deployable artefact, such as dependency bundling, uglification, minification, and bundling of assets into a distribution folder. There are many tools which can do this for you, such as Webpack and Browserify, but the general pattern of having an executable command line task run through a script, coordinated by a pipeline, is the same.
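
As a sketch, a minimal ./jenkins/build.sh for a Webpack-based app might look like the following; the script name comes from the pipeline above, but the assumption that minification and asset bundling are configured in webpack.config.js is mine:

#!/bin/bash
set -e  # fail the Build stage if any command fails

# Install the dependencies listed in package.json
npm install

# Bundle the app into ./dist; minification, uglification and asset
# copying are assumed to be configured in webpack.config.js
./node_modules/.bin/webpack --config webpack.config.js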

Similarly, the deploy stage could do several things, from simply copying your dist folder onto a web server, to packaging your distribution folder as an RPM and installing that onto a remote machine. In general, this step should handle everything needed to get your deployable code to a place where it is actually running and can be accessed in a test or production environment. You may also want to upload your built code to an artefact storage service such as Artifactory.
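
In the simplest case, ./jenkins/deploy.sh could be little more than a copy onto the web server; the host and both paths below are placeholders for whatever your environment uses:

#!/bin/bash
set -e

# Copy the built assets to the web server's document root
# (the host and paths here are placeholders)
rsync -avz --delete dist/ deploy@web-server.example.com:/var/www/my-app/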

Web Apps vs Services

So far we have been focusing mainly on how a pipeline might look for a front end web app (such as an Angular or React application), not how you might approach deploying a Node web service. While many concepts are the same, such as preparing the environment and running unit tests, there are some differences when it comes to building and deploying, as well as some extra stages to add.

In addition to unit tests, services are more likely to require integration tests of some sort, to ensure changes to the application don't break the connections the service makes to other services or to a data store. This stage could take place after the unit test stage, adding another layer to the assurance provided by CI.
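
In the pipeline this is just one more stage, following the same pattern as the others; the script name integration-test.sh is an assumed convention:

//…Unit test stage
stage("Integration Test") {
  sh './jenkins/integration-test.sh'
}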

Services call for a little more setup than static .js files, requiring an environment in which to run and often needing several external modules installed in that environment. Rather than building to a dist folder, the build stage of a service may create an image for a containerisation platform like Docker, which ensures everything the service needs to run is shipped with the service itself.
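
For a Dockerised service, build.sh might therefore produce an image instead of a dist folder. A sketch, assuming a Dockerfile in the repository root and an illustrative image name (BUILD_NUMBER is the standard Jenkins build number variable):

#!/bin/bash
set -e

# Bake the service, its node_modules and its runtime into an image,
# tagged with the Jenkins build number for traceability
docker build -t my-org/my-service:"${BUILD_NUMBER}" .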

When deploying a service, there are also more steps to take than simply copying a dist folder to the target location. The service needs to start running, it may need to stop any previously running instances to ensure traffic is sent to the right place, and load balancers may need to be notified of the new instance. If this all sounds daunting, that's because it is not an easy problem to solve, and it is part of the reason technologies like Docker and Kubernetes exist.
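
For a single Docker host, a crude version of this could stop the old container before starting the new one. This is a sketch only, with no zero-downtime guarantees, and the container name and port are placeholders:

#!/bin/bash
set -e

# Stop and remove the previous instance, if one exists
docker stop my-service || true
docker rm my-service || true

# Start the new version and expose it on the host
docker run -d --name my-service -p 3000:3000 my-org/my-service:"${BUILD_NUMBER}"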

Common Pipelines

A microservices architecture is particularly popular amongst the Node community (and the software community in general), but it raises problems when it comes to pipelines. If each service maintains its own pipeline and bash scripts, anything beyond a single service becomes unwieldy to manage: a single change to your pipeline would mean redeploying all of your services to maintain consistency, which takes far too much time and effort to be reasonable. The solution is to build your pipelines in a central repository and version them as their own entities, rather than as part of a larger application; each service then simply loads the latest stable pipeline.

Even when building single applications I would encourage this pattern, as it creates a reusable asset which you can apply to future work, saving you loads of time further down the line. In bigger organisations you could also use this pattern to standardise deployments across teams and speed up the time it takes to start new projects. My colleague Jack Stevenson has written a great article on this pattern here.
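
One way to realise this in Jenkins is shared libraries: the stage definitions live in their own versioned repository, and each service's Jenkinsfile shrinks to an import and a call. The library and function names here are placeholders:

@Library('pipeline-lib@stable') _

// The shared library defines the checkout, test, build and deploy
// stages once; each service's Jenkinsfile reduces to this call
standardNodePipeline {
  appName = 'my-service'
}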

Part 3 will look at additional types of testing you can add to your pipelines, and when these might be appropriate.
