CI / CD Pipeline Patterns For JavaScript (Part 1)

Django Shelton
4 min read · May 9, 2017

Continuous Integration (CI) and Continuous Deployment (CD) are practices used by developers all over the world to increase the quality of their software, and decrease the time to market for features and bug fixes. But in the world of JavaScript where the lifetime of a framework or style being “good” can often be measured in days, how do you achieve these practices in a consistent manner?

This article will go through a basic pipeline pattern for implementing CI and CD, and look at how this can be applied to both Web Applications and Services, as well as ways of extending this pattern depending on the use case. The code shown will be written in Groovy and intended for use with the Pipeline plugin for Jenkins, but the concepts can be applied to your platform of choice. The aim is also to be framework agnostic, as the concepts explained here should apply to any app you're building in JavaScript. In this part the focus will be on a single page application built in something like Angular or React, while part two will talk about Node.js apps.

Continuous Integration

“Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.” — ThoughtWorks

The value of CI in catching problems as and when they occur, rather than leaving them to be spotted by users, cannot be overstated. The primary methods of testing a JavaScript app are linting and unit tests, and this is where we will start with our pipeline. The tools you use to run your tests may differ, but in general you will want some sort of task runner (Gulp, Grunt, npm scripts) that can execute your tests from the command line. The commands to run these are then stored in scripts and executed by Jenkins (or your CI platform of choice). Before we can test an app, however, we have to get the source code and prepare our environment.

node() {
  stage("Checkout") {
    checkout scm: [$class: 'GitSCM',
      branches: [[name: 'origin/develop']],
      userRemoteConfigs: [[
        credentialsId: '<credentials>',
        url: '<gitURL>']]
    ]
  }
  stage("Prepare") {
    sh './jenkins/prepare.sh'
  }
  stage("Codestyle") {
    sh './jenkins/codestyle.sh'
  }
  stage("Unit Test") {
    sh './jenkins/unit.test.sh'
    junit healthScaleFactor: 5, testResults: 'reports/**/*.xml'
  }
}

The sample code above introduces a number of pipeline concepts. The first is a “node”, which in Jenkins provides a context to do work in: it schedules the steps within it onto an executor and creates a workspace (a directory) for them. Multiple nodes can be used to parallelise work and to move the execution context between VMs or environments (a sketch of this follows below), but for our basic purposes, with purely sequential tasks, we will wrap all our work in a single node.
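For example, here is a minimal, hypothetical sketch of how two of the later stages could be run on separate nodes in parallel. It is not part of the original pipeline: because each node allocates its own workspace, it assumes the Checkout stage has already called stash name: 'source' so the code can be restored in each branch.

parallel(
  "Codestyle": {
    node() {
      // restore the checked-out source into this node's fresh workspace
      unstash 'source'
      sh './jenkins/codestyle.sh'
    }
  },
  "Unit Test": {
    node() {
      unstash 'source'
      sh './jenkins/unit.test.sh'
    }
  }
)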

“Stages” are used to define subsets of the pipeline, which are visualised by plugins on the Jenkins dashboard. Within stages, single tasks called “steps” tell Jenkins what to actually do. Stages may be small but critically should be logically distinct from each other (e.g. “Codestyle” and “Unit Test”, not just “Test”), as this enables easier troubleshooting and bug finding.

For ease of reading and in order to “separate our concerns”, the actual details of the steps are generally stored inside separate scripts, rather than forcing all of our implementation details into a single file.

The checkout scm step defines the type of Source Code Management tool you are using (in this case Git), the branch to check out, the repository URL to fetch from, and the credentials required (if any). The sh step executes the given shell command, which in our case runs shell scripts stored inside a directory called jenkins.

The Prepare stage should be used to run any prerequisites for running your app, such as an npm install or bower install, and can also be used to run any necessary cleanup tasks such as npm prune.
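As a rough, hypothetical illustration, the commands inside ./jenkins/prepare.sh could look something like the stage below, written inline for brevity; the exact commands depend entirely on your project.

stage("Prepare") {
  // Illustrative only; in practice these commands live in ./jenkins/prepare.sh.
  // Assumes bower is available as a devDependency of the project.
  sh '''
    npm prune
    npm install
    ./node_modules/.bin/bower install
  '''
}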

If you’re working in a big team, or if you simply like consistency, you can add a Codestyle stage to your pipeline to ensure coding standards are maintained automatically, and to reject anything that doesn’t meet them. Tools like JSHint and ESLint can do this for JavaScript, and can be coordinated to run using a task runner, then executed in your pipeline.
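A hypothetical inline version of that stage can be as small as a single command; the example below assumes package.json defines a "lint" script (e.g. "eslint src/"), which is not something the original pipeline specifies.

stage("Codestyle") {
  // Illustrative only; in practice this lives in ./jenkins/codestyle.sh.
  // A non-zero exit code from the linter fails the stage.
  sh 'npm run lint'
}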

Once your environment has been set up and any linting rules have passed, you can run your Unit Test stage. This will of course differ depending on your app, as you may be using any combination of a task runner and Mocha, Jasmine or Karma (among many others) to run the tests, but the concept is the same and the tests should be executed from a script, in this case `unit.test.sh`. Reports produced by your unit tests can be published, and in the case of Jenkins the junit plugin can publish XML reports for you.
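As a hedged example, if you were using Karma, the contents of unit.test.sh might boil down to a single-run invocation with a JUnit-style reporter; the reporter and output location below are assumptions that depend on your Karma configuration.

stage("Unit Test") {
  // Illustrative only; assumes karma-junit-reporter is installed and
  // configured in karma.conf.js to write its XML output under reports/.
  sh './node_modules/.bin/karma start karma.conf.js --single-run --reporters junit,dots'
  junit healthScaleFactor: 5, testResults: 'reports/**/*.xml'
}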

Running this pipeline against your code on every commit to your Develop branch (as well as your feature and bug fix branches, depending on your team’s branching model) will ensure any code on those branches has the level of quality you define in your unit test specification, where you can set metrics such as coverage thresholds to fail on (one way to do this is sketched below). You can also set your local environment to run these tests automatically on every file save, catching problems even earlier in the development cycle. Any failing unit tests should cause the pipeline to fail, which is critical when it comes to implementing the next steps, as you don’t want to deploy code with failing tests…
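For instance, one hypothetical way to enforce a coverage threshold, assuming your unit tests are run under nyc (Istanbul) so that coverage data is written out, is to add a check step after the tests inside the Unit Test stage; the 80% figures are purely illustrative.

// Illustrative only; assumes unit.test.sh runs the tests through nyc so that
// coverage output exists. The command exits non-zero, failing the pipeline,
// if coverage falls below any of the thresholds.
sh './node_modules/.bin/nyc check-coverage --lines 80 --functions 80 --branches 80'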

Part 2 will cover how to extend this basic pipeline to deploy your app, and look at how the pipeline might differ when working with Node.js services rather than front end apps.
