Understanding CI/CD Pipelines

If you’ve been reading up on continuous integration, delivery and deployment (collectively known as CI/CD), you’ve almost certainly come across the term “automated pipeline” and the fact that it plays a central role in implementing these practices. But what exactly is a CI/CD pipeline? And how do you get one?

The aim of CI/CD is to reduce the time it takes to deliver software to users without compromising on quality. You achieve this by checking in changes frequently, testing them rigorously and addressing feedback promptly, so that you can deploy your changes to live as often as you want.

What is a CI/CD Pipeline?

When we talk about a CI/CD pipeline, we’re referring to the series of steps that your code goes through in order to get from your development machine, through testing and staging, and finally out of the door and into your users’ hands.

As the CI/CD strategy is to execute this process on a very regular basis – usually multiple times a day – it’s essential to automate as much of it as possible, with each step either triggering the next one or raising a flag if something has gone wrong.

Automation not only speeds up the overall process, and therefore the individual feedback loops, but also ensures each step is performed consistently and reliably.

The Stages of a Build Pipeline

Although the exact shape of your CI/CD pipeline will depend on the type of product you’re building and on your organization’s requirements, there is a general pattern that all pipelines tend to follow, which we’ve outlined here.

The process begins with a commit to master (or whichever branch you’ve nominated as the CI branch), which triggers either a build or an initial set of unit tests. The results are fed back to a dashboard, and if either the build or a test fails, it’s flagged with an automated notification.

You can either configure the pipeline to stop the process so that you can address the issue and start again with a new commit, or create exceptions for particular types of failure so that the process can continue.
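As a concrete illustration, here’s a minimal sketch in Python of that stop-or-continue logic. The stage names, the make targets, the notification hook and the set of tolerated failures are all hypothetical stand-ins for your own tooling and policy:

```python
import subprocess

# Hypothetical stage commands – substitute your own build and test tooling.
STAGES = [
    ("lint", ["make", "lint"]),
    ("build", ["make", "build"]),
    ("unit-tests", ["make", "test-unit"]),
]

# Failure types you've decided the pipeline may tolerate and continue past.
ALLOWED_FAILURES = {"lint"}

def notify(stage, exit_code):
    # Stand-in for posting to a dashboard, chat channel, or email alert.
    print(f"[pipeline] stage '{stage}' failed with exit code {exit_code}")

def run_pipeline() -> bool:
    for name, cmd in STAGES:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            notify(name, result.returncode)
            if name not in ALLOWED_FAILURES:
                return False  # stop here; fix the issue and push a new commit
    return True

if __name__ == "__main__":
    raise SystemExit(0 if run_pipeline() else 1)
```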

The next stage involves a series of automated tests, with feedback provided after each round of testing. Usually, tests are structured so that the quickest tests are run first, thereby providing feedback as early as possible.

More involved tests that will occupy servers for longer, such as end-to-end tests, are only run once the previous tests have passed successfully. This makes for more efficient use of resources.
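Sketched in Python, with made-up suite names and runtimes, that ordering might look like this:

```python
# Made-up suites with rough expected runtimes in seconds; only the
# relative ordering matters here.
TEST_TIERS = {
    "unit": 60,
    "integration": 600,
    "end-to-end": 3600,
}

def run_suite(name: str) -> bool:
    print(f"running {name} tests...")  # stand-in for your test runner
    return True

# Cheapest feedback first: the expensive suites only tie up servers
# once everything quicker has already passed.
for suite in sorted(TEST_TIERS, key=TEST_TIERS.get):
    if not run_suite(suite):
        break
```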

Once automated tests have completed, the software is typically deployed to a series of staging environments, some of which may be used for further manual testing while others may be used for training, support, and customer previews.

The final stage of the CI/CD pipeline architecture involves making the changes live and can either be triggered manually (in the case of continuous delivery) or automatically (as with continuous deployment).

Let’s look at some considerations for each of these stages in a bit more detail.

Flags and Branches

The first step towards adopting continuous integration is to get your entire codebase into a version control system (VCS, also known as source control management or SCM), such as Git, Mercurial or Perforce, and then to get everyone on your team into the habit of committing their changes frequently. Each commit to master initiates the pipeline, building and testing the code to provide rapid feedback on what you’ve written.

While frequent commits are an important CI/CD practice, if you’re working on a larger feature that will take several days or weeks to complete, committing periodically during that time can feel like a double-edged sword.

Pushing your changes through the pipeline in regular increments gives you rapid feedback and reduces the likelihood of complex merge conflicts compared with waiting until the feature is finished.

On the other hand, you probably don’t want to release a half-finished feature to users, and you may not be ready to share your work-in-progress with internal users via staging environments either.

Feature flags and feature branches offer ways around this issue. With feature flags, you specify the environments in which your code is visible to users. Your changes are still committed to master and visible to your team, but you decide when the functionality becomes available in staging and production.
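Here’s a minimal sketch of the idea, assuming a simple in-process flag table and an APP_ENV environment variable; real feature-flag systems typically keep this state in a database or a dedicated service rather than in code:

```python
import os

# Hypothetical flag table: which environments can see each feature.
FEATURE_FLAGS = {
    "new-checkout": {"dev", "staging"},               # not yet live
    "dark-mode": {"dev", "staging", "production"},    # fully rolled out
}

def feature_enabled(flag: str) -> bool:
    environment = os.environ.get("APP_ENV", "dev")
    return environment in FEATURE_FLAGS.get(flag, set())

# The code ships to every environment, but the behaviour only appears
# where the flag says it should.
if feature_enabled("new-checkout"):
    print("render the new checkout flow")
else:
    print("render the existing checkout flow")
```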

Feature branches allow you to develop your feature in a separate branch without losing out on the benefits of automated build and test. By triggering the CI/CD pipeline on each commit to a feature branch, just as you do with a commit to master, you can get rapid feedback on what you’ve built.
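As a rough sketch, assuming your CI server hands you the pushed ref from the VCS webhook, the decision about whether to trigger the pipeline might look like this (the branch-naming convention is an assumption):

```python
import re

# Branches that should trigger the pipeline: master plus feature branches.
TRIGGER_PATTERNS = [re.compile(r"^master$"), re.compile(r"^feature/.+")]

def should_trigger(ref: str) -> bool:
    # A push event's ref typically looks like "refs/heads/<branch-name>".
    branch = ref.removeprefix("refs/heads/")
    return any(p.match(branch) for p in TRIGGER_PATTERNS)

assert should_trigger("refs/heads/master")
assert should_trigger("refs/heads/feature/new-checkout")
assert not should_trigger("refs/heads/experiment/scratch")
```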

Build and Test

Once a commit has triggered an instance of your pipeline, the next stages are build and test. If you have automated unit tests, these are usually run before the build, alongside linting and static analysis checks.

The build tool you use (such as Ant or Maven) and the details of the build steps will depend on the language and framework you’re working in. By running the automated build on a dedicated build server, you can avoid issues further down the line caused by missing dependencies – the classic “works on my machine” problem.
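For instance, a Maven-based pipeline might script its build stage along these lines, assuming the Checkstyle plugin is configured in the project; substitute the equivalent commands for your own toolchain:

```python
import subprocess
import sys

# A typical ordering on the build server: static checks and unit tests
# first for fast feedback, then the build itself.
STEPS = [
    ["mvn", "checkstyle:check"],        # linting / static analysis
    ["mvn", "test"],                    # unit tests
    ["mvn", "package", "-DskipTests"],  # produce the build artifacts
]

for cmd in STEPS:
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)  # fail the pipeline; nothing later should run
```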

The output of the build step includes the installers, binaries or containers (the build artifacts), which are then deployed to testing environments and combined with other parts of the system to run higher-level automated tests: integration tests, component tests, and end-to-end tests as well as non-functional testing, such as performance and security analysis.

These tests may be run in parallel to speed up the pipeline and provide you with feedback faster.
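A minimal sketch of that fan-out, assuming the suites are independent of one another and exposed as hypothetical make targets:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Independent higher-level suites that can safely run at the same time
# against the deployed build artifacts.
SUITES = ["integration", "component", "e2e"]

def run(suite: str):
    return suite, subprocess.run(["make", f"test-{suite}"]).returncode

with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
    results = dict(pool.map(run, SUITES))

# The stage only passes if every parallel suite passed.
failed = [suite for suite, code in results.items() if code != 0]
raise SystemExit(1 if failed else 0)
```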

Containers vs VMs

For the results of your automated tests to be reliable, you need to ensure they run consistently.

Ideally, your test environments should be configured to resemble production as closely as possible, and they should be reset between test runs to avoid environmental inconsistencies disrupting your test results.

Virtual machines (VMs) have long been a popular choice for running test environments, as you can script the process of refreshing them for each new build under test.

However, tearing down and spinning up new VMs takes time, while your scripts will need to include configuration for each virtual environment to provide all the dependencies the software needs to run. When new dependencies are added, the environment scripts will need to be updated – an easy detail to miss until you’re wondering why your build won’t run.

You can avoid these issues by packaging your code in a container as part of the initial build step. A container includes all the dependencies that the software needs to run, making it highly portable and easier to deploy to different environments.
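A sketch of that packaging step, assuming Docker is available on the build agent, the repository contains a Dockerfile describing the runtime dependencies, and you’re already logged in to a registry; the image name is a placeholder:

```python
import subprocess

# Tag the image with the commit SHA so every build is traceable back to
# the exact change that produced it.
sha = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

subprocess.run(["docker", "build", "-t", f"myapp:{sha}", "."], check=True)

# The resulting image is the build artifact: push it to a registry so
# every later pipeline stage deploys the same immutable container.
subprocess.run(["docker", "push", f"myapp:{sha}"], check=True)
```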

If you’re hosting your CI/CD on your own infrastructure, you’ll still need VMs to deploy the containers to, but there is less work involved in preparing the test environment, which helps to keep the pipeline operating efficiently. If you’re running your pipeline in the cloud, adopting containers means you can use managed services and offload the infrastructure side to your cloud provider.

Pre-production Environments

The number of testing and staging environments in your pipeline architecture will depend on what you’re building and the needs of the different stakeholder groups in your organization. Examples include exploratory testing, security reviews, user research, sales demos, training environments and sandboxes for support staff to replicate customer issues.

Automating the creation of and deployment to these environments is more efficient than refreshing them manually, and you can configure different pipeline triggers for different environments.

For example, while your test environments might be updated with every build, you may decide to refresh staging environments less frequently – perhaps once a day or once a week with the latest successful build.
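Expressed as a sketch, with entirely hypothetical trigger rules:

```python
from datetime import datetime

# Hypothetical policy: test environments track every green build, while
# staging only refreshes from the latest successful nightly build.
def environments_to_refresh(build_passed: bool, now: datetime) -> list[str]:
    targets = []
    if build_passed:
        targets.append("test")
        if now.hour == 2:  # the nightly refresh window
            targets.append("staging")
    return targets

print(environments_to_refresh(True, datetime(2024, 5, 1, 2, 15)))
# ['test', 'staging']
```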

Deploy

Once your code changes have passed each of the previous pipeline stages successfully, they are ready for release to production. That final step can either be manual or automatic.

Releasing manually (known as continuous delivery) is useful if you want to control when new features or functionality are made available, if your deployment process involves downtime for your users, or if your product is installed and you want to batch up changes and deliver them according to a regular release schedule.

With a fully automated continuous deployment process, changes are deployed to live as long as they have passed all previous stages.
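Reduced to a sketch, the difference between the two models is simply who flips the final switch; everything else in the pipeline is identical:

```python
# Continuous delivery vs. continuous deployment in one setting:
# the same gate runs in both cases; only who opens it differs.
AUTO_DEPLOY = True  # False => a human approves each release (delivery)

def release(stages_passed: bool, approved_by_human: bool = False) -> str:
    if not stages_passed:
        return "blocked: a previous pipeline stage failed"
    if AUTO_DEPLOY or approved_by_human:
        return "deploying to production"
    return "waiting for manual approval"

print(release(stages_passed=True))
```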

Depending on the number of developers working on the codebase and the frequency of their commits, this can mean you’re deploying updates to users dozens of times a day – a feat that is practically impossible without an automated pipeline.

Understanding CI/CD Pipelines: To Summarize

CI/CD makes software development more efficient by highlighting issues as early as possible; it helps you to fail fast by moving integrations earlier and getting feedback sooner (aka shifting left). Building an automated pipeline helps you put these techniques into practice.

When it comes to designing your own CI/CD process, it helps to build it up in stages, starting with continuous integration. The exact stages of the pipeline and the logic determining when each stage is triggered depend on your product and your organization.

Choosing a CI/CD platform that provides you with the flexibility to configure your pipeline for your requirements, while still being easy to manage, will help you forge a dependable release process and improve the quality of your software.