CI/CD Best Practices

Continuous integration, delivery and deployment are software development practices born out of the DevOps movement. They make the process of building, testing and releasing code more efficient and get working product into the hands of users more quickly than traditional methods. Done well, a build pipeline enables teams to deliver working software at pace and get timely feedback on their latest changes.

Building a CI/CD pipeline should not be a fire-and-forget exercise. Just like the software under development, it pays to take an iterative approach to your CI/CD practices: keep analyzing the data and listening to feedback in order to refine your CI/CD process. In this article, we’ll explore the continuous integration/continuous delivery best practices that you should consider applying to your pipeline.

Commit early, commit often

Ensuring all your source code, configuration files, scripts, libraries and executables are in source control is an essential first step towards implementing continuous integration, enabling you to keep track of every change.

The tool alone, however, is not enough – it’s how you use it that counts. Continuous integration seeks to make the process of integrating changes from multiple contributors easier by sharing smaller updates more frequently.

Each commit triggers a set of automated tests to provide prompt feedback on the change. Committing regularly ensures your team works on the same foundations, thereby facilitating collaboration, and reduces the likelihood of painful merge conflicts when integrating large, complex changes.

To reap the benefits of continuous integration, it’s essential for everyone to share their changes with the rest of the team by pushing to main (master) and to update their working copy regularly to pick up everyone else’s changes. As a general rule of thumb, aim to commit to main (master) at least once a day.

Pushing changes to the main branch this frequently can feel uncomfortable for teams used to working in long-running branches. This can be due to a fear of scrutiny by others or perhaps because the size of a task is too large for it to be completed in a day.

Creating a team culture of collaboration rather than judgment is essential and, as with any change to working practices, it pays to discuss how you work as a team. Working as a team to break tasks down into smaller, discrete chunks can help individuals adopt this practice.

Where long-running branches are used to host new features that are not ready for release to live, another option is to use feature flags. These allow you to control visibility of particular functionality in different settings, so that the code changes can be merged and included in the build for quality assurance without being available to end users.
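
As a minimal sketch of how a feature flag can keep merged code dark, the Python example below switches between an existing and a new code path based on an environment variable. The flag name, the FEATURE_FLAGS convention and the checkout functions are illustrative assumptions, not any particular flag framework.

```python
import os

def is_enabled(flag: str) -> bool:
    """Return True if the named flag appears in the FEATURE_FLAGS
    environment variable (a comma-separated list; an illustrative
    convention rather than any particular flag framework)."""
    enabled = os.environ.get("FEATURE_FLAGS", "")
    return flag in {f.strip() for f in enabled.split(",") if f.strip()}

def legacy_checkout(cart: list[float]) -> float:
    # Existing behaviour, still what end users see by default.
    return sum(cart)

def new_checkout(cart: list[float]) -> float:
    # New code path: merged and built, but hidden behind the flag
    # until it is switched on (e.g. only in the QA environment).
    return round(sum(cart) * 0.9, 2)  # e.g. a new discount rule

def checkout(cart: list[float]) -> float:
    if is_enabled("new-checkout-flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

if __name__ == "__main__":
    print(checkout([10.0, 5.5]))  # 15.5 unless FEATURE_FLAGS=new-checkout-flow
```

The change can be merged, built and tested on every commit, while the flag controls who actually sees the new behaviour.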

Keep the builds green

By building the solution and running a set of automated tests each time a change is committed, a CI/CD pipeline provides rapid feedback to developers about their changes.

The aim is to avoid building on bad foundations and keep the code in a constantly releasable state. Not only is it much more efficient to address issues as soon as they arise, but it also makes it possible to roll out a fix quickly if something goes wrong in production.

If a build fails for any reason, it should be the team’s priority to get it working again. It can be tempting to blame whoever made the last change and leave the task of fixing the issue to them. However, focusing on blaming your team rarely produces a constructive team culture and is less likely to uncover the underlying cause of a problem. By making it the whole team’s responsibility to address a failing build and trying to understand what led to the failure, you can improve the entire CI/CD workflow. Of course, that can be easier said than done when the pressure is on and tensions are running high; evolving a DevOps culture is also an exercise in continuous improvement!

Of course, it can be frustrating to have to drop everything to fix a failing build, only to discover that it was caused by something trivial – a syntax error or missed dependency. To avoid this, it’s a good idea for team members to do a build and run an initial set of tests locally before they share their changes. Ideally, everyone should be able to use the same scripts as the CI/CD system to avoid duplicating effort.
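
One way to achieve that is a single entry-point script that both developers and the CI server invoke. The sketch below assumes a Python project checked with flake8 and pytest; the commands and paths are placeholders for whatever your build actually runs.

```python
#!/usr/bin/env python3
"""check.py - run the same pre-flight checks locally and on the CI server.

The specific commands below (flake8, pytest) and the directory names are
assumptions for a typical Python project; substitute your own steps.
"""
import subprocess
import sys

STEPS = [
    ["python", "-m", "flake8", "src"],               # fast static checks first
    ["python", "-m", "pytest", "tests/unit", "-q"],  # then the unit tests
]

def main() -> int:
    for step in STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("check failed:", " ".join(step))
            return result.returncode  # fail fast so feedback stays quick
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Developers run the script before pushing, and the CI configuration calls the same script, so there is only one definition of what “passing” means.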

Build only once

A common misstep is to create a new build for each stage.

Rebuilding the code for different environments risks inconsistencies being introduced and means you cannot be confident that all previous tests have passed. Instead, the same build artifact should be promoted through each stage of the build pipeline and ultimately released to live.

Putting this into practice requires the build to be system-agnostic. Any variables, authentication parameters, configuration files or scripts should be called by the deployment script rather than being incorporated into the build itself. This allows the same build to be deployed to each setting for testing, with each stage increasing the team’s confidence in that particular build artifact.
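
As a sketch of what keeping the build free of environment-specific details can look like, the deployment step below takes the pre-built artifact as input and reads everything environment-specific from variables supplied at deploy time. The script name, variable names and example values are illustrative assumptions rather than any particular tool’s convention.

```python
#!/usr/bin/env python3
"""deploy.py - deploy one pre-built artifact to a named environment.

Environment-specific values come in from outside (here, environment
variables set by the pipeline); nothing about 'staging' or 'production'
is baked into the artifact itself. All names are illustrative.
"""
import os
import sys

def main() -> int:
    artifact = sys.argv[1]                  # e.g. build/app-1.4.2.tar.gz
    target   = os.environ["DEPLOY_TARGET"]  # e.g. staging.example.com
    api_key  = os.environ["DEPLOY_API_KEY"] # injected by the CI/CD tool, not stored in the build
    db_url   = os.environ["DATABASE_URL"]   # differs per environment

    print(f"deploying {artifact} to {target}")
    # The actual deployment mechanism (scp, container push, cloud API...)
    # would go here; the point is that only these inputs change per stage.
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Because the artifact never changes, each successful stage adds confidence in the exact bits that will eventually reach production.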

While it is good practice to keep everything, including the build script, configuration files and deployment scripts in the same source control system as the application code, that doesn’t apply to the build artifact itself. As a product of these inputs, the build doesn’t belong in source control. Instead, it should be versioned and stored in a central artifact repository, such as Nexus, from which it can be pulled down and deployed to each instance.
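
How the artifact is published depends on your repository manager. As one hedged illustration, many repositories (including a Nexus raw-hosted repository) accept a simple authenticated HTTP upload; the URL layout, repository name and credential variables below are assumptions for the sake of the example.

```python
#!/usr/bin/env python3
"""publish.py - version a build artifact and push it to a central repository.

Illustrative only: assumes a repository reachable at REPO_URL that accepts
authenticated HTTP PUT uploads. Adjust for your own repository and layout.
"""
import os
import sys
import requests

REPO_URL = "https://nexus.example.com/repository/builds"  # assumption

def publish(path: str, version: str) -> None:
    name = os.path.basename(path)
    url = f"{REPO_URL}/{version}/{name}"
    with open(path, "rb") as artifact:
        response = requests.put(
            url,
            data=artifact,
            auth=(os.environ["NEXUS_USER"], os.environ["NEXUS_PASSWORD"]),
            timeout=60,
        )
    response.raise_for_status()
    print(f"published {name} as version {version} to {url}")

if __name__ == "__main__":
    publish(sys.argv[1], sys.argv[2])  # e.g. publish.py build/app.tar.gz 1.4.2
```

Each downstream stage then pulls that versioned artifact rather than rebuilding it.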

Streamline your tests

Although CI/CD relies heavily on automated testing to provide confidence in the quality of your software, that doesn’t mean you should aim to check every eventuality.

After all, the purpose of continuous integration is to provide rapid feedback and deliver valuable products to users at a faster pace than traditional methods. That means there is a balance to be struck between test coverage and performance. If it takes too long to get test results, people will look for reasons and ways to circumvent the process.

Run the tests that complete quickest first in order to get feedback as early as possible, and only invest in lengthier ones once you have a degree of confidence in the build. Given the time involved in manual quality assurance, and the dependency on your team being available to perform those checks, it’s best to save this phase until after all automated tests have completed successfully.

The first layer of automated tests is normally unit tests, which you can use to provide broad coverage and alert you to any obvious issues introduced by the latest change. After unit tests you may have a layer of automated integration or component tests, which check interactions between different parts of your code.
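
One common way to express that layering in a Python codebase is with pytest markers, so the fast tests run on every commit and the slower ones are deferred to a later stage. The `slow` marker and test names below are illustrative, and the marker would need to be registered in pytest.ini (or equivalent) to avoid warnings.

```python
# test_checkout.py - one way to split fast and slow tests with pytest markers.
# Assumes a `slow` marker registered in pytest.ini (markers = slow: long-running).
import pytest

def test_total_is_sum_of_item_prices():
    # Fast unit test: cheap enough to run on every commit.
    assert sum([10.0, 5.5]) == 15.5

@pytest.mark.slow
def test_checkout_against_payment_sandbox():
    # Slower, environment-dependent check: deferred to a later pipeline stage.
    pytest.skip("illustrative placeholder for a long-running test")
```

An early pipeline stage can then run `pytest -m "not slow"` for quick feedback, with a later stage running `pytest -m slow` against the same build.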

Beyond these, you might invest in more complex automated tests, such as GUI tests, performance and load tests, or security tests, before finally taking the time for manual exploratory and/or acceptance testing. To make these longer running tests – whether automated or manual – more efficient, focus on the areas that pose the greatest risk for your particular product and users.

Clean your environments

To get the most out of your QA suite, it’s worth taking the time to clean up your pre-production environments between each deployment.

When environments are kept running for a long time it becomes harder to keep track of all the configuration changes and updates that have been applied to each one.

Over time, settings diverge from the original setup and from each other, which means that tests that pass or fail in one might not return the same result in another. Maintaining static environments also comes with a maintenance cost, which can slow down the QA process and delay the release process.

Using containers to host environments and run tests makes it easy to spin them up and tear them down for each new deployment, using an infrastructure-as-code approach to script these steps. Instantiating a new container each time ensures consistency and allows you to scale environments more easily, so you can check multiple builds in parallel if needed.
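
As a minimal sketch of this disposable-environment approach, the script below starts a throwaway container for the system under test, runs a smoke-test suite against it, and tears it down afterwards. It assumes Docker is available on the build agent; the image name, port and test command are placeholders.

```python
#!/usr/bin/env python3
"""test_env.py - create a disposable test environment, test, then tear down.

Assumes the Docker CLI is available on the build agent; the image name,
port and test command are illustrative placeholders.
"""
import subprocess
import sys

IMAGE = "registry.example.com/app:1.4.2"  # the build under test (assumption)
NAME  = "ci-test-env"

def main() -> int:
    # Fresh container for every run, so no state leaks between deployments.
    subprocess.run(["docker", "run", "-d", "--rm", "--name", NAME,
                    "-p", "8080:8080", IMAGE], check=True)
    try:
        result = subprocess.run(["python", "-m", "pytest", "tests/smoke", "-q"])
        return result.returncode
    finally:
        # Tear the environment down whether the tests passed or not.
        subprocess.run(["docker", "stop", NAME], check=False)

if __name__ == "__main__":
    sys.exit(main())
```

Because every run starts from the same image, a test that passes here will behave the same way on the next build agent that runs it.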

Make it the only way to deploy to production

Once you’ve invested in building a reliable, fast, and secure CI/CD pipeline that gives you confidence in the quality of your builds, you don’t want to undermine that effort by allowing the process to be bypassed for whatever reason.

Typically the request to circumvent the release process is made because the change is minor or urgent (or both), but yielding to such demands is a false economy.

Skipping the stages of automated quality assurance risks introducing avoidable issues, and it makes any problems that do slip through much harder to reproduce and debug, because the build is not readily available to deploy to a testing instance.

It's likely that at some point you'll be asked to bypass the process, "just this once". You'll probably be in full fire-fighting mode at the time, but it's worth using a retrospective or post-mortem to understand the motivation behind it. Does the process seem too slow? Perhaps there are performance improvements or refinements to be made. Is there a misunderstanding as to when it should be used? Communicating the benefits of a CI/CD pipeline can help bring stakeholders on board and avoid these kinds of demands the next time the roof is on fire.

Monitor and measure your pipeline

As part of setting up your CI/CD pipeline, you probably implemented monitoring for your production environment to alert you to signs of trouble as early as possible.

Just like the product you’re releasing, your build process will also benefit from a feedback loop.

By analyzing the metrics collected by your CI/CD tool you can identify potential issues and areas for improvement.

  • Comparing the number of builds triggered per week, day or hour provides useful insight on how your pipeline infrastructure is used, whether you need to scale it up or down and when the peak load tends to occur.
  • Tracking the speed of deployments over time, and monitoring whether they are tending to take longer, can indicate when it's time to invest in performance optimizations (see the sketch after this list).
  • Statistics from automated tests can help to determine areas that would benefit from parallelization.
  • Reviewing QA results to find checks that are routinely ignored can highlight opportunities to streamline your quality assurance coverage.
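
As one small illustration of that trend analysis, the sketch below compares recent deployment durations against an earlier baseline to flag a slowing pipeline. The CSV layout, column name and 20% threshold are illustrative assumptions; most CI/CD tools expose equivalent statistics through their own APIs or dashboards.

```python
#!/usr/bin/env python3
"""pipeline_trend.py - flag when deployments are trending slower.

Assumes a durations.csv exported from your CI/CD tool with one 'seconds'
value per deployment, oldest first; file name, column and threshold are
illustrative.
"""
import csv
from statistics import mean

def deployment_durations(path: str = "durations.csv") -> list[float]:
    with open(path, newline="") as f:
        return [float(row["seconds"]) for row in csv.DictReader(f)]

def main() -> None:
    durations = deployment_durations()
    baseline = mean(durations[:-20]) if len(durations) > 20 else mean(durations)
    recent = mean(durations[-20:])
    print(f"baseline: {baseline:.0f}s, last 20 deployments: {recent:.0f}s")
    if recent > baseline * 1.2:
        print("deployments are over 20% slower than baseline - worth investigating")

if __name__ == "__main__":
    main()
```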

Make it a team effort

Building an effective CI/CD workflow is as much about team and organizational culture as it is about the processes and tools that you use.

Continuous integration, delivery and deployment are DevOps practices. They rely on breaking down the traditional silos between developers, QA engineers and operations, and encouraging collaboration between disciplines.

Breaking down silos gives teams more visibility of the end-to-end workflow and the opportunity to collaborate and benefit from different areas of expertise. Maintaining the pipeline should never be the job of a single person. A shared CI/CD platform can also act as a focal point for improving your wider operational practices.

By creating a sense of shared responsibility for delivering your software you can empower everyone on the team to contribute – whether that’s jumping in to fix the build, taking the time to containerize environments or automating a manual task that doesn’t get done as often as it should.

Promoting a culture of trust, where team members are able to experiment and share ideas, benefits not just the people but also the organization and the software you deliver. If something goes wrong, instead of focusing on assigning the blame for it to a member of your team, the aim should be to learn from the failure; understand the underlying cause and how it can be avoided in future.

Use the opportunity to improve your CI/CD practice and make it more robust and effective. By allowing team members to experiment and innovate without fear of recrimination you’ll create a virtuous circle of continuous improvement.