What is a CI Server?

Continuous integration (CI) is a DevOps practice designed to avoid the problems that come from integrating changes late in the game: merge conflicts and build errors followed by countless bugs and the dawning realization that your software doesn’t actually do what your users need.

With continuous integration you commit, build and test everyone’s code changes as you go. Integrating frequently throughout a project means you can minimize conflicts, check how everyone’s changes interact, and address any bugs before they become deeply entrenched and relied upon by other parts of the system.

CI forms the first half of a continuous integration and delivery/deployment (CI/CD) pipeline – the ongoing process of committing, building, testing, staging and releasing each change, which delivers feedback at every stage and enables you to constantly iterate and improve.

Implementing CI/CD requires cultural changes to enable collaboration between different functions, adoption of new processes and workflows, and tools to automate the steps involved and enable an efficient pipeline.

A CI server (or build server) plays a key role in implementing and managing the whole process. It serves as the glue that brings all the stages of the pipeline together by applying your business logic to coordinate automated tasks and collating and publishing feedback. In this article, we’ll look in more detail at what a CI server does and how it can help you get the most out of CI/CD.

Integrating with source control

At the start of any CI/CD pipeline is an integration with your version or source control system.

The basic implementation involves configuring your CI/CD server to listen for commits on the master branch and to trigger the pipeline whenever a change is made.
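
As a rough illustration, here is a minimal Python sketch of that basic setup: a small HTTP endpoint that listens for push events from the version control system and triggers a pipeline run for changes on the master branch. The payload fields and the run_pipeline stub are assumptions made for the example, not the webhook format of any particular tool.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_pipeline(commit_sha):
    # Placeholder: a real CI server would queue a build job here.
    print(f"Triggering pipeline for commit {commit_sha}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the push event sent by the version control system.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")

        # Only commits to the master branch trigger the pipeline here.
        if event.get("branch") == "master":
            run_pipeline(event.get("commit", "unknown"))

        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    # Listen locally for webhook calls from the VCS.
    HTTPServer(("localhost", 8080), WebhookHandler).serve_forever()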

While this ensures that each commit is verified and tested, it leaves plenty of scope for individuals to commit something that breaks the build, bringing the process to a halt and preventing other changes from being verified until the offending code is either backed out or fixed.

Configuring your CI server to build and test your changes before they can be committed helps to prevent this type of issue, and creates an additional feedback loop for each developer.

Importantly, the build server plays the roles of both enabler and enforcer – it takes care of running the build and tests on a remote machine and feeds the results back to the individual, but it also makes that process a condition of committing to the master or a feature branch.

Another step you may want to consider is integrating your CI server with your code review tool, so that each change must pass code review before it can be merged, with the review starting only after the change has been successfully built and tested.
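
As a sketch of how those gates might be ordered, assuming hypothetical build_passed, tests_passed and review_approved flags standing in for whatever your CI server and review tool actually report:

```python
def can_merge(change):
    """Gate a change: it may only reach the master branch once the
    remote build and tests succeed and the review is approved."""
    if not change["build_passed"]:
        return False, "build failed on the CI server"
    if not change["tests_passed"]:
        return False, "tests failed"
    if not change["review_approved"]:
        return False, "awaiting code review"
    return True, "ready to merge"

# Example: a change that built and tested cleanly but is still in review.
ok, reason = can_merge(
    {"build_passed": True, "tests_passed": True, "review_approved": False}
)
print(ok, reason)  # False awaiting code review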

Enforcing these extra layers of business logic at the start of the process helps to keep your codebase clean and ready for release, while minimizing interruptions and delays in the pipeline.

Managing builds

When it comes to the build and test phases of a CI/CD pipeline, your CI server is the brains of the operation, coordinating tasks and allocating jobs to build agents based on various criteria.

Your build agents, however, do the heavy lifting of running builds and executing tests according to instructions received from the CI server.
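
The allocation logic varies between products, but the general idea can be sketched in Python; the agent attributes and the “first idle agent that matches” rule below are simplifying assumptions, not how any particular CI server schedules work.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    os: str
    busy: bool = False

@dataclass
class Job:
    name: str
    requires_os: str

def assign(jobs, agents):
    """Assign each queued job to the first idle agent that meets its
    requirements; anything left over stays in the queue."""
    assignments, queue = [], []
    for job in jobs:
        agent = next((a for a in agents if not a.busy and a.os == job.requires_os), None)
        if agent:
            agent.busy = True
            assignments.append((job.name, agent.name))
        else:
            queue.append(job.name)
    return assignments, queue

agents = [Agent("agent-1", "linux"), Agent("agent-2", "windows")]
jobs = [Job("unit-tests", "linux"), Job("installer-build", "windows"), Job("ui-tests", "macos")]
print(assign(jobs, agents))
# ([('unit-tests', 'agent-1'), ('installer-build', 'agent-2')], ['ui-tests'])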

It’s good practice to keep your build server distinct from the build agents on which you run builds and execute tests, at least in a production setup, to avoid resource contention and performance issues.

When you use your CI server to configure the logic for a stage of your pipeline, you can specify a range of details and rules. For example, you may want to run certain tests on commits to the master branch but not when running a pre-commit build on a development branch, or you may want to control how many builds can call a test database at the same time.
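
A rough sketch of those two kinds of rule, assuming hypothetical branch names and a limit of two concurrent builds on a shared test database:

```python
import threading

# Allow at most two builds to use the shared test database at once.
test_db_slots = threading.Semaphore(2)

def tests_for(branch):
    """Pick which test suites to run based on the branch being built."""
    if branch == "master":
        return ["unit", "integration", "end-to-end"]
    # Pre-commit builds on development branches run the fast suites only.
    return ["unit"]

def run_integration_tests(build_id):
    with test_db_slots:  # blocks if two builds already hold a slot
        print(f"{build_id}: running integration tests against the test DB")

print(tests_for("master"))          # ['unit', 'integration', 'end-to-end']
print(tests_for("feature/login"))   # ['unit']
run_integration_tests("build-42")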

Being able to run certain tasks at the same time using different build agents can make your pipeline more efficient. This is useful if you need to run tests on different operating systems, or if you’re working on a huge codebase with tests numbering in the hundreds of thousands and the only practical option is to parallelize. In the latter case, setting up a composite build will aggregate the results so you can treat the tasks as a single build step.
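
As a simplified sketch of that approach, assuming a round-robin split across agents and a composite step that only passes if every batch passes:

```python
def split_into_batches(tests, batch_count):
    """Distribute tests round-robin so each build agent gets a batch."""
    batches = [[] for _ in range(batch_count)]
    for i, test in enumerate(tests):
        batches[i % batch_count].append(test)
    return batches

def aggregate(results):
    """Treat the parallel batches as one build step: it passes only if
    every batch passed, and the counts are summed for reporting."""
    return {
        "passed": all(r["passed"] for r in results),
        "tests_run": sum(r["tests_run"] for r in results),
    }

batches = split_into_batches([f"test_{i}" for i in range(10)], batch_count=3)
# Pretend each agent ran its batch and reported back.
results = [{"passed": True, "tests_run": len(b)} for b in batches]
print(aggregate(results))  # {'passed': True, 'tests_run': 10}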

A build server that integrates with cloud-hosted infrastructure, such as AWS, will allow you to benefit from elastic, scalable resources in which to run your builds and tests. If your infrastructure needs are considerable, support for containerized build agents and integration with Kubernetes will allow you to manage your build resources efficiently, whether they are in the cloud or on-premises.

Defining failures

A key part of your business logic involves defining what constitutes a failure at each stage of your CI/CD pipeline.

Your CI server should allow you to configure various failure conditions, which it will then apply to determine the status of each step and whether to proceed to the next stage of the pipeline.

In addition to self-evident failures, such as a build returning an error code or tests failing to execute, you can define other types of failure based on data collected by your build server.

Examples include test coverage decreasing relative to the previous build (indicating that tests have not been added for the latest code changes), or the number of ignored tests increasing compared to the last successful build.
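
A minimal sketch of such metric-based failure conditions, assuming hypothetical coverage and ignored-test counts reported for the current build and the last successful one:

```python
def extra_failure_conditions(current, previous):
    """Flag a build as failed on quality metrics, even if it compiled
    and the tests that ran all passed."""
    failures = []
    if current["coverage"] < previous["coverage"]:
        failures.append("test coverage decreased: new code may be untested")
    if current["ignored_tests"] > previous["ignored_tests"]:
        failures.append("more tests ignored than in the last successful build")
    return failures

previous = {"coverage": 82.0, "ignored_tests": 3}
current = {"coverage": 79.5, "ignored_tests": 5}
for reason in extra_failure_conditions(current, previous):
    print("FAILURE:", reason)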

These metrics serve as a useful warning that code quality may be deteriorating. By triggering a failure for these reasons and limiting which users have permission to override these failures, you can drive desirable behavior.

Enabling continuous delivery

Although the name “CI server” suggests a tool limited to continuous integration, most CI servers also provide support for continuous delivery and deployment.

Having produced your build artifacts and run an initial set of tests during the continuous integration phase, you can then deploy those artifacts to QA environments for further levels of testing, followed by staging so that your stakeholders can try out the changes, and then – if everything looks good – release to live.

As well as providing an artifact repository to store the outputs from each build so you can deploy them as needed, a CI/CD server can also store and manage parameters for each environment in your pipeline. You can then specify whether your deployment scripts are triggered automatically based on the outcome from the previous stage.
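
A simplified sketch of that hand-off, assuming hypothetical environment parameters and a placeholder deploy step standing in for your real deployment script:

```python
# Per-environment parameters the CI/CD server stores alongside the pipeline.
ENVIRONMENTS = {
    "qa":      {"url": "https://qa.example.com",      "auto_deploy": True},
    "staging": {"url": "https://staging.example.com", "auto_deploy": True},
    "live":    {"url": "https://www.example.com",     "auto_deploy": False},
}

def deploy(artifact, env):
    # Placeholder: a real pipeline would invoke your deployment script here,
    # passing in the stored parameters for the target environment.
    print(f"deploying {artifact} to {env['url']}")

def next_stage(env_name, artifact, previous_stage_passed):
    env = ENVIRONMENTS[env_name]
    if not previous_stage_passed:
        return f"{env_name}: skipped because the previous stage failed"
    if not env["auto_deploy"]:
        return f"{env_name}: build is ready, waiting for manual approval"
    deploy(artifact, env)
    return f"{env_name}: deployed automatically"

print(next_stage("qa", "app-1.4.2.zip", previous_stage_passed=True))
print(next_stage("live", "app-1.4.2.zip", previous_stage_passed=True))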

Tracking progress

Providing rapid feedback from each stage is a key element of a CI/CD pipeline.

A build server can provide information about queued jobs, real-time reporting on builds and tests while they are in progress, and the status of completed build steps.

By enabling notifications, you can ensure you and your team are aware of any issues as soon as they arise, while integration with bug tracking tools means you can see details of the fixes that were included in a commit and quickly drill into the cause of a failure. Historical data can provide useful insights for improving your pipeline, as well as a baseline for the failure conditions you define as part of your pipeline logic.
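
As an illustration, here is a simple sketch of pushing a failure notification to a chat webhook; the URL and message format are placeholders rather than any particular tool’s API.

```python
import json
import urllib.request

CHAT_WEBHOOK = "https://chat.example.com/hooks/ci"  # placeholder URL

def notify(build):
    """Post a chat message as soon as a build finishes with a failure,
    so the team hears about it without watching the dashboard."""
    if build["status"] == "success":
        return  # only interrupt people when something needs attention
    payload = json.dumps({
        "text": f"Build {build['id']} failed on {build['branch']}: {build['reason']}"
    }).encode("utf-8")
    request = urllib.request.Request(
        CHAT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(request, timeout=5)
    except OSError as err:
        print(f"could not deliver notification: {err}")

notify({"id": 214, "branch": "master", "status": "failure",
        "reason": "test coverage decreased"})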

Wrapping up

A continuous integration server plays a vital role in implementing your CI/CD pipeline, coordinating and triggering the various steps in your process, and collating and delivering data from each stage. Have a look at our Guide to CI/CD tools for tips on how to choose the right CI server for your organization.