TeamCity allows configuring and starting one or more secondary server instances (nodes) in addition to the main one. The main and secondary nodes operate under the same license and share the TeamCity Data Directory and the database.
Using the multinode setup, you can:
Set up a high-availability TeamCity installation with zero downtime. Secondary nodes continue to operate during the downtime of the main server (for example, during minor upgrades).
Improve the performance of the main server by delegating tasks to the secondary nodes. A secondary node can detect new commits and process data produced by running builds (build logs, artifacts, statistic values).
After installation, each secondary node runs as a read-only copy of the main server. To extend its functionality, you can assign it a responsibility. In this case, the secondary node will allow users to perform the most common actions on builds:
Triggering a build, including a custom or personal one
Stopping/deleting and pinning/tagging/commenting builds
Assigning investigations and muting build problems and tests
Marking a build as successful/failed
Merging and labeling sources
and more (see the full list in our issue tracker)
The following diagram shows an example of a TeamCity installation with one main node and two secondary nodes, where each secondary node has a certain responsibility:
Prerequisites for Multinode Setup
This section describes configuration requirements for setting up multiple TeamCity nodes.
Shared Data Directory
The main TeamCity server and secondary nodes require access to the same TeamCity Data Directory (which must be shared) and to the same database.
For a high availability setup, we recommend storing the TeamCity Data Directory on a separate machine. In this case, even if the main server goes down, the secondary nodes will be able to connect to the shared Data Directory.
Ensure that all machines where TeamCity server nodes will be installed can access the Data Directory in the read/write mode. Using a dedicated network file storage with good performance is recommended.
The typical Data Directory mounting options are SMB and NFS. TeamCity uses the Data Directory as a regular file system so all basic file system operations should be supported. The backing storage and way of mounting should not impose strict I/O operations count or I/O volume limits.
We recommend tuning storage and network performance: make sure to review performance guidelines for your storage solution. For example, increasing MTU for the network connection between the server and the storage usually increases the artifacts transferring speed.
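As an illustration of the MTU tuning mentioned above (the interface name and MTU value are assumptions; the storage network and every switch in between must support the chosen frame size), jumbo frames can be enabled on a Linux node like this:

```shell
# Hypothetical example: raise the MTU on eth0 to 9000 (jumbo frames).
# Verify support end to end before applying; mismatched MTUs cause drops.
ip link set dev eth0 mtu 9000
```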
Note that on Windows, a node might not be able to access the TeamCity Data Directory via a mapped network drive if TeamCity is running as a service. This happens because Windows services cannot work with mapped network drives, and TeamCity does not support the UNC format (\\host\directory) for the Data Directory path. To work around this problem, you can use mklink, which allows creating a symbolic link to a network location:
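For example, assuming the share is exposed at \\fileserver\teamcity-data (a placeholder path), a directory symbolic link can be created from an elevated command prompt:

```shell
rem Placeholder paths: adjust C:\TeamCityData and \\fileserver\teamcity-data.
rem /d creates a directory symbolic link; run from an elevated prompt.
mklink /d C:\TeamCityData \\fileserver\teamcity-data
```

The Data Directory path can then be set to C:\TeamCityData, which a Windows service can follow even though the target is a network location.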
Disabling network client caches on Data Directory mounts
It is important that all the nodes "see" the current state of the shared Data Directory without delay. Otherwise, this is likely to manifest in unstable behavior and frequent build log corruption.
If TeamCity nodes run on Windows with the Data Directory shared via the SMB protocol, make sure that all the registry keys mentioned in the related article are set to 0 on all of the TeamCity nodes.
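As a sketch of what this looks like in practice (the authoritative key list is in the article referenced above, so treat the value names below as an assumption), the SMB client cache lifetimes live under the LanmanWorkstation parameters key and can be zeroed with reg.exe:

```shell
rem Sketch only: verify the full key list against the related article.
rem Disables SMB client-side metadata caching on this node.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileInfoCacheLifetime /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v FileNotFoundCacheLifetime /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f
```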
If the Data Directory is shared via NFS, make sure that all nodes have the following option in their mount settings:
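The specific option is not shown in this excerpt. For illustration only (not a confirmed TeamCity recommendation), NFS client-side caching is typically disabled with standard mount options such as lookupcache=none (disables name-lookup caching) or noac (disables attribute caching):

```shell
# Illustrative mount command; host, export, and mount point are placeholders.
# lookupcache=none disables NFS name-lookup caching on this client.
mount -t nfs -o rw,hard,lookupcache=none storage.example.com:/teamcity-data /mnt/teamcity-data
```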
Node-Specific Data Directory
Besides the Data Directory shared with the main server, a secondary node requires a local Data Directory where it stores some caches, unpacked external plugins, and other configuration.
On the first start of the node, the local Data Directory is automatically created as <TeamCity Data Directory>/nodes/<node_ID>, that is, inside the shared Data Directory used by all nodes.
To reduce the load caused by extra IO requests from all nodes to the shared TeamCity Data Directory and to speed up the nodes' access to data, we highly recommend redefining the location of the node-specific Data Directory to use the node's local disk.
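Assuming the node-specific directory location is controlled by the teamcity.node.data.path JVM property (an assumption here; check the Configuring Secondary Node documentation for the exact property), it can be moved to a local disk by extending the server startup options:

```shell
# Assumption: teamcity.node.data.path controls the node-local Data Directory.
# /opt/teamcity-node-data is a placeholder path on the node's local disk.
export TEAMCITY_SERVER_OPTS="-Dteamcity.node.data.path=/opt/teamcity-node-data $TEAMCITY_SERVER_OPTS"
```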
Proxy Configuration
To set up a high-availability TeamCity installation, you need to install both the main server and the secondary node behind a reverse proxy and configure it to route requests to the main server while it is available and to the secondary node otherwise. If you are about to set up the TeamCity server behind a reverse proxy for the first time, make sure to review our notes on this topic.
The following NGINX configuration will route requests to the secondary node only when the main server is unavailable or when the main server responds with the 503 status code (when starting or upgrading).
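The original configuration is not included in this excerpt; the following is a minimal sketch of such a setup, with host names, ports, and paths as placeholders. It marks the secondary node as a backup and fails over on connection errors, timeouts, and 503 responses:

```nginx
# Sketch only: replace host names and ports with your own.
upstream teamcity {
    server teamcity-main.example.com:8111 max_fails=1 fail_timeout=10s;
    server teamcity-secondary.example.com:8111 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://teamcity;
        # Fail over to the backup node when the main server is down
        # or answers 503 (starting or upgrading).
        proxy_next_upstream error timeout http_503;
        proxy_next_upstream_timeout 5s;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket support for the TeamCity UI and agents.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```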
Note that NGINX Open Source does not support active health checks (which are a better way to configure a high-availability installation) and may experience DNS resolution issues. Consider using NGINX Plus or HAProxy.
An example HAProxy configuration:
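The original example is not included in this excerpt; below is a minimal sketch with placeholder host names. It uses active HTTP health checks against the TeamCity login page, so a node that is down or answering 503 is taken out of rotation, and the secondary node only receives traffic as a backup:

```haproxy
# Sketch only: replace host names and ports with your own.
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend teamcity_frontend
    bind *:80
    default_backend teamcity_backend

backend teamcity_backend
    # Active health check: only a 200 response counts as healthy.
    option httpchk GET /login.html
    http-check expect status 200
    server main teamcity-main.example.com:8111 check
    server secondary teamcity-secondary.example.com:8111 check backup
```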
Firewall Settings
Firewall settings should allow access to the secondary nodes from the agents and from the main TeamCity server (the main server also communicates with the nodes over HTTP).
Upgrade & Downgrade
It is recommended that the main TeamCity server and all secondary nodes have the same version. In certain cases, the main server and the secondary nodes can run different versions for a short period, for example, during a minor upgrade of the main server. When the versions of a secondary node and the main server differ, the corresponding health report is displayed on both nodes.
When upgrading to a minor version (a bugfix release), the main and the secondary nodes should run without issues as the TeamCity data format stays the same. You can upgrade the main TeamCity server and then the secondary nodes as usual.
When upgrading the main server to a major version, its TeamCity data format will change. We recommend stopping all the secondary nodes before starting the upgrade of the main server to avoid any possible data format errors.
All secondary nodes must be upgraded after the main server's major upgrade to be able to process tasks.
To upgrade nodes in a multinode setup to a major version of TeamCity, follow these steps:
Stop all secondary nodes.
Start the upgrade on the main TeamCity server as usual.
Proceed with the upgrade.
Verify that everything works properly and agents are connecting (the agents will reroute the data that was supposed to go to the secondary nodes to the main server).
Upgrade the TeamCity installations on the secondary nodes to the same version.
Start the secondary nodes and verify that they are connected on the Administration | Server Administration | Nodes Configuration page on the main server.
To downgrade nodes in a multinode setup, follow these steps:
Shut down the main server and the secondary nodes.
Restore the data from backup (only if the data format has been changed during the upgrade).
Downgrade the TeamCity software on the main server.
Start the main TeamCity server and verify that everything works properly.
Downgrade the TeamCity software on the secondary nodes to the same version as the main server.
Start the secondary nodes.
TeamCity agents will perform upgrade/downgrade automatically.
Secondary Nodes Limitations
A secondary server has a few limitations compared to the main server:
A secondary node does not allow changing the server configuration and state. Nodes without responsibilities operate in read-only mode; nodes with responsibilities provide user-level actions. In both cases, not all administration pages and actions are available.
Currently, only bundled plugins and a limited set of other plugins can be loaded by a secondary server. Some functionality provided by external plugins can be missing. Read more in Configuring Secondary Node.
Users may need to log in again when they are routed to a secondary node if they did not select the Remember Me option.