Executor Mode: External Kubernetes Integration
TeamCity offers two types of Kubernetes integration:
Regular Kubernetes integration. This approach uses TeamCity cloud profiles and images, similar to integrations with other cloud providers like AWS, Microsoft Azure, or Google Cloud. You configure build agents in TeamCity and use a Kubernetes cluster to host them. This integration type relies on the external Kubernetes Support plugin.
Kubernetes cluster as an external executor. In this mode, TeamCity is unaware of any build agents on the Kubernetes side. Instead, it recognizes the cluster's capability to run builds and delegates the assignment and lifecycle management of entities running its builds entirely to the cluster.
This article explains the executor integration approach. To learn about the traditional cloud profile integration instead, refer to the Setting Up TeamCity for Kubernetes topic.
How It Works
Create a Kubernetes connection that will allow TeamCity to access your K8s cluster.
In project settings, go to the Cloud Profiles section and click Create New Profile.
Click the Kubernetes tile under the "Offload your tasks to external agents" section.
Set your K8s integration as follows:
Connection — choose a connection created in step 1.
Server URL — enter your TeamCity server URL or leave empty to use the URL specified on the server's Global Settings page.
Pod template — choose a required pod configuration. See the Pod Templates section for more information.
Maximum number of builds — enter the cluster capacity. When this capacity is reached, new builds remain queued until currently running builds finish.
In your build configuration settings, specify agent requirements and step containers if needed.
Trigger a new build.
The TeamCity K8s executor collects a list of build steps with their parameters, generates a pod definition, and submits it to the K8s cluster. Each build step runs in a separate container, which allows you to specify different images for individual steps.
The K8s cluster allocates pods required to run a build and starts it.
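The flow above can be pictured with a purely illustrative pod definition. The names, images, and labels below are assumptions for the sake of the sketch — real pod definitions are generated by TeamCity and will differ — but conceptually each build step maps to its own container, and TeamCity coordinates the actual step sequencing:

```yaml
# Illustrative only: what a generated build pod might conceptually contain.
apiVersion: v1
kind: Pod
metadata:
  name: teamcity-build-12345    # hypothetical name; real names are generated
spec:
  restartPolicy: Never          # build pods run once and are then deleted
  containers:
    - name: step-1              # first build step in its own container
      image: golang:1.22
    - name: step-2              # second step can use a different image
      image: node:20
```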
Cluster Permissions
Make sure the TeamCity user is allowed to perform write operations in the Kubernetes namespace. Your Kubernetes user role must be configured with the following permissions:
Pods: get, create, list, delete
Pod templates: get, list
Namespaces: get, list — to allow TeamCity to suggest the namespaces available on your server.
The following sample illustrates all required permissions configured via Kubernetes RBAC:
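A sketch along these lines should cover the permissions listed above. The role names, namespace, and service account are placeholders — adapt them to your cluster:

```yaml
# Namespace-scoped permissions for pods and pod templates.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: teamcity-executor      # placeholder role name
  namespace: teamcity-builds   # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "create", "list", "delete"]
  - apiGroups: [""]
    resources: ["podtemplates"]
    verbs: ["get", "list"]
---
# Namespaces are cluster-scoped, so listing them requires a ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teamcity-namespace-reader   # placeholder name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
```

Bind the Role to the service account used by your TeamCity connection with a RoleBinding, and the ClusterRole with a ClusterRoleBinding.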
Pod Templates
The podtemplates list permission enables TeamCity to access the list of pod templates stored in your cluster (under the same namespace as specified in the selected Kubernetes connection). The retrieved templates are displayed in the Pod Templates drop-down menu.
The sample template below launches pods that have 2 GB of memory and 25 GB of storage, and use a custom build agent image (see the Special Notes and Limitations section). You can also explicitly declare build parameters in the YAML markup. These parameters and their values are checked against explicit agent requirements to match compatible executors with queued builds.
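One way such a template might look is sketched below. The template name and image are illustrative, and the resource figures assume Kubernetes resource requests are the intended mechanism; note the "template-container" container name required for custom container properties (see the Special Notes and Limitations section):

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: teamcity-large-agent          # illustrative template name
template:
  spec:
    containers:
      - name: template-container      # required name for custom container properties
        image: johndoe/custom_agent_image:latest   # illustrative custom agent image
        resources:
          requests:
            memory: "2Gi"             # 2 GB of memory
            ephemeral-storage: "25Gi" # 25 GB of storage
```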
Agent Priority
If your build configuration has a mix of different agent options, TeamCity uses the following logic to delegate queued builds:
Self-hosted agents have the highest priority.
If no free self-hosted agent is compatible with a build, TeamCity looks for a suitable cloud agent.
If neither of these options is available, the build is offloaded to an external executor.
See the following article to learn more about agent priorities: Agent Priority.
Licensing
Although Kubernetes-based builds do not occupy native TeamCity agents, their number is still limited by your agent license. The combined number of "native" and executor builds cannot exceed this licensed limit. When the limit is reached, new builds will remain queued with the "Maximum number of concurrent builds reached" message, waiting for a free agent slot.
Detached builds and those spawned by composite build configurations do not occupy agent slots and can run without restrictions.
Special Notes and Limitations
Currently, a project can use only one Kubernetes integration. We expect to support multiple executors per project (along with a mechanism to prioritize them) in future release cycles.
A Kubernetes cluster acts as an external orchestrator that processes builds without using "classic" build agents connected to a TeamCity server. This leads to a "Build agent was disconnected while running a build" warning displayed when a build handled by an executor is running. As long as builds finish successfully, this warning does not indicate a misconfiguration or connectivity issue and can be disregarded. We expect to resolve this behavior in upcoming bug-fix releases.
Pod templates that specify custom container properties must use the "template-container" container name:

```yaml
# ...
template:
  spec:
    containers:
      - name: template-container
        image: johndoe/custom_agent_image:latest
# ...
```

Otherwise, the container will use default settings. For example, it will override the image property in favor of the standard "jetbrains/teamcity-agent:latest" image.

Currently, the Kubernetes executor does not support Windows nodes. Builds handled by these nodes are stuck in the "Setting up resources" phase, with pods displaying the MountVolume.SetUp failed for volume "kube-api-access-sfhbc" error. For this reason, builds designed to run under Windows cannot be delegated to a Kubernetes executor. To avoid this issue in mixed clusters (with both Windows and Linux nodes), specify the required node in your pod templates:

```yaml
spec:
  containers:
    # ...
  nodeSelector:
    kubernetes.io/os: linux
```

The Docker build step is not supported.
The Docker inside Docker (DinD) setup is not supported.
Pod initialization can stall while cleaning the "/agent/temp/.old" directory.
Advanced Container Wrapper settings are not available in build steps if the configuration's parent project has a configured Kubernetes executor.