JetBrains Space Help

Docker

Prerequisites

  • A Dockerfile that defines the Docker image is stored in the project sources.

Ways to run Docker builds

Automation provides two ways to build and publish Docker images, depending on the run environment:

job.host.dockerBuildPush

job.host.dockerBuildPush lets you run Docker builds on a self-hosted or Space Cloud worker. Technically, this is a DSL wrapper for the job.host.shellScript block that runs the docker build and docker push commands.
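Conceptually, the same work could be written by hand with job.host.shellScript. The sketch below is a simplified illustration, not the exact commands the step generates; the image name and tag are placeholders:

```kotlin
job("Build and push Docker (manual)") {
    host("Shell-script equivalent") {
        shellScript {
            // roughly what dockerBuildPush wraps; image name and tag are placeholders
            content = """
                docker build --tag mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage:latest .
                docker push mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage:latest
            """
        }
    }
}
```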

Pros over job.kaniko:

  • It's possible to build Windows Docker images (if the host machine runs on Windows).

  • It's easier and faster to build image dependencies. Because you can pre-install all the necessary tools on the host machine, the same host step can be used not only to build and publish an image but also to build the artifacts required for this image.

    With job.kaniko, you must build image artifacts in a separate step and provide them to job.kaniko via the file share.

  • Some things might work differently with Kaniko compared to classic Docker.

job.kaniko

job.kaniko lets you run Docker builds in a container with the pre-installed Kaniko tool. Technically, this is a job.container step that runs in a container based on a special image. The main reason to use job.kaniko over job.host.dockerBuildPush is when using the latter is not possible, for example, when your company uses Space On-Premises and doesn't allow self-hosted workers.

Build and publish a Docker image

The jobs below first build and then publish an image defined in ./docker/config/Dockerfile: the first variant uses job.host.dockerBuildPush, the second uses job.kaniko.

job("Build and push Docker") {
    host("Build and push a Docker image") {
        dockerBuildPush {
            // by default, the step runs not only 'docker build' but also 'docker push'
            // to disable pushing, add the following line:
            // push = false

            // path to Docker [[[context|https://docs.docker.com/engine/reference/commandline/build/#extended-description]]] (by default, context is working dir)
            context = "docker"
            // path to Dockerfile relative to the project root
            // if 'file' is not specified, Docker will look for it in 'context'/Dockerfile
            file = "docker/config/Dockerfile"
            // [[[build-time variables|https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg]]]
            args["HTTP_PROXY"] = "http://10.20.30.2:1234"
            // [[[image labels|https://docs.docker.com/config/labels-custom-metadata/]]]
            labels["vendor"] = "mycompany"
            // to add a raw list of additional build arguments, use
            // extraArgsForBuildCommand = listOf("...")
            // to add a raw list of additional push arguments, use
            // extraArgsForPushCommand = listOf("...")
            // [[[image tags|https://docs.docker.com/engine/reference/commandline/tag/]]]
            tags {
                // use current job run number as a tag - '0.0.run_number'
                +"mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage:1.0.${"$"}JB_SPACE_EXECUTION_NUMBER"
            }
        }
    }
}
The same scenario with job.kaniko:

job("Build and push Docker") {
    // special step that runs a container with the Kaniko tool
    kaniko {
        // build an image
        build {
            // path to Docker [[[context|https://docs.docker.com/engine/reference/commandline/build/#extended-description]]] (by default, context is working dir)
            context = "docker"
            // path to Dockerfile relative to 'context'
            // this option is equivalent to Kaniko's --dockerfile argument
            dockerfile = "config/Dockerfile"
            // [[[build-time variables|https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg]]]
            args["HTTP_PROXY"] = "http://10.20.30.2:1234"
            // [[[image labels|https://docs.docker.com/config/labels-custom-metadata/]]]
            labels["vendor"] = "mycompany"
        }
        // push the image to a Space Packages repository (doesn't require authentication)
        push("mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage") {
            // [[[image tags|https://docs.docker.com/engine/reference/commandline/tag/]]]
            tags {
                // use current job run number as a tag - '0.0.run_number'
                +"0.0.\$JB_SPACE_EXECUTION_NUMBER"
            }
            // see [[[example|https://www.jetbrains.com/help/space/automation-environment-variables.html#example]]] on how to use branch name in a tag
        }
    }
}

Create an image dependency, build, and publish a Docker image

The job below assumes that a Gradle build generates artifacts in the ./build directory. The job then builds a Docker image that includes these artifacts, for example, added with the ADD directive in the Dockerfile (not shown). The Dockerfile is located in the project root.
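For illustration, such a Dockerfile might look like the sketch below. The base image, jar name, and target path are assumptions (the section's actual Dockerfile is not shown); the ADD line follows the example given in the job's comments:

```dockerfile
# Hypothetical Dockerfile in the project root.
# Picks up the Gradle output from ./build, as the job's comments suggest.
FROM amazoncorretto:17-alpine
ADD /build/app.jar /root/home/app.jar
CMD ["java", "-jar", "/root/home/app.jar"]
```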

job("Build and push Docker") {
    // both 'host.shellScript' and 'host.dockerBuildPush' run on the same host
    host("Build artifacts and a Docker image") {
        // Gradle build creates artifacts in ./build
        shellScript {
            content = """
                ./gradlew build
            """
        }
        dockerBuildPush {
            // Note that if Dockerfile is in the project root, we don't specify its path.
            // We also imply that Dockerfile takes artifacts from ./build and puts them to image
            // e.g. with 'ADD /build/app.jar /root/home/app.jar'
            val spaceRepo = "mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage"
            tags {
                +"$spaceRepo:0.${"$"}JB_SPACE_EXECUTION_NUMBER"
                +"$spaceRepo:lts"
            }
        }
    }
}

With job.kaniko, the job must contain two steps: the first step (job.container) builds the artifacts and puts them to the file share; the second step (job.kaniko) takes the files from the file share and builds the image.

job("Build and push Docker") {
    container(displayName = "Run gradle build", image = "amazoncorretto:17-alpine") {
        // run gradle build and copy artifacts to the file share
        shellScript {
            content = """
                ./gradlew build
                cp -r build ${'$'}JB_SPACE_FILE_SHARE_PATH
            """
        }
    }
    kaniko {
        // This is a shellScript that runs before 'docker build' and 'docker push'.
        // Here we use it to copy the Gradle output from the file share to the
        // context directory ('build'). Initially, the 'build' directory in this
        // container is empty (we ran 'gradlew build' in another step, i.e. container).
        beforeBuildScript {
            content = "cp -r ${'$'}JB_SPACE_FILE_SHARE_PATH build"
        }
        // We imply that Dockerfile takes artifacts from ./build and puts them to image
        // e.g. with 'ADD /build/app.jar /root/home/app.jar'
        build {
            context = "build"
        }
        push("mycompany.registry.jetbrains.space/p/prjkey/mydocker/myimage") {
            // tags are relative to the repository specified in 'push'
            tags {
                +"0.${"$"}JB_SPACE_EXECUTION_NUMBER"
                +"lts"
            }
        }
    }
}

Publish a Docker image to Docker Hub

  1. In Docker Hub, create an access token with the Write permission. Save the created token to a safe location.

    Docker Hub token
  2. In Space, create two secrets:

    • dockerhub_user: your Docker Hub username.

    • dockerhub_token: the Docker Hub token you've created in step 1.

  3. Edit the project's .space.kts:

    job("Publish to Docker Hub") {
        host("Build artifacts and a Docker image") {
            // assign project secrets to environment variables
            env["HUB_USER"] = Secrets("dockerhub_user")
            env["HUB_TOKEN"] = Secrets("dockerhub_token")
            shellScript {
                // login to Docker Hub
                content = """
                    docker login --username ${'$'}HUB_USER --password "${'$'}HUB_TOKEN"
                """
            }
            dockerBuildPush {
                labels["vendor"] = "mycompany"
                tags {
                    +"myrepo/hello-from-space:1.0.${"$"}JB_SPACE_EXECUTION_NUMBER"
                }
            }
        }
    }
With job.kaniko, steps 1 and 2 are the same; in step 3, edit the project's .space.kts as follows:

    job("Publish to Docker Hub") {
        kaniko("Docker build and push") {
            // assign project secrets to environment variables
            env["HUB_USER"] = Secrets("dockerhub_user")
            env["HUB_TOKEN"] = Secrets("dockerhub_token")
            // put auth data to Docker config
            beforeBuildScript {
                content = """
                    B64_AUTH=${'$'}(echo -n ${'$'}HUB_USER:${'$'}HUB_TOKEN | base64 -w 0)
                    echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"${'$'}B64_AUTH\"}}}" > ${'$'}DOCKER_CONFIG/config.json
                """
            }
            build {
                labels["vendor"] = "mycompany"
            }
            // in push, specify repo_name/image_name
            push("myrepo/hello-from-space") {
                tags {
                    +"1.0.\$JB_SPACE_EXECUTION_NUMBER"
                }
            }
        }
    }
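To see what the beforeBuildScript writes, here is a standalone sketch with hypothetical credentials (alice / s3cret are placeholders, not real values); it uses `base64 | tr -d '\n'`, which on GNU systems is equivalent to the `base64 -w 0` used in the job:

```shell
# Assemble the Docker registry auth entry the way beforeBuildScript does
# (hypothetical credentials; in the job they come from the project secrets).
HUB_USER=alice
HUB_TOKEN=s3cret
B64_AUTH=$(printf '%s' "$HUB_USER:$HUB_TOKEN" | base64 | tr -d '\n')
printf '{"auths":{"https://index.docker.io/v1/":{"auth":"%s"}}}\n' "$B64_AUTH"
# prints {"auths":{"https://index.docker.io/v1/":{"auth":"YWxpY2U6czNjcmV0"}}}
```

Kaniko reads this config.json from the directory pointed to by DOCKER_CONFIG and uses the auth entry when pushing to the registry.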
Last modified: 25 November 2022