Run Environment
The Space Automation run environment is based on the concept of workers. A worker is a lightweight agent that connects to Space Automation, gets jobs and source code, runs the jobs, and reports the results back to Space. A worker can run on virtual machines in the Space Automation Cloud, on your own self-hosted machines, or in Docker containers. The following table summarizes the possible run environments:
Environment | Description | OS | Resources |
---|---|---|---|
Space Cloud workers | Virtual machines hosted in the Space cloud infrastructure. | Linux. Planned: macOS, Windows | Default: 2 vCPU, 7800 MB. Large: 4 vCPU, 15600 MB. Extra large: 8 vCPU, 31200 MB (not available for the Free plan). |
Containers in Space Cloud | Docker containers running in the Space cloud workers. | Linux only | Default: 2 vCPU, 7800 MB. Max: 8 vCPU, 31200 MB. Max for the Free plan: 4 vCPU, 15600 MB. |
Self-hosted workers | Self-hosted hardware or virtual machines. | Linux, macOS, Windows | All available resources of the host machine |
Containers in self-hosted workers | Docker containers running on self-hosted hardware or virtual machines. | Linux only | All resources allocated to the container on the host machine |
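For orientation, here is a minimal `.space.kts` sketch showing how the two step types map onto these environments: a `container` step runs inside a Docker container on a worker, while a `host` step runs directly on the worker machine. The image name and build commands are placeholders, not part of the product documentation.

```kotlin
// Sketch only: by default, both jobs below run in the Space Automation Cloud pool.
job("Build in a container") {
    // a container step runs inside a Docker container on the worker
    container(displayName = "Gradle build", image = "gradle:jdk11") {
        shellScript {
            content = "gradle build"
        }
    }
}

job("Build on the worker itself") {
    // a host step runs directly on the worker machine (cloud VM or self-hosted)
    host("Run build") {
        shellScript {
            content = "./gradlew build"
        }
    }
}
```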
Choose run environment for a job
Space Automation chooses the run environment for a particular job based on the worker pool assigned to this job. Currently, there are two available pool types: Space Automation Cloud (default) and Self-Hosted Workers.
The environment where a job will eventually run depends on:
- The default worker pool selected for the organization or a project.
- The `requirements` of a job.
- A step type: `container` or `host`.
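For example, a job can target the Self-Hosted Workers pool explicitly through its requirements. The sketch below assumes a `WorkerPools.SELF_HOSTED` constant is available in the DSL alongside the `WorkerPools.SPACE_CLOUD` value used later on this page; if it is not, select the pool in the Jobs settings instead.

```kotlin
// A sketch, assuming WorkerPools.SELF_HOSTED is available in the DSL.
job("Run on a self-hosted worker") {
    requirements {
        // route this job to the Self-Hosted Workers pool
        // instead of the default Space Automation Cloud pool
        workerPool = WorkerPools.SELF_HOSTED
    }
    // a host step runs directly on the self-hosted machine
    host("Print host info") {
        shellScript {
            content = "uname -a"
        }
    }
}
```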
For a better understanding of how to run a job in a particular environment, see Examples.
Default worker pool
The default pool for running jobs is defined by the Default worker pool parameter on the organization and project levels. The project-level parameter has priority over the organization-level one.
To change the default worker pool for the organization:
1. On the main menu, click Administration and choose Automation.
2. Set the Default worker pool parameter: Space Automation Cloud or Self-Hosted Workers.
To change the default worker pool for a project:
1. Open the project and then open the Jobs page.
2. Click Settings.
3. Set the Default worker pool parameter: Space Automation Cloud or Self-Hosted Workers.
Job requirements
The `requirements` block lets you define more specific requirements for the run environment. For example, you can set the minimum CPU and memory resources that a job or an individual step needs:
// this job will run on a worker
// that has at least 2.cpu and 4000.mb
// (the "Say Hello 2" step below raises the requirements above the job-level values)
job("Example") {
    requirements {
        resources {
            minCpu = 1.cpu
            minMemory = 2000.mb
        }
    }

    // the container will be limited to 1.cpu and 2000.mb
    container(displayName = "Say Hello", image = "hello-world")

    host("Say Hello 2") {
        shellScript {
            content = "echo Hello World!"
        }
        // these requirements override the job's requirements
        // as they are higher
        requirements {
            resources {
                minCpu = 2.cpu
                minMemory = 4000.mb
            }
        }
    }
}
Run environments in Space On-Premises
Support for different run environments depends on the Space On-Premises installation type:
- Proof-of-concept installation via Docker Compose:
  - Self-hosted workers – full support.
  - Cloud workers – not supported.
- Production installation in a Kubernetes cluster:
  - Self-hosted workers – full support.
  - Cloud workers – as Kubernetes implies running workloads in containers only, the behavior differs when Automation is configured to run a job in the cloud (for example, Space Automation Cloud is selected in the Jobs settings or `job.requirements` has `workerPool = WorkerPools.SPACE_CLOUD`). In this case, even if a job uses a `host` block, Automation runs it in a Docker container:
    - Regardless of what is specified in `job.requirements.workerType`, the container gets 2 vCPU and 7800 MB of memory.
    - The container image is based on Alpine Linux and provides support for Docker and Docker Compose.
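As a sketch of the behavior described above, the following job targets the cloud pool with a `host` block; on a Kubernetes-based On-Premises installation it would still run inside the Alpine-based Docker container with 2 vCPU and 7800 MB of memory. The job name and shell commands are illustrative only.

```kotlin
// Sketch: on a Kubernetes-based Space On-Premises installation,
// this host step is executed inside a Docker container anyway.
job("On-Premises cloud example") {
    requirements {
        workerPool = WorkerPools.SPACE_CLOUD
    }
    host("Check environment") {
        shellScript {
            // prints the CPU count, available memory, and Docker version
            // visible inside the Alpine-based container
            content = "nproc && free -m && docker --version"
        }
    }
}
```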
Examples
Below you will find examples of how to run jobs in various environments.