JetBrains Space Help

Space On-Premises Installation

As an alternative to using Space as a service, you can get your own self-managed Space instance (or Space On-Premises). This means that you install, manage, and maintain Space on your own.

Space On-Premises overview

The production installation of Space On-Premises involves running Space and the required services in a Kubernetes cluster. The cluster itself can run in your own environment, in Amazon Elastic Kubernetes Service, Google Kubernetes Engine, or any other cloud service that supports Kubernetes. The minimum supported Kubernetes version is 1.21.

Limitations of the Space On-Premises Beta version

Currently, Space On-Premises is in the Beta state and has the following limitations:

  • The Beta license is free and includes all features of the Organization plan.

  • The Beta license is valid only until January 31, 2023.

  • The maximum number of users is 1000.

  • Only self-hosted Automation (CI/CD) workers are supported. To use all features of self-hosted workers, you must have a publicly available object storage endpoint.

  • The maximum number of concurrently running Automation workers is 50.

  • Dev environments are not supported. We plan to add support for dev environments in the public release of Space On-Premises.

Overview

A user can access a Space instance from various clients: a mobile app, a web browser, or a desktop app. All Space DNS names must resolve to the IP address of the same load balancer. The load balancer uses Server Name Indication (SNI) to route the user to one of the pods that runs the required service.
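Host-based routing of this kind is typically expressed as a Kubernetes Ingress resource. The sketch below is a minimal illustration only: the hostnames under space.local and the service names are assumptions for the example (the internal ports match the defaults described later in this article), not the actual defaults of the Space distribution.

```yaml
# Hypothetical Ingress routing each Space hostname to its service.
# Hostnames and service names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: space-routing
spec:
  rules:
    - host: space.local            # Space application (UI)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: space
                port:
                  number: 9084
    - host: git.space.local        # VCS (Git server)
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vcs
                port:
                  number: 19084
    - host: packages.space.local   # Packages
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: packages
                port:
                  number: 9390
```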

Space On-Premises design

A Space On-Premises instance consists of the following components:

  • Cluster services:

    • Ingress controller – creates Space subdomains on demand.

    • External DNS – links the created subdomains with an external DNS.

    • Cert manager – issues domain certificates on demand.

  • Space application components:

    • Space application – provides all Space functionality including the user interface. Available to users via a public URL.

    • VCS – a Git server. Available to users via a public URL.

    • Packages – a Space package repository manager that lets users create various repositories, such as container registries, NuGet feeds, and Maven repositories. Available to users via a public URL.

    • Langservice – an internal component that provides code formatting services for the Space user interface. Not available to users.

  • Space application services is an optional component that typically includes logging and monitoring services. These services are not included in the Space On-Premises installation, as most companies already have conventions about which services should be used to monitor applications at the cluster level.

  • Space external storage components is a data pool shared between all application components. The components can be configured in a way that provides data segregation and isolation: each component doesn't share data with any other component. The storage components are added to a cluster with external links – they can be a part of the cluster or can be hosted externally to Space.

    • Redis – used as an event queue.

    • PostgreSQL – an SQL database that stores application data.

    • MinIO or another S3-compatible storage – an object storage.

    • Elasticsearch – a search database.
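Since these storage components live outside the application pods, the cluster only needs their connection endpoints. As a rough sketch, such settings might be grouped in values.yaml along these lines; the keys, hostnames, and structure below are illustrative, not the actual chart schema:

```yaml
# Illustrative external-storage endpoints; the real values.yaml keys may differ.
postgres:
  host: postgres.internal.example.com
  port: 5432
  database: space
redis:
  host: redis.internal.example.com
  port: 6379
s3:
  endpoint: https://minio.internal.example.com
  bucket: space-data
elasticsearch:
  host: https://es.internal.example.com:9200
```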

DNS

As mentioned above, a user client can access only three Space application components via public URLs: the Space user interface, Space Packages, and VCS. This means a Space On-Premises instance requires a top-level domain name and three subdomains for the corresponding services. The URLs must be available to the clients. For example, suppose your top-level domain is space.local:

DNS for on-premises

From the security perspective, you can either create a separate TLS certificate for each service or create a shared (wildcard) certificate that covers all of them.
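With cert-manager (listed above among the cluster services), a shared wildcard certificate could be requested roughly as follows. This is a hedged sketch: the resource, issuer, and secret names are illustrative, and the issuer must already exist in your cluster.

```yaml
# Hypothetical cert-manager Certificate covering all Space subdomains.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: space-wildcard
spec:
  secretName: space-wildcard-tls   # where the issued key pair is stored
  issuerRef:
    name: letsencrypt              # illustrative ClusterIssuer name
    kind: ClusterIssuer
  dnsNames:
    - "space.local"
    - "*.space.local"              # covers the service subdomains
```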

Pod scheduling

The scheduling policy implies that a worker can run only one pod with a particular Space application component. For example, if a worker runs a pod with Space Packages, Kubernetes Scheduler will not deploy another Space Packages pod to this worker. However, it can deploy a pod with a different component, for example, a Git server.

For example, a runtime pod configuration may look as follows:

On-prem pod scheduling
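This one-pod-per-worker behavior is what Kubernetes pod anti-affinity expresses. A minimal sketch for the Packages component, assuming a hypothetical app label on its pods:

```yaml
# Sketch: forbid two Packages pods on the same node via pod anti-affinity.
# The label key and value are illustrative.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: packages                       # illustrative label of the Packages pods
        topologyKey: kubernetes.io/hostname     # at most one such pod per node
```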

Application components. Space

Onprem Space Ui

The Space application component provides the user interface and the main Space functionality. Within the cluster, the Space application is reachable as the internal service on the default port 9084 (adjustable).

The Space application component uses the following configuration files:

  1. cm:space-envs: a ConfigMap with custom environment variables which can be injected into the process by a user.

  2. cm:space-conf: a ConfigMap with specific configuration settings for the process. It is local to this application.

  3. cm:logs: a ConfigMap with logging configuration. This ConfigMap is common for all application components.
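As an illustration, a cm:space-envs ConfigMap carrying user-defined environment variables could look like this; the variable names are made up for the example:

```yaml
# Illustrative cm:space-envs ConfigMap; the variable names are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: space-envs
data:
  JAVA_OPTS: "-Xmx4g"
  MY_CUSTOM_FLAG: "true"
```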

In addition to the configuration files, you should use a number of secrets to configure the Space application component. secrets:space is a group of configuration files that contain the required application configuration specified as environment variables. Each secret corresponds to a group in the values.yaml file:

  • space-automation-dsl

  • space-automation-logs

  • space-automation-worker

  • space-database

  • space-es-audit

  • space-es-metrics

  • space-es-search

  • space-eventbus

  • space-mail

  • space-main

  • space-oauth

  • space-organization

  • space-packages

  • space-recaptcha

  • space-s3

  • space-vcs
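For example, a secret such as space-database would hold its settings as key-value pairs that are consumed as environment variables. A hedged sketch, where the key names and values are illustrative rather than the actual schema:

```yaml
# Illustrative Secret; the actual keys come from the corresponding
# group in values.yaml and may differ.
apiVersion: v1
kind: Secret
metadata:
  name: space-database
type: Opaque
stringData:
  DB_HOST: postgres.internal.example.com
  DB_USERNAME: space
  DB_PASSWORD: change-me
```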

Optionally:

  1. It is possible to configure the Horizontal Pod Autoscaler policy. To enable the policy, the Kubernetes Cluster Admin must configure an HPA Controller.

  2. It is possible to configure a service account.
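If your Kubernetes Cluster Admin has configured an HPA Controller, an autoscaling policy for the Space application could be sketched as follows. The deployment name, replica bounds, and target utilization are illustrative assumptions:

```yaml
# Illustrative Horizontal Pod Autoscaler for the Space application.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: space
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: space                    # illustrative deployment name
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```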

See the Space application configuration in values.yaml.

Application components. VCS

VCS application component

The VCS component lets users create Git repositories for their projects in Space. Within the cluster, the component is reachable as the internal service on the default port 19084 (adjustable).

The VCS component uses the following configuration files:

  1. cm:vcs-envs: a ConfigMap with custom environment variables which can be injected into the process by a user.

  2. cm:vcs-conf: a ConfigMap with specific configuration settings for the process. It is local to this application.

  3. cm:logs: a ConfigMap with logging configuration. This ConfigMap is common for all application components.

In addition to the configuration files, you should use a number of secrets to configure the VCS component. secrets:vcs is a group of configuration files that contain the required application configuration specified as environment variables. Each secret corresponds to a group in the values.yaml file:

  • vcs-database

  • vcs-eventbus

  • vcs-main

  • vcs-s3

Optionally:

  1. It is possible to configure the Horizontal Pod Autoscaler policy. To enable the policy, the Kubernetes Cluster Admin must configure an HPA Controller.

  2. It is possible to configure a service account.

See the VCS configuration in values.yaml.

Application components. Packages

Packages component

Space Packages is a package repository manager. Within the cluster, the component is reachable as the internal service on the default port 9390 (adjustable).

The Packages component uses the following configuration files:

  1. cm:packages-envs: a ConfigMap with custom environment variables which can be injected into the process by a user.

  2. cm:packages-conf: a ConfigMap with specific configuration settings for the process. It is local to this application.

  3. cm:logs: a ConfigMap with logging configuration. This ConfigMap is common for all application components.

In addition to the configuration files, you should use a number of secrets to configure the Packages component. secrets:packages is a group of configuration files that contain the required application configuration specified as environment variables. Each secret corresponds to a group in the values.yaml file:

  • packages-database

  • packages-es-search

  • packages-eventbus

  • packages-main

  • packages-oauth

  • packages-organization

  • packages-s3

  • packages-space

Optionally:

  1. It is possible to configure the Horizontal Pod Autoscaler policy. To enable the policy, the Kubernetes Cluster Admin must configure an HPA Controller.

  2. It is possible to configure a service account.

See the Packages configuration in values.yaml.

Application components. Langservice

Langservice app component

The Langservice is an internal component that provides code formatting for the Space user interface. Within the cluster, the component is reachable as the internal service svc:langservice on the default port 8095 (adjustable).

The Langservice component uses the following configuration files:

  1. cm:langservice-envs: a ConfigMap with custom environment variables which can be injected into the process by a user.

  2. cm:langservice-conf: a ConfigMap with specific configuration settings for the process. It is local to this application.

  3. cm:logs: a ConfigMap with logging configuration. This ConfigMap is common for all application components.

Optionally:

  1. It is possible to configure the Horizontal Pod Autoscaler policy. To enable the policy, the Kubernetes Cluster Admin must configure an HPA Controller.

  2. It is possible to configure a service account.

See the Langservice configuration in values.yaml.

Proof-of-concept installation

As an alternative to a Kubernetes cluster, you can run your Space On-Premises instance in a number of Docker containers configured with Docker Compose. Note that we recommend such a deployment only for proof-of-concept purposes. It's the easiest way to try Space On-Premises on your local physical or virtual machine.
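To give a feel for the shape of such a deployment, a heavily simplified docker-compose.yml might look like this. The service names, image names, and ports are illustrative; the actual Compose file shipped with Space On-Premises defines more services and settings.

```yaml
# Heavily simplified, illustrative sketch of a Compose deployment.
# Image names and versions do not reflect the actual distribution.
services:
  space:
    image: jetbrains/space:latest   # illustrative image name
    ports:
      - "8084:8084"
    depends_on: [postgres, redis, minio, elasticsearch]
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: change-me
  redis:
    image: redis:6
  minio:
    image: minio/minio
    command: server /data
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      discovery.type: single-node
```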

Docker Compose deployment

Get started

We recommend that you start with a proof-of-concept installation on your local machine. It will let you get acquainted with the Space On-Premises configuration and better understand the requirements for your future production installation in a Kubernetes cluster.

Last modified: 25 November 2022