Upsource 2020.1 Help

Setting up Upsource cluster

This is a step-by-step guide to installing Upsource in a distributed multi-node cluster. The installation procedure is performed only once. After it's completed, the services are managed by the standard docker-compose tool (or, more precisely, by cluster.sh, a JetBrains-provided docker-compose wrapper that supports the same command-line format for managing cluster services and adds commands for cluster initialization and upgrade).
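For day-to-day management, cluster.sh therefore accepts the familiar docker-compose verbs. For example (the swarm manager address is a placeholder, and we assume the wrapper forwards these verbs unchanged, as described above):

    # List the state of all Upsource services
    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> ps

    # Follow the logs of a single service, e.g. the frontend
    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> logs frontend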

What's included

Upsource cluster consists of the following services:

cassandra

Manages the Cassandra database included with Upsource.

frontend

Upsource web-based UI.

psi-agent

Provides code model services (code intelligence) based on IntelliJ IDEA.

psi-broker

Manages psi-agent tasks.

analyzer

Imports revisions from the VCS into the Cassandra database.

opscenter

Provides cluster monitoring facilities.

haproxy

Entry point of the distributed Upsource cluster. Proxies all incoming requests to services.

file-clustering

Provides comparative analysis for Upsource "smart" features (suggestions, reminders, etc.) by computing file and revision similarity indices.

The cluster structure is defined by the docker-compose.yml file and is parametrized by the properties defined in the upsource.env file. Both files are included in the cluster-config artifact (you can download the latest version here).

Note: the Upsource cluster does not include JetBrains Hub (see Prerequisites below).

Prerequisites

  1. Install JetBrains Hub or use an existing standalone instance.

  2. Set up a Docker Swarm cluster, including:

    • swarm manager (master)

    • worker instances (nodes)

    • key-value storage (consul)

    Refer to this instruction to set up a key-value based swarm cluster.

    Notes on the instruction above:

    • Install Docker Engine version 1.12 or higher on every swarm node.

    • Docker Engine swarm mode, originally introduced in version 1.12, is not fully integrated with docker-compose, so the swarm should be created using the swarm Docker images instead.

    • Check that the swarm cluster is operational.

  3. Make sure the clocks of all cluster nodes, as well as the Hub and Cassandra nodes, are synchronized (systemd timers are one possible way to keep time in sync).
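    To sanity-check these prerequisites before proceeding, you can run the following (the manager address is a placeholder; timedatectl assumes systemd-based hosts):

    # On the admin host: confirm the swarm answers and lists its nodes
    docker -H tcp://<swarm.master.host>:<swarm.master.port> info

    # On each node: confirm the system clock is synchronized
    timedatectl status | grep -i synchronized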

Configure Upsource cluster

  1. Unpack cluster-config.zip to the host from which you're going to manage the Upsource cluster; we'll refer to it as the cluster admin host.

    You can use any server, not necessarily a swarm manager or a swarm node. Just make sure that once you've selected it, the cluster is managed from that admin host only.

  2. Make sure all of these files are located in the same directory:

    • cluster.sh: a wrapper around the standard docker-compose tool. cluster.sh also defines default values for the variables substituted into docker-compose.yml.

    • docker-compose.yml: defines the Upsource cluster structure.

    • upsource.env: defines properties passed to the Upsource services running inside Docker containers.

    • docker-compose-params.env: defines parameters used in docker-compose.yml. cluster.sh provides default values for these parameters; docker-compose-params.env overrides the defaults where your environment requires it.

    • docker-compose-cluster-init.yml: defines parameters of the service activated at installation and upgrade.

    • docker-compose-cluster-upgrade.yml: defines parameters of the service activated at upgrade.

  3. The port the Upsource cluster listens on is defined by the UPSOURCE_EXPOSED_PROXY_PORT property in cluster.sh and defaults to 8080. To override it, set the property in docker-compose-params.env:

    UPSOURCE_EXPOSED_PROXY_PORT=<The port number Upsource should listen to>
  4. Define a swarm node on which opscenter should be deployed by specifying a value for the variable UPSOURCE_OPSCENTER_NODE located in docker-compose-params.env:

    UPSOURCE_OPSCENTER_NODE=<opscenter_nodeId>

    where opscenter_nodeId is the name of the swarm worker node you're defining.

  5. On the node you specified in the previous step (opscenter_nodeId), create a folder for backups and grant read-write permissions to the user with ID 13001 (inside a container, the Upsource service runs as the user jetbrains with ID 13001 and would otherwise have no access to the mapped volume on the host machine).

    By default, the backups volume is mapped to the folder /opt/upsource/backups on the host machine; you can customize this in docker-compose-params.env by editing the UPSOURCE_BACKUPS_PATH_ON_HOST_SYSTEM property. Run the following commands on the swarm node (opscenter_nodeId); they assume UPSOURCE_BACKUPS_PATH_ON_HOST_SYSTEM was not changed (otherwise, run them against the overridden backups directory):

    mkdir -p -m 750 /opt/upsource/backups
    chown 13001:13001 /opt/upsource/backups
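    To verify the result, you can check the mode and ownership (GNU stat shown; expect 750 13001:13001):

    stat -c '%a %u:%g' /opt/upsource/backups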

  6. Define a swarm node on which haproxy service should be deployed by specifying a value for the variable UPSOURCE_PROXY_NODE located in docker-compose-params.env:

    UPSOURCE_PROXY_NODE=<haproxy_nodeId>

    where haproxy_nodeId is the name of the swarm worker node you're defining.

  7. Define a swarm node on which cassandra service should be deployed by specifying a value for the variable UPSOURCE_CASSANDRA_NODE located in docker-compose-params.env:

    UPSOURCE_CASSANDRA_NODE=<cassandra_nodeId>

    where cassandra_nodeId is the name of the swarm worker node you're defining.

  8. Define a swarm node on which the cluster-init service should be deployed by specifying a value for the variable UPSOURCE_CLUSTER_INIT_NODE located in docker-compose-params.env:

    UPSOURCE_CLUSTER_INIT_NODE=<cluster_init_nodeId>

    where cluster_init_nodeId is the name of the swarm worker node you're defining.
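    If you're not sure of the node names, running docker info against the swarm manager prints a Nodes section listing them. With steps 4-8 done, the node-related part of docker-compose-params.env might look like this (the node names are purely illustrative):

    docker -H tcp://<swarm.master.host>:<swarm.master.port> info

    # Illustrative docker-compose-params.env fragment
    UPSOURCE_OPSCENTER_NODE=upsource-node1
    UPSOURCE_PROXY_NODE=upsource-node1
    UPSOURCE_CASSANDRA_NODE=upsource-node2
    UPSOURCE_CLUSTER_INIT_NODE=upsource-node2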

  9. On all the swarm nodes, pre-create the service log directories and grant the user with ID 13001 read-write access to them (inside a container, the Upsource service runs as the user jetbrains with ID 13001 and would otherwise have no access to the mapped volumes on the host machines):

    mkdir -p -m 750 /var/log/upsource/psi-agent
    chown 13001:13001 /var/log/upsource/psi-agent
    mkdir -p -m 750 /var/log/upsource/psi-broker
    chown 13001:13001 /var/log/upsource/psi-broker
    mkdir -p -m 750 /var/log/upsource/analyzer
    chown 13001:13001 /var/log/upsource/analyzer
    mkdir -p -m 750 /var/log/upsource/frontend
    chown 13001:13001 /var/log/upsource/frontend
    mkdir -p -m 750 /var/log/upsource/opscenter
    chown 13001:13001 /var/log/upsource/opscenter
    mkdir -p -m 750 /var/log/upsource/cluster-init
    chown 13001:13001 /var/log/upsource/cluster-init
    mkdir -p -m 750 /var/log/upsource/file-clustering
    chown 13001:13001 /var/log/upsource/file-clustering
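    Equivalently, the same commands can be scripted as a shell loop over the service names listed above:

    for svc in psi-agent psi-broker analyzer frontend opscenter cluster-init file-clustering; do
        mkdir -p -m 750 /var/log/upsource/$svc
        chown 13001:13001 /var/log/upsource/$svc
    done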
  10. Set variables in upsource.env:

    • Define Hub related properties:

      1. HUB_URL

        HUB_URL=<URL of external Hub>
      2. Import a Hub certificate to Upsource. Skip this step unless Hub is accessed over HTTPS with a self-signed certificate or a certificate signed by a private CA.
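        If you do need the certificate, one way to obtain it in PEM form from a running Hub instance is openssl s_client (the host name is a placeholder); then import the resulting file as the linked instructions describe:

        # Fetch the certificate presented by Hub and save it as PEM
        openssl s_client -connect <hub.host>:443 -showcerts </dev/null 2>/dev/null \
            | openssl x509 -outform PEM > hub.pem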

    • Add the following line to upsource.env:

      UPSOURCE_MONITORING_RECORD_HOURS=1
    • Set the property UPSOURCE_URL to the URL the end users will use to access Upsource.

      UPSOURCE_URL=<URL for end users to access Upsource>

      By default, Upsource is available at: http://<haproxy_nodeId.address>:${UPSOURCE_EXPOSED_PROXY_PORT}/

      where haproxy_nodeId is the node to which the haproxy service is deployed.

      Your production environment will most likely be set up behind an SSL-terminating proxy (see how to configure a proxy for Upsource; the only difference for a cluster is that there is no need to run the Upsource configure command, since the Upsource cluster base URL is defined by the UPSOURCE_URL property instead). In this case, set the proxy address as the value of the UPSOURCE_URL variable.
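      As a rough sketch (not an official configuration), an SSL-terminating nginx in front of the cluster could look like this, with placeholder names and the default port 8080; consult the linked proxy instructions for the headers Upsource actually requires:

      server {
          listen 443 ssl;
          server_name upsource.example.com;            # hypothetical public name
          ssl_certificate     /etc/nginx/ssl/upsource.crt;
          ssl_certificate_key /etc/nginx/ssl/upsource.key;

          location / {
              # Forward everything to the haproxy entry point of the cluster
              proxy_pass http://<haproxy_nodeId.address>:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto https;
          }
      }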

      Let's assume you set the UPSOURCE_URL variable to some <upsource.url>. In this case you should:

    • Create a trusted service in Hub (on the page <hub.url>/hub/services) for the Upsource cluster. Note the service ID and secret: you will need them below to set UPSOURCE_SERVICE_ID and UPSOURCE_SERVICE_SECRET in the upsource.env file.

      This service will be used by all Upsource services to communicate with Hub.

      Add redirect URLs for the created service:

      • <upsource.url>

      • <upsource.url>/~download

      • <upsource.url>/~generatedTree

      • <upsource.url>/~oauth/github

      • <upsource.url>/~unsubscribe

      • <upsource.url>/~uploads

      • <upsource.url>/monitoring

      Set the Upsource service home URL to <upsource.url>.

    • Set UPSOURCE_SERVICE_ID and UPSOURCE_SERVICE_SECRET to the values defined in the Hub trusted service you've created:

      UPSOURCE_SERVICE_ID=<Key of the trusted service pre-configured in Hub>
      UPSOURCE_SERVICE_SECRET=<Secret of the trusted service pre-configured in Hub>
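    Taken together, a completed upsource.env might look like this (all values below are placeholders):

    HUB_URL=https://hub.example.com
    UPSOURCE_URL=https://upsource.example.com
    UPSOURCE_MONITORING_RECORD_HOURS=1
    UPSOURCE_SERVICE_ID=<trusted service key from Hub>
    UPSOURCE_SERVICE_SECRET=<trusted service secret from Hub>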

Start Upsource cluster

  1. Make sure your external Hub is started and available prior to Upsource cluster startup.

  2. Prepare the Cassandra database:

    ./cluster.sh init-cluster -H tcp://<swarm.master.host>:<swarm.master.port>

    This command launches Cassandra, configures its database structure, and then stops it.

  3. Check that upsource-cluster-init has run successfully:

    The logs of the cluster-init run are located in the /var/log/upsource/cluster-init directory on the node where the container was started. The following command will show you on which node cluster-init was executed:

    docker -H tcp://<swarm.master.host>:<swarm.master.port> ps -a --format "{{.ID}}: {{.Names}} {{.Image}}"
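    Once you've identified the node, you can inspect the logs on it (file names inside the directory may vary):

    ls -l /var/log/upsource/cluster-init
    tail -n 100 /var/log/upsource/cluster-init/*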
  4. Make sure docker-compose (version 1.10 or higher) is installed on the node from which you'd like to manage the cluster:

    docker-compose -v
  5. Launch Upsource cluster:

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up

    cluster.sh has the same command-line format as docker-compose, since it is simply a wrapper around docker-compose.
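    After startup, a quick way to verify the cluster is to list the services and probe the entry point (the address and port are placeholders):

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> ps
    curl -I http://<haproxy_nodeId.address>:8080/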

Last modified: 02 April 2021