Upsource 2017.1 Help

Upgrading Upsource cluster

Follow this guide to upgrade your existing Upsource cluster installation to a newer version. The upgrade procedure depends on the size of the version increment (for example, 3.5.1 to 3.5.2 versus 3.5.x to 4.0.x), as outlined below.

Minor upgrade

Follow these instructions when upgrading to a version that differs only by a build number, for example, from 3.5.111 to 3.5.222.

  1. Stop Upsource cluster and remove its containers:

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> stop
    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> rm
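
    To confirm that the containers are gone before proceeding, you can list all containers on the Swarm master (a suggested sanity check, not part of the official procedure):

    docker -H tcp://<swarm.master.host>:<swarm.master.port> ps -a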

  2. Download cluster-config-<major.minor.NewBuildNumber>.zip

    Important! If you have ever changed the cluster.sh and docker-compose.yml files (for example, added new analyzer properties), you need to:

    • Rename the old files (e.g. to cluster.sh.bak and docker-compose.yml.bak).
    • Save the new cluster.sh and docker-compose.yml files in the same directory.
    • Copy the changed and added lines from the old files to the new ones (a diff sketch follows this list).
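
    For example, assuming you kept the .bak copies suggested above, you can diff the old files against the new ones to spot the lines to carry over:

    diff cluster.sh.bak cluster.sh
    diff docker-compose.yml.bak docker-compose.yml
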
  3. Check that the correct UPSOURCE_VERSION is set inside cluster.sh.
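
    One way to verify this (a suggested check) is to print the assignment directly:

    grep UPSOURCE_VERSION cluster.sh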

  4. Start Upsource cluster:

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up

Major upgrade

Follow these instructions when upgrading to a new major or minor version, for example, from 3.5.111 to 4.0.111.

  1. Create a backup of your existing installation.

  2. Delete the data from your Cassandra database instance. Indexed data stored in Cassandra cannot be migrated during a major upgrade. All user-generated data will be restored from the backup.
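
    How you do this depends on your Cassandra setup. As a rough sketch, assuming the Upsource data lives in a keyspace named upsource (verify the actual name first with DESCRIBE KEYSPACES), you could drop it with cqlsh:

    cqlsh <cassandra.host> -e "DESCRIBE KEYSPACES;"
    cqlsh <cassandra.host> -e "DROP KEYSPACE upsource;"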

  3. Stop Upsource cluster and remove its containers:

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> stop
    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> rm

  4. Copy your backup to a temporary folder on the host where cluster-init will be started. (The following steps assume the backup folder is /tmp/upsource/backup/2016 Oct 11 12-18-26.)
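
    For example, if the backup currently resides on a mounted backup share (the /mnt/backups path here is hypothetical):

    mkdir -p /tmp/upsource/backup
    cp -r "/mnt/backups/upsource/2016 Oct 11 12-18-26" /tmp/upsource/backup/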

  5. Run the following command:

    chown -R 13001:13001 "/tmp/upsource/backup/2016 Oct 11 12-18-26"
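
    This makes the backup readable by the unprivileged user (UID 13001) that the Upsource containers run as. You can verify the resulting numeric ownership with:

    ls -lRn "/tmp/upsource/backup/2016 Oct 11 12-18-26"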

  6. Run the new version's cluster-init process on the host where the backup was copied to (the backup location and the Hub certificate, if any, are provided as volumes):

    docker -H <docker host where backup was copied to> run \
        -v /var/log/upsource/cluster-init:/opt/upsource-cluster-init/logs \
        -v /opt/hub/cert:/opt/upsource-cluster-init/conf/cert \
        -v "/tmp/upsource/backup/2016 Oct 11 12-18-26/data":/opt/upsource-cluster-init/data \
        --env-file=upsource.env \
        jetbrains/upsource-cluster-init:<major.minor.NewBuildNumber>

  7. Check that it has run successfully. The cluster-init execution logs are located in the /var/log/upsource/cluster-init directory of the node on which the container was started. The following command helps you determine which node cluster-init was executed on:

    docker -H tcp://<swarm.master.host>:<swarm.master.port> ps -a --format "{{.ID}}: {{.Names}} {{.Image}}"
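
    Once you know the container ID from the listing above, you can also inspect the container's console output directly:

    docker -H tcp://<swarm.master.host>:<swarm.master.port> logs <cluster-init.container.id>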

  8. Download cluster-config-<major.minor.NewBuildNumber>.zip

    Important! If you have ever changed the cluster.sh and docker-compose.yml files (for example, added new analyzer properties), you need to:

    • Rename the old files (e.g. to cluster.sh.bak and docker-compose.yml.bak).
    • Save the new cluster.sh and docker-compose.yml files in the same directory.
    • Copy the changed and added lines from the old files to the new ones (you can diff the .bak files against the new ones, as shown in the minor upgrade section).
  9. Check that the correct UPSOURCE_VERSION is set inside cluster.sh.

  10. Start Upsource cluster:

    ./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up

Last modified: 13 July 2017