Datalore Help

Pricing and plans


You can choose one of the following Datalore plans:

  • Community

  • Professional

  • Enterprise

The plans differ in the computation time, storage options, machine instances, and extra features available to you. The information in the sections below will help you choose the right option.


Computation

With regard to computation, it is important to consider the following aspects: computation time, parallel computation, and background computation.

Computation time

Computation time is the time spent running code and executing your notebooks. To view your current computations, click your avatar icon in the upper-right corner of the screen to open the Account menu and select Running computations.

For a detailed report on machine usage, go to Account menu | Account settings | Resource Usage. You can download it as a .csv file from there.
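Once downloaded, the Resource Usage report can be processed like any other .csv file, for example inside a Datalore notebook itself. The sketch below sums usage hours per machine type with Python's standard csv module; the column names (`machine`, `hours_used`) and the sample data are assumptions for illustration and may not match the actual export format.

```python
import csv
import io

# Hypothetical sample of a downloaded Resource Usage report.
# The real column names in Datalore's .csv export may differ.
sample_report = """machine,hours_used
Basic machine,12.5
Basic machine,3.0
GPU machine,1.5
"""

def total_hours_by_machine(csv_text):
    """Sum the reported hours for each machine type."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        machine = row["machine"]
        totals[machine] = totals.get(machine, 0.0) + float(row["hours_used"])
    return totals

totals = total_hours_by_machine(sample_report)
print(totals)  # {'Basic machine': 15.5, 'GPU machine': 1.5}
```

In practice you would open the downloaded file with `open(...)` instead of `io.StringIO`; the aggregation logic stays the same.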

Parallel computation

With the Community plan, you can run 2 notebooks in parallel within the available computation time quota.

With the Professional plan, you can have unlimited parallel computations within the available computation time quota.

Background computation

This mode keeps the computation running after the browser tab is closed, so you can close your notebook at any point without losing your computation progress. Background computation is particularly helpful when training heavy machine learning and deep learning models.

The Community plan allows you to keep your machine running for up to 6 hours after the tab is closed. The Professional plan sets no limitations other than your computation time quota.


Storage

You have the following storage options:

  • Internal storage

Datalore provides cloud storage for notebooks and attachments. Attached files remain attached to their notebooks even after you close Datalore. For a .csv report on storage usage, go to Account menu | Account settings | Resource Usage.

    If you downgrade from the Professional plan to Community, any data exceeding the storage limit will be deleted after 30 days.

  • External storage

Datalore supports external S3 buckets. This lets you work with data files you already have in your cloud storage and extend Datalore's internal storage as needed. To connect your storage, go to Tools | Attached data sources.


Sharing resources

For shared notebooks, Datalore uses the computation resources of the notebook owner's account. This means that if you own a notebook that you share with two people, and they continue running it after you close the Datalore tab, it is your computation time and memory that are consumed. If all three of you run the notebook simultaneously, it still counts as a single computation.

When you share a workspace, your storage resources will be consumed.

When you publish a notebook, you publish a static copy, and no computation resources will be consumed when the user views it.

Plan comparison

The table below compares the key features of Datalore Community and Professional plans.

| Feature | Community plan | Professional plan |
|---|---|---|
| Basic machine (monthly usage) | 120 hours | 750 hours |
| Large machine (monthly usage) | Not available | 120 hours |
| GPU machine (monthly usage) | 10 hours (one-time option, after completing a short interview) | 20 hours |
| Internal cloud storage | 10 GB | 20 GB |
| External storage (S3 bucket support) | Yes | Yes |
| Version history | Yes | Yes |
| Notebook publishing | Yes | Yes |
| Team collaboration and workspaces | Up to 3 editors, unlimited viewers, 1 shared workspace | Unlimited editors, viewers, and shared workspaces |
| Background computation | Up to 6 hours after closing the notebook | Unlimited after closing the notebook, within your computation time quota |
| Parallel computation | 2 notebooks running in parallel | Unlimited notebooks running in parallel |
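As a rough illustration of how the monthly quotas above constrain usage, the sketch below subtracts usage figures from the Community plan's per-machine quotas. The quota values come from the comparison table; the function name and usage numbers are illustrative assumptions, not part of any Datalore API.

```python
# Monthly machine quotas for the Community plan, taken from the
# comparison table above (hours per machine type).
COMMUNITY_QUOTA_HOURS = {
    "Basic machine": 120,
    "GPU machine": 10,
}

def remaining_hours(quota, used):
    """Return the hours left per machine type, floored at zero."""
    return {m: max(q - used.get(m, 0.0), 0.0) for m, q in quota.items()}

print(remaining_hours(COMMUNITY_QUOTA_HOURS,
                      {"Basic machine": 15.5, "GPU machine": 1.5}))
# {'Basic machine': 104.5, 'GPU machine': 8.5}
```

The `used` figures could come from the Resource Usage .csv report described earlier.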


Subscription

When you subscribe or upgrade, you are redirected to the JetBrains Account page, where you provide your billing details.

You can cancel your subscription at any time in your Account settings. If your subscription expires, you are automatically downgraded to Community.


Machines

We run your computations on Amazon EC2 virtual servers.

| Machine | Suitable for | AWS name | Details |
|---|---|---|---|
| Basic machine | Simple data analysis and machine learning tasks | t2a.medium | 4 GB RAM; AVX-512 enabled CPUs for efficient parallel computations |
| Large machine | Tasks with huge datasets | r5.large | 16 GB RAM; 2 vCPU cores; up to 5 times the speed of the basic machine; AVX-512 enabled CPUs for efficient parallel computations |
| GPU machine | Deep learning tasks | g4dn.xlarge | 16 GB RAM; 4 vCPU cores; up to 50 Gbps networking throughput |

For more information, go to Amazon Web Services.

Last modified: 15 June 2021