Read Product data collection notice for more details.

Does JetBrains collect my data to train AI models?

Only if you explicitly allow it. If you choose to share data with us, this data is used exclusively to improve JetBrains tools and train our own models, like Mellum, and it is never shared with third parties.

We are a commercial organization. Does JetBrains collect our data?

For companies that are unwilling or, for legal reasons, unable to opt in to the program, detailed data collection is disabled by default, and their admins remain in full control.

Admin settings in JetBrains Accounts:

[Screenshot: the data-sharing section in JetBrains Account]

Levels of protection:

  1. Data sharing is disabled by default on all types of JetBrains IDE licenses except the non-commercial tier (this tier is for individuals only). Data sharing is enabled only if you explicitly allow it in the settings or accept the non-commercial agreement.
  2. Users with organizational licenses can only share detailed code‑related data if an admin enables sharing at the company level, preventing accidental IP leaks.
  3. For companies, admin control is available but disabled by default, meaning no data is shared unless explicitly enabled.

I am an individual user. Does JetBrains collect my data?

If you use a JetBrains IDE under a commercial license, a free trial, a free community license, or an EAP build and do not explicitly consent to the new data collection model, we do not collect detailed code-related data.

If you’re using a non-commercial license, detailed code‑related data collection is enabled by default – you can disable it in the IDE settings via Settings | Appearance & Behavior | System Settings | Data Sharing.

To what extent can detailed data sharing be configured in JetBrains IDEs?

You can configure data sharing separately for each JetBrains IDE and each machine you work on, with further customization available depending on your license type, including non-commercial licenses. Note, however, that organization-level data sharing settings take precedence over user-level settings.

To protect your privacy, we don’t link detailed data sharing settings to your JetBrains Account, so you need to adjust preferences for each device and each JetBrains product you use.

Users of a non-commercial license who opted out of detailed data sharing on one machine will need to opt out again on any new machine they add to their network. JetBrains displays clear and timely notifications of this requirement when additional machines are added.

JetBrains is also considering implementing project-level data sharing settings.

Will my proprietary code be secure?

Yes. Any data you share will be handled responsibly, in full compliance with all EU data protection laws. This data is used exclusively to improve JetBrains tools and train our own models, like Mellum, and it is never exposed to third parties.

Will JetBrains sell my data or share it with other companies?

No. We will never sell your data or share it with third parties. It is used only for product analytics, model evaluation, and training our own models.

How will I see the benefit of sharing data?

Your contributions directly improve the tools you use every day. Data helps us detect unsafe code, reduce false positives, improve completions, and make AI features adapt better to real-world workflows.

What if I change my mind after opting in?

  1. You can change the data collection settings at any time, unless your company has opted in to data sharing on behalf of its employees.
  2. We apply a one-year data retention period for the detailed data collected. Once you stop sharing your data, it stays on our server for one year and is then removed.
  3. If you stop sharing your data, previous data could still be used to train the models until the data expires.
  4. If your data has already been used for training, the resulting model may still be used even after the data is deleted.
  5. To mitigate risks, JetBrains will provide a mechanism to remove data upon request.

Why do you need detailed data rather than just anonymous telemetry?

Anonymous telemetry tells us how features are used, but not how the AI behaves on real tasks. Detailed data (such as edit history and AI prompts and responses) shows where the AI succeeds or fails, which is critical for making it more reliable for professional development.

How do you prevent sensitive data from being used to train models?

We use filtering, anonymization, and strict access controls to avoid using sensitive information for LLM training. Our processes are continuously audited, and we treat your code as intellectual property that must remain protected.