AI Assistant Help

Use third-party and local models

By default, AI Assistant provides access to a set of cloud-based models from various AI providers through the JetBrains AI service subscription. These models power AI Assistant features and can be selected in AI Chat to have a conversation about your codebase.

In addition, you can configure AI Assistant to use locally hosted models (for example, through Ollama or LM Studio) or models from third-party providers such as OpenAI, Anthropic, and other OpenAI-compatible services.

By configuring models from different sources, you can control which models AI Assistant uses and how those models are provided.

Access models from third-party AI providers

To access models from third-party providers such as OpenAI, Anthropic, or other OpenAI-compatible endpoints, AI Assistant requires an API key and, in some cases, an endpoint URL. Entering the key allows AI Assistant to authenticate with the provider and access its models.

To provide the API key:

  1. Navigate to Settings | Tools | AI Assistant | Providers & API keys.

    Providers & API keys settings
  2. In the Third-party AI providers section, select the Provider.

  3. Enter the API Key and click Test Connection to check whether the connection is established successfully.

    Provide API key

    If you are configuring an OpenAI-compatible provider, specify the URL of the provider's API endpoint in addition to the API Key. Also, indicate whether the model supports calling tools configured through the Model Context Protocol (MCP) by enabling or disabling the Tool calling setting.

    OpenAI-compatible provider
  4. Click Apply to save changes.

To verify that the models from the configured provider are available for use, open AI Chat and click the model selector. The provider's models appear under a corresponding section.
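If you want to double-check an OpenAI-compatible endpoint outside the IDE, the sketch below builds the authenticated request such providers typically expect. The base URL and key are placeholders, not real values, and the request is only constructed, not sent, so the snippet runs without network access:

```python
import urllib.request

# Hypothetical values -- substitute your provider's endpoint and key.
BASE_URL = "https://api.example.com/v1"
API_KEY = "sk-placeholder"

# OpenAI-compatible providers typically expose GET /models for listing
# available models, authenticated with a Bearer token.
req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# urllib.request.urlopen(req) would perform the actual call; it is
# omitted here so the snippet stays runnable offline.
print(req.full_url)
```

If Test Connection fails in the IDE, reproducing the same URL and header outside it helps separate endpoint problems from configuration mistakes.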

Third-party provider models in chat

Models accessed from a third-party AI provider are assigned to AI Assistant features automatically. If these models do not support certain AI Assistant features, those features become unavailable.

For more information on what models are used for AI Assistant features, refer to List of assigned and fallback models.

Connect local models

Providers like Ollama and LM Studio run models on your machine. Connecting to them in AI Assistant allows you to use these models directly from your local setup.

  1. Navigate to Settings | Tools | AI Assistant | Providers & API keys.

  2. In the Third-party AI providers section, select the Provider.

Specify the URL where the provider can be accessed and click Test Connection to check whether the connection is established successfully.

    Enable Third-party AI providers
  4. Click Apply to save changes.

Once the connection is established, local models become available for use in AI Chat. You can also assign locally hosted models to specific AI Assistant features.
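To confirm outside the IDE which models a local Ollama server exposes, its model-listing endpoint can be queried. The sketch below builds the request and then parses a sample response shape (the model names and the sample payload are illustrative, not a live call):

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust if you changed the port.
OLLAMA_URL = "http://localhost:11434"

# GET /api/tags returns the models installed locally. The request is
# built but not sent, so this snippet runs without a server.
req = urllib.request.Request(f"{OLLAMA_URL}/api/tags")

# Illustrative response shape: installed models appear under "models".
sample = json.loads('{"models": [{"name": "llama3.2"}, {"name": "qwen2.5-coder"}]}')
names = [m["name"] for m in sample["models"]]
print(names)
```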

Assign models to AI Assistant features

Each AI Assistant feature has a predefined list of models assigned to it. These models are used when the feature is triggered. Some features also have a predefined list of fallback models, which are used if none of the assigned models are available.

When a feature is triggered, AI Assistant checks whether any of the models available to you match the models assigned to that feature. If no match is found and fallback models are defined for the feature, the system checks for a match among the fallback models. If no compatible model is available, the feature is unavailable.

The mechanism works as follows:

  1. You obtain models from a provider.

  2. When you trigger a feature, the system checks your models against the models assigned to that feature.

  3. If a matching model is found, the feature is available.

  4. If no match is found and fallback models are defined for the feature, the system checks your models against the fallback models.

  5. If a matching fallback model is found, the feature is available; otherwise, the feature is NOT available.
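The selection mechanism above can be sketched in a few lines; the model names used here are illustrative, not AI Assistant's internal identifiers:

```python
def pick_model(available, assigned, fallback=()):
    """Return the first assigned model you have access to; if none match
    and fallback models are defined, try those. None means the feature
    is unavailable. Sketch of the described mechanism, not actual code."""
    for model in assigned:
        if model in available:
            return model
    for model in fallback:
        if model in available:
            return model
    return None

# Illustrative: the assigned model is missing, but a fallback matches.
available = {"gpt-4o-mini", "gemini-2.5-flash"}
print(pick_model(available, assigned=["gpt-4o"], fallback=["gemini-2.5-flash"]))
```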

By default, AI Assistant features use models provided through the JetBrains AI service, ensuring that all features are available.

However, models obtained from third-party AI providers or local models can also be used for AI Assistant features. Depending on the model source, models are assigned differently:

  • Third-party AI providers – models are assigned to features automatically. If the provider models do not support a specific feature, that feature is unavailable.

  • Local models and models from OpenAI-compatible endpoints – models can be assigned to groups of AI Assistant features manually.

To assign local models and models accessed from OpenAI-compatible endpoints to AI Assistant features:

  1. Go to Settings | Tools | AI Assistant | Providers & API keys.

  2. In the Models Assignment section, specify the models that you want to use for core, lightweight, and code completion features. Also, define the model context window size if needed.

    Local models setup
    • Core features – this model will be used for in-editor code generation, commit message generation, as a default model in chat, and other core features.

    • Instant helpers – this model will be used for lightweight features, such as chat context collection, chat title generation, and name suggestions.

    • Completion model – this model will be used for the inline code completion feature in the editor. Works only with Fill-in-the-Middle (FIM) models.

    • Context window – allows you to configure the model context window for local models. A larger window lets the model handle more context in a request, while a smaller one reduces memory usage and may improve performance. This helps balance context length with system resources. The default value is 64 000 tokens.

  3. Click Apply to save changes.

As a result, AI Assistant uses the assigned models when the corresponding feature is triggered.
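As a rough illustration of the Context window trade-off, a common heuristic estimates about four characters per token. The check below is an approximation for sizing prompts, not AI Assistant's actual tokenizer:

```python
def fits_context(text: str, window_tokens: int = 64_000, reserve: int = 2_000) -> bool:
    """Rough check whether a prompt fits a model's context window.

    Uses the common ~4 characters/token heuristic (real tokenizers vary)
    and reserves some tokens for the model's response.
    """
    estimated_tokens = len(text) // 4
    return estimated_tokens + reserve <= window_tokens

print(fits_context("x" * 200_000))  # ~50,000 tokens plus reserve: fits
print(fits_context("x" * 300_000))  # ~75,000 tokens: exceeds the window
```

A larger window admits longer prompts but raises memory use on your machine, which is the balance the setting lets you tune.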

List of assigned and fallback models

This section lists AI Assistant features and the models they require, helping you assess compatibility with models from third-party providers.

Core features

In-editor code generation – Editor

  • Anthropic models: Claude 4.5 Sonnet

  • Google models: Gemini 2.5 Pro

  • OpenAI models: GPT-4o

  • Alibaba models (Mainland China only): Qwen Max

Generate documentation – Editor

  • Google models: Gemini 2.5 Pro

  • OpenAI models: GPT-4o

  • Alibaba models (Mainland China only): Qwen Max

Generate tests – Editor

  • Google models: Gemini 2.5 Pro

  • OpenAI models: GPT-4o

  • Alibaba models (Mainland China only): Qwen Max

Fix with AI (only in RustRover) – Editor

  • Anthropic models: Claude 3.7 Sonnet

Resolve Git conflicts with AI – VCS

  • Google models: Gemini 2.5 Pro

  • OpenAI models: GPT-4o

Perform Self-Review with AI – VCS

  • Anthropic models: Claude 3.7 Sonnet

  • OpenAI models: GPT-4o, GPT-4o mini

Instant helpers

Apply a suggestion to the current file – AI Chat

  • Google models: Gemini 2.0 Flash, Gemini 2.5 Flash

  • OpenAI models: GPT-4o, GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

File name generation – AI Chat

  • Google models: Gemini 2.5 Flash

  • OpenAI models: GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

Chat context collection – AI Chat

  • Google models: Gemini 2.0 Flash, Gemini 2.5 Flash

  • OpenAI models: GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

Chat title generation – AI Chat

  • Google models: Gemini 2.5 Flash

  • OpenAI models: GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

Name suggestions – Editor

  • Google models: Gemini 2.5 Flash

  • OpenAI models: GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

Completion model

Code completion – Editor, AI Chat, Commit message

  • JetBrains models: Mellum

  • Alibaba models (Mainland China only): Alibaba Code Completion

Code completion (for AI Enterprise, if opted to use a different AI provider) – Editor, AI Chat, Commit message

  • Anthropic models: Claude 4.5 Haiku, Claude 3.5 Haiku, Claude 4.5 Sonnet, Claude 3.5 Sonnet

  • Google models: Gemini 2.5 Flash, Gemini 2.0 Flash

  • OpenAI models: GPT-5 mini, GPT-4o, GPT-4o mini
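Inline code completion relies on Fill-in-the-Middle (FIM) prompting: the model receives the code before and after the cursor and generates the missing middle. A generic sketch of how such a prompt is assembled follows; the sentinel tokens are a common convention and differ between model families, so treat them as illustrative:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Fill-in-the-Middle prompt from the code before and
    after the cursor. Sentinel token names vary by model family; the
    ones used here are illustrative, not any specific model's tokens."""
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

# The model would complete the body between the prefix and the suffix.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n")
print(prompt)
```

This is why the Completion model setting works only with FIM-capable models: a model trained solely on left-to-right completion cannot use the suffix context.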

Fallback models

For features that support fallback, this list is compared with the models available to you. If no matching model is found, the feature is unavailable.

The following models are defined as fallback models:

  • Anthropic models: Claude 4.5 Sonnet, Claude 4 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude 4.5 Haiku, Claude 3.5 Haiku

  • Google models: Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash

  • OpenAI models: GPT-4o, GPT-4o mini

  • Alibaba models (Mainland China only): Qwen Max

Activate JetBrains AI

If you are using AI Assistant without a JetBrains AI service subscription, some features may not work properly when using models from third-party AI providers.

To ensure that all features are available and work as expected, purchase and activate a JetBrains AI service subscription. An active subscription covers the features that are unavailable or do not work properly with models from third-party AI providers.

To enable your JetBrains AI subscription:

  1. Navigate to Settings | Tools | AI Assistant | Providers & API keys.

  2. In the JetBrains AI section, click Activate JetBrains AI. You will be redirected to AI Chat.

    Click Activate JetBrains AI
  3. Click Log in to JetBrains Account, enter your credentials, and wait for the login process to complete.

After you sign in with a JetBrains Account that has an active JetBrains AI subscription, you can start using AI Assistant with full functionality.

Switch to offline mode

Not available in IDE versions starting from 2025.3.1

If you want to restrict calls to remote models and use only the local ones, you can enable offline mode. In this mode, most cloud model calls are blocked, and all AI-related features rely on local models instead.

To enable offline mode:

  1. Go to Settings | Tools | AI Assistant | Models.

  2. Select your local third-party provider.

  3. In the Local models section, specify the models that you want to use for AI features.

  4. Enable the Offline mode setting.

    Enable offline mode
  5. Click Apply to save changes.

Once you have finished the setup, you can toggle offline mode on and off whenever applicable:

  1. Click the JetBrains AI widget located in the toolbar in the window header.

  2. Hover over the Offline Mode option and click Enable or Disable.

    Disable offline mode
18 February 2026