AI Assistant Help

Use custom models

By default, AI Assistant provides access to a set of cloud-based models from various AI providers, but you can also configure it to use custom local models or models provided by third parties. AI Assistant supports the Bring Your Own Key approach, so if you already have an account with a specific AI provider, you can use its models in AI Assistant.

Supported third-party providers include cloud services such as Anthropic and OpenAI, other OpenAI-compatible endpoints, and local providers such as Ollama and LM Studio.

Connect to a third-party provider

To use models from a third-party provider, you need to set up a connection to it. Depending on your setup, you can either enter an API key to access the provider's cloud-based models or connect to models running locally.

Provide your own API key

Third-party providers such as Anthropic, OpenAI, or other OpenAI-compatible endpoints require an API key and, in some cases, a custom endpoint URL. Entering your key allows AI Assistant to access the provider's models using your existing account. To provide an API key:

  1. Navigate to Settings | Tools | AI Assistant | Models & API keys.

  2. In the Third-party AI providers section, select the Provider.

  3. Enter the API Key and click Test Connection to verify that the connection can be established.

    If you are configuring an OpenAI-compatible provider, specify the URL of the provider's API endpoint in addition to the API Key. Also, indicate whether the model supports calling tools configured through the Model Context Protocol (MCP) by enabling or disabling the Tool calling setting.

  4. Click Apply to save changes.
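For context, OpenAI-compatible providers serve the standard chat completions endpoint, and the API key is sent in an Authorization header. The sketch below, using a hypothetical endpoint URL, API key, and model name, shows roughly the kind of request AI Assistant can issue once the settings above are in place:

```python
import json
import urllib.request

# Hypothetical values -- substitute your provider's actual endpoint and key.
BASE_URL = "https://api.example-provider.com/v1"
API_KEY = "sk-your-key-here"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completions request.

    This mirrors the shape of the calls made once the endpoint URL
    and API key are configured.
    """
    payload = {
        "model": "example-model",  # a model name exposed by the provider
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",  # the API key you entered
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello")
print(req.full_url)  # https://api.example-provider.com/v1/chat/completions
```

Test Connection performs a similar round trip against the configured URL, which is why both the endpoint and the key must be correct for it to succeed.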

Connect to local models

Providers like Ollama and LM Studio run models on your computer. Connecting to them in AI Assistant allows you to use these models directly from your local setup.

  1. Navigate to Settings | Tools | AI Assistant | Models & API keys.

  2. In the Third-party AI providers section, select the Provider.

  3. Specify the URL where the provider can be accessed and click Test Connection to verify that the connection can be established.

  4. Click Apply to save changes.
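Local providers listen on well-known default ports, and a connection test boils down to a simple HTTP request against the local server. A minimal sketch, assuming Ollama's default port and its GET /api/tags model-listing endpoint (LM Studio instead serves an OpenAI-compatible API, by default on port 1234):

```python
import urllib.parse
import urllib.request

# Default local endpoints -- adjust if you changed the port.
OLLAMA_URL = "http://localhost:11434"       # Ollama default
LM_STUDIO_URL = "http://localhost:1234/v1"  # LM Studio default (OpenAI-compatible)

def build_ollama_ping(base_url: str) -> urllib.request.Request:
    """Build the request a connection test could send to an Ollama server.

    GET /api/tags returns the locally installed models; a 200 response
    means the server is reachable at the configured URL.
    """
    url = urllib.parse.urljoin(base_url.rstrip("/") + "/", "api/tags")
    return urllib.request.Request(url, method="GET")

req = build_ollama_ping(OLLAMA_URL)
print(req.full_url)  # http://localhost:11434/api/tags
```

If this request fails, check that the local server is running and that the URL you entered matches its configured host and port.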

Once the connection is established, models from the third-party provider become available for use. You can select them to process your requests in AI Chat or assign them to be used in AI Assistant features.

Activate JetBrains AI

AI Assistant allows you to use your existing subscriptions with AI providers instead of purchasing a JetBrains AI license. However, some AI Assistant features may not work properly with third-party models.

To ensure that all features are available and work as expected, you can purchase and activate a JetBrains AI subscription. An active subscription covers the features that are limited or unavailable with third-party models.

To enable your JetBrains AI subscription:

  1. Navigate to Settings | Tools | AI Assistant | Models & API keys.

  2. In the JetBrains AI section, click Activate JetBrains AI. You will be redirected to AI Chat.

  3. Click Log in to JetBrains Account, enter your credentials, and wait for the login process to complete.

After you sign in with a JetBrains Account that has an active JetBrains AI subscription, you can start using AI Assistant with full functionality.

Use custom models in AI Assistant features

Custom models can be assigned to AI Assistant features such as code completion, in-editor code generation, commit message generation, and more.

To configure custom models to be used in AI features:

  1. Go to Settings | Tools | AI Assistant | Models & API keys.

  2. Configure your third-party provider.

  3. In the Models Assignment section, specify the models that you want to use for core, lightweight, and code completion features. Also, define the model context window size if needed.

    • Core features – this model will be used for in-editor code generation, commit message generation, as a default model in chat, and other core features.

    • Instant helpers – this model will be used for lightweight features, such as chat context collection, chat title generation, and name suggestions.

    • Completion model – this model will be used for the inline code completion feature in the editor. It works only with Fill-in-the-Middle (FIM) models.

    • Context window – allows you to configure the model context window for local models, balancing context length against system resources: a larger window lets the model handle more context in a request, while a smaller one reduces memory usage and may improve performance. The default value is 64,000 tokens.

  4. Click Apply to save changes.
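To get a feel for the context window tradeoff, the sizing check can be sketched as follows. The ~4 characters per token ratio is only a rough heuristic that varies by model and tokenizer, and the reply budget is an illustrative assumption:

```python
def rough_token_count(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.
    Real tokenizers vary by model; use this only for ballpark sizing."""
    return max(1, len(text) // 4)

def fits_in_window(text: str, window_tokens: int = 64_000,
                   reserved_for_reply: int = 4_000) -> bool:
    """Check whether a prompt leaves room for the model's reply
    inside the configured context window."""
    return rough_token_count(text) + reserved_for_reply <= window_tokens

# A 200,000-character file is roughly 50,000 tokens -- it fits in the
# 64,000-token default with room left for the reply.
print(fits_in_window("x" * 200_000))  # True
```

If your requests routinely exceed the window, either raise the Context window setting (at the cost of memory) or send less context per request.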

17 December 2025