
Models

Use this page to set up custom models for AI Assistant features and to enable offline mode.

(Screenshot: AI Assistant models settings)

Third-party AI providers

Provider: Select the third-party AI provider (LM Studio, Ollama, or an OpenAI-compatible service such as llama.cpp or LiteLLM) whose custom models you want to use.

URL: Specify the URL at which the third-party provider can be accessed. To check whether the connection is established, click Test Connection.

For additional information, refer to Connect to a third-party provider.
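Test Connection performs this check from inside the IDE. If you want to verify an endpoint yourself, OpenAI-compatible servers (including LM Studio's local server and Ollama's OpenAI-compatible API) typically expose a /v1/models endpoint that lists the available models. The sketch below is a rough stand-in for such a check, not AI Assistant's actual implementation; the base URL assumes LM Studio's default local address.

```python
# Minimal sketch of a connectivity check against a local OpenAI-compatible
# server. The base URL below assumes LM Studio's default local address;
# for Ollama the default is http://localhost:11434. Adjust it to match the
# URL field above.
import json
import urllib.request

BASE_URL = "http://localhost:1234"  # assumption: LM Studio default port

def list_models(base_url: str) -> list[str]:
    """Query the /v1/models endpoint and return the available model IDs."""
    with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload.get("data", [])]

if __name__ == "__main__":
    try:
        print("Connection established. Models:", list_models(BASE_URL))
    except OSError as err:
        print("Connection failed:", err)
```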

Local models

Core features: Select the custom model to use for in-editor code generation, commit message generation, chat responses, and other core features.

Instant helpers: Select the custom model to use for lightweight features, such as name suggestions.

Completion model: Select the custom model to use for inline code completion.

Context window: Specify the size of the model context window for local models. This setting determines how much context the model can process at once. A larger window allows more context, while a smaller one reduces memory usage and may improve performance.

By default, the context window is set to 64,000 tokens.

For additional information, refer to Configure the model context window.
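To get a feel for what a 64,000-token window holds, you can approximate token counts with the common rule of thumb of roughly four characters per token for English text and code. The heuristic and the sample input below are illustrative assumptions, not the tokenizer your local model actually uses:

```python
# Rough sketch: estimate whether some context fits the configured window.
# The ~4 characters per token figure is a common rule of thumb, not the
# tokenizer that AI Assistant or your local model actually applies.
CONTEXT_WINDOW = 64_000   # the default context window size, in tokens
CHARS_PER_TOKEN = 4       # assumption: rough average for English/code text

def estimated_tokens(text: str) -> int:
    """Approximate the token count of a piece of text."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(text: str, reserved_for_reply: int = 1_000) -> bool:
    """Check whether the text, plus room for the model's reply, fits."""
    return estimated_tokens(text) + reserved_for_reply <= CONTEXT_WINDOW

context = "def add(a, b):\n    return a + b\n" * 2_000  # stand-in input
print(f"~{estimated_tokens(context)} tokens; fits: {fits_in_window(context)}")
```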

Offline mode: Enable this setting to prioritize local models over cloud-based ones.

For additional information, refer to Switch to offline mode.
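The routing AI Assistant performs when offline mode is enabled is internal to the IDE, but the idea of prioritizing local models can be pictured as a preference order with fallback. The sketch below is purely illustrative, and both provider callables are hypothetical stand-ins:

```python
# Illustration only: the "prefer local, fall back to cloud" idea behind
# offline mode. AI Assistant's actual routing is internal to the IDE, and
# both provider callables here are hypothetical stand-ins.
from typing import Callable

Provider = Callable[[str], str]

def route_request(prompt: str, local: Provider, cloud: Provider,
                  offline_mode: bool) -> str:
    """Try providers in priority order; offline mode puts the local one first."""
    order = [local, cloud] if offline_mode else [cloud, local]
    last_error = None
    for provider in order:
        try:
            return provider(prompt)
        except ConnectionError as err:  # provider unreachable; try the next one
            last_error = err
    raise RuntimeError("No model provider was reachable") from last_error
```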

For additional information, refer to Use custom models in AI Assistant features.

Last modified: 30 September 2025