About AI Assistant
AI Assistant provides AI-powered features for software development. It can explain code, answer questions about code fragments, provide code completion suggestions, generate commit messages, and much more.
Integrated directly into JetBrains IDEs, AI Assistant supports a wide range of tasks across different stages of development, helping you write, understand, and improve code more efficiently.
IDE compatibility
AI Assistant seamlessly integrates with most JetBrains IDEs, so you can install and use it in your preferred development environment.
AI Assistant is also available in Android Studio[1] – the official IDE for Android app development, created by Google and based on IntelliJ IDEA by JetBrains.
Feature set
This section outlines the features offered by AI Assistant. The functionality is divided into categories for your convenience.
AI chat
Code completion
Explain code with AI
Find and fix problems with AI
In-editor code generation
AI in VCS integration
Generate tests
Generate documentation
Convert files to another language
Generate terminal commands
Use AI with databases
Supported LLMs
AI Assistant offers a variety of advanced cloud-based LLMs, as well as the option to use locally hosted models. This flexibility allows you to choose the most suitable model for your specific task. For example, you might want to use large models for complex codebase-related tasks, compact models for quick responses, or local models if you prefer to keep your data private.
Our suggestions
Depending on your requirements, you might want to consider using the following models:
If accuracy and low hallucination rate are critical, consider using Claude 4 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Sonnet, or Gemini 2.5 Pro.
If speed is your priority, consider using GPT-4.1 nano, GPT-4.1 mini, o4-mini, or Gemini 2.5 Flash.
For general intelligence in non-reasoning tasks, use GPT-4.1, GPT-4o, Claude 3.5 Haiku, or Gemini 2.5 Pro.
If you need general intelligence with strong reasoning capabilities, try Claude 4 Sonnet, Claude 3.7 Sonnet, o1, or o3.
Explore the sections below to find the list of supported LLMs.
OpenAI models
GPT-4.1 – a versatile large language model offering a strong combination of language fluency and reasoning, well-suited for tasks that require both accuracy and contextual understanding.
GPT-4.1 mini – a lightweight variant of GPT-4.1 optimized for faster responses and lower resource consumption, suitable for less intensive coding tasks and quick completions.
GPT-4.1 nano – the most compact GPT-4.1 version, designed for minimal latency and resource usage while maintaining core capabilities for basic code assistance.
GPT-4o – an advanced and reliable model that offers deep understanding and lightning-fast responses.
o1 – the o1 series models are trained with reinforcement learning to handle complex reasoning. They think before responding, generating a detailed internal chain of thought to provide more accurate, logical, and well-structured answers.
o3 – a powerful reasoning model delivering reliable code completions with balanced performance and quality, ideal for everyday coding needs.
o3-mini – a small reasoning model that maintains the low cost and speed of o1‑mini while matching the coding performance of the larger o1 model.
o4-mini – a smaller, faster version of the o4 model, focused on rapid response times and lower system demands for lightweight code assistance.
For more information on OpenAI models, see OpenAI's documentation.
Google models
Gemini 2.5 Pro (Experimental) – an advanced model designed for deep reasoning across complex code. It helps with writing, refactoring, and understanding code, making it ideal for large-scale development tasks.
Gemini 2.5 Flash (Experimental) – an experimental model focused on delivering rapid and contextually relevant code completions.
Gemini 2.0 Flash – a high-speed, low-latency model optimized for efficiency and performance.
For more information on Google models, see Google's documentation.
Anthropic models
Claude 4 Sonnet – a balanced and capable model from Anthropic, offering strong reasoning and code generation with improved performance and response speed, suitable for a wide range of development tasks.
Claude 3.7 Sonnet – a reliable model for comprehensive software development. Combines fast performance with strong problem-solving capabilities, making it suitable for tasks like writing, refactoring, and understanding complex code.
Claude 3.5 Sonnet – a versatile LLM for coding, code migration, bug fixes, refactoring, and translation. It offers deep code understanding, along with strong problem-solving capabilities.
Claude 3.5 Haiku – a fast, cost-effective LLM that excels in real-time coding, chatbot development, data extraction, and content moderation.
For more information on Anthropic models, see Anthropic's documentation.
Local models
AI Assistant supports a selection of models available through Ollama and LM Studio. These models are optimized for local use, enabling powerful AI capabilities without the need for cloud access.
For more information on available models, see Ollama and LM Studio.
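As an illustration of how a locally hosted model can be reached, the sketch below builds a request for Ollama's local HTTP API, which by default listens on port 11434. This is a minimal sketch, not AI Assistant's own integration code: the model name ("codellama") is an illustrative assumption, and actually sending the request requires a running Ollama server with that model pulled.

```python
import json

# Ollama serves a local HTTP API on http://localhost:11434 by default.
# This sketch only builds the JSON body for the /api/generate endpoint;
# sending it requires a running Ollama server and a locally pulled model.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> str:
    """Return the JSON body for a single, non-streaming generate call."""
    return json.dumps({
        "model": model,      # illustrative model name, must be pulled locally
        "prompt": prompt,
        "stream": False,     # ask for one complete response instead of a stream
    })

body = build_request("codellama", "Explain this function: def add(a, b): return a + b")
print(body)
```

Because the model runs entirely on your machine, no prompt data leaves it, which is the main appeal of the local-model option described above.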