About AI Assistant
AI Assistant provides AI-powered features for software development. It can explain code, answer questions about code fragments, suggest code completions, generate commit messages, and much more.
Integrated directly into JetBrains IDEs, AI Assistant supports a wide range of tasks across different stages of development, helping you write, understand, and improve code more efficiently.
IDE compatibility
AI Assistant integrates seamlessly with most JetBrains IDEs, where you can install and use it across your development environments.
AI Assistant is also available in Android Studio – the official IDE for Android app development, created by Google and based on IntelliJ IDEA by JetBrains.
Feature set
This section outlines the features offered by AI Assistant, grouped into categories for your convenience.
AI chat
Code completion
Explain code with AI
Find and fix problems with AI
In-editor code generation
AI in VCS integration
Generate tests
Generate documentation
Convert files to another language
Generate terminal commands
Use AI with databases
Supported LLMs
AI Assistant offers a variety of advanced cloud-based LLMs, as well as the option to use locally hosted models. This flexibility allows you to choose the most suitable model for your specific task. For example, you might want to use large models for complex codebase-related tasks, compact models for quick responses, or local models if you prefer to keep your data private.
Our suggestions
Depending on your requirements, you might want to consider using the following models:
If accuracy and low hallucination rate are critical, consider using Gemini 2.0 Flash.
If speed is your priority, consider using GPT-4o-mini, Gemini 1.5 Flash, or Gemini 2.0 Flash.
For general intelligence in non-reasoning tasks, use GPT-4o, Claude 3.5 Sonnet, Claude 3.5 Haiku, or Gemini 1.5 Pro.
If you need general intelligence with strong reasoning capabilities, try Claude 3.7 Sonnet, o1, o1-mini, or o3-mini.
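As an illustrative sketch only, the suggestions above can be summarized as a simple priority-to-models lookup. The priority labels and the helper function are hypothetical and are not part of AI Assistant's API; the model names are the ones listed on this page.

```python
# Illustrative lookup mirroring the model suggestions above.
# The priority labels and helper are hypothetical, not an AI Assistant API.
SUGGESTED_MODELS = {
    "accuracy": ["Gemini 2.0 Flash"],
    "speed": ["GPT-4o-mini", "Gemini 1.5 Flash", "Gemini 2.0 Flash"],
    "general": ["GPT-4o", "Claude 3.5 Sonnet", "Claude 3.5 Haiku", "Gemini 1.5 Pro"],
    "reasoning": ["Claude 3.7 Sonnet", "o1", "o1-mini", "o3-mini"],
}

def suggest_models(priority: str) -> list[str]:
    """Return the models this page suggests for a given priority.

    Unknown priorities yield an empty list.
    """
    return SUGGESTED_MODELS.get(priority, [])
```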
Explore the sections below to find the list of supported LLMs.
OpenAI models
GPT-4o – OpenAI's most advanced and reliable GPT model. GPT-4o offers deep understanding and lightning-fast responses.
GPT-4o mini – a smaller model that distills the power of GPT-4o into a compact, low-latency package.
o1 – the o1 series models are trained with reinforcement learning to handle complex reasoning. They think before responding, generating a detailed internal chain of thought to provide more accurate, logical, and well-structured answers.
o1-mini – a smaller, cost-effective reasoning model that nearly matches the coding performance of the full o1 model.
o3-mini – the latest small reasoning model, o3-mini maintains the low cost and speed of o1‑mini while matching the larger o1 model’s coding performance and providing faster responses.
For more information on OpenAI models, see OpenAI's documentation.
Google models
Gemini 2.5 Pro – an advanced AI model designed for deep reasoning across complex code. It helps with writing, refactoring, and understanding code, making it ideal for large-scale development tasks.
Gemini 2.0 Flash – a high-speed, low-latency model optimized for efficiency and performance. It is ideal for powering dynamic, agent-driven experiences.
Gemini 1.5 Flash – a lightweight AI model, optimized for tasks where speed and efficiency matter most. Gemini 1.5 Flash delivers high-quality performance on most tasks, rivaling larger models while being significantly more cost-efficient and responsive.
Gemini 1.5 Pro – a powerful AI model built for deep reasoning across large-scale data. It excels at analyzing, classifying, and summarizing vast amounts of content.
For more information on Google models, see Google's documentation.
Anthropic models
Claude 3.7 Sonnet – Anthropic’s most advanced coding model. Balancing speed and quality, it excels at full-cycle software development with agent-driven coding, deep problem-solving, and intelligent automation.
Claude 3.5 Sonnet – a versatile LLM for coding, code migration, bug fixes, refactoring, and translation. It supports agent-driven workflows and offers deep code understanding, along with strong problem-solving capabilities.
Claude 3.5 Haiku – a fast, cost-effective LLM that excels in real-time coding, chatbot development, data extraction, and content moderation.
For more information on Anthropic models, see Anthropic's documentation.
Local models
AI Assistant supports a selection of models available through Ollama and LM Studio. These models are optimized for local use, enabling powerful AI capabilities without the need for cloud access.
For more information on available models, see Ollama and LM Studio.
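As a rough sketch of the local setup (assuming Ollama is already installed; the model name below is just an example, and the IDE-side connection settings are configured separately in AI Assistant):

```shell
# Download an example model to the local machine.
ollama pull llama3.1

# Start the local Ollama server so tooling can connect to it.
# By default it listens on http://localhost:11434.
ollama serve
```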