Qodana 2025.2 Help

Requirements

Qodana license

Reach out to our support team to request a license that can be used by Qodana Self-Hosted.

System and network requirements

Dockerized version

The requirements below are grouped by category; a quick host check is sketched after the list.

  • CPU architecture: x86-64 (AMD64), ARM64
  • Number of cores: 4 or more
  • RAM: 16 GB or more
  • Disk space (HDD): 100 GB or more
  • Operating system: any Linux distribution that supports a compatible CPU architecture
  • Docker version: 20.10.23 or later
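
As a minimal sketch of checking a host against these requirements (assuming a typical Linux host with coreutils and Docker already installed; the disk path to check is an assumption and may differ in your setup):

uname -m          # expect x86_64 or aarch64
nproc             # expect 4 or more cores
free -h           # expect 16 GB of RAM or more
df -h /           # expect 100 GB or more free on the volume Qodana will use
docker --version  # expect Docker 20.10.23 or later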

qodana-installer-cli references container images from the quay.io Docker registry. Make sure that quay.io is a trusted address in your network. For offline installations, mirror the tags available in the https://quay.io/repository/jetbrains/qodana-installer-cli-dependencies Docker registry to an internal trusted Docker registry.
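
As a rough sketch of mirroring a single tag with the Docker CLI (the tag and the internal registry host below are placeholders, not values from this page):

# Pull the image from quay.io, retag it for the internal registry, and push it there.
docker pull quay.io/jetbrains/qodana-installer-cli-dependencies:<tag>
docker tag quay.io/jetbrains/qodana-installer-cli-dependencies:<tag> \
  registry.example.internal/jetbrains/qodana-installer-cli-dependencies:<tag>
docker push registry.example.internal/jetbrains/qodana-installer-cli-dependencies:<tag>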

For dynamic configuration of usage statistics, download the configuration from the JetBrains website. Also, the following FQDNs must be accessible if you wish to share analytics with the Qodana team (a quick reachability check is sketched after the list):

  • https://analytics.services.jetbrains.com

  • https://resources.jetbrains.com
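
A reachability check could look like the following sketch (assuming curl is available on the host; any HTTP status code in the output means the endpoint is reachable, while a timeout suggests it is blocked):

for host in analytics.services.jetbrains.com resources.jetbrains.com; do
  # Print the HTTP status code returned by each endpoint.
  curl -sS -o /dev/null -w "%{http_code}  https://$host\n" --max-time 10 "https://$host"
done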

Qodana Self-Hosted supports MinIO starting from version RELEASE.2025-01-20T14-49-07Z.

Kubernetes version

The requirements below are grouped by category.

  • Demo/PoC (small team): 6-8 CPU cores, 8 GB RAM, 40-60 GB storage. A single-node cluster is possible, but not HA.
  • Small prod: 8-16 CPU cores, 16-32 GB RAM, 100-200 GB storage. 3+ nodes; HA is only possible with multiple nodes.
  • Medium-to-large prod: 16+ CPU cores, 32+ GB RAM, 200+ GB storage. Scale per workload; allow more for large codebases.

In a single-node configuration, limited resources may not be sufficient for the Kubernetes scheduler and kubelet to handle deployments that mutate the configuration state of the Self-Hosted (SH) pods. In that case, deploy in two steps: first, apply the configuration change with the replicaCount of the SH pods set to 0 and wait for the deployment to succeed; then, in the second step, change the replicaCount from 0 to 1. Alternatively, provision a multi-node Kubernetes cluster so that pods are distributed across nodes and are not constrained by the limits of a single node.
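
As a rough illustration of the two-step rollout with Helm (the release name, chart reference, and the exact values key for the replica count are assumptions; check your chart's values for the actual key):

# Step 1: apply the configuration change with the SH pods scaled down to 0 and wait for it to succeed.
helm upgrade qodana-self-hosted <chart-reference> -f values.yaml --set replicaCount=0 --wait

# Step 2: once the first rollout has succeeded, scale the SH pods back up to 1.
helm upgrade qodana-self-hosted <chart-reference> -f values.yaml --set replicaCount=1 --wait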

  • Operating system: any Linux distribution that satisfies the Kubernetes requirements
  • Container runtime: any container runtime that satisfies the Kubernetes requirements
  • Ingress controller: any ingress controller that is deployed in the Kubernetes cluster as a cluster service and can resolve service URLs
  • Storage controller: any storage controller that allows volumes to migrate across nodes based on pod locations

For any ingress controller, you must configure redirect behavior, client identity propagation, and size/buffering limits.

First, decide where TLS terminates and who performs HTTP to HTTPS redirects; if your edge LB/CDN already enforces HTTPS, disable redirects at the ingress to avoid loops and double hops.

Second, ensure your apps see the real client IP, scheme, and host: choose the appropriate mechanism your controller supports (X-Forwarded-* headers, Forwarded header, or PROXY protocol) and restrict trust to known upstream CIDRs so users can’t spoof IPs.

Third, right-size limits for request headers, request bodies, and response buffering to match your workloads (SSO cookies, many Set-Cookie headers, file uploads, streaming).

Each controller exposes different knobs and names for these concepts, but they map to the same concerns: header buffer sizes, large-header buffers, max body size, proxy buffer sizes, forwarded header handling, and optional proxy protocol. Review your controller’s documentation for the exact settings, mirror the intent of the examples shown for NGINX, and validate under load tests to confirm no 400/413 responses, no misreported client IPs, and consistent redirect behavior.

Below is an example configuration for the Kubernetes NGINX ingress controller.

# Disabling avoids double redirects, redirect loops, and unnecessary hops during health checks or internal service calls over HTTP. Services with external URLs expose the same URL internally for intra-cluster communication.
force-ssl-redirect: "false"
ssl-redirect: "false"

# Supports larger-than-default request lines and headers (e.g., long cookies, SSO tokens, or complex auth headers) without immediately resorting to the "large" buffers. Reduces 400 Bad Request (Request header too large) errors at modest memory cost.
client-header-buffer-size: "32k"

# Accommodates bursts of large headers (multiple cookies, SAML/OIDC headers, complex reverse-proxy chains). Prevents header truncation and 494/400 errors under peak conditions.
large-client-header-buffers: "4 32k"

# Handles large response headers (e.g., many Set-Cookie directives or big metadata) without spilling to disk or triggering buffer-related errors. Useful with SSO gateways or multi-cookie apps.
proxy-buffer-size: "128k"

# Provides 1 MB of in-memory buffering per connection for smoother delivery of medium responses and to absorb backend send bursts. Reduces client-facing latency jitter and backend backpressure.
proxy-buffers: "4 256k"

# Balances memory usage and throughput. Prevents excessive memory pressure while still allowing efficient streaming to slower clients.
proxy-busy-buffers-size: "256k"

# Supports larger uploads (files, form posts, GraphQL multipart, large JSON) without 413 Request Entity Too Large. Choose a value aligned with app limits and upstream timeouts; higher values increase memory/disk usage risk if many concurrent uploads occur.
proxy-body-size: "100m"

# Necessary when TLS terminates upstream (LB/CDN) so apps see the correct scheme (https), host, and client IP. Prevents generating incorrect redirects (http instead of https) and preserves accurate logs and security rules.
use-forwarded-headers: "true"

# Matches the de-facto standard used by most LBs and CDNs. Ensures consistent client IP extraction across components.
forwarded-for-header: "X-Forwarded-For"

# Use only if your external load balancer is explicitly configured for PROXY protocol and your entire chain supports it. Keeping it false avoids handshake mismatches and connection failures. If you rely on HTTP headers instead, this should remain disabled.
use-proxy-protocol: "false"
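
If you run the community ingress-nginx controller installed via its Helm chart, these keys typically belong in the controller's ConfigMap; one common way to set them (an assumption about your setup, not a requirement from this page) is through the chart's controller.config values:

# Write a values file that forwards the ConfigMap keys shown above.
cat > ingress-nginx-values.yaml <<'EOF'
controller:
  config:
    force-ssl-redirect: "false"
    ssl-redirect: "false"
    use-forwarded-headers: "true"
    forwarded-for-header: "X-Forwarded-For"
    # add the remaining keys from the example above in the same way
EOF

# Install or upgrade the controller with those values.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f ingress-nginx-values.yaml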

  • helm: the Helm CLI installed, latest version
  • kubectl: the Kubernetes CLI (kubectl) installed and configured
  • openssl: the OpenSSL CLI, used for generating secrets (see the sketch below)
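
A minimal sketch of the kind of secret generation openssl is listed for (the secret name and key below are illustrative assumptions, not values from this page):

# Generate a random 32-byte, base64-encoded value.
openssl rand -base64 32

# Optionally store such a value directly in a Kubernetes secret.
kubectl create secret generic qodana-example-secret \
  --from-literal=token="$(openssl rand -base64 32)"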

DNS top-level domain: Allocate a DNS zone for Qodana Self-Hosted so that it can identify assets on the network starting from the zone name. Example: qodana.local

Base URLs: Qodana Self-Hosted is composed of several components, and almost every component requires a dedicated base URL. Example, given that the top-level domain is qodana.local (a DNS resolution check is sketched after the list):

  • UI: qodana.local

  • Backend: api.qodana.local

  • Linters API: lintersapi.qodana.local

  • Built-in file storage: files.qodana.local

  • Built-in SSO provider: login.qodana.local

  • Built-in ingress controller: ingress.qodana.local
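
As a quick sanity check (assuming the example qodana.local zone above and that dig is available; pointing each name at your ingress entry point or load balancer is an assumption about a typical setup, not a requirement from this page):

# Each base URL should resolve from the hosts that need to reach Qodana Self-Hosted.
for host in qodana.local api.qodana.local lintersapi.qodana.local files.qodana.local login.qodana.local ingress.qodana.local; do
  dig +short "$host"
done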

Docker registry: The Helm chart and Docker images are hosted at https://jetbrains.team

Other URLs

JetBrains resources:

  • analytics.services.jetbrains.com

  • vulnerability-search.jetbrains.com

  • resources.jetbrains.com

  • www.jetbrains.com

  • account.jetbrains.com

Third-party resources:

  • dl.min.io

Qodana Self-Hosted supports Amazon S3, or MinIO starting from version RELEASE.2025-01-20T14-49-07Z.
