Teams shipping software at velocity need automated feedback on code quality, security hotspots, and test coverage before changes reach production. SonarQube is an open-source platform that continuously inspects code for bugs, vulnerabilities, and technical debt across more than twenty languages. When self-hosted, it becomes a private governance layer that integrates with CI/CD pipelines, enforces quality gates, and keeps source-code analysis inside your perimeter.
This guide walks through a production-hardened deployment of SonarQube Community Edition on Ubuntu using Docker Compose, Caddy for TLS termination, and PostgreSQL as the external database. The result is a resilient, observable instance you can back up, restore, and scale as your repositories grow.
Architecture and flow overview
SonarQube is a Java application that bundles a web server, a rules engine, and an embedded Elasticsearch search node. In production, the official image expects an external PostgreSQL database rather than the default embedded H2 database. Caddy sits in front as a reverse proxy, handling TLS with automatic certificate renewal and forwarding traffic to the SonarQube container.
Data flows as follows: developers push code to a repository, a CI runner triggers a SonarScanner analysis, the scanner sends results to SonarQube over HTTPS, and SonarQube persists project configuration and issue data in PostgreSQL while keeping search indexes and plugins in local volumes. The embedded Elasticsearch node runs inside the same container but stores its data on a dedicated volume.
Prerequisites
- Ubuntu 22.04 LTS or 24.04 LTS server with at least 4 vCPU, 8 GB RAM, and 20 GB SSD.
- Docker and Docker Compose installed.
- A DNS A record pointing to the server (for example, sonar.example.com).
- Host kernel parameter vm.max_map_count set to at least 524288 for Elasticsearch.
- Firewall ports 22, 80, and 443 open.
Apply the Elasticsearch prerequisite before bringing up containers:
sudo sysctl -w vm.max_map_count=524288
echo 'vm.max_map_count=524288' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Step-by-step deployment
Create a dedicated directory for the stack, generate strong secrets, and keep environment files readable only by the owner.
sudo mkdir -p /opt/sonarqube/{data,logs,extensions,backups,postgres,caddy/data,caddy/config}
sudo chown -R $USER:$USER /opt/sonarqube
cd /opt/sonarqube
touch .env
chmod 600 .env
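To confirm the permissions took effect, a quick check (assuming GNU coreutils stat) is:

```shell
# Show the octal mode and owner of the secrets file; expect "600 <your-user>".
stat -c '%a %U' /opt/sonarqube/.env
```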
Populate /opt/sonarqube/.env with production values. Replace the domain and generate a random password. Note that SONAR_JDBC_PASSWORD and POSTGRES_PASSWORD must be identical: they describe the same database account, once from the application side and once from the database side.
DOMAIN=https://sonar.example.com
SONAR_JDBC_URL=jdbc:postgresql://db:5432/sonar
SONAR_JDBC_USERNAME=sonar
SONAR_JDBC_PASSWORD=ReplaceWithRandom32CharString
POSTGRES_DB=sonar
POSTGRES_USER=sonar
POSTGRES_PASSWORD=ReplaceWithRandom32CharString
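One way to produce the random value is with openssl, which most Ubuntu servers already have: 16 random bytes encode to 32 hexadecimal characters. A sketch:

```shell
# Generate one shared secret; SONAR_JDBC_PASSWORD and POSTGRES_PASSWORD
# must hold the same value, since both refer to the same database account.
DB_PASS=$(openssl rand -hex 16)
echo "SONAR_JDBC_PASSWORD=$DB_PASS"
echo "POSTGRES_PASSWORD=$DB_PASS"
```

Paste the output into .env rather than exporting it in your shell history-sensitive profile.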
Define the Docker Compose stack. PostgreSQL is isolated on an internal bridge network, and SonarQube depends on the database being healthy before it starts:
services:
  db:
    image: postgres:16-alpine
    container_name: sq-db
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /opt/sonarqube/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks:
      - private

  sonarqube:
    image: sonarqube:community
    container_name: sonarqube
    restart: unless-stopped
    env_file: .env
    environment:
      SONAR_JDBC_URL: ${SONAR_JDBC_URL}
      SONAR_JDBC_USERNAME: ${SONAR_JDBC_USERNAME}
      SONAR_JDBC_PASSWORD: ${SONAR_JDBC_PASSWORD}
      SONAR_WEB_JAVAOPTS: "-Xmx2g -Xms1g"
      SONAR_CE_JAVAOPTS: "-Xmx1g -Xms512m"
      SONAR_SEARCH_JAVAOPTS: "-Xmx1g -Xms512m"
    volumes:
      - /opt/sonarqube/data:/opt/sonarqube/data
      - /opt/sonarqube/logs:/opt/sonarqube/logs
      - /opt/sonarqube/extensions:/opt/sonarqube/extensions
    depends_on:
      db:
        condition: service_healthy
    networks:
      - private

  caddy:
    image: caddy:2
    container_name: sq-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/sonarqube/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /opt/sonarqube/caddy/data:/data
      - /opt/sonarqube/caddy/config:/config
    depends_on:
      - sonarqube
    networks:
      - private

networks:
  private:
    driver: bridge
Create the Caddyfile. Caddy handles automatic HTTPS and proxies requests to SonarQube while preserving the original host header:
sonar.example.com {
    encode zstd gzip

    reverse_proxy sonarqube:9000 {
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-For {remote_host}
        header_up Host {host}
    }

    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
Launch the stack and validate container health before creating the first admin account:
cd /opt/sonarqube
docker compose up -d
docker compose ps
docker compose logs -f --tail=100 sonarqube db
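Instead of watching logs by hand, readiness can be polled from a script. A sketch using SonarQube's system status API, which returns JSON such as {"status":"UP"} once startup completes (adjust the URL to your domain):

```shell
# Poll the status endpoint until SonarQube reports UP.
until curl -fsS https://sonar.example.com/api/system/status | grep -q '"status":"UP"'; do
  echo "Waiting for SonarQube to start..."
  sleep 10
done
echo "SonarQube is up."
```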
Once the logs show that SonarQube is up, open https://sonar.example.com and log in with the default credentials admin / admin. Change the password immediately, then create dedicated service accounts with their own analysis tokens for CI integrations instead of reusing the admin account.
Configuration and secrets handling best practices
Production discipline starts with minimizing secret exposure. Keep the .env file at mode 600, never commit it to version control, and rotate database and analysis tokens on a quarterly cadence. When possible, inject secrets through a secret manager or CI vault rather than leaving them on disk.
Because SonarQube includes an embedded Elasticsearch node, ensure the host kernel parameter remains set after reboots, and keep swap limited or disabled: heavy swapping severely degrades Elasticsearch and can destabilize the node. Back up all three volumes (data, logs, extensions) along with the PostgreSQL database so plugins and custom rules are not lost during recovery.
Enable host firewall rules that allow only SSH from restricted sources and public HTTP/HTTPS traffic. If your environment supports a private network, place the database and application containers on an isolated bridge with no external routes except through Caddy.
Backup and restore
Create a backup script that dumps the PostgreSQL database, archives the SonarQube volumes, and prunes old backups. Keep archives off-host when possible.
#!/usr/bin/env bash
set -euo pipefail
cd /opt/sonarqube

# Load database credentials from .env so the shell variables below resolve.
set -a
source /opt/sonarqube/.env
set +a

STAMP=$(date +%F-%H%M)
OUT=/opt/sonarqube/backups/sq-$STAMP
mkdir -p "$OUT"

# Logical dump of the SonarQube database.
docker compose exec -T db pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" > "$OUT/db.sql"

# Archive the data and extensions volumes (logs are disposable and skipped).
tar -czf "$OUT/data.tar.gz" -C /opt/sonarqube data extensions

# Prune backups older than 14 days.
find /opt/sonarqube/backups -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
echo "Backup completed: $OUT"
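The matching restore procedure is worth scripting and rehearsing before it is needed. A sketch, where <stamp> is the timestamp of the backup to restore and the .env file has been sourced so the credential variables resolve:

```shell
# Stop the application so nothing writes during the restore.
cd /opt/sonarqube
docker compose stop sonarqube

# Reload the logical dump into PostgreSQL.
docker compose exec -T db psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < backups/sq-<stamp>/db.sql

# Unpack the data and extensions volumes, then restart the application.
tar -xzf backups/sq-<stamp>/data.tar.gz -C /opt/sonarqube
docker compose start sonarqube
```

For a clean restore, load the dump into a fresh or emptied database: a plain SQL dump does not drop objects that already exist. Scheduling the backup script from cron (for example, a nightly run) keeps the archives current.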
Verification checklist
- HTTPS certificate is valid and auto-renewing (verify with browser inspection or openssl s_client).
- docker compose ps shows all services healthy with restart policy set to unless-stopped.
- Public ports expose only 80/443; PostgreSQL is not reachable externally.
- Default admin password has been changed and a new admin token has been generated.
- A test project analysis from a local scanner completes successfully and results appear on the dashboard.
- Backup script runs on schedule and the latest archive is present and restorable.
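The certificate check from the first item can be scripted with openssl client tools. A sketch against the example domain from this guide:

```shell
# Print the issuer and validity window of the certificate Caddy is serving.
echo | openssl s_client -connect sonar.example.com:443 -servername sonar.example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```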
Common issues and fixes
1) SonarQube fails to start with max_map_count errors
The embedded Elasticsearch node refuses to start when the host limit is too low. Run the sysctl command from the prerequisites section, persist it in /etc/sysctl.conf, and recreate the container (docker compose up -d --force-recreate) if it still reports the error.
2) Database connection refused on first start
Usually caused by PostgreSQL still initializing or incorrect credentials in .env. Confirm the health check is passing, verify the password matches between SonarQube and PostgreSQL environment variables, and ensure the database name exists.
3) Analysis uploads return 404 or 401
Check that the project token is valid and that the scanner is pointing to the correct server URL including the HTTPS scheme. If using a reverse proxy, confirm that SONAR_WEB_CONTEXT is not set to a conflicting path.
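When diagnosing these errors, it helps to run the scanner once with everything spelled out explicitly. A sketch of the key properties, where the project key and token are placeholders (sonar.token supersedes the deprecated sonar.login property in recent scanner versions):

```shell
sonar-scanner \
  -Dsonar.host.url=https://sonar.example.com \
  -Dsonar.projectKey=my-project \
  -Dsonar.token=REPLACE_WITH_PROJECT_TOKEN
```

A 401 with a correct URL usually means the token is wrong or revoked; a 404 usually means the URL or context path is wrong.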
4) Performance degrades with large codebases
Increase the JVM heap sizes for the web server, compute engine, and search node. Monitor container resource usage and scale the host CPU and memory before adjusting timeouts. Keep Elasticsearch indices healthy by archiving old projects.
5) Plugins disappear after recreating the container
Plugins must be stored on a persistent volume mapped to /opt/sonarqube/extensions. If the volume is ephemeral or the mapping is missing, every container restart will revert to the base image plugins.
FAQ
Should we use the Community Edition or upgrade to Developer Edition?
Community Edition covers bug detection, code smells, and basic security hotspots across many languages. Upgrade to Developer or Enterprise Edition only when you need branch and pull-request analysis, deeper security rules, or portfolio-level reporting.
How often should we rotate analysis tokens?
Rotate project and user analysis tokens at least quarterly, and immediately after any team member with access leaves or a token is suspected of exposure. Use project-level tokens rather than personal tokens in CI pipelines.
Can we run SonarQube behind Cloudflare or another CDN?
Yes, but preserve the original client IP and avoid caching the API endpoints used by scanners. Test that analysis uploads complete without size or timeout errors before declaring production readiness.
What is a practical backup frequency for a small team?
Daily automated backups are a safe baseline. Capture both the PostgreSQL dump and the three SonarQube volumes. Verify restores in a staging clone monthly.
How do we handle break-glass admin access safely?
Document a sealed recovery procedure, use dual-approval where possible, and rotate the admin password immediately after any emergency access event. Audit all admin actions through the built-in administration log.
Do we need separate staging and production instances?
Yes. A lightweight staging instance allows you to validate plugin upgrades, custom quality profiles, and scanner compatibility before applying changes to the production instance that your teams depend on.
Related internal guides
- Deploy Vaultwarden with Docker Compose + Caddy + PostgreSQL
- Deploy n8n with Docker Compose + Caddy + PostgreSQL + Redis
- Deploy ToolJet with Docker Compose + Caddy + PostgreSQL + Redis
Talk to us
If you want this implemented with hardened defaults, CI/CD integration, and tested recovery playbooks, our team can help.