
Production Guide: Deploy Sentry with Docker Compose + Caddy + PostgreSQL on Ubuntu

A production-focused, operator-friendly Sentry deployment with TLS, persistence, backup discipline, and practical incident-response integration.

Introduction: real-world use case

If your engineering team is shipping services quickly, you need an error-monitoring system that catches regressions before customers open support tickets. Many teams start with logs only, then discover that logs are too noisy during incidents and too slow for root-cause triage. A production Sentry deployment gives you actionable stack traces, release-level regression detection, source-map support, and alert routing that maps directly to on-call operations. In this guide, we deploy self-hosted Sentry on Ubuntu using Docker Compose, Caddy for HTTPS, and PostgreSQL for persistent metadata storage. This pattern works well for teams that want strong control over data residency and predictable infrastructure cost without moving to a full Kubernetes footprint.

The target outcome is practical: you will have a hardened Sentry stack with TLS, persistent volumes, baseline firewall policy, routine backup points, and verification checks that prove events can be ingested end-to-end. We also include operational habits that prevent common production failures, such as queue starvation, bad secret rotation, and disk pressure caused by unbounded event retention. The guide assumes you are comfortable with Linux shell basics and can manage DNS records for a public domain.

Architecture and flow overview

The deployment uses a single Ubuntu host for app services and persistence. Docker Compose orchestrates Sentry services (web, worker, cron) with required dependencies (PostgreSQL, Redis, ClickHouse, Kafka, Zookeeper, Snuba, Relay, Symbolicator, and Nginx inside the Sentry bundle where applicable). Caddy terminates TLS and reverse-proxies public traffic to the Sentry web endpoint. PostgreSQL volume snapshots provide relational data protection, while Docker volumes preserve service state across upgrades.

Event flow is straightforward: your application SDK sends events to the public Sentry DSN endpoint, Relay validates/enriches payloads, Snuba/ClickHouse indexes event data, and Sentry web exposes issue triage UI. Workers process background jobs for notifications, rule execution, and integrations. This architecture gives you clean separation between ingress, processing, storage, and presentation, which is why it remains reliable under moderate production load.

Client SDKs -> Caddy (TLS) -> Sentry Web/Relay
                         \-> Redis (queues/cache)
                         \-> PostgreSQL (metadata/auth)
                         \-> Kafka + Zookeeper + ClickHouse (event pipeline)
                         \-> Workers/Cron (alerts, notifications, jobs)


Prerequisites

  • Ubuntu 22.04/24.04 server with at least 8 vCPU, 16 GB RAM, and fast SSD storage (100+ GB recommended for retention headroom).
  • A domain such as sentry.example.com with DNS A/AAAA records pointing to your server.
  • Root or sudo access, plus ability to open ports 22/80/443 in host firewall and upstream cloud firewall.
  • SMTP credentials for alerts/invites, and optional Slack/MS Teams webhook for incident notifications.
  • A backup target (S3-compatible bucket, remote host, or snapshot policy) before first production traffic.
Prepare the host first: update packages, pin the timezone, install base tooling, and open only the ports you need.

sudo apt update && sudo apt -y upgrade
sudo timedatectl set-timezone UTC
sudo apt -y install ca-certificates curl gnupg lsb-release ufw git jq
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable


Step-by-step deployment

1) Install Docker Engine and Compose plugin

Use Docker’s official repository to avoid stale Compose/plugin versions. Production troubleshooting is easier when your runtime closely follows upstream patch cadence.

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
docker --version && docker compose version


2) Clone self-hosted Sentry and generate baseline config

Sentry’s official self-hosted repository ships with tested compose definitions and install scripts. Pin to a stable tag in production to reduce surprise upgrades.

sudo mkdir -p /opt/sentry && sudo chown $USER:$USER /opt/sentry
cd /opt/sentry
git clone https://github.com/getsentry/self-hosted.git .
# pin to the newest release tag (substitute an explicit tag you have validated in staging)
git checkout $(git describe --tags --abbrev=0)
./install.sh


3) Configure public URL and service endpoints

Before first launch, set the externally reachable URL and mail parameters. Avoid ad hoc edits scattered across files; keep changes in .env and sentry/config.yml so upgrades remain predictable.

cd /opt/sentry
cp -n .env .env.backup.$(date +%F)
# keep Sentry's web listener on loopback; Caddy is the only public entry point
cat >> .env <<'EOF'
SENTRY_BIND=127.0.0.1:9000
SENTRY_EVENT_RETENTION_DAYS=30
EOF
# the public URL and mail parameters live in sentry/config.yml, not .env;
# edit that file and set:
#   system.url-prefix: 'https://sentry.example.com'
#   mail.host: 'smtp.example.com'
#   mail.port: 587
#   mail.username: '[email protected]'
#   mail.password: 'REPLACE_WITH_SECRET'
#   mail.use-tls: true


4) Install and configure Caddy for HTTPS reverse proxy

Caddy keeps certificate management simple and robust. We terminate TLS at Caddy and proxy to local Sentry web service. If your compliance policy requires mTLS inside the host, place Sentry behind an internal TLS listener and rotate certs via your secret manager.

sudo apt -y install debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt -y install caddy
sudo tee /etc/caddy/Caddyfile > /dev/null <<'EOF'
sentry.example.com {
  encode gzip zstd
  reverse_proxy 127.0.0.1:9000
  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "SAMEORIGIN"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}
EOF
sudo caddy validate --config /etc/caddy/Caddyfile
sudo systemctl enable --now caddy


5) Launch stack and create admin account

Bring up the Sentry stack and initialize an administrative user. During the first boot, some components may take a few minutes to become healthy, especially ClickHouse and Snuba. Do not panic-restart repeatedly; instead watch health and logs.

cd /opt/sentry
docker compose up -d
# create first admin if not prompted during install
docker compose run --rm web createuser --superuser
# quick status overview
docker compose ps


6) Baseline operations: backup and upgrade runbook

Production readiness is not just deployment; it is recoverability. Snapshot PostgreSQL and key Docker volumes daily, and test restore quarterly. For upgrades, clone current state, read release notes, run staging validation, then roll to production during a low-traffic window.

# PostgreSQL logical backup (adjust container/service names if customized)
sudo mkdir -p /var/backups/sentry && sudo chown "$USER" /var/backups/sentry
docker compose exec -T postgres pg_dumpall -U postgres | gzip > /var/backups/sentry/pgdump-$(date +%F-%H%M).sql.gz

# Volume backup example (stop the stack first if you need a consistent snapshot)
sudo tar -czf /var/backups/sentry/volumes-$(date +%F-%H%M).tar.gz /var/lib/docker/volumes

# Prune old backups (keep 14 days)
find /var/backups/sentry -type f -mtime +14 -delete
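A backup only counts once a restore has been rehearsed. The sketch below assumes this guide's layout (/var/backups/sentry, a Compose service named postgres); the latest_backup helper name is ours, not part of Sentry.

```shell
#!/usr/bin/env bash
# Hedged restore-drill sketch: pick the newest logical dump and feed it
# back into PostgreSQL. Paths and service names follow this guide's layout.
set -euo pipefail

# Return the newest pgdump artifact in a backup directory.
latest_backup() {
  ls -1t "$1"/pgdump-*.sql.gz 2>/dev/null | head -n 1
}

# Uncomment to run the actual drill (ideally against a scratch host):
# BACKUP="$(latest_backup /var/backups/sentry)"
# gunzip -c "$BACKUP" | docker compose exec -T postgres psql -U postgres postgres
```

Run the drill on a throwaway host, not the production database, and record how long a full restore takes; that number is your real recovery objective.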


Configuration and secrets handling best practices

Keep secrets out of shell history and source control. Store long-lived values (SMTP password, webhook secrets, auth tokens) in a secret manager and materialize them at deploy time through environment injection. If you must keep local files, lock permissions to root and rotate on a fixed schedule. For Compose-managed apps, a common operational failure is changing a secret in one place but not reloading dependent containers; define a post-rotation checklist that includes restart order and verification tests.
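To make the post-rotation checklist concrete, here is a minimal sketch: an idempotent helper that updates one key in an env file, followed by the restart-and-verify order. The set_env_var name and the vault command are illustrative assumptions, not part of Sentry's tooling.

```shell
#!/usr/bin/env bash
# Hedged sketch of a post-rotation flow: update one KEY=VALUE pair in an
# env file idempotently, then restart only the containers that consume it.
set -euo pipefail

set_env_var() {  # set_env_var FILE KEY VALUE
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$file"; then
    # key already present: replace its value in place
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    # key missing: append it
    printf '%s=%s\n' "$key" "$value" >> "$file"
  fi
}

# Example rotation flow (run from /opt/sentry; the vault call is a placeholder):
# set_env_var .env SENTRY_MAIL_PASSWORD "$(vault kv get -field=pw secret/sentry-smtp)"
# docker compose up -d web worker          # recreate only the consumers of the secret
# docker compose logs --since=2m web | grep -i smtp   # verify before closing the change
```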

Set retention intentionally. Event retention drives cost and disk growth; retention that is too long creates noisy historical drag, while too short weakens RCA for recurring defects. For many teams, 30 days is a practical starting point, then adjust based on compliance and incident-review cadence. Configure alert thresholds by ownership domain (payments, auth, API, background jobs) so responders get focused signal instead of broad, low-value noise.

Harden ingress and admin access. Enforce MFA for Sentry accounts, restrict SSH by source ranges where possible, and keep host packages patched. If your organization uses SSO, integrate it early rather than after user sprawl. Finally, document your DSN rotation process: if a token is leaked in a public repo, responders should be able to rotate quickly without breaking every environment.
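A DSN rotation sketch against Sentry's project-keys Web API follows; the endpoint shape matches Sentry's documented API, while the org/project slugs, token, and key ID here are placeholders you must substitute.

```shell
#!/usr/bin/env bash
# Hedged sketch of DSN/key rotation via the Sentry API.
set -euo pipefail

BASE_URL="${BASE_URL:-https://sentry.example.com}"

keys_url() {  # keys_url ORG PROJECT -> project client-keys endpoint
  printf '%s/api/0/projects/%s/%s/keys/' "$BASE_URL" "$1" "$2"
}

# Rotation flow (requires an auth token with project:write scope):
# 1) create a replacement key
# curl -s -H "Authorization: Bearer $TOKEN" -X POST "$(keys_url acme backend)" \
#   -H 'Content-Type: application/json' -d '{"name":"rotated"}'
# 2) roll the new DSN out to every app environment, verify events arrive
# 3) disable the leaked/old key (OLD_KEY_ID is a placeholder)
# curl -s -H "Authorization: Bearer $TOKEN" -X PUT \
#   "$(keys_url acme backend)OLD_KEY_ID/" \
#   -H 'Content-Type: application/json' -d '{"isActive": false}'
```

Disable the old key rather than deleting it immediately; a disabled key can be re-enabled if an overlooked environment was still using it.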

Verification checklist

After deployment, verify each subsystem before announcing completion. Test DNS and TLS, authenticate to UI, send a synthetic exception from a non-production app, and confirm issue creation + alerting path. Verify worker queues are draining and no service is crash-looping. Capture screenshots and command output in your runbook so future operators can compare known-good vs incident state quickly.

# TLS + HTTP status
curl -I https://sentry.example.com

# Container health
cd /opt/sentry && docker compose ps

# Watch error pipeline logs briefly
# service names follow the self-hosted compose file; Snuba runs as several snuba-* services
docker compose logs --since=5m web worker relay | tail -n 120


  • ✅ Login works for admin and invited users.
  • ✅ Test event appears in the correct project within 60 seconds.
  • ✅ Alert notifications are delivered to configured channel.
  • ✅ No sustained queue backlog in worker/consumer services.
  • ✅ Daily backup artifact appears in remote or local target.
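To send the synthetic event without wiring up a full application, you can derive the ingest endpoint from the DSN itself. This is a hedged sketch: the DSN layout (https://PUBLIC_KEY@HOST/PROJECT_ID) and the legacy /api/PROJECT_ID/store/ endpoint are standard Sentry conventions, and the helper names are ours.

```shell
#!/usr/bin/env bash
# Hedged sketch: split a DSN into its ingest pieces, then fire one
# minimal JSON event at the store endpoint.
set -euo pipefail

dsn_host()    { printf '%s' "$1" | sed -E 's|^https?://[^@]+@([^/]+)/.*|\1|'; }
dsn_key()     { printf '%s' "$1" | sed -E 's|^https?://([^@]+)@.*|\1|'; }
dsn_project() { printf '%s' "$1" | sed -E 's|.*/([0-9]+)$|\1|'; }

# Example (replace DSN with a real project key from Settings -> Client Keys):
# DSN="https://abc123@sentry.example.com/2"
# curl -s "https://$(dsn_host "$DSN")/api/$(dsn_project "$DSN")/store/" \
#   -H "X-Sentry-Auth: Sentry sentry_version=7, sentry_key=$(dsn_key "$DSN")" \
#   -H 'Content-Type: application/json' \
#   -d '{"message":"synthetic verification event","level":"info"}'
```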

Common issues and fixes

Issue: 502/504 from reverse proxy

Usually indicates web container not ready, wrong proxy upstream port, or host firewall restrictions. Validate service bind addresses and confirm Caddy points to the active internal listener. Check startup logs for migration delays before declaring failure.
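A quick way to separate proxy problems from upstream problems is to probe the internal listener directly and branch on the status code. The classification below is a triage heuristic for this guide, not a Caddy or Sentry feature.

```shell
#!/usr/bin/env bash
# Hedged triage sketch: curl the internal Sentry listener and map the
# status code to a next diagnostic step.
set -euo pipefail

classify_upstream() {  # classify_upstream HTTP_CODE
  case "$1" in
    000)     echo "upstream unreachable: check 'docker compose ps' and bind address" ;;
    2??|3??) echo "upstream healthy: inspect Caddy config and firewall instead" ;;
    5??)     echo "upstream erroring: read web container logs and migrations" ;;
    *)       echo "unexpected status $1: capture logs before restarting" ;;
  esac
}

# code="$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://127.0.0.1:9000 || echo 000)"
# classify_upstream "$code"
```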

Issue: Events not appearing in projects

Most common causes are incorrect DSN, blocked egress from app environment, or Relay/Snuba ingestion delay. Send a minimal SDK test payload and inspect Relay + Snuba logs first, then confirm project keys and environment filters in Sentry UI.

Issue: High disk growth

Large retention windows, debug-level event noise, and unbounded attachments can consume storage quickly. Lower retention, reduce noisy event classes, and enforce release health tags so triage remains focused. Add disk alerts at 70/80/90% thresholds.
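The 70/80/90% thresholds above can be wired into a small cron-able check. The disk_tier helper and the Slack webhook hookup are illustrative, not part of any Sentry tooling.

```shell
#!/usr/bin/env bash
# Hedged sketch: map a disk-usage percentage onto the alert tiers
# suggested above; wire the non-ok branches into your alert channel.
set -euo pipefail

disk_tier() {  # disk_tier USED_PERCENT -> ok|warn|high|critical
  local pct="$1"
  if   [ "$pct" -ge 90 ]; then echo critical
  elif [ "$pct" -ge 80 ]; then echo high
  elif [ "$pct" -ge 70 ]; then echo warn
  else echo ok; fi
}

# used="$(df --output=pcent / | tail -n 1 | tr -dc '0-9')"
# [ "$(disk_tier "$used")" = ok ] || curl -s -X POST "$SLACK_WEBHOOK" \
#     -d "{\"text\":\"sentry host disk at ${used}%\"}"
```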

Issue: Upgrade introduces schema or worker mismatch

Pin release tags, run upgrades in staging first, and execute migration commands in documented order. Keep rollback snapshots before each production upgrade and test restore path so rollback is an option, not a theory.
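The runbook above can be sketched as a guarded upgrade script. The version_lte helper and the pinned TARGET value are illustrative; substitute the tag you actually validated in staging.

```shell
#!/usr/bin/env bash
# Hedged upgrade-runbook sketch: refuse to "upgrade" to an older tag,
# snapshot first, then run the stock self-hosted install script.
set -euo pipefail

version_lte() {  # version_lte A B -> succeeds if A <= B in version order
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

# cd /opt/sentry && git fetch --tags
# CURRENT="$(git describe --tags --abbrev=0)"
# TARGET="24.8.0"   # placeholder: pin explicitly and read release notes first
# version_lte "$CURRENT" "$TARGET" || { echo "refusing downgrade"; exit 1; }
# docker compose exec -T postgres pg_dumpall -U postgres | gzip > pre-upgrade.sql.gz
# git checkout "$TARGET" && ./install.sh && docker compose up -d
```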

FAQ

1) Can I run Sentry on a 4 GB VPS?

You can run a lab environment, but production workloads generally need more memory and CPU because Kafka/Snuba/ClickHouse pipelines are not lightweight under real traffic. Start at 16 GB RAM for reliable operations.

2) Should I expose PostgreSQL directly for analytics?

No. Keep PostgreSQL private to the host/network boundary. Exposing it publicly increases attack surface and bypasses the operational controls you already enforce at the application edge.

3) Is Caddy required, or can I use Nginx/Traefik?

Caddy is not required; it is used here for simpler automatic TLS and concise config. If your stack standardizes on Nginx or Traefik, use those with equivalent TLS/security headers and health checks.

4) How often should I rotate secrets?

At minimum quarterly, and immediately after suspected exposure. Pair rotation with scripted rollout plus post-change validation so rotation does not create hidden auth failures.

5) What retention window should I choose first?

Thirty days is a practical starting point for most teams. Increase if compliance or long incident cycles require it, but monitor storage and query performance as you scale.

6) Can I migrate from hosted Sentry later?

Yes, but migration planning matters. Inventory projects, teams, alerts, and DSNs, then stage migration by service group. Keep a temporary dual-reporting window to validate parity before full cutover.

7) How do I keep alerts actionable?

Create ownership-based alert routes, group by environment/release, and suppress known low-value noise. Alert quality is more about policy design than raw tooling.


Talk to us

Need help deploying Sentry in production, integrating alerting workflows with your incident response process, or building backup and upgrade runbooks your team can trust? We can help with architecture, hardening, migration, and operational readiness.

Contact Us
