
Production Guide: Deploy Sentry with Docker Compose + Caddy + PostgreSQL + Redis on Ubuntu

A practical, production-ready Sentry deployment with security hardening, observability, backup, and troubleshooting playbooks.

Introduction: real-world use case

Teams usually adopt Sentry after repeated incident fatigue: alerts are noisy, root-cause detail is thin, and engineers waste hours reproducing failures. A production deployment must do more than start containers. It should protect secrets, preserve data durability, expose meaningful health signals, and support safe upgrades.

This guide walks through a practical Ubuntu deployment using Docker Compose + Caddy + PostgreSQL + Redis. We focus on reliability under day-two pressure: TLS, backup paths, queue behavior, database sizing, and runbook-friendly verification. The target audience is operators who need predictable outcomes, not fragile demo setups.

By the end, you will have a resilient baseline architecture and an operational checklist your team can reuse for onboarding, audits, and incident response. The commands are intentionally explicit so junior and senior engineers can collaborate from the same procedure.

Architecture and flow overview

Caddy terminates TLS and proxies traffic to the Sentry web service. PostgreSQL stores transactional state, Redis supports queue/cache behavior, and background workers process asynchronous jobs. Splitting these responsibilities reduces blast radius and keeps the UI responsive during spikes.

  • Caddy handles HTTPS and secure headers.
  • Sentry web serves dashboard/API traffic.
  • Workers execute background processing.
  • PostgreSQL keeps durable system state.
  • Redis accelerates queue and cache operations.

Only ports 80/443 should be exposed publicly. All other services stay on private Docker networks.

Prerequisites

  • Ubuntu 22.04 or 24.04 server (4 vCPU, 8 GB RAM, 60 GB+ SSD recommended).
  • DNS record pointing your hostname to the server IP.
  • Sudo access and outbound internet for image pulls.
  • Firewall policy allowing only SSH, HTTP, HTTPS.
  • Basic familiarity with Docker logs and restart workflows.

Step-by-step deployment

1) Install container runtime and compose plugin

sudo apt update
sudo apt install -y docker.io docker-compose-v2
sudo systemctl enable --now docker
sudo usermod -aG docker $USER

Log out and back in (or run `newgrp docker`) so the group membership takes effect before running `docker` without sudo. On Ubuntu's own repositories the compose plugin is packaged as `docker-compose-v2`; the `docker-compose-plugin` name only exists in Docker's upstream apt repository.

2) Prepare directories

sudo mkdir -p /opt/sentry/{caddy,postgres,redis,backups}
sudo chown -R $USER:$USER /opt/sentry
cd /opt/sentry


3) Generate secrets and env file

openssl rand -hex 32
openssl rand -base64 48

Paste the generated values into the REPLACE_ME placeholders below: the hex value suits the database and Redis passwords, and the longer base64 value suits SENTRY_SECRET_KEY.

cat > /opt/sentry/.env <<'ENV'
SENTRY_HOST=errors.example.com
POSTGRES_DB=sentry
POSTGRES_USER=sentry
POSTGRES_PASSWORD=REPLACE_ME
REDIS_PASSWORD=REPLACE_ME
SENTRY_SECRET_KEY=REPLACE_WITH_LONG_SECRET
ENV
chmod 600 /opt/sentry/.env


4) Create compose stack

cat > /opt/sentry/docker-compose.yml <<'YAML'
services:
  caddy:
    image: caddy:2.8
    restart: unless-stopped
    ports: ["80:80", "443:443"]
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on: [web]
    networks: [edge, app]
  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes: ["./postgres:/var/lib/postgresql/data"]
    networks: [app]
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: ["redis-server","--requirepass","${REDIS_PASSWORD}"]
    volumes: ["./redis:/data"]
    networks: [app]
  web:
    image: getsentry/sentry:24.7.1
    restart: unless-stopped
    env_file: .env
    depends_on: [postgres, redis]
    networks: [app]
  worker:
    image: getsentry/sentry:24.7.1
    restart: unless-stopped
    command: ["run","worker"]
    env_file: .env
    depends_on: [postgres, redis, web]
    networks: [app]
  cron:
    image: getsentry/sentry:24.7.1
    restart: unless-stopped
    command: ["run","cron"]
    env_file: .env
    depends_on: [postgres, redis, web]
    networks: [app]
volumes:
  caddy_data:
  caddy_config:
networks:
  edge:
  app:
    internal: true
YAML


5) Configure reverse proxy

cat > /opt/sentry/caddy/Caddyfile <<'CADDY'
errors.example.com {
  encode gzip zstd
  reverse_proxy web:9000
  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "DENY"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}
CADDY


6) Start services and validate process status

cd /opt/sentry
docker compose --env-file .env up -d
docker compose ps

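On first boot, Sentry's database schema must be applied and an initial admin account created before the web UI is usable. A sketch, assuming the image's standard `sentry` CLI entrypoint; re-running `upgrade` is idempotent and is also the migration step for later version bumps:

```shell
cd /opt/sentry
# Apply/upgrade the database schema (safe to re-run)
docker compose run --rm web upgrade --noinput
# Create the first superuser (interactive prompt for email/password)
docker compose run --rm web createuser --superuser
```

If either command fails, check `docker compose logs postgres` first; the most common cause is the database still initializing on first start.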

7) Apply firewall baseline

sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status


Configuration and secret-handling best practices

Store secrets in restricted files or an external secret manager. Never commit credentials to source control. Use unique credentials per environment and rotate them with a documented runbook.

In production, pair secret governance with access governance: least privilege, short-lived privileged sessions, and auditable change history. Ensure backup archives are encrypted and that restore credentials are not stored beside backup files. Validate your restore process every quarter, because untested backups are operationally equivalent to no backups.

When teams scale, move from manual secret files to centralized secret delivery. This reduces drift and makes compliance evidence easier to produce. Keep a small break-glass procedure for emergency rotations during active incidents.
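The encrypted-backup requirement above can be met with gpg symmetric encryption in the dump pipeline itself, so no plaintext archive ever touches disk. A sketch; the passphrase file path is illustrative and is deliberately kept outside the backup directory:

```shell
cd /opt/sentry
# Dump, compress, and encrypt in one pass; passphrase lives outside ./backups
docker exec "$(docker compose ps -q postgres)" pg_dump -U sentry -d sentry \
  | gzip \
  | gpg --batch --symmetric --passphrase-file /root/.sentry-backup-pass \
        -o "backups/sentry-$(date +%F).sql.gz.gpg"
```

Restoring requires the passphrase file, which is exactly the point: an attacker who copies the backup directory gets nothing usable.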

Verification checklist

Use this checklist after initial rollout and after each update:

cd /opt/sentry
docker compose ps
docker compose logs --tail=100 web
docker compose logs --tail=100 worker
docker compose logs --tail=100 cron


curl -I https://errors.example.com
curl -sS https://errors.example.com | head


docker exec -it $(docker compose ps -q postgres) psql -U sentry -d sentry -c 'select now();'


Successful verification means all services remain Up, TLS responds cleanly, and logs contain no repeating crash loops. Add these checks to your change-management template and post-deploy signoff.

For ongoing reliability, capture baseline latency and queue depth metrics now. Baselines make anomaly detection much faster during incidents.
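A latency baseline can be captured with curl's built-in timing variables; run this periodically (e.g. from cron) and record the numbers so you have something to compare against during an incident. The hostname matches the Caddyfile above:

```shell
# Connect, TLS, time-to-first-byte, and total time for the dashboard
curl -o /dev/null -sS \
  -w 'connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  https://errors.example.com/
```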

Common issues and fixes

Issue 1: Proxy returns 502

Check upstream service readiness and proxy target port. Most 502 events are timing-related after restarts or image pulls.

docker compose logs --tail=200 web
docker compose restart caddy


Issue 2: Worker backlog grows

Validate Redis credentials, monitor queue depth, and scale workers if sustained throughput exceeds processing capacity.
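Queue depth can be measured directly in Redis, since Celery queues are ordinary Redis lists. A sketch; the exact key names vary by Sentry version and configuration, so treat `default` as an assumption and check the scan output first:

```shell
cd /opt/sentry
REDIS_PASSWORD="$(grep '^REDIS_PASSWORD=' .env | cut -d= -f2)"
# List candidate queue keys, then measure one (key name is an assumption)
docker exec "$(docker compose ps -q redis)" \
  redis-cli -a "$REDIS_PASSWORD" --scan --pattern '*' | head
docker exec "$(docker compose ps -q redis)" \
  redis-cli -a "$REDIS_PASSWORD" llen default
```

A depth that grows monotonically while workers are Up usually means throughput, not connectivity, is the bottleneck; scale workers before touching Redis.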

Issue 3: Database disk growth accelerates

Enable retention policies, archive old events, and review indexes for high-cardinality fields.
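Sentry ships a `cleanup` command for enforcing retention; a sketch, assuming the same compose stack (adjust `--days` to your policy, and schedule it via cron once verified):

```shell
cd /opt/sentry
# Delete event data older than 30 days
docker compose run --rm worker cleanup --days 30
```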

Issue 4: Upgrades break migrations

Pin versions, snapshot DB before upgrades, rehearse in staging, and keep rollback artifacts ready.
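The "snapshot DB before upgrades" step can be a single command against the running container. A sketch; the custom (`-F c`) format lets `pg_restore` restore selectively during rollback:

```shell
cd /opt/sentry
# Pre-upgrade snapshot in pg_restore-compatible custom format
docker exec "$(docker compose ps -q postgres)" \
  pg_dump -U sentry -d sentry -F c > "backups/pre-upgrade-$(date +%F).dump"
```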

Issue 5: Team loses confidence in alerts

Retune alert thresholds based on service criticality and business impact. Signal quality matters more than alert volume.

Issue 6: Secrets leaked into logs

Rotate affected credentials immediately and implement output redaction in pipelines.

FAQ

Can this run on a single VM in production?

Yes for moderate load, with careful sizing and monitoring. Larger deployments should separate data services.

How often should backups run?

At least daily full backups, plus more frequent incremental/WAL strategy for tighter recovery objectives.
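The daily cadence can be a plain host cron entry; a sketch matching this guide's paths (note that `%` must be escaped as `\%` inside crontabs, and `/etc/cron.d` entries require the user field):

```shell
# /etc/cron.d/sentry-backup — daily at 02:15, prune archives after 14 days
15 2 * * * root cd /opt/sentry && docker exec "$(docker compose ps -q postgres)" pg_dump -U sentry -d sentry | gzip > backups/sentry-$(date +\%F).sql.gz && find backups -name 'sentry-*.sql.gz' -mtime +14 -delete
```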

What is the safest way to upgrade?

Pin versions, test in staging, backup before deploy, then perform controlled maintenance-window rollout.

How do we rotate secrets without downtime?

Use staged credential rotation: create new secrets, update consumers, verify health, revoke old ones.
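The "create new secrets, update consumers" stage can be scripted so a failed run never corrupts the live env file. A minimal sketch for one variable; the demo fallback branch only exists so the script is runnable outside the server, and on the real host it never fires:

```shell
set -euo pipefail
ENV_FILE="${ENV_FILE:-/opt/sentry/.env}"
# Demo fallback: on the real host the live file exists and this never fires
[ -f "$ENV_FILE" ] || { ENV_FILE=./env.demo; printf 'REDIS_PASSWORD=old\n' > "$ENV_FILE"; }
NEW_PW="$(openssl rand -hex 32)"
TMP="$(mktemp)"
# Rewrite only the one key; write-then-rename keeps the change atomic
sed "s|^REDIS_PASSWORD=.*|REDIS_PASSWORD=${NEW_PW}|" "$ENV_FILE" > "$TMP"
chmod 600 "$TMP"
mv "$TMP" "$ENV_FILE"
```

After rewriting the file, `docker compose up -d` recreates consumers with the new value; verify health, then revoke the old credential on the Redis side to complete the rotation.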

Should PostgreSQL ever be internet-exposed?

No. Restrict to private networks and audited access paths such as VPN or bastion workflows.

What metrics matter most for early warning?

Queue depth, worker failure rate, API latency, DB saturation, and restart counts are high-value indicators.

Can we add SSO later?

Yes. Start with strong local auth controls and plan SSO integration once baseline stability is proven.


Talk to us

Need help deploying or hardening Sentry in production? We can help with architecture, security baselines, and operational runbooks tailored to your team.

