
Production Guide: Deploy Umami with Docker Compose + Caddy + PostgreSQL on Ubuntu

A production-focused walkthrough for secure self-hosted analytics with TLS, backups, and practical operations.

If your team needs privacy-friendly product analytics without sending behavior data to third-party SaaS tools, self-hosting Umami is a practical option. A common real-world use case is a startup running several marketing and product sites that wants one analytics stack, full data ownership, and predictable monthly infrastructure costs. In this guide, you will deploy Umami in production on Ubuntu using Docker Compose + PostgreSQL + Caddy, then harden it for reliability with backups, update procedures, and operational checks you can hand to an on-call engineer.

This walkthrough is intentionally production-oriented: we isolate secrets, enforce TLS, define health checks, and verify the full request path from browser script to database writes. You can use this approach for a single website or for multiple domains, and the same structure scales well when you later move analytics workloads behind centralized logging and observability.

Architecture and flow overview

The deployment uses three core containers: umami (web app + API), postgres (event storage), and caddy (reverse proxy + automatic HTTPS). Caddy terminates TLS and forwards requests to Umami on an internal Docker network. Umami writes sessions/events to PostgreSQL. Backups run from the host via cron using pg_dump inside the database container and are then synced to off-server storage.

  • Public edge: Caddy on ports 80/443 with automatic certificate management.
  • Private app network: only internal container-to-container traffic for Umami/PostgreSQL.
  • Stateful layer: PostgreSQL volume + scheduled logical backups.
  • Operations: deterministic compose commands for health checks, updates, and rollback.
# high-level runtime view
# internet -> caddy (TLS) -> umami -> postgres

docker network inspect umami_net >/dev/null 2>&1 || true
docker compose ps


Prerequisites

  • Ubuntu 22.04 or 24.04 server with at least 2 vCPU, 4 GB RAM, and 40+ GB disk.
  • A domain/subdomain pointed to your server (example: analytics.example.com).
  • Open inbound ports 80 and 443.
  • Non-root sudo user for operations.
  • Docker Engine + Docker Compose plugin installed.
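
If you are unsure whether the Docker prerequisite is fully met, both the engine and the Compose v2 plugin can be checked in one go:

```shell
# confirm Docker Engine and the Compose plugin are installed and responding
docker --version
docker compose version
# the daemon must also be reachable by your deployment user
docker info >/dev/null 2>&1 || echo "Docker daemon not reachable (check the docker group membership)"
```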

Before continuing, verify DNS has propagated and resolves to the target host. Caddy can only provision certificates if your domain points correctly and ports are reachable from the internet.

# replace with your host
dig +short analytics.example.com
curl -I http://analytics.example.com


Step-by-step deployment

1) Prepare directories and least-privilege file permissions

Create an isolated project directory so deployments, environment files, and backup artifacts remain predictable. Keep secret files readable only by the deployment user.

sudo mkdir -p /opt/umami/{caddy,postgres,backups}
sudo chown -R $USER:$USER /opt/umami
cd /opt/umami
umask 027
touch .env
chmod 600 .env


2) Create environment variables and strong credentials

Use long random values for the database and application secrets, and avoid characters such as /, + and @ in the database password, since it is embedded unescaped in the DATABASE_URL connection string. Never commit these values to git; store them only in the local .env file and your secret manager. In production teams, rotate these secrets on a schedule and after staff changes.

cat > /opt/umami/.env <<'EOF'
DOMAIN=analytics.example.com
POSTGRES_DB=umami
POSTGRES_USER=umami
POSTGRES_PASSWORD=CHANGE_ME_DB_PASSWORD
APP_SECRET=CHANGE_ME_UMAMI_APP_SECRET
TZ=UTC
EOF

# generate secure replacements
openssl rand -base64 36
openssl rand -base64 48
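
To avoid copy-pasting values by hand, a small sketch like the following can inject freshly generated secrets directly into the file. It assumes GNU sed and that the CHANGE_ME placeholder strings written above are still present; the tr filter strips characters that would need URL-encoding in the connection string:

```shell
# generate secrets and substitute the CHANGE_ME placeholders in-place
# (sketch; assumes GNU sed and the placeholders written above)
DB_PASSWORD=$(openssl rand -base64 36 | tr -d '/+=\n')
APP_SECRET=$(openssl rand -base64 48 | tr -d '/+=\n')
sed -i "s|CHANGE_ME_DB_PASSWORD|${DB_PASSWORD}|" /opt/umami/.env
sed -i "s|CHANGE_ME_UMAMI_APP_SECRET|${APP_SECRET}|" /opt/umami/.env
chmod 600 /opt/umami/.env
```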


3) Write Docker Compose for Umami + PostgreSQL + Caddy

The compose file below defines each service's role clearly and includes health checks and restart policies, which gives cleaner recoverability across host reboots and service restarts. For stricter change control, pin the umami image to a specific release tag rather than the floating postgresql-latest tag, and record the pinned version in your change log.

cat > /opt/umami/docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16-alpine
    container_name: umami-postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks: [umami_net]

  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami-app
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      APP_SECRET: ${APP_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
    networks: [umami_net]

  caddy:
    image: caddy:2-alpine
    container_name: umami-caddy
    restart: unless-stopped
    env_file: .env
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
    depends_on:
      - umami
    networks: [umami_net]

networks:
  umami_net:
    name: umami_net
EOF
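
Before starting anything, it is worth confirming that Compose can parse the file and resolve every variable from .env:

```shell
cd /opt/umami
# renders the fully-interpolated config; exits non-zero on syntax or variable errors
docker compose config --quiet && echo "compose file parses cleanly"
```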


4) Configure Caddy reverse proxy and TLS

Keep proxy behavior explicit: forwarded headers, compression, and a strict upstream target. This avoids subtle failures where analytics requests are dropped because of missing Host or protocol headers. Note that the {$DOMAIN} placeholder is resolved from the caddy container's environment, so the caddy service must have access to the same DOMAIN value defined in .env.

cat > /opt/umami/caddy/Caddyfile <<'EOF'
{$DOMAIN} {
  encode gzip zstd
  reverse_proxy umami:3000 {
    header_up X-Forwarded-Proto {scheme}
    header_up X-Forwarded-Host {host}
    header_up X-Real-IP {remote_host}
  }
  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "SAMEORIGIN"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}
EOF
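
You can ask Caddy itself to validate the file before the stack goes live. The {$DOMAIN} placeholder must be supplied as an environment variable for validation, just as it must be at runtime; the domain below is the example value from this guide:

```shell
# validate the Caddyfile with the same image the stack uses
docker run --rm \
  -e DOMAIN=analytics.example.com \
  -v /opt/umami/caddy/Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy:2-alpine caddy validate --config /etc/caddy/Caddyfile
```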


5) Launch the stack and secure the default admin account

Start the services and confirm health. On first run Umami seeds a default administrator with username admin and password umami; log in at your analytics domain and change this password immediately, then enable MFA if your policy requires it.

cd /opt/umami
docker compose pull
docker compose up -d

docker compose ps
docker compose logs --tail=100 umami
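
Once the containers report healthy, a quick end-to-end probe confirms the proxy-to-app path. Recent Umami releases expose a lightweight heartbeat endpoint; if your version lacks it, any 200 response from the root URL serves the same purpose:

```shell
# end-to-end check through Caddy (heartbeat endpoint present in recent Umami versions)
curl -fsS https://analytics.example.com/api/heartbeat
# fallback: a 200 from the login page proves the proxy path works
curl -fsS -o /dev/null -w '%{http_code}\n' https://analytics.example.com/
```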


6) Add tracking script to your websites

After creating a site in Umami, place the generated script in your website template footer. Roll this out first on staging, then production, so you can verify event capture before broad release.

<script async defer data-website-id="YOUR_WEBSITE_ID"
  src="https://analytics.example.com/script.js"></script>
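
To verify ingestion without touching a production page, you can post a synthetic pageview directly to the collection endpoint. The payload shape below matches the current Umami /api/send API as I understand it; the website ID and hostname are placeholders, and the browser-like User-Agent header matters because Umami rejects obviously non-browser clients:

```shell
# hedged sketch: send one synthetic pageview to the collect endpoint
curl -X POST https://analytics.example.com/api/send \
  -H 'Content-Type: application/json' \
  -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64)' \
  -d '{
    "type": "event",
    "payload": {
      "website": "YOUR_WEBSITE_ID",
      "hostname": "example.com",
      "url": "/ingestion-test",
      "language": "en-US",
      "screen": "1920x1080"
    }
  }'
```

The test pageview should appear in the dashboard within a few seconds; if it does not, check the troubleshooting section below.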


Configuration and secrets handling best practices

For production, treat analytics as business-critical telemetry. Keep secrets outside source control, rotate on a calendar, and define incident playbooks for leaked credentials. At minimum:

  • Store .env in a root-owned directory with mode 600 or equivalent.
  • Rotate APP_SECRET, admin password, and DB password periodically.
  • Use off-host encrypted backups and test restore monthly.
  • Enable host firewall rules and allow only required ports.
  • Document emergency procedures for compromised app credentials.
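
On Ubuntu, a minimal ufw baseline covering the "only required ports" point might look like this (adjust the SSH rule if you run on a non-standard port):

```shell
# hedged baseline: deny inbound by default, allow SSH and web traffic only
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
sudo ufw status verbose
```

Be aware that ports published by Docker are handled in iptables before ufw's filtering applies, so treat ufw as one layer of defense rather than the only one.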

For teams, map ownership clearly: platform engineer owns runtime + patching, product analyst owns dashboard governance, and security owns credential rotation and access review. This avoids the common failure mode where analytics runs but no one owns uptime, backups, or upgrades.

# example daily backup job with 14-day local retention
sudo tee /etc/cron.d/umami-backup >/dev/null <<'EOF'
15 2 * * * root docker exec umami-postgres pg_dump -U umami -d umami | gzip > /opt/umami/backups/umami-$(date +\%F).sql.gz
45 2 * * * root find /opt/umami/backups -type f -name 'umami-*.sql.gz' -mtime +14 -delete
EOF
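
A backup you have never restored is only a hope. The following sketch restores the newest dump into a throwaway database inside the same container; the scratch database name and the spot-check query are illustrative:

```shell
# hedged restore drill into a throwaway database
LATEST=$(ls -1t /opt/umami/backups/umami-*.sql.gz | head -n1)
docker exec umami-postgres createdb -U umami umami_restore_test
gunzip -c "$LATEST" | docker exec -i umami-postgres psql -U umami -d umami_restore_test
# spot-check a row count, then drop the scratch database
docker exec umami-postgres psql -U umami -d umami_restore_test -c 'SELECT count(*) FROM website;'
docker exec umami-postgres dropdb -U umami umami_restore_test
```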


Verification checklist

Use this checklist before announcing the deployment complete:

  1. TLS valid: certificate chain trusted and auto-renew path healthy.
  2. Containers healthy: all services running and restart policy applied.
  3. UI reachable: login succeeds and dashboard loads quickly.
  4. Event ingestion: test pageview appears within expected latency.
  5. Backup success: backup artifact created and readable.
  6. Restore drill: at least one test restore in non-production environment.
curl -I https://analytics.example.com
openssl s_client -connect analytics.example.com:443 -servername analytics.example.com </dev/null | openssl x509 -noout -dates -issuer

docker compose -f /opt/umami/docker-compose.yml ps
docker compose -f /opt/umami/docker-compose.yml logs --tail=50 caddy umami postgres


Common issues and fixes

Issue: Caddy cannot obtain certificates

Symptoms: repeated ACME errors in Caddy logs and browsers showing insecure connection warnings.
Fix: verify DNS A/AAAA records, ensure ports 80/443 are open, and remove any conflicting reverse proxy on the same host.

Issue: Umami starts but cannot connect to PostgreSQL

Symptoms: app container restarts repeatedly, logs show DB connection failures.
Fix: validate DATABASE_URL, ensure credentials match .env, and wait for PostgreSQL health check before app start.

Issue: Tracking script loaded but no events appear

Symptoms: browser loads script.js successfully but dashboard remains empty.
Fix: confirm correct data-website-id, no CSP block, and domain mapping in Umami matches the website origin.

Issue: Disk usage grows unexpectedly

Symptoms: alerts for low disk on host after weeks of operation.
Fix: enforce log rotation, prune unused images, and apply retention policies for old backup files.

# quick triage commands
sudo ss -tulpen | egrep ':80|:443'
docker logs umami-caddy --tail=120
docker logs umami-app --tail=120
docker logs umami-postgres --tail=120
sudo du -sh /opt/umami/*
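
If container logs turn out to be the growth driver, capping the json-file driver at the daemon level bounds every container at once. This sketch assumes /etc/docker/daemon.json does not exist yet; if it does, merge the keys by hand, and note that the daemon restart briefly stops all containers:

```shell
# cap per-container log growth via the json-file logging driver
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```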


FAQ

1) Can I run Umami for multiple websites on one deployment?

Yes. One Umami instance can track multiple websites. Create each site in the Umami UI and use its corresponding website ID in the script tag.

2) Is PostgreSQL required, or can I use SQLite in production?

PostgreSQL is strongly recommended for production reliability, better concurrency behavior, and cleaner backup/restore workflows.

3) How often should I back up analytics data?

For most teams, daily logical backups are a solid baseline. If analytics drives revenue decisions, use more frequent snapshots and test restore procedures monthly.

4) How do I safely upgrade without losing data?

Take a verified backup first, pin image tags in compose, run docker compose pull, then docker compose up -d. Validate ingestion after deployment before closing the change window.
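
In command form, that upgrade procedure might look like the following sketch (the backup filename is illustrative):

```shell
# 1) verified backup first
docker exec umami-postgres pg_dump -U umami -d umami | gzip > /opt/umami/backups/pre-upgrade.sql.gz
gunzip -t /opt/umami/backups/pre-upgrade.sql.gz

# 2) pull the pinned images and redeploy
cd /opt/umami
docker compose pull
docker compose up -d

# 3) validate before closing the change window
docker compose ps
docker compose logs --tail=50 umami
```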

5) What is the minimum security hardening I should enforce?

Strong secrets, HTTPS-only access, host firewall, restricted SSH, least-privilege file permissions, and routine patching for Ubuntu + container images.

6) Can I put this behind an existing edge proxy or CDN?

Yes. Preserve forwarded headers and TLS assumptions correctly. If a CDN is in front, ensure cache rules do not break dynamic endpoints and API calls.

7) How do I handle analytics for staging environments?

Create separate Umami website entries or a separate staging instance to avoid polluting production dashboards with QA and load-test traffic.
