
Production Guide: Deploy Umami with Docker Compose + Traefik + PostgreSQL on Ubuntu

A production-ready Umami analytics deployment with secure networking, secret handling, backups, and practical troubleshooting.

Teams usually adopt product analytics after they notice the same pattern: decisions are being made from incomplete signals. Marketing sees campaign clicks, engineering sees logs, product sees support tickets, and leadership still lacks a shared view of activation, retention, and feature adoption. Umami is a strong option when you want analytics that are privacy-aware, fast, and self-hosted without adding unnecessary operational complexity.

This guide shows how to run Umami in production on Ubuntu using Docker Compose, Traefik, and PostgreSQL. The focus is practical operations: clean network boundaries, secret hygiene, TLS, backups, verification checkpoints, and failure recovery. By the end, you will have a deployment your team can actually maintain, not just a quick demo environment.

Architecture and flow overview

The stack has three core services:

  • umami: dashboard and API service for event analytics
  • postgres: durable storage for users, websites, events, and settings
  • traefik: reverse proxy handling TLS certificates, routing, and secure edge exposure

Traffic flows from browser to Traefik over HTTPS, then to Umami on a private Docker network. PostgreSQL stays internal and is never exposed publicly. This keeps the attack surface narrower and makes policy easier to reason about during audits. Operationally, split concerns this way: edge and certificates in Traefik, application lifecycle in Umami, and data durability in PostgreSQL.

For production environments, keep these principles non-negotiable: use strong secrets with rotation, pin image versions intentionally, limit public ports to 80/443 only, and test backup restore paths on a schedule. A secure analytics stack is not only about encryption in transit; it is also about repeatable operations under pressure.

Prerequisites

  • Ubuntu 22.04/24.04 host with at least 2 vCPU, 4 GB RAM, and 30+ GB storage
  • Domain or subdomain (example: analytics.yourdomain.com) pointed to your server
  • Open ports 80 and 443 from internet to host
  • Docker Engine + Docker Compose plugin installed
  • Basic Linux admin access (sudo + SSH keys)
Install Docker Engine and the Compose plugin from Docker's official apt repository:

sudo apt update && sudo apt -y upgrade
sudo apt -y install ca-certificates curl gnupg ufw
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
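
The docker group membership only applies to new login sessions: log out and back in (or run newgrp docker), then confirm both the Engine and the Compose v2 plugin respond. A guarded sketch (the guard just lets it no-op on machines where Docker is not yet on the PATH):

```shell
# Verify the Docker Engine and Compose v2 plugin after re-login.
set -eu
DOCKER_STATE=absent
if command -v docker >/dev/null 2>&1; then
  docker --version          # Engine version string
  docker compose version    # Compose plugin version string
  DOCKER_STATE=present
else
  echo "docker not on PATH; log out and back in, then retry"
fi
```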


Step-by-step deployment

1) Create project directories and a protected environment file

Keep all runtime files grouped under a single root folder so upgrades, rollbacks, and backups remain predictable. Restrict the environment file to local administrators only, because it will hold credentials and app secrets.

mkdir -p ~/umami-prod/{traefik,db,backups}
cd ~/umami-prod
touch .env
chmod 600 .env


Edit .env with strong values. Use 32+ random characters for secrets and avoid reusing credentials across environments.
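
openssl (preinstalled on Ubuntu) is a convenient way to mint values of that strength; the variable names here are only illustrative:

```shell
# Generate distinct random values for the app secret and the DB password.
set -eu
APP_SECRET=$(openssl rand -hex 32)    # 64 hex characters
DB_PASSWORD=$(openssl rand -hex 24)   # 48 hex characters
echo "UMAMI_APP_SECRET=${APP_SECRET}"
echo "UMAMI_DB_PASSWORD=${DB_PASSWORD}"
```

Paste the printed values over the replace_with_... placeholders below, and keep POSTGRES_PASSWORD identical to UMAMI_DB_PASSWORD since both name the same database credential. Hex output also avoids characters that would need URL-encoding in the connection string.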

cat > .env << 'EOF'
DOMAIN=analytics.yourdomain.com
TZ=America/Chicago
UMAMI_APP_SECRET=replace_with_very_long_random_string
UMAMI_DB_NAME=umami
UMAMI_DB_USER=umami
UMAMI_DB_PASSWORD=replace_with_strong_database_password
POSTGRES_DB=umami
POSTGRES_USER=umami
POSTGRES_PASSWORD=replace_with_strong_database_password
EOF


2) Create Docker Compose stack

This compose file keeps PostgreSQL on an internal network, adds health checks, and constrains restart policy to production-safe defaults. Pin image major versions intentionally so your updates are controlled and testable.

cat > docker-compose.yml << 'EOF'
services:
  postgres:
    image: postgres:16
    container_name: umami-postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      TZ: ${TZ}
    volumes:
      - ./db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 12
    networks:
      - internal

  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest  # pin a specific release tag once validated, per the guidance above
    container_name: umami-app
    restart: unless-stopped
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://${UMAMI_DB_USER}:${UMAMI_DB_PASSWORD}@postgres:5432/${UMAMI_DB_NAME}
      APP_SECRET: ${UMAMI_APP_SECRET}
      DATABASE_TYPE: postgresql
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.umami.rule=Host(`${DOMAIN}`)"
      - "traefik.http.routers.umami.entrypoints=websecure"
      - "traefik.http.routers.umami.tls.certresolver=letsencrypt"
      - "traefik.http.services.umami.loadbalancer.server.port=3000"
      # umami is attached to two networks; pin Traefik to the shared one
      # (Compose prefixes network names with the project folder, here umami-prod)
      - "traefik.docker.network=umami-prod_edge"
    networks:
      - internal
      - edge

  traefik:
    image: traefik:v3.1
    container_name: umami-traefik
    restart: unless-stopped
    command:
      - "--api.dashboard=false"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
      - "--certificatesresolvers.letsencrypt.acme.email=ops@yourdomain.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
      - "--accesslog=true"
      - "--log.level=INFO"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik:/letsencrypt
    networks:
      - edge

networks:
  internal:
    driver: bridge
    internal: true   # no external connectivity; PostgreSQL stays private
  edge:
    driver: bridge
EOF

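Before launching anything, let Compose parse the file and resolve the .env interpolation; syntax and variable errors surface here instead of at runtime. A guarded sketch:

```shell
# Validate YAML syntax and variable interpolation without starting containers.
set -eu
RESULT=skipped
if command -v docker >/dev/null 2>&1 && [ -f "$HOME/umami-prod/docker-compose.yml" ]; then
  (cd "$HOME/umami-prod" && docker compose config --quiet) && echo "compose file OK"
  RESULT=validated
else
  echo "run this on the deployment host after step 2"
fi
```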

3) Launch stack and run initial bootstrap checks

Start services, wait for health checks, and verify logs before inviting users. Early verification prevents hidden misconfiguration from showing up later as data gaps.

cd ~/umami-prod
docker compose up -d
docker compose ps
docker compose logs --tail=100 umami
docker compose logs --tail=100 traefik

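Beyond scanning logs, you can query the state Docker records for each container; the names below match the container_name values in the compose file:

```shell
# Query recorded container state; expect "healthy" and "running".
set -eu
STATE=skipped
if command -v docker >/dev/null 2>&1 && docker inspect umami-postgres >/dev/null 2>&1; then
  docker inspect --format '{{.State.Health.Status}}' umami-postgres
  docker inspect --format '{{.State.Status}}' umami-app
  STATE=checked
else
  echo "run this on the deployment host once the stack is up"
fi
```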

4) Create admin account and first website profile

Log into the Umami UI at your domain with the default credentials (username admin, password umami), change that password immediately, and set up your administrator account. Then register your first tracked site, copy the tracking script, and place it on a test page so you can validate end-to-end event ingestion.
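Umami also exposes a lightweight /api/heartbeat endpoint, which makes a handy end-to-end check through DNS, Traefik, and the app in one request (substitute your real domain):

```shell
# Expect the body "ok" once DNS, TLS, and the app are all working.
set -eu
DOMAIN=analytics.yourdomain.com   # replace with your domain
curl -fsS "https://${DOMAIN}/api/heartbeat" || echo "heartbeat not reachable yet"
echo
```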

5) Host hardening and access controls

Enable a default-deny firewall posture. Keep SSH restricted by keys and network policies. This prevents accidental service exposure as your server evolves.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose


Configuration and secret-handling best practices

A production analytics service quickly becomes business-critical, so credentials and data handling need structure from day one. Do not store secrets in Git repositories, screenshots, or chat threads. Keep environment files on the host only, and migrate to a formal secret manager as your platform matures.

  • Rotate app and database credentials at regular intervals and after staffing changes.
  • Keep different secrets per environment (dev, staging, prod), never shared values.
  • Use least-privilege accounts for database access and operational scripts.
  • Protect backups at rest and in transit; encryption should be default.
  • Document who can access dashboards containing sensitive product metrics.

Privacy and compliance expectations vary by industry, but the baseline is consistent: define retention windows, avoid collecting unnecessary personal data, and publish transparent tracking disclosures. If your legal team requires geo-specific controls, deploy region-aware consent handling before broad rollout.

Verification checklist

  • DNS resolves your analytics domain to the server IP
  • Valid TLS certificate is issued and auto-renewal logs are clean
  • Umami dashboard loads over HTTPS without mixed-content warnings
  • New page views appear in near real time from test traffic
  • Database backup job completes and can be restored in a test environment
  • Only ports 22, 80, and 443 are exposed externally
Quick spot checks from a workstation and from the host:

curl -I https://analytics.yourdomain.com
docker compose ps
docker compose logs --tail=80 traefik
docker compose logs --tail=80 umami

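To confirm issuance and keep an eye on renewal windows, you can read the served certificate's issuer and validity dates directly with openssl (replace the domain):

```shell
# Print issuer and validity window of the live certificate.
set -eu
DOMAIN=analytics.yourdomain.com   # replace with your domain
echo | openssl s_client -servername "$DOMAIN" -connect "${DOMAIN}:443" 2>/dev/null \
  | openssl x509 -noout -issuer -dates || echo "TLS endpoint not reachable from here"
```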

Backup and recovery runbook

Without tested restore paths, analytics data is still at risk. Treat backup automation and recovery rehearsal as a required part of the deployment, not optional polish. Store daily database dumps off-host and verify integrity regularly.

cat > ~/umami-prod/backup-postgres.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd ~/umami-prod
set -a; . ./.env; set +a
TS=$(date +%F-%H%M%S)
docker exec -e PGPASSWORD="$POSTGRES_PASSWORD" umami-postgres \
  pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" -Fc > "./backups/umami-${TS}.dump"
find ./backups -type f -name 'umami-*.dump' -mtime +14 -delete
EOF
chmod +x ~/umami-prod/backup-postgres.sh
( crontab -l 2>/dev/null; echo "15 2 * * * ~/umami-prod/backup-postgres.sh" ) | crontab -


When testing restore, spin up an isolated temporary PostgreSQL instance, import the dump, and validate key tables and row counts. Do this monthly so the process is proven before a real incident occurs.
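A minimal rehearsal along those lines, assuming at least one dump exists under ./backups (the container name and database name here are throwaway test values):

```shell
#!/usr/bin/env bash
# Restore the newest dump into a disposable PostgreSQL container, inspect, discard.
set -euo pipefail
DONE=skipped
if command -v docker >/dev/null 2>&1 && ls "$HOME"/umami-prod/backups/umami-*.dump >/dev/null 2>&1; then
  DUMP=$(ls -t "$HOME"/umami-prod/backups/umami-*.dump | head -n1)
  docker run -d --name umami-restore-test \
    -e POSTGRES_PASSWORD=throwaway -e POSTGRES_DB=umami_restore postgres:16
  sleep 10   # crude wait; poll pg_isready in a real script
  docker cp "$DUMP" umami-restore-test:/tmp/restore.dump
  docker exec umami-restore-test \
    pg_restore -U postgres -d umami_restore --no-owner /tmp/restore.dump
  docker exec umami-restore-test \
    psql -U postgres -d umami_restore -c '\dt'   # eyeball tables, then spot-check row counts
  docker rm -f umami-restore-test
  DONE=restored
else
  echo "run this on the deployment host once a backup exists"
fi
```

The --no-owner flag matters because the dump was taken as the umami role but the throwaway instance only has postgres.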

Common issues and fixes

Umami container restarts repeatedly

Most often this is a malformed DATABASE_URL or wrong database credentials. Verify the environment values and check PostgreSQL health before restarting the app.
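Two quick checks separate "database down" from "wrong URL" (note the second prints the credential, so run it in a private session):

```shell
# 1) Is PostgreSQL accepting connections?  2) What URL does the app actually see?
set -eu
CHECKED=no
if command -v docker >/dev/null 2>&1 && [ -f "$HOME/umami-prod/docker-compose.yml" ]; then
  cd "$HOME/umami-prod"
  docker compose exec postgres pg_isready
  docker compose exec umami printenv DATABASE_URL
  CHECKED=yes
else
  echo "run this on the deployment host"
fi
```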

TLS certificate does not issue

Confirm DNS propagation and verify that ports 80 and 443 are reachable publicly. If your host is behind another proxy, ensure HTTP challenge traffic can still reach Traefik.
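Traefik logs its ACME activity, and a plain HTTP request proves port 80 is reachable end to end (a redirect or 404 response is fine; what matters is that Traefik answered at all):

```shell
# Grep recent ACME/certificate log lines and probe port 80 reachability.
set -eu
PROBED=no
if command -v docker >/dev/null 2>&1 && [ -f "$HOME/umami-prod/docker-compose.yml" ]; then
  cd "$HOME/umami-prod"
  docker compose logs traefik 2>&1 | grep -iE 'acme|certificate' | tail -n 20 || true
  curl -sI "http://analytics.yourdomain.com/" | head -n 1 || true
  PROBED=yes
else
  echo "run this on the deployment host"
fi
```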

No page views after installing script

Check the tracking script domain mapping and browser console for blocked requests. Validate that CSP and ad-blocking rules are not preventing analytics requests.

Slow dashboard queries as traffic grows

Review host CPU and PostgreSQL I/O. Increase resources, tune PostgreSQL, and apply retention rules so historical event volume does not degrade interactive performance.
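Before tuning, measure: overall database size and the largest tables tell you whether retention or indexing is the real bottleneck. A sketch assuming the default umami user and database from the .env above:

```shell
# Report overall database size and the five largest tables.
set -eu
MEASURED=no
if command -v docker >/dev/null 2>&1 && docker inspect umami-postgres >/dev/null 2>&1; then
  docker exec umami-postgres psql -U umami -d umami \
    -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
  docker exec umami-postgres psql -U umami -d umami \
    -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
        FROM pg_statio_user_tables
        ORDER BY pg_total_relation_size(relid) DESC LIMIT 5;"
  MEASURED=yes
else
  echo "run this on the deployment host"
fi
```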

Unexpected data gaps

Inspect deployment changes, reverse-proxy logs, and frontend releases around the missing window. Most gaps come from script changes, blocked requests, or transient container restarts.

FAQ

Can Umami replace Google Analytics completely?

For many teams, yes. Umami covers core product and traffic insights with a privacy-first model. If you depend on advanced ad ecosystem integrations, run both temporarily and compare reporting before migration.

Should I run Umami and PostgreSQL on separate hosts?

For larger or regulated environments, separating workloads is common. Keep network policies strict and use encrypted links between services where possible.

How do I manage zero-downtime updates?

Use staged rollout patterns where feasible and always take a fresh database backup before upgrade. For most teams, a short maintenance window with tested rollback is safer than an unproven hot swap.

What retention window is practical for most SaaS teams?

Start with 12–18 months of raw event data, then review costs and query behavior quarterly. Keep aggregated KPI exports for longer strategic trend analysis.

How do we protect analytics access for contractors?

Create role-limited accounts, enforce least privilege, and remove dormant users quickly. Pair this with periodic access audits and policy sign-off.

Can I front Umami with Cloudflare while keeping Traefik?

Yes. Many teams do this for DDoS protection and caching at the edge. Keep SSL mode strict and verify client IP logging behavior for audit and troubleshooting accuracy.

Talk to us

If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.

Contact Us
