
Production Guide: Deploy Plausible with Docker Compose + Caddy + PostgreSQL + ClickHouse on Ubuntu

A production-first, operator-focused guide for self-hosting privacy-friendly analytics with secure defaults, observability, and disaster-recovery runbooks.

If you run product, growth, or engineering teams, web analytics usually becomes a trade-off between insight depth, privacy requirements, and vendor lock-in. Many teams start with hosted analytics, then discover costs rising with traffic, limited raw control, and compliance concerns across regions. That is where Plausible fits: lightweight, privacy-first analytics you can operate on your own infrastructure.

This guide shows a production deployment pattern using Docker Compose + Caddy + PostgreSQL + ClickHouse on Ubuntu. The stack gives you practical reliability without Kubernetes overhead: Caddy handles TLS and reverse proxy; PostgreSQL stores transactional app data; ClickHouse handles analytics events at scale. We will use opinionated defaults that reduce operational surprises in real environments.

The target audience is platform engineers, DevOps teams, and technical founders who need a secure, maintainable setup that supports routine upgrades, backups, incident triage, and predictable scaling paths. The examples are copy-paste ready, but every section explains why each setting exists so you can adapt safely to your environment.

Architecture and flow overview

At a high level, browsers send analytics events to Plausible endpoints over HTTPS. Caddy terminates TLS and routes traffic to the Plausible web service inside a private Docker network. Plausible writes metadata and app state to PostgreSQL while event-heavy datasets go to ClickHouse for efficient aggregation and reporting. This split avoids overloading PostgreSQL with high-cardinality event queries and keeps dashboards responsive as traffic grows.

From an operations perspective, this architecture is straightforward: one host can run the full stack for small-to-medium workloads, and each stateful component can later move to managed or external services. The migration path remains clean because service boundaries are explicit from day one.

Internet Users
   |
HTTPS (443)
   v
[Caddy Reverse Proxy]
   |
   +--> [Plausible Web/App Container]
            |                 |
            |                 +--> [ClickHouse]  (events, aggregations)
            +--------------------> [PostgreSQL]  (accounts, sites, settings)


Prerequisites

  • Ubuntu 22.04/24.04 server with sudo access (minimum 2 vCPU, 4 GB RAM to start).
  • A DNS record (e.g., analytics.example.com) pointing to your server.
  • Open ports 80/443 on your firewall/security group.
  • Docker Engine + Docker Compose plugin installed.
  • SMTP credentials for Plausible's transactional email (invites, password resets).

Before deployment, run system updates and verify clock sync (NTP). TLS issuance, logs, and event ordering all depend on accurate host time.

sudo apt update && sudo apt -y upgrade
sudo timedatectl set-ntp true
sudo timedatectl status

docker --version
docker compose version


Step-by-step deployment

1) Prepare project directories and secrets

Use a dedicated directory with strict file permissions. Keep secrets in .env and never hardcode them in compose files or shell history. Generate long random values for all secret keys and DB passwords.

sudo mkdir -p /opt/plausible/{caddy,data,backups}
sudo chown -R $USER:$USER /opt/plausible
cd /opt/plausible

umask 077
cat > .env <<'EOF'
BASE_DOMAIN=analytics.example.com
ADMIN_USER_EMAIL=admin@example.com

POSTGRES_DB=plausible
POSTGRES_USER=plausible
POSTGRES_PASSWORD=REPLACE_WITH_LONG_RANDOM

CLICKHOUSE_DB=plausible
CLICKHOUSE_USER=plausible
CLICKHOUSE_PASSWORD=REPLACE_WITH_LONG_RANDOM

SECRET_KEY_BASE=REPLACE_WITH_64_PLUS_RANDOM_CHARS
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=smtp-user
SMTP_PASSWORD=REPLACE_WITH_SMTP_SECRET
SMTP_FROM="Plausible <plausible@example.com>"
EOF
chmod 600 .env

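The REPLACE_WITH placeholders above can be generated with openssl, which ships with Ubuntu. A minimal sketch (the variable names simply mirror the .env keys; paste the printed values into .env rather than exporting them):

```shell
# 48 random bytes -> 64 base64 characters, comfortably long for SECRET_KEY_BASE
SECRET_KEY_BASE=$(openssl rand -base64 48)

# Hex output avoids characters that need quoting in .env or connection URLs
POSTGRES_PASSWORD=$(openssl rand -hex 24)
CLICKHOUSE_PASSWORD=$(openssl rand -hex 24)

echo "$SECRET_KEY_BASE"
echo "$POSTGRES_PASSWORD"
echo "$CLICKHOUSE_PASSWORD"
```

Hex passwords are deliberate: they never contain `/`, `@`, or `:` characters, which would otherwise need URL-encoding inside DATABASE_URL.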

2) Create Docker Compose definition

The Compose file below fixes service and volume names so inter-service DNS on the project's default network stays stable. Health checks and restart policies are included for production resilience.

cat > docker-compose.yml <<'EOF'
services:
  caddy:
    image: caddy:2.8
    container_name: plausible_caddy
    environment:
      BASE_DOMAIN: ${BASE_DOMAIN}   # consumed by the {$BASE_DOMAIN} placeholder in the Caddyfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - plausible
    restart: unless-stopped

  postgres:
    image: postgres:16
    container_name: plausible_postgres
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    restart: unless-stopped

  clickhouse:
    image: clickhouse/clickhouse-server:24.8
    container_name: plausible_clickhouse
    env_file: .env
    environment:
      CLICKHOUSE_DB: ${CLICKHOUSE_DB}
      CLICKHOUSE_USER: ${CLICKHOUSE_USER}
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - clickhouse_data:/var/lib/clickhouse
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://127.0.0.1:8123/ping || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 10
    restart: unless-stopped

  plausible:
    image: plausible/analytics:v2.1
    container_name: plausible_app
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      clickhouse:
        condition: service_healthy
    environment:
      BASE_URL: https://${BASE_DOMAIN}
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      CLICKHOUSE_DATABASE_URL: http://${CLICKHOUSE_USER}:${CLICKHOUSE_PASSWORD}@clickhouse:8123/${CLICKHOUSE_DB}
      MAILER_EMAIL: ${SMTP_FROM}
      SMTP_HOST_ADDR: ${SMTP_HOST}
      SMTP_HOST_PORT: ${SMTP_PORT}
      SMTP_USER_NAME: ${SMTP_USER}
      SMTP_USER_PWD: ${SMTP_PASSWORD}
      DISABLE_REGISTRATION: "invite_only"
    restart: unless-stopped

volumes:
  postgres_data:
  clickhouse_data:
  caddy_data:
  caddy_config:
EOF


3) Configure Caddy

Caddy keeps TLS and reverse-proxy config minimal while still supporting hardened headers and compression. It is a practical default for small teams without dedicated edge infrastructure.

cat > caddy/Caddyfile <<'EOF'
{$BASE_DOMAIN} {
  encode zstd gzip

  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "SAMEORIGIN"
    Referrer-Policy "strict-origin-when-cross-origin"
  }

  reverse_proxy plausible:8000
}
EOF


4) Boot and initialize services

Bring up stateful services first, then the app. Watch logs until migrations complete and health checks settle. Do not continue until this is stable.

set -a; . ./.env; set +a   # safer than 'export $(grep ... | xargs)': values containing spaces survive
docker compose up -d postgres clickhouse
docker compose up -d plausible caddy

docker compose ps
docker compose logs --tail=100 plausible

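First boot runs database migrations, which can take a while. A small retry helper keeps provisioning scripts from racing ahead (a sketch: `wait_for` is our own naming, and the commented example assumes the app image ships a shell with wget and that `/api/health` is reachable from where you run it):

```shell
# wait_for <attempts> <cmd...>: retry a command until it succeeds,
# sleeping WAIT_SLEEP seconds (default 5) between attempts
wait_for() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then return 0; fi
    sleep "${WAIT_SLEEP:-5}"
    i=$((i + 1))
  done
  return 1
}

# Example: block until Plausible answers its health endpoint inside the container
# wait_for 30 docker compose exec -T plausible wget -q --spider http://127.0.0.1:8000/api/health
```

Used in a deploy script, a non-zero exit from `wait_for` is your cue to dump logs and abort rather than proceed to Caddy cutover.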

5) Create first admin and lock registration

After first access, create your admin account and verify invite flows. Keep open registration disabled for production unless you intentionally operate a public multi-tenant service.

Configuration and secrets handling best practices

Store .env in a restricted path, and back it up in an encrypted vault (not plain Git). Rotate database and SMTP secrets on a schedule, and immediately after staff changes. If you use external secret managers, template environment variables at deployment time instead of persisting raw secrets on disk.

For host hardening, apply unattended security updates, disable password SSH auth, and enforce key-based login with short-lived access where possible. Also set log retention policies: enough for incident response, not so large that disks fill silently.

  • Use ufw or cloud firewall rules to expose only 22/80/443.
  • Restrict Postgres/ClickHouse ports to internal Docker network only.
  • Schedule database backups with test restores, not backup jobs alone.
  • Pin major image versions in production and upgrade deliberately.
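The firewall bullet above can be sketched with ufw; skip this if you manage rules at the cloud security-group layer instead:

```shell
# Allow only SSH and web traffic inbound, deny everything else
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
```

Note that Docker publishes ports via iptables directly, bypassing ufw. This stack only publishes 80/443, so nothing extra leaks, but keep that in mind before adding `ports:` entries to the database services.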
# Example: simple encrypted backup flow
cd /opt/plausible
mkdir -p backups

# PostgreSQL logical backup
TS=$(date +%F-%H%M)
docker exec plausible_postgres pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip > backups/postgres-$TS.sql.gz

# ClickHouse backup (requires a 'backups' disk declared in the server's storage configuration)
docker exec plausible_clickhouse clickhouse-client --query "BACKUP DATABASE plausible TO Disk('backups', 'ch-$TS.zip')"

# Encrypt artifacts before off-host transfer (age example); delete the plaintext
# copy once the encrypted artifact is verified
age -r YOUR_PUBLIC_KEY backups/postgres-$TS.sql.gz > backups/postgres-$TS.sql.gz.age

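To run that flow on a schedule, wrap it in a script and register a cron entry. A sketch under assumptions (the `bin/` path and 14-day local retention are our choices; adapt to your vault and off-host transfer):

```shell
mkdir -p bin backups

# bin/backup.sh: nightly PostgreSQL logical backup with 14-day local retention
cat > bin/backup.sh <<'EOF'
#!/bin/sh
set -eu
cd /opt/plausible
set -a; . ./.env; set +a

TS=$(date +%F-%H%M)
docker exec plausible_postgres pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" \
  | gzip > "backups/postgres-$TS.sql.gz"

# Local copies are a convenience; off-host encrypted copies are the real archive
find backups -name 'postgres-*.sql.gz' -mtime +14 -delete
EOF
chmod 700 bin/backup.sh

# Register the schedule (03:15 nightly):
#   ( crontab -l 2>/dev/null; echo '15 3 * * * /opt/plausible/bin/backup.sh' ) | crontab -
```

Pair the cron job with an alert on backup-file age, so a silently failing job surfaces before you need a restore.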

Verification checklist

Verification should test functionality and operations, not only HTTP 200. Confirm dashboards render, events ingest, SMTP sends correctly, and restart behavior is predictable after host reboot.

  1. Open https://analytics.example.com and log in.
  2. Add a test domain and include the Plausible tracking snippet.
  3. Generate test traffic and verify it appears in near real time.
  4. Restart containers and confirm service recovery order.
  5. Run a backup then test restore in a staging environment.
# Liveness checks
curl -I https://analytics.example.com

docker compose ps
docker compose logs --tail=50 plausible

docker exec plausible_postgres psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "select now();"
docker exec plausible_clickhouse clickhouse-client --query "SELECT now()"

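For step 3 of the checklist, you can also push a synthetic pageview straight at `/api/event`, Plausible's public events endpoint. A sketch (the site domain and test URL below are assumptions; the site must already be registered in Plausible):

```shell
# build_event_json <site-domain> <page-url>: minimal /api/event payload
build_event_json() {
  printf '{"name":"pageview","url":"%s","domain":"%s"}' "$2" "$1"
}

# Send one synthetic pageview.
# Run as: BASE_DOMAIN=analytics.example.com send_test_event example.com https://example.com/test
send_test_event() {
  curl -fsS -X POST "https://${BASE_DOMAIN}/api/event" \
    -H 'User-Agent: ops-verification/1.0' \
    -H 'Content-Type: application/json' \
    -d "$(build_event_json "$1" "$2")"
}
```

A 202 response confirms ingestion end to end: DNS, TLS, Caddy routing, and the app's write path to ClickHouse.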

Common issues and fixes

Issue: TLS certificate is not issued

Symptoms: Browser warns about insecure cert, Caddy logs ACME failures.
Fix: Confirm DNS A record points to the correct host, ports 80/443 are open, and no other service is binding those ports. Retry after DNS propagation.

Issue: Plausible boots but dashboards are empty

Symptoms: UI loads, but no events populate.
Fix: Check site domain configuration and tracking snippet placement. Verify ad-blocking/network policies are not stripping analytics requests in your test environment.

Issue: Database connection resets during peak traffic

Symptoms: Intermittent 5xx errors, connection timeout logs.
Fix: Increase Postgres shared buffers and max connections based on host capacity; validate ClickHouse disk latency and available IOPS.
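One low-friction way to apply those Postgres settings is a Compose override that passes server flags, leaving the base file untouched. The values below are illustrative placeholders, not sized recommendations; tune them to your host:

```shell
# docker-compose.override.yml is merged automatically by 'docker compose'
cat > docker-compose.override.yml <<'EOF'
services:
  postgres:
    # Illustrative values; a common starting point is shared_buffers ~25% of host RAM
    command: postgres -c shared_buffers=1GB -c max_connections=200 -c effective_cache_size=2GB
EOF

# Recreate the container with the new flags:
#   docker compose up -d postgres
```

Because overrides merge rather than replace, the healthcheck, volumes, and env from the base file stay in effect.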

FAQ

Can I run Plausible without ClickHouse?

No. Self-hosted Plausible requires ClickHouse as its event store; there is no PostgreSQL-only mode. Plan capacity for it from day one, since dashboard performance and retention behavior depend on it.

Should I use managed PostgreSQL instead of local containerized PostgreSQL?

Yes, if your team already has managed database operations. Keep connection pooling and network ACLs tight; the rest of this guide remains mostly the same.

How much traffic can this single-host setup handle?

Capacity depends on event volume, query patterns, and retention windows. Start with observability baselines (CPU, disk IOPS, query latency), then scale vertically or split stateful services as needed.

What is the safest upgrade strategy?

Pin major versions, create backups, deploy in staging first, then perform rolling production updates during a low-traffic window with rollback images pre-pulled.

How do I make this deployment more secure?

Use least-privilege credentials, secret rotation, host patching, strict firewall rules, SSH hardening, and centralized log monitoring with alert thresholds for anomalies.

Can I put Cloudflare or another CDN in front of Caddy?

Yes. Preserve origin TLS, forward real client IP headers correctly, and validate caching rules so analytics/event endpoints are never cached unexpectedly.

How do I recover from accidental data loss?

Use tested restore runbooks: restore PostgreSQL metadata, restore ClickHouse event datasets, then validate dashboard consistency against known checkpoints before reopening access.
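The PostgreSQL half of that runbook can be sketched as a script matching the backup flow above (the `bin/` path is our convention; run it with the app in maintenance or stopped so writes do not race the restore):

```shell
mkdir -p bin

cat > bin/restore-postgres.sh <<'EOF'
#!/bin/sh
# Usage: restore-postgres.sh <timestamp>, e.g. 2025-01-31-0315
set -eu
cd /opt/plausible
set -a; . ./.env; set +a
TS=$1

# Decrypt first if only the .age artifact exists:
#   age -d -i key.txt "backups/postgres-$TS.sql.gz.age" > "backups/postgres-$TS.sql.gz"

# Stream the dump into the running postgres container
gunzip -c "backups/postgres-$TS.sql.gz" \
  | docker exec -i plausible_postgres psql -U "$POSTGRES_USER" -d "$POSTGRES_DB"
EOF
chmod 700 bin/restore-postgres.sh
```

Rehearse this in staging on a schedule; a restore script that has never been run is part of the incident, not the fix.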
