
Production Guide: Deploy Vaultwarden with Docker Compose + Caddy + PostgreSQL on Ubuntu

A production-focused Vaultwarden deployment with TLS, secret hygiene, backups, restore drills, and operational verification.

Password managers are one of the first services security teams ask to self-host when SaaS sprawl, compliance questions, or regional data residency requirements start to grow. In many organizations, credentials are still scattered across browsers, shared documents, and chat threads. That creates a fragile operating model: people lose access during incidents, emergency resets are slow, and offboarding often leaves unknown credential exposure behind. This guide shows a production-oriented way to deploy Vaultwarden on Ubuntu with Docker Compose, Caddy, and PostgreSQL, including backup and restore routines, secret-handling guardrails, and practical operations checks.

Architecture and flow overview

We use a layered architecture that keeps the deployment simple for a small team but still robust enough for production:

  • Vaultwarden container serves the application API and web vault.
  • PostgreSQL container stores encrypted vault metadata and operational records.
  • Caddy container terminates TLS, manages certificates automatically, and reverse-proxies to Vaultwarden.
  • Docker network segmentation keeps the database unreachable from the public internet.
  • Host-level hardening (firewall, least privilege filesystem ownership, and explicit backup retention) reduces common failure paths.

Request flow is straightforward: users connect to vault.example.com over HTTPS, Caddy terminates TLS and forwards requests to Vaultwarden, and Vaultwarden reads and writes encrypted data to PostgreSQL over the private Docker network. Operational flow adds two critical controls: scheduled backups and routine restore tests. A backup that has never been restore-tested is an assumption, not a guarantee, so this guide treats restore verification as a first-class task.

Prerequisites

  • Ubuntu 22.04/24.04 server with at least 2 vCPU, 4 GB RAM, and 40+ GB SSD.
  • A DNS A record for your domain (for example, vault.example.com) pointing to the server.
  • Ports 80 and 443 open on your cloud firewall/security group.
  • A non-root sudo user for administration.
  • Docker Engine + Docker Compose plugin installed.
  • A strong admin token generated offline and stored in a secure secrets manager.
Install the base packages, then Docker Engine with the Compose plugin:

sudo apt update
sudo apt install -y ca-certificates curl gnupg ufw
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
docker --version
docker compose version


Before proceeding, create and securely store secrets (do not keep them in shell history): a random PostgreSQL password, a random Vaultwarden admin token, and a long SMTP password if you will enable email invites/reset flows.
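One way to generate these values locally is with openssl; this is a sketch, and the variable names below simply mirror the .env keys used later in this guide.

```shell
# Generate random secrets locally and paste them into a secrets manager,
# not into tickets or shared documents.
DB_PASS=$(openssl rand -base64 32)     # PostgreSQL password
ADMIN_TOKEN=$(openssl rand -base64 48) # Vaultwarden admin token
echo "POSTGRES_PASSWORD=$DB_PASS"
echo "VW_ADMIN_TOKEN=$ADMIN_TOKEN"
```

Vaultwarden can also accept an Argon2 PHC hash as the admin token instead of a plaintext value, which avoids storing the raw token on disk at all; check the Vaultwarden documentation for the exact procedure before relying on it.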

Step-by-step deployment

Create a predictable directory layout so upgrades, backups, and incident response are easier. Keep environment files private, and avoid world-readable permissions for anything containing credentials.

sudo mkdir -p /opt/vaultwarden/{caddy/data,caddy/config,data,postgres,backups}
sudo chown -R $USER:$USER /opt/vaultwarden
cd /opt/vaultwarden
touch .env
chmod 600 .env


Populate /opt/vaultwarden/.env with hardened defaults. Replace placeholders with your actual values:

DOMAIN=https://vault.example.com
VW_ADMIN_TOKEN=replace_with_long_random_token
VW_SIGNUPS_ALLOWED=false
VW_WEBSOCKET_ENABLED=true
VW_INVITATIONS_ALLOWED=true
VW_SMTP_HOST=smtp.mailprovider.com
[email protected]
VW_SMTP_PORT=587
VW_SMTP_SECURITY=starttls
[email protected]
VW_SMTP_PASSWORD=replace_with_smtp_secret
POSTGRES_DB=vaultwarden
POSTGRES_USER=vaultwarden
POSTGRES_PASSWORD=replace_with_strong_db_password
TZ=UTC


Now define the Docker Compose stack. This configuration keeps PostgreSQL on the private network only, uses health checks, and persists all critical data to host volumes.

services:
  db:
    image: postgres:16-alpine
    container_name: vw-db
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /opt/vaultwarden/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks:
      - private

  vaultwarden:
    # Consider pinning a specific release tag for reproducible upgrades.
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      DOMAIN: ${DOMAIN}
      ADMIN_TOKEN: ${VW_ADMIN_TOKEN}
      SIGNUPS_ALLOWED: ${VW_SIGNUPS_ALLOWED}
      WEBSOCKET_ENABLED: ${VW_WEBSOCKET_ENABLED}
      INVITATIONS_ALLOWED: ${VW_INVITATIONS_ALLOWED}
      SMTP_HOST: ${VW_SMTP_HOST}
      SMTP_FROM: ${VW_SMTP_FROM}
      SMTP_PORT: ${VW_SMTP_PORT}
      SMTP_SECURITY: ${VW_SMTP_SECURITY}
      SMTP_USERNAME: ${VW_SMTP_USERNAME}
      SMTP_PASSWORD: ${VW_SMTP_PASSWORD}
      TZ: ${TZ}
    volumes:
      - /opt/vaultwarden/data:/data
    depends_on:
      db:
        condition: service_healthy
    networks:
      - private

  caddy:
    image: caddy:2
    container_name: vw-caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /opt/vaultwarden/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /opt/vaultwarden/caddy/data:/data
      - /opt/vaultwarden/caddy/config:/config
    depends_on:
      - vaultwarden
    networks:
      - private

networks:
  private:
    driver: bridge


Create the Caddyfile. Keep this minimal until the service is stable, then add security headers and optional request limits if your threat model requires it.

vault.example.com {
  encode zstd gzip

  reverse_proxy vaultwarden:80 {
    header_up X-Forwarded-Proto {scheme}
    header_up X-Forwarded-For {remote_host}
  }

  header {
    Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "DENY"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}


Launch and validate container health. Do not proceed to user onboarding until all health checks are passing and TLS is active.

cd /opt/vaultwarden
docker compose up -d
docker compose ps
docker compose logs -f --tail=100 caddy vaultwarden db

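For the runbook, it helps to turn a health probe into an explicit pass/fail check. Vaultwarden exposes a lightweight /alive endpoint; the helper function below is our own naming, and it takes the HTTP status code as an argument so the logic can be exercised without a live server.

```shell
# check_alive: interpret the HTTP status code from a health probe.
check_alive() {
  if [ "$1" = "200" ]; then
    echo "vaultwarden: alive"
  else
    echo "vaultwarden: unhealthy (HTTP $1)"
    return 1
  fi
}

# Usage against the live service (assumes DNS and TLS already work):
#   check_alive "$(curl -s -o /dev/null -w '%{http_code}' https://vault.example.com/alive)"
check_alive 200
```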

At this point, sign in to the admin portal (https://vault.example.com/admin) using the admin token and configure organization policy: disable public signups, enforce two-step login policy, and require owner approval for sensitive vault sharing. If your team uses SSO externally, document the trust boundary clearly because Vaultwarden itself may still hold direct credentials for break-glass and service accounts.

Configuration and secret-handling best practices

Production quality comes from operational discipline more than from container count. Start by minimizing secret exposure. Keep .env file permissions at 600, avoid copying secrets into tickets, and rotate database/admin tokens on a predictable cadence. When possible, feed secrets from a dedicated secret manager during CI/CD instead of static files. If static files are required, hash and monitor them with file-integrity tooling so unauthorized changes are caught early.
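The file-integrity idea above can be sketched with nothing more than sha256sum; the function names and the notify hook mentioned in the comments are our own placeholders, and ENVFILE defaults to this guide's layout.

```shell
# Record a baseline checksum of the secrets file, then alert on drift.
ENVFILE="${ENVFILE:-/opt/vaultwarden/.env}"
BASELINE="$ENVFILE.sha256"

# Run once after each approved change to the env file.
baseline_env() { sha256sum "$ENVFILE" > "$BASELINE"; }

# Run from cron; a non-zero exit means the file changed unexpectedly.
check_env() {
  if sha256sum -c "$BASELINE" --status; then
    echo "env file unchanged"
  else
    echo "ALERT: $ENVFILE modified since baseline" >&2
    return 1
  fi
}

# Usage sketch:
#   baseline_env
#   check_env || page-the-oncall   # alerting hook is a placeholder
```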

For SMTP, use a dedicated mailbox with strict sending limits and no interactive login reuse. For backup encryption, use an independent key from your app credentials. A single credential set should not unlock both production and backup archives. Enable host firewall defaults-deny, allowing only SSH (restricted source ranges if possible), HTTP, and HTTPS. If your environment supports private networking, move SSH behind VPN and remove direct public exposure.
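The firewall baseline described above can be scripted with ufw. This sketch prints the commands by default (DRY_RUN=1) so the rule set can be reviewed before running it with real privileges; the SSH source range is a documentation-range placeholder you must replace.

```shell
#!/usr/bin/env bash
# Firewall baseline: default-deny inbound, allow SSH/HTTP/HTTPS only.
set -euo pipefail

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"    # review mode: print instead of executing
  else
    sudo "$@"
  fi
}

run ufw default deny incoming
run ufw default allow outgoing
run ufw allow from 203.0.113.0/24 to any port 22 proto tcp  # placeholder SSH source range
run ufw allow 80/tcp
run ufw allow 443/tcp
run ufw --force enable
```

Run it once with DRY_RUN=1, inspect the printed rules, then rerun with DRY_RUN=0.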

Define retention policy by business requirement, not by disk convenience. A practical starting point is daily encrypted backups retained for 14 days, weekly snapshots retained for 8 weeks, and monthly snapshots retained for 6–12 months depending on audit needs. Write this policy into your runbook and test at least one restore every month.

Production operations playbook

Create a backup script that captures both PostgreSQL and Vaultwarden data directory, compresses it, encrypts it, and prunes old archives. Keep backup storage off-host if possible (object storage, NFS appliance, or secure secondary host).

#!/usr/bin/env bash
set -euo pipefail
cd /opt/vaultwarden

# Load POSTGRES_USER/POSTGRES_DB from the stack's env file (kept at mode 600);
# without this, pg_dump below has no credentials.
set -a
. ./.env
set +a

STAMP=$(date +%F-%H%M)
OUT=/opt/vaultwarden/backups/vw-$STAMP
mkdir -p "$OUT"

docker compose exec -T db pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" > "$OUT/db.sql"
tar -czf "$OUT/data.tar.gz" -C /opt/vaultwarden data

# optional: encrypt with age or gpg before offsite sync
# age -r <age-recipient-public-key> -o "$OUT/data.tar.gz.age" "$OUT/data.tar.gz"

# -mindepth 1 keeps find from ever matching the backups directory itself.
find /opt/vaultwarden/backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
echo "Backup completed: $OUT"

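Assuming the script above is saved as /opt/vaultwarden/backup.sh (a path we are choosing here), a cron.d entry can schedule it nightly; the time and log path are likewise assumptions to adjust.

```shell
# /etc/cron.d/vaultwarden-backup (config fragment)
# Run the backup nightly at 02:15 as root, logging output for audit.
15 2 * * * root /opt/vaultwarden/backup.sh >> /var/log/vaultwarden-backup.log 2>&1
```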

Add a restore drill command sequence to your runbook and run it in a staging clone at least monthly:

# Example restore drill (staging host)
cd /opt/vaultwarden
set -a; . ./.env; set +a   # load POSTGRES_USER/POSTGRES_DB for the psql step
docker compose down
rm -rf /opt/vaultwarden/postgres/* /opt/vaultwarden/data/*
# restore db.sql and data.tar.gz from the backup package
docker compose up -d --wait db
docker compose exec -T db psql -U "$POSTGRES_USER" "$POSTGRES_DB" < /path/to/db.sql
tar -xzf /path/to/data.tar.gz -C /opt/vaultwarden
docker compose up -d

docker compose ps
curl -I https://vault.example.com

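To produce the measured RTO your runbook should record, wrap the drill in simple timestamps; the `sleep 2` below stands in for the drill commands.

```shell
# Measure restore-drill duration (RTO) and log it for the runbook.
start=$(date +%s)
sleep 2                     # placeholder for the restore drill steps
end=$(date +%s)
rto=$((end - start))
echo "restore drill RTO: ${rto}s"
```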

Verification checklist

  • HTTPS certificate is valid and auto-renewing (verify with openssl s_client or browser inspection).
  • docker compose ps shows all services healthy and restart policy set to unless-stopped.
  • Public ports expose only 80/443; PostgreSQL is not reachable externally.
  • Admin portal is reachable only to authorized operators, and admin token is not stored in shell history.
  • SMTP test email succeeds for invite/reset workflow.
  • Backup job runs on schedule and latest archive is present.
  • Restore drill completes in staging and is documented with measured RTO/RPO.
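The port-exposure item in the checklist can be automated with a small filter over `ss -tln` output; the function name is our own, and 5432 is PostgreSQL's default port. Reading from stdin keeps the logic testable offline.

```shell
# Fail if PostgreSQL's port appears bound on a public interface.
# Reads `ss -tln`-style lines on stdin.
db_not_exposed() {
  ! grep -Eq '(0\.0\.0\.0|\[::\]|\*):5432([[:space:]]|$)'
}

# Usage on the host:
#   ss -tln | db_not_exposed && echo "OK: 5432 not public" || echo "FAIL: 5432 exposed"
```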

Common issues and fixes

1) Caddy gets certificate errors

Most failures come from DNS mismatch or blocked port 80 during ACME challenge. Confirm the A record resolves correctly and inbound 80/443 are allowed. Then restart Caddy and recheck logs.

2) Vaultwarden cannot connect to PostgreSQL

Usually caused by wrong DATABASE_URL, stale credentials, or missing health dependency. Validate credentials in .env, confirm database service health, and test with psql from inside the app container.

3) WebSocket or sync feels unreliable

Ensure proxy headers are correct and no intermediate gateway is stripping upgrade headers. Caddy generally handles this well with default reverse proxy behavior, but upstream L7 devices can interfere.

4) Backup exists but restore fails

Common root causes are schema mismatch (image drift) or incomplete data extraction. Pin image tags during backup/restore drills and keep a tested recovery script with explicit sequence, not ad-hoc commands.

5) Team accidentally enables open signups

Set SIGNUPS_ALLOWED=false and monitor config changes. Consider adding a daily compliance check script that alerts when critical flags drift from baseline.
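A daily drift check can be as simple as grepping the env file for the expected values of critical flags; the keys below mirror this guide's .env and are not an exhaustive baseline.

```shell
# Alert when a critical flag drifts from its hardened baseline value.
ENVFILE="${ENVFILE:-/opt/vaultwarden/.env}"

check_flag() {  # check_flag KEY EXPECTED
  if grep -q "^$1=$2$" "$ENVFILE"; then
    echo "ok: $1=$2"
  else
    echo "DRIFT: $1 != $2 in $ENVFILE" >&2
    return 1
  fi
}

# Usage (e.g. from cron, alerting on non-zero exit):
#   check_flag VW_SIGNUPS_ALLOWED false && check_flag VW_INVITATIONS_ALLOWED true
```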

FAQ

Should we use SQLite for a small team?

For proofs of concept, SQLite can work, but production reliability and backup consistency improve with PostgreSQL, especially as users and organizations grow.

How often should we rotate the admin token?

At minimum quarterly, and immediately after role changes or suspected exposure. Also rotate after any support session where secrets might have been visible.

Can we run Vaultwarden behind Cloudflare or another CDN?

Yes, but keep origin locked down, preserve real client IP headers, and verify WebSocket behavior. Test sync under realistic client conditions before announcing production readiness.

What is a practical backup frequency?

Daily backups are baseline for most teams, with additional snapshots before upgrades. High-change environments may require more frequent schedules based on tolerance for data loss.

How do we handle break-glass access safely?

Use sealed runbook procedures, dual-approval where possible, and audit every break-glass event. Credentials used for emergency access should be rotated immediately after use.

Do we need separate environments for staging and production?

Yes. Even a lightweight staging stack pays off by enabling upgrade validation, restore drills, and policy checks without risking production availability.


Talk to us

If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.

Contact Us
