Most teams adopt a headless CMS after they hit the same bottleneck: engineering is overloaded with simple content requests, while marketing and operations need faster publishing without waiting for application releases. Directus is a strong fit in this stage because it sits cleanly on top of SQL, gives your team a modern admin UI, and exposes API-first access (REST and GraphQL) for websites, apps, portals, and internal tools. This guide shows how to deploy Directus in production on Ubuntu using Docker Compose, PostgreSQL, and Caddy with automatic TLS.
The goal is not only to get a demo running, but to build an environment your team can operate safely: structured secrets handling, constrained networking, health checks, backups, update strategy, and practical troubleshooting. By the end, you will have a production-ready baseline that you can extend with SSO, object storage, and observability.
Architecture and flow overview
This deployment uses three containers on a dedicated Ubuntu host:
- directus: the application/API layer and admin interface
- postgres: primary metadata/content database
- caddy: reverse proxy, automatic HTTPS certificates, and HTTP-to-HTTPS redirects
Traffic flow is straightforward: users reach https://cms.yourdomain.com, Caddy terminates TLS and forwards requests to Directus on the private Docker network. Directus talks to PostgreSQL over the same private network, and the database is not exposed to the public internet. Backups are taken from PostgreSQL and stored off-host. Operationally, this gives a clear separation of concerns and keeps your threat surface tight.
For production, the most important principles are: isolate services on an internal network, pin image tags intentionally, avoid plaintext secrets in committed files, and verify every deploy with a short smoke test before you hand over to editors.
Prerequisites
- Ubuntu 22.04/24.04 server with at least 2 vCPU, 4 GB RAM, and 30+ GB disk
- A domain/subdomain (example: cms.yourdomain.com) pointed to your server IP
- Open ports 80 and 443 in your firewall/security group
- Docker Engine + Docker Compose plugin installed
- Basic Linux admin access (sudo + SSH)
sudo apt update && sudo apt -y upgrade
sudo apt -y install ca-certificates curl gnupg ufw jq
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
Log out and back in (or run newgrp docker) so the group change takes effect before running docker without sudo.
Step-by-step deployment
1) Create project directories and environment file
We keep runtime state in explicit folders so backup and migration are predictable.
mkdir -p ~/directus-prod/{caddy,data,db}
cd ~/directus-prod
touch .env
chmod 600 .env
Edit .env with strong secrets (32+ chars). Do not reuse app passwords anywhere else.
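One way to produce those secrets is openssl, which ships with Ubuntu; any CSPRNG-backed generator works equally well. This prints candidate values you can paste into .env:

```shell
# Generate long random values for the secrets in .env.
# hex output avoids shell-quoting surprises in env files.
echo "DIRECTUS_KEY=$(openssl rand -hex 32)"
echo "DIRECTUS_SECRET=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -hex 24)"
```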
cat > .env << 'EOF'
DOMAIN=cms.yourdomain.com
TZ=America/Chicago
DIRECTUS_KEY=replace_with_long_random_key
DIRECTUS_SECRET=replace_with_long_random_secret
[email protected]
ADMIN_PASSWORD=replace_with_very_strong_password
POSTGRES_DB=directus
POSTGRES_USER=directus
POSTGRES_PASSWORD=replace_with_long_random_db_password
EOF
2) Write Docker Compose file
This compose stack pins major versions and configures health checks so failures surface early. Keep the database on the internal network only.
cat > docker-compose.yml << 'EOF'
services:
  postgres:
    image: postgres:16
    container_name: directus-postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      TZ: ${TZ}
    volumes:
      - ./db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks:
      - internal

  directus:
    image: directus/directus:10.13.1
    container_name: directus-app
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    env_file: .env
    environment:
      KEY: ${DIRECTUS_KEY}
      SECRET: ${DIRECTUS_SECRET}
      ADMIN_EMAIL: ${ADMIN_EMAIL}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      DB_CLIENT: pg
      DB_HOST: postgres
      DB_PORT: 5432
      DB_DATABASE: ${POSTGRES_DB}
      DB_USER: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      WEBSOCKETS_ENABLED: "true"
      CORS_ENABLED: "true"
      CORS_ORIGIN: "https://${DOMAIN}"
      PUBLIC_URL: "https://${DOMAIN}"
      TZ: ${TZ}
    volumes:
      - ./data:/directus/uploads
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8055/server/health"]
      interval: 15s
      timeout: 5s
      retries: 10
    networks:
      - internal

  caddy:
    image: caddy:2.8
    container_name: directus-caddy
    restart: unless-stopped
    depends_on:
      directus:
        condition: service_healthy
    env_file: .env  # Caddy needs DOMAIN for the {$DOMAIN} placeholder in the Caddyfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy/data:/data
      - ./caddy/config:/config
    networks:
      - internal

networks:
  internal:
    driver: bridge
EOF
3) Configure Caddy reverse proxy
Caddy gives you managed certificates and clean defaults for modern TLS without a separate ACME setup.
cat > caddy/Caddyfile << 'EOF'
{$DOMAIN} {
    encode zstd gzip

    reverse_proxy directus:8055

    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
    }

    log {
        output stdout
        format console
    }
}
EOF
4) Launch and verify initial bootstrap
Start the stack and check health before inviting editors. Docker Compose reads .env automatically for variable substitution, so there is no need to export it into your shell. You should see all services marked healthy.
docker compose up -d
docker compose ps
docker compose logs --tail=100 directus
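The verification above is worth wrapping in a repeatable smoke test you run after every deploy. A minimal sketch, assuming the domain from this guide (adjust DOMAIN to your value):

```shell
# Write a small post-deploy smoke test and validate its syntax.
cat > smoke-test.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
DOMAIN="${DOMAIN:-cms.yourdomain.com}"

# 1) All containers should be up and healthy
docker compose ps

# 2) Health endpoint answers over HTTPS (-f fails on HTTP errors)
curl -fsS "https://${DOMAIN}/server/health"

# 3) Admin UI reachable
curl -fsSI "https://${DOMAIN}/admin" | head -n 1
EOF
chmod +x smoke-test.sh
bash -n smoke-test.sh && echo "smoke-test.sh syntax OK"
```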
5) Harden host access and updates
At minimum, enforce SSH-only administration, allow web ports, and deny everything else. Keep host and containers patched on a regular maintenance window.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
6) Add backup automation for PostgreSQL
A CMS without tested backups is still a risk. Use logical dumps plus off-site replication (object storage or backup server). Keep at least 7 daily and 4 weekly copies.
mkdir -p ~/directus-prod/backups
cat > ~/directus-prod/backup-postgres.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd "$HOME/directus-prod"
# Load .env safely, including values containing spaces
set -a; source .env; set +a
TS=$(date +%F-%H%M%S)
docker exec -e PGPASSWORD="$POSTGRES_PASSWORD" directus-postgres \
  pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" -Fc > "./backups/directus-${TS}.dump"
find ./backups -type f -name 'directus-*.dump' -mtime +14 -delete
EOF
chmod +x ~/directus-prod/backup-postgres.sh
( crontab -l 2>/dev/null; echo "30 2 * * * $HOME/directus-prod/backup-postgres.sh" ) | crontab -
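Backups only count once a restore has been rehearsed. A sketch of a restore drill against a scratch database, assuming the container and user names from this guide (the scratch database name directus_restore_test is an example; directus_users is one of Directus's system tables):

```shell
# Write a restore-drill script and validate its syntax.
cat > restore-drill.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
DUMP="${1:?usage: restore-drill.sh <dump-file>}"

# Create a scratch database and restore the dump into it
docker exec directus-postgres createdb -U directus directus_restore_test
docker exec -i directus-postgres pg_restore -U directus \
  -d directus_restore_test --no-owner < "$DUMP"

# Spot-check a known table, then drop the scratch database
docker exec directus-postgres psql -U directus -d directus_restore_test \
  -c "SELECT count(*) FROM directus_users;"
docker exec directus-postgres dropdb -U directus directus_restore_test
EOF
chmod +x restore-drill.sh
bash -n restore-drill.sh && echo "restore-drill.sh syntax OK"
```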
Configuration and secret-handling best practices
In production, secrets management is usually where teams drift into unsafe habits. Keep these guardrails from day one:
- Do not commit .env to Git; use infrastructure secret stores for CI/CD.
- Rotate DIRECTUS_SECRET, the admin password, and DB credentials on a schedule and after team changes.
- Use unique credentials per environment (dev/stage/prod), never shared values.
- Restrict server SSH access by IP and use key-based auth only.
- Prefer object storage (S3-compatible) for uploads when scaling beyond one node.
When teams integrate Directus with frontend builds, avoid over-permissioned static tokens. Instead, create role-scoped tokens for each service and document exactly what each token can read/write. This simplifies audit and limits blast radius in case of accidental exposure.
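A role-scoped token call might look like the sketch below; DIRECTUS_URL, READ_TOKEN, and the articles collection are placeholders for your own setup:

```shell
# Hypothetical read-only fetch using a role-scoped static token.
# Directus exposes collections under /items/<collection>.
DIRECTUS_URL="https://cms.yourdomain.com"
READ_TOKEN="replace_with_role_scoped_token"

fetch_articles() {
  curl -fsS \
    -H "Authorization: Bearer ${READ_TOKEN}" \
    "${DIRECTUS_URL}/items/articles?fields=id,title&limit=10"
}
```

Because the token's role only grants read access to the collections this service needs, leaking it exposes far less than a shared admin token would.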
For operational maturity, define a short change policy: upgrades are tested in staging, then rolled to production during a maintenance window; rollback means restoring both database dump and uploads volume snapshot taken right before upgrade. A good process is boring and repeatable, and that is exactly what you want for content infrastructure.
Verification checklist
- DNS resolves cms.yourdomain.com to your server IP
- TLS certificate is valid and auto-renewing in Caddy logs
- /server/health returns a healthy status
- Admin login works and you can create a test collection/item
- PostgreSQL dump can be created and restored in a test environment
- Firewall exposes only 22, 80, 443 as expected
curl -I https://cms.yourdomain.com
curl -s https://cms.yourdomain.com/server/health | jq .
docker compose ps
docker compose logs --tail=50 caddy
Common issues and fixes
Directus container keeps restarting
Most often this is a DB credential mismatch or a typo in DB_HOST/DB_DATABASE. Confirm variable names exactly match your .env and test DB readiness with pg_isready inside the postgres container.
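Two helpers make that check quick to repeat; they assume the container, user, and database names used in this guide:

```shell
# Check that PostgreSQL accepts connections from inside its container
check_db() {
  docker exec directus-postgres pg_isready -U directus -d directus
}

# Show the DB_* variables Directus actually received, to compare
# against the values in .env
show_db_env() {
  docker exec directus-app env | grep -E '^DB_'
}
```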
HTTPS does not issue certificates
Check DNS propagation first, then ensure ports 80/443 are reachable publicly. If you are behind Cloudflare proxy, start with DNS-only mode until first certificate is issued.
Uploads disappear after redeploy
This indicates your uploads path is not on a persistent volume. Verify ./data:/directus/uploads mapping and that you did not recreate host directories incorrectly during maintenance.
Editors report slow admin UI
Look at database sizing and missing indexes in custom collections. Start with query inspection, then move heavy assets to object storage/CDN. For larger teams, add proper CPU/memory headroom and monitor container limits.
CORS errors from frontend clients
Set CORS_ORIGIN explicitly to known frontend origins and avoid wildcard policies in production. Confirm browser cache is cleared after changes.
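To rule out caching and see the policy the server actually returns, you can simulate a browser preflight; the origin argument below is a placeholder for your frontend's origin:

```shell
# Send an OPTIONS preflight and inspect the CORS response header.
# In production you want the echoed origin, not "*".
preflight() {
  local origin="${1:-https://app.yourdomain.com}"
  curl -fsSI -X OPTIONS \
    -H "Origin: ${origin}" \
    -H "Access-Control-Request-Method: GET" \
    "https://cms.yourdomain.com/items/articles" \
    | grep -i '^access-control-allow-origin'
}
```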
FAQ
Can I run Directus and PostgreSQL on separate hosts?
Yes. For larger environments this is common. Keep database on a private network, enforce TLS between nodes where possible, and update DB_HOST plus firewall rules carefully.
Should I use SQLite for production to simplify setup?
No. SQLite is fine for local testing, but production workloads need PostgreSQL or another supported external database for concurrency, reliability, and backup control.
How do I handle zero-downtime upgrades?
Use blue/green or rolling patterns where feasible, but for many small teams a short maintenance window with tested rollback is safer. Always backup DB and uploads immediately before upgrade.
What is the minimum backup strategy that is still responsible?
Daily DB dumps, frequent uploads snapshots, off-site replication, and restore drills at least monthly. A backup that has never been restored is not a backup strategy.
How should I manage API access for multiple internal apps?
Create separate Directus roles/tokens per app with least privilege. Avoid shared "super tokens". Track ownership and rotation schedule in your ops runbook.
Can I place Cloudflare or another CDN in front of Caddy?
Yes. Many teams do this for DDoS protection and caching. Ensure origin SSL mode is strict, preserve host headers, and verify webhook callbacks still reach Directus reliably.
When should I move uploads to S3-compatible object storage?
Move when media volume grows quickly, you need cross-region durability, or you want easier horizontal scaling. Keep lifecycle rules and cost controls in place from day one.
Internal links
- Deploy Gitea with Docker Compose and Caddy on Ubuntu
- Deploy Metabase with Docker Compose + Caddy + PostgreSQL on Ubuntu
- Deploy MinIO on Kubernetes with Helm
Talk to us
Need help deploying Directus in production, designing safe content workflows, or integrating your CMS with existing apps and analytics pipelines? We can help with architecture, security hardening, migration, and operational readiness.