Directus is a strong fit when a team needs an internal data portal, API layer, or lightweight content operations system without building a custom admin application from scratch. It sits in front of PostgreSQL, gives editors and operators a clean interface, and exposes REST and GraphQL APIs for applications that need controlled access to the same data. The production concern is not simply starting a container; it is making the service predictable, recoverable, and safe enough to hold business data.
This guide uses Docker Compose for repeatable service definitions, PostgreSQL for durable data, Redis for cache and rate-limit state, and Caddy as the public TLS reverse proxy. It keeps the stack easy to inspect on one Ubuntu host while still covering private bind ports, explicit secrets, backups, verification checks, and restore basics.
Architecture/flow overview
Traffic enters through Caddy on ports 80 and 443. Caddy obtains and renews certificates, applies conservative security headers, and forwards requests to Directus on localhost port 8055. Directus talks to PostgreSQL over the private Compose network for collections, permissions, flows, revisions, and users. Redis stores cache and rate-limit data so the application can handle repeated API traffic without putting every read directly on the database. Uploaded assets and extensions live in host-mounted directories so they can be backed up with the database.
The deployment keeps only Caddy exposed to the internet. PostgreSQL, Redis, and the Directus container network are not published publicly. This is the most important design choice in the guide: it allows the application to be reachable while the backing services remain private. For many teams, this single-host pattern is also easier to audit than a larger orchestrator because every moving part is visible in one directory.
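One quick way to confirm this boundary on the host is to list listening TCP sockets and check that only Caddy binds public interfaces. This is a sketch using ss from iproute2, which ships with Ubuntu; exact column layout varies by version.

```shell
# List TCP listeners on the ports this guide uses. Only Caddy should
# bind 0.0.0.0 or [::] on 80 and 443; Directus should appear only on
# the loopback address 127.0.0.1:8055.
sudo ss -tlnp | grep -E ':(80|443|8055)\b'
# Anything other than 127.0.0.1:8055 for Directus indicates a port
# published publicly; re-bind it to loopback in docker-compose.yml.
```

If PostgreSQL or Redis ever show up in this output, a ports: entry was added to the Compose file that should be removed.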
Prerequisites
- An Ubuntu 22.04 or 24.04 server with a non-root sudo user.
- Docker Engine and the Docker Compose plugin already installed.
- Caddy installed as a system service on the host.
- A DNS record such as directus.example.com pointing at the server.
- Outbound internet access for image pulls and certificate issuance.
- A plan for off-host backups, even if the first backup target is object storage or another server.
If this server already runs other guides behind Caddy, keep the same operational convention: one application directory under /opt, one Compose file, one environment file, and one Caddy site block. That consistency makes handoff and incident response much simpler.
Step-by-step deployment
1. Prepare the application directory
Create a dedicated directory for Directus and install basic utilities. The examples assume you are using a sudo-capable deployment user and that Docker is already available.
sudo apt update && sudo apt -y upgrade
sudo apt -y install ca-certificates curl gnupg ufw
sudo install -d -m 0755 /opt/directus
cd /opt/directus
sudo chown -R $USER:$USER /opt/directus
2. Generate secrets before writing the environment file
Directus uses KEY and SECRET values for cryptographic operations. Do not reuse values from staging, old blog posts, password managers shared with contractors, or examples found online. Generate fresh values and keep them in the environment file with restrictive permissions.
openssl rand -base64 48
openssl rand -base64 32
# Save the first value as KEY and the second as SECRET in /opt/directus/.env
3. Create the environment file
Replace every placeholder before starting the stack. Use a temporary admin password for the first login, then rotate it inside Directus and store the final credential in your normal password manager. The database password should be unique to this stack.
cat > /opt/directus/.env <<'EOF'
PUBLIC_URL=https://directus.example.com
KEY=replace-with-48-byte-random-value
SECRET=replace-with-32-byte-random-value
[email protected]
ADMIN_PASSWORD=replace-with-a-long-temporary-password
DB_CLIENT=pg
DB_HOST=postgres
DB_PORT=5432
DB_DATABASE=directus
DB_USER=directus
DB_PASSWORD=replace-with-a-database-password
CACHE_ENABLED=true
CACHE_STORE=redis
REDIS=redis://redis:6379/0
RATE_LIMITER_ENABLED=true
RATE_LIMITER_STORE=redis
RATE_LIMITER_REDIS=redis://redis:6379/1
EOF
chmod 600 /opt/directus/.env
4. Define the Compose stack
The Compose file publishes Directus only on 127.0.0.1:8055. That is intentional because Caddy runs on the host and should be the only public entry point. The uploads and extensions directories are mounted from the host, while PostgreSQL and Redis keep their own persistent data directories.
cat > /opt/directus/docker-compose.yml <<'EOF'
services:
  directus:
    image: directus/directus:latest
    restart: unless-stopped
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    ports:
      - "127.0.0.1:8055:8055"
    volumes:
      - ./uploads:/directus/uploads
      - ./extensions:/directus/extensions
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: directus
      POSTGRES_USER: directus
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U directus -d directus"]
      interval: 10s
      timeout: 5s
      retries: 6
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - ./redis-data:/data
EOF
5. Configure Caddy for TLS and reverse proxying
Update the domain in the Caddyfile before reloading. If you already manage Caddy with a larger shared file, add only the site block and keep your existing global settings. The reverse proxy target must match the host-bound port in Compose.
sudo mkdir -p /etc/caddy
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
directus.example.com {
    encode zstd gzip
    reverse_proxy 127.0.0.1:8055
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        X-Frame-Options "SAMEORIGIN"
    }
}
EOF
sudo systemctl reload caddy
6. Start the stack
Pull the images, start the services, and inspect the first Directus logs. A clean start should show the server listening without repeated database connection errors. If the database is still initializing, wait a few seconds and re-check the logs rather than restarting repeatedly.
cd /opt/directus
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=80 directus
7. Lock down the firewall
Only SSH, HTTP, and HTTPS need to be reachable from outside the server. PostgreSQL, Redis, and Directus remain private because the Compose ports and firewall policy both enforce that boundary.
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
sudo ufw status verbose
Configuration and secrets handling
Directus will quickly become a central path to business data, so treat its configuration as production configuration from day one. Keep .env out of Git unless your repository uses sealed secrets or a comparable encryption workflow. Restrict file permissions to the deployment user and root. When a teammate leaves the project, rotate the Directus admin password, API tokens, and any database password that may have been shared during setup.
Use Directus roles and policies instead of giving every user administrator access. Create a small break-glass administrator group, then define editor, analyst, and application roles around real use cases. For API integrations, prefer scoped static tokens or service users with the minimum collection permissions necessary. If you enable custom extensions, review them like application code because extensions run inside the Directus process and can affect data integrity.
For email, SSO, file storage, or external webhooks, add variables deliberately and document the owner of each integration. A production Directus instance often grows from "simple admin panel" into a workflow hub; the operational risk comes from forgotten tokens and undocumented automations rather than the base container.
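As one example of adding an integration deliberately, SMTP email can be configured through environment variables. The variable names below follow the Directus email transport options, but verify them against the Directus configuration reference for your version; the hostname and credentials are placeholders.

```shell
# Append SMTP settings to the environment file, then recreate Directus.
cat >> /opt/directus/.env <<'EOF'
[email protected]
EMAIL_TRANSPORT=smtp
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USER=replace-with-smtp-user
EMAIL_SMTP_PASSWORD=replace-with-smtp-password
EOF
cd /opt/directus && docker compose up -d directus
```

Record who owns the SMTP account alongside the variables so the integration is not orphaned when team membership changes.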
Verification
After the first start, verify the public health endpoint, application information endpoint, database readiness, and Redis connectivity. Install jq if you want formatted JSON output, or drop the jq pipe on a minimal server.
curl -I https://directus.example.com/server/health
curl -s https://directus.example.com/server/info | jq .
docker compose exec postgres pg_isready -U directus -d directus
docker compose exec redis redis-cli ping
Then sign in at your Directus URL, change the temporary administrator password, create a test collection, add one item, and confirm that the item appears through the API only for an authorized user. Finally, upload a small test file so you know the uploads volume is writable and included in the backup plan.
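The authorized-access check can also be scripted against the standard Directus REST endpoints /auth/login and /items/<collection>. The collection name test_collection and the credentials below are placeholders for whatever you created above; the sed extraction avoids a hard dependency on jq.

```shell
# Log in and capture an access token from the JSON response.
RESPONSE=$(curl -s -X POST https://directus.example.com/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"[email protected]","password":"your-password"}')
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')

# An authenticated request should return the test item.
curl -s -H "Authorization: Bearer $TOKEN" \
  https://directus.example.com/items/test_collection

# The same request without a token should be denied unless the
# collection deliberately grants public read permission.
curl -s -o /dev/null -w '%{http_code}\n' \
  https://directus.example.com/items/test_collection
```

If the unauthenticated request returns item data, review the public role's permissions before putting real data in the instance.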
Backups and restore checks
A Directus backup must include the PostgreSQL database, uploads, extensions, and deployment configuration. The script below creates local archives and deletes local files older than fourteen days. Local backups are not enough; copy the resulting archives to an off-host location with your preferred backup agent or object storage sync.
cat > /opt/directus/backup-directus.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/directus
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p backups
docker compose exec -T postgres pg_dump -U directus directus | gzip > backups/directus-db-$STAMP.sql.gz
tar -czf backups/directus-files-$STAMP.tar.gz uploads extensions .env docker-compose.yml
find backups -type f -mtime +14 -delete
EOF
chmod 700 /opt/directus/backup-directus.sh
sudo /opt/directus/backup-directus.sh
Do not trust a backup until you have tested a restore. Schedule a maintenance window, restore into a temporary host or staging directory, and confirm that Directus boots with expected users, collections, permissions, files, and extensions.
cd /opt/directus
docker compose down
# Restore files from your selected directus-files archive first, then start Postgres.
docker compose up -d postgres redis
gunzip -c backups/directus-db-YYYYMMDD-HHMMSS.sql.gz | docker compose exec -T postgres psql -U directus directus
docker compose up -d directus
Common issues and fixes
Caddy returns 502 Bad Gateway
Confirm that Directus is listening on 127.0.0.1:8055 and that the Compose stack is healthy. A 502 usually means the application container is stopped, the host port changed, or Caddy is pointing to the wrong upstream.
Directus starts but cannot connect to PostgreSQL
Check that DB_HOST=postgres matches the Compose service name, and verify the database password is identical in the environment file and the PostgreSQL service configuration. If you changed the password after the database volume was initialized, update the database user inside PostgreSQL rather than only changing the environment file.
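If the .env password and the initialized database volume have drifted, one way to realign them is to change the password inside PostgreSQL to match .env. This is a sketch assuming the stack layout above and that DB_PASSWORD in .env holds the value you want to keep.

```shell
cd /opt/directus
# Read the intended password from .env.
NEW_PW=$(grep '^DB_PASSWORD=' .env | cut -d= -f2-)
# Build the statement; the password is single-quoted for SQL.
SQL="ALTER USER directus WITH PASSWORD '${NEW_PW}';"
# Apply it inside the running postgres container, then restart Directus
# so it reconnects with the now-matching credential.
docker compose exec -T postgres psql -U directus -d directus -c "$SQL"
docker compose restart directus
```

If the password contains a single quote, escape it for SQL (double it) before running the statement.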
Logins fail after changing the public URL
Make sure PUBLIC_URL exactly matches the external HTTPS URL, including scheme and hostname. Incorrect public URL values can cause confusing redirect, cookie, or asset behavior behind a reverse proxy.
Uploads disappear after container recreation
Verify that ./uploads:/directus/uploads is present and that the host directory is included in backups. Files written only to a container filesystem are lost when the container is replaced.
The API is slow under repeated reads
Confirm Redis is running and that cache variables are enabled. Also review Directus permissions and query patterns; overly broad API queries can still stress PostgreSQL even with caching enabled.
FAQ
Can Directus run without Redis?
Yes, but Redis is useful for production because it supports cache and rate-limit state outside the application process. Keeping it in the stack adds little complexity and gives you room to handle heavier API traffic later.
Should I pin the Directus image version?
For production, pin to a tested major or exact version after the first deployment. The guide uses latest for readability, but controlled upgrades should test migrations and extensions before changing the live container.
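A minimal pinning workflow, assuming the Compose file above, looks like the following. The tag 11.1.1 is only an example; substitute a release you have actually tested against your migrations and extensions.

```shell
cd /opt/directus
# Keep a backup of the Compose file before editing it in place.
cp docker-compose.yml docker-compose.yml.bak
# Replace the floating tag with a pinned release.
sed -i 's|directus/directus:latest|directus/directus:11.1.1|' docker-compose.yml
# Pull the pinned image and recreate only the Directus container.
docker compose pull directus
docker compose up -d directus
docker compose logs --tail=40 directus
```

For later upgrades, change only the pinned tag, re-run the same three Compose commands, and watch the logs for migration output before declaring the upgrade done.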
Can I use an external managed PostgreSQL database?
Yes. Replace the PostgreSQL service with managed database connection variables, enforce TLS if your provider supports it, and update the backup process so database snapshots are captured outside this host.
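As a sketch, pointing the stack at a managed database mostly means replacing the DB_* values. DB_SSL is the Directus toggle for TLS connections, but confirm the exact variables and certificate handling against your provider and the Directus configuration reference; the hostname and password here are placeholders.

```shell
# In /opt/directus/.env, replace the local database settings with the
# managed endpoint:
#   DB_HOST=your-managed-host.example.net
#   DB_PORT=5432
#   DB_DATABASE=directus
#   DB_USER=directus
#   DB_PASSWORD=replace-with-managed-password
#   DB_SSL=true
# Then remove the postgres service (and its depends_on entry) from
# docker-compose.yml and recreate Directus:
cd /opt/directus && docker compose up -d --force-recreate directus
```

Remember to retarget the backup script as well, since pg_dump can no longer run inside a local postgres container.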
How do I make this highly available?
Move PostgreSQL and Redis to managed or clustered services, store uploads in object storage, run multiple Directus containers behind a load balancer, and keep session, cache, and rate-limit state outside individual containers.
Where should extensions be stored?
Keep extensions in the mounted extensions directory and review them before deployment. If extensions are built from source, store the source in Git and deploy the compiled output intentionally.
How often should backups run?
Match the schedule to how often data changes. For active operations, run database backups at least daily and more often for critical workflows. Always test restore time, not just backup creation.
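One way to put both the local backup and the off-host copy on a schedule is root's crontab. The rclone destination remote:directus-backups is a placeholder for a remote you have already configured with rclone config; adjust the times to your data-change window.

```shell
# Edit with: sudo crontab -e
# Run the local backup nightly at 03:15, then sync archives
# off-host at 03:45 once the archives exist.
15 3 * * * /opt/directus/backup-directus.sh
45 3 * * * rclone sync /opt/directus/backups remote:directus-backups
```

Check the cron mail or logs after the first scheduled run; a silent failure in either job defeats the retention and off-host goals above.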
Internal links
- Deploy Baserow with Docker Compose, Caddy, PostgreSQL, and Redis
- Deploy Vikunja with Docker Compose, Caddy, and PostgreSQL
- Deploy Listmonk with Docker Compose, Caddy, and PostgreSQL
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.