Production Guide: Deploy n8n with Docker Compose + Caddy + PostgreSQL + Redis on Ubuntu

Workflow automation deployed with Docker Compose, Caddy, PostgreSQL, and Redis · SysBrix Guides

Manual integrations are fragile: a cron job on one server, a webhook handler on another, a spreadsheet that someone updates by hand every Monday. When the person who built the glue leaves, the business process quietly breaks. n8n is an open-source workflow automation platform that lets teams build integrations, data pipelines, and event-driven automations in a visual editor while keeping the execution logic under version control. It connects to hundreds of services, runs self-hosted, and stores execution history so you can debug failures without digging through server logs.

In this guide, we will deploy n8n on Ubuntu with Docker Compose, publish it through Caddy with automatic HTTPS, and wire in PostgreSQL for workflow state and Redis for execution queue buffering. The target audience is a small business, operations team, or internal IT group that wants a maintainable, self-hosted automation engine. The pattern keeps the application stack isolated, exposes only Caddy to the public internet, stores secrets in an environment file with restricted permissions, and verifies each layer before moving workflows into production. You can integrate OAuth and LDAP later, but this baseline gives you a dependable, upgrade-friendly foundation.

Architecture and flow overview

The browser and external webhook sources talk to Caddy on ports 80 and 443. Caddy terminates TLS and reverse-proxies to the n8n container, which is published only on the host loopback at 127.0.0.1:5678. n8n is a Node.js application that serves a visual workflow editor, REST API, and webhook listeners. It depends on PostgreSQL for workflow definitions, credentials, execution history, and user accounts, and on Redis for queue state when execution volume spikes; note that n8n only talks to Redis once queue mode is enabled, so in the default configuration below Redis is provisioned but idle. Persistent data lives in a Docker volume for file storage and local binary data. Logs are written to container stdout by default and can be collected with your existing log shipping stack. The flow is intentionally simple: one public entry point, one application server, and clearly separated backing services.

Prerequisites

  • Ubuntu 22.04 or 24.04 LTS server with at least 2 CPU cores, 4 GB RAM, and 30 GB disk.
  • A DNS A record pointing your domain to the server public IP.
  • Docker Engine 24.x and Docker Compose plugin installed.
  • Caddy installed as a system package or binary.
  • UFW or another firewall allowing SSH (22), HTTP (80), and HTTPS (443).
  • An SMTP relay or mail provider account for outbound email (used for password resets and notifications).

Step-by-step deployment

1) Install Docker, Compose, Caddy, and firewall basics

sudo apt update && sudo apt install -y ca-certificates curl gnupg ufw
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker "$USER"
# Log out and back in (or run: newgrp docker) for the group change to take effect.

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install -y caddy

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable


2) Create directories and environment file

sudo mkdir -p /opt/n8n/{data,postgres,redis}
sudo chown -R "$USER":"$USER" /opt/n8n
chmod 750 /opt/n8n


Create /opt/n8n/.env with the following content. Docker Compose reads this file as literal key=value pairs, so command substitutions such as $(openssl rand -hex 32) are never executed; generate each secret first (for example with openssl rand -hex 32) and paste the resulting value. Set your domain and email credentials, and use the same value for DB_POSTGRESDB_PASSWORD and POSTGRES_PASSWORD.

N8N_HOST=n8n.example.com
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.example.com/
N8N_ENCRYPTION_KEY=replace-with-64-hex-chars
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=replace-with-strong-db-password
POSTGRES_DB=n8n
POSTGRES_USER=n8n
# Must match DB_POSTGRESDB_PASSWORD above
POSTGRES_PASSWORD=replace-with-strong-db-password
REDIS_PASSWORD=replace-with-strong-redis-password
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=replace-with-admin-password
N8N_SMTP_HOST=smtp.mailprovider.com
N8N_SMTP_PORT=587
[email protected]
N8N_SMTP_PASS=your-email-password
N8N_SMTP_SENDER=n8n <[email protected]>

Note that n8n 1.x replaced the N8N_BASIC_AUTH_* option with built-in user management; on current images these variables are ignored and access is controlled by the owner account you create during onboarding.

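Because the .env file is parsed as plain key=value pairs, generate the secrets up front and paste them in. A minimal sketch (the shell variable names here are local conveniences, not anything n8n reads):

```shell
#!/bin/sh
# Generate the random secrets referenced in /opt/n8n/.env and print them
# for pasting; this sketch writes nothing to disk.
ENC_KEY=$(openssl rand -hex 32)      # for N8N_ENCRYPTION_KEY (64 hex chars)
DB_PASS=$(openssl rand -hex 32)      # for DB_POSTGRESDB_PASSWORD and POSTGRES_PASSWORD
REDIS_PASS=$(openssl rand -hex 32)   # for REDIS_PASSWORD
ADMIN_PASS=$(openssl rand -hex 16)   # for N8N_BASIC_AUTH_PASSWORD
printf 'N8N_ENCRYPTION_KEY=%s\n' "$ENC_KEY"
printf 'DB_POSTGRESDB_PASSWORD=%s\n' "$DB_PASS"
printf 'POSTGRES_PASSWORD=%s\n' "$DB_PASS"
printf 'REDIS_PASSWORD=%s\n' "$REDIS_PASS"
printf 'N8N_BASIC_AUTH_PASSWORD=%s\n' "$ADMIN_PASS"
```

Keep the printed values somewhere safe (a password manager) in addition to the .env file; the encryption key in particular cannot be recovered if lost.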

Lock the file:

chmod 600 /opt/n8n/.env
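To make the permission check repeatable (for example in a nightly audit), a small sketch; check_env_perms is a hypothetical helper, not an n8n tool:

```shell
# Warn if an env file is readable by group or others.
check_env_perms() {
  f=$1
  mode=$(stat -c '%a' "$f") || return 1   # GNU stat, as shipped on Ubuntu
  case "$mode" in
    600|400) echo "OK: $f is mode $mode" ;;
    *) echo "WARNING: $f is mode $mode (expected 600)"; return 1 ;;
  esac
}
# Usage on the server: check_env_perms /opt/n8n/.env
```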


3) Define Compose services

Create /opt/n8n/docker-compose.yml:

services:
  postgres:
    image: postgres:15-alpine
    container_name: n8n_postgres
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks:
      - n8n
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    container_name: n8n_redis
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - ./redis:/data
    networks:
      - n8n
    healthcheck:
      test: ["CMD", "redis-cli", "--raw", "-a", "${REDIS_PASSWORD}", "incr", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: docker.n8n.io/n8nio/n8n:latest  # pin a specific release for reproducible production deploys
    container_name: n8n_server
    restart: unless-stopped
    env_file: .env
    environment:
      N8N_HOST: ${N8N_HOST}
      N8N_PROTOCOL: ${N8N_PROTOCOL}
      WEBHOOK_URL: ${WEBHOOK_URL}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      DB_TYPE: ${DB_TYPE}
      DB_POSTGRESDB_HOST: ${DB_POSTGRESDB_HOST}
      DB_POSTGRESDB_DATABASE: ${DB_POSTGRESDB_DATABASE}
      DB_POSTGRESDB_USER: ${DB_POSTGRESDB_USER}
      DB_POSTGRESDB_PASSWORD: ${DB_POSTGRESDB_PASSWORD}
      N8N_BASIC_AUTH_ACTIVE: ${N8N_BASIC_AUTH_ACTIVE}
      N8N_BASIC_AUTH_USER: ${N8N_BASIC_AUTH_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_BASIC_AUTH_PASSWORD}
      N8N_SMTP_HOST: ${N8N_SMTP_HOST}
      N8N_SMTP_PORT: ${N8N_SMTP_PORT}
      N8N_SMTP_USER: ${N8N_SMTP_USER}
      N8N_SMTP_PASS: ${N8N_SMTP_PASS}
      N8N_SMTP_SENDER: ${N8N_SMTP_SENDER}
    volumes:
      - ./data:/home/node/.n8n
    networks:
      - n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "127.0.0.1:5678:5678"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:5678/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  n8n:
    driver: bridge


4) Configure Caddy reverse proxy

Create /etc/caddy/Caddyfile (or add a site block):

n8n.example.com {
  reverse_proxy 127.0.0.1:5678
  header {
    Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
    X-Content-Type-Options "nosniff"
    X-Frame-Options "SAMEORIGIN"
    Referrer-Policy "strict-origin-when-cross-origin"
  }
}


Format, validate, and reload Caddy (caddy fmt only normalizes formatting; caddy validate checks that the configuration actually loads):

sudo caddy fmt --overwrite /etc/caddy/Caddyfile
sudo caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
sudo systemctl reload caddy


5) Start services and verify health

cd /opt/n8n
docker compose up -d
sleep 15
docker compose ps
docker compose logs --tail 50 n8n


Wait until the n8n container reports that the server is listening. The first startup initializes the database schema, so it may take one to two minutes.
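Rather than a fixed sleep, you can poll the health endpoint until it answers. A sketch (wait_for_url is a hypothetical helper, not part of n8n):

```shell
# Poll a URL until it returns success, with a bounded number of attempts.
wait_for_url() {
  url=$1
  tries=${2:-60}                  # default: 60 attempts x 2s = about 2 minutes
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up after $i checks"
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "timed out waiting for $url"
  return 1
}
# Usage on the server: wait_for_url http://127.0.0.1:5678/healthz
```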

6) Run first-time setup

Open your domain in a browser. On n8n versions before 1.0, you will first see the basic auth prompt defined by N8N_BASIC_AUTH_USER and N8N_BASIC_AUTH_PASSWORD; current releases skip straight to owner setup. Complete the onboarding to create the owner account, then build your first workflow and enable execution logging. Test a simple webhook or schedule trigger to confirm end-to-end execution.

7) Backup script

Create /opt/n8n/backup.sh:

#!/bin/bash
set -euo pipefail
BACKUP_DIR=/opt/n8n/backups/$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
docker exec n8n_postgres pg_dump -U n8n n8n | gzip > "$BACKUP_DIR/n8n.sql.gz"
tar czf "$BACKUP_DIR/data.tar.gz" -C /opt/n8n data
find /opt/n8n/backups -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +


chmod +x /opt/n8n/backup.sh
/opt/n8n/backup.sh


Schedule it in cron:

(crontab -l 2>/dev/null; echo "0 3 * * * /opt/n8n/backup.sh") | crontab -
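A backup you cannot read back is not a backup. A small sketch for checking that a backup directory's archives are intact (verify_backup is a hypothetical helper; the directory layout matches the script above):

```shell
# Check that both archives in a backup directory are readable.
verify_backup() {
  dir=$1
  gzip -t "$dir/n8n.sql.gz" || { echo "SQL dump unreadable: $dir"; return 1; }
  tar tzf "$dir/data.tar.gz" >/dev/null || { echo "data archive unreadable: $dir"; return 1; }
  echo "backup OK: $dir"
}
# Usage on the server, against the newest backup:
# verify_backup "$(ls -1d /opt/n8n/backups/*/ | sort | tail -n 1)"
```

For full confidence, periodically restore the SQL dump into a scratch PostgreSQL instance rather than relying on archive integrity alone.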


8) Acceptance checklist execution

  • Caddy serves HTTPS with a valid certificate.
  • n8n login page loads without mixed-content warnings.
  • Owner login succeeds (and basic auth, if your n8n version still supports it).
  • A simple workflow with a manual trigger executes and logs output.
  • Webhook test URL is reachable from the public internet.
  • docker compose ps shows all services healthy.
  • Backup archive exists and can be decompressed.

Configuration and secrets handling

All sensitive values live in /opt/n8n/.env with mode 600. The file is never copied into images; it is mounted at runtime by Docker Compose. Rotate the N8N_ENCRYPTION_KEY only during a planned maintenance window because it invalidates stored credentials. For SMTP credentials, use an app-specific password or a dedicated relay user rather than a personal mailbox password. If you run n8n behind a corporate proxy, export HTTP_PROXY and HTTPS_PROXY in the host environment before starting Compose, or add them to the n8n service environment block.
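If a proxy is required, the service-level settings might look like the fragment below (proxy.internal:3128 is a placeholder; NO_PROXY keeps in-stack traffic off the proxy):

```
services:
  n8n:
    environment:
      HTTP_PROXY: http://proxy.internal:3128
      HTTPS_PROXY: http://proxy.internal:3128
      NO_PROXY: localhost,127.0.0.1,postgres,redis
```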

Verification

Run these checks from the server:

curl -s -o /dev/null -w "%{http_code}" https://n8n.example.com
# Expected: 200

cd /opt/n8n

docker compose exec postgres pg_isready -U n8n
# Expected: accepting connections

REDIS_PASSWORD=$(grep '^REDIS_PASSWORD=' /opt/n8n/.env | cut -d= -f2-)
docker compose exec redis redis-cli --raw -a "$REDIS_PASSWORD" ping
# Expected: PONG

docker compose exec n8n wget -qO- http://localhost:5678/healthz
# Expected: {"status":"ok"}

Note that docker compose exec takes the service name (postgres, redis, n8n), not the container_name, and must run from the directory containing docker-compose.yml.


Common issues and fixes

  • Container exits on startup: Check docker compose logs n8n. The most common cause is a missing or unhealthy backing service. Ensure PostgreSQL and Redis are healthy before n8n starts.
  • Database connection errors: Verify that DB_POSTGRESDB_PASSWORD matches the PostgreSQL credentials and that the postgres service is on the same Docker network.
  • Email not delivering: Confirm SMTP host, port, credentials, and the N8N_SMTP_SENDER address. Test with swaks or msmtp from the host. Check n8n logs for SMTP authentication failures.
  • Webhook URL returns 404: Ensure WEBHOOK_URL exactly matches the public HTTPS URL, including the protocol and trailing slash. n8n uses this to generate webhook paths.
  • 502 Bad Gateway: This usually means n8n is still starting or crashed. Wait two minutes after docker compose up and check health status.
  • Permission denied on uploads: Verify that the n8n container user has write access to /home/node/.n8n. The official image runs as a non-root user; ensure the host directory is writable by UID 1000 or adjust ownership.

FAQ

Can I use SQLite instead of PostgreSQL?

Yes, but only for personal or evaluation deployments. PostgreSQL is strongly recommended for production because it handles concurrent writes, backups, and larger execution histories reliably. The Docker Compose setup above uses PostgreSQL by default.

How do I enable OAuth or SAML authentication?

n8n supports external authentication such as LDAP and SAML single sign-on in addition to its built-in email/password accounts; on current versions these are gated behind paid plans. Configuration happens through the instance settings and environment variables; consult the n8n documentation for your version for the exact variable names and provider-specific formats.

Can I run multiple n8n instances behind a load balancer?

Yes, but you must enable queue mode and run separate worker containers so executions are not duplicated. Move PostgreSQL and Redis to external or shared services, and ensure all instances use the same N8N_ENCRYPTION_KEY and database.
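As deployed above, Redis sits idle until queue mode is switched on. The main variables look roughly like this (names per the n8n queue-mode documentation; verify against your version), with worker containers running the n8n worker command:

```
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=replace-with-your-redis-password
```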

How do I migrate workflows from another n8n instance?

Use the n8n CLI or the built-in export feature to download workflows as JSON. On the new instance, import each JSON file through the UI or API. Reconnect credentials manually because they are encrypted with the instance key and cannot be transferred directly.

What backup strategy is recommended?

The backup script above dumps the PostgreSQL database and archives the local file storage daily. For production, also replicate backups to an offsite S3 bucket, test restores quarterly, and snapshot the host filesystem before major upgrades.

How do I update n8n?

Run docker compose pull && docker compose up -d to fetch the latest image and restart the stack. Always back up before upgrading. After the restart, verify that existing workflows execute correctly and that the admin panel reports the expected version.

Can I use S3 instead of local storage for binary data?

Yes. n8n supports offloading binary data to S3-compatible storage, configured through the N8N_EXTERNAL_STORAGE_S3_* family of environment variables (bucket name, region, and credentials; check the n8n documentation for the exact variable names in your version). This is recommended when running multiple replicas or when you want to offload file storage from the host disk.

Talk to us

If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.

Contact Us
