
Production Guide: Deploy Apache Superset with Docker Compose + Nginx + PostgreSQL + Redis on Ubuntu

A practical, production-oriented Superset deployment with secure defaults, TLS, operations checks, and troubleshooting.

If your analytics team keeps asking for self-hosted dashboards while security asks for strict data control, Apache Superset is a practical middle ground. In many organizations, the challenge is not installing Superset itself—it is packaging the full stack in a way that is reproducible, secure, and easy to operate three months later. This production guide shows how to deploy Superset on Ubuntu using Docker Compose with PostgreSQL and Redis behind Nginx, including TLS, secret handling, backups, and day-2 checks you can actually run during an incident.

The walkthrough is optimized for operators who want deterministic deployments: explicit directory layout, pinned images, health checks, a hardened reverse proxy, and a verification checklist that can be handed to on-call engineers. You will also get practical troubleshooting notes for common startup loops, migration failures, and authentication/session issues.

Architecture and Flow Overview

This deployment uses four core services: Superset web, Superset worker (for async tasks), PostgreSQL (metadata store), and Redis (cache + broker). Nginx terminates TLS and forwards requests to Superset. The host runs Docker and Compose, while persistent volumes store PostgreSQL data, uploaded assets, and Superset state.

  • Client → HTTPS (443) → Nginx
  • Nginx → Superset web container (8088 internal)
  • Superset web/worker → PostgreSQL + Redis
  • Nightly backup job → dumps PostgreSQL + archives Superset config

Why this layout works in production: the app and data planes are separated, secrets are centralized in environment files, and each dependency can be tested in isolation. If dashboards fail to load, you can quickly distinguish whether the issue is Nginx routing, Superset process health, metadata DB, or cache/broker behavior.

Prerequisites

  • Ubuntu 22.04+ server (4 vCPU, 8 GB RAM minimum for moderate use)
  • Domain name pointed to server public IP (for example: analytics.example.com)
  • Open ports: 22, 80, 443
  • Root or sudo access
  • Basic familiarity with SQL and container logs

Install Docker Engine + Compose plugin first if not already present.

sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER

Log out and back in (or run newgrp docker) so the group change takes effect before running docker without sudo.

Step-by-Step Deployment

Create a clean working directory and explicit subfolders for state and backup artifacts.

sudo mkdir -p /opt/superset/{postgres,redis,config,backups,nginx,logs}
sudo chown -R $USER:$USER /opt/superset
cd /opt/superset


Generate strong secrets and store them in a local env file. Keep this file readable only by admins.

openssl rand -hex 32
openssl rand -base64 36

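Either output works as SUPERSET_SECRET_KEY. As a quick sanity check before pasting the value into .env, the hex form has a fixed length (the variable name below is illustrative):

```shell
# 32 random bytes encode to exactly 64 hex characters.
SECRET_KEY="$(openssl rand -hex 32)"
echo "${#SECRET_KEY}"   # prints 64
```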

Create /opt/superset/.env:

SUPERSET_DOMAIN=analytics.example.com
POSTGRES_DB=superset
POSTGRES_USER=superset
POSTGRES_PASSWORD=CHANGE_ME_STRONG_DB_PASSWORD
REDIS_PASSWORD=CHANGE_ME_STRONG_REDIS_PASSWORD
SUPERSET_SECRET_KEY=CHANGE_ME_LONG_SECRET
ADMIN_USERNAME=admin
ADMIN_FIRSTNAME=Platform
ADMIN_LASTNAME=Admin
[email protected]
ADMIN_PASSWORD=CHANGE_ME_STRONG_ADMIN_PASSWORD


Now create docker-compose.yml with pinned images and health checks:

services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    env_file: .env
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
    volumes:
      - ./redis:/data

  superset:
    image: apache/superset:4.0.2
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    env_file: .env
    environment:
      SUPERSET_SECRET_KEY: ${SUPERSET_SECRET_KEY}
      SQLALCHEMY_DATABASE_URI: postgresql+psycopg2://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    ports:
      - "127.0.0.1:8088:8088"
    command: ["/bin/sh", "-c", "superset db upgrade && superset init && gunicorn -w 4 -k gevent --timeout 120 -b 0.0.0.0:8088 'superset.app:create_app()'"]

  worker:
    image: apache/superset:4.0.2
    restart: unless-stopped
    depends_on:
      - superset
    env_file: .env
    environment:
      SUPERSET_SECRET_KEY: ${SUPERSET_SECRET_KEY}
      SQLALCHEMY_DATABASE_URI: postgresql+psycopg2://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: ${REDIS_PASSWORD}
    command: ["/bin/sh", "-c", "celery --app=superset.tasks.celery_app:app worker -Ofair -l INFO"]


Bring up the stack and create your first admin account. Run create-admin through sh -c with single quotes so the ADMIN_* variables are expanded inside the container (which receives them via env_file), not by your host shell, where they are unset.

docker compose pull
docker compose up -d
docker compose exec superset sh -c 'superset fab create-admin \
  --username "$ADMIN_USERNAME" \
  --firstname "$ADMIN_FIRSTNAME" \
  --lastname "$ADMIN_LASTNAME" \
  --email "$ADMIN_EMAIL" \
  --password "$ADMIN_PASSWORD"'


Configure Nginx as reverse proxy and TLS terminator. This keeps Superset private on the Docker network and exposes only HTTPS publicly.

sudo apt install -y nginx certbot python3-certbot-nginx
sudo tee /etc/nginx/sites-available/superset > /dev/null <<'EOF'
server {
  listen 80;
  server_name analytics.example.com;
  client_max_body_size 100M;
  location / {
    proxy_pass http://127.0.0.1:8088;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 300;
  }
}
EOF
sudo ln -s /etc/nginx/sites-available/superset /etc/nginx/sites-enabled/superset
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d analytics.example.com --redirect -m [email protected] --agree-tos -n


Configuration and Secrets Handling

Do not keep secrets in Compose YAML. Keep them in .env, restrict permissions, and snapshot encrypted backups separately from application files. In production, move to a secret manager (Vault, SOPS, or cloud KMS) once your team matures. At minimum:

  • chmod 600 /opt/superset/.env and group-limit shell access.
  • Rotate SUPERSET_SECRET_KEY only with a maintenance plan: rotation invalidates active sessions, and stored database connection credentials must be re-encrypted (set PREVIOUS_SECRET_KEY and run superset re-encrypt-secrets) or saved connections will break.
  • Use distinct DB/Redis passwords for each environment (dev/stage/prod).
  • Store Certbot account email with monitored inbox and enable renewal alerting.

For database security, limit PostgreSQL exposure to the Docker bridge and never publish 5432 externally. If you need BI connectors from external hosts, place a separate read-replica with network ACLs and audited credentials instead of opening production metadata DB.
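A host firewall that matches the prerequisite port list reinforces this. Note that Docker manipulates iptables directly when publishing ports and can bypass ufw rules, which is one more reason to bind Superset's published port to 127.0.0.1 and never publish 5432 or 6379 at all. A baseline sketch:

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
```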

Verification

After deployment, verify at three levels: container health, app readiness, and user journey.

  1. Container health: ensure services are up and stable for at least 5 minutes.
  2. App readiness: login works and dashboard page loads without 5xx.
  3. Data path: test query against a sample datasource and confirm chart rendering.

docker compose ps
docker compose logs --tail=100 superset
docker compose logs --tail=100 worker
curl -fsS http://127.0.0.1:8088/health
curl -I https://analytics.example.com


Operationally, add a weekly restore drill: recover PostgreSQL dump to a staging namespace and validate dashboard metadata integrity. Teams that test restore flows before incidents recover far faster when a node or disk actually fails.
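The nightly backup job from the architecture overview can start as a plain cron entry that dumps the metadata database into the backups directory created earlier (the 02:30 schedule and 14-day retention are assumptions; the -U/-d values match the .env defaults):

```shell
# /etc/cron.d/superset-backup -- nightly metadata dump, 14-day retention
30 2 * * * root cd /opt/superset && docker compose exec -T db pg_dump -U superset -d superset | gzip > /opt/superset/backups/superset-$(date +\%F).sql.gz
45 2 * * * root find /opt/superset/backups -name 'superset-*.sql.gz' -mtime +14 -delete
```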

Common Issues and Fixes

1) Superset container keeps restarting after upgrade

Usually caused by migration errors or stale metadata assumptions. Check migration output in logs, take a full DB backup, then run superset db upgrade manually inside the container before full restart.
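A sketch of that sequence, using the service names from the compose file (take the backup before migrating):

```shell
docker compose stop superset worker
docker compose exec -T db pg_dump -U superset -d superset | gzip > backups/pre-upgrade.sql.gz
docker compose run --rm superset superset db upgrade
docker compose up -d superset worker
```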

2) Login loop or unexpected logouts

Most often this is a mismatched SUPERSET_SECRET_KEY across container recreations, reverse-proxy header issues, or clock drift. Ensure one stable secret key, correct X-Forwarded-Proto forwarding in Nginx, and that Superset is configured to trust proxy headers (the ENABLE_PROXY_FIX setting).

3) Celery tasks not processing

Confirm Redis auth and worker logs. If broker password changed, recycle worker and web containers together to keep env in sync.
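A quick broker-auth check from the host (sources .env so the shell sees the same password the containers use; expect PONG on success):

```shell
cd /opt/superset && . ./.env
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" ping
```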

4) Slow dashboards under concurrent load

Increase Gunicorn workers cautiously, enable query result caching, and move heavy transformations upstream (materialized views or pre-aggregated marts). Superset should not be your ETL engine.

5) TLS renewals fail silently

Run sudo certbot renew --dry-run monthly and monitor systemd timer logs. Many outages happen because certificate expiry alerting was never implemented.

FAQ

Can I run Superset without Redis in production?

You can for very small use cases, but background tasks and cache behavior degrade quickly. Redis is strongly recommended for stable async workloads and better UI responsiveness.

Should I use SQLite for metadata to simplify setup?

No. SQLite is fine for local testing only. Production deployments should use PostgreSQL for transactional reliability, backup tooling, and concurrent access safety.

How do I integrate SSO later without rebuilding everything?

Keep domain, proxy, and TLS architecture stable now. Then add SSO via Superset auth configuration (OIDC/SAML) and map roles incrementally. Start with read-only group mapping before broader admin sync.

What is a practical backup policy for this stack?

At minimum: nightly PostgreSQL dumps, 7–14 daily retention, weekly off-site copy, and quarterly restore drills. Also backup key config files and environment templates used to recreate the stack.

How much capacity should I plan for 50–100 internal users?

A 4–8 vCPU host with 16 GB RAM is a common starting point depending on query complexity and data source latency. Benchmark with realistic dashboards and concurrency before final sizing.

Can I place Superset behind Cloudflare and still keep secure sessions?

Yes. Preserve original host/proto headers correctly, enforce HTTPS redirect, and keep trusted proxy settings consistent. Test login/session flows after any CDN/proxy change.

How do I upgrade with minimal downtime?

Use pinned image tags, backup metadata DB, run migrations in a maintenance window, and validate critical dashboards with a smoke test checklist. Never jump multiple major versions without release-note review.
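Concretely, with this compose file the sequence can look like the following (the target tag is illustrative):

```shell
# 1) Snapshot metadata before touching images.
docker compose exec -T db pg_dump -U superset -d superset | gzip > backups/pre-upgrade.sql.gz
# 2) Edit the pinned tag in docker-compose.yml (one minor version at a time), then:
docker compose pull
docker compose up -d   # the superset command runs db upgrade before starting gunicorn
# 3) Smoke-test critical dashboards and tail both services.
docker compose logs --tail=50 superset worker
```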
