If your operations, product, or analytics teams keep shipping one-off spreadsheets that drift out of sync, NocoDB gives you a governed, API-first way to turn tabular workflows into managed data apps. This guide walks through a production deployment on Ubuntu using Docker Compose, Traefik, and PostgreSQL, with practical defaults for TLS, secrets handling, backups, and day-2 operations.
Architecture and flow overview
This stack uses NocoDB as the application layer, PostgreSQL as the system of record, and Traefik as the reverse proxy plus automatic TLS terminator. Users access NocoDB through a public hostname, Traefik routes requests to the NocoDB container, and NocoDB persists metadata and project state in PostgreSQL on an internal Docker network.
- NocoDB: Web app and API service for table modeling, workflows, and integrations.
- PostgreSQL: Durable backend database for NocoDB internal state and metadata.
- Traefik: Edge routing, ACME certificate automation, and HTTP-to-HTTPS redirection.
- Docker Compose: Deterministic, repeatable service orchestration on a single node.
Request flow is straightforward: browser/API client → Traefik (443) → NocoDB (internal service) → PostgreSQL (internal only). Keep PostgreSQL non-public, and allow only the NocoDB container to connect over the private bridge network.
Prerequisites
- Ubuntu 22.04 or 24.04 VM with at least 2 vCPU, 4 GB RAM, and 40+ GB SSD.
- A DNS A record (for example, nocodb.example.com) pointing to your server IP.
- Open firewall ports 80 and 443 inbound; restrict SSH to trusted IP ranges.
- A non-root sudo user.
- Docker Engine + Docker Compose plugin installed.
Install baseline packages and Docker:
sudo apt update && sudo apt -y upgrade
sudo apt install -y ca-certificates curl gnupg ufw
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
sudo bash -c 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list'
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
Manual fallback: if copy is blocked by browser policy, select the command block and copy with Ctrl/Cmd + C.
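The prerequisites call for restricting inbound traffic, and ufw is installed above but never configured. A baseline rule set could look like the sketch below; 203.0.113.0/24 is a placeholder for your trusted SSH range, so replace it before applying. The guard makes the script a no-op on hosts where it should not run.

```shell
# Hedged sketch: baseline UFW rules for this stack.
# 203.0.113.0/24 is a placeholder for your trusted SSH source range.
if command -v ufw >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
  ufw allow 80/tcp                                      # ACME challenges + redirect
  ufw allow 443/tcp                                     # TLS traffic via Traefik
  ufw allow from 203.0.113.0/24 to any port 22 proto tcp  # SSH from trusted range only
  ufw --force enable
  UFW_STATUS=applied
else
  UFW_STATUS=skipped
  echo "SKIP: run as root on a host with ufw installed"
fi
```

Log out and back in afterwards so the `docker` group membership from the previous step takes effect.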
Step-by-step deployment
Create a dedicated application directory:
sudo mkdir -p /opt/nocodb
sudo chown -R $USER:$USER /opt/nocodb
cd /opt/nocodb
Manual fallback: select and copy the block if the copy button is unavailable.
Create a secrets file. Use long random strings and keep this file out of backups that are broadly readable:
cat > .env <<'EOF'
DOMAIN=nocodb.example.com
TZ=UTC
POSTGRES_DB=nocodb
POSTGRES_USER=nocodb
POSTGRES_PASSWORD=replace_with_32plus_char_random_secret
# user and password here must match POSTGRES_USER/POSTGRES_PASSWORD above
NC_DB=pg://nocodb:replace_with_32plus_char_random_secret@db:5432/nocodb
NC_AUTH_JWT_SECRET=replace_with_64plus_char_random_secret
EOF
chmod 600 .env
Manual fallback: highlight the text block and copy directly.
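One straightforward way to produce the random values referenced above is openssl, which ships with Ubuntu; hex output avoids characters that would need quoting in a dotenv file. Remember that the password goes into both POSTGRES_PASSWORD and the NC_DB connection string.

```shell
# Generate candidate secrets for the .env file; hex is safe to paste unquoted.
PG_SECRET=$(openssl rand -hex 32)    # 64 hex chars for POSTGRES_PASSWORD (and NC_DB)
JWT_SECRET=$(openssl rand -hex 48)   # 96 hex chars for NC_AUTH_JWT_SECRET
echo "POSTGRES_PASSWORD=$PG_SECRET"
echo "NC_AUTH_JWT_SECRET=$JWT_SECRET"
```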
Now create your production Compose file as docker-compose.yml in /opt/nocodb:
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.web.http.redirections.entrypoint.scheme=https
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=you@example.com  # replace with your address
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --log.level=INFO
      - --accesslog=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik-letsencrypt:/letsencrypt
    restart: unless-stopped

  db:
    image: postgres:16
    env_file: .env
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=${TZ}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    restart: unless-stopped

  nocodb:
    image: nocodb/nocodb:latest  # consider pinning a specific version tag for reproducible upgrades
    env_file: .env
    environment:
      - NC_DB=${NC_DB}
      - NC_AUTH_JWT_SECRET=${NC_AUTH_JWT_SECRET}
      - TZ=${TZ}
    depends_on:
      db:
        condition: service_healthy
    labels:
      - traefik.enable=true
      - traefik.http.routers.nocodb.rule=Host(`${DOMAIN}`)
      - traefik.http.routers.nocodb.entrypoints=websecure
      - traefik.http.routers.nocodb.tls.certresolver=le
      - traefik.http.services.nocodb.loadbalancer.server.port=8080
    volumes:
      - nocodb_data:/usr/app/data
    restart: unless-stopped

volumes:
  postgres_data:
  nocodb_data:
Manual fallback: if script sanitization removes copy behavior, use standard text selection.
Launch and validate initial health:
mkdir -p traefik-letsencrypt
touch traefik-letsencrypt/acme.json
chmod 600 traefik-letsencrypt/acme.json
docker compose pull
docker compose up -d
docker compose ps
docker compose logs --tail=100 nocodb
docker compose logs --tail=100 traefik
Manual fallback: copy from the terminal block manually if needed.
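Certificate issuance and first boot can take a minute, so it helps to poll the public URL before opening a browser. This is a sketch against the hostname used throughout this guide; the guard skips it on hosts where the stack is not deployed.

```shell
# Poll the public URL until it answers (up to ~150 seconds).
DOMAIN=nocodb.example.com
if command -v docker >/dev/null 2>&1 && [ -d /opt/nocodb ]; then
  for i in $(seq 1 30); do
    # -k tolerates the interim self-signed cert Traefik serves before ACME completes
    if curl -fsSk -o /dev/null "https://$DOMAIN"; then WAIT_STATUS=up; break; fi
    sleep 5
  done
  WAIT_STATUS=${WAIT_STATUS:-timeout}
else
  WAIT_STATUS=skipped
  echo "SKIP: stack not deployed on this host"
fi
```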
At this point, open https://nocodb.example.com and finish admin bootstrap. After first login, create a workspace, enforce role-based permissions, and disable any throwaway test credentials used during validation.
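Once the admin bootstrap is done, you can sanity-check API access with a token created in the NocoDB UI. The /api/v2/meta/bases endpoint and the xc-token header reflect NocoDB's v2 meta API; verify both against the API docs of the version you deployed.

```shell
# Hedged sketch: exercise the API with a token created in the NocoDB UI.
# Endpoint path and header name are assumptions from NocoDB's v2 meta API.
DOMAIN=nocodb.example.com
if [ -n "${NC_TOKEN:-}" ]; then
  curl -fsS -H "xc-token: $NC_TOKEN" "https://$DOMAIN/api/v2/meta/bases"
  API_CHECK=done
else
  API_CHECK=skipped
  echo "SKIP: export NC_TOKEN with an API token first"
fi
```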
Configuration and secrets handling best practices
In production, availability is usually lost to weak secret hygiene and undocumented operator paths rather than code defects. Apply these controls early:
- Rotate secrets quarterly: regenerate POSTGRES_PASSWORD and NC_AUTH_JWT_SECRET within a maintenance window.
- Prefer secret managers: if available, inject env vars from Vault, 1Password, or a cloud secrets manager rather than storing long-lived values in flat files.
- File permissions: keep .env at mode 600 and owned by the deployment account.
- Backups: schedule logical PostgreSQL dumps plus periodic volume snapshots; test restores monthly.
- Least privilege: database account should be scoped to NocoDB database only.
- Observability: centralize Traefik and app logs, and alert on repeated 5xx responses.
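The file-permission control above is easy to drift on, so a tiny audit helper can flag regressions during routine checks. This is a sketch demoed against a temporary file standing in for /opt/nocodb/.env; point it at your real paths in a cron job or runbook step.

```shell
# Flag files whose mode differs from what the runbook expects
# (600 for .env and acme.json in this guide).
check_mode() {
  local f="$1" want="$2" got
  got=$(stat -c '%a' "$f")
  if [ "$got" = "$want" ]; then
    echo "OK: $f is mode $got"
  else
    echo "WARN: $f is mode $got, expected $want"
  fi
}

tmp=$(mktemp)        # stand-in for /opt/nocodb/.env
chmod 600 "$tmp"
check_mode "$tmp" 600
rm -f "$tmp"
```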
Example backup script for PostgreSQL with retention; save it as /usr/local/bin/nocodb-backup.sh so the cron entry in the next step finds it:
#!/usr/bin/env bash
set -euo pipefail
cd /opt/nocodb
source .env
BACKUP_DIR=/opt/nocodb/backups
mkdir -p "$BACKUP_DIR"
STAMP=$(date +%F-%H%M%S)
# Logical dump
docker compose exec -T db pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" | gzip > "$BACKUP_DIR/nocodb-$STAMP.sql.gz"
# Keep 14 days
find "$BACKUP_DIR" -type f -name 'nocodb-*.sql.gz' -mtime +14 -delete
Manual fallback: if copy UX is unavailable, copy this script as plain text and save as /usr/local/bin/nocodb-backup.sh.
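The retention rule is the part of that script most worth checking before trusting it in cron: find's -mtime +14 matches files strictly older than 14 days. You can verify the behavior in isolation against a throwaway directory:

```shell
# Sanity-check the retention rule: old dumps deleted, fresh ones kept.
demo=$(mktemp -d)
touch -d '20 days ago' "$demo/nocodb-old.sql.gz"   # should be deleted
touch "$demo/nocodb-fresh.sql.gz"                  # should survive
find "$demo" -type f -name 'nocodb-*.sql.gz' -mtime +14 -delete
ls "$demo"   # prints nocodb-fresh.sql.gz
rm -rf "$demo"
```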
Schedule the backup with cron:
chmod +x /usr/local/bin/nocodb-backup.sh
(crontab -l 2>/dev/null; echo "17 2 * * * /usr/local/bin/nocodb-backup.sh") | crontab -
Manual fallback: open crontab and paste line manually when needed.
Verification checklist
- DNS resolves: dig +short nocodb.example.com returns your server IP.
- TLS works: certificate is valid and browser shows a secure lock icon.
- Containers healthy: docker compose ps shows running state for all services.
- Database reachable only on internal network (no public 5432 exposure).
- NocoDB login, table creation, and API token generation all succeed.
- Backup file created and restore test works on a staging host.
Optional smoke test from CLI:
curl -I https://nocodb.example.com
openssl s_client -connect nocodb.example.com:443 -servername nocodb.example.com </dev/null 2>/dev/null | openssl x509 -noout -dates
cd /opt/nocodb && source .env && docker compose exec -T db psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT now();"
Manual fallback: if one-click copy is unavailable, run commands line by line.
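To confirm the "no public 5432 exposure" item from the checklist, inspect host listeners directly. ss comes from iproute2 on Ubuntu; containers on the internal bridge network will not appear here, which is exactly what you want.

```shell
# Confirm PostgreSQL has no listener on the host's interfaces.
if command -v ss >/dev/null 2>&1; then
  if ss -tln 2>/dev/null | grep -q ':5432\b'; then
    PG_EXPOSURE=warn
    echo "WARN: a listener on port 5432 is visible on the host"
  else
    PG_EXPOSURE=ok
    echo "OK: no host listener on 5432; PostgreSQL stays on the internal bridge"
  fi
else
  PG_EXPOSURE=skip
  echo "SKIP: ss not available"
fi
```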
Common issues and fixes
1) TLS certificate is not issued
Symptoms: Traefik logs ACME errors or browser warns about insecure certificate.
Fix: Verify DNS points correctly, port 80 is reachable publicly for challenge flow, and acme.json has strict write permissions (600).
2) NocoDB keeps restarting
Symptoms: docker compose ps shows restarts; logs mention DB connection errors.
Fix: Confirm NC_DB string matches db service host and credentials exactly. Run docker compose logs db to validate PostgreSQL readiness.
3) Slow response under moderate load
Symptoms: Delays on table views and API calls after team onboarding.
Fix: Increase VM memory, tune PostgreSQL shared buffers conservatively, enable connection pooling if needed, and review expensive views/queries in large bases.
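One conservative way to apply the PostgreSQL tuning mentioned above is to pass parameters as container command flags in the db service of the Compose file. The values below are illustrative for a host with roughly 4 GB RAM, not a recommendation; size them to your workload before applying and restart with docker compose up -d db.

```yaml
# Illustrative override for the db service; sizes assume ~4 GB RAM on the host.
db:
  image: postgres:16
  command:
    - postgres
    - -c
    - shared_buffers=512MB
    - -c
    - max_connections=100
```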
4) Backups exist but restores fail
Symptoms: Dump files are present, but restore process errors during incidents.
Fix: Automate restore drills monthly in staging and document exact restore command sequence in the runbook.
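A restore drill is easier to repeat when the command sequence is scripted. The sketch below assumes the stack from this guide under /opt/nocodb and pipes the newest dump into an empty database; on a real restore, drop and recreate the nocodb database first so objects from the dump do not collide with existing ones.

```shell
# Hedged restore sketch for a staging drill; restores the NEWEST dump.
if [ -d /opt/nocodb/backups ] && command -v docker >/dev/null 2>&1; then
  cd /opt/nocodb
  LATEST=$(ls -1t backups/nocodb-*.sql.gz 2>/dev/null | head -n1)
  # Assumes the nocodb user/database names from this guide's .env defaults.
  gunzip -c "$LATEST" | docker compose exec -T db psql -U nocodb -d nocodb
  RESTORE_STATUS=done
else
  RESTORE_STATUS=skipped
  echo "SKIP: no backup directory or docker unavailable on this host"
fi
```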
5) Copy button appears but does nothing
Symptoms: Odoo theme sanitization or browser policy blocks clipboard API.
Fix: Keep manual-copy fallback text below every code block (included in this guide) and ensure keyboard copy remains usable.
FAQ
Can I run NocoDB with SQLite for production?
You can for testing, but production should use PostgreSQL for resilience, concurrency handling, and operational visibility. SQLite is convenient for demos, not for team-scale workflows.
Should Traefik and NocoDB live on the same server?
For small to medium workloads, yes. It simplifies routing and certificate handling. As traffic grows, move edge routing to a dedicated tier and keep app/database on private subnets.
How do I safely rotate database credentials?
Use a maintenance window: update PostgreSQL user password first, then update .env, restart NocoDB, and verify app login/API health before closing the change request.
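The rotation sequence from that answer can be sketched as a script. It assumes the compose stack from this guide with the nocodb user/database names from .env; run it during a maintenance window, and note that step 2 is deliberately left as a manual edit so the new password lands in both POSTGRES_PASSWORD and NC_DB.

```shell
# Hedged sketch of credential rotation for the stack in this guide.
NEW_PW=$(openssl rand -hex 32)
if [ -d /opt/nocodb ] && command -v docker >/dev/null 2>&1; then
  cd /opt/nocodb
  # 1) Change the password inside PostgreSQL first
  docker compose exec -T db psql -U nocodb -d nocodb \
    -c "ALTER USER nocodb WITH PASSWORD '$NEW_PW';"
  # 2) Update POSTGRES_PASSWORD and NC_DB in .env to the same value,
  #    then restart the app: docker compose up -d nocodb
  ROTATE_STATUS=done
else
  ROTATE_STATUS=skipped
  echo "SKIP: stack not present on this host; new password generated only"
fi
```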
What backup frequency is reasonable for operational teams?
Daily logical dumps are a practical baseline. If your teams treat NocoDB as a critical workflow platform, add hourly snapshots and replicate backup artifacts off-host.
How do I handle upgrades without long downtime?
Pin image tags, test upgrade in staging, snapshot volumes, run docker compose pull + up -d, and keep a rollback path with previous image tags documented.
Can I integrate SSO?
Yes, typically through your identity stack and reverse proxy policies. Validate group-to-role mapping early so teams inherit least-privilege access by default.
What monitoring should I add first?
Start with container uptime, 5xx rate at Traefik, PostgreSQL availability, and disk usage alerts. These cover most production incidents before adding deep application metrics.
Related internal guides
- Production Guide: Deploy Miniflux with Docker Compose, Nginx, and PostgreSQL on Ubuntu
- Production Guide: Deploy Grafana with Docker Compose + Nginx + PostgreSQL on Ubuntu
- Production Guide: Deploy Authentik with Docker Compose + Traefik + PostgreSQL on Ubuntu
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.