When a small operations team starts tracking infrastructure work in a shared spreadsheet, the failure mode is predictable: tickets get copied into chat, maintenance windows lose context, and nobody can tell whether a blocked task is waiting on DNS, procurement, or a rollback decision. Kanboard is a focused open-source Kanban application that gives engineering teams a lightweight board without bringing in a heavyweight project-management suite. In this guide we will deploy Kanboard on Ubuntu with Docker Compose, Caddy for automatic HTTPS, PostgreSQL for durable metadata, and a simple backup routine that an on-call engineer can actually restore under pressure.
Architecture and flow overview
The deployment uses a single Ubuntu host with four clear responsibilities. Caddy runs on the host, terminates HTTPS, and forwards browser traffic to the Kanboard container through a port published only on the loopback interface. Kanboard stores application state in PostgreSQL instead of the default SQLite file so backups, upgrades, and future migrations are safer. Uploaded files live on a named Docker volume, while database dumps are written to a host directory that can be copied to object storage or another backup target. This pattern keeps the public attack surface small: only ports 80 and 443 are exposed, while PostgreSQL remains internal to the Compose project.
Request flow is simple: users visit https://kanboard.example.com, Caddy obtains and renews certificates, Caddy proxies to the Kanboard container's HTTP port, Kanboard reads configuration from environment variables, and PostgreSQL persists projects, users, comments, swimlanes, and task history. Operationally, the host owner manages the lifecycle with docker compose, checks logs from one directory, rotates backups with a cron job, and verifies the service through both the web UI and direct container health checks.
Prerequisites
- Ubuntu 22.04 or 24.04 server with a non-root sudo user.
- A DNS record such as kanboard.example.com pointing at the server.
- Ports 80 and 443 open to the internet for Caddy certificate issuance.
- Outbound access for pulling Docker images and renewing TLS certificates.
- A password manager for database and application secrets.
Step-by-step deployment
1) Install Docker, Compose, Caddy, and firewall basics
Start from a clean server and install the runtime pieces. If your organization already manages Docker and Caddy with configuration management, keep those standards; the important part is that Compose and Caddy are installed before the application directory is created.
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg ufw
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker "$USER"
sudo apt-get install -y caddy
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
newgrp docker
2) Create the application layout and strong secrets
Keep the stack in a predictable directory so future responders know where to look. The .env file is intentionally restricted to root and the deployment group because it contains database credentials. Generate unique values for every environment; do not reuse passwords from a staging Kanboard instance.
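One illustrative way to generate the replacement password (any vetted generator or your password manager works equally well):

```shell
# Print a ~44-character random value suitable for POSTGRES_PASSWORD
openssl rand -base64 32
```

Paste the output into .env in place of the placeholder, and store the same value in your password manager.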
sudo mkdir -p /opt/kanboard/{backups,caddy}
sudo chown -R "$USER":"$USER" /opt/kanboard
cd /opt/kanboard
cat > .env <<'EOF'
POSTGRES_DB=kanboard
POSTGRES_USER=kanboard
POSTGRES_PASSWORD=replace-with-a-long-random-password
KANBOARD_DOMAIN=kanboard.example.com
EOF
chmod 600 .env
3) Define the Docker Compose stack
This Compose file pins the PostgreSQL major version, separates the application and database volumes, and avoids publishing PostgreSQL on the host. Kanboard's HTTP port is published only on the loopback interface so the host Caddy service can reach it, while PostgreSQL stays on the internal bridge network with no published ports. For strict environments, pin image digests after testing and promote changes through a staging host first.
cat > docker-compose.yml <<'EOF'
services:
  kanboard:
    image: kanboard/kanboard:latest
    restart: unless-stopped
    env_file: .env
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      PLUGIN_INSTALLER: "false"
      LOG_DRIVER: stdout
    ports:
      # Published only on loopback so the host Caddy service can reach
      # the app; nothing is exposed on the public interface.
      - "127.0.0.1:8080:80"
    volumes:
      - kanboard_data:/var/www/app/data
      - kanboard_plugins:/var/www/app/plugins
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - internal
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    env_file: .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks:
      - internal
networks:
  internal:
    driver: bridge
volumes:
  kanboard_data:
  kanboard_plugins:
  postgres_data:
EOF
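For reference, the variable interpolation Compose performs against .env can be sketched in plain shell; the values below are placeholders standing in for your real entries:

```shell
# Mirror of the substitution docker compose applies to DATABASE_URL
POSTGRES_USER=kanboard
POSTGRES_PASSWORD=example-not-a-real-secret
POSTGRES_DB=kanboard
DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"
echo "$DATABASE_URL"
# → postgres://kanboard:example-not-a-real-secret@postgres:5432/kanboard
```

The hostname `postgres` resolves inside the Compose network to the database container, which is why the URL never references the host or a published port.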
4) Configure Caddy and start HTTPS
Caddy can run as a host service while the application stays in Docker; it proxies to the Kanboard port that Compose publishes on 127.0.0.1. Use a short reverse proxy block, reload Caddy, then start the stack. If your domain does not resolve yet, stop here and fix DNS first; repeated failed certificate attempts can hit ACME rate limits.
source .env
sudo tee /etc/caddy/Caddyfile >/dev/null <<EOF
${KANBOARD_DOMAIN} {
    encode gzip
    reverse_proxy 127.0.0.1:8080
}
EOF
sudo systemctl reload caddy
docker compose up -d
5) Create backups and a restore drill
Backups should include both PostgreSQL and Kanboard's data volume. The following script writes a compressed database dump and a tarball of uploaded files. Ship the resulting files off-host with your normal backup system, then test a restore to a disposable VM before trusting the schedule.
cat > /opt/kanboard/backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/kanboard
source .env
stamp=$(date -u +%Y%m%dT%H%M%SZ)
mkdir -p backups
docker compose exec -T postgres pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip > "backups/kanboard-db-${stamp}.sql.gz"
docker run --rm -v kanboard_kanboard_data:/data -v "$PWD/backups:/backup" alpine \
tar -czf "/backup/kanboard-data-${stamp}.tar.gz" -C /data .
find backups -type f -mtime +14 -delete
EOF
chmod +x /opt/kanboard/backup.sh
/opt/kanboard/backup.sh
(crontab -l 2>/dev/null; echo '17 2 * * * /opt/kanboard/backup.sh >>/opt/kanboard/backup.log 2>&1') | crontab -
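The retention line in backup.sh is worth sanity-checking in isolation before trusting it with real archives; this scratch-directory demonstration assumes GNU coreutils, as shipped with Ubuntu:

```shell
# Demonstrate the 14-day retention rule on throwaway files
tmp=$(mktemp -d)
touch -d '20 days ago' "$tmp/old-db.sql.gz"   # past the cutoff, should be deleted
touch "$tmp/new-db.sql.gz"                    # fresh, should survive
find "$tmp" -type f -mtime +14 -delete
ls "$tmp"                                     # only new-db.sql.gz remains
rm -r "$tmp"
```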
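A matching restore drill can be sketched as follows. The `<STAMP>` placeholders must be replaced by hand with real timestamps from the backups directory, and the dump should be loaded into a freshly initialized database on a throwaway host, not over live data:

```shell
# Restore drill sketch: substitute <STAMP> with an actual archive timestamp
cd /opt/kanboard
docker compose up -d postgres
gunzip -c "backups/kanboard-db-<STAMP>.sql.gz" | docker compose exec -T postgres psql -U kanboard -d kanboard
docker run --rm -v kanboard_kanboard_data:/data -v "$PWD/backups:/backup" alpine \
  sh -c 'tar -xzf "/backup/kanboard-data-<STAMP>.tar.gz" -C /data'
docker compose up -d
```

Time the drill and write down the steps that surprised you; that note is what the on-call engineer will actually use.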
Configuration and secrets handling best practices
Immediately change the default Kanboard administrator credentials after first login and create named accounts for every operator. Disable the plugin installer in production unless you have a controlled plugin review process; third-party plugins can become an untracked supply-chain path. Treat .env as sensitive, keep it out of Git, and rotate the PostgreSQL password if the file is ever exposed in a ticket or chat transcript.
Use groups or project-level roles rather than sharing a single admin account. For teams that need single sign-on, evaluate Kanboard authentication plugins in staging first and document a break-glass local administrator. Keep SMTP settings, webhook tokens, and integration credentials in the same restricted configuration flow as database credentials. For high-change environments, place Caddy and Compose files under infrastructure-as-code while keeping secrets in your password manager or a secret store.
Verification checklist
- docker compose ps shows both containers running and PostgreSQL healthy.
- curl -I https://kanboard.example.com returns HTTP 200 or 302 over TLS.
- First login succeeds, the admin password is changed, and a non-admin user can access a test project.
- A task attachment uploads successfully and survives a container restart.
- The backup script creates both database and data archives, and at least one restore has been tested.
cd /opt/kanboard
docker compose ps
curl -I "https://$(grep KANBOARD_DOMAIN .env | cut -d= -f2)"
docker compose logs --tail=80 kanboard
docker compose exec -T postgres pg_isready -U kanboard -d kanboard
ls -lh backups | tail
Common issues and fixes
Caddy cannot issue a certificate
Confirm the DNS A or AAAA record points to the server, ports 80 and 443 are reachable from the public internet, and no cloud firewall is blocking ACME validation. Run sudo journalctl -u caddy -n 100 --no-pager and fix the first network or DNS error before retrying.
Kanboard starts but shows database errors
Check that the values in .env match the PostgreSQL environment used on first boot. If you changed credentials after the database volume was initialized, update the password for the database user inside PostgreSQL, or recreate the volume only if the instance has not yet carried production data.
Attachments disappear after redeploy
Make sure the kanboard_data volume is present and included in backups. Avoid binding application data to a temporary path, and do not run cleanup commands that remove named volumes unless you have a verified restore point.
Users are not receiving notifications
Kanboard needs valid SMTP settings for email notifications. Configure SMTP in the application settings or supported environment variables, test with a real mailbox, and verify SPF, DKIM, and DMARC alignment if messages leave your domain.
Upgrades feel risky
Before upgrading, take a fresh database dump and data archive, read the Kanboard release notes, and test the same image change on a staging copy. For production, upgrade during a maintenance window and keep the previous image tag available for rollback.
FAQ
Can this run on a small VPS?
Yes. Kanboard is lightweight, and a 1–2 vCPU server with 2 GB of RAM is enough for many small teams. Increase resources if attachments are large or if many users interact with boards during the same workday.
Why use PostgreSQL instead of SQLite?
SQLite is convenient for demos, but PostgreSQL gives cleaner backups, clearer restore procedures, and better operational boundaries for production. It also matches the database platform many teams already monitor.
Should I expose PostgreSQL for reporting?
No. Keep PostgreSQL private to the Docker network. If reporting is required, create a controlled export or read-only replica path rather than opening the database port on the public host.
How often should backups run?
Daily backups are a reasonable default for small boards. If Kanboard tracks critical operational handoffs, run backups more often and align retention with your incident-response and compliance needs.
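Tightening the schedule is a one-line crontab change; an illustrative fragment (times and log path are arbitrary examples):

```
# Daily at 02:17 UTC, as set up earlier
17 2 * * * /opt/kanboard/backup.sh >>/opt/kanboard/backup.log 2>&1
# Every six hours instead, for boards with critical handoffs
17 */6 * * * /opt/kanboard/backup.sh >>/opt/kanboard/backup.log 2>&1
```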
Can I install Kanboard plugins?
Yes, but review plugins like production code. Test compatibility in staging, pin versions where possible, and document who owns updates. Disable casual in-app plugin installation for shared production systems.
What should I monitor first?
Monitor HTTPS availability, container restarts, disk usage, backup freshness, and PostgreSQL health. Add application-level checks such as login or board page availability if Kanboard becomes operationally critical.
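As a concrete starting point for backup-freshness monitoring, a minimal sketch; the path and the ~26-hour threshold are assumptions to adapt, and the echo would normally be replaced by your alerting hook:

```shell
# Warn when the newest database dump is missing or older than ~26 hours
latest=$(ls -t /opt/kanboard/backups/*.sql.gz 2>/dev/null | head -n1)
now=$(date +%s)
if [ -z "$latest" ] || [ $(( now - $(stat -c %Y "$latest") )) -gt 93600 ]; then
  echo "backup stale or missing"
fi
```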
Internal links
- OpenProject with Docker Compose, Caddy, PostgreSQL, and Redis
- Forgejo with Docker Compose, Caddy, PostgreSQL, and SSH
- Healthchecks with Docker Compose, Caddy, and PostgreSQL
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.