
Production Guide: Deploy Linkwarden with Docker Compose + Caddy + PostgreSQL on Ubuntu

A production-oriented self-hosted bookmark and web archive deployment with TLS, backups, and operational checks.

Teams often start by saving bookmarks in browsers, wikis, and chat threads, then lose the context when a project moves quickly. Linkwarden gives engineering, operations, and research teams a central place to collect links, preserve snapshots, and organize references without sending every internal discovery trail to a third-party SaaS. This guide walks through a production-oriented deployment of Linkwarden on a single Ubuntu server using Docker Compose, Caddy for automatic HTTPS, and PostgreSQL for durable storage.

The goal is not just to make the container start. We will create a repeatable directory layout, isolate secrets, configure reverse proxy headers, add health and backup routines, and define verification steps your team can reuse during upgrades. The same pattern works for small internal knowledge bases, customer research libraries, security reading lists, or a private bookmark archive shared by a distributed team.

Architecture and flow overview

The deployment has four moving parts. Caddy terminates TLS and routes public traffic to the Linkwarden application container. Linkwarden serves the web UI and runs its background archiving jobs. PostgreSQL stores users, collections, tags, and link metadata. A Docker network keeps application traffic private, while only Caddy binds to ports 80 and 443 on the host.

Operationally, administrators edit one environment file, bring the stack up with Compose, and verify the service through the public hostname. Backups focus on the PostgreSQL database and uploaded archive data. If the app container is replaced during an upgrade, persistent data remains in named volumes and the database volume.

Prerequisites

  • Ubuntu 22.04 or 24.04 server with a non-root sudo user.
  • A DNS record such as links.example.com pointing to the server.
  • Ports 80 and 443 open from the internet for Caddy and Let's Encrypt.
  • Docker Engine and Docker Compose plugin installed.
  • A plan for outbound email if you want invitations and password resets.

For sizing, start with 2 vCPU, 4 GB RAM, and 40 GB disk for a small team. Increase disk capacity if you preserve many webpage snapshots or imported browser archives.

Step-by-step deployment

Create a dedicated application directory and lock it down so secrets are not scattered across home folders or shell history.

sudo mkdir -p /opt/linkwarden/{data,postgres,backups}
sudo chown -R $USER:$USER /opt/linkwarden
cd /opt/linkwarden
umask 077
openssl rand -hex 32 > .nextauth-secret
# hex output avoids URL-reserved characters when the password is embedded in DATABASE_URL
openssl rand -hex 32 > .postgres-password

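As a quick sanity check, you can confirm that the umask actually produced owner-only files of the expected length. This is a throwaway sketch that generates a dummy secret in a temp directory rather than touching /opt/linkwarden:

```shell
# Sanity-check sketch: umask 077 should yield mode 600 files, and
# `openssl rand -hex 32` writes 64 hex characters plus a newline (65 bytes).
tmp=$(mktemp -d)
(
  cd "$tmp"
  umask 077
  openssl rand -hex 32 > .nextauth-secret
  [ "$(wc -c < .nextauth-secret)" -eq 65 ] || { echo "unexpected length" >&2; exit 1; }
  # GNU stat first (Ubuntu), BSD stat as a fallback
  mode=$(stat -c %a .nextauth-secret 2>/dev/null || stat -f %Lp .nextauth-secret)
  [ "$mode" = "600" ] || { echo "unexpected mode $mode" >&2; exit 1; }
  echo "secrets look sane"
)
rm -rf "$tmp"
```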

Next, create an environment file. Keep it outside the Compose YAML so rotations and secret reviews are straightforward. Replace the hostname and SMTP values with your own production settings.

cat > .env <<'EOF'
DOMAIN=links.example.com
POSTGRES_DB=linkwarden
POSTGRES_USER=linkwarden
POSTGRES_PASSWORD=replace-with-content-of-.postgres-password
NEXTAUTH_SECRET=replace-with-content-of-.nextauth-secret
NEXTAUTH_URL=https://links.example.com
SMTP_HOST=smtp.example.com
SMTP_PORT=587
[email protected]
SMTP_PASSWORD=replace-with-smtp-password
EMAIL_FROM=Linkwarden <[email protected]>
EOF
sed -i "s|replace-with-content-of-.postgres-password|$(cat .postgres-password)|" .env
sed -i "s|replace-with-content-of-.nextauth-secret|$(cat .nextauth-secret)|" .env

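After the substitutions, nothing in .env should still contain a placeholder. The check below demonstrates the idea against a throwaway copy in a temp directory, so it makes no assumptions about your real .env:

```shell
# Demonstrates the placeholder substitution and verifies none remain.
tmp=$(mktemp -d)
cd "$tmp"
printf 'POSTGRES_PASSWORD=replace-with-content-of-.postgres-password\n' > .env
openssl rand -hex 32 > .postgres-password
sed -i "s|replace-with-content-of-.postgres-password|$(cat .postgres-password)|" .env
if grep -q 'replace-with' .env; then
  echo "unresolved placeholders remain" >&2
  exit 1
fi
echo ".env fully substituted"
```

Running the same `grep -q 'replace-with' .env` against the real file is a cheap pre-flight check before `docker compose up`.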

Create the Compose file with private networking between Linkwarden and PostgreSQL. Caddy is the only service exposed to the host network.

cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks: [internal]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10

  linkwarden:
    image: ghcr.io/linkwarden/linkwarden:latest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      NEXTAUTH_URL: ${NEXTAUTH_URL}
      NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USER: ${SMTP_USER}
      SMTP_PASSWORD: ${SMTP_PASSWORD}
      EMAIL_FROM: ${EMAIL_FROM}
    volumes:
      - ./data:/data/data
    networks: [internal]

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    environment:
      # Caddy expands the {$DOMAIN} placeholder in the Caddyfile from its own
      # environment, so the variable must be passed into the container.
      DOMAIN: ${DOMAIN}
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    networks: [internal]

networks:
  internal:

volumes:
  caddy_data:
  caddy_config:
EOF


Now configure Caddy. The proxy passes the original scheme and client address, which matters for secure cookies, audit trails, and future rate-limiting rules.

cat > Caddyfile <<'EOF'
{$DOMAIN} {
  encode zstd gzip
  reverse_proxy linkwarden:3000 {
    header_up X-Forwarded-Proto {scheme}
    header_up X-Forwarded-Host {host}
    header_up X-Real-IP {remote_host}
  }
  header {
    X-Content-Type-Options nosniff
    Referrer-Policy strict-origin-when-cross-origin
    X-Frame-Options SAMEORIGIN
  }
}
EOF
docker compose config
docker compose up -d


Watch the first boot carefully. Caddy may take a minute to issue certificates, and Linkwarden may run database migrations before the UI is ready.

Configuration and secrets handling best practices

Treat .env, .nextauth-secret, and .postgres-password as credentials. Do not commit them to Git, do not paste them into tickets, and do not reuse them across environments. If you need configuration-as-code, keep a redacted .env.example in Git and inject the real values from your secrets manager during deployment.

For production email, use a mailbox or SMTP relay dedicated to application notifications. Validate SPF, DKIM, and DMARC for the sending domain so invitation emails are not silently filtered. If your organization uses SSO, place Linkwarden behind an identity-aware proxy or restrict access by VPN until you validate the authentication model.

Backups should include both PostgreSQL and uploaded archive data. A simple nightly dump is adequate for small teams, but test restores quarterly. A backup that cannot be restored is only a log file with optimism attached.
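Between quarterly restore tests, a cheap automated integrity check catches silently failing backups early. A sketch, assuming the `linkwarden-db-*.sql.gz` naming used by the backup script in this guide (the `check_latest_dump` helper name is our own):

```shell
# Hypothetical helper: confirm the newest dump is recent and gunzips cleanly.
check_latest_dump() {
  dir=$1
  max_age_days=${2:-2}
  # newest matching file modified within the age window (timestamped names sort correctly)
  latest=$(find "$dir" -name 'linkwarden-db-*.sql.gz' -mtime -"$max_age_days" | sort | tail -n 1)
  [ -n "$latest" ] || { echo "no dump newer than $max_age_days days in $dir" >&2; return 1; }
  gzip -t "$latest" || { echo "corrupt dump: $latest" >&2; return 1; }
  echo "ok: $latest"
}
```

Run it from cron an hour after the backup job and alert on a non-zero exit.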

Harden the host as part of the application deployment, not as a separate project nobody schedules. Limit SSH to trusted administrators, keep unattended security upgrades enabled, and expose only the ports required by Caddy. If your cloud provider has security groups, apply the same rule there so the host firewall is not your only control.

sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose


For upgrades, avoid pulling images during an incident. Take a fresh backup, read the Linkwarden release notes, pull images, restart the stack, and immediately run the verification checklist. If the application fails to boot, keep the previous database dump and image tag available so you can roll back without guessing which migration changed the schema.
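One lightweight way to remove the guessing is to write down which image references were in use before pulling. A sketch under the assumption that images are declared inline in docker-compose.yml (the `snapshot_images` helper is illustrative, not part of Linkwarden or Compose):

```shell
# Illustrative helper: list the image references declared in a Compose file,
# so the pre-upgrade state is recorded before `docker compose pull`.
snapshot_images() {
  grep -E '^[[:space:]]*image:' "$1" | awk '{print $2}'
}

# Typical use, run from /opt/linkwarden before an upgrade:
#   snapshot_images docker-compose.yml > ".images-$(date -u +%Y%m%dT%H%M%SZ)"
```

Note this captures tags, not digests; with a floating `latest` tag you would also want to record the running containers' resolved digests via `docker image inspect`.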

Create the nightly backup script itself:

cat > backup-linkwarden.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/linkwarden
# pull database settings from the same .env Compose reads; with `set -u`,
# the script would otherwise fail on the unset variables below
POSTGRES_USER=$(grep '^POSTGRES_USER=' .env | cut -d= -f2-)
POSTGRES_DB=$(grep '^POSTGRES_DB=' .env | cut -d= -f2-)
stamp=$(date -u +%Y%m%dT%H%M%SZ)
docker compose exec -T postgres pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip > "backups/linkwarden-db-$stamp.sql.gz"
tar -czf "backups/linkwarden-data-$stamp.tar.gz" data
find backups -type f -mtime +14 -delete
EOF
chmod +x backup-linkwarden.sh
./backup-linkwarden.sh


Verification checklist

Verification should prove the full user journey, not only that containers exist. Test from a network outside the server so DNS, TLS, proxy headers, authentication cookies, application rendering, and database writes are all exercised together. Keep the commands below in your runbook and repeat them after every upgrade, restore test, or firewall change.

Document the expected results beside each check. When a future operator sees a different HTTP status, expired certificate, missing backup file, or repeated container restart, they should know whether to roll back, renew DNS, or escalate before users report lost links.

  • Run docker compose ps and confirm all three services are healthy or running.
  • Open https://links.example.com and create the first administrative account.
  • Add a test bookmark, tag it, and confirm it appears in search results.
  • Check docker compose logs caddy for successful certificate issuance.
  • Run the backup script and confirm both database and data archive files are created.
Run the core commands from the deployment directory:

cd /opt/linkwarden
docker compose ps
curl -I https://links.example.com
docker compose logs --tail=80 linkwarden
docker compose logs --tail=80 caddy


Common issues and fixes

Caddy cannot issue a certificate. Confirm DNS points to the server and that ports 80 and 443 are reachable. If a firewall or cloud security group blocks inbound HTTP, Let's Encrypt validation will fail.

Login redirects loop or cookies do not stick. Verify NEXTAUTH_URL exactly matches the public HTTPS URL and that Caddy forwards the original scheme.

PostgreSQL starts but Linkwarden fails migrations. Check for special characters in the database password. If you manually edited DATABASE_URL, URL-encode symbols or use a simpler generated password.
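If you want to keep a password that contains reserved characters, percent-encode it before building DATABASE_URL. One way, assuming python3 is available on the host (the password shown is a made-up example):

```shell
# Percent-encode a password for safe use inside a connection URL.
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p+ss/w=rd')
echo "$encoded"
# prints p%2Bss%2Fw%3Drd
# postgresql://user:${encoded}@postgres:5432/db then parses correctly.
```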

Backups are empty. Run the backup command from /opt/linkwarden so Compose can load the same project and environment. Also confirm the database service name is still postgres.

FAQ

Can I run Linkwarden without exposing it publicly?

Yes. Put it behind a VPN, private reverse proxy, or identity-aware access layer. Caddy can still serve internal TLS if your DNS and certificates support the private hostname.

Should I pin image versions instead of using latest?

For strict change control, pin tested tags after your initial deployment. Use latest only if you have automated backups and an upgrade review process.

What should I back up first?

Back up PostgreSQL first, then the data directory. The database contains users, collections, and metadata; the data directory contains preserved content and uploaded assets.

Can multiple teams share one instance?

Yes, but define naming conventions for collections and tags early. Without conventions, shared bookmark tools become another unstructured dumping ground.

How do I test restores safely?

Restore the database dump and data archive into a temporary server with a different hostname. Never test restores by overwriting production first.

What monitoring should I add?

Monitor HTTPS availability, container restarts, disk usage, backup age, and PostgreSQL health. A bookmark archive can fail quietly if nobody watches background jobs and storage growth.
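The disk side of that monitoring is easy to script locally before wiring up a full monitoring stack. A sketch (the 85% threshold and helper name are illustrative choices):

```shell
# Report the usage percentage of the filesystem holding a path.
disk_pct() {
  # -P forces one-line POSIX output; field 5 is "Use%" on the data row
  df -P "$1" | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

pct=$(disk_pct /)
if [ "$pct" -ge 85 ]; then
  echo "WARN: disk at ${pct}%" >&2
else
  echo "disk at ${pct}%"
fi
```

Pointing it at the filesystem containing /opt/linkwarden and pairing it with the backup-age check gives early warning before snapshots fill the disk.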
