Joplin Server is a practical choice when a team wants encrypted note sync, shared notebooks, and a self-hosted path that does not turn internal runbooks into another SaaS dependency. A common real-world use case is a small engineering or operations group that already documents incident response, onboarding notes, and customer environment details in Joplin desktop or mobile clients, but needs a central sync target with HTTPS, backups, and predictable recovery. This guide deploys Joplin Server on Ubuntu using Docker Compose, PostgreSQL for durable storage, and Caddy for automatic TLS.
The goal is not just to make the container boot. The goal is a deployment that can survive host reboots, rotate secrets cleanly, validate backups, and give your team a repeatable operating model. The commands below assume a single Ubuntu VM, a DNS name such as notes.example.com, and a server that will be reachable over ports 80 and 443.
Architecture and flow overview
The stack has three main pieces. Caddy terminates HTTPS and forwards traffic to the Joplin Server container on the private Docker network. Joplin Server handles the web/API layer used by desktop, mobile, and terminal clients. PostgreSQL stores application data, users, items, and sync metadata on a named volume. A separate backup directory receives compressed database dumps so restore testing does not depend on Docker internals alone.
- Client flow: Joplin apps connect to https://notes.example.com.
- Edge flow: Caddy obtains certificates, enforces HTTPS, and reverse proxies to joplin:22300.
- Data flow: Joplin Server stores metadata and sync state in PostgreSQL.
- Operations flow: system administrators manage the stack with Compose, scheduled dumps, and periodic restore rehearsals.
Prerequisites
- Ubuntu 22.04 or 24.04 with sudo access.
- A DNS record pointing notes.example.com to the VM public IP.
- Ports 80 and 443 open to the internet for Caddy certificate issuance.
- Docker Engine and the Compose plugin installed.
- A plan for off-host backups, such as S3-compatible storage, rsync, or a managed backup agent.
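A quick preflight can confirm the DNS and port prerequisites before you install anything. This is a sketch, not part of the deployment itself: it assumes notes.example.com is your record, that dig is available (Ubuntu provides it in the dnsutils/bind9-dnsutils package), and it uses ifconfig.me as one of several possible IP-echo services.

```shell
# Preflight checks; replace notes.example.com with your own record.
DOMAIN=notes.example.com

# 1) DNS: the A record should return this VM's public IP.
dig +short "$DOMAIN"
curl -4 -s https://ifconfig.me; echo   # compare against the dig output

# 2) Ports: nothing else should already be listening on 80/443.
sudo ss -ltnp | grep -E ':80 |:443 ' || echo "ports 80 and 443 are free"
```

If the two IPs differ or another listener appears, fix DNS or free the ports before starting the stack; Caddy cannot issue certificates otherwise.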
Step-by-step deployment
1) Install Docker, Compose, and baseline tools
Start with a patched host and the packages needed for firewalling, backups, and simple troubleshooting.
sudo apt update
sudo apt -y upgrade
sudo apt -y install ca-certificates curl gnupg ufw jq openssl postgresql-client
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
newgrp docker
docker version
docker compose version
2) Create the project layout
Keep Compose files, environment files, Caddy configuration, and backups in predictable locations. This makes audits and restore drills easier.
sudo mkdir -p /opt/joplin-server/{caddy,backups}
sudo chown -R $USER:$USER /opt/joplin-server
cd /opt/joplin-server
touch .env docker-compose.yml caddy/Caddyfile
chmod 600 .env
3) Generate secrets and environment values
Use unique values per environment. Do not reuse database passwords from staging environments or other Compose stacks. The public URL must match the exact HTTPS URL users configure in their Joplin clients.
cd /opt/joplin-server
POSTGRES_PASSWORD=$(openssl rand -base64 36 | tr -d '\n')
cat > .env <<EOF
DOMAIN=notes.example.com
APP_BASE_URL=https://notes.example.com
POSTGRES_DATABASE=joplin
POSTGRES_USER=joplin
POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
EOF
cat .env
4) Create Docker Compose services
This Compose file pins the core service layout: PostgreSQL is private, Joplin is private, and only Caddy publishes ports. Named volumes keep application state separate from the Compose file directory.
cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DATABASE}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 10

  joplin:
    image: joplin/server:latest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      APP_BASE_URL: ${APP_BASE_URL}
      APP_PORT: 22300
      DB_CLIENT: pg
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DATABASE: ${POSTGRES_DATABASE}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    expose:
      - "22300"

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    depends_on:
      - joplin
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config

volumes:
  postgres_data:
  caddy_data:
  caddy_config:
EOF
5) Configure Caddy and the firewall
Caddy can handle certificates automatically as long as DNS and inbound ports are correct. Keep the reverse proxy simple first; add SSO or IP restrictions later after client sync works.
source .env
cat > caddy/Caddyfile <<EOF
${DOMAIN} {
	encode zstd gzip
	reverse_proxy joplin:22300
	header {
		X-Content-Type-Options nosniff
		Referrer-Policy strict-origin-when-cross-origin
		X-Frame-Options SAMEORIGIN
	}
}
EOF
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
sudo ufw status
6) Start the stack and create the first admin
Bring the services up, watch the first boot, and then open the site in a browser. Joplin Server ships with a default administrator account (email admin@localhost, password admin): sign in, change that password immediately, set a real admin email, and create named user accounts before onboarding the team.
cd /opt/joplin-server
docker compose config
docker compose up -d
docker compose ps
docker compose logs --tail=120 joplin
docker compose logs --tail=80 caddy
7) Add a database backup job
A Compose volume is not a backup strategy by itself. Schedule PostgreSQL dumps, keep them outside the container, and ship them off-host. The example below creates local compressed dumps you can integrate with your backup agent.
cat > /opt/joplin-server/backup-joplin.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /opt/joplin-server
source .env
stamp=$(date -u +%Y%m%dT%H%M%SZ)
mkdir -p backups
docker compose exec -T postgres pg_dump \
-U "$POSTGRES_USER" \
-d "$POSTGRES_DATABASE" \
--format=custom \
| gzip > "backups/joplin-${stamp}.dump.gz"
find backups -type f -name 'joplin-*.dump.gz' -mtime +14 -delete
EOF
chmod +x /opt/joplin-server/backup-joplin.sh
/opt/joplin-server/backup-joplin.sh
ls -lh /opt/joplin-server/backups
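Once the script produces a good dump, schedule it. The sketch below appends an entry to the root crontab; the 03:15 run time and the log path are arbitrary choices, not requirements.

```shell
# Run the backup nightly at 03:15 and keep a log of the last runs.
( sudo crontab -l 2>/dev/null; \
  echo '15 3 * * * /opt/joplin-server/backup-joplin.sh >> /var/log/joplin-backup.log 2>&1' ) \
  | sudo crontab -

# Confirm the entry was installed.
sudo crontab -l | grep backup-joplin
```

Pair the cron job with whatever ships /opt/joplin-server/backups off-host, and alert on a stale newest dump file rather than on cron exit codes alone.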
Configuration and secrets handling best practices
Treat .env as a secret file. It should not be committed to Git, pasted into support tickets, or shared in screenshots. If you use infrastructure automation, store the PostgreSQL password in a proper secret store and render the environment file during deployment. Restrict shell access to operators who are allowed to administer note data, because a database dump can expose sync metadata and content depending on client encryption choices.
Pin image versions when you move from pilot to production. The latest tag is convenient for a first deployment, but production upgrades should be explicit: review Joplin Server release notes, snapshot the VM or verify backups, change the tag, run docker compose pull, and then restart during a maintenance window. Keep Caddy and PostgreSQL patched as part of your monthly host maintenance.
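The upgrade steps above can be sketched as a short maintenance-window procedure. The tag in NEW_IMAGE is a placeholder, not a recommendation: substitute the version you validated against the Joplin Server release notes.

```shell
cd /opt/joplin-server
NEW_IMAGE="joplin/server:3.0.1"   # placeholder tag; use the release you validated

/opt/joplin-server/backup-joplin.sh                   # fresh dump before touching anything
sed -i "s|joplin/server:.*|${NEW_IMAGE}|" docker-compose.yml
docker compose pull joplin
docker compose up -d joplin
docker compose logs --tail=100 joplin                 # watch database migrations complete
```

Record the previous tag before editing the Compose file; rolling back is the same procedure with the old image name.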
For teams using end-to-end encryption in Joplin clients, document that encryption keys are managed by clients, not by the server. Server backups are still critical because they preserve sync state and encrypted items, but losing client-side keys can still make restored content unusable. Include key recovery guidance in your internal onboarding notes.
Verification checklist
- docker compose ps shows Caddy, Joplin, and PostgreSQL healthy or running.
- curl -I https://notes.example.com returns a 200 or expected redirect with a valid certificate.
- A test user can sign in and sync from a desktop Joplin client.
- A mobile client can sync over cellular data, proving the public URL and certificate chain work.
- The backup script produces a non-empty dump and your off-host backup system captures it.
- A restore rehearsal has been completed on a separate VM before production data becomes business-critical.
cd /opt/joplin-server
source .env
curl -I "https://${DOMAIN}"
docker compose exec -T postgres pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DATABASE"
docker compose exec -T postgres psql -U "$POSTGRES_USER" -d "$POSTGRES_DATABASE" -c '\dt'
ls -lh /opt/joplin-server/backups
Common issues and fixes
1) Caddy cannot issue a certificate
Check DNS first. The domain must resolve to the VM public IP, and ports 80 and 443 must reach Caddy. Also verify that another service is not already bound to those ports with sudo ss -ltnp.
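A quick triage pass, assuming the stack lives in /opt/joplin-server, is to read Caddy's own account of the failed issuance alongside the listener check:

```shell
cd /opt/joplin-server
# Caddy logs name the failing ACME challenge (DNS, connection refused, timeout).
docker compose logs --tail=200 caddy | grep -iE 'error|challenge|certificate'
# Only Caddy (via docker-proxy) should hold ports 80 and 443.
sudo ss -ltnp | grep -E ':80 |:443 '
```

The log lines usually distinguish a DNS problem (wrong IP in the challenge) from a port problem (connection refused or timeout from the CA).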
2) Joplin clients fail with URL mismatch errors
Confirm APP_BASE_URL exactly matches the public HTTPS URL users enter in clients. If you change the domain, update .env, restart Joplin, and test from a fresh client profile before asking the whole team to reconnect.
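If APP_BASE_URL does change, the joplin container must be recreated for the new environment to take effect; restarting alone is not enough. A sketch:

```shell
cd /opt/joplin-server
# Confirm the value clients will be told to use.
sed -n 's/^APP_BASE_URL=//p' .env
# Recreate the container so it picks up the edited environment file.
docker compose up -d --force-recreate joplin
docker compose logs --tail=50 joplin
```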
3) PostgreSQL starts but Joplin repeatedly restarts
Run docker compose logs joplin and check database settings. The most common causes are a typo in POSTGRES_PASSWORD, an environment file edited after the database was initialized, or a Compose file indentation problem.
4) Backups exist but restores are untested
Schedule a quarterly restore drill. Create a temporary VM, copy the latest dump, start a clean PostgreSQL container, restore with pg_restore, and verify that Joplin can connect. A backup that has never been restored is only an assumption.
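On the rehearsal VM, the drill described above might look like the sketch below. The container name joplin-restore-test, the throwaway password, and the dump filename are placeholders; pg_restore reads the custom-format dump from stdin.

```shell
# Start a throwaway PostgreSQL and restore the latest dump into it.
docker run -d --name joplin-restore-test \
  -e POSTGRES_PASSWORD=restore-test -e POSTGRES_USER=joplin -e POSTGRES_DB=joplin \
  postgres:16-alpine
sleep 10   # crude wait; a pg_isready loop is more robust

gunzip -c backups/joplin-YYYYMMDDTHHMMSSZ.dump.gz \
  | docker exec -i joplin-restore-test pg_restore -U joplin -d joplin --no-owner

# Verify the schema arrived before pointing a Joplin Server at it.
docker exec joplin-restore-test psql -U joplin -d joplin -c '\dt'
```

Finish the drill by starting the stack against the restored database and syncing a test client; table listings alone do not prove recoverability.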
5) Sync feels slow for large notebooks
Measure before tuning. Check host CPU, disk latency, PostgreSQL container logs, and network path from clients. Large initial syncs can be expensive; ongoing sync should stabilize after the first full client upload.
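A first measurement pass, before any tuning, can be as simple as:

```shell
docker stats --no-stream        # per-container CPU and memory snapshot
df -h / && uptime               # disk headroom and host load averages
cd /opt/joplin-server && docker compose logs --tail=100 postgres
```

If none of these show pressure, look at the network path from the affected clients before changing server settings.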
6) Users are confused about encryption responsibilities
Write a short internal note explaining that the server provides sync and availability, while end-to-end encryption keys live with users and clients. This prevents false confidence during device replacement or account recovery.
FAQ
Can I run Joplin Server without PostgreSQL?
For production, use PostgreSQL. It gives you standard backup tooling, predictable recovery, and a database layer that operators already know how to monitor.
Should I put Joplin Server behind SSO?
You can add an identity-aware proxy later, but first verify native Joplin client sync behavior. Some clients and API flows are sensitive to extra authentication layers, so test carefully with every client type your team uses.
How often should I back up the database?
Daily is a reasonable baseline for small teams, but increase frequency if Joplin becomes an operational knowledge base. Match the schedule to your recovery point objective.
What should be included in a restore test?
Restore the PostgreSQL dump, start the stack with the restored database, log in with a test account, and sync a client. Do not count a dump file listing as a restore test.
Can I use Cloudflare in front of Caddy?
Yes, but keep certificate and proxy settings simple. Use full TLS mode, avoid aggressive caching for application routes, and test desktop and mobile sync after enabling proxy features.
How do I safely upgrade Joplin Server?
Read release notes, confirm backups, pull the new image during a maintenance window, restart the stack, and run client sync tests. Keep the previous image tag documented for rollback.
Do users still need Joplin end-to-end encryption?
That depends on your threat model. Server-side HTTPS protects traffic, but client-side E2EE helps when operators should not be able to read note contents from server-side data.
Internal links
- Deploy HedgeDoc with Docker Compose + Caddy + PostgreSQL
- Deploy Linkwarden with Docker Compose + Caddy + PostgreSQL
- Deploy Semaphore UI with Docker Compose + Caddy + PostgreSQL
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.