Backups become strategic only when the team can restore from them under pressure. Kopia is a strong fit for small infrastructure teams because it combines client-side encryption, deduplication, compression, snapshots, retention policies, and a browser UI without forcing you into a large backup suite. This guide shows a practical way to run Kopia behind Caddy with Docker Compose, store encrypted snapshots in S3-compatible object storage, and verify that the backups are actually recoverable.
The real-world use case is a small production environment with application data under /srv, important host configuration under /etc, and a requirement that operators can review jobs from a secure web interface. The pattern is intentionally conservative: the Kopia container binds only to localhost, Caddy terminates TLS, secrets live outside the Compose file, and recovery is tested as part of the deployment instead of being left for an incident.
Architecture/flow overview
The deployment has four moving parts. First, Docker Compose runs the Kopia server and mounts only the paths that need protection. Second, Kopia encrypts and deduplicates snapshot data before it leaves the host. Third, object storage receives the encrypted repository, so the bucket operator cannot read files without the Kopia repository password. Fourth, Caddy publishes the UI over HTTPS and reverse proxies to the local Kopia listener.
Operationally, the flow is simple: a scheduled snapshot reads source directories, chunks and encrypts data locally, uploads changed chunks to S3, records a manifest, then applies retention during maintenance. Verification and restore commands should run from the same host at first, then from a clean recovery host during quarterly disaster recovery drills.
Prerequisites
- A Linux server with Docker Engine and the Compose plugin installed.
- A DNS record such as backups.example.com pointing to the server.
- Caddy installed on the host or available through your standard reverse-proxy layer.
- An S3-compatible bucket with access keys scoped to that bucket only.
- A written list of directories that must be backed up and directories that must be excluded.
Do not start with every filesystem path. Back up the data that matters, prove restore behavior, then expand coverage. This keeps the first repository understandable and prevents cache directories, container layers, and transient logs from dominating storage.
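Before writing that list, it helps to measure candidate paths so cache and log directories do not sneak into the first backup set. A quick sizing pass, assuming `du` and `sort` are available and using /srv as an example root:

```shell
# Rank top-level directories under /srv by size to decide what
# belongs in the first backup set and what should be excluded.
du -sh /srv/* 2>/dev/null | sort -rh | head -n 10
```

Anything unexpectedly large here is worth either excluding or moving behind an application-level export.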
Step-by-step deployment
1. Create the working directory
Keep Kopia configuration, cache, logs, and helper scripts in one predictable directory. Restrict permissions because repository credentials and server settings will be present on disk.
mkdir -p /opt/kopia/{config,cache,logs,scripts}
cd /opt/kopia
chmod 700 config cache scripts
2. Store secrets outside Compose
The .env file keeps passwords and access keys out of the Compose YAML. In production, generate these values with a password manager, store a copy in your break-glass vault, and rotate bucket credentials if an operator leaves the team.
cat > .env <<'EOF'
KOPIA_PASSWORD=replace-with-a-long-random-repository-password
KOPIA_USERNAME=backup-admin
KOPIA_SERVER_CONTROL_USERNAME=admin
KOPIA_SERVER_CONTROL_PASSWORD=replace-with-a-long-random-ui-password
KOPIA_ADDRESS=https://backups.example.com
AWS_ACCESS_KEY_ID=replace-with-s3-access-key
AWS_SECRET_ACCESS_KEY=replace-with-s3-secret-key
AWS_DEFAULT_REGION=us-east-1
KOPIA_BUCKET=company-kopia-backups
EOF
chmod 600 .env
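The placeholder values above should be generated, not invented. One way to produce suitably long random secrets, assuming `openssl` is installed on the host:

```shell
# Print independent random values for the repository and UI passwords;
# paste the output into .env in place of the placeholders.
for var in KOPIA_PASSWORD KOPIA_SERVER_CONTROL_PASSWORD; do
  printf '%s=%s\n' "$var" "$(openssl rand -base64 48 | tr -d '\n')"
done
```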
3. Add the Compose service
This example runs Kopia as root so it can read protected host paths. If your environment can provide group-readable application data, run it as a narrower user instead. The port mapping is bound to 127.0.0.1 because only Caddy should reach the UI.
services:
  kopia:
    image: kopia/kopia:latest
    restart: unless-stopped
    env_file: .env
    user: "0:0"
    command:
      - server
      - start
      - --insecure
      - --address=0.0.0.0:51515
      - --server-username=${KOPIA_SERVER_CONTROL_USERNAME}
      - --server-password=${KOPIA_SERVER_CONTROL_PASSWORD}
    volumes:
      - ./config:/app/config
      - ./cache:/app/cache
      - ./logs:/app/logs
      - ./scripts:/app/scripts:ro
      - /srv:/backup/srv:ro
      - /etc:/backup/etc:ro
    ports:
      - "127.0.0.1:51515:51515"
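Before starting anything, it is worth validating the file, since YAML indentation and `${...}` interpolation mistakes are the most common failure here:

```shell
# Validate the Compose file and .env interpolation without
# starting any containers; exits non-zero on errors.
docker compose config --quiet && echo "compose file OK"
```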
4. Publish the UI through Caddy
Caddy handles certificates and security headers. The reverse proxy target matches the Compose port binding, so the service is not accidentally exposed on the public network interface.
backups.example.com {
    encode zstd gzip
    reverse_proxy 127.0.0.1:51515
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}
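After editing the Caddyfile, validate and reload rather than restarting, so existing connections survive. The path below assumes the standard package layout; adjust it if Caddy runs from a custom location:

```shell
# Check Caddyfile syntax, then apply it without downtime.
caddy validate --config /etc/caddy/Caddyfile
systemctl reload caddy
```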
5. Start Kopia
Pull the image, start the container, and watch logs until the server is listening. If the UI does not load, check the Caddy journal and confirm that DNS points to the server before changing Kopia settings.
docker compose pull
docker compose up -d
docker compose logs -f kopia
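To separate Kopia problems from proxy problems, probe the loopback listener directly before touching Caddy or DNS. The credentials here are the placeholders from the .env example; substitute your real control username and password:

```shell
# Expect an HTTP status from the local listener; a connection
# refusal means the container or port binding is the problem,
# not Caddy or DNS.
curl -fsS -o /dev/null -w 'local listener: HTTP %{http_code}\n' \
  -u "admin:replace-with-a-long-random-ui-password" \
  http://127.0.0.1:51515/
```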
6. Create the repository and first policies
Create the encrypted S3 repository once, then set a retention policy for the paths you are protecting. The sample policy keeps recent restore points dense while retaining weekly history for rollback after slower-moving problems.
docker compose exec kopia sh -c 'kopia repository create s3 --bucket="$KOPIA_BUCKET" --endpoint=s3.amazonaws.com --access-key="$AWS_ACCESS_KEY_ID" --secret-access-key="$AWS_SECRET_ACCESS_KEY"'
docker compose exec kopia kopia policy set /backup/srv --keep-latest=10 --keep-hourly=24 --keep-daily=14 --keep-weekly=8 --compression=zstd-fastest
docker compose exec kopia kopia snapshot create /backup/srv /backup/etc
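The server can also trigger snapshots on its own schedule, which avoids maintaining a separate cron entry. A sketch using Kopia's `--snapshot-interval` policy field, with intervals chosen as examples rather than recommendations:

```shell
# Let the Kopia server schedule snapshots per source path:
# hourly for application data, daily for host configuration.
docker compose exec kopia kopia policy set /backup/srv --snapshot-interval=1h
docker compose exec kopia kopia policy set /backup/etc --snapshot-interval=24h
```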
Configuration and secrets handling
The repository password is the most important secret in this system. Losing it means losing access to the encrypted backups; leaking it means an attacker with bucket access may be able to restore data. Store it in a password manager with administrative access controls, and separately document who can approve a recovery operation.
Use a dedicated bucket and IAM policy for Kopia. The access key should not have account-wide permissions, should not manage unrelated buckets, and should be rotated on a schedule. Enable object versioning or retention if your storage provider supports it, but remember that storage-level retention complements Kopia retention; it does not replace repository maintenance.
Keep exclude rules deliberate. For example, exclude build caches, container overlay directories, package caches, temporary uploads, and local database replicas that are already backed up through native database dumps. For databases, prefer application-consistent exports or snapshots rather than relying only on live file copies.
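Exclude rules are also policy fields, so they can be versioned in the runbook alongside retention. A sketch using `kopia policy set --add-ignore`; the patterns below are illustrative and should be replaced with paths observed in your own sizing pass:

```shell
# Ignore high-churn, low-value paths under /backup/srv.
docker compose exec kopia kopia policy set /backup/srv \
  --add-ignore "node_modules/" \
  --add-ignore ".cache/" \
  --add-ignore "*.tmp"
```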
Verification
A backup job is not complete until verification passes. After the first snapshot, list snapshots, verify a sample of file contents, and run maintenance. Put these commands into your runbook so operators have a repeatable check after changes.
docker compose exec kopia kopia snapshot list
docker compose exec kopia kopia snapshot verify --verify-files-percent=5
docker compose exec kopia kopia maintenance run --full
Next, perform a small restore to a disposable location. The goal is to validate credentials, repository metadata, and operator muscle memory without waiting for an emergency.
RESTORE_ID=$(docker compose exec -T kopia kopia snapshot list --json | jq -r '.[0].id')
docker compose exec kopia kopia restore "$RESTORE_ID" /app/cache/restore-test
find /opt/kopia/cache/restore-test -maxdepth 2 -type f | head
For production, add three recurring checks: a daily snapshot status review, a weekly sample restore, and a quarterly full disaster recovery test on a separate host. Track the result in your ticketing or documentation system so failures become visible.
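The daily check is easiest to keep honest as a script in the scripts directory created earlier, run from cron or a systemd timer. A minimal sketch that fails loudly on any error:

```shell
# Write a daily health check to the mounted scripts directory:
# list snapshots and spot-verify a small sample of file content.
cat > /opt/kopia/scripts/daily-check.sh <<'EOF'
#!/bin/sh
set -eu
cd /opt/kopia
docker compose exec -T kopia kopia snapshot list
docker compose exec -T kopia kopia snapshot verify --verify-files-percent=1
EOF
chmod 700 /opt/kopia/scripts/daily-check.sh
```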
Common issues and fixes
The UI works locally but not through the browser
Confirm that the DNS record resolves to the server, that Caddy is running, and that the Compose port is bound to 127.0.0.1:51515. If Caddy runs in a container instead of on the host, place both services on the same Docker network and proxy to the service name rather than localhost.
Snapshots fail with permission denied
Check the mounted source path and container user. The container must be able to read the files you expect to protect. For databases and applications with strict permissions, create exported backup files in a readable staging directory and snapshot that directory.
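For a database, the staging pattern looks like the sketch below. The database name, user, and paths are examples; the point is that Kopia snapshots the export directory under /srv rather than live datafiles:

```shell
# Export the database into a staging directory that the Kopia
# container already mounts read-only via /srv.
mkdir -p /srv/backups/postgres
pg_dump --format=custom --file=/srv/backups/postgres/app.dump app_db
```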
Object storage costs climb unexpectedly
Review retention, compression, and excluded paths. Large temporary directories, media processing caches, and container images can change frequently and defeat deduplication. Kopia policies can be applied per path, so tune high-churn sources separately.
Restores are slower than expected
Restore performance depends on object storage latency, repository cache, and file count. Keep a local cache directory with enough space, test from the region where recovery will occur, and document realistic recovery time objectives for each service.
Operators cannot sign in
Verify the server control username and password in the environment file, then restart the container. If you add external authentication in front of Caddy, document both the proxy login and Kopia login so responders know the complete path.
FAQ
Can Kopia replace database-native backups?
Not by itself. Use database-native dumps or snapshots for consistency, then back up those export files with Kopia. File-level snapshots of a busy database can be incomplete unless the database is quiesced or designed for that snapshot method.
Should the Kopia UI be public?
No. It should be reachable only through HTTPS, protected by strong credentials, and ideally restricted further with SSO, VPN, or IP allow lists. The UI can initiate sensitive operations, so treat it as an administrative surface.
How often should snapshots run?
Match the schedule to your recovery point objective. Many small teams start with hourly snapshots for critical application data and daily snapshots for lower-value paths, then adjust after measuring storage growth.
What happens if the server is compromised?
An attacker may be able to delete local configuration or use stored bucket credentials. Mitigate that risk with restricted IAM, bucket versioning, object-lock features where available, off-host credential storage, and regular repository checks from a separate account.
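A restricted IAM policy for the Kopia access key might look like the sketch below. The bucket name mirrors the KOPIA_BUCKET example from the .env file; real providers differ in how delete permissions interact with versioning and object lock, so review this against your provider's documentation:

```shell
# Write a bucket-scoped IAM policy limiting the Kopia key to
# listing the bucket and reading/writing/deleting its objects.
cat > kopia-iam-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::company-kopia-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::company-kopia-backups/*"
    }
  ]
}
EOF
```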
Can the same repository protect multiple servers?
Yes, but start carefully. Use clear hostnames, policies per host, and documented ownership. For larger environments, separate repositories can reduce blast radius and simplify permission boundaries.
How do we know retention is working?
Run maintenance, inspect snapshot lists over time, and compare repository size trends with expected data change rates. Retention settings do not remove all old data instantly; maintenance and object-store lifecycle behavior also matter.
Internal links
- Browse more SysBrix Guides for related self-hosted infrastructure patterns.
- Read SysBrix News for platform, security, and cloud updates that affect operations planning.
- Contact SysBrix if you want help turning this into a backup and recovery runbook.
Talk to us
If you want this pattern adapted to your servers, storage provider, compliance needs, and recovery objectives, SysBrix can help design the backup flow, harden access, and document the restore process.