Secrets management becomes painful when application tokens, database passwords, encryption keys, and CI credentials are spread across environment files and chat history. OpenBao, the community-driven fork of Vault, gives teams a central place to issue, rotate, audit, and revoke secrets without redesigning every application at once. This guide walks through a production-oriented single-node OpenBao deployment on Ubuntu using Docker Compose, Caddy for automatic HTTPS, and integrated Raft storage for durable local state.
The target reader is a small platform team that needs a dependable secrets service for internal applications, automation runners, and staging environments. The design is intentionally conservative: one hardened host, explicit filesystem permissions, TLS at the edge, encrypted backups, and a runbook that operators can follow during upgrades or recovery.
Architecture and flow overview
Traffic enters through Caddy on ports 80 and 443. Caddy obtains and renews certificates, then proxies HTTPS requests to OpenBao on the private Docker network. OpenBao stores encrypted data in its integrated Raft directory, exposes the API only behind Caddy, and writes audit logs to a mounted host path. Operators initialize and unseal the service manually, then configure auth methods, policies, and backup jobs.
- Client path: browser, CLI, or application → HTTPS → Caddy → OpenBao API.
- Storage path: OpenBao → integrated Raft files under /opt/openbao/data.
- Operations path: systemd-managed Docker Compose, encrypted snapshot backups, and audit logs.
Prerequisites
- Ubuntu 22.04 or 24.04 server with a static public IP.
- A DNS record such as bao.example.com pointing at the server.
- Root or sudo access, outbound internet access, and ports 80/443 open.
- A secure workstation for storing initial unseal keys and the first root token.
Step-by-step deployment
1) Install Docker, Compose, and baseline tools
Start with a clean package refresh and install only the tools needed for the runtime and backup workflow. Note that adding yourself to the docker group takes effect only after you log out and back in (or run newgrp docker); until then, prefix docker commands with sudo.
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg ufw jq age
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker "$USER"
docker version
docker compose version
2) Create directories and permissions
Keep configuration, data, audit logs, and backups separated. The OpenBao container runs as a non-root user, so the data directory must be writable by that UID.
sudo mkdir -p /opt/openbao/{config,data,logs,backups}
sudo chown -R 100:1000 /opt/openbao/data /opt/openbao/logs
sudo chmod 750 /opt/openbao /opt/openbao/config /opt/openbao/data /opt/openbao/logs /opt/openbao/backups
cd /opt/openbao
3) Write the OpenBao server configuration
This configuration enables integrated storage, binds the API inside the Compose network, and avoids exposing the cluster listener publicly. Replace the hostname before starting the stack.
sudo tee /opt/openbao/config/openbao.hcl >/dev/null <<'EOF'
ui = true
api_addr = "https://bao.example.com"
cluster_addr = "http://openbao:8201"
disable_mlock = true
storage "raft" {
path = "/openbao/data"
node_id = "openbao-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = true
}
log_level = "info"
EOF
sudo chmod 640 /opt/openbao/config/openbao.hcl
4) Create Docker Compose services
Compose keeps the deployment readable and restartable. Caddy is the only container with public ports; OpenBao stays on the internal application network. In production, pin a specific image tag instead of latest so upgrades happen deliberately.
sudo tee /opt/openbao/docker-compose.yml >/dev/null <<'EOF'
services:
openbao:
image: openbao/openbao:latest
container_name: openbao
command: server -config=/openbao/config/openbao.hcl
restart: unless-stopped
cap_add:
- IPC_LOCK
environment:
BAO_ADDR: http://127.0.0.1:8200
volumes:
- ./config:/openbao/config:ro
- ./data:/openbao/data
- ./logs:/openbao/logs
networks:
- openbao
caddy:
image: caddy:2-alpine
container_name: openbao-caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
networks:
- openbao
networks:
openbao:
driver: bridge
volumes:
caddy_data:
caddy_config:
EOF
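Before starting the stack, it can help to confirm the Compose file parses cleanly. A quick sanity check using Compose's built-in validation:

```shell
cd /opt/openbao
# --quiet suppresses the rendered config; a non-zero exit means a syntax or schema error
sudo docker compose config --quiet && echo "compose file OK"
```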
5) Configure Caddy and host firewall
Update the domain and email address, then allow only SSH, HTTP, and HTTPS at the host boundary.
sudo tee /opt/openbao/Caddyfile >/dev/null <<'EOF'
bao.example.com {
encode zstd gzip
reverse_proxy openbao:8200
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains"
X-Content-Type-Options "nosniff"
Referrer-Policy "no-referrer"
}
}
EOF
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
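Caddy can validate its own configuration before you rely on it for certificate issuance. A sketch that checks the Caddyfile with a throwaway container:

```shell
# Validate Caddyfile syntax without starting the proxy
sudo docker run --rm \
  -v /opt/openbao/Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy:2-alpine caddy validate --config /etc/caddy/Caddyfile
```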
6) Start, initialize, and unseal OpenBao
Bring the stack online, initialize once, and store the generated recovery material in your password manager or offline break-glass procedure. Never commit these values to Git. The bao commands below assume the OpenBao CLI is installed on the host or your workstation; if it is not, run each one inside the container instead, for example: sudo docker compose exec openbao bao operator init -key-shares=5 -key-threshold=3.
cd /opt/openbao
sudo docker compose up -d
sudo docker compose logs -f --tail=80 openbao
export BAO_ADDR="https://bao.example.com"
bao operator init -key-shares=5 -key-threshold=3
bao operator unseal
bao operator unseal
bao operator unseal
bao status
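The seal state can also be checked over HTTPS via the sys/health endpoint, which is useful for monitoring probes. The status codes below follow the upstream Vault-style API, which OpenBao preserves:

```shell
# 200 = initialized, unsealed, active; 501 = not initialized; 503 = sealed
curl -s -o /dev/null -w '%{http_code}\n' https://bao.example.com/v1/sys/health
# Human-readable summary of the same endpoint
curl -s https://bao.example.com/v1/sys/health | jq '{initialized, sealed, version}'
```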
7) Enable audit logs, policies, and a first secrets path
After login, enable file audit logging and create a minimal application policy. Applications should receive narrow tokens, not the root token.
bao login
bao audit enable file file_path=/openbao/logs/audit.log
bao secrets enable -path=apps kv-v2
bao kv put apps/payments DB_USER="payments" DB_PASSWORD="replace-with-generated-password"
cat > /tmp/payments-policy.hcl <<'EOF'
path "apps/data/payments" {
capabilities = ["read"]
}
EOF
bao policy write payments-read /tmp/payments-policy.hcl
bao token create -policy=payments-read -ttl=24h
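It is worth confirming that the scoped token really is narrow before handing it to an application. A sketch, with the token value from the previous step as a placeholder:

```shell
# The scoped token can read its assigned path...
BAO_TOKEN="<token-from-previous-step>" bao kv get apps/payments
# ...but any other path should fail with a permission denied error
BAO_TOKEN="<token-from-previous-step>" bao kv get apps/other
```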
8) Create encrypted Raft snapshots
Backups should be restorable, encrypted, and rehearsed. This example uses age; replace the recipient with your operations public key. It also assumes a dedicated backup token with snapshot permissions has been saved to /root/.openbao-backup-token (mode 600); do not reuse the root token for this.
sudo tee /usr/local/sbin/openbao-backup >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
export BAO_ADDR="https://bao.example.com"
export BAO_TOKEN="$(cat /root/.openbao-backup-token)"
stamp="$(date -u +%Y%m%dT%H%M%SZ)"
bao operator raft snapshot save "/tmp/openbao-${stamp}.snap"
age -r "age1replacewithyourrecipient" -o "/opt/openbao/backups/openbao-${stamp}.snap.age" "/tmp/openbao-${stamp}.snap"
rm -f "/tmp/openbao-${stamp}.snap"
find /opt/openbao/backups -type f -name '*.age' -mtime +30 -delete
EOF
sudo chmod 700 /usr/local/sbin/openbao-backup
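To run the script on a schedule, a simple cron.d entry is enough for a single host; the 02:15 time is an arbitrary choice, adjust to your maintenance window:

```shell
# Run the backup daily at 02:15 as root (cron.d entries include a user field)
echo '15 2 * * * root /usr/local/sbin/openbao-backup' | sudo tee /etc/cron.d/openbao-backup
sudo chmod 644 /etc/cron.d/openbao-backup
```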
Configuration and secrets handling best practices
Treat OpenBao as production infrastructure, not as a convenient container. Restrict root token use to initial setup, create short-lived operator tokens, and prefer identity-based authentication for applications. Keep unseal keys split across trusted people or a secure break-glass vault. Rotate application secrets on a schedule and whenever deployment logs suggest exposure.
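The rotation advice above can be scripted. A minimal sketch that writes a freshly generated value as a new KV version, reusing the apps/payments path from the earlier example:

```shell
# Generate a 24-byte random password (32 base64 characters)
NEW_PW="$(openssl rand -base64 24)"
# Writing to a kv-v2 path creates a new version; the old value remains retrievable
bao kv put apps/payments DB_USER="payments" DB_PASSWORD="$NEW_PW"
```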
Back up both the encrypted Raft snapshot and the operational runbook. A snapshot without unseal keys, policy context, DNS ownership, and restore instructions is not a recovery plan. Review audit logs for unexpected token creation, broad policy reads, and access outside normal deployment windows.
Verification checklist
- bao status shows initialized, unsealed, and a healthy Raft storage type.
- curl -I https://bao.example.com returns a valid HTTPS response.
- Caddy logs show successful certificate issuance and no repeated upstream failures.
- A non-root policy token can read only its assigned path.
- An encrypted snapshot exists and a restore has been tested on a disposable host.
Common issues and fixes
1) Caddy returns 502 Bad Gateway
Check that the OpenBao container is healthy and that both containers are on the same Compose network. Run docker compose ps and inspect recent OpenBao logs for configuration errors.
2) OpenBao is sealed after reboot
This is expected unless auto-unseal is configured. For a small deployment, document the manual unseal procedure and keep key holders available. For larger environments, evaluate cloud KMS or HSM-backed auto-unseal.
3) Audit logs grow quickly
Audit logs are valuable but can fill disks. Add log rotation, monitor free space, and ship logs to a central system if OpenBao becomes part of the critical path.
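For the audit log on this layout, a standard logrotate drop-in works; this sketch assumes the audit device reopens its file on SIGHUP, which matches upstream Vault behavior:

```shell
sudo tee /etc/logrotate.d/openbao >/dev/null <<'EOF'
/opt/openbao/logs/audit.log {
  daily
  rotate 14
  compress
  delaycompress
  missingok
  notifempty
  postrotate
    docker kill --signal=HUP openbao >/dev/null 2>&1 || true
  endscript
}
EOF
```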
4) Snapshot restore fails
Confirm the target OpenBao version, unseal key availability, and that the snapshot was not corrupted before encryption. Practice restores before an incident, not during one.
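A restore rehearsal on a disposable host can follow this shape; the age identity path and snapshot filename are placeholders for your own values:

```shell
# Decrypt the snapshot with the operations age identity
age -d -i /path/to/ops-identity.txt \
  -o /tmp/restore.snap /opt/openbao/backups/openbao-<stamp>.snap.age
# -force allows restoring a snapshot taken on a different cluster
bao operator raft snapshot restore -force /tmp/restore.snap
rm -f /tmp/restore.snap
```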
5) Applications still use old secrets
Secret rotation requires application reload behavior. Pair OpenBao changes with deployment hooks, sidecar refreshers, or explicit restarts so applications actually consume new values.
FAQ
Should I run OpenBao as a single node?
A single node is acceptable for small internal environments if downtime is tolerable and backups are tested. For high availability, run a multi-node Raft cluster across failure domains.
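If you later grow to a cluster, each additional node's storage stanza can point at an existing member. A sketch under assumed hostnames (openbao-1.internal is a placeholder for your first node's API address):

```hcl
storage "raft" {
  path    = "/openbao/data"
  node_id = "openbao-2"

  retry_join {
    leader_api_addr = "https://openbao-1.internal:8200"
  }
}
```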
Can I expose OpenBao directly without Caddy?
You can, but a reverse proxy simplifies TLS renewal, headers, and future access controls. Keep the OpenBao API off public container ports whenever possible.
Where should unseal keys be stored?
Split them across trusted operators or a formal break-glass process. Do not store every key in the same password vault entry as the root token.
How often should snapshots run?
Daily is a reasonable baseline for small teams, with additional snapshots before upgrades, policy migrations, and major application onboarding.
Do I need mlock?
Memory locking is useful, but containers often require extra host tuning. If you disable it, compensate with host hardening, swap policy review, and limited access to the Docker host.
When should I move to auto-unseal?
Move when manual unseal creates unacceptable recovery delays or when you operate multiple nodes. Cloud KMS or HSM integration reduces operator burden but adds dependency planning.
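As an illustration, a KMS-backed seal is a single stanza in the server configuration; the region and key alias below are placeholders, and migrating an existing cluster requires the documented seal migration procedure:

```hcl
seal "awskms" {
  region     = "eu-central-1"
  kms_key_id = "alias/openbao-unseal"
}
```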
How should CI systems authenticate?
Prefer short-lived tokens or a supported auth method tied to the CI identity. Avoid long-lived root-derived tokens stored as static CI variables.
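AppRole is one supported pattern for this. A sketch that binds a CI role to the payments-read policy from step 7 (the role name ci-payments is an arbitrary example):

```shell
# Enable AppRole and create a role with short-lived tokens
bao auth enable approle
bao write auth/approle/role/ci-payments \
  token_policies="payments-read" token_ttl=15m token_max_ttl=30m
# CI stores the role_id once; a fresh secret_id is issued per pipeline run
bao read auth/approle/role/ci-payments/role-id
bao write -f auth/approle/role/ci-payments/secret-id
```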
Internal links
- Deploy Authentik with Docker Compose + Traefik + PostgreSQL
- Deploy NetBird with Kubernetes + Helm + cert-manager
- Deploy Grafana + Prometheus with Docker Compose + Nginx
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.