Vaultwarden Bitwarden Self-Host: CI/CD Secrets, Infrastructure Automation, and Zero-Trust Credential Access

Learn how to use your self-hosted Vaultwarden as the single source of truth for all infrastructure secrets — injecting credentials into CI/CD pipelines, Terraform, Ansible, and Docker deployments without a single hardcoded secret anywhere in your stack.

Most teams deploy Vaultwarden for human password management — and stop there. That's leaving most of the value on the table. A self-hosted Vaultwarden instance is also a battle-tested secrets backend for your entire infrastructure: CI/CD pipelines that pull deployment credentials at runtime, Terraform configurations that read provider API keys from the vault, Ansible playbooks that fetch SSH passwords on demand, and Docker deployments where environment variables are injected from the vault rather than stored in config files. This guide covers using your Vaultwarden Bitwarden self-host as a zero-trust secrets layer for automation and infrastructure — no hardcoded credentials anywhere.

This guide assumes you already have a running Vaultwarden instance. If not, start with our Vaultwarden deployment guide for Docker setup and HTTPS configuration, then read our team and organizations guide for access control and security hardening before connecting automation to your vault.


Prerequisites

  • A running Vaultwarden instance with HTTPS — see our deployment guide
  • At least one organization with collections configured — see our team configuration guide
  • The Bitwarden CLI (bw) installed — npm install -g @bitwarden/cli
  • A service account user in Vaultwarden with access to the relevant credential collections
  • Basic familiarity with CI/CD pipelines, shell scripting, and environment variables

Verify the CLI is configured for your self-hosted instance:

# Point CLI at your self-hosted Vaultwarden
bw config server https://vault.yourdomain.com

# Verify it's hitting the right instance
bw config server
# Should output: https://vault.yourdomain.com

# Test authentication
bw login [email protected]
# Enter password when prompted
export BW_SESSION=$(bw unlock --raw)
bw sync --session $BW_SESSION
bw status --session $BW_SESSION | jq '{status: .status, userEmail: .userEmail}'

Setting Up a Service Account for Automation

Before connecting any automation to Vaultwarden, create a dedicated service account with the minimum necessary access. Never use a human admin account for automated secret fetches — the audit trail becomes meaningless and credential rotation is painful.

Creating the Service Account User

# Create the service account via Vaultwarden admin panel:
# Admin → Users → Invite User
# Email: [email protected]
# This creates a Vaultwarden user account for automation

# Or create via admin API:
curl -X POST https://vault.yourdomain.com/admin/users/invite \
  -H 'Content-Type: application/json' \
  -b "admin_token=YOUR_ADMIN_TOKEN" \
  -d '{"email": "[email protected]"}'

# The service account needs:
# - A strong machine-generated password (stored separately — not in Vaultwarden)
# - Organization membership with read-only access to the CI/CD collection
# - 2FA disabled (automated logins can't answer an interactive TOTP prompt)
# Note: disabling 2FA for service accounts is acceptable when
#   the account has read-only access to a dedicated, scoped collection only
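
A quick way to mint that machine password on any Linux host (a sketch; the 32-character length and alphanumeric character set are arbitrary choices):

```shell
# Generate a 32-character alphanumeric password from the kernel CSPRNG
SVC_PASSWORD=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
printf '%s\n' "$SVC_PASSWORD"
```

Store the generated value in your CI platform's secret store (GitHub Secrets, GitLab CI/CD Variables), never in Vaultwarden itself, since the account needs it to reach the vault in the first place.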

Creating a Dedicated CI/CD Credentials Collection

Create a separate collection specifically for secrets that automation will access. Isolating these from human-managed credentials gives you a clean audit trail and makes access revocation surgical:

# Recommended collection structure for automation:
# Organization: Acme Corp
# ├── CI/CD — Deployment Keys     ← ci-service has read access
# │   ├── AWS Production Keys
# │   ├── Docker Registry Token
# │   ├── Production DB Password
# │   └── Signing Certificates
# ├── CI/CD — Test Environment     ← ci-service has read access
# │   ├── Staging DB Password
# │   └── Test API Keys
# └── Infrastructure — Admin      ← ci-service has NO access
#     ├── Root SSH Keys
#     └── Admin Panel Passwords

# Add credentials to the CI/CD collection via the web vault:
# Organization → [Collection] → Add Item
# Type: Login
# Name: AWS Production Keys
# Username: AKIAIOSFODNN7EXAMPLE
# Password: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Notes: Region: us-east-1 | Account: 123456789012

Injecting Secrets into CI/CD Pipelines

GitHub Actions Integration

Store only the Vaultwarden service account credentials in GitHub Secrets — never the actual infrastructure secrets. The pipeline fetches everything else from the vault at runtime:

# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Bitwarden CLI
        run: npm install -g @bitwarden/cli

      - name: Fetch secrets from Vaultwarden
        env:
          BW_SERVER: ${{ secrets.VAULTWARDEN_URL }}
          BW_EMAIL: ${{ secrets.BW_SERVICE_EMAIL }}
          BW_PASSWORD: ${{ secrets.BW_SERVICE_PASSWORD }}
        run: |
          bw config server "$BW_SERVER"
          export BW_SESSION=$(bw login "$BW_EMAIL" --passwordenv BW_PASSWORD --raw)
          bw sync --session "$BW_SESSION" > /dev/null

          # Fetch credentials, mask them so GitHub redacts them from step
          # logs, then export them as environment variables for later steps
          AWS_SECRET=$(bw get password 'AWS Production Keys' --session $BW_SESSION)
          DB_PASS=$(bw get password 'Production DB Password' --session $BW_SESSION)
          echo "::add-mask::$AWS_SECRET"
          echo "::add-mask::$DB_PASS"
          echo "AWS_ACCESS_KEY_ID=$(bw get username 'AWS Production Keys' --session $BW_SESSION)" >> $GITHUB_ENV
          echo "AWS_SECRET_ACCESS_KEY=$AWS_SECRET" >> $GITHUB_ENV
          echo "DB_PASSWORD=$DB_PASS" >> $GITHUB_ENV

          bw logout > /dev/null

      - name: Deploy application
        run: |
          # AWS credentials are now available as environment variables
          aws ecs update-service --cluster prod --service myapp --force-new-deployment

      - name: Run database migrations
        run: |
          # DB_PASSWORD is available without ever touching GitHub Secrets
          DATABASE_URL="postgresql://app:${DB_PASSWORD}@prod-db.internal:5432/myapp" \
            npm run migrate
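One caveat with connection strings like the DATABASE_URL above: if a fetched password contains characters such as @, :, or /, it must be percent-encoded before being embedded in the URL. A minimal bash encoder (a sketch; jq's @uri filter is an alternative when jq is already on the runner):

```shell
# Percent-encode a string for safe use inside a connection URL
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:$i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;        # unreserved characters pass through
      *) out+=$(printf '%%%02X' "'$c") ;;  # everything else becomes %XX
    esac
  done
  printf '%s\n' "$out"
}

urlencode 'p@ss:w/rd'   # prints p%40ss%3Aw%2Frd
```

Then build the URL as `DATABASE_URL="postgresql://app:$(urlencode "$DB_PASSWORD")@db:5432/myapp"`.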

GitLab CI Integration

# .gitlab-ci.yml
stages:
  - fetch-secrets
  - deploy

variables:
  BW_SERVER: ${VAULTWARDEN_URL}  # Set in GitLab CI/CD Variables

.fetch_secrets: &fetch_secrets
  before_script:
    - npm install -g @bitwarden/cli --silent
    - bw config server "$BW_SERVER"
    - export BW_SESSION=$(bw login "$BW_SERVICE_EMAIL" --passwordenv BW_SERVICE_PASSWORD --raw)
    - bw sync --session "$BW_SESSION" > /dev/null
    - export AWS_ACCESS_KEY_ID=$(bw get username 'AWS Production Keys' --session $BW_SESSION)
    - export AWS_SECRET_ACCESS_KEY=$(bw get password 'AWS Production Keys' --session $BW_SESSION)
    - bw logout > /dev/null

deploy-production:
  stage: deploy
  <<: *fetch_secrets
  script:
    - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_URL
    - docker push $ECR_URL/myapp:$CI_COMMIT_SHA
  only:
    - main

Gitea Actions Integration

# .gitea/workflows/deploy.yml
# Identical to GitHub Actions syntax — Gitea Actions is GitHub-compatible
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Fetch secrets from Vaultwarden and deploy
        env:
          VAULTWARDEN_URL: ${{ secrets.VAULTWARDEN_URL }}
          BW_SERVICE_EMAIL: ${{ secrets.BW_SERVICE_EMAIL }}
          BW_SERVICE_PASSWORD: ${{ secrets.BW_SERVICE_PASSWORD }}
        run: |
          npm install -g @bitwarden/cli --silent
          bw config server "$VAULTWARDEN_URL"
          BW_SESSION=$(bw login "$BW_SERVICE_EMAIL" --passwordenv BW_SERVICE_PASSWORD --raw)

          DEPLOY_KEY=$(bw get password 'Deploy SSH Key' --session $BW_SESSION)
          echo "$DEPLOY_KEY" > /tmp/deploy_key
          chmod 600 /tmp/deploy_key

          bw logout > /dev/null

          ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no \
            [email protected] 'cd /app && git pull && pm2 restart all'

          rm /tmp/deploy_key

Infrastructure-as-Code Secrets: Terraform and Ansible

Terraform with Vaultwarden as Secrets Backend

Terraform configurations that hardcode cloud provider credentials are a security liability. Use a wrapper script that fetches credentials from Vaultwarden and passes them as environment variables at plan/apply time:

#!/bin/bash
# tf-run.sh — Wrapper for Terraform that injects secrets from Vaultwarden
# Usage: ./tf-run.sh plan
#        ./tf-run.sh apply
#        ./tf-run.sh destroy

set -euo pipefail

VW_SERVER="https://vault.yourdomain.com"
VW_EMAIL="[email protected]"

# Prompt for vault password securely (or read from a secured file)
read -s -p "Vault password: " VW_PASSWORD
echo
# --passwordenv reads from the child process environment, so export it
export VW_PASSWORD

# Authenticate and fetch secrets
bw config server "$VW_SERVER" > /dev/null
BW_SESSION=$(bw login "$VW_EMAIL" --passwordenv VW_PASSWORD --raw 2>/dev/null)
bw sync --session "$BW_SESSION" > /dev/null

export AWS_ACCESS_KEY_ID=$(bw get username 'Terraform AWS Keys' --session $BW_SESSION)
export AWS_SECRET_ACCESS_KEY=$(bw get password 'Terraform AWS Keys' --session $BW_SESSION)
export TF_VAR_db_password=$(bw get password 'RDS Production Password' --session $BW_SESSION)
export TF_VAR_cloudflare_api_token=$(bw get password 'Cloudflare API Token' --session $BW_SESSION)

bw logout > /dev/null
unset VW_PASSWORD

# Run terraform with injected credentials
terraform "$@"

# Belt and suspenders: the script's environment disappears at exit anyway,
# but clear the variables explicitly in case this file is ever sourced
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY TF_VAR_db_password TF_VAR_cloudflare_api_token
echo "Credentials cleared from environment"
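On the Terraform side, the injected TF_VAR_* values map onto declared variables. Marking them sensitive keeps them out of plan output (a minimal sketch matching the wrapper above):

```hcl
# variables.tf: declare the variables the wrapper injects via TF_VAR_*
variable "db_password" {
  type      = string
  sensitive = true  # redacted from plan/apply output
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}
```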

Ansible Vault Alternative: Fetching Secrets at Playbook Runtime

Instead of using Ansible Vault files (which require managing yet another encryption key), fetch secrets from Vaultwarden at playbook execution time using a lookup plugin or a pre-task:

#!/bin/bash
# ansible-run.sh — Runs an Ansible playbook with secrets from Vaultwarden
# Usage: ./ansible-run.sh site.yml -i inventory/production

set -euo pipefail

# Fetch secrets from Vaultwarden
bw config server https://vault.yourdomain.com > /dev/null
BW_SESSION=$(bw login [email protected] --passwordenv BW_ANSIBLE_PASSWORD --raw 2>/dev/null)
bw sync --session "$BW_SESSION" > /dev/null

# Create a temporary vars file with fetched secrets; the EXIT trap
# guarantees cleanup even if the playbook fails under set -e
TMP_VARS=$(mktemp /tmp/ansible-secrets-XXXXXX.yml)
trap 'rm -f "$TMP_VARS"' EXIT
cat > "$TMP_VARS" << EOF
---
deploy_db_password: "$(bw get password 'Production DB Password' --session $BW_SESSION)"
redis_password: "$(bw get password 'Redis Production' --session $BW_SESSION)"
ssh_deploy_key: |
$(bw get notes 'Deploy SSH Private Key' --session $BW_SESSION | sed 's/^/  /')
EOF

bw logout > /dev/null

# Run playbook with injected vars
ansible-playbook "$@" -e "@${TMP_VARS}"

# Clean up immediately after
rm "$TMP_VARS"
echo "Temporary secrets file removed"

Docker and Kubernetes Secret Injection

Injecting Secrets into Docker Containers at Runtime

Rather than storing secrets in .env files or Docker Compose configs, fetch them from Vaultwarden and pass them directly to container startup:

#!/bin/bash
# deploy-container.sh — Deploy a Docker container with secrets from Vaultwarden

set -euo pipefail

# Authenticate to Vaultwarden
bw config server https://vault.yourdomain.com > /dev/null
BW_SESSION=$(bw login [email protected] \
  --passwordenv BW_DEPLOY_PASSWORD --raw 2>/dev/null)
bw sync --session "$BW_SESSION" > /dev/null

# Fetch all required secrets
DB_PASSWORD=$(bw get password 'Production DB Password' --session $BW_SESSION)
REDIS_PASSWORD=$(bw get password 'Redis Production' --session $BW_SESSION)
JWT_SECRET=$(bw get password 'JWT Secret Key' --session $BW_SESSION)
SMTP_PASSWORD=$(bw get password 'SMTP Relay Password' --session $BW_SESSION)

bw logout > /dev/null

# Stop and remove old container
docker stop myapp 2>/dev/null || true
docker rm myapp 2>/dev/null || true

# Start new container with secrets injected as env vars
# Secrets only exist in the running container's environment
# They're never written to disk or stored in compose files
docker run -d \
  --name myapp \
  --restart unless-stopped \
  -e "DATABASE_URL=postgresql://app:${DB_PASSWORD}@db:5432/myapp" \
  -e "REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379" \
  -e "JWT_SECRET=${JWT_SECRET}" \
  -e "SMTP_PASSWORD=${SMTP_PASSWORD}" \
  --network app_network \
  myapp:latest

# Clear secrets from shell variables immediately
unset DB_PASSWORD REDIS_PASSWORD JWT_SECRET SMTP_PASSWORD

echo "Container deployed. Secrets cleared."
docker logs myapp --tail 10

Kubernetes Secret Creation from Vaultwarden

For Kubernetes deployments, create secrets directly from Vaultwarden content rather than maintaining separate secret manifests or using sealed-secrets:

#!/bin/bash
# sync-k8s-secrets.sh — Sync secrets from Vaultwarden to Kubernetes
# Run this as part of your deployment pipeline before helm upgrade

set -euo pipefail

NAMESPACE="production"

# Authenticate to Vaultwarden
bw config server https://vault.yourdomain.com > /dev/null
BW_SESSION=$(bw login [email protected] \
  --passwordenv BW_K8S_PASSWORD --raw 2>/dev/null)
bw sync --session "$BW_SESSION" > /dev/null

# Create/update Kubernetes secret from Vaultwarden
kubectl create secret generic app-secrets \
  --namespace="$NAMESPACE" \
  --from-literal=db-password="$(bw get password 'Production DB Password' --session $BW_SESSION)" \
  --from-literal=redis-password="$(bw get password 'Redis Production' --session $BW_SESSION)" \
  --from-literal=jwt-secret="$(bw get password 'JWT Secret Key' --session $BW_SESSION)" \
  --from-literal=stripe-key="$(bw get password 'Stripe Production Key' --session $BW_SESSION)" \
  --save-config \
  --dry-run=client \
  -o yaml | kubectl apply -f -

bw logout > /dev/null

echo "Kubernetes secrets synced from Vaultwarden"
kubectl get secret app-secrets -n "$NAMESPACE" -o jsonpath='{.metadata.name}'
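Workloads then consume the synced secret by reference. The secret name and keys below match the script above (the container image is illustrative):

```yaml
# Deployment spec fragment: pull values from the synced app-secrets object
containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: app-secrets
            key: db-password
      - name: JWT_SECRET
        valueFrom:
          secretKeyRef:
            name: app-secrets
            key: jwt-secret
```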

Secret Rotation Automation

Automated Credential Rotation Pipeline

The real operational value of Vaultwarden as an infrastructure secrets backend emerges when you automate rotation. Here's a complete rotation pipeline for a database password:

#!/bin/bash
# rotate-db-password.sh
# Rotates a PostgreSQL password and updates Vaultwarden + all deployments

set -euo pipefail

DB_HOST="prod-db.internal"
DB_USER="appuser"
DB_NAME="myapp"
VW_ITEM_NAME="Production DB Password"

# This script expects an unlocked session in the environment:
#   export BW_SESSION=$(bw unlock --raw)
: "${BW_SESSION:?BW_SESSION is not set; run 'bw unlock --raw' first}"

echo "[1/5] Generating new password..."
NEW_PASSWORD=$(openssl rand -base64 32 | tr -d '/+=\n' | head -c 32)

echo "[2/5] Rotating password in PostgreSQL..."
PSQL_ADMIN_PASS=$(bw get password 'PostgreSQL Admin' --session $BW_SESSION)
PGPASSWORD="$PSQL_ADMIN_PASS" psql \
  -h "$DB_HOST" -U postgres -d "$DB_NAME" \
  -c "ALTER USER ${DB_USER} WITH PASSWORD '${NEW_PASSWORD}';"

echo "[3/5] Updating password in Vaultwarden..."
# Get item ID
ITEM_ID=$(bw get item "$VW_ITEM_NAME" --session $BW_SESSION | jq -r '.id')

# Update the item
bw get item "$ITEM_ID" --session $BW_SESSION | \
  jq --arg pass "$NEW_PASSWORD" '.login.password = $pass' | \
  bw encode | \
  bw edit item "$ITEM_ID" --session $BW_SESSION > /dev/null

echo "[4/5] Redeploying application with new credentials..."
./deploy-container.sh  # The Docker deployment script from the section above

echo "[5/5] Verifying application health..."
sleep 10
HEALTH=$(curl -sf https://app.yourdomain.com/health | jq -r '.status')
if [ "$HEALTH" != "ok" ]; then
  echo "ERROR: Health check failed after rotation!"
  exit 1
fi

echo "Rotation complete. New password active and verified."
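Rotation only helps if it actually runs on a schedule. A crontab entry is enough (the script path and log location are assumptions, and the calling environment must provide the unlocked BW_SESSION):

```text
# crontab entry for the deploy user: rotate at 03:10 on the 1st of each month
10 3 1 * * /opt/scripts/rotate-db-password.sh >> /var/log/secret-rotation.log 2>&1
```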

Tips, Gotchas, and Troubleshooting

CLI Session Expires Mid-Pipeline

A Bitwarden CLI session can become invalid partway through a long pipeline, for example when a parallel job logs the same service account out or locks the vault. Rather than holding one session open across every step, fetch everything you need upfront and log out immediately:

# Pattern: fetch all secrets at the start of the pipeline into variables
# Then log out immediately — don't hold the session open

bw config server "$VW_SERVER" > /dev/null
BW_SESSION=$(bw login "$VW_EMAIL" --passwordenv BW_PASSWORD --raw 2>/dev/null)
bw sync --session "$BW_SESSION" > /dev/null

# Fetch everything you need upfront
DB_PASS=$(bw get password 'DB Password' --session $BW_SESSION)
API_KEY=$(bw get password 'API Key' --session $BW_SESSION)
SMTP_PASS=$(bw get password 'SMTP Password' --session $BW_SESSION)

# Log out immediately — session no longer needed
bw logout > /dev/null
unset BW_SESSION BW_PASSWORD

# The fetched values are now in shell variables
# Use them for the rest of the pipeline without a vault connection

Audit Log Shows Unexpected Secret Accesses

# Monitor Vaultwarden logs for access patterns:
docker logs vaultwarden --since 24h | grep -i "cipher"
docker logs vaultwarden --since 24h | grep "ci-service@"

# Common causes of unexpected accesses:
# 1. Pipeline retrying after failure — check CI logs for retry counts
# 2. Multiple pipeline jobs running in parallel — each authenticates separately
# 3. Cron job running more frequently than expected
# 4. Compromised service account — change password immediately

# Check which items the service account accessed:
docker logs vaultwarden --since 24h 2>&1 | \
  grep -E "(ci-service|ansible-service|tf-service)" | \
  grep -v "sync" | \
  tail -50

bw CLI Returns Wrong Item When Multiple Items Have Similar Names

# Problem: bw get item searches by name, and when the search term matches
# more than one item the CLI errors out ("More than one result was found")
# instead of returning the one you meant

# Solution 1: Use item IDs instead of names (most reliable)
ITEM_ID=$(bw list items --session $BW_SESSION | \
  jq -r '.[] | select(.name == "Production DB Password" and (.collectionIds | index("COLLECTION_ID"))) | .id')
DB_PASS=$(bw get password "$ITEM_ID" --session $BW_SESSION)

# Solution 2: Use unique, unambiguous item names
# "Production DB Password" vs "Staging DB Password" is fine
# "DB Password" with multiple matches is not

# Solution 3: Filter by collection
bw list items --collectionid YOUR_COLLECTION_ID --session $BW_SESSION | \
  jq -r '.[] | select(.name == "DB Password") | .login.password'

Pro Tips

  • Name secrets with environment prefixes — use Production DB Password, Staging DB Password, Dev DB Password as a consistent naming convention. Scripts become readable and the right credential is unambiguous.
  • Use Vaultwarden notes fields for structured metadata — store related configuration alongside secrets. The AWS key item's notes can include the account ID, region, and what IAM policies are attached. The note is encrypted alongside the credential.
  • Create a secrets inventory document — maintain a non-sensitive document (in Git or your wiki) listing which Vaultwarden items each automation script depends on. When a new team member sets up the service account, they know exactly what permissions are needed.
  • Set up alerting for vault unavailability — if Vaultwarden goes down, your CI/CD pipelines stop working. Monitor it with Uptime Kuma and make sure the on-call team is paged before pipelines start failing in production.
  • Rotate service account passwords quarterly — even machine credentials need rotation. Schedule it, document the rotation procedure, and test that pipelines still work after rotation before marking it complete.
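
The inventory from the third tip can be a plain YAML file committed next to the scripts. The script and item names below mirror the examples in this guide:

```yaml
# secrets-inventory.yml: non-sensitive map of which Vaultwarden items
# each automation entry point reads (never store secret values here)
deploy-container.sh:
  collection: "CI/CD — Deployment Keys"
  items: [Production DB Password, Redis Production, JWT Secret Key, SMTP Relay Password]
tf-run.sh:
  collection: "CI/CD — Deployment Keys"
  items: [Terraform AWS Keys, RDS Production Password, Cloudflare API Token]
sync-k8s-secrets.sh:
  collection: "CI/CD — Deployment Keys"
  items: [Production DB Password, Redis Production, JWT Secret Key, Stripe Production Key]
```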

Wrapping Up

A Vaultwarden Bitwarden self-host that only manages human passwords is useful. One that's wired into your CI/CD pipelines, Terraform runs, Ansible playbooks, and container deployments is transformative. Every secret in your infrastructure becomes auditable, rotatable, and centrally managed — without any credentials hardcoded in config files, environment variable files, or CI/CD platform secrets that different team members can access and copy.

The pattern is consistent across every tool: authenticate to Vaultwarden at the start of the automated process, fetch exactly what you need, log out immediately, use the values, and clear them when done. The vault stays closed except for exactly the window needed to retrieve a specific credential.

Build on this foundation from our previous guides — deployment and initial setup and team configuration and security hardening — and you have a complete, enterprise-grade secrets management system running entirely on infrastructure you own.


Need a Zero-Trust Secrets Architecture for Your Infrastructure?

Designing a secrets management system that covers human credentials, CI/CD pipelines, infrastructure-as-code, and container deployments — with proper audit trails, rotation policies, and access controls — is an architecture project. The sysbrix team designs and implements zero-trust secrets infrastructure for engineering teams that are done with hardcoded credentials and ad-hoc secret management.

Talk to Us →