Vaultwarden Bitwarden Self-Host: High Availability, SSO Integration, Migration from Cloud Password Managers, and Compliance Auditing

Complete your Vaultwarden production deployment with a highly available multi-instance setup, SSO via OIDC, automated migration from LastPass and Bitwarden Cloud, and the compliance-grade audit logging your security team actually needs.

The first three guides in this series covered the complete lifecycle: deployment and basic configuration, team organizations and security hardening, and CI/CD secrets automation and zero-trust credential access. This final guide covers the operational maturity requirements that enterprise deployments demand: a highly available multi-instance setup that survives node failures, SSO via OpenID Connect so users authenticate with their existing corporate identity, systematic migration from Bitwarden Cloud and LastPass without losing credentials, and compliance-grade audit logging that satisfies your security team's requirements.


Prerequisites

  • A running Vaultwarden instance — see our deployment guide
  • Team organizations configured — see our team configuration guide
  • For HA: at least two servers and a PostgreSQL instance accessible from both
  • For SSO: a running OIDC provider (Keycloak, Authentik, Google Workspace, Azure AD)
  • For migration: export files from your current password manager
  • An S3-compatible bucket or remote storage for synchronized attachment storage

Confirm your current Vaultwarden state before making changes:

# Check current Vaultwarden version and config:
docker exec vaultwarden cat /data/config.json 2>/dev/null || echo "No config.json"
docker logs vaultwarden --tail 5

# Count current users and organizations:
docker exec vaultwarden sqlite3 /data/db.sqlite3 \
  "SELECT 'Users: ' || COUNT(*) FROM users UNION ALL SELECT 'Orgs: ' || COUNT(*) FROM organizations;"

# Verify HTTPS is working:
curl -s https://vault.yourdomain.com/api/config | jq '{version: .version, vault: .environment.vault}'

# Create a full backup before making any changes:
docker exec vaultwarden sqlite3 /data/db.sqlite3 ".backup '/data/pre-upgrade-$(date +%Y%m%d).db'"
docker cp vaultwarden:/data/pre-upgrade-$(date +%Y%m%d).db ~/vaultwarden-backup-$(date +%Y%m%d).db
echo "Backup created"

High Availability: Running Multiple Vaultwarden Instances

A single Vaultwarden instance is a single point of failure for every credential your team depends on. If the server is rebooted during an active deployment, nobody can look up the database password. HA deployment eliminates this by running multiple instances sharing a common PostgreSQL backend with a load balancer routing traffic between them.

Architecture Overview

Vaultwarden's HA architecture has three layers:

  • Load balancer — distributes requests across instances, performs health checks, removes unhealthy nodes
  • Vaultwarden instances — two or more stateless application containers sharing common storage
  • Shared storage — PostgreSQL for vault data, S3-compatible storage for file attachments
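
Before wiring up the load balancer, it helps to see the failure-handling model it applies: a node is pulled from rotation after a run of failed health checks and readmitted after a cooldown. A minimal Python sketch (the `Node` class and `pick_node` helper are illustrative, not part of nginx or Vaultwarden) of `max_fails`/`fail_timeout`-style accounting:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Tracks one instance's health, mimicking nginx's max_fails /
    fail_timeout accounting (simplified illustration)."""
    name: str
    max_fails: int = 3
    fail_timeout: float = 30.0  # seconds a failed node stays out of rotation
    fails: int = 0
    down_until: float = 0.0

    def record(self, ok: bool, now: float) -> None:
        if ok:
            self.fails = 0
        else:
            self.fails += 1
            if self.fails >= self.max_fails:
                self.down_until = now + self.fail_timeout

    def available(self, now: float) -> bool:
        return now >= self.down_until

def pick_node(nodes, now):
    """Return the first available node, or None if every node is down."""
    return next((n for n in nodes if n.available(now)), None)

nodes = [Node("node1"), Node("node2")]
for _ in range(3):                       # three failed health checks on node1
    nodes[0].record(ok=False, now=0.0)
print(pick_node(nodes, now=1.0).name)    # node2: node1 is cooling down
print(pick_node(nodes, now=31.0).name)   # node1: readmitted after fail_timeout
```

The same accounting is what the `max_fails=3 fail_timeout=30s` parameters in the nginx config express declaratively.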

Switching from SQLite to PostgreSQL

# Migrate from SQLite to PostgreSQL (required for HA)
# This is a one-way migration — backup SQLite first!

# Step 1: Create PostgreSQL database:
docker exec -it postgres psql -U postgres \
  -c "CREATE DATABASE vaultwarden OWNER vaultwarden;"

# Step 2: Install pgloader (Vaultwarden has no built-in SQLite-to-PostgreSQL
# migration command; pgloader is the commonly used tool for this):
sudo apt-get install -y pgloader

# Step 3: Stop Vaultwarden before migration:
docker stop vaultwarden

# Step 4: Migrate SQLite to PostgreSQL with pgloader:
# Copy the database out of the container so pgloader can read it directly
docker cp vaultwarden:/data/db.sqlite3 ./db.sqlite3
pgloader ./db.sqlite3 \
  "postgresql://vaultwarden:${PG_PASSWORD}@localhost:5432/vaultwarden"
# pgloader infers the target schema from SQLite; spot-check boolean and
# timestamp columns afterwards before going live

# Step 5: Update Vaultwarden to use PostgreSQL:
# Update .env file:
# Remove or comment out:
# DATABASE_URL=
# Add:
DATABASE_URL=postgresql://vaultwarden:${PG_PASSWORD}@postgres:5432/vaultwarden

# Step 6: Restart and verify:
docker compose up -d vaultwarden
docker logs vaultwarden --tail 20 | grep -iE '(postgres|database|migrat|error)'

# Verify data migrated:
docker exec postgres psql -U vaultwarden vaultwarden \
  -c "SELECT (SELECT COUNT(*) FROM users) AS users, (SELECT COUNT(*) FROM ciphers) AS ciphers;"
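
After migrating, it is worth comparing row counts table by table rather than eyeballing two queries. A small sketch with hypothetical helpers; the in-memory SQLite database stands in for the real pre-migration file, and the dict stands in for what psql reports from PostgreSQL:

```python
import sqlite3

def table_counts(conn, tables):
    """Return {table: row_count} for the given tables."""
    cur = conn.cursor()
    return {t: cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}  # table names come from a fixed list, not user input

def diff_counts(source, target):
    """Tables whose row counts differ between the two snapshots."""
    return {t: (source[t], target.get(t))
            for t in source if source[t] != target.get(t)}

# In-memory SQLite stands in for the real pre-migration database:
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER);   INSERT INTO users VALUES (1), (2);
    CREATE TABLE ciphers (id INTEGER); INSERT INTO ciphers VALUES (1), (2), (3);
""")
src = table_counts(conn, ["users", "ciphers"])
pg = {"users": 2, "ciphers": 3}        # what psql reported after migration
print(diff_counts(src, pg))            # {} means every count matches
```

An empty diff is your green light to point Vaultwarden at PostgreSQL; any mismatch means re-running pgloader before cutting over.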

Multi-Instance Docker Compose for HA

# docker-compose.yml for HA Vaultwarden
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: vaultwarden_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: vaultwarden
      POSTGRES_USER: vaultwarden
      POSTGRES_PASSWORD: ${PG_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - vaultwarden_net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U vaultwarden"]
      interval: 10s
      retries: 5

  vaultwarden_1:
    image: vaultwarden/server:latest
    container_name: vaultwarden_node1
    restart: unless-stopped
    environment:
      DATABASE_URL: postgresql://vaultwarden:${PG_PASSWORD}@postgres:5432/vaultwarden
      DOMAIN: https://vault.yourdomain.com
      ADMIN_TOKEN: ${ADMIN_TOKEN}
      WEBSOCKET_ENABLED: "true"
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USERNAME: ${SMTP_USERNAME}
      SMTP_PASSWORD: ${SMTP_PASSWORD}
      SMTP_FROM: ${SMTP_FROM}
      SIGNUPS_ALLOWED: "false"
      # Attachments stored in S3 (shared between instances)
      USE_SYSLOG: "true"
      LOG_LEVEL: warn
      # Point the RSA signing keys into the shared volume mounted below:
      RSA_KEY_FILENAME: /data/keys/rsa_key
    volumes:
      # Shared RSA keys — MUST be identical on all instances
      - vaultwarden_keys:/data/keys
      # Attachments via S3 mount or NFS (NOT local filesystem in HA)
    ports:
      - "8081:80"
      - "3013:3012"  # WebSocket port
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - vaultwarden_net

  vaultwarden_2:
    image: vaultwarden/server:latest
    container_name: vaultwarden_node2
    restart: unless-stopped
    environment:
      # Identical to node1 — share the same config
      DATABASE_URL: postgresql://vaultwarden:${PG_PASSWORD}@postgres:5432/vaultwarden
      DOMAIN: https://vault.yourdomain.com
      ADMIN_TOKEN: ${ADMIN_TOKEN}
      WEBSOCKET_ENABLED: "true"
      SMTP_HOST: ${SMTP_HOST}
      SMTP_PORT: ${SMTP_PORT}
      SMTP_USERNAME: ${SMTP_USERNAME}
      SMTP_PASSWORD: ${SMTP_PASSWORD}
      SMTP_FROM: ${SMTP_FROM}
      SIGNUPS_ALLOWED: "false"
      USE_SYSLOG: "true"
      LOG_LEVEL: warn
      RSA_KEY_FILENAME: /data/keys/rsa_key  # same path as node1
    volumes:
      - vaultwarden_keys:/data/keys  # SAME volume as node1 — critical!
    ports:
      - "8082:80"
      - "3014:3012"
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - vaultwarden_net

volumes:
  postgres_data:
  vaultwarden_keys:  # Shared RSA keys — both instances must use identical keys

networks:
  vaultwarden_net:

Nginx Load Balancer for HA

# /etc/nginx/sites-available/vaultwarden-ha

upstream vaultwarden {
    # ip_hash ensures session stickiness: the same client IP always hits
    # the same node, which matters for WebSocket connections.
    # Note: nginx rejects the "backup" parameter when ip_hash is active;
    # max_fails/fail_timeout already takes a dead node out of rotation.
    ip_hash;

    server localhost:8081 max_fails=3 fail_timeout=30s;
    server localhost:8082 max_fails=3 fail_timeout=30s;
    keepalive 16;
}

# Separate upstream for WebSocket connections:
upstream vaultwarden_ws {
    ip_hash;
    server localhost:3013 max_fails=3 fail_timeout=30s;
    server localhost:3014 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl http2;
    server_name vault.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/vault.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vault.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Security headers:
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options SAMEORIGIN always;

    # Health check endpoint — returns node status:
    location /alive {
        proxy_pass http://vaultwarden;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        access_log off;
    }

    # WebSocket for real-time sync:
    location /notifications/hub {
        proxy_pass http://vaultwarden_ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /notifications/hub/negotiate {
        proxy_pass http://vaultwarden;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://vaultwarden;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90s;
    }
}

# Test HA failover:
# docker stop vaultwarden_node1
# curl https://vault.yourdomain.com/alive
# Should still return 200 — served by node2
# docker start vaultwarden_node1
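
To see why ip_hash gives stickiness, note that for IPv4 clients nginx keys the hash on the first three octets of the address, so every client in the same /24 lands on the same upstream. A rough Python illustration (`ip_hash_pick` is a hypothetical helper and uses CRC32, not nginx's actual hash function):

```python
import zlib

def ip_hash_pick(client_ip: str, servers):
    """Approximates nginx's ip_hash: IPv4 clients are keyed on the first
    three octets, so a whole /24 sticks to one upstream. Illustration
    only; nginx's real hash function differs."""
    key = ".".join(client_ip.split(".")[:3])
    return servers[zlib.crc32(key.encode()) % len(servers)]

servers = ["localhost:8081", "localhost:8082"]
a = ip_hash_pick("203.0.113.10", servers)
b = ip_hash_pick("203.0.113.99", servers)   # same /24 as above
print(a == b)   # True: same subnet always maps to the same node
```

The flip side: many clients behind one corporate NAT all hash to the same node, so expect uneven load in office-heavy deployments.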

SSO Integration with OpenID Connect

Recent Vaultwarden releases ship OpenID Connect (OIDC) SSO support (initially flagged as experimental), which lets users authenticate with their existing corporate identity provider: Keycloak, Authentik, Google Workspace, Azure AD, or any OIDC-compatible provider. Users click "Log in with SSO" and authenticate through your IdP without creating a separate Vaultwarden password.

Configuring OIDC in Vaultwarden

# Enable SSO in your .env file:
# Requires a Vaultwarden release with OIDC SSO support

SSO_ENABLED=true

# OIDC configuration, adjust for your provider:
# For Keycloak:
SSO_CLIENT_ID=vaultwarden
SSO_CLIENT_SECRET=your-keycloak-client-secret
SSO_AUTHORITY=https://auth.yourdomain.com/realms/company

# For Authentik:
# SSO_AUTHORITY=https://authentik.yourdomain.com/application/o/vaultwarden/

# For Google Workspace:
# SSO_AUTHORITY=https://accounts.google.com
# SSO_SCOPES="openid email profile"  # Google needs explicit scopes

# For Azure AD:
# SSO_AUTHORITY=https://login.microsoftonline.com/YOUR_TENANT_ID/v2.0

# Optional: Force SSO for all users (disable password login)
# SSO_ONLY=true  # WARNING: locks out users without IdP access

# Allow both SSO and password login during transition:
SSO_ONLY=false

# Redirect URL to configure in your OIDC provider:
# https://vault.yourdomain.com/identity/connect/oidc-signin

# Restart Vaultwarden to apply:
docker compose up -d --force-recreate vaultwarden
docker logs vaultwarden --tail 20 | grep -i sso

Configuring the OIDC Client in Keycloak

# Create a Keycloak client for Vaultwarden:
# Keycloak Admin → Clients → Create Client

# Client settings:
# Client ID: vaultwarden
# Client Type: OpenID Connect
# Client Authentication: ON (confidential)
# Valid Redirect URIs: https://vault.yourdomain.com/identity/connect/oidc-signin
# Web Origins: https://vault.yourdomain.com

# After creating, go to Credentials tab:
# Copy the Client Secret → set as SSO_CLIENT_SECRET in Vaultwarden

# Configure via kcadm.sh (authenticate against the master realm first,
# then create the client in the target realm):
docker exec keycloak /opt/keycloak/bin/kcadm.sh config credentials \
  --server https://auth.yourdomain.com \
  --realm master --user admin --password "${KEYCLOAK_ADMIN_PASSWORD}"

docker exec keycloak /opt/keycloak/bin/kcadm.sh create clients -r company \
  -s clientId=vaultwarden \
  -s enabled=true \
  -s clientAuthenticatorType=client-secret \
  -s 'redirectUris=["https://vault.yourdomain.com/identity/connect/oidc-signin"]' \
  -s 'webOrigins=["https://vault.yourdomain.com"]' \
  -s standardFlowEnabled=true \
  -s publicClient=false

# Verify SSO is working by checking the discovery document:
curl -s https://auth.yourdomain.com/realms/company/.well-known/openid-configuration | \
  jq '{issuer: .issuer, auth_endpoint: .authorization_endpoint}'

# Test SSO login from the browser:
# Open https://vault.yourdomain.com
# Click "Enterprise Single Sign-On"
# Enter your organization identifier (configured in Vaultwarden admin)
# Should redirect to Keycloak login page
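
Under the hood, the "Enterprise Single Sign-On" button sends the browser to the provider's authorization endpoint with a handful of query parameters. A sketch of how that URL is assembled (`authorization_url` is an illustrative helper; the `/protocol/openid-connect/auth` path is Keycloak-specific, and other providers publish theirs as `authorization_endpoint` in the discovery document):

```python
from urllib.parse import urlencode

def authorization_url(authority: str, client_id: str, redirect_uri: str,
                      scopes=("openid", "email", "profile")) -> str:
    """Assemble an OIDC authorization request. The path appended here is
    Keycloak's; read authorization_endpoint from the discovery document
    for other providers."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return f"{authority.rstrip('/')}/protocol/openid-connect/auth?{urlencode(params)}"

url = authorization_url(
    "https://auth.yourdomain.com/realms/company",
    "vaultwarden",
    "https://vault.yourdomain.com/identity/connect/oidc-signin",
)
print(url)
```

Note that `urlencode` percent-encodes the redirect URI; when debugging "invalid redirect_uri" errors, compare the decoded value against what is registered in the IdP, character for character.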

Migrating from Cloud Password Managers

Moving your team from Bitwarden Cloud, LastPass, or 1Password to self-hosted Vaultwarden requires careful planning. The migration window is brief but critical — credentials need to transfer without losing history, without exposing anything in plaintext for longer than necessary, and without disrupting people's daily work.

Migrating from Bitwarden Cloud

#!/bin/bash
# migrate-from-bitwarden-cloud.sh
# Automates the Bitwarden Cloud → self-hosted Vaultwarden migration
# Run on each user's machine

set -euo pipefail

# Configuration:
SELF_HOSTED_URL="https://vault.yourdomain.com"
CLOUD_EMAIL="[email protected]"

echo "=== Bitwarden Cloud to Self-Hosted Migration ==="
echo "This will migrate your vault to: $SELF_HOSTED_URL"
echo ""

# Step 1: Install/update Bitwarden CLI
npm install -g @bitwarden/cli --silent 2>/dev/null || true

# Step 2: Log in to Bitwarden Cloud and export
echo "[1/4] Exporting from Bitwarden Cloud..."
bw config server https://vault.bitwarden.com
bw logout 2>/dev/null || true
bw login "$CLOUD_EMAIL"

export BW_SESSION=$(bw unlock --raw)
bw sync --session "$BW_SESSION" > /dev/null

# Export password-protected encrypted JSON. The default account-encrypted
# export can only be re-imported into the SAME account, so it would not
# work on the new server. EXPORT_PASSWORD is chosen by the user here.
bw export --format encrypted_json --password "${EXPORT_PASSWORD:?set EXPORT_PASSWORD first}" \
  --session "$BW_SESSION" \
  --output "./vault_export_$(date +%Y%m%d).json"

echo "Export saved. Log out from cloud..."
bw logout

# Step 3: Connect to self-hosted instance
echo "[2/4] Connecting to self-hosted Vaultwarden..."
bw config server "$SELF_HOSTED_URL"
bw login "$CLOUD_EMAIL"  # Create account on self-hosted first
export BW_SESSION=$(bw unlock --raw)

# Step 4: Import the exported vault (prompts for the export password)
echo "[3/4] Importing vault to self-hosted..."
bw import bitwardenpasswordprotectedjson "./vault_export_$(date +%Y%m%d).json" \
  --session "$BW_SESSION"

# Step 5: Verify
echo "[4/4] Verifying migration..."
ITEM_COUNT=$(bw list items --session "$BW_SESSION" | jq 'length')
echo "Migrated $ITEM_COUNT items to self-hosted Vaultwarden"

# Clean up the plaintext export immediately:
rm -f "./vault_export_$(date +%Y%m%d).json"
echo "Export file deleted."

bw logout
echo "Migration complete! Please log in to $SELF_HOSTED_URL to verify."

Migrating from LastPass

#!/usr/bin/env python3
# migrate-from-lastpass.py
# Converts LastPass CSV export to Bitwarden JSON format for Vaultwarden import

import csv
import json
import sys
import os
from datetime import datetime

def lastpass_to_bitwarden(csv_path: str) -> dict:
    """Convert LastPass CSV export to Bitwarden JSON import format."""
    items = []
    folders = set()

    with open(csv_path, 'r', encoding='utf-8-sig') as f:
        reader = csv.DictReader(f)
        for row in reader:
            # Collect folder names
            if row.get('grouping'):
                folders.add(row['grouping'])

            # Determine item type; LastPass marks secure notes with
            # url == "http://sn" and puts the body in the "extra" column
            if row.get('url') == 'http://sn' or row.get('extra', '').startswith('NoteType:'):
                item_type = 2  # Secure Note
            else:
                item_type = 1  # Login

            item = {
                "id": None,
                "organizationId": None,
                "folderId": None,
                "type": item_type,
                "name": row.get('name', 'Unnamed Item'),
                "notes": row.get('extra', '') or None,
                "favorite": row.get('fav', '0') == '1',
                "fields": [],
                "reprompt": 0,
                "collectionIds": []
            }

            if item_type == 1:  # Login
                item["login"] = {
                    "uris": [{"match": None, "uri": row.get('url', '')}] if row.get('url') else [],
                    "username": row.get('username', ''),
                    "password": row.get('password', ''),
                    "totp": row.get('totp', '') or None
                }
            elif item_type == 2:  # Secure Note
                item["secureNote"] = {"type": 0}

            items.append(item)

    export = {
        "encrypted": False,
        "folders": [{"id": None, "name": f} for f in sorted(folders)],
        "items": items
    }

    print(f"Converted {len(items)} items from {len(folders)} folders")
    return export

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 migrate-from-lastpass.py lastpass_export.csv")
        sys.exit(1)

    csv_path = sys.argv[1]
    output_path = f"bitwarden_import_{datetime.now().strftime('%Y%m%d_%H%M')}.json"

    data = lastpass_to_bitwarden(csv_path)
    with open(output_path, 'w') as f:
        json.dump(data, f, indent=2)

    print(f"Saved to: {output_path}")
    print(f"Import via: bw import bitwardenjson {output_path}")
    print("⚠️  Delete the CSV and JSON files immediately after import!")
    print("Both contain plaintext passwords.")
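
Before running `bw import` on a converted file, a quick structural sanity check catches the common conversion mistakes (missing name, wrong item type, missing login object). A sketch, where `validate_import` is a hypothetical helper checking only the fields the converter above emits:

```python
def validate_import(data: dict) -> list:
    """Sanity-check a converted export before feeding it to `bw import`.
    Returns a list of problems; an empty list means it looks importable."""
    problems = []
    if data.get("encrypted") is not False:
        problems.append("'encrypted' should be false for a plaintext import file")
    for i, item in enumerate(data.get("items", [])):
        if not item.get("name"):
            problems.append(f"item {i}: missing name")
        if item.get("type") == 1 and "login" not in item:
            problems.append(f"item {i}: type 1 (login) but no 'login' object")
        if item.get("type") == 2 and "secureNote" not in item:
            problems.append(f"item {i}: type 2 (note) but no 'secureNote' object")
    return problems

sample = {"encrypted": False, "folders": [],
          "items": [{"name": "example", "type": 1,
                     "login": {"username": "u", "password": "p"}}]}
print(validate_import(sample))   # [] means safe to import
```

Run it against the generated JSON before importing; fixing a malformed item in the file is far easier than deleting a half-imported vault.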

Compliance Auditing and Event Monitoring

Regulated industries (healthcare, finance, SOC 2, ISO 27001) require demonstrable evidence of access controls, credential governance, and activity monitoring. Vaultwarden's admin panel and database provide the raw data — this section covers how to extract it systematically and turn it into compliance evidence.

Extracting Audit Events from the Database

#!/usr/bin/env python3
# vaultwarden-audit.py
# Generates compliance audit reports from Vaultwarden's database
# Run monthly for SOC 2 / ISO 27001 evidence collection

import sqlite3
import json
import os
from datetime import datetime, timedelta
from pathlib import Path

# If using PostgreSQL instead of SQLite:
# import psycopg2
# conn = psycopg2.connect(os.environ['DATABASE_URL'])

DB_PATH = os.environ.get('VAULTWARDEN_DB', '/data/db.sqlite3')
REPORT_DIR = Path('/opt/reports/vaultwarden')
REPORT_DIR.mkdir(parents=True, exist_ok=True)

def get_connection():
    return sqlite3.connect(DB_PATH)

def report_active_users(conn) -> dict:
    """List all active users with their last login date."""
    cur = conn.cursor()
    cur.execute("""
        SELECT
            u.email,
            u.name,
            u.created_at,
            u.last_active,
            CASE u.enabled WHEN 1 THEN 'active' ELSE 'disabled' END as status,
            (SELECT COUNT(*) FROM users_organizations uo WHERE uo.user_uuid = u.uuid) as org_count
        FROM users u
        ORDER BY u.last_active DESC NULLS LAST
    """)
    cols = [d[0] for d in cur.description]
    return {"users": [dict(zip(cols, r)) for r in cur.fetchall()]}

def report_dormant_accounts(conn, days: int = 90) -> dict:
    """Find accounts that haven't been active in N days."""
    threshold = (datetime.utcnow() - timedelta(days=days)).isoformat()
    cur = conn.cursor()
    cur.execute("""
        SELECT email, name, last_active, created_at
        FROM users
        WHERE enabled = 1
          AND (last_active IS NULL OR last_active < ?)
        ORDER BY last_active ASC NULLS FIRST
    """, (threshold,))
    cols = [d[0] for d in cur.description]
    dormant = [dict(zip(cols, r)) for r in cur.fetchall()]
    return {
        "dormant_threshold_days": days,
        "dormant_account_count": len(dormant),
        "dormant_accounts": dormant,
        "recommendation": f"Review and disable/remove accounts inactive for {days}+ days"
    }

def report_organization_access(conn) -> dict:
    """Audit which users have access to which organizations."""
    cur = conn.cursor()
    cur.execute("""
        SELECT
            o.name as organization,
            u.email,
            u.name as user_name,
            CASE uo.atype
                WHEN 0 THEN 'Owner'
                WHEN 1 THEN 'Admin'
                WHEN 2 THEN 'User'
                WHEN 3 THEN 'Manager'
                WHEN 4 THEN 'Custom'
            END as role,
            CASE uo.status
                WHEN 0 THEN 'Invited'
                WHEN 1 THEN 'Accepted'
                WHEN 2 THEN 'Confirmed'
            END as membership_status
        FROM users_organizations uo
        JOIN organizations o ON uo.org_uuid = o.uuid
        JOIN users u ON uo.user_uuid = u.uuid
        ORDER BY organization, role
    """)
    cols = [d[0] for d in cur.description]
    return {"org_memberships": [dict(zip(cols, r)) for r in cur.fetchall()]}

def report_password_health(conn) -> dict:
    """Summarize credential counts and types per organization."""
    cur = conn.cursor()
    cur.execute("""
        SELECT
            COALESCE(o.name, 'Personal') as owner,
            COUNT(*) as total_items,
            SUM(CASE c.atype WHEN 1 THEN 1 ELSE 0 END) as login_items,
            SUM(CASE c.atype WHEN 2 THEN 1 ELSE 0 END) as secure_notes,
            SUM(CASE c.atype WHEN 3 THEN 1 ELSE 0 END) as cards,
            SUM(CASE c.atype WHEN 4 THEN 1 ELSE 0 END) as identities
        FROM ciphers c
        LEFT JOIN organizations o ON c.organization_uuid = o.uuid
        GROUP BY owner
        ORDER BY total_items DESC
    """)
    cols = [d[0] for d in cur.description]
    return {"vault_contents": [dict(zip(cols, r)) for r in cur.fetchall()]}

# Generate the full report:
conn = get_connection()
report_date = datetime.utcnow().strftime('%Y-%m-%d')

report = {
    "report_generated": datetime.utcnow().isoformat(),
    "report_type": "Vaultwarden Compliance Audit",
    "period": report_date,
    "active_users": report_active_users(conn),
    "dormant_accounts": report_dormant_accounts(conn, days=90),
    "organization_access": report_organization_access(conn),
    "vault_contents": report_password_health(conn)
}

conn.close()

# Save report:
output_path = REPORT_DIR / f"audit-{report_date}.json"
output_path.write_text(json.dumps(report, indent=2, default=str))
print(f"Audit report saved: {output_path}")

# Summary:
users = report['active_users']['users']
dormant = report['dormant_accounts']['dormant_accounts']
print(f"Total users: {len(users)}")
print(f"Dormant (90+ days): {len(dormant)}")
if dormant:
    print("Dormant accounts requiring review:")
    for u in dormant[:5]:
        print(f"  - {u['email']} (last active: {u['last_active']})")

# Schedule monthly:
# 0 8 1 * * python3 /opt/scripts/vaultwarden-audit.py | mail -s "Monthly Vault Audit" [email protected]
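
Auditors often want access-review evidence as a spreadsheet rather than JSON. A small sketch that flattens the org-membership section of the report above into CSV (`memberships_to_csv` is a hypothetical helper):

```python
import csv
import io

def memberships_to_csv(memberships) -> str:
    """Flatten the org_memberships section of the audit report into CSV,
    the format most auditors want as access-review evidence."""
    if not memberships:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(memberships[0].keys()))
    writer.writeheader()
    writer.writerows(memberships)
    return buf.getvalue()

rows = [{"organization": "Engineering", "email": "[email protected]",
         "user_name": "Alice", "role": "Admin", "membership_status": "Confirmed"}]
print(memberships_to_csv(rows))
```

Feed it `report['organization_access']['org_memberships']` from the audit script and attach the resulting file to your quarterly access-review ticket.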

Automated Deprovisioning for Departed Employees

#!/bin/bash
# deprovision-user.sh
# Offboards a departing user from Vaultwarden
# Run as part of your HR offboarding process

set -euo pipefail

USER_EMAIL="${1:-}"
if [ -z "$USER_EMAIL" ]; then
  echo "Usage: $0 [email protected]"
  exit 1
fi

VW_URL="https://vault.yourdomain.com"
ADMIN_TOKEN="$VAULTWARDEN_ADMIN_TOKEN"

echo "Offboarding: $USER_EMAIL from Vaultwarden"

# Get user ID via admin API:
USER_UUID=$(curl -s "${VW_URL}/admin/users" \
  -H 'Content-Type: application/json' \
  -b "admin_token=${ADMIN_TOKEN}" | \
  jq -r --arg email "$USER_EMAIL" '.[] | select(.email == $email) | .id')

if [ -z "$USER_UUID" ]; then
  echo "User not found: $USER_EMAIL"
  exit 1
fi

echo "Found user: $USER_UUID"

# Step 1: Disable the account (prevents login, preserves data)
curl -X POST "${VW_URL}/admin/users/${USER_UUID}/disable" \
  -b "admin_token=${ADMIN_TOKEN}"
echo "Account disabled"

# Step 2: Remove from all organizations:
curl -X DELETE "${VW_URL}/admin/users/${USER_UUID}/remove-org" \
  -b "admin_token=${ADMIN_TOKEN}" 2>/dev/null || true
echo "Removed from organizations"

# Step 3: Invalidate all sessions (force logout from all devices):
curl -X POST "${VW_URL}/admin/users/${USER_UUID}/deauth" \
  -b "admin_token=${ADMIN_TOKEN}"
echo "Sessions invalidated"

# Step 4: Log the action for compliance:
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) | OFFBOARD | $USER_EMAIL | $USER_UUID" \
  >> /var/log/vaultwarden-offboarding.log

echo "Offboarding complete for: $USER_EMAIL"
echo "Note: Account data preserved for 30 days before deletion per retention policy"

Tips, Gotchas, and Troubleshooting

HA Instances Out of Sync After Network Partition

# Vaultwarden's HA model uses PostgreSQL for all state
# If instances appear out of sync, it's usually a PostgreSQL connection issue

# Check both instances respond (the image ships a healthcheck script):
docker exec vaultwarden_node1 /healthcheck.sh && echo "node1 OK"
docker exec vaultwarden_node2 /healthcheck.sh && echo "node2 OK"

# Check PostgreSQL connections from each node:
docker exec vaultwarden_node1 env | grep DATABASE_URL
docker exec vaultwarden_node2 env | grep DATABASE_URL
# Both must point to the SAME PostgreSQL host and database

# Check for diverged RSA keys (most common sync issue):
# Both nodes must use identical RSA keys for JWT signing
# Verify by checking the shared volume:
docker exec vaultwarden_node1 ls -la /data/keys/
docker exec vaultwarden_node2 ls -la /data/keys/
# File modification times and sizes must be identical
# If different, one node generated its own keys — fix by:
# 1. Stop both nodes
# 2. Delete keys from node2 volume
# 3. Copy keys from node1 to node2
# 4. Restart both nodes

# Test that tokens from node1 are valid on node2:
# Log in through nginx (gets routed to one node)
# Try to use the session (nginx may route to the other node)
# If JWT validation fails: keys are different between nodes
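
Comparing modification times is a weak check; hashing the key files gives a definitive answer. A sketch where `keys_match` is a hypothetical helper, and the `rsa_key.pem` / `rsa_key.pub.pem` filenames are assumptions to adjust to whatever your nodes actually store:

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 of a file; identical fingerprints mean identical key material."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def keys_match(dir_a: Path, dir_b: Path,
               names=("rsa_key.pem", "rsa_key.pub.pem")) -> bool:
    """Compare the key files two nodes are using (filenames assumed)."""
    return all(fingerprint(dir_a / n) == fingerprint(dir_b / n) for n in names)

# Demo with temporary directories standing in for the two nodes' volumes:
with tempfile.TemporaryDirectory() as tmp:
    a, b = Path(tmp, "node1"), Path(tmp, "node2")
    for d in (a, b):
        d.mkdir()
        (d / "rsa_key.pem").write_text("same private key material")
        (d / "rsa_key.pub.pem").write_text("same public key material")
    print(keys_match(a, b))   # True: both nodes hold identical keys
```

In practice you would `docker cp` each node's key directory to the host and point the two `Path` arguments at those copies.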

SSO Login Redirecting Incorrectly

# Check Vaultwarden SSO config is loaded:
docker logs vaultwarden --tail 30 | grep -i sso

# Verify the OIDC discovery document is accessible:
curl -s "${SSO_AUTHORITY}/.well-known/openid-configuration" | \
  jq '{issuer: .issuer, auth: .authorization_endpoint, token: .token_endpoint}'

# Common redirect URI issues:
# The redirect URI registered in your OIDC provider must EXACTLY match:
# https://vault.yourdomain.com/identity/connect/oidc-signin
# Case-sensitive, no trailing slash

# Test the full OIDC flow (Keycloak authorization endpoint shown):
curl -s -o /dev/null -w '%{http_code}\n' -G \
  "${SSO_AUTHORITY}/protocol/openid-connect/auth" \
  --data-urlencode "client_id=vaultwarden" \
  --data-urlencode "response_type=code" \
  --data-urlencode "redirect_uri=https://vault.yourdomain.com/identity/connect/oidc-signin" \
  --data-urlencode "scope=openid email profile"
# Should return 200 (the provider's login page) or a 302 redirect, not a 400

# If using Keycloak and getting "client not found":
# Check the client is in the right REALM
# SSO_AUTHORITY must end with /realms/YOUR_REALM_NAME
# NOT /realms/master (the master realm is for Keycloak admin only)

Migration Import Duplicating Items

# Bitwarden/Vaultwarden import doesn't check for duplicates
# Running import twice creates duplicate entries

# Check current item count before import:
bw list items --session $BW_SESSION | jq 'length'

# If you accidentally imported twice:
# Option 1: Delete all items and re-import (clean start)
bw list items --session $BW_SESSION | jq -r '.[].id' | \
  xargs -I{} bw delete item {} --session $BW_SESSION

# Option 2: Find and delete duplicates:
python3 << 'EOF'
import json
import subprocess
import os

session = os.environ['BW_SESSION']
result = subprocess.run(
    ['bw', 'list', 'items', '--session', session],
    capture_output=True, text=True
)
items = json.loads(result.stdout)

# Group by name + first URI to find duplicates:
from collections import defaultdict
groups = defaultdict(list)
for item in items:
    # login may be absent (notes/cards) and uris may be an empty list
    uris = (item.get('login') or {}).get('uris') or [{}]
    key = f"{item['name']}|{uris[0].get('uri', '')}"
    groups[key].append(item['id'])

# Delete all but the first occurrence of each duplicate:
for key, ids in groups.items():
    if len(ids) > 1:
        for dup_id in ids[1:]:  # Keep first, delete rest
            subprocess.run(['bw', 'delete', 'item', dup_id, '--session', session])
            print(f"Deleted duplicate: {key[:50]}")
EOF

Pro Tips

  • Run PostgreSQL with automatic failover for true HA — a Vaultwarden HA cluster is only as available as its PostgreSQL instance. Use Patroni or AWS RDS Multi-AZ for PostgreSQL HA, or at minimum schedule frequent PostgreSQL backups and test the restore time against your RTO requirements.
  • Test SSO failure modes before rolling it out to the team — if your OIDC provider goes down and SSO_ONLY=true is set, nobody can log in. Keep a break-glass admin account with a local password that bypasses SSO for emergency access during IdP outages. Never set SSO_ONLY=true without a documented emergency access procedure.
  • Schedule quarterly access reviews using the audit script — set a calendar reminder to run the audit report, review dormant accounts, and remove access that's no longer needed. This is the most important ongoing operational task for credential security, and it's easy to defer indefinitely without a schedule.
  • Document where every key lives and who holds it — four guides' worth of setup comes down to a few critical secrets: where is the admin token? Where is the GPG backup encryption key? Who has access to each? Test that at least two people can independently retrieve them and restore the vault from scratch.
  • Treat the PostgreSQL database as the crown jewel — in a PostgreSQL-backed Vaultwarden HA setup, all vault data is in the database. The Vaultwarden containers are stateless (except for RSA keys). Your backup, monitoring, and DR planning should be centered on PostgreSQL, not the application containers.

Wrapping Up

This four-guide series now covers the complete Vaultwarden production lifecycle: initial deployment and client configuration, team organizations and security hardening, CI/CD secrets and infrastructure automation, and this guide's HA setup, SSO integration, migration tooling, and compliance auditing.

The result is a credential management platform that meets enterprise requirements without a SaaS vendor in the supply chain. Every credential your organization depends on lives on infrastructure you control, backed up according to your retention policies, audited according to your compliance requirements, and accessible via your existing corporate identity — not a separate password manager account your IT team can't govern.


Need Enterprise Credential Infrastructure Built for Your Organization?

Designing HA Vaultwarden with PostgreSQL clustering, OIDC SSO integration with your existing IdP, systematic migration from current password managers, and compliance audit tooling that satisfies your security team — the sysbrix team builds credential management infrastructure for organizations that need enterprise-grade security on self-controlled infrastructure.

Talk to Us →