Workflow automation platforms have become essential infrastructure for modern teams, yet most organizations route their business data through proprietary cloud services that charge per-task premiums and store execution logs on third-party servers. Activepieces is an open-source, code-first automation platform that lets you build complex workflows with a visual builder, run them on your own hardware, and extend functionality with custom pieces written in TypeScript. It supports triggers from webhooks, schedules, and third-party services, then executes actions across databases, APIs, messaging systems, and AI models.
This guide deploys Activepieces on Ubuntu with Docker Compose, Caddy for automatic HTTPS, PostgreSQL with pgvector for workflow metadata and vector search, and Redis for job queuing and caching. By the end, you will have a production-ready automation stack that can replace commercial workflow tools, keep sensitive data on premises, and scale horizontally by adding worker replicas.
Architecture and flow overview
Caddy sits at the edge and terminates TLS with automatically managed Let's Encrypt certificates. It reverse-proxies HTTPS traffic to the Activepieces app container on its internal HTTP port. The app container serves the React frontend and the REST API, persists flow definitions, run history, user accounts, and project settings in PostgreSQL, and enqueues job payloads in Redis. Worker containers poll Redis for pending jobs, execute the flow steps inside isolated sandbox processes, and write results back to the database.
All services except Caddy remain inside an isolated Docker bridge network. The database and Redis are unreachable from the public internet because they do not publish host ports. The app and worker containers share a cache volume for flow artifacts and temporary files, but worker sandboxes run with restricted permissions to limit blast radius if a piece behaves unexpectedly. Persistent volumes keep PostgreSQL data, Redis append-only files, Caddy certificates, and Activepieces cache across container restarts and image upgrades.
Prerequisites
- Ubuntu 22.04 or 24.04 LTS server with at least 2 vCPU, 4 GB RAM, and 40 GB SSD. For teams running more than one hundred concurrent flows, scale to 4 vCPU and 8 GB RAM, and provision a separate worker host.
- Docker Engine 24.x and the Docker Compose plugin installed. Verify with docker compose version.
- A DNS A record pointing automation.example.com to your server's public IP. Do not start Caddy before DNS resolves, or Let's Encrypt rate limits may block your domain temporarily.
- SMTP credentials for email notifications. Activepieces uses email for password resets, invitation links, and failure alerts.
- UFW enabled with default-deny incoming, plus SSH allowed from your management IP range and ports 80 and 443 open for Caddy.
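The firewall baseline above can be applied with a few ufw commands. The management CIDR below is a documentation-range placeholder to replace with your own:

```shell
# Default-deny inbound, allow SSH only from a management range (placeholder CIDR),
# and open HTTP/HTTPS for Caddy and the ACME challenge.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp   # replace with your range
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable   # --force skips the interactive confirmation prompt
```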
Step-by-step deployment
1. Create directory structure and environment file
Create a dedicated directory for the Activepieces stack, set strict permissions, and prepare an environment file for secrets that never enters version control. Use strong passwords and store the real values in your password manager or secrets vault.
sudo mkdir -p /opt/activepieces/{caddy-data,caddy-config,cache}
sudo useradd -r -s /usr/sbin/nologin -d /opt/activepieces activepieces || true
sudo chown -R activepieces:activepieces /opt/activepieces/cache
sudo chown "$(id -un)": /opt/activepieces   # so your admin user can write the compose and env files
sudo chmod 750 /opt/activepieces
cd /opt/activepieces
cat > .env <<EOF   # unquoted delimiter so the $(openssl ...) substitutions expand to real values
AP_FRONTEND_URL=https://automation.example.com
AP_WEBHOOK_URL=https://automation.example.com
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=postgres
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=activepieces
AP_POSTGRES_PASSWORD=$(openssl rand -hex 24)
AP_REDIS_URL=redis://redis:6379
AP_ENCRYPTION_KEY=$(openssl rand -hex 16)
AP_JWT_SECRET=$(openssl rand -hex 32)
AP_TELEMETRY_ENABLED=false
AP_ENVIRONMENT=production
AP_ENGINE_EXECUTABLE_PATH=dist/packages/engine/main.js
AP_SMTP_HOST=smtp.sendgrid.net
AP_SMTP_PORT=587
AP_SMTP_USERNAME=apikey
AP_SMTP_PASSWORD=your-smtp-password
AP_SMTP_SENDER_NAME=Activepieces
[email protected]
EOF
chmod 600 .env
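The openssl rand commands used for the secrets print fixed-width hex: -hex N emits 2×N characters, so -hex 16 yields the 32-character value used for AP_ENCRYPTION_KEY and -hex 32 yields a 64-character JWT secret. A standalone check:

```shell
# openssl rand -hex N emits 2*N hexadecimal characters.
ENC_KEY=$(openssl rand -hex 16)    # 32 hex chars, matching the AP_ENCRYPTION_KEY line above
JWT_SECRET=$(openssl rand -hex 32) # 64 hex chars, matching the AP_JWT_SECRET line above
echo "encryption key length: ${#ENC_KEY}"
echo "jwt secret length: ${#JWT_SECRET}"
```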
2. Write the Docker Compose file
Pin major image versions, define restart policies, keep the database and Redis on the private network, and run the app container without published ports because Caddy handles ingress. The worker depends on the app so that migrations run before background jobs begin.
cat > docker-compose.yml <<'EOF'
services:
  caddy:
    image: caddy:2.8
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-data:/data
      - ./caddy-config:/config
    depends_on:
      - app
    networks:
      - activepieces

  app:
    image: ghcr.io/activepieces/activepieces:0.83.0
    container_name: activepieces-app
    restart: unless-stopped
    env_file: .env
    environment:
      - AP_CONTAINER_TYPE=APP
    volumes:
      - ./cache:/usr/src/app/cache
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - activepieces

  worker:
    image: ghcr.io/activepieces/activepieces:0.83.0
    restart: unless-stopped
    env_file: .env
    environment:
      - AP_CONTAINER_TYPE=WORKER
    volumes:
      - ./cache:/usr/src/app/cache
    depends_on:
      - app
      - postgres
      - redis
    networks:
      - activepieces
    deploy:
      replicas: 2

  postgres:
    image: pgvector/pgvector:0.8.0-pg14
    container_name: activepieces-postgres
    restart: unless-stopped
    env_file: .env
    environment:
      - POSTGRES_DB=${AP_POSTGRES_DATABASE}
      - POSTGRES_USER=${AP_POSTGRES_USERNAME}
      - POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${AP_POSTGRES_USERNAME} -d ${AP_POSTGRES_DATABASE}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - activepieces

  redis:
    image: redis:7.0.7
    container_name: activepieces-redis
    restart: unless-stopped
    volumes:
      - redis_data:/data
    networks:
      - activepieces

volumes:
  postgres_data:
  redis_data:

networks:
  activepieces:
    driver: bridge
EOF
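Before moving on, it helps to confirm the YAML parses and every ${VAR} interpolates from .env. docker compose config does both without starting any containers:

```shell
# Parse and interpolate the compose file without starting containers;
# a nonzero exit or an error message indicates a YAML or interpolation problem.
docker compose config --quiet && echo "compose file OK"
```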
3. Write the Caddyfile
Caddy handles HTTPS automatically, compresses responses, and adds security headers. The reverse proxy forwards all traffic to the app container. If you need WebSocket support for real-time updates, Caddy upgrades the connection transparently.
cat > Caddyfile <<'EOF'
automation.example.com {
	reverse_proxy app:80
	encode gzip
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
		Referrer-Policy "strict-origin-when-cross-origin"
	}
}
EOF
4. Start the stack
Pull images, create volumes, and start services in detached mode. Watch the app container logs until migrations complete and the API starts listening. The first startup may take two to three minutes because the app runs database migrations and seeds default pieces.
docker compose pull
docker compose up -d
sleep 10
docker compose logs -f app
When you see a log line indicating the server is listening on port 80, open your browser and navigate to https://automation.example.com. Register the first admin account, create an organization, and verify that email sending works by inviting a test user.
5. Verify worker execution
Create a simple scheduled flow that sends an HTTP request to a public endpoint, then trigger a manual test run. Watch the worker logs to confirm the job is picked up and executed successfully.
docker compose logs -f worker
Configuration and secrets handling
Never commit the .env file to Git. Store a backup in your team password manager or a secrets vault such as HashiCorp Vault or 1Password. Rotate the AP_ENCRYPTION_KEY and AP_JWT_SECRET during annual security reviews; note that changing AP_ENCRYPTION_KEY after data exists will invalidate stored credentials, so plan a migration window.
Configure SMTP through the environment file rather than the UI so that settings survive container recreation. If you use SendGrid, Amazon SES, or Mailgun, replace the AP_SMTP_* values with the provider-specific host, port, and API key. For Microsoft 365, use smtp.office365.com on port 587 with STARTTLS.
Webhooks are a common attack surface for automation platforms. Set AP_WEBHOOK_URL to the public HTTPS endpoint so that external services can deliver payloads. If you run Activepieces behind a corporate firewall, whitelist the partner IP ranges or use a dedicated webhook ingress path. Inside the UI, enforce signature verification on every webhook trigger to prevent spoofed events from running flows.
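Under the hood, signature verification is an HMAC comparison over the raw request body. The sketch below uses a hypothetical shared secret and payload to show the arithmetic both sides perform; real providers document their own header names, encodings, and signing schemes, so treat those details as provider-specific:

```shell
# Hypothetical payload and shared secret; real providers define their own scheme.
SECRET="demo-webhook-secret"
PAYLOAD='{"event":"order.created","id":42}'

# Sender side: sign the raw body with HMAC-SHA256.
SIGNATURE=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')

# Receiver side: recompute over the received body and compare.
RECOMPUTED=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
if [ "$SIGNATURE" = "$RECOMPUTED" ]; then
  echo "signature valid"
else
  echo "signature mismatch"
fi
```

Any tampering with the body changes the recomputed digest, so the comparison fails for spoofed or modified payloads.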
Verification
Confirm that all containers are healthy and that the public URL responds with a valid certificate.
docker compose ps
curl -sS -o /dev/null -w "%{http_code}" https://automation.example.com/api/v1/health
Run a database connectivity check from the postgres container, which ships the pg_isready binary, to ensure the server accepts connections and migrations applied cleanly.
docker compose exec postgres pg_isready -U activepieces -d activepieces
Create a test flow with a webhook trigger and a PostgreSQL insert action, trigger it with curl, and query the database to confirm the row was written. To inspect queue depth, list the BullMQ keys with docker compose exec redis redis-cli KEYS 'bull:*' and run LLEN against the wait lists you find; the exact queue names vary between Activepieces versions. Empty wait lists after flow completion indicate workers are keeping up with demand.
Common issues and fixes
App container exits with migration errors
If the app crashes on startup with a database schema error, the postgres healthcheck may have passed before the database was actually ready to accept DDL. Bring the stack down, start PostgreSQL and Redis first, wait until PostgreSQL reports readiness, then start the remaining services so migrations run against a fully initialized database.
docker compose down
docker compose up -d postgres redis
docker compose logs -f postgres   # wait for "database system is ready to accept connections", then Ctrl-C
docker compose up -d
Flows stay in pending state and never execute
This usually means the worker container cannot reach Redis or the AP_CONTAINER_TYPE variable is missing. Verify the worker environment includes AP_CONTAINER_TYPE=WORKER and that Redis is reachable on the activepieces network. Check worker logs for connection refused errors.
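A few quick checks narrow this down; these run against the live stack, so the exact log strings are illustrative:

```shell
# Confirm the worker sees its role variable:
docker compose exec worker env | grep AP_CONTAINER_TYPE

# Confirm Redis answers on the shared network (expect PONG):
docker compose exec redis redis-cli ping

# Scan worker logs for connection failures to Redis:
docker compose logs worker | grep -i -E "econnrefused|redis"
```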
Caddy serves a blank page or 502 error
Ensure the DNS A record resolves to the server IP before Caddy starts. If you started Caddy too early, stop the Caddy container, clear its data directory to remove cached certificate state, and restart it.
docker compose stop caddy
sudo rm -rf /opt/activepieces/caddy-data/*
docker compose up -d caddy
Email invitations never arrive
Test SMTP connectivity manually from the app container with swaks or openssl s_client. If the provider requires TLS on port 465 instead of STARTTLS on port 587, update AP_SMTP_PORT and verify the provider documentation. Also confirm the sender domain has valid SPF and DKIM records.
Worker memory usage grows over time
Each flow step spawns a sandbox process. Memory leaks in custom pieces or long-running loops can exhaust RAM. Set Docker memory limits on the worker container, monitor with docker stats, and restart workers nightly if needed until the upstream memory issue is patched.
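One way to cap worker memory without editing the main file is a compose override, which docker compose merges automatically. The 1 GB figure below is an illustrative starting point, not a recommendation:

```shell
# docker-compose.override.yml is merged with docker-compose.yml automatically.
# mem_limit caps each worker replica; tune 1g for your workload.
cat > docker-compose.override.yml <<'EOF'
services:
  worker:
    mem_limit: 1g
EOF
grep -q "mem_limit" docker-compose.override.yml && echo "override in place"
```

Apply it with docker compose up -d; workers exceeding the limit are OOM-killed and restarted by the unless-stopped policy instead of taking the host down.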
FAQ
Can I scale workers independently of the app container?
Yes. The worker container is stateless and polls Redis for jobs. Increase the deploy.replicas count in the Compose file, or run worker containers on separate hosts that connect to the same Redis and PostgreSQL endpoints. Horizontal scaling is limited by database connection pool size and Redis throughput, so monitor both before adding more than ten workers.
Does Activepieces support AI pieces in self-hosted mode?
Yes. The self-hosted image includes the standard AI pieces for OpenAI, Anthropic, and local models. If you need vector search for embeddings, the PostgreSQL image already includes pgvector. Configure an AI piece with your API key inside the UI; the key is encrypted with AP_ENCRYPTION_KEY before storage.
How do I back up the PostgreSQL database?
Use pg_dump from a temporary container or the host. Schedule a nightly cron job that writes dumps to an S3-compatible object store or a separate backup server. Test restoration quarterly on a staging instance to confirm dump integrity.
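A minimal sketch of the nightly dump, assuming the stack lives in /opt/activepieces with the database name and user from the .env above; the retention window and the off-host copy step are assumptions to adapt:

```shell
# Write a backup script next to the compose file.
cat > backup.sh <<'SH'
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR=/opt/activepieces/backups
mkdir -p "$BACKUP_DIR"
STAMP=$(date +%F)
# Dump through the running postgres container; -T disables TTY allocation for cron.
docker compose --project-directory /opt/activepieces exec -T postgres \
  pg_dump -U activepieces -d activepieces | gzip > "$BACKUP_DIR/activepieces-$STAMP.sql.gz"
# Prune dumps older than 14 days; copy fresh dumps off-host with your own tooling.
find "$BACKUP_DIR" -name 'activepieces-*.sql.gz' -mtime +14 -delete
SH
chmod +x backup.sh
# Schedule it nightly, for example:
#   echo "30 2 * * * root /opt/activepieces/backup.sh" | sudo tee /etc/cron.d/activepieces-backup
```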
What is the difference between APP and WORKER containers?
The APP container serves the REST API, React frontend, and runs database migrations. The WORKER container polls Redis for pending flow jobs and executes them inside sandbox processes. Separating the roles lets you scale workers without exposing additional API surface area, and lets you restart workers without dropping web traffic.
Can I use an external SMTP relay like SendGrid or Amazon SES?
Yes. Any SMTP-compatible relay works. For SendGrid, set AP_SMTP_HOST to smtp.sendgrid.net, AP_SMTP_PORT to 587, and AP_SMTP_USERNAME to apikey with your SendGrid API key as the password. For Amazon SES, use the SMTP endpoint and credentials from the SES console.
How do I update Activepieces without losing flows?
Pull the new image tag, run docker compose up -d, and the app container will apply any pending migrations automatically. Always back up PostgreSQL before major version upgrades. Review the upstream release notes for breaking changes in piece APIs or environment variables.
Is pgvector required, or can I use standard PostgreSQL?
Activepieces works with standard PostgreSQL, but this guide uses the pgvector image because some AI pieces store embeddings in the same database. If you do not plan to use vector features, you can replace the image with postgres:14-alpine. Both images share the PostgreSQL 14 on-disk format, so you can swap them against the same data volume, provided no tables use pgvector column types.
How do I secure the worker containers?
Run workers on a private network segment with no public IP. Mount only the cache volume, not the Docker socket. Set resource limits in the Compose file to prevent a runaway flow from consuming all host memory. Review custom pieces before installing them from the community catalog, because they execute code inside the worker sandbox.
Internal links
- Production Guide: Deploy n8n with Docker Compose + Caddy + PostgreSQL + Redis on Ubuntu
- Production Guide: Deploy Ghost with Docker Compose + Caddy + MySQL on Ubuntu
- Production Guide: Deploy Vaultwarden with Docker Compose + Caddy on Ubuntu
Talk to us
If you want this implemented with hardened defaults, observability, and tested recovery playbooks, our team can help.