
Deploy with Coolify: Git Deployments, Databases, Multi-Server Management, and Production CI/CD

Go beyond basic app deployment — learn how to wire up Git-based CI/CD, manage production databases with automated backups, run multi-server infrastructure from one dashboard, and configure resource limits and health checks in Coolify.


Getting your first app deployed on Coolify takes minutes. Running a real production platform on it — with Git-based CI/CD that auto-deploys on push, managed PostgreSQL with scheduled S3 backups, multiple servers managed from one dashboard, resource limits that prevent one app from starving the others, and health checks that catch broken deployments before they go live — takes a bit more configuration. This guide covers advanced Coolify usage for teams running serious infrastructure on their own servers.

If you haven't installed Coolify and deployed your first app yet, start with our Coolify getting started guide, which covers installation, HTTPS setup, and basic Docker image deployments. This guide picks up from a working Coolify instance with at least one app running.


Prerequisites

  • A running Coolify instance with HTTPS configured — see our getting started guide
  • At least one application already deployed and working
  • A GitHub, GitLab, or Gitea account with repositories you want to deploy from
  • SSH access to your VPS for debugging and advanced operations
  • Ports 80, 443, and 8000 open (Coolify dashboard)

Confirm your Coolify instance is healthy before making changes:

ssh root@your-server-ip

# Check all Coolify containers are running
docker ps | grep coolify

# Check Traefik (reverse proxy) is healthy
docker ps | grep traefik

# Verify Coolify API is responding
curl -I http://localhost:8000
# Expect a 2xx, or a 3xx redirect to the login page

Git-Based CI/CD Deployments

Coolify's most powerful feature for development teams is its Git integration — connect a repository, configure a branch, and every push automatically triggers a build and deploy. No separate CI/CD pipeline needed for straightforward deployments.

Connecting a Git Source

In the Coolify dashboard, go to Sources → Add. Connect your Git provider:

  • GitHub — install the Coolify GitHub App on your organization or personal account. This gives Coolify read access to repos and the ability to register webhooks without a personal access token.
  • GitLab — create a GitLab OAuth application, paste the credentials into Coolify. Works for both gitlab.com and self-hosted GitLab instances.
  • Gitea — use a Gitea personal access token. Set the Gitea instance URL to your self-hosted Gitea domain.
  • Any public Git URL — paste a repository URL directly without authentication for public repos.

Deploying from a Git Repository

Create a new resource: New Resource → Application → Public/Private Repository. Select your connected source, choose the repository and branch. Coolify then needs to know how to build your app:

  • Dockerfile — if your repo has a Dockerfile, Coolify detects and uses it automatically. Full control over the build.
  • Nixpacks — Coolify auto-detects your language and framework (Node.js, Python, Ruby, Go, etc.) and builds without a Dockerfile. Works for most standard apps out of the box.
  • Buildpacks — Heroku-compatible buildpacks for legacy apps already configured for Heroku-style deployment.

Configuring the Webhook for Auto-Deploy

After creating the application, go to its Webhooks tab to get the webhook URL. Register it in your Git provider:

# For GitHub — add webhook via API:
curl -X POST https://api.github.com/repos/YOUR_ORG/YOUR_REPO/hooks \
  -H "Authorization: Bearer YOUR_GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "web",
    "active": true,
    "events": ["push"],
    "config": {
      "url": "https://coolify.yourdomain.com/webhooks/source/github/events/manual?token=YOUR_COOLIFY_WEBHOOK_TOKEN",
      "content_type": "json",
      "insecure_ssl": "0"
    }
  }'

# Alternatively: in GitHub repo → Settings → Webhooks → Add webhook
# Paste the URL from Coolify's Webhooks tab
# Content type: application/json
# Events: Just the push event

Branch-Based Deployment Strategy

Configure Coolify to deploy different branches to different environments. The pattern: one Coolify application per environment, each pointing at a different branch of the same repository:

# Recommended multi-environment setup in Coolify:

# Production app:
# Repository: github.com/yourorg/yourapp
# Branch: main
# Domain: app.yourdomain.com
# Environment: Production env vars

# Staging app:
# Repository: github.com/yourorg/yourapp
# Branch: staging
# Domain: staging.yourdomain.com
# Environment: Staging env vars (staging DB, test API keys)

# Preview app (optional — per PR):
# Coolify supports preview deployments natively
# Enable in app settings: "Enable Preview Deployments"
# Each PR gets its own subdomain: pr-42.yourdomain.com

Build Arguments and Secrets

Build-time secrets (API keys needed during docker build, not at runtime) go in the application's Build Arguments section. Runtime secrets go in Environment Variables. Keep them separate — build args can end up in Docker layer history; runtime env vars do not:

# In your Dockerfile, reference build args safely:
FROM node:20-alpine AS builder
WORKDIR /app

# Build arg: only available during the build, not in the final image
ARG NPM_TOKEN
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > ~/.npmrc

# Install dependencies before copying the rest of the source
COPY package*.json ./
RUN npm ci
RUN rm ~/.npmrc  # Remove the token before further layers

COPY . .
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
# Runtime env vars injected by Coolify at container start
CMD ["node", "dist/server.js"]

# In Coolify: Application → Build Arguments
# NPM_TOKEN = your-npm-token (marked as secret)

Managed Databases with Automated Backups

Deploying a Production Database

Coolify manages database lifecycle — deployment, persistent storage, connection strings, and scheduled backups. Go to New Resource → Database and choose your engine. For a PostgreSQL database:

  • Name: something descriptive like myapp-postgres-prod
  • Version: pin to a specific minor version (e.g., 16.2) — never use latest for databases
  • Username/Password/Database: Coolify generates these; copy the connection string before leaving the page
  • Publicly available: leave off unless you specifically need external access — databases should be internal only
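Once the database is created, a quick sanity check from the host confirms the container is up and accepting queries. The container, user, and database names below are assumptions; substitute the values Coolify generated for you:

```shell
# Find the Postgres container Coolify created (the name here is an
# assumption; check the dashboard for the actual container name):
docker ps --filter "name=myapp-postgres-prod" --format '{{.Names}}\t{{.Status}}'

# Run a test query from inside the container:
docker exec myapp-postgres-prod \
  psql -U coolify -d myapp -c 'SELECT version();'
```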

Connecting Apps to Coolify-Managed Databases

# Coolify provides two connection strings per database:

# Internal URL (use this when app is on same Coolify server):
# postgresql://user:pass@postgres-container-name:5432/dbname
# Faster, doesn't leave the Docker network, no firewall rules needed

# External URL (use for external tools, local dev):
# postgresql://user:pass@your-server-ip:EXPOSED_PORT/dbname
# Only available if "Publicly available" is enabled

# In your app's Environment Variables in Coolify:
# DATABASE_URL = postgresql://coolify:generatedpass@myapp-postgres-prod:5432/myapp

# Verify connectivity from inside the app container
# (assumes pg_isready exists in the app image; if not, try
#  `nc -z myapp-postgres-prod 5432` instead):
docker exec YOUR_APP_CONTAINER \
  sh -c 'pg_isready -h myapp-postgres-prod -U coolify'
# Should output: myapp-postgres-prod:5432 - accepting connections

Configuring S3 Backups

Every database in Coolify supports scheduled backups to S3-compatible storage. Go to the database settings → Backups and configure:

# Coolify backup settings (configured in dashboard):
# S3 Endpoint: https://s3.yourdomain.com  (or s3.amazonaws.com for AWS)
# S3 Bucket: your-backup-bucket
# S3 Access Key: YOUR_ACCESS_KEY
# S3 Secret Key: YOUR_SECRET_KEY
# S3 Region: us-east-1  (or your region)
# Backup Schedule: 0 2 * * *  (daily at 2am)
# Retention: 30  (keep 30 backups)

# Test the backup manually:
# Database → Backups → Run Now

# Verify backup landed in S3:
aws s3 ls s3://your-backup-bucket/coolify/ --recursive | tail -5
# Or with mc if using MinIO:
mc ls minio/your-backup-bucket/coolify/
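Backups you can't restore are worthless, so rehearse a restore at least once. The sketch below assumes Coolify wrote a pg_dump archive to S3; the exact key layout varies between versions, so list the bucket first and adapt the paths (bucket, key, container, user, and database names here are all placeholders):

```shell
# List recent backups to find the key you want:
aws s3 ls s3://your-backup-bucket/coolify/ --recursive | tail -5

# Download the chosen dump (the key is a placeholder):
aws s3 cp s3://your-backup-bucket/coolify/PATH/TO/backup.dmp /tmp/backup.dmp

# Restore into the running container; use pg_restore for custom-format
# dumps, plain psql for SQL-text dumps:
docker exec -i myapp-postgres-prod \
  pg_restore -U coolify -d myapp --clean --if-exists < /tmp/backup.dmp
```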

Multi-Server Infrastructure Management

Coolify's multi-server feature lets you manage applications across multiple VPS instances from a single dashboard. This is where Coolify starts to feel like a genuine internal cloud platform.

Adding a Remote Server

# On the REMOTE server — prepare it to accept Coolify management:
# 1. Install Docker (Coolify will do this if you let it)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER   # skip if you're logged in as root

# 2. Generate an SSH key pair ON THE COOLIFY SERVER:
ssh-keygen -t ed25519 -C "coolify-server-key" -f ~/.ssh/coolify_remote
cat ~/.ssh/coolify_remote.pub
# Copy this public key

# 3. Add the public key to the remote server's authorized_keys:
ssh root@remote-server-ip
echo "PASTE_PUBLIC_KEY_HERE" >> ~/.ssh/authorized_keys

# 4. In Coolify dashboard:
# Servers → Add Server
# IP: remote-server-ip
# Port: 22
# User: root
# Private Key: paste contents of ~/.ssh/coolify_remote

# 5. Coolify validates connectivity and installs Docker if missing
# Click "Validate & Save" then "Install Docker" if prompted

Workload Separation Strategy

With multiple servers connected, plan your workload placement deliberately. A practical separation for a growing team:

  • Server 1 (Coolify host) — Coolify itself, shared tools, low-traffic internal apps
  • Server 2 (Application server) — production-facing web apps and APIs
  • Server 3 (Data server) — databases, Redis, MinIO — storage-optimized disk, no public traffic
  • Server 4 (Worker server) — background workers, n8n, AI tools — CPU-optimized, can be stopped outside business hours

For a practical example of deploying automation tools like n8n on a dedicated Coolify-managed server, see our guide on n8n Coolify deployment.

Deploying Apps to Specific Servers

When creating a new application or database, Coolify lets you select which connected server it runs on. You can also move existing resources between servers from the resource settings page. Each server runs its own Traefik instance managed by Coolify — domains and SSL are configured per-server automatically.
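Once several servers are connected, Coolify's REST API is handy for auditing what runs where. This is a hedged sketch: the endpoint paths follow Coolify v4's API, but verify them against the official API reference for your version, and create the token in the dashboard first:

```shell
# Paths below assume Coolify v4's /api/v1 REST API
COOLIFY_URL=https://coolify.yourdomain.com
COOLIFY_TOKEN=YOUR_API_TOKEN   # created under Keys & Tokens → API tokens

# List connected servers with their UUIDs:
curl -s -H "Authorization: Bearer $COOLIFY_TOKEN" \
  "$COOLIFY_URL/api/v1/servers" | jq -r '.[] | "\(.uuid)\t\(.name)"'
```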


Health Checks and Resource Limits

Configuring Health Checks

Coolify uses Docker health checks to determine whether a deployment succeeded. Without a health check, Coolify marks a deploy as successful the moment the container starts — even if the app inside immediately crashes. Configure it in your application's Advanced settings:

# Option 1: Health check in Dockerfile (preferred — version-controlled)
FROM node:20-alpine

# Install curl for health check
RUN apk add --no-cache curl

COPY . .
RUN npm ci --production

EXPOSE 3000

# Health check: hit /health every 30s, timeout after 10s
# Unhealthy if it fails 3 times in a row
HEALTHCHECK --interval=30s --timeout=10s --retries=3 --start-period=40s \
  CMD curl -f http://localhost:3000/health || exit 1

CMD ["node", "server.js"]

# Option 2: Health check in Coolify Advanced settings:
# Health Check Command: curl -f http://localhost:3000/health
# Interval: 30
# Timeout: 10
# Retries: 3
# Start Period: 40

Your app needs a /health endpoint that returns a non-2xx status if the app is unhealthy (database disconnected, required services unavailable, etc.). A minimal Node.js implementation:

// health.js — add to your Express/Fastify/Hono app
// (db and redis below are your app's existing client instances)
app.get('/health', async (req, res) => {
  const checks = {};
  let healthy = true;

  // Check database connectivity
  try {
    await db.query('SELECT 1');
    checks.database = 'ok';
  } catch (err) {
    checks.database = 'error';
    healthy = false;
  }

  // Check Redis connectivity
  try {
    await redis.ping();
    checks.redis = 'ok';
  } catch (err) {
    checks.redis = 'error';
    healthy = false;
  }

  const status = healthy ? 200 : 503;
  res.status(status).json({
    status: healthy ? 'ok' : 'degraded',
    checks,
    timestamp: new Date().toISOString()
  });
});
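With the endpoint in place, exercise it from the server before wiring it into Coolify. Port 3000 is an assumption; use whatever internal port your app binds to:

```shell
# The status code alone distinguishes healthy from degraded:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/health

# Full per-check detail, pretty-printed:
curl -s http://localhost:3000/health | jq .
# "status" is "ok" on 200 and "degraded" on 503
```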

Setting Resource Limits

Without resource limits, one resource-hungry app can starve everything else on the server. Set limits in each application's Advanced settings:

# Resource limit guidelines by app type:

# Small web app / API (low traffic):
# CPU: 0.5 cores
# Memory: 256MB

# Medium web app / API (moderate traffic):
# CPU: 1.0 cores
# Memory: 512MB

# Node.js / Python app with background processing:
# CPU: 1.0-2.0 cores
# Memory: 1GB

# AI/ML workloads (e.g., Dify worker, n8n with complex flows):
# CPU: 2.0-4.0 cores
# Memory: 2-4GB

# Databases — let them use what they need but cap them:
# PostgreSQL: CPU 2.0, Memory 1-2GB
# Redis: CPU 0.5, Memory 512MB

# Monitor actual usage to tune:
docker stats --no-stream | sort -k4 -h
# Shows CPU and memory usage per container — sorted by memory

Advanced Deployment Patterns

Zero-Downtime Deployments

By default, Coolify stops the old container and starts the new one — brief downtime during deploy. For zero-downtime, enable rolling deployments in the application's Advanced settings. Coolify starts the new container, waits for the health check to pass, then stops the old one. Requires a health check to be configured:

# In Coolify application Advanced settings:
# Rolling Update: Enabled
# Start Period: 40s  (time to wait before checking health)
# Health Check must be configured for this to work

# Verify zero-downtime is working:
# Run a continuous curl loop during a deploy:
while true; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://app.yourdomain.com/health)
  echo "$(date +%H:%M:%S) - $STATUS"
  sleep 1
done
# Should show 200 throughout the deploy, never going to 000 or 503

Cron Jobs and Scheduled Tasks

Coolify supports running one-off commands and scheduled tasks against any application container. Go to the application → Scheduled Tasks → Add. This is useful for database migrations, cache warming, and report generation:

# Example scheduled tasks to configure in Coolify:

# Run database migrations after each deploy:
# Scheduled tasks run on cron schedules, so for post-deploy hooks
# use the Post-deployment Command field in the app's settings instead:
# Command: node dist/migrate.js

# Daily cache clear:
# Name: Clear Cache
# Command: node dist/scripts/clear-cache.js
# Schedule: 0 3 * * *

# Weekly report generation:
# Name: Weekly Report
# Command: python scripts/generate_report.py --send-email
# Schedule: 0 8 * * 1  (Monday 8am)

# Or run one-off commands manually via Coolify's terminal:
# Application → Execute Command
docker exec -it YOUR_APP_CONTAINER node -e "console.log(process.env.NODE_ENV)"

Tips, Gotchas, and Troubleshooting

Build Failing with No Useful Error

# View detailed build logs:
# Application → Deployments → [Failed deployment] → Logs

# Or from the server:
docker logs coolify --tail 50 | grep -i error

# Common build failures:

# 1. Out of disk space during build:
df -h /var/lib/docker
# Fix: prune unused Docker artifacts:
docker system prune -af
# Warning: this removes all stopped containers and unused images.
# Avoid adding --volumes unless you are certain no database data
# lives in a currently-unused volume — it deletes those too

# 2. Out of memory during build (npm install, webpack):
free -h
# Fix: add swap space:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# 3. Nixpacks wrong language detected:
# Force the builder in app settings:
# Build Pack: Dockerfile
# Then add a minimal Dockerfile

App Deploys Green But Site Returns 502

# 502 = Traefik routed the request but got no valid response from
# the container — usually the app isn't listening where expected
# Most common cause: wrong internal port configured in Coolify

# Find what port the app is actually listening on:
docker exec YOUR_APP_CONTAINER sh -c 'ss -tlnp || netstat -tlnp'

# Or check the app logs for the bound port:
docker logs YOUR_APP_CONTAINER --tail 30 | grep -iE 'listen|port|started|running'

# Update the port in Coolify:
# Application → Configuration → Ports → Internal Port
# Set to match what your app actually binds to
# Save and redeploy

# Check Traefik routing is correct:
docker exec coolify-proxy traefik healthcheck  # requires Traefik's ping endpoint to be enabled
docker logs coolify-proxy --tail 20 | grep -i error

Webhook Deployments Not Triggering

# Test webhook manually with curl:
curl -X POST "https://coolify.yourdomain.com/webhooks/source/github/events/manual?token=YOUR_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"ref": "refs/heads/main", "repository": {"full_name": "org/repo"}}'

# Check GitHub webhook delivery history:
# GitHub → Repo → Settings → Webhooks → [webhook] → Recent Deliveries
# Look for failed deliveries and their response codes

# Common causes:
# 1. Coolify domain not accessible from GitHub's servers
#    Test: curl -I https://coolify.yourdomain.com from any external machine

# 2. Wrong branch — deployment branch in Coolify must match the pushed branch
#    Check: Application → Configuration → Git → Branch

# 3. Token mismatch — regenerate webhook token in Coolify and update GitHub

Updating Coolify

# Update via the dashboard (recommended):
# Settings → Update Coolify → Update

# Or via CLI on the server:
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
# The installer is idempotent — safely updates existing installs

# Verify update:
docker inspect coolify | jq -r '.[0].Config.Image'

# All your apps, databases, and configs are preserved
# Coolify data lives in Docker volumes, not the container
docker volume ls | grep coolify

Pro Tips

  • Use Coolify's Teams feature for access control — create teams and invite collaborators with specific server and resource access. Your frontend team doesn't need access to your database servers; your DevOps team doesn't need access to all application environment variables.
  • Pin Docker image versions for databases and infrastructure — use postgres:16.2-alpine not postgres:latest. Automatic version bumps on infrastructure are how production databases get corrupted.
  • Use Coolify's API for infrastructure as code — Coolify has a REST API that lets you create, configure, and deploy resources programmatically. Useful for recreating your entire stack from a script after a disaster or when spinning up a new environment.
  • Set up Coolify's notification webhook — go to Settings → Notifications and configure a webhook or Slack/Discord/Telegram notification for deploy failures. You should know about failed deploys before your users do.
  • Back up the Coolify server itself — Coolify stores its configuration in a local SQLite database. Take a weekly snapshot of your Coolify server or at minimum copy /data/coolify to external storage.
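The last tip can be sketched as a small cron-able script. Paths assume a default install with Coolify state under /data/coolify; the bucket name is a placeholder:

```shell
#!/bin/sh
# Archive Coolify's own state (SQLite DB, SSH keys, proxy config)
TS=$(date +%Y%m%d)
tar -czf "/root/coolify-backup-$TS.tar.gz" /data/coolify

# Ship the archive off the server (bucket is a placeholder):
aws s3 cp "/root/coolify-backup-$TS.tar.gz" s3://your-backup-bucket/coolify-config/

# Keep only the 7 most recent local archives:
ls -1t /root/coolify-backup-*.tar.gz | tail -n +8 | xargs -r rm
```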

Wrapping Up

Advanced Coolify usage turns your VPS into a genuine internal cloud platform — Git-triggered deployments, managed databases with automatic backups, multi-server infrastructure under one dashboard, health-checked zero-downtime deploys, and resource limits that keep everything running smoothly under load.

The progression is natural: start with the basics covered in our Coolify getting started guide, add Git CI/CD for your first app, wire up database backups, then expand to multiple servers as your workload grows. The pattern is consistent throughout — and once it's working, deploying a new service is genuinely a matter of minutes rather than an afternoon of Nginx and Docker configuration.


Need Your Entire Self-Hosted Platform Designed and Deployed?

Moving a team's full infrastructure to self-hosted — with proper CI/CD, database replication, multi-server failover, secrets management, and monitoring — is a significant architecture project. The sysbrix team designs and implements complete self-hosted platforms on Coolify, so your team inherits a production-ready system rather than building it piece by piece.

Talk to Us →