Windmill Self-Host Setup: Turn Scripts and Workflows Into Internal Tools That Actually Run
Every engineering team has a graveyard of scripts — Python files scattered across laptops, bash scripts living in Slack messages, cron jobs no one remembers setting up. Windmill is the answer: an open-source platform that turns scripts into schedulable, auditable, shareable jobs with a UI, API, and proper secret management. It supports Python, TypeScript, Go, Bash, and SQL out of the box. This guide walks you through a complete Windmill self-host setup from scratch — from Docker deployment to running your first workflow in production.
Prerequisites
- A Linux server or local machine (Ubuntu 20.04+ recommended)
- Docker Engine and Docker Compose v2 installed
- At least 2GB RAM and 10GB disk space
- A domain name (optional for local testing, required for team access)
- Ports 80 and 443 available, or a custom port like 8000
- Basic familiarity with Docker Compose and YAML
Check your environment before starting:
docker --version
docker compose version
free -h
df -h /
What Is Windmill and When Should You Use It?
Windmill is an open-source developer platform for building internal tools, scripts, and automations. It's what you get when you cross a script runner with a workflow engine and add a proper UI on top.
Core Capabilities
- Script editor with auto-generated UI — write a Python or TypeScript function, Windmill generates a form UI from the function signature automatically. Anyone can run it without touching code.
- Multi-step workflows — chain scripts together with branches, loops, error handling, and approval steps. Build DAGs visually or in code.
- Scheduler — cron-based scheduling for any script or workflow. Full run history, logs, and retry logic built in.
- Secret and variable management — store API keys and credentials as encrypted secrets. Scripts reference them by name — the actual values never appear in code.
- Webhooks — every script gets a webhook endpoint. Trigger runs from external systems with a single HTTP POST.
- Multi-language — Python, TypeScript/Deno, Go, Bash, SQL. Mix languages across workflow steps.
- Audit log — every run is logged with who triggered it, what inputs were used, and what the output was.
Windmill vs. n8n vs. Airflow
n8n excels at connecting third-party SaaS tools with a visual node editor. Airflow is built for data pipeline orchestration at scale. Windmill lives in a different lane: it's for teams that write actual code and need a platform to manage, share, and schedule that code without the overhead of a full data engineering setup. If your workflows involve custom Python logic, database queries, or API calls written by your own developers — Windmill is the right tool.
Deploying Windmill with Docker Compose
Clone the Official Compose Setup
Windmill maintains an official Docker Compose configuration. Clone it and use it as the base:
mkdir -p ~/windmill
cd ~/windmill
# Download the official Compose file
curl -o docker-compose.yml \
https://raw.githubusercontent.com/windmill-labs/windmill/main/docker-compose.yml
# Download the default Caddy config (Windmill uses Caddy as its reverse proxy)
curl -o Caddyfile \
https://raw.githubusercontent.com/windmill-labs/windmill/main/Caddyfile
Configure the Environment
Create a .env file with the critical settings. Windmill needs a base URL and a PostgreSQL connection at minimum:
# .env
# Your public-facing domain (or IP for local testing)
WINDMILL_BASE_URL=https://windmill.yourdomain.com
# Database credentials — used internally by the Compose stack
POSTGRES_DB=windmill
POSTGRES_USER=windmill
POSTGRES_PASSWORD=a-strong-postgres-password
# Number of workers — increase for parallel job execution
NUM_WORKERS=2
# Worker mode: default handles all job types
WORKER_GROUP=default
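The Compose stack reads these values at startup, so POSTGRES_PASSWORD should be genuinely random rather than a placeholder. One quick way to generate a strong value with the Python standard library:

```python
# Generate a strong random value for POSTGRES_PASSWORD (stdlib only).
import secrets

password = secrets.token_urlsafe(32)  # ~43 URL-safe characters
print(f"POSTGRES_PASSWORD={password}")
```

Paste the printed line into your .env file before bringing the stack up.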
Review and Start the Stack
The official Compose file includes Windmill server, workers, PostgreSQL, and Caddy. Update the Caddyfile to use your domain:
# Caddyfile — replace with your domain
windmill.yourdomain.com {
    reverse_proxy windmill_server:8000
}
For local testing without a domain, update it to listen on a port instead:
# Local testing Caddyfile
:8080 {
    reverse_proxy windmill_server:8000
}
Now bring the stack up:
docker compose up -d
# Watch the startup logs
docker compose logs -f windmill_server
# Verify all services are healthy
docker compose ps
Once windmill_server is running, open your browser at http://localhost:8080 (or your domain). You'll be prompted to create your first superadmin account and workspace. Do that, then you're in.
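If the browser shows nothing, it helps to confirm the server is answering over HTTP first. The sketch below polls a version endpoint under /api/version (the path the Windmill UI and CLI query; adjust if your deployment differs) using only the standard library:

```python
# Poll Windmill's version endpoint until the server responds.
# Assumes the /api/version path; change BASE if your setup differs.
import time
import urllib.request

def version_url(base: str) -> str:
    return f"{base.rstrip('/')}/api/version"

def wait_for_windmill(base: str, timeout: int = 60) -> str:
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(version_url(base), timeout=5) as resp:
                return resp.read().decode().strip()
        except OSError:
            time.sleep(2)  # server not up yet, retry
    raise TimeoutError(f"Windmill at {base} did not respond in {timeout}s")
```

Call `wait_for_windmill("http://localhost:8080")` right after `docker compose up -d` to block until the server is reachable.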
Writing Scripts and Building Workflows
Your First Script: Python
In the Windmill UI, go to + New → Script → Python. Scripts in Windmill are just functions — the arguments become the auto-generated UI form. Here's a real example: a script that sends a Slack notification:
# Windmill Python script: send_slack_notification.py
# Dependencies are declared inline using pip-style comments
# requirements:
# requests
import requests
def main(
    message: str,
    channel: str = "#general",
    webhook_url: str = "$var:SLACK_WEBHOOK_URL"  # References a Windmill secret
):
    """
    Send a message to a Slack channel via webhook.
    """
    payload = {
        "channel": channel,
        "text": message,
        "username": "Windmill Bot"
    }
    response = requests.post(webhook_url, json=payload)
    response.raise_for_status()
    return {"status": "sent", "channel": channel, "message": message}
That $var:SLACK_WEBHOOK_URL reference pulls the value from Windmill's secret store at runtime — the actual webhook URL never appears in the script code. Click Test, fill in the form Windmill generated from your function signature, and run it. The output, logs, and duration all appear immediately.
TypeScript Script Example
TypeScript scripts run on Deno in Windmill. Dependencies are imported directly in code with URL or npm: specifiers — no package.json or npm install needed:
// Windmill TypeScript script: fetch_github_pr_count.ts
import { Octokit } from "npm:@octokit/rest@20";
export async function main(
  owner: string,
  repo: string,
  state: "open" | "closed" | "all" = "open",
  github_token: string = "$var:GITHUB_TOKEN"
): Promise<{ count: number; repo: string; state: string }> {
  const octokit = new Octokit({ auth: github_token });
  // Paginate through every page of results to get an exact count
  const prs = await octokit.paginate(octokit.pulls.list, {
    owner,
    repo,
    state
  });
  return { count: prs.length, repo: `${owner}/${repo}`, state };
}
Building a Multi-Step Workflow
Workflows in Windmill chain scripts together, passing outputs as inputs to the next step. Go to + New → Flow. Each step is an existing script or inline code. The flow editor lets you:
- Reference outputs from previous steps using results.step_name
- Add branches for conditional logic (if/else at the workflow level)
- Add for loops to iterate over arrays in parallel or sequentially
- Add approval steps that pause execution and wait for a human to approve before continuing
- Add error handlers that run a specific script when any step fails
A typical workflow might look like: fetch data from an API → transform it with a Python script → write to a database → send a Slack summary. Each step is independently testable and reusable across other workflows.
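The chaining model is easy to picture outside Windmill: each step is a function, and results.step_name is just a named return value that later steps can read. A toy Python sketch with made-up step names (illustrative only, not Windmill's actual engine):

```python
# Illustrative sketch of Windmill's step-chaining model: each step returns
# a dict, and later steps read earlier outputs by step name. The step names
# (fetch_data, transform, summarize) are hypothetical.
def fetch_data():
    return {"rows": [1, 2, 3]}

def transform(rows):
    return {"total": sum(rows)}

def summarize(total):
    return {"message": f"Processed total: {total}"}

# The flow engine wires outputs to inputs; this dict mirrors results.step_name.
results = {}
results["fetch_data"] = fetch_data()
results["transform"] = transform(results["fetch_data"]["rows"])
results["summarize"] = summarize(results["transform"]["total"])
```

In a real flow, Windmill handles this wiring for you, plus retries, logs, and per-step run history.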
Secrets, Variables, and Resource Management
Storing Secrets Securely
Go to Variables → New Variable and toggle Secret on. Secrets are encrypted at rest and never returned in plaintext through the API after creation. Reference them in scripts with $var:VARIABLE_NAME.
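When you run the same scripts locally, outside Windmill, the $var: convention is easy to shim. A hypothetical helper that resolves $var: references from environment variables instead of Windmill's store:

```python
# Hypothetical local shim for Windmill's "$var:NAME" reference convention.
# Windmill itself resolves these from its encrypted store; here we fall back
# to environment variables for local testing.
import os

PREFIX = "$var:"

def resolve_default(value):
    """Return value unchanged unless it is a $var: reference,
    in which case look up the name in the environment."""
    if isinstance(value, str) and value.startswith(PREFIX):
        name = value[len(PREFIX):]
        return os.environ.get(name, "")
    return value
```

With this, `main(message, webhook_url=resolve_default("$var:SLACK_WEBHOOK_URL"))` behaves the same locally as it does inside Windmill, provided the variable is exported in your shell.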
You can also manage variables via the Windmill CLI, which is useful for seeding secrets in CI/CD:
# Install the Windmill CLI
npm install -g windmill-cli
# Note: "pip install wmill" installs the Python client library, not the CLI
# Authenticate
wmill workspace add my-workspace https://windmill.yourdomain.com --token YOUR_TOKEN
wmill workspace switch my-workspace
# Push a secret variable
wmill variable push \
--path u/admin/SLACK_WEBHOOK_URL \
--value "https://hooks.slack.com/services/..." \
--is-secret
Resource Types for Database Connections
Windmill has a concept of Resources — typed connection objects for databases, cloud providers, and APIs. Instead of storing a raw PostgreSQL connection string as a secret, you create a PostgreSQL resource with the proper schema. Scripts reference it as a structured object and Windmill passes the connection details at runtime.
In the UI: Resources → New Resource → PostgreSQL. Fill in host, port, database, user, and password. Then in a Python script:
# requirements:
# psycopg2-binary
import psycopg2
from typing import TypedDict

class postgresql(TypedDict):
    host: str
    port: int
    user: str
    password: str
    dbname: str
    sslmode: str

def main(
    db: postgresql,  # Windmill injects the resource
    query: str = "SELECT COUNT(*) FROM users WHERE created_at > NOW() - INTERVAL '1 day'"
):
    # Drop empty fields so psycopg2 falls back to its own defaults
    conn = psycopg2.connect(**{k: v for k, v in db.items() if v})
    cur = conn.cursor()
    cur.execute(query)
    result = cur.fetchall()
    conn.close()
    return {"rows": result}
Scheduling, Webhooks, and the CLI
Scheduling a Script or Workflow
Open any script or flow and click Schedule. Enter a cron expression and the default input values. The schedule is saved, the next run time is shown, and all historical runs are logged with their inputs and outputs.
# Example cron expressions
0 9 * * 1-5 # Every weekday at 9am
*/15 * * * * # Every 15 minutes
0 0 * * * # Daily at midnight
0 6 * * 1 # Every Monday at 6am
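If you want to sanity-check an expression before pasting it into a schedule, a rough 5-field validator is easy to write. This is an illustrative sketch, not Windmill's actual parser (it ignores month/day names like MON and some range edge cases):

```python
# Rough sanity-checker for 5-field cron expressions (minute, hour,
# day-of-month, month, day-of-week). Illustrative only; Windmill
# validates schedules itself with its own parser.
def valid_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        return False
    ranges = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]
    for field, (lo, hi) in zip(fields, ranges):
        for part in field.split(","):
            part = part.split("/")[0]  # strip step: */15 -> *
            if part == "*":
                continue
            bounds = part.split("-")
            if not all(p.isdigit() and lo <= int(p) <= hi for p in bounds):
                return False
    return True
```

For example, `valid_cron("0 9 * * 1-5")` accepts the weekday-morning schedule above, while a typo like `"61 * * * *"` is rejected.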
Triggering Scripts via Webhook
Every script and workflow in Windmill has a webhook URL. Find it under Triggers → Webhook inside the script editor. Call it from any external system:
# Trigger a script run via webhook
curl -X POST \
https://windmill.yourdomain.com/api/w/my-workspace/jobs/run/p/u/admin/send_slack_notification \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_WINDMILL_TOKEN' \
-d '{
"message": "Deploy completed successfully",
"channel": "#deployments"
}'
# The response includes a job ID you can poll for results
# GET /api/w/my-workspace/jobs/completed/get/{job_id}
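The same trigger-and-poll pattern works from a Python script using only the standard library. The URL shapes mirror the endpoints above; treat the polling details as a sketch and adjust to your Windmill version:

```python
# Trigger a Windmill script via its run endpoint, then poll the
# completed-job endpoint. URL shapes follow the API paths shown above;
# the run endpoint returns the job ID as plain text.
import json
import time
import urllib.error
import urllib.request

def run_url(base, workspace, script_path):
    return f"{base}/api/w/{workspace}/jobs/run/p/{script_path}"

def completed_url(base, workspace, job_id):
    return f"{base}/api/w/{workspace}/jobs/completed/get/{job_id}"

def trigger_and_wait(base, workspace, script_path, token, payload, timeout=60):
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    req = urllib.request.Request(run_url(base, workspace, script_path),
                                 data=json.dumps(payload).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        job_id = resp.read().decode().strip()
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            poll = urllib.request.Request(
                completed_url(base, workspace, job_id), headers=headers)
            with urllib.request.urlopen(poll) as resp:
                return json.loads(resp.read())
        except urllib.error.HTTPError:
            time.sleep(2)  # job not finished yet
    raise TimeoutError(f"job {job_id} did not complete in {timeout}s")
```

This is handy in CI pipelines: trigger a deploy notification or smoke-test script and fail the build if the job errors or times out.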
Using the Windmill CLI for GitOps
Scripts and flows can be synced to a Git repository, enabling proper version control and CI/CD for your automations:
# Pull all scripts/flows from Windmill to local files
wmill sync pull
# Push local changes back to Windmill
wmill sync push
# Run a specific script from the CLI
wmill script run u/admin/send_slack_notification \
--data '{"message": "test from CLI", "channel": "#test"}'
# Watch logs for a specific job
wmill job logs JOB_ID
Tips, Gotchas, and Troubleshooting
Worker Not Picking Up Jobs
Jobs sit in the queue but never execute — the worker isn't connecting. Check worker logs and confirm it's reaching the database:
docker compose logs windmill_worker --tail 50
# Check all services are healthy
docker compose ps
# Restart the worker if it's stuck
docker compose restart windmill_worker
The most common cause is a database connection issue — double-check your DATABASE_URL env var matches your Postgres container name and credentials exactly.
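Typos in DATABASE_URL are easy to miss. A small stdlib-only checker that pulls the URL apart and flags obvious problems (an illustrative helper, not part of Windmill):

```python
# Quick sanity-check for a DATABASE_URL like
# postgres://windmill:password@db:5432/windmill (stdlib only).
from urllib.parse import urlparse

def check_database_url(url):
    parsed = urlparse(url)
    problems = []
    if parsed.scheme not in ("postgres", "postgresql"):
        problems.append(f"unexpected scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        problems.append("missing host")
    if not parsed.path.lstrip("/"):
        problems.append("missing database name")
    return {"host": parsed.hostname, "port": parsed.port or 5432,
            "db": parsed.path.lstrip("/"), "problems": problems}
```

Make sure the reported host matches your Postgres service name in docker-compose.yml (typically `db`), since containers resolve each other by service name.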
Script Dependencies Not Installing
For Python scripts, Windmill installs dependencies declared in the # requirements: block at the top of the file. If a dependency fails to install, the job fails at the preparation stage — look for pip install errors in the job logs rather than the script output. Common fixes:
- Pin the version: # psycopg2-binary==2.9.9
- Use the binary variant for compiled packages: psycopg2-binary, not psycopg2
- Check that the worker container has internet access to reach PyPI
Updating Windmill
cd ~/windmill
# Pull latest images
docker compose pull
# Restart with new images — migrations run automatically
docker compose up -d
# Verify server came back up
docker compose logs windmill_server --tail 20
PostgreSQL data, scripts, workflows, secrets, and schedules all survive updates — they live in the database volume, not the container.
Scaling Workers for Parallel Execution
If jobs are queuing up because the single worker is saturated, scale horizontally. Add more worker replicas in Compose:
# Scale to 4 workers without editing the Compose file
docker compose up -d --scale windmill_worker=4
# Or set replicas in docker-compose.yml:
windmill_worker:
  image: ghcr.io/windmill-labs/windmill:main
  deploy:
    replicas: 4
Pro Tips
- Use the script preview panel aggressively — the inline test runner in the editor lets you iterate on scripts with real inputs before saving. Faster than writing unit tests for most internal tools.
- Build an app UI on top of your scripts — Windmill's App Builder lets you create drag-and-drop internal tools that wrap your scripts with buttons, tables, and forms. No frontend code required.
- Keep workflows flat where possible — deeply nested workflows with many branches are hard to debug. Prefer multiple simple flows triggered in sequence over one mega-workflow that does everything.
- Use approval steps for destructive operations — any workflow that deletes data or charges a customer should have a human approval step. Windmill pauses, sends a notification, and waits for someone to click approve.
- Commit your workspace to Git — use wmill sync pull in a CI job to export your scripts on every change. You get version history, diffs, and rollback for free.
Wrapping Up
A solid Windmill self-host setup gives your team a place where scripts actually live — versioned, scheduled, auditable, and runnable by anyone on the team without SSH access or local environment setup. It's the difference between "run this Python file on your laptop" and "here's a form, fill it in and click Run."
Start with the Docker Compose deployment, migrate your most frequently-run internal scripts into Windmill first, and add schedules where you're currently using cron. Once the habit forms, the workflow builder and app builder open up a whole layer of internal tooling that would otherwise take weeks to build from scratch.
Need Windmill Deployed and Integrated Into Your Stack?
If you're rolling out Windmill for a team — with SSO, Git sync, worker autoscaling, or integration into your existing CI/CD and data infrastructure — the sysbrix team can design and implement it end to end. We get your internal tooling platform production-ready so your team can focus on what runs on it, not what keeps it running.
Talk to Us →