OpenClaw Docker VPS Deploy: Run Your AI Personal Assistant in the Cloud 24/7
Running OpenClaw on your laptop is fine for testing. Running it on a VPS is where it gets genuinely useful — your AI assistant is always on, always reachable from any device, responds to messages while you sleep, and runs scheduled tasks without needing your computer to be open. This guide covers a complete OpenClaw Docker VPS deploy: from a blank server to a fully operational AI assistant accessible from Telegram, Signal, Discord, or the web — running on infrastructure you own.
If you're new to OpenClaw and haven't done a basic local setup yet, start with our guide on how to set up OpenClaw as an open-source AI personal assistant first. This guide picks up where that one leaves off and focuses specifically on VPS deployment.
Prerequisites
- A Linux VPS (Ubuntu 22.04 LTS recommended) with at least 1 vCPU and 1GB RAM — 2GB+ recommended if running multiple agents
- A public IP address and SSH root or sudo access
- Docker Engine and Docker Compose v2 installed
- A domain name (optional but strongly recommended for HTTPS and webhook endpoints)
- An API key for at least one LLM provider (OpenAI, Anthropic, etc.) or a local Ollama endpoint
- Node.js 20+ installed on the VPS (required if installing OpenClaw via npm rather than the Docker image)
- Ports 80, 443, and 3000 open on your firewall
SSH into your VPS and verify the environment:
ssh root@your-server-ip
# Check OS
lsb_release -a
# Check available resources
free -h
nproc
df -h /
# Verify Docker is installed and running
docker --version
docker compose version
sudo systemctl status docker
# Check firewall
sudo ufw status
What Is OpenClaw and What Changes on a VPS?
OpenClaw is an open-source AI assistant platform that runs locally and connects to messaging channels, executes skills, manages memory, and runs scheduled tasks via cron jobs. On a local machine it works great — but it only runs when your computer is on and available.
What You Gain on a VPS
- Always-on availability — your assistant responds to messages at 3am without your laptop being open
- Reliable cron jobs — scheduled reminders, reports, and automations fire on time, every time
- Webhook reachability — messaging platforms like Telegram and Discord can POST webhook events to your VPS's public IP; this doesn't work on a local machine behind NAT
- Persistent memory — conversation history and memory files live on a stable server, not a machine that gets rebooted or goes to sleep
- Multi-channel access — reach your assistant from any device, anywhere, through whatever messaging channel you've configured
Architecture on a VPS
The VPS deployment runs OpenClaw inside a Docker container with persistent volumes for configuration, memory, and agent workspaces. A reverse proxy (Nginx) sits in front to handle HTTPS and route webhook traffic. The OpenClaw Gateway daemon manages the runtime, and individual agents run as isolated processes within the container.
Installing OpenClaw on the VPS
Step 1: Install Node.js and OpenClaw
OpenClaw is distributed as an npm package. Install Node.js 20 via NodeSource and then install OpenClaw globally:
# Install Node.js 20 via NodeSource
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
# Verify
node --version # Should be v20.x.x
npm --version
# Install OpenClaw globally
npm install -g openclaw
# Verify installation
openclaw --version
openclaw help
Step 2: Initialize OpenClaw
Run the OpenClaw setup wizard to create your initial configuration:
# Initialize OpenClaw — creates ~/.openclaw directory with default config
openclaw init
# The init wizard will ask for:
# - Your LLM provider and API key
# - Your gateway public URL (use your domain: https://claw.yourdomain.com)
# - Initial agent configuration
# Verify the config directory was created
ls ~/.openclaw/
# Should show: config.json, agents/, shared/
Step 3: Configure Your LLM Provider
Edit the main OpenClaw config to set your LLM provider. The config lives at ~/.openclaw/config.json:
# View current config
cat ~/.openclaw/config.json
# Key fields to configure:
# {
# "gateway": {
# "bind": "0.0.0.0:3000",
# "remote": {
# "url": "https://claw.yourdomain.com"
# }
# },
# "model": {
# "provider": "openai",
# "name": "gpt-4o",
# "apiKey": "sk-your-openai-key"
# }
# }
# Or configure via CLI:
openclaw config set model.apiKey sk-your-api-key
openclaw config set gateway.remote.url https://claw.yourdomain.com
Running OpenClaw as a Persistent Service
Option 1: Docker Compose (Recommended)
Containerizing OpenClaw makes it portable, restartable, and easy to update. Create a dedicated directory for the deployment:
mkdir -p ~/openclaw-deploy
cd ~/openclaw-deploy
# docker-compose.yml
# (the top-level `version:` key is obsolete under Compose v2 and omitted here)
services:
  openclaw:
    image: node:20-alpine
    container_name: openclaw
    restart: unless-stopped
    working_dir: /app
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - OPENCLAW_GATEWAY_BIND=0.0.0.0:3000
      - OPENCLAW_GATEWAY_URL=https://claw.yourdomain.com
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - TZ=UTC
    volumes:
      # Persist OpenClaw config, memory, and agent workspaces
      - openclaw_config:/root/.openclaw
      # Optional: mount host Docker socket for container-aware skills
      - /var/run/docker.sock:/var/run/docker.sock:ro
    # Note: this reinstalls OpenClaw on every container start; pin a version
    # (openclaw@<version>) if you want reproducible restarts
    command: sh -c "npm install -g openclaw && openclaw gateway start"

volumes:
  openclaw_config:
Create the .env file with your API keys:
# .env
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
# Never commit this file to version control
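Because the container just runs a shell command, Compose won't notice if the gateway process inside it hangs without the shell exiting. A healthcheck sketch you can add under the openclaw service; the /health path is an assumption, so substitute whatever status endpoint your OpenClaw version actually exposes:

```yaml
# Add under services.openclaw in docker-compose.yml.
# /health is an assumed endpoint; adjust to what your gateway actually serves.
healthcheck:
  test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 60s   # give npm install + gateway boot time to finish
```

The node:20-alpine image ships BusyBox wget, so no extra packages are needed inside the container.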
Option 2: systemd Service (Direct Install)
If you prefer running OpenClaw directly on the host without Docker, set it up as a systemd service so it starts automatically and restarts on failure:
# Create systemd service file
sudo tee /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw AI Assistant Gateway
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu
ExecStart=/usr/bin/openclaw gateway start
Restart=always
RestartSec=10
Environment=NODE_ENV=production
EnvironmentFile=/home/ubuntu/.openclaw/.env
StandardOutput=journal
StandardError=journal
SyslogIdentifier=openclaw
[Install]
WantedBy=multi-user.target
EOF
# Create the env file for sensitive values
cat > ~/.openclaw/.env << 'EOF'
OPENAI_API_KEY=sk-your-key-here
EOF
chmod 600 ~/.openclaw/.env
# Enable and start the service
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
# Check status
sudo systemctl status openclaw
journalctl -u openclaw -f
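Optionally, standard systemd sandboxing options can limit what the service can touch if a skill misbehaves. A hedged sketch as a drop-in file; the paths assume the ubuntu-user layout above, and you should widen ReadWritePaths if your agents write elsewhere:

```ini
# /etc/systemd/system/openclaw.service.d/hardening.conf
# Apply with: sudo systemctl daemon-reload && sudo systemctl restart openclaw
[Service]
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=full
ReadWritePaths=/home/ubuntu/.openclaw
```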
Configuring HTTPS and Webhook Access
For messaging platforms to send events to OpenClaw (Telegram webhooks, Discord interactions, etc.), your VPS needs a publicly reachable HTTPS endpoint. Set up Nginx as a reverse proxy:
DNS Setup
# Create an A record pointing your subdomain at the VPS:
# Type: A
# Name: claw (→ claw.yourdomain.com)
# Value: YOUR_VPS_IP
# TTL: 300
# Verify DNS propagation:
dig +short claw.yourdomain.com
# Must return your VPS IP before proceeding
Nginx Reverse Proxy Config
sudo apt install nginx certbot python3-certbot-nginx -y
# Create the Nginx config
sudo tee /etc/nginx/sites-available/openclaw << 'EOF'
server {
    listen 80;
    server_name claw.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name claw.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/claw.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/claw.yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300s;
        proxy_buffering off;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/openclaw /etc/nginx/sites-enabled/
# The certificate paths referenced in the config don't exist yet, so nginx -t
# would fail. Obtain the certificate first in standalone mode (this briefly
# stops Nginx to free port 80):
sudo systemctl stop nginx
sudo certbot certonly --standalone -d claw.yourdomain.com
sudo systemctl start nginx
sudo nginx -t
sudo systemctl reload nginx
# Verify HTTPS is working
curl -I https://claw.yourdomain.com
# Should return 200 OK
Updating OpenClaw's Gateway URL
Tell OpenClaw its public URL so it generates correct webhook URLs and node pairing codes:
# If running via systemd direct install:
openclaw config set gateway.remote.url https://claw.yourdomain.com
openclaw gateway restart
# If running via Docker, update the environment variable:
# OPENCLAW_GATEWAY_URL=https://claw.yourdomain.com
# Then restart the container:
docker compose up -d --force-recreate openclaw
# Verify the gateway is up and reporting correctly:
openclaw gateway status
# Should show:
# Gateway: running
# Public URL: https://claw.yourdomain.com
# Agents: [list of configured agents]
Configuring Agents and Connecting Channels
Adding Your First Agent
Agents in OpenClaw are isolated AI assistants with their own persona, memory, skills, and channel connections. Create an agent for your primary use case:
# List existing agents
openclaw agents list
# Create a new agent
openclaw agents create --name my-assistant
# The agent gets its own workspace directory:
# ~/.openclaw/agents/my-assistant/workspace/
# Configure the agent's identity and model
openclaw agents config my-assistant set model.name gpt-4o
openclaw agents config my-assistant set model.provider openai
# Start the agent
openclaw agents start my-assistant
# Check agent status
openclaw agents status my-assistant
Connecting Telegram
Telegram is the most reliable channel for always-on VPS deployments because it uses webhooks — messages are pushed to your VPS rather than polled. Set up a Telegram bot and connect it:
# 1. Create a bot via @BotFather on Telegram — get the bot token
# 2. Get your Telegram user ID via @userinfobot
# Configure the Telegram plugin for your agent
openclaw agents config my-assistant set plugins.telegram.token YOUR_BOT_TOKEN
openclaw agents config my-assistant set plugins.telegram.allowedUsers YOUR_TELEGRAM_USER_ID
# Register the webhook (requires your HTTPS URL to be live)
openclaw agents webhook register my-assistant telegram
# Verify webhook is registered with Telegram:
curl https://api.telegram.org/botYOUR_BOT_TOKEN/getWebhookInfo | jq .result
# Should show your webhook URL: https://claw.yourdomain.com/...
Connecting the Mobile App via Node Pairing
OpenClaw's companion mobile app connects to your VPS via the node pairing flow. On the VPS, generate a pairing code:
# Generate a QR code for mobile app pairing
openclaw node pair
# This outputs a QR code and a setup code
# Scan the QR from the OpenClaw mobile app
# Or enter the setup code manually in the app
# Verify node is connected after pairing:
openclaw node status
# The gateway URL must be HTTPS and publicly reachable
# for the mobile app to connect from outside your network
For detailed troubleshooting on node pairing issues, see our guide on setting up OpenClaw as an AI personal assistant which covers the initial pairing flow in depth.
Tips, Gotchas, and Troubleshooting
Gateway Won't Start or Crashes on Boot
# Check gateway logs
openclaw gateway logs
# If using systemd:
journalctl -u openclaw -n 100 --no-pager
# If using Docker:
docker logs openclaw --tail 50
# Common causes:
# 1. Port 3000 already in use:
sudo ss -tlnp | grep 3000
# Kill the conflicting process or change OpenClaw's bind port
# 2. Missing or invalid API key:
openclaw config get model.apiKey
# 3. Config file permission issue:
ls -la ~/.openclaw/config.json
chmod 600 ~/.openclaw/config.json
Telegram Messages Not Arriving
If your Telegram bot isn't responding on the VPS but worked locally, the webhook registration is the first thing to check. Telegram requires a valid HTTPS URL to deliver messages:
# Check what webhook URL Telegram has registered
curl https://api.telegram.org/botYOUR_BOT_TOKEN/getWebhookInfo | jq .
# Key fields to check:
# url: should be your HTTPS VPS URL
# has_custom_certificate: false (Let's Encrypt certs are trusted)
# last_error_message: will show why delivery is failing
# If the URL is wrong or missing, re-register:
openclaw agents webhook register my-assistant telegram
# If Telegram reports certificate errors:
curl -I https://claw.yourdomain.com
# Cert must be valid — check Nginx and Certbot config
Mobile App Can't Connect to VPS Node
The most common cause is the gateway URL not being set correctly or the port not being publicly accessible:
# Verify the gateway URL is set to your public HTTPS domain
openclaw config get gateway.remote.url
# Must be: https://claw.yourdomain.com (not localhost)
# Test that the gateway is reachable from outside:
curl https://claw.yourdomain.com/health
# Should return a 200 response
# Check firewall rules
sudo ufw status
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Regenerate pairing code if the previous one expired:
openclaw node pair
High Memory Usage with Multiple Agents
Each agent loads its own LLM context and skill set. If you're running multiple agents on a small VPS, monitor memory usage and consider using lighter models for less critical agents:
# Monitor overall memory
free -h
# Check per-process memory for OpenClaw (bracket trick excludes the grep itself)
ps aux | grep '[o]penclaw' | awk '{print $2, $4, $11}'
# Monitor Docker container resources
docker stats openclaw --no-stream
# If memory is tight, use a lighter model for non-critical agents:
openclaw agents config secondary-agent set model.name gpt-4o-mini
# Or set a memory limit on the Docker container:
# Add to docker-compose.yml:
# deploy:
# resources:
# limits:
# memory: 512M
Updating OpenClaw
# Update the npm package
npm update -g openclaw
# Restart the gateway to pick up the new version
openclaw gateway restart
# Or if using systemd:
sudo systemctl restart openclaw
# Or if using Docker — rebuild the container:
docker compose pull
docker compose up -d --force-recreate openclaw
# Verify the new version
openclaw --version
Pro Tips
- Use a lightweight VPS for OpenClaw alone — OpenClaw itself is not CPU or memory intensive; the LLM API calls happen on the provider's infrastructure. A $6/month VPS with 1GB RAM is enough for one or two agents.
- Store your OpenClaw config in Git — the ~/.openclaw/agents/ directory contains your agent workspace files (AGENTS.md, SOUL.md, MEMORY.md, etc.). Commit these to a private repo so you can restore your setup if the VPS is wiped.
- Set up automated SSL renewal — Certbot installs a renewal timer automatically, but verify it: sudo certbot renew --dry-run. Expired certs break Telegram webhooks and mobile app connectivity silently.
- Use cron jobs for scheduled tasks — OpenClaw's cron system works best on a VPS where it runs continuously. Schedule daily summaries, reminders, and report generation that would be unreliable on a laptop.
- Back up the ~/.openclaw directory regularly — it contains your memory files, agent configurations, and conversation history. A weekly tar czf to an S3 bucket or MinIO instance is all you need.
Wrapping Up
A complete OpenClaw Docker VPS deploy transforms OpenClaw from a local tool you occasionally open into a genuine always-on AI assistant that's available from every device and every messaging channel you use. The setup takes an afternoon: provision a VPS, install OpenClaw, configure your agent and LLM provider, put Nginx and HTTPS in front of it, and connect your messaging channels. From that point, your assistant is simply there — responding to Telegram messages, firing scheduled reminders, and running automations without you needing to think about infrastructure.
Start with a single agent and one channel. Once the pattern is working, add more agents for different purposes — a work assistant, a personal assistant, a content agent — each with their own persona, memory, and connected channels, all running on the same VPS.
Need OpenClaw Deployed and Configured for Your Team?
Deploying OpenClaw for a single user is one thing — setting it up for a team with multiple agents, custom skills, SSO, and integration into your existing tooling is another. The sysbrix team can design and deploy a production OpenClaw setup tailored to exactly how your team works.
📚 More OpenClaw Guides on Sysbrix