OpenClaw Docker VPS Deploy: Advanced Cron Automation, Discord Integration, Skill Chaining, and Production Hardening
Three guides into the OpenClaw VPS series, you have a solid foundation: initial setup and local configuration, always-on VPS deployment with Telegram, and multi-agent architecture with custom skills. This fourth guide covers the operational patterns that make an AI assistant genuinely valuable over time: sophisticated cron jobs that run proactive workflows without manual triggering, Discord integration for team-shared assistants, multi-skill pipelines where one skill's output feeds the next skill's input, and the production hardening (monitoring, resource limits, automated restarts, and security) that keeps it all running reliably without constant babysitting.
Prerequisites
- A running OpenClaw instance on a VPS — see our VPS deployment guide
- At least one agent configured and active — see our multi-agent guide
- OpenClaw running via systemd or Docker on the VPS
- SSH access to the VPS for configuration changes
- A Discord server and developer account for Discord integration
- At least 2GB RAM — skill chaining and multiple agents benefit from the headroom
Verify your current OpenClaw state before proceeding:
ssh root@your-vps-ip
# Check gateway status and all agents:
openclaw gateway status
openclaw agents list
# Verify HTTPS endpoint is healthy:
curl -I https://claw.yourdomain.com/health
# Check current cron jobs:
crontab -l 2>/dev/null || echo "No crontab configured yet"
# Check agent memory files exist:
ls ~/.openclaw/agents/*/workspace/MEMORY.md 2>/dev/null | head -5
# Check resource usage:
free -h && ps aux | grep openclaw | grep -v grep | awk '{print $1, $3, $4, $11}'
Advanced Cron Automation: Proactive Agent Workflows
The real power of an always-on AI assistant isn't responding to messages — it's doing useful work without being asked. OpenClaw's cron system lets you schedule agents to run workflows, generate reports, check systems, send summaries, and update their own memory based on recurring tasks. This section covers patterns that go far beyond simple scheduled messages.
Complex Cron Workflow: Daily Intelligence Briefing
# The HEARTBEAT.md file in each agent's workspace defines
# what the agent does on each scheduled heartbeat
# This is the most powerful way to create proactive automation
cat > ~/.openclaw/agents/my-assistant/workspace/HEARTBEAT.md << 'EOF'
# HEARTBEAT.md
# Runs on each heartbeat (controlled by cron schedule)
## Morning Briefing (07:00 UTC)
- Check if it's between 07:00-07:30 UTC
- If yes: fetch 3 most important news items in tech/AI
- Summarize key projects in progress from MEMORY.md
- List today's scheduled tasks and reminders
- Send the briefing to Telegram as a formatted message
- Do NOT do this if already done today (check MEMORY.md for last_briefing_date)
## System Health Check (every 6 hours)
- Check if VPS disk usage is above 80%
- Check if any Docker containers are in unhealthy state
- Check if key services (Gitea, Portainer) are responding
- If any issues found: alert via Telegram immediately
- Log check result to MEMORY.md under ## Health Log
## Weekly Summary (Sundays at 18:00 UTC)
- Review MEMORY.md for decisions made this week
- Summarize key projects and their status
- List any pending items that were deferred
- Send the weekly summary to Telegram
- Update ## Weekly Summaries section in MEMORY.md
EOF
# Configure the heartbeat cron schedule:
# OpenClaw Gateway heartbeats are controlled by the cron config
# Set to run every 30 minutes:
crontab -e
# Add:
# */30 * * * * openclaw gateway heartbeat 2>/dev/null
# Or use a more sophisticated schedule with different tasks at different times:
cat > /opt/scripts/openclaw-heartbeat.sh << 'EOF'
#!/bin/bash
# Smart heartbeat that passes time context to agents
HOUR=$(date -u +%H)
MINUTE=$(date -u +%M)
DAY_OF_WEEK=$(date -u +%u) # 1=Monday, 7=Sunday
# Set context so agent knows what time-based tasks to run:
export OPENCLAW_CONTEXT="hour=${HOUR},minute=${MINUTE},day_of_week=${DAY_OF_WEEK}"
openclaw gateway heartbeat
EOF
chmod +x /opt/scripts/openclaw-heartbeat.sh
# Schedule the smart heartbeat:
crontab -e
# Add:
# */30 * * * * /opt/scripts/openclaw-heartbeat.sh >> /var/log/openclaw-heartbeat.log 2>&1
Cron-Driven Data Pipeline Agent
# Create a specialized data agent that runs scheduled data tasks
# This agent has no messaging channel — it runs purely on cron
openclaw agents create --name data-agent
# Configure as a background automation agent:
cat > ~/.openclaw/agents/data-agent/workspace/AGENTS.md << 'EOF'
# AGENTS.md
## Mission
Run scheduled data collection, processing, and reporting tasks
automatically. This agent operates autonomously — no messaging channel.
## Tools
- Web search for fetching data
- File write for saving reports to workspace
- Scheduled database queries via SQL skill
## Startup instructions
- On heartbeat, check /root/.openclaw/shared/tasks.json
  for any tasks assigned to data-agent with status queued
- When starting a task, update its status to in_progress
- When complete, write output to workspace and update status to done
- If error, update status to failed with error details
## Workflow for scheduled reports
1. Fetch data from configured sources
2. Process and analyze with Python script skill
3. Write formatted report to workspace/reports/YYYY-MM-DD-report.md
4. Update MEMORY.md with report summary and key metrics
5. Send notification to my-assistant agent that report is ready
EOF
# Create the shared tasks file for inter-agent communication:
mkdir -p ~/.openclaw/shared
cat > ~/.openclaw/shared/tasks.json << 'EOF'
{
  "tasks": [
    {
      "id": "weekly-metrics",
      "assigned_to": "data-agent",
      "schedule": "0 9 * * 1",
      "status": "queued",
      "description": "Compile weekly metrics report from database",
      "output_path": "workspace/reports/"
    }
  ]
}
EOF
# Start the data agent:
openclaw agents start data-agent
openclaw agents status data-agent
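The startup instructions above describe a queued, in_progress, done lifecycle against the shared tasks.json. The claim step can be sketched as a small helper; the file layout matches the tasks.json created above, but the function name and calling convention are my own:

```python
import json
from pathlib import Path

def claim_queued(tasks_file: Path, agent: str) -> list[dict]:
    """Claim all queued tasks for `agent`: flip their status
    to in_progress and persist the queue back to disk."""
    data = json.loads(tasks_file.read_text())
    claimed = []
    for task in data["tasks"]:
        if task["assigned_to"] == agent and task["status"] == "queued":
            task["status"] = "in_progress"
            claimed.append(task)
    if claimed:
        tasks_file.write_text(json.dumps(data, indent=2))
    return claimed
```

A heartbeat wrapper could call claim_queued(Path.home() / ".openclaw/shared/tasks.json", "data-agent") and hand the claimed task descriptions to the agent as context. Note this read-modify-write is not safe against two agents writing simultaneously; a lockfile (or per-agent queue files) is the usual fix if that matters.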
Discord Integration: Team-Shared AI Assistant
Telegram is excellent for personal assistants. Discord is better for team-shared access where multiple developers need to interact with the same agent, share conversation context, and benefit from each other's queries. OpenClaw's Discord plugin routes messages from specific channels or DMs to configured agents.
Creating a Discord Bot and Connecting to OpenClaw
# Step 1: Create a Discord Application and Bot
# discord.com/developers/applications → New Application
# Application name: Team AI Assistant
# Navigate to: Bot → Add Bot
# Bot settings:
# - Presence Intent: ON
# - Server Members Intent: ON
# - Message Content Intent: ON (REQUIRED — message content requires this)
# Copy the Bot Token
# Step 2: Invite the bot to your Discord server
# OAuth2 → URL Generator
# Scopes: bot, applications.commands
# Permissions: Send Messages, Read Message History, Add Reactions, Embed Links
# Use the generated URL to invite the bot
# Step 3: Get your Discord server and channel IDs
# Enable Developer Mode in Discord: Settings → Advanced → Developer Mode
# Right-click your server → Copy Server ID
# Right-click a channel → Copy Channel ID
# Step 4: Configure the Discord plugin for your agent
openclaw agents config coding-agent set plugins.discord.token YOUR_DISCORD_BOT_TOKEN
openclaw agents config coding-agent set plugins.discord.allowedChannels "CHANNEL_ID_1,CHANNEL_ID_2"
openclaw agents config coding-agent set plugins.discord.allowedUsers "USER_ID_1,USER_ID_2,USER_ID_3"
openclaw agents config coding-agent set plugins.discord.commandPrefix "!ai"
# Verify Discord config is set:
openclaw agents config coding-agent get plugins.discord
# Restart the agent to pick up Discord config:
openclaw agents restart coding-agent
# Test: In your Discord channel, type:
# !ai What is the current git branch we're working on?
# The agent should respond in the channel
# Check Discord connection in agent logs:
openclaw agents logs coding-agent --tail 20 | grep -i discord
Discord Channel Routing for Multiple Agents
# Route different Discord channels to different OpenClaw agents
# Each agent handles queries relevant to its specialty
# #ai-coding channel → coding-agent
openclaw agents config coding-agent set plugins.discord.allowedChannels "CODING_CHANNEL_ID"
openclaw agents config coding-agent set plugins.discord.respondInThread true
# Creates threaded responses to keep the channel clean
# #ai-devops channel → devops-agent
openclaw agents config devops-agent set plugins.discord.token YOUR_DEVOPS_BOT_TOKEN
# Note: each agent can use a DIFFERENT bot token (separate Discord apps)
# or the SAME bot token with different channel routing
openclaw agents config devops-agent set plugins.discord.allowedChannels "DEVOPS_CHANNEL_ID"
# #ai-general channel → my-assistant (general purpose)
openclaw agents config my-assistant set plugins.discord.allowedChannels "GENERAL_CHANNEL_ID"
# Set a channel-specific mention requirement:
# Only respond when the bot is mentioned (@BotName) not all messages
openclaw agents config coding-agent set plugins.discord.requireMention true
# This prevents the agent from responding to every message in a busy channel
# Configure Discord role-based access:
# Only users with "developer" role can interact with the coding agent
openclaw agents config coding-agent set plugins.discord.allowedRoles "DEVELOPER_ROLE_ID,SENIOR_DEV_ROLE_ID"
# Restart all agents with Discord config:
openclaw agents restart coding-agent
openclaw agents restart devops-agent
openclaw agents restart my-assistant
# Verify all Discord webhooks are registered:
openclaw agents status coding-agent | grep -i discord
openclaw agents status devops-agent | grep -i discord
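Conceptually, the settings above compose into a single gate that every incoming message must pass. The sketch below shows one plausible way such filters combine; it is illustrative, not OpenClaw's actual plugin code, and the msg/cfg shapes are invented for the example:

```python
def should_handle(msg: dict, cfg: dict) -> bool:
    """Return True if a Discord message passes the routing filters.
    Empty/missing filter lists mean 'allow all' for that dimension."""
    if cfg.get("allowedChannels") and msg["channel_id"] not in cfg["allowedChannels"]:
        return False
    if cfg.get("allowedUsers") and msg["author_id"] not in cfg["allowedUsers"]:
        return False
    # Role check: author needs at least one allowed role
    if cfg.get("allowedRoles") and not set(msg.get("author_roles", [])) & set(cfg["allowedRoles"]):
        return False
    if cfg.get("requireMention") and cfg["bot_id"] not in msg.get("mentions", []):
        return False
    prefix = cfg.get("commandPrefix")
    if prefix and not msg["content"].startswith(prefix):
        return False
    return True
```

The ordering matters operationally: cheap identity checks (channel, user) come before content checks, and every filter defaults to permissive when unset, which is why setting allowedChannels explicitly on every agent is worth the effort.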
Skill Chaining: Pipelines Across Multiple Skills
Individual skills are useful. Chaining them into pipelines — where one skill's output feeds the next — creates compound capabilities that no single tool can match. A research pipeline might: search the web, extract key points, check against the knowledge base, generate a formatted report, and save it to the workspace. Each step is a separate skill; the agent orchestrates them.
Building a Research Pipeline with Chained Skills
# SOUL.md for a research agent that chains skills into pipelines
cat > ~/.openclaw/agents/research-agent/workspace/SOUL.md << 'EOF'
# SOUL.md
## Role
Senior research analyst. Conducts thorough, multi-source research
and produces well-structured reports.
## Research Pipeline Pattern
When asked to research a topic:
1. web_search: Search for 3-5 relevant sources
2. For each source URL: fetch_page to extract content
3. Synthesize findings across all sources
4. Check MEMORY.md for any related prior research to incorporate
5. Generate a structured report in Markdown
6. file_write: Save report to workspace/research/YYYY-MM-DD-{topic}.md
7. Update MEMORY.md with research summary and key findings
8. If research was requested via Telegram: send formatted summary back
## Output format for reports
```
# Research Report: {Topic}
**Date:** YYYY-MM-DD
**Sources:** {n} sources reviewed
## Summary
{2-3 sentence executive summary}
## Key Findings
- Finding 1
- Finding 2
## Details
{Expanded analysis}
## Recommendations
{Action items if applicable}
## Sources
1. {URL} — {brief description}
```
## Skill chaining rules
- Always use web_search BEFORE generating any factual claims
- Always save research to workspace BEFORE responding to user
- Always update MEMORY.md with summary AFTER saving report
- Chain order is strict: search → fetch → synthesize → save → respond
EOF
# Example conversation that triggers the full pipeline:
# User: "Research the current state of Rust adoption in enterprise software"
# Agent chain:
# 1. Calls web_search("Rust enterprise adoption 2026")
# 2. Gets URLs, calls fetch_page on each
# 3. Synthesizes content across sources
# 4. Calls file_write(path="workspace/research/2026-04-11-rust-enterprise.md", content=...)
# 5. Calls memory_update(summary=...)
# 6. Responds via Telegram with formatted summary + note that full report is saved
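The strict chain order above is, structurally, a sequential fold: each skill receives the previous skill's output. A toy sketch with stub functions standing in for web_search, fetch_page, and the synthesis step (the stubs are placeholders, not real skill implementations):

```python
from functools import reduce
from typing import Any, Callable

def run_pipeline(steps: list[Callable[[Any], Any]], seed: Any) -> Any:
    """Run skills in strict order; each output becomes the next input."""
    return reduce(lambda acc, step: step(acc), steps, seed)

# Stub skills for illustration:
def search(topic: str) -> list[str]:
    return [f"https://example.com/{topic}/1", f"https://example.com/{topic}/2"]

def fetch(urls: list[str]) -> list[str]:
    return [f"content of {u}" for u in urls]

def synthesize(pages: list[str]) -> dict:
    return {"summary": f"{len(pages)} sources reviewed"}

print(run_pipeline([search, fetch, synthesize], "rust-adoption"))
# prints {'summary': '2 sources reviewed'}
```

In OpenClaw the agent, not your code, performs this orchestration by following SOUL.md; the value of writing the chain down this explicitly is that the "strict order" rule becomes checkable rather than implied.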
Multi-Agent Pipeline: Handoff Between Agents
# Agents can hand off tasks to each other via the shared tasks.json file
# This creates a multi-agent pipeline where each agent handles its specialty
# Example: code review pipeline
# User asks my-assistant: "Review the latest PR and send a summary"
# my-assistant creates a task for coding-agent and data-agent
# HEARTBEAT.md for my-assistant — includes pipeline orchestration:
cat >> ~/.openclaw/agents/my-assistant/workspace/HEARTBEAT.md << 'EOF'
## Pipeline Orchestration
- Check /root/.openclaw/shared/tasks.json for completed tasks
  from other agents that need user notification
- If a coding-agent task has status=done:
  - Read the output from the specified output_path
  - Send a formatted summary to Telegram/Discord
  - Update task status to notified
- If a data-agent task has status=done:
  - Read the report from workspace/reports/
  - Format key metrics into a Telegram message
  - Send to appropriate channel
EOF
# Python script to create pipeline tasks programmatically:
cat > /opt/scripts/create-pipeline-task.py << 'EOF'
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

TASKS_FILE = Path.home() / ".openclaw/shared/tasks.json"

def create_task(assigned_to: str, description: str,
                context: dict | None = None, output_path: str = "") -> str:
    """Create a task for an OpenClaw agent."""
    tasks_data = json.loads(TASKS_FILE.read_text()) if TASKS_FILE.exists() else {"tasks": []}
    task_id = str(uuid.uuid4())[:8]
    task = {
        "id": task_id,
        "assigned_to": assigned_to,
        # datetime.utcnow() is deprecated; use an aware UTC timestamp:
        "created_at": datetime.now(timezone.utc).isoformat(),
        "status": "queued",
        "description": description,
        "context": context or {},
        "output_path": output_path
    }
    tasks_data["tasks"].append(task)
    TASKS_FILE.write_text(json.dumps(tasks_data, indent=2))
    return task_id

# Example: trigger a code review
task_id = create_task(
    assigned_to="coding-agent",
    description="Review the latest pull request on Gitea org/myapp and summarize issues",
    context={"repo": "org/myapp", "pr_number": "latest"},
    output_path="workspace/reviews/"
)
print(f"Created task: {task_id}")
EOF
python3 /opt/scripts/create-pipeline-task.py
Production Hardening: Reliability at Scale
An AI assistant that crashes at 3am and isn't restarted until you notice 8 hours later isn't very useful as an "always-on" assistant. Production hardening means: automatic restart on failure, resource limits so a runaway LLM call doesn't starve the VPS, health monitoring with alerts, and security hardening so your agent's tools can't be exploited.
Systemd Configuration with Resource Limits
# Create a production-hardened systemd service for OpenClaw
sudo tee /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw AI Assistant Gateway
After=network-online.target docker.service
Wants=network-online.target
Requires=docker.service
[Service]
Type=simple
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu
# Start all agents after gateway:
ExecStart=/usr/bin/openclaw gateway start
ExecStartPost=/bin/bash -c "sleep 10 && /usr/bin/openclaw agents start-all"
# Clean shutdown:
ExecStop=/usr/bin/openclaw gateway stop
# Restart policy:
Restart=always
RestartSec=30
# Don't restart if it exits cleanly (admin stopped it):
RestartPreventExitStatus=0
# Resource limits:
# (systemd does not support trailing comments after a value, so the limits
#  are annotated here: cap at 1GB RAM, no swap so OOM fails fast, at most
#  75% of one CPU, and a generous file-descriptor limit)
MemoryMax=1G
MemorySwapMax=0
CPUQuota=75%
LimitNOFILE=65536
# Security hardening:
NoNewPrivileges=true
PrivateDevices=true
# Don't allow accessing most kernel interfaces
# Environment:
EnvironmentFile=/home/ubuntu/.openclaw/.env
Environment=NODE_ENV=production
# Logging:
StandardOutput=journal
StandardError=journal
SyslogIdentifier=openclaw
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
# Monitor the service:
sudo systemctl status openclaw
journalctl -u openclaw -f --no-pager
# Check resource usage:
systemctl show openclaw --property=MemoryCurrent --property=CPUUsageNSec | head -5
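systemctl show prints raw counters (bytes and CPU nanoseconds), which are awkward to eyeball. A small parser makes them readable; it assumes only the standard KEY=VALUE output format and systemd's "[not set]" placeholder for unavailable values:

```python
def parse_systemd_props(text: str) -> dict:
    """Convert `systemctl show` KEY=VALUE output into readable figures."""
    props = dict(line.split("=", 1) for line in text.strip().splitlines())
    out = {}
    # MemoryCurrent is reported in bytes:
    if "MemoryCurrent" in props and props["MemoryCurrent"].isdigit():
        out["memory_mb"] = round(int(props["MemoryCurrent"]) / 2**20, 1)
    # CPUUsageNSec is cumulative CPU time in nanoseconds:
    if "CPUUsageNSec" in props and props["CPUUsageNSec"].isdigit():
        out["cpu_seconds"] = round(int(props["CPUUsageNSec"]) / 1e9, 1)
    return out

print(parse_systemd_props("MemoryCurrent=536870912\nCPUUsageNSec=4200000000"))
# prints {'memory_mb': 512.0, 'cpu_seconds': 4.2}
```

Pipe the real output in with subprocess.run(["systemctl", "show", "openclaw", "--property=MemoryCurrent,CPUUsageNSec"], capture_output=True, text=True) if you want this in a monitoring script.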
Automated Health Monitoring and Self-Healing
#!/bin/bash
# /opt/scripts/openclaw-health-monitor.sh
# Comprehensive health check and self-healing for OpenClaw
# Run every 5 minutes via cron
set -uo pipefail
GATEWAY_URL="https://claw.yourdomain.com"
TELEGRAM_TOKEN="${TELEGRAM_BOT_TOKEN:-}"
TELEGRAM_CHAT_ID="${TELEGRAM_CHAT_ID:-}"
LOG_FILE="/var/log/openclaw-health.log"
ISSUE_LOCKFILE="/tmp/openclaw-issue-reported"
log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $1" | tee -a "$LOG_FILE"; }
alert() {
    local msg="$1"
    # Only alert once per issue (prevent spam):
    if [ ! -f "$ISSUE_LOCKFILE" ]; then
        touch "$ISSUE_LOCKFILE"
        log "ALERT: $msg"
        if [ -n "$TELEGRAM_TOKEN" ]; then
            curl -sf -X POST "https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage" \
                -d chat_id="$TELEGRAM_CHAT_ID" \
                -d text="🚨 OpenClaw Alert: $msg" > /dev/null 2>&1 || true
        fi
    fi
}
clear_alert() {
    if [ -f "$ISSUE_LOCKFILE" ]; then
        rm -f "$ISSUE_LOCKFILE"
        log "RECOVERY: Issue resolved"
        if [ -n "$TELEGRAM_TOKEN" ]; then
            curl -sf -X POST "https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage" \
                -d chat_id="$TELEGRAM_CHAT_ID" \
                -d text="✅ OpenClaw recovered" > /dev/null 2>&1 || true
        fi
    fi
}
# Check 1: Gateway HTTP health endpoint
if ! curl -sf --max-time 10 "${GATEWAY_URL}/health" > /dev/null 2>&1; then
    log "Gateway health check failed"
    # Attempt self-healing:
    sudo systemctl restart openclaw
    sleep 30
    if ! curl -sf --max-time 10 "${GATEWAY_URL}/health" > /dev/null 2>&1; then
        alert "Gateway unreachable after restart attempt"
    else
        log "Self-healed: gateway recovered after restart"
        clear_alert
    fi
    exit 0
fi
# Check 2: At least one agent is running
# Note: grep -c already prints 0 on no match (but exits 1), so only the
# exit status needs swallowing — `|| echo 0` would produce "0\n0" here:
AGENT_COUNT=$(openclaw agents list 2>/dev/null | grep -c 'running' || true)
if [ "$AGENT_COUNT" -eq 0 ]; then
    log "No agents running — attempting restart"
    openclaw agents start-all
    sleep 15
    AGENT_COUNT=$(openclaw agents list 2>/dev/null | grep -c 'running' || true)
    if [ "$AGENT_COUNT" -eq 0 ]; then
        alert "No agents running after restart attempt"
    else
        log "Self-healed: $AGENT_COUNT agent(s) started"
        clear_alert
    fi
    exit 0
fi
# Check 3: Memory usage not exceeding 85%
MEM_PCT=$(free | awk '/Mem:/{printf "%.0f", $3/$2*100}')
if [ "$MEM_PCT" -gt 85 ]; then
    alert "Memory usage at ${MEM_PCT}% — possible memory leak"
fi
# Check 4: Disk not above 90%
DISK_PCT=$(df / | awk 'NR==2{print $5}' | tr -d '%')
if [ "$DISK_PCT" -gt 90 ]; then
    alert "Disk usage at ${DISK_PCT}%"
fi
# All checks passed:
clear_alert
log "Health OK: gateway up, $AGENT_COUNT agents running, mem=${MEM_PCT}%, disk=${DISK_PCT}%"
# Schedule:
# */5 * * * * TELEGRAM_BOT_TOKEN=xxx TELEGRAM_CHAT_ID=yyy /opt/scripts/openclaw-health-monitor.sh
Security Hardening for Production
Securing the Gateway and Skill Access
# 1. Restrict which IPs can access the OpenClaw gateway
# Only Telegram/Discord webhook IPs and your own IP should reach it
# Add to Nginx config for claw.yourdomain.com:
# limit_req_zone must be declared at the http{} level, so put it in its
# own conf.d file rather than appending it to the site config:
cat > /etc/nginx/conf.d/openclaw-ratelimit.conf << 'EOF'
# Rate limiting zone for webhook endpoints:
limit_req_zone $binary_remote_addr zone=webhook:10m rate=30r/m;
EOF
# The location block must sit INSIDE the server{} block — a blind append
# to the site file would land after the closing brace. Write it as a snippet:
cat > /etc/nginx/snippets/openclaw-webhook.conf << 'EOF'
location /webhook/ {
    limit_req zone=webhook burst=20 nodelay;
    limit_req_status 429;
    # Telegram webhook IP ranges (verify against Telegram's current docs):
    allow 149.154.160.0/20;
    allow 91.108.4.0/22;
    allow 91.108.56.0/22;
    allow 91.108.8.0/22;
    # Note: Discord bots connect OUTBOUND over the gateway websocket, so no
    # inbound Discord allowance is needed unless you expose an HTTP
    # interactions endpoint.
    # Your own IP:
    allow YOUR_IP/32;
    deny all;
    proxy_pass http://localhost:3000;
    # ... standard proxy headers
}
EOF
# Then add inside the server{} block of /etc/nginx/sites-available/openclaw:
#   include snippets/openclaw-webhook.conf;
sudo nginx -t && sudo systemctl reload nginx
# 2. Restrict skill file access
# Skills should only read/write to designated directories
# not arbitrary filesystem paths
# Add to your OpenClaw agent config:
openclaw agents config my-assistant set security.allowedWritePaths "/home/ubuntu/.openclaw/agents/my-assistant/workspace"
openclaw agents config my-assistant set security.allowedReadPaths "/home/ubuntu/.openclaw,/tmp/openclaw-readonly"
# 3. Environment variable isolation
# Each agent should only have access to its own secrets
# Store agent-specific secrets in separate .env files:
cat > ~/.openclaw/agents/coding-agent/.env << 'EOF'
GITEA_TOKEN=coding-agent-specific-token
# NOT the admin token or other agents' secrets
EOF
chmod 600 ~/.openclaw/agents/coding-agent/.env
# 4. Audit log for all agent actions
# OpenClaw logs all LLM calls and tool invocations
# Ensure logs are retained and accessible:
mkdir -p /var/log/openclaw
openclaw gateway config set logging.path /var/log/openclaw
openclaw gateway config set logging.retention_days 30
# Review recent tool invocations:
grep -i 'tool_call\|skill_invoke' /var/log/openclaw/*.log 2>/dev/null | tail -20
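The grep above returns raw lines; a short parser can turn them into a per-agent tally for auditing. The agent=... line format here is a guess for illustration (check your actual log format before relying on this regex):

```python
import re
from collections import Counter

# Hypothetical audit-log line format: "<timestamp> agent=<name> tool_call <tool>"
LINE_RE = re.compile(r"agent=(?P<agent>[\w-]+).*\b(?P<event>tool_call|skill_invoke)\b")

def tally(lines: list[str]) -> Counter:
    """Count tool/skill invocations per agent from audit-log lines."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[m["agent"]] += 1
    return counts

sample = [
    "2026-04-11T07:00:01Z agent=my-assistant tool_call web_search",
    "2026-04-11T07:00:09Z agent=my-assistant tool_call file_write",
    "2026-04-11T09:12:44Z agent=data-agent skill_invoke sql_query",
]
print(tally(sample))  # prints Counter({'my-assistant': 2, 'data-agent': 1})
```

A sudden spike in one agent's invocation count is a cheap early signal of a runaway loop or prompt-injection attempt, which is the point of retaining these logs in the first place.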
Tips, Gotchas, and Troubleshooting
HEARTBEAT.md Not Triggering Expected Actions
# If HEARTBEAT.md tasks aren't executing:
# 1. Verify the heartbeat cron is running:
crontab -l | grep openclaw
# Should show: */30 * * * * openclaw gateway heartbeat
# 2. Check the last heartbeat execution:
grep 'heartbeat' /var/log/openclaw/*.log 2>/dev/null | tail -5
# 3. Test a manual heartbeat:
openclaw gateway heartbeat --verbose
# Should show: agent receiving heartbeat, processing HEARTBEAT.md
# 4. Check HEARTBEAT.md formatting:
# OpenClaw reads the file as plain text and gives it to the agent as context
# Common issues:
# - File is empty: cat ~/.openclaw/agents/my-assistant/workspace/HEARTBEAT.md
# - Wrong format: headings must be ## for task sections
# - Overly complex instructions: simplify, agents need clear directives
# 5. Check if agent is in an error state:
openclaw agents status my-assistant
# If status shows error or stopped:
openclaw agents restart my-assistant
journalctl -u openclaw -n 30 | grep -i error
# 6. Manual trigger to test HEARTBEAT.md interpretation:
openclaw agents run my-assistant \
--message "[HEARTBEAT] Please run your scheduled morning briefing task now."
# Watch the output to see if the agent understands and executes the task
Discord Bot Not Responding in Channels
# If the Discord bot isn't responding:
# 1. Check bot permissions in Discord:
# Server Settings → Integrations → Bots → check permissions
# Required: Read Messages, Send Messages, Read Message History
# If requireMention=true: remember to @-mention the bot in your test message
# 2. Verify Message Content Intent is enabled:
# developer.discord.com → Your App → Bot → Privileged Gateway Intents
# MESSAGE CONTENT INTENT must be ON
# Without this, the bot receives messages but can't read the content
# 3. Check the channel ID is correct:
# In Discord with Developer Mode: right-click channel → Copy Channel ID
openclaw agents config coding-agent get plugins.discord.allowedChannels # Check configured channel IDs
# 4. Test Discord connectivity from the container:
docker exec openclaw curl -s https://discord.com/api/v10/gateway | jq .url
# Should return: "wss://gateway.discord.gg"
# 5. Check for Gateway connection errors:
docker logs openclaw 2>&1 | grep -i 'discord\|gateway\|error' | tail -20
# 6. Verify the bot is actually in the server and channel:
# In Discord: check the server members list for your bot
# If not there: the invite link used wrong permissions
# 7. Test with a direct DM to the bot (if DMs are configured):
# Send a DM directly to the bot and see if it responds
# DMs bypass channel permission issues
Pro Tips
- Write HEARTBEAT.md like a junior employee's daily checklist — be specific, provide decision criteria, and include examples of what to do in different situations. "Check if it's before 8am" is ambiguous. "Check if the current UTC hour is between 0 and 7 (midnight to 7am); if yes, skip the morning briefing" is actionable. The more precise your HEARTBEAT.md, the more reliably the agent executes your intentions.
- Use the shared tasks.json for pipeline state, not MEMORY.md — MEMORY.md is for the agent's long-term knowledge. tasks.json is for pipeline coordination between agents. Keep them separate: MEMORY.md grows indefinitely with context; tasks.json is a queue that gets processed and cleared. Don't mix coordination state with knowledge state.
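Treating tasks.json as a queue implies clearing it periodically. A sketch of a prune step that drops terminal-status tasks, runnable from cron (the terminal-status set mirrors the statuses used earlier in this guide; the function name is my own):

```python
import json
from pathlib import Path

# Statuses that mean a task is fully processed:
TERMINAL = {"done", "failed", "notified"}

def prune_tasks(tasks_file: Path) -> int:
    """Drop terminal-status tasks so tasks.json stays a working queue.
    Returns the number of tasks removed."""
    data = json.loads(tasks_file.read_text())
    before = len(data["tasks"])
    data["tasks"] = [t for t in data["tasks"] if t["status"] not in TERMINAL]
    tasks_file.write_text(json.dumps(data, indent=2))
    return before - len(data["tasks"])
```

If you want an audit trail, append the pruned tasks to a tasks-archive.jsonl file before discarding them instead of deleting outright.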
- Set a unique command prefix per OpenClaw agent to avoid bot collisions — if your team runs multiple bots, give each agent its own prefix: `!claw`, `!ai`, or `/assist`. Without this, your OpenClaw agent might respond to commands meant for another bot, or vice versa.
- Back up agent workspaces before major AGENTS.md or SOUL.md changes — MEMORY.md accumulates valuable context over weeks. Before making significant changes to an agent's identity files that might cause it to reinterpret or overwrite memory, back up the workspace: cp -r ~/.openclaw/agents/my-assistant/workspace ~/backups/agent-backup-$(date +%Y%m%d)
- Monitor the health-monitor script with an external push monitor — your health monitor needs its own monitor. Add a push monitor in Uptime Kuma: at the end of the health-monitor script, curl the push URL on success. If the health monitor stops running (say, the VPS reboots and cron never fires), the push monitor alerts you.
Wrapping Up
The four OpenClaw VPS guides together cover the complete journey from first setup to production AI assistant platform: initial setup, always-on VPS deployment, multi-agent architecture and skills, and this guide's advanced automation, Discord integration, skill chaining, and production hardening.
The HEARTBEAT.md pattern and skill chaining are where OpenClaw transforms from a responsive assistant into a proactive one — doing useful work continuously, not just when someone asks. The health monitoring and systemd hardening ensure that work continues reliably even when the network blips, the process crashes, or memory climbs too high. Together they're what makes "always-on" mean something more than just "server is running."
Need an OpenClaw Platform Built and Hardened for Your Team?
Designing a multi-agent OpenClaw platform with Discord integration, sophisticated cron automation, skill pipelines connecting your internal tools, and production hardening appropriate for business-critical workflows — the sysbrix team designs and deploys AI assistant platforms on OpenClaw that engineering teams can genuinely rely on as part of their daily operations.
Talk to Us →