n8n Windows Setup WSL2: Advanced Workflows, Custom Nodes, Database Integration, and Production-Ready Automation

Build on your n8n WSL2 foundation with advanced workflow patterns, custom JavaScript nodes, PostgreSQL database connections, and the configuration tweaks that transform a development install into reliable production automation on Windows.

The foundational n8n Windows WSL2 guide got you running with basic workflows and Telegram integration. This guide covers the depth that makes n8n genuinely powerful on Windows: complex multi-step workflows with error handling and retry logic, custom JavaScript nodes for business logic that built-in nodes can't handle, PostgreSQL as a persistent backend instead of SQLite, and the production-grade configuration that makes your Windows n8n installation reliable enough to depend on for real automation. When you're ready to move beyond Windows development to a production server, the n8n Coolify deployment guide and queue mode scaling guide cover that path.


Prerequisites

  • n8n running in WSL2 — see our WSL2 setup guide for the baseline installation
  • WSL2 with Ubuntu 22.04+ and Node.js 20+
  • At least 8GB RAM on your Windows machine (n8n + PostgreSQL + Docker)
  • Docker Desktop for Windows (for PostgreSQL via Docker)
  • VS Code with the WSL extension for editing custom nodes
  • A few basic workflows already working — this guide builds on that foundation

Verify your current n8n setup in WSL2 is healthy:

# In your WSL2 terminal:
n8n --version
node --version  # Should be 20+

# Check current n8n data directory:
ls ~/.n8n/
# Should show: database.sqlite, config (if configured)

# Check if n8n is configured with env vars:
cat ~/.n8n/config 2>/dev/null || echo "No config file — using defaults"

# Verify available disk space in WSL2:
df -h ~
# Need at least 10GB for production use

# Check Windows memory accessible to WSL2:
free -h
# Should show 4GB+ available
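These checks can be rolled into a single pre-flight script. A sketch under this guide's assumptions (the Node 20 and 10GB thresholds come from the checklist above, not from n8n itself):

```shell
#!/usr/bin/env bash
# preflight.sh - one pass/fail summary of the checks above.

# Parse the major version out of `node --version` output (e.g. v20.11.1 -> 20)
node_major() { printf '%s' "$1" | sed 's/^v//; s/\..*$//'; }

ok=1

ver=$(node --version 2>/dev/null)
if [ -n "$ver" ] && [ "$(node_major "$ver")" -ge 20 ]; then
  echo "OK   node $ver"
else
  echo "FAIL Node.js 20+ required (found: ${ver:-none})"; ok=0
fi

# At least 10GB free in the home filesystem
free_kb=$(df --output=avail -k "$HOME" | tail -1)
if [ "$free_kb" -ge $((10 * 1024 * 1024)) ]; then
  echo "OK   disk: $((free_kb / 1024 / 1024))GB free"
else
  echo "WARN less than 10GB free"
fi

if [ "$ok" -eq 1 ]; then echo "pre-flight passed"; else echo "pre-flight failed"; fi
```

Run it before any of the configuration changes below; a FAIL on the Node check means the upgrade belongs first.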

Switching to PostgreSQL: Production Database Backend

SQLite works fine for learning and simple personal automation. But SQLite has write locking issues under concurrent load, doesn't support the queue mode features that handle heavy workloads, and can corrupt during unexpected shutdowns (not uncommon on Windows). PostgreSQL is the right backend the moment you depend on n8n for real workflows.

Running PostgreSQL via Docker in WSL2

# Start PostgreSQL in Docker (within WSL2 or via Docker Desktop):
docker run -d \
  --name n8n-postgres \
  --restart unless-stopped \
  -e POSTGRES_DB=n8n \
  -e POSTGRES_USER=n8n \
  -e POSTGRES_PASSWORD=your-strong-password \
  -p 5432:5432 \
  -v n8n_postgres_data:/var/lib/postgresql/data \
  postgres:15-alpine

# Verify PostgreSQL is running:
docker exec n8n-postgres pg_isready -U n8n
# Should return: /var/run/postgresql:5432 - accepting connections

# Create the n8n environment configuration:
mkdir -p ~/.n8n
cat > ~/.n8n/.env << 'EOF'
# Database configuration
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your-strong-password

# n8n basic configuration
N8N_HOST=localhost
N8N_PORT=5678
N8N_PROTOCOL=http
WEBHOOK_URL=http://localhost:5678/

# Encryption key (paste the value generated in the next step; never change it)
# Note: the quoted heredoc ('EOF') does NOT expand $(...), so generate the
# key separately and paste the literal value here
N8N_ENCRYPTION_KEY=replace-with-generated-key

# Execution settings
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000

# Timezone
GENERIC_TIMEZONE=UTC
EOF

# Set a permanent encryption key (run ONCE and save the output):
echo "N8N_ENCRYPTION_KEY=$(openssl rand -hex 24)"
# Copy this key to your .env file — if you lose it, all credentials become unreadable

# Start n8n with the new PostgreSQL config.
# (`set -a` auto-exports everything the file defines; `export $(cat ... | xargs)`
# chokes on the comment lines in the file):
set -a; source ~/.n8n/.env; set +a
n8n start

# Verify n8n migrated to PostgreSQL successfully:
docker exec n8n-postgres psql -U n8n -c '\dt'
# Should show n8n tables: workflow_entity, execution_entity, etc.
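Before starting n8n against the new config, it is worth validating the .env file itself: the quoted heredoc above does not expand `$(...)`, so a literal command substitution left in the file is a common mistake. A small sanity-check sketch (the variable list mirrors this guide's .env):

```shell
#!/usr/bin/env bash
# check-env.sh - fail fast on a malformed ~/.n8n/.env

check_env_file() {
  local f="$1" bad=0
  # the variables this guide's setup depends on
  for var in DB_TYPE DB_POSTGRESDB_HOST DB_POSTGRESDB_PASSWORD N8N_ENCRYPTION_KEY; do
    grep -q "^${var}=" "$f" || { echo "missing: $var"; bad=1; }
  done
  # a literal $( means a command substitution never ran (quoted heredoc)
  if grep -q '\$(' "$f"; then echo "unexpanded \$(...) found"; bad=1; fi
  [ "$bad" -eq 0 ] && echo "env file looks sane"
  return "$bad"
}

if [ -f "$HOME/.n8n/.env" ]; then check_env_file "$HOME/.n8n/.env"; fi
```

Run it whenever you edit the file; a non-zero exit makes it easy to wire into a startup script.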

Migrating Existing Workflows from SQLite

# If you have existing workflows in SQLite, export them first:
# 1. Open n8n UI at http://localhost:5678
# 2. Go to each workflow → Export (download JSON)
# OR export all via CLI:

# Stop n8n:
# Press Ctrl+C in the n8n terminal, or: pkill -f n8n

# Export all workflows from SQLite (--backup writes one pretty-printed JSON
# file per workflow into the directory). Run this in a fresh shell, before
# sourcing the PostgreSQL .env, so n8n still reads the default SQLite database:
n8n export:workflow --backup --output ~/n8n-workflows-backup/

# Export credentials:
n8n export:credentials --backup --output ~/n8n-credentials-backup/

# Now switch to PostgreSQL config and restart:
set -a; source ~/.n8n/.env; set +a
n8n start

# Import workflows into PostgreSQL-backed n8n (--separate reads a directory
# of per-workflow files, as produced by --backup):
n8n import:workflow --separate --input ~/n8n-workflows-backup/

# Import credentials:
n8n import:credentials --separate --input ~/n8n-credentials-backup/
# Note: credentials need the same N8N_ENCRYPTION_KEY to decrypt successfully

# Verify import worked:
curl http://localhost:5678/api/v1/workflows -H 'X-N8N-API-KEY: your-api-key' | jq '.data | length'
# Should show the count of your imported workflows (the API wraps results in a "data" array)
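Since credentials only decrypt with the original key, it pays to confirm the key in your new .env matches the one n8n first generated (the old SQLite-era ~/.n8n/config stores it as JSON under "encryptionKey"). A comparison sketch:

```shell
#!/usr/bin/env bash
# key-check.sh - confirm the .env key matches the original one.

key_from_config() { sed -n 's/.*"encryptionKey":[[:space:]]*"\([^"]*\)".*/\1/p' "$1"; }
key_from_env()    { sed -n 's/^N8N_ENCRYPTION_KEY=//p' "$1"; }

compare_keys() {
  local old new
  old=$(key_from_config "$1")
  new=$(key_from_env "$2")
  if [ -n "$old" ] && [ "$old" = "$new" ]; then
    echo "keys match: credentials will decrypt"
  else
    echo "KEY MISMATCH: imported credentials will be unreadable"
    return 1
  fi
}

if [ -f "$HOME/.n8n/config" ] && [ -f "$HOME/.n8n/.env" ]; then
  compare_keys "$HOME/.n8n/config" "$HOME/.n8n/.env"
fi
```

Run this once before the credential import; a mismatch caught here saves re-entering every API key later.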

Custom Nodes: Business Logic That Built-In Nodes Can't Handle

n8n's node library covers common integrations. For proprietary APIs, internal systems, or custom business logic that doesn't fit any existing node, custom nodes give you full TypeScript control within n8n's execution environment. They appear in the node panel, take typed inputs, and integrate with n8n's error handling and credential system.

Creating Your First Custom Node

# Set up the custom node development environment in WSL2:
mkdir -p ~/.n8n/custom
cd ~/.n8n/custom

# Initialize a custom node package:
npm init -y
npm install n8n-workflow
npm install -D typescript @types/node

# Create tsconfig.json:
cat > tsconfig.json << 'EOF'
{
  "compilerOptions": {
    "target": "ES2019",
    "module": "commonjs",
    "outDir": "./dist",
    "sourceMap": true,
    "declaration": true,
    "strict": false,
    "esModuleInterop": true
  },
  "include": ["nodes/**/*"],
  "exclude": ["node_modules"]
}
EOF

# Create the custom node structure:
mkdir -p nodes/InternalCRM

# The custom node file:
cat > nodes/InternalCRM/InternalCRM.node.ts << 'EOF'
import {
  IExecuteFunctions,
  INodeExecutionData,
  INodeType,
  INodeTypeDescription,
  NodeOperationError,
} from 'n8n-workflow';

export class InternalCRM implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'Internal CRM',
    name: 'internalCRM',
    icon: 'fa:address-book',
    group: ['output'],
    version: 1,
    description: 'Interact with the internal CRM system',
    defaults: {
      name: 'Internal CRM',
    },
    inputs: ['main'],
    outputs: ['main'],
    credentials: [
      {
        name: 'internalCrmApi',
        required: true,
      },
    ],
    properties: [
      {
        displayName: 'Operation',
        name: 'operation',
        type: 'options',
        noDataExpression: true,
        options: [
          { name: 'Get Customer', value: 'getCustomer' },
          { name: 'Create Customer', value: 'createCustomer' },
          { name: 'Update Customer', value: 'updateCustomer' },
        ],
        default: 'getCustomer',
      },
      {
        displayName: 'Customer ID',
        name: 'customerId',
        type: 'string',
        displayOptions: {
          show: { operation: ['getCustomer', 'updateCustomer'] },
        },
        default: '',
        description: 'The ID of the customer',
      },
      {
        displayName: 'Customer Email',
        name: 'customerEmail',
        type: 'string',
        displayOptions: {
          show: { operation: ['createCustomer'] },
        },
        default: '',
        description: 'Email address for the new customer',
      },
    ],
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    const items = this.getInputData();
    const operation = this.getNodeParameter('operation', 0) as string;
    const credentials = await this.getCredentials('internalCrmApi');
    const baseUrl = credentials.baseUrl as string;
    const apiKey = credentials.apiKey as string;

    const results: INodeExecutionData[] = [];

    for (let i = 0; i < items.length; i++) {
      try {
        let responseData;

        if (operation === 'getCustomer') {
          const customerId = this.getNodeParameter('customerId', i) as string;
          const response = await this.helpers.request({
            method: 'GET',
            url: `${baseUrl}/customers/${customerId}`,
            headers: { 'Authorization': `Bearer ${apiKey}` },
            json: true,
          });
          responseData = response;
        } else if (operation === 'createCustomer') {
          const email = this.getNodeParameter('customerEmail', i) as string;
          const response = await this.helpers.request({
            method: 'POST',
            url: `${baseUrl}/customers`,
            headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
            body: { email },
            json: true,
          });
          responseData = response;
        }

        results.push({ json: responseData || {} });
      } catch (error) {
        if (this.continueOnFail()) {
          results.push({ json: { error: error.message }, error });
          continue;
        }
        throw new NodeOperationError(this.getNode(), error);
      }
    }

    return [results];
  }
}
EOF

# Build the custom node:
npx tsc

# Register in n8n:
# In ~/.n8n/.env add:
# N8N_CUSTOM_EXTENSIONS=/home/yourusername/.n8n/custom

# Restart n8n and the node will appear in the node panel
echo "Custom node built. Add N8N_CUSTOM_EXTENSIONS to your .env and restart n8n."

Advanced Workflow Patterns

Basic workflows run linearly — get data, transform it, send it somewhere. Real automation handles errors gracefully, retries on transient failures, processes data in parallel when it's safe to do so, and routes to different branches based on conditions. These patterns are the difference between workflows that work once in testing and workflows you can trust in production.

Error Handling and Retry Logic

# Advanced workflow configuration via n8n's JSON export/import
# This represents a workflow with proper error handling

# Key patterns to implement in the n8n UI:

# Pattern 1: Error workflow
# Every workflow should have an Error Trigger workflow that handles failures
# Settings → Error Workflow → select your error handler workflow

# The error handler workflow receives:
# $json.execution.id - the failed execution ID
# $json.execution.error.message - the error message
# $json.workflow.id - which workflow failed
# $json.workflow.name - workflow name for notifications

# Pattern 2: Retry on failure (via Function node)
# Use a Code node before HTTP Request nodes that might fail transiently:
const MAX_RETRIES = 3;
const RETRY_DELAY_MS = 2000;

const items = $input.all();
const results = [];

for (const item of items) {
  let lastError;
  let success = false;

  for (let attempt = 1; attempt <= MAX_RETRIES; attempt++) {
    try {
      // The actual operation (HTTP call, DB query, etc.). In the Code node
      // use this.helpers.httpRequest; there is no global $http helper:
      const response = await this.helpers.httpRequest({ method: 'GET', url: 'https://api.yourdomain.com/data' });
      results.push({ json: { data: response, attempt } });
      success = true;
      break;
    } catch (error) {
      lastError = error;
      if (attempt < MAX_RETRIES) {
        await new Promise(r => setTimeout(r, RETRY_DELAY_MS * attempt));
        console.log(`Attempt ${attempt} failed, retrying...`);
      }
    }
  }

  if (!success) {
    // Add to failed items — continue processing other items:
    results.push({
      json: { error: lastError.message, failed: true },
      pairedItem: item.pairedItem
    });
  }
}

return results;

# Pattern 3: Split processing and merge results
# Use Split In Batches node for large datasets
# Then Merge node at the end to recombine

# Pattern 4: Conditional routing with IF node
# IF node: {{ $json.status === 'active' }}
# True branch → process active users
# False branch → log inactive users and skip
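The same retry-with-backoff idea from Pattern 2 is useful at the shell level too, for the curl calls and cron jobs that surround n8n on WSL2. A small sketch (the doubling delay is one common choice, not the only one):

```shell
#!/usr/bin/env bash
# retry.sh - retry a command with doubling backoff, the shell-level
# counterpart of the Code-node pattern above.

retry() {
  local max="$1"; shift
  local delay=1 attempt=1
  while true; do
    "$@" && return 0            # success: stop retrying
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))        # 1s, 2s, 4s, ...
    attempt=$((attempt + 1))
  done
}

# usage: retry 3 curl -fsS http://localhost:5678/healthz
```

Because `retry` preserves the exit status, it drops into any script unchanged: `retry 3 some-command || alert`.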

Database Integration Workflow

# n8n PostgreSQL node configuration:
# Add a Postgres credential:
# Settings → Credentials → New Credential → PostgreSQL
# Host: localhost (or your DB host)
# Port: 5432
# Database: your_db
# User: your_user
# Password: your_password

# Example workflow: Daily sync from external API to local database
# Node 1: Schedule Trigger (every day at 2am)
# Node 2: HTTP Request → fetch external data
# Node 3: Code node → transform and validate data
# Node 4: PostgreSQL → upsert to local database
# Node 5: PostgreSQL → query records modified today
# Node 6: Send report via Email/Slack

# The PostgreSQL node SQL for upsert:
/*
  INSERT INTO customers (external_id, name, email, last_sync)
  VALUES (:externalId, :name, :email, NOW())
  ON CONFLICT (external_id)
  DO UPDATE SET
    name = EXCLUDED.name,
    email = EXCLUDED.email,
    last_sync = NOW()
  WHERE customers.email != EXCLUDED.email
    OR customers.name != EXCLUDED.name;
*/

# PostgreSQL node parameters:
# Operation: Execute Query
# Query: (paste the SQL above)
# Query Parameters: JSON mapping of n8n variables to SQL params:
# {
#   "externalId": "={{ $json.id }}",
#   "name": "={{ $json.full_name }}",
#   "email": "={{ $json.email }}"
# }

# Test your database connection from WSL2:
psql -h localhost -U n8n -d your_database -c 'SELECT version();'
# Should connect and return PostgreSQL version
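The ON CONFLICT (external_id) clause above only works if external_id carries a unique constraint. A sketch of the table this upsert assumes (the schema is this guide's example, not anything n8n prescribes):

```shell
# Run once to create the target table:
psql -h localhost -U n8n -d your_database << 'SQL'
CREATE TABLE IF NOT EXISTS customers (
  id          SERIAL PRIMARY KEY,
  external_id TEXT NOT NULL UNIQUE,  -- ON CONFLICT (external_id) needs this
  name        TEXT,
  email       TEXT,
  last_sync   TIMESTAMPTZ DEFAULT NOW()
);
SQL
```

Without the UNIQUE constraint, PostgreSQL rejects the upsert with "there is no unique or exclusion constraint matching the ON CONFLICT specification".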

Production Configuration for Windows WSL2

A development n8n install on Windows restarts when you reboot, loses workflows if WSL2 crashes, and doesn't survive Windows updates gracefully. These configurations make your WSL2 n8n installation genuinely reliable for automation you depend on.

Auto-Start n8n with WSL2

# Method 1: Windows Task Scheduler to start n8n on Windows login
# Create a PowerShell script:

# Save this as C:\Users\YourName\start-n8n.ps1
$wslPath = "wsl.exe"
# $args is a reserved automatic variable in PowerShell, so use another name.
# Single-quote the bash command: inside double quotes PowerShell would
# evaluate $(cat ...) itself before bash ever sees it.
$wslArgs = @(
  "-d", "Ubuntu",
  "--", "bash", "-c",
  'cd ~ && source ~/.profile && set -a && source ~/.n8n/.env && set +a && nohup n8n start > ~/.n8n/n8n.log 2>&1 &'
)
Start-Process $wslPath -ArgumentList $wslArgs -WindowStyle Hidden

# Create a Windows Task Scheduler task:
# 1. Open Task Scheduler
# 2. Create Basic Task
# 3. Name: "Start n8n"
# 4. Trigger: At Log On
# 5. Action: Start a Program
# 6. Program: powershell.exe
# 7. Arguments: -ExecutionPolicy Bypass -File "C:\Users\YourName\start-n8n.ps1"
# 8. Check "Run with highest privileges"

# Method 2: systemd in WSL2 (Ubuntu 22.04+ with WSL2 version 0.67.6+)
# Check if systemd is enabled:
cat /proc/1/comm  # Should return 'systemd' not 'init'

# If systemd is available, create a service:
sudo tee /etc/systemd/system/n8n.service << 'EOF'
[Unit]
Description=n8n Workflow Automation
# PostgreSQL runs in Docker here, so order after docker.service:
After=network.target docker.service

[Service]
Type=simple
User=your-username
WorkingDirectory=/home/your-username
EnvironmentFile=/home/your-username/.n8n/.env
# Use the path from `which n8n` (nvm installs live under ~/.nvm, not /usr/bin):
ExecStart=/usr/bin/n8n start
Restart=always
RestartSec=10
StandardOutput=append:/home/your-username/.n8n/n8n.log
StandardError=append:/home/your-username/.n8n/n8n-error.log

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable n8n
sudo systemctl start n8n
sudo systemctl status n8n

# If /proc/1/comm returned 'init' above, enable systemd in /etc/wsl.conf first:
sudo tee /etc/wsl.conf << 'EOF'
[boot]
systemd=true
EOF

# Restart WSL2 from PowerShell:
# wsl --shutdown
# Then reopen WSL2 — n8n starts automatically

Log Management and Monitoring

# Configure log rotation to prevent disk bloat:
sudo tee /etc/logrotate.d/n8n << 'EOF'
/home/your-username/.n8n/n8n.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    create 0640 your-username your-username
}
EOF

# Monitor n8n health with a simple PowerShell script:
# Save as C:\Users\YourName\check-n8n.ps1
$response = try {
    Invoke-WebRequest -Uri 'http://localhost:5678/healthz' -TimeoutSec 5
} catch {
    $null
}

if (-not $response -or $response.StatusCode -ne 200) {
    Write-Host "n8n is down, restarting..."
    wsl -d Ubuntu -- bash -c 'set -a; source ~/.n8n/.env; set +a; nohup n8n start > ~/.n8n/n8n.log 2>&1 &'
} else {
    Write-Host "n8n is running OK"
}

# Schedule this health check every 5 minutes in Task Scheduler
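The same health check can also run from inside WSL2 via cron instead of Task Scheduler. A minimal sketch, assuming n8n's /healthz endpoint on port 5678 as configured above:

```shell
#!/usr/bin/env bash
# check-n8n.sh - cron line: */5 * * * * ~/check-n8n.sh >> ~/.n8n/health.log

check_n8n() {
  # -f makes curl fail on HTTP error codes, so a 500 also counts as down
  if curl -fsS --max-time 5 http://localhost:5678/healthz > /dev/null 2>&1; then
    echo "n8n is running OK"
  else
    echo "n8n is down"
    return 1
  fi
}
```

Call `check_n8n` from the cron script; once you trust it, add restart logic in the else branch.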

# View n8n logs from Windows:
# In Windows Terminal with WSL2:
tail -f ~/.n8n/n8n.log | grep -v 'debug'

# Or view from Windows side:
# Open: \\wsl$\Ubuntu\home\yourusername\.n8n\n8n.log
# This is your Linux filesystem accessible from Windows Explorer

# Configure n8n log level for less noise:
# Add to ~/.n8n/.env:
# N8N_LOG_LEVEL=warn  # Options: debug, info, warn, error
# N8N_LOG_OUTPUT=file
# N8N_LOG_FILE_LOCATION=/home/yourusername/.n8n/n8n.log  # full path; ~ is not expanded here
# N8N_LOG_FILE_COUNT_MAX=7
# N8N_LOG_FILE_SIZE_MAX=20
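With file logging in place, a quick summary beats tailing when you just want to know whether last night went cleanly. A sketch (matches any line containing "error" or "warn", case-insensitive, which fits n8n's default log format loosely rather than exactly):

```shell
#!/usr/bin/env bash
# log-summary.sh - count error/warning lines in the n8n log.

log_summary() {
  awk '
    tolower($0) ~ /error/ { err++ }
    tolower($0) ~ /warn/  { warn++ }
    END { printf "errors=%d warnings=%d\n", err, warn }
  ' "$1"
}

if [ -f "$HOME/.n8n/n8n.log" ]; then log_summary "$HOME/.n8n/n8n.log"; fi
```

Pair it with the logrotate config above and the counts reset daily along with the file.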

Performance Optimization for Windows WSL2

WSL2 Memory and CPU Configuration

# WSL2 by default can consume excessive memory on Windows
# Create a .wslconfig file to limit resources:

# Create or edit: C:\Users\YourName\.wslconfig
# Open in Notepad from PowerShell:
notepad $env:USERPROFILE\.wslconfig

# Paste this configuration:
[wsl2]
# Limit memory to 4GB (adjust based on your RAM):
memory=4GB
# Limit to 4 CPU cores:
processors=4
# Enable swap:
swap=2GB
# Return unused memory to Windows:
pageReporting=true
# WSL2 localhostForwarding:
localhostForwarding=true

# Restart WSL2 from PowerShell to apply:
wsl --shutdown
# Wait 8 seconds, then reopen WSL2 terminal

# Optimize n8n performance in .env:
cat >> ~/.n8n/.env << 'EOF'
# Worker threads for parallel execution:
N8N_CONCURRENCY_PRODUCTION_LIMIT=5

# Queue mode settings (requires Redis — optional for advanced setups):
# EXECUTIONS_MODE=queue
# QUEUE_BULL_REDIS_HOST=localhost
# QUEUE_BULL_REDIS_PORT=6379

# Execution timeout (prevent runaway workflows):
EXECUTIONS_TIMEOUT=600
EXECUTIONS_TIMEOUT_MAX=1800
EOF

# Check WSL2 memory usage from Windows PowerShell:
Get-Process vmmem, vmmemWSL -ErrorAction SilentlyContinue | Select-Object ProcessName, WorkingSet64
# (the process is named vmmem on older Windows builds, vmmemWSL on newer ones)

# Check from inside WSL2:
free -h
top -bn1 | head -15
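A small threshold check turns `free -h` eyeballing into something a cron job can act on. A sketch (the 512MB default is an assumption; tune it to your .wslconfig memory limit):

```shell
#!/usr/bin/env bash
# mem-check.sh - warn when available memory inside WSL2 drops too low.

mem_warn() {
  local avail_mb="$1" threshold_mb="${2:-512}"
  if [ "$avail_mb" -lt "$threshold_mb" ]; then
    echo "LOW: ${avail_mb}MB available"
  else
    echo "OK: ${avail_mb}MB available"
  fi
}

# column 7 of `free -m` is "available"
avail=$(free -m 2>/dev/null | awk '/^Mem:/ {print $7}')
if [ -n "$avail" ]; then mem_warn "$avail"; fi
```

A LOW result is usually the cue to lower N8N_CONCURRENCY_PRODUCTION_LIMIT or raise the memory cap in .wslconfig.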

Tips, Gotchas, and Troubleshooting

# ISSUE: n8n can't connect to PostgreSQL after Windows restart
# Cause: Docker Desktop may not have restarted before WSL2 tries to connect

# Fix 1: Add Docker dependency to n8n startup:
# In your start-n8n.ps1, add a wait loop:
# do { Start-Sleep 5 } until (Test-NetConnection localhost -Port 5432 -InformationLevel Quiet)

# Fix 2: Configure Docker Desktop to start on Windows startup:
# Docker Desktop → Settings → General → "Start Docker Desktop when you log in"

# ISSUE: Webhooks from external services not reaching n8n
# n8n runs on localhost in WSL2 — external webhooks can't reach it directly
# Solution: Use ngrok for temporary external access (development only)
npm install -g ngrok
ngrok http 5678
# Copy the https://xxxx.ngrok.io URL → update WEBHOOK_URL in .env

# For permanent external webhook access, consider:
# Option A: VPS deployment (see Coolify guide above)
# Option B: Cloudflare Tunnel:
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
sudo dpkg -i cloudflared.deb
cloudflared tunnel --url http://localhost:5678

# ISSUE: Custom node not appearing in n8n node panel
# Check the node compiled without errors:
cd ~/.n8n/custom
npx tsc --noEmit
# Should return no errors

# Verify N8N_CUSTOM_EXTENSIONS is set:
echo $N8N_CUSTOM_EXTENSIONS
# Should return: /home/yourusername/.n8n/custom

# Check n8n logs for custom node loading errors:
grep -i 'custom\|extension\|error' ~/.n8n/n8n.log | tail -20

# ISSUE: WSL2 IP address changes after restart
# Windows networking with WSL2 uses a virtual adapter with changing IPs
# For webhooks, use localhost (127.0.0.1); Windows forwards it into WSL2
# For file access from Windows, use the UNC path \\wsl$\Ubuntu rather than an IP

# ISSUE: High memory usage over time
# n8n accumulates execution data. Pruning (EXECUTIONS_DATA_PRUNE=true plus
# EXECUTIONS_DATA_MAX_AGE, set earlier in .env) handles this automatically.
# Individual executions can also be deleted via the public API:
curl -X DELETE http://localhost:5678/api/v1/executions/1234 \
  -H 'X-N8N-API-KEY: your-api-key'
# (1234 is the execution ID; bulk cleanup is easiest from the UI's Executions list)

Pro Tips

  • Sync your n8n data to a Windows-accessible location for easier backup. Don't symlink the live ~/.n8n directory onto /mnt/c (the Windows filesystem bridge is slow, and database files don't belong there); instead copy it on a schedule: rsync -a --delete ~/.n8n/ /mnt/c/Users/YourName/Documents/n8n-data/. Windows backup tools (OneDrive, Backblaze) then pick up your workflows and credentials automatically.
  • Use VS Code's WSL Remote extension for custom node development — the WSL extension lets you open the WSL2 filesystem directly in VS Code on Windows, with full IntelliSense, TypeScript checking, and the integrated terminal running in WSL2. File → Open Remote → WSL: Ubuntu → navigate to ~/.n8n/custom. This is dramatically faster than editing files from the Windows side.
  • Export your workflows to Git regularly, not just file backup — use n8n's CLI export feature in a cron job: n8n export:workflow --all --output ~/n8n-workflows/ runs daily and commits the JSON files to a Git repository. This gives you version history, diff tracking, and the ability to restore specific workflow versions — far more useful than a binary SQLite backup.
  • When you're ready for 24/7 reliability, move to a VPS — WSL2 on Windows is excellent for development and personal automation, but Windows updates, sleep states, and resource pressure mean it's not suitable for automation you depend on for business processes. When that threshold arrives, the n8n Coolify deployment guide provides the clean path to a production server while preserving all your existing workflows.
  • Test your encryption key setup before creating important credentials — generate the key, set it in the environment, create one test credential, restart n8n, and verify the credential is still accessible. Only then add production API keys and credentials. If you discover you need to change the encryption key later, all credentials will need to be manually re-entered.
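The Git export tip above can be sketched as a small cron script (it assumes ~/n8n-workflows is already a Git repository, and that cron's environment can find the n8n binary):

```shell
#!/usr/bin/env bash
# backup-workflows.sh - cron line: 0 3 * * * ~/backup-workflows.sh

backup_msg() { echo "n8n workflow backup $(date +%F)"; }

backup_n8n_workflows() {
  local dir="${1:-$HOME/n8n-workflows}"
  mkdir -p "$dir"
  # --backup writes one JSON file per workflow, which diffs cleanly in Git
  n8n export:workflow --backup --output "$dir/" || return 1
  git -C "$dir" add -A
  # commit only when something actually changed, so the history stays meaningful
  git -C "$dir" diff --cached --quiet || git -C "$dir" commit -m "$(backup_msg)"
}
```

Run `backup_n8n_workflows` by hand once to confirm the export path before adding the cron line.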

Wrapping Up

With PostgreSQL replacing SQLite, custom nodes extending the built-in library, proper error handling in your workflow patterns, and auto-start configuration, your WSL2 n8n installation is genuinely production-capable for personal automation and small team use. The WSL2 environment gives you the Linux toolchain and Docker ecosystem on Windows without dual-booting, and n8n runs significantly more reliably in this environment than the native Windows installer.

The natural next step when your automation grows beyond personal use is a dedicated VPS — either using our Coolify deployment guide for a clean managed approach, or the queue mode guide when you need to handle dozens of concurrent heavy workflows. Your workflows transfer directly — just export from the Windows install and import to the server.


Need Production n8n Infrastructure for Your Team?

When your n8n automation grows beyond what a Windows workstation can reliably host — production deployments with queue mode, custom nodes for proprietary integrations, PostgreSQL HA, and the monitoring that catches failures before they affect your business — the sysbrix team designs and deploys production n8n infrastructure that teams can depend on.

Talk to Us →