Portainer Docker Setup: API Automation, Edge Deployments, Security Hardening, and Container Logging at Scale

Learn how to automate Portainer management via its REST API, deploy and manage edge devices from one dashboard, harden your Portainer installation against attack, and build centralized logging across all your container environments.

The first two guides in this series covered Portainer installation and basic container management, then GitOps stacks, RBAC, and multi-environment deployments. This third guide covers the operational depth most teams eventually need: the Portainer REST API for infrastructure-as-code automation, Edge deployments for managing containers on remote and air-gapped servers, security hardening that goes beyond default settings, and centralized log aggregation across your entire container fleet.


Prerequisites

  • A running Portainer CE or BE instance with HTTPS — see our getting started guide
  • Portainer version 2.19+ — Edge and API features covered here require recent releases
  • Admin access and at least two connected environments for multi-environment examples
  • For Edge deployments: remote servers or IoT devices running Linux with Docker
  • curl and jq on your workstation for API testing

Confirm your Portainer version and API access:

# Get your API token (store this securely):
curl -X POST https://portainer.yourdomain.com/api/auth \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "yourpassword"}' | jq -r .jwt

# Verify API access with the token:
export PT_TOKEN="your-jwt-token-here"
curl https://portainer.yourdomain.com/api/status \
  -H "Authorization: Bearer $PT_TOKEN" | jq '{version: .Version, instance: .InstanceID}'

# List all environments:
curl https://portainer.yourdomain.com/api/endpoints \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq '[.[] | {id: .Id, name: .Name, type: .Type, status: .Status}]'

Portainer REST API: Infrastructure Automation

Every action you can take in the Portainer UI has an equivalent API endpoint. This makes Portainer scriptable — you can provision stacks, create networks, manage secrets, and deploy containers as part of CI/CD pipelines or infrastructure automation scripts without ever opening a browser.
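Since every UI action maps to an endpoint, it helps to see how that mapping works: Portainer proxies each environment's full Docker Engine API under a per-environment path, so a UI action like restarting a container is one authenticated HTTP call. A minimal sketch — the hostname, environment ID, and container name are placeholders:

```python
# Portainer exposes each environment's Docker Engine API under
# /api/endpoints/{id}/docker/, so any docker CLI action becomes one HTTP call.

def docker_proxy_url(base_url: str, env_id: int, docker_path: str) -> str:
    """Build the Portainer-proxied Docker API URL for an environment."""
    return f"{base_url}/api/endpoints/{env_id}/docker{docker_path}"

# Equivalent of `docker restart myapp` on environment 3 (auth plumbing omitted):
# requests.post(docker_proxy_url(base, 3, "/containers/myapp/restart"),
#               headers={"Authorization": f"Bearer {token}"})
print(docker_proxy_url("https://portainer.yourdomain.com", 3,
                       "/containers/myapp/restart"))
```
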

API Authentication and Token Management

#!/bin/bash
# portainer-api.sh — Helper functions for Portainer API automation

PORTAINER_URL="https://portainer.yourdomain.com"

# Get a fresh JWT token (tokens expire after 8 hours)
get_token() {
  curl -s -X POST "${PORTAINER_URL}/api/auth" \
    -H 'Content-Type: application/json' \
    -d "{\"username\":\"${PORTAINER_USER}\",\"password\":\"${PORTAINER_PASSWORD}\"}" | \
    jq -r .jwt
}

# Get environment (endpoint) ID by name
get_env_id() {
  local env_name="$1"
  local token="$2"
  curl -s "${PORTAINER_URL}/api/endpoints" \
    -H "Authorization: Bearer ${token}" | \
    jq -r ".[] | select(.Name == \"${env_name}\") | .Id"
}

# Usage in scripts:
export PT_TOKEN=$(get_token)
export PT_ENV_ID=$(get_env_id "Production" "$PT_TOKEN")
echo "Environment ID: $PT_ENV_ID"

# API token expiry handling — refresh if needed:
check_token() {
  local token="$1"
  local status=$(curl -s -o /dev/null -w "%{http_code}" \
    "${PORTAINER_URL}/api/users/me" \
    -H "Authorization: Bearer ${token}")
  [ "$status" = "200" ] && echo "valid" || echo "expired"
}

Automating Stack Deployments via API

#!/usr/bin/env python3
# deploy-stack.py
# Deploys or updates a Portainer stack via API
# Use in CI/CD pipelines as an alternative to the Portainer webhook

import requests
import json
import os
import sys
from pathlib import Path

PORTAINER_URL = os.environ["PORTAINER_URL"]
PORTAINER_USER = os.environ["PORTAINER_USER"]
PORTAINER_PASSWORD = os.environ["PORTAINER_PASSWORD"]
ENV_NAME = os.environ.get("PORTAINER_ENV", "Production")

def get_token() -> str:
    resp = requests.post(
        f"{PORTAINER_URL}/api/auth",
        json={"username": PORTAINER_USER, "password": PORTAINER_PASSWORD}
    )
    resp.raise_for_status()
    return resp.json()["jwt"]

def get_env_id(token: str, env_name: str) -> int:
    resp = requests.get(
        f"{PORTAINER_URL}/api/endpoints",
        headers={"Authorization": f"Bearer {token}"}
    )
    resp.raise_for_status()
    for env in resp.json():
        if env["Name"] == env_name:
            return env["Id"]
    raise ValueError(f"Environment not found: {env_name}")

def get_stack_id(token: str, stack_name: str) -> int | None:
    resp = requests.get(
        f"{PORTAINER_URL}/api/stacks",
        headers={"Authorization": f"Bearer {token}"}
    )
    resp.raise_for_status()
    for stack in resp.json():
        if stack["Name"] == stack_name:
            return stack["Id"]
    return None

def deploy_stack(token: str, env_id: int, stack_name: str,
                compose_file: str, env_vars: dict) -> dict:
    stack_id = get_stack_id(token, stack_name)
    compose_content = Path(compose_file).read_text()
    env_list = [{"name": k, "value": v} for k, v in env_vars.items()]

    if stack_id:
        # Update existing stack
        print(f"Updating stack: {stack_name} (ID: {stack_id})")
        resp = requests.put(
            f"{PORTAINER_URL}/api/stacks/{stack_id}?endpointId={env_id}",
            headers={"Authorization": f"Bearer {token}"},
            json={"stackFileContent": compose_content, "env": env_list,
                  "prune": True, "pullImage": True}
        )
    else:
        # Create new stack
        print(f"Creating stack: {stack_name}")
        resp = requests.post(
            f"{PORTAINER_URL}/api/stacks/create/standalone/string?endpointId={env_id}",
            headers={"Authorization": f"Bearer {token}"},
            json={"name": stack_name, "stackFileContent": compose_content, "env": env_list}
        )

    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    compose_file = sys.argv[1] if len(sys.argv) > 1 else "docker-compose.yml"
    stack_name = sys.argv[2] if len(sys.argv) > 2 else "myapp"

    token = get_token()
    env_id = get_env_id(token, ENV_NAME)

    # Read env vars from environment (set in CI/CD secrets)
    env_vars = {
        "APP_VERSION": os.environ.get("APP_VERSION", "latest"),
        "DATABASE_URL": os.environ["DATABASE_URL"],
        "API_KEY": os.environ["API_KEY"],
    }

    result = deploy_stack(token, env_id, stack_name, compose_file, env_vars)
    print(f"Stack deployed: ID={result.get('Id')}, Name={result.get('Name')}")

Managing Secrets and Configs via API

# Create a Docker secret via Portainer API (the target environment must be
# running in Swarm mode — Docker secrets are a Swarm feature)
# Useful for seeding secrets before stack deployment

# Create secret:
curl -X POST \
  "https://portainer.yourdomain.com/api/endpoints/${ENV_ID}/docker/secrets/create" \
  -H "Authorization: Bearer $PT_TOKEN" \
  -H 'Content-Type: application/json' \
  -d "{
    \"Name\": \"db_password\",
    \"Data\": \"$(echo -n 'your-db-password' | base64)\"
  }" | jq .ID

# List existing secrets:
curl "https://portainer.yourdomain.com/api/endpoints/${ENV_ID}/docker/secrets" \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq '[.[] | {id: .ID, name: .Spec.Name, created: .CreatedAt}]'

# Update a secret (Docker secrets are immutable — delete and recreate):
SECRET_ID=$(curl -s \
  "https://portainer.yourdomain.com/api/endpoints/${ENV_ID}/docker/secrets" \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq -r '.[] | select(.Spec.Name == "db_password") | .ID')

# Delete old:
curl -X DELETE \
  "https://portainer.yourdomain.com/api/endpoints/${ENV_ID}/docker/secrets/${SECRET_ID}" \
  -H "Authorization: Bearer $PT_TOKEN"

# Create new with same name:
curl -X POST \
  "https://portainer.yourdomain.com/api/endpoints/${ENV_ID}/docker/secrets/create" \
  -H "Authorization: Bearer $PT_TOKEN" \
  -H 'Content-Type: application/json' \
  -d "{\"Name\": \"db_password\", \"Data\": \"$(echo -n 'new-password' | base64)\"}"

# Automate secret rotation in a cron job or CI/CD pipeline
# This pattern works for all rotatable secrets: API keys, certificates, DB passwords
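The delete-and-recreate sequence above is easy to wrap in a rotation script. A hedged Python sketch of the payload-building half — the endpoint paths mirror the curl calls above, and the HTTP plumbing is left as comments:

```python
# Sketch of the rotation payload logic from the curl examples above.
# The Docker API expects secret Data base64-encoded; names here are examples.
import base64

def secret_create_payload(name: str, value: str) -> dict:
    """Build the body for POST /api/endpoints/{id}/docker/secrets/create."""
    return {"Name": name, "Data": base64.b64encode(value.encode()).decode()}

# Rotation is then: GET secrets, find the ID by Spec.Name, DELETE it,
# POST secret_create_payload(name, new_value) — exactly the curl sequence above.
print(secret_create_payload("db_password", "new-password"))
```
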

Portainer Edge: Managing Remote and Air-Gapped Servers

Portainer Edge extends your management plane to servers that can't accept inbound connections — servers behind NAT, IoT devices on cellular networks, air-gapped production environments, and remote edge locations. Instead of Portainer connecting to the agent, the Edge Agent polls Portainer, making it work through any firewall configuration.

Understanding Edge Architecture

The key difference from standard agent connections:

  • Standard Agent — Portainer initiates connection to port 9001 on the remote server. Requires the server to be reachable from Portainer's network.
  • Edge Agent — the agent on the remote server polls Portainer's Edge server (port 8000). Works through NAT, firewalls, and cellular connections because the connection is outbound from the remote server.
  • Async Edge — for intermittently connected devices. Commands are queued and executed when the device next checks in. No real-time connection required.
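Because Edge agents only poll, a dead device shows up as a stale check-in rather than a dropped connection — so fleet health checks compare LastCheckInDate against a freshness threshold. A sketch of that check against the /api/endpoints payload (field names match the jq queries used later in this guide; the 300-second threshold is an example):

```python
import time

def stale_edge_devices(endpoints: list, max_age_s: int = 300) -> list:
    """Return names of Edge environments (Type 4) whose last check-in is
    older than max_age_s. LastCheckInDate is a Unix timestamp in seconds."""
    now = time.time()
    return [e["Name"] for e in endpoints
            if e.get("Type") == 4 and now - e.get("LastCheckInDate", 0) > max_age_s]

# Example with a mocked API response:
sample = [
    {"Name": "frankfurt", "Type": 4, "LastCheckInDate": int(time.time()) - 30},
    {"Name": "warehouse", "Type": 4, "LastCheckInDate": int(time.time()) - 3600},
]
print(stale_edge_devices(sample))  # → ['warehouse']
```
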

Deploying the Edge Agent

# Step 1: Create an Edge environment in Portainer:
# Environments → Add Environment → Docker Standalone → Edge Agent
# Set:
# - Name: remote-server-frankfurt
# - Portainer server URL: https://portainer.yourdomain.com (must be public)
# - Edge tunnel server: portainer.yourdomain.com:8000 (raw TCP tunnel — no https:// scheme)

# Portainer generates a unique Edge Key for this environment
# Copy the generated docker run command from the UI, which looks like:

# Step 2: On the REMOTE server, run the Edge Agent:
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v /:/host \
  -v portainer_agent_data:/data \
  --restart always \
  -e EDGE=1 \
  -e EDGE_ID=YOUR_EDGE_ID \
  -e EDGE_KEY=YOUR_EDGE_KEY \
  -e EDGE_INSECURE_POLL=0 \
  --name portainer_edge_agent \
  portainer/agent:latest

# The agent starts polling Portainer's tunnel server on port 8000
# Once connected, the environment appears as active in Portainer

# Verify connection from Portainer:
curl https://portainer.yourdomain.com/api/endpoints \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq '.[] | select(.Name == "remote-server-frankfurt") | {status: .Status, edgeId: .EdgeID}'
# Status 1 = active, 2 = inactive

Edge Stack Deployments for Remote Servers

# Deploy the same stack to multiple Edge environments simultaneously:
# Useful for IoT fleets, retail locations, or distributed edge nodes

# Create an Edge Stack via API — Edge Stacks target Edge *groups*, not
# individual environment IDs, so create a group (Edge Compute → Edge Groups)
# and use its ID in edgeGroups below. edge-compose.yml is created further down.

curl -X POST "https://portainer.yourdomain.com/api/edge_stacks/create/string" \
  -H "Authorization: Bearer $PT_TOKEN" \
  -H 'Content-Type: application/json' \
  -d "{
    \"name\": \"edge-data-collector\",
    \"stackFileContent\": $(python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))' < edge-compose.yml),
    \"edgeGroups\": [1],
    \"deploymentType\": 0
  }" | jq .Id

# edge-compose.yml for a typical edge data collector:
cat > edge-compose.yml << 'EOF'
version: '3.8'

services:
  collector:
    image: your-registry/data-collector:latest
    restart: unless-stopped
    environment:
      - UPSTREAM_API=https://data.yourdomain.com/ingest
      - DEVICE_ID=${DEVICE_ID:-unknown}
      - COLLECTION_INTERVAL=60
    volumes:
      - collector_data:/data
    network_mode: host  # Common for edge devices accessing local sensors

  watchdog:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 3600 collector  # Auto-update collector every hour
    restart: unless-stopped

volumes:
  collector_data:
EOF

# Monitor Edge environment status across all devices:
curl "https://portainer.yourdomain.com/api/endpoints" \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq '[.[] | select(.Type == 4) | {name: .Name, status: .Status, lastCheckIn: .LastCheckInDate}]'

Security Hardening Your Portainer Installation

Portainer controls every container on every connected server. A compromised Portainer instance means a compromised infrastructure. The default installation is functional but not hardened — these settings close the most significant gaps.

TLS Mutual Authentication for Agent Connections

# Generate certificates for mutual TLS between Portainer and agents
# This prevents unauthorized agents from connecting to your Portainer instance

mkdir -p ~/portainer-certs
cd ~/portainer-certs

# Generate CA key and certificate:
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 3650 -key ca-key.pem -out ca.pem \
  -subj "/CN=PortainerCA/O=YourOrg"

# Generate server certificate for Portainer:
openssl genrsa -out server-key.pem 4096
openssl req -new -key server-key.pem -out server.csr \
  -subj "/CN=portainer.yourdomain.com"
openssl x509 -req -days 3650 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem

# Generate agent certificate:
openssl genrsa -out agent-key.pem 4096
openssl req -new -key agent-key.pem -out agent.csr \
  -subj "/CN=portainer-agent"
openssl x509 -req -days 3650 -in agent.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out agent-cert.pem

# Deploy Portainer with TLS:
docker run -d \
  -p 8000:8000 \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  -v ~/portainer-certs:/certs \
  portainer/portainer-ce:latest \
  --sslcert /certs/server-cert.pem \
  --sslkey /certs/server-key.pem

# Deploy agent with TLS:
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v ~/portainer-certs:/certs \
  -e AGENT_SECRET=your-shared-agent-secret \
  portainer/agent:latest \
  --sslcert /certs/agent-cert.pem \
  --sslkey /certs/agent-key.pem \
  --sslcacert /certs/ca.pem

Restricting Portainer Dashboard Access

# Nginx configuration to restrict Portainer to VPN or office IP
# /etc/nginx/sites-available/portainer

geo $allowed_access {
    default           0;
    10.8.0.0/24       1;  # VPN subnet
    203.0.113.5       1;  # Office static IP
    127.0.0.1         1;  # Localhost
}

server {
    listen 443 ssl http2;
    server_name portainer.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/portainer.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/portainer.yourdomain.com/privkey.pem;

    # Block all non-approved IPs at Nginx level
    if ($allowed_access = 0) {
        return 403 "Access denied. Connect via VPN.";
    }

    # Additional rate limiting for the auth endpoint
    location /api/auth {
        limit_req zone=portainer_auth burst=5 nodelay;
        proxy_pass https://localhost:9443;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location / {
        proxy_pass https://localhost:9443;
        proxy_ssl_verify off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Define the rate-limit zone at the top of this file — it must live in the
# http context, which sites-available files are included into:
limit_req_zone $binary_remote_addr zone=portainer_auth:10m rate=10r/m;

# Test and reload the configuration:
sudo nginx -t && sudo systemctl reload nginx

Security Configuration Checklist

# Security settings to configure via Portainer Settings:

# 1. Force HTTPS only:
#    Settings → SSL Certificate → Force HTTPS

# 2. Enable session timeout:
#    Settings → Authentication → User Session Lifetime: 8h (or less)

# 3. Configure LDAP/OAuth (instead of local accounts):
#    Settings → Authentication → LDAP or OAuth
#    This gives you centralized user management and automatic deprovisioning

# 4. Disable Edge Compute features if unused:
#    Settings → Edge Compute → Enable Edge Compute features → OFF
#    Leaving the tunnel server enabled exposes port 8000 for no reason

# 5. Audit current admin accounts:
curl https://portainer.yourdomain.com/api/users \
  -H "Authorization: Bearer $PT_TOKEN" | \
  jq '[.[] | select(.Role == 1) | {username: .Username, id: .Id}]'
# Role 1 = Administrator — review this list regularly

# 6. Review and remove stale API tokens:
curl https://portainer.yourdomain.com/api/users \
  -H "Authorization: Bearer $PT_TOKEN" | jq '.[].AuthenticationMethod'

# 7. Enable container capability restrictions in Portainer:
# Environments → [env] → Security → Restrict container capabilities
# Check: Prevent privilege escalation
# Check: No new privileges flag
# This prevents containers from gaining additional Linux capabilities

Centralized Container Logging

When you're running containers across multiple environments, debugging requires more than opening individual container log views in Portainer. You need centralized log aggregation that lets you search across all containers, correlate events across services, and retain logs beyond what Docker's log buffer holds.
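Once logs are centralized, searching across every environment is one HTTP query against Loki's query_range endpoint. A sketch of flattening its documented response shape into (container, line) pairs — the `container_name` label is the one configured in the driver setup below, and the sample data is mocked:

```python
def flatten_loki_result(response: dict) -> list:
    """Flatten Loki query_range JSON into (container_name, log_line) pairs.
    Loki returns a list of streams, each with a label set and
    [timestamp, line] value pairs."""
    pairs = []
    for stream in response.get("data", {}).get("result", []):
        name = stream["stream"].get("container_name", "?")
        for _ts, line in stream["values"]:
            pairs.append((name, line))
    return pairs

# Mocked response in Loki's documented shape:
sample = {"data": {"result": [
    {"stream": {"container_name": "api"},
     "values": [["1700000000000000000", "error: db timeout"]]}
]}}
print(flatten_loki_result(sample))  # → [('api', 'error: db timeout')]
```
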

Docker Logging Driver Configuration

# Configure Docker to send logs to Loki (for Grafana integration)
# Apply this on each Docker host managed by Portainer

# Install Loki Docker driver:
docker plugin install grafana/loki-docker-driver:latest \
  --alias loki \
  --grant-all-permissions

# Set Loki as the default logging driver for all new containers.
# Edit /etc/docker/daemon.json on each host (tee overwrites the file —
# merge with any existing settings first):
sudo tee /etc/docker/daemon.json << 'EOF'
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "https://loki.yourdomain.com/loki/api/v1/push",
    "loki-batch-size": "400",
    "loki-retries": "3",
    "loki-timeout": "10s",
    "labels": "container_name,compose_project,compose_service",
    "no-file": "false",
    "keep-file": "false"
  }
}
EOF

# Restart Docker to apply:
sudo systemctl restart docker

# Verify the driver is active:
docker info | grep 'Logging Driver'
# Should show: Logging Driver: loki

# Test that a container's logs reach Loki (name the container so it's queryable):
docker run --rm --name loki-test alpine echo "test log entry from $(hostname)"
# Then query Loki:
curl -G 'https://loki.yourdomain.com/loki/api/v1/query_range' \
  --data-urlencode 'query={container_name="loki-test"}' \
  --data-urlencode 'limit=5' | jq '.data.result[0].values'

Per-Stack Log Labels for Searchability

# Add logging labels to your Compose stacks deployed through Portainer
# This makes logs searchable by environment, team, and service in Grafana

version: '3.8'

services:
  api:
    image: myapp:latest
    restart: unless-stopped
    logging:
      driver: loki
      options:
        loki-url: "https://loki.yourdomain.com/loki/api/v1/push"
        loki-pipeline-stages: |
          - json:
              expressions:
                level: level
                message: message
        labels: "env,team,service"
    labels:
      env: "production"
      team: "backend"
      service: "api"

  worker:
    image: myapp:latest
    command: node worker.js
    restart: unless-stopped
    logging:
      driver: loki
      options:
        loki-url: "https://loki.yourdomain.com/loki/api/v1/push"
        labels: "env,team,service"
    labels:
      env: "production"
      team: "backend"
      service: "worker"

# In Grafana/Loki, query across your entire production backend:
# {env="production", team="backend"} |= "error"

# Query just workers for the last hour:
# {env="production", service="worker"} | json | level="error"

# Cross-environment error comparison for one service:
# {service="api"} | json | level="error"

Operational Monitoring: Portainer's Resource Statistics

Container Resource Monitoring via API

#!/usr/bin/env python3
# resource-report.py
# Generates a resource usage report across all Portainer environments

import requests
import json
import os
from datetime import datetime

PORTAINER_URL = os.environ["PORTAINER_URL"]
PT_TOKEN = os.environ["PT_TOKEN"]  # Pre-fetched JWT

HEADERS = {"Authorization": f"Bearer {PT_TOKEN}"}

def get_environments():
    resp = requests.get(f"{PORTAINER_URL}/api/endpoints", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def get_containers(env_id: int):
    resp = requests.get(
        f"{PORTAINER_URL}/api/endpoints/{env_id}/docker/containers/json?all=true",
        headers=HEADERS
    )
    if resp.status_code != 200:
        return []
    return resp.json()

def get_container_stats(env_id: int, container_id: str):
    resp = requests.get(
        f"{PORTAINER_URL}/api/endpoints/{env_id}/docker/containers/{container_id}/stats?stream=false",
        headers=HEADERS,
        timeout=15
    )
    if resp.status_code != 200:
        return None
    return resp.json()

def calculate_cpu_percent(stats: dict) -> float:
    cpu_delta = stats["cpu_stats"]["cpu_usage"]["total_usage"] - \
                stats["precpu_stats"]["cpu_usage"]["total_usage"]
    system_delta = stats["cpu_stats"]["system_cpu_usage"] - \
                   stats["precpu_stats"]["system_cpu_usage"]
    num_cpus = stats["cpu_stats"]["online_cpus"]
    return (cpu_delta / system_delta) * num_cpus * 100

def format_bytes(bytes_val: int) -> str:
    for unit in ['B', 'KB', 'MB', 'GB']:
        if bytes_val < 1024:
            return f"{bytes_val:.1f}{unit}"
        bytes_val /= 1024
    return f"{bytes_val:.1f}TB"

print(f"Container Resource Report — {datetime.now().strftime('%Y-%m-%d %H:%M')}")
print("=" * 80)

for env in get_environments():
    env_id = env["Id"]
    env_name = env["Name"]
    containers = get_containers(env_id)
    running = [c for c in containers if c["State"] == "running"]

    print(f"\n[{env_name}] — {len(running)} running / {len(containers)} total")

    for container in running[:10]:  # Limit to the first 10 per environment
        name = container["Names"][0].lstrip("/")
        stats = get_container_stats(env_id, container["Id"])
        if not stats:
            continue

        try:
            cpu = calculate_cpu_percent(stats)
            mem_usage = stats["memory_stats"]["usage"]
            mem_limit = stats["memory_stats"]["limit"]
            mem_pct = (mem_usage / mem_limit) * 100
            print(f"  {name:40} CPU:{cpu:5.1f}%  MEM:{format_bytes(mem_usage):10} ({mem_pct:.1f}%)")
        except (KeyError, ZeroDivisionError):
            print(f"  {name:40} stats unavailable")

# Run weekly or add to a monitoring cron:
# 0 9 * * 1 python3 /opt/scripts/resource-report.py | mail -s "Weekly Container Report" [email protected]

Tips, Gotchas, and Troubleshooting

Edge Agent Not Connecting to Portainer

# Check Edge agent logs on the remote server:
docker logs portainer_edge_agent --tail 50

# Common errors and fixes:

# 1. "dial tcp ... connection refused"
#    Port 8000 must be open and reachable from the Edge server
#    Test from the Edge server:
curl -I https://portainer.yourdomain.com:8000
# If this fails, port 8000 isn't accessible

# 2. "certificate signed by unknown authority"
#    If Portainer uses a self-signed cert, add to the Edge agent:
#    -e EDGE_INSECURE_POLL=1  (development only)
#    Or mount the CA cert and use -v /path/to/ca.crt:/certs/ca.crt

# 3. Edge agent connects but environment shows inactive:
#    Check the EDGE_KEY matches what Portainer generated
#    Regenerate the edge key in Portainer if needed:
#    Environments → [edge env] → Actions → Reset Edge Key

# 4. Connection established but command execution fails:
#    Check Docker socket is properly mounted
docker exec portainer_edge_agent ls /var/run/docker.sock
# Should exist and be accessible

API Token Expiring Mid-Pipeline

# Portainer JWT tokens expire after 8 hours by default
# For long-running CI/CD pipelines or monitoring scripts, handle token refresh:

#!/bin/bash
# portainer-with-refresh.sh

PORTAINER_URL="https://portainer.yourdomain.com"

refresh_token() {
  curl -s -X POST "${PORTAINER_URL}/api/auth" \
    -H 'Content-Type: application/json' \
    -d "{\"username\":\"${PORTAINER_USER}\",\"password\":\"${PORTAINER_PASSWORD}\"}" | \
    jq -r .jwt
}

api_call() {
  local method="$1"
  local path="$2"
  shift 2

  # Try the call
  local response
  response=$(curl -s -w "\n%{http_code}" -X "$method" \
    "${PORTAINER_URL}${path}" \
    -H "Authorization: Bearer ${PT_TOKEN}" \
    "$@")

  local http_code=$(echo "$response" | tail -1)
  local body=$(echo "$response" | head -n -1)

  # Refresh token on 401 and retry once
  if [ "$http_code" = "401" ]; then
    echo "Token expired, refreshing..." >&2
    PT_TOKEN=$(refresh_token)
    export PT_TOKEN

    response=$(curl -s -w "\n%{http_code}" -X "$method" \
      "${PORTAINER_URL}${path}" \
      -H "Authorization: Bearer ${PT_TOKEN}" \
      "$@")
    body=$(echo "$response" | head -n -1)
  fi

  echo "$body"
}

# Usage:
export PT_TOKEN=$(refresh_token)
api_call GET /api/endpoints | jq 'length'

Container Logs Not Appearing in Loki

# Test the Loki driver is working (name the container so it's queryable):
docker run --rm --name loki-test \
  --log-driver=loki \
  --log-opt loki-url="https://loki.yourdomain.com/loki/api/v1/push" \
  alpine echo "test log from $(hostname) at $(date)"

# Query Loki immediately after:
curl -G 'https://loki.yourdomain.com/loki/api/v1/query_range' \
  --data-urlencode 'query={container_name="loki-test"}' \
  --data-urlencode "start=$(date -d '5 minutes ago' +%s)000000000" | jq '.data.result'

# If empty, check Loki is reachable FROM the Docker host:
curl -I https://loki.yourdomain.com/loki/api/v1/push

# Check Loki driver plugin is running:
docker plugin ls | grep loki
# Status should be: enabled

# If plugin is disabled, enable it (use the alias set at install time):
docker plugin enable loki

# Check for driver errors in system journal:
journalctl -u docker --since '10 minutes ago' | grep -i loki

Pro Tips

  • Use Portainer's webhook-based stack updates as your primary deployment trigger — the stack webhook URL is simpler than the full API for straightforward redeploys. Reserve the API for complex operations like creating stacks, managing secrets, and cross-environment orchestration.
  • Implement Portainer in a network segment separate from production traffic — your container management plane should be accessible only via VPN or a dedicated management network. If Portainer is on the same network as your public-facing apps, a compromised app container could potentially reach the management API.
  • Use Portainer Edge groups for fleet-wide stack updates — assign Edge environments to groups (e.g., "EU Region", "Retail Stores") and deploy stacks to entire groups at once. When you update a stack definition, all environments in the group update automatically — no manual per-device deployments.
  • Export your Portainer configuration as backup — periodically export your environment list, team configuration, and stack definitions via the API. A Portainer instance failure without a backup means manually reconnecting every environment and reconfiguring every stack.
  • Set up Portainer's built-in Docker event notifications — Portainer can send webhooks on container events (start, stop, crash). Connect this to your alerting system to get instant notification when any container across your fleet exits unexpectedly.
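
The configuration-backup tip above can be sketched as a small script that dumps the endpoint and stack lists to timestamped JSON files. The URL and token plumbing are placeholders, and the filename helper is the core of the sketch:

```python
# Sketch: dump environment and stack definitions to timestamped JSON files.
# Fetching is the same GET /api/endpoints and GET /api/stacks calls shown
# earlier with a Bearer token; only the file-writing half is shown here.
import json
import time
from pathlib import Path

def backup_filename(resource: str, ts: float) -> str:
    """e.g. endpoints-20240115-0930.json (UTC timestamp)."""
    return f"{resource}-{time.strftime('%Y%m%d-%H%M', time.gmtime(ts))}.json"

def write_backup(resource: str, payload: list, out_dir: str = ".") -> Path:
    path = Path(out_dir) / backup_filename(resource, time.time())
    path.write_text(json.dumps(payload, indent=2))
    return path

# In practice: write_backup("endpoints", endpoints_json)
#              write_backup("stacks", stacks_json)
print(backup_filename("endpoints", 0))  # → endpoints-19700101-0000.json
```
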

Wrapping Up

The three Portainer guides together cover the complete management lifecycle: initial deployment and basic container management, GitOps stacks, RBAC, and multi-environment deployments, and this guide's API automation, Edge deployments, security hardening, and centralized logging.

The API automation layer is what separates a manually operated Portainer installation from one that's genuinely integrated into your DevOps workflow. Once stacks deploy themselves through your CI/CD pipeline, secrets rotate automatically, and resource reports run on a schedule, Portainer stops being the tool you log into to fix things and becomes the infrastructure layer that keeps everything running without requiring your attention.


Need Enterprise Container Management Designed for Your Fleet?

Managing a large container fleet — with Edge deployments across distributed locations, CI/CD pipeline integration, LDAP-backed RBAC, centralized logging, and security hardening appropriate for regulated environments — is a significant infrastructure project. The sysbrix team designs and implements enterprise-grade container management platforms built on Portainer for organizations that need reliability, auditability, and control.
