
Portainer Docker Setup: Advanced Stack Management, RBAC, Templates, and Multi-Environment Deployments

Go beyond basic container management — learn how to use Portainer for production stack deployments, role-based access control, custom app templates, automated webhooks, and managing multiple Docker environments from one dashboard.


Portainer's basic interface gets you far: start, stop, inspect containers, view logs, open a console. But teams running production infrastructure on Docker need more — stack deployments that stay in sync with Git, role-based access control that keeps junior developers out of production environments, custom app templates for one-click internal tool deployments, and webhook-triggered rolling updates. This guide covers advanced Portainer Docker setup for teams running real workloads on real servers.

If you haven't set up Portainer yet, start with our Portainer getting started guide which covers installation, Docker socket configuration, and connecting your first environment. This guide picks up from a running Portainer instance with at least one connected environment.


Prerequisites

  • A running Portainer CE or BE instance — see our getting started guide
  • Docker Engine and Docker Compose v2 on at least one connected environment
  • Portainer version 2.19+ recommended — some features covered here require recent releases
  • Admin access to Portainer and SSH access to your Docker hosts for troubleshooting
  • A Git repository (GitHub, GitLab, or Gitea) for GitOps stack deployments

Verify your Portainer version and connected environments:

# Check which Portainer image version is running
docker inspect portainer --format '{{.Config.Image}}'
# (the exact version is also shown in the bottom-left of the Portainer UI)

# Verify all environments are reachable
# In Portainer UI: Environments → check status indicators
# Or via API:
curl -X POST https://portainer.yourdomain.com/api/auth \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "yourpassword"}' | jq .jwt

# Use the JWT for subsequent API calls:
curl https://portainer.yourdomain.com/api/endpoints \
  -H "Authorization: Bearer YOUR_JWT" | jq '[.[] | {id: .Id, name: .Name, status: .Status}]'

Managing Stacks: From Compose to GitOps

Portainer's stack management is where it genuinely earns its place in a production workflow. Rather than SSH-ing into servers and running docker compose up manually, you manage the full lifecycle of multi-service applications from the dashboard.

Deploying a Stack from Docker Compose

Go to Stacks → Add Stack. Give it a name and paste your Compose configuration directly in the editor. Portainer provides syntax highlighting and validates the YAML before deploying. For environment-specific variables, use Portainer's built-in environment variable editor rather than hardcoding values:

# Example production stack with environment variable placeholders
# Variables are defined in Portainer's env editor, not in the Compose file
# (the top-level "version" key is obsolete in Compose v2 and omitted here)

services:
  app:
    image: myregistry.yourdomain.com/myapp:${APP_VERSION}
    restart: unless-stopped
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JWT_SECRET=${JWT_SECRET}
      - NODE_ENV=production
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - app_network

  worker:
    image: myregistry.yourdomain.com/myapp:${APP_VERSION}
    command: node worker.js
    restart: unless-stopped
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
    networks:
      - app_network

networks:
  app_network:
    external: true

In Portainer's Environment Variables section below the editor, define APP_VERSION, DATABASE_URL, REDIS_URL, and JWT_SECRET. These are stored encrypted in Portainer and injected at deploy time — they never appear in your Git history or Compose files.
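Before pasting the file into Portainer, you can sanity-check the interpolation locally with Compose. A minimal sketch, assuming the stack above is saved as compose.yml and the placeholder values below stand in for the ones you will define in Portainer's env editor:

```shell
# Sketch: render the stack locally with the same variables Portainer
# will inject at deploy time (all values below are placeholders)
cat > .env <<'EOF'
APP_VERSION=1.4.2
DATABASE_URL=postgres://app:secret@db:5432/app
REDIS_URL=redis://redis:6379
JWT_SECRET=change-me
EOF

# Prints the fully interpolated YAML and warns about unset variables
docker compose -f compose.yml config
```

If any variable prints as an empty string in the rendered output, add it to Portainer's Environment Variables section before deploying.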

GitOps: Stack Deployments from a Git Repository

Portainer's most powerful stack deployment mode connects a stack to a Git repository: Portainer then redeploys automatically whenever the repository changes. Select Repository as the build method when creating a stack:

# Git repository stack configuration:
# Repository URL: https://github.com/yourorg/yourapp
# Branch: main
# Compose file path: docker/compose.prod.yml

# For private repositories, add credentials:
# Authentication: Username/Password or Deploy Token
# GitHub: use a Personal Access Token as the password
# GitLab: use a project deploy token
# Gitea: use a personal access token

# Directory structure in your repo:
yourapp/
├── docker/
│   ├── compose.prod.yml    ← Portainer deploys this
│   ├── compose.staging.yml
│   └── compose.dev.yml
├── src/
└── ...

# Enable auto-update in Portainer stack settings:
# Auto Update: Polling every 5 minutes
# OR
# Webhook: Portainer gives you a URL — trigger from GitHub Actions

Stack Update Webhooks for CI/CD Integration

Enable the stack's webhook URL (shown in the stack settings) and trigger it from your CI/CD pipeline after a successful build. This gives you GitOps without Portainer needing to poll your repository:

# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push Docker image
        run: |
          docker build -t myregistry.yourdomain.com/myapp:${{ github.sha }} .
          docker push myregistry.yourdomain.com/myapp:${{ github.sha }}

      - name: Update Portainer stack
        run: |
          # Trigger Portainer webhook — pulls latest Compose from Git
          # and redeploys with the new image
          curl -X POST \
            "${{ secrets.PORTAINER_STACK_WEBHOOK_URL }}" \
            --fail

          echo "Portainer stack update triggered"
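
The same trigger works from GitLab CI. A hypothetical equivalent deploy job — the PORTAINER_STACK_WEBHOOK_URL variable name is an assumption, stored as a masked CI/CD variable:

```yaml
# .gitlab-ci.yml — deploy stage only (build/push stages omitted)
deploy:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    - curl -X POST --fail "$PORTAINER_STACK_WEBHOOK_URL"
    - echo "Portainer stack update triggered"
```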

Role-Based Access Control (RBAC)

Portainer's RBAC system lets you define exactly who can do what in each environment. This is the feature that makes Portainer viable for teams — developers can see their application's containers without touching production infrastructure, and operations can manage servers without developers accidentally exposing or deleting critical services.

Understanding Portainer's Permission Model

  • Administrator — full access to everything: environments, users, registries, settings
  • Standard User — access only to resources they own or have been granted access to
  • Teams — groups of users; access is granted at the team level to environments and stacks
  • Environment-level roles (Business Edition): Environment Administrator, Operator, Helpdesk, Standard User, Read-only User — CE distinguishes only administrators and standard users with team-based access

Setting Up Teams and Access

# Portainer RBAC setup via API
# (All of these actions are also available in the UI)

# Step 1: Get admin JWT
JWT=$(curl -s -X POST https://portainer.yourdomain.com/api/auth \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"adminpass"}' | jq -r .jwt)

# Step 2: Create teams
curl -X POST https://portainer.yourdomain.com/api/teams \
  -H "Authorization: Bearer $JWT" \
  -H 'Content-Type: application/json' \
  -d '{"Name": "Backend Engineers"}'

curl -X POST https://portainer.yourdomain.com/api/teams \
  -H "Authorization: Bearer $JWT" \
  -H 'Content-Type: application/json' \
  -d '{"Name": "DevOps"}'

# Step 3: Create users and add to teams
curl -X POST https://portainer.yourdomain.com/api/users \
  -H "Authorization: Bearer $JWT" \
  -H 'Content-Type: application/json' \
  -d '{"Username": "alice", "Password": "temppassword", "Role": 2}'
# Role 1 = Administrator, Role 2 = Standard User

# Step 4: Grant team access to an environment with a specific role
# (assigning granular RoleIds requires Business Edition; in CE, team
# access to an environment is all-or-nothing)
# Environment roles: 1=Environment Admin, 2=Operator, 3=Helpdesk, 4=Standard, 5=Read-only
curl -X PUT https://portainer.yourdomain.com/api/endpoints/1/teamaccesspolicies \
  -H "Authorization: Bearer $JWT" \
  -H 'Content-Type: application/json' \
  -d '{"1": {"RoleId": 4}, "2": {"RoleId": 1}}'  # Team 1 = Standard, Team 2 = Admin

Practical RBAC Configuration for Engineering Teams

A recommended permission structure for a typical engineering team:

# Environment: Production Server
# DevOps team: Environment Administrator (full control)
# Backend Engineers team: Standard User
#   → Can view containers, logs, stats for their stacks
#   → Cannot create/delete networks, volumes, or modify system containers
#   → Cannot access environment settings

# Environment: Staging Server
# DevOps team: Environment Administrator
# Backend Engineers team: Operator
#   → Can start/stop/restart containers they own
#   → Can deploy and update their own stacks
#   → Cannot modify system-level resources

# Environment: Development Server
# All engineers: Environment Administrator
#   → Full freedom to experiment

# In practice: configure via UI at
# Settings → Users → Teams
# Environments → [Environment] → Access Control
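
Once configured, the access policies can be read back over the API to audit who has access where. A sketch using the JWT obtained earlier — the URL is a placeholder:

```shell
# Sketch: list each environment's team access policies
# (RoleIds in TeamAccessPolicies map to the roles listed above)
curl -s https://portainer.yourdomain.com/api/endpoints \
  -H "Authorization: Bearer $JWT" | \
  jq '[.[] | {name: .Name, teamPolicies: .TeamAccessPolicies}]'

# Cross-reference team IDs to team names
curl -s https://portainer.yourdomain.com/api/teams \
  -H "Authorization: Bearer $JWT" | jq '[.[] | {id: .Id, name: .Name}]'
```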

Custom App Templates

Portainer's App Templates feature lets you build a library of pre-configured applications that any team member can deploy with a single click. Instead of writing Docker Compose from scratch every time someone needs a Redis cache or a Postgres database for a new project, they pick from your template library and fill in a few variables.

Creating a Custom Template Library

Portainer reads templates from a JSON URL. Host your template file on your internal Gitea or any web server and point Portainer at it in Settings → App Templates:

{
  "version": "2",
  "templates": [
    {
      "type": 3,
      "title": "PostgreSQL 16",
      "description": "Production-ready PostgreSQL with persistent storage and health checks",
      "categories": ["database"],
      "platform": "linux",
      "logo": "https://www.postgresql.org/media/img/about/press/elephant.png",
      "repository": {
        "url": "https://git.yourdomain.com/templates/databases",
        "stackfile": "postgres/compose.yml"
      },
      "env": [
        {
          "name": "POSTGRES_DB",
          "label": "Database Name",
          "description": "Name of the database to create"
        },
        {
          "name": "POSTGRES_USER",
          "label": "Database User",
          "default": "appuser"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "label": "Database Password",
          "description": "Use a strong random password"
        }
      ]
    },
    {
      "type": 3,
      "title": "Redis 7 with Persistence",
      "description": "Redis with AOF persistence and memory limits configured",
      "categories": ["cache", "database"],
      "platform": "linux",
      "repository": {
        "url": "https://git.yourdomain.com/templates/databases",
        "stackfile": "redis/compose.yml"
      },
      "env": [
        {
          "name": "REDIS_PASSWORD",
          "label": "Redis Password"
        },
        {
          "name": "MAX_MEMORY",
          "label": "Max Memory",
          "default": "256mb"
        }
      ]
    }
  ]
}

Host this file at a URL accessible from Portainer, then set it as the App Templates URL in Portainer's settings. Your team now has a self-service deployment library — no Docker expertise required to spin up a new database or cache.
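
A malformed template file tends to fail silently in the UI — the template list just comes up empty — so it is worth validating the JSON before pointing Portainer at it. A minimal sketch using a tiny sample file (in practice, run this against your real templates.json):

```shell
# Create a small sample to validate (stand-in for your real file)
cat > templates.json <<'EOF'
{
  "version": "2",
  "templates": [
    { "type": 3, "title": "PostgreSQL 16", "categories": ["database"], "platform": "linux" }
  ]
}
EOF

# 1. Syntax check — exits non-zero on invalid JSON
python3 -m json.tool templates.json > /dev/null && echo "valid JSON"

# 2. Spot-check that every template has the required fields
python3 -c '
import json
data = json.load(open("templates.json"))
assert data["version"] == "2"
for t in data["templates"]:
    assert "type" in t and "title" in t
print(len(data["templates"]), "template(s) OK")
'
```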


Multi-Environment Management

One of Portainer's most valuable capabilities for teams running multiple servers: manage every Docker environment — production, staging, development, remote edge servers — from a single dashboard with one login.

Connecting Remote Docker Environments via Agent

# On each remote server you want to manage:
# Deploy the Portainer Agent
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:latest

# Verify the agent is running:
docker logs portainer_agent --tail 10
# Look for a line indicating the agent is listening on port 9001

# In Portainer dashboard:
# Environments → Add Environment → Agent
# Name: Production-Server-1
# Agent URL: https://prod-server-1.yourdomain.com:9001
# Or using IP: http://SERVER_IP:9001

# For TLS-secured agent connections (recommended for production):
# Generate certs and configure agent with --tlscert, --tlskey, --tlscacert flags

Environment Groups and Tags

As your environment count grows, use Portainer's grouping and tagging features to organize them. Go to Environments → Groups to create logical groupings:

  • Production — all production servers; only DevOps team has Admin access
  • Staging — pre-production servers; engineers have Operator access
  • Development — individual dev servers; everyone has full access
  • Edge — remote/IoT servers managed via Portainer Edge

Tags on environments (e.g., region:eu-west, tier:primary) make bulk operations and filtering practical when managing 10+ servers from one Portainer instance.
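
Tags can also be created over the API, which is handy when scripting the onboarding of new environments. A sketch — the tag name is an example, and response field casing can vary by version, so inspect the raw output first:

```shell
# Sketch: create a tag via the API (assign it to environments in the UI
# or via the endpoint update API afterwards)
curl -s -X POST https://portainer.yourdomain.com/api/tags \
  -H "Authorization: Bearer $JWT" \
  -H 'Content-Type: application/json' \
  -d '{"name": "region:eu-west"}'

# List all tags with their IDs
curl -s https://portainer.yourdomain.com/api/tags \
  -H "Authorization: Bearer $JWT" | jq .
```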

Syncing Stacks Across Environments

Deploy the same application to multiple environments with environment-specific variables using Portainer's API. This is useful for promoting a deployment from staging to production:

#!/bin/bash
# promote-to-production.sh
# Promotes a stack from staging to production with different env vars

set -euo pipefail

PORTAINER_URL="https://portainer.yourdomain.com"
STACKFILE="docker/compose.prod.yml"
GIT_REPO="https://github.com/yourorg/yourapp"
APP_VERSION="${1:-latest}"  # Pass version as argument

# Get JWT
JWT=$(curl -s -X POST "${PORTAINER_URL}/api/auth" \
  -H 'Content-Type: application/json' \
  -d "{\"username\":\"admin\",\"password\":\"${PORTAINER_ADMIN_PASSWORD}\"}" | jq -r .jwt)

# Get production environment ID
PROD_ENV_ID=$(curl -s "${PORTAINER_URL}/api/endpoints" \
  -H "Authorization: Bearer ${JWT}" | \
  jq -r '.[] | select(.Name == "Production") | .Id')

# Check if stack already exists
STACK_ID=$(curl -s "${PORTAINER_URL}/api/stacks" \
  -H "Authorization: Bearer ${JWT}" | \
  jq -r ".[] | select(.Name == \"myapp\" and .EndpointId == ${PROD_ENV_ID}) | .Id")

if [ -z "$STACK_ID" ]; then
  echo "Creating new stack in production..."
  # "standalone" for a plain Docker endpoint; use create/swarm/repository for Swarm
  curl -X POST "${PORTAINER_URL}/api/stacks/create/standalone/repository?endpointId=${PROD_ENV_ID}" \
    -H "Authorization: Bearer ${JWT}" \
    -H 'Content-Type: application/json' \
    -d "{
      \"Name\": \"myapp\",
      \"RepositoryURL\": \"${GIT_REPO}\",
      \"ComposeFile\": \"${STACKFILE}\",
      \"Env\": [
        {\"name\": \"APP_VERSION\", \"value\": \"${APP_VERSION}\"},
        {\"name\": \"DATABASE_URL\", \"value\": \"${PROD_DATABASE_URL}\"}
      ]
    }"
else
  echo "Updating existing production stack (ID: ${STACK_ID})..."
  # Git-backed stacks are redeployed via the dedicated git endpoint;
  # a plain PUT /api/stacks/{id} expects inline stack file content instead
  curl -X PUT "${PORTAINER_URL}/api/stacks/${STACK_ID}/git/redeploy?endpointId=${PROD_ENV_ID}" \
    -H "Authorization: Bearer ${JWT}" \
    -H 'Content-Type: application/json' \
    -d "{\"env\": [{\"name\": \"APP_VERSION\", \"value\": \"${APP_VERSION}\"}], \"prune\": true, \"pullImage\": true}"
fi

echo "Production stack updated to version: ${APP_VERSION}"
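
A usage sketch for the script above — note that set -u aborts unless both environment variables are exported first (all values here are placeholders):

```shell
# Hypothetical invocation of promote-to-production.sh
export PORTAINER_ADMIN_PASSWORD='your-admin-password'
export PROD_DATABASE_URL='postgres://app:secret@db:5432/app'

chmod +x promote-to-production.sh
./promote-to-production.sh 1.4.2   # promotes myapp:1.4.2 to Production
```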

Registries, Images, and Private Container Registries

Connecting a Private Registry

Store your built Docker images in a private registry and let Portainer pull from it on every deploy. Go to Registries → Add Registry:

# Connect a self-hosted registry (e.g., Gitea container registry or Docker Registry):
# Registry provider: Custom registry
# Name: Internal Registry
# Registry URL: https://registry.yourdomain.com
# Authentication: enabled
# Username: your-registry-user
# Password: your-registry-token

# Or connect Gitea's built-in registry:
# Registry URL: https://git.yourdomain.com
# Username: your-gitea-username
# Password: your-gitea-token

# Verify registry connectivity after adding:
curl https://portainer.yourdomain.com/api/registries \
  -H "Authorization: Bearer $JWT" | jq '[.[] | {name: .Name, url: .URL}]'

# Test pulling an image from the registry in Portainer:
# Images → Pull Image
# Registry: select your private registry
# Image: org/myapp:latest

Automated Image Updates with Webhooks

Portainer supports container-level webhooks that trigger a pull-and-recreate when a new image version is pushed (a Business Edition feature for standalone containers; stack webhooks and Swarm service webhooks are available in CE). Enable it per-container in Container → Webhooks. Combined with your CI/CD pipeline, this gives you zero-touch deployments:

# The container webhook URL looks like:
# https://portainer.yourdomain.com/api/webhooks/WEBHOOK_TOKEN

# Add to your CI/CD pipeline after pushing a new image:
curl -X POST \
  "https://portainer.yourdomain.com/api/webhooks/YOUR_WEBHOOK_TOKEN" \
  --fail

# Or in GitHub Actions:
- name: Trigger Portainer redeploy
  run: |
    curl -X POST \
      "${{ secrets.PORTAINER_CONTAINER_WEBHOOK }}" \
      --fail
    echo "Redeploy triggered for container"

Tips, Gotchas, and Troubleshooting

Stack Deployment Fails with "Network Not Found"

# If your Compose file references an external network, create it first:
docker network create app_network

# Or check which networks exist on the target environment:
docker network ls

# In Portainer: Networks → Add Network
# This creates the network on the connected environment

# For Compose files using external: true:
# The network must exist BEFORE deploying the stack
# Create it via Portainer's Networks UI or via SSH on the target server

# Verify the network exists from Portainer:
curl https://portainer.yourdomain.com/api/endpoints/ENV_ID/docker/networks \
  -H "Authorization: Bearer $JWT" | jq '[.[] | .Name]'

Git Repository Stack Not Updating

# Check stack's Git configuration:
# Stacks → [Stack] → Editor → Git settings

# Common causes:
# 1. Authentication credentials expired (PAT rotated)
#    → Update credentials in Stack → Git Authentication

# 2. Branch name changed (main vs master)
#    → Verify correct branch is set in stack config

# 3. Compose file path wrong
#    → Check the stackfile path matches your repo structure exactly

# 4. Webhook not reaching Portainer (firewall/network issue)
#    → Test webhook manually:
curl -X POST 'https://portainer.yourdomain.com/api/stacks/webhooks/YOUR_WEBHOOK_TOKEN'

# 5. Force a manual pull and redeploy:
# Stacks → [Stack] → Pull and Redeploy

Portainer Loses Connection to an Environment

# Check agent is running on the remote server:
ssh user@remote-server
docker ps | grep portainer_agent
docker logs portainer_agent --tail 20

# Restart agent if needed:
docker restart portainer_agent

# Verify port 9001 is accessible from Portainer server:
curl -I http://REMOTE_SERVER_IP:9001
# Should return a response (even an error means connectivity works)

# Check firewall on remote server:
sudo ufw status | grep 9001
sudo ufw allow 9001/tcp  # If using UFW and port is blocked

# If using TLS, verify certs haven't expired:
openssl x509 -enddate -noout -in /path/to/agent.crt

Updating Portainer Without Losing Data

# All Portainer data is in the portainer_data volume
# Stop, remove, pull, restart — data persists

docker stop portainer
docker rm portainer
docker pull portainer/portainer-ce:latest

docker run -d \
  -p 8000:8000 \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Verify update and check all environments reconnected:
docker logs portainer --tail 20
curl -I https://portainer.yourdomain.com

# Pin to a specific version for production:
# portainer/portainer-ce:2.20.0

Pro Tips

  • Use Portainer's built-in terminal for emergency access — during an incident, the container console in Portainer is faster than SSH-ing into the server and running docker exec. It's available even when container health checks are failing, as long as the container is running.
  • Enable Portainer activity logging — a Business Edition feature that records every action taken through Portainer with the user, timestamp, and action type, viewable under Activity Logs. Essential for regulated environments and incident post-mortems.
  • Use stack labels for operational metadata — add labels to your stacks like team=backend, app=myapp, criticality=p0. These make filtering and bulk operations practical as your stack count grows.
  • Schedule container restarts for memory-leaking applications — use a scheduled job (Portainer's Edge Jobs for edge environments, or a plain cron entry on the host running docker restart) to bounce the container periodically. Not a fix for memory leaks, but a pragmatic mitigation while the root cause is addressed.
  • Back up your Portainer data volume — it contains all your environment configurations, user accounts, team memberships, stack definitions, and registry credentials. A weekly backup is insurance against a Portainer server failure that would require full reconfiguration.
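
The last tip can be sketched as a short script suitable for a weekly cron, assuming the standard portainer_data named volume from the update section above:

```shell
# Sketch: archive the portainer_data volume to a dated tarball.
# Stopping Portainer first gives a consistent snapshot (brief downtime).
docker stop portainer
docker run --rm \
  -v portainer_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/portainer-data-$(date +%F).tar.gz" -C /data .
docker start portainer

# Verify the archive is readable before trusting it
tar tzf "portainer-data-$(date +%F).tar.gz" | head
```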

Wrapping Up

Advanced Portainer Docker setup turns Portainer from a container inspection tool into a genuine internal deployment platform. GitOps stacks that stay synchronized with your repository, RBAC that gives engineers appropriate access without exposing production infrastructure, custom templates that let any team member self-serve a new database or cache, and multi-server management under one dashboard — this is what makes Docker teams actually productive at scale.

If you're starting from scratch, the Portainer getting started guide gets your initial deployment running in under 30 minutes. Come back here once you're comfortable with the basics and ready to wire up your team's deployment workflow properly.


Need a Production Container Platform Designed for Your Team?

Setting up Portainer with proper RBAC, GitOps pipelines, private registries, and multi-environment management for a real engineering team involves architecture decisions that are hard to undo. The sysbrix team designs and implements container platforms that teams can rely on from day one — not ones they outgrow in six months.

Talk to Us →