Automating Docker with Komodo — Builds, Syncs, and Procedures

Use Komodo's Resource Syncs for GitOps, Procedures for automated workflows, Builds for CI/CD pipelines, and the CLI for headless Docker management.

Our Komodo setup guide walked through deploying Core, installing Periphery agents, and managing a first stack. If that’s where you stopped, you’re using maybe 20% of what Komodo offers.

This article picks up where the setup guide left off — variables and secrets, building Docker images from Git, defining your infrastructure as TOML files, automating multi-step workflows, and managing it all from the command line.

Variables and secrets

If you’ve been hardcoding database passwords and API keys into each stack’s environment block, variables fix that. Define a value once, reference it anywhere — stacks, deployments, builds — using [[VARIABLE_NAME]] interpolation.

There are three places to store them, each with different visibility rules:

  • UI Variables — defined in Settings > Variables in the web UI. Stored in the database, accessible to all resources. Mark a variable as “secret” to hide its value in the UI and logs (admin-only access).
  • Core config secrets — defined in a [secrets] block in core.config.toml. These are never exposed through the API at all.
  • Periphery secrets — defined in periphery.config.toml on each server. These never leave the server — Core can reference them by name but never sees the actual values.
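
As a sketch of the server-side layer, a Periphery secret could be declared like this in periphery.config.toml (the key names here are illustrative; check the config schema in the Komodo docs for the exact format, and treat the value as a placeholder):

```toml
# periphery.config.toml -- secrets that never leave this server
[secrets]
SECRET_API_KEY = "value-only-this-host-knows"
```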

In any resource’s environment field, reference them with double brackets:

DB_HOST = [[DATABASE_ADDRESS]]
API_KEY = [[SECRET_API_KEY]]

At deployment time, Komodo looks up each [[...]] reference across all three layers, writes the resolved values to a .env file on the host, and passes it to Docker Compose via --env-file.
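
For example, if DATABASE_ADDRESS is a UI variable set to 10.0.0.5 and SECRET_API_KEY is a secret (both values made up for illustration), the .env file written to the host would contain the resolved values:

```
DB_HOST=10.0.0.5
API_KEY=the-actual-secret-value
```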

Builds — Docker image CI/CD

Beyond deploying stacks, Komodo has a full image build pipeline: it clones a Git repo, runs optional pre-build commands, executes docker build, and pushes the tagged image to your registry.

1. Create a Builder

Builders define which machine runs your builds. They are found in Settings (not in the sidebar).

2. Create a Build resource

In the Komodo dashboard, navigate to Builds and click New Build. Configure:

  • Git provider — GitHub, GitLab, or a self-hosted instance (configured in core.config.toml)
  • Repository and branch — the repo containing your Dockerfile
  • Builder — select the Builder you created in step 1
  • Image registry and name — where to push the built image (e.g. docker.io/myorg/myapp)
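
Since every Komodo resource also has a TOML form, the same Build can be written declaratively. A rough sketch (field names like builder_id and image_name are approximations; consult the Komodo TOML schema for the exact keys):

```toml
[[build]]
name = "myapp"
[build.config]
git_provider = "github.com"
git_account = "myorg"
repo = "myorg/myapp"
branch = "main"
builder_id = "local-builder"
image_name = "docker.io/myorg/myapp"
```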

Each build auto-increments the patch version and pushes three tags:

myorg/myapp:1.2.3      # version tag
myorg/myapp:latest     # rolling tag
myorg/myapp:abc1234    # commit hash tag

3. Trigger a build

Click Build in the UI to trigger a manual build. Komodo clones the repo on the Builder machine, runs docker build, and pushes the resulting image to your registry.

4. Auto-build with webhooks

To trigger builds automatically on every Git push, add a webhook in your Git provider:

URL format:

https://<KOMODO_HOST>/listener/github/build/<build-name>/build

Set the webhook secret to your KOMODO_WEBHOOK_SECRET value. For GitHub, select “Push” events. Komodo only triggers the build when the push matches the configured branch.
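
GitHub signs each delivery with an HMAC-SHA256 of the request body using the webhook secret. If a build isn't triggering, you can sanity-check what signature GitHub should send for a given payload and secret. This uses GitHub's standard signing scheme, computed here with openssl; the secret and payload are placeholders:

```shell
#!/bin/sh
# Compute the X-Hub-Signature-256 value GitHub would attach to this payload
SECRET="my-komodo-webhook-secret"       # your KOMODO_WEBHOOK_SECRET
PAYLOAD='{"ref":"refs/heads/main"}'     # example push payload fragment

SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
echo "X-Hub-Signature-256: sha256=$SIG"
```

Compare the result against the signature header shown in your Git provider's webhook delivery log.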

Multi-platform builds (advanced)

To build images for multiple architectures (e.g. amd64 and arm64), set up Docker Buildx on the Builder machine:

docker buildx create --name builder --use --bootstrap
docker buildx install

Then add --platform linux/amd64,linux/arm64 to the Build resource’s Extra Args field. Multi-platform builds use QEMU emulation for non-native architectures and take significantly longer.

Resource Syncs — infrastructure as code

Once you have more than a handful of stacks and builds, clicking through the UI to set each one up gets old. Resource Syncs let you declare everything — servers, stacks, builds, variables, user groups — as TOML files in a Git repo. Push a change, and Komodo diffs it against the live state and shows you exactly what would change before applying anything.

1. Create TOML declarations in a Git repo

Create a repository (or a directory in an existing repo) with TOML files declaring your resources. You can split declarations across multiple files and nested folders.

Example stacks.toml:

[[stack]]
name = "monitoring"
[stack.config]
server_id = "docker-host-01"
git_provider = "github.com"
git_account = "myuser"
repo = "myuser/monitoring-stack"
branch = "main"
file_paths = ["compose.yaml"]
environment = """
GRAFANA_PASSWORD = [[GRAFANA_SECRET]]
INFLUXDB_TOKEN = [[INFLUXDB_TOKEN]]
"""

[[stack]]
name = "app-backend"
[stack.config]
server_id = "docker-host-02"
after = ["monitoring"]
git_provider = "github.com"
git_account = "myuser"
repo = "myuser/backend"
branch = "main"
file_paths = ["compose.yaml"]

The after field ensures app-backend deploys only after monitoring is up. Variable interpolation with [[VARIABLE_NAME]] works here too.

2. Create a ResourceSync in Komodo

In the dashboard, navigate to Resource Syncs and click New Sync. Configure:

  • Git provider, account, repo, and branch — pointing to your TOML declarations
  • Resource path — the file(s) or folder(s) containing TOML declarations (e.g. ["stacks.toml", "builds.toml"] or ["resources/"])
  • Match Tags — optionally scope this sync to only manage resources with specific tags

3. Review the diff and execute

Click Refresh to have Komodo compare your TOML declarations against the current state. The UI shows a list of actions: resources to create, update, or delete. Review the diff, then click Execute to apply.

4. Auto-sync with webhooks

Add a webhook in your Git provider to trigger a sync on every push:

https://<KOMODO_HOST>/listener/github/sync/<sync-name>/sync

Now pushing a change to your infrastructure repo automatically updates Komodo’s resources.

Every Komodo resource type has a TOML representation — not just Stacks, but Servers, Builds, Procedures, User Groups, Variables, and everything else. The Komodo docs have the full TOML schema.
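
For instance, variables and servers can sit in the same repo as the stacks. A sketch (field names approximate the schema; check the docs for the exact keys):

```toml
[[variable]]
name = "DATABASE_ADDRESS"
value = "10.0.0.5"

[[server]]
name = "docker-host-01"
[server.config]
address = "https://10.0.0.5:8120"
enabled = true
```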

Procedures — automated workflows

A Procedure chains multiple steps into a workflow. It’s split into stages — stages run one after another, but the executions inside each stage run in parallel. So you can say “build both images at the same time, wait for both to finish, then deploy both stacks at the same time.”

Execution model

Stage 1 (parallel):     RunBuild "api"    +    RunBuild "frontend"

Stage 2 (parallel):     DeployStack "api"  +    DeployStack "frontend"

Stage 3:                SendAlert "deploy-complete"

There are 40+ execution types — everything from RunBuild and DeployStack to PruneImages and BackupCoreDatabase.

Common use cases

  • Build then deploy — Stage 1 builds images, Stage 2 deploys stacks
  • Scheduled cleanup — Prune unused Docker images, containers, and volumes nightly
  • Rolling restarts — Restart stacks across servers in a controlled sequence
  • Backup before deploy — Stage 1 backs up the database, Stage 2 deploys

Batch matching

Batch execution types (like BatchDeployStackIfChanged) support pattern matching:

  • Exact names: my-stack
  • Wildcards: app-*
  • Regex: ^prod-.*$

So BatchDeployStackIfChanged with pattern prod-* redeploys only production stacks that actually have pending changes — no need to list each one.
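
The regex form behaves like ordinary anchored regex matching. Outside Komodo, you can preview which of your stack names a regex would select, for example with grep (illustrative only; Komodo performs the matching internally):

```shell
#!/bin/sh
# Preview which stack names the regex ^prod-.*$ would select
printf '%s\n' prod-api prod-web dev-api staging-web | grep -E '^prod-.*$'
# prints:
# prod-api
# prod-web
```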

Scheduling

Procedures can run on a schedule, written either as a standard cron expression or in a human-readable English format.

Each schedule has its own timezone setting and an enable/disable toggle (so you can pause it without deleting the expression). Individual executions and entire stages also have an enabled = false flag — handy for temporarily skipping a step without ripping apart the procedure.

Example: build-then-deploy procedure in TOML

[[procedure]]
name = "deploy-production"
description = "Build images and deploy all production stacks"
tags = ["production"]

[[procedure.config.stage]]
name = "Build images"
executions = [
  { execution.type = "RunBuild", execution.params.build = "api" },
  { execution.type = "RunBuild", execution.params.build = "frontend" },
]

[[procedure.config.stage]]
name = "Deploy stacks"
executions = [
  { execution.type = "BatchDeployStackIfChanged", execution.params.pattern = "prod-*" },
]

[[procedure.config.stage]]
name = "Notify"
executions = [
  { execution.type = "SendAlert", execution.params.name = "deploy-complete" },
]

Procedures can be triggered manually, via webhook, or on a schedule.

Actions — TypeScript automation (advanced)

Actions let you write TypeScript that runs directly inside Komodo. The komodo client object is pre-authenticated — no API keys to manage — and gives you full read, write, and execute access.

// Example: update all builds to use a release branch
const branch = "release/2.0";

const builds = await komodo.read("ListBuilds", {});
for (const build of builds) {
  await komodo.write("UpdateBuild", {
    id: build.id,
    config: { branch },
  });
}

Use Actions when you need conditional logic, loops, or API calls that Procedures can’t express — bulk config changes, conditional rollbacks, or anything that’s more “script” than “pipeline.” For straightforward build-then-deploy sequences, stick with Procedures.

Actions can be triggered manually, via webhook, or from within a Procedure using the RunAction execution type.

The Komodo CLI

Not everything needs a browser. The km CLI gives you quick access to deployments, builds, variables, and backups from the terminal.

Installation

Install the km binary by following the current instructions in the Komodo docs.

Configuration

Create a config file at ~/.config/komodo/komodo.cli.toml:

host = "https://komodo.example.com"
cli_key = "<your-api-key>"
cli_secret = "<your-api-secret>"

Generate an API key in the Komodo web UI under your user settings.

Common commands

  • km deploy stack my-stack — Deploy a stack
  • km run action my-action -y — Run an Action (skip confirmation)
  • km database backup -y — Trigger a database backup
  • km db restore -y — Restore from the latest backup
  • km set var MY_VAR value -y — Set a variable
  • km update build my-build "version=1.19.0" — Update a Build’s config
  • km x commit my-sync — Commit a managed ResourceSync

Webhooks

We touched on webhooks in the Builds and Resource Syncs sections already, but here’s the full reference. Every Komodo webhook follows the same URL pattern:

https://<HOST>/listener/<AUTH_TYPE>/<RESOURCE_TYPE>/<ID_OR_NAME>/<EXECUTION>

Where:

  • AUTH_TYPE — github (validates X-Hub-Signature-256) or gitlab (validates X-Gitlab-Token)
  • RESOURCE_TYPE — build, repo, stack, sync, procedure, or action
  • EXECUTION — depends on the resource type:
  • Build — /build
  • Repo — /pull, /clone, /build
  • Stack — /deploy, /refresh
  • Sync — /sync, /refresh
  • Procedure — /<branch> or /__ANY__
  • Action — /<branch> or /__ANY__

Alerting

When a container dies, a build fails, or a server’s disk usage crosses a threshold, you probably want to know about it somewhere other than the Komodo UI.

Alerters (configured in Settings, not the sidebar) push notifications to:

  • Slack webhooks
  • Discord webhooks
  • Email
  • Custom webhook URLs

You can whitelist or blacklist specific resources per Alerter, and filter by event type — so your on-call Discord channel only gets production alerts, not every dev stack restart.

Backup and restore

Komodo sets up an automatic backup Procedure on first install — daily at 01:00, gzip-compressed, stored in the /backups mount from your Compose file. It keeps the 14 most recent by default.

To trigger a manual backup or restore via CLI:

# Manual backup
km database backup -y

# Restore from latest backup
km db restore -y

# Restore from a specific backup
km db restore -y --restore-folder 2026-03-01_01-00

What’s next

That covers the main automation features. A few things we didn’t get into here that are worth a look:

  • Permissions and user groups — four levels (None, Read, Execute, Write) with regex pattern matching for bulk grants. Useful once you have multiple team members. Docs.
  • OIDC and OAuth — Keycloak, GitHub, or Google SSO instead of local accounts. Docs.
  • API and SDKs — REST/WebSocket API with client libraries for TypeScript (komodo_client on npm) and Rust (komodo_client on crates.io) if you want to build your own tooling on top.
