
Podman quadlets

Podman quadlets are declarative systemd-style unit files with .container and .pod extensions that describe containers and pods. Podman's quadlet generator, invoked by systemd on daemon-reload, converts them into standard .service units, so containers are managed entirely by systemd — no separate container daemon required.

Why quadlets instead of Docker Compose

                             Docker Compose               Podman quadlets
Runtime daemon               Required (dockerd)           None — containers are systemd units
Container lifecycle          docker compose up/down       systemctl start/stop
Logs                         docker compose logs          journalctl -u <service>
Rootless support             Partial                      Native
Web management               Docker Desktop / Portainer   Cockpit + cockpit-podman
Docker image compatibility   Native                       podman-docker shim — images run unchanged
The podman-docker package provides a docker CLI shim that translates docker commands to podman. Frigate and Home Assistant images are pulled and run unchanged.

Installing Podman

The image script installs the following packages at build time:
PODMAN_PACKAGES=(
    podman
    podman-docker          # docker CLI shim → podman
    cockpit
    cockpit-podman
    cockpit-networkmanager
    cockpit-storaged       # btrfs/SATA disk management in UI
    cockpit-packagekit     # package updates via UI
    buildah                # image builds if needed
    crun                   # OCI runtime (faster than runc on aarch64)
)

pacstrap /mnt "${PODMAN_PACKAGES[@]}"
crun is preferred over runc on aarch64 — it has lower startup overhead and smaller memory footprint per container.
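Podman picks crun automatically on most modern distributions when it is installed; if you want to pin the runtime explicitly, it can be set in the system-wide containers.conf (a sketch, not part of the image script):

```ini
# /etc/containers/containers.conf
[engine]
# force crun as the OCI runtime for all containers
runtime = "crun"
```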

Quadlet unit directory

System-wide (root) quadlet units go in /etc/containers/systemd/; rootless per-user units go in ~/.config/containers/systemd/. The generator picks up all .container and .pod files in these directories automatically on daemon-reload.
/etc/containers/systemd/
├── pod-hass.pod
├── mosquitto.container
├── homeassistant.container
└── frigate.container

Pod definition

The pod-hass.pod file groups Mosquitto and Home Assistant into a single pod sharing host network:
/etc/containers/systemd/pod-hass.pod
[Pod]
PodName=hass-pod
Network=host
Containers that declare Pod=hass-pod.pod in their [Container] section are automatically joined to this pod.
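For completeness, a minimal mosquitto.container that joins the pod might look like the following (a sketch — the image tag and config path are assumptions, not taken from the image script):

```ini
# /etc/containers/systemd/mosquitto.container
[Unit]
Description=Mosquitto MQTT broker
After=network-online.target
Wants=network-online.target

[Container]
Image=docker.io/library/eclipse-mosquitto:2
ContainerName=mosquitto
# join the pod defined in pod-hass.pod (shares its host network)
Pod=hass-pod.pod
Volume=/etc/mosquitto:/mosquitto/config:Z

[Service]
Restart=on-failure

[Install]
WantedBy=multi-user.target default.target
```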

Container units

homeassistant.container

/etc/containers/systemd/homeassistant.container
[Unit]
Description=Home Assistant
After=network-online.target mosquitto.service
Wants=network-online.target
# ensure mosquitto is up before HA tries to connect

[Container]
Image=ghcr.io/home-assistant/home-assistant:stable
ContainerName=homeassistant
Pod=hass-pod.pod

# networking is inherited from hass-pod, which declares Network=host —
# host networking is needed for mDNS discovery of devices, critical for HA.
# (Podman rejects a container that sets its own network while joining a pod.)

# HA needs system time and host device access for USB dongles
Volume=/etc/homeassistant/config:/config:Z
Volume=/etc/localtime:/etc/localtime:ro
# DBus socket for bluetooth integrations, if used
# (systemd units do not allow trailing comments on value lines)
Volume=/run/dbus:/run/dbus:ro

# USB Zigbee/Zwave dongle — adjust device path
# AddDevice=/dev/ttyUSB0

Environment=TZ=UTC

# pull a newer image when the container (re)starts
PodmanArgs=--pull=newer
# opt in to podman-auto-update
AutoUpdate=registry

[Service]
Restart=on-failure
RestartSec=15
# give HA time to write state cleanly on stop
TimeoutStopSec=60

[Install]
WantedBy=multi-user.target default.target
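frigate.container is referenced throughout but not shown; a minimal sketch follows (the image tag, volume paths, and shared-memory size are assumptions to adapt to your cameras and storage layout):

```ini
# /etc/containers/systemd/frigate.container
[Unit]
Description=Frigate NVR
After=network-online.target mosquitto.service
Wants=network-online.target

[Container]
Image=ghcr.io/blakeblackshear/frigate:stable
ContainerName=frigate
Network=host
Volume=/etc/frigate/config:/config:Z
Volume=/srv/frigate/media:/media/frigate:Z
# shared memory for camera frame buffers
ShmSize=256m
PodmanArgs=--pull=newer

[Service]
Restart=on-failure

[Install]
WantedBy=multi-user.target default.target
```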

hw-health-check.container

The hardware health check container runs at startup to verify that all expected device nodes are present. It logs pass/fail status to the systemd journal, which is visible in Cockpit.
/etc/containers/systemd/hw-health-check.container
[Unit]
Description=Infrastructure Health Check
# Ensure hardware device units are active before checking
# (Requires= alone does not order startup; After= is also needed)
Requires=dev-gps0.device dev-axelera0.device dev-cruiser-iot.device
After=dev-gps0.device dev-axelera0.device dev-cruiser-iot.device

[Container]
Image=docker.io/library/alpine:latest
# Map the device nodes into the container so the check can see them;
# a leading '-' means a missing device is not fatal to container start
AddDevice=-/dev/gps0
AddDevice=-/dev/axelera0
AddDevice=-/dev/cruiser-iot
# Simple shell script to verify devices and log to Cockpit/Journal
Exec=sh -c "for d in /dev/gps0 /dev/axelera0 /dev/cruiser-iot; do \
      if [ -e $$d ]; then echo \"✅ $$d is UP\"; \
      else echo \"❌ $$d is MISSING\"; exit 1; fi; done"

[Service]
Restart=on-failure

[Install]
WantedBy=multi-user.target
The Requires= directives create systemd device unit dependencies — the container will not start until udev has created all three device nodes. If any node is missing at startup, the container exits with a non-zero code, logs the failure, and retries.
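The check loop itself is plain POSIX shell and can be exercised outside the container. A standalone sketch, using files in a temp directory to stand in for the real /dev nodes (cruiser-iot is deliberately left missing to show the failure path):

```shell
#!/bin/sh
# Standalone version of the health-check loop; plain files in a temp
# directory stand in for /dev nodes so it runs on any machine.
tmp=$(mktemp -d)
touch "$tmp/gps0" "$tmp/axelera0"   # cruiser-iot deliberately missing
status=0
for d in "$tmp/gps0" "$tmp/axelera0" "$tmp/cruiser-iot"; do
    if [ -e "$d" ]; then
        echo "OK: $d is UP"
    else
        echo "MISSING: $d"
        status=1
    fi
done
rm -rf "$tmp"
echo "health check status: $status"
```

In the real unit the non-zero exit code is what triggers Restart=on-failure and the journal entry visible in Cockpit.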

zentyal-tpm-wait.service

This is a plain systemd service (not a quadlet) that reads TPM PCR values before slapd or samba-ad-dc start, ensuring the TPM is accessible for LDAP key sealing:
/etc/systemd/system/zentyal-tpm-wait.service
[Unit]
Description=Wait for TPM for Zentyal LDAP Sealing
Before=slapd.service samba-ad-dc.service
ConditionPathExists=/dev/tpm0

[Service]
Type=oneshot
ExecStart=/usr/bin/tpm2_pcrread sha256:0,7
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
ConditionPathExists=/dev/tpm0 makes this unit a no-op on systems without a TPM — it skips silently rather than failing the boot sequence.

How systemd discovers quadlet units

1. Place unit files

Copy .container and .pod files to /etc/containers/systemd/. The image script does this automatically during build.
2. Run daemon-reload

systemctl daemon-reload
The Podman quadlet generator runs as part of daemon-reload and produces .service units from the quadlet files. Generated units are written to a transient generator directory (/run/systemd/generator/) — you do not manage or edit them directly.
3. Verify generated units

systemctl list-units | grep -E 'homeassistant|mosquitto|frigate'
You should see homeassistant.service, mosquitto.service, and frigate.service listed.
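If an expected service is missing, the generator can be run by hand in dry-run mode to print the units it would produce along with any parse errors (the binary path below is common, e.g. on Fedora and Arch, but varies by distribution):

```shell
/usr/lib/systemd/system-generators/podman-system-generator --dryrun
```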
4. Enable and start

systemctl enable --now homeassistant.service
For units with WantedBy=multi-user.target in their [Install] section, the quadlet generator handles activation on first boot — explicit enable is only needed when adding units after the first boot.

First boot sequence

On first boot, the following happens automatically:
systemd starts
  → systemd-generator runs quadlet generator
    → generates homeassistant.service mosquitto.service frigate.service
      → pulls container images (requires network)
        → starts pod-hass pod
            → mosquitto starts
            → homeassistant starts → writes /config → ready on :8123
        → frigate starts → connects MQTT → ready on :5000

cockpit.socket → on-demand activation → :9090
Container image pulls happen on first start and require internet access. On a headless CM5 brought up for the first time, allow a few minutes for pulls to complete before expecting services to respond.

Checking logs

journalctl -u homeassistant.service -f

Cockpit web UI

Cockpit provides a browser-based management interface at https://<cm5-ip>:9090. The cockpit-podman plugin adds a Containers view where you can inspect running containers, view logs, and restart services without SSH.
systemctl enable --now cockpit.socket
Use the Cockpit Storage view (provided by cockpit-storaged) to monitor Btrfs subvolume usage, SATA disk health via SMART, and NVMe health — all without leaving the browser.

Automatic image updates

The image script enables podman-auto-update.timer, which runs podman auto-update on a schedule. Auto-update only considers containers that carry the io.containers.autoupdate=registry label — set in a quadlet via AutoUpdate=registry in the [Container] section. PodmanArgs=--pull=newer on its own only pulls a newer image when the container (re)starts:
systemctl status podman-auto-update.timer
To trigger an immediate update check (add --dry-run to preview which containers would be updated without pulling anything):
podman auto-update
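The timer fires daily by default; the schedule can be adjusted with a standard systemd drop-in (a sketch, picking Sunday 03:00 as an example):

```ini
# /etc/systemd/system/podman-auto-update.timer.d/schedule.conf
[Timer]
# clear the default schedule, then set a weekly one
OnCalendar=
OnCalendar=Sun 03:00
```

After adding the drop-in, run systemctl daemon-reload and restart the timer for the new schedule to take effect.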
