The Docker Agent is an ephemeral container that clones a job branch, executes the task autonomously, commits results, and creates a pull request. Each job runs in complete isolation with its own environment, preventing state leakage between tasks.
## Agent Backends
The Pope Bot supports two agent backends:
- **Pi Coding Agent**: Third-party agent by @mariozechner. Supports multiple LLM providers (Anthropic, OpenAI, Google, custom/local). Uses a custom skills system.
- **Claude Code CLI**: Anthropic’s official coding agent. Anthropic models only. Built-in tools (Read, Edit, Bash, Glob, Grep, WebSearch, WebFetch) plus MCP support.
### Comparison Table

| Feature | Pi Coding Agent | Claude Code CLI |
|---|---|---|
| LLM providers | Anthropic, OpenAI, Google, custom/local | Anthropic only |
| Tools | Custom skills (brave-search, browser-tools, etc.) | Built-in + MCP |
| Auth | API key (pay-per-token) | OAuth token (subscription) or API key |
| Billing | API credits | Pro/Max subscription or API credits |
| Choose when | Non-Anthropic LLMs, custom Pi skills | Subscription billing, official Anthropic tooling |
## Switching Backends
```bash
# Switch to Claude Code
npx thepopebot set-var AGENT_BACKEND claude-code

# Switch to Pi
npx thepopebot set-var AGENT_BACKEND pi
```
The `AGENT_BACKEND` variable is stored in GitHub repository variables and read by `run-job.yml` to select the correct Docker image.
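The selection can be sketched roughly as follows. This is an illustrative reconstruction only: the `case` mapping and the `VERSION` value are assumptions, not the actual contents of `run-job.yml`.

```bash
# Hypothetical sketch: mapping AGENT_BACKEND to a Docker image tag.
AGENT_BACKEND="${AGENT_BACKEND:-pi}"
case "$AGENT_BACKEND" in
  claude-code) TAG="claude-code-job" ;;
  *)           TAG="pi-coding-agent-job" ;;
esac
VERSION="1.0.0"   # placeholder version
IMAGE="stephengpope/thepopebot:${TAG}-${VERSION}"
echo "$IMAGE"
```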
## Docker Images
All images are tagged `stephengpope/thepopebot:{tag}-{version}`:
| Image | Backend | Purpose |
|---|---|---|
| `pi-coding-agent-job` | Pi | Runs Pi agent for job execution |
| `claude-code-job` | Claude Code | Runs Claude Code CLI for job execution |
Images include:
- Node.js 22
- Git + GitHub CLI
- Agent binary (Pi or Claude Code)
- Puppeteer + Chromium (for the browser-tools skill)
## Container Lifecycle
When `run-job.yml` triggers, the container:
### 1. Environment Setup
**Extract job ID from branch name**

```bash
if [[ "$BRANCH" == job/* ]]; then
  JOB_ID="${BRANCH#job/}"
else
  JOB_ID=$(cat /proc/sys/kernel/random/uuid)
fi
```
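For a typical job branch, the `${BRANCH#job/}` parameter expansion strips the `job/` prefix:

```bash
# Worked example with a sample branch name.
BRANCH="job/550e8400-e29b-41d4-a716-446655440000"
if [[ "$BRANCH" == job/* ]]; then
  JOB_ID="${BRANCH#job/}"   # remove the leading "job/"
fi
echo "$JOB_ID"   # 550e8400-e29b-41d4-a716-446655440000
```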
**Export secrets as environment variables**

Protected secrets (`AGENT_*`) are filtered from the LLM’s bash subprocess:

```bash
# GitHub token, Anthropic API key, etc.
eval $(echo "$SECRETS" | jq -r 'to_entries | .[] | "export \(.key)=\(.value | @sh)"')
```
LLM-accessible secrets (`AGENT_LLM_*`) remain available to skills:

```bash
# Browser passwords, skill API keys, etc.
eval $(echo "$LLM_SECRETS" | jq -r 'to_entries | .[] | "export \(.key)=\(.value | @sh)"')
```
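To see what the `jq` pipeline produces, run it on a sample payload (the secret names and values below are made up; real values arrive in the `SECRETS` JSON):

```bash
# Made-up example payload.
SECRETS='{"GH_TOKEN":"ghp_example","ANTHROPIC_API_KEY":"sk-ant-example"}'
EXPORTS=$(echo "$SECRETS" | jq -r 'to_entries | .[] | "export \(.key)=\(.value | @sh)"')
echo "$EXPORTS"
# export GH_TOKEN='ghp_example'
# export ANTHROPIC_API_KEY='sk-ant-example'
```

The `@sh` filter shell-quotes each value, so secrets containing spaces or special characters survive the `eval`.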
**Configure Git credentials**

```bash
gh auth setup-git

GH_USER_JSON=$(gh api user -q '{name: .name, login: .login, email: .email, id: .id}')
GH_USER_NAME=$(echo "$GH_USER_JSON" | jq -r '.name // .login')
GH_USER_EMAIL=$(echo "$GH_USER_JSON" | jq -r '.email // "\(.id)+\(.login)@users.noreply.github.com"')

git config --global user.name "$GH_USER_NAME"
git config --global user.email "$GH_USER_EMAIL"
```
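The `//` fallbacks mean accounts without a public name or email still get sensible Git identity values. For example, with made-up account data:

```bash
# Hypothetical account with no public name or email.
GH_USER_JSON='{"name":null,"login":"octocat","email":null,"id":583231}'
GH_USER_NAME=$(echo "$GH_USER_JSON" | jq -r '.name // .login')
GH_USER_EMAIL=$(echo "$GH_USER_JSON" | jq -r '.email // "\(.id)+\(.login)@users.noreply.github.com"')
echo "$GH_USER_NAME"    # octocat
echo "$GH_USER_EMAIL"   # 583231+octocat@users.noreply.github.com
```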
**Clone job branch**

```bash
git clone --single-branch --branch "$BRANCH" --depth 1 "$REPO_URL" /job
cd /job
```
### 2. Skill Preparation
- Install skill dependencies
- Start Chrome (if the browser-tools skill is active)
```bash
for skill_dir in /job/skills/active/*/; do
  if [ -f "${skill_dir}package.json" ]; then
    echo "Installing skill deps: $(basename "$skill_dir")"
    (cd "$skill_dir" && npm install --omit=dev --no-package-lock)
  fi
done
```
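A scratch-directory dry run shows which skills the loop picks up: only directories containing a `package.json` are installed. The paths here are illustrative, not the real layout:

```bash
# Build a fake skills tree: one skill with deps, one without.
mkdir -p /tmp/demo-skills/active/browser-tools /tmp/demo-skills/active/no-deps
echo '{}' > /tmp/demo-skills/active/browser-tools/package.json

FOUND=""
for skill_dir in /tmp/demo-skills/active/*/; do
  if [ -f "${skill_dir}package.json" ]; then
    FOUND="$FOUND$(basename "$skill_dir")"
  fi
done
echo "$FOUND"   # browser-tools
```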
### 3. System Prompt Construction
The container builds the agent’s system prompt from config files:
```bash
SYSTEM_FILES=("SOUL.md" "JOB_AGENT.md")

> /job/.pi/SYSTEM.md
for i in "${!SYSTEM_FILES[@]}"; do
  cat "/job/config/${SYSTEM_FILES[$i]}" >> /job/.pi/SYSTEM.md
  if [ "$i" -lt $((${#SYSTEM_FILES[@]} - 1)) ]; then
    echo -e "\n\n" >> /job/.pi/SYSTEM.md
  fi
done

# Resolve {{datetime}} variable
sed -i "s/{{datetime}}/$(date -u +"%Y-%m-%dT%H:%M:%SZ")/g" /job/.pi/SYSTEM.md
```
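The `{{datetime}}` placeholder resolves to the current UTC timestamp. The substitution step alone can be exercised against a temp file like this (note that `sed -i` is the GNU form; BSD/macOS sed would need `-i ''`):

```bash
# Exercise just the datetime substitution in isolation.
TMP=$(mktemp)
echo 'Current UTC time: {{datetime}}' > "$TMP"
sed -i "s/{{datetime}}/$(date -u +"%Y-%m-%dT%H:%M:%SZ")/g" "$TMP"
cat "$TMP"
# e.g. Current UTC time: 2025-01-01T12:00:00Z
```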
### 4. Job Execution
The container reads job metadata from `job.config.json` and runs the agent:
**Pi Coding Agent**
```bash
JOB_CONFIG="/job/logs/${JOB_ID}/job.config.json"
TITLE=$(jq -r '.title // empty' "$JOB_CONFIG")
JOB_DESCRIPTION=$(jq -r '.job // empty' "$JOB_CONFIG")

PROMPT="
# Your Job

${JOB_DESCRIPTION}"

LLM_PROVIDER="${LLM_PROVIDER:-anthropic}"
MODEL_FLAGS="--provider $LLM_PROVIDER"
if [ -n "$LLM_MODEL" ]; then
  MODEL_FLAGS="$MODEL_FLAGS --model $LLM_MODEL"
fi

pi $MODEL_FLAGS -p "$PROMPT" --session-dir "${LOG_DIR}"
```
### 5. Commit Strategy
The container commits based on agent exit code:
**Success (exit 0)**
```bash
# Commit everything including code changes
git add -A
git add -f "${LOG_DIR}"
git commit -m "🤖 Agent Job: ${TITLE}" || true
```
### 6. Pull Request Creation
**Capture log commit SHA**

```bash
LOG_SHA=$(git rev-parse HEAD)
```
**Remove logs from branch**

Logs are removed so they don’t merge into main (they’re preserved in the log commit):

```bash
git rm -rf "${LOG_DIR}"
git commit -m "done." || true
git push origin
```
**Create PR with log permalink**

```bash
REPO_SLUG=$(gh repo view --json nameWithOwner -q .nameWithOwner)
LOG_URL="https://github.com/${REPO_SLUG}/tree/${LOG_SHA}/logs/${JOB_ID}"

gh pr create --title "🤖 Agent Job: ${TITLE}" \
  --body "📋 [View Job Logs](${LOG_URL})"$'\n\n---\n\n'"${JOB_DESCRIPTION}" \
  --base main || true
```
## Environment Variables
Passed to the container by `run-job.yml`:
| Variable | Description | Source |
|---|---|---|
| `REPO_URL` | Repository clone URL | `${{ github.server_url }}/${{ github.repository }}.git` |
| `BRANCH` | Job branch name | `${{ github.ref_name }}` (e.g., `job/550e8400-e29b...`) |
| `SECRETS` | Protected credentials (JSON) | `AGENT_*` GitHub secrets (filtered from LLM) |
| `LLM_SECRETS` | LLM-accessible credentials (JSON) | `AGENT_LLM_*` GitHub secrets |
| `LLM_PROVIDER` | LLM provider | `job.config.json` override or GitHub variable |
| `LLM_MODEL` | LLM model | `job.config.json` override or GitHub variable |
| `OPENAI_BASE_URL` | Custom provider base URL | GitHub variable |
| `AGENT_BACKEND` | Agent backend (`pi` or `claude-code`) | `job.config.json` override or GitHub variable |
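The "override or GitHub variable" precedence can be illustrated with a small sketch. The resolution code here is a hypothetical reconstruction (variable names are taken from the table, but this is not the actual workflow logic):

```bash
# Hypothetical precedence sketch: the per-job value wins over the repo variable.
JOB_CONFIG_JSON='{"llm_provider":"openai","llm_model":"gpt-4o"}'
REPO_VAR_PROVIDER="anthropic"   # value that would come from the GitHub variable

LLM_PROVIDER=$(echo "$JOB_CONFIG_JSON" | jq -r '.llm_provider // empty')
LLM_PROVIDER="${LLM_PROVIDER:-$REPO_VAR_PROVIDER}"
echo "$LLM_PROVIDER"   # openai
```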
## Secret Filtering
Protected secrets (`AGENT_*`) are filtered from the agent’s bash environment using Pi’s env-sanitizer extension:
`.pi/extensions/env-sanitizer/index.js`:

```js
export function bashEnvironmentFilter(environment) {
  const filtered = { ...environment };

  // Filter protected secrets (AGENT_* prefix)
  const protectedKeys = [
    'GH_TOKEN',
    'ANTHROPIC_API_KEY',
    'OPENAI_API_KEY',
    'GOOGLE_API_KEY',
    'CUSTOM_API_KEY'
  ];

  for (const key of protectedKeys) {
    delete filtered[key];
  }

  return filtered;
}
```
This prevents the agent from accidentally leaking secrets through bash commands like `env` or `echo $GH_TOKEN`.

LLM-accessible secrets (`AGENT_LLM_*`) are NOT filtered. Only use this prefix for credentials the agent needs to access (browser passwords, skill API keys, etc.).
## Custom LLM Providers
For OpenAI-compatible endpoints (Ollama, local models, etc.), set `LLM_PROVIDER=custom` and `OPENAI_BASE_URL`:
**Environment**
```bash
LLM_PROVIDER=custom
LLM_MODEL=llama3.2
OPENAI_BASE_URL=http://localhost:11434/v1
CUSTOM_API_KEY=not-needed  # Optional, uses dummy if not set
```
## Claude Code OAuth Token
Claude Pro ($20/mo) and Max ($100+/mo) subscribers can use a 1-year OAuth token instead of API credits:
1. **Install Claude Code CLI**

   ```bash
   npm install -g @anthropic-ai/claude-code
   ```

2. **Generate a token**

   ```bash
   claude setup-token
   # Opens browser for authentication
   # Token starts with sk-ant-oat01-
   ```

3. **Set the GitHub secret**

   ```bash
   npx thepopebot set-agent-secret CLAUDE_CODE_OAUTH_TOKEN
   # Paste token when prompted
   ```
Anthropic only allows OAuth tokens with Claude Code, not the Messages API. Your API key is still required for event handler web chat. Pro users may hit usage limits sooner since limits are shared with Claude.ai.
## Session Logs
Each job creates a directory at `logs/{JOB_ID}/` containing:

- `job.config.json`: Job metadata (title, description, LLM overrides)
- `session-*.jsonl`: Agent session logs (messages, tool calls, responses)
Logs are committed to the job branch, then removed before merging. The PR includes a permalink to the log commit so you can review the agent’s actions.
**job.config.json**
```json
{
  "title": "Update README with installation instructions",
  "job": "Add a detailed installation section to the README...",
  "llm_provider": "anthropic",
  "llm_model": "claude-sonnet-4-20250514"
}
```
## Next Steps
- **Skills System**: Extend your agent with custom skills and tools
- **Architecture**: Review the complete architecture and job lifecycle