# Agent Format Overview

OpenCode Agents uses a 4-section expert format optimized for LLM consumption. This format replaces the generic upstream template (lists of skills, fictional metrics) with structured decision-making guidance. Quality difference: upstream agents typically score 3-4/10, while curated agents score 8-9/10.

## The 4-Section Format

Every agent must follow this structure:
- Identity — Unheaded paragraph (50-300 words)
- ## Decisions — IF/THEN decision trees (≥5 rules)
- ## Examples — Code examples (≥3 blocks)
- ## Quality Gate — Validation checklist (≥5 items)
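Assembled, a skeletal agent file looks like this (all contents are illustrative placeholders, not a real agent):

````markdown
---
description: <one-line agent description>
mode: subagent
permission: <see the permission guidelines later in this page>
---

Senior <domain> engineer specializing in <stack> (<Framework> 5.x, Node.js 20+,
2024+). Focuses on <in-scope work>; does not handle <out-of-scope work>.

## Decisions
- IF <condition> THEN <action>
- ... (≥5 rules)

## Examples
```text
≥3 fenced code blocks like this one
```

## Quality Gate
- [ ] <validation item>
- ... (≥5 items)
````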
### 1. Identity (Unheaded Paragraph)

The identity appears immediately after the frontmatter, before the first `##` heading. It establishes:
- Role and expertise — What the agent specializes in
- Context and versions — Technologies, frameworks, year references
- Scope and boundaries — What the agent does and doesn’t do
Requirements (for a good example, see `typescript-pro.md`):
- 50-300 words (sweet spot: 100-150)
- Mention specific versions (e.g., “TypeScript 5.x”, “Node.js 20+”)
- Include year context (e.g., “as of 2024”, “2023+”)
- No heading — starts immediately after the frontmatter
---
### 2. Decisions Section

The `## Decisions` section contains IF/THEN decision trees that guide the agent’s behavior in different scenarios.
Format:
- ≥5 decision rules for a score of 5/5
- Use IF/THEN/ELIF/ELSE keywords (case-insensitive)
- Cover key scenarios the agent will encounter
- Be specific and actionable
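For instance, a decision tree for a hypothetical SQL-review agent might read:

```markdown
## Decisions
- IF the query joins more than three tables THEN check index coverage first
- ELIF the query uses SELECT * THEN rewrite with explicit column lists
- IF a migration drops a column THEN require a reversible down migration
- IF data loss is possible THEN ask before executing anything
- ELSE proceed with the standard review checklist
```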
### 3. Examples Section

The `## Examples` section provides concrete code examples showing the agent in action.
Format:
- ≥3 fenced code blocks for a score of 5/5

### 4. Quality Gate Section

The `## Quality Gate` section is a validation checklist the agent applies before declaring work done.
Format:
- ≥5 bullet points for a score of 5/5
- Use `- [ ]` checkbox syntax or simple bullets
- Cover code quality, testing, documentation
- Include measurable criteria when possible
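A hypothetical quality gate meeting these criteria:

```markdown
## Quality Gate
- [ ] All public functions have explicit type annotations
- [ ] Test coverage ≥80% on changed lines
- [ ] No `TODO` comments left in committed code
- [ ] Every exported symbol has a docstring
- [ ] Linter passes with zero warnings
```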
## Quality Scoring System

### The 8 Dimensions

Every agent is scored on 8 dimensions (1-5 scale):

| Dimension | What’s Measured | 5/5 Criteria |
|---|---|---|
| frontmatter | description, mode, permission present | All 3 fields exist |
| identity | Unheaded paragraph word count | 50-300 words |
| decisions | IF/THEN rule count | ≥5 rules |
| examples | Fenced code block count | ≥3 blocks |
| quality_gate | Bullet point count | ≥5 items |
| conciseness | Line count + filler phrase density | 70-120 lines, ≤3% filler |
| no_banned_sections | Absence of deprecated headings | 0 banned sections |
| version_pinning | Versions and years in identity | Both present |
### Pass Criteria

To be accepted, an agent must:
- Average score ≥ 3.5
- AND no dimension < 2
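In code, the pass check amounts to the following (a sketch; the canonical logic lives in `quality_scorer.py`):

```python
# Dimension names from the scoring table above.
DIMENSIONS = [
    "frontmatter", "identity", "decisions", "examples",
    "quality_gate", "conciseness", "no_banned_sections", "version_pinning",
]

def passes(scores: dict) -> bool:
    """Accept only if the average is >= 3.5 AND no dimension is below 2."""
    values = [scores[d] for d in DIMENSIONS]
    return sum(values) / len(values) >= 3.5 and min(values) >= 2
```

Note that both conditions are required: an agent scoring 5 on seven dimensions but 1 on the eighth averages 4.5 yet still fails.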
### Quality Labels
- Excellent (≥4.5) — Top-tier agents
- Good (≥3.5) — Acceptable quality
- Needs improvement (≥2.5) — Requires revision
- Poor (<2.5) — Not acceptable
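As a sketch, the label mapping is a simple threshold ladder over the average score:

```python
def quality_label(avg: float) -> str:
    """Map an average score (1-5 scale) to its quality label."""
    if avg >= 4.5:
        return "Excellent"
    if avg >= 3.5:
        return "Good"
    if avg >= 2.5:
        return "Needs improvement"
    return "Poor"
```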
## Adding a New Agent (Step-by-Step)

### Discovery

Identify a gap in the existing 69 agents. Question to ask: Does this agent provide a unique capability not covered by existing agents?
### Create the File

Create a new markdown file in the appropriate category directory. Naming convention: use `kebab-case` (lowercase with hyphens).

### Write the Frontmatter

Start with YAML frontmatter. Permission guidelines:
- Most agents: `read`/`write`/`edit`: `allow`
- Bash: restrict to specific commands or `ask` for safety
- Task: usually `allow` for all
- MCP: only for MCP-specific agents
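A frontmatter sketch consistent with these guidelines (the nested permission keys shown here are an assumption; copy the exact schema from an existing agent file):

```yaml
---
description: Reviews SQL migrations for safety and reversibility
mode: subagent
permission:
  read: allow
  write: allow
  edit: allow
  bash: ask      # restricted for safety
  task: allow
---
```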
### Write the 4 Sections

Follow the format described above:
- Identity paragraph (50-300 words, include version + year)
- `## Decisions` with ≥5 IF/THEN rules
- `## Examples` with ≥3 code blocks
- `## Quality Gate` with ≥5 validation items
### Run Quality Scorer

Check your agent’s quality score with the scorer script. If the score is low: review the output, identify weak dimensions, and revise.
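The invocation is roughly as follows (the script path and the agent-file argument are assumptions based on the repository layout; adjust to match):

```shell
python3 scripts/quality_scorer.py agents/backend/sql-reviewer.md
```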
### Update Manifest and README Scores

Regenerate the manifest and README score tables. Important: always commit the agent file and README updates together.
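The README score tables are regenerated with the command from the PR checklist:

```shell
python3 scripts/generate_readme_scores.py
```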
## Manual Curation Process

Why manual curation? Upstream agents (~133 available) follow a generic format optimized for breadth, not depth. The OpenCode Agents format prioritizes:
- Executable guidance (IF/THEN) over skill lists
- Concrete examples over abstract descriptions
- Validation criteria over vague quality claims
- Version specificity over timeless generalities
The curation workflow:
- Discovery — use `sync-agents.py --list` to see the upstream catalog
- Dry-run fetch — `gh workflow run "Sync Agents" -f tier=core -f dry_run=true`
- Manual rewrite — transform into the 4-section format
- Quality validation — run `quality_scorer.py` to verify ≥3.5
- Manifest update — run `generate_readme_scores.py` to update tables
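As a command sequence for the first two steps (the `scripts/` path for `sync-agents.py` is an assumption; the `gh` invocation is quoted from the step list above):

```shell
# Step 1: browse the upstream catalog
python3 scripts/sync-agents.py --list

# Step 2: dry-run fetch via the sync workflow
gh workflow run "Sync Agents" -f tier=core -f dry_run=true
```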
## Checklist for PR Submission

Before submitting your PR, verify:
- File uses `kebab-case.md` naming
- YAML frontmatter is valid (description, mode, permission)
- Uses `permission:` field (not deprecated `tools:`)
- Mode is `subagent` (or `primary` if in root directory)
- Identity paragraph is 50-300 words
- Identity mentions version numbers and year context
- `## Decisions` has ≥5 IF/THEN rules
- `## Examples` has ≥3 fenced code blocks
- `## Quality Gate` has ≥5 validation items
- No banned sections (Workflow, Tools, Anti-patterns, Collaboration)
- Quality score ≥3.5 with no dimension <2
- README scores regenerated: `python3 scripts/generate_readme_scores.py`
- All tests pass: `npm test && python3 tests/run_tests.py`
- Commit message follows conventional commits format
## Common Pitfalls

Typical failures seen in review: using the deprecated `tools:` field instead of `permission:`, missing version numbers or year context in the identity, including banned sections (Workflow, Tools, Anti-patterns, Collaboration), and falling outside the 70-120 line conciseness band.

## Resources
- Agent Templates — Browse existing agents
- Quality Scorer Source — Understand scoring logic
- Contributing Guide — Full contribution guidelines
- Issue Templates — Report bugs or request agents
## Next Steps

- Testing — learn how to run tests and validate your changes
- Contributing Overview — understand the development workflow and CI/CD pipeline