
# Agent Format Overview

OpenCode Agents uses a 4-section expert format optimized for LLM consumption. This format replaces the generic upstream template (lists of skills, fictional metrics) with structured decision-making guidance. Quality difference: Upstream agents typically score 3-4/10, while curated agents score 8-9/10.

## The 4-Section Format

Every agent must follow this structure:
  1. Identity — Unheaded paragraph (50-300 words)
  2. ## Decisions — IF/THEN decision trees (≥5 rules)
  3. ## Examples — Code examples (≥3 blocks)
  4. ## Quality Gate — Validation checklist (≥5 items)
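
Assembled into one file, a minimal skeleton of this structure (placeholders only, not a scoring-ready agent) might look like:

```markdown
---
description: "One-line description of the agent"
mode: subagent
permission:
  read: allow
---

Identity paragraph (50-300 words, no heading, versions and year included).

## Decisions

- IF <scenario> → THEN <action>   <!-- ≥5 rules -->

## Examples

<!-- ≥3 fenced code blocks with realistic code -->

## Quality Gate

- [ ] <verifiable check>   <!-- ≥5 items -->
```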

### 1. Identity (Unheaded Paragraph)

The identity appears immediately after the frontmatter, before the first ## heading. It establishes:
- **Role and expertise** — What the agent specializes in
- **Context and versions** — Technologies, frameworks, year references
- **Scope and boundaries** — What the agent does and doesn’t do

Example (from `typescript-pro.md`):
```markdown
---
description: "Expert TypeScript developer specializing in type-safe, maintainable code"
mode: subagent
permission:
  read: allow
  write: allow
  edit: allow
  bash:
    "npm install *": ask
    "npm uninstall *": ask
  task:
    "*": allow
---

I'm an expert TypeScript developer specializing in type-safe, production-grade
TypeScript across Node.js, Deno, and browser environments. I prioritize strict
type safety, modern ESNext+ features (as of 2024), and maintainable architecture.
I follow TypeScript 5.x best practices, enforce strict mode, and leverage advanced
type system features like conditional types, template literals, and branded types.
I write clean, well-documented code with comprehensive type coverage and proper
error handling.
```
**Requirements**:
- 50-300 words (sweet spot: 100-150)
- Mention specific versions (e.g., “TypeScript 5.x”, “Node.js 20+”)
- Include year context (e.g., “as of 2024”, “2023+”)
- No heading — starts immediately after the closing `---`
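
These requirements are mechanically checkable. A rough sketch of such a check (hypothetical logic; the real `quality_scorer.py` may use different heuristics):

```python
import re

def check_identity(paragraph: str) -> dict:
    """Rough sketch of the identity checks described above.

    Hypothetical logic -- the real quality_scorer.py may differ.
    """
    words = len(paragraph.split())
    # Version patterns like "5.x", "20.1", or "20+"
    has_version = bool(re.search(r"\b\d+\.(x|\d+)\b|\b\d+\+", paragraph))
    # Year context like "as of 2024" or "2023+"
    has_year = bool(re.search(r"\b20\d{2}\b", paragraph))
    return {
        "word_count_ok": 50 <= words <= 300,
        "version_pinned": has_version,
        "year_context": has_year,
    }
```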

### 2. Decisions Section

The `## Decisions` section contains IF/THEN decision trees that guide the agent’s behavior in different scenarios.

**Format**:
```markdown
## Decisions

- IF project has tsconfig.json → THEN respect existing compiler options
- IF no types available → THEN create type definitions in `types/` directory
- IF using external API → THEN generate types from OpenAPI/schema
- IF complex type needed → THEN use conditional types and generics
- IF type error unclear → THEN add type assertions with comments explaining why
- IF performance-critical code → THEN profile before optimizing, prefer readability
```
**Requirements**:
- ≥5 decision rules for a score of 5/5
- Use IF/THEN/ELIF/ELSE keywords (case-insensitive)
- Cover key scenarios the agent will encounter
- Be specific and actionable
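
Rules in this shape can be counted mechanically. A sketch of such a counter (illustrative; the actual scorer may parse differently):

```python
import re

def count_decision_rules(section: str) -> int:
    """Count lines that open an IF/THEN-style rule, case-insensitively.

    Illustrative sketch only -- not the real quality_scorer.py logic.
    """
    return len(re.findall(r"^\s*-?\s*IF\b", section,
                          re.IGNORECASE | re.MULTILINE))
```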
Alternative formats (also recognized by the scorer):
```markdown
## Decisions

IF the codebase has existing types:
  THEN preserve type definitions and extend them
  ELSE create new type files in `types/`

IF types are complex:
  THEN break into smaller utility types
  THEN document type parameters
```

### 3. Examples Section

The `## Examples` section provides concrete code examples showing the agent in action.

**Format**:
````markdown
## Examples

**Creating a type-safe API client:**

```typescript
interface User {
  id: string;
  email: string;
  role: 'admin' | 'user';
}

type ApiResponse<T> =
  | { success: true; data: T }
  | { success: false; error: string };

class ApiClient {
  async getUser(id: string): Promise<ApiResponse<User>> {
    try {
      const response = await fetch(`/api/users/${id}`);
      const data = await response.json();
      return { success: true, data };
    } catch (error) {
      return {
        success: false,
        error: error instanceof Error ? error.message : 'Unknown error'
      };
    }
  }
}
```

**Using conditional types for type narrowing:**

```typescript
type UnwrapPromise<T> = T extends Promise<infer U> ? U : T;

// Usage
type A = UnwrapPromise<Promise<string>>; // string
type B = UnwrapPromise<number>;          // number
```

**Branded types for compile-time safety:**

```typescript
type UserId = string & { readonly __brand: 'UserId' };
type Email = string & { readonly __brand: 'Email' };

function getUserById(id: UserId): User { /* ... */ }

// Type error — can't pass raw string
getUserById("123");  // Error!

// Must use type assertion
getUserById("123" as UserId);  // OK
```
````

**Requirements**:
- ≥3 fenced code blocks for a score of 5/5
- Use ` ```language ` syntax (not indented code blocks)
- Show realistic, production-ready code
- Include variety (API calls, types, utils, edge cases)
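
Fenced blocks are what the scorer counts. A minimal sketch of such a count (hypothetical; the real scorer may handle edge cases like unclosed fences differently):

```python
import re

def count_code_blocks(markdown: str) -> int:
    """Count fenced code blocks by pairing opening and closing fences.

    Illustrative sketch -- assumes every fence is properly closed.
    """
    fences = re.findall(r"^```", markdown, re.MULTILINE)
    return len(fences) // 2
```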

### 4. Quality Gate Section

The `## Quality Gate` section is a **validation checklist** the agent must verify before considering work complete.

**Format**:

```markdown
## Quality Gate

Before completing any task, verify:

- [ ] All code compiles without type errors (`tsc --noEmit`)
- [ ] Strict mode enabled in tsconfig.json
- [ ] No `any` types without explicit justification comments
- [ ] Complex types have JSDoc documentation
- [ ] Type coverage ≥95% (use `type-coverage` tool)
- [ ] Error handling with proper Result/Option types
- [ ] Tests pass with type checking enabled
```

**Requirements**:
- ≥5 bullet points for a score of 5/5
- Use `- [ ]` checkbox syntax or simple bullets
- Cover code quality, testing, documentation
- Include measurable criteria when possible

## Quality Scoring System

### The 8 Dimensions

Every agent is scored on 8 dimensions (1-5 scale):
| Dimension | What’s Measured | 5/5 Criteria |
|---|---|---|
| `frontmatter` | description, mode, permission present | All 3 fields exist |
| `identity` | Unheaded paragraph word count | 50-300 words |
| `decisions` | IF/THEN rule count | ≥5 rules |
| `examples` | Fenced code block count | ≥3 blocks |
| `quality_gate` | Bullet point count | ≥5 items |
| `conciseness` | Line count + filler phrase density | 70-120 lines, ≤3% filler |
| `no_banned_sections` | Absence of deprecated headings | 0 banned sections |
| `version_pinning` | Versions and years in identity | Both present |

### Pass Criteria

To be accepted, an agent must meet both conditions:
- Average score ≥ 3.5
- No dimension < 2

### Quality Labels

- **Excellent** (≥4.5) — Top-tier agents
- **Good** (≥3.5) — Acceptable quality
- **Needs improvement** (≥2.5) — Requires revision
- **Poor** (<2.5) — Not acceptable
Current registry stats: 69 agents, 4.59/5 average, 100% pass rate (49 Excellent, 20 Good)
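
Taken together, the pass criteria and labels amount to a simple rule. A sketch (illustrative, not the actual `quality_scorer.py` code):

```python
def pass_and_label(scores: dict):
    """Apply the documented pass criteria and quality labels.

    Illustrative sketch, not the actual quality_scorer.py implementation.
    """
    values = list(scores.values())
    avg = sum(values) / len(values)
    # Pass requires BOTH a high-enough average and no very weak dimension
    passed = avg >= 3.5 and min(values) >= 2
    if avg >= 4.5:
        label = "Excellent"
    elif avg >= 3.5:
        label = "Good"
    elif avg >= 2.5:
        label = "Needs improvement"
    else:
        label = "Poor"
    return passed, label
```

Note that a single dimension scoring 1 fails the agent even if the average still earns an "Excellent" label.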

## Adding a New Agent (Step-by-Step)

### Step 1: Discovery

Identify a gap in the existing 69 agents. Check:
```bash
# List all agents by category
npx github:dmicheneau/opencode-template-agent list

# Search existing agents
npx github:dmicheneau/opencode-template-agent search "your topic"

# List upstream agents available for inspiration
python3 scripts/sync-agents.py --list --tier=extended
```
**Question to ask**: Does this agent provide a unique capability not covered by existing agents?
### Step 2: Create the File

Create a new markdown file in the appropriate category directory:
```bash
# Example: adding a Zig programming language agent
touch agents/languages/zig-pro.md
```
**Naming convention**: Use kebab-case (lowercase with hyphens).
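
The naming rule is easy to validate. A sketch of such a check (hypothetical helper, not part of the repo's tooling):

```python
import re

def is_kebab_case(filename: str) -> bool:
    """Check that an agent filename is kebab-case with a .md extension.

    Hypothetical helper for illustration only.
    """
    return bool(re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*\.md", filename))
```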
### Step 3: Write the Frontmatter

Start with YAML frontmatter:
```yaml
---
description: "Expert Zig developer for systems programming"
mode: subagent
permission:
  read: allow
  write: allow
  edit: allow
  bash:
    "zig build *": allow
    "zig test *": allow
    "*": ask
  task:
    "*": allow
---
```
**Permission guidelines**:
- Most agents: `read`/`write`/`edit` set to `allow`
- Bash: restrict to specific commands, or `ask` for safety
- Task: usually `allow` for all
- MCP: only for MCP-specific agents
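
Before running the scorer, you can sanity-check that the three required frontmatter fields are present. A string-level sketch (illustrative; use a real YAML parser for proper validation):

```python
def has_required_frontmatter(text: str) -> bool:
    """Check the file opens with a --- block containing the required keys.

    Illustrative sketch only -- not a real YAML parser.
    """
    if not text.startswith("---"):
        return False
    parts = text.split("---", 2)
    if len(parts) < 3:
        return False
    head = parts[1]  # everything between the two --- delimiters
    return all(f"{key}:" in head
               for key in ("description", "mode", "permission"))
```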
### Step 4: Write the 4 Sections

Follow the format described above:
1. Identity paragraph (50-300 words, include version + year)
2. `## Decisions` with ≥5 IF/THEN rules
3. `## Examples` with ≥3 code blocks
4. `## Quality Gate` with ≥5 validation items
**Tip**: Look at similar agents in the same category for inspiration.
### Step 5: Run Quality Scorer

Check your agent’s quality score:
```bash
python3 scripts/quality_scorer.py agents/languages/zig-pro.md
```
Expected output:
```text
============================================================
  agents/languages/zig-pro.md
============================================================
  frontmatter                    [#####] 5/5
  identity                       [#####] 5/5
  decisions                      [#####] 5/5
  examples                       [#####] 5/5
  quality_gate                   [####.] 4/5
  conciseness                    [#####] 5/5
  no_banned_sections             [#####] 5/5
  version_pinning                [####.] 4/5
                                 --------
  overall                        4.75/5.00
  label                          Excellent
  passed                         YES
```
**If the score is low**: Review the output, identify weak dimensions, and revise.
### Step 6: Update Manifest and README Scores

Regenerate the manifest and README score tables:
```bash
# Update README.md and README.en.md with new scores
python3 scripts/generate_readme_scores.py

# Verify scores are current (CI will check this)
python3 scripts/generate_readme_scores.py --check
```
**Important**: Always commit the agent file and README updates together.
### Step 7: Test Everything

Run the full test suite:
```bash
# Node.js tests
npm test

# Python tests
python3 tests/run_tests.py

# Plugin tests (if Bun installed)
bun test tests/plugin/
```
All 893 tests should pass.
### Step 8: Commit and Open PR

Commit using conventional commit format:
```bash
git add agents/languages/zig-pro.md README.md README.en.md manifest.json
git commit -m "feat: add zig-pro agent for systems programming"
git push origin feature/zig-pro-agent
```
Open a PR on GitHub. The CI will automatically validate:
- Quality scores are up to date
- YAML frontmatter is valid
- No deprecated `tools:` field
- All tests pass

## Manual Curation Process

**Why manual curation?** Upstream agents (~133 available) follow a generic format optimized for breadth, not depth. The OpenCode Agents format prioritizes:
- Executable guidance (IF/THEN) over skill lists
- Concrete examples over abstract descriptions
- Validation criteria over vague quality claims
- Version specificity over timeless generalities
**Process**:
1. **Discovery** — Use `sync-agents.py --list` to see the upstream catalog
2. **Dry-run fetch** — `gh workflow run "Sync Agents" -f tier=core -f dry_run=true`
3. **Manual rewrite** — Transform into the 4-section format
4. **Quality validation** — Run `quality_scorer.py` to verify a score ≥3.5
5. **Manifest update** — Run `generate_readme_scores.py` to update the tables

**Sync frequency**: No automatic syncing. All new agents are added manually after evaluation.

## Checklist for PR Submission

Before submitting your PR, verify:
- [ ] File uses `kebab-case.md` naming
- [ ] YAML frontmatter is valid (`description`, `mode`, `permission`)
- [ ] Uses the `permission:` field (not the deprecated `tools:`)
- [ ] Mode is `subagent` (or `primary` if in the root directory)
- [ ] Identity paragraph is 50-300 words
- [ ] Identity mentions version numbers and year context
- [ ] `## Decisions` has ≥5 IF/THEN rules
- [ ] `## Examples` has ≥3 fenced code blocks
- [ ] `## Quality Gate` has ≥5 validation items
- [ ] No banned sections (Workflow, Tools, Anti-patterns, Collaboration)
- [ ] Quality score ≥3.5 with no dimension <2
- [ ] README scores regenerated: `python3 scripts/generate_readme_scores.py`
- [ ] All tests pass: `npm test && python3 tests/run_tests.py`
- [ ] Commit message follows the conventional commits format

## Common Pitfalls

**Using the deprecated `tools:` field**

The old `tools:` field is deprecated in OpenCode. Use `permission:` instead.

Don’t:

```yaml
tools:
  - read
  - write
```

Do:

```yaml
permission:
  read: allow
  write: allow
```

**Skipping version pinning**

Always mention specific versions and year context in the identity paragraph.

Don’t: “Expert TypeScript developer”
Do: “Expert TypeScript 5.x developer (as of 2024)”

**Using generic decision rules**

Decision rules should be specific and actionable.

Don’t: “IF quality matters → THEN write good code”
Do: “IF no types available → THEN generate types from OpenAPI schema in `types/` directory”

**Forgetting to update README scores**

Always regenerate README scores after modifying agents:

```bash
python3 scripts/generate_readme_scores.py
git add README.md README.en.md manifest.json agents/
```

The CI will fail if scores are out of date.


## Next Steps

- **Testing** — Learn how to run tests and validate your changes
- **Contributing Overview** — Understand the development workflow and CI/CD pipeline
