How to write an effective AGENTS.md that produces consistent, accurate code reviews.
GGA sends your rules file to the AI as the authoritative coding standard for every review. The quality of your reviews depends directly on how well-written this file is.
AGENTS.md is a plain Markdown file that tells the AI what to look for during a review. It defines the rules the AI must enforce, the keywords that control pass/fail behavior, and, optionally, references to more detailed skill files.

By default, GGA looks for AGENTS.md in the project root. You can change this with the RULES_FILE option in your .gga config.
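For example, to keep the rules under docs/ instead of the project root, you would set RULES_FILE in .gga. The key=value syntax and the path below are assumptions for illustration; check the GGA documentation for your version's exact config format:

```
# .gga — key=value syntax assumed for illustration
RULES_FILE=docs/review-rules.md
```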
Your AGENTS.md doubles as documentation for human reviewers. A well-structured rules file helps onboard new contributors just as much as it guides the AI.
Target 100–200 lines. A focused file produces better reviews than a large one. Large prompts introduce noise that degrades AI response quality.
```markdown
# Bad: verbose explanations dilute focus

## TypeScript Guidelines

When writing TypeScript code, it's important to consider type safety.
The `any` type should be avoided because it defeats the purpose of
using TypeScript in the first place. Instead, you should always...

(continues for 50 more lines)
```
```markdown
# Good: direct and actionable

## TypeScript

REJECT if:
- `any` type used
- Missing return types on public functions
- Type assertions without justification
```
The AI scans structured lists faster and more accurately than prose.
```markdown
# Good: scannable structure

## TypeScript/React

REJECT if:
- `import * as React` → use `import { useState }`
- Union types `type X = "a" | "b"` → use `const X = {...} as const`
- `any` type without `// @ts-expect-error` justification

PREFER:
- Named exports over default exports
- Composition over inheritance
```
For large projects or monorepos, reference other files instead of concatenating everything into one place. Claude, Gemini, and Codex have built-in file-reading tools: when they see a reference like `ui/AGENTS.md`, they can read that file for deeper context.
```markdown
# Code Review Rules

## References
- UI guidelines: `ui/AGENTS.md`
- API guidelines: `api/AGENTS.md`
- Shared rules: `docs/CODE-STYLE.md`

---

## Critical Rules (ALL files)

REJECT if:
- Hardcoded secrets/credentials
- `console.log` in production code
- Missing error handling
```
Ollama is a pure LLM without file-reading tools. If you use Ollama and your rules reference external files, consolidate everything into a single file before running the review.
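One way to consolidate is a short script that inlines each referenced file before the review runs. This is a sketch, not part of GGA; it assumes references appear as backticked relative `.md` paths, as in the examples above, and inlines one level deep only:

```python
import re
from pathlib import Path


def consolidate(rules_path: str, output_path: str) -> None:
    """Inline every referenced Markdown file into one self-contained rules file.

    Only resolves references one level deep; references inside inlined
    files are appended as-is rather than followed recursively.
    """
    root = Path(rules_path).parent
    text = Path(rules_path).read_text()

    # Find backticked .md paths, e.g. `ui/AGENTS.md`
    for ref in re.findall(r"`([\w./-]+\.md)`", text):
        ref_file = root / ref
        if ref_file.exists():
            text += f"\n\n---\n## Inlined from {ref}\n\n{ref_file.read_text()}"

    Path(output_path).write_text(text)
```

Run it once before invoking the review, then point Ollama at the combined output file.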
Here is a battle-tested AGENTS.md from a production TypeScript + Python monorepo:
```markdown
# Code Review Rules

## References
- UI details: `ui/AGENTS.md`
- SDK details: `sdk/AGENTS.md`

---

## ALL FILES

REJECT if:
- Hardcoded secrets/credentials
- `any` type (TypeScript) or missing type hints (Python)
- Code duplication (violates DRY)
- Silent error handling (empty catch blocks)

---

## TypeScript/React

REJECT if:
- `import React` → use `import { useState }`
- `var()` or hex colors in className → use Tailwind
- `useMemo`/`useCallback` without justification (React 19 Compiler handles this)
- Missing `"use client"` in client components

PREFER:
- `cn()` for conditional class merging
- Semantic HTML over divs
- Colocated files (component + test + styles)

---

## Python

REJECT if:
- Missing type hints on public functions
- Bare `except:` without specific exception
- `print()` instead of `logger`

REQUIRE:
- Docstrings on all public classes/methods

---

## Response Format

FIRST LINE must be exactly:

STATUS: PASSED

or

STATUS: FAILED

If FAILED, list: `file:line - rule violated - issue`
```
This file is 89 lines, uses clear keywords, and delegates component-specific detail to referenced files.
For large or multi-stack projects, a skill-based approach avoids overloading the AI with irrelevant rules. Instead of one massive file, you define an index that maps file patterns to focused skill files. The AI reads the index, identifies which files are in the diff, and loads only the relevant rules.

Why this matters:

- More context does not equal better reviews. Large prompts introduce noise that degrades response quality.
- A Python-only PR should not load React or Go rules.
- It avoids OS-level argument size limits (`ARG_MAX`) that can cause failures on large PRs.
Structure your AGENTS.md with two parts:
```markdown
# Code Review Rules

## Skill Index

| Trigger (file pattern) | Skill | Location |
|------------------------|-------|----------|
| `*.ts`, `*.tsx` | TypeScript | `docs/skills/typescript.md` |
| `*.tsx`, `*.jsx` | React | `docs/skills/react.md` |
| `*.css`, `*.scss`, `className=` | Styling | `docs/skills/tailwind.md` |
| `*.py` | Python | `docs/skills/python.md` |
| `*.test.*`, `*.spec.*` | Testing | `docs/skills/testing.md` |
| `*.go` | Go | `docs/skills/go.md` |
| `Dockerfile`, `*.yml` | Infrastructure | `docs/skills/infra.md` |

---

## General Rules (always active)

REJECT if:
- Hardcoded secrets or credentials
- `console.log` / `print()` in production code
- Empty catch/except blocks (silent error swallowing)
- Code duplication (DRY violation)
- Missing error handling

REQUIRE:
- Descriptive variable and function names
- Error messages that help debugging

## Response Format

FIRST LINE must be exactly:

STATUS: PASSED

or

STATUS: FAILED

If FAILED, list: `file:line - rule violated - issue`
```
Each skill file is a focused, self-contained set of rules:
```markdown
# TypeScript Review Rules

REJECT if:
- `any` type without `// @ts-expect-error` justification
- Missing return types on exported functions
- Type assertions (`as X`) without comment explaining why
- `enum` used → use `as const` objects instead

PREFER:
- Discriminated unions over type guards
- `satisfies` over type assertions
- Named exports over default exports
```
The AI sees the index, checks which files are in the diff, and reads only the skill files for matching patterns. A PR that only touches `.py` files never loads the TypeScript or React rules.
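The matching step the AI performs can be approximated in a few lines. This sketch is not GGA's actual implementation; it treats a subset of the Skill Index table above as a plain pattern-to-path mapping and matches on file names with glob patterns:

```python
from fnmatch import fnmatch
from pathlib import PurePath

# Pattern → skill file, mirroring rows of the Skill Index table
SKILL_INDEX = {
    "*.ts": "docs/skills/typescript.md",
    "*.tsx": "docs/skills/react.md",
    "*.py": "docs/skills/python.md",
    "*.go": "docs/skills/go.md",
}


def skills_for_diff(changed_files: list[str]) -> set[str]:
    """Return only the skill files whose patterns match files in the diff."""
    return {
        skill
        for f in changed_files
        for pattern, skill in SKILL_INDEX.items()
        if fnmatch(PurePath(f).name, pattern)
    }
```

For a Python-only diff, only the Python skill file is selected:

```python
skills_for_diff(["api/server.py", "api/models.py"])
# → {"docs/skills/python.md"}
```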
The skill-based approach works best with providers that have file-reading capabilities: Claude, Gemini, and Codex. For Ollama or other pure LLMs without tool use, keep all rules in a single self-contained file.