GGA sends your rules file to the AI as the authoritative coding standard for every review. The quality of your reviews depends directly on how well-written this file is.

## What is AGENTS.md?

`AGENTS.md` is a plain Markdown file that tells the AI what to look for during a review. It defines the rules the AI must enforce, the keywords that control pass/fail behavior, and optionally references to more detailed skill files. By default, GGA looks for `AGENTS.md` in the project root. You can change this with the `RULES_FILE` option in your `.gga` config.
Your AGENTS.md doubles as documentation for human reviewers. A well-structured rules file helps onboard new contributors just as much as it guides the AI.
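For instance, to point GGA at a rules file kept elsewhere in the repo, set `RULES_FILE` in `.gga`. The option name comes from these docs, but the exact config syntax and the path shown here are illustrative; check your GGA version's configuration reference:

```
# .gga (hypothetical example; the key/value syntax may differ in your version)
RULES_FILE=docs/REVIEW-RULES.md
```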

## Action keywords

Use these keywords to give the AI unambiguous instructions. The AI reads them as signals about how strictly to enforce each rule.
| Keyword | Meaning | AI behavior |
|---------|---------|-------------|
| `REJECT if` | Hard rule — must never appear in code | Returns `STATUS: FAILED` |
| `REQUIRE` | Mandatory pattern — must be present | Returns `STATUS: FAILED` if missing |
| `PREFER` | Soft recommendation | Notes the issue but does not fail |

## Best practices

### 1. Keep it concise

Target 100–200 lines. A focused file produces better reviews than a large one. Large prompts introduce noise that degrades AI response quality.
```markdown
# Bad: verbose explanations dilute focus

## TypeScript Guidelines

When writing TypeScript code, it's important to consider type safety.
The `any` type should be avoided because it defeats the purpose of
using TypeScript in the first place. Instead, you should always...
(continues for 50 more lines)
```

```markdown
# Good: direct and actionable

## TypeScript

REJECT if:

- `any` type used
- Missing return types on public functions
- Type assertions without justification
```

### 2. Use bullet points, not paragraphs

The AI scans structured lists faster and more accurately than prose.
```markdown
# Good: scannable structure

## TypeScript/React

REJECT if:

- `import * as React` → use `import { useState }`
- Union types `type X = "a" | "b"` → use `const X = {...} as const`
- `any` type without `// @ts-expect-error` justification

PREFER:

- Named exports over default exports
- Composition over inheritance
```

### 3. Use references for complex projects

For large projects or monorepos, reference other files instead of concatenating everything into one place. Claude, Gemini, and Codex have built-in file-reading tools — when they see a reference like `ui/AGENTS.md`, they can read it for deeper context.
```markdown
# Code Review Rules

## References

- UI guidelines: `ui/AGENTS.md`
- API guidelines: `api/AGENTS.md`
- Shared rules: `docs/CODE-STYLE.md`

---

## Critical Rules (ALL files)

REJECT if:

- Hardcoded secrets/credentials
- `console.log` in production code
- Missing error handling
```
Ollama is a pure LLM without file-reading tools. If you use Ollama and your rules reference external files, consolidate everything into a single file before running the review.

### 4. Always include a response format section

Tell the AI exactly how to structure its output. GGA looks for `STATUS: PASSED` or `STATUS: FAILED` in the first 15 lines of the response.
```markdown
## Response Format

FIRST LINE must be exactly:
STATUS: PASSED
or
STATUS: FAILED

If FAILED, list: `file:line - rule violated - issue`
```

## Complete example

Here is a battle-tested AGENTS.md from a production TypeScript + Python monorepo:
```markdown
# Code Review Rules

## References

- UI details: `ui/AGENTS.md`
- SDK details: `sdk/AGENTS.md`

---

## ALL FILES

REJECT if:

- Hardcoded secrets/credentials
- `any` type (TypeScript) or missing type hints (Python)
- Code duplication (violates DRY)
- Silent error handling (empty catch blocks)

---

## TypeScript/React

REJECT if:

- `import React` → use `import { useState }`
- `var()` or hex colors in className → use Tailwind
- `useMemo`/`useCallback` without justification (React 19 Compiler handles this)
- Missing `"use client"` in client components

PREFER:

- `cn()` for conditional class merging
- Semantic HTML over divs
- Colocated files (component + test + styles)

---

## Python

REJECT if:

- Missing type hints on public functions
- Bare `except:` without specific exception
- `print()` instead of `logger`

REQUIRE:

- Docstrings on all public classes/methods

---

## Response Format

FIRST LINE must be exactly:
STATUS: PASSED
or
STATUS: FAILED

If FAILED, list: `file:line - rule violated - issue`
```
This file is 89 lines, uses clear keywords, and delegates component-specific detail to referenced files.

## Skill-based approach for large projects

For large or multi-stack projects, a skill-based approach avoids overloading the AI with irrelevant rules. Instead of one massive file, you define an index that maps file patterns to focused skill files. The AI reads the index, identifies which files are in the diff, and loads only the relevant rules.

Why this matters:

- More context does not equal better reviews. Large prompts introduce noise that degrades response quality.
- A Python-only PR should not load React or Go rules.
- Avoids OS-level argument size limits (`ARG_MAX`) that can cause failures on large PRs.
Structure your AGENTS.md with two parts:
```markdown
# Code Review Rules

## Skill Index

| Trigger (file pattern) | Skill | Location |
|------------------------|-------|----------|
| `*.ts`, `*.tsx` | TypeScript | `docs/skills/typescript.md` |
| `*.tsx`, `*.jsx` | React | `docs/skills/react.md` |
| `*.css`, `*.scss`, `className=` | Styling | `docs/skills/tailwind.md` |
| `*.py` | Python | `docs/skills/python.md` |
| `*.test.*`, `*.spec.*` | Testing | `docs/skills/testing.md` |
| `*.go` | Go | `docs/skills/go.md` |
| `Dockerfile`, `*.yml` | Infrastructure | `docs/skills/infra.md` |

---

## General Rules (always active)

REJECT if:
- Hardcoded secrets or credentials
- `console.log` / `print()` in production code
- Empty catch/except blocks (silent error swallowing)
- Code duplication (DRY violation)
- Missing error handling

REQUIRE:
- Descriptive variable and function names
- Error messages that help debugging

## Response Format

FIRST LINE must be exactly:
STATUS: PASSED
or
STATUS: FAILED

If FAILED, list: `file:line - rule violated - issue`
```
Each skill file is a focused, self-contained set of rules:
```markdown
# TypeScript Review Rules

REJECT if:
- `any` type without `// @ts-expect-error` justification
- Missing return types on exported functions
- Type assertions (`as X`) without comment explaining why
- `enum` used → use `as const` objects instead

PREFER:
- Discriminated unions over type guards
- `satisfies` over type assertions
- Named exports over default exports
```
The AI sees the index, checks which files are in the diff, and reads only the skill files for matching patterns. A PR that only touches `.py` files never loads the TypeScript or React rules.
The skill-based approach works best with providers that have file-reading capabilities: Claude, Gemini, and Codex. For Ollama or other pure LLMs without tool use, keep all rules in a single self-contained file.
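The index-to-diff matching step can be sketched in Python. The helper below is hypothetical (GGA's real selection logic may differ), and the patterns are taken from the example skill index above:

```python
from fnmatch import fnmatch

# Trigger patterns copied from the example skill index above.
# Hypothetical helper for illustration, not GGA's actual implementation.
SKILL_INDEX = {
    "docs/skills/typescript.md": ["*.ts", "*.tsx"],
    "docs/skills/react.md": ["*.tsx", "*.jsx"],
    "docs/skills/python.md": ["*.py"],
    "docs/skills/testing.md": ["*.test.*", "*.spec.*"],
}

def skills_for_diff(changed_files: list[str]) -> set[str]:
    """Return only the skill files whose trigger patterns match the diff."""
    return {
        skill
        for skill, patterns in SKILL_INDEX.items()
        for path in changed_files
        # match on the basename so patterns like *.py work at any depth
        if any(fnmatch(path.rsplit("/", 1)[-1], pattern) for pattern in patterns)
    }
```

A Python-only diff selects only `docs/skills/python.md`; the TypeScript and React rules never enter the prompt.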
