The Reviewer is Feynman’s peer-review subagent. It evaluates drafts, papers, and research artifacts with the rigor of an academic reviewer — checking for unsupported claims, logical gaps, zombie sections, and evaluation design problems. It can also operate as an adversarial auditor when the lead agent frames the task as an evidence integrity check.
Documentation Index
Fetch the complete documentation index at: https://mintlify.com/getcompanion-ai/feynman/llms.txt
Use this file to discover all available pages before exploring further.
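As a minimal sketch, the index can be fetched and split into page entries with nothing beyond the standard library. The URL comes from above; the helper names are illustrative, not part of Feynman's API:

```python
from urllib.request import urlopen

INDEX_URL = "https://mintlify.com/getcompanion-ai/feynman/llms.txt"

def parse_index(text: str) -> list[str]:
    """Keep the non-empty lines of an llms.txt index."""
    return [line for line in text.splitlines() if line.strip()]

def fetch_index(url: str = INDEX_URL) -> list[str]:
    """Download the documentation index and return its entries."""
    with urlopen(url) as resp:
        return parse_index(resp.read().decode("utf-8"))
```

Iterating over `fetch_index()` then gives every page to explore before drilling into a specific topic.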
Role
The Reviewer reads a document end-to-end and evaluates it against standard academic criteria. It checks whether claims are supported by the presented evidence, whether the methodology is sound and described in enough detail to be reproducible, whether the experimental design controls for confounds, and whether the writing is clear and unambiguous. When framed as a verification pass rather than a venue-style review, the Reviewer shifts into adversarial auditor mode — prioritizing evidence integrity over novelty commentary and challenging citation quality directly.
Review checklist
The Reviewer evaluates documents across these dimensions:
- Claims vs. evidence — Does the evidence actually support the claims made?
- Baselines and ablations — Are baselines appropriate? Are ablation studies sufficient?
- Evaluation design — Are there evaluation mismatches or benchmark contamination risks?
- Novelty positioning — Are claims of novelty clearly distinguished from prior work?
- Reproducibility — Could someone replicate this work from the description alone?
- Statistical evidence — Are results supported with sufficient statistical evidence?
- Implementation details — Are critical hyperparameters and architectural choices specified?
- Zombie sections — Are there sections, figures, or tables that survive from earlier drafts without supporting evidence?
- Language calibration — Do conclusions use stronger language than the evidence warrants?
- Fake verification — Do any “verified” or “confirmed” statements fail to show the underlying check?
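One way to make the checklist above operational is to represent the dimensions as data, so a harness can flag which ones a review has not yet covered. The dimension names are taken from the list; the `Finding` structure is a hypothetical sketch, not part of Feynman's API:

```python
from dataclasses import dataclass

# Dimension identifiers mirror the checklist above.
DIMENSIONS = [
    "claims_vs_evidence",
    "baselines_and_ablations",
    "evaluation_design",
    "novelty_positioning",
    "reproducibility",
    "statistical_evidence",
    "implementation_details",
    "zombie_sections",
    "language_calibration",
    "fake_verification",
]

@dataclass
class Finding:
    dimension: str  # one of DIMENSIONS
    note: str       # the reviewer's comment

def uncovered(findings: list[Finding]) -> set[str]:
    """Dimensions the review has not yet commented on."""
    return set(DIMENSIONS) - {f.dimension for f in findings}
```

A review that leaves `uncovered()` non-empty has dimensions still to examine, which matches the rule that the Reviewer does not stop at the first problem.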
The Reviewer keeps looking after finding the first major problem. It does not stop at one issue if others remain visible.
Severity levels
Every weakness is assigned a severity level:
| Level | Meaning | Required action |
|---|---|---|
| FATAL | Fundamental problem that undermines validity | Must be fixed before delivery |
| MAJOR | Significant problem that should be addressed | Note in Open Questions if not fixed |
| MINOR | Suggestion for improvement | May be accepted as-is |
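The severity table maps naturally onto a small enum. The level names and required actions come from the table; the delivery-gating helper is an assumption about how a pipeline might consume them, not documented behavior:

```python
from enum import Enum

class Severity(Enum):
    # Value text is the required action from the table above.
    FATAL = "Must be fixed before delivery"
    MAJOR = "Note in Open Questions if not fixed"
    MINOR = "May be accepted as-is"

def blocks_delivery(findings: list[Severity]) -> bool:
    """A draft cannot ship while any FATAL finding remains."""
    return Severity.FATAL in findings
```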
Output format
The Reviewer produces two sections: a structured review and inline annotations.
Part 1: Structured review
Output file
The Reviewer saves its output to the path specified by the lead agent, with review.md as a fallback.
Manual invocation
You can run the Reviewer directly on a specific task.