The peer review workflow simulates a thorough academic peer review of a paper, draft, or research artifact. It produces severity-graded feedback covering methodology, claims, writing quality, and reproducibility — using the same reviewer subagent that verifies outputs in other Feynman workflows.
Invocation
- CLI
- REPL
Workflow stages
Plan
Before starting, the lead agent outlines what will be reviewed, the review criteria (novelty, empirical rigor, baselines, reproducibility, and so on), and any verification-specific checks needed for claims, figures, and reported metrics. The plan is presented to you for confirmation before proceeding.
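Conceptually, the plan bundles the review scope, the criteria, and any verification-specific checks before you confirm it. A minimal sketch of that structure, assuming hypothetical field names (this is not the workflow's actual schema):

```python
# Hypothetical model of the plan presented for confirmation;
# field and class names are illustrative, not the workflow's real schema.
from dataclasses import dataclass, field

@dataclass
class ReviewPlan:
    scope: str                                  # what will be reviewed
    criteria: list = field(default_factory=lambda: [
        "novelty", "empirical rigor", "baselines", "reproducibility",
    ])
    verification_checks: list = field(default_factory=list)

plan = ReviewPlan(
    scope="draft.pdf",
    verification_checks=["claims", "figures", "reported metrics"],
)
print(plan.criteria)
```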
Gather evidence
For papers or artifacts with associated code and cited work, the researcher subagent gathers evidence: it inspects the paper, the codebase, cited sources, and any linked experimental artifacts, saving findings to <slug>-research.md.

For small or simple artifacts where evidence gathering is overkill, the reviewer subagent is run directly without a prior research pass.

Review
The reviewer subagent reads the document end-to-end against standard academic criteria:
- Are the claims supported by the methodology?
- Does the experimental design have potential confounds?
- Are baselines appropriate and fairly compared?
- Is the paper reproducible from the description given?
- Are reported metrics consistent with the experimental setup?
When evidence was gathered, the reviewer uses <slug>-research.md as source material for inline annotations.

Severity grading
Each piece of feedback is assigned one of three severity levels:
| Severity | Meaning |
|---|---|
| FATAL | Fundamental issues that undermine the paper’s core validity |
| MAJOR | Significant problems that should be addressed before publication |
| MINOR | Suggestions for improvement that do not block acceptance |
If the first review finds FATAL issues and they are fixed, one additional verification-style review pass runs before delivery.
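The FATAL re-review rule can be sketched as a small gating function. This is an illustrative model only, assuming the workflow's internal logic matches the prose above:

```python
# Illustrative sketch of the severity levels and the FATAL re-review rule;
# not the workflow's actual implementation.
from enum import Enum

class Severity(Enum):
    FATAL = "fundamental issues that undermine core validity"
    MAJOR = "significant problems to address before publication"
    MINOR = "suggestions that do not block acceptance"

def needs_verification_pass(first_review: list, fatals_fixed: bool) -> bool:
    """One extra verification-style review pass runs only when the
    first review found FATAL issues and they were subsequently fixed."""
    return Severity.FATAL in first_review and fatals_fixed

print(needs_verification_pass([Severity.FATAL, Severity.MINOR], fatals_fixed=True))   # True
print(needs_verification_pass([Severity.MAJOR], fatals_fixed=True))                   # False
```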
Outputs
| Artifact | Path |
|---|---|
| Research evidence (when gathered) | <slug>-research.md |
| Peer review report | outputs/<slug>-review.md |
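Both artifact paths are keyed by a document slug. A hypothetical sketch of how the slug and the two paths might be derived — the slugging rules here are an assumption, not documented behavior:

```python
import re

def slugify(title: str) -> str:
    # Hypothetical slugging: lowercase, runs of non-alphanumerics become hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def artifact_paths(title: str) -> dict:
    slug = slugify(title)
    return {
        "research": f"{slug}-research.md",
        "review": f"outputs/{slug}-review.md",
    }

print(artifact_paths("Attention Is All You Need"))
# {'research': 'attention-is-all-you-need-research.md',
#  'review': 'outputs/attention-is-all-you-need-review.md'}
```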
Review report structure
- Summary assessment — overall evaluation and recommendation
- Strengths — what the paper does well
- FATAL issues — fundamental problems that must be addressed
- MAJOR issues — significant concerns with suggested fixes
- MINOR issues — smaller improvements and suggestions
- Inline annotations — specific comments tied to sections or claims in the document
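The section order above can be rendered as a report skeleton. A minimal sketch, assuming a hypothetical helper rather than the workflow's own renderer:

```python
# Section order taken from the review report structure documented above.
SECTIONS = [
    ("Summary assessment", "overall evaluation and recommendation"),
    ("Strengths", "what the paper does well"),
    ("FATAL issues", "fundamental problems that must be addressed"),
    ("MAJOR issues", "significant concerns with suggested fixes"),
    ("MINOR issues", "smaller improvements and suggestions"),
    ("Inline annotations", "specific comments tied to sections or claims"),
]

def report_skeleton(title: str) -> str:
    lines = [f"# Peer review: {title}", ""]
    for heading, blurb in SECTIONS:
        lines += [f"## {heading}", f"_{blurb}_", ""]
    return "\n".join(lines)

print(report_skeleton("Example Paper"))
```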
Subagents used
| Subagent | Role |
|---|---|
| researcher | Gathers evidence from the paper, code, and cited work |
| reviewer | Produces the peer review with severity-graded annotations |
Customization
You can focus the review by being specific in your prompt, for example asking it to concentrate only on statistical methodology, baseline fairness, or reproducibility.

Related
- Paper Audit — compare paper claims against a public codebase
- Experiment Replication — execute replication steps to verify experimental claims
- Deep Research — investigate the broader context before reviewing