After a class has worked through an exercise, the cohort view at /cohort/[id] shows a short Opus-generated narrative of how the cohort engaged with it. The narrative is grounded in aggregate session data — which spec-gate dimensions most students missed on their first submission, how divergences distributed across the drift/revision/bug categories, where alignment failures clustered — and it aims to give you a concrete picture of the exercise’s pedagogical effect, not just counts.
The cohort narrative
Opus generates the narrative from the CohortNarrativeOutput schema:
Narrative
A two-to-three sentence paragraph summarising how the cohort engaged with the exercise overall. The register is concrete and data-grounded — for example: “Six of eight students missed case-sensitivity on their first spec; consider introducing it as an explicit dimension earlier in the unit.” Opus is instructed not to produce generalities like “students found this challenging.”

If the sample is small (fewer than approximately three completed sessions) the narrative opens with an explicit acknowledgement — “Only N sessions completed so far; patterns below are provisional” — and the provisional field is set to true.
Solution techniques
A list of the approaches students commonly used to implement the exercise. Examples: “Most students used a straightforward for-loop accumulator”, “A minority refactored to a sum() comprehension mid-writing.” This list can be empty if the aggregate data does not show a clear pattern.
Common drifts and errors
Recurring divergences from the exercise’s expected patterns — the gaps students most often left implicit in their specs, the bugs that appeared more than once, the edge cases most people skipped. Each item is grounded in the divergence and alignment data from completed sessions.
Strengths
What the cohort did well on this exercise — for example, iterating to a clean spec quickly, or correctly identifying a refactor opportunity. This list can be empty if the data shows nothing notable.
Difficulties
Where the cohort struggled. This is the list most useful for deciding what to address next class or how to adjust the exercise for the next cohort.
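The fields above can be sketched as a TypeScript shape. This is an illustrative guess at the structure, not the real schema definition — only the provisional field name is confirmed by this page; the other field names are assumptions:

```typescript
// Hypothetical sketch of CohortNarrativeOutput. Only `provisional` is a
// confirmed field name from the docs; the rest are illustrative guesses.
interface CohortNarrativeOutput {
  narrative: string;                  // two-to-three sentence cohort summary
  solutionTechniques: string[];       // common implementation approaches (may be empty)
  commonDriftsAndErrors: string[];    // recurring divergences and repeated bugs
  strengths: string[];                // what the cohort did well (may be empty)
  difficulties: string[];             // where the cohort struggled
  provisional: boolean;               // true when fewer than ~3 sessions completed
}

// An example value a small, partially completed cohort might produce:
const example: CohortNarrativeOutput = {
  narrative:
    "Only 2 sessions completed so far; patterns below are provisional. " +
    "Both students missed case-sensitivity on their first spec.",
  solutionTechniques: [],
  commonDriftsAndErrors: ["case-sensitivity left implicit in the spec"],
  strengths: [],
  difficulties: ["identifying implicit spec dimensions"],
  provisional: true,
};
```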
The provisional flag
When fewer than approximately three sessions have completed, the narrative is marked provisional. The cohort view renders this with a visible badge next to the summary header. Patterns identified from one or two sessions carry much less signal than patterns from a full class run, and the narrative text says so explicitly.
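The flag reduces to a simple threshold check. A minimal sketch, assuming the cut-off is exactly three completed sessions (the page only says “approximately three”):

```typescript
// Assumed cut-off: the docs say "approximately three" completed sessions,
// so the exact constant here is a guess.
const PROVISIONAL_THRESHOLD = 3;

function isProvisional(completedSessions: number): boolean {
  return completedSessions < PROVISIONAL_THRESHOLD;
}
```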
The narrative is generated fresh each time you open the cohort view — it reflects all completed sessions at the moment of the request. There is no caching step, so opening the view partway through a lab session will give you a partial picture labelled provisional, while opening it the following day will reflect the full cohort.
The exercise library
The cohorts list at /cohorts shows every published exercise as a card. Each card displays:
- Sessions started and completed (with a completion percentage).
- The median and maximum number of spec iterations students needed.
- The divergence breakdown across drift, revision, and bug.
- The single most-missed spec-gate dimension on first submission.
- The number of help requests the exercise generated.
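The card statistics above are straightforward aggregates over per-session records. A sketch of how they might be computed, with an assumed SessionRecord shape (the real schema is not shown on this page):

```typescript
// Hypothetical per-session record; field names are illustrative assumptions.
interface SessionRecord {
  completed: boolean;
  specIterations: number;
  divergences: { drift: number; revision: number; bug: number };
  missedDimensionsFirstSubmission: string[];
  helpRequests: number;
}

function cardStats(sessions: SessionRecord[]) {
  const done = sessions.filter((s) => s.completed);
  const iters = done.map((s) => s.specIterations).sort((a, b) => a - b);
  const median =
    iters.length === 0
      ? 0
      : iters.length % 2
      ? iters[(iters.length - 1) / 2]
      : (iters[iters.length / 2 - 1] + iters[iters.length / 2]) / 2;

  // Tally first-submission misses to find the single most-missed dimension.
  const missCounts = new Map<string, number>();
  for (const s of done)
    for (const d of s.missedDimensionsFirstSubmission)
      missCounts.set(d, (missCounts.get(d) ?? 0) + 1);
  const mostMissed =
    [...missCounts.entries()].sort((a, b) => b[1] - a[1])[0]?.[0] ?? null;

  return {
    started: sessions.length,
    completed: done.length,
    completionPct: sessions.length ? (done.length / sessions.length) * 100 : 0,
    medianIterations: median,
    maxIterations: iters.length ? iters[iters.length - 1] : 0,
    divergences: done.reduce(
      (acc, s) => ({
        drift: acc.drift + s.divergences.drift,
        revision: acc.revision + s.divergences.revision,
        bug: acc.bug + s.divergences.bug,
      }),
      { drift: 0, revision: 0, bug: 0 }
    ),
    mostMissedDimension: mostMissed,
    helpRequests: sessions.reduce((n, s) => n + s.helpRequests, 0),
  };
}
```

Note that help requests are counted across all sessions, while iteration and divergence stats only consider completed ones; whether the real view makes that distinction is an assumption.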