The supervisor agent is the entry point of the hiring workflow. Every time the graph needs to decide what to do next, control passes back to the supervisor. It reads the current state, inspects which data is already available, consults the LLM, and returns the name of the next agent to call, or `"finished"` when the pipeline is complete. Because routing logic lives in the LLM rather than in hard-coded conditionals, the supervisor can handle partial failures and unexpected state gracefully without you writing extra branches.
## How it works
1. **Read workflow history.** The supervisor loads `workflow_history` from state: a flat list of every agent name called so far. This list feeds both the LLM prompt and the loop-detection guardrail.
2. **Build the agent registry.** A local `agents_info` dict is constructed on every call. It lists all nine agents with their `purpose`, `requires`, and `provides` fields. The LLM receives this registry as JSON so it can reason about what is available and what is still missing.
3. **Check available data.** The supervisor scans the current state and collects every key that is non-empty and is not a workflow-control field (`task`, `next_agent`, `workflow_stage`, `completed`, `error`, `workflow_history`). The resulting list becomes `available_data` in the prompt.
4. **Call the LLM.** A structured prompt is built with the task, current state summary, call counts, the full agent registry, and guardrail instructions. The LLM is asked to return a JSON object with `reasoning`, `next_agent`, and `input_check`.
5. **Parse the JSON response.** The raw response content is stripped of any markdown fences, then parsed with `json.loads`. The `next_agent` and `reasoning` fields are extracted.
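A minimal sketch of step 5, assuming the reply arrives as a plain string; the helper name is hypothetical:

```python
import json

def parse_supervisor_reply(raw: str) -> tuple[str, str]:
    # Strip an optional markdown code fence around the JSON payload.
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    decision = json.loads(text)
    return decision["next_agent"], decision["reasoning"]
```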
## Agent registry

The `agents_info` dictionary is built fresh on every supervisor call and sent to the LLM as JSON. It covers all nine specialist agents.
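A sketch of the registry shape; only the three field names (`purpose`, `requires`, `provides`) come from the text above, while the agent names and entries shown are hypothetical:

```python
agents_info = {
    # Hypothetical entries: the real registry lists all nine agents,
    # each with the same three fields.
    "resume_analyzer": {
        "purpose": "Extract structured candidate data from the resume",
        "requires": ["task"],
        "provides": ["resume_data"],
    },
    "email_generator": {
        "purpose": "Draft the outcome email for the candidate",
        "requires": ["resume_data"],
        "provides": ["email_content"],
    },
    # ... seven more entries in the same shape
}
```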
## Loop detection guardrail
The supervisor tracks how many times each agent has been called by counting occurrences in `workflow_history`.
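A sketch of the counting logic; the flagging threshold of three calls is an assumption, not a documented value:

```python
from collections import Counter

# Example history with a repeated agent; names are illustrative.
state = {"workflow_history": ["resume_analyzer", "resume_analyzer", "resume_analyzer"]}

call_counts = Counter(state.get("workflow_history", []))

# Agents called too often are flagged so the prompt can steer the LLM
# away from them. MAX_CALLS is illustrative.
MAX_CALLS = 3
flagged_agents = [name for name, n in call_counts.items() if n >= MAX_CALLS]
```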
## LLM prompt
The supervisor sends a single structured prompt that includes:

- **TASK**: the free-text task string from state (e.g., `"Run the full hiring pipeline"`)
- **CURRENT STATE**: the `available_data` list, the last five entries of `workflow_history`, and the full `call_counts` dict
- **AGENT REGISTRY**: the `agents_info` dict serialized as pretty-printed JSON
- **GUARDRAILS**: a plain-English description of the loop-prevention rules, including the current `flagged_agents` list
- **RESPONSIBILITIES**: instructions to verify that all `requires` fields for the chosen agent are present in `available_data`, and to route to `"finished"` once `email_content` is present
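A sketch of how those pieces might be assembled, reusing `agents_info`, `call_counts`, and `flagged_agents` from the sketches above; the exact wording and section order are assumptions:

```python
import json

# Illustrative inputs; in the real node these come from the current state.
task = "Run the full hiring pipeline"
available_data = ["resume_data"]
recent_history = ["resume_analyzer"]

prompt = f"""TASK: {task}

CURRENT STATE:
- available data: {available_data}
- recent history (last 5): {recent_history}
- call counts: {dict(call_counts)}

AGENT REGISTRY:
{json.dumps(agents_info, indent=2)}

GUARDRAILS: avoid re-calling flagged agents: {flagged_agents}

RESPONSIBILITIES: confirm every "requires" field of the chosen agent appears in
available data; route to "finished" once email_content is present.

Reply with a JSON object containing "reasoning", "next_agent", and "input_check"."""
```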
## Response format
The supervisor expects the LLM to return a JSON object with exactly three fields: `reasoning`, `next_agent`, and `input_check`. It uses `next_agent` to route the graph and logs `reasoning` to stdout. The `input_check` field is informational and is not used programmatically.
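An example of a well-formed reply; the field values are illustrative:

```json
{
  "reasoning": "resume_data is present but email_content is missing, so route to the email generator next.",
  "next_agent": "email_generator",
  "input_check": "all required inputs for email_generator are available"
}
```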
## State inputs and outputs
| Direction | Fields |
|---|---|
| Reads | `task`, all state fields (presence check), `workflow_history` |
| Writes | `next_agent`, `workflow_stage`, `workflow_history` |
`workflow_history` is extended with the chosen `next_agent` on every successful call, unless the decision is `"finished"`.
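A sketch of that write-back, assuming the node returns a partial state update in the usual LangGraph style; the helper name and `workflow_stage` value are assumptions:

```python
def supervisor_update(decision: str, state: dict) -> dict:
    # Hypothetical helper returning only the fields in the "Writes" row.
    update = {"next_agent": decision, "workflow_stage": "supervisor"}
    if decision != "finished":
        update["workflow_history"] = state.get("workflow_history", []) + [decision]
    return update
```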
## Error handling
If `json.loads` raises an exception, for example because the LLM returned plain text instead of JSON, the supervisor catches it and immediately routes to `"finished"` with the error stored in state.
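A sketch of the fallback, with the function name and error message wording assumed:

```python
import json

def decide(raw_text: str) -> dict:
    try:
        decision = json.loads(raw_text)
        return {"next_agent": decision["next_agent"]}
    except json.JSONDecodeError as exc:
        # Plain-text or malformed reply: stop the pipeline instead of looping,
        # and keep the failure visible in state.
        return {"next_agent": "finished", "error": f"supervisor parse failure: {exc}"}
```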
The supervisor uses the LLM to make routing decisions, not hardcoded logic. This makes it flexible but means the quality of routing depends on the model’s instruction-following ability.