When you run the workflow, LangGraph passes a shared state dictionary through a series of specialist agents orchestrated by the supervisor. Each agent reads the fields it needs, performs its task using the local LLM, and writes its output back to state. The supervisor then inspects the updated state and decides which agent to invoke next. This continues until the email agent completes its draft and the supervisor routes to finished. You see each agent's output printed to the terminal in real time as nodes complete — one node at a time, in streaming fashion.
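The supervisor pattern described above can be sketched without LangGraph as a plain loop over a shared dict. The agent bodies and routing rules below are illustrative stand-ins, not the project's actual logic:

```python
def supervisor(state: dict) -> str:
    """Inspect shared state and name the next node (illustrative rules)."""
    if "candidate_profile" not in state:
        return "resume_parser"
    if "email_content" not in state:
        return "email"
    return "finished"  # routing target that ends the workflow

def resume_parser(state: dict) -> None:
    # A real agent would call the local LLM here; this is a stub.
    state["candidate_profile"] = {"skills": ["python"]}

def email(state: dict) -> None:
    state["email_content"] = "Draft email to the candidate..."

agents = {"resume_parser": resume_parser, "email": email}
state = {"resume_text": "<resume>", "jd_text": "<jd>"}

# Each pass: ask the supervisor for the next node, run it, repeat.
while (next_agent := supervisor(state)) != "finished":
    agents[next_agent](state)  # agent reads state, writes its output back
```

In the real graph the same loop is implicit: LangGraph calls the supervisor node after each agent and follows the edge it names.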
Prepare and run the workflow
Prepare inputs
You need two files before running: a resume in PDF format and a job description as a plain .txt file. Update the path variables at the top of main.py to point to your local files. The extract_pdf_text helper (from src/services/pdf_parsing) reads the PDF and returns plain text. The JD file is opened and read directly.

Build the initial state dict
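As a minimal sketch, a seed dict with the six required fields might look like this. The field names come from the docs; the concrete values are illustrative placeholders, not the ones in main.py:

```python
from typing import List, TypedDict

class HiringState(TypedDict, total=False):
    # Only the startup fields are shown; agents add more keys as they run.
    task: str
    resume_text: str
    jd_text: str
    workflow_stage: str
    completed: bool
    workflow_history: List[str]

initial_state: HiringState = {
    "task": "Run the full hiring workflow",   # illustrative task description
    "resume_text": "<text extracted from the resume PDF>",
    "jd_text": "<contents of the JD .txt file>",
    "workflow_stage": "start",                # illustrative starting stage
    "completed": False,
    "workflow_history": [],
}
```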
The workflow starts from a fixed initial state. This dict seeds the HiringState TypedDict with the raw documents and workflow control fields. Copy this block from main.py exactly — the supervisor expects task, resume_text, jd_text, workflow_stage, completed, and workflow_history to be present at startup.

Call app.stream() and iterate events
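A sketch of the two-loop shape, written as a helper so it runs against any iterable of LangGraph-style events. With the real graph you would pass app.stream(initial_state); the app object and state fields are assumptions from the surrounding docs:

```python
def print_stream(events) -> list:
    """Each event dict holds exactly one {node_name: output_state} pair."""
    finished_nodes = []
    for event in events:                             # outer: one agent completes
        for node_name, node_state in event.items():  # inner: unpack the pair
            print(f"=== {node_name} ===")
            print(node_state)
            finished_nodes.append(node_name)
    return finished_nodes

# With the real graph: print_stream(app.stream(initial_state))
```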
Pass initial_state to app.stream(). LangGraph yields one event dict per node execution. Iterate the outer loop to receive events and the inner loop to unpack the node name and its output state. Each iteration of the outer for loop represents one agent completing its work. The inner event.items() loop unpacks the single key–value pair inside that event.

Complete main.py source
Streaming vs. batch
The workflow streams node by node: app.stream() yields control back to your loop after each agent finishes, so you see output progressively rather than waiting for the entire pipeline to complete. This is useful for long-running workflows because you can observe which agents succeed, which fields are populated, and whether the supervisor is routing correctly — all in real time. If you prefer to wait for the full result, you can call app.invoke(initial_state) instead, which returns the final state dict directly without intermediate events.
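To illustrate the relationship between the two modes: folding every streamed node update into one dict produces the same shape of final state that a single app.invoke(initial_state) call hands back. This is a toy equivalence sketch, not LangGraph internals:

```python
def fold_stream_to_final(events) -> dict:
    """Merge each node's partial state update into one dict; the result
    mirrors the final state a batch app.invoke() call would return."""
    final_state = {}
    for event in events:
        for _node_name, update in event.items():
            final_state.update(update)  # later nodes overwrite earlier keys
    return final_state
```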
Node output reference
| Node | State field printed |
|---|---|
| supervisor | next_agent — the name of the next agent to invoke |
| resume_parser | candidate_profile — structured JSON of parsed resume fields |
| jd_analysis | jd_analysis — structured analysis of the job description |
| matching | matching_analysis — candidate-to-JD fit scoring and gaps |
| candidate_research | research_analysis — background research on the candidate |
| hr_interview | hr_questions — 15 HR questions with evaluation criteria |
| technical_interview | technical_questions — role-specific technical questions |
| ceo_interview | ceo_questions — culture-fit and leadership questions |
| evaluation | evaluation — final hire/no-hire recommendation |
| email | email_content — drafted email to the candidate or hiring manager |
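The table above can be expressed as a small lookup for pulling the headline field out of each node's streamed output. The field names come from the table; the helper itself is an illustrative sketch:

```python
# Node name -> state field to print, per the reference table.
NODE_OUTPUT_FIELD = {
    "supervisor": "next_agent",
    "resume_parser": "candidate_profile",
    "jd_analysis": "jd_analysis",
    "matching": "matching_analysis",
    "candidate_research": "research_analysis",
    "hr_interview": "hr_questions",
    "technical_interview": "technical_questions",
    "ceo_interview": "ceo_questions",
    "evaluation": "evaluation",
    "email": "email_content",
}

def node_output(node_name: str, node_state: dict):
    """Return the node's headline field, or None for an unknown node."""
    field = NODE_OUTPUT_FIELD.get(node_name)
    return node_state.get(field) if field else None
```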