This guide walks you through cloning the project, installing dependencies, configuring LM Studio, and running the multi-agent hiring workflow end to end. By the end you will have streamed a complete hiring analysis — candidate profile, matching, interview questions, evaluation, and email draft — from your local machine.
## Set up LM Studio
Download LM Studio and load a model. The project defaults to `gemma-3-4b-it`, but any instruction-tuned model with an OpenAI-compatible server works.

Once you have a model loaded, start the local server from LM Studio's Local Server tab. By default it serves at http://localhost:1234/v1. Keep LM Studio running throughout the workflow.
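To confirm the server is reachable before moving on, you can query its models endpoint (LM Studio exposes the standard OpenAI-compatible routes):

```bash
curl http://localhost:1234/v1/models
```

The response should list the model you loaded.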
## Configure the LLM

Open `llm.py` and confirm (or update) the model name and endpoint to match your LM Studio setup:
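A minimal sketch of what `llm.py` typically contains for this setup, assuming the project builds its client with `langchain-openai`'s `ChatOpenAI`; the exact values in your copy may differ:

```python
# llm.py — LLM client pointed at LM Studio's local OpenAI-compatible server.
# Sketch only: adjust the values to match your LM Studio setup.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gemma-3-4b-it",                # model identifier shown in LM Studio
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # LM Studio accepts any non-empty key
    temperature=0,                        # deterministic output for repeatable runs
)
```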
Change `model` to match the model identifier shown in LM Studio if you loaded a different model. See configuration for a full description of each parameter.

## Update input file paths
Open `main.py` and replace the placeholder paths with absolute paths to your own files (see the sketch after this list):

- `resume_pdf_path` — the absolute path to the candidate's PDF resume
- `jd_text_path` — the absolute path to a plain-text job description file
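For illustration, the assignments in `main.py` might look like this; the paths below are placeholders, not values from the project:

```python
# main.py — input files (replace with absolute paths on your machine)
resume_pdf_path = "/home/you/hiring/candidate_resume.pdf"  # candidate's PDF resume
jd_text_path = "/home/you/hiring/job_description.txt"      # plain-text job description
```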
## Run the workflow
Start the multi-agent pipeline (a typical invocation is sketched below). The workflow streams results node by node to your terminal as each agent completes its task.
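Assuming the entry point is `main.py` at the repository root; adjust if the project ships a different launcher:

```bash
python main.py
```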
## Read the output
Each agent prints a separator block with its node name followed by its output. The supervisor prints the next agent it has chosen; all other agents print the relevant state field they populated. The workflow continues through matching, research, the three interview agents, evaluation, and finally the email agent before the supervisor routes to `finished` and the stream ends.
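If the graph is built with LangGraph (which the node-by-node streaming and supervisor routing suggest), the printing loop is conceptually a stream over state updates; a hedged sketch, where `graph` and `initial_state` are assumed names:

```python
# Stream updates as each node finishes; each chunk maps node name -> state delta.
for update in graph.stream(initial_state, stream_mode="updates"):
    for node_name, node_output in update.items():
        print("=" * 60)        # separator block
        print(node_name)       # which agent just ran
        print(node_output)     # the state field(s) it populated
```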