

This guide walks you through cloning the project, installing dependencies, configuring LM Studio, and running the multi-agent hiring workflow end to end. By the end you will have streamed a complete hiring analysis — candidate profile, matching, interview questions, evaluation, and email draft — from your local machine.
Note: Ensure LM Studio has a model loaded and running before starting. The workflow will fail if the LLM endpoint is unreachable.
Step 1: Clone the repository

Clone the project and move into the project directory:
git clone https://github.com/vrashmanyu605-eng/Langchain_Interview_Multi_Agents_Flow.git && cd Langchain_Interview_Multi_Agents_Flow
Step 2: Install dependencies

Install the required Python packages:
pip install langchain-openai langgraph pymupdf
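Before moving on, it can help to confirm that all three packages are importable. The snippet below is a convenience check suggested by this guide, not part of the repository; note that PyMuPDF is imported under the name fitz:

```python
# Sanity check: confirm the three packages the project imports are installed.
# ("fitz" is PyMuPDF's import name.)
import importlib.util

def missing_packages(modules=("langchain_openai", "langgraph", "fitz")):
    """Return the module names that cannot be found on this interpreter."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

print(missing_packages() or "All dependencies installed.")
```

If the printed list is non-empty, re-run the pip install command above before continuing.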
Step 3: Set up LM Studio

Download LM Studio and load a model. The project defaults to gemma-3-4b-it, but any instruction-tuned model served through an OpenAI-compatible API works.

Once a model is loaded, start the local server from LM Studio’s Local Server tab. By default it serves at http://localhost:1234/v1. Keep LM Studio running throughout the workflow.
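You can verify the server is up by querying its OpenAI-compatible /models endpoint. This helper is an assumption of this guide (not part of the repo), and degrades gracefully when nothing is listening:

```python
# Check that LM Studio's OpenAI-compatible server is reachable and list
# the loaded model identifiers. Returns None if the server is unreachable.
import json
import urllib.error
import urllib.request

def list_loaded_models(base_url="http://localhost:1234/v1", timeout=3):
    """Return the ids of models the server reports, or None on failure."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None

models = list_loaded_models()
print(models if models else "LM Studio server not reachable -- start it from the Local Server tab.")
```

The identifier printed here is the value you will use for the model parameter in the next step.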
Tip: Start with a well-formatted PDF resume for best extraction results.
Step 4: Configure the LLM

Open llm.py and confirm (or update) the model name and endpoint to match your LM Studio setup:
llm.py
from langchain_openai import ChatOpenAI

# Points LangChain at LM Studio's OpenAI-compatible local server.
llm = ChatOpenAI(
    model="gemma-3-4b-it",                # model identifier as shown in LM Studio
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # placeholder; LM Studio ignores the key
    temperature=0.3,
    streaming=True                        # stream tokens as each agent responds
)
Change model to match the model identifier shown in LM Studio if you loaded a different model. See configuration for a full description of each parameter.
Step 5: Update input file paths

Open main.py and replace the placeholder paths with absolute paths to your own files:
main.py
resume_pdf_path = "path/to/your/resume.pdf"
jd_text_path = "path/to/your/job_description.txt"
  • resume_pdf_path — the absolute path to the candidate’s PDF resume
  • jd_text_path — the absolute path to a plain-text job description file
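These two files are typically read into plain strings before the agents see them. The sketch below shows one way to do that, assuming PyMuPDF for the PDF; the function names are illustrative, not taken from the repo:

```python
# Hypothetical sketch of loading the two workflow inputs.

def load_resume_text(resume_pdf_path: str) -> str:
    """Extract plain text from every page of the resume PDF."""
    import fitz  # PyMuPDF; imported lazily so the JD loader works without it
    with fitz.open(resume_pdf_path) as doc:
        return "\n".join(page.get_text() for page in doc)

def load_jd_text(jd_text_path: str) -> str:
    """Read the plain-text job description file."""
    with open(jd_text_path, encoding="utf-8") as f:
        return f.read()
```

Using absolute paths avoids surprises when main.py is launched from a different working directory.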
Step 6: Run the workflow

Start the multi-agent pipeline:
python main.py
The workflow streams results node by node to your terminal as each agent completes its task.
Step 7: Read the output

Each agent prints a separator block with its node name followed by its output. The supervisor prints the next agent it has chosen; all other agents print the relevant state field they populated.
====================================
NODE: SUPERVISOR
Next Agent: resume_parser

====================================
NODE: RESUME_PARSER
{"candidate_name": "Jane Smith", "email": "[email protected]", ...}

====================================
NODE: SUPERVISOR
Next Agent: jd_analysis

====================================
NODE: JD_ANALYSIS
{"required_skills": ["Python", "LangChain", ...], "experience": "3+ years", ...}
The workflow continues through matching, research, the three interview agents, evaluation, and finally the email agent before the supervisor routes to finished and the stream ends.
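The separator blocks above come from a simple streaming loop. The sketch below is illustrative, not the repo's code: the events list stands in for the {node_name: update} mappings that a LangGraph-style stream yields as each agent finishes:

```python
def print_stream(events):
    """Print each node's update in the separator format shown above.
    `events` is any iterable of {node_name: update} mappings."""
    for event in events:
        for node, update in event.items():
            print("=" * 36)
            print(f"NODE: {node.upper()}")
            print(update)
            print()

# Illustrative stand-in events, not real agent output:
print_stream([
    {"supervisor": "Next Agent: resume_parser"},
    {"resume_parser": '{"candidate_name": "Jane Smith"}'},
])
```

Each agent's update is printed as soon as it arrives, which is why results appear node by node rather than all at once at the end.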
