Every agent in this workflow is a plain Python function that constructs an f-string prompt, invokes the LLM, and returns a dict with one output field. Because all configuration lives directly in source code rather than a config file, customizing the workflow means editing Python: there are no framework-specific abstractions to learn. This page walks you through the three most common changes: editing a prompt, adding a new state field, and adjusting LLM behavior.
Changing an agent’s prompt
Each agent builds its prompt as an f-string and passes it to `llm.invoke()`. To change what an agent does, edit the text of that f-string.

The full prompt lives in `resume_parser_agent.py`.
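A minimal sketch of the agent's structure, assuming the shared `llm` instance from `llm.py`, a `resume_text` key on the state, and illustrative field names in the `Extract:` list (consult the source file for the exact prompt wording):

```python
# agents/resume_parser_agent.py -- representative sketch, not the verbatim file
from llm import llm  # the shared ChatOpenAI instance

def resume_parser_agent(state):
    # The field names in the Extract: list below are illustrative
    prompt = f"""You are a resume parser. Read the resume below.

Resume:
{state['resume_text']}

Extract:
- name
- email
- skills
- years_of_experience

Return structured JSON only."""
    response = llm.invoke(prompt)
    # LangGraph merges this dict into the shared state
    return {"candidate_profile": response.content}
```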
To make the agent extract something new, say `languages_spoken`, append it to the `Extract:` list:
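Continuing the sketch above, the `Extract:` section of the f-string becomes:

```text
Extract:
- name
- email
- skills
- years_of_experience
- languages_spoken
```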
To change the output format, replace the final instruction `Return structured JSON only.` with your preferred instruction. Keep in mind that downstream agents (such as `matching` and `hr_interview`) read `candidate_profile` and expect JSON, so changing the format requires updating those agents too.
Adding output fields
Follow these steps any time you want a new agent to produce a new piece of data. The examples below use `salary_benchmark` as the new field.

Step 1: Add the field to HiringState
Open `graph/state.py` and add your new field to the `HiringState` TypedDict. All fields are optional by default because `total=False` is set on the class.
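A sketch of the change, with the existing fields abbreviated (the field names shown are illustrative; the real class has more entries):

```python
# graph/state.py -- sketch; existing fields shown are illustrative
from typing import TypedDict

class HiringState(TypedDict, total=False):
    resume_text: str
    candidate_profile: str
    # ... other existing fields ...
    salary_benchmark: str  # the new field
```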
Step 2: Return the field from your agent

In your agent function, add the new key to the returned dict. LangGraph merges this dict into the shared state, so all subsequent agents will see `salary_benchmark` available.
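For example, a hypothetical agent producing the field might look like this (the file name, function name, and prompt are illustrative):

```python
# agents/salary_benchmark_agent.py -- hypothetical example
from llm import llm

def salary_benchmark_agent(state):
    prompt = f"Suggest a salary range for this candidate:\n{state['candidate_profile']}"
    response = llm.invoke(prompt)
    # LangGraph merges this into the shared state; downstream agents can
    # now read state['salary_benchmark']
    return {"salary_benchmark": response.content}
```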
Step 3: Register the field with the supervisor

Open `agents/supervisor.py` and find the `agents_info` dict. Add your agent's entry (or update an existing one) so the supervisor knows what the agent provides and requires. This is how the supervisor decides whether the required inputs are available before routing.
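The exact schema of `agents_info` is defined in the source file; assuming it maps agent names to the fields they provide and require, the new entry might look like:

```python
# agents/supervisor.py -- sketch; the "provides"/"requires" key names are
# assumptions based on the description above, not confirmed from the source
agents_info = {
    # ... existing entries ...
    "salary_benchmark": {
        "provides": ["salary_benchmark"],
        "requires": ["candidate_profile"],
    },
}
```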
Adjusting LLM temperature

The temperature is set in `llm.py`:
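A sketch of the file, assuming the model name (the shared `ChatOpenAI` instance and the 0.3 default are described below):

```python
# llm.py -- sketch; the model name is an assumption
from langchain_openai import ChatOpenAI

# Every agent imports this single shared instance
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)
```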
`temperature` controls randomness. The default of 0.3 is a good balance for this workflow:
- Lower temperature (0.0–0.2): more deterministic output. Use this when agents must return valid JSON (such as `resume_parser` or `evaluation`). Malformed JSON causes the supervisor's `json.loads()` call to raise an exception and route to `finished` early.
- Higher temperature (0.5–0.9): more creative, varied output. Use this if you want the interview agents to generate more diverse questions across runs. Be aware that higher temperature also increases the chance of malformed JSON.
Because every agent imports the same `llm` instance, changing temperature in `llm.py` affects every agent. If you need different temperatures per agent, instantiate separate `ChatOpenAI` objects in individual agent files.
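For instance, to give only the HR interviewer a higher temperature (the variable name is illustrative, and the model name is an assumption):

```python
# agents/hr_interview_agent.py -- sketch of a per-agent LLM instance
from langchain_openai import ChatOpenAI

# Used instead of importing the shared `llm` from llm.py,
# so only this agent is affected
creative_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
```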
Changing the number of questions
The `hr_interview_agent` prompt currently asks for 15 HR questions. To change that, open `agents/hr_interview_agent.py` and edit the line in the prompt that specifies the count:
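The exact wording lives in the source file; inside the prompt's `Generate:` section it reads roughly like this (illustrative):

```text
Generate:
- 15 HR interview questions for this candidate
```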
The same pattern applies in `technical_interview_agent.py` and `ceo_interview_agent.py`: find the `Generate:` section and update the count.