

The candidate research agent reads both the candidate_profile and jd_analysis and produces strategic insights that help interviewers know what to probe and how deep to go. Unlike the matching agent — which scores fit — the research agent focuses on how to interview the candidate effectively. Its output, research_analysis, feeds directly into the evaluation agent, making the depth of the research analysis a significant factor in the quality of the final recommendation.

Source code

from llm import llm


def candidate_research_agent(state):
    """Read jd_analysis and candidate_profile from the shared state and
    return a partial state update containing research_analysis."""
    jd_analysis = state["jd_analysis"]
    candidate_profile = state["candidate_profile"]
    response = llm.invoke(
        f"""
        You are a Candidate Research Agent.

        Research likely interview expectations.

        Candidate:
        {candidate_profile}

        JD Analysis:
        {jd_analysis}

        Generate:
        - likely interview focus
        - expected technical depth
        - leadership expectations
        - strategic interview insights
        - preparation recommendations

        Return structured JSON only.
        """
    )

    return {
        "research_analysis": response.content
    }
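Because the node only reads two state keys and returns a partial update, it can be exercised offline by swapping in a stub for the real LLM client. This is a sketch, not the repo's test harness: the stub class names are hypothetical, and the abbreviated prompt stands in for the full prompt shown above.

```python
import json

class _StubResponse:
    """Mimics the .content attribute of the real LLM response object."""
    def __init__(self, content: str):
        self.content = content

class _StubLLM:
    """Hypothetical stand-in for `from llm import llm`; returns canned JSON."""
    def invoke(self, prompt: str) -> _StubResponse:
        return _StubResponse(json.dumps({"likely_interview_focus": ["API design"]}))

llm = _StubLLM()  # shadows the real client for this offline sketch

def candidate_research_agent(state):
    jd_analysis = state["jd_analysis"]
    candidate_profile = state["candidate_profile"]
    # Abbreviated prompt; the real agent sends the full instructions above
    response = llm.invoke(f"Candidate:\n{candidate_profile}\nJD Analysis:\n{jd_analysis}")
    return {"research_analysis": response.content}

update = candidate_research_agent({"candidate_profile": "{}", "jd_analysis": "{}"})
```

The returned dict is a partial update: the graph runtime merges it into the shared state rather than replacing the state wholesale.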

Inputs

candidate_profile (string, required)
The structured JSON profile produced by the resume parser agent.

jd_analysis (string, required)
The structured JSON analysis produced by the JD analysis agent.

Output

research_analysis (string)
A JSON string containing strategic interview guidance. Read by the evaluation agent when forming the final hiring verdict.
The JSON contains the following fields:
| Field | Description |
| --- | --- |
| likely_interview_focus | Topics and domains the interviewers are likely to probe |
| expected_technical_depth | The level of detail and sophistication expected in technical answers |
| leadership_expectations | What leadership or mentoring signals interviewers should look for |
| strategic_interview_insights | High-level observations about the candidate’s fit that should shape interview strategy |
| preparation_recommendations | Specific areas the candidate should prepare for, useful for internal calibration |

Example output

{
  "likely_interview_focus": ["Distributed systems", "API design", "Team leadership"],
  "expected_technical_depth": "Senior-level system design and hands-on Python",
  "leadership_expectations": "Technical lead with mentoring responsibilities",
  "strategic_interview_insights": "Candidate shows strong backend experience but limited cloud-native exposure",
  "preparation_recommendations": ["Brush up on Kubernetes basics", "Prepare system design examples"]
}
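Downstream consumers receive research_analysis as a raw string, so they must parse it before use. A hedged sketch of a parser that validates the fields listed above; the fence-stripping step is an assumption about occasional model behaviour (models sometimes wrap output in a markdown code fence despite the "JSON only" instruction), not something this page documents.

```python
import json

# Field names taken from the table above
REQUIRED_FIELDS = {
    "likely_interview_focus",
    "expected_technical_depth",
    "leadership_expectations",
    "strategic_interview_insights",
    "preparation_recommendations",
}

def parse_research_analysis(raw: str) -> dict:
    """Parse the agent's JSON string and check all expected fields exist."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1]    # drop the opening fence line ("```json")
        text = text.rsplit("```", 1)[0]  # drop the closing fence
    data = json.loads(text)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"research_analysis missing fields: {sorted(missing)}")
    return data
```

Failing fast on missing fields here keeps malformed LLM output from silently degrading the evaluation agent's final verdict.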

State diagram

Requires: candidate_profile, jd_analysis

candidate_research_agent

Provides: research_analysis
The research analysis feeds directly into the evaluation agent — richer insights here lead to a more accurate final evaluation.
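The Requires/Provides flow above can be sketched as a plain sequential reducer. This illustrates how a graph runtime merges each node's partial update into the shared state; it is not the repo's actual graph wiring, and the placeholder node below stands in for the real candidate_research_agent.

```python
def run_pipeline(state: dict, nodes) -> dict:
    """Apply each node in order, merging its partial update into the state."""
    for node in nodes:
        state = {**state, **node(state)}
    return state

def research_node(state):
    # Stand-in for candidate_research_agent: requires both upstream keys,
    # provides research_analysis (here a fixed placeholder string).
    assert "candidate_profile" in state and "jd_analysis" in state
    return {"research_analysis": '{"likely_interview_focus": []}'}

final = run_pipeline(
    {"candidate_profile": "{}", "jd_analysis": "{}"},
    [research_node],
)
# final now carries research_analysis alongside the original keys,
# ready for the evaluation agent downstream
```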
