Overview

The AutogenDeepSearchAgent is a sophisticated agent that performs deep web searches and analysis using a multi-agent system. It coordinates between a researcher agent (for thinking and analysis) and an executor agent (for executing search operations).

Class Definition

class AutogenDeepSearchAgent:
    def __init__(
        self,
        llm_config=None,
        code_execution_config=None,
        return_chat_history=False,
        save_log=False
    )

Parameters

llm_config
dict
LLM configuration dictionary. If not provided, defaults to the service configuration for “deepsearch”.
Structure:
  • model: Model name to use
  • api_key: API key for the LLM service
  • base_url: Base URL for the API
  • temperature: Temperature for generation (0-1)
  • Other provider-specific parameters
code_execution_config
dict
Code execution configuration. Defaults to {"work_dir": 'coding', "use_docker": False}.
Structure:
  • work_dir: Working directory for code execution
  • use_docker: Whether to use Docker for code execution
return_chat_history
bool
default: False
Whether to return the full chat history along with the final answer
save_log
bool
default: False
Whether to save execution logs to file

Methods

deep_search

async def deep_search(query: str) -> str
Execute a deep search operation asynchronously.
query
str
required
The user’s search query or research question
result
str
The final answer to the search query, or a tuple of (answer, chat_history) if return_chat_history is True
Example:
import asyncio
from src.services.agents.deep_search_agent import AutogenDeepSearchAgent

async def main():
    agent = AutogenDeepSearchAgent()

    # Single result
    result = await agent.deep_search(
        "What are the top semiconductor companies by gross profit margin in 2024?"
    )
    print(result)

    # With chat history
    agent = AutogenDeepSearchAgent(return_chat_history=True)
    result, history = await agent.deep_search(
        "What are the top semiconductor companies by gross profit margin in 2024?"
    )

asyncio.run(main())

web_agent_answer

def web_agent_answer(query: str) -> str
Synchronous wrapper for deep_search. Uses asyncio.run() internally.
query
str
required
The initial search query
result
str
JSON string containing search results
Example:
from src.services.agents.deep_search_agent import AutogenDeepSearchAgent

agent = AutogenDeepSearchAgent()
result = agent.web_agent_answer(
    "What is the latest news about AI regulation?"
)

a_web_agent_answer

async def a_web_agent_answer(query: str) -> str
Async version of web_agent_answer with error handling.
query
str
required
The initial search query
result
str
Search results or error message if the search fails
Example:
import asyncio

async def main():
    result = await agent.a_web_agent_answer(
        "Compare Python and JavaScript performance"
    )
    print(result)

asyncio.run(main())

run

async def run(query: str) -> dict
Execute deep search and return both the final answer and execution trajectory.
query
str
required
The search query
result
dict
Dictionary containing:
  • final_answer: The final search result
  • trajectory: Full chat history of the search process
Example:
import asyncio

async def main():
    result = await agent.run(
        "How does quantum computing work?"
    )
    print("Answer:", result["final_answer"])
    print("Steps:", result["trajectory"])

asyncio.run(main())

Configuration

Internal Agents

The AutogenDeepSearchAgent creates two internal agents:
  1. Researcher Agent (ExtendedAssistantAgent)
    • Responsible for thinking, analysis, and planning
    • Uses the configured LLM to generate search strategies
    • Determines when to terminate the search
  2. Executor Agent (DeepSearchExecutor)
    • Executes search and browse operations
    • Processes tool calls from the researcher
    • Summarizes tool responses when they exceed token limits

Message Summarization

The agent automatically summarizes long tool responses to manage context:
  • max_tool_messages_before_summary: Default 2 rounds
  • token_limit: Default 2000 tokens
  • Uses cl100k_base encoding for token counting
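The trigger condition implied by these defaults can be sketched as below. This is a minimal sketch, not the agent's actual implementation: the real agent counts tokens with the cl100k_base encoding (via tiktoken), while here a whitespace split stands in so the sketch runs without dependencies, and the exact combination of the two thresholds is an assumption.

```python
# Sketch of the summarization trigger, assuming the defaults documented above.
# count_tokens is a stand-in: the real agent uses tiktoken's cl100k_base.

MAX_TOOL_MESSAGES_BEFORE_SUMMARY = 2   # rounds of tool output kept verbatim
TOKEN_LIMIT = 2000                     # per-response token budget

def count_tokens(text: str) -> int:
    """Stand-in for a cl100k_base token count (whitespace approximation)."""
    return len(text.split())

def needs_summary(tool_messages: list[str]) -> bool:
    """Summarize when too many tool rounds accumulate or a response is too long."""
    if len(tool_messages) > MAX_TOOL_MESSAGES_BEFORE_SUMMARY:
        return True
    return any(count_tokens(m) > TOKEN_LIMIT for m in tool_messages)

print(needs_summary(["short result"]))   # False
print(needs_summary(["a", "b", "c"]))    # True: more than 2 rounds
print(needs_summary(["word " * 3000]))   # True: over the token limit
```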

Tool Registration

The agent registers the following tools:
  • searching: Web search functionality
  • browsing: Web page browsing and content extraction
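Conceptually, registration gives the executor a name-to-coroutine mapping it consults whenever the researcher emits a tool call. The sketch below illustrates that dispatch pattern only; it is not the actual AutoGen registration API, and the tool bodies are placeholders rather than the agent's real search/browse implementations.

```python
import asyncio

# Conceptual sketch of tool dispatch: the executor looks up the tool named in
# the researcher's tool call and awaits it. Tool bodies are placeholders.

async def searching(query: str) -> str:
    return f"search results for: {query}"

async def browsing(url: str) -> str:
    return f"page content of: {url}"

TOOLS = {"searching": searching, "browsing": browsing}

async def execute_tool_call(name: str, argument: str) -> str:
    if name not in TOOLS:
        return f"unknown tool: {name}"
    return await TOOLS[name](argument)

print(asyncio.run(execute_tool_call("searching", "AI regulation")))
```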

Advanced Usage

Custom LLM Configuration

custom_config = {
    "model": "gpt-4",
    "api_key": "your-api-key",
    "temperature": 0.7,
    "max_tokens": 2000
}

agent = AutogenDeepSearchAgent(
    llm_config=custom_config,
    return_chat_history=True
)

Custom Working Directory

agent = AutogenDeepSearchAgent(
    code_execution_config={
        "work_dir": "/path/to/workspace",
        "use_docker": False
    }
)

Error Handling

The async methods include built-in error handling:
try:
    result = await agent.a_web_agent_answer(query)
except Exception as e:
    print(f"Search failed: {e}")
    # Error details are also written to search_error_log.txt

Notes

  • The agent uses a maximum of 30 conversation turns by default
  • Messages are automatically summarized using LLM when tool responses exceed token limits
  • The agent supports termination detection via TERMINATE keyword in messages
  • All searches are logged if save_log=True
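The termination check from the notes above can be sketched as a simple predicate. The docs only say a TERMINATE keyword in a message ends the conversation; the exact matching rule here (substring check on a message's content) is an assumption.

```python
# Sketch of termination detection. The substring-match rule is an assumption;
# the docs only state that a TERMINATE keyword ends the conversation.

def is_termination_msg(message: dict) -> bool:
    """Return True if the message signals the researcher wants to stop."""
    return "TERMINATE" in (message.get("content") or "")

print(is_termination_msg({"content": "Final answer ... TERMINATE"}))  # True
print(is_termination_msg({"content": "still searching"}))             # False
print(is_termination_msg({"content": None}))                          # False
```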
