Every ODAI agent follows a consistent four-step pattern: define a tool function, wrap it in an Agent, register the agent with the orchestrator, and add its tool calls to the TOOL_CALLS progress map.

The standard pattern

from agents import Agent, function_tool, RunContextWrapper
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from connectors.utils.responses import ToolResponse
from connectors.utils.context import ChatContext

@function_tool
def my_tool_function(wrapper: RunContextWrapper[ChatContext], param: str) -> dict:
    """One-line summary of what this tool does.

    Detailed description used by the LLM to decide when to call this tool.
    Include trigger phrases and what the tool returns.

    Args:
        wrapper: Execution context containing user info, settings, and auth flags
        param: Description of this parameter

    Returns:
        ToolResponse with the processed result
    """
    result = call_external_api(param)
    return ToolResponse(
        response_type="my_service_result",
        agent_name="My Service",
        friendly_name="My Service Results",
        display_response=True,
        response=result
    ).to_dict()


MY_AGENT = Agent(
    name="My Service",
    model="gpt-4o",
    instructions=RECOMMENDED_PROMPT_PREFIX + "Description of what this agent does and when to use it.",
    handoff_description=RECOMMENDED_PROMPT_PREFIX + "Short description used by the orchestrator to decide when to hand off here.",
    tools=[my_tool_function],
)
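The real ToolResponse lives in connectors/utils/responses.py. As a reading aid, here is a minimal, hypothetical stand-in that mirrors the fields used throughout this page — the actual class may carry more fields or validation:

```python
from dataclasses import dataclass, asdict
from typing import Any

@dataclass
class ToolResponse:
    # Hypothetical stand-in; the real class is in connectors/utils/responses.py.
    response_type: str      # machine-readable tag for the frontend renderer
    agent_name: str         # which agent produced the result
    friendly_name: str      # human-readable label shown in the UI
    display_response: bool  # whether the UI should render the payload
    response: Any           # the actual tool payload

    def to_dict(self) -> dict:
        # Tools return plain dicts so the SDK can serialize them.
        return asdict(self)

result = ToolResponse(
    response_type="my_service_result",
    agent_name="My Service",
    friendly_name="My Service Results",
    display_response=True,
    response={"items": []},
).to_dict()
```

The key point is that tool functions return a plain dict (via to_dict()), not the object itself.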

@function_tool and is_enabled

The @function_tool decorator registers a Python function as a callable tool in the OpenAI Agents SDK. The optional is_enabled parameter accepts a predicate function that determines at runtime whether the tool is available for the current user.
from connectors.utils.context import is_google_enabled, is_plaid_enabled

# Tool only available when the user has connected their Google account
@function_tool(is_enabled=is_google_enabled)
def fetch_google_email_inbox(wrapper: RunContextWrapper[ChatContext], unread: bool = False) -> dict:
    ...

# Tool only available when the user has connected a Plaid account
@function_tool(is_enabled=is_plaid_enabled)
def get_accounts_at_plaid(wrapper: RunContextWrapper[ChatContext]) -> dict:
    ...

# Tool always available
@function_tool
def search_google(wrapper: RunContextWrapper[ChatContext], query: str) -> dict:
    ...
The predicates are defined in connectors/utils/context.py:
def is_google_enabled(ctx, agent) -> bool:
    return ctx.context.is_google_enabled

def is_plaid_enabled(ctx, agent) -> bool:
    return ctx.context.is_plaid_enabled
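Because predicates receive the run context and the agent, a connector that depends on several integrations can gate a tool on multiple flags with the same shape. A hypothetical example (is_google_and_plaid_enabled is not part of the codebase):

```python
from types import SimpleNamespace

def is_google_and_plaid_enabled(ctx, agent) -> bool:
    # Hypothetical predicate: the tool is only available when the user
    # has connected both Google and Plaid.
    return ctx.context.is_google_enabled and ctx.context.is_plaid_enabled

# Simulate the wrapper the SDK would pass in:
ctx = SimpleNamespace(
    context=SimpleNamespace(is_google_enabled=True, is_plaid_enabled=False)
)
is_google_and_plaid_enabled(ctx, agent=None)  # False: Plaid not connected
```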

ChatContext

RunContextWrapper[ChatContext] is the first parameter of every tool function. It provides access to the current request’s user, settings, and feature flags.
@dataclass
class ChatContext:
    user_id: str           # Firebase UID of the authenticated user
    logged_in: bool        # Whether the user is authenticated
    chat_id: str           # Active chat session ID
    prompt: str            # The user's current message
    production: bool       # True in production, False in development
    project_id: str        # Google Cloud project ID
    user: User             # Full Firebase User model
    settings: Settings     # Application settings (API keys, flags)
    openai_client: OpenAI  # Shared OpenAI client instance
    is_google_enabled: bool  # True if user has connected Google account
    is_plaid_enabled: bool   # True if user has connected Plaid account
Access context data inside a tool function via wrapper.context:
@function_tool
def my_tool(wrapper: RunContextWrapper[ChatContext]) -> dict:
    user_id = wrapper.context.user_id
    production = wrapper.context.production
    settings = wrapper.context.settings
    ...
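A common use of these fields is branching on the environment. A sketch assuming a staging/production split — the helper and both URLs are hypothetical placeholders, not real endpoints:

```python
from types import SimpleNamespace

def service_base_url(wrapper) -> str:
    # Hypothetical helper: choose an endpoint from the production flag.
    if wrapper.context.production:
        return "https://api.example.com"
    return "https://staging.api.example.com"

# Simulate the wrapper passed into a tool function:
wrapper = SimpleNamespace(context=SimpleNamespace(production=False))
service_base_url(wrapper)  # "https://staging.api.example.com"
```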

Steps to add a new agent

1. Create the agent file

Create connectors/my_service.py. Implement your tool functions and define the Agent object following the pattern above.
# connectors/my_service.py
from agents import Agent, function_tool, RunContextWrapper
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from connectors.utils.responses import ToolResponse
from connectors.utils.context import ChatContext

@function_tool
def search_my_service(wrapper: RunContextWrapper[ChatContext], query: str) -> dict:
    """Search My Service for results matching query."""
    results = call_my_service_api(query)
    return ToolResponse(
        response_type="my_service_search_results",
        agent_name="My Service",
        friendly_name="My Service Search",
        display_response=True,
        response=results
    ).to_dict()

MY_SERVICE_AGENT = Agent(
    name="My Service",
    model="gpt-4o",
    instructions=RECOMMENDED_PROMPT_PREFIX + "Search My Service for relevant results.",
    handoff_description=RECOMMENDED_PROMPT_PREFIX + "My Service search.",
    tools=[search_my_service],
)

2. Import and add to orchestrator handoffs

Open connectors/orchestrator.py and import your agent, then add it to the handoffs list in both ORCHESTRATOR_AGENT and the Orchestrator class.
# connectors/orchestrator.py

# Add import
from .my_service import MY_SERVICE_AGENT

# Add to ORCHESTRATOR_AGENT handoffs
ORCHESTRATOR_AGENT = Agent(
    name="ODAI",
    model="gpt-4o",
    handoffs=[
        YELP_AGENT,
        # ... existing agents ...
        MY_SERVICE_AGENT,   # ← add here
    ],
    ...
)

# Also add to Orchestrator class handoffs list
class Orchestrator:
    def __init__(self, user: User):
        self.handoffs = [
            YELP_AGENT,
            # ... existing agents ...
            MY_SERVICE_AGENT,   # ← add here
        ]

3. Add TOOL_CALLS entries

Add an entry for each tool function to the TOOL_CALLS dict in connectors/orchestrator.py. The value is the progress message shown in the UI while the tool runs. Use None to suppress UI feedback for internal-only tools.
TOOL_CALLS = {
    # ... existing entries ...

    # My Service
    "search_my_service": "Searching My Service...",
}
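One way the orchestrator might consume this map — a hedged sketch, since the actual lookup logic lives in connectors/orchestrator.py and may differ; "refresh_token_cache" is a hypothetical tool name used only to illustrate the None case:

```python
# Hypothetical slice of the map: one visible tool, one internal-only tool.
TOOL_CALLS = {
    "search_my_service": "Searching My Service...",
    "refresh_token_cache": None,  # internal-only: no progress shown in the UI
}

def progress_message(tool_name: str):
    # Unknown tools and None entries both yield no UI feedback.
    return TOOL_CALLS.get(tool_name)

progress_message("search_my_service")    # "Searching My Service..."
progress_message("refresh_token_cache")  # None, so the UI stays silent
```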

4. Add to integrations.yaml (optional)

To surface the agent in the integrations UI, add a configuration block to integrations.yaml:
- id: MyService
  name: "My Service"
  description: "Search My Service for relevant results"
  logo: "https://example.com/logo.png"
  prompts:
    - "Search My Service for coffee shops nearby"
    - "Find the top results on My Service for pizza"

Minimal agent example

Here is the smallest valid agent that follows all ODAI conventions:
# connectors/example_agent.py
from agents import Agent, function_tool, RunContextWrapper
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from connectors.utils.responses import ToolResponse
from connectors.utils.context import ChatContext

@function_tool
def greet_user(wrapper: RunContextWrapper[ChatContext], name: str) -> dict:
    """Greet the user by name.

    Use this tool when the user asks to be greeted or says hello.

    Args:
        wrapper: Execution context
        name: The name to include in the greeting

    Returns:
        ToolResponse with a greeting message
    """
    return ToolResponse(
        response_type="greeting",
        agent_name="Example",
        friendly_name="Greeting",
        display_response=True,
        response={"message": f"Hello, {name}!"}
    ).to_dict()


EXAMPLE_AGENT = Agent(
    name="Example",
    model="gpt-4o",
    instructions=RECOMMENDED_PROMPT_PREFIX + "Greet users by name when asked.",
    handoff_description=RECOMMENDED_PROMPT_PREFIX + "Greet users by name.",
    tools=[greet_user],
)
After creating this file, add EXAMPLE_AGENT to the orchestrator handoffs and add "greet_user": "Greeting..." to TOOL_CALLS.
