Every agent tool in ODAI returns a ToolResponse object serialized to a dict via .to_dict(). This ensures the WebSocket handler, client renderer, and chat storage all work with a consistent data shape.

The ToolResponse class

Defined in connectors/utils/responses.py:
class ToolResponse:
    def __init__(
        self,
        response_type: str,
        agent_name: str,
        friendly_name: str,
        response: dict | str | list,
        display_response: bool = True
    ):
        self.response_type = response_type
        self.agent_name = agent_name
        self.friendly_name = friendly_name
        self.response = response
        self.display_response = display_response

    def to_dict(self) -> dict:
        return {
            "response_type": self.response_type,
            "agent_name": self.agent_name,
            "friendly_name": self.friendly_name,
            "response": self.response,
            "display_response": self.display_response
        }

Fields

response_type
string
required
An agent-specific string that identifies the kind of data in response. The client uses this value to choose the correct renderer (e.g., an email list, a business card grid, a financial table). See the response type reference below.
agent_name
string
required
The internal name of the agent that produced this response. Matches the Agent(name=...) value. Examples: "GMail", "Plaid", "Yelp", "Google".
friendly_name
string
required
A human-readable label shown in the UI alongside the tool output. Describes the specific action taken, for example "GMAIL Inbox" or "Searching Yelp for sushi in Boston, MA".
display_response
boolean
default: True
Controls whether the WebSocket handler forwards the tool output to the client. Set to false for intermediate results that the LLM should process internally without surfacing to the user. The handler checks this flag before emitting a tool_output event.
response
object | string | array
required
The data payload. Shape varies by response_type. Can be a list (email messages, search results, businesses), a dict (account details, transaction maps), or a string (error or status messages).

The .to_dict() pattern

All tool functions return ToolResponse(...).to_dict() rather than the ToolResponse object itself. This ensures the value is JSON-serializable and compatible with the OpenAI Agents SDK’s tool output format.
@function_tool
def my_tool(wrapper: RunContextWrapper[ChatContext]) -> dict:
    result = fetch_data()
    return ToolResponse(
        response_type="my_result_type",
        agent_name="My Service",
        friendly_name="My Service Results",
        display_response=True,
        response=result
    ).to_dict()

Real examples

Gmail — inbox fetch

connectors/gmail.py (fetch_google_email_inbox)
return ToolResponse(
    response_type="google_email_inbox",
    agent_name="GMail",
    friendly_name="GMAIL Inbox",
    response=messages,   # list of email dicts
).to_dict()
Each item in response contains:
{
  "subject": "Re: Project update",
  "from": "alice@example.com",
  "to": "me@example.com",
  "markdown": "Hi, just following up...",
  "text": "Hi, just following up...",
  "unread": true,
  "id": "18f3a2b9c1d4e5f6",
  "thread_id": "18f3a2b9c1d4e5f6",
  "reply_to_id": "<CAGxyz@mail.gmail.com>",
  "cc": [],
  "bcc": []
}
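As an illustration of consuming this payload, the sketch below (plain Python, with a hypothetical messages list shaped like the item above) collects the subjects of unread messages:

```python
# Hypothetical inbox payload shaped like the item above (fields trimmed).
messages = [
    {"subject": "Re: Project update", "from": "alice@example.com", "unread": True},
    {"subject": "Invoice", "from": "billing@example.com", "unread": False},
]

# A renderer (or prompt builder) can filter on the "unread" flag.
unread_subjects = [m["subject"] for m in messages if m["unread"]]
print(unread_subjects)  # ['Re: Project update']
```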

Gmail — send email

connectors/gmail.py (send_google_email)
return ToolResponse(
    response_type="send_google_email",
    agent_name="GMail",
    friendly_name="Sent Email",
    response={
        "to": ["bob@example.com"],
        "cc": [],
        "bcc": [],
        "subject": "Hello",
        "body": "Hi Bob, ...",
    },
).to_dict()

Plaid — account balances

connectors/plaid_agent.py (get_accounts_at_plaid)
return ToolResponse(
    response_type="plaid_accounts_response",
    agent_name="Plaid",
    friendly_name="Getting Account Information",
    display_response=True,
    response=accounts_details   # list of Plaid AccountsBalanceGetResponse dicts
).to_dict()

Plaid — transactions

connectors/plaid_agent.py (get_transactions_at_plaid)
return ToolResponse(
    response_type="plaid_transactions_response",
    agent_name="Plaid",
    friendly_name="Getting Transactions",
    display_response=True,
    response=account_transactions  # dict keyed by account_id
).to_dict()
account_transactions structure:
{
  "abc123": {
    "account_name": "Chase Checking",
    "account_official_name": "CHASE TOTAL CHECKING",
    "transactions": [
      {
        "date": "2024-11-01T00:00:00",
        "amount": 42.50,
        "name": "BLUE BOTTLE COFFEE",
        "category": ["Food and Drink", "Restaurants", "Coffee Shop"],
        "pending": false
      }
    ]
  }
}
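To illustrate working with this shape, the sketch below (assuming a hypothetical payload like the one above) sums transaction amounts per account:

```python
# Hypothetical payload keyed by account_id, shaped like the example above.
account_transactions = {
    "abc123": {
        "account_name": "Chase Checking",
        "transactions": [
            {"date": "2024-11-01T00:00:00", "amount": 42.50, "name": "BLUE BOTTLE COFFEE"},
            {"date": "2024-11-02T00:00:00", "amount": 7.25, "name": "MBTA"},
        ],
    }
}

# Total spend per account, keyed by the friendly account name.
totals = {
    acct["account_name"]: sum(t["amount"] for t in acct["transactions"])
    for acct in account_transactions.values()
}
print(totals)  # {'Chase Checking': 49.75}
```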
Yelp — business search

connectors/yelp.py (search_businesses_at_yelp)
return ToolResponse(
    response_type="yelp_search_results",
    agent_name="Yelp",
    friendly_name="Searching Yelp for sushi in Boston, MA",
    display_response=True,
    response=data["businesses"]   # list of Yelp business objects
).to_dict()

Yelp — reviews

connectors/yelp.py (get_business_reviews_at_yelp)
return ToolResponse(
    response_type="yelp_reviews",
    agent_name="Yelp",
    friendly_name="Getting reviews from Yelp",
    display_response=True,
    response=data["reviews"]   # up to 3 review objects
).to_dict()

display_response and client rendering

When the WebSocket handler receives a tool_call_output_item event from the agent runner, it checks output.get("display_response", True) before forwarding the data to the client:
# websocket/handlers.py
if output.get("display_response", True):
    response = {
        "type": "tool_output",
        "output": output,
        "current_agent": current_agent if current_agent else "ODAI",
    }
    await websocket.send_text(json.dumps(response, default=self._json_serial))
Set display_response=False for tools that fetch data as intermediate context for the LLM (for example, a location lookup that feeds into a subsequent search) without needing to render the raw data in the chat UI.
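The effect of the flag can be sketched in isolation: given a batch of tool outputs, only those with display_response set (or defaulting) to True are forwarded.

```python
# Hypothetical tool outputs; only the first two should be forwarded.
outputs = [
    {"response_type": "yelp_search_results", "display_response": True},
    {"response_type": "google_email_inbox"},                      # flag absent, defaults to True
    {"response_type": "geo_lookup", "display_response": False},   # intermediate context only
]

# Same gate as the handler: .get() with a True default.
forwarded = [o for o in outputs if o.get("display_response", True)]
print([o["response_type"] for o in forwarded])
# ['yelp_search_results', 'google_email_inbox']
```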

Response type reference

response_type | Agent | Description
google_email_inbox | GMail | Inbox email list
google_email_search | GMail | Email search results
google_email_search_from_email | GMail | Emails from a specific sender
send_google_email | GMail | Sent email confirmation
reply_to_google_email | GMail | Sent reply confirmation
plaid_accounts_response | Plaid | Bank account balances
plaid_transactions_response | Plaid | Transaction history by account
yelp_search_results | Yelp | Business search results
yelp_reviews | Yelp | Business reviews
google_search_results | Google Search | Web search results
error | Any | Error message from a failed tool call
account_needed | Any | Signals that the user must connect an account
google_account_needed | Any | Signals that a Google account connection is required
connect_google_account | Any | Prompts the user to connect their Google account
connect_plaid_account | Any | Prompts the user to connect a Plaid account
open_window | Open URL | Instructs the client to open a URL in the current window
open_tab | Open URL | Instructs the client to open a URL in a new tab
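On the client, this table typically backs a dispatch map from response_type to a renderer. The sketch below uses hypothetical renderer names; the actual client code isn't shown on this page.

```python
# Hypothetical renderer registry; the real client's renderers may differ.
def render_email_list(payload):
    return f"email list ({len(payload)} messages)"

def render_business_grid(payload):
    return f"business grid ({len(payload)} results)"

def render_error(payload):
    return f"error: {payload}"

RENDERERS = {
    "google_email_inbox": render_email_list,
    "google_email_search": render_email_list,
    "yelp_search_results": render_business_grid,
    "error": render_error,
}

def render(output: dict) -> str:
    # Fall back to the error renderer for unknown response types.
    renderer = RENDERERS.get(output["response_type"], render_error)
    return renderer(output["response"])

print(render({"response_type": "yelp_search_results", "response": [{}, {}]}))
# business grid (2 results)
```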

Built-in ToolResponse subclasses

For common scenarios, connectors/utils/responses.py provides ready-made subclasses:
# Prompt the user to connect their Google account
return GoogleAccountNeededResponse(agent_name="GMail").to_dict()

# Prompt the user to connect a Plaid bank account
return ConnectPlaidAccountResponse(agent_name="Plaid").to_dict()

# Instruct the client to open a URL in the current window
return OpenWindowResponse(agent_name="My Agent", url="https://example.com").to_dict()

# Instruct the client to open a URL in a new tab
return OpenTabResponse(agent_name="My Agent", url="https://example.com").to_dict()
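The subclass definitions aren't reproduced on this page. A plausible sketch, assuming they simply pre-fill ToolResponse fields (the friendly_name label and payload shape below are assumptions, not the actual source):

```python
# Sketch only: the real definitions live in connectors/utils/responses.py
# and may differ in detail. ToolResponse is reproduced for self-containment.
class ToolResponse:
    def __init__(self, response_type, agent_name, friendly_name,
                 response, display_response=True):
        self.response_type = response_type
        self.agent_name = agent_name
        self.friendly_name = friendly_name
        self.response = response
        self.display_response = display_response

    def to_dict(self):
        return {
            "response_type": self.response_type,
            "agent_name": self.agent_name,
            "friendly_name": self.friendly_name,
            "response": self.response,
            "display_response": self.display_response,
        }

class OpenTabResponse(ToolResponse):
    def __init__(self, agent_name: str, url: str):
        super().__init__(
            response_type="open_tab",
            agent_name=agent_name,
            friendly_name="Opening URL",   # assumed label
            response={"url": url},         # assumed payload shape
        )

result = OpenTabResponse(agent_name="My Agent", url="https://example.com").to_dict()
print(result["response_type"])  # open_tab
```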

How the orchestrator synthesizes multiple tool responses

When the orchestrator calls multiple agents in a single turn, each agent returns its own ToolResponse. The WebSocket handler emits a tool_output event for each one that has display_response=True. The LLM then receives all tool outputs as context and generates a single synthesized llm_response that references the combined results. The full sequence for a multi-agent turn looks like this:
tool_call (agent A)
agent_updated (→ Agent A)
tool_output (Agent A result)
agent_updated (→ Agent B)
tool_call (agent B)
tool_output (Agent B result)
llm_response (synthesized answer referencing both results)
end_of_stream
suggested_prompts
The client renders each tool_output inline as a structured card while streaming the final llm_response text alongside it.
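A minimal client-side sketch of consuming this stream (pure Python, with hypothetical event dicts standing in for the WebSocket messages) accumulates the tool cards and the final answer separately:

```python
# Hypothetical multi-agent turn, in the order described above.
events = [
    {"type": "tool_call", "agent": "A"},
    {"type": "agent_updated", "agent": "A"},
    {"type": "tool_output", "output": {"response_type": "a_result"}},
    {"type": "agent_updated", "agent": "B"},
    {"type": "tool_call", "agent": "B"},
    {"type": "tool_output", "output": {"response_type": "b_result"}},
    {"type": "llm_response", "text": "Combined answer."},
    {"type": "end_of_stream"},
]

tool_cards, answer = [], None
for event in events:
    if event["type"] == "tool_output":
        tool_cards.append(event["output"])   # render each as an inline card
    elif event["type"] == "llm_response":
        answer = event["text"]               # the synthesized text
print(len(tool_cards), answer)  # 2 Combined answer.
```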
