Overview
The DedalusRunner class extends the Dedalus client with autonomous multi-step tool execution. It manages the conversation loop, automatically executes tool calls, and supports both local Python functions and remote MCP servers.
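The multi-step loop can be pictured as: call the model, execute any requested tool calls, feed the results back, and repeat until the model produces a final answer or the step budget runs out. The sketch below is illustrative only; `call_model`, the message shapes, and the loop structure are assumptions, not the SDK's actual internals.

```python
def run_loop(call_model, tools, user_input, max_steps=10):
    """Repeatedly call the model, executing requested tools, until the
    model stops requesting tools or max_steps is reached.

    call_model: callable taking a message list and returning
                (text, tool_calls) where tool_calls is a list of
                (name, kwargs) pairs.  (Hypothetical interface.)
    tools:      dict mapping tool name -> Python function.
    """
    messages = [{"role": "user", "content": user_input}]
    text = ""
    for _ in range(max_steps):
        text, tool_calls = call_model(messages)
        if not tool_calls:
            return text  # final answer: the model requested no more tools
        for name, args in tool_calls:
            result = tools[name](**args)  # execute the local function
            messages.append(
                {"role": "tool", "name": name, "content": str(result)}
            )
    return text  # step budget exhausted
```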
Class Initialization
Constructor Parameters
The Dedalus client instance (sync or async).
Enable verbose logging of execution steps. Defaults to False.

run() Method

Execute a tool-enabled conversation with autonomous multi-step execution.

Core Parameters
User input as a string or list of messages. Converted to a user message if string.
List of Python functions to make available as tools. Functions are introspected to generate schemas automatically.
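Automatic schema generation from plain functions typically relies on signature and type-hint introspection. A minimal sketch of that idea, assuming a JSON-schema-style output; the SDK's real generator may handle more types, docstring parsing, and defaults:

```python
import inspect
from typing import get_type_hints

# Minimal mapping from Python annotations to JSON-schema type names.
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def function_to_schema(fn):
    """Introspect a function into a tool schema (illustrative sketch)."""
    hints = get_type_hints(fn)
    properties = {}
    required = []
    for name, param in inspect.signature(fn).parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> caller must supply it
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

def add(a: int, b: int = 0) -> int:
    """Add two integers."""
    return a + b
```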
Full conversation history. Alternative to input for continuing conversations.

System instructions. Converted to a system message. Overrides existing system messages if used with messages.

Model(s) to use. Can be:
- Single model ID: "openai/gpt-4"
- List of model IDs for fallback: ["openai/gpt-4", "anthropic/claude-3-5-sonnet"]
- DedalusModel object with settings
- List of DedalusModel objects
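Accepting either a single model ID or a list implies the argument is normalized into an ordered fallback list internally. A sketch of that normalization, as an assumption (the SDK's actual handling of DedalusModel objects is not shown here):

```python
def normalize_models(model):
    """Return an ordered fallback list from a model ID or list of IDs.

    Illustrative only: a real implementation would also accept
    DedalusModel objects and lists of them.
    """
    if isinstance(model, str):
        return [model]  # single ID -> one-element fallback list
    return list(model)  # already a sequence: preserve the fallback order
```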
Runner Configuration
Maximum number of execution steps before stopping. Defaults to 10.

MCP servers to connect to. Can be:
- Single server slug or URL: "filesystem"
- List of slugs/URLs: ["filesystem", "github"]
- MCP server objects with configuration
Credentials for MCP servers or other services.
Enable streaming responses. Defaults to False.

Transport protocol: "http" or "realtime". Defaults to "http".

Override the instance-level verbose setting for this run.
Enable debug mode with additional logging.
Callback function called for each tool execution event.
Include intent information in the result. Defaults to False.

Policy function or dictionary to control execution behavior per step.
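A policy is consulted on each step and can steer or stop the run. The callable below is purely illustrative: its `(step, state)` signature and the returned action strings are assumptions, since the doc does not specify the policy interface.

```python
def budget_policy(step, state):
    """Example per-step policy (hypothetical interface).

    step:  1-based step counter.
    state: dict with accumulated run info, e.g. state["tools_called"].
    Returns an action string understood by the runner (assumed names).
    """
    if len(state.get("tools_called", [])) >= 3:
        return "stop"       # enough tool work done: end the run
    if step >= 8:
        return "finalize"   # ask for a final answer, allow no more tools
    return "continue"       # keep looping normally
```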
List of models available for dynamic routing. Defaults to the provided model list.
Enforce strict model validation against available_models. Defaults to True.

Chat Completion Parameters

All standard chat completion parameters are supported and forwarded to the API:

Sampling temperature (0.0-2.0).
Maximum tokens to generate.
Nucleus sampling parameter.
Frequency penalty (-2.0 to 2.0).
Presence penalty (-2.0 to 2.0).
Reasoning effort level for models that support extended thinking.
Thinking configuration for models with reasoning capabilities.
Response format specification for structured output.
Control tool calling behavior: "auto", "none", "required", or a specific tool.

(See CompletionCreateParams for the full list.)
Return Value
For non-streaming runs, returns a RunResult object:
Final text output from the conversation.
Alias for final_output.

Alias for final_output.

List of all tool execution results.
Number of execution steps completed.
Full conversation history including all tool calls and responses.
Names of all tools that were called during execution.
Results from server-side MCP tool executions.
Intent information (if return_intent=True).

Result Methods
Get the full conversation history for continuation.
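Putting the fields above together, the result object looks roughly like the dataclass below. Only final_output is named in this doc; the alias name, the other field names, and the continuation method name are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RunResultSketch:
    """Illustrative shape of the run result described above."""
    final_output: str = ""
    steps_used: int = 0
    messages: list = field(default_factory=list)

    @property
    def output(self):
        """Alias for final_output (alias name assumed)."""
        return self.final_output

    def to_input_list(self):
        """Return the conversation history for continuing a run
        (method name assumed)."""
        return list(self.messages)
```

A continued conversation would then pass the returned history back as the messages argument of the next run.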