Overview

The DedalusRunner class extends the Dedalus client with autonomous multi-step tool execution. It manages conversation loops, automatically executes tool calls, and supports both local Python functions and remote MCP servers.

Class Initialization

from dedalus_labs import Dedalus
from dedalus_labs.lib.runner import DedalusRunner

client = Dedalus(api_key="your-api-key")
runner = DedalusRunner(client=client, verbose=False)

Constructor Parameters

client
Dedalus | AsyncDedalus
required
The Dedalus client instance (sync or async).
verbose
boolean
Enable verbose logging of execution steps. Defaults to False.

run() Method

Execute a tool-enabled conversation with autonomous multi-step execution.
result = runner.run(
    input: str | list[Message] | None = None,
    tools: list[Callable] | None = None,
    messages: list[Message] | None = None,
    instructions: str | None = None,
    model: str | list[str] | DedalusModel | list[DedalusModel] | None = None,
    max_steps: int = 10,
    mcp_servers: MCPServersInput = None,
    credentials: Sequence[Any] | None = None,
    stream: bool = False,
    transport: Literal["http", "realtime"] = "http",
    verbose: bool | None = None,
    debug: bool | None = None,
    on_tool_event: Callable[[Dict[str, JsonValue]], None] | None = None,
    return_intent: bool = False,
    policy: PolicyInput = None,
    available_models: list[str] | None = None,
    strict_models: bool = True,
    # ... plus all chat completion parameters
)

Core Parameters

input
string | list[Message]
User input as a string or list of messages. A string is converted to a single user message.
tools
list[Callable]
List of Python functions to make available as tools. Functions are introspected to generate schemas automatically.
messages
list[Message]
Full conversation history. Alternative to input for continuing conversations.
instructions
string
System instructions. Converted to a system message. Overrides existing system messages if used with messages.
model
string | list[string] | DedalusModel | list[DedalusModel]
required
Model(s) to use. Can be:
  • Single model ID: "openai/gpt-4"
  • List of model IDs for fallback: ["openai/gpt-4", "anthropic/claude-3-5-sonnet"]
  • DedalusModel object with settings
  • List of DedalusModel objects
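Because functions passed via `tools` are introspected to generate schemas automatically, type hints and docstrings matter. The sketch below illustrates the kind of introspection involved; it is an assumption for illustration, and `sketch_schema` is a hypothetical helper, not part of the SDK. The real schema generation may differ in detail.

```python
import inspect
from typing import get_type_hints

def get_weather(location: str, units: str = "fahrenheit") -> str:
    """Get the current weather for a location."""
    return f"Weather in {location} ({units})"

def sketch_schema(fn):
    """Illustrative sketch of deriving a tool schema from a Python function."""
    # Map Python annotations to JSON Schema types (illustrative subset).
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = get_type_hints(fn)
    hints.pop("return", None)
    sig = inspect.signature(fn)
    # Parameters without defaults are treated as required.
    required = [
        name for name, param in sig.parameters.items()
        if param.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": type_map.get(hint, "string")}
                for name, hint in hints.items()
            },
            "required": required,
        },
    }

schema = sketch_schema(get_weather)
print(schema["name"])                      # get_weather
print(schema["parameters"]["required"])    # ['location']
```

The practical takeaway: annotate parameters and write a one-line docstring, since both end up in the schema the model sees.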

Runner Configuration

max_steps
integer
Maximum number of execution steps before stopping. Defaults to 10.
mcp_servers
MCPServersInput
MCP servers to connect to. Can be:
  • Single server slug or URL: "filesystem"
  • List of slugs/URLs: ["filesystem", "github"]
  • MCP server objects with configuration
credentials
Sequence[Any]
Credentials for MCP servers or other services.
stream
boolean
Enable streaming responses. Defaults to False.
transport
string
Transport protocol: "http" or "realtime". Defaults to "http".
verbose
boolean
Override instance-level verbose setting for this run.
debug
boolean
Enable debug mode with additional logging.
on_tool_event
Callable
Callback function called for each tool execution event.
return_intent
boolean
Include intent information in the result. Defaults to False.
policy
PolicyInput
Policy function or dictionary to control execution behavior per step.
available_models
list[string]
List of models available for dynamic routing. Defaults to the provided model list.
strict_models
boolean
Enforce strict model validation against available_models. Defaults to True.
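The `on_tool_event` callback receives a dictionary per tool-execution event, which is useful for logging or progress display. A minimal sketch follows; the example payload keys (`"tool"`, `"status"`) are assumptions for illustration, so inspect real events in your environment before relying on specific keys.

```python
events = []

def record_tool_event(event):
    """Collect each tool-execution event for later inspection."""
    events.append(event)

# Passed per run:
# runner.run(..., on_tool_event=record_tool_event)

# Simulated event with hypothetical keys, for illustration only.
record_tool_event({"tool": "get_weather", "status": "completed"})
print(len(events))  # 1
```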

Chat Completion Parameters

All standard chat completion parameters are supported and forwarded to the API:
temperature
number
Sampling temperature (0.0-2.0).
max_tokens
integer
Maximum tokens to generate.
top_p
number
Nucleus sampling parameter.
frequency_penalty
number
Frequency penalty (-2.0 to 2.0).
presence_penalty
number
Presence penalty (-2.0 to 2.0).
reasoning_effort
string
Reasoning effort level for models that support extended thinking.
thinking
Dict[str, Any]
Thinking configuration for models with reasoning capabilities.
response_format
Dict[str, JsonValue] | type
Response format specification for structured output.
tool_choice
string | Dict[str, JsonValue]
Control tool calling behavior: "auto", "none", "required", or specific tool.
And many more standard parameters (see CompletionCreateParams for full list).
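Since these parameters are forwarded as-is, they can be grouped into a dictionary and splatted into `run()`. A small sketch with illustrative values:

```python
# Standard completion parameters pass straight through to the API.
params = {
    "model": "openai/gpt-4",
    "temperature": 0.2,        # low temperature for deterministic tool use
    "max_tokens": 500,
    "tool_choice": "auto",     # let the model decide when to call tools
}

# result = runner.run(input="Summarize this report", tools=[...], **params)
```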

Return Value

For non-streaming runs, returns a _RunResult object:
final_output
string
Final text output from the conversation.
output
string
Alias for final_output.
content
string
Alias for final_output.
tool_results
list[ToolResult]
List of all tool execution results.
steps_used
integer
Number of execution steps completed.
messages
list[Message]
Full conversation history including all tool calls and responses.
tools_called
list[string]
Names of all tools that were called during execution.
mcp_results
list[MCPToolResult]
Results from server-side MCP tool executions.
intents
list[Dict[str, JsonValue]]
Intent information (if return_intent=True).

Result Methods

to_input_list()
list[Message]
Get the full conversation history for continuation.
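A typical continuation pattern: feed `to_input_list()` back as `messages` for the next run. The history literal below is a stand-in for a real `result.to_input_list()` return value, which would also include tool calls and tool responses.

```python
# Stand-in for result.to_input_list() (real entries include tool calls too).
history = [
    {"role": "user", "content": "What's the weather in San Francisco?"},
    {"role": "assistant", "content": "It's sunny, 72°F."},
]

# Append the next user turn and pass the full list back as `messages`.
followup = history + [{"role": "user", "content": "What about tomorrow?"}]
# next_result = runner.run(messages=followup, model="openai/gpt-4")
print(followup[-1]["content"])  # What about tomorrow?
```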

Examples

Basic Tool Execution

from dedalus_labs import Dedalus
from dedalus_labs.lib.runner import DedalusRunner

client = Dedalus(api_key="your-api-key")
runner = DedalusRunner(client)

def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny, 72°F"

result = runner.run(
    input="What's the weather in San Francisco?",
    tools=[get_weather],
    model="openai/gpt-4"
)

print(result.output)
# "The weather in San Francisco is sunny and 72°F."

Multi-Step Execution

def search_database(query: str) -> list[dict]:
    """Search the database for records."""
    return [{"id": 1, "name": "Example"}]

def fetch_details(record_id: int) -> dict:
    """Fetch detailed information for a record."""
    return {"id": record_id, "details": "Full details here"}

result = runner.run(
    input="Find all users named John and get their details",
    tools=[search_database, fetch_details],
    model="openai/gpt-4",
    max_steps=5
)

print(f"Steps used: {result.steps_used}")
print(f"Tools called: {result.tools_called}")

With MCP Servers

result = runner.run(
    input="List the files in the current directory",
    mcp_servers=["filesystem"],
    model="openai/gpt-4"
)

print(result.output)

Streaming with Tools

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

for chunk in runner.run(
    input="Calculate 15 * 23 and explain the result",
    tools=[multiply],
    model="openai/gpt-4",
    stream=True
):
    if hasattr(chunk, 'choices') and chunk.choices:
        delta = chunk.choices[0].delta
        if hasattr(delta, 'content') and delta.content:
            print(delta.content, end='', flush=True)

Policy-Based Execution

def analyze_data(dataset: str) -> dict:
    """Analyze a dataset and return summary statistics."""
    return {"dataset": dataset, "mean": 42.0}

def generate_report(summary: str) -> str:
    """Generate a report from an analysis summary."""
    return f"Report: {summary}"

def my_policy(context):
    """Control execution based on step number."""
    if context["step"] > 3:
        return {"model": "openai/gpt-4-turbo"}  # Switch to faster model
    return {}

result = runner.run(
    input="Perform a complex multi-step analysis",
    tools=[analyze_data, generate_report],
    model="openai/gpt-4",
    policy=my_policy
)