Overview
Monty powers code-mode in PydanticAI. Instead of making sequential tool calls, the LLM writes Python code that calls your tools as functions, and Monty executes it safely.
Code-mode allows agents to work faster, cheaper, and more reliably by writing Python code instead of relying on traditional tool calling.
How It Works
The CodeModeToolset wraps your existing FunctionToolset and converts it into executable Python code that Monty can run safely in a sandbox.
Complete Example
Here’s a weather agent that uses Monty’s code-mode to compare weather across multiple cities:
Define your tools
Create a FunctionToolset with your agent's tools:

```python
from pydantic_ai import RunContext
from pydantic_ai.toolsets.function import FunctionToolset
from httpx import AsyncClient
from typing_extensions import TypedDict
import json


class LatLng(TypedDict):
    lat: float
    lng: float


weather_toolset: FunctionToolset[AsyncClient] = FunctionToolset()


@weather_toolset.tool
async def get_lat_lng(
    ctx: RunContext[AsyncClient], location_description: str
) -> LatLng:
    """Get the latitude and longitude of a location."""
    r = await ctx.deps.get(
        'https://demo-endpoints.pydantic.workers.dev/latlng',
        params={'location': location_description},
    )
    r.raise_for_status()
    return json.loads(r.content)


@weather_toolset.tool
async def get_temp(ctx: RunContext[AsyncClient], lat: float, lng: float) -> float:
    """Get the temp at a location."""
    r = await ctx.deps.get(
        'https://demo-endpoints.pydantic.workers.dev/number',
        params={'min': 10, 'max': 30},
    )
    r.raise_for_status()
    return float(r.text)


@weather_toolset.tool
async def get_weather_description(
    ctx: RunContext[AsyncClient], lat: float, lng: float
) -> str:
    """Get the weather description at a location."""
    r = await ctx.deps.get(
        'https://demo-endpoints.pydantic.workers.dev/weather',
        params={'lat': lat, 'lng': lng},
    )
    r.raise_for_status()
    return r.text
```
Wrap with CodeModeToolset
Replace the FunctionToolset with a CodeModeToolset wrapper:

```python
from pydantic_ai import Agent
from pydantic_ai.toolsets.code_mode import CodeModeToolset

agent = Agent(
    'gateway/anthropic:claude-sonnet-4-5',
    toolsets=[CodeModeToolset(weather_toolset)],
    deps_type=AsyncClient,
)
```
The CodeModeToolset converts your function tools into a Python environment that the LLM can write code against.
Run your agent
Execute the agent normally - it will generate Python code internally:

```python
import asyncio

import logfire

logfire.configure()
logfire.instrument_pydantic_ai()


async def main():
    async with AsyncClient() as client:
        await agent.run(
            'Compare the weather of London, Paris, and Tokyo.',
            deps=client,
        )


if __name__ == '__main__':
    asyncio.run(main())
```
The CodeModeToolset wrapper:

- Generates type stubs - Creates Python type definitions for all your tools
- Provides execution context - Gives the LLM access to your tools as callable functions
- Handles external calls - Routes function calls back to your host implementation
- Enforces safety - Runs all code in Monty's secure sandbox
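As a hypothetical sketch of the first point, the stubs exposed to the model might look something like this (the exact signatures are generated by CodeModeToolset and may differ; note the `RunContext`/deps parameter stays on the host side, so only domain parameters remain):

```python
from typing_extensions import TypedDict


class LatLng(TypedDict):
    lat: float
    lng: float


# Hypothetical stubs as the LLM might see them - bodies are elided because
# calls are routed back to the host implementations at runtime.
async def get_lat_lng(location_description: str) -> LatLng:
    """Get the latitude and longitude of a location."""
    ...


async def get_temp(lat: float, lng: float) -> float:
    """Get the temp at a location."""
    ...
```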
Traditional Tool Calling

```python
# Agent makes sequential tool calls
result1 = await get_lat_lng('London')
result2 = await get_temp(result1['lat'], result1['lng'])
result3 = await get_lat_lng('Paris')
result4 = await get_temp(result3['lat'], result3['lng'])
# Many round-trips to the LLM
```

Code-Mode with Monty
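In code-mode, the model can instead express the whole comparison as one script that runs inside the sandbox in a single round-trip. A runnable sketch, with stub functions standing in for the real sandboxed tool bindings:

```python
import asyncio


# Stubs standing in for the sandboxed tool bindings (illustrative only;
# in code-mode these calls are routed back to the host implementations).
async def get_lat_lng(location: str) -> dict:
    return {'lat': 51.5, 'lng': -0.1}


async def get_temp(lat: float, lng: float) -> float:
    return 18.0


async def main():
    # One script does all the work; only the final result returns to the LLM.
    temps = {}
    for city in ['London', 'Paris', 'Tokyo']:
        coords = await get_lat_lng(city)
        temps[city] = await get_temp(coords['lat'], coords['lng'])
    return temps


print(asyncio.run(main()))
```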
Code-mode is:

- Faster - Fewer round-trips to the LLM
- Cheaper - Less token usage
- More reliable - Complex logic is easier to express in code
Security
All code runs in Monty's sandbox with:

- No filesystem access
- No network access
- No environment variable access
- Only functions you explicitly provide
Monty ensures that even malicious code generated by the LLM cannot escape the sandbox or access your host system.
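The "only functions you explicitly provide" restriction can be illustrated with a toy example. This is not Monty's mechanism - Monty is a separate interpreter, so its isolation is far stronger than anything `exec`-based - but the principle is the same: code only sees names you put in its namespace.

```python
# Toy illustration of the principle (NOT how Monty works internally):
# the executed code can only reach names explicitly placed in its namespace.
allowed = {'add': lambda a, b: a + b}

namespace = {'__builtins__': {}, **allowed}
exec("result = add(2, 3)", namespace)
print(namespace['result'])  # 5

# With no builtins, even `import` is unavailable.
try:
    exec("import os", {'__builtins__': {}})
except ImportError as e:
    print('blocked:', e)
```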