## Overview

This example demonstrates:

- Integrating Workers AI models with the Sandbox SDK
- Using the Vercel AI SDK for clean function calling
- Executing Python code in isolated containers
- Handling code execution results and errors
## How it works

1. **User sends a prompt**: The user sends a natural language prompt to the `/run` endpoint requesting a calculation or code execution.
2. **Model receives the prompt**: The GPT-OSS model receives the prompt along with an `execute_python` tool definition.
3. **Model decides to execute code**: The model determines whether Python execution is needed and generates the appropriate code.
## Implementation

### Create the Python execution function

This function handles code execution in the sandbox and extracts results:
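A self-contained sketch of that helper follows. The `ExecutionResult` field names (`results`, `logs.stdout`/`logs.stderr`, `error`) mirror the shape the Sandbox SDK's `runCode()` returns, but treat the exact fields here as assumptions rather than the SDK's contract; the runner is injected via a structural interface so the real sandbox object can be passed in directly.

```typescript
// Assumed shape of a sandbox execution result (see the Sandbox SDK docs
// for the authoritative fields).
interface ExecutionResult {
  results?: Array<{ text?: string }>; // rich outputs, e.g. the final expression value
  logs?: { stdout?: string[]; stderr?: string[] };
  error?: { name: string; value: string } | null;
}

// Any object with a runCode() method satisfies this, including the real
// sandbox obtained from the Sandbox binding.
interface PythonRunner {
  runCode(code: string, options?: { language?: string }): Promise<ExecutionResult>;
}

// Collapse an execution into a single string the model can read:
// errors first, then expression results, then captured stdout/stderr.
function formatExecutionResult(execution: ExecutionResult): string {
  if (execution.error) {
    return `Error: ${execution.error.name}: ${execution.error.value}`;
  }
  const parts: string[] = [];
  for (const result of execution.results ?? []) {
    if (result.text) parts.push(result.text);
  }
  const stdout = (execution.logs?.stdout ?? []).join("");
  const stderr = (execution.logs?.stderr ?? []).join("");
  if (stdout) parts.push(stdout);
  if (stderr) parts.push(stderr);
  return parts.length > 0 ? parts.join("\n") : "(no output)";
}

// Run Python in the sandbox and return a readable result string.
async function executePython(sandbox: PythonRunner, code: string): Promise<string> {
  const execution = await sandbox.runCode(code, { language: "python" });
  return formatExecutionResult(execution);
}
```

Keeping `formatExecutionResult` pure makes the extraction logic easy to unit-test without spinning up a container.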
### Set up the AI request handler

Integrate Workers AI with the Vercel AI SDK:
### Create the Worker endpoint

Set up the API endpoint to handle requests:
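A minimal sketch of the `/run` endpoint using the Web-standard `Request`/`Response` APIs that Workers provide. The prompt runner is injected so the routing and validation logic can be tested on its own; in the real Worker, the default export's `fetch()` would call this with a closure over the env bindings and the AI handler.

```typescript
// Signature of whatever turns a prompt into a final answer (injected).
type PromptRunner = (prompt: string) => Promise<string>;

// Route POST /run, validate the JSON body, and return the model's answer.
async function handleRequest(request: Request, runPrompt: PromptRunner): Promise<Response> {
  const url = new URL(request.url);
  if (request.method !== "POST" || url.pathname !== "/run") {
    return new Response("Not found. POST a JSON body to /run.", { status: 404 });
  }

  let prompt: unknown;
  try {
    ({ prompt } = (await request.json()) as { prompt?: unknown });
  } catch {
    return new Response(JSON.stringify({ error: "Invalid JSON body" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }
  if (typeof prompt !== "string" || prompt.length === 0) {
    return new Response(JSON.stringify({ error: "Missing 'prompt' field" }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }

  const result = await runPrompt(prompt);
  return new Response(JSON.stringify({ result }), {
    headers: { "Content-Type": "application/json" },
  });
}

// In the Worker itself, this is wired up roughly as:
// export default {
//   async fetch(request: Request, env: Env): Promise<Response> {
//     return handleRequest(request, (prompt) => /* call the AI handler with env */);
//   },
// };
```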
### Example usage

Test the code interpreter with various prompts:
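For example, against a local dev server (the port assumes `wrangler dev`'s default of 8787; the prompts are illustrative):

```shell
# A calculation the model should delegate to the sandbox
curl -X POST http://localhost:8787/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the 20th Fibonacci number?"}'

# A task that clearly requires running code
curl -X POST http://localhost:8787/run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Generate 5 random numbers and compute their standard deviation"}'
```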
## Setup and deployment

### Run locally
The first run builds the Docker container (2-3 minutes). Subsequent runs are much faster.
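Assuming the standard Wrangler workflow (the exact scripts depend on the project's `package.json`), local development and deployment look roughly like:

```shell
npm install
npx wrangler dev      # first run builds the sandbox Docker image

# When ready, deploy to Cloudflare:
npx wrangler deploy
```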
## Key features

- **Workers AI Integration**: Uses the `@cf/openai/gpt-oss-120b` model via the `workers-ai-provider` package
- **Vercel AI SDK**: Leverages `generateText()` and `tool()` for clean function calling patterns
- **Sandbox Execution**: Python code runs in isolated Cloudflare Sandbox containers
- **Result Handling**: Extracts outputs from both expression results and stdout/stderr logs
- **Error Handling**: Properly surfaces execution errors to the AI model