Basic Setup
First, import the necessary functions and create a provider.

Chat Completions
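The SDK's actual import and factory names are not shown on this page, so the examples below model the provider as a base URL plus its derived OpenAI-compatible chat endpoint. The port and path here are assumptions.

```typescript
// Hypothetical provider sketch: base URL plus derived endpoint.
// Port and /v1 path are assumptions, not the SDK's real defaults.
interface Provider {
  baseURL: string;
  chatURL: () => string;
}

function createProvider(baseURL = "http://localhost:8080/v1"): Provider {
  return { baseURL, chatURL: () => `${baseURL}/chat/completions` };
}
```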
Simple Text Generation
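A sketch of a single-turn completion against the hub's OpenAI-compatible endpoint. The base URL and model name are placeholders, not project defaults; the response shape follows the standard OpenAI chat-completions format.

```typescript
// Standard OpenAI-compatible response shape.
type Completion = { choices: { message: { content: string } }[] };

function extractText(res: Completion): string {
  return res.choices[0]?.message.content ?? "";
}

// Base URL and model name are assumptions.
async function generateOnce(prompt: string, model = "llama-3.2-3b"): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  return extractText(await res.json());
}
```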
Generate text using any available participant.

Multi-turn Conversations
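Multi-turn chat with an OpenAI-compatible API means resending the full message history on every request. A minimal sketch of maintaining that history:

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Immutable append keeps the history easy to log and replay.
function appendTurn(history: Msg[], role: Msg["role"], content: string): Msg[] {
  return [...history, { role, content }];
}

let history: Msg[] = [{ role: "system", content: "You are a helpful assistant." }];
history = appendTurn(history, "user", "What is a gambiarra?");
// ...send `history` as the `messages` array, then record the reply:
history = appendTurn(history, "assistant", "A Brazilian term for an improvised fix.");
```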
Build conversational AI applications.

With Generation Options
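Generation options can be sketched with the standard OpenAI-compatible parameters; which of these the hub actually honors is an assumption.

```typescript
// Standard OpenAI-compatible sampling parameters.
interface GenOptions {
  temperature?: number;
  top_p?: number;
  max_tokens?: number;
  stop?: string[];
}

function withOptions(body: Record<string, unknown>, opts: GenOptions) {
  return { ...body, ...opts };
}

const body = withOptions(
  { model: "llama-3.2-3b", messages: [{ role: "user", content: "Summarize this." }] },
  { temperature: 0.2, max_tokens: 256 },
);
```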
Customize generation parameters.

Streaming Responses
Basic Streaming
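OpenAI-compatible endpoints stream Server-Sent Events where each `data:` line carries a JSON chunk with a `delta`; whether the hub follows this shape exactly is an assumption. A sketch of extracting tokens from those lines:

```typescript
// Parse one SSE line from an OpenAI-compatible stream.
// Returns the token, or null for comments, [DONE], and empty deltas.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice(6).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}
// Feed each line of the response body through this parser and write
// tokens as they arrive, e.g. process.stdout.write(token).
```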
Stream text generation for real-time output.

React Server Component Streaming
Integrate with Next.js App Router:

app/api/chat/route.ts
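The original route handler likely used AI SDK helpers; as a dependency-free sketch, the handler can proxy the browser's request to the hub's streaming endpoint and pass the SSE body straight through. The hub URL and model are assumptions.

```typescript
// app/api/chat/route.ts (sketch; hub URL and model are assumptions)
export async function POST(req: Request): Promise<Response> {
  const { messages } = await req.json();
  const upstream = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama-3.2-3b", messages, stream: true }),
  });
  // Stream the SSE body straight through to the browser.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { "Content-Type": "text/event-stream" },
  });
}
```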
Client-Side Hook
Use with the useChat hook:
app/page.tsx
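The page component itself calls the AI SDK's useChat hook, which posts to /api/chat and manages the message list for you. As a framework-free sketch of the state that hook maintains (message shape assumed):

```typescript
// Minimal stand-in for the chat state that useChat manages client-side.
type UiMsg = { id: string; role: "user" | "assistant"; content: string };

class ChatState {
  messages: UiMsg[] = [];
  private nextId = 0;

  add(role: UiMsg["role"], content: string): UiMsg {
    const msg = { id: String(this.nextId++), role, content };
    this.messages.push(msg);
    return msg;
  }

  // Streamed tokens append to the last assistant message.
  appendToLast(token: string): void {
    const last = this.messages[this.messages.length - 1];
    if (last?.role === "assistant") last.content += token;
  }
}
```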
Routing Strategies
Participant Routing
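The routing syntax the SDK uses is not shown on this page, so this sketch only models the selection step: find a specific online participant by its ID. The participant record shape is an assumption.

```typescript
// Participant record shape is an assumption.
interface Participant { id: string; model: string; online: boolean }

// Target a specific participant, ignoring offline ones.
function byId(participants: Participant[], id: string): Participant | undefined {
  return participants.find((p) => p.id === id && p.online);
}
```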
Target a specific participant by ID.

Model Routing
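Model routing can be sketched as taking the first online participant that serves the requested model; the record shape is again an assumption.

```typescript
interface Participant { id: string; model: string; online: boolean }

// First online participant running the requested model.
function firstWithModel(participants: Participant[], model: string): Participant | undefined {
  return participants.find((p) => p.online && p.model === model);
}
```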
Use the first available participant running a specific model.

Any Routing (Random)
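Random load balancing reduces to picking a uniformly random element from the online set. The RNG is injectable here so the behavior is testable; a real implementation would just use Math.random.

```typescript
// Pick a random element; rng is injectable for testing.
function pickAny<T>(online: T[], rng: () => number = Math.random): T | undefined {
  if (online.length === 0) return undefined;
  return online[Math.floor(rng() * online.length)];
}
```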
Load balance across all online participants.

Dynamic Routing Example
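Dynamic routing just means choosing one of the three strategies at runtime. A sketch with assumed context fields: pin to a participant if the caller asked for one, otherwise route by model, otherwise load balance.

```typescript
type Strategy =
  | { kind: "participant"; id: string }
  | { kind: "model"; model: string }
  | { kind: "any" };

// Context field names are assumptions.
function chooseStrategy(ctx: { pinnedParticipant?: string; requiredModel?: string }): Strategy {
  if (ctx.pinnedParticipant) return { kind: "participant", id: ctx.pinnedParticipant };
  if (ctx.requiredModel) return { kind: "model", model: ctx.requiredModel };
  return { kind: "any" }; // fall back to random load balancing
}
```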
Choose a routing strategy based on context.

Listing Participants and Models
List All Participants
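The listing endpoint's path is not shown here, so the fetch below uses an assumed rooms/participants route; the summarizing helper is the portable part.

```typescript
interface Participant { id: string; model: string; online: boolean }

function countOnline(participants: Participant[]): number {
  return participants.filter((p) => p.online).length;
}

// The /rooms/:room/participants path is an assumption.
async function listParticipants(baseURL: string, room: string): Promise<Participant[]> {
  const res = await fetch(`${baseURL}/rooms/${encodeURIComponent(room)}/participants`);
  return res.json();
}
```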
Get information about all participants in a room.

List Available Models
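OpenAI-compatible servers list models at GET /v1/models, returning `{ data: [{ id: ... }] }`; the base URL below is an assumption.

```typescript
// Standard OpenAI-compatible model listing shape.
type ModelList = { data: { id: string }[] };

function modelIds(list: ModelList): string[] {
  return list.data.map((m) => m.id);
}

async function listModels(baseURL = "http://localhost:8080"): Promise<string[]> {
  const res = await fetch(`${baseURL}/v1/models`);
  return modelIds(await res.json());
}
```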
Get an OpenAI-compatible model list.

Filter by Hardware Specs
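Hardware filtering is an ordinary predicate over the participant records; the spec field names (gpu, vramGb) are assumptions.

```typescript
// Spec field names are assumptions.
interface Specs { gpu?: string; vramGb?: number }
interface Participant { id: string; online: boolean; specs?: Specs }

// Online participants with at least `minGb` of VRAM.
function withMinVram(participants: Participant[], minGb: number): Participant[] {
  return participants.filter((p) => p.online && (p.specs?.vramGb ?? 0) >= minGb);
}
```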
Find participants with specific hardware.

HTTP Client Usage
Creating and Managing Rooms
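The HTTP client's real method names are not shown here, so this sketch only builds the request a room-creation call would send; the /rooms path and body shape are assumptions.

```typescript
// Build a room-creation request (path and body shape are assumptions).
function createRoomRequest(baseURL: string, name: string) {
  return {
    url: `${baseURL}/rooms`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name }),
    },
  };
}
// Usage: const { url, init } = createRoomRequest("http://localhost:8080", "demo");
//        await fetch(url, init);
```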
Use the HTTP client for room management.

Joining as a Participant
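Joining means advertising your local LLM endpoint and model to the room. The join path and payload field names below are assumptions; the Ollama URL is just an example of a local endpoint.

```typescript
// Payload field names and the join path are assumptions.
interface JoinPayload { participantId: string; model: string; endpoint: string }

function joinRequest(baseURL: string, room: string, p: JoinPayload) {
  return {
    url: `${baseURL}/rooms/${encodeURIComponent(room)}/join`,
    body: JSON.stringify(p),
  };
}

const req = joinRequest("http://localhost:8080", "demo", {
  participantId: "laptop-1",
  model: "llama-3.2-3b",
  endpoint: "http://localhost:11434/v1", // e.g. a local Ollama server
});
```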
Join a room with your local LLM endpoint.

Password-Protected Rooms
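Password protection can be sketched as the same create/join bodies with a password attached; whether the hub expects it in the body or a header is an assumption.

```typescript
// Attach a password to a create/join body (placement is an assumption).
function withPassword(body: Record<string, unknown>, password: string) {
  return { ...body, password };
}

const createBody = withPassword({ name: "private-room" }, "s3cret");
const joinBody = withPassword({ participantId: "laptop-1" }, "s3cret");
```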
Create and join password-protected rooms.

Health Checks
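A periodic health check is a plain interval around whatever ping call the client exposes; the interval length and ping mechanics are assumptions.

```typescript
// Start a health-check loop; returns a stop function for going offline cleanly.
function startHealthChecks(ping: () => void, intervalMs = 15_000): () => void {
  const timer = setInterval(ping, intervalMs);
  return () => clearInterval(timer);
}
```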
Send periodic health checks to maintain online status.

Error Handling
ClientError Handling
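The real ClientError shape is not shown on this page; assuming an Error subclass that carries the HTTP status, type-safe handling narrows with instanceof:

```typescript
// Assumed shape: an Error subclass carrying the HTTP status.
class ClientError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = "ClientError";
  }
}

function describe(err: unknown): string {
  if (err instanceof ClientError) {
    if (err.status === 401) return "unauthorized (check the room password)";
    if (err.status === 404) return "room or participant not found";
    return `request failed with status ${err.status}`;
  }
  return err instanceof Error ? err.message : "unknown error";
}
```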
Handle HTTP client errors with type safety.

AI SDK Error Handling
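Graceful degradation can be sketched as a wrapper that catches a failed generation and returns a fallback reply; the generate signature is an assumption.

```typescript
// Wrap any generation call; on failure, log and return a fallback.
async function safeGenerate(
  generate: () => Promise<string>,
  fallback = "Sorry, no participants are available right now.",
): Promise<string> {
  try {
    return await generate();
  } catch (err) {
    console.error("generation failed:", err instanceof Error ? err.message : err);
    return fallback;
  }
}
```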
Handle generation errors gracefully.

Retry Logic
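Exponential backoff doubles the wait after each failure, usually with a cap. A sketch (the base delay and cap are assumptions):

```typescript
// Delays for `retries` attempts: baseMs, 2*baseMs, 4*baseMs, ... capped at maxMs.
function backoffDelays(retries: number, baseMs = 250, maxMs = 8_000): number[] {
  return Array.from({ length: retries }, (_, i) => Math.min(baseMs * 2 ** i, maxMs));
}

async function withRetries<T>(fn: () => Promise<T>, retries = 3, baseMs = 250): Promise<T> {
  let lastErr: unknown;
  // One initial attempt (delay 0) plus `retries` retries with backoff.
  for (const delay of [0, ...backoffDelays(retries, baseMs)]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
    }
  }
  throw lastErr;
}
```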
Implement automatic retries with exponential backoff.

Advanced Patterns
Local Hub with Integrated Client
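The real hub factory is not shown here, so this sketch only models the wiring: start a hub, then point a client at its own address in the same process.

```typescript
// Stub: a real hub would start an HTTP server here; this only carries config.
interface Hub { port: number; baseURL: string; close: () => void }

function createLocalHub(port = 8080): Hub {
  return { port, baseURL: `http://localhost:${port}`, close: () => {} };
}

const hub = createLocalHub();
// The integrated client can use the hub immediately:
const modelsURL = `${hub.baseURL}/v1/models`;
```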
Create a hub and use it immediately.

Multi-Room Routing
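Working with several rooms can be sketched as one base URL per room, resolved by name before each request; whether rooms map to separate hubs or to paths on one hub is an assumption.

```typescript
// One base URL per room (mapping is an assumption).
const rooms = new Map<string, string>([
  ["research", "http://localhost:8080"],
  ["homelab", "http://localhost:8081"],
]);

function chatEndpointFor(room: string): string {
  const base = rooms.get(room);
  if (!base) throw new Error(`unknown room: ${room}`);
  return `${base}/v1/chat/completions`;
}
```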
Work with multiple rooms simultaneously.

Custom Base URL Access
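Because the endpoint is OpenAI-compatible, any client that accepts a base URL can talk to it directly; plain fetch is the lowest common denominator. The URL and model below are assumptions.

```typescript
// Hit the OpenAI-compatible endpoint directly (URL and model assumed).
async function rawCompletion(prompt: string): Promise<unknown> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-3.2-3b",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return res.json();
}
```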
Access the OpenAI-compatible endpoint directly.

Next Steps
API Reference
Explore the complete API documentation
CLI Guide
Learn about the Gambiarra CLI