List Models
Retrieve the models available in a room, i.e. those served by currently online participants.

Endpoint
Path Parameters

code - Room code
Response
Status: 200 OK
OpenAI-compatible models list with Gambiarra extensions.
object - Always "list"
data - Array of model objects
Example Response
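As a rough illustration, an OpenAI-compatible models list for a room might look like the sketch below; the extension fields participant_id and online are assumptions standing in for the unspecified Gambiarra extensions:

```json
{
  "object": "list",
  "data": [
    {
      "id": "llama3.2:3b",
      "object": "model",
      "created": 1700000000,
      "owned_by": "gambiarra",
      "participant_id": "550e8400-e29b-41d4-a716-446655440000",
      "online": true
    }
  ]
}
```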
Error Responses
404 Not Found - Room
Chat Completions
Proxy OpenAI-compatible chat completion requests to room participants.

Endpoint
Path Parameters

code - Room code
Request Body
model - Model routing specification:
- Participant ID: route to a specific participant (e.g. "550e8400-e29b-41d4-a716-446655440000")
- Model name prefix: route to the first online participant serving a matching model (e.g. "model:llama3.2:3b")
- Wildcard: route to a random online participant (use "*" or "any")

messages - Array of message objects (OpenAI format)
stream - Enable streaming responses (default: false)
temperature - Temperature (0-2)
top_p - Top-p sampling (0-1)
max_tokens - Maximum tokens to generate
stop - Stop sequences
frequency_penalty - Frequency penalty (-2 to 2)
presence_penalty - Presence penalty (-2 to 2)
seed - Random seed for reproducibility
Additional provider-specific parameters are passed through to the participant endpoint.
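Putting the parameters together, a request body using the model-name-prefix routing form might look like this sketch (values illustrative; parameter names follow the standard OpenAI convention):

```json
{
  "model": "model:llama3.2:3b",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false,
  "temperature": 0.7,
  "max_tokens": 256
}
```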
Response (Non-Streaming)
Status: 200 OK
OpenAI-compatible chat completion response from the participant.
id - Completion ID
object - Always "chat.completion"
created - Unix timestamp (seconds)
model - Model used by the participant
choices - Array of completion choices
usage - Token usage statistics
Example Response (Non-Streaming)
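As a sketch, assuming the participant returns a standard OpenAI-shaped completion, the body might resemble:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "llama3.2:3b",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20}
}
```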
Response (Streaming)
Status: 200 OK
Content-Type: text/event-stream
Server-Sent Events stream with OpenAI-compatible chat completion chunks.
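Assuming standard OpenAI chunk framing, the stream might look like the sketch below (IDs and content illustrative):

```text
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"llama3.2:3b","choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"llama3.2:3b","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```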
Error Responses
404 Not Found - Room
404 Not Found - No Available Participant
503 Service Unavailable - Participant Offline
502 Bad Gateway - Proxy Failed
Model Routing Logic
The hub routes requests based on the model field:
1. Wildcard ("*" or "any")
Selects a random online participant from the room.
2. Model Name Prefix ("model:<name>")
Routes to the first online participant with a matching model name.
3. Participant ID (direct)
Routes to a specific participant by ID.

4. Fallback: Model Name (without prefix)

If the value doesn't match a participant ID, the hub treats it as a model name.

Streaming Behavior
When stream: true:
- Hub proxies the streaming response from the participant
- Response uses Content-Type: text/event-stream
- Stream is passed through without modification
- Connection is kept alive until completion or error
Use the /rooms/:code/events endpoint to monitor llm:request and llm:complete events.
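The event payload schema is not specified here; purely as an illustration (all data fields below are assumptions), the event stream might carry frames like:

```text
event: llm:request
data: {"participant_id": "550e8400-e29b-41d4-a716-446655440000", "model": "llama3.2:3b"}

event: llm:complete
data: {"participant_id": "550e8400-e29b-41d4-a716-446655440000", "usage": {"total_tokens": 20}}
```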