Documentation Index
Fetch the complete documentation index at: https://mintlify.com/cloudwaddie/lmarenabridge/llms.txt
Use this file to discover all available pages before exploring further.
The /api/v1/chat/completions endpoint is the primary way to send messages to LMArena models through the bridge. It follows the OpenAI Chat Completions format, so any client that works with OpenAI's API can be pointed at LMArena Bridge with minimal changes.
Authentication
Bearer token. Pass the API key you created in the dashboard:
Authorization: Bearer <api_key>. If no API keys are configured on the bridge, pass an empty string or omit the header.
Request body
model: The public model ID to use. Retrieve the current list of available IDs from GET /api/v1/models.
messages: Array of message objects forming the conversation. Each object must have a role ("system", "user", or "assistant") and a content field.
stream: When true, the response is returned as a Server-Sent Events stream. Each event carries a JSON delta; the stream ends with data: [DONE].
temperature: Sampling temperature. Passed through to LMArena when provided.
max_tokens: Maximum number of tokens to generate. Passed through to LMArena when provided.
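Putting the fields above together, a minimal request body can be assembled like this. This is an illustrative sketch: "example-model" is a placeholder, not a real model ID; fetch valid IDs from GET /api/v1/models.

```python
import json

def build_request(model, messages, stream=False, temperature=None, max_tokens=None):
    """Assemble a Chat Completions request body for the bridge.

    Only model and messages are required; optional sampling fields
    are included only when explicitly set.
    """
    body = {"model": model, "messages": messages, "stream": stream}
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return body

# "example-model" is a placeholder ID, not a real LMArena model.
payload = build_request(
    "example-model",
    [{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)
print(json.dumps(payload))
```

The resulting JSON is what you would POST to /api/v1/chat/completions with the Authorization header shown above.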
Multi-turn conversations
The bridge maintains a chat session for each API key. When you send a sequence of messages that begins with the same first user message, the bridge routes follow-up turns to the same LMArena session automatically; you do not need to track a session ID yourself. The conversation key is derived from your API key, the model name, and the first user message.

If you change the model or the first user message, a new LMArena session is started. The bridge keeps all sessions in memory; restarting the server clears them.
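The routing rule can be sketched as follows. This is an illustrative reconstruction of the behaviour described above, not the bridge's actual code; only the keying on (API key, model, first user message) is taken from the documentation.

```python
# Illustrative sketch of the session-routing rule. The real bridge's
# internals may differ; only the (api_key, model, first user message)
# keying is taken from the docs.
sessions = {}  # kept in memory, so a restart clears it

def session_for(api_key, model, messages):
    """Return the session a request would be routed to, creating it if new."""
    first_user = next(m["content"] for m in messages if m["role"] == "user")
    key = (api_key, model, first_user)
    if key not in sessions:
        sessions[key] = {"id": len(sessions) + 1, "turns": 0}
    sessions[key]["turns"] += 1
    return sessions[key]

msgs = [{"role": "user", "content": "Hi"}]
a = session_for("k1", "example-model", msgs)  # first turn: new session
b = session_for("k1", "example-model", msgs + [
    {"role": "assistant", "content": "Hey"},
    {"role": "user", "content": "Tell me more"},
])                                            # same first user message: same session
c = session_for("k1", "other-model", msgs)    # different model: new session
```

Because `b` begins with the same first user message and model, it lands in the same session as `a`, while `c` starts a fresh one.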
Examples
Responses
Non-streaming response
200
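A non-streaming 200 body follows the standard Chat Completions shape. The response below is fabricated for illustration (the ID, model name, and token counts are made up), showing how the main fields can be read:

```python
# Fabricated 200 response shaped like an OpenAI Chat Completions body.
response = {
    "id": "chatcmpl-abc123",       # made-up identifier
    "object": "chat.completion",
    "model": "example-model",      # placeholder model ID
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 7, "total_tokens": 16},
}

# The assistant's reply and the token accounting:
reply = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]
```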
Streaming response
Each SSE event has the form data: <json>\n\n. The final event is data: [DONE].
SSE stream
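Assuming each event's JSON follows the standard Chat Completions streaming shape (a content fragment under choices[0].delta), the stream can be reassembled like this. The sample events below are fabricated for illustration:

```python
import json

def collect_sse(lines):
    """Accumulate delta content from 'data: <json>' events until [DONE]."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        delta = event["choices"][0]["delta"]
        text.append(delta.get("content", ""))  # first delta may carry only the role
    return "".join(text)

# Fabricated sample stream, shaped like OpenAI-style SSE deltas.
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]
result = collect_sse(sample)
```

With a real HTTP client you would feed it the response's lines instead of the `sample` list; the termination condition is the literal `data: [DONE]` event.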
Response fields
id: Unique identifier for this completion.
object: Always "chat.completion" for non-streaming responses.
model: The model that generated the response.
usage: Token usage counts: prompt_tokens, completion_tokens, total_tokens.
Error responses
| Status | Meaning |
|---|---|
| 400 | Invalid JSON, missing model or messages, empty messages array, or prompt exceeds the ~113,000 character limit. |
| 401 | The Authorization header is missing or the API key is invalid. |
| 403 | Attempted to use a stealth model (one without a public organisation). |
| 404 | The requested model was not found. Check GET /api/v1/models for valid IDs. |
| 429 | Rate limit exceeded for your API key. |
| 503 | LMArena is unavailable, the model list could not be fetched, or the bridge failed to acquire a reCAPTCHA token. |
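Client code can branch on these statuses. The sketch below treats 429 and 503 as transient (worth retrying after a delay) and the 4xx codes as request errors to fix rather than retry; this retry policy is an illustrative suggestion, not part of the bridge:

```python
# Illustrative client-side policy for the status codes above.
TRANSIENT = {429, 503}                 # rate limit / upstream unavailable
CLIENT_ERRORS = {400, 401, 403, 404}   # fix the request instead of retrying

def classify(status):
    """Return 'ok', 'retry', or 'fail' for a bridge response status."""
    if status == 200:
        return "ok"
    if status in TRANSIENT:
        return "retry"
    return "fail"
```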