

LMArena Bridge implements POST /api/v1/messages as an Anthropic Messages API–compatible endpoint. This lets Anthropic-native clients — such as tools built with the Anthropic SDK, Cursor in Claude mode, or any app that calls the Anthropic API — route requests through LMArena without changing their client code.
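As a concrete sketch, here is how such a request could be assembled with the Python standard library. The base URL http://localhost:8000 and the helper name build_messages_request are illustrative assumptions; only the path /api/v1/messages and the headers come from this page.

```python
import json
import urllib.request

def build_messages_request(model, user_text, base_url="http://localhost:8000",
                           api_key="your_api_key", max_tokens=1024):
    """Build a POST request for the bridge's Messages endpoint (illustrative helper)."""
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_text}],
    }
    return urllib.request.Request(
        f"{base_url}/api/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a running bridge):
# req = build_messages_request("claude-3-5-sonnet-20241022",
#                              "What is the capital of France?")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Anthropic-native SDK clients can achieve the same thing by overriding their base URL to point at the bridge instead of api.anthropic.com.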

Authentication

Authorization
string
required
Bearer token. Pass the API key you created in the dashboard: Authorization: Bearer <api_key>. If no API keys are configured, omit the header or pass any value.

Request body

model
string
required
Model ID. Use any model ID returned by GET /api/v1/models.
messages
object[]
required
Array of message objects. Each must have a role ("user" or "assistant") and a content field (string or array of content blocks).
system
string
Optional system prompt. Prepended as a role: "system" message before the conversation.
max_tokens
number
Maximum tokens to generate. Passed through to the underlying chat completions handler.
stream
boolean
default:"false"
When true, returns an Anthropic-format SSE stream with message_start, content_block_start, content_block_delta, content_block_stop, message_delta, and message_stop events.
temperature
number
Sampling temperature. Passed through when provided.

How translation works

The bridge converts the Anthropic request to OpenAI format internally, routes it through the same chat/completions pipeline, then converts the response back:
  • system → prepended {"role": "system", "content": "..."} message
  • Content blocks of type: "text" are joined with newlines into a single string
  • Streaming OpenAI SSE chunks are re-wrapped as Anthropic content_block_delta events
Image content blocks are not currently supported through this endpoint. To send images, use POST /api/v1/chat/completions directly with the OpenAI vision format.
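The conversion above can be sketched as a pure function. This is a simplified illustration of the documented behavior, not the bridge's actual source:

```python
def anthropic_to_openai(req: dict) -> dict:
    """Translate an Anthropic Messages request body into OpenAI chat format.

    Sketch of the documented rules: `system` is prepended as a system
    message, and "text" content blocks are joined with newlines into a
    single string. Image blocks are skipped, mirroring the endpoint's
    current limitation.
    """
    messages = []
    if req.get("system"):
        messages.append({"role": "system", "content": req["system"]})
    for msg in req.get("messages", []):
        content = msg["content"]
        if isinstance(content, list):  # array of content blocks
            content = "\n".join(
                block["text"] for block in content if block.get("type") == "text"
            )
        messages.append({"role": msg["role"], "content": content})
    out = {"model": req["model"], "messages": messages}
    # Pass-through parameters, forwarded only when present.
    for key in ("max_tokens", "temperature", "stream"):
        if key in req:
            out[key] = req[key]
    return out
```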

Example

curl -X POST http://localhost:8000/api/v1/messages \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Non-streaming response

200
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Paris is the capital of France."
    }
  ],
  "model": "gpt-4o",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 14,
    "output_tokens": 9
  }
}

Streaming response events

SSE stream
event: message_start
data: {"type":"message_start","message":{"id":"msg_01XFD...","type":"message","role":"assistant","content":[],"model":"gpt-4o","stop_reason":null}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Paris"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_stop
data: {"type":"message_stop"}
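On the client side, a stream like the one above is consumed by pairing each event: line with its data: payload and accumulating text_delta fragments. A minimal parser sketch (the function name is illustrative; a production client should also handle message_delta, errors, and partial lines):

```python
import json

def collect_text(sse_lines):
    """Accumulate assistant text from Anthropic-format SSE lines.

    Expects an iterable of decoded lines such as "event: ..." and
    "data: {...}"; returns the concatenated text_delta fragments.
    """
    fragments, event = [], None
    for line in sse_lines:
        line = line.strip()
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:") and event == "content_block_delta":
            delta = json.loads(line[len("data:"):].strip()).get("delta", {})
            if delta.get("type") == "text_delta":
                fragments.append(delta.get("text", ""))
    return "".join(fragments)
```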

Error responses

Status  Meaning
400     Missing or invalid model or messages field.
401     Missing or invalid API key.
429     Rate limit exceeded for your API key.
503     LMArena unavailable or bridge failed to acquire a token.
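One reasonable client-side policy (an assumption, not prescribed by the bridge): retry 429 and 503 with exponential backoff, and surface 400 and 401 immediately since retrying cannot fix a malformed request or a bad key.

```python
def is_retryable(status: int) -> bool:
    """Whether a client may reasonably retry this response (sketch)."""
    return status in (429, 503)  # rate limited / bridge or LMArena unavailable

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff delay for the given retry attempt (0-based)."""
    return min(cap, base * (2 ** attempt))
```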
