

EcliPanel’s AI feature routes requests through a managed model registry, handling failover across multiple upstream endpoints automatically. Users and organisations are linked to specific models by an administrator. The OpenAI-compatible endpoints let you point standard OpenAI client libraries at EcliPanel without code changes.
All AI endpoints return 503 with {"error":"feature_disabled"} when the ai feature flag is off. Enable the flag in Admin → Settings before using these routes.
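For example, a request to any AI route while the flag is disabled returns the following (illustrative output; header details may vary):

curl -i https://your-panel.example.com/api/ai/chat \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{"message": "ping"}'

HTTP/1.1 503 Service Unavailable
Content-Type: application/json

{"error":"feature_disabled"}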

Chat

Send a chat message

POST /api/ai/chat
Sends a single message to the AI model assigned to the authenticated user or their organisation. Conversation history can be passed in to maintain context across turns.
message (string, required): The user’s message text.
modelId (number): Explicit model ID to use. If omitted, the user’s assigned model is used.
systemPrompt (string): System-level instruction prepended to the conversation.
history (object[]): Previous conversation turns.
curl -X POST https://your-panel.example.com/api/ai/chat \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "How do I increase my server RAM limit?",
    "history": [
      {"role": "user", "content": "What is EcliPanel?"},
      {"role": "assistant", "content": "EcliPanel is a game server control panel."}
    ]
  }'
reply (string): The AI model’s response text. If no model is configured, a user-friendly fallback message is returned instead of an error.
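A successful call returns a JSON body containing the reply field, for example (illustrative content; additional fields may be present):

{
  "reply": "You can raise the RAM limit from your server's resource settings, up to the allocation set by your administrator."
}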

AI Studio

POST /api/ai/studio
Advanced AI invocation endpoint with extended configuration options for power users. Requires the paid or higher portal tier.
message (string, required): The user’s message text.
modelId (number): Explicit model ID.
systemPrompt (string): System prompt override.
maxTokens (number): Maximum tokens in the response.
temperature (number): Sampling temperature (0.0–2.0).
history (object[]): Conversation history in the same format as /api/ai/chat.
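An example request using the extended options (illustrative values; the same session-cookie authentication as /api/ai/chat is assumed):

curl -X POST https://your-panel.example.com/api/ai/studio \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Draft a startup script for an 8 GB Minecraft server.",
    "systemPrompt": "You are a game server configuration assistant.",
    "maxTokens": 512,
    "temperature": 0.4
  }'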

Model discovery

List all models

GET /api/ai/models
Returns all AI models registered in the panel. API keys and endpoint credentials are stripped from the response. Available to all authenticated users.
curl https://your-panel.example.com/api/ai/models \
  -H "Cookie: session=<token>"
id (number): Model ID.
name (string): Model display name.
config (object): Model configuration with API keys and endpoint credentials removed.
tags (string[]): Descriptive tags assigned by the administrator.
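An example response, assuming the endpoint returns a JSON array (entries and config keys are illustrative; credentials are never included):

[
  {
    "id": 1,
    "name": "General Assistant",
    "config": {
      "provider": "openai",
      "model": "gpt-4o-mini"
    },
    "tags": ["general", "fast"]
  }
]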

List your accessible models

GET /api/ai/my-models
Returns the models linked to the authenticated user or to any organisation they belong to, including per-link usage limits.
curl https://your-panel.example.com/api/ai/my-models \
  -H "Cookie: session=<token>"
model (object): The linked model.
limits (object): Per-link usage limits.
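A sketch of the response shape, assuming a JSON array of link objects; the limit keys shown are hypothetical:

[
  {
    "model": {
      "id": 1,
      "name": "General Assistant",
      "tags": ["general"]
    },
    "limits": {
      "dailyRequests": 200,
      "maxTokensPerRequest": 4096
    }
  }
]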

OpenAI-compatible proxy

These endpoints accept the standard OpenAI request format. Point OpenAI-compatible clients or libraries at your EcliPanel instance by changing the baseURL.
Set baseURL to https://your-panel.example.com/api/ai/openai/v1 in any OpenAI SDK to route requests through EcliPanel’s model registry automatically; OpenAI SDKs append paths such as /chat/completions to the base URL, which then resolve to the routes documented below.

Chat completions

POST /api/ai/openai/v1/chat/completions
Proxies a standard OpenAI chat completions request to the model assigned to the authenticated user. The model field in the request body is replaced with the provider model ID from the panel configuration.
messages (object[], required): Conversation messages in OpenAI format.
modelId (number): Optional EcliPanel model ID override. If omitted, the user’s assigned model is used.
temperature (number): Sampling temperature.
max_tokens (number): Maximum completion tokens.
stream (boolean): Whether to stream the response using server-sent events.
curl -X POST https://your-panel.example.com/api/ai/openai/v1/chat/completions \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "List three game server optimization tips."}
    ],
    "temperature": 0.7,
    "max_tokens": 512
  }'
The response matches the standard OpenAI chat completions format:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1716000000,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "1. Allocate RAM matching the game's recommended settings..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 120,
    "total_tokens": 148
  }
}
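When stream is true, the reply is delivered over server-sent events. Assuming the proxy passes through the standard OpenAI chunk format, a streamed request and its first events look roughly like this (illustrative chunks):

curl -N -X POST https://your-panel.example.com/api/ai/openai/v1/chat/completions \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "List three game server optimization tips."}],
    "stream": true
  }'

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"1."},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" Allocate RAM"},"finish_reason":null}]}

data: [DONE]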

Text completions

POST /api/ai/openai/v1/completions
Proxies an OpenAI-style text completion request (non-chat format).
prompt (string, required): The prompt text to complete.
modelId (number): Optional EcliPanel model ID override.
max_tokens (number): Maximum completion tokens.
temperature (number): Sampling temperature.
curl -X POST https://your-panel.example.com/api/ai/openai/v1/completions \
  -H "Cookie: session=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "The best way to reduce server latency is",
    "max_tokens": 100
  }'
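Assuming the proxy returns the standard OpenAI text completion format, the response looks roughly like this (illustrative values):

{
  "id": "cmpl-xyz789",
  "object": "text_completion",
  "created": 1716000000,
  "choices": [
    {
      "index": 0,
      "text": " to host your game server geographically close to your players.",
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 100,
    "total_tokens": 109
  }
}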
