EcliPanel ships two AI surfaces — AI Chat for conversational interactions and AI Studio for more advanced prompting — backed by a model management layer that lets admins configure which AI models are available and restrict access by user or organisation. All AI features are controlled by the ai feature flag. If your deployment does not have AI configured, these endpoints and the AI navigation items are hidden entirely.
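As a rough sketch of how such a gate behaves (the handler shape, flag storage, and response fields here are hypothetical — only the ai flag name and the /api/ai/* paths come from the docs):

```python
# Hypothetical sketch of a feature-flag gate in front of /api/ai/* routes.
# Only the "ai" flag name is taken from the docs; everything else is illustrative.

FEATURE_FLAGS = {"ai": False}  # panel settings; AI disabled in this example


def handle_ai_request(path: str, payload: dict) -> dict:
    """Reject any /api/ai/* request unless the `ai` flag is enabled."""
    if not FEATURE_FLAGS.get("ai"):
        return {"status": 403, "error": "feature-disabled", "feature": "ai"}
    # With the flag enabled, the request would be routed to the real handler.
    return {"status": 200, "data": {"path": path, "echo": payload}}


resp = handle_ai_request("/api/ai/chat", {"message": "hi"})
```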
AI features require the ai feature flag to be enabled in your panel settings. Without it, all /api/ai/* endpoints return a feature-disabled error and the AI sections do not appear in the dashboard navigation.

AI Chat
AI Chat is a lightweight conversational interface available to all users (no tier restriction). It is served at /dashboard/ai-chat in the frontend. The endpoint accepts a message and returns a streamed or buffered completion from the configured model.
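A minimal sketch of what a chat request might look like from a client. The /api/ai/chat path and the message/stream payload keys are assumptions for illustration, not confirmed EcliPanel API shapes:

```python
import json

# Hypothetical chat request builder; the URL path and payload keys are
# assumptions, not the documented EcliPanel wire format.
def build_chat_request(message: str, stream: bool = False) -> dict:
    return {
        "url": "https://panel.example.com/api/ai/chat",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": message, "stream": stream}),
    }


req = build_chat_request("Summarise today's alerts", stream=True)
# Any HTTP client (requests, fetch, curl) would POST req["body"] to req["url"];
# with stream=True the response arrives as incremental chunks rather than one body.
```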
AI Studio
AI Studio provides a more capable interface for advanced prompting, code generation, and longer sessions. It is restricted to the Pro (paid) tier and above, lives at /dashboard/ai-studio, and requires the paid portal tier in addition to the ai feature flag.
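The two conditions combine as a simple guard. In this sketch the set of paid tier names is an assumption ("Pro tier and above" is from the docs, but the exact tier identifiers are not):

```python
# Hypothetical access check for AI Studio. The `ai` flag and the Pro tier
# requirement come from the docs; the exact tier names are assumptions.
PAID_TIERS = {"pro", "enterprise"}  # "Pro and above" — names illustrative


def can_use_ai_studio(feature_flags: dict, user_tier: str) -> bool:
    """AI Studio needs both the ai feature flag and a paid portal tier."""
    return bool(feature_flags.get("ai")) and user_tier in PAID_TIERS
```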
OpenAI-compatible endpoint
EcliPanel exposes a drop-in OpenAI-compatible chat completions endpoint that you can point any OpenAI SDK or HTTP client at. For example, you can use the openai Python or Node.js library with EcliPanel as the base URL:
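As a sketch of the request shape, assuming the conventional OpenAI /v1/chat/completions path (EcliPanel's exact base URL and auth scheme may differ — treat these values as placeholders):

```python
import json

# Sketch of an OpenAI-compatible chat completions request. The base URL,
# path, and bearer-token auth follow the OpenAI convention; EcliPanel's
# actual values are deployment-specific assumptions here.
BASE_URL = "https://panel.example.com/api/ai/v1"


def chat_completion_request(model: str, prompt: str) -> dict:
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": "Bearer <your-api-key>",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }


# With the official openai Python library the same call is roughly:
#   client = OpenAI(base_url=BASE_URL, api_key="<your-api-key>")
#   client.chat.completions.create(model=..., messages=[...])
req = chat_completion_request("gpt-4o-mini", "Hello")
```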
Internally, AI requests are routed through requestWithFallback, which automatically retries against alternative endpoints if the primary one is rate-limited.
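The control flow can be pictured roughly like this (a hypothetical Python rendition — the real requestWithFallback lives in the EcliPanel backend and its signature is not documented here):

```python
class RateLimited(Exception):
    """Raised by a transport when an endpoint is rate-limiting us."""


def request_with_fallback(endpoints, send):
    """Try each endpoint in order, falling through to the next on a rate
    limit. A loose sketch of requestWithFallback, not the real code."""
    last_error = None
    for endpoint in endpoints:
        try:
            return send(endpoint)
        except RateLimited as exc:
            last_error = exc  # this endpoint is throttled; try the next one
    raise last_error or RuntimeError("no endpoints configured")


# Stub transport: the primary endpoint is rate-limited, the backup succeeds.
def fake_send(endpoint):
    if endpoint == "https://primary.example.com":
        raise RateLimited(endpoint)
    return {"endpoint": endpoint, "ok": True}


result = request_with_fallback(
    ["https://primary.example.com", "https://backup.example.com"], fake_send
)
```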
Model management
Admins configure which upstream models are available to the panel. Each AIModel record stores one or more endpoint definitions (base URL + API key). The backend cycles through endpoints and applies per-endpoint cooldowns when rate limits are encountered.
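A sketch of the endpoint-cycling and cooldown bookkeeping, assuming an in-memory record (the field names and the 60-second cooldown are illustrative, not the real AIModel schema):

```python
import time

# Hypothetical AIModel record with multiple endpoints; field names and the
# cooldown duration are assumptions, not the real schema.
COOLDOWN_SECONDS = 60.0

model = {
    "name": "gpt-4o",
    "endpoints": [
        {"base_url": "https://a.example.com", "api_key": "key-a", "cooldown_until": 0.0},
        {"base_url": "https://b.example.com", "api_key": "key-b", "cooldown_until": 0.0},
    ],
}


def next_endpoint(model, now=None):
    """Return the first endpoint not currently cooling down, or None."""
    now = time.monotonic() if now is None else now
    for ep in model["endpoints"]:
        if ep["cooldown_until"] <= now:
            return ep
    return None


def mark_rate_limited(ep, now=None):
    """Put an endpoint on cooldown after an upstream rate-limit response."""
    now = time.monotonic() if now is None else now
    ep["cooldown_until"] = now + COOLDOWN_SECONDS
```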
Per-user and per-org model access
Admins can grant specific models to individual users (AIModelUser) or to entire organisations (AIModelOrg). When the backend resolves available models for a request, it checks both the user’s direct grants and the grants for all organisations they belong to.
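The resolution is a union of the two grant sources. A sketch, using the AIModelUser and AIModelOrg names from the docs but hypothetical in-memory shapes in place of the real tables:

```python
# Hypothetical in-memory stand-ins for the AIModelUser and AIModelOrg
# tables; the tuple shapes and membership map are illustrative.
ai_model_user = {("alice", "gpt-4o")}                           # (user_id, model)
ai_model_org = {("acme", "claude-3"), ("acme", "gpt-4o-mini")}  # (org_id, model)
memberships = {"alice": ["acme"]}                               # user -> org ids


def resolve_models(user_id: str) -> set:
    """Union of the user's direct grants and all of their orgs' grants."""
    direct = {m for (u, m) in ai_model_user if u == user_id}
    via_orgs = {
        m for (org, m) in ai_model_org
        if org in memberships.get(user_id, [])
    }
    return direct | via_orgs
```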
Usage tracking
Every AI request is recorded in the AIUsage table. This lets admins audit consumption per user and per organisation through the SOC dashboard.
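For illustration, an aggregation over hypothetical AIUsage rows (the column names and token counts here are invented; the real table is written by the backend on every request):

```python
from collections import Counter

# Hypothetical stand-in for AIUsage rows; column names are assumptions.
ai_usage = [
    {"user": "alice", "org": "acme", "model": "gpt-4o", "tokens": 120},
    {"user": "alice", "org": "acme", "model": "gpt-4o", "tokens": 80},
    {"user": "bob", "org": "acme", "model": "claude-3", "tokens": 50},
]


def tokens_per_user(rows):
    """Aggregate token consumption per user, as a SOC-style audit might."""
    totals = Counter()
    for row in rows:
        totals[row["user"]] += row["tokens"]
    return dict(totals)
```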
Endpoint cooldown state is also surfaced through an admin channel (admin:ai:cooldowns) for admin visibility.