Documentation Index
Fetch the complete documentation index at: https://mintlify.com/jundot/omlx/llms.txt
Use this file to discover all available pages before exploring further.
The /v1/models endpoint returns the list of all models that oMLX has discovered in your configured model directory. The response is compatible with the OpenAI List Models API, so any OpenAI client can use it to enumerate available models. oMLX additionally exposes model type and load-status information through a separate extension endpoint, /v1/models/status.
List models
GET /v1/models
Returns all discovered models. For each model, the id field is the alias if one is configured in per-model settings, or the directory name otherwise. Both the alias and the directory name are accepted when specifying a model in generation requests.
Response fields
object — Always "list".
data — Array of model objects.
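As an illustration, a client can enumerate models with a plain GET request. This is a minimal sketch using Python's standard library; the base URL and port are assumptions, so substitute whatever address your oMLX server listens on.

```python
import json
import urllib.request

# Assumed local address; adjust to where your oMLX server actually listens.
BASE_URL = "http://localhost:8080"

def extract_model_ids(payload: dict) -> list:
    """Pull the id of each model from an OpenAI-style list response.

    Each id is the configured alias if one exists, otherwise the
    model's directory name.
    """
    return [model["id"] for model in payload.get("data", [])]

def list_models(base_url: str = BASE_URL) -> list:
    """GET /v1/models and return the discovered model ids."""
    with urllib.request.urlopen(base_url + "/v1/models") as resp:
        return extract_model_ids(json.load(resp))
```

Any id returned here can be passed as the `model` parameter of a generation request, since both aliases and directory names are accepted.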
Model status
GET /v1/models/status
Returns extended per-model information including the model type and context window configuration. This endpoint is an oMLX extension and is not part of the OpenAI API spec.
The response is an array of model status objects.
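A status query looks much like the list call. In this sketch the base URL is again an assumption, and the field names `model_type` and `context_length` inside each status object are illustrative guesses rather than the documented schema; check an actual response for the exact keys.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed local address

def summarize_status(entry: dict) -> str:
    """Render one status entry as a short line.

    The keys 'model_type' and 'context_length' are assumptions
    used for illustration only.
    """
    return "{id}: type={model_type}, ctx={context_length}".format(**entry)

def model_statuses(base_url: str = BASE_URL) -> list:
    """GET the oMLX-specific /v1/models/status endpoint."""
    with urllib.request.urlopen(base_url + "/v1/models/status") as resp:
        return json.load(resp)
```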
Health check
GET /health
Returns a simple health check response. Useful for monitoring and readiness checks in scripts or container orchestration.
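For readiness checks in scripts, a small polling helper is usually enough. This is a sketch assuming the server answers GET /health with HTTP 200 when ready; the base URL is an assumption.

```python
import time
import urllib.error
import urllib.request

def is_healthy(base_url: str = "http://localhost:8080") -> bool:
    """Return True if GET /health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout: treat as not ready.
        return False

def wait_until_ready(base_url: str, attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll /health until it succeeds or the attempts run out."""
    for _ in range(attempts):
        if is_healthy(base_url):
            return True
        time.sleep(delay)
    return False
```

In container orchestration the same endpoint can back a readiness probe directly, without any wrapper script.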
Load and unload models
Two additional endpoints let you control model loading state programmatically:
POST /v1/models/{model_id}/load — Load a model into memory.
POST /v1/models/{model_id}/unload — Unload a model from memory.
These are equivalent to using the status badges in the admin panel. The model_id path parameter accepts the model alias or directory name.