Documentation Index
Fetch the complete documentation index at: https://mintlify.com/jundot/omlx/llms.txt
Use this file to discover all available pages before exploring further.
omlx launch command handles the full setup-and-start workflow for external coding tools: it writes the tool’s config file, sets the correct API endpoint and key, and then replaces the current process with the tool binary. All you need is a running oMLX server and the tool installed on your system.
How omlx launch works
When you run `omlx launch <tool>`, oMLX:
- Verifies the oMLX server is reachable at the configured host and port.
- Fetches available models from `/v1/models`.
- If `--model` is not specified, presents an interactive model picker (arrow keys, Enter to confirm).
- Writes or updates the tool’s config file with the oMLX endpoint, API key, and selected model.
- Exec’s the tool binary, replacing the current process.
Codex
Codex integration writes `~/.codex/config.toml`, adding oMLX as a named model provider and setting it as the active model. If the selected model name contains a reasoning hint (`thinking`, `o1`, `o3`, or `r1`), `model_reasoning_effort = "high"` is added automatically.
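The resulting file might look like the following sketch; the exact keys are an assumption based on Codex’s provider-config format, and the model name and port are placeholders:

```toml
# ~/.codex/config.toml (illustrative sketch, not generated verbatim)
model = "qwen3-coder"            # the model you selected
model_provider = "omlx"
model_reasoning_effort = "high"  # only added for reasoning-hinted models

[model_providers.omlx]
name = "oMLX"
base_url = "http://localhost:8000/v1"
env_key = "OMLX_API_KEY"
```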
The `OMLX_API_KEY` environment variable is set to your oMLX API key (or `"omlx"` if no key is configured) before launching.
If Codex is not installed:
```bash
npm install -g @openai/codex
```
OpenClaw
OpenClaw integration writes `~/.openclaw/openclaw.json` with an `omlx` provider block and sets it as the default model. It also configures `~/.openclaw/exec-approvals.json` based on the tools profile you choose.
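A provider block along these lines is what gets written; every field name below is an assumption for illustration, not OpenClaw’s documented schema:

```json
{
  "providers": {
    "omlx": {
      "baseUrl": "http://localhost:8000/v1",
      "apiKey": "omlx"
    }
  },
  "defaultModel": "omlx/qwen3-coder"
}
```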
Tools profiles
The `--tools-profile` flag controls OpenClaw’s exec approval policy:
| Profile | Exec policy | Ask behavior |
|---|---|---|
| `minimal` | allowlist | Prompt on miss |
| `coding` | unrestricted | Off (default) |
| `messaging` | allowlist | Prompt on miss |
| `full` | unrestricted | Off |
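For an allowlist profile such as `minimal`, the generated `~/.pi`-style approvals file could plausibly look like this; the key names and allowlisted commands are hypothetical, chosen only to mirror the table above:

```json
{
  "execPolicy": "allowlist",
  "askOnMiss": true,
  "allowlist": ["ls", "cat", "git"]
}
```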
`omlx launch openclaw` runs the non-interactive onboarding step automatically before writing the config.
If OpenClaw is not installed:
```bash
npm install -g openclaw
```
Pi
Pi integration writes two files: `~/.pi/agent/models.json` (provider and model definition) and `~/.pi/agent/settings.json` (default provider and model selection). VLM models are configured with image input support automatically.
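A sketch of what the `models.json` entry might contain; the field names are assumptions, and the `"image"` input entry would only appear for a VLM model:

```json
{
  "providers": {
    "omlx": {
      "baseUrl": "http://localhost:8000/v1",
      "apiKey": "omlx",
      "models": [
        { "id": "qwen3-vl", "input": ["text", "image"] }
      ]
    }
  }
}
```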
If Pi is not installed:
```bash
npm install -g @mariozechner/pi-coding-agent
```
OpenCode
OpenCode integration writes a provider entry to `~/.config/opencode/opencode.json` using the `@ai-sdk/openai-compatible` npm package. The selected model is set as the default in the config, and VLM models are configured with image attachment support.
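The provider entry plausibly follows OpenCode’s custom-provider shape, along these lines; treat the exact structure and the model name as assumptions:

```json
{
  "provider": {
    "omlx": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:8000/v1",
        "apiKey": "omlx"
      },
      "models": {
        "qwen3-coder": {}
      }
    }
  },
  "model": "omlx/qwen3-coder"
}
```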
If OpenCode is not installed:
```bash
curl -fsSL https://opencode.ai/install | bash
```
Checking installed tools
Run `omlx launch list` to see every integration and whether the required binary is on your PATH.
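The PATH check behind such a list can be approximated with `shutil.which`; the tool-to-binary mapping below is an assumption for illustration:

```python
import shutil

# Assumed binary names for each integration (illustrative only).
TOOLS = {
    "codex": "codex",
    "openclaw": "openclaw",
    "pi": "pi",
    "opencode": "opencode",
}

def installed_tools() -> dict[str, bool]:
    # shutil.which returns the resolved path if the binary is on PATH,
    # or None otherwise; we reduce that to an installed/missing flag.
    return {name: shutil.which(binary) is not None
            for name, binary in TOOLS.items()}
```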
Admin dashboard alternative
All integrations are also accessible from the Integrations tab in the admin dashboard at `http://localhost:8000/admin`. The dashboard provides the same one-click setup without needing a terminal.