Max exposes a local HTTP API that the TUI and any custom scripts can use to send messages, query state, and control the daemon. The API runs on http://127.0.0.1:7777 by default.
The API binds to 127.0.0.1 only and is never exposed to the network. It is intended for local use by the TUI and scripts running on the same machine.

Base URL

http://127.0.0.1:7777
To use a different port, set API_PORT in ~/.max/.env and restart the daemon.

Authentication

All endpoints except GET /status require a Bearer token. The token is generated automatically on first run and stored at ~/.max/api-token with mode 0600. Read the token:
cat ~/.max/api-token
Set it as an environment variable for use with curl:
export MAX_TOKEN=$(cat ~/.max/api-token)
Authorization header format:
Authorization: Bearer <token>
Requests without a valid token receive a 401 Unauthorized response:
{ "error": "Unauthorized" }
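In a script, the token file can be read once and reused for every request. A minimal Python sketch; the helper name and the module-level default are illustrative, not part of Max:

```python
from pathlib import Path

# Default token location from the docs above; adjust if yours differs.
TOKEN_PATH = Path.home() / ".max" / "api-token"

def auth_headers(token_path: Path = TOKEN_PATH) -> dict:
    """Build the Authorization header from the stored API token."""
    token = token_path.read_text().strip()
    return {"Authorization": f"Bearer {token}"}
```

Pass the returned dict as headers to whatever HTTP client the script uses.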

Endpoints

GET /status

Health check. Returns the daemon status and a snapshot of active workers. No authentication required.
curl http://127.0.0.1:7777/status
Response
status (string, required): Always "ok" when the daemon is running.
workers (object[]): List of active worker sessions.
{
  "status": "ok",
  "workers": [
    {
      "name": "auth-fix",
      "workingDir": "/home/user/dev/myapp",
      "status": "running"
    }
  ]
}

GET /stream

Open a Server-Sent Events (SSE) connection to receive real-time streaming responses. You must obtain a connectionId from this endpoint before calling POST /message.
curl -N \
  -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/stream
The connection sends a heartbeat comment (:ping) every 20 seconds to keep it alive.
Initial event (on connect):
data: {"type":"connected","connectionId":"tui-1"}
Response field
connectionId (string): Opaque identifier for this SSE connection. Pass this value as connectionId when calling POST /message.
Event types received over the stream:
connected: Sent once on connect. Contains connectionId.
delta: Partial streamed content. content is the full accumulated text so far.
message: Final response. content is the complete text. May include a route field if auto-routing is enabled.
cancelled: Sent when a message is cancelled via POST /cancel.
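For scripts that consume the stream without an SSE library, the wire format is simple enough to parse by hand. A minimal Python sketch (parse_sse_events is an illustrative helper, not part of Max):

```python
import json

def parse_sse_events(raw: str):
    """Yield decoded JSON payloads from raw SSE text, skipping :ping comments."""
    for line in raw.splitlines():
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])
```

Because each delta event carries the full accumulated text rather than an increment, a renderer only needs the most recent delta, then the final message event.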

POST /message

Send a message to the orchestrator. Requires an active SSE connection — responses stream back over /stream.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What sessions are running?", "connectionId": "tui-1"}' \
  http://127.0.0.1:7777/message
Request body
prompt (string, required): The message to send to the orchestrator.
connectionId (string, required): The connectionId obtained from GET /stream. The response will be streamed back to that connection.
Response
status (string): Always "queued" when accepted: the message will be processed and the actual response arrives over the SSE stream.
{ "status": "queued" }
If connectionId is missing or does not correspond to an active /stream connection, the request returns 400 Bad Request.

GET /sessions

List all active worker sessions with their current status and recent output.
curl -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/sessions
Response: An array of worker session objects.
name (string): Unique name of the worker session.
workingDir (string): Filesystem path the worker is operating in.
status (string): Current status, either "idle" or "running".
lastOutput (string): Up to 500 characters of the most recent output from the worker.
[
  {
    "name": "auth-fix",
    "workingDir": "/home/user/dev/myapp",
    "status": "running",
    "lastOutput": "Analyzing authentication flow..."
  }
]

GET /memory

Retrieve all stored memories. Memories are long-term facts, preferences, and context that Max carries across conversations.
curl -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/memory
Returns up to 100 memories across all categories.
Response: An array of memory objects.
id (number): Unique memory ID.
category (string): Memory category, one of "preference", "fact", "project", "person", or "routine".
content (string): The memory content.
[
  { "id": 1, "category": "preference", "content": "Prefers TypeScript over JavaScript" },
  { "id": 2, "category": "project",    "content": "Working on auth refactor in ~/dev/myapp" }
]
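A script consuming this endpoint might group the flat array by category before display. A small sketch using the sample payload above (the helper is illustrative):

```python
from collections import defaultdict

def group_memories(memories: list[dict]) -> dict[str, list[str]]:
    """Group memory contents by their category field."""
    grouped = defaultdict(list)
    for m in memories:
        grouped[m["category"]].append(m["content"])
    return dict(grouped)
```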

GET /skills

List all installed skills. Skills are loaded from three directories: the package bundled skills, ~/.max/skills/ (local), and ~/.agents/skills/ (global).
curl -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/skills
Response: An array of skill objects.
slug (string): URL-safe identifier for the skill. Used with DELETE /skills/:slug.
name (string): Human-readable skill name.
description (string): Short description of what the skill provides.
source (string): Where the skill was loaded from, one of "bundled", "local", or "global".
directory (string): Filesystem path to the skill directory.
[
  {
    "slug": "gogcli",
    "name": "Google (gogcli)",
    "description": "Access Gmail, Calendar, Drive via gog CLI",
    "source": "bundled",
    "directory": "/usr/local/lib/node_modules/heymax/skills/gogcli"
  },
  {
    "slug": "my-skill",
    "name": "My Custom Skill",
    "description": "A locally installed skill",
    "source": "local",
    "directory": "/home/user/.max/skills/my-skill"
  }
]
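Since only skills with source: "local" can be deleted (see DELETE /skills/:slug), a cleanup script would filter the list first. An illustrative sketch:

```python
def removable_slugs(skills: list[dict]) -> list[str]:
    """Slugs of skills that DELETE /skills/:slug can actually remove."""
    return [s["slug"] for s in skills if s["source"] == "local"]
```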

DELETE /skills/:slug

Remove a locally installed skill. Only skills with source: "local" (installed in ~/.max/skills/) can be removed. Bundled and global skills cannot be deleted via this endpoint.
curl -X DELETE \
  -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/skills/my-skill
Path parameter
slug (string, required): The slug of the skill to remove. URL-encode slugs that contain special characters.
Success response
ok (boolean): true on success.
message (string): Confirmation message.
{ "ok": true, "message": "Skill 'my-skill' removed" }
Error response (400)
{ "error": "Skill 'my-skill' not found or cannot be removed" }

GET /model

Get the currently active Copilot model.
curl -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/model
Response
model (string): The model identifier currently in use (e.g. "claude-sonnet-4.6").
{ "model": "claude-sonnet-4.6" }

POST /model

Switch to a different Copilot model. The new model is validated against the list of models available from the Copilot CLI and persisted to ~/.max/.env.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4.1"}' \
  http://127.0.0.1:7777/model
Request body
model (string, required): The model identifier to switch to. Run copilot listModels to see available options.
Response
previous (string): The model that was active before the switch.
current (string): The newly active model.
{ "previous": "claude-sonnet-4.6", "current": "gpt-4.1" }
Error response (400)
{ "error": "Model 'gpt-99' not found. Did you mean: gpt-4.1, gpt-5.1?" }
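The daemon's suggestion logic isn't documented; fuzzy matching with the standard library's difflib.get_close_matches produces similar "did you mean" hints. A hedged sketch (model_error is an illustrative helper, not Max's implementation):

```python
from difflib import get_close_matches

def model_error(requested: str, available: list[str]) -> str:
    """Build a 400-style error message with close-match suggestions."""
    hints = get_close_matches(requested, available, n=2, cutoff=0.3)
    msg = f"Model '{requested}' not found."
    if hints:
        msg += " Did you mean: " + ", ".join(hints) + "?"
    return msg
```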

GET /auto

Get the current auto model routing configuration and the last routing decision.
curl -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/auto
Response
enabled (boolean): Whether auto-routing is currently active.
tierModels (object): Map of tier names ("fast", "standard", "premium") to model identifiers.
cooldownMessages (number): Number of messages before the router re-classifies intent after a switch.
currentModel (string): The model currently active.
lastRoute (object): The most recent routing decision, or null if none has occurred.
{
  "enabled": false,
  "tierModels": {
    "fast": "gpt-4.1",
    "standard": "claude-sonnet-4.6",
    "premium": "claude-opus-4.6"
  },
  "cooldownMessages": 2,
  "currentModel": "claude-sonnet-4.6",
  "lastRoute": {
    "model": "claude-sonnet-4.6",
    "routerMode": "auto",
    "tier": "standard"
  }
}

POST /auto

Update the auto model routing configuration.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}' \
  http://127.0.0.1:7777/auto
Request body: All fields are optional; only provided fields are updated.
enabled (boolean): Enable or disable auto model routing.
tierModels (object): Map of tier names to model identifiers. Partial updates are merged.
cooldownMessages (number): Number of messages before the router re-classifies after a model switch.
Response The updated routing configuration object (same shape as GET /auto).
{ "enabled": true, "tierModels": { ... }, "cooldownMessages": 3 }
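The partial-update semantics above can be sketched as a dict merge in which top-level fields replace their old values while tierModels is merged per tier. An illustrative sketch; the daemon's internals aren't specified:

```python
def apply_auto_update(config: dict, update: dict) -> dict:
    """Apply a partial POST /auto body: top-level fields replace, tierModels merges."""
    merged = {**config}
    for key, value in update.items():
        if key == "tierModels":
            # Merge per tier so omitted tiers keep their current models.
            merged["tierModels"] = {**config.get("tierModels", {}), **value}
        else:
            merged[key] = value
    return merged
```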

POST /cancel

Cancel the current in-flight message. A cancelled event is broadcast to all connected SSE clients.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/cancel
Response
status (string): Always "ok".
cancelled (boolean): true if there was an active message to cancel, false if the orchestrator was idle.
{ "status": "ok", "cancelled": true }

POST /restart

Restart the Max daemon. The current process spawns a replacement and exits. The API responds before the restart completes.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  http://127.0.0.1:7777/restart
Response
status (string): Always "restarting".
{ "status": "restarting" }
The orchestrator session ID is preserved across restarts. Max resumes its previous session automatically.

POST /send-photo

Send a photo to the configured Telegram user. Requires Telegram to be configured in ~/.max/.env.
curl -X POST \
  -H "Authorization: Bearer $MAX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"photo": "/tmp/screenshot.png", "caption": "Build output"}' \
  http://127.0.0.1:7777/send-photo
Request body
photo (string, required): Absolute filesystem path or URL to the image to send.
caption (string, optional): Caption text for the photo.
Response
status (string): "sent" on success.
{ "status": "sent" }
Error response (500)
{ "error": "Telegram bot not configured" }
