
The Web Gateway provides a rich browser interface for chatting with your IronClaw agent, browsing workspace files, managing jobs, and monitoring system status.

Features

  • Real-time chat - WebSocket or SSE streaming
  • Workspace browser - View and search memory/daily files
  • Job management - Launch and monitor sandbox jobs
  • Extension management - Install, configure, and authenticate extensions
  • Skill browser - Search and install skills
  • Log viewer - Real-time log streaming with level control
  • OpenAI-compatible API - Use IronClaw as a drop-in replacement for the OpenAI API
  • Cost tracking - Token usage and cost estimates
  • Session history - Browse past conversations

Configuration

Set these via environment variables or a .env file:
# Gateway host and port
GATEWAY_HOST=0.0.0.0
GATEWAY_PORT=3000

# Optional: Authentication token (auto-generated if not set)
GATEWAY_AUTH_TOKEN=your-secret-token

# Optional: Fixed user ID for all gateway sessions
GATEWAY_USER_ID=default

Configuration Options

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| GATEWAY_HOST | string | "0.0.0.0" | Bind address |
| GATEWAY_PORT | integer | 3000 | HTTP port |
| GATEWAY_AUTH_TOKEN | string | auto-generated | Authentication token for API access |
| GATEWAY_USER_ID | string | "default" | Fixed user ID for all sessions |

Quick Start

1. Start IronClaw

ironclaw run
The gateway starts automatically on http://localhost:3000.

2. Get Auth Token

If no token is configured, IronClaw generates one and prints it:
[INFO] Web Gateway started at http://0.0.0.0:3000
[INFO] Auth token: abc123xyz456

3. Open Browser

Visit http://localhost:3000 and enter the auth token when prompted.

API Endpoints

Chat

Send Message

POST /api/chat/send
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "message": "Hello, agent!",
  "thread_id": "optional-thread-id"
}
Response:
{
  "status": "ok",
  "message_id": "550e8400-e29b-41d4-a716-446655440000"
}
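As a sketch, sending a message from Python needs only the standard library. The helper names here (`build_send_payload`, `send_message`) are illustrative, not part of the gateway:

```python
import json
import urllib.request

def build_send_payload(message, thread_id=None):
    """Build the JSON body for POST /api/chat/send; thread_id is optional."""
    payload = {"message": message}
    if thread_id is not None:
        payload["thread_id"] = thread_id
    return payload

def send_message(base_url, token, message, thread_id=None):
    """POST a chat message to the gateway and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/chat/send",
        data=json.dumps(build_send_payload(message, thread_id)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```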

Event Stream (SSE)

GET /api/chat/events
Authorization: Bearer YOUR_TOKEN
Response (Server-Sent Events):
event: thinking
data: {"message": "Processing your request...", "thread_id": "thread-123"}

event: tool_started
data: {"name": "FileRead", "thread_id": "thread-123"}

event: tool_completed
data: {"name": "FileRead", "success": true, "thread_id": "thread-123"}

event: response
data: {"content": "Here's the answer...", "thread_id": "thread-123"}
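A minimal consumer of this stream can be sketched in Python. `parse_sse` is a hypothetical helper that buffers `data:` lines until the blank line that terminates each event:

```python
def parse_sse(lines):
    """Yield (event, data) pairs from an iterable of SSE lines.

    Minimal parser: tracks the latest `event:` field, buffers `data:`
    lines, and dispatches on the blank line that ends each event.
    The default event type in SSE is "message".
    """
    event, data = "message", []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            yield event, "\n".join(data)
            event, data = "message", []
```

In practice you would feed it the decoded line iterator of a streaming HTTP response to `/api/chat/events`.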

WebSocket

const ws = new WebSocket('ws://localhost:3000/api/chat/ws?token=YOUR_TOKEN');

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log(data);
};

ws.send(JSON.stringify({
  message: "Hello, agent!",
  thread_id: "thread-123"
}));

Memory/Workspace

List Files

GET /api/memory/list?path=daily
Authorization: Bearer YOUR_TOKEN
Response:
{
  "files": [
    {
      "name": "2026-03-03.md",
      "path": "daily/2026-03-03.md",
      "size": 1024,
      "modified": "2026-03-03T10:30:00Z"
    }
  ],
  "directories": ["memory"]
}

Read File

GET /api/memory/read?path=daily/2026-03-03.md
Authorization: Bearer YOUR_TOKEN
Response:
{
  "content": "# Daily Note\n\nToday I worked on...",
  "path": "daily/2026-03-03.md"
}

Search Files

GET /api/memory/search?query=TODO
Authorization: Bearer YOUR_TOKEN
Response:
{
  "results": [
    {
      "path": "daily/2026-03-03.md",
      "line": 15,
      "snippet": "TODO: Fix the bug in the parser"
    }
  ]
}
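The three memory endpoints above share the same shape (GET plus query parameters), so a small Python wrapper covers all of them. `memory_url` and `memory_get` are illustrative names, not part of the gateway:

```python
import json
import urllib.parse
import urllib.request

def memory_url(base_url, endpoint, **params):
    """Build a /api/memory/* URL (list, read, search) with query parameters."""
    query = urllib.parse.urlencode(params)
    return f"{base_url}/api/memory/{endpoint}?{query}"

def memory_get(base_url, token, endpoint, **params):
    """GET a memory endpoint and return the parsed JSON body."""
    req = urllib.request.Request(
        memory_url(base_url, endpoint, **params),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example calls against a running gateway:
# memory_get("http://localhost:3000", token, "list", path="daily")
# memory_get("http://localhost:3000", token, "read", path="daily/2026-03-03.md")
# memory_get("http://localhost:3000", token, "search", query="TODO")
```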

Jobs

List Jobs

GET /api/jobs/list
Authorization: Bearer YOUR_TOKEN
Response:
{
  "jobs": [
    {
      "id": "job-123",
      "title": "Run tests",
      "status": "running",
      "created_at": "2026-03-03T10:00:00Z",
      "browse_url": "http://localhost:8081"
    }
  ]
}

Get Job Details

GET /api/jobs/:id
Authorization: Bearer YOUR_TOKEN
Response:
{
  "id": "job-123",
  "title": "Run tests",
  "status": "running",
  "logs": "Running pytest...\nCollecting tests...",
  "created_at": "2026-03-03T10:00:00Z",
  "browse_url": "http://localhost:8081"
}

Stop Job

POST /api/jobs/:id/stop
Authorization: Bearer YOUR_TOKEN
Response:
{
  "status": "stopped"
}
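A common pattern is polling GET /api/jobs/:id until the job finishes. The sketch below injects the fetch callable so the loop itself needs no live gateway; the set of terminal statuses is an assumption, not documented gateway behavior:

```python
import time

# Assumed terminal job states; adjust to match your gateway's actual statuses.
TERMINAL_STATUSES = {"completed", "failed", "stopped"}

def wait_for_job(fetch_job, poll_seconds=2.0, timeout_seconds=600.0, sleep=time.sleep):
    """Poll fetch_job() until its status is terminal or the timeout elapses.

    fetch_job is any callable returning a job dict shaped like the
    GET /api/jobs/:id response above.
    """
    waited = 0.0
    while True:
        job = fetch_job()
        if job["status"] in TERMINAL_STATUSES:
            return job
        if waited >= timeout_seconds:
            raise TimeoutError(f"job {job['id']} still {job['status']}")
        sleep(poll_seconds)
        waited += poll_seconds
```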

Extensions

List Installed Extensions

GET /api/extensions/list
Authorization: Bearer YOUR_TOKEN
Response:
{
  "extensions": [
    {
      "name": "github",
      "version": "1.0.0",
      "enabled": true,
      "authenticated": true
    }
  ]
}

Install Extension

POST /api/extensions/install
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "name": "github",
  "registry": "builtin"
}

Authenticate Extension

POST /api/extensions/:name/auth
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "token": "ghp_abc123xyz456"
}

Skills

Search Skills

GET /api/skills/search?query=python
Authorization: Bearer YOUR_TOKEN
Response:
{
  "skills": [
    {
      "name": "python-debugger",
      "description": "Debug Python code",
      "version": "1.0.0",
      "installed": false
    }
  ]
}

Install Skill

POST /api/skills/install
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "name": "python-debugger"
}

Logs

Stream Logs (SSE)

GET /api/logs/stream
Authorization: Bearer YOUR_TOKEN
Response (Server-Sent Events):
data: {"level": "INFO", "message": "Agent started", "timestamp": "2026-03-03T10:00:00Z"}
data: {"level": "DEBUG", "message": "Processing message", "timestamp": "2026-03-03T10:00:01Z"}

Set Log Level

POST /api/logs/level
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "level": "debug"
}
Levels: trace, debug, info, warn, error

Status

System Status

GET /api/status
Authorization: Bearer YOUR_TOKEN
Response:
{
  "status": "healthy",
  "uptime_seconds": 3600,
  "active_sessions": 3,
  "active_jobs": 1,
  "token_usage": {
    "total_tokens": 150000,
    "estimated_cost_usd": 0.75
  },
  "channels": [
    {"name": "gateway", "status": "connected"},
    {"name": "telegram", "status": "connected"}
  ]
}
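For dashboards or health checks it can help to condense this payload into one line. `summarize_status` is a hypothetical helper built around the fields shown in the sample response above:

```python
def summarize_status(status):
    """Condense a GET /api/status payload into a one-line summary string."""
    usage = status.get("token_usage", {})
    channels = [c["name"] for c in status.get("channels", [])
                if c.get("status") == "connected"]
    return (
        f"{status['status']}: {status['active_sessions']} sessions, "
        f"{status['active_jobs']} jobs, "
        f"{usage.get('total_tokens', 0)} tokens "
        f"(~${usage.get('estimated_cost_usd', 0):.2f}), "
        f"channels: {', '.join(channels)}"
    )
```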

OpenAI-Compatible API

The gateway exposes an OpenAI-compatible endpoint, so existing OpenAI clients can point at it unchanged:

Chat Completions

POST /v1/chat/completions
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "model": "gpt-4",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false
}
Response:
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 12,
    "total_tokens": 22
  }
}

Streaming

POST /v1/chat/completions
Authorization: Bearer YOUR_TOKEN
Content-Type: application/json

{
  "model": "gpt-4",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": true
}
Response (SSE):
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","choices":[{"delta":{"content":"!"}}]}
data: [DONE]
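If you consume this stream by hand rather than through an SDK, the deltas need to be reassembled. A sketch in Python, where `assemble_stream` is an illustrative helper that takes the JSON strings following each `data:` prefix:

```python
import json

def assemble_stream(data_lines):
    """Reassemble assistant text from OpenAI-style streaming chunk payloads.

    Accepts the JSON strings after each `data:` prefix and stops at the
    [DONE] sentinel that ends the stream.
    """
    parts = []
    for payload in data_lines:
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```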

Use with OpenAI SDK

# Requires the openai Python package, v1.0 or later
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_TOKEN",
    base_url="http://localhost:3000/v1",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)

Authentication

All API endpoints require a bearer token:
Authorization: Bearer YOUR_TOKEN
Or as a query parameter:
GET /api/status?token=YOUR_TOKEN
WebSocket connections use the query parameter:
const ws = new WebSocket('ws://localhost:3000/api/chat/ws?token=YOUR_TOKEN');
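Both auth styles are easy to build programmatically. A Python sketch (`auth_headers` and `with_token` are illustrative helper names):

```python
import urllib.parse

def auth_headers(token):
    """Bearer-token header for regular API requests."""
    return {"Authorization": f"Bearer {token}"}

def with_token(url, token):
    """Append ?token=... for clients (WebSocket, EventSource) that
    cannot send an Authorization header."""
    sep = "&" if urllib.parse.urlparse(url).query else "?"
    return f"{url}{sep}token={urllib.parse.quote(token)}"
```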

Rate Limiting

The gateway enforces rate limits:
  • Chat messages: 30 per minute
  • API requests: 100 per minute
Exceeding limits returns HTTP 429 (Too Many Requests).
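The usual client-side response to 429 is exponential backoff with jitter. A minimal sketch of the delay calculation (full jitter, capped):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0, rng=random.random):
    """Full-jitter exponential backoff.

    Returns a random delay in [0, min(cap, base * 2**attempt)] seconds,
    suitable for retrying after HTTP 429 from the gateway.
    """
    return rng() * min(cap, base * (2 ** attempt))
```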

Security

Auto-Generated Tokens

If no GATEWAY_AUTH_TOKEN is set, IronClaw generates a random 32-character token on startup:
use rand::Rng;

let token: String = rand::thread_rng()
    .sample_iter(&rand::distributions::Alphanumeric)
    .take(32)
    .map(char::from)
    .collect();
This ensures secure defaults even if you forget to set a token. For production use, deploy behind a reverse proxy with TLS:
server {
    listen 443 ssl;
    server_name ironclaw.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

Source Code

  • Implementation: ~/workspace/source/src/channels/web/mod.rs
  • Server: ~/workspace/source/src/channels/web/server.rs
  • SSE: ~/workspace/source/src/channels/web/sse.rs
  • WebSocket: ~/workspace/source/src/channels/web/ws.rs
  • OpenAI API: ~/workspace/source/src/channels/web/openai_compat.rs

Troubleshooting

Cannot connect to gateway

  • Verify GATEWAY_PORT is correct
  • Check firewall rules allow port 3000
  • Ensure IronClaw is running (ironclaw run)
  • Check logs for "Web Gateway started at…"

401 Unauthorized

  • Verify auth token matches the one printed at startup
  • Check Authorization: Bearer header format
  • For WebSocket, use ?token= query parameter

WebSocket disconnects

  • Check browser console for errors
  • Verify reverse proxy (if any) supports WebSocket upgrades
  • Ensure connection isn’t timing out (send periodic pings)

SSE not receiving events

  • Verify Accept: text/event-stream header
  • Check browser EventSource API support
  • Ensure reverse proxy doesn’t buffer SSE responses
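With nginx in front of the gateway, one common fix is to disable buffering for the API routes. This is a sketch in the same style as the TLS config above, not a verified gateway requirement:
location /api/ {
    proxy_pass http://localhost:3000;
    proxy_buffering off;       # deliver SSE chunks to the client immediately
    proxy_cache off;
    proxy_read_timeout 1h;     # keep long-lived event streams open
}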

429 Too Many Requests

  • Reduce request frequency
  • Implement exponential backoff
  • Check rate limiter settings in source

Example Integration

React Chat Component

import { useEffect, useState } from 'react';

function Chat() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');

  useEffect(() => {
    const evtSource = new EventSource(
      'http://localhost:3000/api/chat/events?token=YOUR_TOKEN'
    );

    evtSource.addEventListener('response', (e) => {
      const data = JSON.parse(e.data);
      setMessages((prev) => [...prev, { role: 'assistant', content: data.content }]);
    });

    return () => evtSource.close();
  }, []);

  const sendMessage = async () => {
    setMessages((prev) => [...prev, { role: 'user', content: input }]);
    await fetch('http://localhost:3000/api/chat/send', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ message: input })
    });
    setInput('');
  };

  return (
    <div>
      <div>
        {messages.map((msg, i) => (
          <div key={i}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
      </div>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={sendMessage}>Send</button>
    </div>
  );
}
