

Configuration Overview

IronClaw’s configuration is loaded with the following priority:
Environment Variables > TOML Config File > Database Settings > Defaults
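For example, if the same setting appears in more than one place, the environment variable wins. The TOML keys below are illustrative, not the exact schema:

```shell
# ~/.ironclaw/config.toml (lower priority) might contain:
#   [llm]
#   model = "zai-org/GLM-latest"

# An environment variable set in the shell takes precedence:
export NEARAI_MODEL="anthropic::claude-sonnet-4-20250514"
```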

Database Configuration

IronClaw supports PostgreSQL and libSQL backends.

PostgreSQL

DATABASE_BACKEND
string
default:"postgres"
Database backend to use (postgres or libsql)
DATABASE_URL
string
required
PostgreSQL connection URL
DATABASE_URL=postgres://user:password@host:port/database
Examples:
  • Local: postgres://localhost/ironclaw
  • Neon: postgres://user:pass@ep-cool-darkness-123456.us-east-2.aws.neon.tech/ironclaw?sslmode=require
  • Supabase: postgres://postgres:pass@db.projectid.supabase.co:5432/postgres
DATABASE_POOL_SIZE
integer
default:"10"
Maximum number of database connections in the pool
DATABASE_SSLMODE
string
default:"prefer"
SSL/TLS mode for PostgreSQL connections
  • disable — No TLS (local development only)
  • prefer — Try TLS, fall back to plaintext (default)
  • require — Require TLS or fail
PostgreSQL 15+ and pgvector are required. See Installation → PostgreSQL Setup.

libSQL (Embedded SQLite)

LIBSQL_PATH
string
default:"~/.ironclaw/ironclaw.db"
Path to the local libSQL database file
LIBSQL_URL
string
Turso cloud URL for remote replica sync (optional)
LIBSQL_URL=libsql://your-db.turso.io
LIBSQL_AUTH_TOKEN
string
Turso authentication token (required if LIBSQL_URL is set)
libSQL is well suited to local development and single-user deployments; no external database server is required.
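A sketch of a libSQL setup, first purely local and then with optional Turso sync (the URL and token are placeholders):

```shell
# Purely local — a single file, no server
LIBSQL_PATH=~/.ironclaw/ironclaw.db

# Optional: sync to a Turso remote replica
LIBSQL_URL=libsql://your-db.turso.io
LIBSQL_AUTH_TOKEN=your-turso-token
```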

LLM Provider Configuration

NEAR AI (Default)

NEAR AI provides multi-model access via a single account.
LLM_BACKEND
string
default:"nearai"
Set to nearai to use NEAR AI
NEARAI_MODEL
string
default:"zai-org/GLM-latest"
Model to use for inference. Popular options:
  • zai-org/GLM-latest — Fast, default
  • anthropic::claude-sonnet-4-20250514 — Best quality
  • openai::gpt-5.3-codex — Flagship coding model
NEARAI_BASE_URL
string
default:"https://private.near.ai"
NEAR AI API base URL
NEARAI_AUTH_URL
string
default:"https://private.near.ai"
NEAR AI authentication URL
NEARAI_SESSION_TOKEN
string
Session token from browser OAuth (auto-generated by the setup wizard). Stored in ~/.ironclaw/session.json by default.
NEARAI_API_KEY
string
API key from cloud.near.ai (alternative to a session token). If set, the base URL defaults to https://cloud-api.near.ai.
The setup wizard handles NEAR AI authentication automatically via browser OAuth.

Anthropic (Claude)

LLM_BACKEND
string
Set to anthropic to use Anthropic’s API
ANTHROPIC_API_KEY
string
required
Your Anthropic API key. Get one from: https://console.anthropic.com/settings/keys
ANTHROPIC_MODEL
string
default:"claude-sonnet-4-20250514"
Claude model to use. Options:
  • claude-sonnet-4-20250514 — Latest Sonnet (recommended)
  • claude-opus-4-20250514 — Most capable
  • claude-3-5-sonnet-20241022 — Previous generation
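Putting these together, a minimal Anthropic setup might look like this (the key is a placeholder):

```shell
LLM_BACKEND=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-sonnet-4-20250514
```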

OpenAI

LLM_BACKEND
string
Set to openai to use OpenAI’s API
OPENAI_API_KEY
string
required
Your OpenAI API key. Get one from: https://platform.openai.com/api-keys
OPENAI_MODEL
string
default:"gpt-4"
OpenAI model to use. Options:
  • gpt-5.3-codex — Latest flagship
  • gpt-5.2 — General purpose
  • gpt-4o — Multimodal
  • gpt-4-turbo — Fast and capable

Ollama (Local Models)

LLM_BACKEND
string
Set to ollama to use local Ollama models
OLLAMA_BASE_URL
string
default:"http://localhost:11434"
Ollama server URL
OLLAMA_MODEL
string
default:"llama3.2"
Ollama model to use. The model must be pulled first: ollama pull llama3.2
Ollama runs models locally on your machine. No API key required. See ollama.ai for installation.
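A minimal Ollama setup, assuming the model has already been pulled and the default server port:

```shell
LLM_BACKEND=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
```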

OpenRouter (200+ Models)

LLM_BACKEND
string
Set to openai_compatible for OpenRouter
LLM_BASE_URL
string
Set to https://openrouter.ai/api/v1
LLM_API_KEY
string
required
Your OpenRouter API key. Get one from: https://openrouter.ai/settings/keys
LLM_MODEL
string
Model ID from OpenRouter. Examples:
  • anthropic/claude-sonnet-4
  • openai/gpt-5.3-codex
  • google/gemini-pro-1.5
  • meta-llama/llama-3.3-70b-instruct
See openrouter.ai/models for the full list.
LLM_EXTRA_HEADERS
string
Custom HTTP headers (comma-separated key:value pairs)
LLM_EXTRA_HEADERS=HTTP-Referer:https://myapp.com,X-Title:MyApp
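Combined, a working OpenRouter sketch looks like this (the key and model are placeholders):

```shell
LLM_BACKEND=openai_compatible
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_API_KEY=sk-or-...
LLM_MODEL=anthropic/claude-sonnet-4
```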

OpenAI-Compatible (vLLM, LiteLLM, Together, Fireworks)

LLM_BACKEND
string
Set to openai_compatible
LLM_BASE_URL
string
required
Base URL of the OpenAI-compatible endpoint. Examples:
  • vLLM: http://localhost:8000/v1
  • LM Studio: http://localhost:1234/v1
  • Together AI: https://api.together.xyz/v1
  • Fireworks AI: https://api.fireworks.ai/inference/v1
LLM_API_KEY
string
API key (optional for local servers)
LLM_MODEL
string
required
Model identifier. Examples:
  • meta-llama/Llama-3.3-70B-Instruct-Turbo (Together AI)
  • accounts/fireworks/models/llama4-maverick-instruct-basic (Fireworks)
  • llama-3.2-3b-instruct-q4_K_M (LM Studio)
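For a local vLLM server, for example, no API key is needed; the model ID must match whatever the server is actually serving:

```shell
LLM_BACKEND=openai_compatible
LLM_BASE_URL=http://localhost:8000/v1
LLM_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```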
For reference, the NEAR AI defaults correspond to:
LLM_BACKEND=nearai
NEARAI_MODEL=zai-org/GLM-latest
NEARAI_BASE_URL=https://private.near.ai

Embeddings Configuration

Embeddings enable semantic search across your workspace.
EMBEDDINGS_ENABLED
boolean
default:"true"
Enable vector embeddings for semantic search
EMBEDDINGS_PROVIDER
string
default:"nearai"
Embeddings provider (nearai, openai, or same as LLM_BACKEND)
EMBEDDINGS_MODEL
string
Model to use for embeddings
  • NEAR AI: text-embedding-3-small (default)
  • OpenAI: text-embedding-3-small, text-embedding-3-large
Embeddings require PostgreSQL with pgvector or libSQL with vector support.
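Spelled out explicitly, the default embeddings configuration is equivalent to:

```shell
EMBEDDINGS_ENABLED=true
EMBEDDINGS_PROVIDER=nearai
EMBEDDINGS_MODEL=text-embedding-3-small
```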

Secrets Configuration

Secrets (API keys, tokens) are encrypted with a master key.
SECRETS_MASTER_KEY
string
Master encryption key (256-bit hex string). Generated by the setup wizard and stored in the OS keychain or environment.
Keep your master key secure! If lost, encrypted secrets cannot be recovered.
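If you need to set the key manually (the wizard normally generates it for you), a standard way to produce a 256-bit key as 64 hex characters is:

```shell
# 32 random bytes, printed as 64 hex characters
openssl rand -hex 32
```

Store the output in SECRETS_MASTER_KEY or your OS keychain.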

Agent Configuration

AGENT_NAME
string
default:"ironclaw"
Agent display name
AGENT_MAX_PARALLEL_JOBS
integer
default:"5"
Maximum number of concurrent jobs
AGENT_JOB_TIMEOUT_SECS
integer
default:"3600"
Job timeout in seconds (default: 1 hour)
AGENT_STUCK_THRESHOLD_SECS
integer
default:"300"
Time before marking a job as stuck (default: 5 minutes)
AGENT_USE_PLANNING
boolean
default:"true"
Enable a planning phase before tool execution. When enabled, the agent plans its approach before executing tools, improving accuracy.

Channel Configuration

HTTP Webhook Server

HTTP_HOST
string
default:"0.0.0.0"
Host to bind the HTTP server to
HTTP_PORT
integer
default:"8080"
Port for the HTTP webhook server
HTTP_WEBHOOK_SECRET
string
Secret for authenticating webhook requests

Telegram Bot

TELEGRAM_BOT_TOKEN
string
Telegram bot token from @BotFather
After setting the token, send /start to your bot in Telegram to pair.

Signal Messaging

SIGNAL_HTTP_URL
string
default:"http://127.0.0.1:8080"
signal-cli daemon HTTP URL
SIGNAL_ACCOUNT
string
Your Signal phone number (e.g., +1234567890)
SIGNAL_ALLOW_FROM
string
Comma-separated list of allowed senders (* for all)
SIGNAL_DM_POLICY
string
default:"pairing"
DM policy: open, allowlist, or pairing
Signal requires a running signal-cli daemon. See Signal Setup.
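An allowlist-based Signal sketch (the phone numbers are placeholders):

```shell
SIGNAL_HTTP_URL=http://127.0.0.1:8080
SIGNAL_ACCOUNT=+1234567890
SIGNAL_DM_POLICY=allowlist
SIGNAL_ALLOW_FROM=+1987654321,+15551234567
```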

Slack Bot (WASM Channel)

SLACK_BOT_TOKEN
string
Slack bot token (xoxb-...)
SLACK_APP_TOKEN
string
Slack app-level token (xapp-...)
SLACK_SIGNING_SECRET
string
Slack request signing secret

Safety Configuration

SAFETY_MAX_OUTPUT_LENGTH
integer
default:"100000"
Maximum tool output length (characters)
SAFETY_INJECTION_CHECK_ENABLED
boolean
default:"true"
Enable prompt injection detection

Heartbeat Configuration

The heartbeat system runs background tasks on a schedule.
HEARTBEAT_ENABLED
boolean
default:"false"
Enable background heartbeat tasks
HEARTBEAT_INTERVAL_SECS
integer
default:"1800"
Heartbeat interval in seconds (default: 30 minutes)
HEARTBEAT_NOTIFY_CHANNEL
string
default:"cli"
Channel to send heartbeat notifications to
HEARTBEAT_NOTIFY_USER
string
default:"default"
User ID to notify
Heartbeat reads HEARTBEAT.md from your workspace and reports findings on the schedule.
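For example, to check in every 15 minutes and notify over Telegram (assuming the Telegram channel is configured):

```shell
HEARTBEAT_ENABLED=true
HEARTBEAT_INTERVAL_SECS=900
HEARTBEAT_NOTIFY_CHANNEL=telegram
```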

Memory Hygiene Configuration

Automatic cleanup of stale workspace documents.
MEMORY_HYGIENE_ENABLED
boolean
default:"true"
Enable automatic cleanup of old daily notes
MEMORY_HYGIENE_RETENTION_DAYS
integer
default:"30"
Delete daily/ documents older than this many days
MEMORY_HYGIENE_CADENCE_HOURS
integer
default:"12"
Minimum hours between cleanup passes
Identity files (IDENTITY.md, SOUL.md) are never deleted.

Docker Sandbox Configuration

SANDBOX_MODE
string
default:"disabled"
Docker sandbox mode: disabled or enabled
SANDBOX_IMAGE
string
default:"ironclaw-sandbox:latest"
Docker image for sandbox execution
SANDBOX_TIMEOUT_SECS
integer
default:"300"
Sandbox execution timeout (default: 5 minutes)

Logging Configuration

RUST_LOG
string
default:"ironclaw=info"
Logging level and filters. Examples:
# Debug everything
RUST_LOG=ironclaw=debug

# Debug specific modules
RUST_LOG=ironclaw::agent=debug,ironclaw::tools=trace

# JSON output
RUST_LOG=ironclaw=info,json

Configuration Files Reference

~/.ironclaw/.env

Bootstrap environment variables (database URL, LLM backend). Written by the setup wizard and loaded on startup.

~/.ironclaw/config.toml

Structured TOML configuration (optional). Overrides database settings; overridden by environment variables.

~/.ironclaw/session.json

NEAR AI session token (auto-generated). Created during the OAuth flow. Do not edit manually.

Database settings table

Persistent settings stored in the database. Lowest priority; managed via the wizard or API.

Next Steps

Tools & Extensions

Explore built-in tools and install extensions

Channels

Configure Telegram, HTTP webhooks, and more

CLI Reference

Explore all available commands

Development

Build custom tools and contribute to IronClaw
