This guide takes you from zero to a live agent as fast as possible. By the end you’ll have the gateway running and an agent responding to messages.
1. Download the binary

Download the precompiled binary for your platform from the Releases page, then place it somewhere on your PATH; ~/.local/bin is a good choice on Linux and macOS.
curl -L https://github.com/avrilonline/Operator-OS/releases/latest/download/operator_Linux_x86_64.tar.gz \
  | tar -xz operator
mv operator ~/.local/bin/operator
chmod +x ~/.local/bin/operator
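If ~/.local/bin isn't already on your PATH, add it for the current shell session (append the same line to your ~/.bashrc or ~/.zshrc to make it permanent):

```shell
# Add ~/.local/bin to PATH for the current bash/zsh session.
# Put this line in ~/.bashrc or ~/.zshrc to persist it.
export PATH="$HOME/.local/bin:$PATH"
```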
Verify the installation:
operator version
2. Initialize your workspace

Run operator onboard to create your configuration file and workspace directory under ~/.operator/:
operator onboard
This creates:
  • ~/.operator/config.json — your agent configuration
  • ~/.operator/workspace/ — the agent’s sandboxed working directory
You’ll see output like:
operator is ready!

Next steps:
  1. Add your API key to ~/.operator/config.json

     Recommended:
     - OpenRouter: https://openrouter.ai/keys (access 100+ models)
     - Ollama:     https://ollama.com (local, free)

  2. Chat: operator agent -m "Hello!"
If you already have a config.json and want to reset to defaults, operator onboard will ask before overwriting.
3. Add your API key

Open ~/.operator/config.json and add your LLM provider credentials to the model_list array. The agents.defaults.model_name field controls which model the agent uses by default.
{
  "agents": {
    "defaults": {
      "workspace": "~/.operator/workspace",
      "restrict_to_workspace": true,
      "model_name": "claude-sonnet-4.6",
      "max_tokens": 8192,
      "temperature": 0.7,
      "max_tool_iterations": 20
    }
  },
  "model_list": [
    {
      "model_name": "claude-sonnet-4.6",
      "model": "anthropic/claude-sonnet-4.6",
      "api_key": "sk-ant-your-key",
      "api_base": "https://api.anthropic.com/v1"
    },
    {
      "model_name": "gpt4",
      "model": "openai/gpt-5.2",
      "api_key": "sk-your-openai-key",
      "api_base": "https://api.openai.com/v1"
    }
  ]
}
Operator identifies providers by the prefix in the model field — anthropic/, openai/, gemini/, ollama/, etc. No code changes are needed to switch models; just update model_name in agents.defaults.
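As a concrete sketch of that switch (assuming jq is installed; demonstrated on an inline sample file so nothing is overwritten — point CONFIG at ~/.operator/config.json to apply it for real):

```shell
# Sketch: flip agents.defaults.model_name with jq (jq must be installed).
# Shown against a minimal sample; in practice set CONFIG=~/.operator/config.json.
CONFIG=$(mktemp)
printf '{"agents":{"defaults":{"model_name":"claude-sonnet-4.6"}}}' > "$CONFIG"
jq -c '.agents.defaults.model_name = "gpt4"' "$CONFIG" > "$CONFIG.new" \
  && mv "$CONFIG.new" "$CONFIG"
cat "$CONFIG"   # {"agents":{"defaults":{"model_name":"gpt4"}}}
```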
To use a local Ollama model with no API key, set "model": "ollama/llama3" and "api_base": "http://localhost:11434/v1", then omit the api_key field.
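Put together, an Ollama entry in model_list might look like the following (a sketch based on the fields above; llama3-local is just a label of your choosing, and llama3 stands in for whichever model you have pulled locally). Point agents.defaults.model_name at the same label to make it the default:

```json
{
  "model_name": "llama3-local",
  "model": "ollama/llama3",
  "api_base": "http://localhost:11434/v1"
}
```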
4. Start the gateway

The gateway daemon connects your agent to configured channels (Slack, Telegram, Discord, etc.) and keeps it available continuously:
operator gateway
You’ll see the gateway start and connect to any channels you’ve enabled in config.json. Leave this running in a terminal, a tmux session, or a systemd service.
If you haven’t configured any channels yet, the gateway still starts successfully — it just won’t accept inbound messages from external platforms until you configure one. You can always interact via the CLI.
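For unattended operation, a minimal systemd user unit is one option — a sketch, assuming the binary lives at ~/.local/bin/operator as installed above. Save it as ~/.config/systemd/user/operator-gateway.service and enable it with systemctl --user enable --now operator-gateway:

```ini
[Unit]
Description=Operator OS gateway
After=network-online.target

[Service]
# %h expands to your home directory in systemd user units.
ExecStart=%h/.local/bin/operator gateway
Restart=on-failure

[Install]
WantedBy=default.target
```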
5. Send your first message

With the gateway running (or without it, for a direct CLI invocation), use operator agent to send a message:
operator agent -m "Hello! What can you do?"
The agent will respond in your terminal. Try a more practical prompt:
operator agent -m "What is the current date and time, and list the files in my workspace?"
The agent will invoke its built-in tools and return a structured response.

What’s next

Now that you have a running agent, explore what Operator OS can do:

  • Connect a channel: Add Slack, Telegram, Discord, or WhatsApp so you can message your agent from anywhere.
  • Configure models: Switch providers, configure load balancing, or point to a local Ollama instance.
  • Built-in tools: Give your agent access to DuckDuckGo, Brave Search, web fetch, and more.
  • Deploy with Docker: Run the gateway as a fully containerized service with Docker Compose.
