Warp’s AI agent is designed to work where you already work: in the terminal. Rather than switching to a separate chat interface, you interact with the agent inside your active session, where it has direct access to your shell, your files, and your codebase. The agent can read and write files, run shell commands, search your code with ripgrep, and work iteratively toward a goal — all without leaving Warp.

Agent surfaces

Warp exposes its AI capabilities through three surfaces that together cover local development, background automation, and cloud execution.

Agent Mode

An interactive AI coding agent embedded in your terminal session. Send a prompt and the agent reads your codebase, writes and edits files, runs commands, and shows you diffs before applying changes.

Ambient agents

Background agents that watch for events — such as a failing CI run or a new GitHub issue — and act automatically without a prompt from you.

Cloud agents (Oz)

Headless agents that run in isolated cloud environments. Dispatch a task, close your laptop, and come back to a finished pull request. Supports cron scheduling and multi-agent orchestration.

Supported models

Warp’s agent works with leading large language models. The model you use is configurable in Settings → AI.
| Provider | Models |
| --- | --- |
| Anthropic | Claude 3.5 Sonnet, Claude 3.7 Sonnet, Claude 3 Haiku |
| OpenAI | GPT-4o, GPT-4o mini, o1, o3-mini |
You can pin a different model per conversation, or configure a default in your agent profile settings. The `oz model list` command shows which models are available in your account.

How the agent integrates with your terminal

The agent runs inside your active Warp session and shares its context: your current working directory, shell environment variables, command history, and any blocks you explicitly attach as context. This means the agent already knows which project you are in and which commands you have recently run — you do not need to copy-paste terminal output into a chat box. When the agent executes a command, the output appears as a standard Warp block, so you can inspect it, copy it, or reference it in a follow-up prompt just like any other command.

Codebase context and indexing

Agent Mode can index your repository so the agent can search and retrieve relevant code without reading every file on every request.
When you open a repository in Warp for the first time, Warp indexes the source files using an embedding model. The index is stored on disk and updated incrementally as files change. When you send a prompt, the agent searches the index for relevant code snippets and attaches them as context automatically.
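To make the retrieval step concrete, here is a toy sketch of how an embedding index answers a prompt. Warp's actual implementation uses a learned embedding model and its own on-disk index format; this stand-in uses bag-of-words vectors and cosine similarity, and the file names and snippets are invented for illustration.

```python
# Toy sketch of embedding-index retrieval. The embed() function is a
# stand-in for a real embedding model; everything else mirrors the flow:
# build an index once, update it as files change, search it per prompt.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for the embedding model: a sparse bag-of-words vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "index": one vector per source snippet, built when the repository
# is first opened and updated incrementally as files change.
snippets = {
    "auth.py": "def login(user, password): verify credentials, create session",
    "db.py": "def connect(): open a database connection pool",
}
index = {path: embed(text) for path, text in snippets.items()}

def search(query: str, k: int = 1) -> list[str]:
    # At prompt time: embed the query, rank snippets by similarity,
    # and attach the top hits as context.
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, index[p]), reverse=True)
    return ranked[:k]

print(search("how does user login work"))  # ['auth.py']
```

The point of the index is the last step: instead of reading every file on every request, the agent embeds the prompt once and pulls in only the snippets that score highest.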
Codebase indexing is enabled per repository. Look for the Codebase context toggle in the Agent Mode input bar, or enable it in Settings → AI → Codebase context. A confirmation prompt appears before the initial index is built.
If your work spans multiple repositories, Warp can index and search across all of them in a single agent session. Enable cross-repo context in your agent profile settings.
Use the @-menu in the Agent Mode input bar to attach specific files, open diffs, or repository roots as context chips. This is useful when you want the agent to focus on a particular area of the codebase rather than relying solely on search.

Extending the agent

MCP servers

Connect Model Context Protocol servers to give the agent new tools — database queries, GitHub APIs, browser automation, and more.
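As a hedged illustration, MCP server entries are commonly declared in JSON under an `mcpServers` key, giving the command that launches the server. The exact settings location and schema in Warp may differ, and the server name, package, and environment variable below are examples to verify against the MCP server's own documentation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Once connected, the tools the server exposes become available to the agent alongside its built-in file and shell tools.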

Skills

Package reusable prompt instructions into Skills that shape how the agent behaves for a given task or codebase.

Rules

Define project-level rules that constrain or guide the agent’s behavior across every session in a repository.

Workflows

Save and share parameterized shell commands that the agent (and you) can invoke by name.
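A workflow is defined in a small YAML file. This sketch follows the field names used in Warp's public workflow format (`name`, `command`, `arguments` with `default_value`); the specific workflow shown is an invented example:

```yaml
name: Tail a log file
command: tail -f {{logfile}}
tags: ["logs"]
description: Follow a log file as it grows
arguments:
  - name: logfile
    description: Path of the log file to follow
    default_value: /var/log/syslog
```

Parameters appear as `{{placeholders}}` in the command and are filled in when the workflow is invoked by name.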
