OTAS is an open-source observability platform purpose-built for AI agent systems. It captures every API call your agents make — whether in-domain or to external services — and surfaces that data through a real-time analytics dashboard. Connect an agent in minutes, then trace sessions, monitor latency, and track errors without instrumenting your code by hand.

Quick Start

Connect your first agent and start capturing events in under 10 minutes.

Architecture

Understand how UASAM, Brain, and the frontend work together.

Agent Integration

Follow the Agent Manifest to instrument any AI agent with OTAS.

API Reference

Explore every UASAM and Brain endpoint with full request/response schemas.

How OTAS works

OTAS consists of two backend services and a React dashboard:
  • UASAM (port 8000) manages users, projects, agents, and API keys. It issues the JWT tokens that authenticate every subsequent request.
  • Brain (port 8002) receives event logs from your backend SDK or directly from agents. It stores structured BackendEvent records and exposes analytics endpoints.
  • Frontend (port 5173) is a React + MUI dashboard where you can view agents, browse sessions, and explore charts for traffic, latency, and errors.
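The three services and their default ports can be captured in a small client-side config. This is a sketch only: the hostnames assume a local deployment, and nothing here is mandated by OTAS beyond the port numbers listed above.

```python
# Hypothetical base URLs for a local OTAS deployment, matching the
# default ports listed above. Adjust hosts/ports for your environment.
SERVICES = {
    "uasam": "http://localhost:8000",     # users, projects, agents, API keys
    "brain": "http://localhost:8002",     # event ingestion and analytics
    "frontend": "http://localhost:5173",  # React + MUI dashboard
}
```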
1. Create an account

Sign up at the OTAS frontend. Your user JWT is returned on login and sent as the X-OTAS-USER-TOKEN header on every management request.
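Attaching that token looks roughly like the sketch below. The endpoint path is an assumption for illustration; the documented detail is the X-OTAS-USER-TOKEN header carrying the user JWT.

```python
import urllib.request

# Placeholder JWT: in practice this is the token returned by login.
user_jwt = "eyJ...your-user-jwt..."

# Hypothetical management request to UASAM (port 8000); the /projects
# path is an assumption, the auth header name is the documented one.
req = urllib.request.Request(
    "http://localhost:8000/projects",
    headers={"X-OTAS-USER-TOKEN": user_jwt},
)
# urllib.request.urlopen(req) would send the authenticated request.
```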
2. Create a project

Projects group your agents and their event data. Each project gets a UUID you use to scope all agent and key operations.
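Scoping by project UUID might look like this. The path template is hypothetical; the documented fact is that every agent and key operation is scoped by the project's UUID.

```python
import uuid

# In practice the UUID is returned by UASAM when the project is created;
# a random one stands in for illustration here.
project_id = str(uuid.uuid4())

# Hypothetical scoped path: all agent and key operations carry the project UUID.
agents_url = f"http://localhost:8000/projects/{project_id}/agents"
```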
3. Register an agent and get a key

Create an agent under your project. OTAS generates an AgentKey (prefixed agent_…) that your AI agent uses to authenticate.
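A client-side sanity check for the documented key format can catch misconfigured credentials early. Only the agent_ prefix comes from the docs; the non-empty-suffix check is an assumption.

```python
def looks_like_agent_key(key: str) -> bool:
    """Check the documented AgentKey shape: 'agent_' prefix plus a secret.

    The prefix is documented; requiring a non-empty suffix is an assumption.
    """
    return key.startswith("agent_") and len(key) > len("agent_")
```

For example, `looks_like_agent_key("agent_3f9c")` passes while a bare `"agent_"` or a key with a different prefix fails.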
4. Start a session and log events

At the start of each run, your agent calls the session creation endpoint to get a session JWT. Every API call is then logged to Brain — automatically for in-domain requests, or manually for external calls.
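The run-start handshake might be sketched as follows. The /sessions path and payload field are assumptions; the documented flow is exchanging the AgentKey for a session JWT before any events are logged.

```python
import json
import urllib.request

agent_key = "agent_...your-key..."  # placeholder AgentKey

# Hypothetical session-creation request to Brain (port 8002). Path and
# body shape are assumptions, not the documented API.
req = urllib.request.Request(
    "http://localhost:8002/sessions",
    data=json.dumps({"agent_key": agent_key}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would return the session JWT, which the agent
# then attaches to every event it logs during the run.
```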

Key capabilities

End-to-end event capture

Log every HTTP request your agents make — path, method, status, latency, request/response bodies, and custom metadata.
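One captured event covering the fields above might look like this. The field names sketch a plausible BackendEvent shape and are assumptions, as are the sample values.

```python
# Hypothetical event record: one entry per HTTP request an agent made.
event = {
    "path": "/v1/chat/completions",
    "method": "POST",
    "status": 200,
    "latency_ms": 348,
    "request_body": {"messages": []},
    "response_body": {"id": "resp-123"},
    "metadata": {"task": "summarize-ticket", "retry": 0},  # custom metadata
}
```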

Session-scoped tracing

Group events into sessions to trace exactly what an agent did during a single task or run.
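Reconstructing a per-session trace client-side is a simple bucketing step. The session_id field name is an assumption.

```python
from collections import defaultdict

# Sample events; session_id groups everything one run produced.
events = [
    {"session_id": "s1", "path": "/search"},
    {"session_id": "s2", "path": "/plan"},
    {"session_id": "s1", "path": "/answer"},
]

# Bucket event paths by session to recover each run's trace, in order.
traces = defaultdict(list)
for e in events:
    traces[e["session_id"]].append(e["path"])
# traces["s1"] is the ordered trace of session s1: ["/search", "/answer"]
```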

Latency percentiles

Daily p50, p95, and p99 latency breakdown per agent, rendered as charts in the dashboard.

Error rate monitoring

Track error counts over time and drill into failing events to understand what went wrong.
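Counting errors over time reduces to tallying non-2xx/3xx events per bucket. Field names and the >= 400 threshold are assumptions about the event schema.

```python
from collections import Counter

# Sample events with a day bucket and an HTTP status (field names assumed).
events = [
    {"day": "2024-06-01", "status": 200},
    {"day": "2024-06-01", "status": 500},
    {"day": "2024-06-02", "status": 404},
]

# Tally one count per day for each event whose status signals an error.
errors_per_day = Counter(e["day"] for e in events if e["status"] >= 400)
```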
