OTAS Brain exposes four analytics endpoints that power the dashboard charts, letting you understand how your agents are performing over any date range you choose. Each endpoint is scoped to a single agent and project, requires three auth headers, and accepts start_date / end_date query parameters in YYYY-MM-DD format. The Analytics view in the dashboard queries all four automatically for the last 7 days whenever you select an agent.

Path traffic timeseries

Endpoint: GET /api/v1/agent/path-timeseries/

This endpoint returns the daily event count broken down by the path field of each BackendEvent. Use it to answer questions like “which endpoint is called most often?” or “did traffic to /api/v1/infer/ spike on a particular day?”. Only dates that have at least one event are included in the response — there is no zero-fill — so gaps in the chart mean genuinely quiet days. The dashboard renders this data as a multi-line chart with one line per unique path; clicking the expand icon on a card opens a fullscreen dialog for closer inspection.

Example response:
{
  "status": 1,
  "agent_id": "a1b2c3d4-...",
  "project_id": "e5f6g7h8-...",
  "paths": [
    {
      "path": "/api/v1/some-resource",
      "data": [
        { "date": "2026-03-01", "count": 10 },
        { "date": "2026-03-02", "count": 7 }
      ]
    }
  ]
}
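Because quiet days are omitted rather than zero-filled, a chart library that joins consecutive points will draw a line straight across gaps. A minimal client-side sketch that restores the missing days (zero_fill is an illustrative helper, not part of any OTAS SDK):

```python
from datetime import date, timedelta

def zero_fill(series, start, end):
    """Expand a sparse [{date, count}] series so every day in the
    inclusive range appears, inserting count=0 for missing days."""
    counts = {point["date"]: point["count"] for point in series}
    day, filled = start, []
    while day <= end:
        key = day.isoformat()
        filled.append({"date": key, "count": counts.get(key, 0)})
        day += timedelta(days=1)
    return filled

# One path's "data" array, with 2026-03-02 missing:
sparse = [{"date": "2026-03-01", "count": 10}, {"date": "2026-03-03", "count": 7}]
filled = zero_fill(sparse, date(2026, 3, 1), date(2026, 3, 3))
```

After filling, `filled` contains three entries, with `{"date": "2026-03-02", "count": 0}` restored in the middle.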

Sessions per day

The dashboard derives a sessions per day chart from the session list endpoint (GET /api/agent/v1/sessions/list/) rather than a dedicated analytics endpoint. It buckets sessions by their created_at date over the last 7 days. A rising trend here typically means your agent is being invoked more frequently, which is worth cross-referencing with the latency and error charts to confirm quality is keeping pace with volume.
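The bucketing described above can be sketched as follows. The created_at field is assumed to be an ISO 8601 timestamp string as returned by the session list endpoint, and sessions_per_day is an illustrative helper, not actual dashboard code:

```python
from collections import Counter
from datetime import date, timedelta

def sessions_per_day(sessions, today):
    """Bucket sessions by the date portion of created_at over the
    7 days ending at `today`, inclusive. Days with no sessions get 0."""
    days = [(today - timedelta(days=offset)).isoformat() for offset in range(6, -1, -1)]
    counts = Counter(session["created_at"][:10] for session in sessions)
    return {day: counts.get(day, 0) for day in days}

sessions = [
    {"created_at": "2026-03-06T09:14:02Z"},
    {"created_at": "2026-03-06T17:40:55Z"},
    {"created_at": "2026-03-07T08:01:13Z"},
]
buckets = sessions_per_day(sessions, date(2026, 3, 7))
```

Unlike the server-side analytics endpoints, this derived chart zero-fills quiet days, since the full 7-day window is constructed client-side.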

Latency percentiles

Endpoint: GET /api/v1/agent/latency-percentiles/

Returns daily p50, p95, and p99 latency values in milliseconds, computed using PostgreSQL's PERCENTILE_CONT (continuous interpolation) over the latency_ms field of each event.
| Percentile | Meaning |
| --- | --- |
| p50 | Median latency — half of all requests complete faster than this value |
| p95 | 95% of requests complete faster than this — a good proxy for “typical slow request” |
| p99 | Only 1% of requests are slower — reveals tail latency and outliers |
A widening gap between p50 and p99 over time signals that tail latency is getting worse even if median stays stable. Use this to catch performance regressions before users notice them.
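PERCENTILE_CONT interpolates linearly between the two nearest observed values rather than snapping to one of them, so a percentile of a small sample can be a value no request actually recorded. A self-contained sketch of the same calculation (percentile_cont here is illustrative; the server does this in SQL):

```python
def percentile_cont(values, fraction):
    """Linear-interpolation percentile, matching the behavior of
    PostgreSQL's PERCENTILE_CONT(fraction) WITHIN GROUP (ORDER BY v)."""
    ordered = sorted(values)
    rank = fraction * (len(ordered) - 1)  # continuous 0-based row number
    lower = int(rank)
    weight = rank - lower
    if weight == 0:
        return ordered[lower]
    # Interpolate between the two rows straddling the continuous rank.
    return ordered[lower] * (1 - weight) + ordered[lower + 1] * weight

latencies_ms = [100, 120, 150, 400, 900]
p50 = percentile_cont(latencies_ms, 0.50)  # 150, the exact median
p95 = percentile_cont(latencies_ms, 0.95)  # interpolated between 400 and 900
```

Note that p95 here falls between 400 and 900 ms even though no request took that long — exactly the interpolation PERCENTILE_CONT performs.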
Measure latency_ms at the call site — capture wall-clock time around the actual network or model call, not across your entire request handler. This value is the raw input to all percentile calculations, so precision here directly affects the accuracy of every chart.
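A sketch of what call-site timing can look like in Python, assuming your client code can wrap the network or model call in a helper (timed_call is illustrative, not part of any OTAS SDK):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Time only the wrapped call itself, so the resulting latency_ms
    reflects the operation being measured, not the whole request handler."""
    started = time.perf_counter()  # monotonic, high-resolution clock
    result = fn(*args, **kwargs)
    latency_ms = (time.perf_counter() - started) * 1000.0
    return result, latency_ms

# Stand-in for a network or model call:
result, latency_ms = timed_call(lambda: sum(range(1000)))
```

time.perf_counter is preferable to time.time here because it is monotonic and unaffected by system clock adjustments.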
Example curl:
curl -G "http://localhost:8002/api/v1/agent/latency-percentiles/" \
  --data-urlencode "start_date=2026-03-01" \
  --data-urlencode "end_date=2026-03-07" \
  -H "X-OTAS-USER-TOKEN: <your-user-jwt>" \
  -H "X-OTAS-AGENT-ID: <agent-uuid>" \
  -H "X-OTAS-PROJECT-ID: <project-uuid>"
Example response:
{
  "status": 1,
  "agent_id": "a1b2c3d4-...",
  "project_id": "e5f6g7h8-...",
  "data": [
    { "date": "2026-03-01", "p50": 142.3, "p95": 489.1, "p99": 1203.7 },
    { "date": "2026-03-02", "p50": 138.0, "p95": 460.5, "p99": 980.2 }
  ]
}

Error count

Endpoint: GET /api/v1/agent/error-count/

Returns a daily count of events where the error field is non-null and non-empty. This is a strict count of logged failures — it does not include HTTP 4xx or 5xx responses unless your SDK or agent explicitly populates the error field for those cases. Use this chart to track failure rates over time. A sudden jump in error_count on a given day, especially when correlated with a p99 spike in the latency chart, points to a systemic issue worth investigating in the session logs. Only dates with at least one error are included in the response.
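One way to spot that correlation programmatically is to merge the two endpoints' responses by date and flag days that exceed both thresholds. The latency field names follow the example response above; the error-count response shape and the correlate helper are assumptions for illustration, not documented API output:

```python
def correlate(errors, latencies, p99_threshold_ms, error_threshold):
    """Return dates where a daily error spike coincides with a p99 spike.
    `errors`    — assumed shape: [{"date": ..., "error_count": ...}]
    `latencies` — per-day rows containing at least "date" and "p99"."""
    p99_by_date = {row["date"]: row["p99"] for row in latencies}
    return [
        row["date"]
        for row in errors
        if row["error_count"] >= error_threshold
        and p99_by_date.get(row["date"], 0) >= p99_threshold_ms
    ]

errors = [
    {"date": "2026-03-01", "error_count": 2},
    {"date": "2026-03-02", "error_count": 40},
]
latencies = [
    {"date": "2026-03-01", "p99": 1203.7},
    {"date": "2026-03-02", "p99": 5400.0},
]
suspect_days = correlate(errors, latencies, p99_threshold_ms=2000, error_threshold=10)
```

Dates flagged this way are good starting points for a session-by-session look via the session events endpoint.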

Session events

Endpoint: GET /api/v1/agent/session/events/?session_id=<uuid>

Returns up to 200 events (configurable up to 500 via ?limit=) for a single agent session, ordered by event_time ascending. This is the endpoint behind the session detail view in the dashboard and is the primary tool for root cause analysis: you can reconstruct the exact sequence of calls an agent made during a run, inspect request and response bodies, and locate exactly which call produced an error.
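A small sketch that assembles the URL and headers for this endpoint without sending the request (session_events_request is an illustrative helper; the base URL and IDs below are placeholders):

```python
from urllib.parse import urlencode

def session_events_request(base_url, session_id, user_token, agent_id, project_id, limit=200):
    """Build the URL and required headers for the session events endpoint.
    `limit` defaults to 200 and can be raised up to 500."""
    query = urlencode({"session_id": session_id, "limit": limit})
    url = f"{base_url}/api/v1/agent/session/events/?{query}"
    headers = {
        "X-OTAS-USER-TOKEN": user_token,
        "X-OTAS-AGENT-ID": agent_id,
        "X-OTAS-PROJECT-ID": project_id,
    }
    return url, headers

url, headers = session_events_request(
    "http://localhost:8002", "abc-123", "<your-user-jwt>", "agent-uuid", "proj-uuid",
    limit=500)
```

The tuple can then be passed to any HTTP client, e.g. `requests.get(url, headers=headers)`.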

Date range queries

All analytics endpoints except session events accept start_date and end_date as required query parameters:
?start_date=YYYY-MM-DD&end_date=YYYY-MM-DD
Both dates are inclusive. If start_date is after end_date the API returns a 400 error. The dashboard always sends the last 7 days, but you can query any range directly using curl or your preferred HTTP client.
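A client-side preflight that mirrors these rules can save a round trip; validate_range below is an illustrative sketch, not part of the API:

```python
from datetime import datetime

def validate_range(start_date, end_date):
    """Check a start_date/end_date pair against the API's rules:
    both YYYY-MM-DD, and start_date must not be after end_date.
    Returns the number of days the inclusive range covers."""
    start = datetime.strptime(start_date, "%Y-%m-%d").date()
    end = datetime.strptime(end_date, "%Y-%m-%d").date()
    if start > end:
        raise ValueError("start_date is after end_date; the API would return 400")
    return (end - start).days + 1  # inclusive, matching the API's semantics

days = validate_range("2026-03-01", "2026-03-07")  # the dashboard's 7-day window
```

strptime also rejects malformed dates (e.g. "2026-3-1"), mirroring the required YYYY-MM-DD format.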

Required headers

Every analytics request must include all three of the following headers:
| Header | Value |
| --- | --- |
| X-OTAS-USER-TOKEN | JWT returned at login |
| X-OTAS-AGENT-ID | UUID of the agent you are querying |
| X-OTAS-PROJECT-ID | UUID of the project the agent belongs to |
Omitting any of these headers returns a 400 or 401 response with a status_description field explaining which header is absent.
