GetMetrics

Returns a point-in-time snapshot of the memory substrate’s health and behavioural metrics. The response payload is a JSON-encoded Snapshot object. This method takes no input parameters:
message GetMetricsRequest {}

message MetricsResponse {
    bytes snapshot = 1; // JSON-encoded metrics.Snapshot
}
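
The Go client shown later on this page decodes this payload for you, but if you consume the RPC directly the snapshot arrives as raw JSON bytes. Below is a minimal decoding sketch; the struct mirrors a subset of the fields documented in the next section, while the canonical type is metrics.Snapshot and everything else here (helper, sample bytes) is illustrative:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "time"
)

// Snapshot mirrors a subset of the JSON fields documented below.
type Snapshot struct {
    CollectedAt         time.Time      `json:"collected_at"`
    TotalRecords        int            `json:"total_records"`
    RecordsByType       map[string]int `json:"records_by_type"`
    AvgSalience         float64        `json:"avg_salience"`
    AvgConfidence       float64        `json:"avg_confidence"`
    RetrievalUsefulness float64        `json:"retrieval_usefulness"`
    RevisionRate        float64        `json:"revision_rate"`
}

func main() {
    // raw stands in for the MetricsResponse.Snapshot bytes of a real call.
    raw := []byte(`{"collected_at":"2026-02-05T14:23:10Z","total_records":142,"avg_salience":0.62}`)

    var snap Snapshot
    if err := json.Unmarshal(raw, &snap); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%d records, avg salience %.2f\n", snap.TotalRecords, snap.AvgSalience)
}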

Response fields

collected_at (string)
RFC 3339 timestamp when the snapshot was collected.

total_records (number)
Total number of records in the store across all memory types.

records_by_type (object)
Count of records broken down by memory type.

avg_salience (number)
Mean salience across all records. Range [0, 1].

avg_confidence (number)
Mean confidence across all records. Range [0, 1].

salience_distribution (object)
Count of records in each salience bucket.

active_records (number)
Number of records with salience greater than 0.

pinned_records (number)
Number of records marked as pinned (exempt from automatic decay and pruning).

total_audit_entries (number)
Total number of audit log entries across all records.

memory_growth_rate (number)
Fraction of records created in the last 24 hours: recent_records / total_records. Indicates how rapidly new memory is being accumulated.

retrieval_usefulness (number)
Ratio of reinforce audit actions to total audit entries: reinforce_count / total_audit_entries. Measures how often retrieved records are marked as useful.

competence_success_rate (number)
Average success_rate across all competence records that have performance data. Indicates how reliably the agent’s learned procedures succeed.

plan_reuse_frequency (number)
Average execution_count across all plan graph records that have metrics. Higher values indicate plans are being discovered and reused rather than recreated.

revision_rate (number)
Fraction of audit entries that are revisions (revise, fork, or merge actions): revision_count / total_audit_entries. Indicates how actively knowledge is being updated.
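
The three ratio metrics above (memory_growth_rate, retrieval_usefulness, revision_rate) share one shape: a count divided by a population, which needs a guard when the store or audit log is empty. A minimal sketch of the computation; the function and parameter names are illustrative, not part of the library's API:

// deriveRatios computes the three ratio metrics from raw counts,
// returning 0 rather than NaN when a denominator is zero.
func deriveRatios(recent, total, reinforce, revisions, auditEntries int) (growth, usefulness, revisionRate float64) {
    ratio := func(n, d int) float64 {
        if d == 0 {
            return 0 // empty store or audit log
        }
        return float64(n) / float64(d)
    }
    growth = ratio(recent, total)                 // memory_growth_rate
    usefulness = ratio(reinforce, auditEntries)   // retrieval_usefulness
    revisionRate = ratio(revisions, auditEntries) // revision_rate
    return growth, usefulness, revisionRate
}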

Metric descriptions

memory_growth_rate: Fraction of records created in the last 24 hours
retrieval_usefulness: Ratio of reinforce actions to total audit entries
competence_success_rate: Average success rate across competence records
plan_reuse_frequency: Average execution count across plan graph records
revision_rate: Fraction of audit entries that are revisions (revise, fork, merge)

Example snapshot

{
  "collected_at": "2026-02-05T14:23:10Z",
  "total_records": 142,
  "records_by_type": {
    "episodic": 80,
    "semantic": 35,
    "competence": 15,
    "plan_graph": 7,
    "working": 5
  },
  "avg_salience": 0.62,
  "avg_confidence": 0.78,
  "salience_distribution": {
    "0.0-0.2": 12,
    "0.2-0.4": 18,
    "0.4-0.6": 30,
    "0.6-0.8": 45,
    "0.8-1.0": 37
  },
  "active_records": 130,
  "pinned_records": 3,
  "total_audit_entries": 890,
  "memory_growth_rate": 0.15,
  "retrieval_usefulness": 0.42,
  "competence_success_rate": 0.85,
  "plan_reuse_frequency": 2.3,
  "revision_rate": 0.08
}
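
Note that records_by_type and salience_distribution each partition the store, so their counts sum to total_records (80 + 35 + 15 + 7 + 5 = 142 and 12 + 18 + 30 + 45 + 37 = 142 here). A small sanity-check helper, assuming map-typed fields as in the decoding sketch above:

// sumCounts totals the per-bucket counts of a snapshot map so the
// result can be compared against total_records.
func sumCounts(buckets map[string]int) int {
    total := 0
    for _, n := range buckets {
        total += n
    }
    return total
}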

Calling GetMetrics

// m is an initialized memory client; ctx is a context.Context
// (imports: fmt, log).
snap, err := m.GetMetrics(ctx)
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Total records: %d\n", snap.TotalRecords)
fmt.Printf("Avg salience:  %.2f\n", snap.AvgSalience)
fmt.Printf("Retrieval usefulness: %.2f\n", snap.RetrievalUsefulness)
fmt.Printf("Competence success rate: %.2f\n", snap.CompetenceSuccessRate)
Poll GetMetrics periodically and alert when retrieval_usefulness drops below a threshold or revision_rate spikes unexpectedly. These two metrics are the most direct indicators of whether the agent is learning effectively.
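
A minimal polling sketch along those lines, reusing the Snapshot struct from the decoding sketch above; the interface, field names, interval, and thresholds are illustrative assumptions, not values from the library:

// snapshotter abstracts whatever client exposes GetMetrics.
type snapshotter interface {
    GetMetrics(ctx context.Context) (*Snapshot, error)
}

// pollMetrics samples GetMetrics on a fixed interval and logs an alert
// when either learning-health indicator crosses its threshold.
func pollMetrics(ctx context.Context, m snapshotter) {
    const (
        minUsefulness = 0.20 // alert when retrieval_usefulness falls below this
        maxRevision   = 0.30 // alert when revision_rate rises above this
    )
    ticker := time.NewTicker(5 * time.Minute)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
            snap, err := m.GetMetrics(ctx)
            if err != nil {
                log.Printf("GetMetrics failed: %v", err)
                continue
            }
            if snap.RetrievalUsefulness < minUsefulness {
                log.Printf("alert: retrieval_usefulness %.2f < %.2f", snap.RetrievalUsefulness, minUsefulness)
            }
            if snap.RevisionRate > maxRevision {
                log.Printf("alert: revision_rate %.2f > %.2f", snap.RevisionRate, maxRevision)
            }
        }
    }
}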
