Overview

The ECAResponse class provides methods for retrieving stored prediction responses. When predictions are made with store_response=True, they are saved and can be retrieved later using this API.

Methods

get_all()

Get all stored prediction responses.
Returns: tuple of (List[ECAResponse] | Error, status_code)
Example:
res, status = eca.response.get_all()
if status == 200:
    for response in res:
        print(f"Response ID: {response.id}")
        print(f"  Name: {response.name}")
        print(f"  Module: {response.twin_module_id}")
        print(f"  Percept ESS: {response.percept_ess_ids}")
        print(f"  Response ESS: {response.response_ess_ids}")
        print()

get_one(db_id: int)

Get a specific response by its database ID.
Parameters:
  db_id (int, required): The database ID of the response

Returns: tuple of (ECAResponse | Error, status_code)
Example:
res, status = eca.response.get_one(db_id=100)
if status == 200:
    print(f"Name: {res.name}")
    print(f"Status: {res.status}")
    print(f"Percept: {res.percept_names}")
    print(f"Response: {res.response_names}")
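Because every method returns a (payload | Error, status_code) tuple, the status check tends to repeat at each call site. A small helper can centralize it; this is a sketch for illustration, not part of the avenieca library:

```python
# A helper (not part of the avenieca library) for the
# (payload | Error, status_code) tuples the Response API returns.
def unwrap(result, status, default=None):
    """Return the payload when the call succeeded, else `default`."""
    return result if status == 200 else default

# Simulated return values, shaped like what the live client produces:
ok = unwrap({"id": 100, "name": "ac_run"}, 200)
failed = unwrap("not found", 404, default=None)
print(ok["name"])  # ac_run
print(failed)      # None
```

With this in place, `res = unwrap(*eca.response.get_one(db_id=100))` collapses the fetch and the check into one line, at the cost of discarding the Error payload.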

Understanding Stored Responses

When you make a prediction using the Cortex API with store_response=True, the prediction is saved as an ECAResponse. This creates a record of:
  1. Percept: The current state that was used as input
  2. Response: The predicted next state(s)
  3. Metadata: Names, module IDs, timestamps, etc.
This is useful for:
  • Tracking prediction history
  • Analyzing prediction accuracy over time
  • Auditing and debugging
  • Building training datasets
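The last use case can be sketched in a few lines: pairing each stored percept with its predicted response to build a supervised training set. `StoredResponse` below is a stand-in dataclass for illustration; the real objects returned by `eca.response.get_all()` expose the same `percept_ess_ids` / `response_ess_ids` fields.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StoredResponse:
    """Stand-in for the ECAResponse records returned by get_all()."""
    percept_ess_ids: List[int]
    response_ess_ids: List[int]

def to_training_pairs(responses: List[StoredResponse]) -> List[Tuple[tuple, tuple]]:
    """Pair each stored percept (input state) with its predicted response."""
    return [
        (tuple(r.percept_ess_ids), tuple(r.response_ess_ids))
        for r in responses
    ]

pairs = to_training_pairs([
    StoredResponse(percept_ess_ids=[1, 2], response_ess_ids=[3]),
    StoredResponse(percept_ess_ids=[1, 2], response_ess_ids=[4]),
])
print(pairs)  # [((1, 2), (3,)), ((1, 2), (4,))]
```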

Complete Workflow Example

import os
from avenieca.api.eca import ECA
from avenieca.api.model import Config, NextStateRequest

# Initialize client
config = Config(
    uri="http://localhost:2580/v1",
    username=os.getenv("USERNAME"),
    password=os.getenv("PASSWORD")
)
eca = ECA(config)

# Step 1: Make a prediction and store the response
request = NextStateRequest(
    module_id="air_conditioner",
    recall=10,
    range=5,
    n=2,
    status="e",
    store_response=True  # This will save the prediction
)

pred_res, pred_status = eca.cortex.predictions(data=request)

if pred_status == 200:
    print("Prediction made and stored successfully")
    print(f"Current state: {pred_res.current_state}")
    print(f"Predicted states: {pred_res.next_state}")

# Step 2: Retrieve all stored responses
all_responses, all_status = eca.response.get_all()

if all_status == 200:
    print(f"\nTotal stored responses: {len(all_responses)}")
    
    # Find the most recent response
    if all_responses:
        recent = all_responses[-1]
        print(f"\nMost recent response:")
        print(f"  ID: {recent.id}")
        print(f"  Module: {recent.twin_module_id}")
        print(f"  Created: {recent.created_at}")
        
        # Step 3: Get detailed information about this response
        detail_res, detail_status = eca.response.get_one(db_id=recent.id)
        
        if detail_status == 200:
            print(f"\nDetailed Response Information:")
            print(f"  Percept ESS IDs: {detail_res.percept_ess_ids}")
            print(f"  Percept Names: {detail_res.percept_names}")
            print(f"  Response ESS IDs: {detail_res.response_ess_ids}")
            print(f"  Response Names: {detail_res.response_names}")
            print(f"  Status: {detail_res.status}")

Analyzing Prediction History

# Get all responses for analysis
responses, status = eca.response.get_all()

if status == 200:
    # Filter by module
    ac_responses = [
        r for r in responses 
        if r.twin_module_id == "air_conditioner"
    ]
    
    print(f"Air conditioner predictions: {len(ac_responses)}")
    
    # Analyze percept patterns
    percept_patterns = {}
    for resp in ac_responses:
        pattern = tuple(resp.percept_ess_ids)
        percept_patterns[pattern] = percept_patterns.get(pattern, 0) + 1
    
    print("\nMost common percept patterns:")
    for pattern, count in sorted(
        percept_patterns.items(), 
        key=lambda x: x[1], 
        reverse=True
    )[:5]:
        print(f"  {pattern}: {count} times")
    
    # Get predictions by date
    for resp in ac_responses[-5:]:
        print(f"\n{resp.created_at}:")
        print(f"  From: {resp.percept_names}")
        print(f"  To: {resp.response_names}")

Integration with Cortex API

The Response API works hand-in-hand with the Cortex API:
from avenieca.api.model import NextStateRequest

# Make prediction WITHOUT storing
request = NextStateRequest(
    module_id="air_conditioner",
    n=1,
    store_response=False  # Don't store
)

res, status = eca.cortex.predictions(data=request)
print("Prediction made, not stored")

# Make prediction WITH storing
request_stored = NextStateRequest(
    module_id="air_conditioner",
    n=1,
    store_response=True  # Store this one
)

res, status = eca.cortex.predictions(data=request_stored)
print("Prediction made and stored")

# Retrieve the stored prediction
all_responses, status = eca.response.get_all()
if status == 200 and all_responses:
    latest = all_responses[-1]
    print(f"Latest stored response ID: {latest.id}")

Best Practices

  1. Selective Storage: Only store responses when you need historical tracking
  2. Regular Cleanup: Periodically review and archive old responses
  3. Analysis: Use stored responses to improve your models and understand patterns
  4. Auditing: Leverage stored responses for debugging and validation
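For practice 2, the Response API documented here only covers retrieval, so any server-side deletion endpoint is outside this page; a client-side sketch can still partition records by age so that old ones can be written to an archive file. The dict fields mirror the attributes shown above (`id`, `twin_module_id`, `created_at`); the archive format itself is an assumption, not part of the API.

```python
import json
from datetime import datetime

def partition_for_archive(responses, cutoff):
    """Split response dicts into (archive, keep) lists by created_at."""
    archive, keep = [], []
    for r in responses:
        created = datetime.fromisoformat(r["created_at"])
        (archive if created < cutoff else keep).append(r)
    return archive, keep

responses = [
    {"id": 1, "twin_module_id": "air_conditioner", "created_at": "2024-01-01T00:00:00"},
    {"id": 2, "twin_module_id": "air_conditioner", "created_at": "2024-06-01T00:00:00"},
]
old, fresh = partition_for_archive(responses, cutoff=datetime(2024, 3, 1))
print(json.dumps(old))  # records to write out before any cleanup
```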
