Chronoverse provides comprehensive analytics for tracking workflow and job performance, execution trends, and resource utilization. Analytics data is aggregated in near real time using ClickHouse for efficient querying at scale.

Overview

Analytics in Chronoverse offer insights into:
  • User-level metrics: Total workflows, jobs, logs, and execution time across all workflows
  • Workflow-level metrics: Per-workflow job counts, log volumes, and execution durations
  • Near real-time aggregation: Metrics are updated as events flow through Kafka stream processing
  • Efficient storage: ClickHouse provides fast aggregation queries even with millions of jobs
Analytics are computed asynchronously by the Analytics Processor, which consumes workflow, job, and log events from Kafka topics.

User Analytics

Get aggregate statistics for all workflows owned by a user.

Endpoint

Get User Analytics
curl "https://api.chronoverse.io/v1/analytics/users/{user_id}" \
  -H "Authorization: Bearer YOUR_TOKEN"

Response Format

User Analytics
{
  "total_workflows": 25,
  "total_jobs": 15420,
  "total_joblogs": 1847293,
  "total_job_execution_duration": 892340
}
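The same request can be made from JavaScript. This is a minimal sketch, not an official SDK; the helper name matches the `getUserAnalytics` call used in the dashboard examples below, and the error handling is an assumption:

```javascript
// Minimal client helper for the user analytics endpoint.
// Error handling and the default base URL are illustrative, not an official SDK.
async function getUserAnalytics(userId, token, baseUrl = 'https://api.chronoverse.io/v1') {
  const res = await fetch(`${baseUrl}/analytics/users/${userId}`, {
    headers: { Authorization: `Bearer ${token}` }
  });
  if (!res.ok) {
    throw new Error(`analytics request failed: ${res.status}`);
  }
  // { total_workflows, total_jobs, total_joblogs, total_job_execution_duration }
  return res.json();
}
```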

Metrics Explained

| Metric | Type | Description |
| --- | --- | --- |
| total_workflows | integer | Count of all workflows ever created (active, terminated, and deleted) |
| total_jobs | integer | Total jobs executed across all workflows |
| total_joblogs | integer | Total log entries generated across all jobs |
| total_job_execution_duration | integer | Sum of all job execution times, in seconds |
The total_workflows metric counts distinct workflows, including:
  • Active workflows (not terminated)
  • Terminated workflows
  • Deleted workflows (soft-deleted)
ClickHouse Query
SELECT COUNT(DISTINCT workflow_id) AS total_workflows
FROM analytics
WHERE user_id = ?

Use Cases

Resource Planning

Understand total compute usage to plan infrastructure capacity

Cost Tracking

Calculate execution costs based on total runtime and resource usage

Usage Trends

Track growth in workflows and job execution over time

Log Volume

Monitor log storage requirements based on total log entries
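As an illustration of the cost-tracking use case, total execution time can be turned into a rough cost estimate. The per-second rate below is purely hypothetical, not a Chronoverse price:

```javascript
// Rough cost estimate from user analytics.
// COST_PER_SECOND is a made-up rate for illustration only.
const COST_PER_SECOND = 0.0001; // hypothetical $/second of execution

function estimateExecutionCost(analytics) {
  // total_job_execution_duration is reported in seconds
  return analytics.total_job_execution_duration * COST_PER_SECOND;
}

// Using the sample response above: 892340 seconds is roughly $89.23
```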

Workflow Analytics

Get detailed metrics for a specific workflow.

Endpoint

Get Workflow Analytics
curl "https://api.chronoverse.io/v1/analytics/workflows/{workflow_id}" \
  -H "Authorization: Bearer YOUR_TOKEN"

Response Format

Workflow Analytics
{
  "workflow_id": "550e8400-e29b-41d4-a716-446655440000",
  "total_jobs": 620,
  "total_joblogs": 74832,
  "total_job_execution_duration": 18450
}

Metrics Explained

| Metric | Type | Description |
| --- | --- | --- |
| workflow_id | string | UUID of the workflow |
| total_jobs | integer | Total jobs executed by this workflow |
| total_joblogs | integer | Total log entries from all jobs |
| total_job_execution_duration | integer | Sum of execution times, in seconds |

Query Implementation

ClickHouse Query
SELECT
    workflow_id,
    jobs_count AS total_jobs,
    logs_count AS total_joblogs,
    total_job_execution_duration
FROM analytics
WHERE user_id = ? AND workflow_id = ?
LIMIT 1
Each workflow has a single analytics record that’s continuously updated as jobs execute and generate logs.

Derived Metrics

You can calculate additional insights from workflow analytics:
const avgExecutionTime = 
  analytics.total_job_execution_duration / analytics.total_jobs;

// Example: 18450 / 620 = 29.76 seconds per job
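Other ratios follow the same pattern. A guard against workflows with zero jobs avoids NaN results; the helper name is illustrative:

```javascript
// Derive per-job ratios from a workflow analytics response,
// guarding against division by zero for workflows with no jobs yet.
function deriveRates(analytics) {
  const jobs = analytics.total_jobs;
  return {
    avgExecutionTime: jobs > 0 ? analytics.total_job_execution_duration / jobs : 0,
    logsPerJob: jobs > 0 ? analytics.total_joblogs / jobs : 0
  };
}
```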

Analytics Processing Pipeline

Analytics are computed asynchronously through event stream processing:

Architecture

1. Event Generation: Services publish events to Kafka topics:
   • Workflow events (create, update, delete)
   • Job events (start, complete, fail)
   • Log events (log entries created)
2. Analytics Processor: The consumer processes events and updates aggregations
3. ClickHouse Update: The analytics table is updated with new counts and durations
4. API Query: The REST API queries ClickHouse for current metrics
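The aggregation performed by the Analytics Processor can be sketched as a pure reducer over events. The event type names and field names below are assumptions for illustration, not the actual Kafka message schema; the real processor consumes from Kafka and writes to ClickHouse rather than updating objects in memory:

```javascript
// In-memory sketch of the Analytics Processor's aggregation logic.
// Event shapes ('workflow.created', 'job.completed', 'log.created',
// duration_seconds) are illustrative, not the real message schema.
function applyEvent(record, event) {
  switch (event.type) {
    case 'workflow.created':
      // New workflow starts with an all-zero analytics record
      return { jobs_count: 0, logs_count: 0, total_job_execution_duration: 0 };
    case 'job.completed':
      return {
        ...record,
        jobs_count: record.jobs_count + 1,
        total_job_execution_duration:
          record.total_job_execution_duration + event.duration_seconds
      };
    case 'log.created':
      return { ...record, logs_count: record.logs_count + 1 };
    default:
      // Unknown events leave the aggregate unchanged
      return record;
  }
}
```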

Event Processing

Workflow Created
-- Create the initial analytics record with zero counts
-- (ON CONFLICT shown for illustration; actual deduplication depends on the table engine)
INSERT INTO analytics (user_id, workflow_id, jobs_count, logs_count, total_job_execution_duration)
VALUES (?, ?, 0, 0, 0)
ON CONFLICT (user_id, workflow_id) DO NOTHING
Workflow Deleted
Analytics records are retained for historical reporting; nothing is deleted from the analytics table.

Analytics Table Schema

The ClickHouse analytics table stores aggregated data:
Table Structure
CREATE TABLE analytics (
    user_id UUID,
    workflow_id UUID,
    jobs_count UInt32,
    logs_count UInt64,
    total_job_execution_duration UInt64,
    created_at DateTime64(3),
    updated_at DateTime64(3)
)
ENGINE = MergeTree()
ORDER BY (user_id, workflow_id)
PRIMARY KEY (user_id, workflow_id);

Indexing Strategy

  • Primary Key: (user_id, workflow_id) for efficient user and workflow queries
  • Order By: Same as primary key for optimal compression
  • Engine: MergeTree for fast aggregations and updates
The primary key enables millisecond-level query performance even with millions of workflows.

Error Handling

Not Found Errors

If no analytics exist for a user or workflow:
Error Response
{
  "code": "NOT_FOUND",
  "message": "no analytics found for workflow"
}
Causes:
  • Workflow was just created (analytics not yet processed)
  • Invalid workflow ID
  • User doesn’t own the workflow
Analytics records are created asynchronously. New workflows may not have analytics immediately available.
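Because analytics for a brand-new workflow may not exist yet, a client can retry NOT_FOUND responses with a short backoff to cover the typical 1-5 second processing delay. This is a sketch: `fetchAnalytics` stands in for any function that throws an error carrying a `code` property of 'NOT_FOUND', which is an assumption about the client error shape:

```javascript
// Retry a NOT_FOUND analytics lookup a few times before giving up,
// to allow the asynchronous pipeline (typically 1-5 s) to catch up.
// `fetchAnalytics` is any async function that throws an error
// with err.code === 'NOT_FOUND' when no analytics exist yet.
async function getAnalyticsWithRetry(fetchAnalytics, { retries = 3, delayMs = 2000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fetchAnalytics();
    } catch (err) {
      if (err.code !== 'NOT_FOUND' || attempt >= retries) throw err;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```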

Invalid Request

Error Response
{
  "code": "INVALID_ARGUMENT",
  "message": "invalid user ID or workflow ID"
}
Causes:
  • Malformed UUID
  • Empty user_id or workflow_id
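Validating IDs on the client before calling the API avoids a round trip for malformed UUIDs. A simple shape check:

```javascript
// Basic RFC 4122 UUID shape check for client-side validation
// before sending user_id or workflow_id to the API.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidUuid(id) {
  return typeof id === 'string' && UUID_RE.test(id);
}
```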

Real-Time vs. Eventual Consistency

Important Timing Considerations: Analytics are eventually consistent, not real-time:
  • Events must be processed through Kafka
  • Analytics Processor has processing latency (typically less than 1 second)
  • ClickHouse updates are batched for efficiency
Expected Delay: 1-5 seconds from event occurrence to analytics update

Performance Characteristics

Query Performance

| Operation | Typical Latency | Scale |
| --- | --- | --- |
| Get User Analytics | < 10 ms | Millions of workflows |
| Get Workflow Analytics | < 5 ms | Billions of jobs |
| Analytics Update | < 100 ms | High throughput |

Scalability

ClickHouse enables analytics at scale:
  • User Analytics: Aggregates across thousands of workflows
  • Workflow Analytics: Handles billions of job records
  • Log Counting: Efficiently counts trillions of log entries
ClickHouse’s columnar storage and efficient aggregation make it ideal for analytics workloads, even with massive data volumes.

Best Practices

Cache on Client

Analytics change slowly—cache results for 30-60 seconds to reduce API calls
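A minimal client-side cache with a configurable TTL might look like this; the wrapper shape and names are illustrative:

```javascript
// Wrap an async fetcher with a time-to-live cache so repeated
// dashboard renders don't hit the analytics API every time.
function withTtlCache(fetcher, ttlMs = 60000) {
  const cache = new Map(); // key -> { value, expiresAt }
  return async function (key) {
    const hit = cache.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = await fetcher(key);
    cache.set(key, { value, expiresAt: Date.now() + ttlMs });
    return value;
  };
}
```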

Calculate Rates

Derive metrics like average execution time, logs per job, and efficiency ratios

Expect Delays

Don’t expect instant analytics updates—allow 1-5 seconds for consistency

Monitor Trends

Track analytics over time to identify performance trends and anomalies

Limitations

Current Limitations:
  • No time-series breakdowns (daily, weekly, monthly)
  • No job status-specific counts (success vs. failure rates)
  • No percentile metrics (p50, p95, p99 execution times)
  • No workflow kind filtering
  • No custom date ranges
These features may be added in future releases based on user feedback.

Example Use Cases

Dashboard Widgets

User Overview Widget
const analytics = await getUserAnalytics(userId);

const widgets = [
  {
    title: 'Total Workflows',
    value: analytics.total_workflows,
    icon: 'workflow'
  },
  {
    title: 'Jobs Executed',
    value: analytics.total_jobs.toLocaleString(),
    icon: 'tasks'
  },
  {
    title: 'Execution Time',
    value: formatDuration(analytics.total_job_execution_duration),
    icon: 'clock'
  },
  {
    title: 'Log Entries',
    value: formatNumber(analytics.total_joblogs),
    icon: 'file-text'
  }
];
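The formatDuration and formatNumber helpers used above are left undefined in the snippet; one possible implementation, assuming durations are the API's seconds:

```javascript
// Possible implementations of the formatting helpers used above.
// formatDuration takes a duration in seconds, as the API reports it.
function formatDuration(totalSeconds) {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  return `${hours}h ${minutes}m ${seconds}s`;
}

function formatNumber(n) {
  return n.toLocaleString('en-US');
}
```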

Workflow Comparison

Compare Workflows
const workflows = await listWorkflows();
const analyticsPromises = workflows.map(wf => 
  getWorkflowAnalytics(wf.id)
);
const analytics = await Promise.all(analyticsPromises);

// Sort by most active (copy first so the original array isn't mutated)
const sortedByJobs = [...analytics].sort((a, b) =>
  b.total_jobs - a.total_jobs
);

// Find most verbose (logs per job), guarding against zero-job workflows
const withRates = analytics.map(a => ({
  ...a,
  logsPerJob: a.total_jobs > 0 ? a.total_joblogs / a.total_jobs : 0
}));

Next Steps

Workflow Types

Learn about HEARTBEAT and CONTAINER workflows

API Reference

Complete API documentation for analytics endpoints
