Chronoverse supports two distinct workflow types that enable different execution patterns: HEARTBEAT for health monitoring and CONTAINER for custom containerized workloads.

Overview

Workflows are the core building blocks in Chronoverse. Each workflow has:
  • Kind: The type of workflow (HEARTBEAT or CONTAINER)
  • Interval: Execution frequency in minutes (1-10,080 minutes, max 1 week)
  • Payload: JSON configuration specific to the workflow type
  • Build Status: Lifecycle state (QUEUED, STARTED, COMPLETED, FAILED, CANCELED)

HEARTBEAT Workflow

HEARTBEAT workflows perform HTTP health checks against specified endpoints. They’re ideal for monitoring service availability and uptime.

Configuration

{
  "timeout": "10s",
  "endpoint": "https://api.example.com/health",
  "expected_status_code": 200,
  "headers": {
    "Authorization": ["Bearer token123"],
    "X-Custom-Header": ["value1", "value2"]
  }
}

Parameters

Field                 Type     Required  Default  Description
timeout               string   No        10s      Request timeout (max 5 minutes)
endpoint              string   Yes       -        Target URL for health check
expected_status_code  integer  No        200      Expected HTTP status (100-599)
headers               object   No        {}       Custom HTTP headers

How It Works

1. Schedule Creation: create a HEARTBEAT workflow with your endpoint configuration.
2. Automatic Execution: jobs are automatically scheduled based on the interval.
3. HTTP Request: a GET request is sent to the endpoint with the specified headers.
4. Status Validation: the response status code is compared against the expected value.
5. Result Recording: the job completes with SUCCESS or FAILURE status.
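The check loop above can be sketched in Python. This is an illustrative helper, not Chronoverse's actual implementation; the function name `run_heartbeat` is hypothetical, and the payload's header lists are flattened into comma-joined header values as one plausible interpretation:

```python
import urllib.error
import urllib.request

def run_heartbeat(endpoint, expected_status_code=200, headers=None, timeout=10.0):
    """Perform one heartbeat check: GET the endpoint, compare status codes."""
    # Payload headers map names to lists of values; join them for the request.
    hdrs = {k: ", ".join(v) if isinstance(v, list) else v
            for k, v in (headers or {}).items()}
    req = urllib.request.Request(endpoint, headers=hdrs, method="GET")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code            # 4xx/5xx responses still carry a status code
    except OSError:
        return "FAILURE"             # network error or timeout: nothing to compare
    return "SUCCESS" if status == expected_status_code else "FAILURE"
```

Note that a 4xx/5xx response is not automatically a failure: if `expected_status_code` is, say, 404, a 404 response still counts as SUCCESS.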

Example Usage

API Monitoring
// Monitor API endpoint every 5 minutes
{
  "name": "API Health Check",
  "kind": "HEARTBEAT",
  "interval": 5,
  "max_consecutive_job_failures_allowed": 3,
  "payload": {
    "timeout": "30s",
    "endpoint": "https://api.production.com/health",
    "expected_status_code": 200,
    "headers": {
      "Authorization": ["Bearer prod-token"]
    }
  }
}
HEARTBEAT workflows use HTTP GET requests only. The response body is not evaluated—only the status code matters.

CONTAINER Workflow

CONTAINER workflows execute custom Docker containers, enabling you to run arbitrary scripts, processes, and applications in isolated environments.

Configuration

{
  "timeout": "5m",
  "image": "python:3.11-slim",
  "cmd": [
    "python",
    "-c",
    "print('Processing data...'); import time; time.sleep(2); print('Done!')"
  ],
  "env": {
    "API_KEY": "secret123",
    "ENVIRONMENT": "production"
  }
}
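To make the payload fields concrete, here is a sketch of how such a payload could translate into a `docker run` invocation. The helper name `build_docker_run` is hypothetical and this is only one plausible mapping, assuming `--rm` implements the auto-removal behavior described below:

```python
def build_docker_run(payload):
    """Translate a CONTAINER payload into a `docker run` argv (illustrative)."""
    argv = ["docker", "run", "--rm"]          # --rm mirrors auto-removal
    for key, value in payload.get("env", {}).items():
        argv += ["--env", f"{key}={value}"]   # KEY=VALUE environment variables
    argv.append(payload["image"])             # image name and tag
    argv += payload.get("cmd", [])            # command and arguments, if any
    return argv
```

The `timeout` field would then be enforced around the process itself, e.g. via `subprocess.run(argv, timeout=...)`.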

Parameters

Field    Type    Required  Default  Description
timeout  string  No        30s      Container execution timeout (max 1 hour)
image    string  Yes       -        Docker image name and tag
cmd      array   No        []       Command and arguments to execute
env      object  No        {}       Environment variables (KEY=VALUE)

Execution Lifecycle

1. Image Pull: the Docker image is pulled from the registry during workflow creation.
2. Container Creation: the container is created with the specified command and environment.
3. Container Start: the container starts executing with auto-removal enabled.
4. Log Streaming: both stdout and stderr are captured and streamed in real time.
5. Completion: the container exits and is automatically removed; the exit code determines the job status.
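The final step, mapping the container's exit code to a job status, can be sketched as follows. `run_container` is a hypothetical helper, shown here with `subprocess` for illustration; Chronoverse's worker may use the Docker API directly:

```python
import subprocess

def run_container(argv, timeout_seconds):
    """Run a command and map its exit code to a job status (illustrative)."""
    try:
        proc = subprocess.run(argv, capture_output=True, text=True,
                              timeout=timeout_seconds)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return "FAILURE"                     # timed out or could not start
    # Exit code 0 means success; anything else fails the job.
    return "SUCCESS" if proc.returncode == 0 else "FAILURE"
```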

Log Streaming

CONTAINER workflows automatically capture and stream both stdout and stderr:
Log Entry Format
{
  "timestamp": "2026-03-03T10:30:45.123Z",
  "message": "Processing completed successfully",
  "sequence_num": 42,
  "stream": "stdout"
}
The stream field identifies the source: "stdout" for standard output from the container process, "stderr" for standard error.
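A sketch of how captured output lines could be turned into entries of this shape. The helper name `make_log_entries` is hypothetical; the sequencing scheme shown (one shared counter per stream starting at a caller-supplied offset) is an assumption:

```python
from datetime import datetime, timezone

def make_log_entries(lines, stream, start_seq=0):
    """Build log entries in the documented format from captured lines."""
    entries = []
    for offset, line in enumerate(lines):
        # ISO-8601 UTC timestamp with millisecond precision and a Z suffix.
        ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
        entries.append({
            "timestamp": ts.replace("+00:00", "Z"),
            "message": line,
            "sequence_num": start_seq + offset,
            "stream": stream,                 # "stdout" or "stderr"
        })
    return entries
```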

Example Usage

Data Processing Pipeline
{
  "name": "Daily Data ETL",
  "kind": "CONTAINER",
  "interval": 1440,  // 24 hours
  "max_consecutive_job_failures_allowed": 5,
  "payload": {
    "timeout": "30m",
    "image": "myregistry/etl-processor:v2.1",
    "cmd": ["python", "etl.py", "--mode", "production"],
    "env": {
      "DATABASE_URL": "postgresql://db:5432/warehouse",
      "S3_BUCKET": "data-lake",
      "LOG_LEVEL": "INFO"
    }
  }
}
Containers are executed with auto-removal enabled. Ensure your container writes any persistent data to external volumes or services before exiting.

Build Status States

All workflows progress through build states during the workflow worker’s image preparation phase:
Status     Description
QUEUED     Workflow creation requested, pending build
STARTED    Image pull/validation in progress
COMPLETED  Ready for job execution
FAILED     Build failed (invalid image, network error)
CANCELED   Build was canceled by user
Only workflows with COMPLETED build status can execute jobs. Check the build status before scheduling manual jobs.
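A minimal sketch of the gate this implies. The transition map is an assumption inferred from the state descriptions above, not a documented state machine; only the COMPLETED-gates-execution rule comes directly from the text:

```python
# Assumed build-state transitions (inferred, not authoritative).
TRANSITIONS = {
    "QUEUED": {"STARTED", "CANCELED"},
    "STARTED": {"COMPLETED", "FAILED", "CANCELED"},
    "COMPLETED": set(),   # terminal: ready for job execution
    "FAILED": set(),      # terminal
    "CANCELED": set(),    # terminal
}

def can_execute_jobs(build_status):
    """Per the docs, only COMPLETED workflows may execute jobs."""
    return build_status == "COMPLETED"
```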

Failure Handling

Both workflow types support automatic failure tracking:
  • max_consecutive_job_failures_allowed: Minimum 3, maximum 100
  • When threshold is reached, the workflow is automatically terminated
  • Successful job execution resets the consecutive failure counter
  • Terminated workflows stop scheduling new jobs but retain historical data
Set max_consecutive_job_failures_allowed based on your workflow’s criticality and expected failure rate. For HEARTBEAT workflows monitoring critical services, use a lower threshold (3-5).
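The counting rules above can be sketched as a small tracker. `FailureTracker` is a hypothetical illustration of the documented behavior (3-100 bounds, reset on success, terminate at the threshold), not Chronoverse's internals:

```python
class FailureTracker:
    """Track consecutive job failures and terminate at the threshold."""

    def __init__(self, max_consecutive_failures):
        if not 3 <= max_consecutive_failures <= 100:
            raise ValueError("max_consecutive_job_failures_allowed must be 3-100")
        self.limit = max_consecutive_failures
        self.consecutive = 0
        self.terminated = False

    def record(self, status):
        """Record one job result; return True once the workflow is terminated."""
        if status == "SUCCESS":
            self.consecutive = 0              # success resets the counter
        else:
            self.consecutive += 1
            if self.consecutive >= self.limit:
                self.terminated = True        # stop scheduling new jobs
        return self.terminated
```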

Interval Configuration

Workflow intervals determine automatic job scheduling frequency:
  • Minimum: 1 minute
  • Maximum: 10,080 minutes (1 week)
  • Unit: Minutes (specified as integer)
Example Intervals
{ "interval": 5 }       // Every 5 minutes
{ "interval": 60 }      // Every hour
{ "interval": 1440 }    // Daily
{ "interval": 10080 }   // Weekly
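The documented bounds translate into a simple validation check. `validate_interval` is an illustrative helper, not part of the Chronoverse API:

```python
MIN_INTERVAL, MAX_INTERVAL = 1, 10_080   # minutes; the maximum is one week

def validate_interval(minutes):
    """Check a workflow interval against the documented bounds."""
    if not isinstance(minutes, int) or not MIN_INTERVAL <= minutes <= MAX_INTERVAL:
        raise ValueError(
            f"interval must be an integer in [{MIN_INTERVAL}, {MAX_INTERVAL}] minutes"
        )
    return minutes
```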

Next Steps

Job Scheduling

Learn about automatic and manual job triggers

Log Streaming

Real-time log access with Server-Sent Events
