Overview
Workflows are the core abstraction in Chronoverse: they define what should be executed and how often. A workflow represents a scheduled task that runs at a specified interval, with built-in failure tracking and automatic termination.

Workflow Types
Chronoverse supports two workflow types, each designed for different use cases.

HEARTBEAT

A simple health-check workflow that verifies system availability.

Use Cases
- Service health monitoring
- Endpoint availability checks
- System liveness probes
- Network connectivity tests
Behavior
- Executes instantly with minimal overhead
- Returns success/failure status
- No payload configuration required
- Ideal for high-frequency monitoring (every 1-5 minutes)
Example Payload
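Since HEARTBEAT workflows require no payload configuration, an empty JSON object is sufficient:

```json
{}
```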
CONTAINER
Executes custom containerized applications and scripts in isolated Docker containers.

Use Cases
- Data processing pipelines
- ETL jobs
- Backup and maintenance tasks
- Custom business logic
- Scheduled reports generation
- Database cleanup operations
Behavior
- Pulls or builds Docker images
- Executes in isolated containers
- Captures stdout/stderr logs
- Supports environment variables and volumes
- Automatic cleanup after execution
Example Payload
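A minimal CONTAINER payload might look like the following. The field names (`image`, `cmd`, `args`) are illustrative; the Payload Configuration section below describes the documented options:

```json
{
  "image": "python:3.11-alpine",
  "cmd": ["python", "-c"],
  "args": ["print('ok')"]
}
```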
Workflow Properties
Each workflow has the following properties:

- Unique identifier (UUID) for the workflow
- ID of the user who owns the workflow
- Human-readable name for the workflow
- Workflow type: HEARTBEAT or CONTAINER
- JSON string containing the workflow configuration (varies by kind)
- Execution interval in minutes (minimum: 1)
- Maximum number of consecutive failures before auto-termination
- Current count of consecutive failures (read-only)
- Current build status for CONTAINER workflows: QUEUED (waiting to be processed), STARTED (build in progress), COMPLETED (ready for execution), FAILED (build failed), or CANCELED (build was canceled)
- Workflow creation time (RFC3339 format)
- Last update time (RFC3339 format)
- Termination time, set if the workflow is terminated (RFC3339 format)
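Taken together, a workflow object might look like the sketch below. Most field names here are illustrative guesses (only `consecutive_job_failures_count` and `terminated_at` appear verbatim elsewhere on this page); consult the API reference for the exact schema:

```json
{
  "id": "8f14e45f-ceea-467f-9575-4d9f1a2b3c4d",
  "user_id": "c9bf9e57-1685-4c89-bafb-ff5af830be8a",
  "name": "nightly-db-backup",
  "kind": "CONTAINER",
  "payload": "{\"image\": \"postgres:16-alpine\"}",
  "interval": 60,
  "max_consecutive_job_failures_allowed": 5,
  "consecutive_job_failures_count": 0,
  "build_status": "COMPLETED",
  "created_at": "2024-05-01T00:00:00Z",
  "updated_at": "2024-05-01T00:00:00Z",
  "terminated_at": null
}
```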
Workflow Lifecycle
1. Creation
When a workflow is created:

- User submits the workflow definition via the API
- System validates the payload and parameters
- Workflow is stored in PostgreSQL
- For CONTAINER workflows, the build status is set to QUEUED
- The Workflow Worker picks up CONTAINER workflows and prepares the execution environment

HEARTBEAT workflows are immediately ready for execution and don't require a build phase.
2. Building (CONTAINER only)
The Workflow Worker:

- Validates the Docker image and configuration
- Prepares execution templates
- Stores configuration in Redis for fast access
- Updates build status accordingly
Workflows with a FAILED or CANCELED build status won't be scheduled for execution.

3. Scheduling
The Scheduling Worker continuously:

- Polls PostgreSQL for workflows due for execution
- Calculates next execution time based on interval
- Creates job entries in the jobs table
- Publishes job events to Kafka's `workflows` topic
Scheduling Logic
4. Execution
When a job is scheduled:

- Workflow Worker (for CONTAINER) or Execution Worker (for HEARTBEAT) consumes the event
- Job status changes from PENDING → QUEUED → RUNNING
- Execution happens in an isolated environment
- Logs are captured and sent to Kafka's `job_logs` topic
- Job completes with a COMPLETED or FAILED status
5. Failure Tracking
After each job execution:

- On success: `consecutive_job_failures_count` is reset to 0
- On failure: `consecutive_job_failures_count` is incremented
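The failure-tracking rule, combined with auto-termination, can be sketched like this. The `consecutive_job_failures_count` and `terminated_at` fields come from this page; the threshold field name `max_consecutive_job_failures_allowed` is an assumption:

```python
from datetime import datetime, timezone

def track_result(workflow: dict, succeeded: bool) -> dict:
    """Reset the failure counter on success, increment it on failure,
    and terminate the workflow once the threshold is reached."""
    if succeeded:
        workflow["consecutive_job_failures_count"] = 0
        return workflow
    workflow["consecutive_job_failures_count"] += 1
    if (workflow["consecutive_job_failures_count"]
            >= workflow["max_consecutive_job_failures_allowed"]):
        # Auto-termination: record the time; the workflow is no longer scheduled
        workflow["terminated_at"] = datetime.now(timezone.utc).isoformat()
    return workflow
```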
6. Termination
Workflows can be terminated:

- Manually: user terminates via the API
- Automatically: after reaching the maximum number of consecutive failures
- Via deletion: deleting a workflow also terminates it

Terminated workflows:

- Stop being scheduled for execution
- Retain all historical data
- Can be identified by a non-null `terminated_at` timestamp
- Cannot be reactivated (create a new workflow instead)
Build Status Details
For CONTAINER workflows, the build status indicates readiness:

QUEUED
Workflow is waiting in the build queue. The Workflow Worker will pick it up soon.
STARTED
Workflow Worker is currently building the execution environment. This includes validating the Docker image and preparing configuration.
COMPLETED
Build successful. Workflow is ready for execution and will be scheduled according to its interval.
FAILED
Build failed due to invalid configuration, missing image, or other errors. Check logs for details. The workflow won’t be scheduled until fixed.
CANCELED
Build was canceled before completion. This can happen if the workflow is updated or deleted during build.
Payload Configuration
HEARTBEAT Payload
Heartbeat workflows have minimal configuration.

CONTAINER Payload
Container workflows support extensive configuration:

- Docker image to use (e.g., `python:3.11-alpine`, `node:20-alpine`, or custom registry images)
- Override for the image's default command (e.g., `["python", "-c"]`)
- Arguments to pass to the command
- Environment variables as key-value pairs
- Working directory inside the container
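Putting these options together, a fuller CONTAINER payload might look like the sketch below. The field names (`image`, `cmd`, `args`, `env`, `working_dir`) are illustrative assumptions; check the API reference for the exact schema:

```json
{
  "image": "python:3.11-alpine",
  "cmd": ["python", "-c"],
  "args": ["import os; print('report for', os.environ['REPORT_DATE'])"],
  "env": {
    "REPORT_DATE": "2024-05-01"
  },
  "working_dir": "/app"
}
```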
Best Practices
Interval Selection
High-Frequency Monitoring (1-5 minutes)
- Use HEARTBEAT workflows
- Monitor critical services
- Keep payloads simple
- Set reasonable failure thresholds (e.g., 3-5)
Regular Jobs (10-60 minutes)
- Suitable for most CONTAINER workflows
- Data processing, backups, reports
- Balance between timeliness and resource usage
- Set higher failure thresholds (e.g., 5-10)
Hourly/Daily Jobs (60+ minutes)
- Heavy processing tasks
- Large data operations
- Batch processing
- Set appropriate timeouts
Failure Handling
Set Appropriate Thresholds
Consider the nature of your workflow:
- Flaky external APIs: Higher threshold (5-10)
- Critical internal services: Lower threshold (2-3)
- Experimental workflows: Medium threshold (3-5)
Monitor Failure Counts
Regularly check `consecutive_job_failures_count` and investigate:

- Approaching the threshold? Address the underlying issues
- Auto-terminated? Fix the problem and recreate the workflow
Container Optimization
- Use Alpine Images: Smaller, faster to pull (e.g., `python:3.11-alpine`)
- Minimize Layers: Keep Docker images lean
- Cache Images: Frequently used images are cached by Docker
- Set Resource Limits: Prevent resource exhaustion
- Handle Signals: Implement graceful shutdown in your code
Next Steps
Create Workflow
Learn how to create your first workflow
Jobs
Understand job execution and monitoring
Workers
Learn how workers process workflows
API Reference
Complete workflow API documentation