Overview
The III Engine is the high-performance Rust core that orchestrates Motia applications. It provides:
- Function invocation - Call handlers across workers
- Trigger management - Register and fire triggers
- Queue processing - Asynchronous event routing
- State management - Distributed key-value storage
- Stream synchronization - Real-time data streams
- WebSocket protocol - Communication between workers and engine
- Telemetry - OpenTelemetry-based observability
Architecture
Core Components
Engine Struct
The main engine orchestrator.
Reference: engine/src/engine/mod.rs:142-150
Engine Trait
Defines the core engine operations.
Reference: engine/src/engine/mod.rs:121-140
WebSocket Protocol
Workers communicate with the engine using a JSON-based WebSocket protocol.
Message Types
Reference: engine/src/protocol.rs:33-107
Binary Frames
The engine supports binary frames with magic prefixes for telemetry:
- OTLP prefix - Trace spans
- MTRC prefix - Metrics
- LOGS prefix - Log records
Reference: engine/src/engine/mod.rs:35-39
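The prefix dispatch above can be sketched in a few lines of std-only Rust. This is illustrative, not the engine's actual code: the magic prefixes come from the docs, but the enum and function names are assumptions.

```rust
// Classify an incoming binary frame by its 4-byte magic prefix.
// Prefixes (OTLP, MTRC, LOGS) are from the docs; the names below
// (TelemetryFrame, classify_frame) are hypothetical.

#[derive(Debug, PartialEq)]
enum TelemetryFrame<'a> {
    TraceSpans(&'a [u8]),
    Metrics(&'a [u8]),
    LogRecords(&'a [u8]),
}

fn classify_frame(frame: &[u8]) -> Option<TelemetryFrame<'_>> {
    if frame.len() < 4 {
        return None; // too short to carry a magic prefix
    }
    let (magic, payload) = frame.split_at(4);
    match magic {
        b"OTLP" => Some(TelemetryFrame::TraceSpans(payload)),
        b"MTRC" => Some(TelemetryFrame::Metrics(payload)),
        b"LOGS" => Some(TelemetryFrame::LogRecords(payload)),
        _ => None, // unknown magic: treat as a regular binary frame
    }
}

fn main() {
    let frame = b"OTLP\x0a\x0b";
    assert_eq!(
        classify_frame(frame),
        Some(TelemetryFrame::TraceSpans(&frame[4..]))
    );
    assert_eq!(classify_frame(b"????"), None);
}
```

Checking a fixed-width magic before touching the payload keeps telemetry frames cheap to route without JSON parsing.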
Connection Lifecycle
Reference:
engine/src/engine/mod.rs:645-799
Module System
The engine uses a modular architecture where each module provides specific functionality.
Module Trait
Available Modules
Queue Module
Manages event-driven messaging with adapters for Redis, RabbitMQ, etc.
Location: engine/src/modules/queue/
State Module
Provides distributed key-value storage with state change triggers.
Location: engine/src/modules/state/
Stream Module
Enables real-time data synchronization via WebSocket connections.
Location: engine/src/modules/stream/
HTTP Module
Exposes REST API endpoints and handles HTTP function invocations.
Location: engine/src/modules/rest_api/
Cron Module
Schedules and executes time-based triggers.
Location: engine/src/modules/cron/
Observability Module
Collects and exports traces, metrics, and logs.
Location: engine/src/modules/observability/
Reference: engine/src/lib.rs:20-38
Queue Module
Handles asynchronous event processing.
Queue Adapter Trait
Reference: engine/src/modules/queue/mod.rs:19-39
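As a rough sketch of the shape such a trait could take, here is a simplified adapter interface with an in-memory implementation. Every name and signature here is an assumption; see the referenced file for the real definition.

```rust
// Hypothetical, simplified queue adapter trait. The real trait lives in
// engine/src/modules/queue/mod.rs and will differ (async, typed errors).
use std::collections::{HashMap, VecDeque};

trait QueueAdapter {
    fn enqueue(&mut self, queue: &str, message: Vec<u8>) -> Result<(), String>;
    fn dequeue(&mut self, queue: &str) -> Result<Option<Vec<u8>>, String>;
}

// In-memory adapter for illustration: one FIFO VecDeque per queue name.
struct InMemoryAdapter {
    queues: HashMap<String, VecDeque<Vec<u8>>>,
}

impl QueueAdapter for InMemoryAdapter {
    fn enqueue(&mut self, queue: &str, message: Vec<u8>) -> Result<(), String> {
        self.queues
            .entry(queue.to_string())
            .or_default()
            .push_back(message);
        Ok(())
    }

    fn dequeue(&mut self, queue: &str) -> Result<Option<Vec<u8>>, String> {
        Ok(self.queues.get_mut(queue).and_then(|q| q.pop_front()))
    }
}

fn main() {
    let mut adapter = InMemoryAdapter { queues: HashMap::new() };
    adapter.enqueue("events", b"hello".to_vec()).unwrap();
    assert_eq!(adapter.dequeue("events").unwrap(), Some(b"hello".to_vec()));
    assert_eq!(adapter.dequeue("events").unwrap(), None); // queue drained
}
```

Keeping the adapter behind a trait is what lets the same queue module back onto Redis, RabbitMQ, or an in-memory store for tests.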
Redis Adapter
Default queue implementation using Redis:
- Standard queues: Redis lists for FIFO processing
- FIFO queues: Redis sorted sets with message group partitioning
- Dead-letter queues: Failed messages stored for retry
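The FIFO-with-message-groups semantics can be modeled without Redis: a map ordered by (group, sequence) stands in for a sorted set scored by sequence number, so each group drains in insertion order independently. This is a std-only sketch of the semantics, not the adapter's implementation.

```rust
// Model of FIFO queues with message group partitioning. A BTreeMap keyed
// by (group, sequence) plays the role of a Redis sorted set; the struct
// and method names are illustrative.
use std::collections::BTreeMap;

#[derive(Default)]
struct FifoQueue {
    next_seq: u64,
    items: BTreeMap<(String, u64), String>,
}

impl FifoQueue {
    fn push(&mut self, group: &str, payload: &str) {
        self.items
            .insert((group.to_string(), self.next_seq), payload.to_string());
        self.next_seq += 1;
    }

    // Pop the oldest message for one group; other groups are untouched.
    fn pop_group(&mut self, group: &str) -> Option<String> {
        // BTreeMap iterates in key order, so the first key matching the
        // group holds its lowest sequence number.
        let key = self.items.keys().find(|(g, _)| g == group).cloned()?;
        self.items.remove(&key)
    }
}

fn main() {
    let mut q = FifoQueue::default();
    q.push("user-1", "a");
    q.push("user-2", "x");
    q.push("user-1", "b");
    assert_eq!(q.pop_group("user-1"), Some("a".to_string()));
    assert_eq!(q.pop_group("user-1"), Some("b".to_string()));
    assert_eq!(q.pop_group("user-2"), Some("x".to_string()));
}
```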
State Module
Provides distributed state management.
State Operations
- set: Store or update a value
- get: Retrieve a value
- update: Apply JSON Patch operations
- delete: Remove a value
- list: List all values in a group
- clear: Clear all values in a group
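A minimal in-memory model makes the group/key shape of these operations concrete. The real module is distributed and fires triggers on changes; everything below (struct, method names) is illustrative, and `update` is omitted since JSON Patch needs a JSON library.

```rust
// In-memory model of the state module's set/get/delete/list/clear
// operations, keyed by group + key. Names are hypothetical.
use std::collections::HashMap;

#[derive(Default)]
struct StateStore {
    groups: HashMap<String, HashMap<String, String>>,
}

impl StateStore {
    fn set(&mut self, group: &str, key: &str, value: String) {
        self.groups
            .entry(group.to_string())
            .or_default()
            .insert(key.to_string(), value);
    }

    fn get(&self, group: &str, key: &str) -> Option<&String> {
        self.groups.get(group)?.get(key)
    }

    fn delete(&mut self, group: &str, key: &str) -> Option<String> {
        self.groups.get_mut(group)?.remove(key)
    }

    fn list(&self, group: &str) -> Vec<(&String, &String)> {
        self.groups
            .get(group)
            .map(|g| g.iter().collect())
            .unwrap_or_default()
    }

    fn clear(&mut self, group: &str) {
        self.groups.remove(group);
    }
}

fn main() {
    let mut s = StateStore::default();
    s.set("orders", "o1", "pending".into());
    assert_eq!(s.get("orders", "o1"), Some(&"pending".to_string()));
    assert_eq!(s.list("orders").len(), 1);
    s.clear("orders");
    assert!(s.list("orders").is_empty());
}
```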
State Triggers
State changes automatically fire triggers.
Reference: engine/src/modules/state/mod.rs
Stream Module
Enables real-time data synchronization.
Stream Operations
- set: Create or update stream item
- get: Retrieve stream item
- update: Apply JSON Patch operations
- delete: Remove stream item
- list: List items in a group
Stream Events
Stream operations emit events to subscribed clients.
WebSocket Synchronization
Clients subscribe to streams via WebSocket and receive real-time updates.
Reference: engine/src/modules/stream/mod.rs
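The subscribe-then-fan-out pattern can be sketched with channels standing in for WebSocket connections. This is a shape sketch only; the hub, method names, and event format are all assumptions.

```rust
// Sketch of stream event fan-out: clients subscribe to a stream name
// and receive every subsequent operation event. mpsc channels stand in
// for the engine's real WebSocket connections.
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

#[derive(Default)]
struct StreamHub {
    subscribers: HashMap<String, Vec<Sender<String>>>,
}

impl StreamHub {
    // Register a subscriber and hand back its receiving end.
    fn subscribe(&mut self, stream: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.subscribers.entry(stream.to_string()).or_default().push(tx);
        rx
    }

    // A `set` on a stream item emits an event to every subscriber.
    fn set(&self, stream: &str, item_id: &str, value: &str) {
        if let Some(subs) = self.subscribers.get(stream) {
            for tx in subs {
                let _ = tx.send(format!("set {stream}/{item_id} = {value}"));
            }
        }
    }
}

fn main() {
    let mut hub = StreamHub::default();
    let rx = hub.subscribe("chat");
    hub.set("chat", "m1", "hello");
    assert_eq!(rx.recv().unwrap(), "set chat/m1 = hello");
}
```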
Invocation Handler
Manages function invocations with distributed tracing.
Invocation Flow
Reference:
engine/src/engine/mod.rs:177-220
Telemetry System
The engine includes comprehensive observability via OpenTelemetry.
Trace Propagation
Traces follow the W3C Trace Context specification.
Reference: engine/src/telemetry.rs
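The W3C format propagates context in a `traceparent` header of the form `version-traceid-parentid-flags` (2, 32, 16, and 2 hex characters). A minimal std-only parser shows the format; the engine itself relies on OpenTelemetry for this, so the function below is purely illustrative.

```rust
// Parse a W3C Trace Context `traceparent` header. Field widths are
// fixed by the spec: 2-char version, 32-char trace ID, 16-char parent
// span ID, 2-char flags, all lowercase hex, joined by '-'.
fn parse_traceparent(header: &str) -> Option<(String, String, u8)> {
    let mut parts = header.split('-');
    let version = parts.next()?;
    let trace_id = parts.next()?;
    let parent_id = parts.next()?;
    let flags = parts.next()?;

    if version.len() != 2 || trace_id.len() != 32 || parent_id.len() != 16 || flags.len() != 2 {
        return None;
    }
    let all_hex = |s: &str| s.chars().all(|c| c.is_ascii_hexdigit());
    if !(all_hex(version) && all_hex(trace_id) && all_hex(parent_id) && all_hex(flags)) {
        return None;
    }

    let flags = u8::from_str_radix(flags, 16).ok()?;
    Some((trace_id.to_string(), parent_id.to_string(), flags))
}

fn main() {
    let hdr = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";
    let (trace_id, parent_id, flags) = parse_traceparent(hdr).unwrap();
    assert_eq!(trace_id, "4bf92f3577b34da6a3ce929d0e0e4736");
    assert_eq!(parent_id, "00f067aa0ba902b7");
    assert_eq!(flags & 0x01, 0x01); // sampled flag is set
    assert!(parse_traceparent("bad-header").is_none());
}
```

Propagating this header across worker boundaries is what stitches an invocation's spans into one distributed trace.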
Span Creation
Reference: engine/src/engine/mod.rs:315-324
Metrics Collection
The engine tracks:
- Function registrations
- Trigger registrations
- Invocation counts
- Error rates
- Latencies
Worker Registry
Tracks connected workers and their capabilities.
Worker Struct
Worker Lifecycle
- Connection: Worker connects, receives ID
- Registration: Worker registers functions and triggers
- Ready: Worker processes invocations
- Disconnection: Worker disconnects, cleanup occurs
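The lifecycle above reads naturally as a small state machine; the enum and transition function here are illustrative, not types from the engine.

```rust
// Worker lifecycle as a state machine. States mirror the docs;
// the names and the `advance` function are hypothetical.
#[derive(Debug, PartialEq, Clone, Copy)]
enum WorkerState {
    Connected,    // worker connected, received an ID
    Registered,   // functions and triggers registered
    Ready,        // processing invocations
    Disconnected, // worker gone, cleanup performed
}

fn advance(state: WorkerState) -> WorkerState {
    match state {
        WorkerState::Connected => WorkerState::Registered,
        WorkerState::Registered => WorkerState::Ready,
        // A ready worker eventually disconnects; disconnection is terminal.
        WorkerState::Ready | WorkerState::Disconnected => WorkerState::Disconnected,
    }
}

fn main() {
    let mut s = WorkerState::Connected;
    s = advance(s);
    assert_eq!(s, WorkerState::Registered);
    s = advance(s);
    assert_eq!(s, WorkerState::Ready);
    s = advance(s);
    assert_eq!(s, WorkerState::Disconnected);
}
```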
Function Registry
Stores registered functions and their handlers.
Trigger Registry
Manages trigger registrations and firing.
Fire Triggers
When a trigger fires, the engine invokes all matching functions.
Reference: engine/src/engine/mod.rs:616-643
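The fan-out step can be reduced to a map from trigger name to registered function IDs. Real dispatch routes each invocation to a worker; this std-only sketch only shows the matching, and all names are assumptions.

```rust
// Minimal sketch of trigger fan-out: register functions against a
// trigger name, then firing returns every match. Names are illustrative.
use std::collections::HashMap;

#[derive(Default)]
struct TriggerRegistry {
    by_trigger: HashMap<String, Vec<String>>, // trigger -> function IDs
}

impl TriggerRegistry {
    fn register(&mut self, trigger: &str, function_id: &str) {
        self.by_trigger
            .entry(trigger.to_string())
            .or_default()
            .push(function_id.to_string());
    }

    // All functions to invoke when this trigger fires.
    fn fire(&self, trigger: &str) -> Vec<String> {
        self.by_trigger.get(trigger).cloned().unwrap_or_default()
    }
}

fn main() {
    let mut r = TriggerRegistry::default();
    r.register("order.created", "send_email");
    r.register("order.created", "update_inventory");
    assert_eq!(
        r.fire("order.created"),
        vec!["send_email".to_string(), "update_inventory".to_string()]
    );
    assert!(r.fire("unknown").is_empty());
}
```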
Performance Characteristics
WebSocket Overhead
- Connection pooling reduces overhead
- Binary frames for telemetry reduce serialization cost
- Async I/O prevents blocking
Invocation Latency
- Local invocations: ~1-5ms
- Cross-worker invocations: ~5-20ms (depends on network)
- Queue processing: ~10-50ms (depends on adapter)
Throughput
- 10,000+ invocations/second per engine
- Horizontally scalable with multiple engines
- Queue adapters support millions of messages/day
Resource Usage
- Engine: ~50-100MB RAM baseline
- Per worker: ~20-50MB RAM
- CPU: Scales with workload
Best Practices
Deploy engine separately
Run the engine as a separate process from workers for isolation and scalability.
Use connection pooling
Reuse WebSocket connections across invocations to reduce overhead.
Enable compression
Use WebSocket compression for large payloads.
Monitor telemetry
Track invocation latency, error rates, and queue depths.
Configure adapters
Choose appropriate queue/state/stream adapters for your workload (Redis, RabbitMQ, PostgreSQL).
Next Steps
- Self-Hosting - Deploy your own III Engine
- Engine Modules - Configure queue, state, and stream adapters
- Observability - Monitor engine performance
- Configuration - Configure resource limits