# Protocol Overview

Distributed Walrus exposes a simple, length-prefixed text protocol over TCP. Clients can connect to any node in the cluster, and the system handles routing automatically.

## Design Principles
- Text-based: Commands are UTF-8 strings (human-readable, easy to debug)
- Length-prefixed: Fixed 4-byte header prevents message ambiguity
- Stateless: Each command is independent (except read cursors are server-side)
- Synchronous: Request-response model, one command at a time per connection
The protocol is inspired by Redis RESP but simplified for streaming log workloads.
## Wire Format

### Request Format

Each request is a UTF-8 command string preceded by a fixed 4-byte length header. All requests follow this structure:

```
PUT logs hello
```
### Response Format

Responses use the same length-prefixed format:

- `OK` - Command succeeded
- `OK <payload>` - Command succeeded with data
- `EMPTY` - No data available (for GET)
- `ERR <message>` - Command failed with error message
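In Python, a request frame can be built like this (a sketch: the docs specify a 4-byte header but not its byte order, so big-endian is an assumption here):

```python
import struct

def frame(command: str) -> bytes:
    """Build a length-prefixed frame for one command.

    Assumption: the 4-byte length header is big-endian (network order);
    change ">I" to "<I" if the server uses little-endian framing.
    """
    body = command.encode("utf-8")
    return struct.pack(">I", len(body)) + body

# "PUT logs hello" is 14 bytes, so the header is 00 00 00 0E.
frame("PUT logs hello")
```

Responses are parsed the same way in reverse: read exactly 4 header bytes, then read exactly that many payload bytes.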
## Command Reference

### REGISTER

Create a topic if it doesn't already exist. Idempotent.

Syntax: `REGISTER <topic>`

- `topic`: Topic name (alphanumeric, no spaces)

Responses:

- `OK` - Topic created or already exists
- `ERR <message>` - Failed to create topic

How it works:

1. The node checks whether the topic exists in metadata
2. If missing, it proposes `CreateTopic` via Raft consensus
3. The initial leader is selected via consistent hashing
4. Metadata is replicated across all nodes
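The initial-leader choice can be illustrated with a toy picker (hypothetical: a plain hash-mod stand-in for illustration, not the ring-based consistent hashing the system actually uses):

```python
import hashlib

NODE_IDS = [1, 2, 3]  # hypothetical cluster membership

def initial_leader(topic: str) -> int:
    # Deterministic: every node computes the same leader for a given
    # topic name, with no coordination beyond shared membership.
    digest = hashlib.sha256(topic.encode("utf-8")).digest()
    return NODE_IDS[int.from_bytes(digest[:8], "big") % len(NODE_IDS)]
```

The key property is determinism: any node can answer "who leads this new topic?" without asking the others first.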
### PUT

Append data to a topic. Automatically routes to the current segment leader.

Syntax: `PUT <topic> <payload>`

- `topic`: Topic name
- `payload`: Data to append (the rest of the command after the topic; can contain spaces)

Responses:

- `OK` - Data appended successfully
- `ERR unknown topic` - Topic doesn't exist (use REGISTER first)
- `ERR NotLeaderForPartition` - Temporary leadership transfer error (retry)

How it works:

1. The client sends PUT to any node (e.g., Node 2)
2. Node 2 checks metadata: which node leads this topic's current segment?
3. If Node 2 is the leader, it writes directly to its local Walrus
4. If Node 1 is the leader, Node 2 forwards the request via internal RPC to Node 1
5. The leader checks its write lease and appends to Walrus
6. The leader tracks the entry count for rollover detection
7. The response flows back to the client
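The routing decision in steps 2-4 can be sketched as follows (all names and structures here are hypothetical illustrations, not the real API):

```python
# Hypothetical in-memory metadata: topic -> leader of the current
# segment. In the real system this is Raft-replicated cluster state.
METADATA = {"logs": {"current_segment": 3, "leader_node": 1}}

def route_put(local_node_id: int, topic: str) -> str:
    meta = METADATA.get(topic)
    if meta is None:
        return "ERR unknown topic"          # client must REGISTER first
    if meta["leader_node"] == local_node_id:
        return "local write"                # append to the local Walrus
    return f"forward to node {meta['leader_node']}"  # internal RPC

route_put(1, "logs")  # node 1 leads this segment: local write
route_put(2, "logs")  # node 2 forwards to node 1
```

Because any node can resolve the leader from metadata, clients never need to know the topology themselves.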
### GET

Read the next entry from a topic using a shared server-side cursor. The cursor automatically advances across sealed segments.

Syntax: `GET <topic>`

- `topic`: Topic name

Responses:

- `OK <data>` - Entry data (cursor advances)
- `EMPTY` - No data available yet (cursor doesn't advance)
- `ERR unknown topic` - Topic doesn't exist

Cursor advancement rules:

- Within a segment: increment `delivered_in_segment` on each successful read
- Sealed segment exhausted: if `delivered_in_segment >= sealed_count`, move to the next segment
- Active segment empty: return `EMPTY` (wait for more data)
The cursor is shared across all client connections. Multiple consumers reading from the same topic will receive different entries (round-robin style).
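The advancement rules can be modeled with a small sketch (the data structures are hypothetical; the real cursor lives server-side, and consecutive segment numbering is an assumption):

```python
def next_entry(cursor: dict, topic: dict) -> str:
    """Apply the cursor advancement rules described above."""
    sealed = topic["sealed_segments"]          # segment id -> entry count
    # Sealed segment exhausted: skip to the next segment.
    while cursor["segment"] in sealed and \
            cursor["delivered_in_segment"] >= sealed[cursor["segment"]]:
        cursor["segment"] += 1
        cursor["delivered_in_segment"] = 0
    entries = topic["entries"].get(cursor["segment"], [])
    if cursor["delivered_in_segment"] >= len(entries):
        return "EMPTY"                         # active segment drained
    entry = entries[cursor["delivered_in_segment"]]
    cursor["delivered_in_segment"] += 1        # advance within segment
    return f"OK {entry}"

# Segment 1 is sealed with 2 entries; segment 2 is active.
topic = {"sealed_segments": {1: 2}, "entries": {1: ["a", "b"], 2: ["c"]}}
cursor = {"segment": 1, "delivered_in_segment": 0}
```

Successive calls drain segment 1, roll over to segment 2, and finally return `EMPTY` once the active segment is exhausted.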
### STATE

Retrieve metadata for a topic, including segment layout and leadership.

Syntax: `STATE <topic>`

- `topic`: Topic name

Responses:

- JSON object with topic state
- `ERR unknown topic` - Topic doesn't exist

| Field | Type | Description |
|---|---|---|
| `current_segment` | u64 | Active segment accepting writes |
| `leader_node` | u64 | Node ID currently leading the active segment |
| `last_sealed_entry_offset` | u64 | Cumulative entries across all sealed segments |
| `sealed_segments` | map<u64, u64> | Segment ID → entry count for sealed segments |
| `segment_leaders` | map<u64, u64> | Segment ID → node ID (historical leaders) |
Use cases:

- Monitor segment distribution across nodes
- Debug routing issues
- Calculate total entries: `last_sealed_entry_offset` + (entries in current segment)
- Understand leadership history
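For example, the total-entries calculation against a hypothetical STATE payload (field names taken from the table above; the values are invented):

```python
import json

# Hypothetical STATE response for a topic with three sealed segments.
state = json.loads("""
{
  "current_segment": 4,
  "leader_node": 2,
  "last_sealed_entry_offset": 300,
  "sealed_segments": {"1": 100, "2": 120, "3": 80},
  "segment_leaders": {"1": 1, "2": 3, "3": 2, "4": 2}
}
""")

# Invariant: the offset equals the sum of sealed-segment entry counts.
assert sum(state["sealed_segments"].values()) == state["last_sealed_entry_offset"]

entries_in_current = 25  # hypothetical; not reported by STATE directly
total_entries = state["last_sealed_entry_offset"] + entries_in_current
```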
### METRICS

Retrieve Raft cluster metrics and health information.

Syntax: `METRICS`

Response: JSON object with Raft metrics

| Metric | Description |
|---|---|
| `current_leader` | Node ID of the Raft leader (or null if no leader) |
| `state` | Node's Raft role: Leader, Follower, or Candidate |
| `last_log_index` | Latest Raft log entry index |
| `last_applied` | Last index applied to the state machine |
| `voters` | List of voting node IDs |

A healthy cluster should show:

- `current_leader` is non-null
- `last_applied` closely trails `last_log_index`
- `voters` includes all expected nodes
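Those three checks translate directly into a monitoring helper (a sketch; the lag threshold is an arbitrary assumption, not a documented limit):

```python
def is_healthy(metrics: dict, expected_nodes: set, max_lag: int = 10) -> bool:
    """Apply the health checks listed above to a METRICS payload."""
    return (
        metrics.get("current_leader") is not None          # leader elected
        and metrics["last_log_index"] - metrics["last_applied"] <= max_lag
        and set(metrics["voters"]) >= expected_nodes       # full quorum set
    )

sample = {
    "current_leader": 1, "state": "Follower",
    "last_log_index": 100, "last_applied": 98, "voters": [1, 2, 3],
}
```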
## Connection Management

### Persistent Connections

Clients can reuse the same TCP connection for multiple commands:

- Avoid TCP handshake overhead
- Lower latency for burst operations
- Connection-pooling friendly
### Timeouts and Retries

Errors fall into two classes: network errors (connection-level failures) and application errors (`ERR <message>` responses).

For network errors:

- Connection refused: the node is down or the port is not open. Retry with a different node (e.g., 9092, 9093).
- Broken connection: reconnect and retry the command. Note that PUTs are NOT idempotent, so a retried write may be appended twice.
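A client-side retry policy matching the guidance above might look like this (a sketch: the node list, attempt count, and `send` transport function are assumptions):

```python
import itertools

NODES = [("127.0.0.1", 9091), ("127.0.0.1", 9092), ("127.0.0.1", 9093)]

def send_with_retries(send, command: str, attempts: int = 3) -> str:
    """Rotate through nodes on network errors, retry on temporary
    leadership transfers, and surface other application errors as-is.

    `send(node, command)` is a caller-supplied transport function.
    """
    last_exc = None
    for node in itertools.islice(itertools.cycle(NODES), attempts):
        try:
            resp = send(node, command)
        except OSError as exc:              # refused / reset: next node
            last_exc = exc
            continue
        if resp.startswith("ERR NotLeaderForPartition"):
            continue                        # leadership moved: retry
        return resp                         # OK / EMPTY / other ERR
    raise last_exc or RuntimeError("retries exhausted")
```

Note that this sketch will happily retry a PUT after a network error, which can duplicate entries; an application that cannot tolerate duplicates should not retry writes blindly.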
### Load Balancing

For high availability, use a load balancer to distribute client connections. For example, clients connect to `:9090`, and HAProxy distributes traffic across `:9091`-`:9093`.
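A hypothetical `haproxy.cfg` fragment for that topology (TCP mode, round-robin; node addresses are illustrative):

```
frontend walrus_front
    bind *:9090
    mode tcp
    default_backend walrus_nodes

backend walrus_nodes
    mode tcp
    balance roundrobin
    server node1 127.0.0.1:9091 check
    server node2 127.0.0.1:9092 check
    server node3 127.0.0.1:9093 check
```

TCP mode is required because the load balancer must pass the length-prefixed frames through untouched.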
## Client Libraries

### Official CLI

The Rust CLI client is the reference implementation: `distributed-walrus/src/bin/walrus-cli.rs` (see GitHub)

### Third-Party Clients

Community clients (contributions welcome!):

- Python: `pip install walrus-client` (planned)
- Go: `github.com/user/walrus-go` (planned)
- Node.js: `npm install walrus-js` (planned)
### Writing a Custom Client

A minimal client needs only a TCP socket and the length-prefixed framing described above.
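Such a client can be sketched in Python (hypothetical; assumes a big-endian 4-byte length prefix, so adjust the framing to match the server):

```python
import socket
import struct

class WalrusClient:
    """Minimal length-prefixed text-protocol client (illustrative sketch)."""

    def __init__(self, host: str = "127.0.0.1", port: int = 9091):
        self.sock = socket.create_connection((host, port))

    def _recv_exact(self, n: int) -> bytes:
        # TCP may deliver fewer bytes per recv than requested; loop.
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server closed connection")
            buf += chunk
        return buf

    def command(self, cmd: str) -> str:
        body = cmd.encode("utf-8")
        self.sock.sendall(struct.pack(">I", len(body)) + body)
        (length,) = struct.unpack(">I", self._recv_exact(4))
        return self._recv_exact(length).decode("utf-8")

    # Convenience wrappers over the commands documented above.
    def register(self, topic: str) -> str:
        return self.command(f"REGISTER {topic}")

    def put(self, topic: str, payload: str) -> str:
        return self.command(f"PUT {topic} {payload}")

    def get(self, topic: str) -> str:
        return self.command(f"GET {topic}")

    def close(self) -> None:
        self.sock.close()
```

Usage against a running node: `c = WalrusClient(port=9091)`, then `c.register("logs")`, `c.put("logs", "hello")`, and `c.get("logs")`.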
## Protocol Limitations

The protocol has no built-in encryption or authentication. For production deployments, consider:

- Running in a private network with firewall rules
- Using a VPN or SSH tunnel for encryption
- Implementing authentication at the load balancer layer
## Next Steps

- Segment Management: learn how segments roll over and how leases work
- Failure Recovery: handle errors and node failures