## Overview

ByteStream enables:

- Streaming uploads of large files
- Resumable downloads
- Efficient data transfer for blobs too large for batch operations
- Chunked streaming (no size limits)
- Resumable uploads
- Read offsets and limits
## Configuration

- The instance name identifying this ByteStream endpoint
- A map of instance names to CAS store references
## gRPC Methods

### Read

Download a blob in chunks.

Request:

- `resource_name`: format `{instance_name}/blobs/{hash}/{size}` (example: `main/blobs/abc123.../1048576`)
- `read_offset`: byte offset to start reading from
- `read_limit`: maximum bytes to read (0 = all remaining)

Response (stream):

- `data`: chunk of blob data
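The read resource name above can be parsed with a small helper. This is an illustrative sketch, not NativeLink code; it splits from the right so that an instance name containing `/` still parses:

```python
def parse_read_resource(name: str) -> tuple[str, str, int]:
    """Split '{instance_name}/blobs/{hash}/{size}' into its parts.

    Splits from the right so an instance name containing '/' still works.
    """
    parts = name.split("/")
    i = len(parts) - 3  # index of the literal "blobs" segment
    if i < 0 or parts[i] != "blobs":
        raise ValueError(f"malformed read resource name: {name!r}")
    instance = "/".join(parts[:i])
    return instance, parts[i + 1], int(parts[i + 2])
```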
### Write

Upload a blob in chunks.

Request (stream):

- `resource_name`: format `{instance_name}/uploads/{uuid}/blobs/{hash}/{size}`; the UUID must be client-generated and unique per upload
- `write_offset`: byte offset of this chunk (must be sequential)
- `finish_write`: true on the final chunk
- `data`: chunk data

Response:

- `committed_size`: total bytes successfully written
Example (first chunk):
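The example body appears to have been lost here, so a hypothetical sketch follows. `WriteRequest` below is a stand-in dataclass for the generated `google.bytestream.WriteRequest` message, not NativeLink client code; the first yielded item is the first chunk (offset 0, `finish_write` false unless the whole blob fits in one chunk):

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class WriteRequest:
    # Illustrative stand-in for the google.bytestream.WriteRequest message.
    resource_name: str
    write_offset: int
    finish_write: bool
    data: bytes

def write_requests(resource_name: str, blob: bytes,
                   chunk_size: int = 256 * 1024) -> Iterator[WriteRequest]:
    """Yield sequential Write chunks; finish_write is set on the last one."""
    offset = 0
    while True:
        chunk = blob[offset:offset + chunk_size]
        last = offset + len(chunk) >= len(blob)
        yield WriteRequest(
            # resource_name is only required on the first request of the
            # stream, but repeating it on every chunk is also valid.
            resource_name=resource_name,
            write_offset=offset,
            finish_write=last,
            data=chunk,
        )
        offset += len(chunk)
        if last:
            break
```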
### QueryWriteStatus

Check the status of an incomplete upload.

Request:

- `resource_name`: the upload resource name

Response:

- `committed_size`: bytes successfully written so far
- `complete`: true if the upload is finished
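A typical use of `QueryWriteStatus` is resuming an interrupted upload from the server's `committed_size`. A minimal client-side sketch under that assumption (not NativeLink code):

```python
def remaining_chunks(blob: bytes, committed_size: int, complete: bool,
                     chunk_size: int = 256 * 1024):
    """Yield (offset, data, finish_write) tuples resuming at committed_size,
    as reported by QueryWriteStatus. Yields nothing if already complete."""
    if complete:
        return
    offset = committed_size
    while True:
        chunk = blob[offset:offset + chunk_size]
        last = offset + len(chunk) >= len(blob)
        yield (offset, chunk, last)
        offset += len(chunk)
        if last:
            break
```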
## Resource Name Format

ByteStream uses specific resource name formats:

- Read: `{instance_name}/blobs/{hash}/{size}`
- Write: `{instance_name}/uploads/{uuid}/blobs/{hash}/{size}`

## Compression Support

ByteStream supports compressed transfers. Compressor values:

- `identity`: no compression
- `zstd`: Zstandard compression
- `deflate`: DEFLATE compression
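The read and write resource name formats above can be assembled with small helpers. An illustrative sketch; the function names are not from NativeLink:

```python
import uuid

def read_resource(instance: str, digest_hash: str, size: int) -> str:
    """Build '{instance_name}/blobs/{hash}/{size}' for a Read."""
    return f"{instance}/blobs/{digest_hash}/{size}"

def write_resource(instance: str, digest_hash: str, size: int) -> str:
    """Build '{instance_name}/uploads/{uuid}/blobs/{hash}/{size}' for a Write.

    The UUID is client-generated and unique per upload.
    """
    return f"{instance}/uploads/{uuid.uuid4()}/blobs/{digest_hash}/{size}"
```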
## Chunk Size Recommendations

Recommended chunk sizes:

- Small files (<1MB): Use BatchUpdateBlobs instead
- Medium files (1-100MB): 256KB - 1MB chunks
- Large files (>100MB): 1-4MB chunks
## Error Codes

| Code | Description |
|---|---|
| NOT_FOUND | Blob or upload session not found |
| INVALID_ARGUMENT | Invalid resource name or offset |
| OUT_OF_RANGE | Read offset beyond blob size |
| FAILED_PRECONDITION | Non-sequential write offset |
| RESOURCE_EXHAUSTED | Disk full |
| DATA_LOSS | Hash mismatch on completed upload |
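One plausible client reaction per code, as a sketch of common ByteStream client behavior rather than NativeLink's prescribed handling:

```python
def client_action(code: str) -> str:
    """Suggested (not authoritative) client reaction for each error code."""
    actions = {
        "NOT_FOUND": "report the missing blob or restart the upload",
        "INVALID_ARGUMENT": "fix the resource name or offset; do not retry as-is",
        "OUT_OF_RANGE": "clamp read_offset to the blob size",
        "FAILED_PRECONDITION": "call QueryWriteStatus and resume at committed_size",
        "RESOURCE_EXHAUSTED": "back off and retry later",
        "DATA_LOSS": "discard the upload and re-send from scratch",
    }
    return actions.get(code, "treat as fatal")
```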
## Implementation Details

In `nativelink-service/src/bytestream_server.rs`, the ByteStream server wraps the underlying CAS stores and handles chunked streaming.

The ByteStream service follows the Google ByteStream API specification.