Overview
Beacon can run as a web API service using beacon serve. This guide covers deploying Beacon with Docker, setting up Redis for rate limiting, and configuring production environments.
Quick Start
Pull and Run
docker run -p 8080:8080 \
-e GEMINI_API_KEY=your_key_here \
-e REDIS_URL=redis://redis:6379 \
ghcr.io/davidnzube101/beacon
Test the deployment:
curl http://localhost:8080/health
Dockerfile Breakdown
Beacon uses a multi-stage Docker build for optimal image size and security:
# ── Stage 1: Build ─────────────────────────────────────────────────────────
FROM rust:slim AS builder
WORKDIR /app
# Cache dependencies first
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
RUN rm -rf src
# Build the real binary
COPY src ./src
RUN touch src/main.rs && cargo build --release
# ── Stage 2: Runtime ───────────────────────────────────────────────────────
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/beacon /usr/local/bin/beacon
EXPOSE 8080
CMD ["beacon", "serve", "--port", "8080"]
Build Optimizations
Dependency Caching
The Dockerfile creates a dummy main.rs and builds dependencies first. This layer is cached and rebuilt only when Cargo.toml or Cargo.lock changes.
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
Multi-Stage Build
The final image uses debian:bookworm-slim instead of the full Rust image, reducing size from ~1.5GB to ~80MB.
Minimal Runtime Dependencies
Only ca-certificates is installed for HTTPS support when calling AI provider APIs.
Environment Variables
Required Variables
REDIS_URL — Redis connection string for rate limiting. Examples: redis://localhost:6379 or redis://:password@redis:6379/0. The Beacon API will fail to start without Redis configured.
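A minimal sketch of the fail-fast check this implies (not Beacon's exact startup code): read REDIS_URL at boot and abort with a clear error when it is missing, matching the "REDIS_URL must be set" message described under Troubleshooting.

```rust
use std::env;

/// Return the value of a required environment variable, or an error message
/// matching the startup failure described in this guide. A sketch only; the
/// real startup code may differ.
fn require_env(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("{name} must be set"))
}

fn main() {
    // With REDIS_URL unset, startup should fail with a clear message.
    match require_env("REDIS_URL") {
        Ok(url) => println!("connecting to {url}"),
        Err(msg) => eprintln!("startup error: {msg}"),
    }
}
```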
AI Provider Keys
At least one provider key should be set:
GEMINI_API_KEY — Google Gemini API key (recommended for default operations).
CLAUDE_API_KEY — Anthropic Claude API key.
OPENAI_API_KEY — OpenAI API key.
Beacon Cloud Variables
For supporting pay-per-run operations:
BEACON_CLOUD_URL
string
default: "https://beacon-latest.onrender.com"
Override the default Beacon Cloud endpoint.
Base network wallet address for receiving USDC payments.
Solana wallet address for receiving USDC payments.
USDC amount to charge per generation when using the beacon-ai-cloud provider.
Blockchain RPC URLs
BASE_RPC_URL
string
default: "https://mainnet.base.org"
RPC endpoint for Base network payment verification.
SOLANA_RPC_URL
string
default: "https://api.mainnet-beta.solana.com"
RPC endpoint for Solana network payment verification.
Database Configuration
Supabase project URL (if using Beacon Cloud features).
Supabase service role key for database access (required for Beacon Cloud payment tracking).
Docker Compose Setup
Complete Stack
Deploy Beacon with Redis using Docker Compose:
version: '3.8'

services:
  beacon:
    image: ghcr.io/davidnzube101/beacon:latest
    ports:
      - "8080:8080"
    environment:
      - REDIS_URL=redis://redis:6379
      - GEMINI_API_KEY=${GEMINI_API_KEY}
      - CLAUDE_API_KEY=${CLAUDE_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - redis
    restart: unless-stopped
    networks:
      - beacon-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    restart: unless-stopped
    networks:
      - beacon-network

networks:
  beacon-network:
    driver: bridge

volumes:
  redis-data:
Environment File
Create a .env file (never commit this!):
GEMINI_API_KEY=AIzaSy...
CLAUDE_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
Start the Stack
# Start services
docker-compose up -d
# View logs
docker-compose logs -f beacon
# Check health
curl http://localhost:8080/health
Redis Configuration
Beacon uses Redis for rate limiting to prevent abuse of the API.
Rate Limiting Implementation
From src/main.rs:36-202, Beacon implements sliding window rate limiting:
Window: 60 seconds
Max requests: 20 per IP address
Endpoints: /generate and /validate
const RATE_LIMIT_WINDOW_SECONDS: u64 = 60;
const RATE_LIMIT_MAX_REQUESTS: usize = 20;
How It Works
Extract client IP
Beacon identifies clients by IP address, with fallback to X-Forwarded-For header for proxied requests.
Sliding window with sorted sets
Uses Redis ZADD and ZREMRANGEBYSCORE to maintain a time-ordered set of requests.
redis::pipe()
    .atomic()
    .zrembyscore(&key, 0, (now - RATE_LIMIT_WINDOW_SECONDS) as f64)
    .zadd(&key, now, now)
    .zcard(&key)
    .expire(&key, RATE_LIMIT_WINDOW_SECONDS as i64)
    .query_async(&mut conn)
    .await
Reject if limit exceeded
Returns 429 Too Many Requests if the count exceeds 20.
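The steps above can be sketched in plain Rust, with an in-memory VecDeque standing in for the Redis sorted set: evict entries older than the window (ZREMRANGEBYSCORE), record the new request (ZADD), then count what remains (ZCARD). Beacon's production code keys this state by client IP in Redis rather than holding it in process memory; this sketch only illustrates the sliding-window logic.

```rust
use std::collections::VecDeque;

const RATE_LIMIT_WINDOW_SECONDS: u64 = 60;
const RATE_LIMIT_MAX_REQUESTS: usize = 20;

/// In-memory stand-in for one client's Redis sorted set of request timestamps.
struct SlidingWindow {
    hits: VecDeque<u64>, // request timestamps, oldest first
}

impl SlidingWindow {
    fn new() -> Self {
        Self { hits: VecDeque::new() }
    }

    /// Record a request at `now` (seconds) and return whether it is allowed.
    fn allow(&mut self, now: u64) -> bool {
        // Evict timestamps that have fallen outside the 60-second window.
        while let Some(&oldest) = self.hits.front() {
            if oldest + RATE_LIMIT_WINDOW_SECONDS <= now {
                self.hits.pop_front();
            } else {
                break;
            }
        }
        // Record this request, then check the count against the limit.
        self.hits.push_back(now);
        self.hits.len() <= RATE_LIMIT_MAX_REQUESTS
    }
}

fn main() {
    let mut limiter = SlidingWindow::new();
    // 20 requests within the same second are allowed ...
    assert!((0..20).all(|_| limiter.allow(100)));
    // ... the 21st gets rejected (HTTP 429 in the real service).
    assert!(!limiter.allow(100));
    // Once the window has slid past, requests are allowed again.
    assert!(limiter.allow(161));
    println!("ok");
}
```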
Redis Persistence
For production, configure Redis persistence:
redis:
  image: redis:7-alpine
  command: redis-server --appendonly yes --appendfsync everysec
  volumes:
    - redis-data:/data
This enables AOF (Append-Only File) persistence with 1-second sync interval, preventing rate limit data loss on restart.
Port Configuration
Beacon exposes HTTP on port 8080 by default. You can customize this:
Via CLI
beacon serve --port 3000
Via Docker
docker run -p 3000:3000 beacon beacon serve --port 3000
With Docker Compose
services:
  beacon:
    command: ["beacon", "serve", "--port", "3000"]
    ports:
      - "3000:3000"
Production Considerations
1. Reverse Proxy
Run Beacon behind Nginx or Traefik for:
HTTPS/TLS termination
Load balancing
Better logging
Request buffering
upstream beacon {
    server localhost:8080;
}

server {
    listen 443 ssl http2;
    server_name beacon.example.com;

    ssl_certificate /etc/letsencrypt/live/beacon.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/beacon.example.com/privkey.pem;

    location / {
        proxy_pass http://beacon;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
2. Health Checks
Configure Docker health checks:
beacon:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 40s
3. Resource Limits
Set memory and CPU limits to prevent resource exhaustion:
beacon:
  deploy:
    resources:
      limits:
        cpus: '1.0'
        memory: 512M
      reservations:
        cpus: '0.5'
        memory: 256M
4. Logging
Beacon uses tracing for structured logs. Configure log output:
# Set log level
RUST_LOG=info docker-compose up
# Debug mode
RUST_LOG=debug,beacon=trace docker-compose up
Forward logs to a logging service:
beacon:
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"
5. Security Headers
Add security headers via the reverse proxy:
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
API Endpoints
Once deployed, Beacon exposes:
GET /health
Health check endpoint:
curl http://localhost:8080/health
Response:
{
  "status": "ok",
  "version": "0.2.4",
  "name": "beacon"
}
POST /generate
Generate AGENTS.md from repository context:
curl -X POST http://localhost:8080/generate \
-H "Content-Type: application/json" \
-d '{
"name": "my-project",
"readme": "# My Project\n...",
"source_files": [{"path": "main.rs", "content": "..."}],
"provider": "gemini"
}'
Response:
{
  "success": true,
  "agents_md": "# AGENTS.md — my-project\n...",
  "manifest": {
    "name": "my-project",
    "capabilities": [...],
    "endpoints": [...]
  }
}
POST /validate
Validate AGENTS.md content:
curl -X POST http://localhost:8080/validate \
-H "Content-Type: application/json" \
-d '{
"content": "# AGENTS.md — beacon\n..."
}'
Monitoring
Prometheus Metrics
Beacon doesn’t expose Prometheus metrics by default, but you can add them via middleware or use Docker stats:
# Monitor container stats
docker stats beacon
# Redis monitoring
redis-cli INFO stats
Custom Metrics
Track API usage by parsing logs:
# Count requests per endpoint
docker logs beacon 2>&1 | grep "POST /generate" | wc -l
# Find rate-limited requests
docker logs beacon 2>&1 | grep "429"
Scaling
Horizontal Scaling
Run multiple Beacon instances behind a load balancer:
beacon:
  deploy:
    replicas: 3
  # ... rest of config
Redis handles rate limiting across all instances using the same key space.
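To sketch why this works: every replica derives the rate-limit key purely from the client IP, so all instances read and write the same sorted set in Redis regardless of which one serves the request. The ratelimit: prefix below is a hypothetical key format for illustration, not necessarily the one Beacon uses.

```rust
/// Derive a per-client rate-limit key. Hypothetical format — Beacon's actual
/// key prefix may differ; the point is that the key depends only on the
/// client, never on the serving instance.
fn rate_limit_key(client_ip: &str) -> String {
    format!("ratelimit:{client_ip}")
}

fn main() {
    // Two replicas handling the same client compute the same key, so they
    // share one sliding window in Redis.
    let on_replica_a = rate_limit_key("203.0.113.7");
    let on_replica_b = rate_limit_key("203.0.113.7");
    assert_eq!(on_replica_a, on_replica_b);
    println!("{on_replica_a}");
}
```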
Vertical Scaling
Increase resources for a single instance:
beacon:
  deploy:
    resources:
      limits:
        cpus: '2.0'
        memory: 2G
Troubleshooting
"REDIS_URL must be set"
Problem : Beacon can’t connect to Redis.
Solution :
# Check if Redis is running
docker ps | grep redis
# Test Redis connection
redis-cli -u redis://localhost:6379 PING
# Ensure REDIS_URL is set correctly
echo $REDIS_URL
Container Crashes on Startup
Check logs:
docker logs beacon
Common issues:
Missing required environment variables
Redis not reachable
Port already in use
High Memory Usage
Beacon processes large repository contexts. Monitor with:
docker stats beacon
Increase memory limits if needed.
Next Steps
Beacon Cloud — deploy without managing infrastructure.
Multiple Providers — configure AI provider keys.