The beacon serve command starts Beacon as a web API server, exposing HTTP endpoints for generating and validating AGENTS.md files programmatically.

Usage

beacon serve [OPTIONS]

Options

--port
number
default:"8080"
The port number on which to run the Beacon API server. Short form: -p.
Example:
beacon serve --port 3000

Prerequisites

Redis is required: the serve command needs a running Redis instance for rate limiting.
Set the Redis connection URL via environment variable:
export REDIS_URL="redis://localhost:6379"
beacon serve
If REDIS_URL is not set, the server will fail to start.

Rate Limiting

Beacon automatically enforces rate limits on the /generate and /validate endpoints:
  • Window: 60 seconds (1 minute)
  • Max requests: 20 requests per IP address
Rate limiting uses Redis sorted sets to track request timestamps per IP address. Response when rate limit exceeded:
HTTP 429 Too Many Requests
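The sliding-window check can be sketched in Python (an illustration only, not Beacon's actual implementation; the in-memory dict stands in for the Redis sorted-set commands ZREMRANGEBYSCORE, ZADD, and ZCARD):

```python
import time
from collections import defaultdict
from typing import Dict, List, Optional

WINDOW_SECONDS = 60
MAX_REQUESTS = 20

# Per-IP request timestamps; a stand-in for a Redis sorted set keyed by IP.
_requests: Dict[str, List[float]] = defaultdict(list)

def allow_request(ip: str, now: Optional[float] = None) -> bool:
    """Sliding-window rate limit: at most MAX_REQUESTS per WINDOW_SECONDS per IP."""
    now = time.time() if now is None else now
    window_start = now - WINDOW_SECONDS
    # Drop timestamps that fell out of the window (like ZREMRANGEBYSCORE).
    _requests[ip] = [t for t in _requests[ip] if t > window_start]
    if len(_requests[ip]) >= MAX_REQUESTS:
        return False  # caller should respond with HTTP 429
    _requests[ip].append(now)  # record this request (like ZADD, score = timestamp)
    return True
```

The sorted-set approach gives a true sliding window: each request's timestamp is stored individually, so the limit rolls forward continuously instead of resetting on fixed minute boundaries.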

API Endpoints

GET /health

Health check endpoint for monitoring server status. Response:
{
  "status": "ok",
  "version": "0.2.2",
  "name": "beacon"
}

POST /generate

Generate an AGENTS.md file from repository context. Request Body:
{
  "name": "my-repo",
  "source_files": [
    {
      "path": "src/main.rs",
      "language": "rust",
      "content": "fn main() { ... }"
    }
  ],
  "readme": "# My Project\n...",
  "package_manifest": "[package]\nname = \"my-repo\"...",
  "openapi_spec": null,
  "existing_agents_md": null,
  "provider": "gemini"
}
Request Headers (for beacon-ai-cloud provider):
X-Payment-Txn-Hash: 0x123...
X-Payment-Chain: base
X-Payment-Run-ID: run_abc123
Success Response:
{
  "success": true,
  "agents_md": "# AGENTS.md — my-repo\n...",
  "manifest": {
    "name": "my-repo",
    "version": "0.1.0",
    "capabilities": [...],
    "endpoints": [...],
    "auth_type": "none"
  }
}
Error Response:
{
  "success": false,
  "error": "Inference failed: ..."
}
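A minimal client call can be sketched with Python's standard library (the payload values and base URL are hypothetical; the field names follow the request schema above):

```python
import json
import urllib.request
from typing import Dict, Optional

def build_generate_request(base_url: str, provider: str = "gemini",
                           payment_headers: Optional[Dict[str, str]] = None
                           ) -> urllib.request.Request:
    """Build a POST /generate request matching the schema above."""
    payload = {
        "name": "my-repo",
        "source_files": [
            {"path": "src/main.rs", "language": "rust", "content": "fn main() {}"},
        ],
        "readme": "# My Project",
        "package_manifest": None,
        "openapi_spec": None,
        "existing_agents_md": None,
        "provider": provider,
    }
    headers = {"Content-Type": "application/json"}
    # X-Payment-* headers are only needed for the beacon-ai-cloud provider.
    if payment_headers:
        headers.update(payment_headers)
    return urllib.request.Request(
        f"{base_url}/generate",
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )
```

Sending the request is then `urllib.request.urlopen(build_generate_request("http://localhost:8080"))` against a running server.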

POST /validate

Validate AGENTS.md file content. Request Body:
{
  "content": "# AGENTS.md — my-repo\n...",
  "provider": "none"
}
Request Headers (for beacon-ai-cloud provider):
X-Payment-Txn-Hash: 0x123...
X-Payment-Chain: base
X-Payment-Run-ID: run_abc123
Success Response:
{
  "success": true,
  "valid": true,
  "errors": [],
  "warnings": [],
  "endpoint_results": []
}
Failed-validation Response (the request itself succeeds, so success stays true):
{
  "success": true,
  "valid": false,
  "errors": ["Missing top-level # heading"],
  "warnings": ["Missing generator footer"],
  "endpoint_results": []
}
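Note that success reflects whether the request was processed, while valid reflects the content itself, so a client should branch on both fields. A sketch, assuming the response shapes above:

```python
from typing import Any, Dict

def summarize_validation(resp: Dict[str, Any]) -> str:
    """Collapse a /validate response into a one-line summary."""
    if not resp.get("success", False):
        # The request itself failed (e.g. an inference error).
        return f"request failed: {resp.get('error', 'unknown error')}"
    if resp.get("valid", False):
        warnings = len(resp.get("warnings", []))
        return f"valid ({warnings} warning(s))"
    # Request succeeded but the content did not validate.
    return "invalid: " + "; ".join(resp.get("errors", []))
```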

Examples

Start server on default port (8080)

export REDIS_URL="redis://localhost:6379"
beacon serve
Output:
⬛ Beacon API
   http://0.0.0.0:8080
   POST /generate  — generate AGENTS.md from a repo path
   POST /validate  — validate an AGENTS.md file
   GET  /health    — health check

Start server on custom port

export REDIS_URL="redis://localhost:6379"
beacon serve --port 3000
Output:
⬛ Beacon API
   http://0.0.0.0:3000
   POST /generate  — generate AGENTS.md from a repo path
   POST /validate  — validate an AGENTS.md file
   GET  /health    — health check

Test the health endpoint

curl http://localhost:8080/health
Response:
{
  "status": "ok",
  "version": "0.2.2",
  "name": "beacon"
}

Test the validate endpoint

curl -X POST http://localhost:8080/validate \
  -H "Content-Type: application/json" \
  -d '{
    "content": "# AGENTS.md — test\n\n> A test repo\n\n## Capabilities\n\n### `test_capability`\n\nA test capability."
  }'
Response:
{
  "success": true,
  "valid": true,
  "errors": [],
  "warnings": ["Missing generator footer"],
  "endpoint_results": []
}

Test rate limiting

Make more than 20 requests within 60 seconds:
for i in {1..25}; do
  curl -X POST http://localhost:8080/validate \
    -H "Content-Type: application/json" \
    -d '{"content": "test"}'
done
Requests 21-25 will return:
HTTP 429 Too Many Requests
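When scripting against the API, a client can back off and retry on 429 instead of failing outright. A sketch (send_fn is a hypothetical stand-in for the actual HTTP call):

```python
import time
from typing import Callable, Tuple

def post_with_retry(send_fn: Callable[[], Tuple[int, str]],
                    max_attempts: int = 3,
                    base_delay: float = 1.0,
                    sleep: Callable[[float], None] = time.sleep) -> Tuple[int, str]:
    """Retry a request on HTTP 429 with exponential backoff.

    send_fn performs one request and returns (status_code, body); it stands in
    for e.g. a urllib or requests call against /generate or /validate.
    """
    status, body = send_fn()
    for attempt in range(max_attempts - 1):
        if status != 429:
            return status, body
        # Back off before retrying: base_delay, 2*base_delay, 4*base_delay, ...
        sleep(base_delay * (2 ** attempt))
        status, body = send_fn()
    return status, body
```

Since the window is 60 seconds and the cap is 20 requests, a base delay of a few seconds is usually enough for bursts just over the limit.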

Environment Variables

REDIS_URL
string
required
Redis connection URL for rate limiting.
Example:
export REDIS_URL="redis://localhost:6379"
GEMINI_API_KEY
string
API key for Google Gemini (default provider).
CLAUDE_API_KEY
string
API key for Anthropic Claude.
OPENAI_API_KEY
string
API key for OpenAI models.
BEACON_WALLET_BASE
string
Base wallet address for beacon-ai-cloud payments.
BEACON_WALLET_SOLANA
string
Solana wallet address for beacon-ai-cloud payments.
PAYMENT_AMOUNT_USDC
string
default:"0.09"
Payment amount in USDC for beacon-ai-cloud provider.

Docker Example

FROM rust:latest

WORKDIR /app
COPY . .
RUN cargo build --release

ENV REDIS_URL=redis://redis:6379
# Prefer passing GEMINI_API_KEY at runtime; keys baked in via ENV persist in the image
ENV GEMINI_API_KEY=your-api-key

EXPOSE 8080

CMD ["./target/release/beacon", "serve"]
docker-compose.yml:
version: '3.8'
services:
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  
  beacon:
    build: .
    ports:
      - "8080:8080"
    environment:
      - REDIS_URL=redis://redis:6379
      - GEMINI_API_KEY=${GEMINI_API_KEY}
    depends_on:
      - redis

Source Code Reference

The serve command implementation can be found in:
  • Command definition: /home/daytona/workspace/source/src/main.rs:70-73
  • Execution logic: /home/daytona/workspace/source/src/main.rs:448-472
  • Rate limiting middleware: /home/daytona/workspace/source/src/main.rs:138-202
  • Generate handler: /home/daytona/workspace/source/src/main.rs:204-297
  • Validate handler: /home/daytona/workspace/source/src/main.rs:299-370

Related Commands

  • generate - Generate AGENTS.md files locally
  • validate - Validate AGENTS.md files locally
