
Overview

Beacon supports multiple AI providers for generating AGENTS.md files. Each provider offers different tradeoffs in cost, quality, and speed. This guide helps you choose the right provider and manage API keys securely.

Supported Providers

Gemini 2.5 Flash

Default provider
  • Fast and cost-effective
  • Native JSON output support
  • Best for most use cases

Claude Sonnet 4.5

  • Highest quality analysis
  • Better at complex codebases
  • More expensive per run

OpenAI GPT-4o

  • Balanced quality and speed
  • Structured JSON responses
  • Wide model availability

Beacon Cloud

  • No API key required
  • Pay-per-run with USDC
  • $0.09 per generation

Provider Configuration

Setting API Keys

1. Choose your provider

Decide which AI provider you want to use based on your needs:
  • Gemini: Best default choice (free tier available)
  • Claude: When you need the highest quality
  • OpenAI: If you already have OpenAI credits
  • Beacon Cloud: No API key management needed

2. Set environment variables

Export the appropriate API key for your chosen provider:
export GEMINI_API_KEY="your_gemini_key_here"
beacon generate ./my-project

3. Or use command-line flags

Pass API keys directly via the --api-key flag:
beacon generate ./my-project --provider claude --api-key sk-ant-...
Passing keys via command line may expose them in shell history. Use environment variables for better security.

Key Resolution Priority

Beacon resolves API keys in the following order (from src/inferrer.rs:243):
  1. CLI flag: --api-key argument
  2. Environment variable: GEMINI_API_KEY, CLAUDE_API_KEY, or OPENAI_API_KEY
  3. Error: If neither is found, Beacon will fail with a helpful message
fn resolve_key(cli_key: Option<&str>, env_var: &str, provider: &str) -> Result<String> {
    if let Some(key) = cli_key {
        return Ok(key.to_string());
    }
    std::env::var(env_var).map_err(|_| anyhow::anyhow!(
        "No API key for {}. Pass --api-key or set {} in your environment.",
        provider, env_var
    ))
}
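The same resolution order can be exercised directly. Below is a minimal, self-contained sketch that uses a plain `Result<String, String>` in place of anyhow so it compiles standalone; the behavior mirrors the function above:

```rust
use std::env;

// Simplified resolve_key: the CLI flag wins, then the environment
// variable, otherwise a helpful error message.
fn resolve_key(cli_key: Option<&str>, env_var: &str, provider: &str) -> Result<String, String> {
    if let Some(key) = cli_key {
        return Ok(key.to_string());
    }
    env::var(env_var).map_err(|_| {
        format!(
            "No API key for {}. Pass --api-key or set {} in your environment.",
            provider, env_var
        )
    })
}

fn main() {
    env::set_var("GEMINI_API_KEY", "env-key");

    // 1. The CLI flag takes priority over the environment.
    assert_eq!(
        resolve_key(Some("cli-key"), "GEMINI_API_KEY", "gemini").unwrap(),
        "cli-key"
    );

    // 2. The environment variable is the fallback.
    assert_eq!(
        resolve_key(None, "GEMINI_API_KEY", "gemini").unwrap(),
        "env-key"
    );

    // 3. If neither is set, a descriptive error is returned.
    env::remove_var("GEMINI_API_KEY");
    assert!(resolve_key(None, "GEMINI_API_KEY", "gemini").is_err());
}
```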

Provider Comparison

Cost Analysis

| Provider | Model | Approx. Cost per Run | Free Tier |
| --- | --- | --- | --- |
| Gemini | 2.5 Flash | ~$0.002 | ✅ 1500/day |
| Claude | Sonnet 4.5 | ~$0.015 | |
| OpenAI | GPT-4o | ~$0.008 | |
| Beacon Cloud | Gemini via x402 | $0.09 USDC | N/A |

Quality & Speed

Gemini 2.5 Flash

Best for: General use, rapid iteration

Pros:
  • Native JSON response format (no parsing issues)
  • Very fast (under 5 seconds typical)
  • Generous free tier
  • Good at identifying REST endpoints
Cons:
  • May miss nuanced capability descriptions
  • Less sophisticated for complex architectures

Switching Between Providers

You can easily test different providers to find the best fit:
# Generate with Gemini (default)
beacon generate ./my-project -o gemini-output.md

# Try Claude for comparison
beacon generate ./my-project -o claude-output.md --provider claude

# Compare the outputs
diff gemini-output.md claude-output.md
For large or complex repositories, try Claude first. For smaller projects or rapid iteration, Gemini is usually sufficient.

Secure API Key Management

Local Development

Use a .env file (never commit this!):
.env
GEMINI_API_KEY=AIzaSy...
CLAUDE_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
Beacon automatically loads .env files via dotenvy (see src/main.rs:375).
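For illustration, here is a simplified, stdlib-only sketch of what a dotenv loader does: parse KEY=VALUE lines, skip comments and blanks, and never override variables already set in the real environment. dotenvy handles more edge cases (quoting, escaping, multiline values), so treat this as a mental model only:

```rust
use std::env;

// Simplified dotenv-style loader. Real environment values win over
// values from the .env contents.
fn load_env_str(contents: &str) {
    for line in contents.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            let (key, value) = (key.trim(), value.trim().trim_matches('"'));
            if env::var(key).is_err() {
                env::set_var(key, value);
            }
        }
    }
}

fn main() {
    env::set_var("OPENAI_API_KEY", "from-real-env");
    load_env_str("# comment\nGEMINI_API_KEY=AIzaSy-example\nOPENAI_API_KEY=from-dotenv\n");

    // New keys are loaded from the file contents...
    assert_eq!(env::var("GEMINI_API_KEY").unwrap(), "AIzaSy-example");
    // ...but existing environment values are not overridden.
    assert_eq!(env::var("OPENAI_API_KEY").unwrap(), "from-real-env");
}
```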

CI/CD Environments

Store API keys as encrypted secrets:
name: Generate AGENTS.md
on: [push]

jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Beacon
        run: curl -fsSL https://raw.githubusercontent.com/DavidNzube101/beacon/master/install.sh | sh
      - name: Generate
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: beacon generate .

Production Deployment

For the Beacon API server, set keys as environment variables:
# Docker
docker run -e GEMINI_API_KEY=xyz -e CLAUDE_API_KEY=abc beacon serve

# Docker Compose
environment:
  - GEMINI_API_KEY=${GEMINI_API_KEY}
  - CLAUDE_API_KEY=${CLAUDE_API_KEY}
Never hardcode API keys in source code or commit them to version control. Use secret management tools or environment variables.
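If your deployment logs which key it loaded at startup, log only a redacted form. This `redact` helper is a hypothetical example (not part of Beacon) showing one safe pattern:

```rust
// Hypothetical helper: show only a short prefix of an API key so logs
// can confirm which key was loaded without exposing the secret.
fn redact(key: &str) -> String {
    if key.len() < 8 {
        "***".to_string()
    } else {
        format!("{}***", &key[..4])
    }
}

fn main() {
    let key = "sk-ant-abcdef1234567890";
    println!("Loaded Claude key: {}", redact(key)); // prints "Loaded Claude key: sk-a***"
}
```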

Provider-Specific Configuration

Gemini Settings

Beacon uses these parameters for Gemini (src/inferrer.rs:64-72):
{
  "generationConfig": {
    "temperature": 0.2,
    "responseMimeType": "application/json"
  }
}
  • Temperature: Low (0.2) for consistent, deterministic output
  • Response format: Native JSON mode prevents parsing errors

Claude Settings

Claude configuration (src/inferrer.rs:91-102):
{
  "model": "claude-sonnet-4.5",
  "max_tokens": 4096,
  "messages": [
    {
      "role": "user",
      "content": "<prompt>"
    }
  ]
}

OpenAI Settings

OpenAI configuration (src/inferrer.rs:123-136):
{
  "model": "gpt-4o",
  "temperature": 0.2,
  "response_format": { "type": "json_object" },
  "messages": [
    {
      "role": "system",
      "content": "You are an expert at analyzing software repositories. Always respond with valid JSON only."
    },
    {
      "role": "user",
      "content": "<prompt>"
    }
  ]
}

Troubleshooting

“No API key” Error

No API key for gemini. Pass --api-key or set GEMINI_API_KEY in your environment.
Solution: Export the required environment variable:
export GEMINI_API_KEY="your_key_here"

“Unknown provider” Error

Unknown provider 'gpt4'. Valid options: gemini, claude, openai, beacon-ai-cloud
Solution: Use the exact provider name:
beacon generate . --provider openai  # not 'gpt4'
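The validation behind this error amounts to a strict string match on the provider name. A sketch of that pattern (Beacon's actual implementation may differ):

```rust
#[derive(Debug, PartialEq)]
enum Provider {
    Gemini,
    Claude,
    OpenAi,
    BeaconCloud,
}

// Only exact provider names are accepted; anything else produces the
// "Unknown provider" error shown above.
fn parse_provider(name: &str) -> Result<Provider, String> {
    match name {
        "gemini" => Ok(Provider::Gemini),
        "claude" => Ok(Provider::Claude),
        "openai" => Ok(Provider::OpenAi),
        "beacon-ai-cloud" => Ok(Provider::BeaconCloud),
        other => Err(format!(
            "Unknown provider '{}'. Valid options: gemini, claude, openai, beacon-ai-cloud",
            other
        )),
    }
}

fn main() {
    assert_eq!(parse_provider("openai").unwrap(), Provider::OpenAi);
    assert!(parse_provider("gpt4").is_err());
}
```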

Rate Limiting

If you hit provider rate limits, try:
  1. Wait a few minutes before retrying
  2. Switch to a different provider temporarily
  3. Upgrade your API plan
  4. Use Beacon Cloud (different rate limits)
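If you want retries to happen automatically, a generic exponential-backoff wrapper (illustrative, not Beacon's code) looks like this:

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible operation, doubling the delay after each failure,
// until it succeeds or the attempt budget runs out.
fn retry_with_backoff<T, E>(
    mut attempts: u32,
    mut delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempts <= 1 => return Err(e),
            Err(_) => {
                attempts -= 1;
                sleep(delay);
                delay *= 2; // exponential backoff
            }
        }
    }
}

fn main() {
    // Simulate a rate-limited call that succeeds on the third try.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("429 rate limited") } else { Ok("generated") }
    });
    assert_eq!(result.unwrap(), "generated");
    assert_eq!(calls, 3);
}
```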

Next Steps

Beacon Cloud

Use Beacon without managing API keys

Docker Deployment

Deploy Beacon as an API service
