
Overview

Metaculus supports automated forecasting bots: algorithmic forecasters that submit predictions programmatically. Bots play an important role in the platform’s forecasting ecosystem and in benchmarking machine against human performance.

What are Forecasting Bots?

Bots are User accounts with is_bot=True that:
  • Submit forecasts programmatically via API
  • Run automated forecasting algorithms
  • Don’t require human intervention
  • Can be owned by individual users or Metaculus
Common Bot Types:
  • Baseline models: Simple statistical forecasts
  • Machine learning models: Trained on historical data
  • News-driven models: React to information streams
  • Ensemble models: Combine multiple approaches
  • Research bots: Experimental forecasting algorithms

Bot Ownership Model

Personal Bots

Users can create and manage their own bots:
Requirements:
  • Active Metaculus account
  • API access token
  • Forecasting algorithm implementation
Limits:
  • Standard users: Up to 5 bots per account
  • First bot automatically marked as “primary bot”
  • Superusers: Unlimited bots
Creating a Bot:
from users.services.bots_management import create_bot

bot = create_bot(
    bot_owner=user,
    username="my_forecasting_bot",
    # Optional: additional user fields
)
Bot Properties:
  • is_bot = True
  • bot_owner = Your user account
  • is_primary_bot = True for first bot, False for others
  • Inherits language and theme preferences from owner

Primary vs Non-Primary Bots

Primary Bot:
  • Your main bot account
  • Automatically set for your first bot
  • Included in some aggregations
  • Full leaderboard participation
Non-Primary Bots:
  • Secondary/experimental bots
  • May be excluded from certain leaderboards
  • Useful for testing algorithms
  • Can be promoted to primary if needed

Bot Integration in Aggregates

The include_bots_in_aggregates Flag

Each question has an include_bots_in_aggregates boolean field that controls whether bot forecasts are included in the community aggregate.
When include_bots_in_aggregates = True:
  • Bot forecasts included in aggregation calculations
  • Geometric mean computed over humans + bots
  • Affects Recency Weighted, Unweighted, etc.
  • Bot predictions visible in community forecast
When include_bots_in_aggregates = False (default):
  • Bot forecasts excluded from aggregation
  • Only human forecasters contribute
  • Bots can still forecast (for scoring/research)
  • Bot predictions tracked separately
Setting the Flag: Question authors can enable bot inclusion:
{
  "include_bots_in_aggregates": true
}
Use Cases for Enabling:
  • Bot benchmarking questions
  • Hybrid human-AI forecasting
  • Testing AI contribution to wisdom of crowds
  • Questions where bot perspective adds value

Impact on Scoring

Bot inclusion affects score calculations.
Peer Score:
  • If bots included: Your forecast compared to human+bot geometric mean
  • If bots excluded: Your forecast compared to human-only geometric mean
  • Bots themselves always scored against appropriate baseline
Baseline Score:
  • Unaffected by include_bots_in_aggregates
  • Always scored against statistical baseline
  • Same calculation for humans and bots
Example:
# In score_math.py
base_forecasts = user_forecasts.exclude_non_primary_bots()

if not question.include_bots_in_aggregates:
    base_forecasts = base_forecasts.exclude(author__is_bot=True)

geometric_means = get_geometric_means(base_forecasts)
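As a worked illustration of the aggregation the snippet above feeds into, here is a minimal sketch of an unweighted geometric-mean aggregate for binary forecasts. The function name is hypothetical and the real logic (including recency weighting) lives in utils/the_math/aggregations.py; this only shows how including a bot forecast can shift the aggregate:

```python
import math

def geometric_mean_probability(probs):
    """Geometric mean of the 'yes' probabilities, renormalized against
    the geometric mean of the 'no' probabilities so the result is
    again a valid probability."""
    gm_yes = math.exp(sum(math.log(p) for p in probs) / len(probs))
    gm_no = math.exp(sum(math.log(1 - p) for p in probs) / len(probs))
    return gm_yes / (gm_yes + gm_no)

# Including a confident bot forecast shifts the aggregate upward:
humans = [0.6, 0.7, 0.65]
bots = [0.9]
human_only = geometric_mean_probability(humans)          # ~0.65
with_bots = geometric_mean_probability(humans + bots)    # higher
```

This is why the include_bots_in_aggregates flag matters: flipping it changes which forecasts enter this computation, and therefore the community forecast shown on the question page.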

Bot Leaderboards

Metaculus maintains separate leaderboards for bots:

Global Bot Leaderboard

Purpose:
  • Rank bot forecasting accuracy
  • Compare bot algorithms
  • Benchmark machine vs human performance
Score Type:
  • Uses Baseline Score (not Peer Score)
  • Avoids bots gaming each other
  • Fair comparison across all questions
Eligibility:
  • Must be marked as bot (is_bot=True)
  • Primary bots included by default
  • Non-primary bots may be excluded
Updates: Run via management command:
python manage.py update_global_bot_leaderboard

Tournament Bot Status

Tournaments can configure bot participation:
Project-Level Setting:
class BotLeaderboardStatus(models.TextChoices):
    EXCLUDE_ALL = "exclude_all"  # No bots on leaderboard
    INCLUDE_PRIMARY = "include_primary"  # Only primary bots
    INCLUDE_ALL = "include_all"  # All bots
Inheritance:
  1. Leaderboard’s bot_status field (if set)
  2. Project’s bot_leaderboard_status (if leaderboard field not set)
  3. Default exclusion policy
Use Cases:
  • EXCLUDE_ALL: Human-only competition
  • INCLUDE_PRIMARY: Allow main bot entries
  • INCLUDE_ALL: Open to all bots, research competitions

Viewing Bot Forecasts

On Question Pages

When include_bots_in_aggregates = True:
  • Bot forecasts appear in aggregate chart
  • Forecaster count includes bots
  • “Includes bots” indicator shown
  • Can toggle bot visibility in some views

In Aggregation Explorer

Bot Toggle: Some aggregation methods support bot filtering:
  1. Select aggregation method (e.g., Recency Weighted)
  2. Toggle Include Bots checkbox
  3. Compare human-only vs human+bot aggregates
  4. Useful for understanding bot impact
Availability:
  • Recency Weighted: ✓ Supports bot toggle
  • Unweighted: ✓ Supports bot toggle
  • Single Aggregation: ✓ Supports bot toggle
  • Metaculus Pros: ✗ No bot toggle (human-only by definition)
  • Medalists: ✗ No bot toggle (medals are human-only)

Download Question Data

Export options support an include_bots parameter that accepts one of three values:
"include_bots": "default"   // Use question's setting
"include_bots": "true"      // Force include bots
"include_bots": "false"     // Force exclude bots
CSV exports include is_bot column to identify bot forecasts.
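A hedged sketch of requesting an export with this parameter. Only the parameter values come from the table above; the endpoint path here is an assumption for illustration:

```python
import requests

BASE_URL = "https://www.metaculus.com/api"

def download_question_data(question_id, token, include_bots="default"):
    """Fetch question data, optionally forcing bot forecasts in or out.
    include_bots must be one of "default", "true", "false"."""
    assert include_bots in ("default", "true", "false")
    response = requests.get(
        f"{BASE_URL}/questions/{question_id}/download/",  # hypothetical path
        headers={"Authorization": f"Token {token}"},
        params={"include_bots": include_bots},
        timeout=30,
    )
    response.raise_for_status()
    return response.content
```

Passing "default" defers to the question's own include_bots_in_aggregates setting, so exports match what the question page shows.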

Creating a Forecasting Bot

Step 1: Set Up Bot Account

Via Settings Page:
  1. Go to Account Settings
  2. Navigate to Bots section
  3. Click Create Bot
  4. Enter bot username
  5. Save API credentials securely
Via API: Use the bot management service:
from rest_framework.authtoken.models import Token

from users.services.bots_management import create_bot

bot = create_bot(
    bot_owner=request.user,
    username="my_algorithm_v1"
)

# Issue an API token for the bot account
token = Token.objects.create(user=bot)

Step 2: Implement Forecasting Logic

Basic Structure:
import requests

BOT_TOKEN = "your_bot_api_token"
BASE_URL = "https://www.metaculus.com/api"
HEADERS = {"Authorization": f"Token {BOT_TOKEN}"}

def make_forecast(question_id, probability):
    """Submit a forecast for a binary question."""
    response = requests.post(
        f"{BASE_URL}/questions/{question_id}/forecast/",
        headers=HEADERS,
        json={"probability_yes": probability},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def get_open_questions():
    """Fetch open questions to forecast on."""
    response = requests.get(
        f"{BASE_URL}/questions/",
        headers=HEADERS,
        params={"status": "open"},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    # Paginated list endpoints typically nest items under "results"
    return payload.get("results", payload)

def my_forecasting_algorithm(question):
    """Placeholder: replace with your own model."""
    return 0.5

def run_bot():
    """Main bot loop."""
    for question in get_open_questions():
        probability = my_forecasting_algorithm(question)
        make_forecast(question["id"], probability)

Step 3: Deploy and Monitor

Deployment Options:
  • Serverless functions (AWS Lambda, Google Cloud Functions)
  • Scheduled scripts (cron jobs)
  • Continuous services (Docker containers)
  • GitHub Actions
Monitoring:
  • Track bot forecast volume
  • Monitor error rates
  • Check scoring performance
  • Review calibration regularly
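For the calibration check above, here is a minimal sketch in plain Python (no Metaculus-specific API assumed): bucket resolved binary forecasts by predicted probability and compare each bucket's mean prediction to its observed resolution rate.

```python
from collections import defaultdict

def calibration_table(forecasts, n_bins=10):
    """forecasts: iterable of (predicted_probability, resolved_yes) pairs.
    Returns {bin_index: (mean_prediction, observed_frequency, count)}."""
    bins = defaultdict(list)
    for prob, outcome in forecasts:
        idx = min(int(prob * n_bins), n_bins - 1)  # clamp prob=1.0 into last bin
        bins[idx].append((prob, outcome))
    table = {}
    for idx, items in sorted(bins.items()):
        preds = [p for p, _ in items]
        outcomes = [o for _, o in items]
        table[idx] = (
            sum(preds) / len(preds),
            sum(outcomes) / len(outcomes),
            len(items),
        )
    return table

# A well-calibrated bot's 75% bucket should resolve "yes" ~75% of the time.
sample = [(0.75, True), (0.75, True), (0.75, False), (0.25, False), (0.25, False)]
table = calibration_table(sample)
```

Large gaps between mean prediction and observed frequency in a bucket signal over- or under-confidence worth fixing before the next deployment.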

Bot Best Practices

Building Effective Bots:
  1. Start simple: Begin with baseline models
  2. Test thoroughly: Use resolved questions for backtesting
  3. Update regularly: React to new information
  4. Avoid extremes: Unless highly confident
  5. Monitor performance: Check calibration and scores
  6. Document algorithm: Help others understand your approach
  7. Respect rate limits: Don’t overwhelm the API
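Backtesting on resolved questions (point 2 above) can be as simple as scoring your stored predictions with the Brier score. This sketch assumes you have predictions and resolutions saved locally; it is not a Metaculus scoring function:

```python
def brier_score(probability, resolved_yes):
    """Brier score for a binary forecast: squared error between the
    predicted probability and the 0/1 outcome. Lower is better."""
    outcome = 1.0 if resolved_yes else 0.0
    return (probability - outcome) ** 2

def backtest(predictions):
    """predictions: list of (probability, resolved_yes) pairs.
    Returns the mean Brier score over the set."""
    return sum(brier_score(p, o) for p, o in predictions) / len(predictions)

history = [(0.8, True), (0.3, False), (0.9, True)]
mean_brier = backtest(history)
```

An uninformative always-0.5 forecaster scores exactly 0.25 on every question, so a mean below 0.25 means the algorithm is adding signal.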
Bot Etiquette:
  • Don’t create many redundant bots
  • Mark experimental bots as non-primary
  • Don’t game the system with bot farms
  • Disclose bot methodology when asked
  • Don’t use bots to manipulate community aggregates
  • Respect question author’s bot inclusion preferences

Bot Exclusion Filters

The codebase implements several bot filters:

Query Filters

Exclude Non-Primary Bots:
forecasts.exclude_non_primary_bots()
Keeps:
  • All human forecasters
  • Primary bots (is_primary_bot=True)
Removes:
  • Non-primary bots
Exclude All Bots:
forecasts.exclude(author__is_bot=True)
Conditional Exclusion:
base_forecasts = forecasts.exclude_non_primary_bots()

if not question.include_bots_in_aggregates:
    base_forecasts = base_forecasts.exclude(author__is_bot=True)

Research Applications

Bots are valuable for research:

Baseline Comparisons

  • Simple statistical models as benchmarks
  • Measure human value-add over algorithms
  • Test “wisdom of crowds” threshold
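A baseline model of the kind described above can be almost trivial, e.g. predicting the historical fraction of questions that resolved "yes". The helper below is an illustrative sketch, not a Metaculus baseline:

```python
def base_rate_forecast(resolved_history):
    """Baseline bot: predict the historical fraction of questions that
    resolved 'yes', clamped away from 0 and 1 to avoid unbounded
    log-score penalties."""
    if not resolved_history:
        return 0.5  # no information: maximum-entropy prior
    rate = sum(resolved_history) / len(resolved_history)
    return min(max(rate, 0.01), 0.99)

history = [True, False, True, True, False]  # toy resolution record
p = base_rate_forecast(history)  # 0.6
```

Even a forecaster this crude gives a meaningful floor: human value-add can be measured as the score margin over it.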

Hybrid Forecasting

  • Combine human intuition + machine processing
  • Test optimal human-bot mixing ratios
  • Study when to trust humans vs algorithms

Algorithm Development

  • Rapidly test new forecasting methods
  • Compare algorithm variants
  • Build ensemble models

Metaculus Research

  • Study forecast updating patterns
  • Analyze information integration
  • Test aggregation methods
  • Benchmark platform accuracy

Technical Reference

Key Files:
  • users/services/bots_management.py - Bot creation/management
  • users/models.py - Bot user model fields
  • questions/models.py - include_bots_in_aggregates field
  • scoring/score_math.py - Bot scoring logic
  • utils/the_math/aggregations.py - Bot inclusion in aggregates
Database Fields:
class User(AbstractUser):
    is_bot = models.BooleanField(default=False)
    is_primary_bot = models.BooleanField(default=False)
    bot_owner = models.ForeignKey('self', null=True, on_delete=models.CASCADE)

class Question(models.Model):
    include_bots_in_aggregates = models.BooleanField(default=False)

Future Developments

Potential bot feature roadmap:
  • Bot-specific scoring metrics
  • Enhanced bot vs human leaderboards
  • Bot methodology disclosure requirements
  • Bot forecast explanations
  • Collaborative human-bot forecasting tools
  • Real-time bot performance dashboards
