
Overview

Leaderboards track forecaster performance across the platform and within specific tournaments. Metaculus uses proper scoring rules to measure forecast accuracy and reward skilled forecasters.

Get Global Leaderboard

curl -X GET "https://www.metaculus.com/api/leaderboards/global/?limit=50" \
  -H "Authorization: Token YOUR_TOKEN"
GET /api/leaderboards/global/

Retrieve the global Metaculus leaderboard showing top forecasters across all questions.

Query Parameters

for_user (integer): Show the leaderboard position for a specific user
score_type (string): Score type to display: peer, baseline, spot_peer, or spot_baseline
limit (integer, default 50): Number of entries to return
offset (integer, default 0): Pagination offset

Response

leaderboard (object): Leaderboard metadata
  id (integer): Leaderboard ID
  name (string): Leaderboard name
  score_type (string): Scoring method used
  start_time (string, datetime): When this leaderboard period started
  end_time (string, datetime): When this leaderboard period ends
  finalized (boolean): Whether the leaderboard is finalized
entries (array): Array of leaderboard entry objects

Leaderboard Entry Object

user (object): User information
  id (integer): User ID
  username (string): Username
  is_bot (boolean): Whether this is a bot account
rank (integer): Position on the leaderboard (1 = first place)
score (number): Total forecasting score
ci_lower (number): Lower bound of the 95% confidence interval for the score
ci_upper (number): Upper bound of the 95% confidence interval for the score
coverage (number): Proportion of scored questions the user forecasted (0-1)
contribution_count (integer): Number of questions contributing to this score
medal (string): Medal earned: gold, silver, bronze, or null
prize (number): Prize amount earned (for tournaments)
excluded (boolean): Whether this entry is excluded from rankings
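Client-side filtering and display only need the fields above. A small illustrative sketch (the function names are our own) that drops excluded and bot entries and formats a score with its confidence interval:

```python
def ranked_humans(entries: list[dict]) -> list[dict]:
    """Drop excluded entries and bot accounts, preserving leaderboard order."""
    return [
        e for e in entries
        if not e.get("excluded") and not e["user"].get("is_bot")
    ]

def format_entry(e: dict) -> str:
    """One-line summary including the 95% confidence interval on the score."""
    return (f"#{e['rank']} {e['user']['username']}: "
            f"{e['score']:.1f} (95% CI {e['ci_lower']:.1f}..{e['ci_upper']:.1f})")
```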

Get Project Leaderboard

curl -X GET "https://www.metaculus.com/api/leaderboards/project/144/" \
  -H "Authorization: Token YOUR_TOKEN"
GET /api/leaderboards/project/{projectId}/

Retrieve the leaderboard for a specific project or tournament.

Path Parameters

projectId (integer, required): The project ID

Query Parameters

Same as the global leaderboard, plus:

primary_only (boolean, default false): Only show the primary leaderboard for this project

Response

Returns the same structure as the global leaderboard, but includes additional fields:
leaderboard.project_id (integer): The project this leaderboard belongs to
leaderboard.project_name (string): Project name
leaderboard.project_slug (string): Project slug
leaderboard.prize_pool (string): Total prize pool for this tournament
leaderboard.is_primary_leaderboard (boolean): Whether this is the project’s primary leaderboard
entries[].take (number): For tournaments, the user’s tournament-specific score
entries[].percent_prize (number): For tournaments, the percentage of the prize pool won
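Combining these fields client-side: prize_pool arrives as a string, so parse it before computing payouts. This sketch assumes percent_prize is expressed as a percentage from 0 to 100 (verify against live responses):

```python
def estimated_prize(prize_pool: str, percent_prize: float) -> float:
    """Estimate a user's payout from their share of the pool.

    Assumes percent_prize is a percentage (0-100), not a fraction,
    and tolerates thousands separators in the prize_pool string.
    """
    pool = float(prize_pool.replace(",", ""))
    return pool * percent_prize / 100.0
```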

Get User Medals

curl -X GET "https://www.metaculus.com/api/medals/?user_id=12345" \
  -H "Authorization: Token YOUR_TOKEN"
GET /api/medals/

Retrieve medal counts for a user.

Query Parameters

user_id (integer, required): The user ID to get medals for

Response

gold (integer): Number of gold medals
silver (integer): Number of silver medals
bronze (integer): Number of bronze medals
tournaments (array): List of tournaments where medals were earned
  project_id (integer): Tournament project ID
  project_name (string): Tournament name
  medal (string): Medal type earned
  rank (integer): Final rank in the tournament
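A small helper that summarizes a /api/medals/ response using the fields above (the function name is illustrative, not part of the API):

```python
def medal_summary(medals: dict) -> str:
    """Format medal counts plus per-tournament detail from /api/medals/."""
    lines = [
        f"Gold: {medals['gold']}  Silver: {medals['silver']}  "
        f"Bronze: {medals['bronze']}"
    ]
    for t in medals.get("tournaments", []):
        lines.append(f"  {t['medal']} in {t['project_name']} (rank {t['rank']})")
    return "\n".join(lines)
```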

Get Metaculus Track Record

curl -X GET "https://www.metaculus.com/api/metaculus_track_record/" \
  -H "Authorization: Token YOUR_TOKEN"
GET /api/metaculus_track_record/

Retrieve Metaculus’s overall forecasting track record and performance statistics.

Response

statistics (array): Array of platform-wide statistics
  name (string): Statistic name
  value (number): Statistic value
  description (string): What this statistic measures
calibration_data (object): Calibration curve data showing how often predictions match outcomes
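For example, rendering the statistics array as readable lines (an illustrative helper, not part of the API):

```python
def describe_stats(track_record: dict) -> list[str]:
    """Render each platform-wide statistic as a single readable line."""
    return [
        f"{s['name']}: {s['value']} ({s['description']})"
        for s in track_record.get("statistics", [])
    ]
```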

Understanding Score Types

Peer Score vs. Baseline Score

Metaculus uses multiple scoring methods:
  • Peer Score: Measures performance relative to the community aggregate
  • Baseline Score: Measures performance relative to a baseline prior
  • Spot Score: Evaluated at a specific time (CP reveal time) rather than continuously
Higher scores are better. Scores can be negative if predictions are worse than the baseline.
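As an illustration of why scores can be negative: for binary questions the baseline score is commonly described as 100 × log2(p / 0.5), where p is the probability assigned to the actual outcome. Treat the exact constants here as an assumption and check the official scoring documentation:

```python
import math

def baseline_score(p_outcome: float) -> float:
    """Binary baseline log score sketch: 0 at p=0.5, +100 at p=1,
    negative whenever less than 50% was assigned to the outcome.

    Formula assumed to be 100 * log2(p / 0.5); verify against
    Metaculus's scoring documentation before relying on it.
    """
    return 100.0 * math.log2(p_outcome / 0.5)
```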

Score Calculation

Scores are calculated using:
  1. Log Score: A proper scoring rule that rewards accurate probability forecasts
  2. Coverage: Weights scores by how many questions you forecasted
  3. Recency: More recent forecasts may have higher weight

Example: Tournament Rankings

import requests

headers = {"Authorization": "Token YOUR_TOKEN"}

# Get tournament leaderboard
response = requests.get(
    "https://www.metaculus.com/api/leaderboards/project/3876/",
    headers=headers,
    params={"limit": 10}
)

leaderboard = response.json()
project = leaderboard["leaderboard"]

print(f"Tournament: {project['project_name']}")
print(f"Prize Pool: ${project['prize_pool']}")
print(f"Status: {'Finalized' if project['finalized'] else 'Ongoing'}")
print("\nTop 10:")

for entry in leaderboard["entries"]:
    user = entry["user"]
    medal = f" 🏅{entry['medal']}" if entry["medal"] else ""
    prize = f" (${entry['prize']:.2f})" if entry.get("prize") else ""
    
    print(f"{entry['rank']:2d}. {user['username']:20s} "
          f"Score: {entry['score']:7.2f} "
          f"Coverage: {entry['coverage']:.1%}{medal}{prize}")

Example: Compare User to Leaderboard

import requests

headers = {"Authorization": "Token YOUR_TOKEN"}
user_id = 12345

# Get user's position on global leaderboard
response = requests.get(
    "https://www.metaculus.com/api/leaderboards/global/",
    headers=headers,
    params={
        "for_user": user_id,
        "limit": 100
    }
)

leaderboard = response.json()
user_entry = next(
    (e for e in leaderboard["entries"] if e["user"]["id"] == user_id),
    None
)

if user_entry:
    print(f"Your rank: {user_entry['rank']}")
    print(f"Your score: {user_entry['score']:.2f}")
    print(f"Your coverage: {user_entry['coverage']:.1%}")
    print(f"Questions scored: {user_entry['contribution_count']}")
    
    # Compare to #1
    top_entry = leaderboard["entries"][0]
    score_diff = top_entry["score"] - user_entry["score"]
    print(f"\nGap to #1: {score_diff:.2f} points")
else:
    print("User not found on leaderboard")
