Overview

Leaderboards rank forecasters based on their prediction accuracy across multiple questions. Metaculus uses sophisticated leaderboard systems to recognize top performers, distribute prizes, and track forecasting skill over time.

Leaderboard Types

From scoring/constants.py:17-28, Metaculus supports multiple leaderboard score types:
class LeaderboardScoreTypes(models.TextChoices):
    PEER_TOURNAMENT = "peer_tournament"
    DEFAULT = "default"
    SPOT_PEER_TOURNAMENT = "spot_peer_tournament"
    SPOT_BASELINE_TOURNAMENT = "spot_baseline_tournament"
    RELATIVE_LEGACY_TOURNAMENT = "relative_legacy_tournament"
    BASELINE_GLOBAL = "baseline_global"
    PEER_GLOBAL = "peer_global"
    PEER_GLOBAL_LEGACY = "peer_global_legacy"
    COMMENT_INSIGHT = "comment_insight"
    QUESTION_WRITING = "question_writing"
    MANUAL = "manual"

These fall into three families:
  • Tournament: sum of scores in competitions
  • Global: platform-wide rankings
  • Specialized: question writing, comments, manual

Leaderboard Model

From scoring/models.py:97-223, the core leaderboard structure:
class Leaderboard(TimeStampedModel):
    name = models.CharField(max_length=200, null=True, blank=True)
    project = models.ForeignKey(
        Project,
        null=True,
        blank=True,
        on_delete=models.CASCADE,
        related_name="leaderboards",
    )
    
    score_type = models.CharField(
        max_length=200,
        choices=LeaderboardScoreTypes.choices,
    )
    
    # Time boundaries
    start_time = models.DateTimeField(null=True, blank=True)
    end_time = models.DateTimeField(null=True, blank=True)
    finalize_time = models.DateTimeField(null=True, blank=True)
    finalized = models.BooleanField(default=False)
    
    # Prize configuration
    prize_pool = models.DecimalField(
        decimal_places=2,
        max_digits=15,
        null=True,
        blank=True,
    )
    minimum_prize_amount = models.DecimalField(
        default=50.00,
        decimal_places=2,
        max_digits=15,
    )
    
    # User filtering
    user_list = models.ManyToManyField(User, blank=True)
    bot_status = models.CharField(
        max_length=32,
        choices=Project.BotLeaderboardStatus.choices,
        null=True,
        blank=True,
    )
    
    # Display configuration
    display_config = models.JSONField(null=True, blank=True)

Score Types Explained

Tournament Score Types

Peer Tournament is the most common tournament scoring: it sums peer scores across all tournament questions and is the default for new tournaments. From scoring/constants.py:30-42:
case cls.PEER_TOURNAMENT:
    return ScoreTypes.PEER
Formula: Total = Σ(peer_score × question_weight)
Best for: standard forecasting competitions
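The weighted sum above can be sketched in a few lines (names here are illustrative, not the actual Metaculus implementation):

```python
def tournament_total(question_scores):
    """Sum weighted peer scores across tournament questions.

    question_scores: iterable of (peer_score, question_weight) pairs.
    """
    return sum(peer_score * weight for peer_score, weight in question_scores)

# Three questions: two at full weight, one at half weight
total = tournament_total([(12.5, 1.0), (-3.0, 1.0), (8.0, 0.5)])
# total == 13.5
```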

Global Score Types

Coverage-weighted average of peer scores across all qualifying questions.
Formula: Score = Σ(peer_score × coverage × weight) / Σ(coverage × weight)
Best for: platform-wide rankings that reward both accuracy and participation
Note: requires many questions (50+) to be meaningful
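A minimal sketch of the coverage-weighted average (function and argument names are assumptions for illustration):

```python
def global_peer_score(question_scores):
    """Coverage-weighted average of peer scores.

    question_scores: iterable of (peer_score, coverage, weight) triples,
    with coverage in [0, 1].
    """
    numerator = sum(s * c * w for s, c, w in question_scores)
    denominator = sum(c * w for _, c, w in question_scores)
    return numerator / denominator if denominator else 0.0

# Full coverage on one question, half coverage on another:
# (10*1.0 + 20*0.5) / (1.0 + 0.5) ≈ 13.33
score = global_peer_score([(10.0, 1.0, 1.0), (20.0, 0.5, 1.0)])
```

Note how partial coverage shrinks both a question's contribution and its share of the denominator, so skipping questions hurts less than forecasting them badly, but still dilutes strong performance elsewhere.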

Specialized Score Types

Comment Insight: H-index of upvotes on comments. Calculation: your comment insight score is N if you have N comments with at least N upvotes each. Use: rewards helpful commentary and explanation.
Question Writing: H-index of (number of forecasters ÷ 10) on authored questions. Calculation: your question writing score is N if you wrote N questions with at least N×10 forecasters each. Use: rewards writing popular, engaging questions.
Manual: does not update automatically; entries must be set by hand. Use: special awards, judged competitions, or custom scoring systems.
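Both h-index scores use the same underlying calculation; a small sketch (the helper name is an assumption, not Metaculus code):

```python
def h_index(values):
    """Largest N such that at least N of the values are >= N."""
    h = 0
    for i, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= i:
            h = i
        else:
            break
    return h

# Comment insight: h-index of upvote counts on your comments
comment_score = h_index([10, 8, 5, 4, 3])  # 4 comments with >= 4 upvotes each

# Question writing: h-index of forecaster_count / 10 per authored question
writing_score = h_index([c / 10 for c in [120, 45, 30, 8]])  # 3 questions with >= 30 forecasters
```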

Question Filtering

Leaderboards filter which questions contribute to rankings.

Tournament Leaderboards

From scoring/models.py:224-238:
def get_questions(self) -> QuerySet[Question]:
    questions = Question.objects.filter(
        post__curation_status=Post.CurationStatus.APPROVED
    )
    
    if not (self.project and self.project.type == Project.ProjectTypes.SITE_MAIN):
        # Normal project leaderboard
        if self.project:
            questions = questions.filter(
                Q(post__projects=self.project) | Q(post__default_project=self.project)
            )
        return questions.distinct("id")
Tournament leaderboards include questions where:
  • Post is approved (curated)
  • Post’s default project OR tagged projects include the tournament

Global Leaderboards

From scoring/models.py:240-297, global leaderboards have strict time filtering:
# Questions must:
# 1. Be public
questions = questions.filter_public().filter(
    post__in=Post.objects.filter_for_main_feed()
)

# 2. Have time boundaries within leaderboard period
questions = questions.filter(
    open_time__gte=self.start_time,
    open_time__lt=self.end_time,
    scheduled_close_time__lte=self.end_time + close_grace_period,
)

# 3. Exclude questions that belong to other overlapping leaderboards
# (Questions only count for the shortest matching time window)
Global leaderboards ensure questions only count toward ONE global leaderboard period, preventing double-counting.

Time Boundaries

Leaderboards have three key timestamps:

Start Time and End Time

From scoring/models.py:154-170:
start_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional (required for global leaderboards).
    Global Leaderboards: filters for questions that have an open time after this.
    Non-Global Leaderboards: has no effect on question filtering.
    Filtering MedalExclusionRecords: MedalExclusionRecords that have no end_time
    or an end_time greater than this will be triggered.
    """,
)

end_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional (required for global leaderboards).
    Global Leaderboards: filters for questions that have a scheduled_close_time
    before this (plus a grace period).
    """,
)

Finalize Time

From scoring/models.py:172-183:
finalize_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional. If not set, the Project's close_date will be used instead.
    For all Leaderboards: used to filter out questions that have a
    resolution_set_time after this (as they were resolved after this
    Leaderboard was finalized).
    """,
)

finalized = models.BooleanField(
    default=False,
    help_text="If true, this Leaderboard's entries cannot be updated except by manual action.",
)
The finalization lifecycle:
  1. Active Period: the leaderboard tracks questions between start_time and end_time
  2. Resolution Period: questions resolve and scores are calculated (can extend past end_time)
  3. Finalization: at finalize_time, the leaderboard locks and becomes immutable

Prize Distribution

Prize Pool

From scoring/models.py:184-201:
prize_pool = models.DecimalField(
    default=None,
    decimal_places=2,
    max_digits=15,
    null=True,
    blank=True,
    help_text="""Optional. If not set, the Project's prize_pool will be used.
    If the Project has a prize pool, but this leaderboard has none, set this to 0.
    """,
)

minimum_prize_amount = models.DecimalField(
    default=50.00,
    decimal_places=2,
    max_digits=15,
    help_text="""The minimum amount a user can win in this leaderboard.
    Any remaining money is redistributed. Tournaments that close before June 2025
    will have a value of 0.00.
    """,
)

Prize Calculation

Typical prize distribution:
  1. Rank users by total score (descending)
  2. Apply medal exclusions (see below)
  3. Distribute prize pool according to curve (e.g., top 10% of prize pool to 1st place)
  4. Apply minimum prize floor - redistribute amounts below minimum
The minimum_prize_amount prevents many small payouts. Setting it to $50 means any prize below $50 is redistributed to higher performers.
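The steps above can be sketched as follows. This is an illustrative proportional payout, not the actual Metaculus prize curve, and the function name is hypothetical:

```python
def distribute_prizes(ranked_scores, prize_pool, minimum=50.0):
    """Illustrative prize distribution with a minimum-prize floor.

    Pays each user a share of the pool proportional to their positive
    score, then iteratively drops users whose payout would fall below
    the minimum and redistributes that money upward.
    """
    eligible = {user: score for user, score in ranked_scores.items() if score > 0}
    while eligible:
        total = sum(eligible.values())
        payouts = {u: prize_pool * s / total for u, s in eligible.items()}
        below = [u for u, p in payouts.items() if p < minimum]
        if not below:
            return payouts
        for u in below:  # redistribute sub-minimum shares on the next pass
            del eligible[u]
    return {}

# With a $1,000 pool and a $150 floor, the lowest scorer's sub-minimum
# share is redistributed to the top two forecasters.
payouts = distribute_prizes({"alice": 60, "bob": 30, "carol": 10}, 1000.0, minimum=150.0)
```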

Bot Handling

Leaderboards control how bots are ranked and displayed.

Bot Status

From scoring/models.py:202-209 and projects/models.py:248-264:
bot_status = models.CharField(
    max_length=32,
    choices=Project.BotLeaderboardStatus.choices,
    null=True,
    blank=True,
    help_text="Optional. If not set, the Project's bot_leaderboard_status will be used.",
)

class BotLeaderboardStatus(models.TextChoices):
    EXCLUDE_AND_HIDE = "exclude_and_hide"  # Bots hidden completely
    EXCLUDE_AND_SHOW = "exclude_and_show"  # Bots shown but no prizes (default)
    INCLUDE = "include"                    # Bots compete for prizes
    BOTS_ONLY = "bots_only"               # Only bots compete
Most leaderboards use EXCLUDE_AND_SHOW - bots appear for benchmarking but don’t win prizes or count in official rankings.

User Filtering

Leaderboards can restrict participation to specific users.

User List

From scoring/models.py:210-217:
user_list = models.ManyToManyField(
    User,
    blank=True,
    help_text="""Optional. If not set, all users with scores will be included.
    If set, only users in this list will be included.
    Exclusion Records still apply independent of this list.
    """,
)
Use cases:
  • Invite-only competitions
  • Organization-specific leaderboards
  • Qualifying tournaments with pre-approved participants

Display Configuration

Leaderboards can customize their presentation.

Display Config JSON

From scoring/models.py:112-128:
display_config = models.JSONField(
    null=True,
    blank=True,
    help_text=(
        "Optional JSON configuration for displaying this leaderboard."
        "If not set, default display settings will be used."
        "Example display_config:"
        """{
            "display_name": "My Custom Leaderboard",
            "column_renames": {
                "Questions": "Question Links"
            },
            "display_order": 1,
            "display_on_project": true
        }"""
    ),
)
Configuration options:
  • display_name: Override default leaderboard name
  • column_renames: Custom column headers
  • display_order: Sort order when project has multiple leaderboards
  • display_on_project: Whether to show on project page

Global Leaderboard Periods

Metaculus maintains time-based global leaderboard periods.

Period Generation

From scoring/models.py:300+, global leaderboards follow patterns:
GLOBAL_LEADERBOARD_STRING = "Leaderboard"
GLOBAL_LEADERBOARD_SLUG = "leaderboard"

def global_leaderboard_dates() -> list[tuple[datetime, datetime]]:
    # Returns list of (start_date, end_date) tuples for each period
    # Typically: quarterly periods (Q1 2024, Q2 2024, etc.)

Period Resolution

From questions/models.py:385-419, questions are matched to the shortest overlapping period:
def get_global_leaderboard_dates(self) -> tuple[datetime, datetime] | None:
    forecast_horizon_start = self.open_time
    forecast_horizon_end = self.scheduled_close_time
    
    # Find the shortest window that contains this question's forecast period
    shortest_window = None
    for gl_start, gl_end in gl_dates[::-1]:  # Reverse order (shortest first)
        if forecast_horizon_start < gl_start:
            continue
        if forecast_horizon_end > gl_end + timedelta(days=3):
            continue
        window_size = gl_end - gl_start
        if shortest_window is None or window_size < (
            shortest_window[1] - shortest_window[0]
        ):
            shortest_window = (gl_start, gl_end)
    return shortest_window
Questions only contribute to ONE global leaderboard - the shortest period that fully contains their forecasting window.
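The shortest-window rule can be illustrated standalone. The period dates below are assumed for the example (a 2024 yearly window and a Q1 2024 quarterly window), not taken from the codebase:

```python
from datetime import datetime, timedelta

# Hypothetical global leaderboard periods as (start, end) pairs
periods = [
    (datetime(2024, 1, 1), datetime(2025, 1, 1)),  # yearly
    (datetime(2024, 1, 1), datetime(2024, 4, 1)),  # Q1 quarterly
]

# A question open Feb 1 through Mar 15, 2024
open_time = datetime(2024, 2, 1)
close_time = datetime(2024, 3, 15)

# Keep periods fully containing the forecasting window
# (with the 3-day grace period at the close, mirroring the filter above),
# then pick the shortest one.
containing = [
    (s, e) for s, e in periods
    if open_time >= s and close_time <= e + timedelta(days=3)
]
shortest = min(containing, key=lambda p: p[1] - p[0])
# shortest is the Q1 2024 window, so the question counts only there
```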

Leaderboard Entries

Leaderboard rankings are stored as entries (from scoring/models.py:97):
class Leaderboard(TimeStampedModel):
    entries: QuerySet["LeaderboardEntry"]  # Related model
Each entry represents one user’s standing:
  • User reference
  • Total score
  • Rank
  • Prize amount (if applicable)
  • Number of questions forecasted
  • Coverage statistics

Best Practices

Forecast Broadly

Global leaderboards reward breadth - forecast on many questions across topics

Maintain Coverage

Keep forecasts active throughout question periods to maximize coverage weights

Start Early

Tournament leaderboards often reward early participation with higher coverage

Track Your Rank

Monitor leaderboard position to understand your standing and adjust strategy

Read Leaderboard Rules

Each leaderboard may use different scoring - understand which score type applies

Focus Quality

One well-researched forecast beats many hasty predictions

API Reference

Leaderboards API

Explore the full Leaderboards API documentation

Scoring

Understand how individual scores are calculated

Tournaments

Learn about competitive forecasting events

Projects

Understand project-leaderboard relationships

Questions

Learn which questions contribute to leaderboards
