Leaderboards rank forecasters based on their prediction accuracy across multiple questions. Metaculus uses sophisticated leaderboard systems to recognize top performers, distribute prizes, and track forecasting skill over time.
Peer Tournament

Most common tournament scoring. Sums peer scores across all tournament questions; this is the default for new tournaments.

From scoring/constants.py:30-42:
```python
case cls.PEER_TOURNAMENT:
    return ScoreTypes.PEER
```
Formula: Total = Σ(peer_score × question_weight)

Best for: Standard forecasting competitions
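As a concrete sketch of the formula above (the `tournament_total` helper is illustrative, not Metaculus code):

```python
def tournament_total(entries: list[tuple[float, float]]) -> float:
    """Peer-tournament total: sum of peer_score * question_weight.

    entries is a list of (peer_score, question_weight) pairs.
    """
    return sum(score * weight for score, weight in entries)


# Three questions, the last one double-weighted:
total = tournament_total([(12.0, 1.0), (-4.0, 1.0), (8.0, 2.0)])
# 12 - 4 + 16 = 24.0
```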
Default

Uses each question’s default score type, which can vary per question (see questions/models.py:91-102).
Peer Global

Coverage-weighted average of peer scores across all qualifying questions.

Formula: Score = Σ(peer_score × coverage × weight) / Σ(coverage × weight)

Best for: Platform-wide rankings that reward both accuracy and participation

Note: Requires many questions (50+) to be meaningful
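The coverage-weighted average can be sketched as follows (the `peer_accuracy` helper and its triple-based input are illustrative assumptions, not the actual implementation):

```python
def peer_accuracy(entries: list[tuple[float, float, float]]) -> float:
    """Coverage-weighted average of peer scores.

    entries is a list of (peer_score, coverage, weight) triples:
    Score = Σ(peer_score × coverage × weight) / Σ(coverage × weight)
    """
    numerator = sum(score * cov * w for score, cov, w in entries)
    denominator = sum(cov * w for _, cov, w in entries)
    return numerator / denominator if denominator else 0.0


# A fully-covered question scored 10 and a half-covered question scored 0:
# the half-covered question drags the average down, but only by half weight.
score = peer_accuracy([(10.0, 1.0, 1.0), (0.0, 0.5, 1.0)])
# 10 / 1.5 ≈ 6.67
```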
Baseline Global

Sum of baseline scores across all qualifying questions.

Formula: Total = Σ(baseline_score × question_weight)

Best for: Measuring absolute forecasting accuracy at global scale
Peer Global Legacy

Simple average of peer scores (older algorithm).

Status: Used for historical leaderboards pre-2024
Comment Insight

H-index of upvotes for comments on questions.

Calculation: Your comment insight score is N if you have N comments with at least N upvotes each.

Use: Rewards helpful commentary and explanation
Question Writing
H-index of (number of forecasters / 10) on authored questions.

Calculation: Your question writing score is N if you wrote N questions with at least N×10 forecasters each.

Use: Rewards writing popular, engaging questions
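Both h-index scores above (comment insight and question writing) reduce to the same computation; a minimal sketch (`h_index` is an illustrative helper, not Metaculus code):

```python
def h_index(values: list[float]) -> int:
    """Largest N such that at least N of the values are >= N."""
    h = 0
    for i, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= i:
            h = i
        else:
            break
    return h


# Comment insight: upvote counts per comment.
# Three comments have >= 3 upvotes, but not four with >= 4.
insight = h_index([5, 4, 4, 2, 1])  # → 3

# Question writing: forecaster counts divided by 10 per authored question.
# Three questions have >= 30 forecasters, but not four with >= 40.
writing = h_index([f / 10 for f in [120, 45, 30, 8]])  # → 3
```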
Manual
Does not automatically update; entries must be manually set.

Use: Special awards, judged competitions, or custom scoring systems
From scoring/models.py:240-297, global leaderboards have strict time filtering:
```python
# Questions must:
# 1. Be public
questions = questions.filter_public().filter(
    post__in=Post.objects.filter_for_main_feed()
)
# 2. Have time boundaries within the leaderboard period
questions = questions.filter(
    open_time__gte=self.start_time,
    open_time__lt=self.end_time,
    scheduled_close_time__lte=self.end_time + close_grace_period,
)
# 3. Exclude questions that belong to other overlapping leaderboards
#    (questions only count for the shortest matching time window)
```
Global leaderboards ensure questions only count toward ONE global leaderboard period, preventing double-counting.
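The "shortest matching time window" rule can be sketched as follows (the `assign_global_window` helper and its tuple-based window representation are illustrative assumptions, not the actual implementation):

```python
from datetime import datetime


def assign_global_window(open_time, close_time, windows):
    """Pick the single global leaderboard window a question counts toward.

    Among windows (start, end) that fully contain [open_time, close_time],
    choose the shortest, so e.g. a question inside Q2 counts toward the
    quarterly leaderboard rather than the overlapping yearly one.
    """
    matching = [
        (start, end)
        for start, end in windows
        if start <= open_time and close_time <= end
    ]
    if not matching:
        return None
    return min(matching, key=lambda w: w[1] - w[0])


windows = [
    (datetime(2024, 1, 1), datetime(2024, 4, 1)),  # Q1 2024
    (datetime(2024, 1, 1), datetime(2025, 1, 1)),  # Year 2024
]
# A question fully inside Q1 matches both windows but is assigned the quarter:
chosen = assign_global_window(datetime(2024, 2, 1), datetime(2024, 3, 1), windows)
# → (datetime(2024, 1, 1), datetime(2024, 4, 1))
```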
```python
start_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional (required for global leaderboards).
    Global Leaderboards: filters for questions that have an open time after this.
    Non-Global Leaderboards: has no effect on question filtering.
    Filtering MedalExclusionRecords: MedalExclusionRecords that have no end_time
    or an end_time greater than this will be triggered.
    """,
)
end_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional (required for global leaderboards).
    Global Leaderboards: filters for questions that have a
    scheduled_close_time before this (plus a grace period).
    """,
)
```
```python
finalize_time = models.DateTimeField(
    null=True,
    blank=True,
    help_text="""Optional. If not set, the Project's close_date will be used instead.
    For all Leaderboards: used to filter out questions that have a
    resolution_set_time after this (as they were resolved after this
    Leaderboard was finalized).
    """,
)
finalized = models.BooleanField(
    default=False,
    help_text="If true, this Leaderboard's entries cannot be updated except by manual action.",
)
```
1. Active Period: the leaderboard tracks questions between start_time and end_time.
2. Resolution Period: questions resolve and scores are calculated (this can extend past end_time).
3. Finalization: at finalize_time, the leaderboard locks and becomes immutable.
```python
prize_pool = models.DecimalField(
    default=None,
    decimal_places=2,
    max_digits=15,
    null=True,
    blank=True,
    help_text="""Optional. If not set, the Project's prize_pool will be used.
    If the Project has a prize pool, but this leaderboard has none, set this to 0.
    """,
)
minimum_prize_amount = models.DecimalField(
    default=50.00,
    decimal_places=2,
    max_digits=15,
    help_text="""The minimum amount a user can win in this leaderboard.
    Any remaining money is redistributed.
    Tournaments that close before June 2025 will have a value of 0.00.
    """,
)
```
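The help text says payouts below minimum_prize_amount are redistributed. A hypothetical sketch of one such scheme, a proportional payout with a minimum threshold (`distribute_prizes` and its share-based weighting are assumptions, not the actual Metaculus algorithm):

```python
def distribute_prizes(prize_pool: float, shares: dict[str, float],
                      minimum: float = 50.0) -> dict[str, float]:
    """Pay each user proportionally to their share of the total.

    Anyone whose payout would fall below `minimum` is dropped and their
    money redistributed among the rest, repeating until stable.
    """
    winners = dict(shares)
    while winners:
        total = sum(winners.values())
        payouts = {u: prize_pool * s / total for u, s in winners.items()}
        below = [u for u, p in payouts.items() if p < minimum]
        if not below:
            return payouts
        for u in below:
            del winners[u]
    return {}


# "c" would win ~24.39, under the 50.00 minimum, so the pool is
# redistributed between "a" and "b".
payouts = distribute_prizes(1000.0, {"a": 10.0, "b": 10.0, "c": 0.5})
# → {"a": 500.0, "b": 500.0}
```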
From scoring/models.py:202-209 and projects/models.py:248-264:
```python
bot_status = models.CharField(
    max_length=32,
    choices=Project.BotLeaderboardStatus.choices,
    null=True,
    blank=True,
    help_text="Optional. If not set, the Project's bot_leaderboard_status will be used.",
)

class BotLeaderboardStatus(models.TextChoices):
    EXCLUDE_AND_HIDE = "exclude_and_hide"  # Bots hidden completely
    EXCLUDE_AND_SHOW = "exclude_and_show"  # Bots shown but no prizes (default)
    INCLUDE = "include"                    # Bots compete for prizes
    BOTS_ONLY = "bots_only"                # Only bots compete
```
Most leaderboards use EXCLUDE_AND_SHOW - bots appear for benchmarking but don’t win prizes or count in official rankings.
```python
user_list = models.ManyToManyField(
    User,
    blank=True,
    help_text="""Optional. If not set, all users with scores will be included.
    If set, only users in this list will be included.
    Exclusion Records still apply independent of this list.
    """,
)
```
Use cases:

- Invite-only competitions
- Organization-specific leaderboards
- Qualifying tournaments with pre-approved participants
From scoring/models.py:300+, global leaderboards follow fixed naming and period-generation patterns:
```python
GLOBAL_LEADERBOARD_STRING = "Leaderboard"
GLOBAL_LEADERBOARD_SLUG = "leaderboard"

def global_leaderboard_dates() -> list[tuple[datetime, datetime]]:
    # Returns a list of (start_date, end_date) tuples for each period
    # Typically: quarterly periods (Q1 2024, Q2 2024, etc.)
    ...
```
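The quarterly periods mentioned in the comment could be generated like this (a sketch; `quarterly_periods` is a hypothetical helper, not the function in scoring/models.py):

```python
from datetime import datetime


def quarterly_periods(year: int) -> list[tuple[datetime, datetime]]:
    """Return (start, end) tuples for the four quarters of a year.

    Each quarter runs from its first day up to the first day of the
    next quarter, so consecutive windows tile the year without gaps.
    """
    starts = [datetime(year, month, 1) for month in (1, 4, 7, 10)]
    ends = starts[1:] + [datetime(year + 1, 1, 1)]
    return list(zip(starts, ends))


periods = quarterly_periods(2024)
# periods[0] is Q1 2024: (datetime(2024, 1, 1), datetime(2024, 4, 1))
```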