# Rate Limiting Overview
The Metaculus API implements rate limiting to ensure fair usage and protect the service from abuse. All API requests are subject to throttling policies.
Exceeding rate limits will result in your requests being temporarily rejected with a 429 Too Many Requests error.
## Why Rate Limits Exist
Rate limits help:
- Ensure fair access for all users
- Prevent abuse and malicious activity
- Maintain API performance and stability
- Protect server resources
## Rate Limit Policy
While Metaculus throttles requests to prevent abuse, specific rate limit thresholds may vary based on:
- Authentication status (authenticated vs. unauthenticated)
- Endpoint being accessed
- Request patterns and historical usage
- Account standing
Rate limits are subject to change as we optimize the API. We’ll notify developers of significant changes to rate limiting policies.
## Rate Limit Response

When you exceed the rate limit, the API returns a 429 Too Many Requests error:

```json
{
  "detail": "Request was throttled. Expected available in X seconds."
}
```

Rate limit responses may include headers indicating your current limit status:

```http
HTTP/2 429
Retry-After: 60
Content-Type: application/json

{"detail": "Request was throttled. Expected available in 60 seconds."}
```
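A small helper can translate a throttled response into a wait time. This is a sketch, not part of the API itself; it assumes `Retry-After` carries an integer number of seconds (HTTP also allows a date form, which this sketch treats as the 60-second fallback):

```python
def seconds_to_wait(status_code: int, headers: dict) -> int:
    """Decide how long to wait before retrying, given a response's
    status code and headers.

    Returns 0 when no wait is needed. Falls back to 60 seconds when a
    429 arrives without a usable integer Retry-After value.
    """
    if status_code != 429:
        return 0
    try:
        return int(headers.get('Retry-After', 60))
    except (TypeError, ValueError):
        return 60
```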
## Best Practices
### 1. Implement Exponential Backoff
When you receive a 429 error, wait before retrying. Use exponential backoff to gradually increase wait times:
```python
import time
import requests
from typing import Optional

def make_request_with_backoff(
    url: str,
    headers: dict,
    max_retries: int = 5
) -> Optional[dict]:
    """Make an API request with exponential backoff on rate limits."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
        elif response.status_code == 429:
            # Prefer the server-provided Retry-After header
            retry_after = int(response.headers.get('Retry-After', 0))
            if retry_after:
                wait_time = retry_after
            else:
                # Exponential backoff: 2^attempt seconds
                wait_time = 2 ** attempt
            print(f"Rate limited. Waiting {wait_time} seconds...")
            time.sleep(wait_time)
        else:
            response.raise_for_status()
    raise Exception(f"Max retries ({max_retries}) exceeded")

# Usage
headers = {'Authorization': 'Token YOUR_API_TOKEN'}
data = make_request_with_backoff(
    'https://www.metaculus.com/api/posts/',
    headers
)
```
### 2. Cache Responses
Reduce API calls by caching responses that don’t change frequently:
```python
import time
import requests
from functools import lru_cache

API_TOKEN = 'YOUR_API_TOKEN'

@lru_cache(maxsize=100)
def get_post_cached(post_id: int, cache_time: int) -> dict:
    """
    Get a post with a simple time-based cache.

    The cache_time parameter isn't used in the body; passing a new
    value (e.g. the current minute) busts the cache so entries expire.
    """
    headers = {'Authorization': f'Token {API_TOKEN}'}
    response = requests.get(
        f'https://www.metaculus.com/api/posts/{post_id}/',
        headers=headers
    )
    response.raise_for_status()
    return response.json()

# Call with the current minute to cache for 1 minute
current_minute = int(time.time() / 60)
post = get_post_cached(12345, current_minute)
```
### 3. Batch Requests
Instead of making many individual requests, use pagination and filters to retrieve multiple items at once:
```python
import requests

headers = {'Authorization': 'Token YOUR_API_TOKEN'}

# Bad: multiple individual requests
post_ids = [12345, 12346, 12347]
posts = []
for post_id in post_ids:
    response = requests.get(
        f'https://www.metaculus.com/api/posts/{post_id}/',
        headers=headers
    )
    posts.append(response.json())

# Good: single paginated request
response = requests.get(
    'https://www.metaculus.com/api/posts/',
    headers=headers,
    params={
        'limit': 100,
        'tournaments': 'metaculus-cup'
    }
)
posts = response.json()['results']
```
### 4. Use Webhooks (Future)
While webhooks are not currently available, they may be added in future API versions to reduce the need for polling.
If you need real-time updates, consider:
- Polling at reasonable intervals (e.g., every 5-15 minutes)
- Using efficient filters to minimize data transfer
- Implementing conditional requests (if supported)
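The first two points can be combined into a simple polling loop. This is a sketch: the endpoint and filter parameters follow the examples elsewhere in this document, and `API_TOKEN` is a placeholder:

```python
import time
import requests

API_TOKEN = 'YOUR_API_TOKEN'  # placeholder
POLL_INTERVAL_SECONDS = 10 * 60  # poll every 10 minutes

def fetch_open_questions() -> list:
    """One polling pass: fetch open binary questions with tight
    filters so each response stays small."""
    response = requests.get(
        'https://www.metaculus.com/api/posts/',
        headers={'Authorization': f'Token {API_TOKEN}'},
        params={'statuses': 'open', 'forecast_type': 'binary', 'limit': 100},
    )
    if response.status_code == 429:
        # Back off as instructed, then skip this cycle
        time.sleep(int(response.headers.get('Retry-After', 60)))
        return []
    response.raise_for_status()
    return response.json()['results']

def poll_forever():
    """Poll at a fixed, respectful interval."""
    while True:
        for post in fetch_open_questions():
            print(post['id'], post.get('title', ''))
        time.sleep(POLL_INTERVAL_SECONDS)
```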
### 5. Monitor Your Usage
Keep track of your API usage to avoid hitting rate limits:
```python
import time
import requests
from collections import deque

class RateLimitedClient:
    def __init__(self, api_token: str, max_requests_per_minute: int = 60):
        self.api_token = api_token
        self.max_requests = max_requests_per_minute
        self.request_times = deque()

    def _wait_if_needed(self):
        """Wait if we've exceeded our self-imposed rate limit."""
        now = time.time()
        # Remove requests older than 1 minute
        while self.request_times and self.request_times[0] < now - 60:
            self.request_times.popleft()
        # If we've hit our limit, wait until the oldest request expires
        if len(self.request_times) >= self.max_requests:
            sleep_time = 60 - (now - self.request_times[0])
            if sleep_time > 0:
                time.sleep(sleep_time)
            self.request_times.popleft()

    def get(self, url: str, **kwargs):
        """Make a rate-limited GET request."""
        self._wait_if_needed()
        self.request_times.append(time.time())
        headers = kwargs.get('headers', {})
        headers['Authorization'] = f'Token {self.api_token}'
        kwargs['headers'] = headers
        return requests.get(url, **kwargs)

# Usage
client = RateLimitedClient('YOUR_API_TOKEN', max_requests_per_minute=30)
response = client.get('https://www.metaculus.com/api/posts/')
```
## Rate Limit Troubleshooting
### I’m Getting 429 Errors
- **Implement backoff logic** - Wait before retrying failed requests
- **Reduce request frequency** - Spread requests over time
- **Use pagination efficiently** - Fetch more data per request with larger `limit` values (max 100)
- **Cache responses** - Don’t re-fetch data that hasn’t changed
- **Check your code** - Ensure you’re not making requests in tight loops
### My Bot Is Being Rate Limited
For automated bots:
- **Add delays between requests** - Wait 1-2 seconds between API calls
- **Implement request queuing** - Queue requests and process them at a controlled rate
- **Monitor and log** - Track your request patterns to identify issues
- **Contact support** - If you have legitimate high-volume needs, contact api-requests@metaculus.com
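Request queuing can be sketched with a background worker that drains a queue at a controlled rate. This is a minimal illustration using only the standard library; in a real bot, the submitted callables would wrap `requests.get`:

```python
import queue
import threading
import time

class RequestQueue:
    """Process queued callables one at a time, with a fixed delay
    between them."""

    def __init__(self, interval: float = 1.0):
        self.interval = interval
        self.tasks: queue.Queue = queue.Queue()
        self.results = []
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, fn, *args, **kwargs):
        """Enqueue a callable; the worker runs it when its turn comes."""
        self.tasks.put((fn, args, kwargs))

    def _run(self):
        while True:
            fn, args, kwargs = self.tasks.get()
            self.results.append(fn(*args, **kwargs))
            self.tasks.task_done()
            time.sleep(self.interval)  # fixed delay between calls

    def join(self):
        """Block until every queued task has been processed."""
        self.tasks.join()
```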
### Example: Rate-Limited Bot
```python
import time
import requests

class ForecastingBot:
    def __init__(self, api_token: str):
        self.api_token = api_token
        self.headers = {'Authorization': f'Token {api_token}'}
        self.min_request_interval = 1.0  # seconds between requests
        self.last_request_time = 0.0

    def _throttle(self):
        """Ensure minimum time between requests."""
        elapsed = time.time() - self.last_request_time
        if elapsed < self.min_request_interval:
            time.sleep(self.min_request_interval - elapsed)
        self.last_request_time = time.time()

    def get_open_questions(self):
        """Fetch open questions with rate limiting."""
        self._throttle()
        response = requests.get(
            'https://www.metaculus.com/api/posts/',
            headers=self.headers,
            params={
                'statuses': 'open',
                'forecast_type': 'binary',
                'limit': 50
            }
        )
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited. Waiting {retry_after}s...")
            time.sleep(retry_after)
            return self.get_open_questions()  # Retry
        response.raise_for_status()
        return response.json()

# Usage
bot = ForecastingBot('YOUR_API_TOKEN')
questions = bot.get_open_questions()
print(f"Found {len(questions['results'])} open questions")
```
## Fair Use Guidelines
To maintain good standing with the API:
- **Be respectful** - Don’t hammer the API with excessive requests
- **Use efficient queries** - Filter and paginate appropriately
- **Handle errors gracefully** - Implement proper retry logic
- **Cache when possible** - Don’t repeatedly fetch the same data
- **Monitor your usage** - Keep track of request volumes
## Requesting Higher Limits
If you have a legitimate use case requiring higher rate limits:
- Document your use case and expected request volume
- Explain how your application benefits the forecasting community
- Contact us at api-requests@metaculus.com
We’re happy to work with developers building valuable tools for the community!
## Additional Resources