The Blog API enforces rate limits on every endpoint to prevent excessive load on the server and the upstream data source. Rate limiting is handled by slowapi, a FastAPI-compatible wrapper around the limits library. Each limit is applied per client IP address, so different callers do not share a quota.
## Limits per endpoint
| Endpoint | Method | Limit |
|---|---|---|
| /blogs | GET | 5 requests / minute |
| /blogs/latest | GET | 5 requests / minute |
| /blogs/search | GET | 5 requests / minute |
| /blogs/cache | POST | 1 request / minute |
## How the limit key works
The API uses get_remote_address as the key function, which means the rate limit window is tracked per client IP address. Each unique IP gets its own independent counter for each endpoint. If you are behind a proxy or NAT, all requests from that shared IP count against the same quota.
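The per-IP behavior can be illustrated with a small fixed-window counter. This is a sketch of the behavior described above, not slowapi's actual implementation (slowapi delegates storage and window accounting to the limits library):

```python
from __future__ import annotations

import time


class FixedWindowLimiter:
    """Illustrative per-IP fixed-window rate limiter (sketch only)."""

    def __init__(self, limit: int, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        # Maps client IP -> (window start time, request count in that window).
        self._counters: dict[str, tuple[float, int]] = {}

    def allow(self, client_ip: str, now: float | None = None) -> bool:
        """Return True if this request fits within the client's quota."""
        now = time.monotonic() if now is None else now
        start, count = self._counters.get(client_ip, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window expired: start a fresh one
        if count >= self.limit:
            self._counters[client_ip] = (start, count)
            return False
        self._counters[client_ip] = (start, count + 1)
        return True
```

Because the counter is keyed by client IP, a sixth request from one address within the window is refused while a first request from a different address is still allowed.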
## What happens when you exceed a limit
When you exceed the allowed number of requests, the API returns an HTTP 429 Too Many Requests response. This is handled automatically by slowapi's built-in _rate_limit_exceeded_handler. The response body contains a plain-text or JSON message indicating the limit was exceeded.
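The default slowapi handler returns a JSON body along these lines (the exact message text depends on the slowapi version and on which limit was hit, so treat this as illustrative):

```json
{
  "error": "Rate limit exceeded: 5 per 1 minute"
}
```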
Treat a 429 as a signal to back off, not to retry immediately.
## Handling 429 responses in your client
The recommended approach is exponential backoff with jitter: wait a short interval after the first failure, double the wait on each subsequent failure, and add a small random offset to avoid synchronized retries from multiple clients.
For POST /blogs/cache, where the limit is 1 request per minute, set your initial wait to at least 60 seconds.
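The strategy above can be sketched as follows. `backoff_delay` is an illustrative helper, not part of the API; its parameter names and defaults are assumptions:

```python
import random


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 120.0,
                  jitter: float = 0.5) -> float:
    """Seconds to wait before retry number `attempt` (0-based).

    Doubles the base wait on each failure, caps the growth, and adds a
    small random offset so many clients do not retry in lockstep.
    For POST /blogs/cache, pass base=60.0.
    """
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0.0, jitter)


# Example retry loop (send_request is a placeholder for your HTTP call):
#
# for attempt in range(5):
#     response = send_request()
#     if response.status_code != 429:
#         break
#     time.sleep(backoff_delay(attempt))
```

The cap keeps the wait bounded when several consecutive retries fail, and the jitter term is what prevents a thundering herd of synchronized clients.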