## Overview
The Inbound API implements rate limiting to ensure fair usage and system stability. Rate limits are applied per API key (per user account).
## Rate Limit Configuration

**Current limit:** 10 requests per second per account
The rate limiter uses a sliding window algorithm powered by Upstash Redis:
```typescript
import { Ratelimit } from "@upstash/ratelimit"
import { Redis } from "@upstash/redis"

const redis = Redis.fromEnv()

const ratelimit = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(10, "1 s"), // 10 requests per 1-second window
  analytics: true,
  prefix: "e2:ratelimit"
})
```
This provides smooth, precise rate limiting that prevents burst traffic while allowing sustained usage.
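To make the algorithm concrete, here is an illustrative sketch of the weighted-count idea behind a sliding window. It mirrors the general technique, not Upstash's exact internals:

```typescript
// Illustrative sketch of a sliding-window request count: the previous
// fixed window is weighted by how much of it still overlaps the sliding
// window, then added to the current window's count.
function slidingWindowCount(
  prevWindowCount: number, // requests in the previous window
  currWindowCount: number, // requests so far in the current window
  elapsedMs: number,       // time elapsed in the current window
  windowMs = 1000
): number {
  const prevWeight = (windowMs - elapsedMs) / windowMs
  return prevWindowCount * prevWeight + currWindowCount
}

// 400 ms into a 1 s window with 8 prior and 4 current requests:
// 8 * 0.6 + 4 = 8.8, still under a limit of 10, so the request passes.
```

Because the previous window's count decays smoothly as time passes, a burst at a window boundary cannot double the effective limit the way a plain fixed-window counter would allow.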
Every API response includes rate limit information in the headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 7
X-RateLimit-Reset: 1705315200000
Content-Type: application/json
```
- `X-RateLimit-Limit`: Maximum number of requests allowed per window (e.g., `10`)
- `X-RateLimit-Remaining`: Number of requests remaining in the current window (e.g., `7`)
- `X-RateLimit-Reset`: Unix timestamp in milliseconds when the rate limit window resets (e.g., `1705315200000`)
## Checking Rate Limit Status
Monitor your rate limit status using the response headers:
```typescript
const response = await fetch('https://inbound.new/api/e2/domains', {
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  }
})

const rateLimit = {
  limit: parseInt(response.headers.get('X-RateLimit-Limit') || '0'),
  remaining: parseInt(response.headers.get('X-RateLimit-Remaining') || '0'),
  reset: parseInt(response.headers.get('X-RateLimit-Reset') || '0')
}

console.log(`Rate limit: ${rateLimit.remaining}/${rateLimit.limit} remaining`)
console.log(`Resets at: ${new Date(rateLimit.reset).toISOString()}`)
```
## Handling 429 Responses
When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705315201000
Retry-After: 3
Content-Type: application/json; charset=utf-8

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Maximum 10 requests per second. Retry after 3 seconds.",
  "statusCode": 429
}
```
The `Retry-After` header gives the number of seconds to wait before retrying (per RFC 6585).
## Retry Logic
When you receive a 429 response, wait for the interval given by `Retry-After`, using exponential backoff as a fallback:
```typescript
async function makeRequestWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 3
) {
  let retries = 0

  while (retries < maxRetries) {
    const response = await fetch(url, options)

    if (response.status === 429) {
      // Honor Retry-After when present; otherwise fall back to
      // exponential backoff (1s, 2s, 4s, ...)
      const retryAfter = response.headers.get('Retry-After')
      const waitTime = retryAfter
        ? parseInt(retryAfter) * 1000
        : Math.pow(2, retries) * 1000

      console.log(`Rate limited. Waiting ${waitTime / 1000} seconds...`)
      await new Promise(resolve => setTimeout(resolve, waitTime))

      retries++
      continue
    }

    return response
  }

  throw new Error('Max retries exceeded')
}

// Usage
const response = await makeRequestWithRetry(
  'https://inbound.new/api/e2/domains',
  {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    }
  }
)
```
## Best Practices

### 1. Implement Client-Side Rate Limiting
Prevent hitting rate limits by tracking requests locally:
```typescript
class RateLimitedClient {
  private requestTimes: number[] = []
  private maxRequests = 10
  private windowMs = 1000

  private async waitForSlot() {
    const now = Date.now()

    // Remove old requests outside the window
    this.requestTimes = this.requestTimes.filter(
      time => now - time < this.windowMs
    )

    // If at limit, wait until the oldest request expires
    if (this.requestTimes.length >= this.maxRequests) {
      const oldestTime = this.requestTimes[0]
      const waitTime = this.windowMs - (now - oldestTime) + 100 // +100ms buffer
      await new Promise(resolve => setTimeout(resolve, waitTime))
      return this.waitForSlot() // Re-check after waiting
    }

    this.requestTimes.push(now)
  }

  async request(url: string, options: RequestInit) {
    await this.waitForSlot()
    return fetch(url, options)
  }
}
```
### 2. Batch Operations
When possible, use list endpoints with pagination instead of making individual requests:
```typescript
// ❌ Bad: Multiple requests
for (const id of domainIds) {
  await inbound.domains.get(id)
}

// ✅ Good: Single paginated request
const { data } = await inbound.domains.list({ limit: 100 })
```
### 3. Cache Responses
Cache responses that don’t change frequently:
```typescript
const cache = new Map()

async function getDomainWithCache(domainId: string) {
  if (cache.has(domainId)) {
    return cache.get(domainId)
  }

  const domain = await inbound.domains.get(domainId)
  cache.set(domainId, domain)

  // Clear cache after 5 minutes
  setTimeout(() => cache.delete(domainId), 5 * 60 * 1000)

  return domain
}
```
### 4. Use Webhooks Instead of Polling
Instead of polling for new emails, use webhooks:
```typescript
// ❌ Bad: Polling every second
setInterval(async () => {
  const emails = await inbound.emails.list({ limit: 10 })
  // Process emails...
}, 1000)

// ✅ Good: Webhook endpoint
export async function POST(request: Request) {
  const payload = await request.json()
  // Process email immediately when it arrives
}
```
## Rate Limit Tiers
Rate limits may vary based on your plan:
| Plan | Requests/Second |
|------|-----------------|
| Free | 10 |
| Pro | 10 |
| Enterprise | Custom |
Enterprise plans can request custom rate limits; contact support to configure them.
## Fail-Closed Security
The API implements a fail-closed security pattern:
- If rate limiting services are unavailable, requests are blocked by default
- This prevents abuse during system degradation
- Development environments can override this with `ALLOW_REQUESTS_WITHOUT_RATE_LIMIT=true`
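A minimal sketch of what fail-closed means in practice (the function and limiter shape here are assumptions for illustration, not the server's actual code):

```typescript
// Sketch of a fail-closed check: if the limiter errors or is missing,
// the request is denied unless a development-only override is set.
interface Limiter {
  limit(key: string): Promise<{ success: boolean }>
}

async function checkRateLimit(
  limiter: Limiter | null,
  key: string,
  env: Record<string, string | undefined> = process.env
): Promise<boolean> {
  try {
    if (!limiter) throw new Error('rate limiter unavailable')
    const { success } = await limiter.limit(key)
    return success
  } catch {
    // Fail closed: block by default when the limiter cannot answer
    return env.ALLOW_REQUESTS_WITHOUT_RATE_LIMIT === 'true'
  }
}
```

The opposite choice, fail-open, would wave requests through whenever Redis is down, which is exactly when abuse is hardest to detect.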
## Monitoring Rate Limits
Track your rate limit usage:
```typescript
function logRateLimitStatus(response: Response) {
  const limit = response.headers.get('X-RateLimit-Limit')
  const remaining = response.headers.get('X-RateLimit-Remaining')
  const reset = response.headers.get('X-RateLimit-Reset')

  if (remaining && limit) {
    const percentage = (parseInt(remaining) / parseInt(limit)) * 100
    if (percentage < 20) {
      console.warn(`⚠️ Rate limit low: ${remaining}/${limit} remaining`)
    }
  }
}
```
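Beyond logging, the same headers can drive proactive throttling. One possible policy, sketched below (the policy is an assumption for illustration, not part of the API):

```typescript
// Suggest how long to pause before the next request, based on the
// X-RateLimit-Remaining and X-RateLimit-Reset values already parsed.
function suggestedDelayMs(
  remaining: number,
  resetMs: number,            // X-RateLimit-Reset (epoch milliseconds)
  now: number = Date.now()
): number {
  if (remaining > 0) return 0          // budget left: no need to wait
  return Math.max(0, resetMs - now)    // exhausted: wait for the reset
}
```

Sleeping for `suggestedDelayMs(...)` before each call avoids 429s entirely in steady-state, rather than reacting to them after the fact.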
## RFC Compliance
Inbound’s rate limiting follows industry standards:
- RFC 6585: `429 Too Many Requests` status code
- RFC 7231: `Retry-After` header semantics, plus standard HTTP status codes