Overview
Openfront’s API is designed to handle high-volume traffic, but rate limiting may be implemented in production environments to ensure fair usage and system stability.
Current Status
Rate limiting is currently not enabled by default in Openfront. The infrastructure is in place but commented out in the codebase.
The rate limiting code exists in the Keystone configuration but is disabled:
// From keystone/index.ts:14-18
// Add rate limiting on storefront queries and mutations
// import { ApolloArmor } from "@escape.tech/graphql-armor";
// import { applyMiddleware } from "graphql-middleware";
// import { RateLimiterMemory } from "rate-limiter-flexible";
// import { applyRateLimiting } from "./applyRateLimiting";
Planned Implementation
When enabled, Openfront will use the following technologies for rate limiting:
Technologies
Apollo Armor - GraphQL security and rate limiting
rate-limiter-flexible - Flexible rate limiting for Node.js
graphql-middleware - Middleware layer for GraphQL
Architecture
When enabled, the configuration wires Apollo Armor into the Apollo Server settings and applies the rate limiting middleware to the extended GraphQL schema:
// Planned configuration (from keystone/index.ts:397-431)
const armor = new ApolloArmor();

graphql: {
  apolloConfig: {
    ...armor.protect()
  },
  extendGraphqlSchema: (schema) => {
    const extendedSchema = extendGraphqlSchema(schema);
    return applyMiddleware(extendedSchema, applyRateLimiting);
  }
}
Why Rate Limiting Matters
Even though rate limiting isn’t currently enforced, understanding the concept is important:
Protection Benefits
Prevent Abuse : Stop malicious actors from overwhelming your API
Fair Usage : Ensure all users get reasonable access
Cost Control : Limit computational resources consumed
System Stability : Prevent cascading failures from traffic spikes
Common Scenarios
Brute Force Attacks : Multiple rapid authentication attempts
Data Scraping : Automated tools extracting large amounts of data
DDoS Protection : Distributed denial-of-service mitigation
Accidental Loops : Client bugs causing infinite request loops
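The core mechanism behind all of these protections is the same: count requests per client within a time window and reject anything over the threshold. A minimal fixed-window sketch in TypeScript (illustrative only — the planned implementation uses rate-limiter-flexible, which also handles memory cleanup and distributed stores):

```typescript
// Minimal fixed-window rate limiter (illustrative only).
class FixedWindowLimiter {
  private counts = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private points: number,     // requests allowed per window
    private durationMs: number, // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if rate limited.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.durationMs) {
      // First request in a fresh window: reset the counter
      this.counts.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count < this.points) {
      entry.count += 1;
      return true;
    }
    return false;
  }
}
```

Each client (user ID or IP) gets an independent counter, so one abusive client cannot exhaust another client's quota.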
Best Practices
Even without enforced rate limits, follow these best practices:
1. Batch Requests Efficiently
Use GraphQL’s batching capabilities to reduce request count:
# Good: Fetch multiple resources in one request
query GetDashboardData {
  products(take: 10) {
    id
    title
  }
  orders(take: 10) {
    id
    displayId
  }
  customers: users(take: 10) {
    id
    email
  }
}

# Avoid: Making separate requests for each resource
2. Paginate Through Large Result Sets
Limit result sets and paginate through data:
# Good: Paginated query
query GetProducts($skip: Int!, $take: Int!) {
  products(skip: $skip, take: $take) {
    id
    title
  }
}

# Variables: { "skip": 0, "take": 20 }
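Client code can walk through the pages with a small helper. This is a hypothetical `fetchAllPages` utility; the `fetchPage` callback stands in for whatever function issues the actual GraphQL request:

```typescript
// Generic pagination helper: fetches pages of `take` items until a short
// (or empty) page signals the end of the data set.
// `fetchPage` is a placeholder for your actual GraphQL request function.
async function fetchAllPages<T>(
  fetchPage: (skip: number, take: number) => Promise<T[]>,
  take: number = 20,
): Promise<T[]> {
  const all: T[] = [];
  let skip = 0;
  while (true) {
    const page = await fetchPage(skip, take);
    all.push(...page);
    if (page.length < take) break; // last page reached
    skip += take;
  }
  return all;
}
```

In practice, prefer fetching only as many pages as the UI needs rather than draining the whole collection in one go.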
3. Implement Client-Side Caching
Cache responses to reduce redundant requests:
// Example with Apollo Client
import { ApolloClient, InMemoryCache } from '@apollo/client';

const client = new ApolloClient({
  uri: 'https://your-domain.com/api/graphql',
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          products: {
            // Cache separately per filter/sort arguments
            keyArgs: ['where', 'orderBy'],
            merge(existing, incoming) {
              return incoming;
            }
          }
        }
      }
    }
  })
});
4. Debounce User Input
Delay search queries until the user stops typing:
// React example with debounce
import { useState, useEffect } from 'react';
import { debounce } from 'lodash';

function ProductSearch() {
  const [searchTerm, setSearchTerm] = useState('');
  const [debouncedTerm, setDebouncedTerm] = useState('');

  useEffect(() => {
    const handler = debounce(() => {
      setDebouncedTerm(searchTerm);
    }, 500); // Wait 500ms after the user stops typing

    handler();
    return () => handler.cancel();
  }, [searchTerm]);

  useEffect(() => {
    if (debouncedTerm) {
      // Execute GraphQL query
      fetchProducts(debouncedTerm);
    }
  }, [debouncedTerm]);

  return (
    <input
      type="text"
      value={searchTerm}
      onChange={(e) => setSearchTerm(e.target.value)}
      placeholder="Search products..."
    />
  );
}
5. Use WebSockets for Real-Time Data
For frequently updating data, use subscriptions instead of polling:
# Instead of polling every second
# Use GraphQL subscriptions (when implemented)
subscription OrderUpdates {
  orderStatusChanged {
    id
    status
    updatedAt
  }
}
Enabling Rate Limiting
If you need to enable rate limiting in your deployment:
Step 1: Install Dependencies
npm install @escape.tech/graphql-armor rate-limiter-flexible graphql-middleware
Step 2: Uncomment the Configuration
In source/features/keystone/index.ts, uncomment:
// Lines 15-18: Uncomment imports
import { ApolloArmor } from "@escape.tech/graphql-armor";
import { applyMiddleware } from "graphql-middleware";
import { RateLimiterMemory } from "rate-limiter-flexible";
import { applyRateLimiting } from "./applyRateLimiting";

// Line 397: Uncomment armor initialization
const armor = new ApolloArmor();

// Lines 422-431: Uncomment GraphQL config
graphql: {
  apolloConfig: {
    ...armor.protect()
  },
  extendGraphqlSchema: (schema) => {
    const extendedSchema = extendGraphqlSchema(schema);
    return applyMiddleware(extendedSchema, applyRateLimiting);
  }
}
Step 3: Create the Rate Limiting Middleware
Create or modify source/features/keystone/applyRateLimiting.ts:
import { RateLimiterMemory } from 'rate-limiter-flexible';

// Configure rate limiter
const rateLimiter = new RateLimiterMemory({
  points: 100, // Number of requests
  duration: 60, // Per 60 seconds
});

export const applyRateLimiting = async (resolve, root, args, context, info) => {
  // Rate limit by user ID when authenticated, otherwise by IP address
  const userId = context.session?.itemId || context.req?.ip;
  try {
    await rateLimiter.consume(userId);
    return resolve(root, args, context, info);
  } catch (error) {
    throw new Error('Rate limit exceeded. Please try again later.');
  }
};
Step 4: Test Implementation
Test that rate limiting works:
# Make 101 requests rapidly (the 101st should fail)
for i in {1..101}; do
  curl -X POST http://localhost:3000/api/graphql \
    -H "Content-Type: application/json" \
    -d '{"query":"query { products(take:1) { id } }"}'
  echo "Request $i"
done
Monitoring API Usage
Track your API usage even without rate limiting:
1. API Key Usage Tracking
Openfront automatically tracks API key usage:
query GetApiKeyUsage {
  apiKeys {
    id
    name
    usageCount {
      total
      daily
    }
    lastUsedAt
  }
}
2. Application Monitoring
Use application performance monitoring (APM) tools:
Datadog : Full-stack monitoring
New Relic : Application performance insights
Sentry : Error tracking and performance
CloudWatch : AWS native monitoring
3. Database Query Monitoring
Monitor database query performance:
// Add Prisma query logging
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient({
  log: ['query', 'info', 'warn', 'error'],
});
Rate Limit Headers
When rate limiting is enabled, responses will include headers:
X-RateLimit-Limit : Maximum number of requests allowed per window
X-RateLimit-Remaining : Number of requests remaining in current window
X-RateLimit-Reset : Time when the rate limit window resets
Retry-After : Seconds to wait before retrying (only sent when rate limit exceeded)
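A small client-side helper can collect these values from a response. The `X-RateLimit-*` names follow the common convention, and the `parseRateLimitHeaders` helper itself is illustrative — confirm the exact header names your deployment emits:

```typescript
// Parses conventional rate limit headers from a response.
// The getter abstraction works with fetch's Headers.get() or any
// case-insensitive header lookup.
interface RateLimitInfo {
  limit: number | null;      // max requests per window
  remaining: number | null;  // requests left in the current window
  reset: number | null;      // time the window resets
  retryAfter: number | null; // seconds to wait (only present when limited)
}

function parseRateLimitHeaders(get: (name: string) => string | null): RateLimitInfo {
  const num = (name: string) => {
    const raw = get(name);
    return raw === null ? null : Number(raw);
  };
  return {
    limit: num("X-RateLimit-Limit"),
    remaining: num("X-RateLimit-Remaining"),
    reset: num("X-RateLimit-Reset"),
    retryAfter: num("Retry-After"),
  };
}
```

With a fetch response this would be called as `parseRateLimitHeaders((name) => response.headers.get(name))`.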
Error Handling
When rate limited, handle errors gracefully:
async function makeRequest() {
  try {
    const response = await fetch('https://your-domain.com/api/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: '...' })
    });

    if (response.status === 429) {
      // Rate limited: wait the server-specified delay, then retry
      const retryAfter = Number(response.headers.get('Retry-After')) || 1;
      console.log(`Rate limited. Retry after ${retryAfter} seconds`);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      return makeRequest(); // Retry
    }

    return response.json();
  } catch (error) {
    console.error('Request failed:', error);
  }
}
Query Optimization
Optimize queries to reduce load:
1. Use Fragments
Reuse common field selections:
fragment ProductBasics on Product {
  id
  title
  handle
  thumbnail
}

query GetProducts {
  featured: products(where: { isFeatured: { equals: true } }) {
    ...ProductBasics
  }
  recent: products(orderBy: { createdAt: desc }, take: 5) {
    ...ProductBasics
  }
}
2. Limit Nested Queries
Avoid deeply nested queries:
# Avoid: Deep nesting
query TooManyLevels {
  products {
    productVariants {
      prices {
        currency {
          region {
            countries {
              # Too deep!
            }
          }
        }
      }
    }
  }
}

# Better: Fetch only what you need
query OptimizedQuery {
  products {
    id
    title
    productVariants {
      id
      prices {
        amount
        currency { code }
      }
    }
  }
}
3. Use DataLoader Pattern
Implement DataLoader for N+1 query prevention (advanced):
import DataLoader from 'dataloader';

// Assumes a PrismaClient instance named `prisma` is in scope
const productLoader = new DataLoader(async (ids) => {
  const products = await prisma.product.findMany({
    where: { id: { in: ids } }
  });
  // Return products in the same order as the requested ids
  return ids.map(id => products.find(p => p.id === id));
});
Production Recommendations
For production deployments, consider implementing rate limiting to protect your infrastructure.
Recommended Limits
Authenticated Users : 1,000 requests per 15 minutes
API Keys : 5,000 requests per 15 minutes
Anonymous Users : 100 requests per 15 minutes
Webhooks : No rate limit (validate signatures instead)
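These recommendations can be captured as a small configuration sketch. The tier names and the `pickTier` context shape are assumptions for illustration; actual session and API key detection depends on how your deployment attaches auth information:

```typescript
// Recommended limits expressed as configuration (tier names illustrative).
// `duration` is in seconds, matching rate-limiter-flexible's convention.
const FIFTEEN_MINUTES = 15 * 60;

const rateLimitTiers = {
  authenticated: { points: 1_000, duration: FIFTEEN_MINUTES },
  apiKey:        { points: 5_000, duration: FIFTEEN_MINUTES },
  anonymous:     { points: 100,   duration: FIFTEEN_MINUTES },
} as const;

type Tier = keyof typeof rateLimitTiers;

// Picks a tier from request context. The `apiKey` / `sessionId` fields are
// hypothetical; adapt to however your middleware exposes auth state.
function pickTier(ctx: { apiKey?: string; sessionId?: string }): Tier {
  if (ctx.apiKey) return "apiKey";
  if (ctx.sessionId) return "authenticated";
  return "anonymous";
}
```

With rate-limiter-flexible, each tier would typically map to its own limiter instance, keyed by user ID, API key, or IP as appropriate.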
Deployment Considerations
Load Balancer : Implement rate limiting at the load balancer level
CDN : Use CDN caching for static resources
Redis : Use Redis instead of in-memory rate limiter for distributed systems
Monitoring : Set up alerts for unusual traffic patterns
Next Steps
API Overview : Learn about the GraphQL API structure
Authentication : Understand authentication methods