The authx-extra package provides Prometheus metrics collection through a middleware that automatically tracks request counts, response times, and other key metrics for your FastAPI application.
How Prometheus metrics work
Metrics collection with Prometheus relies on the pull model, meaning that Prometheus is responsible for getting metrics (scraping) from the services that it monitors. This is different from other tools like Graphite, which passively wait for clients to push their metrics to a known server.

Exposing and scraping metrics
Clients have only one responsibility: make their metrics available for a Prometheus server to scrape. This is done by exposing an HTTP endpoint, usually /metrics, which returns the full list of metrics (with label sets) and their values. This endpoint is very cheap to call as it simply outputs the current value of each metric without doing any calculation.
On the Prometheus server side, each target (statically defined or dynamically discovered) is scraped at a regular interval (scrape interval). Each scrape reads the /metrics endpoint to get the current state of the client metrics and persists the values in the Prometheus time-series database.
Add metrics to your application
Add the MetricsMiddleware to your FastAPI application and expose the /metrics endpoint:
MetricsMiddleware configuration
The MetricsMiddleware class accepts several configuration parameters:
Parameters
- app: A FastAPI instance representing your FastAPI application
- prefix: A string specifying the prefix for Prometheus metrics (default: "authx_")
- buckets: A tuple of float values representing the histogram buckets for request durations, in seconds (default: (0.002, 0.05, 0.1, +Inf))
Collected metrics
The middleware automatically collects the following metrics:

Request count
A Prometheus Counter metric that tracks the total number of requests:

- Name: {prefix}request_count
- Type: Counter
- Labels: method, path, status
Request duration
A Prometheus Histogram metric that tracks the duration of requests:

- Name: {prefix}request_time
- Type: Histogram
- Labels: method, path, status
- Buckets: Configurable (default: 0.002s, 0.05s, 0.1s, +Inf)
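Scraping the /metrics endpoint returns these metrics in the Prometheus text exposition format, along these lines (illustrative values, with the default authx_ prefix assumed):

```
authx_request_count{method="GET",path="/items",status="200"} 42.0
authx_request_time_bucket{le="0.05",method="GET",path="/items",status="200"} 40.0
authx_request_time_bucket{le="+Inf",method="GET",path="/items",status="200"} 42.0
authx_request_time_count{method="GET",path="/items",status="200"} 42.0
authx_request_time_sum{method="GET",path="/items",status="200"} 1.87
```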
How the middleware works
The MetricsMiddleware class inherits from Starlette’s BaseHTTPMiddleware and implements the dispatch method, which is called for each request.
For each request, the middleware:
- Records the request method, path, and start time
- Calls the next middleware or request handler in the pipeline
- Captures the response status code
- Measures the total time taken to process the request
- Updates the corresponding Prometheus metrics
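The steps above can be sketched in plain Python. This is a simplified illustration only: the dict-based counters below stand in for the prometheus_client Counter and Histogram objects the middleware actually updates, and the request/response dicts stand in for Starlette objects:

```python
import time

# Simplified stand-ins for the Prometheus metrics (illustration only)
request_count = {}  # (method, path, status) -> number of requests
request_time = {}   # (method, path, status) -> observed durations in seconds


def dispatch(request, call_next):
    # 1. Record the request method, path, and start time
    method, path = request["method"], request["path"]
    start = time.perf_counter()

    # 2. Call the next middleware or request handler in the pipeline
    response = call_next(request)

    # 3. Capture the response status code
    status = response["status"]

    # 4. Measure the total time taken to process the request
    duration = time.perf_counter() - start

    # 5. Update the corresponding metrics
    key = (method, path, status)
    request_count[key] = request_count.get(key, 0) + 1
    request_time.setdefault(key, []).append(duration)
    return response
```

Calling `dispatch({"method": "GET", "path": "/items"}, lambda req: {"status": 200})` records one request under the label set ("GET", "/items", 200).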
View your metrics
Once you’ve added the middleware and metrics endpoint, you can view your metrics by visiting the /metrics path of your running application.

Configure Prometheus to scrape metrics
Add your application to your Prometheus configuration file (prometheus.yml):
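A minimal static configuration might look like this (the job name and target host:port are placeholders for your deployment):

```yaml
scrape_configs:
  - job_name: "fastapi-app"          # placeholder job name
    scrape_interval: 15s             # how often Prometheus reads /metrics
    static_configs:
      - targets: ["localhost:8000"]  # placeholder host:port of your app
```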
For production deployments, consider using service discovery mechanisms instead of static targets.
Custom metric prefixes
You can customize the metric prefix to avoid collisions with other services. With the prefix set to "my_service_", for example, the metrics are exposed as my_service_request_count and my_service_request_time.
Custom histogram buckets
Adjust the histogram buckets to match your application’s performance characteristics. Choose buckets that align with your service level objectives (SLOs); this helps you better understand your application’s performance distribution.
Next steps
- Profiling: profile slow requests to identify bottlenecks
- Redis cache: add caching to improve response times