## Grafana - Metrics Visualization

### Accessing Grafana

Grafana displays metrics collected from KrakenD and stored in InfluxDB.

- URL: http://localhost:4000
- Credentials: `admin` / `admin`
### What to Look For

Grafana dashboards are pre-configured to show:

- Request rates - Requests per second to KrakenD endpoints
- Response times - Latency percentiles (p50, p95, p99)
- Error rates - HTTP error responses and backend failures
- Backend performance - Individual upstream service metrics
- Connection stats - Active connections and connection pool usage
- Router metrics - Endpoint-specific performance data
### Dashboard Setup

Dashboards and datasources are automatically provisioned through volume mounts.

## Jaeger - Distributed Tracing
### Accessing Jaeger

Jaeger provides end-to-end request tracing through KrakenD and your backend services.

- URL: http://localhost:16686
- No credentials required
### What to Look For
- Trace timeline - Visual representation of request flow through services
- Span details - Individual operation timing and metadata
- Service dependencies - Which services KrakenD calls for each endpoint
- Latency breakdown - Time spent in each service/operation
- Error traces - Failed requests with error details and stack context
### Using Jaeger

- Select `krakend` from the Service dropdown
- Click Find Traces to see recent requests
- Click on any trace to see the detailed timeline
- Look for spans that show:
  - Total request duration
  - Individual backend call times
  - Any failed operations (shown in red)
### Trace Components

Each trace includes:

- Gateway span - KrakenD processing time
- Backend spans - Individual upstream service calls
- Parallel requests - Multiple backend calls shown concurrently
- Sequential requests - Chained calls with dependencies
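The parallel spans you see in a trace correspond to endpoints that aggregate several backends: KrakenD fires those backend calls concurrently and merges the responses. A minimal sketch of such an endpoint in `krakend.json` (the endpoint path, hosts, and URL patterns are hypothetical, not taken from the playground's actual config):

```json
{
  "endpoint": "/composite",
  "backend": [
    { "host": ["http://service-a:8080"], "url_pattern": "/a" },
    { "host": ["http://service-b:8080"], "url_pattern": "/b" }
  ]
}
```

With two entries in `backend`, a single request to `/composite` produces one gateway span plus two concurrent backend spans in Jaeger.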
## Kibana - Log Analysis

### Accessing Kibana

Kibana provides log search and visualization for logs processed by Logstash and stored in Elasticsearch.

- URL: http://localhost:5601
- No credentials required
### Importing the Dashboard

After starting the playground, import the pre-configured dashboard.

### What to Look For
- Error logs - Failed requests and error messages
- Access logs - Request/response details for all traffic
- Debug information - Detailed KrakenD processing logs (when debug mode is enabled)
- Backend errors - Upstream service failures and timeouts
- Configuration changes - Logs showing config reload events
### Creating Index Patterns

If needed, create an index pattern to view logs:

- Navigate to Stack Management > Index Patterns
- Create a pattern matching your log indices (e.g., `logstash-*`)
- Select `@timestamp` as the time field
### Log Pipeline

Logs flow from KrakenD through Logstash into Elasticsearch, where Kibana reads them. The pipeline is defined in `config/logstash/logstash.conf`.
## InfluxDB - Metrics Storage

InfluxDB stores time-series metrics from KrakenD.

- URL: http://localhost:8086
- Database: `krakend`
- User: `krakend-dev`
- Password: `pas5w0rd`
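For reference, a sketch of how KrakenD can push metrics to this database through its extended configuration. The namespace below follows the v1 krakend-influx middleware layout; the exact namespace and intervals depend on the KrakenD version, so treat this as an assumption rather than the playground's actual `krakend.json`:

```json
{
  "extra_config": {
    "github_com/letgoapp/krakend-influx": {
      "address": "http://influxdb:8086",
      "ttl": "25s",
      "db": "krakend",
      "username": "krakend-dev",
      "password": "pas5w0rd"
    },
    "github_com/devopsfaith/krakend-metrics": {
      "collection_time": "30s"
    }
  }
}
```

The credentials must match the InfluxDB values listed above, and the metrics collector must be enabled for the exporter to have anything to send.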
## Troubleshooting Observability Stack

### Grafana Shows No Data

Cause: InfluxDB may not be receiving metrics from KrakenD.

Solution:

- Check that the KrakenD configuration has the InfluxDB exporter enabled
- Verify InfluxDB is running: `docker compose ps influxdb`
- Check InfluxDB logs: `docker compose logs influxdb`
- Ensure the datasource is configured correctly in Grafana
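When checking the Grafana datasource, it should point at the same database and credentials as InfluxDB. A sketch of what a provisioned datasource file might look like (the file path and datasource name are assumptions; the playground provisions its own files via volume mounts):

```yaml
# e.g. config/grafana/provisioning/datasources/influxdb.yaml (path is an assumption)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: krakend
    user: krakend-dev
    secureJsonData:
      password: pas5w0rd
```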
### Jaeger Shows No Traces

Cause: KrakenD may not be exporting traces, or no requests have been made.

Solution:

- Make some requests to KrakenD endpoints
- Verify Jaeger is running: `docker compose ps jaeger`
- Check the KrakenD configuration for the Jaeger exporter
- Look for Jaeger collector errors: `docker compose logs jaeger`
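When checking the KrakenD configuration, the Jaeger exporter lives in the OpenCensus extended config. A sketch following the v1 krakend-opencensus layout (the namespace and endpoint depend on the KrakenD version, so this is an assumption rather than the playground's actual config):

```json
{
  "extra_config": {
    "github_com/devopsfaith/krakend-opencensus": {
      "exporters": {
        "jaeger": {
          "endpoint": "http://jaeger:14268/api/traces",
          "service_name": "krakend"
        }
      }
    }
  }
}
```

The `service_name` value is what appears in Jaeger's Service dropdown, so it should match the `krakend` entry you select there.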
### Kibana Shows No Logs

Cause: The Logstash pipeline may not be processing logs correctly.

Solution:

- Check Logstash is running: `docker compose ps logstash`
- View Logstash logs: `make logs` or `docker compose logs -f logstash`
- Verify Elasticsearch is running: `docker compose ps elasticsearch`
- Check the Logstash pipeline configuration in `config/logstash/logstash.conf`
- Ensure the index pattern exists in Kibana
### Elasticsearch Connection Errors

Cause: Elasticsearch may be slow to start or out of memory.

Solution:

- Check Elasticsearch status: `docker compose logs elasticsearch`
- Verify the heap size is adequate (default: 1GB)
- Wait 30-60 seconds for Elasticsearch to fully initialize
- Increase `ES_JAVA_OPTS` in `docker-compose.yml` if needed
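Raising the heap means adjusting the JVM options passed to the Elasticsearch container. A sketch of the relevant `docker-compose.yml` fragment (the 2GB value is an example, not the playground's setting; keep `-Xms` and `-Xmx` equal):

```yaml
services:
  elasticsearch:
    environment:
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
```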
### Dashboard Import Fails

Cause: Kibana may not be fully initialized.

Solution:

- Wait for Kibana to finish starting up
- Retry: `make elastic`
- Check Kibana logs: `docker compose logs kibana`
- Manually import via the Kibana UI: Stack Management > Saved Objects > Import
### Missing Metrics in InfluxDB

Cause: The database or credentials may not be configured correctly.

Solution:

- Verify the InfluxDB environment variables in `docker-compose.yml`:

  ```
  INFLUXDB_DB=krakend
  INFLUXDB_USER=krakend-dev
  INFLUXDB_USER_PASSWORD=pas5w0rd
  ```

- Restart InfluxDB: `docker compose restart influxdb`
- Check that KrakenD is configured to use the same credentials
## Performance Considerations

### Resource Usage

The observability stack requires significant resources:

- Elasticsearch: 1GB heap (configurable via `ES_JAVA_OPTS`)
- Logstash: ~500MB RAM
- Grafana: ~100MB RAM
- InfluxDB: ~200MB RAM
- Jaeger: ~300MB RAM
### Disabling Components

To reduce resource usage, you can disable individual observability services; remember to also remove the corresponding exporter configuration from `krakend.json`.