The KrakenD Playground includes a complete observability stack with metrics, logs, and distributed tracing. This page explains how to access and use each component.

Grafana - Metrics Visualization

Accessing Grafana

Grafana displays metrics collected from KrakenD and stored in InfluxDB.

What to Look For

Grafana dashboards are pre-configured to show:
  • Request rates - Requests per second to KrakenD endpoints
  • Response times - Latency percentiles (p50, p95, p99)
  • Error rates - HTTP error responses and backend failures
  • Backend performance - Individual upstream service metrics
  • Connection stats - Active connections and connection pool usage
  • Router metrics - Endpoint-specific performance data
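
The percentile figures (p50, p95, p99) are nearest-rank statistics: pN is the latency below which N% of requests completed. A quick shell sketch with made-up latency values illustrates the calculation:

```shell
# Nearest-rank percentiles over a hypothetical list of request latencies (ms).
latencies="12 15 11 90 14 13 250 16 12 18"
sorted=$(echo $latencies | tr ' ' '\n' | sort -n)
n=$(echo "$sorted" | wc -l)
# nearest-rank: take the value at position ceil(N% * n) of the sorted list
p50=$(echo "$sorted" | sed -n "$(( (n*50 + 99) / 100 ))p")
p95=$(echo "$sorted" | sed -n "$(( (n*95 + 99) / 100 ))p")
p99=$(echo "$sorted" | sed -n "$(( (n*99 + 99) / 100 ))p")
echo "p50=$p50 p95=$p95 p99=$p99"   # → p50=14 p95=250 p99=250
```

Note how two slow outliers dominate p95/p99 while p50 stays low — which is why the dashboards show all three.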

Dashboard Setup

Dashboards and datasources are automatically provisioned through volume mounts:
volumes:
  - "./config/grafana/datasources/all.yml:/etc/grafana/provisioning/datasources/all.yml"
  - "./config/grafana/dashboards/all.yml:/etc/grafana/provisioning/dashboards/all.yml"
  - "./config/grafana/krakend:/var/lib/grafana/dashboards/krakend"
No manual configuration is required.
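
As an illustration, a datasource provisioning file for this stack could look like the following (a sketch, not the playground's actual file — the real one is config/grafana/datasources/all.yml, and field names follow Grafana's provisioning format for an InfluxDB 1.x datasource):

```yaml
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086
    database: krakend
    user: krakend-dev
    secureJsonData:
      password: pas5w0rd
    isDefault: true
```

Grafana reads everything under /etc/grafana/provisioning at startup, which is why the volume mounts above are all that's needed.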

Jaeger - Distributed Tracing

Accessing Jaeger

Jaeger provides end-to-end request tracing through KrakenD and your backend services.

What to Look For

  • Trace timeline - Visual representation of request flow through services
  • Span details - Individual operation timing and metadata
  • Service dependencies - Which services KrakenD calls for each endpoint
  • Latency breakdown - Time spent in each service/operation
  • Error traces - Failed requests with error details and stack context

Using Jaeger

  1. Select krakend from the Service dropdown
  2. Click Find Traces to see recent requests
  3. Click on any trace to see the detailed timeline
  4. Look for spans that show:
    • Total request duration
    • Individual backend call times
    • Any failed operations (shown in red)

Trace Components

Each trace includes:
  • Gateway span - KrakenD processing time
  • Backend spans - Individual upstream service calls
  • Parallel requests - Multiple backend calls shown concurrently
  • Sequential requests - Chained calls with dependencies

Kibana - Log Analysis

Accessing Kibana

Kibana provides log search and visualization for logs processed by Logstash and stored in Elasticsearch.

Importing the Dashboard

After starting the playground, import the pre-configured dashboard:
make elastic
This command runs:
curl -X POST "localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@config/elastic/dashboard.ndjson
The dashboard includes useful visualizations and saved searches for KrakenD logs.

What to Look For

  • Error logs - Failed requests and error messages
  • Access logs - Request/response details for all traffic
  • Debug information - Detailed KrakenD processing logs (when debug mode is enabled)
  • Backend errors - Upstream service failures and timeouts
  • Configuration changes - Logs showing config reload events

Creating Index Patterns

If needed, create an index pattern to view logs:
  1. Navigate to Stack Management > Index Patterns
  2. Create pattern matching your log indices (e.g., logstash-*)
  3. Select @timestamp as the time field

Log Pipeline

Logs flow through this pipeline:
KrakenD → Logstash (port 12201/udp) → Elasticsearch → Kibana
Logstash configuration: config/logstash/logstash.conf
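
Port 12201/udp is the GELF default, so the pipeline file likely pairs a GELF input with an Elasticsearch output. A minimal sketch (the real file is config/logstash/logstash.conf and may differ):

```
input {
  gelf {
    port => 12201
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

The daily index suffix is what makes the logstash-* index pattern above match.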

InfluxDB - Metrics Storage

InfluxDB stores time-series metrics from KrakenD. You typically don’t need to access InfluxDB directly - use Grafana instead.

Troubleshooting Observability Stack

Grafana Shows No Data

Cause: InfluxDB may not be receiving metrics from KrakenD.
Solution:
  1. Check KrakenD configuration has the InfluxDB exporter enabled
  2. Verify InfluxDB is running: docker compose ps influxdb
  3. Check InfluxDB logs: docker compose logs influxdb
  4. Ensure the datasource is configured correctly in Grafana
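
For step 1, the relevant piece of krakend.json is the metrics collector plus the InfluxDB exporter. A hedged sketch, with hostnames and credentials assumed from this playground's docker-compose:

```json
{
  "extra_config": {
    "telemetry/metrics": {
      "collection_time": "60s",
      "listen_address": ":8090"
    },
    "telemetry/influx": {
      "address": "http://influxdb:8086",
      "ttl": "25s",
      "buffer_size": 0,
      "db": "krakend",
      "username": "krakend-dev",
      "password": "pas5w0rd"
    }
  }
}
```

If telemetry/metrics is missing, telemetry/influx has nothing to push, so both namespaces need to be present.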

Jaeger Shows No Traces

Cause: KrakenD may not be exporting traces, or no requests have been made.
Solution:
  1. Make some requests to KrakenD endpoints
  2. Verify Jaeger is running: docker compose ps jaeger
  3. Check KrakenD configuration for the Jaeger exporter
  4. Look for Jaeger collector errors: docker compose logs jaeger
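
For step 3, tracing in KrakenD is configured through the OpenCensus exporter block. A sketch of what to look for (the jaeger hostname and collector port are assumptions based on this compose setup):

```json
{
  "extra_config": {
    "telemetry/opencensus": {
      "sample_rate": 100,
      "reporting_period": 1,
      "exporters": {
        "jaeger": {
          "endpoint": "http://jaeger:14268/api/traces",
          "service_name": "krakend"
        }
      }
    }
  }
}
```

A sample_rate below 100 means only a fraction of requests produce traces — worth checking before concluding the exporter is broken.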

Kibana Shows No Logs

Cause: The Logstash pipeline may not be processing logs correctly.
Solution:
  1. Check Logstash is running: docker compose ps logstash
  2. View Logstash logs: make logs or docker compose logs -f logstash
  3. Verify Elasticsearch is running: docker compose ps elasticsearch
  4. Check the Logstash pipeline configuration in config/logstash/logstash.conf
  5. Ensure the index pattern exists in Kibana
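
Since Logstash listens on 12201/udp (the GELF default), you can also test the pipeline end to end without involving KrakenD by pushing a hand-crafted GELF message with bash's /dev/udp redirection — the host and short_message values here are made up:

```shell
# Send one minimal GELF 1.1 message to Logstash over UDP.
# UDP is fire-and-forget: the send succeeds even if nothing acks it.
msg='{"version":"1.1","host":"manual-test","short_message":"pipeline check","level":6}'
if printf '%s' "$msg" > /dev/udp/127.0.0.1/12201; then
  result="sent"
else
  result="failed"
fi
echo "$result"
```

If the message never appears in Kibana, the problem is in Logstash or Elasticsearch rather than in KrakenD's log output.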

Elasticsearch Connection Errors

Cause: Elasticsearch may be slow to start or out of memory.
Solution:
  1. Check Elasticsearch status: docker compose logs elasticsearch
  2. Verify heap size is adequate (default: 1GB)
  3. Wait 30-60 seconds for Elasticsearch to fully initialize
  4. Increase ES_JAVA_OPTS in docker-compose.yml if needed
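
For step 4, the heap is set through the container environment. A sketch of the docker-compose.yml change, assuming the service is named elasticsearch (here raising the heap from the 1GB default to 2GB — keep -Xms and -Xmx equal):

```yaml
services:
  elasticsearch:
    environment:
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
```

Remember to also raise Docker's overall memory allocation accordingly.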

Dashboard Import Fails

Cause: Kibana may not be fully initialized.
Solution:
  1. Wait for Kibana to finish starting up
  2. Retry: make elastic
  3. Check Kibana logs: docker compose logs kibana
  4. Manually import via Kibana UI: Stack Management > Saved Objects > Import

Missing Metrics in InfluxDB

Cause: The database or credentials may not be configured correctly.
Solution:
  1. Verify InfluxDB environment variables in docker-compose.yml:
    • INFLUXDB_DB=krakend
    • INFLUXDB_USER=krakend-dev
    • INFLUXDB_USER_PASSWORD=pas5w0rd
  2. Restart InfluxDB: docker compose restart influxdb
  3. Check KrakenD is configured to use the correct credentials
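
Those environment variables belong to the InfluxDB 1.x image's first-run initialization; a sketch of the relevant compose service (the image tag is an assumption — check your docker-compose.yml):

```yaml
services:
  influxdb:
    image: influxdb:1.8
    environment:
      - "INFLUXDB_DB=krakend"
      - "INFLUXDB_USER=krakend-dev"
      - "INFLUXDB_USER_PASSWORD=pas5w0rd"
    ports:
      - "8086:8086"
```

Note that these variables only take effect on a fresh data volume: if the database was first created with different values, recreate the volume rather than just restarting the container.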

Performance Considerations

Resource Usage

The observability stack requires significant resources:
  • Elasticsearch: 1GB heap (configurable via ES_JAVA_OPTS)
  • Logstash: ~500MB RAM
  • Grafana: ~100MB RAM
  • InfluxDB: ~200MB RAM
  • Jaeger: ~300MB RAM
Ensure your Docker environment has at least 4GB RAM allocated.

Disabling Components

To reduce resource usage, you can disable observability services:
# Start only specific services
docker compose up krakend_ce fake_api web
Remember to also remove the corresponding exporters from krakend.json.
