Grafana Alloy is deployed as the primary telemetry collection agent in the Kimbernetes cluster. It runs as a DaemonSet on every node, collecting logs, metrics, and events, then routing them to appropriate backends (Loki for logs, Prometheus for metrics).
## HelmRelease Configuration
Alloy is deployed using the k8s-monitoring Helm chart:
`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: grafana-alloy
spec:
  timeout: 15m
  chart:
    spec:
      chart: k8s-monitoring
      sourceRef:
        kind: HelmRepository
        name: grafana
      version: 3.2.2
  interval: 24h
  releaseName: grafana-monitoring
  install:
    crds: Create
  upgrade:
    crds: CreateReplace
```
The k8s-monitoring chart includes Alloy configured specifically for Kubernetes observability with sensible defaults.
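To see exactly what those defaults are, the chart's full values file can be inspected locally (assumes Helm is installed and the `grafana` repository has not yet been added):

```shell
# Add the Grafana chart repository and dump the chart's default values
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm show values grafana/k8s-monitoring --version 3.2.2 | less
```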
## Architecture

Alloy is deployed in multiple modes:

- `alloy-logs`: DaemonSet for pod log collection
- `alloy-singleton`: single instance for cluster-wide event collection
- `alloy-metrics`: metrics collection (disabled in favor of Prometheus ServiceMonitors)
```yaml
alloy-logs:
  enabled: true
  enableReporting: false
alloy-singleton:
  enabled: true
  enableReporting: false
alloy-metrics:
  enabled: false
alloy-receiver:
  enabled: false
```
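With this split, the chart renders a DaemonSet for `alloy-logs` and a single-replica workload for `alloy-singleton`. A quick way to confirm what was actually deployed (workload names assume the `grafana-monitoring` release name from above):

```shell
kubectl get daemonset,deployment,statefulset -n observability | grep alloy
```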
## Log Collection

### Pod Logs
Alloy collects logs from all pods with rich label extraction:
`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
podLogs:
  enabled: true
  staticLabels:
    source: "pods"
  extraDiscoveryRules: |-
    rule {
      target_label = "log_source"
      replacement = "kim7s"
    }
    rule {
      source_labels = ["__meta_kubernetes_pod_controller_name"]
      regex = "([0-9a-z-.]+?)(-[0-9a-f]{8,10})?"
      target_label = "__tmp_controller_name"
    }
    rule {
      source_labels = [
        "__meta_kubernetes_pod_label_app_kubernetes_io_name",
        "__meta_kubernetes_pod_label_app",
        "__tmp_controller_name",
        "__meta_kubernetes_pod_name"
      ]
      regex = "^;*([^;]+)(;.*)?$"
      target_label = "app"
    }
    rule {
      source_labels = [
        "__meta_kubernetes_pod_label_app_kubernetes_io_instance",
        "__meta_kubernetes_pod_label_instance"
      ]
      regex = "^;*([^;]+)(;.*)?$"
      target_label = "instance"
    }
    rule {
      source_labels = ["__meta_kubernetes_pod_node_name"]
      target_label = "node_name"
    }
    rule {
      source_labels = ["__meta_kubernetes_pod_container_image"]
      target_label = "image"
    }
```
Alloy automatically extracts and preserves these labels:
```yaml
labelsToKeep:
  - app
  - cluster
  - container
  - flags
  - image
  - log_source
  - namespace
  - node_name
  - pod
  - stream
  - instance
  - source
```
```alloy
rule {
  source_labels = [
    "__meta_kubernetes_pod_label_app_kubernetes_io_name",
    "__meta_kubernetes_pod_label_app",
    "__tmp_controller_name",
    "__meta_kubernetes_pod_name"
  ]
  regex = "^;*([^;]+)(;.*)?$"
  target_label = "app"
}
```

Extracts the application name from the standard Kubernetes labels, falling back to the controller name and finally the pod name.

```alloy
rule {
  source_labels = ["__meta_kubernetes_pod_node_name"]
  target_label = "node_name"
}
```

Adds the node name to every log line for node-level filtering.

```alloy
rule {
  source_labels = ["__meta_kubernetes_pod_container_image"]
  target_label = "image"
}
```

Includes the container image for version tracking.
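The relabeling behaviour above can be sanity-checked outside the cluster. Prometheus-style relabeling joins `source_labels` with `;` before matching, which is what the `^;*([^;]+)(;.*)?$` pattern exploits to pick the first non-empty value. A minimal sketch in Python (the label values are invented):

```python
import re

# Pattern used by the chart to strip the ReplicaSet hash from a controller name.
CONTROLLER = re.compile(r"([0-9a-z-.]+?)(-[0-9a-f]{8,10})?")

# Relabeling joins source_labels with ";" before matching; this pattern
# captures the first non-empty value in the joined string.
FALLBACK = re.compile(r"^;*([^;]+)(;.*)?$")

def controller_base(name):
    """Return the controller name with any trailing pod-template hash removed."""
    m = CONTROLLER.fullmatch(name)
    return m.group(1) if m else name

def first_nonempty(values):
    """Mimic the 'app' rule: the first non-empty source label wins."""
    m = FALLBACK.match(";".join(values))
    return m.group(1) if m else None

print(controller_base("my-deploy-7c9f8b6d4f"))  # my-deploy
print(first_nonempty(["", "", "my-deploy", "my-deploy-7c9f8b6d4f-x2k9q"]))  # my-deploy
```

Both rules resolve to `my-deploy` here: the first strips the ReplicaSet hash, the second skips the empty `app.kubernetes.io/name` and `app` labels.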
### Node Logs

Alloy collects logs from the systemd journal on each node:

`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
nodeLogs:
  enabled: true
  labelsToKeep: ["instance", "level", "name", "unit", "service_name", "source"]
  journal:
    units:
      - kubelet.service
      - containerd.service
```
Node logs are essential for diagnosing infrastructure issues like container runtime problems or kubelet failures.
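To see the raw material Alloy's journal collector reads, the same units can be inspected directly on a node (run on the host, not in a pod):

```shell
# Last five kubelet journal entries, in the JSON form Alloy consumes
journalctl -u kubelet.service -n 5 -o json
```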
### Cluster Events

Kubernetes cluster events are collected and forwarded as logs:

`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
clusterEvents:
  enabled: true
  labelsToKeep: ["level", "namespace", "node", "source"]
```
Cluster events include:

- Pod scheduling events
- Container lifecycle events
- Volume mount events
- Resource quota violations
- Node status changes
## Destinations

Alloy routes logs to Loki via an internal service:

`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
destinations:
  - name: loki-grafana-cloud
    type: loki
    url: http://loki-monolith.observability.svc:3100/loki/api/v1/push
```
### Example: Adding Remote Write to Grafana Cloud

```yaml
destinations:
  - name: loki-local
    type: loki
    url: http://loki-monolith.observability.svc:3100/loki/api/v1/push
  - name: loki-grafana-cloud
    type: loki
    url: https://logs-prod-us-central1.grafana.net/loki/api/v1/push
    auth:
      username: "12345"
      passwordFrom:
        name: grafana-cloud-credentials
        key: password
```
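The `passwordFrom` reference above assumes a Secret of that name exists in the same namespace. In plain (unsealed) form it would look roughly like this; in this cluster the equivalent is delivered as a SealedSecret (see Access Policy below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-cloud-credentials
  namespace: observability
stringData:
  password: "<grafana-cloud-api-token>"  # placeholder; never commit real tokens
```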
## Metrics Collection

While Alloy can collect metrics, this cluster uses Prometheus ServiceMonitors instead:

```yaml
clusterMetrics:
  enabled: false
  kube-state-metrics:
    enabled: false
  cadvisor:
    enabled: false
  kubelet:
    enabled: false
  node-exporter:
    enabled: true
  controlPlane:
    enabled: false
  kepler:
    enabled: true
  windows-exporter:
    enabled: false
  opencost:
    enabled: false
```
Node-exporter and Kepler are enabled for host-level and energy metrics.
## Environment Variables

Alloy instances are configured with environment variables for cluster identification:

`overlays/base/grafana/grafana-alloy/helm-release.yaml`

```yaml
alloy:
  extraEnv:
    - name: CLUSTER_NAME
      value: ""
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
```
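These variables can then be referenced from Alloy pipeline configuration via the `sys.env` stdlib function. A hedged sketch that stamps a `cluster` label onto outgoing logs (the component wiring here is illustrative, not taken from this cluster's config):

```alloy
loki.process "add_cluster" {
  forward_to = [loki.write.default.receiver]

  // Read the env var injected via extraEnv and attach it as a static label.
  stage.static_labels {
    values = {
      cluster = sys.env("CLUSTER_NAME"),
    }
  }
}
```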
## Querying Alloy Logs in Loki

Use LogQL to query logs collected by Alloy. The selector values below are illustrative; substitute your own app, unit, and image names.

### Pod Logs by App

```logql
{app="my-app", namespace="default"}
```

### Node Logs

```logql
{unit="kubelet.service"}
```

### Cluster Events

```logql
{level="Warning", namespace="default"}
```

### Error Logs

```logql
{app="my-app"} |~ "(?i)error"
```

### Logs by Image Version

```logql
{app="my-app", image=~".+:v1.*"}
```
## Configuration Customization

To customize Alloy configuration, update the HelmRelease values:

1. Edit the HelmRelease:

```shell
kubectl edit helmrelease grafana-alloy -n observability
```

2. Add a custom pipeline:

```yaml
values:
  alloy-logs:
    alloy:
      extraConfig: |
        // Custom Alloy configuration
        loki.process "custom" {
          forward_to = [loki.write.default.receiver]

          stage.regex {
            expression = ".*(?P<status>\\d{3}).*"
          }

          stage.labels {
            values = {
              status = "",
            }
          }
        }
```
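That `stage.regex` pattern is easy to misread: the leading greedy `.*` means the named group captures the last three-digit run on the line, not the first. A quick check in Python (sample log lines are invented):

```python
import re

# Same expression as in the extraConfig above, unescaped from YAML.
STATUS = re.compile(r".*(?P<status>\d{3}).*")

# A single three-digit run is captured as expected.
print(STATUS.match("GET /api/items status=404 not found").group("status"))  # 404

# With two runs, the greedy leading .* makes the group grab the LAST one.
print(STATUS.match("status=500 took=123ms").group("status"))  # 123
```

If the first match is wanted, anchoring or a non-greedy prefix (`.*?`) is needed.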
3. Wait for reconciliation:

```shell
flux reconcile helmrelease grafana-alloy -n observability
```
## Access Policy

The cluster includes a SealedSecret for Grafana Cloud access:

`overlays/kimawesome/infrastructure/observability/grafana-alloy/access-policy.sealed.yaml`

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: grafana-cloud-credentials
  namespace: observability
```
This SealedSecret must be decrypted by the sealed-secrets controller. Ensure the controller is running before deploying Alloy with remote write enabled.
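If the credentials ever need to be rotated, the sealed file is regenerated from a plain Secret manifest with `kubeseal` (the input filename and controller namespace here are assumptions):

```shell
kubeseal --format yaml \
  --controller-namespace kube-system \
  < grafana-cloud-credentials.secret.yaml \
  > access-policy.sealed.yaml
```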
Troubleshooting
Check DaemonSet status: kubectl get daemonset -n observability -l app.kubernetes.io/name=alloy
kubectl describe daemonset grafana-monitoring-alloy-logs -n observability
View pod logs: kubectl logs -n observability -l app.kubernetes.io/name=alloy-logs
### Logs not appearing in Loki

1. Verify the Loki endpoint is reachable:

```shell
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -- \
  curl http://loki-monolith.observability.svc:3100/ready
```

2. Check Alloy write metrics:

```shell
kubectl port-forward -n observability ds/grafana-monitoring-alloy-logs 12345:12345
curl http://localhost:12345/metrics | grep loki_write
```

3. Review the discovery rules in the HelmRelease:

```shell
kubectl get helmrelease grafana-alloy -n observability -o yaml | grep -A 50 extraDiscoveryRules
```

4. Check the Alloy debug UI:

```shell
kubectl port-forward -n observability ds/grafana-monitoring-alloy-logs 12345:12345
# Open http://localhost:12345
```
## Performance Tuning

For high-volume environments, consider these optimizations:

```yaml
values:
  alloy-logs:
    alloy:
      resources:
        limits:
          memory: 1Gi
        requests:
          memory: 512Mi
          cpu: 500m
  global:
    scrapeInterval: 5m  # Reduce frequency for less critical metrics
  podLogs:
    maxRecordAge: 5m  # Only process recent logs
```
## Next Steps

- **Query in Loki**: search and analyze collected logs
- **Visualize in Grafana**: create dashboards for log patterns
- **Add Metrics**: configure ServiceMonitors
- **Overview**: return to the observability architecture