Kernel debugging requires a different toolkit than userspace debugging because the kernel runs without the safety net of an operating system beneath it. This page covers the spectrum of kernel debugging techniques, from simple log messages to post-mortem crash analysis, using real interfaces documented in the kernel source.
printk is the kernel’s primary logging function. Messages written with printk go into the kernel ring buffer and appear on the console if their log level is at or below the current console log level.
/* Basic printk with an explicit log level */
printk(KERN_INFO "my_driver: device found at IRQ %d\n", irq);

/* Preferred short-hand macros */
pr_emerg("System is about to crash\n");
pr_alert("Action required immediately\n");
pr_crit("Critical error in %s\n", __func__);
pr_err("Failed to allocate memory: %d\n", ret);
pr_warn("Deprecated API called from %pS\n", (void *)_RET_IP_);
pr_info("Device initialized successfully\n");
pr_debug("Buffer address: %px size: %zu\n", buf, size);
Read the ring buffer from userspace with dmesg:
# Show all messages with timestamps
dmesg -T
# Follow new messages as they arrive
dmesg -w
# Show only error-level messages and above
dmesg --level=err,crit,alert,emerg
# Clear the ring buffer (requires root)
dmesg -c
pr_debug and dev_dbg compile to no-ops unless either DEBUG is defined in the source file or dynamic debug is enabled at runtime, so compiled-out debug messages add zero overhead in production builds.
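For the compile-time route, one conventional approach is to define DEBUG from the module's kbuild Makefile (my_driver.o is a hypothetical object name used for illustration):

# Define DEBUG for a single object file...
CFLAGS_my_driver.o += -DDEBUG
# ...or for everything built by this Makefile
ccflags-y += -DDEBUG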
Dynamic debug allows you to enable and disable individual pr_debug and dev_dbg call sites at runtime without recompiling the kernel. Enable it by setting CONFIG_DYNAMIC_DEBUG=y. If /proc/dynamic_debug/control exists, your kernel supports dynamic debug. View the catalog of all available debug sites:
head -n7 /proc/dynamic_debug/control
# filename:lineno [module]function flags format
# init/main.c:1424 [main]run_init_process =_ " with arguments:\012"
The third column shows the current flags: =_ means no flags are set (the site is disabled), while =p means the site is enabled and will print. Enable specific debug messages by writing a query to the control file:
# Enable all debug messages from a module
echo "module usb +p" > /proc/dynamic_debug/control
# Enable messages in a specific source file
echo "file drivers/net/ethernet/intel/e1000e/netdev.c +p" > /proc/dynamic_debug/control
# Enable messages in a specific function
echo "func tcp_rcv_established +p" > /proc/dynamic_debug/control
# Disable all enabled sites
echo "-p" > /proc/dynamic_debug/control
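Besides p, a query can set decorator flags that prefix each message with context, which helps when many sites share similar format strings:

# Also print module name (m), function name (f), and line number (l)
echo "module usb +pmfl" > /proc/dynamic_debug/control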
To see dynamic debug output on the console you may need to raise the console log level: echo 8 > /proc/sys/kernel/printk or boot with loglevel=8.
ftrace is a kernel tracing framework built around the tracefs filesystem, typically mounted at /sys/kernel/tracing. It provides function-level tracing, latency measurement, and a rich event system.
1. Mount tracefs
mount -t tracefs nodev /sys/kernel/tracing
# Or add to /etc/fstab:
# tracefs /sys/kernel/tracing tracefs defaults 0 0
2. Choose a tracer
# List available tracers
cat /sys/kernel/tracing/available_tracers
# blk function_graph wakeup_dl wakeup_rt wakeup function nop
# Enable the function tracer
echo function > /sys/kernel/tracing/current_tracer
3. Filter which functions to trace
# Trace only functions matching a pattern
echo 'tcp_*' > /sys/kernel/tracing/set_ftrace_filter
# Trace a specific function
echo schedule > /sys/kernel/tracing/set_ftrace_filter
# Clear the filter (trace all functions)
echo > /sys/kernel/tracing/set_ftrace_filter
4. Start and read the trace
# Enable tracing
echo 1 > /sys/kernel/tracing/tracing_on
# ... perform the operation you want to trace ...
# Disable tracing
echo 0 > /sys/kernel/tracing/tracing_on
# Read the trace output
cat /sys/kernel/tracing/trace
The function_graph tracer adds call depth and duration, making it easier to follow execution flow:
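A minimal sketch (the trace output in the comments is illustrative, not captured from a real system):

# Switch tracers and record briefly
echo function_graph > /sys/kernel/tracing/current_tracer
echo 1 > /sys/kernel/tracing/tracing_on
sleep 1
echo 0 > /sys/kernel/tracing/tracing_on
head /sys/kernel/tracing/trace
# 1)               |  schedule() {
# 1)               |    rcu_note_context_switch() {
# 1)   0.301 us    |      rcu_preempt_deferred_qs();
# 1)   1.696 us    |    }
# 1) + 12.342 us   |  }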
perf is the primary performance analysis tool for Linux. It exposes hardware performance counters, software events, and kernel tracepoints through a unified interface.
# Count CPU cycles, instructions, and cache misses for a command
perf stat ls -la /usr/bin
# Count specific events
perf stat -e cycles,instructions,cache-misses,cache-references ls
For meaningful function names in perf report, build the kernel with CONFIG_DEBUG_INFO=y and install the kernel debug symbols package for your distribution.
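perf stat only counts events; to see where the time actually goes, sample stacks with perf record and browse them with perf report. A minimal sketch using standard perf options:

# Sample on-CPU stacks system-wide at 99 Hz for 10 seconds
perf record -F 99 -a -g -- sleep 10
# Browse the samples interactively, with call chains
perf report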
Kprobes lets you dynamically insert breakpoints at nearly any instruction in the kernel and run a handler when the breakpoint is hit. A kretprobe fires when a specified function returns, giving you access to both the return value and the execution time.
#include <linux/kprobes.h>
#include <linux/module.h>

static struct kprobe kp = {
	.symbol_name = "do_sys_open",
};

/* Runs just before the probed instruction executes */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("do_sys_open called from %pS\n", (void *)regs->ip);
	return 0;
}

static int __init kprobe_example_init(void)
{
	kp.pre_handler = handler_pre;
	return register_kprobe(&kp);
}

static void __exit kprobe_example_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kprobe_example_init);
module_exit(kprobe_example_exit);
MODULE_LICENSE("GPL");
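The kretprobe side is similar. Below is a sketch modeled on the kernel's samples/kprobes/kretprobe_example.c: it times do_sys_open by stamping the entry time into per-instance data and reading it back in the return handler:

#include <linux/kprobes.h>
#include <linux/ktime.h>
#include <linux/module.h>

/* Per-invocation scratch space, carried from entry to return */
struct my_data {
	ktime_t entry_stamp;
};

static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct my_data *data = (struct my_data *)ri->data;

	data->entry_stamp = ktime_get();
	return 0;
}

static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	struct my_data *data = (struct my_data *)ri->data;
	s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

	pr_info("do_sys_open returned %lu and took %lld ns\n",
		regs_return_value(regs), delta);
	return 0;
}

static struct kretprobe my_kretprobe = {
	.kp.symbol_name	= "do_sys_open",
	.entry_handler	= entry_handler,
	.handler	= ret_handler,
	.data_size	= sizeof(struct my_data),
	/* Probe up to 20 concurrent invocations */
	.maxactive	= 20,
};

static int __init kretprobe_example_init(void)
{
	return register_kretprobe(&my_kretprobe);
}

static void __exit kretprobe_example_exit(void)
{
	unregister_kretprobe(&my_kretprobe);
}

module_init(kretprobe_example_init);
module_exit(kretprobe_example_exit);
MODULE_LICENSE("GPL");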
Tracepoints are static instrumentation points compiled into the kernel. Unlike kprobes, which patch instructions at runtime, a disabled tracepoint costs only a patched-out static branch, and an enabled one has a minimal, well-defined overhead.
# List all available tracepoints in the scheduler subsystem
ls /sys/kernel/tracing/events/sched/
# Enable the sched_switch tracepoint
echo 1 > /sys/kernel/tracing/events/sched/sched_switch/enable
# Read the trace
cat /sys/kernel/tracing/trace
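Tracepoint events also accept boolean filters on their fields, which keeps a busy event like sched_switch manageable (field names come from each event's format file):

# Show the fields available for filtering
cat /sys/kernel/tracing/events/sched/sched_switch/format
# Only record switches away from tasks named "bash"
echo 'prev_comm == "bash"' > /sys/kernel/tracing/events/sched/sched_switch/filter
# Clear the filter
echo 0 > /sys/kernel/tracing/events/sched/sched_switch/filter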
KFENCE is a low-overhead sampling-based memory error detector designed to run in production. It detects heap out-of-bounds access, use-after-free, and invalid-free errors. Enable it with CONFIG_KFENCE=y. Unlike KASAN, KFENCE does not instrument every allocation. It guards a sample of allocations and relies on the statistical guarantee that, given enough uptime across a fleet, bugs will be caught.
# Set the sample interval (milliseconds between guarded allocations)
# At boot:
kfence.sample_interval=100
# At runtime:
echo 100 > /sys/module/kfence/parameters/sample_interval
# View KFENCE error reports
dmesg | grep KFENCE
A KFENCE report looks like this:
BUG: KFENCE: out-of-bounds read in test_out_of_bounds_read+0x...
Out-of-bounds read at 0x... (1B right of kfence-#0):
 test_out_of_bounds_read+0x... [kfence_test]
When the kernel encounters a serious error it prints an oops message (if it can continue running) or a panic message (if it cannot). The message contains the faulting instruction, register state, and a call stack.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
Oops: 0002 [#1] SMP PTI
CPU: 0 PID: 4424 Comm: insmod Tainted: P W O 4.20.0
RIP: 0010:my_oops_init+0x13/0x1000 [kpanic]
The Tainted: field tells you whether the kernel’s integrity is in question. A P flag means a proprietary module was loaded; W means a warning occurred; O means an out-of-tree module was loaded. Decode the taint flags:
# Check taint status of the running kernel
cat /proc/sys/kernel/tainted
# Decode the taint number using the kernel tool
sh tools/debugging/kernel-chktaint
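The value is a bitmask with one bit per flag. As a worked example matching the oops above, using the standard bit assignments (P is bit 0, W is bit 9, O is bit 12):

# A kernel tainted P, W, and O reports:
cat /proc/sys/kernel/tainted
# 4609  = 1 (P) + 512 (W) + 4096 (O)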
Decode the RIP address back to a symbol and source line:
# Translate an address to a symbol
addr2line -e vmlinux -i 0xffffffff81234567
# Or use the scripts provided with the kernel
scripts/decode_stacktrace.sh vmlinux < oops.txt
kdump captures a complete memory dump of the failed kernel by booting a small capture kernel into a reserved region of memory. The dump is written to disk and analysed later with the crash tool.
1. Reserve memory for the capture kernel
Add crashkernel=256M to your kernel command line (adjust the size based on available RAM).
# Should report that the crash kernel is loaded
cat /proc/sys/kernel/kexec_crash_loaded
# 1
# Check the reserved memory region
cat /proc/iomem | grep "Crash kernel"
4. Analyse the dump with crash
After a kernel panic, the dump is saved to the path configured in /etc/kdump.conf (commonly /var/crash/).
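The exact directives are distribution-specific, but a typical /etc/kdump.conf might look roughly like this:

# Where vmcore files are written
path /var/crash
# Compress the dump and filter out pages that are rarely needed
core_collector makedumpfile -l --message-level 7 -d 31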
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux \
    /var/crash/$(ls /var/crash | tail -1)/vmcore

# Inside the crash shell:
crash> bt     # backtrace of the crashing thread
crash> log    # print the kernel message log
crash> ps     # list processes at the time of crash
crash> vm     # display virtual memory information
crash> quit
ftrace documentation
Full reference for the ftrace framework and all available tracers.
perf wiki
Comprehensive perf usage guide including flame graphs and PMU events.
KASAN documentation
Detailed KASAN configuration and report interpretation guide.
kdump documentation
Setting up kdump and analysing crash dumps with the crash tool.