The Linux kernel core API provides the fundamental building blocks every kernel developer reaches for daily: intrusive data structures for zero-copy container embedding, an asynchronous work-deferral framework, a structured logging system, an object model that backs sysfs, and the macros that govern module initialization and teardown. This page covers each of these subsystems with the function signatures, usage patterns, and caveats drawn directly from the kernel source.
Data structures
Linked lists, red-black trees, and hash tables built into the kernel.
Work queues
Defer and schedule work to kernel worker threads asynchronously.
Printk logging
Structured kernel logging with log levels and device-specific helpers.
Kobject & sysfs
Object model, reference counting, and sysfs directory management.
Module lifecycle
module_init(), module_exit(), and metadata macros.
Timer API
One-shot and periodic kernel timers with safe teardown.
The kernel avoids dynamic dispatch by using intrusive data structures: instead of a list node holding a pointer to your struct, your struct embeds the list node. The container_of() macro recovers the enclosing struct from the embedded node pointer.
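To make the offset arithmetic concrete, here is a minimal userspace sketch of the idea: the macro below re-implements the core of container_of() (the real kernel macro adds type checking on top of this), and my_device is an illustrative struct, not a kernel type.

```c
#include <stddef.h>

/* Userspace sketch of container_of(): subtract the member's offset from
 * the member pointer to recover a pointer to the enclosing struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Minimal stand-in for the kernel's struct list_head */
struct list_head {
	struct list_head *next, *prev;
};

struct my_device {
	int id;
	struct list_head list;   /* intrusive node embedded in the struct */
};
```

Given only a `struct list_head *` handed out by a list walk, the macro recovers the `struct my_device` that embeds it, with no extra pointer stored anywhere.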
struct list_head is the universal doubly-linked circular list. Every list node and every list head is the same type, so no separate “header” struct is needed.
#include <linux/list.h>

/* Embed list_head in your own struct */
struct my_device {
	int id;
	struct list_head list;   /* intrusive list node */
};

/* Declare and initialise a list head */
static LIST_HEAD(device_list);

/* Add/remove */
list_add(&dev->list, &device_list);      /* add after head (stack behaviour) */
list_add_tail(&dev->list, &device_list); /* add before head (queue behaviour) */
list_del(&dev->list);

/* Iterate safely (safe = element may be deleted during iteration) */
struct my_device *dev, *tmp;

list_for_each_entry_safe(dev, tmp, &device_list, list) {
	if (dev->id == target_id)
		list_del(&dev->list);
}
list_for_each_entry() is fine when you will not delete the current element during the loop. Use list_for_each_entry_safe() whenever deletion is possible.
Red-black trees (include/linux/rbtree.h) offer O(log n) search, insert, and delete. The kernel does not supply a generic comparator; you write the search and insert logic against your key.
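A typical insert therefore walks the tree by hand, then hands the linked node to the rebalancing code. A minimal sketch, assuming an integer key (my_node, key, and my_insert are illustrative names, not kernel API):

```c
#include <linux/rbtree.h>

struct my_node {
	struct rb_node rb;   /* intrusive tree node */
	int key;
};

static struct rb_root my_tree = RB_ROOT;

/* Walk to the insertion point, then link and rebalance. */
static bool my_insert(struct rb_root *root, struct my_node *new)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;

	while (*link) {
		struct my_node *cur = rb_entry(*link, struct my_node, rb);

		parent = *link;
		if (new->key < cur->key)
			link = &(*link)->rb_left;
		else if (new->key > cur->key)
			link = &(*link)->rb_right;
		else
			return false;   /* duplicate key */
	}
	rb_link_node(&new->rb, parent, link);
	rb_insert_color(&new->rb, root);
	return true;
}
```

Lookup follows the same descent loop, returning rb_entry(node, ...) on a key match.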
For hash-table buckets, the kernel provides struct hlist_head / struct hlist_node (a singly-linked list with O(1) head operations) along with the DECLARE_HASHTABLE macro for fixed-size hash tables.
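Usage follows the same intrusive pattern; a sketch, with my_table and my_entry as illustrative names:

```c
#include <linux/hashtable.h>

/* 2^4 = 16 buckets of struct hlist_head */
static DECLARE_HASHTABLE(my_table, 4);

struct my_entry {
	int key;
	struct hlist_node node;   /* intrusive hash-chain link */
};

hash_init(my_table);

/* Insert: the key is hashed into a bucket for you */
hash_add(my_table, &entry->node, entry->key);

/* Lookup: iterate only the bucket the key hashes to */
struct my_entry *e;
hash_for_each_possible(my_table, e, node, key) {
	if (e->key == key)
		break;   /* found */
}

/* Remove */
hash_del(&entry->node);
```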
The Concurrency Managed Workqueue (cmwq) framework lets drivers defer work to kernel worker threads. A work item is a struct work_struct that holds a function pointer; queuing it schedules that function for asynchronous execution.
#include <linux/workqueue.h>

struct my_device {
	struct work_struct work;
	/* ... */
};

/* Define the work function */
static void my_work_fn(struct work_struct *work)
{
	struct my_device *dev = container_of(work, struct my_device, work);

	/* ... do work ... */
}

/* Initialise (once, typically in probe) */
INIT_WORK(&dev->work, my_work_fn);

/* Queue to the system workqueue */
schedule_work(&dev->work);

/* Queue to a specific workqueue */
struct workqueue_struct *wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);

queue_work(wq, &dev->work);

/* Delayed work (fires after a jiffies delay). A standalone delayed work
 * is handed its own &delayed_work.work, so it needs its own callback and
 * must not container_of() into a struct it is not embedded in. */
static void my_delayed_fn(struct work_struct *work)
{
	/* ... */
}
static DECLARE_DELAYED_WORK(delayed_work, my_delayed_fn);

schedule_delayed_work(&delayed_work, msecs_to_jiffies(500));

/* Wait for all pending work to complete, then destroy */
flush_workqueue(wq);
destroy_workqueue(wq);
Always set WQ_MEM_RECLAIM on workqueues that may be invoked from memory-reclaim paths. Without it, the system can deadlock if every worker thread is blocked waiting for memory that cannot be freed because no worker is available to run.
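For example (the queue name is illustrative):

```c
/* WQ_MEM_RECLAIM guarantees a rescuer thread exists for this queue, so its
 * work items can make forward progress even under memory pressure. */
struct workqueue_struct *io_wq =
	alloc_workqueue("my_io_wq", WQ_MEM_RECLAIM, 0);
```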
printk() writes to the kernel ring buffer (readable via dmesg). Each message carries a log level; messages more severe than the current console loglevel are also flushed to the console immediately.
When you have a struct device *, use the dev_*() family. It automatically prepends the device name so log messages are traceable without manual formatting.
In hot paths (interrupt handlers, high-frequency timer callbacks), prefer pr_warn_ratelimited() or pr_err_once() to avoid flooding the ring buffer and triggering soft-lockup warnings on legacy console drivers.
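A sketch of the common helpers side by side (fw_ver, ret, pdev, speed, and rev are illustrative variables):

```c
#include <linux/printk.h>
#include <linux/device.h>

pr_info("probe: firmware version %u\n", fw_ver);   /* KERN_INFO  */
pr_err("init failed: %d\n", ret);                  /* KERN_ERR   */

/* With a struct device *, the device name is prepended automatically */
dev_warn(&pdev->dev, "link degraded to %d Mb/s\n", speed);

/* Hot path: emit at most one burst per rate-limit interval */
pr_warn_ratelimited("rx overrun, dropping packet\n");

/* Emit once for the lifetime of the kernel/module */
pr_err_once("unsupported revision %#x\n", rev);
```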
A struct kobject is the kernel’s base object: it tracks a name, a parent, a reference count, and a sysfs directory. Subsystems embed a kobject inside their own structs rather than allocating one standalone.
#include <linux/kobject.h>
#include <linux/sysfs.h>

struct my_subsystem {
	struct kobject kobj;
	int value;
};

/* Recover the enclosing struct from a kobject pointer */
#define to_my_subsystem(kobj_ptr) \
	container_of(kobj_ptr, struct my_subsystem, kobj)

/* kobj_type specifies the release callback and sysfs ops */
static void my_release(struct kobject *kobj)
{
	struct my_subsystem *sub = to_my_subsystem(kobj);

	kfree(sub);
}

static struct kobj_type my_ktype = {
	.release   = my_release,
	.sysfs_ops = &kobj_sysfs_ops,
};

/* Initialise and register */
kobject_init_and_add(&sub->kobj, &my_ktype, parent_kobj, "my_object");

/* Announce creation to userspace (triggers udev rules) */
kobject_uevent(&sub->kobj, KOBJ_ADD);

/* Create a simple sysfs attribute file */
sysfs_create_file(&sub->kobj, &my_attr.attr);

/* Reference counting */
kobject_get(&sub->kobj); /* increment */
kobject_put(&sub->kobj); /* decrement; frees when count reaches 0 */
Never call kfree() directly on an object whose kobject has been registered. Always call kobject_put() and let the release() callback free the memory. Bypassing this will cause use-after-free bugs.
Every loadable kernel module declares its entry and exit points with two macros and annotates itself with metadata macros recognised by modinfo.
#include <linux/module.h>
#include <linux/init.h>

MODULE_AUTHOR("Jane Developer <jane@example.com>");
MODULE_DESCRIPTION("Example kernel module");
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0.0");

static int __init my_module_init(void)
{
	pr_info("my_module: loaded\n");
	/* allocate resources, register devices, etc. */
	return 0; /* non-zero aborts loading */
}

static void __exit my_module_exit(void)
{
	pr_info("my_module: unloaded\n");
	/* release all resources in reverse order */
}

module_init(my_module_init);
module_exit(my_module_exit);
The __init annotation places the function into a special ELF section that the kernel frees after boot (for built-in modules) or after the module’s init succeeds. Never call an __init function after module load completes.
Kernel timers run in softirq context, so they must not sleep or acquire sleeping locks.
#include <linux/timer.h>

struct my_timer_data {
	struct timer_list timer;
	int counter;
};

static void my_timer_callback(struct timer_list *t)
{
	struct my_timer_data *data = from_timer(data, t, timer);

	data->counter++;
	pr_info("timer fired, counter=%d\n", data->counter);

	/* Rearm for another 500 ms */
	mod_timer(&data->timer, jiffies + msecs_to_jiffies(500));
}

/* Setup */
struct my_timer_data data;

timer_setup(&data.timer, my_timer_callback, 0);
mod_timer(&data.timer, jiffies + msecs_to_jiffies(500));

/* Teardown: waits for any running callback to complete */
del_timer_sync(&data.timer);
Always call del_timer_sync() (not del_timer()) before freeing any data the timer callback accesses. del_timer() returns immediately and the callback may still be executing on another CPU.