

The Linux kernel provides several memory allocators tuned for different size classes, contiguity requirements, and calling contexts. Choosing the wrong allocator is one of the most common sources of kernel bugs—allocating with GFP_ATOMIC where GFP_KERNEL would suffice wastes reserves, and calling a sleeping allocator from an interrupt handler causes an immediate BUG. This reference covers the full allocation stack from the slab allocator to the raw page allocator, along with GFP flag semantics and failure-handling patterns.

kmalloc / kzalloc

General-purpose slab allocation for objects smaller than a page.

vmalloc

Virtually contiguous allocations for large, non-DMA buffers.

Slab caches

High-frequency, fixed-size object pools with kmem_cache.

Page allocator

Raw page allocation via alloc_pages() and __get_free_pages().

GFP flags

Controlling reclaim behaviour, zones, and allocation semantics.

Failure handling

Patterns for detecting and recovering from allocation failures.

Kmalloc and kzalloc

kmalloc() is the general-purpose kernel allocator. It returns physically contiguous memory suitable for DMA and for objects up to roughly KMALLOC_MAX_SIZE (architecture-dependent; commonly 4 MiB). For objects smaller than a page, it satisfies requests from pre-built power-of-two slab caches, making it fast.

Function signatures

#include <linux/slab.h>

/**
 * kmalloc - allocate memory
 * @size:  number of bytes to allocate
 * @flags: GFP flags controlling allocator behaviour
 *
 * Returns a kernel virtual address, or NULL on failure.
 * The returned memory is NOT initialised.
 */
void *kmalloc(size_t size, gfp_t flags);

/**
 * kzalloc - allocate zeroed memory
 *
 * Identical to kmalloc() but zeroes the returned memory.
 * Prefer this over kmalloc() + memset() to avoid use-before-init bugs.
 */
void *kzalloc(size_t size, gfp_t flags);

/**
 * kfree - release memory obtained from kmalloc/kzalloc
 * @objp: pointer to free; NULL is safe and is a no-op
 */
void kfree(const void *objp);

/**
 * krealloc - resize a kmalloc'd block
 * @p:     existing allocation (or NULL for a fresh allocation)
 * @new_size: desired new size
 * @flags: GFP flags
 */
void *krealloc(const void *p, size_t new_size, gfp_t flags);

Arrays and size-overflow helpers

/* Allocate an array of n elements of size s; detects overflow */
void *kmalloc_array(size_t n, size_t s, gfp_t flags);

/* Allocate and zero an array */
void *kcalloc(size_t n, size_t s, gfp_t flags);

Typical usage

#include <linux/slab.h>

/* Simple struct allocation */
struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
    return -ENOMEM;

/* ... use ctx ... */

kfree(ctx);

/* Array allocation with overflow protection */
u32 *buf = kmalloc_array(count, sizeof(*buf), GFP_KERNEL);
if (!buf)
    return -ENOMEM;
The address returned by kmalloc() is aligned to at least ARCH_KMALLOC_MINALIGN bytes. For power-of-two sizes, alignment equals the size itself. This makes kmalloc safe for naturally-aligned scalar types without extra alignment specification.

Vmalloc and vfree

vmalloc() maps physically discontiguous pages into a single virtually contiguous region. It is slower than kmalloc() (requires page-table manipulation) and is not suitable for DMA, but it can satisfy much larger allocations.
#include <linux/vmalloc.h>

/**
 * vmalloc - allocate virtually contiguous memory
 * @size: number of bytes to allocate
 *
 * May sleep. Do not call from atomic context.
 * Memory is NOT zeroed.
 */
void *vmalloc(unsigned long size);

/**
 * vzalloc - allocate and zero virtually contiguous memory
 */
void *vzalloc(unsigned long size);

/**
 * vfree - release memory from vmalloc/vzalloc
 * @addr: virtual address to free; NULL is safe
 */
void vfree(const void *addr);

When to use vmalloc vs kmalloc

/* Load a firmware image that may be several megabytes */
void *fw_buf = vmalloc(fw_size);
if (!fw_buf)
    return -ENOMEM;

if (copy_from_user(fw_buf, user_ptr, fw_size)) {
    vfree(fw_buf);
    return -EFAULT;
}

/* ... process firmware ... */
vfree(fw_buf);

kvmalloc: the adaptive allocator

When you do not know whether the size will fit in a kmalloc slab, use kvmalloc(). It tries kmalloc() first and falls back to vmalloc() if that fails.
#include <linux/mm.h>

void *buf = kvmalloc(size, GFP_KERNEL);
/* ... */
kvfree(buf);  /* handles both kmalloc'd and vmalloc'd pointers */
Memory allocated with vmalloc() is not physically contiguous, so it cannot be passed directly to hardware DMA engines. For DMA buffers, use dma_alloc_coherent(), or kmalloc() (which is physically contiguous) together with the streaming DMA mapping API.

Slab caches

When a subsystem allocates many objects of the same fixed size at high frequency (e.g., a packet descriptor, an inode, a request block), creating a dedicated slab cache with kmem_cache_create() is more efficient than repeated kmalloc() calls. The slab allocator batches construction and destruction and can colour objects to reduce cache-line conflicts.

Creating and destroying a cache

#include <linux/slab.h>

static struct kmem_cache *my_cache;

/**
 * kmem_cache_create - create a slab cache
 * @name:  name shown in /proc/slabinfo
 * @size:  size of each object in bytes
 * @align: minimum alignment (0 = use natural alignment)
 * @flags: SLAB_* flags
 * @ctor:  optional constructor called when a new slab is populated (or NULL)
 */
my_cache = kmem_cache_create("my_obj_cache",
                              sizeof(struct my_obj),
                              0,             /* align */
                              SLAB_HWCACHE_ALIGN,
                              NULL);         /* ctor */
if (!my_cache)
    return -ENOMEM;

/* ... module teardown ... */
kmem_cache_destroy(my_cache);   /* all objects must be freed first */

Allocating and freeing objects

/**
 * kmem_cache_alloc - allocate one object from the cache
 * @cachep: the cache
 * @flags:  GFP flags
 */
struct my_obj *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
if (!obj)
    return -ENOMEM;

/* ... use obj ... */

/**
 * kmem_cache_free - return an object to the cache
 */
kmem_cache_free(my_cache, obj);

Common SLAB flags

Flag | Effect
SLAB_HWCACHE_ALIGN | Align objects to CPU cache-line boundaries for better performance.
SLAB_PANIC | Panic on allocation failure during cache creation (for caches that must succeed).
SLAB_TYPESAFE_BY_RCU | Delay freeing of slab pages by one RCU grace period; enables RCU-protected lookups.
SLAB_POISON | Fill freed objects with a poison pattern to catch use-after-free (debug).
SLAB_RED_ZONE | Add red zones around objects to catch out-of-bounds writes (debug).

Page allocator

The page allocator is the lowest-level allocator in the kernel. It hands out physically contiguous blocks in power-of-two units of pages called orders (order 0 = 1 page, order 1 = 2 pages, order 10 = 1024 pages). Use it when you need guaranteed physical contiguity that kmalloc cannot provide.
#include <linux/gfp.h>
#include <linux/mm.h>

/**
 * alloc_page - allocate a single page
 * @gfp_mask: GFP flags
 *
 * Returns struct page *, or NULL on failure.
 */
struct page *page = alloc_page(GFP_KERNEL);

/**
 * alloc_pages - allocate 2^order contiguous pages
 */
struct page *pages = alloc_pages(GFP_KERNEL, order);

/**
 * __get_free_pages - allocate contiguous pages and return a virtual address
 */
unsigned long addr = __get_free_pages(GFP_KERNEL, order);   /* order 0 = 1 page */
unsigned long page_addr = get_zeroed_page(GFP_KERNEL);      /* one zeroed page */

/* Free */
__free_pages(pages, order);
free_pages(addr, order);

Converting between pages and addresses

void *vaddr = page_address(page);  /* only valid for lowmem pages */
struct page *p = virt_to_page(vaddr);

/* Physical address */
phys_addr_t phys = page_to_phys(page);
Allocations at order > 0 require a physically contiguous run of free pages. Such high-order allocations can fail under memory pressure even when total free memory is abundant, because fragmentation prevents a contiguous block from being assembled. Prefer the slab allocator for objects smaller than a page.

GFP flags

GFP (Get Free Pages) flags control which memory zones the allocator may use, whether it may block, whether it may trigger reclaim, and other policies.

Primary GFP flag combinations

#include <linux/gfp.h>

/*
 * GFP_KERNEL — standard allocation, may sleep, may reclaim
 * Use in: process context where sleeping is allowed.
 * This is the right choice for the vast majority of driver allocations.
 */
void *p = kmalloc(size, GFP_KERNEL);

/*
 * GFP_ATOMIC — non-sleeping allocation from interrupt/softirq context
 * May access memory reserves. Use only when sleeping is truly impossible.
 * Higher chance of failure under memory pressure than GFP_KERNEL.
 */
void *p = kmalloc(size, GFP_ATOMIC);

/*
 * GFP_NOWAIT — like GFP_ATOMIC but without reserve access
 * Suitable for performance-critical paths with a fallback slow path.
 */
void *p = kmalloc(size, GFP_NOWAIT);

/*
 * GFP_DMA — must be satisfied from the DMA zone (below 16 MiB on x86)
 * Use only for legacy ISA DMA. Prefer dma_alloc_coherent() for modern hardware.
 */
void *p = kmalloc(size, GFP_DMA);

/*
 * __GFP_ZERO — zero the returned memory (can be ORed into any flag)
 * kmalloc(size, GFP_KERNEL | __GFP_ZERO) is equivalent to kzalloc(size, GFP_KERNEL)
 */
void *p = kmalloc(size, GFP_KERNEL | __GFP_ZERO);

GFP flags and reclaim behaviour

GFP_KERNEL: Both background (kswapd) and direct (in-caller) reclaim are enabled. This is the default and correct choice for process-context allocations. Non-costly requests are effectively non-failing, but callers must still check the return value because OOM-killed tasks may see failures.

GFP_NOWAIT: Equivalent to GFP_KERNEL & ~__GFP_DIRECT_RECLAIM. The allocator may wake kswapd but will not block the caller. Use in interrupt-safe paths that have a fallback.

GFP_ATOMIC: (GFP_KERNEL | __GFP_HIGH) & ~__GFP_DIRECT_RECLAIM. Provides access to a small per-zone reserve pool. Use only in hard interrupt / softirq context. Overuse depletes reserves and degrades system stability.

__GFP_NORETRY: Triggers one round of reclaim and returns NULL rather than retrying. Does not invoke the OOM killer. Useful when the caller has a cheaper fallback and does not want to stall.

__GFP_NOFAIL: The allocator retries indefinitely. Use only when failure is genuinely unacceptable and the kernel could not continue (e.g., critical boot-time structures). Never use for high-order allocations.

When to use each allocator

Situation | Recommended allocator
Small struct in process context | kzalloc(sizeof(*obj), GFP_KERNEL)
Small struct in interrupt context | kmalloc(sizeof(*obj), GFP_ATOMIC)
Large buffer (>1 page), non-DMA | vmalloc(size)
Unknown size, process context | kvmalloc(size, GFP_KERNEL)
Many identical objects, high rate | kmem_cache_alloc(cache, GFP_KERNEL)
DMA-capable buffer | dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL)
Raw page(s) needed | alloc_pages(GFP_KERNEL, order)

Allocation failure handling

Every allocation can fail. Ignoring the return value is a bug; the compiler warns about unchecked results wherever __must_check is applied to an allocator prototype.
/* Pattern 1: early return on failure */
struct my_state *state = kzalloc(sizeof(*state), GFP_KERNEL);
if (!state)
    return -ENOMEM;

/* Pattern 2: goto cleanup on failure (preferred in functions with resources) */
int ret;
struct my_ctx *ctx;
u8 *buf;

ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
    ret = -ENOMEM;
    goto err_ctx;
}

buf = kmalloc(BUF_SIZE, GFP_KERNEL);
if (!buf) {
    ret = -ENOMEM;
    goto err_buf;
}

/* ... use ctx and buf ... */
return 0;

err_buf:
    kfree(ctx);
err_ctx:
    return ret;
In modern kernel code, the __free() cleanup attribute (from include/linux/cleanup.h) can replace goto chains with scope-based cleanup. However, the goto pattern remains the most widely used and is always correct.
/* Pattern 3: use kvmalloc for size-agnostic allocation */
void *data = kvmalloc(user_provided_size, GFP_KERNEL);
if (!data)
    return -ENOMEM;

/* kvfree handles both kmalloc'd and vmalloc'd pointers */
kvfree(data);
