AWS instance types offer varying resources and can be selected by labels in your NodePool spec.template.spec.requirements. The values provided in this reference reflect resources available after instance overhead has been subtracted, with the following assumptions:
  • blockDeviceMappings are not configured
  • amiFamily is set to AL2023

How Karpenter discovers instance types

Karpenter calls the EC2 DescribeInstanceTypes and DescribeInstanceTypeOfferings APIs to discover the available instance types and their offerings (availability zone, capacity type) in your AWS region. This information is cached and periodically refreshed. Karpenter uses this data to:
  1. Match NodePool requirements against available instance types
  2. Estimate instance costs for optimized bin-packing
  3. Select the best-fit instance type for each batch of pending pods
Karpenter filters out instance types that are incompatible with your NodePool requirements before considering them as candidates. Instance types in unsupported availability zones or without the requested capacity type (on-demand or spot) are automatically excluded.

Well-known labels

Karpenter applies a rich set of labels to every node it launches, describing the instance type's properties. You can use these labels in nodeSelector, nodeAffinity, or NodePool requirements to influence scheduling.

Core labels

| Label | Description | Example |
|---|---|---|
| node.kubernetes.io/instance-type | Full EC2 instance type name | m5.xlarge |
| kubernetes.io/arch | CPU architecture | amd64, arm64 |
| kubernetes.io/os | Operating system | linux |
| topology.kubernetes.io/zone | Availability zone | us-east-1a |
| karpenter.sh/capacity-type | Capacity type | on-demand, spot |
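These core labels can be used directly in a pod's nodeSelector. A minimal sketch (the pod name, zone value, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zonal-spot-pod                        # illustrative name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a   # pin to one availability zone
    karpenter.sh/capacity-type: spot          # only run on spot capacity
  containers:
    - name: app
      image: public.ecr.aws/docker/library/nginx:latest
```

If no existing node satisfies both labels, Karpenter provisions one that does, provided the NodePool requirements allow it.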

Karpenter AWS instance labels

| Label | Description | Example values |
|---|---|---|
| karpenter.k8s.aws/instance-category | High-level instance category | a, c, m, r, g, p, inf, trn |
| karpenter.k8s.aws/instance-family | Instance family | m5, c6g, p4d |
| karpenter.k8s.aws/instance-generation | Generation number | 5, 6, 7 |
| karpenter.k8s.aws/instance-size | Instance size | large, xlarge, 2xlarge, metal |
| karpenter.k8s.aws/instance-cpu | vCPU count | 2, 4, 8, 96 |
| karpenter.k8s.aws/instance-cpu-manufacturer | CPU manufacturer | aws, intel, amd |
| karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz | Sustained CPU clock speed in MHz | 3400, 2500 |
| karpenter.k8s.aws/instance-memory | Total memory in MiB | 4096, 32768 |
| karpenter.k8s.aws/instance-ebs-bandwidth | EBS bandwidth in Mbps | 4750, 19000 |
| karpenter.k8s.aws/instance-network-bandwidth | Network bandwidth in Mbps | 1250, 25000 |
| karpenter.k8s.aws/instance-hypervisor | Hypervisor type | nitro, xen, "" (bare metal) |
| karpenter.k8s.aws/instance-encryption-in-transit-supported | Supports encryption in transit | true, false |
| karpenter.k8s.aws/instance-local-nvme | Local NVMe storage in GB (if present) | 75, 900, 3800 |
| karpenter.k8s.aws/instance-capability-flex | Flex capacity support | true, false |
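These labels compose in NodePool requirements the same way the core labels do. A sketch selecting newer-generation instances with at least 16 GiB of memory (the threshold values are illustrative; note that Gt/Lt are exclusive comparisons):

```yaml
spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["4"]        # generation 5 or newer
        - key: karpenter.k8s.aws/instance-memory
          operator: Gt
          values: ["16383"]    # more than 16383 MiB, i.e. 16 GiB and up
```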

GPU and accelerator labels

For GPU and accelerator instances (e.g., p, g, inf, trn families), Karpenter also surfaces:
| Label | Description | Example |
|---|---|---|
| karpenter.k8s.aws/instance-gpu-name | GPU model name | a100, v100, t4 |
| karpenter.k8s.aws/instance-gpu-manufacturer | GPU manufacturer | nvidia, aws |
| karpenter.k8s.aws/instance-gpu-count | Number of GPUs | 1, 4, 8 |
| karpenter.k8s.aws/instance-gpu-memory | GPU memory in MiB | 16384, 40960 |
| karpenter.k8s.aws/instance-accelerator-name | Accelerator name (e.g. for Inferentia/Trainium) | inferentia, trainium |
| karpenter.k8s.aws/instance-accelerator-manufacturer | Accelerator manufacturer | aws |
| karpenter.k8s.aws/instance-accelerator-count | Number of accelerators | 1, 16 |

Architecture options

Karpenter supports both x86-64 (amd64) and ARM (arm64) architectures. The architecture is surfaced on the standard kubernetes.io/arch label.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: arm64-pool
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
ARM-based Graviton instances (arm64) typically offer better price/performance for compute-intensive workloads. Look for instance families ending in g (e.g., c6g, m7g, r8g) for Graviton instances.

Filtering instance types in NodePools

Use spec.template.spec.requirements to constrain which instance types Karpenter can select. Requirements use standard Kubernetes label selectors with In, NotIn, Exists, DoesNotExist, Gt, and Lt operators.

Select by family

spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["m5", "m6i", "m7i"]

Select by category and minimum CPU

spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m"]
        - key: karpenter.k8s.aws/instance-cpu
          operator: Gt
          values: ["7"]   # more than 7 vCPUs (i.e., 8+)

Exclude specific sizes

spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-size
          operator: NotIn
          values: ["nano", "micro", "small", "medium", "large"]

Require local NVMe storage

spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-local-nvme
          operator: Exists
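Requiring the label only ensures the hardware is present; to let pods consume the local NVMe disks as node ephemeral storage, the EC2NodeClass must also opt in. A sketch, assuming the v1 EC2NodeClass API's instanceStorePolicy field (the resource name and AMI alias are illustrative):

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: nvme-nodes            # illustrative name
spec:
  amiSelectorTerms:
    - alias: al2023@latest
  instanceStorePolicy: RAID0  # stripe local NVMe disks into node ephemeral storage
```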

GPU workloads

To schedule GPU workloads, express the GPU resource requirement on your pod and add a matching NodePool:
# Pod spec requesting a GPU
resources:
  limits:
    nvidia.com/gpu: "1"
# NodePool for GPU instances
spec:
  template:
    spec:
      requirements:
        - key: karpenter.k8s.aws/instance-gpu-manufacturer
          operator: In
          values: ["nvidia"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["g", "p"]

Instance family reference

The following tables give a representative overview of the instance families Karpenter can discover. This list is not exhaustive — Karpenter queries the EC2 API dynamically and will include all types available in your region.

Compute-optimized (c family)

| Family | Architecture | Manufacturer | Notes |
|---|---|---|---|
| c1 | amd64 | intel | Previous generation |
| c3 | amd64 | intel | Previous generation |
| c4 | amd64 | intel | Previous generation |
| c5 | amd64 | intel | Nitro hypervisor |
| c5a | amd64 | amd | Nitro, encryption-in-transit |
| c5ad | amd64 | amd | Nitro, local NVMe |
| c5d | amd64 | intel | Nitro, local NVMe |
| c5n | amd64 | intel | Enhanced networking |
| c6a | amd64 | amd | 6th gen AMD |
| c6g | arm64 | aws | AWS Graviton2 |
| c6gd | arm64 | aws | Graviton2, local NVMe |
| c6i | amd64 | intel | 6th gen Intel |
| c7g | arm64 | aws | AWS Graviton3 |
| c7i | amd64 | intel | 7th gen Intel |

General-purpose (m family)

| Family | Architecture | Manufacturer | Notes |
|---|---|---|---|
| m1, m2, m3 | amd64 | intel | Previous generation |
| m4 | amd64 | intel | Previous generation |
| m5, m5a, m5d | amd64 | intel/amd | Nitro |
| m6a | amd64 | amd | 6th gen AMD |
| m6g | arm64 | aws | AWS Graviton2 |
| m6i | amd64 | intel | 6th gen Intel |
| m7g | arm64 | aws | AWS Graviton3 |
| m7i | amd64 | intel | 7th gen Intel |

Memory-optimized (r family)

| Family | Architecture | Notes |
|---|---|---|
| r3, r4 | amd64 | Previous generation |
| r5, r5a, r5d | amd64 | Nitro |
| r6a | amd64 | 6th gen AMD |
| r6g | arm64 | AWS Graviton2 |
| r6i | amd64 | 6th gen Intel |
| r7g | arm64 | AWS Graviton3 |
| r7i | amd64 | 7th gen Intel |
| x1, x1e, x2 | amd64 | High memory |

GPU and accelerator instances

| Family | GPU/Accelerator | Use case |
|---|---|---|
| g4dn, g5 | NVIDIA T4 / A10G | ML inference, graphics |
| p3, p4d, p5 | NVIDIA V100 / A100 / H100 | ML training |
| inf1, inf2 | AWS Inferentia | ML inference |
| trn1 | AWS Trainium | ML training |
GPU instance types require you to use an AMI that includes the appropriate GPU drivers (e.g., the NVIDIA driver for NVIDIA GPUs). Ensure your EC2NodeClass amiSelectorTerms selects an AMI with the correct drivers installed.

Available resources per node

The resources reported per node reflect allocatable capacity after Karpenter subtracts instance overhead. The key resources are:
| Resource | Description |
|---|---|
| cpu | Allocatable CPU in millicores |
| memory | Allocatable memory in MiB |
| ephemeral-storage | Ephemeral storage (default 17Gi) |
| pods | Maximum number of pods supported |
| vpc.amazonaws.com/pod-eni | ENI-based pod IPs (Nitro instances only) |
For example, a c5.xlarge reports:
| Resource | Quantity |
|---|---|
| cpu | 3920m |
| ephemeral-storage | 17Gi |
| memory | 6584Mi |
| pods | 58 |
| vpc.amazonaws.com/pod-eni | 18 |
Allocatable CPU and memory are lower than the raw instance specs because Karpenter accounts for Kubernetes system overhead (kubelet, OS processes) and the VM memory overhead percent configured via VM_MEMORY_OVERHEAD_PERCENT.
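The memory overhead fraction is tunable on the Karpenter controller. A sketch, assuming the Helm chart's settings block (the value shown is illustrative of the typical default):

```yaml
# values.yaml fragment for the Karpenter Helm chart
settings:
  # fraction of instance memory reserved for VM overhead;
  # surfaced to the controller as VM_MEMORY_OVERHEAD_PERCENT
  vmMemoryOverheadPercent: 0.075
```

Raising this value makes Karpenter assume less usable memory per instance (more conservative bin-packing); lowering it risks scheduling pods that do not fit once the node registers.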
