Karpenter provides native support for EC2 On-Demand Capacity Reservations (ODCRs) and EC2 Capacity Blocks for ML. This lets you explicitly select and prioritize pre-purchased capacity reservations, so Karpenter uses them before falling back to standard on-demand or Spot instances.
Native ODCR support is a Beta feature. The ReservedCapacity feature gate is enabled by default as of Karpenter v1.6. If you were using open ODCRs with an earlier version of Karpenter, review the migration section before enabling this feature.

What are ODCRs and Capacity Blocks?

On-Demand Capacity Reservations (ODCRs) let you reserve EC2 instance capacity in a specific Availability Zone for any duration. Reserved capacity is available immediately when you need it, and you are billed for it whether or not instances are running in it.

Capacity Blocks for ML are time-bounded reservations designed for large-scale ML training and inference workloads. Unlike standard ODCRs, Capacity Blocks have a defined end time, after which EC2 reclaims the instances.

Karpenter models capacity reservations as a distinct capacity type (reserved), separate from on-demand and spot. This lets you express prioritization in NodePool requirements.

Enabling native ODCR support

Ensure the ReservedCapacity feature gate is enabled. As of v1.6 this is on by default. For earlier versions, enable it explicitly in your Karpenter configuration.
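As a sketch, the gate can be set through the Karpenter Helm chart's settings values (the exact key layout below is an assumption and may vary by chart version; the gate can also be passed via the controller's FEATURE_GATES environment variable):

```yaml
# values.yaml for the Karpenter Helm chart (sketch; assumes the chart
# exposes feature gates under settings.featureGates)
settings:
  featureGates:
    reservedCapacity: true  # default since v1.6; set explicitly on earlier versions
```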

Configuring capacity reservation selector terms

Add capacityReservationSelectorTerms to your EC2NodeClass. This works similarly to amiSelectorTerms — you specify one or more terms, and Karpenter selects matching reservations in your AWS account.
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  capacityReservationSelectorTerms:
    # Select by tag
    - tags:
        application: foobar
    # Select by reservation ID
    - id: cr-56fac701cc1951b03
Capacity Blocks are modeled as on-demand capacity reservations in EC2. Select them using the same capacityReservationSelectorTerms you use for standard ODCRs.
Karpenter does not support open matching for ODCRs. All reservations you want Karpenter to use — including those with open instance eligibility — must be explicitly listed in spec.capacityReservationSelectorTerms. Reservations not listed will not be used.
For full field reference, see the NodeClass docs.

Configuring the NodePool

Karpenter uses a dedicated capacity type value, reserved, for capacity reservations. Update your NodePool to include reserved in the karpenter.sh/capacity-type requirement.

Prioritize reservations, fall back to on-demand

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ['reserved', 'on-demand']

Prioritize reservations, fall back to Spot or on-demand

requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ['reserved', 'on-demand', 'spot']
When multiple capacity types are allowed, Karpenter prioritizes reserved capacity first. Because ODCRs are pre-paid, Karpenter models them as free and will consolidate Spot and on-demand nodes onto reserved capacity when possible.

Scheduling labels for reserved nodes

Nodes launched into a capacity reservation carry additional labels you can use for scheduling constraints:
| Label | Example value | Description |
| --- | --- | --- |
| `karpenter.k8s.aws/capacity-reservation-id` | `cr-56fac701cc1951b03` | The reservation's ID |
| `karpenter.k8s.aws/capacity-reservation-type` | `default` or `capacity-block` | The type of reservation |
These labels are only present on reserved nodes. Use them in NodePool requirements or pod scheduling constraints (e.g. node affinity):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: karpenter.k8s.aws/capacity-reservation-id
              operator: In
              values:
                - cr-56fac701cc1951b03

Prioritization behavior

When a NodePool is compatible with multiple capacity types, Karpenter uses the following priority order:
  1. Reserved — used first if available and compatible with pending workloads
  2. Spot — used if no compatible reservations are available
  3. On-demand — used as final fallback
During consolidation, Karpenter also prefers moving workloads onto reserved nodes since reserved capacity is pre-paid.

Expiration and capacity block reclamation

ODCRs

An instance launched into an ODCR is not guaranteed to remain in that reservation indefinitely. The ODCR can expire, be cancelled, or the instance can be manually removed from it. If Karpenter detects that an instance is no longer associated with a reservation, it updates the node’s karpenter.sh/capacity-type label from reserved to on-demand.

Capacity Blocks

Capacity Blocks always have an end time. EC2 terminates instances in a Capacity Block before the end time:
  • For standard instance types: 30 minutes before expiry
  • For UltraServer instance types: 60 minutes before expiry
Karpenter preemptively begins draining nodes that were launched for a Capacity Block 10 minutes before EC2 begins termination, giving workloads time to gracefully terminate before the block is reclaimed.
Ensure your workloads set an appropriate terminationGracePeriodSeconds and that Pod Disruption Budgets are configured for applications running on Capacity Block nodes: graceful shutdown must complete within Karpenter's fixed 10-minute drain window.
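For example, a PodDisruptionBudget and a pod-level grace period sized to fit the drain window might look like the following (the ml-training name, labels, and image are illustrative placeholders):

```yaml
# Illustrative PDB for an application on Capacity Block nodes
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ml-training            # placeholder name
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: ml-training
---
apiVersion: v1
kind: Pod
metadata:
  name: ml-training-worker     # placeholder name
  labels:
    app: ml-training
spec:
  terminationGracePeriodSeconds: 300   # shutdown must fit within the 10-minute drain window
  containers:
    - name: worker
      image: registry.example.com/ml-training:latest  # placeholder image
```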

Combining capacity types

NodePools can mix all three capacity types to express flexible fallback behavior:
requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ['reserved', 'on-demand', 'spot']
You can also use separate NodePools with different capacity-type requirements and different weight values to express explicit preference ordering across pools.
Use a dedicated NodePool that targets only reserved capacity for workloads that must run on your reservations (e.g. to avoid paying for unused reserved capacity). Add a second NodePool with on-demand or spot for general workloads.
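One possible sketch of that split uses weight to order the pools (pool names are illustrative; this assumes an EC2NodeClass named default, as in the earlier examples):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: reserved-only          # illustrative name
spec:
  weight: 100                  # higher weight: considered before lower-weight pools
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ['reserved']
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general                # illustrative name
spec:
  weight: 10
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ['on-demand', 'spot']
```

With this split, workloads that must run on reservations can additionally target the reserved-only pool via its capacity-type or reservation labels, while everything else lands in the general pool.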

Migrating from previous versions

Before native ODCR support (prior to v1.3), Karpenter could incidentally launch instances into open ODCRs if a NodeClaim’s requirements happened to match an open reservation. This behavior is no longer supported when the ReservedCapacity feature gate is enabled. If you relied on this implicit behavior:
1. Add capacityReservationSelectorTerms to your EC2NodeClass

Explicitly list the reservations you want Karpenter to use before enabling the feature gate.
spec:
  capacityReservationSelectorTerms:
    - tags:
        application: foobar
    - id: cr-56fac701cc1951b03
2. Update NodePool capacity-type requirements

Add reserved to any NodePools you want to use with your ODCRs.
requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values: ['reserved', 'on-demand']
3. Enable the ReservedCapacity feature gate

Once your EC2NodeClass and NodePool are updated, enable the feature gate. Karpenter will immediately begin using your reservations under the new model.
Performing the EC2NodeClass and NodePool updates before enabling the feature gate ensures Karpenter can continue using your reservations without a gap in coverage.
Enabling the feature gate before updating your EC2NodeClass will cause Karpenter to stop using open ODCRs immediately, potentially falling back to on-demand capacity for workloads that were previously using your reserved capacity.
