Assumptions
This guide assumes:
- You have an existing EKS cluster with CAS installed
- Your cluster uses existing VPC, subnets, and security groups
- Your nodes are part of one or more managed node groups
- Your workloads have pod disruption budgets that follow EKS best practices
- Your cluster has an OIDC provider configured for service accounts
- The aws CLI is installed and configured
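You can verify the OIDC assumption with the aws CLI. The cluster name below is a placeholder; substitute your own:

```shell
# Prints the cluster's OIDC issuer URL if one is configured;
# an empty or "None" result means IRSA is not set up yet.
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text
```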
Key differences from Cluster Autoscaler
Before migrating, it helps to understand how the two systems differ conceptually:

| | Cluster Autoscaler | Karpenter |
|---|---|---|
| Scaling model | Scales existing Auto Scaling Groups | Launches EC2 instances directly via RunInstances |
| Instance selection | Fixed per node group | Dynamically chosen per workload requirement |
| Configuration | One node group per instance type set | One NodePool covers many instance types and zones |
| Node lifecycle | ASG manages nodes | Karpenter manages the full instance lifecycle |
| Spot support | Separate Spot node groups | Native karpenter.sh/capacity-type requirement |
| Consolidation | Limited (scale down) | Continuous bin-packing and right-sizing |
Karpenter replaces node-group configuration with NodePool and EC2NodeClass objects. Your existing managed node groups can be kept at a minimal size to host Karpenter itself and other critical cluster components.
Migrate to Karpenter
Set environment variables
Set your cluster name and collect the variables needed throughout this guide:
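A sketch of the variables the commands in this guide rely on. The namespace, cluster name, and region values are placeholders; adjust them for your environment:

```shell
export KARPENTER_NAMESPACE="kube-system"   # namespace Karpenter will run in (assumption)
export CLUSTER_NAME="my-cluster"           # replace with your cluster name
export AWS_REGION="us-west-2"              # replace with your region
export AWS_PARTITION="aws"                 # "aws-cn" or "aws-us-gov" for other partitions
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity \
  --query Account --output text)"
export OIDC_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" \
  --query "cluster.identity.oidc.issuer" --output text)"
```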
Create the Karpenter node IAM role
Nodes launched by Karpenter need their own IAM role with the standard EKS worker-node policies. Create it with a trust policy that allows EC2 to assume the role, then attach the required managed policies:
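A sketch of the role creation, assuming the environment variables set earlier and the role name KarpenterNodeRole-${CLUSTER_NAME} (the name is a convention, not a requirement):

```shell
# Trust policy allowing EC2 instances to assume the role.
cat > node-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
  --assume-role-policy-document file://node-trust-policy.json

# Standard EKS worker-node managed policies.
for POLICY in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy \
              AmazonEC2ContainerRegistryReadOnly AmazonSSMManagedInstanceCore; do
  aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn "arn:${AWS_PARTITION}:iam::aws:policy/${POLICY}"
done
```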
Create the Karpenter controller IAM role
The Karpenter controller uses IAM Roles for Service Accounts (IRSA) to call AWS APIs. Create a role with a trust policy that allows your cluster's OIDC provider to issue credentials to the Karpenter service account, then create and attach the controller policy granting Karpenter the permissions it needs to manage EC2 instances, IAM instance profiles, and SQS interruption queues:
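A sketch of the IRSA role setup, assuming the environment variables set earlier and a Karpenter service account named karpenter. The full controller permissions policy is not reproduced here; take controller-policy.json from the Karpenter documentation for your version:

```shell
# Trust policy: lets the cluster's OIDC provider issue credentials
# to the karpenter service account (IRSA).
cat > controller-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT#*//}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "${OIDC_ENDPOINT#*//}:aud": "sts.amazonaws.com",
        "${OIDC_ENDPOINT#*//}:sub": "system:serviceaccount:${KARPENTER_NAMESPACE}:karpenter"
      }
    }
  }]
}
EOF

aws iam create-role --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
  --assume-role-policy-document file://controller-trust-policy.json

# Attach the controller permissions (EC2, IAM instance profile, and SQS
# actions) as an inline policy; populate controller-policy.json first.
aws iam put-role-policy --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
  --policy-name "KarpenterControllerPolicy-${CLUSTER_NAME}" \
  --policy-document file://controller-policy.json
```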
Tag subnets and security groups
Karpenter uses tag-based discovery to find which subnets and security groups to use when launching nodes. Tag the subnets for all your node groups, then tag the security groups, using whichever command matches your cluster's security-group configuration:
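A sketch of the tagging, assuming the environment variables set earlier and the conventional karpenter.sh/discovery tag key. The security-group command covers the common case where nodes use the cluster security group; if your node groups use dedicated security groups, tag those instead:

```shell
# Tag the subnets of every existing node group so Karpenter can discover them.
for NODEGROUP in $(aws eks list-nodegroups --cluster-name "${CLUSTER_NAME}" \
    --query 'nodegroups' --output text); do
  aws ec2 create-tags \
    --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
    --resources $(aws eks describe-nodegroup --cluster-name "${CLUSTER_NAME}" \
      --nodegroup-name "${NODEGROUP}" --query 'nodegroup.subnets' --output text)
done

# Tag the cluster security group.
SECURITY_GROUP=$(aws eks describe-cluster --name "${CLUSTER_NAME}" \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text)
aws ec2 create-tags \
  --tags "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
  --resources "${SECURITY_GROUP}"
```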
Update the aws-auth ConfigMap
Allow nodes using the new Karpenter node IAM role to join the cluster by adding an entry for the role to the mapRoles section of the aws-auth ConfigMap. In the entry, replace ${AWS_PARTITION} and ${AWS_ACCOUNT_ID} with your actual values, but do not replace {{EC2PrivateDNSName}}. The aws-auth ConfigMap should then contain two role mappings: one for your existing node group and one for the new Karpenter node role.
Deploy Karpenter
Set the Karpenter version to deploy, then generate the Karpenter manifest from the Helm chart. This approach lets you inspect and edit the manifest before applying it.
Before applying, edit karpenter.yaml to set node affinity so Karpenter runs on your existing managed node group rather than on a Karpenter-provisioned node: find the Karpenter Deployment and update its affinity. Then install the CRDs and apply the Karpenter manifest.
Create a default NodePool
Create a NodePool that covers the workload requirements previously handled by your CAS-managed node groups. The example below requests Spot capacity across the c, m, and r instance families; adjust the requirements to match your workloads. You can find additional NodePool examples at github.com/aws/karpenter/tree/v1.9.0/examples/v1.
Disable Cluster Autoscaler
With Karpenter running, scale the Cluster Autoscaler to zero replicas. Then reduce your managed node groups to a minimal size; Karpenter will take over provisioning capacity for your workloads. Keeping a small number of managed node group nodes ensures Karpenter itself and other critical components have a stable place to run.
If you have a large number of nodes, scale down gradually — a few instances at a time — and watch for workloads that might not have enough replicas or disruption budgets configured.
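The scale-down described above can be sketched as follows. The cluster-autoscaler Deployment name and the NODEGROUP variable are assumptions; check the names used in your cluster:

```shell
# Stop CAS so it does not fight Karpenter over capacity decisions.
kubectl scale deployment cluster-autoscaler -n kube-system --replicas=0

# Shrink a managed node group to a minimal, fixed size.
aws eks update-nodegroup-config --cluster-name "${CLUSTER_NAME}" \
  --nodegroup-name "${NODEGROUP}" \
  --scaling-config "minSize=2,maxSize=2,desiredSize=2"
```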
Validate the migration
As managed node group nodes are drained, verify that Karpenter is provisioning new nodes to replace them, and watch for new Karpenter-managed nodes appearing in the cluster. Karpenter-managed nodes have the label karpenter.sh/nodepool set to the name of the NodePool that provisioned them, so you can filter for them with a label selector.
Mapping CAS node groups to NodePools
With CAS you typically created one node group per combination of instance type, availability zone, and capacity type (on-demand vs Spot). Karpenter collapses this into a single NodePool with a requirements block that expresses the same constraints declaratively.
For example, a CAS setup with separate on-demand and Spot node groups across three instance types and three availability zones (18 node groups) maps to a single NodePool:
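A minimal sketch of such a NodePool, assuming the karpenter.sh/v1 API and an EC2NodeClass named default; the instance types, zones, and CPU limit are illustrative:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        # One NodePool expresses what previously took 18 node groups:
        # 2 capacity types x 3 instance types x 3 zones.
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["m5.large", "c5.large", "r5.large"]
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-west-2a", "us-west-2b", "us-west-2c"]
  limits:
    cpu: 1000
```

Because Karpenter may choose any combination allowed by the requirements, adding a zone or an instance family becomes a one-line edit rather than a new node group.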
Next steps
NodePool concepts
Learn how to configure requirements, limits, disruption, and expiry.
Scheduling
Control which NodePool provisions your workloads using node selectors and affinity.
Disruption
Understand consolidation, expiry, drift, and interruption handling.
Troubleshooting
Diagnose common Karpenter issues.