eksctl to create the cluster. It takes less than one hour to complete and costs less than $0.25 — follow the clean-up step at the end to avoid further charges.
Prerequisites
Install the following tools before proceeding:
- AWS CLI — configured with a user that has sufficient privileges to create an EKS cluster. Verify authentication with aws sts get-caller-identity.
- kubectl — the Kubernetes CLI.
- eksctl >= v0.202.0 — the CLI for Amazon EKS.
- helm — the Kubernetes package manager.
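A quick way to check the tools and AWS authentication before continuing (a convenience sketch, not part of the official steps):

```shell
# Print each tool's version to confirm it is installed and on the PATH
aws --version
kubectl version --client
eksctl version
helm version --short

# Confirm the AWS CLI is authenticated as the intended identity
aws sts get-caller-identity
```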
Install Karpenter
Set environment variables
Set the Karpenter version, Kubernetes version, and cluster configuration variables, then populate your AWS account details:
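A sketch of the variables this guide assumes; the specific version numbers, region, and cluster name here are illustrative, so pin the values you actually intend to use:

```shell
# Karpenter release and cluster settings (illustrative values)
export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="1.6.0"
export K8S_VERSION="1.33"
export CLUSTER_NAME="karpenter-demo"
export AWS_DEFAULT_REGION="us-west-2"

# AWS account details; TEMPOUT holds a downloaded CloudFormation template
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export TEMPOUT="$(mktemp)"
```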
Create the EKS cluster
The following command deploys a CloudFormation stack to set up the IAM infrastructure Karpenter needs, then creates an EKS cluster with eksctl. The CloudFormation stack creates the KarpenterNodeRole and the Karpenter controller IAM policies. The eksctl config associates those policies with a Karpenter service account using EKS Pod Identity, and adds the node role to the cluster’s aws-auth ConfigMap so Karpenter-provisioned nodes can join.
After the cluster is created, export the cluster endpoint and Karpenter IAM role ARN.
If your AWS account has not previously used EC2 Spot instances, create the Spot service-linked role to avoid a ServiceLinkedRoleCreationNotPermitted error later.
Install Karpenter with Helm
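Before installing the chart, the cluster-creation steps above can be sketched as follows. This is a hedged sketch: the CloudFormation template URL, the ClusterConfig filename, and the ${CLUSTER_NAME}-karpenter role-name convention mirror a typical Karpenter getting-started flow and are assumptions to verify against the version you deploy.

```shell
# Deploy the CloudFormation stack with Karpenter's IAM prerequisites
curl -fsSL "https://raw.githubusercontent.com/aws/karpenter-provider-aws/v${KARPENTER_VERSION}/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml" \
  -o "${TEMPOUT}"
aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file "${TEMPOUT}" \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"

# Create the cluster; the ClusterConfig (not shown) enables EKS Pod Identity and
# tags subnets/security groups with karpenter.sh/discovery
eksctl create cluster -f cluster-config.yaml

# Export values used later by the Helm install
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query 'cluster.endpoint' --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"

# One-time per account: allow EC2 Spot usage
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true
```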
Karpenter is distributed as a signed OCI Helm chart at oci://public.ecr.aws/karpenter/karpenter. First log out of the Helm ECR registry to perform an unauthenticated pull, then install Karpenter. Once the install completes, Karpenter is active and ready to provision nodes.
Create a NodePool and EC2NodeClass
A single NodePool can handle many different pod shapes. Karpenter selects the right instance type for each workload at scheduling time, so you do not need to pre-define one node group per instance type. The EC2NodeClass below uses tag-based discovery to find subnets and security groups. The karpenter.sh/discovery tag was applied to these resources by the eksctl command in step 2. Apply both resources together.
consolidationPolicy: WhenEmptyOrUnderutilized tells Karpenter to continuously right-size and remove underutilized nodes. To disable automatic consolidation, set consolidateAfter: Never.
Test node provisioning
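Before deploying the test workload, the install and resource creation described above can be sketched as follows. This is a sketch, not the canonical manifest: the chart values, NodePool requirements, CPU limit, and AMI alias are assumptions to check against the Karpenter docs for your version.

```shell
# Log out for an unauthenticated pull, then install the chart
helm registry logout public.ecr.aws
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --wait

# Apply a minimal NodePool and EC2NodeClass pair
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  amiSelectorTerms:
    - alias: "al2023@latest"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
```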
Deploy a workload that uses the pause container, starting with zero replicas. Scale the deployment up to trigger node provisioning, then watch the Karpenter controller logs to see provisioning in action.
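A sketch of the test workload and commands; the deployment name inflate and the pause image tag are illustrative assumptions:

```shell
# Pause-container Deployment, starting at zero replicas
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
EOF

# Scale up so the pending pods force Karpenter to provision a node
kubectl scale deployment inflate --replicas 5

# Follow the controller logs
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
```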
Verify and clean up
After scaling up, scale the deployment back down to zero. Karpenter should terminate the now-empty nodes through consolidation. You can also delete individual Karpenter-managed nodes with kubectl delete node <node-name>: Karpenter adds a finalizer to each node it manages, so kubectl delete node gracefully cordons, drains, and terminates the underlying EC2 instance before the node object is removed. Finally, remove the entire demo cluster and its CloudFormation stack to avoid further charges.
Compatibility
| Kubernetes | Minimum Karpenter |
|---|---|
| 1.29 | >= 0.34 |
| 1.30 | >= 0.37 |
| 1.31 | >= 1.0.5 |
| 1.32 | >= 1.2 |
| 1.33 | >= 1.5 |
| 1.34 | >= 1.6 |
| 1.35 | 1.9.x |
Advanced topics
Private clusters
You can install Karpenter in a private EKS cluster by passing --set settings.isolatedVPC=true to Helm. Private clusters have no outbound internet access, so you must enable the following VPC private endpoints:
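The exact endpoint list depends on your cluster's features. The interface endpoints Karpenter commonly needs (an assumption based on a typical setup, so verify against the current Karpenter docs) cover EC2, ECR, STS, SSM, SQS, and EKS, plus an S3 gateway endpoint. Creating one endpoint might look like this, with placeholder VPC, subnet, and security-group IDs:

```shell
# Interface endpoints commonly required:
#   com.amazonaws.<region>.ec2
#   com.amazonaws.<region>.ecr.api
#   com.amazonaws.<region>.ecr.dkr
#   com.amazonaws.<region>.sts
#   com.amazonaws.<region>.ssm
#   com.amazonaws.<region>.sqs
#   com.amazonaws.<region>.eks
# plus a gateway endpoint for S3.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name "com.amazonaws.${AWS_DEFAULT_REGION}.ec2" \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```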
In private clusters there is no VPC endpoint for the IAM API, so you cannot use spec.role in your EC2NodeClass. Instead, create an instance profile manually and use spec.instanceProfile.
Preventing API server request throttling
By default, installing Karpenter in kube-system places it under the system-leader-election and kube-system-service-accounts FlowSchemas, which map to the leader-election and workload-high PriorityLevelConfigurations. This ensures Karpenter is not throttled when other components saturate lower-priority buckets.
If you install Karpenter in a different namespace, create custom FlowSchemas for that namespace to maintain the same priority treatment.
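A custom FlowSchema might look like the following sketch, assuming Karpenter's service account lives in a hypothetical karpenter namespace; it reuses the built-in workload-high priority level mentioned above:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: karpenter-workload-high
spec:
  priorityLevelConfiguration:
    name: workload-high        # reuse the built-in priority level
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: karpenter
            namespace: karpenter   # hypothetical install namespace
      resourceRules:
        - apiGroups: ["*"]
          resources: ["*"]
          verbs: ["*"]
          clusterScope: true
          namespaces: ["*"]
```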
Next steps
NodePool concepts
Learn how to configure requirements, limits, disruption, and expiry.
EC2NodeClass concepts
Configure AMIs, subnets, security groups, and user data.
Scheduling
Use pod affinity, topology spread, and Karpenter-specific labels.
Disruption
Understand consolidation, expiry, drift, and interruption handling.