This guide walks you through creating a new Amazon EKS cluster and installing Karpenter. By the end you will have Karpenter running and will have triggered your first automatic node provisioning. The guide uses eksctl to create the cluster. It takes less than one hour to complete and costs less than $0.25 — follow the clean-up step at the end to avoid further charges.

Prerequisites

Install the following tools before proceeding:
  1. AWS CLI — configured with a user that has sufficient privileges to create an EKS cluster. Verify authentication with aws sts get-caller-identity.
  2. kubectl — the Kubernetes CLI.
  3. eksctl >= v0.202.0 — the CLI for Amazon EKS.
  4. helm — the Kubernetes package manager.
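Before continuing, you can confirm each tool is installed with a quick check (a sketch; it only verifies presence on PATH, not the minimum versions listed above):

```shell
# Check that each required CLI is installed and visible on PATH.
# This only confirms presence; see the list above for minimum versions.
for tool in aws kubectl eksctl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

If any tool reports NOT FOUND, install it before moving on.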

Install Karpenter

Step 1: Set environment variables

Set the Karpenter version, Kubernetes version, and cluster configuration variables:
export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="1.9.0"
export K8S_VERSION="1.34"
Then populate your AWS account details:
export AWS_PARTITION="aws" # use aws-cn or aws-us-gov for non-standard partitions
export CLUSTER_NAME="${USER}-karpenter-demo"
export AWS_DEFAULT_REGION="us-west-2"
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export TEMPOUT="$(mktemp)"
export ALIAS_VERSION="$(aws ssm get-parameter \
  --name "/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2023/x86_64/standard/recommended/image_id" \
  --query Parameter.Value | xargs aws ec2 describe-images \
  --query 'Images[0].Name' --image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')"
If you open a new shell during this procedure, you will need to re-export these variables. To remind yourself of their current values, run:
echo "${KARPENTER_NAMESPACE}" "${KARPENTER_VERSION}" "${K8S_VERSION}" \
     "${CLUSTER_NAME}" "${AWS_DEFAULT_REGION}" "${AWS_ACCOUNT_ID}" \
     "${TEMPOUT}" "${ALIAS_VERSION}"
Step 2: Create the EKS cluster

The following command deploys a CloudFormation stack to set up the IAM infrastructure Karpenter needs, then creates an EKS cluster with eksctl. The CloudFormation stack creates the KarpenterNodeRole and the Karpenter controller IAM policies. The eksctl config associates those policies with a Karpenter service account using EKS Pod Identity, and adds the node role to the cluster’s aws-auth ConfigMap so Karpenter-provisioned nodes can join.
curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > "${TEMPOUT}" \
&& aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file "${TEMPOUT}" \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"
eksctl create cluster -f - <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "${K8S_VERSION}"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: "${KARPENTER_NAMESPACE}"
    serviceAccountName: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    permissionPolicyARNs:
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerNodeLifecyclePolicy-${CLUSTER_NAME}
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerIAMIntegrationPolicy-${CLUSTER_NAME}
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerEKSIntegrationPolicy-${CLUSTER_NAME}
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerInterruptionPolicy-${CLUSTER_NAME}
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerResourceDiscoveryPolicy-${CLUSTER_NAME}

iamIdentityMappings:
- arn: "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes

managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2023
  name: ${CLUSTER_NAME}-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10

addons:
- name: eks-pod-identity-agent
EOF
After the cluster is created, export the cluster endpoint and Karpenter IAM role ARN:
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.endpoint" --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"
If your AWS account has not previously used EC2 Spot instances, create the Spot service-linked role to avoid a ServiceLinkedRoleCreationNotPermitted error later:
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true
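Before moving on, you can sanity-check the cluster (an optional verification, assuming eksctl has already written the kubeconfig entry, which it does by default):

```shell
# The cluster should report ACTIVE once eksctl finishes.
aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.status" --output text
# kubectl should list the two managed-node-group nodes created above.
kubectl get nodes
```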
Step 3: Install Karpenter with Helm

Karpenter is distributed as a signed OCI Helm chart at oci://public.ecr.aws/karpenter/karpenter. First log out of the Helm ECR registry to perform an unauthenticated pull:
helm registry logout public.ecr.aws
Then install Karpenter:
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" \
  --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait
You can verify the chart signature with Cosign before installing:
cosign verify public.ecr.aws/karpenter/karpenter:"${KARPENTER_VERSION}" \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
  --certificate-github-workflow-repository=aws/karpenter-provider-aws \
  --certificate-github-workflow-name=Release \
  --certificate-github-workflow-ref="refs/tags/v${KARPENTER_VERSION}" \
  --annotations version="${KARPENTER_VERSION}"
Karpenter uses the ClusterFirst DNS policy by default. If Karpenter must manage the nodes where your DNS service (e.g., CoreDNS) runs, cluster DNS will not be available when Karpenter starts. In that case, pass --set dnsPolicy=Default to the Helm install so Karpenter uses host DNS resolution instead.
Karpenter tracks the mapping between EC2 instances and NodeClaims using the tags karpenter.sh/managed-by, karpenter.sh/nodepool, and kubernetes.io/cluster/${CLUSTER_NAME}. Any IAM principal that can create or delete these tags on EC2 instance resources (i-*) can indirectly cause Karpenter to launch or terminate instances. Enforce tag-based IAM policies to restrict this.
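As an illustration of such a tag-based restriction (a hypothetical policy sketch, not part of the guide), an IAM deny statement can block tag mutation on EC2 instances whenever Karpenter's tag keys are involved, using the aws:TagKeys condition key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyKarpenterTagMutation",
      "Effect": "Deny",
      "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:TagKeys": [
            "karpenter.sh/managed-by",
            "karpenter.sh/nodepool"
          ]
        }
      }
    }
  ]
}
```

Attach a statement like this to principals that should not influence Karpenter's instance ownership; the Karpenter controller role itself must remain exempt.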
Step 4: Create a NodePool and EC2NodeClass

A single NodePool can handle many different pod shapes. Karpenter selects the right instance type for each workload at scheduling time, so you do not need to pre-define one node group per instance type.

The EC2NodeClass below uses tag-based discovery to find subnets and security groups. The karpenter.sh/discovery tag was applied to these resources by the eksctl command in step 2.

Apply both resources together:
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}"
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
EOF
consolidationPolicy: WhenEmptyOrUnderutilized tells Karpenter to continuously right-size and remove underutilized nodes. To disable automatic consolidation, set consolidateAfter: Never.
Karpenter is now active and ready to provision nodes.
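To confirm both resources were accepted (a quick optional check, using the resource names defined above):

```shell
# Both resources should exist; the EC2NodeClass becomes Ready once
# Karpenter resolves its AMI, subnet, and security-group selectors.
kubectl get nodepools
kubectl get ec2nodeclasses
```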
Step 5: Test node provisioning

Deploy a workload that uses the pause container to trigger node provisioning:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
        securityContext:
          allowPrivilegeEscalation: false
EOF
Scale the deployment to trigger provisioning:
kubectl scale deployment inflate --replicas 5
Watch the Karpenter controller logs to see provisioning in action:
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
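Once the logs show a launch, you can verify that a NodeClaim was created and a node registered (these commands assume the standard karpenter.sh/nodepool node label and the "default" NodePool from step 4):

```shell
# Karpenter creates one NodeClaim per provisioned EC2 instance.
kubectl get nodeclaims
# Nodes launched by the "default" NodePool carry this label.
kubectl get nodes -l karpenter.sh/nodepool=default
```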
Step 6: Verify and clean up

After verifying provisioning, delete the deployment. Karpenter should terminate the now-empty nodes through consolidation:
kubectl delete deployment inflate
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
You can also delete individual Karpenter-managed nodes with kubectl delete node <node-name>. Karpenter adds a finalizer to each node it manages, so kubectl delete node gracefully cordons, drains, and terminates the underlying EC2 instance before the node object is removed.

To remove the entire demo cluster and avoid further charges:
eksctl delete cluster --name "${CLUSTER_NAME}"
aws cloudformation delete-stack --stack-name "Karpenter-${CLUSTER_NAME}"
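To confirm the teardown completed (an optional check), wait on the stack delete and verify the cluster is gone:

```shell
# Blocks until the CloudFormation stack finishes deleting (or fails).
aws cloudformation wait stack-delete-complete --stack-name "Karpenter-${CLUSTER_NAME}"
# Should fail with ResourceNotFoundException once the cluster is removed.
aws eks describe-cluster --name "${CLUSTER_NAME}"
```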

Compatibility

Kubernetes    Minimum Karpenter
1.29          >= 0.34
1.30          >= 0.37
1.31          >= 1.0.5
1.32          >= 1.2
1.33          >= 1.5
1.34          >= 1.6
1.35          1.9.x

Advanced topics

Private clusters

You can install Karpenter in a private EKS cluster by passing --set settings.isolatedVPC=true to Helm. Private clusters have no outbound internet access, so you must enable the following VPC private endpoints:
com.amazonaws.<region>.ec2
com.amazonaws.<region>.ecr.api
com.amazonaws.<region>.ecr.dkr
com.amazonaws.<region>.s3
com.amazonaws.<region>.sts
com.amazonaws.<region>.ssm
com.amazonaws.<region>.sqs
com.amazonaws.<region>.eks
Create a VPC endpoint with:
aws ec2 create-vpc-endpoint \
  --vpc-id ${VPC_ID} \
  --service-name ${SERVICE_NAME} \
  --vpc-endpoint-type Interface \
  --subnet-ids ${SUBNET_IDS} \
  --security-group-ids ${SECURITY_GROUP_IDS}
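The endpoint list above can be scripted in one loop (a sketch; it assumes VPC_ID, SUBNET_IDS, and SECURITY_GROUP_IDS are already set, and note that the s3 endpoint is often provisioned as a Gateway endpoint rather than an Interface endpoint):

```shell
# Create an Interface endpoint for each service Karpenter needs.
for svc in ec2 ecr.api ecr.dkr s3 sts ssm sqs eks; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "${VPC_ID}" \
    --service-name "com.amazonaws.${AWS_DEFAULT_REGION}.${svc}" \
    --vpc-endpoint-type Interface \
    --subnet-ids ${SUBNET_IDS} \
    --security-group-ids ${SECURITY_GROUP_IDS}
done
```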
In private clusters there is no VPC endpoint for the IAM API. You cannot use spec.role in your EC2NodeClass. Instead, create an instance profile manually and use spec.instanceProfile:
aws iam create-instance-profile --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
aws iam add-role-to-instance-profile \
  --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
  --role-name "KarpenterNodeRole-${CLUSTER_NAME}"
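Then reference the profile from the EC2NodeClass instead of spec.role (a sketch of the substitution; all other fields stay as in step 4):

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  # Replaces spec.role: Karpenter uses this pre-created profile directly
  # instead of managing one through the IAM API.
  instanceProfile: "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}"
```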

Preventing API server request throttling

By default, installing Karpenter in kube-system places it under the system-leader-election and kube-system-service-accounts FlowSchemas, which map to the leader-election and workload-high PriorityLevelConfigurations. This ensures Karpenter is not throttled when other components saturate lower-priority buckets. If you install Karpenter in a different namespace, create custom FlowSchemas for that namespace to maintain the same priority treatment.
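As an illustration (a hedged sketch assuming a hypothetical karpenter namespace; adjust names to your setup), a custom FlowSchema mapping Karpenter's service account to the workload-high priority level might look like:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: karpenter-workload-high
spec:
  matchingPrecedence: 1000
  # Reuse the built-in workload-high priority level.
  priorityLevelConfiguration:
    name: workload-high
  distinguisherMethod:
    type: ByUser
  rules:
    - resourceRules:
        - apiGroups: ["*"]
          resources: ["*"]
          verbs: ["*"]
          clusterScope: true
          namespaces: ["*"]
      subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: karpenter
            namespace: karpenter
```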

Next steps

NodePool concepts

Learn how to configure requirements, limits, disruption, and expiry.

EC2NodeClass concepts

Configure AMIs, subnets, security groups, and user data.

Scheduling

Use pod affinity, topology spread, and Karpenter-specific labels.

Disruption

Understand consolidation, expiry, drift, and interruption handling.
