The Getting Started with Karpenter guide uses a CloudFormation template (cloudformation.yaml) to bootstrap IAM resources that allow Karpenter to create and manage nodes and respond to interruption events. This page describes each section of that template so you can:
  • Understand what Karpenter is authorized to do with your EKS cluster and AWS resources
  • Create equivalent IAM resources manually when adding Karpenter to an existing cluster

Downloading the template

export KARPENTER_VERSION="1.9.0"
curl https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml > cloudformation.yaml
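Once downloaded, the template can be deployed with the AWS CLI. A sketch, assuming `CLUSTER_NAME` is set to your cluster name (the stack name is illustrative):

```shell
# Deploy the bootstrap stack; CAPABILITY_NAMED_IAM is required because
# the template creates IAM roles and policies with explicit names.
aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file cloudformation.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"
```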

Template overview

The template is organized into three groups of resources:
| Section | Purpose |
| --- | --- |
| Node authorization | Creates KarpenterNodeRole and attaches it to the instance profiles Karpenter generates at runtime |
| Controller authorization | Creates five IAM managed policies attached to the IAM role used by the Karpenter controller's service account |
| Interruption handling | Creates an SQS queue and EventBridge rules to route EC2 lifecycle events to Karpenter |
Resource names in the template are derived from the cluster name. For a cluster named bob-karpenter-demo, the node role would be KarpenterNodeRole-bob-karpenter-demo.

Node authorization

KarpenterNodeRole

This IAM role is attached to the instance profiles Karpenter generates when launching EC2 nodes. It grants nodes the permissions they need to join the cluster and operate.
KarpenterNodeRole:
  Type: "AWS::IAM::Role"
  Properties:
    RoleName: !Sub "KarpenterNodeRole-${ClusterName}"
    Path: /
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service:
              !Sub "ec2.${AWS::URLSuffix}"
          Action:
            - "sts:AssumeRole"
    ManagedPolicyArns:
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly"
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonSSMManagedInstanceCore"
The role attaches four AWS managed policies:
| Policy | Purpose |
| --- | --- |
| AmazonEKS_CNI_Policy | Permissions for the Amazon VPC CNI plugin to configure EKS worker nodes |
| AmazonEKSWorkerNodePolicy | Allows worker nodes to connect to EKS clusters |
| AmazonEC2ContainerRegistryPullOnly | Allows pulling images from Amazon ECR |
| AmazonSSMManagedInstanceCore | Enables AWS Systems Manager core functions on EC2 instances |
If you have an existing node role you want to reuse, you can skip this step and pass the existing role to your EC2NodeClasses. Make sure the controller’s iam:PassRole permission covers the role attached to the generated instance profiles.
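For example, an EC2NodeClass that reuses an existing role might look like the following sketch (the role name and discovery tags are illustrative):

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  # Existing node role, by name rather than ARN. Karpenter generates the
  # instance profile for it, so the controller's iam:PassRole must cover it.
  role: MyExistingNodeRole          # illustrative name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
```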

Controller authorization

The Karpenter controller’s IAM permissions are split across five managed policies. When using eksctl, these policies are attached to the karpenter service account’s IAM role via IRSA or EKS Pod Identity.

KarpenterControllerNodeLifecyclePolicy

Manages EC2 instance and launch template lifecycle operations.
Allows RunInstances and CreateFleet to access (but not create) image, snapshot, security-group, subnet, and capacity-reservation resources, scoped to the AWS partition and region.
{
  "Sid": "AllowScopedEC2InstanceAccessActions",
  "Effect": "Allow",
  "Resource": [
    "arn:${Partition}:ec2:${Region}::image/*",
    "arn:${Partition}:ec2:${Region}::snapshot/*",
    "arn:${Partition}:ec2:${Region}:*:security-group/*",
    "arn:${Partition}:ec2:${Region}:*:subnet/*",
    "arn:${Partition}:ec2:${Region}:*:capacity-reservation/*"
  ],
  "Action": ["ec2:RunInstances", "ec2:CreateFleet"]
}
Allows RunInstances and CreateFleet to access launch templates that have the kubernetes.io/cluster/${ClusterName}=owned and karpenter.sh/nodepool tags. This ensures Karpenter can only use launch templates it provisioned itself.
{
  "Sid": "AllowScopedEC2LaunchTemplateAccessActions",
  "Effect": "Allow",
  "Resource": "arn:${Partition}:ec2:${Region}:*:launch-template/*",
  "Action": ["ec2:RunInstances", "ec2:CreateFleet"],
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/kubernetes.io/cluster/${ClusterName}": "owned"
    },
    "StringLike": {
      "aws:ResourceTag/karpenter.sh/nodepool": "*"
    }
  }
}
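You can inspect which launch templates fall under this scope with a tag-filtered query; a sketch using the AWS CLI:

```shell
# List launch templates that carry the cluster-ownership tag, i.e. the
# ones Karpenter itself provisioned for this cluster.
aws ec2 describe-launch-templates \
  --filters "Name=tag:kubernetes.io/cluster/${CLUSTER_NAME},Values=owned" \
  --query 'LaunchTemplates[].LaunchTemplateName' \
  --output text
```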
Allows RunInstances, CreateFleet, and CreateLaunchTemplate to create fleet, instance, volume, network-interface, launch-template, and spot-instances-request resources. Requires that kubernetes.io/cluster/${ClusterName}=owned and karpenter.sh/nodepool tags are set on the request, scoping Karpenter to a single EKS cluster.
{
  "Sid": "AllowScopedEC2InstanceActionsWithTags",
  "Effect": "Allow",
  "Resource": [
    "arn:${Partition}:ec2:${Region}:*:fleet/*",
    "arn:${Partition}:ec2:${Region}:*:instance/*",
    "arn:${Partition}:ec2:${Region}:*:volume/*",
    "arn:${Partition}:ec2:${Region}:*:network-interface/*",
    "arn:${Partition}:ec2:${Region}:*:launch-template/*",
    "arn:${Partition}:ec2:${Region}:*:spot-instances-request/*"
  ],
  "Action": ["ec2:RunInstances", "ec2:CreateFleet", "ec2:CreateLaunchTemplate"],
  "Condition": {
    "StringEquals": {
      "aws:RequestTag/kubernetes.io/cluster/${ClusterName}": "owned",
      "aws:RequestTag/eks:eks-cluster-name": "${ClusterName}"
    },
    "StringLike": {
      "aws:RequestTag/karpenter.sh/nodepool": "*"
    }
  }
}
Allows CreateTags on fleet, instance, volume, network-interface, launch-template, and spot-instances-request resources only during RunInstances, CreateFleet, or CreateLaunchTemplate calls. Prevents Karpenter from tagging resources arbitrarily after creation.
Allows CreateTags on instances after creation, restricted to instances Karpenter owns (identified by kubernetes.io/cluster/${ClusterName} and karpenter.sh/nodepool tags). Only the eks:eks-cluster-name, karpenter.sh/nodeclaim, and Name tag keys may be modified.
Allows TerminateInstances and DeleteLaunchTemplate on resources that have both karpenter.sh/nodepool and kubernetes.io/cluster/${ClusterName} tags set, ensuring Karpenter can only delete resources it owns.
{
  "Sid": "AllowScopedDeletion",
  "Effect": "Allow",
  "Resource": [
    "arn:${Partition}:ec2:${Region}:*:instance/*",
    "arn:${Partition}:ec2:${Region}:*:launch-template/*"
  ],
  "Action": ["ec2:TerminateInstances", "ec2:DeleteLaunchTemplate"],
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/kubernetes.io/cluster/${ClusterName}": "owned"
    },
    "StringLike": {
      "aws:ResourceTag/karpenter.sh/nodepool": "*"
    }
  }
}

KarpenterControllerIAMIntegrationPolicy

Manages IAM instance profile operations so Karpenter can auto-generate profiles for EC2NodeClasses.
Grants iam:PassRole on the KarpenterNodeRole so EC2 can use it when assigning permissions to generated instance profiles during node launch.
Grants iam:CreateInstanceProfile scoped to requests tagged with kubernetes.io/cluster/${ClusterName}=owned, eks:eks-cluster-name=${ClusterName}, topology.kubernetes.io/region, and a karpenter.k8s.aws/ec2nodeclass tag.
Grants iam:TagInstanceProfile restricted to instance profiles owned by Karpenter for this cluster (enforced via both ResourceTag and RequestTag conditions).
Grants iam:AddRoleToInstanceProfile, iam:RemoveRoleFromInstanceProfile, and iam:DeleteInstanceProfile on instance profiles tagged with kubernetes.io/cluster/${ClusterName}=owned and the current region. If you configure Karpenter to use a new role via an EC2NodeClass, ensure that role is also covered by your iam:PassRole permission.
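The PassRole grant is scoped to the node role rather than granted broadly. A sketch of what such a statement looks like (check the template for the exact Sid and condition keys; the `iam:PassedToService` condition is an assumption based on recent template versions):

```json
{
  "Sid": "AllowPassingInstanceRole",
  "Effect": "Allow",
  "Resource": "arn:${Partition}:iam::${AccountId}:role/KarpenterNodeRole-${ClusterName}",
  "Action": "iam:PassRole",
  "Condition": {
    "StringEquals": {
      "iam:PassedToService": "ec2.amazonaws.com"
    }
  }
}
```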

KarpenterControllerEKSIntegrationPolicy

Enables Karpenter to discover the Kubernetes cluster’s external API endpoint.
{
  "Sid": "AllowAPIServerEndpointDiscovery",
  "Effect": "Allow",
  "Resource": "arn:${Partition}:eks:${Region}:${AccountId}:cluster/${ClusterName}",
  "Action": "eks:DescribeCluster"
}
If you are not using an EKS control plane, you must specify the cluster endpoint explicitly using the CLUSTER_ENDPOINT environment variable or --cluster-endpoint CLI flag.
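On EKS, the endpoint can be discovered once and exported before starting Karpenter:

```shell
# Look up the API server endpoint for the cluster and expose it to Karpenter.
export CLUSTER_ENDPOINT="$(aws eks describe-cluster \
  --name "${CLUSTER_NAME}" \
  --query 'cluster.endpoint' \
  --output text)"
```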

KarpenterControllerInterruptionPolicy

Grants read/delete access to the SQS interruption queue.
{
  "Sid": "AllowInterruptionQueueActions",
  "Effect": "Allow",
  "Resource": "${KarpenterInterruptionQueue.Arn}",
  "Action": [
    "sqs:DeleteMessage",
    "sqs:GetQueueUrl",
    "sqs:ReceiveMessage"
  ]
}

KarpenterControllerResourceDiscoveryPolicy

Provides read-only access for resource discovery.
Allows read-only EC2 Describe actions scoped to the current region:
{
  "Sid": "AllowRegionalReadActions",
  "Effect": "Allow",
  "Resource": "*",
  "Action": [
    "ec2:DescribeCapacityReservations",
    "ec2:DescribeImages",
    "ec2:DescribeInstances",
    "ec2:DescribeInstanceTypeOfferings",
    "ec2:DescribeInstanceTypes",
    "ec2:DescribeLaunchTemplates",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeSpotPriceHistory",
    "ec2:DescribeSubnets"
  ],
  "Condition": {
    "StringEquals": {
      "aws:RequestedRegion": "${Region}"
    }
  }
}
Allows ssm:GetParameter for AWS service SSM parameters (used to discover the latest EKS-optimized AMI IDs):
{
  "Sid": "AllowSSMReadActions",
  "Effect": "Allow",
  "Resource": "arn:${Partition}:ssm:${Region}::parameter/aws/service/*",
  "Action": "ssm:GetParameter"
}
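As an example of what this permission is used for, the recommended EKS-optimized AMI IDs are published under this parameter hierarchy (the Kubernetes version and architecture below are illustrative):

```shell
# Resolve the latest recommended Amazon Linux 2023 AMI for EKS 1.31 on x86_64.
aws ssm get-parameter \
  --name "/aws/service/eks/optimized-ami/1.31/amazon-linux-2023/x86_64/standard/recommended/image_id" \
  --query 'Parameter.Value' \
  --output text
```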
Allows pricing:GetProducts globally (pricing data is not available in every region):
{
  "Sid": "AllowPricingReadActions",
  "Effect": "Allow",
  "Resource": "*",
  "Action": "pricing:GetProducts"
}
Set ISOLATED_VPC=true if your cluster cannot reach the AWS pricing endpoint. Karpenter will fall back to on-demand pricing estimates.
Allows iam:ListInstanceProfiles globally and iam:GetInstanceProfile on all instance profiles to check whether a profile has been provisioned for an EC2NodeClass:
{
  "Sid": "AllowUnscopedInstanceProfileListAction",
  "Effect": "Allow",
  "Resource": "*",
  "Action": "iam:ListInstanceProfiles"
}
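The `iam:GetInstanceProfile` grant is a separate statement scoped to instance profiles in the account; a sketch of its shape (check the template for the exact Sid):

```json
{
  "Sid": "AllowInstanceProfileReadActions",
  "Effect": "Allow",
  "Resource": "arn:${Partition}:iam::${AccountId}:instance-profile/*",
  "Action": "iam:GetInstanceProfile"
}
```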

Interruption handling

This section creates an SQS queue and EventBridge rules that route EC2 lifecycle events to Karpenter. Karpenter uses these events to proactively reschedule workloads before instances are reclaimed.
Interruption handling is optional. Enable it by setting the INTERRUPTION_QUEUE environment variable to the SQS queue name (matching your cluster name). See Settings.
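When deploying with the Helm chart, the same setting is typically passed through chart values; a sketch (the cluster name is illustrative):

```yaml
# values.yaml fragment for the Karpenter Helm chart
settings:
  clusterName: my-cluster
  interruptionQueue: my-cluster   # must match the SQS queue name
```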

Supported events

| Event | Source | Description |
| --- | --- | --- |
| AWS Health Event | aws.health | Scheduled maintenance and AWS health notifications |
| EC2 Spot Instance Interruption Warning | aws.ec2 | 2-minute warning before a Spot instance is reclaimed |
| EC2 Instance Rebalance Recommendation | aws.ec2 | Signal that a Spot instance is at elevated interruption risk |
| EC2 Instance State-change Notification | aws.ec2 | Instance state transitions (pending, running, stopping, terminated) |
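A Spot interruption warning delivered to the queue looks roughly like this (the instance ID and region are illustrative):

```json
{
  "version": "0",
  "source": "aws.ec2",
  "detail-type": "EC2 Spot Instance Interruption Warning",
  "region": "us-west-2",
  "detail": {
    "instance-id": "i-0123456789abcdef0",
    "instance-action": "terminate"
  }
}
```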

KarpenterInterruptionQueue

An SQS standard queue named after your cluster with a 5-minute message retention period and server-side encryption enabled:
KarpenterInterruptionQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Sub "${ClusterName}"
    MessageRetentionPeriod: 300
    SqsManagedSseEnabled: true

KarpenterInterruptionQueuePolicy

Allows events.amazonaws.com and sqs.amazonaws.com to send messages to the queue. Denies all non-HTTPS connections to enforce encryption in transit:
KarpenterInterruptionQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - !Ref KarpenterInterruptionQueue
    PolicyDocument:
      Id: EC2InterruptionPolicy
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - events.amazonaws.com
              - sqs.amazonaws.com
          Action: sqs:SendMessage
          Resource: !GetAtt KarpenterInterruptionQueue.Arn
        - Sid: DenyHTTP
          Effect: Deny
          Action: sqs:*
          Resource: !GetAtt KarpenterInterruptionQueue.Arn
          Condition:
            Bool:
              aws:SecureTransport: false
          Principal: "*"

EventBridge rules

Four EventBridge rules route events into the interruption queue:
# AWS Health Events
ScheduledChangeRule:
  Type: 'AWS::Events::Rule'
  Properties:
    EventPattern:
      source: [aws.health]
      detail-type: [AWS Health Event]
    Targets:
      - Id: KarpenterInterruptionQueueTarget
        Arn: !GetAtt KarpenterInterruptionQueue.Arn

# Spot interruption warnings
SpotInterruptionRule:
  Type: 'AWS::Events::Rule'
  Properties:
    EventPattern:
      source: [aws.ec2]
      detail-type: [EC2 Spot Instance Interruption Warning]
    Targets:
      - Id: KarpenterInterruptionQueueTarget
        Arn: !GetAtt KarpenterInterruptionQueue.Arn

# Spot rebalance recommendations
RebalanceRule:
  Type: 'AWS::Events::Rule'
  Properties:
    EventPattern:
      source: [aws.ec2]
      detail-type: [EC2 Instance Rebalance Recommendation]
    Targets:
      - Id: KarpenterInterruptionQueueTarget
        Arn: !GetAtt KarpenterInterruptionQueue.Arn

# Instance state changes
InstanceStateChangeRule:
  Type: 'AWS::Events::Rule'
  Properties:
    EventPattern:
      source: [aws.ec2]
      detail-type: [EC2 Instance State-change Notification]
    Targets:
      - Id: KarpenterInterruptionQueueTarget
        Arn: !GetAtt KarpenterInterruptionQueue.Arn

Manual IAM setup

If you are adding Karpenter to an existing cluster without using cloudformation.yaml, create the five controller policies described above and attach them to the IAM role used by Karpenter’s service account. You can use IRSA or EKS Pod Identity:
# Using eksctl to create the service account with IRSA
eksctl create iamserviceaccount \
  --cluster "${CLUSTER_NAME}" \
  --namespace kube-system \
  --name karpenter \
  --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerNodeLifecyclePolicy-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerIAMIntegrationPolicy-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerEKSIntegrationPolicy-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerInterruptionPolicy-${CLUSTER_NAME}" \
  --attach-policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerResourceDiscoveryPolicy-${CLUSTER_NAME}" \
  --approve
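If you prefer EKS Pod Identity over IRSA, eksctl can create the association instead; a sketch (verify the flags against your eksctl version, and repeat or comma-separate the policy ARNs for all five policies):

```shell
# Alternative to IRSA: bind the karpenter service account via EKS Pod Identity.
eksctl create podidentityassociation \
  --cluster "${CLUSTER_NAME}" \
  --namespace kube-system \
  --service-account-name karpenter \
  --role-name "KarpenterControllerRole-${CLUSTER_NAME}" \
  --permission-policy-arns "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerNodeLifecyclePolicy-${CLUSTER_NAME}"
```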
