Architecture and actors
The threat model involves three actors:

Cluster Operator
An identity that installs and configures Karpenter in a Kubernetes cluster, and configures Karpenter’s cloud identity and permissions. The Cluster Operator has full control to install and configure Karpenter, including all NodePools and EC2NodeClasses. The Cluster Operator has privileges to manage the cloud identities and permissions for nodes and the Karpenter controller.
Cluster Developer
An identity that can create pods, typically through Deployments, DaemonSets, or other pod-controller types. A Cluster Developer cannot modify the Karpenter pod, launch pods using Karpenter’s service account, or gain access to Karpenter’s IAM role. Restrictions on specific pod fields (e.g., privilege escalation) are enforced by the Cluster Operator using policy frameworks such as OPA Gatekeeper or Kyverno.
Karpenter Controller
The Karpenter application pod that operates inside the cluster. Karpenter has permissions to:
- Create and manage cloud instances (via AWS IAM)
- Create, update, and remove Kubernetes nodes
- Evict any pod
- List pods, nodes, deployments, and many other pod-controller and storage resource types
Assumptions
The threat model is based on the following assumptions:

| Category | Assumption | Comment |
|---|---|---|
| Generic | The Karpenter pod runs on a node in the cluster and uses a Service Account for authentication to the Kubernetes API | Cluster Operators may want to isolate the node running Karpenter to a system-pool to mitigate container breakout risks |
| Generic | Cluster Developers do not have Kubernetes permissions to manage Karpenter (the Deployment, pods, ClusterRole, etc.) | |
| Generic | Restrictions on the fields of pods a Cluster Developer can create are out of scope | Cluster Operators can use policy frameworks to enforce restrictions on Pod capabilities |
| Generic | No sensitive data is included in non-Secret resources in the Kubernetes API | Karpenter does not have permission to list/watch cluster-wide ConfigMaps or Secrets |
| Generic | Karpenter has permissions to create, modify, and delete nodes from the cluster and evict any pod | Cluster Operators running applications with varying security profiles may want to configure dedicated nodes and scheduling rules for Karpenter |
| AWS-Specific | The Karpenter IAM policy is encoded in the GitHub repo; any additional permissions possibly granted to that role are out of scope | |
| AWS-Specific | The Karpenter pod uses IRSA for AWS credentials | Setup of IRSA is out of scope for this document |
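Although IRSA setup is out of scope here, as a minimal sketch the association is made by annotating Karpenter’s ServiceAccount with the controller role ARN (the role name, account ID, and namespace below are illustrative placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpenter
  namespace: kube-system
  annotations:
    # Illustrative ARN; substitute the controller role created for your cluster
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/KarpenterControllerRole-${CLUSTER_NAME}
```

The EKS pod identity webhook injects temporary AWS credentials for this role into the Karpenter pod, so no long-lived keys are stored in the cluster.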
Security best practices
- Isolate the Karpenter node. Use node selectors, taints, and tolerations to ensure Karpenter runs on dedicated system nodes that only system components can reach.
- Enforce tag-based IAM policies. Use IAM Condition keys on EC2 instance resources to restrict who can create or delete the Karpenter-managed tags (`karpenter.sh/nodepool`, `kubernetes.io/cluster/${CLUSTER_NAME}`).
- Enumerate PassRole targets explicitly. Limit `iam:PassRole` to the exact node role(s) you intend Karpenter to use. Do not use wildcard ARNs.
- Scope disruption budgets. Use NodePool `spec.disruption.budgets` to limit how aggressively Karpenter can remove nodes, reducing the impact of a misconfigured policy.
- Use resource quotas. Apply Kubernetes `ResourceQuota` objects to limit the total resource footprint any Cluster Developer can request, preventing runaway node creation.
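As a sketch of the last two practices, a NodePool with resource limits and a disruption budget, plus a namespace ResourceQuota, might look like the following (all names and values are illustrative, not recommendations):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  # Cap the total resources Karpenter may provision across all nodes in this pool
  limits:
    cpu: "1000"
    memory: 1000Gi
  disruption:
    budgets:
      # Allow at most 10% of this pool's nodes to be disrupted at once
      - nodes: "10%"
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a   # illustrative namespace
spec:
  hard:
    pods: "200"
    requests.cpu: "100"
    requests.memory: 200Gi
```

The NodePool limit bounds what Karpenter will provision regardless of pending-pod pressure, while the ResourceQuota bounds what a Cluster Developer can request in the first place.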
Known threats and mitigations
Threat: Cluster Developer creates an arbitrarily large number of nodes
Background: Karpenter creates new instances based on the count of pending pods.

Threat: A Cluster Developer creates a large number of pods or uses pod anti-affinity to schedule one pod per node, causing Karpenter to launch far more instances than intended.

Mitigation: Use Kubernetes resource quotas to limit pod counts, and configure NodePool limits to cap the total CPU, memory, or other resources provisioned across all nodes in the pool.

Threat: EC2 tag manipulation to orchestrate instance creation or deletion

Background: Starting in v0.28.0, Karpenter uses the following tags to maintain a consistent mapping between CloudProvider instances and Kubernetes CustomResources:

- `karpenter.sh/managed-by`
- `karpenter.sh/nodepool`
- `kubernetes.io/cluster/${CLUSTER_NAME}`
Threat: An actor with `ec2:CreateTags` or `ec2:DeleteTags` permissions on instance resources could manipulate these tags, causing Karpenter to create or delete CloudProvider instances as a side effect.
Mitigation: Enforce tag-based IAM policies on EC2 instance resources (`i-*`) for any user that has `CreateTags`/`DeleteTags` permissions but should not have `RunInstances`/`TerminateInstances` permissions.
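A minimal sketch of such a policy, attached to principals that may tag instances but should not influence Karpenter’s instance lifecycle (the statement ID is illustrative; the tag keys are the ones named above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyKarpenterTagManipulation",
      "Effect": "Deny",
      "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:TagKeys": [
            "karpenter.sh/nodepool",
            "kubernetes.io/cluster/${CLUSTER_NAME}"
          ]
        }
      }
    }
  ]
}
```

The `aws:TagKeys` condition is multivalued, so `ForAnyValue:StringEquals` denies any tagging request that touches either Karpenter-managed key, while leaving other tagging operations unaffected.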
Threat: Launching EC2 instances with unintended IAM roles
Background: Many IAM roles in an AWS account may trust the EC2 service principal. The `iam:PassRole` permission controls which roles an IAM principal can attach to instances.
Threat: A Cluster Operator creates an EC2NodeClass with an IAM role not intended for Karpenter nodes.
Mitigation: Enumerate the allowed roles explicitly in the `Resource` section of the `iam:PassRole` statement in the Karpenter controller’s IAM policy. Karpenter will fail to generate an instance profile if the role specified in `spec.role` of the EC2NodeClass is not included in `iam:PassRole`.
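As a sketch, the `iam:PassRole` statement enumerates the node role ARN explicitly rather than using a wildcard (the account ID and role name are illustrative placeholders):

```json
{
  "Sid": "PassNodeIAMRole",
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::111122223333:role/KarpenterNodeRole-${CLUSTER_NAME}",
  "Condition": {
    "StringEquals": {
      "iam:PassedToService": "ec2.amazonaws.com"
    }
  }
}
```

The `iam:PassedToService` condition further restricts the role to being passed to EC2 only, so the same permission cannot be reused to hand the role to another AWS service.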
Threat: Karpenter operates on IAM instance profiles it does not own
Background: Karpenter has permission to create, update, and delete IAM instance profiles so it can auto-generate them for EC2NodeClasses.

Threat: An actor who gains control of the Karpenter pod’s IAM role may delete instance profiles not owned by Karpenter, disrupting other workloads in the account.

Mitigation: Karpenter’s controller permissions are conditioned on ownership tags:

- `karpenter.sh/managed-by`
- `kubernetes.io/cluster/${CLUSTER_NAME}`
- `karpenter.k8s.aws/ec2nodeclass`
- `topology.kubernetes.io/region`
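A sketch in the spirit of Karpenter’s published controller policy shows how these ownership conditions scope instance-profile actions (the exact statements are defined in the policy encoded in the GitHub repo; this fragment is illustrative):

```json
{
  "Sid": "AllowScopedInstanceProfileActions",
  "Effect": "Allow",
  "Action": [
    "iam:AddRoleToInstanceProfile",
    "iam:RemoveRoleFromInstanceProfile",
    "iam:DeleteInstanceProfile"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
      "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}"
    },
    "StringLike": {
      "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
    }
  }
}
```

Because the `aws:ResourceTag` conditions must all match, an instance profile that lacks the cluster ownership tags is outside Karpenter’s reach even if the pod’s credentials are compromised.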
Threat: Karpenter creates or terminates EC2 instances outside the cluster
Background: EC2 instances can exist in an AWS account outside of any Kubernetes cluster.

Threat: An actor who gains control of the Karpenter pod’s IAM role creates or terminates EC2 instances not managed by Karpenter.

Mitigation: Karpenter tags every instance it creates with `karpenter.sh/nodepool` and `kubernetes.io/cluster/${CLUSTER_NAME}`. The termination permission (`ec2:TerminateInstances`) is conditioned on both tags being present. Karpenter cannot terminate instances that lack these tags.
Additionally, Karpenter cannot modify tags on instances it does not own after creation. The aws:ResourceTag conditions enforce that only instances already tagged with karpenter.sh/nodepool and kubernetes.io/cluster/${CLUSTER_NAME} can have their Name and karpenter.sh/nodeclaim tags updated.
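A sketch of the tag-conditioned termination permission, in the spirit of the policy encoded in the GitHub repo (the statement ID is illustrative):

```json
{
  "Sid": "AllowScopedDeletion",
  "Effect": "Allow",
  "Action": "ec2:TerminateInstances",
  "Resource": "arn:aws:ec2:*:*:instance/*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned"
    },
    "StringLike": {
      "aws:ResourceTag/karpenter.sh/nodepool": "*"
    }
  }
}
```

An instance created outside Karpenter will not carry both tags, so the `Allow` never matches and the termination request is denied by default.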
Threat: Karpenter selects an unintended AMI
Background: EC2NodeClasses can reference AMIs by metadata (name or tags) rather than a specific AMI ID.

Threat: A threat actor publishes a public AMI with the same name as a customer’s AMI, causing Karpenter to select it instead of the intended AMI.

Mitigation: When selecting AMIs by name or tags, Karpenter automatically applies an ownership filter of `self,amazon`, ensuring that only AMIs owned by the account or Amazon are considered. Public AMIs from third parties are excluded by default.
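As a sketch, an EC2NodeClass selecting an AMI by name can also pin the owner explicitly to make the intent visible (the AMI name, role, and discovery tags below are illustrative):

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: KarpenterNodeRole-${CLUSTER_NAME}   # illustrative role name
  amiSelectorTerms:
    # Name-based selection; the implicit self,amazon owner filter applies,
    # but pinning an explicit owner (or an AMI id) narrows it further
    - name: my-custom-ami-*
      owner: self
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: ${CLUSTER_NAME}
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: ${CLUSTER_NAME}
```

Selecting by a specific AMI `id` removes the naming ambiguity entirely, at the cost of manual updates when new AMI versions are released.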
IAM permissions summary
The table below summarizes the categories of AWS permissions Karpenter holds and what they are used for:

| Permission category | AWS actions | Purpose |
|---|---|---|
| Instance lifecycle | ec2:RunInstances, ec2:CreateFleet, ec2:TerminateInstances | Launch and terminate EC2 instances for Kubernetes nodes |
| Launch template management | ec2:CreateLaunchTemplate, ec2:DeleteLaunchTemplate | Create and clean up launch templates used for node provisioning |
| Instance tagging | ec2:CreateTags | Tag managed instances and launch templates for ownership tracking |
| EC2 resource discovery | ec2:Describe* | Discover available instance types, subnets, security groups, and capacity |
| IAM instance profiles | iam:CreateInstanceProfile, iam:DeleteInstanceProfile, iam:AddRoleToInstanceProfile, iam:RemoveRoleFromInstanceProfile, iam:TagInstanceProfile, iam:GetInstanceProfile, iam:ListInstanceProfiles | Manage instance profiles that authorize nodes joining the cluster |
| IAM role passing | iam:PassRole | Allow Karpenter to attach the node role to generated instance profiles |
| EKS cluster discovery | eks:DescribeCluster | Discover the API server endpoint for new nodes |
| SSM parameter access | ssm:GetParameter | Retrieve the latest EKS-optimized AMI IDs |
| Pricing data | pricing:GetProducts | Estimate instance costs for bin-packing decisions |
| SQS interruption queue | sqs:ReceiveMessage, sqs:DeleteMessage, sqs:GetQueueUrl | Process EC2 interruption and health events |