A-EKS-3

EKS Clusters Using Small Subnets

Risk:
Moderate

AWS EKS utilises a Container Network Interface built on EC2 elastic network interfaces, which assigns every pod its own VPC IP address and therefore consumes a significant number of IP addresses that the network must accommodate.


Details

AWS EKS (Elastic Kubernetes Service) is a managed Kubernetes service provided by AWS. It offers a Kubernetes control plane that AWS manages on behalf of its customers. The EKS control plane is often operated with standardized configurations, catering to the typical needs of customer workloads. As this service is managed by AWS, customers generally do not directly configure the control plane.

Key components of the control plane are the cloud controller manager and the network controller. AWS networking is built around the virtual private cloud (VPC), which is fully virtualised and requires specific configuration to work effectively with Kubernetes. To simplify this process for customers, AWS has developed its own network controller for EKS workloads. This integration reduces the complexity of configuring and maintaining Kubernetes networking when clusters are hosted in AWS VPC environments.

One of the major requirements for running a Kubernetes network is that each pod must have its own unique IP address. To meet this requirement, the network controller provided by AWS EKS allocates a private address for each Kubernetes pod from the VPC's CIDR range. Unlike traditional EC2 servers in AWS, which can run multiple services and processes behind a single private IP address, each individual pod and service in an EKS Kubernetes cluster consumes a private IP address on top of the addresses already used by the underlying EC2 servers. As a result, the number of IP addresses needed for EKS-managed Kubernetes clusters is significantly greater than that of a non-EKS application setup.
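To give a sense of the scale involved, the sketch below applies the commonly cited EKS per-node pod limit formula. The ENI and address-per-ENI figures shown are illustrative examples only; actual limits vary by EC2 instance type and should be checked against AWS documentation.

```python
# Illustrative sketch: how the VPC CNI drives per-node IP consumption.
# The ENI figures below are example values, not authoritative limits.

def max_pods(enis: int, ips_per_eni: int) -> int:
    # Commonly used EKS formula: each ENI keeps one address for itself,
    # plus 2 for pods that use the host network (e.g. kube-proxy, aws-node).
    return enis * (ips_per_eni - 1) + 2

# Example figures for an m5.large-class node: 3 ENIs, 10 IPv4 addresses per ENI
enis, ips_per_eni = 3, 10
print(max_pods(enis, ips_per_eni))   # 29 schedulable pods
print(enis * ips_per_eni)            # up to 30 VPC addresses claimed by one node
```

A single worker node can therefore reserve dozens of VPC addresses, even though it is only one EC2 instance.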

Therefore, when running an AWS EKS cluster in a VPC, the number of available IP addresses can become a limiting factor, and pods may fail to schedule because no addresses remain.

Remediation

To accommodate the IP usage requirements of EKS services, we recommend configuring your subnets with a CIDR block of /22 or larger (that is, a smaller prefix length). A /22 provides 1,024 addresses per subnet, of which roughly 1,000 remain usable after AWS reserves five addresses in every subnet.

In a Kubernetes deployment, it is also advisable to distribute your worker nodes across multiple subnets, ideally using at least three subnets in three different availability zones. This setup yields around 3,000 usable IP addresses, which should be sufficient for most EKS deployments and usage patterns.
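The arithmetic behind these figures can be checked with a short sketch. The CIDR below is a placeholder; the five-address deduction reflects the addresses AWS reserves in every subnet (network, VPC router, DNS, future use, broadcast).

```python
import ipaddress

AWS_RESERVED = 5  # addresses AWS reserves in each subnet

for prefix in ("/22", "/24"):
    subnet = ipaddress.ip_network(f"10.0.0.0{prefix}")  # placeholder CIDR
    usable = subnet.num_addresses - AWS_RESERVED
    print(f"{prefix}: {usable} usable per subnet, "
          f"{usable * 3} across three availability zones")

# /22: 1019 usable per subnet, 3057 across three availability zones
# /24: 251 usable per subnet, 753 across three availability zones
```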

With smaller subnet ranges, such as a /24, which offers only 256 addresses (251 usable), you may quickly hit hard limits that prevent the cluster from scheduling new pods. This issue often requires manual intervention to either create and attach new addressable space or to manually reduce pod usage. As scheduling is intended to be an automated service, this usually results in unnecessary work and conflict between the scheduler and the administrators attempting to stabilise the cluster.
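If you suspect a cluster is already approaching this limit, checking each subnet's remaining capacity will confirm it. The sketch below assumes boto3 with default credentials, that the worker subnets carry the conventional kubernetes.io/cluster/<name> tag, and uses a placeholder cluster name and warning threshold; adjust the filter to however your subnets are actually identified.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumption: worker subnets are tagged kubernetes.io/cluster/my-cluster;
# "my-cluster" is a placeholder for your cluster name.
resp = ec2.describe_subnets(
    Filters=[{"Name": "tag:kubernetes.io/cluster/my-cluster",
              "Values": ["shared", "owned"]}]
)

for subnet in resp["Subnets"]:
    free = subnet["AvailableIpAddressCount"]  # addresses EC2 can still allocate
    cidr = subnet["CidrBlock"]
    flag = "  <-- running low" if free < 50 else ""  # 50 is an arbitrary threshold
    print(f"{subnet['SubnetId']} {cidr}: {free} free{flag}")
```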

Depending on your current VPC configuration, you may be able to simply create new, larger subnets and migrate your worker nodes to them, resolving the issue for the longer term. However, for VPCs that were not built with a large enough CIDR range, or VPCs that already have a large amount of their address space in use, a migration might require a redesigned network and an alternative migration path. The best path forward is to get a SkySiege Cloud Assessment to immediately detect which of your subnets are too small and to determine a migration route via the included consultation:

Discover if you're vulnerable

SkySiege Cloud Security Assessments scan for this issue and provide same-day reports.
Available for individual projects or organisations.