Intro to EKS: Amazon’s Elastic Container Service for Kubernetes

One of Amazon’s newest offerings is Amazon Elastic Container Service for Kubernetes (EKS). Amazon EKS is a platform for enterprises to run production-grade workloads. According to a CNCF survey, an estimated 63% of Kubernetes workloads already run on AWS today, all self-managed by customers. Given this amount of Kubernetes usage already on the platform, it made sense to build a service to help customers manage their Kubernetes workloads.

Why Kubernetes?

Kubernetes is popular for a number of reasons. In addition to offering flexible scheduling, Kubernetes has a large ecosystem of third-party plugins, and there’s built-in, out-of-the-box support for secrets, service discovery, load balancing, and more. In building Amazon EKS, the goal was to take that management burden away from customers, so they could focus more on their daily workloads.

Amazon EKS was built around a few tenets to ensure its success amongst customers:

  1. Amazon EKS is a platform for enterprises to run production-grade workloads.
  2. Amazon EKS provides a native and upstream Kubernetes experience.
  3. Amazon EKS customers can integrate seamlessly with other AWS services, but are not forced to use them.
  4. Amazon EKS actively contributes to the upstream Kubernetes project.

Amazon EKS provides a managed control plane with highly available master and etcd nodes that are managed and scaled for you. However, as part of the shared responsibility model between AWS and its customers, you will have to bring your own worker nodes, just as in Amazon ECS. AWS APIs also allow for simplified cluster creation.

Understanding Amazon EKS Architecture and Connectivity 

The Amazon EKS architecture is spread across three Availability Zones (AZs), and the service will only operate in regions with a minimum of three AZs. Amazon manages the master and etcd nodes for you, while customers are responsible for managing the worker nodes.

EKS is exposed to you as the customer through an endpoint. kubectl should be configured on your machine to connect to that endpoint. As you spin up your worker nodes, those nodes call into that managed master from your AWS account.

Master access and visibility are currently available through CloudTrail, while authentication and authorization are controlled through the Kubernetes API and AWS IAM. Networking appears as a native VPC through a CNI plugin, and firewalls and network policies are offered using Tigera’s Project Calico.

Optimizing Costs on Amazon EKS

A lot of clients on AWS don’t run things to their maximum potential. Often, maximizing potential means only running what you need. AWS offers scaling (and auto-scaling) that was previously unavailable in traditional data centers. When working with scaling on AWS, the key is to run at around 80% capacity, since scaling takes some time to kick in. Operating this way leaves some ramp-up room while new capacity comes online.
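As a concrete illustration of the 80% rule at the pod level, a Kubernetes Horizontal Pod Autoscaler can target roughly 80% CPU utilization so that new replicas are requested before capacity runs out. This is a minimal sketch; the Deployment name (`web`) and the replica bounds are hypothetical, and it assumes cluster metrics (e.g. metrics-server or Heapster) are available.

```yaml
# Hypothetical example: scale the "web" Deployment to hold ~80% CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2        # always keep some headroom
  maxReplicas: 10       # hard ceiling for cost control
  targetCPUUtilizationPercentage: 80
```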

In addition, spot instances are another way to keep costs low. Spot instances can be discounted by up to 90%, with pricing specific to each AZ. Disruptions are typically AZ-specific as well, which mitigates the “danger” of spot instances being shut off. Kubernetes does a great job of rescheduling pods if a spot instance gets reclaimed, and there are a number of different tools around spot instances that can help you pick which pods you want running and where. Spot fleets also let you mix and match instance sizes to help meet your needs.
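One common pattern for choosing which pods land on spot capacity is node labeling: label spot-backed worker nodes at boot, then steer interruption-tolerant workloads onto them with a node selector. This is a hedged sketch; the `lifecycle: spot` label and the pod definition are illustrative assumptions, not an EKS built-in.

```yaml
# Hypothetical pod that only schedules onto nodes you have labeled
# "lifecycle=spot" (e.g. via the kubelet's --node-labels flag at boot).
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  nodeSelector:
    lifecycle: spot        # assumed custom label on spot-backed nodes
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "echo processing && sleep 3600"]
```

Stateless, retryable workloads are the natural fit here; keep latency-sensitive services on on-demand nodes.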

Live Demo of Amazon EKS

Demoing EKS, we begin with the EKS Getting Started Guide.

First, we create a role for the EKS service account, which is quick and easy to do.
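The role itself needs a trust policy that lets the EKS service assume it; at the time of writing you would also attach the AmazonEKSClusterPolicy and AmazonEKSServicePolicy managed policies, per the Getting Started Guide. A sketch of the trust relationship:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```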

Then we create a VPC, which creates only minimal resources: three public subnets, a route table, an internet gateway, and a security group. The security group is often overlooked, but it will be important to copy over if you want to plug EKS into your own VPC.

Next, you’ll create an EKS cluster, which can take a few minutes. You will want to select the Kubernetes version and role. Selecting a VPC will auto-select your subnets, and you will need to select your security group.
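The same step can be driven from the AWS CLI instead of the console. This is a hedged sketch: every name, ARN, and ID below is a placeholder to substitute with your own values, and the available `--kubernetes-version` values depend on what EKS supports at the time.

```shell
# Illustrative values only; substitute your own role ARN, subnets, and security group.
aws eks create-cluster \
  --name demo-cluster \
  --kubernetes-version 1.10 \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,subnet-ccc,securityGroupIds=sg-12345678

# Poll until the cluster status reports ACTIVE before moving on.
aws eks describe-cluster --name demo-cluster --query cluster.status
```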

Once your EKS cluster is created, you will then create your worker nodes using CloudFormation. You’ll want to reference the cluster and security group you want to tie into, set auto-scaling sizing, choose an instance type, add an AMI ID, and configure your network.
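The console steps above map to a CloudFormation stack launch, which you can also script. This sketch uses parameter names along the lines of the guide’s sample node group template at the time; the template URL and all values are placeholders, so check them against the current Getting Started Guide.

```shell
# Illustrative sketch; every value and the template URL are placeholders.
aws cloudformation create-stack \
  --stack-name demo-worker-nodes \
  --template-url <node-group-template-url-from-the-guide> \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=demo-cluster \
    ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-12345678 \
    ParameterKey=NodeGroupName,ParameterValue=demo-nodes \
    ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
    ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=3 \
    ParameterKey=NodeInstanceType,ParameterValue=m4.large \
    ParameterKey=NodeImageId,ParameterValue=<eks-optimized-ami-id> \
    ParameterKey=KeyName,ParameterValue=<ec2-key-pair-name> \
    ParameterKey=VpcId,ParameterValue=<vpc-id> \
    ParameterKey=Subnets,ParameterValue=<comma-escaped-subnet-ids>
```

The `CAPABILITY_IAM` flag is needed because the node group template creates an IAM instance role for the workers.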

In order to set up kubectl, you’ll need to download and set it up per the instructions in the Getting Started Guide. Once you have the necessary tools installed, you’ll want to copy your first kubectl config file from the Getting Started Guide, adjusting for the correct cluster, server name, and security certificate data.
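The config file from the guide looks roughly like the sketch below. The endpoint, CA data, and cluster name are placeholders you copy from your `aws eks describe-cluster` output, and the authenticator binary (`aws-iam-authenticator`, originally shipped as `heptio-authenticator-aws`) is assumed to be on your PATH.

```yaml
# Placeholders come from "aws eks describe-cluster" for your cluster.
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: <cluster-endpoint-url>
      certificate-authority-data: <base64-encoded-ca-cert>
    name: kubernetes
contexts:
  - context:
      cluster: kubernetes
      user: aws
    name: aws
current-context: aws
users:
  - name: aws
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        command: aws-iam-authenticator
        args:
          - token
          - -i
          - <cluster-name>
```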

You will then need to copy the authorization config map from the Getting Started Guide into your kubectl, correcting the instance role to authorize your instances.
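That config map is the `aws-auth` ConfigMap in the `kube-system` namespace; without it, the worker nodes can never join the cluster. A sketch of its shape, where the role ARN placeholder comes from your worker node CloudFormation stack’s outputs:

```yaml
# <node-instance-role-arn> is the NodeInstanceRole output of the worker stack.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <node-instance-role-arn>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Apply it with `kubectl apply -f aws-auth-cm.yaml`, then watch `kubectl get nodes` until the workers register as Ready.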

To watch this demo in action and learn how we used Amazon EKS to drive major success for one of our customers, watch the full Intro to EKS webinar.


About Nate Fox

Nate Fox is the Engineering Director at Onica. He focuses on helping clients with their DevOps and automation needs. He spends his days obsessing over building out AWS ECS and Docker tools.