Tolga Talks Tech is a weekly video series in which Onica’s CTO Tolga Tarhan tackles technical topics related to AWS and cloud computing. This week, Tolga discusses Amazon ECS vs. Amazon EKS with Nate Fox, Engineering Director at Onica. For more videos in this series, click here.
What is Amazon EKS?
Amazon EKS is Amazon’s managed Kubernetes service. Basically, it’s running Kubernetes (the leading orchestration platform for Docker containers), and it manages not only your Docker containers across multiple machines, but also things like load balancers, storage, and secrets, among other things.
How is that different from Amazon ECS?
Amazon ECS is Amazon’s home-grown system, where they run their own containers and their own orchestration behind the scenes for you. Amazon EKS, by contrast, runs Kubernetes, the open-source system governed by the Cloud Native Computing Foundation (CNCF). Kubernetes has a very large open-source community around it, which enables a lot of different software and capabilities in the system.
If Amazon EKS is the managed version of Kubernetes, how is that different from running Kubernetes on Amazon EC2?
Running Kubernetes on Amazon EC2 means you have to run your own masters. Your masters are typically made up of etcd as well as the API server. Amazon EKS runs six machines, three of them etcd nodes and three of them API servers, and manages them for you. That means it handles upgrades, availability across Availability Zones, and security as well, so it’s all integrated with IAM. It’s a really good packaged solution.
I know you’ve been doing Amazon EKS for the last several months on some large projects. Can you give one tip if I was going to start today on Amazon EKS?
Sure, one tip would probably be that the person or the role that creates the cluster is initially the only one with the ability to add additional permissions. So if you have an automated pipeline, and your Jenkins user is the one that creates the cluster, only that Jenkins user can add more people to be able to do other things with kubectl, such as listing pods or accessing other namespaces.
So the best practice is to immediately deploy other roles that can access the cluster. You want to keep all your roles in configuration management, in GitHub, or in some other kind of source control, so you know exactly what changed and when; rolling them out with your deployment is the best solution.
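In Amazon EKS, extra IAM roles are typically granted cluster access by adding entries to the `aws-auth` ConfigMap in the `kube-system` namespace, which is exactly the kind of manifest you would keep in source control and apply from your pipeline. As a minimal sketch of that idea (the account ID, role ARN, username, and group below are made-up placeholders, not values from the talk), a small helper could render the manifest for you:

```python
# Sketch: render an aws-auth ConfigMap manifest that maps additional IAM
# roles into an EKS cluster. All ARNs, usernames, and groups here are
# hypothetical placeholders; the generated YAML would normally be committed
# to source control and applied with `kubectl apply -f`.

def render_aws_auth(role_mappings):
    """Build the YAML text for the kube-system/aws-auth ConfigMap.

    role_mappings: list of (role_arn, username, groups) tuples, where
    groups is a list of Kubernetes RBAC group names.
    """
    lines = [
        "apiVersion: v1",
        "kind: ConfigMap",
        "metadata:",
        "  name: aws-auth",
        "  namespace: kube-system",
        "data:",
        "  mapRoles: |",
    ]
    for arn, username, groups in role_mappings:
        lines.append(f"    - rolearn: {arn}")
        lines.append(f"      username: {username}")
        lines.append("      groups:")
        for group in groups:
            lines.append(f"        - {group}")
    return "\n".join(lines) + "\n"

# Example: map a hypothetical developers role to a custom RBAC group,
# which you would then bind to permissions with a Role/RoleBinding.
manifest = render_aws_auth([
    ("arn:aws:iam::111122223333:role/eks-developers",  # placeholder ARN
     "developer",
     ["developers"]),
])
print(manifest)
```

Because the output is just text, it diffs cleanly in version control, which fits the tip above: you can see exactly which roles were granted access and when.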