Tolga Talks Tech is a weekly summer video series in which Onica’s CTO Tolga Tarhan tackles technical topics related to AWS and cloud computing. This week Tolga talks about container optimization on AWS with Onica’s Lead DevOps Architect, William Kray. For more videos in this series, click here.

What are containers?

Containers are a technology that wraps up a single process on a computer. A container is self-contained: it bundles all of the libraries and other components the process needs to run. People often assume containers are like virtual machines, which are far more common and have been in the IT space for a long time, but the two are actually very different.
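As a concrete sketch of this idea, a container image is usually described by a Dockerfile that names a base image (supplying the libraries) and the one process to run. The image name and `app.py` below are illustrative placeholders, not from the original discussion:

```dockerfile
# Base image supplies the runtime and libraries the process needs
FROM python:3.12-slim

# Copy in the application (hypothetical app.py)
WORKDIR /app
COPY app.py .

# The container wraps exactly one process
CMD ["python", "app.py"]
```

Building this with `docker build` produces a self-contained image that carries everything the process needs, which is the "wrapped up" quality described above.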

How are containers different from virtual machines?

A virtual machine launches an entire operating system within itself, including the most critical part, the kernel. The kernel handles how software interacts with the computer’s hardware. In a virtual machine, all of that is running alongside your workload, which adds a lot of overhead. A container, by contrast, reuses the kernel of the host that is already running the container system; it only brings along the extra files that its smaller operating system needs. The result is far smaller and more lightweight.

What other benefits are there to containers over virtual machines?

The decrease in overhead compared to a virtual machine means containers launch considerably faster. A new container can start in a second or two, whereas a virtual machine has to boot an entire operating system. That speed also makes it much more practical to move processes from one host to another. It becomes easy to run a whole cluster of machines on a container runtime like Docker: if a container stops on one server, you can start it up on another server within a few seconds.

What does AWS provide in the container space?

AWS has a number of services for running containers. One of them is Elastic Container Service (ECS), which is their original offering. It lets you launch EC2 resources in AWS and deploy containers onto them; ECS manages scheduling those containers and ties into IAM for permissions handling.
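To give a feel for how ECS describes a container, here is a minimal task definition of the kind you could register with `aws ecs register-task-definition`. The family name, image URI, and sizing values are illustrative placeholders:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

ECS then schedules tasks based on this definition onto the EC2 instances in your cluster, and IAM roles attached to the task control what AWS APIs the container may call.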

Recently, AWS launched Elastic Container Service for Kubernetes (EKS). With EKS you get a full Kubernetes cluster whose masters are managed for you by AWS, while you keep the more familiar Kubernetes interface, the front-runner in container scheduling.
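Because EKS exposes the standard Kubernetes API, the same manifests you would apply with `kubectl` on any cluster work there too. A minimal Deployment, with illustrative names and a public nginx image, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to keep three copies of the container running, rescheduling them onto healthy nodes if one fails.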

Finally, there’s AWS Fargate, which is essentially an overlay for ECS and EKS. It allows you to deploy containers into AWS without having to manage the EC2 resources they run on, giving you a lot more flexibility and a lot less management overhead when deploying containers.
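In ECS terms, opting into Fargate is mostly a matter of the task definition: Fargate requires the `awsvpc` network mode and task-level CPU and memory settings, and you declare `FARGATE` compatibility so there are no EC2 instances to manage. The names and sizes below are again placeholders:

```json
{
  "family": "web-app-fargate",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.25",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

Running a task from this definition with `--launch-type FARGATE` lets AWS provision and patch the underlying compute for you.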

Want to learn more about Containers & EKS? Watch our on-demand webinars!


About Tolga Tarhan

As Onica’s Chief Technology Officer, Tolga Tarhan leads the technological vision of the company by pushing innovation and driving strategy for our product development and service offerings. With nearly two decades of experience leading and hands-on software development, his cross-functional expertise across different technology areas gives him unique insight into the best approaches for building complex systems and applications. In addition to facilitating technology on the executive level, Tolga has also successfully led numerous deployments involving web-based, mobile, Internet of Things (IoT), and real-time telecommunications applications. His passion for IoT in particular has driven Onica’s achievement of the AWS IoT competency, and he continues to show thought leadership in the field through his extensive speaking engagements at AWS events and educational groups across North America. Tolga also holds an MBA from Pepperdine University and helps customers strategize beyond technology solutions to improve their businesses and grow their bottom line.