Every year the re:Invent conference brings a torrent of announcements and new features that takes weeks to process and absorb. For the containers space, the bombshell announcements from last year were Amazon Elastic Container Service for Kubernetes (EKS) and AWS Fargate, both of which completely changed the container computing landscape on AWS. While no container-related announcements of similar magnitude were made this year, two container-specific services were announced during re:Invent: AWS Cloud Map and AWS App Mesh. AWS Cloud Map is already generally available, while AWS App Mesh is in preview. The two new services may not seem very impressive at first glance, but they serve as interesting tell-tale signs of AWS's container strategy.
Service Discovery for AWS – AWS Cloud Map
Service discovery has become an indispensable tool in microservices architectures. As the different functionalities within a product are broken down into individual services, middle-tier queues, data caches, and databases that are updated and maintained on different release cycles, the endpoints that individual services need to call can change drastically and unpredictably, making hard-coded or deployment-time configured values unusable. What the services need is a real-time, dynamic map of the services and their respective locations, one that also supports health checks to ensure validity.
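The core mechanic can be sketched in a few lines of Python. This is a toy in-memory registry (all names here are hypothetical, not any AWS API): services register their endpoints and send periodic heartbeats, and a lookup returns only the instances that are still fresh, so callers never depend on stale, hard-coded locations.

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry; illustrative only, not an AWS API."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._instances = {}  # service name -> {endpoint: last heartbeat time}

    def register(self, service, endpoint):
        self._instances.setdefault(service, {})[endpoint] = time.monotonic()

    def heartbeat(self, service, endpoint):
        # Instances periodically confirm they are still alive and healthy.
        self._instances[service][endpoint] = time.monotonic()

    def lookup(self, service):
        # Return only endpoints whose last heartbeat falls within the TTL
        # window; anything stale is silently filtered out.
        now = time.monotonic()
        return [ep for ep, seen in self._instances.get(service, {}).items()
                if now - seen <= self.ttl]

registry = ServiceRegistry(ttl_seconds=30)
registry.register("orders", "10.0.1.15:8080")
registry.register("orders", "10.0.2.40:8080")
print(registry.lookup("orders"))  # both endpoints are still fresh
```

A managed service discovery product adds durability, access control, and DNS integration on top, but the register/heartbeat/lookup loop is the essence.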
There are a number of third-party products that advertise themselves as service discovery solutions for the blooming microservices market. However, there is already a tool serving a similar purpose: the Domain Name System (DNS). DNS is an adequate solution because it is an ingrained part of the underlying infrastructure, and there is common knowledge on how to access and update the records in it.
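That ubiquity is the point: every runtime already knows how to query DNS. A standard resolver call in Python, for instance, returns every address record for a name with no extra client library (shown here against localhost for portability; in a DNS-based discovery setup, each healthy instance of a service would appear as its own record):

```python
import socket

# Resolve a name to all of the address records the resolver returns.
records = socket.getaddrinfo("localhost", 8080, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in records})
print(addresses)  # e.g. ['127.0.0.1', '::1'], depending on the host
```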
AWS Cloud Map is built on top of Amazon Route 53, the AWS-native domain name service. To set up AWS Cloud Map, a namespace needs to be created first. A namespace represents a group of services. If one deploys a new Cloud Map namespace for each deployment, the services within that deployment can easily register against the same namespace and thereby become visible to each other within the context of the namespace. In fact, whenever a Cloud Map namespace is created, a new hosted zone is created on Route 53. AWS Cloud Map and Route 53 are so tightly integrated with each other that the usual Route 53 paradigms apply to Cloud Map as well, including the ideas of private vs. public zones and health checking. To use Cloud Map, Route 53 IAM permissions need to be granted too.
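The namespace-per-deployment idea is worth making concrete. The toy model below (plain Python, deliberately not the boto3 client, though the method names loosely echo Cloud Map actions such as RegisterInstance and DiscoverInstances) shows how scoping every lookup to a namespace keeps environments isolated: an "api" lookup in staging can never resolve to a production instance.

```python
class CloudMapModel:
    """Toy model of the namespace -> service -> instance hierarchy.

    Method names loosely mirror Cloud Map API actions, but this is an
    illustration of the data model only, not the real AWS client.
    """

    def __init__(self):
        self._namespaces = {}

    def create_namespace(self, name):
        # One namespace per deployment keeps each environment isolated.
        self._namespaces[name] = {}

    def register_instance(self, namespace, service, attributes):
        self._namespaces[namespace].setdefault(service, []).append(attributes)

    def discover_instances(self, namespace, service):
        # Lookups are always scoped to a namespace.
        return self._namespaces[namespace].get(service, [])

cloud_map = CloudMapModel()
cloud_map.create_namespace("prod.local")
cloud_map.create_namespace("staging.local")
cloud_map.register_instance("prod.local", "api", {"AWS_INSTANCE_IPV4": "10.0.0.10"})
cloud_map.register_instance("staging.local", "api", {"AWS_INSTANCE_IPV4": "10.1.0.10"})
print(cloud_map.discover_instances("prod.local", "api"))
```

In the real service, each `register_instance` call would also create or update the corresponding record in the namespace's Route 53 hosted zone.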
AWS had already announced service discovery for ECS and Fargate in March of this year. By introducing Cloud Map, Route 53-based service discovery is now extended to EKS and services running on EC2, as well as resources such as S3 buckets, DynamoDB tables, and SQS queues.
To use AWS Cloud Map with Kubernetes services (Kubernetes already ships with an internal DNS of its own), it is important to deploy ExternalDNS so that internal service locations are automatically propagated to AWS Cloud Map.
Service Mesh for AWS – AWS App Mesh
Service mesh has been a popular topic since last year. The most prominent service mesh product to date is Istio, from Google, IBM, and Lyft, followed by Linkerd, a CNCF project. While a lot of people want a service mesh, its actual role and purpose can be confusing.
A service mesh is a mesh of network proxies sitting in front of the individual running containers. There are different ways to deploy a service mesh; one of them is the “sidecar” method, in which traffic to a task or pod always goes through the proxy sitting in front of it. Because all the traffic goes through the proxy containers in the service mesh, the mesh can control different aspects of the traffic. It can act as a router, performing routing for load balancing, deployment (blue/green), and testing (A/B testing or chaos engineering) purposes. It can play a critical part in traffic encryption by managing TLS termination between the different communication points. It also acts as a traffic monitor by recording metrics and performing periodic health checks.
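The sidecar idea fits in a small sketch. Below, a toy "application" HTTP server knows nothing about the mesh, while a second server plays the sidecar: the caller only ever talks to the sidecar's port, and the sidecar records a metric before forwarding the request to the local application (a real mesh proxy such as Envoy would also handle routing, retries, and TLS at this point). All names and ports here are illustrative.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# --- the application container: unaware of the mesh ---
class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the service"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

# --- the sidecar proxy: all inbound traffic flows through it ---
request_count = {"total": 0}

class SidecarHandler(BaseHTTPRequestHandler):
    upstream_port = None  # set once the app server is bound

    def do_GET(self):
        # Record a metric, then forward the request to the local app.
        request_count["total"] += 1
        with urllib.request.urlopen(
                f"http://127.0.0.1:{self.upstream_port}{self.path}") as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

app_server = HTTPServer(("127.0.0.1", 0), AppHandler)
SidecarHandler.upstream_port = app_server.server_port
proxy_server = HTTPServer(("127.0.0.1", 0), SidecarHandler)

for srv in (app_server, proxy_server):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# The caller only ever talks to the sidecar's port.
with urllib.request.urlopen(
        f"http://127.0.0.1:{proxy_server.server_port}/") as resp:
    reply = resp.read().decode()
print(reply, request_count["total"])

app_server.shutdown()
proxy_server.shutdown()
```

Because the proxy sits on the request path, metrics, routing decisions, and encryption can all be added there without touching the application code.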
One may then question the reason for adopting this layer of service mesh networking instead of incorporating the features into the applications themselves. One such reason is flexibility: while monitoring and security are of utmost importance in a production stack, they can be a severe overhead in a development stack. Using a service mesh layer that can be configured separately from the containerized applications allows for flexibility and minimal overhead while following the best practice of build once, use everywhere. The properties of the service mesh can also be managed and tested separately from the applications.
While there are advantages to running a service mesh layer to control inter-service traffic, it does come with some disadvantages: after all, it is its own distinct layer, and therefore there is management overhead. Other than the proxy containers that need to be deployed with each pod, it is necessary to maintain a control plane that is responsible for storing the traffic policies. Looking at the architecture of the popular Istio (attached), it is apparent that each of the individual control plane pods needs to be managed and operated distinctly (even when there are standardized deployments using Helm to facilitate that).
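The control-plane/data-plane split that creates this overhead can itself be sketched. In the toy below (hypothetical names throughout), the control plane stores the traffic policy and pushes it to every attached proxy; an operator performs a blue/green cutover by changing the policy once, centrally, and no application container is redeployed. For simplicity the sketch picks the highest-weight backend deterministically, where a real proxy would distribute requests proportionally.

```python
class ControlPlane:
    """Toy control plane: stores traffic policy and pushes it to proxies."""

    def __init__(self):
        self._routes = {}   # service -> {backend_version: weight}
        self._proxies = []

    def attach(self, proxy):
        self._proxies.append(proxy)

    def set_route(self, service, weights):
        # Operators change policy here once; every data-plane proxy
        # receives the update.
        self._routes[service] = weights
        for proxy in self._proxies:
            proxy.routes = dict(self._routes)

class Proxy:
    """Toy data-plane proxy: routes by whatever policy was last pushed."""

    def __init__(self):
        self.routes = {}

    def pick_backend(self, service):
        # Deterministic pick for the sketch: highest weight wins.
        weights = self.routes[service]
        return max(weights, key=weights.get)

plane = ControlPlane()
proxy_a, proxy_b = Proxy(), Proxy()
plane.attach(proxy_a)
plane.attach(proxy_b)

plane.set_route("checkout", {"blue": 100, "green": 0})
print(proxy_a.pick_backend("checkout"))  # blue

# Blue/green cutover: flip the policy centrally; all proxies follow.
plane.set_route("checkout", {"blue": 0, "green": 100})
print(proxy_b.pick_backend("checkout"))  # green
```

Keeping this policy store highly available and consistent is exactly the operational burden that a managed control plane takes off the user's hands.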
AWS is therefore introducing AWS App Mesh to reduce this overhead by providing a managed service mesh control plane. Like Istio, AWS App Mesh uses Envoy, a CNCF project, as the proxy.
Fortifying the Container Ecosystem with a Common Fabric
While AWS Cloud Map and AWS App Mesh have only just been announced, and more exploration and implementation will certainly be needed to fully understand how they fit together, it is interesting to see the approach AWS is taking with the ecosystem. Last year, AWS introduced EKS and Fargate on top of ECS as container orchestration services, each best serving distinct use cases depending on the varying needs for cloud agnosticism and near-zero setup time. This year, the focus appears to be on creating a common fabric that supports interoperability between workloads on EKS and ECS. As both of the new services are advertised to support EKS, ECS, and Kubernetes on EC2, it is apparent that AWS is creating a service layer that provides a uniform AWS experience for the container ecosystem.
The uniform AWS containers experience is accentuated not only by the emerging service layer that currently consists of AWS App Mesh and AWS Cloud Map, but also by the newly announced AWS Marketplace for Containers. If one of the tenets of containers is “build once, use everywhere,” it is certainly applicable to third-party products as well. With the AWS Marketplace for Containers, users can now conveniently and safely consume third-party images, similar to how they have conveniently and safely leveraged AMIs for EC2.
Even as there are debates over whether EKS, ECS, or Kubernetes on EC2 is the best way to run container workloads, AWS takes the stance that whatever the outcome of the debate (one can always argue that there is no single best tool, only the right tool for the right job), the common experience is the platform itself, including the monitoring, debugging, service discovery, and service mesh tools. Once again, AWS is the unassailable leader in cloud computing not only because of its individual services, but because of how integrated the entire platform is in providing a uniform user experience. Competitors may use distinct features to promote alternative services, but if you are looking for a holistic cloud experience that includes sound container services, AWS is the way to go.
Interested in learning more about the benefits of containers on AWS? Download our whitepaper!