With yet another re:Invent behind us, now is a great time to take a moment and reflect on the many announcements made across the breadth of AWS. In total there were 61 major announcements across 15 categories on the platform. Some of them may be of passing interest to you, while others may prompt a close look at how you might integrate them into your current workloads, or kickstart future projects.
Continued investments in core computing technologies
Within the “Compute” category, there were significant new announcements and updates. AWS made a smart move in acknowledging that, within the world of containers, Kubernetes is important: while it has long been possible to deploy Kubernetes on EC2, doing so effectively still required specialized skillsets. For customers who prefer Kubernetes as their container orchestrator, Amazon EKS (Elastic Container Service for Kubernetes) is a welcome addition to the ECS (Elastic Container Service) portfolio. Perhaps more interesting, though, is AWS Fargate, which promises to reduce the complexity of containers even further by allowing customers to upload containers while Fargate deals with all of the provisioning. There are striking similarities between Fargate and Lambda, in that both services provision the required infrastructure to run the workload, but Fargate is more developer friendly: with Fargate, the “package” is a container instead of a zip file of code, as it would be with Lambda.
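To make the “container as the package” idea concrete, here is a minimal sketch of the kind of task definition you would register for a Fargate task (for example, via boto3’s ecs.register_task_definition(**task_definition)). The family name, image, and sizing values are illustrative assumptions, not values from any real deployment.

```python
# Hypothetical Fargate task definition payload (RegisterTaskDefinition shape).
# All names and values below are illustrative assumptions.
task_definition = {
    "family": "my-web-app",                  # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # Fargate requires awsvpc networking
    "cpu": "256",                            # Fargate CPU units (0.25 vCPU)
    "memory": "512",                         # memory in MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",         # the container image is the "package"
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
```

Notice there is no instance type or cluster capacity anywhere in the definition; that is the provisioning work Fargate takes off your plate.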
Global deployments get easier with inter-region VPC peering
Probably one of the more popular announcements, and one certainly welcomed by many of Onica’s customers, was inter-region VPC peering. This capability has been a long time coming: VPC peering within a single region was announced way back in 2014. It was possible to peer two VPCs in different regions before now, but it required significant engineering effort to establish IPSEC tunnels and manage those configurations, and the complexity increased if you needed those connections to be highly available. Another major change is that with inter-region peering, traffic between the VPCs is routed across AWS’s global infrastructure; previously, with your own IPSEC tunnels, traffic would have been routed out an Internet Gateway and across the public internet. With inter-region VPC peering, all of that complexity is eliminated. As with VPC peering within a single region, it’s important to note that VPC peering still does not support “transitive routing,” meaning all VPCs that need to communicate with each other must have a direct peering relationship. To illustrate this point, in the diagram below you’ll see that for traffic to route between us-east-1 (N. Virginia) and ap-northeast-1 (Tokyo), we must have a VPC peering relationship between those VPCs; we cannot use eu-central-1 (Frankfurt) as a transit point between the two other regions.
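The full-mesh consequence of non-transitive routing can be sketched in a few lines. The snippet below builds one CreateVpcPeeringConnection request per pair of VPCs (with boto3, each dict could be passed as ec2.create_vpc_peering_connection(**req) from the requester’s region); the VPC IDs are made-up placeholders, and the PeerRegion parameter is the piece that is new with this announcement.

```python
from itertools import combinations

# Hypothetical VPCs, one per region; the IDs are illustrative assumptions.
vpcs = {
    "us-east-1": "vpc-11111111",
    "eu-central-1": "vpc-22222222",
    "ap-northeast-1": "vpc-33333333",
}

# Peering is not transitive, so every pair that must communicate needs its
# own peering connection: a full mesh of N*(N-1)/2 connections.
peering_requests = [
    {
        "VpcId": vpcs[requester],      # requester VPC
        "PeerVpcId": vpcs[accepter],   # accepter VPC
        "PeerRegion": accepter,        # new inter-region parameter
    }
    for requester, accepter in combinations(sorted(vpcs), 2)
]

print(len(peering_requests))  # 3 regions -> 3 peering connections
```

At three regions a full mesh is only three connections, but the quadratic growth is worth keeping in mind as the region count climbs.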
New databases, new database features
Amazon Aurora is a fantastic choice for almost all types of workloads, and Amazon has continued to invest in it. Of particular importance to customers who already have global footprints, or are expanding globally, is the announcement of multi-master Aurora deployments. Many of our customers have been asking for this capability, and it will greatly simplify how they deploy databases globally.
Another very interesting Aurora announcement was Aurora Serverless. This gives customers the ability to have a database with no running compute nodes until one is actually required: the customer still pays for the storage used, but pays for compute capacity only from the moment it’s needed. Together with per-second billing this is a major improvement, and it will likely see a lot of adoption for non-production workloads, since dev and test environments may not always be in use.
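As a hedged sketch of what this looks like in practice, here is the shape of a cluster-creation request for a serverless Aurora cluster (for example, rds.create_db_cluster(**cluster_params) with boto3). The identifier and the capacity and auto-pause values are illustrative assumptions for a dev environment.

```python
# Hypothetical Aurora Serverless cluster parameters (CreateDBCluster shape).
# Identifier and capacity values are illustrative assumptions.
cluster_params = {
    "DBClusterIdentifier": "dev-cluster",
    "Engine": "aurora",
    "EngineMode": "serverless",          # no always-on compute nodes
    "ScalingConfiguration": {
        "MinCapacity": 2,                # Aurora capacity units
        "MaxCapacity": 16,
        "AutoPause": True,               # pause compute when idle...
        "SecondsUntilAutoPause": 300,    # ...after 5 minutes of inactivity
    },
}
```

The AutoPause settings are what make this attractive for dev and test: an idle cluster pauses its compute entirely, and you are billed for storage alone until the next connection arrives.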
DynamoDB, Amazon’s massively scalable, managed NoSQL solution, got some fantastic updates too. If you’ve been a DynamoDB user for any amount of time, you’ll be familiar with one capability that was glaringly missing: the ability to natively back up your tables. This was addressed at re:Invent, and we now have native capabilities within DynamoDB to make backups!
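An on-demand backup is a single, simple request; the sketch below mirrors the shape you would pass to boto3’s dynamodb.create_backup(**backup_request). The table name and the date-stamped naming convention are illustrative assumptions.

```python
import datetime

# Hypothetical on-demand table backup request (CreateBackup shape).
# Table name and naming convention are illustrative assumptions.
table_name = "orders"
backup_date = datetime.date(2017, 12, 1)  # fixed example date
backup_request = {
    "TableName": table_name,
    "BackupName": f"{table_name}-{backup_date.isoformat()}",
}

print(backup_request["BackupName"])  # orders-2017-12-01
```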
DynamoDB Global Tables are another big update. They allow you to replicate your tables to other regions, getting your data closer to the compute resources in those regions. This reduces latency, and removes a lot of the custom engineering some customers had put in place to get table data into other regions.
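A global table is declared by naming the regions that should hold replicas; the sketch below mirrors the CreateGlobalTable request shape (boto3: dynamodb.create_global_table). One caveat worth knowing: each replica region must already contain an identically named table with streams enabled. The table name and region list are illustrative assumptions.

```python
# Hypothetical CreateGlobalTable request. Each listed region must already
# have a table named "orders" with streams enabled; names and regions
# here are illustrative assumptions.
global_table_request = {
    "GlobalTableName": "orders",
    "ReplicationGroup": [
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-central-1"},
        {"RegionName": "ap-northeast-1"},
    ],
}

print(len(global_table_request["ReplicationGroup"]))  # 3 replica regions
```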
New to the AWS database product line is Amazon Neptune, a fully managed graph database. Graph databases make storing and querying highly relational data much more efficient than traditional RDBMSs, which can become incredibly compute and memory intensive when performing complex joins across vast amounts of data. Graph databases use query languages other than SQL: Neptune supports Apache TinkerPop’s Gremlin and the W3C’s SPARQL.
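To give a flavor of the difference, here is a rough side-by-side of the same “friends of friends” question as a relational self-join versus a Gremlin traversal; the queries are held as Python strings since they would be submitted to a database, not executed locally. The table, edge, and vertex names are illustrative assumptions.

```python
# Illustrative comparison only; these strings would be sent to an RDBMS
# and to a Gremlin-compatible endpoint such as Neptune, respectively.
# Table/edge/vertex names are assumptions.

# Relational approach: a self-join per hop, which grows costly as the
# traversal deepens.
sql_query = """
SELECT f2.friend_id
FROM friendships f1
JOIN friendships f2 ON f1.friend_id = f2.person_id
WHERE f1.person_id = 'alice';
"""

# Graph approach: each extra hop is just one more traversal step.
gremlin_query = "g.V('alice').out('friend').out('friend').dedup()"
```

Going from two hops to three means another JOIN (and another table scan) in SQL, but only one more .out('friend') step in Gremlin, which is the efficiency point the paragraph above is making.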
Expanding the AWS Machine Learning portfolio
Another major investment area was machine learning, an area of technology that is inherently difficult to execute well, and one AWS has made more approachable to a much wider segment of customers. A new video analysis capability was added to Rekognition: Amazon Rekognition Video allows customers to gain insights and generate metadata about video, rather than just still images. To facilitate delivering video to services such as Rekognition Video, a video streaming addition to Kinesis, Kinesis Video Streams, was announced.
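Video analysis runs as an asynchronous job: you point the service at a video in S3, receive a job ID, and fetch results once processing completes. The sketch below mirrors the request shape for boto3’s rekognition.start_label_detection; the bucket, key, and confidence threshold are illustrative assumptions.

```python
# Hypothetical StartLabelDetection request for Rekognition Video.
# Bucket, key, and threshold are illustrative assumptions.
start_request = {
    "Video": {"S3Object": {"Bucket": "my-video-bucket", "Name": "clip.mp4"}},
    "MinConfidence": 80.0,  # only return labels above this confidence
}

# start_label_detection returns a JobId; the labels are fetched later with
# get_label_detection(JobId=job_id) once the job completes.
```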
Amazon Comprehend is a new service that uses natural language processing (NLP) and machine learning to generate insights from text. To complement it, two other new services were announced: Amazon Transcribe and Amazon Translate. Each of these may be useful to developers for a given problem on its own, but we see a lot of potential in using all three together. Imagine taking a piece of recorded audio and using the automatic speech recognition (ASR) capabilities of Amazon Transcribe, then using Amazon Translate to make that transcription available in a number of languages, paired with Amazon Comprehend to understand something about the text: automatically, all powered by machine learning. This is something most developers would previously have shied away from, due to the complexity of building such systems.
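The audio-to-insights pipeline described above can be sketched as three request shapes, one per service (with boto3 these would go to transcribe.start_transcription_job, translate.translate_text, and comprehend.detect_sentiment respectively). Job names, URIs, sample text, and language codes are illustrative assumptions; the glue that moves the transcript between services is omitted.

```python
# 1. Transcribe: speech to text (runs as an asynchronous job).
#    Job name and media URI are illustrative assumptions.
transcribe_request = {
    "TranscriptionJobName": "meeting-2017-12-01",
    "LanguageCode": "en-US",
    "MediaFormat": "mp3",
    "Media": {"MediaFileUri": "s3://my-audio-bucket/meeting.mp3"},
}

# 2. Translate: fan the transcript out into other languages.
def build_translate_request(text: str, target_language: str) -> dict:
    return {
        "Text": text,
        "SourceLanguageCode": "en",
        "TargetLanguageCode": target_language,
    }

# 3. Comprehend: understand something about the original transcript,
#    e.g. its sentiment.
def build_sentiment_request(text: str) -> dict:
    return {"Text": text, "LanguageCode": "en"}

transcript = "Hello, world."  # placeholder for the Transcribe output
translate_requests = [
    build_translate_request(transcript, t) for t in ("es", "fr", "de")
]

print(len(translate_requests))  # one Translate request per target language -> 3
```

In a real build, the hand-off between steps would typically be event-driven (for example, Lambda reacting to the completed transcription), but the three request shapes above are the heart of the pipeline.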