Tolga Talks Tech is a weekly video series in which Onica’s CTO Tolga Tarhan tackles technical topics related to AWS and cloud computing. This week, Tolga discusses Tooling in a Hybrid Cloud Environment with William Kray, Onica’s Practice Director of Architecture & Engineering.
Let’s look at hybrid and multi-cloud environments, specifically around the tooling. Customers say they want a consistent set of tooling across both their on-prem environments and their two or three cloud providers, but in practice we find that to be pretty challenging.
Can you tell us a little about how these challenges affect tooling across environments?
There’s always a drive to use a consistent set of tools, and the problem is usually that a single tool enforces a set of standards that might not be the most beneficial for each individual platform. Whether you’re running on-premises or across a group of cloud providers, that tool will tend to default to the lowest common denominator across all of those platforms. For example, the lowest common denominator is often plain virtual machines: EC2 instances in AWS treated the same as virtual machines on-premises.
If you’re building to these lowest common denominators, you’re really not getting all of the benefit – you’re not squeezing all the juice out of the cloud platform you’ve chosen. Typically what we end up seeing is that you don’t get any of the cloud’s benefits; you just end up running another data center, one that can cost more than on-prem.
What about these tools that purport to be multi-cloud like Terraform?
Terraform is a really good example because it does support many platforms. You can write automation and infrastructure-as-code for VMware, for AWS, for Google Cloud Platform, for Microsoft Azure – it supports many things. The problem is that people think you can write just one Terraform template and deploy it to any platform, and that’s simply not the case. The way Terraform interacts with each of those platforms is entirely different, so you’d end up writing essentially the same template five different times to support each platform individually. It’s a good tool for tying interactions together, because it can talk to all of them, but that doesn’t necessarily mean you’re going to have less work involved.
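To make that concrete, here’s a rough sketch (resource names and values are illustrative, and some required settings are elided – this is not a working configuration) of what “the same” virtual machine looks like when written against the AWS provider versus the Azure provider. The resource types, argument names, and even the surrounding concepts (AMIs, resource groups) are entirely different:

```hcl
# AWS version: an EC2 instance
resource "aws_instance" "web" {
  ami           = "ami-12345678" # AMI IDs are an AWS-only concept
  instance_type = "t3.micro"     # AWS size naming
}

# Azure version of the "same" server: a different resource type,
# different arguments, and Azure-only dependencies
resource "azurerm_linux_virtual_machine" "web" {
  name                  = "web"
  resource_group_name   = azurerm_resource_group.main.name # Azure-only concept
  location              = "eastus"
  size                  = "Standard_B1s" # different size naming
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.main.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference { # images are referenced completely differently
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```

Terraform gives you one language and one workflow for both, but the templates themselves have to be written per platform.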
What about tools at an even higher level of abstraction – the ones that say, “Hey, don’t code against the AWS APIs. Here’s an abstraction that works the same across multiple clouds”?
The tools that abstract these processes can seem really glossy and shiny on the outside because you don’t have to worry about what’s going on under the hood. Inevitably, though, what we have found is that you end up having to dig under the hood to solve problems, so you lose the benefit of all that abstraction. These tools also tend to fall back on the lowest common denominator in order to make the abstraction work, and that’s an inherently flawed system. The best approach is to use the best tool for the job in each individual situation and to tie those tools back to a consistent workflow.
It’s more about normalizing the workflow and less about normalizing the exact tools, because if you try to abstract everything away, it’s just going to fall apart.
These are called leaky abstractions: you have to become an expert not only in the thing you were trying to abstract away, but also in the abstraction layer itself, because it has its own limitations to work around.
Interested in learning more about a hybrid cloud environment? Learn more about Simplifying Hybrid Cloud Management by using AWS Systems Manager and read our blog on the Five Simple Rules for a Successful AWS Hybrid Cloud Architecture.