Are you struggling to find IT budget to drive new business initiatives? There are many things you can do within the AWS console to drastically reduce your monthly spend. Watch our AWS cost optimization video below, where we taught AWS users at the San Francisco Loft tips and tricks to better optimize their AWS console and free up IT budget for projects that drive innovation and move the business forward. You can also download our free eBook from the event!
Jeremy Bendat: Hi, my name is Jeremy Bendat and I’m with AWS Premier Consulting and Managed Service Partner CorpInfo-Onica. When talking with the AWS team, we found that figuring out ways to drive innovation by helping you save money within your AWS console is something that’s a real sticking point. Today we are going to give you an excuse to free up some budget and try something new that’s going to help drive innovation and push your company forward. We also have Tim Fox, Head of Cloud Managed Services here who is going to help me present.
Tim Fox: Hi, I’ve been in IT for over 20 years. I jumped into the whole concept of cloud about five years ago and embraced it in its entirety. That’s how I ended up at CorpInfo-Onica doing what we’re trying to do today.
Jeremy Bendat: First, I wanted to give you guys a little bit of a background about us. AWS offers their own certification program and we have 100+ of these certifications running across our team.
Tim Fox: We have quite a few people that have all five AWS certifications, plus some of those people have taken the new AWS certification Beta tests. Part of that requires a good understanding of how Amazon works and how you can leverage it in an evolving IT environment. Since we’ve been around for 30+ years, you’re not getting a company that understands just the cloud – we’ve done a variety of different technology implementations for businesses over the years and understand the shifts in IT.
Jeremy Bendat: Through our partnership with AWS, we’ve migrated 7,000+ servers. We’re not only doing the standard lift and shift over to AWS, but also helping to rearchitect our clients so that they can take advantage of more services within the cloud. If they’re not leveraging things like AWS CodeDeploy and CodePipeline – we’re helping to drive that innovation.
Tim Fox: We have smaller customers that are billing $5,000 per month and customers that are billing more than $500,000 a month. We’ve seen the challenges that these different types of companies are going through and what they do to make it all work. A lot of these conversations are focused on costs.
Jeremy Bendat: Some of our cool customer stories are also focused around big events. For instance, during the Super Bowl, Samsung was trying to figure out how people were experiencing the halftime show – were they leaving it on and turning up the volume or were they changing the channel? We ended up using AWS to scale up their infrastructure to be able to take on 500,000+ concurrent users.
Tim Fox: We’ve got other customers, like Movietickets.com, which needed to sell tickets when the Star Wars premiere came out. The question was how they were going to deal with the volume. We came in and assisted them so they could scale up their environment for the onslaught that occurred that day and in the weeks leading up to the actual premiere of the new movie.
Jeremy Bendat: The challenge was that they had 15+ years’ worth of infrastructure they needed to move to AWS within 2.5 weeks and we did it. Depending on what your current challenges are or if there is some sort of event or launch that you have ahead of your organization, that’s where we want to come in and help by sharing our experience to make sure things go smoothly through proper planning and using the right tools within AWS.
Tim Fox: We have the proven experience to know how other companies are scaling and what that costs them. A lot of times when you’re going to the cloud, you have this idea that it’s going to be cheaper, but when you’re doing it, you find out there are some challenges.
Jeremy Bendat: At the foundation layer we’re walking you through the best practices as far as setting up AWS, such as the Well-Architected Framework.
Tim Fox: When we talk about Amazon Web Services, we are talking about the fundamental shift that is occurring in IT today. There are common phrases that we always hear about. The big one is talking about the service over the server. We don’t care about server names anymore. All of us that have been in IT for any length of time can tell you war stories about trying to figure out what cartoon characters or Greek gods they were going to use to name their servers. Today that no longer matters. If you want to make sure your environment is going to be up and running all the time, then you need not care about that. You need to care about what that individual service does and ensure that it’s running all the time. It’s the concept of sheep over pets. If you have to go ahead and have a server killed, it doesn’t matter anymore. You don’t have to concern yourself with the service going down.
And it really comes down to the concept of meeting the business needs. IT can’t just be in the business of being IT, right? The reality is, if we become too much on the commodity side, we aren’t helping the business evolve. We’re not running servers. We’re not running services. We’re providing the means by which a business needs to grow. And a big part of that is making sure that somebody is bringing us a problem and not coming to us with a solution. In my early days, way back when I was doing desktop support, someone would say, I need a bigger computer. Well, why do you need a bigger computer? What’re you going to do with it? What man-hour time are you going to save? Why are you storing a gigabyte of pictures on an accountant’s computer? These types of things. So, don’t bring me a solution, tell me what your problem is and let us figure that out.
Jeremy Bendat: Through the cloud we are able to become incredibly efficient at the types of services that we’re actually leveraging and that’s why we have a slide that says, “Only pay for what you use.” And that’s what we’re really going to be driving home today. Beginning with what is your overall cloud strategy? What are you trying to achieve? Is that a simple proof of concept where you say, “Hey we’re trying to leverage AWS for XYZ reasons or we’ve already made the decision to go to the cloud in order to stay competitive and we want to find the best way to do it and leverage a partner to help accelerate our path there.”
Tim Fox: So the first part of that is as Jeremy mentioned – what’s the cloud strategy? What is the reason for going to the cloud? Sometimes it’s top-down. We’ve seen a board come in and say they’ve heard about the cloud thing, we think it’s cool, go make it happen. Other times we see the shadow IT piece of it. People that have started dabbling on their own AWS account because they think it’s quicker and easier to get things going. And we understand that because we see it from all the different types of customers we have. We’ve done everything from small projects for companies migrating their marketing infrastructure, to multi-billion dollar companies running hundreds of thousands of dollars in IT spend. Once we migrate it all, you could be spending $600,000 a month on Amazon. So, what is the strategy that goes behind that? What are you trying to accomplish? What is the business need for this?
Jeremy Bendat: The San Francisco community has some of the most advanced AWS users that we’ve come across. So this conversation will be focused on how to make you more efficient within the existing infrastructure that you have today. That could come through several things: through automation, from being a support arm to your organization, which is why we offer Cloud Managed Services, or from security. We are constantly going through these rigorous security checks to make our customers more efficient through a standardized process, so when you’re going through an audit, you’re not concerned, because you know that we’ve helped to set those gates up for you.
Tim Fox: We’ve listed out capabilities: Big Data, DevOps, Managed Services, Migration, and Security. You know what mass migration is – moving 1,000+ servers into AWS. In terms of data analytics, we’ve helped customers build specific pipelines using Amazon Redshift. We’ve also helped design the automation piece. We have a very robust DevOps program. We go in and help customers build out automation for a data pipeline. You hear the concept of DevOps and what it means, the culture of DevOps. But in practicum, how do you apply that? How do you build automation to make sure that you’re getting systems that are built consistently and reliably every single time? Of course, as Jeremy mentioned, we do Managed Services. Managed Services means you can sleep at night because there are people out there actually running your IT environment. You pass it off to us and we have the engineers and people working 24 hours a day, 7 days a week to respond to issues and problems. They’re actually logging on the boxes, figuring out what’s going on, trying to resolve those issues, and if they can’t, then they escalate to you. But the whole idea and the whole concept there is that we’re trying to make sure that IT groups, as well as companies, can sleep at night knowing that someone’s guarding the sheep.
Jeremy Bendat: One of the differentiators with Tim’s Managed Services practice is that they take a proactive approach to helping to manage the environment and find ways to become more efficient at supporting you. Tim was actually recently featured on AWS’ This is My Architecture video series discussing one of our Managed Services clients and the proactive approach we took to rearchitect and support them.
Tim Fox: Finally, security is very important in today’s environment, as we all look at the newspapers every single day and see how many people have been hacked. How does the security piece look in your own environment? Security is more than just setting up IAM rules. We have customers that need HIPAA and PCI compliance to meet regulatory concerns. We have the expertise and understanding to make that work.
Jeremy Bendat: As we continue to drive this conversation forward, one thing that AWS is constantly harping on is the Well-Architected Framework. That is basically a program they’ve put together, with a 42-point checklist covering best practices for cloud architecture on their platform. It covers five basic pillars – security, reliability, performance efficiency, cost optimization, and operational excellence. When we’re taking a company on the journey and going through the Well-Architected Framework, we’re looking at each one of these pillars. Today we’re going to single out and focus on AWS cost optimization.
Tim Fox: The whole concept of driving innovation, as Jeremy mentioned before, is us helping you save money on your AWS bill, which will give you the flexibility to use that capital to initiate a new project, hire more people, and all those things that make the business grow. It’s driven by the fact that if you can save money in what you’re doing and how you’re running your architecture, then you can utilize it in other ways.
Jeremy Bendat: So we take two approaches to AWS cost optimization and management. The first is a data-driven approach. We leverage tools and enable our customers with these tools to help them better understand what’s happening on a day-to-day, month-by-month trending basis within their architecture.
The second approach we take is a more qualitative approach where we’re asking the customer questions – what is it your business or your technology is trying to achieve and how can we make that more efficient when looking at it through the lens of cost.
Tim Fox: The data-driven approach is the most basic piece. There are quite a few tools that can help you diagnose and understand what you’re doing on the cost side of things – how you’re identifying idle resources or unused resources. It’s looking at Amazon Reserved Instance (RI) purchases or things that might be misconfigured. But you can’t necessarily take that information in a vacuum. You must apply some other fundamentals to it to better match the things we talked about before. Just a simple concept: if I want to right size my system and I’m looking at the AWS console and I see that I’m only using 10% of the CPU, the realization is there are other things that you can apply there. How much memory am I using on that box? Am I really network bound? Am I using the right size system for the type of interactions I need to have? All of these things go into that, and you can’t necessarily get there by just looking at the data you have. You have to overlay that with an understanding of the business – that’s the qualitative piece.
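A minimal sketch of that multi-dimensional check – the instance names and utilization numbers below are illustrative sample data, not pulled from any real account. The point is that an instance only becomes a downsizing candidate when CPU, memory, and network are all low:

```python
# Hypothetical right-sizing screen: flag an instance for downsizing only
# when CPU, memory, AND network utilization are all below a threshold.
# Looking at CPU alone can be misleading, as discussed above.

def flag_for_downsize(metrics, threshold=0.20):
    """True only when every tracked utilization sits below the threshold."""
    return all(util < threshold for util in metrics.values())

# Illustrative sample data (fractions of capacity used)
instances = {
    "app-server-1": {"cpu": 0.10, "memory": 0.85, "network": 0.05},  # memory-bound
    "batch-worker": {"cpu": 0.08, "memory": 0.12, "network": 0.04},  # truly idle
}

candidates = [name for name, m in instances.items() if flag_for_downsize(m)]
print(candidates)  # only the truly idle box is flagged
```

Note that `app-server-1` is excluded despite its 10% CPU: its memory pressure is exactly the kind of signal a CPU-only check would miss.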
Jeremy Bendat: Absolutely. It’s beyond just the data – look at your storage and content distribution strategies. We’ll see clients who have thousands of dollars being spent in Amazon S3 and they’re not actually accessing that data. Setting up automation to move it over to something like Amazon Glacier can be a simple AWS cost optimization and savings strategy that can provide immediate impact without necessarily changing anything within your architecture. Same thing with the content distribution strategy. If you’re not leveraging Amazon CloudFront for data transfer, you’re probably spending two to three times more than you have to be.
Tim Fox: The way we come to this is by sitting down with our customers and really getting an understanding of what the business is trying to do. And answering some of these questions to help guide them and provide suggestions on the best things that they should actually look into and how much it will save them overall.
Jeremy Bendat: We not only talk to the business owners or key stakeholders, but we also talk to people on your team to see what it is that they’re trying to achieve and then come up with an execution plan together based on our recommendations and what the business is trying to achieve.
Tim Fox: Let’s start off by asking some basic questions. What are you doing for backups? How do you recover your information? How long do you need to retain information? What type of security policies do you have in place? What type of architecture diagrams do you have? Who’s in charge of that particular application and can give you an answer? Strangely enough, a lot of times customers have no clue. They know that somebody has built and deployed it, but when it comes down to who defines how that application works and what the requirements are for that application, they have no clue.
It could be more basic than that. Do they have an expectation for how customer facing applications should respond? Do they need an application that can respond in 100 milliseconds? Or is it okay to have a customer sit there and wait for 30 seconds to get a response? If you start to think through these questions, you start to have a better understanding of how you’re actually architecting your environment to better match the business need. These are the types of questions we need to get answered. And sometimes they’re tough questions.
A lot of times customers don’t know. They say – we know we have backups, but how long do we need to retain the backups? Do we have regulatory requirements that require us to retain them for six to seven years? We have a customer that has to retain data for 100 years. They need to understand what that means and start to get predictive about the amount of storage they are going to need over time. All of these things lead into the question of AWS cost optimization.
Jeremy Bendat: A big part of that is not just thinking about the infrastructure today, but understanding the future vision of an application or workload, that will help us not only to plan what we’re doing today, but to make sure that it makes sense going forward.
Tim Fox: The next thing we’re going to get into are the tools we use today. As I said, there are quite a few tools out there that actually do AWS cost optimization for you, and it isn’t just a matter of looking at the data – it’s actually answering the questions that we posed about how you’re utilizing your systems today. And the most common one that everyone always comes back to is – how are you identifying idle resources? Do you have a good way to identify how that works today? From our standpoint, we utilize a tool called CloudCheckr. CloudCheckr gives us the ability to answer some of the questions on what’s occurred.
I’m going to show you our demo dev account information. CloudCheckr analyzes your account. You would give it read-only access through IAM, and it pulls back information not only in the form of detailed billing records, but from what’s inside of the Amazon account. It will give you a detailed understanding of the inventory. In this case, we’re looking at a Dev account where we can save $4,000 a month.
The first thing to notice is that we have Amazon Redshift clusters that were basically doing nothing. We can save $900 a month just by downsizing. Does it always have to be on? Do we have specific performance parameters that aren’t necessarily being shown by just looking at the CPU utilization? Do we have memory requirements on that system that may need a larger memory set because I’m going to do large sets of queries at some point in the future, if I try to downsize it? It’s not just a matter of looking at the tools, it’s thinking through the business case and answering the question – what are you using it for? That goes for Amazon RDS instances and DynamoDB tables. Which ones are we doing writes to and which ones aren’t? Am I optimized in terms of the types of writes I’m doing, as compared to the types of reads? We had a conversation with one customer where it became apparent, after we talked to them for basically an hour, that they were upside down. They were write-dependent and almost never read the data. When they optimized it, they saved money. You don’t necessarily think about it. And that’s where we come in. We have the 100+ certs and 220+ years of combined AWS experience doing this.
Jeremy Bendat: What’s cool about CloudCheckr is that you’ll notice instance tags – that’s why a tagging strategy is so important for AWS cost optimization. We’re able to help you make the determination as to what instance or instances you are running that maybe you shouldn’t be. For example, you might think that Redshift is a part of your strategy and that’s why you’re running it, but what really happened was a rogue developer wanted to try data warehousing and turned it on. That’s why leveraging things like a service catalog, where you’re giving your engineers specific parameters on what they are allowed to work with, or even an IAM strategy on the type of environment they’re allowed to access, is important. Sometimes taking a holistic approach and identifying things that are idle can help save money and help you develop an IT strategy.
Tim Fox: We can talk a little later about some other strategies, but in regards to idle resources, they are low hanging fruit. As I said, it’s not always as easy as just looking for low CPU; you need to answer some of the other questions around what that application is doing and the expectations the business has for the response of that application to make sure you’re doing it correctly. The second part of that is unused resources. Do I have things that I’m spending money on that I shouldn’t be? Do I have Elastic IPs that aren’t attached to anything?
Here’s an example – we have a customer that had a huge number of unattached EBS volumes. When I say huge, we’re talking $20,000 to $30,000 per month in volumes that are sitting out there that nobody is using. And that’s a huge amount of money to be spending. That’s where CloudCheckr comes in. It helps provide you with an understanding of idle and unused resources, whether it’s load balancers or unattached EBS volumes. How does that affect your business? Maybe you have unattached volumes because you take snapshots of them regularly and use them as your location for doing database backups. I’ve seen this with a customer doing Oracle backups on an actual disk. But they only attach that disk when they’re doing the backups. Unfortunately, the disk has to exist. What are the implications? Do you need to have it there, always available? Looking at the data alone isn’t going to answer the question – what is the business need? Are we meeting it? That’s one of the things we’ve become good at: being a forcing function and asking the business questions.
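The unattached-volume sweep is easy to sketch. The per-GB price and the volume records below are illustrative (real EBS pricing varies by volume type and region), but the shape of the check is the same: any volume in the "available" state is billing without being attached to anything.

```python
# Hypothetical unattached-EBS-volume audit over sample data. The price
# is an assumed placeholder; check your region's actual per-GB rate.

GB_MONTH_PRICE = 0.10  # assumed $/GB-month, illustrative only

# Sample records shaped loosely like an EC2 volume listing
volumes = [
    {"id": "vol-1", "size_gb": 500,  "state": "available"},  # unattached
    {"id": "vol-2", "size_gb": 200,  "state": "in-use"},
    {"id": "vol-3", "size_gb": 1000, "state": "available"},  # unattached
]

unattached = [v for v in volumes if v["state"] == "available"]
monthly_waste = sum(v["size_gb"] for v in unattached) * GB_MONTH_PRICE
print(f"{len(unattached)} unattached volumes, ~${monthly_waste:.2f}/month")
```

In a real account you would feed this from a `describe-volumes` call rather than a hard-coded list, and then apply the business questions above before deleting anything.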
After that, look at unused resources and misprovisioned resources. For example, a year ago we went through and decided what we believed we needed for CPU, the amount of memory, and the network optimization. A year later, we aren’t thinking about that. Now we have a system that has been out there for 18 months, and we realize that simply turning it off and turning it back on with a newer instance type will give us the exact same parameters and a faster CPU. We’re actually saving money. On an ongoing basis, how are you thinking about these things? Are there times when you’re going through your environment to see if you’re answering these questions?
Jeremy Bendat: We also need to understand the behavior of the application. Are we able to leverage things like auto scaling as opposed to running your instances all the time? We want to enable that dynamic capability within your account. Same thing with leveraging things like spot instances and identifying if something is misprovisioned, we can identify that for you and then push forward a spot instance strategy.
Tim Fox: How many people out there are actually using spot instances? For those of you who aren’t, is it a concern about not having an instance running, or having an application that’s designed to be able to do it? We partner with organizations that can help build out the infrastructure for you, to always ensure you have an instance running that will leverage the spot so you’re saving money. That’s a big difference. Some of these instances can be 40% – 50% off, if you are using the spot instance market. That can be a huge savings. But again, your application must be built in such a way that it can sustain that.
Jeremy Bendat: We can help to enable your strategy in that way, even if it does take some time. Sometimes a short investment upfront can pay huge dividends in the long term.
Tim Fox: When we’re talking about right sizing, one of the things that often gets missed is how you’re utilizing storage. And this is a big one. While we may have expectations and regulatory requirements that define the data retention policies we have, are we adhering to those in the correct way? Take the example of our customer with the 100-year retention policy: it became apparent that all they were doing for backups was taking that information and shoving it into Amazon S3, which is great. It’s a good place to store things and a lot of the modern backup tools will automatically connect you to S3 for storage. But they weren’t taking advantage of lifecycle policies. Lifecycle policies allow them to move data that they don’t need to frequently access to Amazon Glacier, which saves them a huge amount of money. Because they’re rarely pulling this data back, they’re only paying a small amount for storage, plus retrieval costs on the rare occasions when they do pull it back.
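A lifecycle rule like the one described here is a small piece of configuration. The sketch below builds a rule that transitions objects under an assumed "backups/" prefix to Glacier after 90 days; the bucket name, prefix, and day count are all illustrative, and the dict follows the shape boto3's S3 lifecycle API expects.

```python
# Illustrative S3 lifecycle configuration: archive old backups to Glacier.
# Prefix, rule ID, and the 90-day window are hypothetical examples.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }
    ]
}

# Applying it would look roughly like this (needs AWS credentials, so
# left commented out):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-backup-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )

print(lifecycle_config["Rules"][0]["Transitions"])
```

Once the rule is in place the transition happens automatically, which is exactly the "set up automation and stop paying S3 rates for cold data" savings described above.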
Jeremy Bendat: CloudFront is Amazon’s content distribution network (CDN). We have a cool program in place with CloudFront, if you’re not using it. There are a few ways of getting data out of your infrastructure. One is through leveraging Amazon’s data transfer out and the other is by adding CloudFront to your account and sending it out that way. CloudFront uses a series of edge locations strategically placed across the world that will help you get your data out faster to your clients. If you’re running a website or have some sort of download service, we recommend using CloudFront.
Tim Fox: Edge locations can also affect how you’re going to save money. If I’m using U.S. West and my customers are in Europe, CloudFront is going to be a huge advantage for me because it’s going to push that data out there, and you’re going to spend less money because it pushes it to that edge. This is one of those things I think a lot of customers overlook. They don’t look at how their data is getting accessed in Amazon S3 to see if there are ways to save money by pushing it to CloudFront. We’ve seen some customers save as much as $10,000 to $20,000 a month by making a subtle shift and serving it from the location closest to the end user.
Jeremy Bendat: The customer receives a benefit too. You get the AWS cost optimization savings and they’re getting access to that data faster because they’re accessing it at a closer location. Saving money basically builds innovation. You’re giving your end users better performance. It’s a win-win across the board. That’s really the culture that Amazon embodies and if you have had any experience with your Amazon account rep, they’re always calling you and trying to get you to optimize your environment and make it more efficient.
Tim Fox: It’s simple. Happy customers buy more things. And in this case, if we can help you save money, you’re going to buy more things on AWS.
Jeremy Bendat: One of the cool things that has come out of the AWS cost optimization and savings conversation is that we have actually hired a Cloud Finance Manager. He interacts with our clients on a regular basis to try and drive that cost savings conversation. Typically that is not the technology arm of your company. He is an accountant, and that’s usually an OPEX vs. CAPEX conversation – how your bill is being distributed, leveraging tags, doing custom reports to help drive innovation. I’ve literally seen him at the end of the month email a customer and say – if you make these recommended changes we can save you $10,000. And then the customer will say, we’ll get around to it. The following month, he says again – if you had made those changes, you could have saved $20,000. And then by the third month when he says $30,000, they get it. Part of this is finding ways to stay close to the customers that we’re working with and continuing to beat the drum around cost savings through these recommendations and to figure out new ways for them to save money and innovate.
Tim Fox: Let’s look at tags. Why are tags important? How do tags save you money? Some of our customers want to understand where the money is going. For example, I have one customer that basically built a specific infrastructure in each of their products. Their board of directors wanted to see how much it costs every single user using that piece of their infrastructure. With tags, they can get that answer and combine all of the costs associated with an individual product line each month. That’s huge, when your board of directors or CEO can understand the expense of something and it’s not necessarily the concept of bill back. When you talk about enterprise, they may want to see how much a department or resource is costing them. Beyond the tagging strategy, how do you make sure that you have a good tagging strategy?
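The roll-up the board is asking for is a simple aggregation once a cost allocation tag is applied consistently. Below is a sketch over sample billing line items; the "product" tag key, service names, and dollar amounts are all illustrative. Note how the untagged item surfaces under its own bucket, which is itself useful for auditing the tagging strategy.

```python
# Hypothetical tag-based cost roll-up: sum spend per value of an assumed
# "product" cost allocation tag. All line items are sample data.

from collections import defaultdict

line_items = [
    {"service": "EC2", "cost": 1200.0, "tags": {"product": "checkout"}},
    {"service": "RDS", "cost": 800.0,  "tags": {"product": "checkout"}},
    {"service": "EC2", "cost": 300.0,  "tags": {"product": "search"}},
    {"service": "S3",  "cost": 50.0,   "tags": {}},  # untagged: a gap to fix
]

cost_by_product = defaultdict(float)
for item in line_items:
    cost_by_product[item["tags"].get("product", "untagged")] += item["cost"]

print(dict(cost_by_product))
```

In practice the line items would come from the detailed billing report (which is where tools like CloudCheckr get them), but the per-product answer the board wants is exactly this kind of sum.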
Jeremy Bendat: And to take it up a level, AWS recognizes that this is a challenge that their customers have brought forward as well. When you have multiple accounts under a consolidated bill, how do you manage that and understand your bill.
Tim Fox: Let’s talk about auto scaling – the concept of making sure you’re getting the most out of your environment and meeting the needs. It can be a tough challenge. What are the metrics you are using to drive scaling? Is it just a matter of CPU or the number of connections? Is it a matter of the latency? It could be the external pieces that go into it as well. Auto scaling at times can be a bit problematic if you’re coming in after the curve has already started. If you have the correct metrics and can get your auto scaling to work in the correct way, you’re going to save money because you’re ahead of the curve and can scale things down.
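One way to express the "pick the right metric" point is a target-tracking scaling policy, which lets the Auto Scaling group chase a metric target (here, 50% average CPU) instead of hand-tuned step alarms. The group and policy names below are illustrative; the dict mirrors the arguments EC2 Auto Scaling's `put_scaling_policy` API accepts.

```python
# Sketch of a target-tracking scaling policy. Names are hypothetical;
# the structure follows the EC2 Auto Scaling API.

scaling_policy = {
    "AutoScalingGroupName": "example-web-asg",   # assumed group name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # hold the fleet's average CPU near 50%
    },
}

# Applying it would look roughly like this (needs credentials, so
# left commented out):
# import boto3
# boto3.client("autoscaling").put_scaling_policy(**scaling_policy)

print(scaling_policy["PolicyType"])
```

If CPU is the wrong signal for your workload, the same structure can track connections or latency via a customized metric instead – that choice is the qualitative question being raised above.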
The second piece is using Lambda. This is an underutilized service that gives you the capability to only pay for action when you need to take that action. For example, say you have a customer that’s going to drop off a file in S3 that you need to take action on. Instead of running an instance, you can tie that to a Lambda function that looks for that file and only fires off when it arrives. So now you’re only paying per event, as compared to paying the hourly rate of running an instance. That’s a huge advantage – being able to look at what you’re doing today, drill it down to the specific business event, and utilize Lambda to make it work.
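A minimal sketch of that pattern: a Lambda handler wired to S3 "object created" notifications instead of a polling instance. The bucket and key names in the simulated event are illustrative; the event structure follows the shape S3 uses for its notifications.

```python
# Hypothetical S3-triggered Lambda handler: fires only when a file lands,
# so you pay per invocation rather than per instance-hour.

def handler(event, context):
    """Process each newly dropped object named in the S3 event."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # real work (download, parse, load) would happen here
        processed.append(f"s3://{bucket}/{key}")
    return processed

# Simulated invocation with an S3-notification-shaped sample event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "drop-bucket"},
                "object": {"key": "incoming/file.csv"}}}
    ]
}
print(handler(sample_event, None))
```

The cost model follows directly: if files arrive a few times a day, you pay for a few hundred milliseconds of compute per file instead of 730 instance-hours a month of waiting.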
Finally, a little bit about underutilized spot instances. Spot instances give you the capability to build behind an ELB and ensure that systems are always running – whether they’re on-demand or spot – so you’re getting the performance you need. Alright, let’s talk about Reserved Instances (RIs).
Jeremy Bendat: There are a few simple RI strategies that we love to evangelize amongst our customers. But first let’s talk about what an RI is. You’re basically buying the right to a discount by committing to AWS for a certain period, and through that, you are saving money. There are a few different RI models: no upfront, partial upfront, all upfront, and convertible.
Tim Fox: This gets a little confusing for people to understand sometimes, because it really is a billing function and it’s also a reservation. If I’m going through and buying a particular m4.4xlarge in U.S. East for my Zone A, I know that I’m always going to have that capacity available. But that also means if I don’t have something running in our environment, I’m not going to get credit for it – but I’m still going to pay for it. So, it’s a billing function: as long as you have one of those instances running, you always get credit for that, as compared to paying the full book rate.
Jeremy Bendat: And when you think about the strategy behind an RI, you’re not necessarily always locked into one instance. We always encourage our clients to buy within an RI family. If you buy an m4.4xlarge in U.S. East for example, you can move up and down within that m4 family. You can get two m4.2xlarge instances for the exact same price. And you can also move up: if you purchase multiple m4.4xlarge RIs, you can combine them to cover the next size up in the family. Being able to understand that type of strategy is going to be extremely helpful depending on how elastic and dynamic your infrastructure is.
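The within-family math here follows AWS's published normalization factors for instance size flexibility: each size consumes a fixed number of "units" of RI coverage, doubling with each size step. A quick sketch (the factors below are the standard published values for these sizes):

```python
# Normalization factors: how many units of RI coverage each size consumes.
NORMALIZATION = {"large": 4, "xlarge": 8, "2xlarge": 16,
                 "4xlarge": 32, "8xlarge": 64}

def ri_units(size, count=1):
    """Total normalized units consumed by `count` instances of `size`."""
    return NORMALIZATION[size] * count

# One 4xlarge RI covers exactly two 2xlarge instances of the same family...
assert ri_units("4xlarge") == ri_units("2xlarge", count=2)
# ...and two 4xlarge RIs together cover one 8xlarge-sized footprint.
assert ri_units("4xlarge", count=2) == ri_units("8xlarge")

print("within-family RI math checks out")
```

This is why the elasticity question matters: as long as the total units running in the family match the units you reserved, the billing credit applies regardless of how the sizes are sliced.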
Tim Fox: The other question is – what if I’m using auto scaling environments? Because this is a billing function, it doesn’t necessarily mean you’re tied to a specific instance itself to actually be running it, as long as you have an instance type running in that particular environment. We sit down with our customers and look at what that is because it’s a little more complex than just saying – I’m going to run a server 24 hours a day, 7 days a week. Maybe that doesn’t make sense. Maybe you’re only going to run that system 20 hours a day, there’s four hours you aren’t using it in the middle of the night, you’re going to shut it off. There’s still a break-even point where you’ll actually save money, and we can help you look at what that is and get an understanding.
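That break-even point is simple arithmetic once you pin down the rates. The sketch below uses illustrative numbers, not real AWS pricing: the key fact it encodes is that a no-upfront RI bills every hour of the month whether the box runs or not, while on-demand bills only for hours used.

```python
# Hypothetical RI vs. on-demand break-even. Rates are illustrative only.

HOURS_PER_MONTH = 730
on_demand_rate = 0.20   # assumed $/hour on demand
ri_rate = 0.13          # assumed effective $/hour for a no-upfront RI

# An RI bills for every hour of the month, running or not:
ri_monthly = ri_rate * HOURS_PER_MONTH

def on_demand_monthly(hours_per_day):
    """On-demand cost for a box run only part of each day."""
    return on_demand_rate * hours_per_day * (HOURS_PER_MONTH / 24)

# Utilization above which the RI is cheaper:
break_even_hours = ri_monthly / on_demand_rate / (HOURS_PER_MONTH / 24)
print(f"RI wins above ~{break_even_hours:.1f} hours/day")
```

At these sample rates the crossover is about 15.6 hours a day, so the "we shut it off four hours a night" system from the example (20 hours a day) still comes out ahead on the RI – which is exactly why the analysis is worth doing before assuming part-time means on-demand.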
Jeremy Bendat: CloudCheckr helps us see how the instances are behaving and whether the technology case justifies the business decision to spend the money on that instance. Let’s talk a little bit about convertible RIs and the new regional benefit. Convertible RIs are similar to moving up and down within a family, but you can actually convert your RI into anything as long as it’s within that same region. And by enabling something called regional benefit, you’re no longer tied to an AZ – you can move and spread around throughout any AZ within that region.
Tim Fox: We can help you define what RIs you should buy. And that’s through interviewing and having conversations with you, looking at your bill, and looking at the optimization of those systems.
And then, once you’ve purchased those RIs, what are you doing with them? Are you utilizing them or are you underutilizing systems all the time? We can help you monthly to look at that and define whether you’re getting the value out of the RIs.
Jeremy Bendat: That’s where that white glove mentality comes in – you don’t need to hire someone to help manage all of your AWS infrastructure. We are literally living and breathing this stuff every day to help you be more efficient, and we use the experience that we have to help you save money and innovate. With all of these cost savings exercises we discussed, we’ve helped free up IT budget – now what do we do with it? That’s a conversation that we’re used to having: we’ve just freed up an additional $100,000 off your AWS bill for the year, so what’s next? What services are you going to take advantage of to help drive that additional innovation?
Tim Fox: Driving Innovation. We live in a world where capital money is what drives our ability to make change. If you can actually say – I can save the company enough money to spin up new servers or hire more people, that is going to make a huge difference. You’re saving money and innovating at the same time.
Jeremy Bendat: Our goal is to help you as individuals and as companies drive innovation through AWS cost optimization and identify opportunities within your account. Contact us if you are interested in a free CloudCheckr account, an AWS cost optimization assessment, or if you want to talk about the various proof of concept programs AWS offers. And don’t forget to download our free eBook. Thank you so much and we appreciate the opportunity to chat with you.