Since its emergence in the mid-2000s, the cloud computing market has evolved significantly. The benefits of reliability, scalability, and cost reduction using cloud computing have created…
Cloud cost management: everybody wants to keep costs down as much as possible while maintaining reliability and service quality.
In this post, we'd like to go beyond the more obvious recommendations, while keeping to mostly simple tips you may have overlooked.
After you design your deployment and decide on a cloud provider, one of your first instincts is to sign up for the service directly and start executing. In truth, unless you expect very high volumes, going through an authorized reseller may let you enjoy their volume discount. Some resellers also offer value-added services to watch your bill, such as Datapipe's Newvem offering. Don't be afraid to shop around and see who's offering what, and at which price.
Sure, everybody knows that you can reserve EC2 and RDS instances, and Redshift too. Upfront, no-upfront, 1 year, 3 years… but did you also know that you can reserve DynamoDB capacity?
DynamoDB reservations work by allocating read and write capacity units, so once you get a handle on DynamoDB pricing and your own needs, a reservation can reduce your DynamoDB costs by up to 40%.
And AWS keeps updating its offerings, so look out for additional reservation options beyond the straightforward ones.
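As a back-of-the-envelope sketch, you can estimate whether a reservation pays off by comparing total on-demand spend against the upfront fee plus the reserved hourly rate. The prices in this example are illustrative assumptions, not current AWS rates:

```python
def reserved_savings(on_demand_hourly: float,
                     reserved_hourly: float,
                     upfront: float,
                     hours: int = 8760) -> float:
    """Fraction saved by reserving capacity instead of paying
    on demand over `hours` (default: one year)."""
    on_demand_total = on_demand_hourly * hours
    reserved_total = upfront + reserved_hourly * hours
    return 1 - reserved_total / on_demand_total

# Illustrative numbers only -- check the DynamoDB pricing page for real rates.
saving = reserved_savings(on_demand_hourly=0.10,
                          reserved_hourly=0.04,
                          upfront=150.0)
print(f"Estimated saving: {saving:.0%}")
```

With these made-up inputs the reservation comes out roughly 40% cheaper, in line with the figure above; plug in the actual rates from the pricing page before committing to a term.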
EC2, RDS, Redshift and DynamoDB offer self-service reservations via the AWS console and APIs. Does that mean you can't reserve capacity for other services? Well, yes and no. If you check the pricing page for some services, such as S3, you will see that starting from a certain volume the price changes to 'contact us'. This is a good sign: it means that if you can plan for the capacity, AWS wants to hear from you. And even if you fall somewhat shy of qualifying, it's worth checking with your account manager whether AWS is willing to offer a discount for your volume of usage.
One of the more opaque cost items in the AWS bill is the data transfer fee. It's tricky to diagnose, but by sticking to a few guidelines you should be able to keep it down. Some worthwhile tricks to consider:
CloudFront traffic is cheaper than plain data transfer out. Want something downloaded off your servers or services? Serve it through CloudFront, even if there is no caching involved.
Need to exchange files between AWS availability zones? Use AWS EFS. Aside from being highly convenient, it waives the cross-AZ data transfer fee.
Also, take due care never to use external IP addresses unless strictly needed. Routing traffic through public IPs incurs data transfer costs for traffic that would otherwise be totally free over private addresses.
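To see why routing matters, it helps to put rough numbers on the different paths a gigabyte can take. The per-GB prices below are assumptions for the sake of the sketch; always check the current AWS pricing pages:

```python
# Illustrative per-GB prices (assumed, not current AWS rates).
PRICE_PER_GB = {
    "data_out": 0.09,         # EC2 -> internet, direct
    "cloudfront": 0.085,      # CloudFront -> internet (first tier)
    "cross_az_public": 0.02,  # same region, over public IPs
    "cross_az_private": 0.01, # same region, over private IPs
    "intra_az_private": 0.0,  # same AZ over private IPs is free
}

def monthly_cost(gb: float, route: str) -> float:
    """Estimated monthly data transfer charge for `gb` over `route`."""
    return gb * PRICE_PER_GB[route]

for route in PRICE_PER_GB:
    print(f"{route:>18}: ${monthly_cost(1000, route):.2f} per 1000 GB")
```

The spread between the public-IP and private-IP rows is exactly the avoidable cost the last tip warns about: the same bytes between the same instances can be billed or free depending only on which address they travel over.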
One major source of excess costs is keeping data beyond what is actually needed. Old archives and files, old releases, old logs… The problem is that the time it takes to figure out whether they can safely be erased usually outweighs the value of just keeping them around. A good compromise is to have AWS automatically move them to cheaper, less readily accessible storage tiers, such as S3 Infrequent Access (IA) and Glacier.
You may be tempted to perform some of these transitions manually, but keep in mind that changing a file's storage class by hand carries the cost of a request operation, so it's better to let a lifecycle policy handle it.
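Such a policy is just a small piece of configuration. Here is a minimal sketch of an S3 lifecycle configuration that ages objects into IA and then Glacier; the bucket name and `logs/` prefix are hypothetical examples:

```python
import json

# Transition objects under a (hypothetical) "logs/" prefix to
# Standard-IA after 30 days and to Glacier after 90 days.
lifecycle = {
    "Rules": [{
        "ID": "archive-old-logs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
    }]
}

print(json.dumps(lifecycle, indent=2))

# To apply it with boto3 (requires AWS credentials):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```

Once the rule is in place, S3 performs the transitions on its own schedule, so no per-object manual requests are needed.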