AWS does not provide a way to cap usage costs. It is often pointed out that automatically shutting down a commercial website when charges exceed a budget would not be useful, because only the business itself has the information needed to choose the appropriate response. However, for those who want to experiment at home for learning purposes, this objection does not apply.
Prevention is a good thing, but it is impossible to prevent all accidents and attacks. This question is about response and not prevention.
One standard suggestion is to have some means of rapidly shutting down all AWS resources in an account.
Another piece of standard advice is to make use of features like budget alerts. For an individual, the time to react to such an alert could plausibly be a day, or a week or more in case of illness, by which point a very high bill could have accumulated. So automation might be useful here.
How can I solve these problems in a manner suitable for an individual developer experimenting in their own time and at their own cost? In particular, how can I:
Prepare for a rapid, well-tested, reliable response to shut down all resource usage in an AWS account
Trigger that response automatically (triggered by, for example, an AWS budget alert, or some other form of cost monitoring)
Some potential complications:
A. In the case of a deliberate attack rather than pure user error, the rapid-shutdown response (point 1 above) may be complicated by the attacker making use of features such as EC2 termination protection.
B. An attacker might also make use of many different AWS services. So, given the large and expanding AWS product range, attempting to maintain a library that deletes every type of resource (EC2 instances, RDS instances, etc.), using code that is specific to particular resource types, may be impractical.
C. This rather old forum post suggests that AWS accounts can't be closed without first cancelling all opt-in services.
Note I can't use the free tier because I want to make use of features not available in that tier.
First off, proper security and management of root account credentials is critical. Enable MFA on all accounts, including root. Do not use the root account except where absolutely necessary. Limit the number of accounts with broad permissions. Enable CloudTrail and, if desired, alert on the use of elevated permissions. These sorts of actions will protect against nearly all attackers, and since this is a personal account, the kinds of attackers who could evade these controls would likely have no interest in causing an individual harm; they are more interested in large organizations.
As for accidents, what types of accidents do you think might happen? Do you have large compute jobs that auto-scale based on factors such as queue depth? Your best option here is likely to set ASG maximum sizes, use CloudWatch Events to monitor and remediate resource-usage issues, or even use third-party tools that deal with this sort of thing.
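For illustration, capping an Auto Scaling group's maximum size is a single API call. A minimal boto3 sketch, assuming default credentials and a placeholder group name ("my-asg") and cap:

    import boto3

    # Cap the maximum size of an Auto Scaling group so a runaway scaling
    # policy cannot launch an unbounded number of instances.
    # "my-asg" and MaxSize=2 are placeholders; substitute your own values.
    autoscaling = boto3.client("autoscaling")
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        MaxSize=2,
    )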
Something to keep in mind is that AWS imposes account limits that will constrain you somewhat, but for a personal account even these limits are probably too permissive. I only have experience requesting limit increases, but it might be worth asking AWS whether they perform limit decreases as well.
You have raised concerns about excessive costs being generated due to:
Normal usage: If you require the computing resources, then they are most probably of sufficient benefit to the company to warrant the cost. Therefore, excessive use should generate a warning, but you do not want to turn things off.
Accidental usage: This is where an authorized person uses too many resources, such as turning on a service and forgetting to turn it off. Again, monitoring can give you a hint that this is happening. Many AWS customers create a Sandbox Account where they can experiment, and then use an automated script to turn off resources in this account (which is not used for real business purposes).
An attacker: This is an external party sending excessive usage to your services (eg making many requests to your website) but without access to your actual AWS account. This could also be caused by a Denial of Service attack. There is plenty of documentation around handling DDoS-style attacks, but a safe method is to limit the maximum number of instances permitted in an Auto Scaling group.
Someone accessing your AWS account: You mention an attacker making use of EC2 Termination Protection. This is an option you can apply to your own EC2 instances to prevent accidental termination. It is not something that someone outside your company would be able to control unless they have gained credentials to access your AWS account. If you are worried about this, then activate Multi-Factor Authentication (MFA) on your logins.
If you are worried about excessive costs, it's worth considering what generates costs:
Amazon EC2 instances are charged per hour. If you are worried about their cost, then you can Stop them, but this means they are no longer providing services to your company and customers.
Storage services (eg Amazon EBS and Amazon S3) are charged based upon the amount of data stored: You most probably do not want to automatically delete data due to excessive costs, since the data is presumably of value to your company.
Database services (eg Amazon RDS) are charged per hour. You probably don't want to turn them off because they, too, contain data of value to your company.
Some services are charged based upon throughput (eg AWS API Gateway, Amazon Kinesis), but turning off such services would also impact your ability to continue providing services to your customers.
If you are talking about personal usage of AWS that is not supplying a service to customers, then best practice is to always turn off services that aren't required, such as compute and database services. AWS is an on-demand service, so you only have to pay for services that you have requested.
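If the account is purely a sandbox, the "turn off what you don't need" script can be as blunt as stopping every running instance. A minimal sketch with boto3, assuming default credentials, covering only EC2 and RDS instances in one region (other resource types would need their own calls), and stopping rather than deleting anything:

    import boto3

    def stop_everything(region="us-east-1"):
        """Stop (not terminate) all running EC2 instances and available RDS instances."""
        ec2 = boto3.client("ec2", region_name=region)
        rds = boto3.client("rds", region_name=region)

        # Find and stop running EC2 instances.
        reservations = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)

        # Stop RDS instances that are currently available.
        for db in rds.describe_db_instances()["DBInstances"]:
            if db["DBInstanceStatus"] == "available":
                rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])

    if __name__ == "__main__":
        stop_everything()

Note that stopped instances still incur EBS and RDS storage charges; only the hourly compute charges stop.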
Also, create Billing Alarms to alert you when you cross a certain cost threshold. These can be further broken down into budgets that notify you as you approach certain spending thresholds.
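A billing alarm can also be created programmatically. Billing metrics are only published in us-east-1 and must first be enabled under Billing preferences ("Receive Billing Alerts"); the $50 threshold and SNS topic ARN below are placeholders. A boto3 sketch:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="estimated-charges-over-50-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,              # 6 hours; billing data only updates a few times a day
        EvaluationPeriods=1,
        Threshold=50.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
    )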
Bottom line: Rather than focusing on systems that automatically react to high expenditure and delete things, you should only run services that you currently want. Set alarms/budgets to be aware of costs that exceed desired thresholds.
We are going to have a new business system, and I'm trying to convince my boss to host it in the cloud in China (ie Azure, AWS, etc.) because that's where the business is. He has concerns about data confidentiality and doesn't want the company's financial information to leak out. The software vendor also suggested we build our own data center if we are so concerned about data confidentiality, which makes it even more difficult for me to convince him. He has the impression that anything can be done in China.
I understand that Azure SQL is not an option for me because the host admin still has control even if I implement TDE (I cannot use Always Encrypted). Now I'm looking at a VM, where I have full control from the VM level up. I can also use disk encryption. Coupled with other security measures like SSL, I'm hoping this will improve the security of the data both in transit and at rest. Is my understanding correct?
With that said, can the Azure admin still override anything set on the VM and take over the VM fully?
Even if that is technically possible, as long as it takes a lot of effort (benefit < effort), it's still worth trying.
Any advice will be much appreciated.
An Azure-level admin can just log in to your VM; it doesn't matter whether it's encrypted or not (or they can decrypt it, for that matter). You cannot really protect yourself from somebody inside your organization doing what they are not supposed to do (you can to some extent, with things like Privileged Identity Management, proper RBAC, etc.).
If you are talking about an Azure fabric admin (the person working for Microsoft, or for the Chinese operator in this particular case): he could obviously pull the hard drive and get access to your data, but it's encrypted at rest, so chances are he cannot decrypt it. If you encrypt the VM on top of that with Azure Disk Encryption (or Transparent Data Encryption) using your own set of keys, he wouldn't be able to decrypt the data even if he could somehow get past the Azure-side encryption.
If you want more control, IaaS services are better than PaaS services, since you have more control over IaaS. You can use BitLocker to encrypt your disks if you are using a Windows OS. The China data centers also operate under industry-specific standards. Access to your customer data is controlled by an independent company in China, 21Vianet; not even Microsoft can access your data without approval and oversight by 21Vianet. I think there is no big risk, but for better security you have to implement more security mechanisms than Azure provides.
I need to copy highly sensitive data onto an AWS EC2 instance (to perform a couple of operations on it).
How safe is my data? (I have configured the security group on this instance to allow access only from my IP.)
Can Amazon access this data? Can they somehow access and use it? The data involves source code that is extremely important to my organization, and a leak could cause huge repercussions!
According to Amazon:
Who owns customer content?
Customers maintain ownership of their customer content and select which AWS services process, store and host their customer content. We do not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to our customers and their end users. We never use customer content or derive information from it for marketing or advertising.
You can also check Whitepaper on EU Data Protection
Hope it helps :-)
As soon as you send your data to the Cloud, there are security risks. There is no way to make these risks zero.
You say you have limited access to your server to just your IP. OK, is it a company IP? Are there others on your Wifi network who connect to the internet with that IP? Is your Wifi network secure? Have any of your employees installed malware on their desktops? etc etc etc. Restricting to your IP only is a good thing to do, but it's not 100% unless you are connecting from a highly secure location.
Amazon says "We do not access or use customer content for any purpose other than as legally required and for maintaining the AWS services and providing them to our customers and their end users." OK, so they CAN access your data. Who knows what "legally required" means. Seems to me governments are making the laws up as they go along nowadays.
If the data is really that sensitive, don't put it in the Cloud. A cost analysis should help you - how much do you lose if your data is compromised? how much would it cost to buy the server out of the Cloud and do it securely at your company? If the first amount is massive and the second tiny don't use the Cloud!
We had a bill-shock scare on our corporate account when someone got access to the secret keys and started a large number of m3.large spot instances (50+) on the AWS account.
The servers ran overnight before it was found and the bill went over $7000 for the day.
After the incident we set up several security practices on the account, including:
key rotation
password minimum length
password expiry
Billing alerts
CloudWatch
Git pre-commit hooks to look for AWS keys (a rough sketch follows below)
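For anyone setting up something similar, a pre-commit hook only needs to scan the staged diff for key-shaped strings. A rough Python sketch, saved as .git/hooks/pre-commit and made executable; the patterns below are heuristics (the AKIA... access key ID format plus a loose secret-key match) and are not exhaustive:

    #!/usr/bin/env python3
    """Reject commits whose staged changes look like they contain AWS credentials."""
    import re
    import subprocess
    import sys

    # Access key IDs look like AKIA followed by 16 uppercase alphanumerics;
    # the secret-key pattern is a loose heuristic and may produce false positives.
    PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),
        re.compile(r"aws_secret_access_key\s*[=:]\s*\S+", re.IGNORECASE),
    ]

    def main() -> int:
        diff = subprocess.run(
            ["git", "diff", "--cached", "-U0"],
            capture_output=True, text=True, check=True,
        ).stdout
        for pattern in PATTERNS:
            if pattern.search(diff):
                print(f"Possible AWS credential matching {pattern.pattern!r}; commit aborted.")
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())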
I have yet to find a way to cap the bill at a desired upper threshold.
Does AWS provide a method of setting a cap on the bill(daily/monthly) ? Is there any best practices on this front which can be added to the measures pointed out above to prevent unauthorized use ?
Amazon does not have a mechanism to "take action" in cases where bills skyrocket. You can do what you've already done:
Set up billing alerts to monitor for a skyrocketing bill
Set up good security practices to ensure that people cannot mess with your AWS account
But also, you can:
Set up internal company policies so that employees don't accidentally cause unnecessary charges
Ensure you're using IAM roles and policies appropriately so that no one can do the wrong thing
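As one concrete example of making it hard to "do the wrong thing", an IAM policy can deny launching anything but small instance types. A boto3 sketch; the policy name and the list of "safe" instance types are assumptions you would adjust to your own needs:

    import json
    import boto3

    # Deny launching EC2 instances unless the instance type is t3.micro or t3.small.
    # Attach the resulting policy to the groups or roles your developers use.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {
                    "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
                },
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="deny-large-instance-types",  # placeholder name
        PolicyDocument=json.dumps(policy_document),
    )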
There's a good reason why AWS won't do anything active: what exactly would you expect them to do? Doing anything that isn't in line with your business practices could seriously damage your company.
For example, say you have an Auto Scaling group managing a small fleet of EC2 instances. One day, your company gets some unexpected good press, your website activity goes through the roof, new EC2 instances are launched to meet the demand, and the bill blasts past your billing alert. If AWS were to terminate or stop EC2 instances to prevent your bill from going nuts, then your customers wouldn't be able to access your website. This could cause damage to your company's reputation, or worse.
If you want to take action, you can set up a trigger on the billing alert and handle it yourself according to your business needs. That's how AWS is built: it gives you the tools; you need to use those tools in a way that best suits your business.
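One way to wire this up is to subscribe a Lambda function (containing whatever response suits your business) to the SNS topic used by the billing alarm. A boto3 sketch; both ARNs are placeholders, and the Lambda function itself is assumed to exist already:

    import boto3

    # Placeholder ARNs: the billing alarm's SNS topic and your response function.
    ALARM_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"
    HANDLER_FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:billing-response"

    sns = boto3.client("sns", region_name="us-east-1")
    lambda_client = boto3.client("lambda", region_name="us-east-1")

    # Allow SNS to invoke the function, then subscribe the function to the topic.
    lambda_client.add_permission(
        FunctionName=HANDLER_FUNCTION_ARN,
        StatementId="billing-alarm-invoke",
        Action="lambda:InvokeFunction",
        Principal="sns.amazonaws.com",
        SourceArn=ALARM_TOPIC_ARN,
    )
    sns.subscribe(
        TopicArn=ALARM_TOPIC_ARN,
        Protocol="lambda",
        Endpoint=HANDLER_FUNCTION_ARN,
    )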
You can definitely set up Billing Alerts to receive a notification when this kind of thing happens:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
Also take a look at:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/checklistforunwantedcharges.html
Although AWS does not support a cap on billing, it does support caps on services, including a cap on the number of EC2 instances - see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html. By default new accounts are capped at 20 EC2 instances, but this limit can be changed.
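You can check the current cap programmatically. A boto3 sketch that queries the account's max-instances attribute (this reflects the older per-instance limit described at that link; newer accounts express the cap differently, so treat this as illustrative):

    import boto3

    # Query the account's EC2 instance limit.
    ec2 = boto3.client("ec2")
    attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
    for attr in attrs["AccountAttributes"]:
        for value in attr["AttributeValues"]:
            print(f"{attr['AttributeName']}: {value['AttributeValue']}")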
Windows Azure has a store.
The stuff you can buy there is called Add-Ons, and they fall into two categories: services and data.
I understand the point of some of the service offerings, but not all, and I don't yet understand the point of the data offerings at all.
With services, some offerings are database deployments such as ClearDB (MySQL) and MongoLab. That makes sense to me: You get those databases deployed and monitored with a few clicks, yet those databases run in the same data center as the applications that consume them, which is good for performance and security.
For most other services (there is a simple scheduler application, for example), it seems that the only advantage is the unified billing method. Is that a correct observation, or is there more to it?
Then the data offerings: the fact that I can buy Bing query transactions cannot really have anything to do with the rest of my Azure account, right? Technically, it's just Bing (or whatever other data offering you look at), and presumably I'm going against the same Bing API that I would have used previously (I'm assuming that was possible). There is nothing really deployed in any Azure data center the moment I buy it, is there? So in what sense is that an Add-On?
In a nutshell, am I missing something, or are most Add-Ons just a method of buying external services and having them billed on my Azure account?
If you can answer the question for other 'app stores', you can answer it for Windows Azure. We know about THE App Store (as per the court battles over the name) which is the only way to get applications onto the closed (iOS) device. There is also a Mac App Store which would seem unnecessary because of the ability to install apps by yourself (which makes it more similar to the Azure store). In this case the reason for the store is discoverability, association with the store brand (where the buyer assumes a degree of vetting), a single point for updates, and simplified billing.
The Windows Azure Store (and data marketplace) exist for similar reasons. It is less about the technical benefits than the association with the Azure brand. Since SO is technical, let me highlight some (largely) technical aspects:
Don't assume that the service will run in the same data centre. In most cases it probably won't.
There is an advantage of having everything in one place from an operational point of view. Granting of operator access to the subscription means that you don't have to administer accounts on the service. I have had problems with this though - where the service made it difficult to do other things (such as get support) because the Azure identity wasn't handled very well. (I had this with New Relic).
The combined billing works on credit card payments only. Last time I checked (Summer 2013) there was no way to get an add-on with a pay-by-invoice subscription, so a second subscription (with credit card) was needed anyway.
Add-ons seem to still be in 'preview', which may indicate low adoption. Microsoft probably hasn't seen it grow the way they expected and may not develop it much in future. This is opinion only, and shouldn't affect the service (after all, the store is just a gateway and has little technical impact on the service provided).
Don't completely ignore the store, however. The biggest benefit seems to be the free tiers and reduced pricing, where Microsoft has managed to get service providers to make the store attractive. For example, the SendGrid free option provides 25,000 emails per month, and there doesn't seem to be a free option on SendGrid.com. New Relic pricing was (and maybe still is) significantly less.
Pay attention mainly to the pricing benefits, rather than perceived technical benefits.
I'm not sure whether this is the right place to ask this question, but I'd like to give Amazon RDS's trial a go. Previously I've used Microsoft SQL Azure's trial and they cut me off as soon as I overshot the limit, preventing me from paying a single cent.
However, with Amazon RDS's trial, it seems that I will be charged as soon as I exceed their limits. I'd just like to know if there's anything in particular I should look out for, that I might miss out, and be charged because of that.
Of course, I'd prefer it if there is a way for me to prevent me from exceeding the free-of-charge limits.
Many thanks...
As far as I know you can't set a hard limit. As far as I can tell the only limits you can hit are the time one or the IO limit: RDS won't magically grow your storage size or your instance size for you.
You can, however, set up a billing alert: Amazon billing charges are available as a metric in CloudWatch (Amazon's monitoring system), so you can create alerts based on them (for example, to send you an email). You can set this up from the account activity page, or you can configure the alerts as you would with any other CloudWatch metric.