Limitations of Amazon RDS's Trial - amazon-rds

I'm not sure whether this is the right place to ask this question, but I'd like to give Amazon RDS's trial a go. Previously I've used Microsoft SQL Azure's trial and they cut me off as soon as I overshot the limit, preventing me from paying a single cent.
However, with Amazon RDS's trial, it seems that I will be charged as soon as I exceed their limits. I'd just like to know if there's anything in particular I should watch out for that I might otherwise miss and be charged for.
Of course, I'd prefer it if there were a way to prevent myself from exceeding the free-of-charge limits.
Many thanks...

As far as I know you can't set a hard limit. As far as I can tell, the only limits you can hit are the instance-hours limit or the I/O limit: RDS won't magically grow your storage size or your instance size for you.
You can, however, set up a billing alert: Amazon billing charges are available as a metric in CloudWatch (Amazon's monitoring system), so you can create alerts based on them (for example, to send you an email). You can set this up from the Account Activity page, or you can configure the alerts as you would for any other CloudWatch metric.
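For example, here's a minimal sketch (in Python with boto3) of creating such an alarm; the SNS topic ARN, account ID, and $10 threshold are placeholders to substitute with your own. Note that billing metrics only appear after you enable "Receive Billing Alerts" in the billing preferences, and they are only published in the us-east-1 region.

```python
import boto3

# Billing metrics are only published to us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-10-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # billing data is refreshed a few times per day
    EvaluationPeriods=1,
    Threshold=10.0,            # alarm once estimated charges exceed $10
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder: an SNS topic you created and subscribed your email to.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```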

Related

What's the meaning of the daily memory-time quota in the context of Azure Function Apps?

Within an Azure Function App it is possible to define a daily memory-time quota.
Unfortunately I was not able to find an official resource from Microsoft stating what setting this value actually means.
What is a memory-time quota? What does it mean if I set the value e.g. to 1000?
There is a document about the daily memory-time quota (refer to Step 7, Configure a Daily Use Quota, for the details).
In short, the Azure Functions consumption plan offers near-infinite scale to handle huge spikes in load. But that also leaves you open to a "denial of wallet" attack where, due to an external DoS attack or a coding mistake, you end up with a huge bill because your function app scaled out to hundreds of instances. The daily quota allows you to set a limit in terms of "gigabyte seconds" (GB-s).
For "Gigabyte seconds", you can refer to this SO answer.
Hope it helps.

Prepare to respond to excessive usage on personal dev AWS account

AWS does not provide a way to cap usage costs. It is often pointed out that automatically shutting down a commercial website when charges exceed a budget would not be useful, since only the business itself has the information needed to choose an appropriate response. However, for those who want to experiment at home for learning purposes, this situation does not apply.
Prevention is a good thing, but it is impossible to prevent all accidents and attacks. This question is about response and not prevention.
One standard suggestion is to have some means of rapidly shutting down all AWS resources in an account.
Another piece of standard advice is to make use of features like budget alerts. As an individual citizen, it's plausible that the time to react to such an alert could be one day, or perhaps a week or more in case of illness, which could cause a very high bill. So automation might be useful here.
How can I solve these problems in a manner suitable for an individual developer experimenting in their own time and at their own cost? In particular, how can I:
1. Prepare for a rapid, well-tested, reliable response to shut down all resource usage in an AWS account
2. Trigger that response automatically (triggered by, for example, an AWS budget alert, or some other form of cost monitoring)
Some potential complications:
A. In the case of deliberate attack rather than pure user error, 1. may be complicated by the attacker making use of such features as EC2 termination protection.
B. An attacker might also make use of many different AWS services. So, given the large and expanding AWS product range, attempting to maintain a library that deletes every type of resource (EC2 instances, RDS instances, etc.), using code that is specific to particular resource types, may be impractical.
C. This rather old forum post suggests that AWS accounts can't be closed without first cancelling all opt-in services.
Note I can't use the free tier because I want to make use of features not available in that tier.
First off, proper security and management of root account credentials is critical. Enable MFA on all accounts, including root. Do not use the root account except where absolutely necessary. Limit accounts with broad permissions. Enable CloudTrail and, if desired, alert on use of elevated permissions. These sorts of actions will protect against nearly all attackers, and since this is a personal account, the types of attackers who could evade these controls would likely have no interest in causing an individual harm; they are more interested in large organizations.
As for accidents, what types of accidents are you thinking might happen? Do you have large compute jobs that use auto-scaling based on factors such as queue depth? Your best action here is likely to set ASG max sizes, use CloudWatch Events to monitor and remediate resource usage issues, or even use third-party tools that deal with this type of thing.
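Capping an Auto Scaling group is a one-liner with boto3, for instance; the group name and sizes below are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "my-batch-asg" is a placeholder group name. A hard MaxSize stops a
# runaway scaling policy from launching more instances than you can afford.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-batch-asg",
    MinSize=0,
    MaxSize=4,
)
```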
Something to keep in mind is that AWS implements account limits that will constrain you some but for a personal account, even these limits are likely too permissive. I only have experience requesting limit increases but it might be worth asking AWS if they perform limit decreases as well.
You have raised concerns about excessive costs being generated due to:
Normal usage: If you require the computing resources, then they are most probably of sufficient benefit to the company to warrant the cost. Therefore, excessive use should generate a warning, but you do not want to turn things off.
Accidental usage: This is where an authorized person uses too many resources, such as turning on a service and forgetting to turn it off. Again, monitoring can give you a hint that this is happening. Many AWS customers create a Sandbox Account where they can experiment, and then use an automated script to turn off resources in this account (which is not used for real business purposes); a minimal sketch of such a script follows this list.
An attacker: This is an external party sending excessive usage to your services (eg making many requests to your website) but without access to your actual AWS account. This could also be caused by a Denial of Service attack. There is plenty of documentation around handling DDOS-style attacks, but a safe method is to limit the maximum number of instances permitted in an Auto Scaling group.
Someone accessing your AWS account: You mention an attacker making use of EC2 Termination Protection. This is an option you can apply to your own EC2 instances to prevent accidental termination. It is not something that someone outside your company would be able to control unless they have gained credentials to access your AWS account. If you are worried about this, then activate Multi-Factor Authentication (MFA) on your logins.
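As a sketch of such a sandbox turn-off script (assuming Python with boto3, and that only EC2 and RDS are in use; it stops rather than terminates, so no data is lost):

```python
import boto3

# Sweep every region, stopping running EC2 instances and available RDS
# instances. Extend with other services your sandbox actually uses.
regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    running = [
        i["InstanceId"]
        for res in ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        for i in res["Instances"]
    ]
    if running:
        ec2.stop_instances(InstanceIds=running)

    rds = boto3.client("rds", region_name=region)
    for db in rds.describe_db_instances()["DBInstances"]:
        if db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```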
If you are worried about excessive costs, it's worth considering what generates costs:
Amazon EC2 instances are charged per hour. If you are worried about their cost, then you can Stop them, but this means they are no longer providing services to your company and customers.
Storage services (eg Amazon EBS and Amazon S3) are charged based upon the amount of data stored: You most probably do not want to automatically delete data due to excessive costs, since the data is presumably of value to your company.
Database services (eg Amazon RDS) are charged per hour. You probably don't want to turn them off because they, too, contain data of value to your company.
Some services are charged based upon throughput (eg AWS API Gateway, Amazon Kinesis), but turning off such services would also impact your ability to continue providing services to your customers.
If you are talking about personal usage of AWS that is not supplying a service to customers, then best practice is to always turn off services that aren't required, such as compute and database services. AWS is an on-demand service, so you only have to pay for services that you have requested.
Also, create Billing Alarms to alert you when you cross a certain cost threshold. These can be further broken down into budgets that notify you as you approach certain spending thresholds.
Bottom line: Rather than focusing on systems that automatically react to high expenditure and delete things, you should only run services that you currently want. Set alarms/budgets to be aware of costs that exceed desired thresholds.
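For completeness, here's a sketch of creating such a budget with an email notification via boto3; the budget amount and email address are placeholders:

```python
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]

boto3.client("budgets").create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "25.0", "Unit": "USD"},  # placeholder limit
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,          # notify at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            # Placeholder address: substitute your own email.
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}],
        }
    ],
)
```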

Does the AWS cloud provide an option to cap the billing amount?

We had a bill-shock scare in our corporate account when someone got access to the secret keys and started a lot of m3.large spot instances (50+) on the AWS account.
The servers ran overnight before it was found and the bill went over $7000 for the day.
We have set up several security practices on the account after the incident, including:
key rotation
password minimum length
password expiry
Billing alerts
CloudWatch
Git pre-commit hooks to look for AWS keys
I have yet to find a way to cap the bill amount at a desired top threshold.
Does AWS provide a method of setting a cap on the bill (daily/monthly)? Are there any best practices on this front that can be added to the measures pointed out above to prevent unauthorized use?
Amazon does not have a mechanism to "take action" in cases where bills skyrocket. You can do what you've already done:
Set up billing alerts to monitor for a skyrocketing bill
Set up good security practices to ensure that people cannot mess with your AWS account
But also, you can:
Set up internal company policies so that employees don't accidentally cause unnecessary charges
Ensure you're using IAM roles and policies appropriately so that no one can do the wrong thing (a sketch of one restrictive policy follows this list)
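As an illustration of that second point, here's a hypothetical policy that denies launching anything but small instance types; the group name, policy name, and allowed types are placeholders:

```python
import json
import boto3

# Deny ec2:RunInstances for any instance type outside the allowed list.
# A Deny overrides any Allow the group might otherwise have.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ["t2.micro", "t2.small"]}
            },
        }
    ],
}

boto3.client("iam").put_group_policy(
    GroupName="developers",               # placeholder group name
    PolicyName="deny-large-instances",
    PolicyDocument=json.dumps(policy),
)
```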
There's a good reason why AWS won't do anything active: what exactly would you expect them to do? Doing anything that isn't in line with your business practices could seriously damage your company.
For example, say you have an auto-scaling group managing a small fleet of EC2 instances. One day, your company gets some unexpected good press, your website activity goes through the roof, new EC2 instances launch to meet the demand, and you blast past your billing alert. If AWS were to terminate or stop EC2 instances to keep your bill down, your customers wouldn't be able to access your website. This could damage your company's reputation, or worse.
If you want to take action, you can set up a trigger on the billing alert and handle it yourself according to your business needs. That's how AWS is built: it gives you the tools; you need to use those tools in a way that best suits your business.
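A minimal sketch of such a handler, assuming the billing alarm publishes to an SNS topic that invokes this Lambda function (stopping every running instance in the region is a drastic response; adapt it to your own needs):

```python
import boto3

def lambda_handler(event, context):
    """Invoked via the SNS topic attached to a CloudWatch billing alarm.

    Stops every running EC2 instance in the current region as an
    emergency brake; adjust the response to your business requirements.
    """
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```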
You can definitely set up Billing Alerts to receive a notification when this kind of thing happens:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
Also take a look at:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/checklistforunwantedcharges.html
Although AWS does not support a cap on billing, it does support caps on services, including a cap on the number of EC2 instances - see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html. By default, new accounts are capped at 20 EC2 instances, but this limit can be changed.
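You can check your account's current value of that limit programmatically; a quick boto3 sketch:

```python
import boto3

ec2 = boto3.client("ec2")

# "max-instances" is the account attribute holding the On-Demand
# instance limit for the account in this region.
attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
limit = attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"]
print(f"Current EC2 instance limit: {limit}")   # 20 on a default new account
```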

Azure instance limit and size limit

I have just set up an Azure cloud trial account for my application, which solves a complex problem.
The solution is working, but it is too slow.
The limit is 20 instances. Why? How can I get more?
Also, the instances are "small"; how can I make them "big"?
If Azure is not scalable, what cloud is?
During "Free" trial period, you get limited resources so that you can evaluate whether Azure is right for your business needs. Also the limit of 20 instances is by default so that you don't accidently overrun the cost (there are a lot of users including myself who have been affected by this where we ran the stuff without fully understanding the cost implications).
You could contact support to get the limit increased but I doubt that they will do it for a trial account. My guess is that you would need to purchase a subscription first to get the quota increased.

Doubts about Windows Azure Platform Introductory Special

I'm considering joining the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't want to develop any fancy large-scale application; I want to join just to learn Azure and do my experiments. What should I be afraid of?
Regarding data transfer, it says "Data Transfers (per region)"; what does that mean?
Can I set limits to stop the app if it goes over the plan, in order to avoid getting charged?
Can it be pre-pay instead of billed afterwards?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost-effective as a host for an application that can easily be deployed to a traditional shared hosting provider. Azure's target market is those who want dedicated resources without the need to micro-manage the infrastructure, plus the capability to easily scale up/down based on demand.
That said, here are the answers to the questions you posted:
Data transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc.) within the same data center is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is an option to pay for your subscription using a PO, but the minimum threshold for most of these operations is $500/month. So as a hobbyist, it's unlikely you'll want that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25 hours of compute resources. Each hour a single instance of your application is deployed counts against this, even if the application receives no traffic. Think of it like renting office space: you still pay rent on the office even if no customers come in.
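A quick back-of-the-envelope calculation shows why:

```python
# One always-on instance accrues a compute hour per wall-clock hour,
# whether or not anyone visits the blog.
hours_per_month = 24 * 31      # ~744 instance-hours for a 24x7 deployment
free_hours = 25
print(hours_per_month - free_hours)  # ~719 hours billed at the normal rate
```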
All that said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure Storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else - it's a waste of resources running it on Azure, and you'll find much cheaper options. The recently introduced extra-small instance would be a better choice in this case, but AFAIK it is charged separately as of now; e.g., even if you have an MSDN subscription, those extra-small instance hours do not count towards the free Azure hours that come with the subscription.
There is no pre-pay option that I know of, and it's not possible to stop the app automatically. It will keep running until the deployment is deleted (beware: even if suspended/stopped, the deployment will continue to accrue charges). I believe you will be sent a notification shortly before reaching your free-hours threshold.
Be aware that when launching more than one instance, you are charged for every hour of every instance combined. This can happen, for example, when you have more than one role in your Azure project (1 web role + 1 worker role: a separate instance will be started for each role).
Data transfer means your entire data transfer: blobs/table storage/queues (transfers between your hosted service and storage account inside the same data center are free) plus whatever data is transferred in/out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure, you will specify a region that will host your account/app; hosting in Asia is slightly more expensive than in Europe/the U.S.
Your best bet would be to contact Microsoft with these questions.
