Azure Functions: set a hard limit to stay in the free tier

I'm interested in using Azure Functions for a piece of serverless code, but I would like to ensure that I am always within the free tier, so as to not incur any expenses (I'm okay with potential downtime, not really critical). How do I achieve this?
My function is limited to some domains I control, and possibly a resource used in a GitHub README (like a tracking pixel). How do I combat potential DDoS and massive bill spikes?
I've seen other questions on how to manage fanout, scale etc, but none on setting hard limits. I'm still a student, so I'd rather stay exclusively in the free tier.
Note, by 'Free tier', I mean the 'Always Free' offering.

You cannot (as far as I know) set a hard limit.
What you can do is reduce your function's ability to scale so that it processes just a single request at a time; then, depending on how long each request takes, you should stay within the free tier.
https://nullable.online/2018/11/20/how-to-throttle-a-azure-function-hosted-on-a-consumption-plan/
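As a sketch of what that looks like in practice (not from the linked post; this assumes the consumption plan and the v2+ host.json schema): set the app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1 so the platform never scales you beyond a single instance, and cap per-instance concurrency in host.json, e.g.:

{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 1,
      "maxOutstandingRequests": 20
    }
  }
}

With one instance handling one request at a time, your worst case is bounded by execution time rather than by how hard someone hammers the endpoint; excess requests queue up (and get rejected once maxOutstandingRequests is exceeded) instead of fanning out onto more instances.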

Related

How do I determine the number of virtual users and pacing if the client cannot give me any real data about the website?

I've come across many clients who aren't really able to provide real production data about a website's peak usage. I often do not get peak pageviews per hour, etc.
In these circumstances, besides just guessing or going with what "feels right" (i.e. making it all up), how exactly does one come up with a realistic workload model with an appropriate # of virtual users and a good pacing value?
I use LoadRunner for my performance/load testing.
Ask for the logs for a month.
Find the stats for session duration, then count the number of distinct IPs in blocks of that session duration; the busiest block is your high-volume hour.
Once you have the high volume hour, count the number of page instances. Business processes will typically have a termination page which is distinct and allows you to understand how many times a particular action takes place, such as request new password, update profile, business process 1, etc...
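If the logs end up in a database (or you can load them into one), the kind of query involved looks roughly like this; the table and column names are illustrative, not from any particular log format:

-- Distinct visitors and page instances per hour; the top row is your high-volume hour
SELECT CONVERT(varchar(13), request_time, 120) AS hour_bucket,   -- yyyy-mm-dd hh
       COUNT(DISTINCT client_ip)               AS distinct_visitors,
       COUNT(*)                                AS page_instances
FROM   web_log
GROUP  BY CONVERT(varchar(13), request_time, 120)
ORDER  BY distinct_visitors DESC;

-- Within that hour, count hits on the distinct termination page of each business process
DECLARE @peak_hour_start datetime = '2014-06-02 14:00',  -- illustrative: the hour found above
        @peak_hour_end   datetime = '2014-06-02 15:00';
SELECT url, COUNT(*) AS completions
FROM   web_log
WHERE  request_time >= @peak_hour_start AND request_time < @peak_hour_end
GROUP  BY url
ORDER  BY completions DESC;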
With this you will have a measurement of users and actions. You will want your stakeholder to take ownership of this data. As quality assurance, we should not own both the requirement and the test against it; we should own one, but not both. If your client will not own the requirement, cascading it down to the rest of the organization, assume you will be left out in the cold with a result they do not like, i.e., defects that need to be addressed before deployment to production.
Now comes your largest challenge, which is something that needs to be fixed as a process issue by your client: you are about to test using requirements that no other part of the organization (architecture, development, platform engineering) had when they built the solution. Even if your requirements are a perfect recovery, plus some amount for growth, any defects you find will be challenged aggressively.
Your test will not match any assumptions or requirements used by any other portion of the organization.
And, in a sense, these other orgs will be correct in aggressively challenging your results. It really isn't fair to hold their designed solution to a set of requirements which were not in place when they made decisions which impacted scalability and response times for the system. You would be wise to call this out with your clients before the first execution of any performance test.
You can buy yourself some time. If the client does have a demand for a particular response time, such as an adoption of the Google RAIL model, then you can implement a gate before accepting any code for multi-user performance testing: the code SHALL BE compliant for a single user. It is not going to get any faster for two or more users. Implementing this hard gate will solve about 80% of your performance issues, for the changes required to bring code into compliance for a single user most often have benefits on the multi-user front.
You can buy yourself some time in a second way as well. Take a look at their current site using tools such as Google Lighthouse and GTmetrix. Most of us are creatures of habit; that includes architects, developers, and ops personnel. We design, build, and deploy to patterns we know and are comfortable with, usually the same ones over and over again until we are forced to make a change. It is highly likely that the performance antipatterns pulled from Lighthouse and GTmetrix will be carried forward into a future release unless they are called out for mitigation. Begin citing defects directly off of these tools before you even run a performance test. You will need management support, but you might consider not even accepting a build for multi-user performance testing until GTmetrix scores at least a B across the board and Lighthouse a score of 90 or better.
This should leave edge cases when you do get to multi-user performance testing, such as too early allocation of a resource, holding onto resources too long, too large of a resource allocation, hitting something too often, lock contention on a shared resource. An architectural review might pick up on these, where someone might say, "we are pre-allocating this because.....," or "Marketing says we need to hold the cart for 30 minutes before de-allocation," or "...." Well, you get the idea.
Don't forget to have the database profiler running while functional testing is going on. You are likely to pick up a few missing indexes or high cost queries here which should be addressed before multi-user performance testing as well.
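On SQL Server, the profiler trace can be supplemented by the missing-index DMVs, which accumulate candidates while the functional tests drive the application; a commonly used sketch (the impact formula is just a rough ranking heuristic):

-- Candidate missing indexes observed since the last restart
SELECT TOP (20)
       d.statement                                              AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS rough_impact
FROM   sys.dm_db_missing_index_details d
JOIN   sys.dm_db_missing_index_groups g      ON g.index_handle = d.index_handle
JOIN   sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER  BY rough_impact DESC;

A similar query over sys.dm_exec_query_stats (joined to sys.dm_exec_sql_text) ranks the highest-cost queries.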
You are probably wondering why I am pointing out all of these things before your performance test takes place. Darnit, you were hired to engage in a performance test! The test you are about to conduct is very high risk politically. Even if it finds something ugly, because the other parts of the organization did not benefit from the requirements, the result is likely to be rejected until the issue shows up in production. By shifting the focus to objective measures even before you need to run two users in anger together, there are many avenues to finding and fixing performance issues which are far less politically volatile. Food for thought.

Azure Pay-as-you-go typical pricing

I am required to learn Azure Databricks as well as other Azure services that require something more than just the Free Trial. I don’t really have any problem doing this.
However, my question is how much can I expect to be charged monthly, weekly, etc. when strictly performing learning-related tasks?
I just want to become familiar with the services and just Azure in general, but I want to know what other people’s experiences have been. I don’t want to set up clusters on a Databricks project just to learn and figure it out and end up costing myself 50 bucks for something I’m really just testing.
Any help would be appreciated. Thanks!
You can get very far without actually paying anything, especially with the free trial. However, you can also accrue extremely high costs very quickly in pay-as-you-go. There are too many Azure Services to get any more specific within the scope of this site.
Three tips to get you started:
Use the Azure Pricing calculator for the services you want to learn to get a feeling for the costs and how they develop.
Set a budget on your subscription to avoid accidentally spending too much.
Delete your services as soon as you are done with them, even if you need to recreate them the next day. You often pay by the hour.

Azure Search Multi-Tenant Strategy, Costs and Recommendations

I'm designing an architecture for leveraging Azure Search for multiple tenants. Since each tenant will have a slightly different schema, my solution will require one index per tenant. This is easy enough to set up, and I'm really liking what Microsoft has put together. However, now that I am starting to think about on-boarding new tenants, monthly costs and scaling up the service, I am starting to hit a few walls and am wondering what my "best" option is.
Has anyone encountered this situation that can shed some light onto best practices? Here are the options as I see them now:
Option 1:
Spin up a new BASIC plan for every 5 tenants, at a cost of $38/month ($7.60 per tenant per month).
Pros: Cheap to start.
Cons: Tenants are crippled by the limited performance and storage capabilities, I'll have to manage X number of services and ClientQueryKeys once I get past 5 indexes/tenants.
Option 2:
Spin up a new STANDARD S1 plan for every 50 tenants, at a cost of $250/month ($5 per tenant per month).
Pros: Better performance, fewer services to manage as tenant counts increase.
Cons: Much higher cost to start; I'll still need to manage the tenant-to-service relationship, plus X number of services and ClientQueryKeys, once the system has more than 50 indexes/tenants.
Option 3:
Spin up a single STANDARD S2 plan that can be used for ALL tenants (assuming no cap on index count)
Pros: Better performance, no need to manage multiple services/client keys as tenant counts increase
Cons: Much higher costs to start, very little documentation on costs and limitations.
In all scenarios (aside from option 3, I'm assuming?) I would have to manage client keys across multiple services. So obviously having only one service with an infinite index count is ideal. However, I am a startup (yes, I am using BizSpark already) and the costs for search are very daunting when I may only have 1-5 tenants to start.
I've read that there is no way to easily migrate data between plans (without doing it manually or writing a script), so my first choice is likely to be my last. I would also prefer to only have to manage one service with one plan for all my tenants. Therefore I am leaning toward option 3.
If option 3 is the best option:
Can I start on BASIC and scale up to S1 then S2 as needed, or is this not possible?
If BASIC cannot scale to S1 is it at least possible to scale from STANDARD S1 to S2 once I go past 50 tenants or will I need to manually manage this or start at S2?
What are my startup costs and/or costs per index/tenant on Standard S2?
Is my index limit infinite on S2?
If not, what is the index cap?
Are there any other options or caveats that I should consider?
S2 services work much better in multi-tenant scenarios. Not only can they fit more indexes (up to 200), but they also have more overall capacity, so assuming an exponential distribution of index sizes and loads, you get a better typical experience for your customers.
You're right that the cost of entry is higher.
Regarding the cons of S2, we're soon going to publish proper documentation and other supporting materials for it. In the meantime, feel free to contact me directly (Pablo DOT Castro AT the usual Microsoft domain) for more details.
If you think you'll have lots of indexes in the future (many hundreds), we're also working on options for better multi-tenant support. We're not ready to announce the details yet, but I'm happy to discuss if you get in touch with us.
Answering your specific questions:
1. Can I start on BASIC and scale up to S1 then S2 as needed, or is this not possible?
We don't currently support this. You'd have to create a new search service and migrate the indexes.
2. If BASIC cannot scale to S1 is it at least possible to scale from STANDARD S1 to S2 once I go past 50 tenants or will I need to manually manage this or start at S2?
No, it's not. We want to do this, just have not gotten to it yet.
3. What are my startup costs and/or costs per index/tenant on Standard S2?
Please get in touch with us and we can discuss pricing.
4. Is my index limit infinite on S2?
5. If not, what is the index cap?
No, S2 services are limited to 200 indexes/service.
6. Are there any other options or caveats that I should consider?
You've done a good analysis, I think you're on the right track. One thing you may want to consider is fairness. All indexes in the same service share the capacity you've provisioned for the service. If there's risk of unfair loads you might want to consider per-tenant throttling.

Performance of Web Database seems quicker than new Azure SQL DB service tiers?

I am using MVC3, EF5, LINQ, .NET4.5, SQL Database.
Microsoft has just brought out the new service levels for SQL Databases ie Basic, Standard and Premium.
Originally I was using the "Web" SQL database since my DB was small, i.e. about 30 MB. However, on my test web site instance I have been using a Basic web site and a "Basic" SQL Database to save money.
I have a "slower" running query which suddenly took 9secs when my Live DB was restored as a "Basic" new style DB on the test instance. It tool about 2.5 secs on live. When I scaled up this test DB instance to "Standard" SO, 20 DTUs, it took 3.9 secs. When I then scaled this DB back to the "retired" "Web" format, it then took 1.9 secs which really surprised me. It is as if one needs to scale the DB to S1 to get comparable performance to the old "Web" style DB, but I suspect this will then cost more than the old "Web" format DB.
I appreciate any comments on the above, especially if others have found that the new DB tiers can be slower.
At the end of the day, what setup in the new DB style is the old "Web" style equivalent to?
Thanks.
EDIT (THIS IS REALLY REALLY WORRYING)
I have discovered a very useful document on this, and my worst fears are confirmed:
see Web/Business comparison with new SQL Database service tiers. The findings are very worrying, as it seems that Web database performance can only be matched by the "Premium P1" edition, and we would not be able to afford that. So for the time being we will continue to use the "Web" edition.
EDIT: I seem to have touched a raw nerve... there are many worried folks about this.
see: Forum chat with worried users
FEEDBACK FROM .NET USER GROUP
I have also been speaking with a number of my Azure-using .NET peers at a recent user group meeting, and they were also very worried, to the extent that they believed developers would just leave Azure. I think one of the key mistakes here, by Microsoft, is to set the performance of Basic well below that of Web (most of the time), and even S1 and S2 below Web. It is only when you get onto P1 and P2 that you reach parity, and we dare not use those in test due to the impact on charges. In our experience Web has performed at this high level 90% of the time. I am guessing the other 10% is there, since you say it is, but none of our clients have complained about it. However, to retain our current level of performance we would need to upgrade to S2 or P1, which would have an extraordinary impact on our monthly charges. Jim Rand's feedback is appreciated, and backs up our concerns.
I am the author of the blog post mentioned above. A more up to date version of that post is available:
http://cbailiss.wordpress.com/2014/09/16/performance-in-new-azure-sql-database-performance-tiers/
The tests I conducted were primarily around the physical I/O capabilities of the new service tiers. From those tests I believe that P1 offers roughly the same I/O on average as Web/Business.
So, the specific answer to your question:
At the end of the day, what setup in the new DB style is the old "Web" style equivalent to?
If you were running toward the physical I/O limits of Web/Business (roughly speaking 200MB+ read, 50MB+ write per minute), then I would say a minimum of P1 is needed to offer equivalent I/O performance in the newer service tiers.
If on average your I/O is generally much less than the figures above, then the database may perform OK on one of the Standard Tiers.
My tests didn't quantify/compare CPU or memory differences between Web/Business and the new tiers, but they too scale by service tier in the new world. The sys.resource_stats DMV in the master database might offer some insight for your workload. See the newer blog post above for more details.
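For example, something along these lines; the database name is a placeholder, and the column names are those documented for the view (the preview-era version exposed slightly different columns):

-- Aggregated resource usage for roughly the last 14 days, as a percentage of the tier's limits
SELECT start_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM   sys.resource_stats
WHERE  database_name = 'MyDatabase'   -- placeholder
ORDER  BY start_time DESC;

If these percentages sit well below 100 on the current tier, one of the Standard levels may be sufficient; values pinned near 100 suggest the database is being throttled by the tier's limits.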
For completeness, it is worth mentioning that the newer service tiers do offer some other advantages: likely support for more concurrent connections, new availability features, new backup features, etc.
Hope that helps...
EDIT: Jan 2015: A new Standard S3 performance level is currently in preview as part of the Azure SQL Database v12 version. This looks like it will offer price-performance at a point much closer to Business Edition than has been available until now. In addition, every service tier and performance level looks to be gaining higher performance in v12. See my blog post for details:
https://cbailiss.wordpress.com/2014/12/17/azure-sql-database-v12-performance-tests-show-significant-performance-increase/
Chris
Hit this last Thursday: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. I was converting data from the old system to SQL Azure and chose the new Standard (S2) instead of the 5 GB Web (retired) database.
The SQL:
UPDATE Invoice
SET SalesOrderID = O.SalesOrderID
FROM Invoice
INNER JOIN SalesOrder AS O ON Invoice.InvoiceID = O.InvoiceID
196,043 rows. Re-ran it and it took over 4 minutes. Exported the database and reloaded it into the Web edition; the query took 19 seconds. Total database size is about 750 megabytes.
Bottom line, this is more than "all a little worrying". Unless Microsoft gets the performance of the new Basic/Standard/Premium tiers up to where it is now in the Web edition, they can pretty much kiss Azure goodbye. It is totally unreasonable that you can't run a query on only 196,043 rows unless the data is in the cache. So much for analytics with a relational database.
I'll be advising my client this week of this matter. Undoubtedly, he will be contacting upper management at Microsoft.
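For anyone hitting the same wall: a common way to keep a bulk UPDATE under the log-rate throttling of the lower tiers is to commit it in smaller batches. A rough sketch, not Jim's actual workaround (the batch size is arbitrary, and the WHERE clause assumes the column starts out unset):

-- Batch the update so each transaction's log writes stay small
DECLARE @batch int = 10000;
WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) I
    SET    I.SalesOrderID = O.SalesOrderID
    FROM   Invoice AS I
    INNER JOIN SalesOrder AS O ON I.InvoiceID = O.InvoiceID
    WHERE  I.SalesOrderID IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END

This trades a single long transaction for many short ones, which tends to behave better against per-tier log-write limits, though the total runtime on a small tier can still be long.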
Jim, I'd be happy to help. We know that changing business models is a hard thing to do. In the Web/Business case, you pay based on the size of the DB and you get whatever performance we have at the time. Sometimes this is great, other times it is OK, and sometimes performance is very poor. Customers have given us feedback that this unpredictable performance is very difficult to deal with.
Using this feedback as a key input, the business model for Basic/Standard/Premium is $/perf. Understanding what resources you're consuming is a great first step before moving to B/S/P. We have several pieces of new guidance that should help you do this:
http://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-new-service-tiers/
Your mileage may vary here. Many customers see a decrease because of this business model change. Others see no impact, and some will see an increase if their DBs are very small and consume a lot of resources. I and the team would be happy to help customers move into the new business model. To have great conversations we'll need some customer specifics that aren't best shared in a public forum. guyhay#microsoft is my email if you'd like to have that conversation.

Azure - 2x extra small or a single small instance

Starting out with Windows Azure: how do I know which is better for handling web traffic and a background processor? Would 2x extra small instances be better, or a single small instance?
If I were to use a small instance, I would make the background processor in the web-role, what are the cons of doing it this way?
In future this would also apply when choosing between multiple small instances or fewer big instances.
Is there some sort of tool to help decide which way I'll get the best bang for my buck?
I know that for Microsoft's SLA to be met, two instances need to be running.
It is better to have 2 extra-small instances rather than 1 small instance as far as service availability is concerned. That being said, there are multiple gotchas:
You need to put your 2 VMs into 2 distinct upgrade domains (done in role definition file).
Your app needs to support multi-VM, aka not rely on non-shared session state.
Better availability does not mean better performance, in particular, local cache is basically halved.
The size of the cache and the overall difficulty of spreading an app over many small VMs typically explain why most devs stick to a single but larger VM until they reach a point that really calls for scaling out (which is likely to never happen for most apps anyway).
For SLA purposes, you need at least two instances, as Joannes alluded to when talking about service availability. Other things to consider:
It's easy to handle background tasks in a web role - you get the same OnStart() and Run() as a worker role.
When scaling, remember that, if you combine functionality into a single role, it all scales together as a single unit. So, if your background processing is being starved out due to excessive web traffic, you'll want to consider splitting them into separate roles.
Extra Small instances have shared CPU. More importantly, they're going to have less network throughput. A Small instance has approx. 100Mbps. An Extra Small instance is a fraction of that (I'll need to look up the number). And... memory is 768MB vs. 1.75GB for a Small.
If you have an MSDN subscription, the included Windows Azure subscription comes with 1,500 CPU-hours monthly. But... that excludes Extra Small instances. You'll pay for those. Be sure to use Small when using your MSDN-supplied account. Edit: MSDN allowances are now friendly to extra-small instances.
I guess we can't really know without usage figures and more info and even then I think only time will tell but...
Why not sign up for one of the free trial accounts that gives you an extra small instance? See if your app copes well enough, then when it goes live get a second one for load balancing, SLA, etc.
If it doesn't cope then get a bigger one - but I'd still be inclined to go for a second one - unless you don't care if it becomes unavailable at random times. MS will apply security patches and will reboot your instances without asking, so the second instance will prevent your site from being unavailable, as they'll update them separately.
It doesn't look particularly challenging to upgrade to bigger instances anyway, should it become a sell-out.
If you've got an MSDN subscription (premium level, I think) then you get enough free hours to run 2 small instances.
From what little I've seen I don't think there's any real 'con' in adding the background processor. After all, you're paying for a whole machine, so you might as well make it work for its money. That was kind of the impression they gave at the recent Tech Days thingy I saw.
Try it and see...
