Azure SQL Database "DTU percentage" metric

With the new Azure SQL Database tier structure, it seems important to monitor your database's DTU usage to know whether to upgrade or downgrade to another tier.
Azure SQL Database Service Tiers and Performance Levels only talks about monitoring with CPU, data, and log percentage usage.
But when I add new metrics, I also have a "DTU percentage" option:
I can't find anything about this online. Is it essentially a summary of the other DTU-related metrics?

A DTU is a unit of measure for the performance of a service tier and is a summary of several database characteristics. Each service tier has a certain number of DTUs assigned to it as an easy way to compare the performance level of one tier versus another.
Database Throughput Unit (DTU): DTUs provide a way to
describe the relative capacity of a performance level of Basic,
Standard, and Premium databases. DTUs are based on a blended measure
of CPU, memory, reads, and writes. As DTUs increase, the power offered
by the performance level increases. For example, a performance level
with 5 DTUs has five times more power than a performance level with 1
DTU. A maximum DTU quota applies to each server.
The DTU quota applies to the server, not to individual databases, and each server has a maximum of 1,600 DTUs. The DTU% is the percentage of units your particular database is using, and it seems this number can go over 100% of the DTU rating of the service tier (I assume up to the limit of the server). This percentage is designed to help you choose the appropriate service tier.
From down toward the bottom of this announcement:
For example, if your DTU consumption shows a value of 80%, it
indicates it is consuming DTU at the rate of 80% of the limit an S2
database would have. If you see values greater than 100% in this view
it means that you need a performance tier larger than S2.
As an example, let’s say you see a percentage value of 300%. This
tells you that you are using three times more resources than would be
available in an S2. To determine a reasonable starting size, compare
the DTUs available in an S2 (50 DTUs) with the next higher sizes (P1 =
100 DTUs, or 200% of S2, P2 = 200 DTUs or 400% of S2). Because you
are at 300% of S2 you would want to start with a P2 and re-test.

Still not cool enough to comment, but regarding @vladislav's comment: the original article is fairly old. Here is an updated document on DTUs, which should help answer the OP's question.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu

From this document, this DTU percent is determined by this query:
SELECT end_time,
       (SELECT Max(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.dm_db_resource_stats;
So the DTU percentage looks like the maximum of avg_cpu_percent, avg_data_io_percent, and avg_log_write_percent at each point in time.
Reference:
https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database
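If you want to see which of the three measures is driving that number for your own database, a minimal sketch against the same DMV (it keeps roughly one hour of history at 15-second granularity) is:
-- Peak usage of each DTU component (as % of the tier limit) over the retained history
SELECT Max(avg_cpu_percent)       AS max_cpu_percent,
       Max(avg_data_io_percent)   AS max_data_io_percent,
       Max(avg_log_write_percent) AS max_log_write_percent
FROM sys.dm_db_resource_stats;
If any of these sits near 100 for sustained periods, the database is pressing against its tier limit and a higher performance level is worth testing.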

DTU is nothing but a blend of CPU, memory, and IO. Why do we need a blend when these three are already clear on their own? Because we want a single unit for power. But it is still confusing in many ways.
E.g.: if I simply increase memory, will it increase power (DTU)? If yes, how can DTU be a blend? The answer is yes. In this memory-increase case, as per the query in the answer given by jyong, the DTU will be equivalent to memory (since we increased it). MS even has a pricing model based on this DTU, and it has raised many questions.
Because of these confusions and questions, MS wanted to bring in another option.
We already had familiar specs on-premises, so why can't we use them? As a result, the 'vCore pricing model' was born. In this model we have visibility into RAM and CPU; in the DTU model we do not.
The counter-argument from the DTU side would be that DTU measures are calibrated using a benchmark that simulates real-world database workloads, and that we are not on-premises anymore ;). Yes, it is designed with cloud computing in mind (but it is also used for OLTP workloads).
But that is not all. Once we enter pricing territory, the equation changes. The question now is about money and the bundle (which features are included). Here DTU has some advantages (the way I see it), but enterprises with many existing licenses would disagree.
- DTU has one price (compute + storage + backup). Simpler, and you can start at a lower price point.
- vCore has separate pricing (compute, storage). Software Assurance is available here: enterprises can port their on-premises licenses (so they get big machines for less than under the DTU model), and they can commit for multiple years to get additional discounts.
We can switch between the two models when needed, so if you are not sure, start with DTU (Basic/Standard/Premium).
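As a side note, a tier or model change can also be issued from T-SQL; a minimal sketch, assuming a database named MyDb (the statement returns immediately and the scale operation completes asynchronously):
-- Scale within the DTU model:
ALTER DATABASE MyDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');

-- Move the same database to the vCore model (General Purpose, Gen5, 2 vCores):
ALTER DATABASE MyDb MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');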
How can we know which pricing model to use? Go to the Configure menu as shown below (you can switch between the two models there).
Even though vCore is the bigger 'machine' and meant for bigger things, the cost can sometimes be cheaper for enterprise organizations. Here is a proof: the DTU option costs $147, but vCore costs $111. That is because you can commit for 3 years (while still paying monthly) and because of the license re-use option (enterprises will have on-premises licenses).
This goes a bit beyond the direct question, but to make this complete: how to choose between the different options within DTU, let alone between DTU and vCore, is answered in this beautiful blog, and its flowchart explains it all.

Whether your services are free (always free or free for 12 months) or Pay-As-You-Go, it is important to monitor usage so that you know the cost incurred up front and when to upgrade your service tier.
To check your free service usage and its limits, search for "Subscriptions" in the portal and click on it; you will see the details of each service that you have used.
With a free Azure account from Microsoft, you can see the cost incurred for each one.
Visit Check usage of free services included with your Azure free account
Hope this helps someone!

Related

How can I create the free 250GB SQL Server Database promised with Free Azure Subscription

I created my Free Azure subscription and have been hosting a couple of Apps out there since around April of this year (2020).
All of my resources (Subscription, Resource Group, App Service, and Apps) are on the F1 tier rather than S1, to ensure they run free and my cost forecast for the month always says $0.00. This was confusing in the beginning, and I had to reach out to Microsoft for help in setting up my hierarchy of resources.
In my main web app I now need to deploy an SQL Database. I've been developing using LocalDB in my ASP.Net Core 3.1 app.
Now the Free Azure description here:
https://azure.microsoft.com/en-us/free/
gives these specs for SQL Server with your free subscription for the first year:
250 GB. Now, I'm thinking 250 GB of storage, not memory. But when you start selecting your DB configuration, they talk about memory, so now I'm confused by that. Do you get 250 GB of storage or of memory with the free SQL Server on a free Azure subscription?
Also, the free service really just says free SQL Database, not free SQL Server, so I am confused here as well. Do you just get one database? I know you have to set up a SQL Server in order to set up the database.
Next, I found a quick tutorial on creating a SQL Server database here:
https://learn.microsoft.com/en-us/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal
I want to go through the three versions of this tutorial:
Using:
Portal
Azure CLI
PowerShell
so I can get a feel for the environment and find the way that best suits me.
I am going through the Portal tutorial first.
On step 9, the default is General Purpose, Serverless.
This says "up to 40 vCores, up to 120 GB memory".
But you are supposed to have 250 GB with the free subscription.
So this is not it.
I click provisioned and now it says "up to 80 vCores, up to 408 GB memory".
Well 408GB is too much; over 250GB.
So I click, "Looking for Basic, Standard, or Premium?"
And from there click Standard because it is the 250GB configuration I think I am looking for to get the free SQL Database with the free Azure Subscription. (Again do I just get one database?)
But now, instead of talking vCores, the cost is per DTU. What the heck is a DTU? I tried to read up on it; it seems like a unit of performance rather than a transaction count. So Standard is estimated at 10 DTUs a month, I believe. Hopefully that does not mean 10 transactions per month but rather, again, a measure of performance.
Estimated Cost $15 dollars a month.
That "Standard S0" above scares me I think that would start charging me.
It should say F1 shouldn't it.
I've come across some similar questions online. A lot of people seem to have the same confusion and questions I have. The main question is how to get an F1-level database for my app, and whether one database is all I get. That would suck; it wouldn't really be a free subscription then, since most ASP.NET/Core web apps are dynamic and need a DB, and Azure is Microsoft, right?
Or should I just go ahead and review and create, and S0 is just how they do it for the free Azure subscription, i.e. you wouldn't get charged for S0? But I don't think so.
Trying to get a concrete answer somewhere so I know how to proceed.
UPDATE 10/20/20
I have just gone a different way and am creating a SQL Server instead of a SQL Database.
This appears to be free and cost estimate per month says:
"No extra charges"
Ok everybody.
Let's consider this a tentative answer until it all proves out to be true.
I opened up a support ticket with Azure/Microsoft.
Here is part(s) of the response I got:
First, I would like to thank you very much for providing me with such a detailed service request. After my investigation, I was able to determine that the estimated price does not show the discount with the free services. Therefore, using the S0 database in Azure SQL Database at the Basic service tier will be included in the free services. The free service limitation states that you can use up to 250 GB. So, anything deployed below 250 GB is OK to use if it is correctly configured all around. As long as you stay within the limits, you will not be charged.
My reply here:
So thank you for the information on S0 being considered free as part of the F1 subscription.
(Although I really wish they would note next to S0 on the pricing sheet, in parentheses or something, that it can be used as part of F1.)
Does it matter if you use vCores or DTUs?
And if you use DTUs does it matter if you go above the max?
Or as you said I guess as long as I stay under 250GB I'm ok.
Her response continued:
Lastly, I would like to leave you with a link on how to avoid charges on your free service account: https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/avoid-charges-free-account.
I hope this information was beneficial to you, Sam. Please let me know if you have any additional questions.
Everybody, notice the link for tracking your free services; it enables us to make sure we do not use a service outside of the free services or exceed the amount we get with a free service. I think this URL is a gold mine of a find.
And one more question I sent her:
Can I create a 250 GB database for each app I deploy out there?
Or do I only get one and have to make all my apps share it?
At least we know that Basic S0 is free now.
I will update this answer with better information as I work through the details.
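For anyone wanting to double-check which tier a database actually landed on, here is a quick sketch you can run inside the database itself:
-- Edition (Basic/Standard/Premium/...) and performance level (S0, P1, ...)
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;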
This is the best answer.
I have worked out a procedure that works for me.
And I understand a lot of things better now.
It seems like the whole answer is not in one place, since Azure is so vast and everyone's scenario is different.
So I wrote up an article to document what worked for me.
I hope this helps someone out there:
https://ctas.azurewebsites.net/TechCorner/AspNetCore3/HowTos/DeployWebAppWithLocalDbToAzure

Azure functions set hard limit to free tier

I'm interested in using Azure Functions for a piece of serverless code, but I would like to ensure that I am always within the free tier, so as to not incur any expenses (I'm okay with potential downtime, not really critical). How do I achieve this?
My function is limited to some domains I control, and possibly a resource used in GitHub readme (like a tracking pixel). How do I combat potential DDOS, and massive bill spikes?
I've seen other questions on how to manage fanout, scale etc, but none on setting hard limits. I'm still a student, so I'd rather stay exclusively in the free tier.
Note, by 'Free tier', I mean the 'Always Free' offering.
You cannot (as far as I know) set a hard limit.
What you can do is reduce your function's ability to scale, so that it processes just a single request at a time; then, depending on how long each request takes, you should stay within the free tier.
https://nullable.online/2018/11/20/how-to-throttle-a-azure-function-hosted-on-a-consumption-plan/
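In practice (a sketch based on that post; the settings are real, but treat the exact values as assumptions to tune), you cap per-instance concurrency in host.json and stop the Consumption plan from fanning out by setting the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting to 1. For an HTTP-triggered function on the v2+ runtime, host.json would look like:
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 1
    }
  }
}
Note this throttles throughput rather than billing, so a sustained flood of slow requests could still eat into the monthly free grant.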

Azure App Service scale out more than 100 instances?

I read here that you can scale out to a maximum of 30 or 100 instances depending on the pricing tier.
https://learn.microsoft.com/en-us/azure/app-service/manage-scale-up
But what if you need more?
100 doesn't sound like much if your website becomes as big as Wikipedia and needs to scale and load-balance throughout the world.
Is it really true that it's only 100, or am I missing a point or something?
As George mentioned in the comment, the maximum number of instances for the Isolated plan is up to 100.
However, to answer your original question: there are various ways to architect your application based on its needs. For large applications you can go for containerization, Batch, or a microservice architecture. Here is a diagram that should help you understand more:

Azure Search Multi-Tenant Strategy, Costs and Recommendations

I'm designing an architecture that leverages Azure Search for multiple tenants. Since each tenant will have a slightly different schema, my solution will require one index per tenant. This is easy enough to set up, and I'm really liking what Microsoft has put together. However, now that I am starting to think about onboarding new tenants, monthly costs, and scaling up the service, I am hitting a few walls and wondering what my "best" option is.
Has anyone encountered this situation that can shed some light onto best practices? Here are the options as I see them now:
Option 1:
Spin up a new BASIC plan for every 5 tenants at a cost of $38/m for every 5 tenants ($7.60 per tenant per month).
Pros: Cheap to start.
Cons: Tenants are crippled by the limited performance and storage capabilities, and I'll have to manage X number of services and ClientQueryKeys once I get past 5 indexes/tenants.
Option 2:
Spin up a new STANDARD S1 plan for every 50 tenants at a cost of $250/m for every 50 tenants ($5 per tenant per month).
Pros: Better performance, fewer services to manage as tenant counts increase.
Cons: Much higher cost to start; I still need to manage the tenant-to-service relation once the system has more than 50 tenants, and I'll have to manage X number of services and ClientQueryKeys once I get past 50 indexes/tenants.
Option 3:
Spin up a single STANDARD S2 plan that can be used for ALL tenants (assuming no cap on index count)
Pros: Better performance, no need to manage multiple services/client keys as tenant counts increase
Cons: Much higher costs to start, very little documentation on costs and limitations.
In all scenarios (aside from option 3, I'm assuming?) I would have to manage client keys across multiple services, so obviously having only one service with an unlimited index count is ideal. However, I am a startup (yes, I am already using BizSpark) and the costs for Search are very daunting when I may only have 1-5 tenants to start.
I've read that there is no easy way to migrate data between plans (without doing it manually or writing a script), so my first choice is likely to be my last. I would also prefer to manage only one service with one plan for all my tenants. Therefore I am leaning toward option 3.
If option 3 is the best option:
Can I start on BASIC and scale up to S1 then S2 as needed, or is this not possible?
If BASIC cannot scale to S1 is it at least possible to scale from STANDARD S1 to S2 once I go past 50 tenants or will I need to manually manage this or start at S2?
What are my startup costs and/or costs per index/tenant on Standard S2?
Is my index limit infinite on S2?
If not, what is the index cap?
Are there any other options or caveats that I should consider?
S2 services work much better in multi-tenant scenarios. Not only can they fit more indexes (up to 200), but they also have more overall capacity, so assuming an exponential distribution of index sizes and loads, you get a better typical experience for your customers.
You're right that the cost of entry is higher.
Regarding the cons of S2, we're soon going to publish proper documentation and other supporting materials for it. In the meantime, feel free to contact me directly (Pablo DOT Castro AT the usual Microsoft domain) for more details.
If you think you'll have lots of indexes in the future (many hundreds), we're also working on options for better multi-tenant support. We're not ready to announce the details yet, but I'm happy to discuss if you get in touch with us.
Answering your specific questions:
1. Can I start on BASIC and scale up to S1 then S2 as needed, or is this not possible?
We don't currently support this. You'd have to create a new search service and migrate the indexes.
2. If BASIC cannot scale to S1, is it at least possible to scale from STANDARD S1 to S2 once I go past 50 tenants, or will I need to manage this manually or start at S2?
No, it's not. We want to do this, we just have not gotten to it yet.
3. What are my startup costs and/or costs per index/tenant on Standard S2?
Please get in touch with us and we can discuss pricing.
4. Is my index limit infinite on S2?
5. If not, what is the index cap?
No, S2 services are limited to 200 indexes per service.
6. Are there any other options or caveats that I should consider?
You've done a good analysis; I think you're on the right track. One thing you may want to consider is fairness: all indexes in the same service share the capacity you've provisioned for the service. If there's a risk of unfair loads, you might want to consider per-tenant throttling.

Performance of Web Database seems quicker than new Azure SQL DB service tiers?

I am using MVC3, EF5, LINQ, .NET4.5, SQL Database.
Microsoft has just brought out the new service levels for SQL Databases ie Basic, Standard and Premium.
Originally I was using the "Web" SQL database, since my DB was small, i.e. about 30 MB. However, on my test web site instance I have been using the Basic web site and "Basic" SQL Database setups to save money.
I have a "slower"-running query which suddenly took 9 secs when my live DB was restored as a "Basic" new-style DB on the test instance. It took about 2.5 secs on live. When I scaled this test DB up to "Standard" S0, 20 DTUs, it took 3.9 secs. When I then scaled the DB back to the "retired" "Web" format, it took 1.9 secs, which really surprised me. It is as if one needs to scale the DB to S1 to get performance comparable to the old "Web"-style DB, but I suspect this will then cost more than the old "Web" format DB.
I appreciate any comments on the above, especially if other have found the new DB styles can be slower.
At the end of the day, what setup in the new DB style is the old "Web" style equivalent to?
Thanks.
EDIT (THIS IS REALLY REALLY WORRYING)
I have discovered a very useful document on this, and my worst fears are confirmed;
see Web/Business comparison with new SQL Database service tiers. It is very, very worrying: it seems that Web database performance can only be matched by the "Premium P1" edition, and we would not be able to afford to use that. So for the time being we will continue to use the "Web" edition.
EDIT: I seem to have touched a raw nerve... there are many worried folks out there about this;
see: Forum chat with worried users
FEEDBACK FROM .NET USER GROUP
I have also been speaking with a number of my Azure-using .NET peers at a recent user group meeting, and they were also very worried, to the extent that they believed developers would simply leave Azure. I think one of the key mistakes here, by Microsoft, is to set the performance of Basic well below that of Web (most of the time), and even S1 and S2 below Web. It is only when you get to P1 and P2 that you experience parity, and we dare not use those in test due to the impact on charges. In our experience Web has performed at this high level 90% of the time. I am guessing the other 10% is there, since you say it is, but none of our clients have complained about it. However, to retain our current level of performance we would need to upgrade to S2 or P1, which would have an extraordinary impact on our monthly charges. Jim Rand's feedback is appreciated, and backs up our concerns.
I am the author of the blog post mentioned above. A more up-to-date version of that post is available here:
http://cbailiss.wordpress.com/2014/09/16/performance-in-new-azure-sql-database-performance-tiers/
The tests I conducted were primarily around the physical I/O capabilities of the new service tiers. From those tests I believe that P1 offers roughly the same I/O on average as Web/Business.
So, the specific answer to your question:
At the end of the day, what setup in the
new DB style is the old "Web" style equivalent to?
If you were running toward the physical I/O limits of Web/Business (roughly speaking 200MB+ read, 50MB+ write per minute), then I would say a minimum of P1 is needed to offer equivalent I/O performance in the newer service tiers.
If on average your I/O is generally much less than the figures above, then the database may perform OK on one of the Standard Tiers.
My tests didn't quantify or compare CPU or memory differences between Web/Business and the new tiers, but those also scale by service tier in the new world. The sys.resource_stats DMV in the master database might offer some insight into your workload. See the newer blog post above for more details.
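As a sketch of that suggestion (sys.resource_stats keeps around 14 days of history in 5-minute windows; 'MyDb' is a placeholder):
-- Run in the master database: recent resource usage for one database
SELECT start_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'MyDb'
ORDER BY start_time DESC;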
For completeness, it is worth mentioning that the newer service tiers do offer some other advantages: likely support for more concurrent connections, new availability features, new backup features, etc.
Hope that helps...
EDIT, Jan 2015: A new Standard S3 performance level is currently in preview as part of Azure SQL Database v12. This looks like it will offer price-performance at a point much closer to Business Edition than has been available until now. In addition, every service tier and performance level looks set to gain higher performance in v12. See my blog post for details:
https://cbailiss.wordpress.com/2014/12/17/azure-sql-database-v12-performance-tests-show-significant-performance-increase/
Chris
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. I hit this last Thursday while converting data from an old system to SQL Azure, having chosen the new Standard (S2) instead of the 5-gig Web (retired) database.
The SQL:
UPDATE Invoice
SET SalesOrderID = O.SalesOrderID
FROM Invoice
INNER JOIN SalesOrder AS O ON Invoice.InvoiceID = O.InvoiceID
196,043 rows. I re-ran it and it took over 4 minutes. I exported the database and reloaded it into the Web edition; there the query took 19 seconds. Total database size is about 750 megabytes.
Bottom line: this is more than "all a little worrying". Unless Microsoft gets the performance of the new Basic/Standard/Premium tiers up to where it is now in the Web edition, they can pretty much kiss Azure goodbye. It is totally unreasonable that you can't run a query on only 196,043 rows unless the data is in the cache. So much for analytics with a relational database.
I'll be advising my client this week of this matter. Undoubtedly, he will be contacting upper management at Microsoft.
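For anyone hitting similar timeouts on the log-rate-limited tiers, one common workaround (not what was done here) is to break a large update into batches so each transaction writes less log. A sketch against the Invoice/SalesOrder schema above, assuming unconverted rows have a NULL SalesOrderID:
-- Batch the update to keep each transaction's log volume small on DTU-limited tiers
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) I
    SET    SalesOrderID = O.SalesOrderID
    FROM   Invoice AS I
    INNER JOIN SalesOrder AS O ON I.InvoiceID = O.InvoiceID
    WHERE  I.SalesOrderID IS NULL;  -- assumption: rows not yet converted are NULL

    SET @rows = @@ROWCOUNT;
END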
Jim, I'd be happy to help. We know that changing business models is a hard thing to do. In the Web/Business case, you pay based on the size of the DB and you get whatever performance we have at the time. Sometimes this is great, other times it is OK, and sometimes performance is very poor. Customers have given us feedback that this unpredictable performance is very difficult to deal with.
Using this feedback as a key input, the business model for Basic/Standard/Premium is $/perf. Understanding what resources you're consuming is a great first step before moving to B/S/P. We have several pieces of new guidance that should help you do this:
http://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-new-service-tiers/
Your mileage may vary here. Many customers see a decrease because of this business model change, others see no impact, and some will see an increase if their DBs are very small but consume a lot of resources. The team and I would be happy to help customers move to the new business model. To have great conversations we'll need some customer specifics that aren't best shared in a public forum; guyhay#microsoft is my email if you'd like to have that conversation.
