This is regarding Azure SQL Database pricing. For the elastic pool Standard 50 eDTU performance level, the documentation mentions 50 eDTUs and 50 GB per pool, and up to 100 databases. I want to confirm whether the mentioned resources are per pool or per database.
For example, if I deploy 10 databases, will the 50 GB and 50 eDTUs be shared among the databases, or does each database get its own?
If the resources are for individual databases, does the price, which is currently $110.26 per month, change based on the number of databases, or is it the estimated monthly cost regardless of the database count? Does the database count affect the monthly pricing?
If the resources are for the entire pool and I am going to deploy 100 databases, then a single database cannot be more than 500 MB? Am I right? Doesn't that seem odd?
Thanks.
Question 1:
Does the price, which is currently $110.26 per month, change based on the number of databases, or is it the estimated monthly cost regardless of the database count? Does the database count affect the monthly pricing?
Yes, it will affect the pricing when resources are provisioned for individual databases.
Question 2:
If the resources are for the entire pool and I am going to deploy 100 databases, then a single database cannot be more than 500 MB? Am I right? Doesn't that seem odd?
Resource limits for a single database are generally the same as those for a database in an elastic pool, based on DTUs and the service tier. The only difference is that
when a database in an elastic pool reaches 100% of its own DTU utilization, it can draw eDTUs from the pool. In other words, the elastic pool enforces a maximum number of eDTUs per database.
Here is a reference comparing elastic pool and standalone database implementations.
I created multiple databases inside an elastic pool and one standalone database outside the pool.
Through the per-database settings we can also cap how much of the pool's size each database may use, depending on its DTU utilization, as sketched below.
NOTE: If the databases' DTU utilization is very low, we can decrease the pool's size. This will reduce costs over time.
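As a rough sketch of those per-database settings (the database and pool names below are made up, and the exact values depend on your tier), both the pool assignment and a per-database size cap can be set with T-SQL:

-- Hypothetical names; typically run while connected to the logical server's master database.
-- Move an existing database into the elastic pool so it shares the pool's eDTUs.
ALTER DATABASE SalesDb
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = Standard50Pool));

-- Cap this database's own size so one database cannot consume the whole 50 GB of pool storage.
ALTER DATABASE SalesDb
MODIFY (MAXSIZE = 5 GB);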
Hope it's useful.
Related
I am going through exam questions, and I find some of them quite tricky. A question came to mind: what is the difference between the elastic and scalable expenditure models?
The question arose from the two exam questions below.
Your company is planning to migrate all their virtual machines to an Azure pay-as-you-go subscription. The virtual machines are currently hosted on the Hyper-V hosts in a data center.
You are required to make sure that the intended Azure solution uses the correct expenditure model.
Solution: You should recommend the use of the elastic expenditure model.
Does the solution meet the goal?
A. Yes
B. No
Answer: B (No)
B is the correct answer. The correct expenditure model is "Operational".
Your company is planning to migrate all their virtual machines to an Azure pay-as-you-go subscription. The virtual machines are currently hosted on the Hyper-V hosts in a data center.
You are required to make sure that the intended Azure solution uses the correct expenditure model.
Solution: You should recommend the use of the scalable expenditure model.
Does the solution meet the goal?
A. Yes
B. No
Answer: B (No)
So how do we differentiate scalable, elastic, and operational?
Scalable : environments only care about increasing capacity to accommodate an increasing workload.
Elastic : environments care about being able to meet current demands without under/over provisioning, in an autonomic fashion.
Scalable systems don't necessarily mean they will scale back down - it's only about being able to reach peak loads.
Elastic workloads, however, will recognize dynamic demands and adapt to them, even if that means reducing capacity.
Operating expenditures are ongoing costs of doing business. Consuming cloud services in a pay-as-you-go model could qualify as an operating expenditure.
https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/fiscal-outcomes
This appears to be a "decoy" question that may be phrased as a 4-way multiple choice question in the real exam. The correct answer is 'operational expenditure' since you pay for what you use. The other possibility is 'capital expenditure' (which is wrong since you are not investing in assets). 'scalable' and 'elastic' expenditure are nonsense terms put in to distract you if you are just guessing.
Scalability is simply an increase in size or number; on that view, elasticity is also a form of scaling, but within the same machine.
For example, we have two types of scaling:
VERTICAL SCALING (the elastic model): increase the memory, storage, etc. of an existing VM as the workload increases, and reduce them accordingly.
HORIZONTAL SCALING (the scalable expenditure model): increase capacity by adding or attaching more virtual machines of the same size and configuration.
Note: in the elastic model the number of VMs stays the same while the memory/storage of each VM grows or shrinks, whereas in the scalable model the number of VMs increases.
I'm trying to figure out the pricing model for Azure SQL databases. Comparing vCore to DTU on the defaults, 20 DTUs worth of server would cost an estimated £27.96 a month while vCore would cost £326.34 a month.
Why the disparity? I looked up what DTUs are, and I'm happy with the overall concept of them being a blended measure based on CPU and so on, but I can't figure out whether each database transaction adds up, so that I would eventually "use up" the 20 DTUs and get charged for another set of 20, or whether the database will simply only run as fast as "20" allows.
This isn't a question about the DTU calculator; I'm happy with all that. This is a question about why there is such a significant difference between the two values.
The reason for the difference is that the vCore model provides a dedicated core for your database and can be scaled independently of storage. In the DTU model you are basically paying for a share of CPU, memory and storage (including IO). When you choose a larger DTU size, all the specs move up together.
This article provides some detail: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-vcore
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-vcore#choosing-service-tier-compute-memory-storage-and-io-resources
If you decide to convert from the DTU model to the vCore model, you should select the performance level using the following rule of thumb: each 100 DTUs in the Standard tier requires at least 1 vCore in the General Purpose tier; each 125 DTUs in the Premium tier requires at least 1 vCore in the Business Critical tier.
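As an illustration of that rule of thumb (the database name and target service objective below are just examples), a Standard database sized at roughly 200 DTUs would map to at least 2 vCores in General Purpose, and the move itself is a single statement:

-- Example only: move a hypothetical Standard 200-DTU database to a 2-vCore
-- General Purpose objective, following the roughly 100 DTU per 1 vCore guideline.
ALTER DATABASE MyDatabase
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');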
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-vcore-resource-limits#what-happens-when-database-and-elastic-pool-resource-limits-are-reached
The maximum number of sessions and workers are determined by the service tier and performance level. New requests are rejected when session or worker limits are reached, and clients receive an error message.
If I have an S2 SQL database and I create a secondary geo-replicated database, should it be the same size (S2)? I see that you get charged for the secondary DB, but the DTU usage reported against the secondary is 0%, which seems to indicate that S2 is too large.
Obviously, we'd like to save the cost and move the secondary to a smaller size if at all possible.
Considerations
I understand that if we need to fail over to the secondary, it would at that point need to be bumped up to S2 to meet the production workload, but I am assuming we could do this at the time of failover?
I also get that if we were actively using the replicated DB for reporting, etc., then we'd have to size it accordingly to meet that demand. But currently we are not actively using the secondary for anything other than a failover point, if it is ever needed.
At this point both primary and secondary must be in the same edition but can have different performance objectives (DTU size). We are working on lifting that limitation so that geo-replicated databases could scale to a different edition when needed without breaking the replication links (e.g. Standard to Premium).
Regarding sizing the secondary: you *can* make it smaller in DTUs than the primary if you believe that the updates take less capacity than the reads (a high read/write ratio). But as noted earlier, you will have to upsize it right after the failover, and that may take time, during which your app's performance will be impacted. In general, we do not recommend making the secondary more than one level smaller; e.g. S3 -> S1 is not a good idea, as it will likely cause replication lag and may result in excessive data loss after a failover.
You can safely change the tier of the secondary database, but bear in mind that in the case of failover you will face performance issues. Also, you can't scale past your current performance tier (so both databases ought to be in the same tier).
And yes, you can change the size after failover, but the process is manual.
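Here is a minimal sketch of that manual process, assuming a database called MyDatabase and an S2 production size (both placeholders):

-- 1) Connect to the master database on the SECONDARY server and start a planned failover.
ALTER DATABASE MyDatabase FAILOVER;

-- 2) Once the old secondary has become the primary, scale it back up to the production size.
ALTER DATABASE MyDatabase
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');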
I have a big performance problem with STDistance function on SQL Azure.
I'm testing the same query
-- @Center is a geography parameter declared elsewhere
SELECT Coordinate
FROM MyTable
WHERE Coordinate.STDistance(@Center) < 50000
on a SQL Azure database (Standard) and on my local machine database.
Same database, same indexes (a spatial index on Coordinate), same data (400k rows), but I get two different execution times.
The query takes less than 1 second on my local workstation and roughly 9 seconds on SQL Azure.
Has anybody else had the same problem?
Federico
You can try the following things to reduce network latency:
Select the data center closest to majority of your users
Co-Locate your DB with your application if your application is in Windows Azure as well
Minimize network round trips in your app
I would highly recommend you read this Azure SQL DB Perf guidance.
In addition to that, please check the existing service tier of your database and see if the performance is capping out. In that case, you might want to upgrade the service tier of your DB. If you would like to monitor the performance and adjust the performance levels, please use this link.
Thanks
Silvia Doomra
Query performance depends on various factors, one of which is your performance tier. Verify whether you are hitting your resource limits (the sys.resource_stats DMV in the master database).
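For example, something like the following (the database name is a placeholder) shows the recent 5-minute averages, so you can see whether CPU, data IO or log write is pegged near 100%:

-- Run against the master database of the logical server; history covers roughly the last 14 days.
SELECT TOP (50)
    start_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    max_worker_percent,
    max_session_percent
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'
ORDER BY start_time DESC;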
Besides that, there are a few other factors you can consider verifying:
index fragmentation, network latency, locking, etc.
Application level caching helps avoid hitting the database if the query is repeating.
You may also have to investigate which service tier and performance level are required, based on the benchmarks here: AzureSQL-ServiceTier_PerformanceLevel
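For the index fragmentation check mentioned above, a minimal sketch (the table name is a placeholder, and the 30% threshold is only a common rule of thumb):

-- Check fragmentation for the indexes on the table used by the query.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- If fragmentation is high (commonly above 30%), rebuild the indexes.
ALTER INDEX ALL ON dbo.MyTable REBUILD;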
The new Azure SQL Database services look good. However, I am trying to work out how scalable they really are.
So, for example, assume a 200 concurrent user system.
For Standard
Workgroup and cloud applications with "multiple" concurrent transactions
For Premium
Mission-critical, high transactional volume with "many" concurrent users
What does "Multiple" and "Many" mean?
Also Standard/S1 offers 15 DTUs while Standard/S2 offers 50 DTUs. What does this mean?
Going back to my 200 user example, what option should I be going for?
Azure SQL Database Link
Thanks
EDIT
Useful page on definitions
However what is "max sessions"? Is this the number of concurrent connections?
There are some great MSDN articles on Azure SQL Database; this one in particular is a great starting point for DTUs: http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx and http://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-Video-02
In short, it's a way to understand the resources powering each performance level. One of the things we know from talking with Azure SQL Database customers is that they are a varied group. Some are most comfortable with the absolute details: cores, memory, IOPS. Others are after a much more summarized level of information. There is no one size fits all. DTU is meant for the latter group.
Regardless, one of the benefits of the cloud is that it's easy to start with one service tier and performance level and iterate. In Azure SQL Database specifically, you can change the performance level while your application is up. During the change there is typically less than a second of elapsed time when DB connections are dropped. The internal workflow in our service for moving a DB from one service tier/performance level to another follows the same pattern as the workflow for failing over nodes in our data centers, and node failovers happen all the time, independent of service tier changes. In other words, you shouldn't notice any difference in this regard relative to your past experience.
If DTU's aren't your thing, we also have a more detailed benchmark workload that may appeal. http://msdn.microsoft.com/en-us/library/azure/dn741327.aspx
Thanks Guy
It is really hard to tell without doing a test. By 200 users I assume you mean 200 people sitting at their computer at the same time doing stuff, not 200 users who log on twice a day. S2 allows 49 transactions per second which sounds about right, but you need to test. Also doing a lot of caching can't hurt.
Check out the new Elastic DB offering (Preview) announced at Build today. The pricing page has been updated with Elastic DB price information.
DTUs are based on a blended measure of CPU, memory, reads, and writes. As DTUs increase, the power offered by the performance level increases. Azure has different limits on concurrent connections, memory, IO and CPU usage. Which tier one has to pick really depends upon:
#concurrent users
Log rate
IO rate
CPU usage
Database size
For example, if you are designing a system where multiple users are reading and there are only a few writers, and if your application's middle tier can cache the data as much as possible so that only selective queries or an application restart hit the database, then you may not need to worry too much about IO and CPU usage.
If many users are hitting the database at the same time, you may hit the concurrent connection limit and requests will be throttled. If you can control the user requests coming to the database in your application, then this shouldn't be a problem.
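If you want a quick, rough check of how close you are to the session limit (compare the count against the "max sessions" value for your performance level), something like this works from inside the database:

-- Count the user sessions currently open against this database.
SELECT COUNT(*) AS active_user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;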
Log rate: depends on the volume of data changes (including additional data being pumped into the system). I have seen applications pumping data in steadily versus pumping it all at once. Selecting the right DTU level again depends on whether you can throttle at the application end and achieve a steady rate.
Database size: Basic, Standard, and Premium have different maximum allowed sizes, and this is another deciding factor. Features such as table compression help reduce the total size, and hence total IO.
Memory: tuning the expensive queries (joins, sorts, etc.) and enabling lock escalation / nolock scans help control memory usage.
A very common mistake people make in database systems is scaling up the database instead of tuning the queries and application logic. So testing and monitoring the resources and queries at different DTU limits is the best way to deal with this.
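While load testing at a given DTU level, the near-real-time view inside the user database itself is handy; a minimal example:

-- sys.dm_db_resource_stats returns 15-second samples covering roughly the last hour.
SELECT TOP (40)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;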
If you choose the wrong DTU level, don't worry: you can always scale up or down in SQL DB, and it is a completely online operation.
Also, unless you have a strong reason not to, migrate to V12 to get even better performance and features.