So I've been getting my feet wet with Azure SQL databases, and one of the questions I can't quite figure out is whether Azure charges per SQL developer on top of the database costs. If there's a team of 5 DB administrators, are they all allowed to build tables and extract data as long as the SQL database pool is being paid for?
It's just confusing because the price is a lot lower than what I'd expect, and I want to make sure I'm not missing any gotchas that could multiply the cost. How is it possible it's only $606/month for 400 databases with 1TB of total storage?? Am I missing something super obvious??
Seems like we can just add DBA groups to a DB resource group
Pricing calculator estimate for a 1 TB elastic SQL database pool: $606/month
Also... some additional assumptions and questions (sorry):
-Azure DB bills ONLY on the transactions and storage used in the database
-An elastic pool allows hundreds of DBs to be created at near-zero cost; does that include backup DBs (proof-of-concept/test/support DBs)?
How is it possible it's only $606/month for 400 databases with 1TB of total storage??
Under the vCore pricing model, that will buy you only a few cores and a limited amount of RAM. See Resource limits for elastic pools using the vCore-based purchasing model for the details. So your 400 databases are sharing a small pool of resources; you may need to scale up the pool based on your workload.
In the General Purpose tier your database files are stored on Azure premium storage, and 1TB of Premium SSD storage costs only $135/month.
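To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The hourly compute rate is an assumption picked to land near the quoted figure; always confirm real rates in the pricing calculator for your region:

```python
# Illustrative cost breakdown for a small General Purpose elastic pool.
# The compute rate below is an assumption for this sketch; only the
# storage rate (~$135/month per TB, per the answer above) is grounded.

HOURS_PER_MONTH = 730                # Azure bills compute per hour

compute_rate_per_hour = 0.645        # assumed rate for a small vCore pool
storage_rate_per_gb_month = 0.135    # ~$135/month for 1 TB of premium storage

compute = compute_rate_per_hour * HOURS_PER_MONTH
storage = storage_rate_per_gb_month * 1024   # 1 TB of data across the pool

total = compute + storage
print(f"compute ~ ${compute:,.0f}, storage ~ ${storage:,.0f}, total ~ ${total:,.0f}/month")
# Note: the 400 databases themselves add nothing; only pool size and storage matter.
```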
Azure DB bills ONLY on the transactions and storage used in the database
Under both the DTU and vCore models there is no charge for transactions. You've paid for the capacity and may do with it what you want. Backup storage is extra; see the pricing calculator for details. There are no "charges per SQL developer" on top of the database costs.
The price of an elastic pool is based on the number of eDTUs of the pool, if you choose the DTU pricing model. The price is independent of the number and utilization of the databases within it, the number of transactions, and the storage consumed, and it is likewise independent of the developers or users using those databases.
If you choose the vCore pricing model, pricing is based on the number of virtual cores, the compute generation you choose (Gen4/Gen5), licensing costs, and the storage used by the databases and backups. You can save money if you apply the Azure Hybrid Benefit or buy reserved capacity. Again, pricing is independent of the number of databases, the number of transactions, or the users using them.
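Schematically, the two models' cost drivers look like this (a sketch; every rate below is a placeholder, not a real price):

```python
# Schematic of the two purchasing models' cost formulas.
HOURS_PER_MONTH = 730

# DTU model: pick an eDTU size for the pool and everything is bundled.
edtu_rate = 1.50                     # assumed price per eDTU per month
dtu_pool_cost = 100 * edtu_rate      # a 100-eDTU pool

# vCore model: compute, licensing, and storage are itemized separately.
vcore_rate = 0.25                    # assumed $/vCore/hour, license included
storage_rate = 0.115                 # assumed $/GB/month
vcore_pool_cost = 4 * vcore_rate * HOURS_PER_MONTH + 512 * storage_rate

# Note: neither formula has a term for databases, transactions, or users.
print(round(dtu_pool_cost, 2), round(vcore_pool_cost, 2))
```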
Related
Is there a way to find the size of a database backup inside an Azure elastic pool?
I need it to help me find the price of each database inside the elastic pool using the Azure calculator.
I don't think there is a way you can do that.
To see how much I spend per database, I usually go into SQL Databases > Compute + Storage, but when the database is inside an elastic pool you cannot see the price.
You can see the total price of the whole pool but not of one database.
A better approach is to go to Elastic Pool > Overview > Databases to get an overview of each database's consumption.
From there, work out the price of each core and each GB.
Then create a rule that shares the cost of the elastic pool across databases according to Avg CPU, Peak CPU, and Data space used.
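For example, here is a rough chargeback sketch along those lines. The 50/50 CPU/storage weighting is an arbitrary choice and the per-database numbers are made up; real metrics can come from the portal or from views such as sys.elastic_pool_resource_stats:

```python
# Split a fixed elastic pool bill across the databases in it, weighted
# by observed metrics. The weighting scheme is an assumption; adjust
# the metric mix to suit your own chargeback policy.

POOL_MONTHLY_COST = 606.0   # whatever your pool actually costs

# (avg CPU %, data space used in GB) per database -- sample numbers
usage = {
    "crm":       (22.0, 180.0),
    "billing":   (35.0, 420.0),
    "reporting": ( 8.0,  90.0),
}

def share(avg_cpu: float, space_gb: float,
          total_cpu: float, total_gb: float) -> float:
    # Blend 50% CPU weight with 50% storage weight (arbitrary split).
    return 0.5 * avg_cpu / total_cpu + 0.5 * space_gb / total_gb

total_cpu = sum(cpu for cpu, _ in usage.values())
total_gb = sum(gb for _, gb in usage.values())

for name, (cpu, gb) in usage.items():
    cost = POOL_MONTHLY_COST * share(cpu, gb, total_cpu, total_gb)
    print(f"{name:10s} ~ ${cost:,.2f}/month")
```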
At present we have 3 stages (Dev, QA & Prod) in our Azure resources. All three use SQL Database 'Standard S6: 400 DTUs'. Because of the Dev and QA SQL databases, our monthly cost is running over 700 euros. I am planning to move from DTU to vCore serverless. Below are my questions:
Is just going into the portal -> Compute + storage -> and changing from DTU to vCore serverless the right process?
Do I need to take care of anything else before doing this operation?
Is my existing Azure SQL DB going to be affected by this operation?
If things are not fine as per my requirements, can I come back to the DTU model the same way?
Thanks in advance.
You can have a look at this MS doc for details: Migrate Azure SQL Database from the DTU-based model to the vCore-based model
Is just going into the portal -> Compute + storage -> and changing from DTU to vCore serverless the right process?
Yes! Just change to the required option from the dropdown and click Apply.
Migrating a database from the DTU-based purchasing model to the vCore-based purchasing model is similar to scaling between service objectives in the Basic, Standard, and Premium service tiers, with similar duration and a minimal downtime at the end of the migration process.
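If you prefer to script it rather than click through the portal, the same change can be made with T-SQL. A hedged sketch follows; the server, credentials, and the 'GP_S_Gen5_1' serverless service objective are placeholders, so substitute the size you actually want:

```python
# Scale a database to a serverless vCore service objective via T-SQL.
# Assumes pyodbc and ODBC Driver 18 are installed; connection details
# below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=master;UID=youradmin;PWD=yourpassword",
    autocommit=True,   # ALTER DATABASE cannot run inside a transaction
)
conn.execute(
    "ALTER DATABASE [YourDb] MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_1');"
)
# The change is asynchronous; the database stays online during the scale,
# with a brief connection drop at the end, just like a normal tier change.
```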
Do I need to take care of anything else before doing this operation?
Some hardware generations may not be available in every region. Check availability under Hardware generations for SQL Database.
In the vCore model, the supported maximum database size may differ depending on hardware generation. For large databases, check supported maximum sizes in the vCore model for single databases and elastic pools.
If you have geo-replicated databases, you don't have to stop geo-replication during migration, but you must upgrade the secondary database first and then upgrade the primary. When downgrading, reverse the order. Also go through the doc once.
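As a quick pre-check for the max-size point above, something like this sketch can compare the database's current size with the limit you plan to configure (connection details and the target figure are placeholders):

```python
# Compare current database size against the intended vCore max size.
# Run this connected to the user database itself, not master.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=YourDb;UID=youradmin;PWD=yourpassword"
)
# sys.database_files reports size in 8-KB pages; convert to GB.
size_gb = conn.execute(
    "SELECT SUM(CAST(size AS bigint)) * 8.0 / 1024 / 1024 "
    "FROM sys.database_files;"
).fetchone()[0]

target_max_gb = 500   # whatever the target vCore service objective supports
print(f"current ~ {size_gb:.1f} GB, target max = {target_max_gb} GB, "
      f"fits: {size_gb <= target_max_gb}")
```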
Is my existing Azure SQL DB going to be affected by this operation?
You can copy any database with a DTU-based compute size to a database with a vCore-based compute size without restrictions or special sequencing, as long as the target compute size supports the maximum database size of the source database. Database copy creates a transactionally consistent snapshot of the data as of a point in time after the copy operation starts. It doesn't synchronize data between the source and the target after that point in time.
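If you want a fallback before migrating, one option suggested by that quote is to take a database copy first. A minimal sketch, with all names as placeholders:

```python
# Create a pre-migration copy of the database. CREATE DATABASE ... AS
# COPY OF is run while connected to master, and the resulting copy is
# transactionally consistent as of a point in time after the copy starts.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=master;UID=youradmin;PWD=yourpassword",
    autocommit=True,
)
conn.execute("CREATE DATABASE [YourDb_preMigration] AS COPY OF [YourDb];")
```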
If things are not fine as per my requirements, can I come back to the DTU model the same way?
A database migrated to the vCore-based purchasing model can be migrated back to the DTU-based purchasing model at any time in the same fashion, with the exception of databases migrated to the Hyperscale service tier.
I'm doing a data load into an Azure SQL database using Azure Data Factory v2. I started the data load with the DB set to the Standard pricing tier with 800 DTUs. It was slow, so I increased the DTUs to 1600. (My pipeline has now been running for 7 hours.)
I then decided to change the pricing tier. I changed it to Premium with the DTUs set to 1000. (I didn't make any additional changes.)
The pipeline failed as it lost the connection, so I reran it.
Now, when I monitor the pipeline, it is working fine, but when I monitor the database, the average DTU usage is not going above 56%.
I am dealing with a tremendous amount of data. How can I speed up the process?
I expected the DTUs to max out, but the average utilization is around 56%.
Please follow this document: Copy activity performance and scalability guide. It walks through the performance tuning steps.
One way is to increase the Azure SQL Database tier to get more DTUs. You have already moved to a tier with 1000 DTUs, but the average utilization is only around 56%, so I don't think you need a higher pricing tier.
You need to think about other ways to improve performance, such as setting more Data Integration Units (DIUs).
A Data Integration Unit is a measure that represents the power (a combination of CPU, memory, and network resource allocation) of a single unit in Azure Data Factory. DIUs apply only to the Azure integration runtime, not the self-hosted integration runtime.
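For illustration, this is roughly where the DIU knob lives in a Copy activity definition, shown here as a Python dict mirroring the activity JSON. Dataset and source/sink choices are placeholders, and note the caveat in the next answer about when higher DIU settings actually apply:

```python
# A Copy activity definition with explicit DIU and parallelism settings,
# expressed as a Python dict for illustration. In practice you would set
# these in the ADF authoring UI or an ARM template.
copy_activity = {
    "name": "CopyToAzureSql",
    "type": "Copy",
    "typeProperties": {
        "source": {"type": "AzureSqlSource"},   # placeholder source type
        "sink": {"type": "AzureSqlSink"},
        "dataIntegrationUnits": 32,   # default is "Auto"; raise it explicitly
        "parallelCopies": 8,          # per-activity parallelism, also tunable
    },
}
```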
Hope this helps.
The standard answer from Microsoft seems to be that you need to tune the target database or scale up to a higher tier. This suggests that Azure Data Factory is not a limiting factor in the copy performance.
However, we've done some testing on a single table with a single copy activity and ~15 GB of data. The table did not contain varchar(max) or high-precision columns, just plain and simple data.
Conclusion: it barely matters which tier you choose (not too low, of course). Roughly above S7 / 800 DTU / 8 vCores, the copy activity's throughput sits at ~10 MB/s and does not go up, while the load on the target database stays at 50%-75%.
Our assumption is that since we could keep throwing higher database tiers at this problem without seeing any improvement in copy activity performance, the bottleneck is Azure Data Factory related.
Since we are loading a lot of separate tables, our solution is to scale out instead of up, via a ForEach loop with the batch count set to at least 4 (sketched below).
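A sketch of that scale-out shape, again expressed as a Python dict mirroring the pipeline JSON (pipeline parameter and activity names are placeholders):

```python
# A ForEach over table names with parallel iterations, each running its
# own Copy activity, so multiple tables load concurrently.
foreach_activity = {
    "name": "CopyTablesInParallel",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": False,   # run iterations in parallel
        "batchCount": 4,         # up to 4 concurrent copies
        "items": {
            "value": "@pipeline().parameters.tableList",
            "type": "Expression",
        },
        "activities": [
            {
                "name": "CopyOneTable",
                "type": "Copy",
                "typeProperties": {
                    "source": {"type": "AzureSqlSource"},
                    "sink": {"type": "AzureSqlSink"},
                },
            }
        ],
    },
}
```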
The approach of increasing the DIUs is only applicable in some cases:
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-performance#data-integration-units
Setting of DIUs larger than four currently applies only when you copy multiple files from Azure Storage, Azure Data Lake Storage, Amazon S3, Google Cloud Storage, cloud FTP, or cloud SFTP to any other cloud data stores.
In our case we are copying data from relational databases.
Hello, I have 14 Azure SQL databases on the DTU model: S0, S1, and S4 (prod).
So I am paying for some unused or infrequently used databases.
10 databases are for dev and test, 2 for production.
I saw a post about Azure elastic pools. Can somebody suggest which kinds of databases I should put in an elastic pool, and share some tips for cost saving?
Also, I have an Azure storage account (classic). Is it possible to back it up weekly, and if so, how?
Help and tips will be appreciated.
Thanks
Regards
KP
To keep it simple: an elastic pool gives you a number of eDTUs that can be used and distributed among your databases as each one needs them.
So currently, if you have 14 databases in the S1 tier (20 DTUs each), you have 14 × 20 = 280 DTUs; if some databases are not in use, it's likely those DTUs are greatly underutilized.
In that case, if you opt for an elastic pool with 50 eDTUs, they will be distributed among the 14 databases and consumed as needed, which means you save money and balance resources.
I have not verified all the numbers, but that's the basic idea.
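A worked version of that comparison (a sketch; S1 is 20 DTUs per the Standard-tier limits, but pool sizes and prices depend on your region):

```python
# Compare dedicated per-database DTUs against one shared pool.
S1_DTUS = 20
n_dbs = 14

standalone_dtus = n_dbs * S1_DTUS   # 280 DTUs, dedicated per database
pool_edtus = 50                     # one small pool shared by all 14

print(f"stand-alone: {standalone_dtus} DTUs across {n_dbs} databases")
print(f"pooled:      {pool_edtus} eDTUs shared on demand")
# If the databases peak at different times, 50 shared eDTUs can cover the
# same workload that 280 mostly-idle dedicated DTUs did, at a lower price.
```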
I will just add to the other answers and comments. For backups, take into consideration that Azure automated backups provide a 7-35 day retention period. Additionally, you can use Azure long-term backup retention, which can store backups for up to 10 years.
Regarding choosing the correct pool size to save money, one of the documents shared by Nick above states the following: "SQL Database automatically evaluates the historical resource usage of databases in an existing SQL Database server and recommends the appropriate pool configuration in the Azure portal. In addition to the recommendations, a built-in experience estimates the eDTU usage for a custom group of databases on the server. This enables you to do a "what-if" analysis by interactively adding databases to the pool and removing them to get resource usage analysis and sizing advice before committing your changes".
Additionally, "After adding databases to the pool, recommendations are dynamically generated based on the historical usage of the databases you have selected. These recommendations are shown in the eDTU and GB usage chart and in a recommendation banner at the top of the Configure pool page. These recommendations are intended to assist you in creating an elastic pool optimized for your specific databases".
I'm trying to get an idea of how much databases will cost in Azure.
I've created an Elastic database pool and it says the monthly cost would be R2580 (South African Rands) for up to 200 databases & 100 eDTUs.
If I go to any of the databases I've created in the pool, and click on the Pricing Tier, it says it's a Basic database with 5 DTUs and estimated cost of R85 per month.
So what am I going to pay? R2580 per month, or (R85 x n databases) per month, or both?
Presumably, it's R2580 per month. If that's right, then you have to have about 30 databases before the prices even out, and even then you're probably better off with the stand-alone databases, as you'd have 150 DTUs vs 100 eDTUs.
Is my logic correct?
So what am I going to pay? R2580 per month, or (R85 x n databases) per month, or both?
You're going to pay R2580 / month as all the databases are part of an elastic database pool.
Presumably, it's R2580 per month. If that's right, then you have to have about 30 databases before the prices even out, and even then you're probably better off with the stand-alone databases, as you'd have 150 DTUs vs 100 eDTUs.
You're right again. Elastic database pools serve a different use case and may not be the right solution in every scenario. Typically, elastic database pools become useful if you have a multi-tenant SaaS application where each tenant gets its own database and there's a varied consumption pattern per tenant. With individual databases, you're capped at the DTU limit of each database. With an elastic pool, your tenants share the eDTUs of the pool and can occasionally burst beyond the DTU limit of an individual database.
You may find this link helpful in understanding when it makes sense to use Elastic Database Pool: https://azure.microsoft.com/en-in/documentation/articles/sql-database-elastic-pool-guidance/.
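For completeness, here is the break-even arithmetic from the question as a tiny sketch (prices in Rand as quoted; confirm current rates in the pricing calculator):

```python
# Break-even point between one pool and n stand-alone Basic databases.
pool_per_month = 2580.0      # 100 eDTU pool, up to 200 databases
basic_db_per_month = 85.0    # one stand-alone Basic (5 DTU) database

breakeven = pool_per_month / basic_db_per_month
print(f"break-even at about {breakeven:.0f} stand-alone Basic databases")
# Below ~30 databases, stand-alone Basic is cheaper; above that the pool
# wins on price, but the real case for pools is staggered per-tenant load.
```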