I am considering migrating from Azure SQL Database to Azure SQL Data Warehouse. It seems to offer some of the features that we need; however, price is a concern when starting small. A 100 DWU Data Warehouse is priced considerably higher ($521/month) than a seemingly comparable 100 DTU Azure SQL S2 tier ($150/month).
To make sure I am comparing apples to apples, can someone shed some light on how DWUs compare to DTUs (assuming a basic configuration with a single database)?
Edit: to everyone who is inclined to answer that Azure SQL DW and Azure SQL are not comparable and that it therefore makes no sense to compare DTUs to DWUs: then how does it make sense to talk about migrating to DW at all?
For what it's worth, 1 DWU = 7.5 DTU with respect to server capacity
When you look at the server instance that you provision a DW instance on:
100 DWU instance consumes 750 DTUs of server capacity
400 DWU instance consumes 3,000 DTUs of server capacity
While this information is interesting, it may not be very useful for comparing pricing, because DW pricing is based exclusively on DWUs, while Azure SQL pricing is a combination of DTUs and database size.
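As a rough back-of-envelope illustration only (it ignores storage, which SQL DB bills separately, and assumes the 7.5 ratio above), using the prices quoted in the question:
-- Rough per-unit comparison; prices taken from the question above
DECLARE @DwMonthly money = 521.00;   -- 100 DWU SQL Data Warehouse
DECLARE @S2Monthly money = 150.00;   -- 100 DTU (S2) Azure SQL Database
DECLARE @DwDtuEquivalent int = 750;  -- 100 DWU * 7.5
SELECT CAST(@DwMonthly / @DwDtuEquivalent AS decimal(10,2)) AS DwCostPerDtuEquivalent, -- ~0.69
       CAST(@S2Monthly / 100 AS decimal(10,2)) AS S2CostPerDtu;                        -- 1.50
Read too much into that at your peril, though, for exactly the reason above: the two services meter (and bill) very different things.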
You can't, and really shouldn't, compare the two for the same workload; they're designed for different things and built on completely different architectures. As such, DTU and DWU are not comparable measures. Also, how deeply have you looked into the technical differences? The high-level features are not the major issue; the details are what might wreck your app (e.g. can you live with a limited T-SQL surface area or restricted transaction isolation levels?)
Azure SQL DB is intended to be a general-purpose database as a service. A few feature gaps aside, you should think about Azure SQL DB functionally the same way you do SQL Server, minus a lot of the administrative tasks and with a different programming model. It works great for OLTP apps and most reporting apps (or a mix), but not so great for complex analytical apps against very large datasets (you can't really store that much in SQL DB anyway).
SQL DW is intended for data warehousing and analytical workloads. Its MPP architecture is particularly well suited for complex queries against very large data sets. It will not perform well for typical OLTP applications that have lots of small or singleton queries, especially when there is a mix of insert, update, and delete operations. If you get a trial instance of SQL DW, you can easily test and verify the behavior for your workload compared to what it currently looks like on SQL DB.
SQL DW also has some limitations on its T-SQL surface area, types, concurrency, isolation levels (a deal breaker for almost all OLTP apps), etc., so be sure to look into the documentation to get the whole picture as you evaluate feasibility. It might work great, but I suspect it's not the best solution if you're running an OLTP workload. Reporting/analytical workloads, however, might find a happy home in SQL DW.
The best way to figure out what you need is to look at your current IO requirements. Data warehouses tend to be IO hogs and are consequently optimized for IO throughput. The DWU Calculator site walks you through the process of capturing your disk metrics and estimates how many DWUs you need to fulfill your workload.
http://dwucalculator.azurewebsites.net/
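Before going through the calculator, one quick way to get a feel for the read/write mix of the current workload (assuming it runs on a SQL Server instance where you can query the DMVs; this is only a sanity check, not a substitute for the metrics capture the calculator asks for) is something like:
-- Cumulative IO per database file since the last restart: a rough view of your read/write mix
SELECT DB_NAME(vfs.database_id)            AS database_name,
       mf.name                             AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read / 1048576     AS mb_read,
       vfs.num_of_bytes_written / 1048576  AS mb_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;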
Related
Cosmos DB allows us to store graph data using the Gremlin query language.
Are there intelligent algorithms optimizing how the graph is split up among many servers? If not, I can imagine some queries being extremely slow due to network latency between the shards.
The documentation is still a bit lacking, but there are some performance considerations for DocumentDB itself. Namely, setting up a PartitionKey that is adequately granular will split your data across multiple partitions, giving you higher throughput. You can find more here:
https://learn.microsoft.com/en-us/azure/documentdb/documentdb-partition-data
I have an Azure SQL Database S1 Standard (20DTU) and I'm seeing vast variations in performance. I have a number of queries that power a set of reports on a small web site. When running these queries through the Management Studio the performance varies from 0 to 60 seconds. The site isn't public so there's no traffic yet - only me. Looking at the DTU usage, it spikes at around 50%. Can anyone help me understand where the performance difference is coming from?
You can follow the link http://social.technet.microsoft.com/wiki/contents/articles/1104.troubleshoot-and-optimize-queries-with-azure-sql-database.aspx to troubleshoot your query performance. Enabling the Query Store is another option if you are on V12.
Various factors can impact query performance: a cold buffer pool, SQL instance restarts because of Azure maintenance (which clear the buffer pool), and so on.
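If you are on V12, a minimal sketch of turning the Query Store on and then pulling the slowest captured statements looks roughly like this (using the Query Store catalog views):
-- Enable the Query Store for the current database
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Top 10 statements by average duration (avg_duration is in microseconds)
SELECT TOP (10)
       qt.query_sql_text,
       rs.count_executions,
       rs.avg_duration / 1000.0 AS avg_duration_ms
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
ORDER BY rs.avg_duration DESC;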
I have a big performance problem with STDistance function on SQL Azure.
I'm testing the same query
SELECT Coordinate
FROM MyTable
WHERE Coordinate.STDistance(@Center) < 50000
on a SQL Azure database (Standard) and on my local machine database.
Same database, same indexes (a spatial index on Coordinate), same data (400k rows), but I get two different execution times.
The query takes less than 1 second on my local workstation and roughly 9 seconds on SQL Azure.
Has anybody else had the same problem?
Federico
You can try following things to reduce network latency:
Select the data center closest to the majority of your users
Co-locate your DB with your application if your application is in Windows Azure as well
Minimize network round trips in your app
I would highly recommend you read this Azure SQL DB Perf guidance.
In addition to that, please check the existing service tier of your database and see if the performance is capping out. In that case, you might want to upgrade the service tier of your DB. If you would like to monitor the performance and adjust the performance levels, please use this link.
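For reference, you can check which edition and service objective (performance level) the database is currently running at directly from T-SQL:
-- Current edition and service objective of the database you are connected to
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;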
Thanks
Silvia Doomra
Query performance depends on various factors, and one among them is your performance tier. Verify whether you are hitting your resource limits (the sys.resource_stats DMV in the master database).
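For example (run this in master; 'YourDatabase' is a placeholder, and the exact column set can vary between service versions):
-- Five-minute resource usage history for a database, most recent first
SELECT TOP (48) start_time, end_time,
       avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'YourDatabase'
ORDER BY start_time DESC;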
Besides that, there are a few other factors worth verifying: index fragmentation on Azure, network latency, locking, etc.
Application-level caching helps avoid hitting the database if the query is repeated.
You may also have to investigate which service tier and performance level you need based on the benchmarks here: AzureSQL-ServiceTier_PerformanceLevel
The new Azure SQL Database services look good. However, I am trying to work out how scalable they really are.
So, for example, assume a 200 concurrent user system.
For Standard
Workgroup and cloud applications with "multiple" concurrent transactions
For Premium
Mission-critical, high transactional volume with "many" concurrent users
What does "Multiple" and "Many" mean?
Also Standard/S1 offers 15 DTUs while Standard/S2 offers 50 DTUs. What does this mean?
Going back to my 200 user example, what option should I be going for?
Azure SQL Database Link
Thanks
EDIT
Useful page on definitions
However what is "max sessions"? Is this the number of concurrent connections?
There are some great MSDN articles on Azure SQL Database; this one in particular is a great starting point for DTUs: http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx and http://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-Video-02
In short, it's a way to understand the resources powering each performance level. One of the things we know from talking with Azure SQL Database customers is that they are a varied group. Some are most comfortable with the absolute details, cores, memory, IOPS, and others are after a much more summarized level of information. There is no one size fits all. DTU is meant for this latter group.
Regardless, one of the benefits of the cloud is that it's easy to start with one service tier and performance level and iterate. In Azure SQL Database specifically, you can change the performance level while your application is up. During the change there is typically less than a second of elapsed time when DB connections are dropped. The internal workflow in our service for moving a DB from one service tier/performance level to another follows the same pattern as the workflow for failing over nodes in our data centers, and nodes fail over all the time independent of service tier changes. In other words, you shouldn't notice any difference in this regard relative to your past experience.
If DTUs aren't your thing, we also have a more detailed benchmark workload that may appeal: http://msdn.microsoft.com/en-us/library/azure/dn741327.aspx
Thanks Guy
It is really hard to tell without doing a test. By 200 users I assume you mean 200 people sitting at their computers at the same time doing stuff, not 200 users who log on twice a day. S2 allows 49 transactions per second, which sounds about right, but you need to test. Also, doing a lot of caching can't hurt.
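As a rough sanity check on that number: 200 concurrent users sharing a 49 transactions/second budget works out to each user issuing at most one transaction roughly every 4 seconds (200 / 49 ≈ 4.1 s), which is plausible for a reporting-style site but would be tight for heavy data entry.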
Check out the new Elastic DB offering (Preview) announced at Build today. The pricing page has been updated with Elastic DB price information.
DTUs are based on a blended measure of CPU, memory, reads, and writes. As DTUs increase, the power offered by the performance level increases. Azure has different limits on concurrent connections, memory, IO, and CPU usage. Which tier you have to pick really depends upon:
Number of concurrent users
Log rate
IO rate
CPU usage
Database size
For example, if you are designing a system where multiple users are reading and there are only a few writers, and if your application's middle tier can cache the data as much as possible so that only selective queries or application restarts hit the database, then you may not need to worry too much about IO and CPU usage.
If many users are hitting the database at the same time, you may hit the concurrent connection limit and requests will be throttled. If you can control user requests coming to the database in your application then this shouldn't be a problem.
Log rate: depends upon the volume of data changes (including additional data being pumped into the system). I have seen applications pumping data steadily versus pumping it all at once. Selecting the right DTU again depends upon how well you can throttle at the application end to get a steady rate.
Database size: Basic, Standard, and Premium have different maximum allowed sizes, and this is another deciding factor. Using features like table compression helps reduce the total size, and hence the total IO.
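For example (dbo.Orders is just a placeholder table name; page compression is available on V12):
-- Estimate the space savings first...
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'Orders',
     @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';

-- ...then rebuild the table with page compression to reduce size and IO
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);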
Memory: tuning expensive queries (joins, sorts, etc.) and enabling lock escalation / nolock scans help control memory usage.
A very common mistake people make with database systems is scaling up the database instead of tuning the queries and application logic. So testing, and monitoring the resources and queries at different DTU limits, is the best way of dealing with this.
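A simple way to watch resource consumption while you test at different DTU limits is the per-database DMV (15-second intervals over roughly the last hour; run it in the user database, and note the exact columns can vary by version):
-- Recent resource usage for the current database, most recent interval first
SELECT end_time, avg_cpu_percent, avg_data_io_percent,
       avg_log_write_percent, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;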
If you choose the wrong DTU, don't worry: you can always scale up or down in SQL DB, and it is a completely online operation.
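For example, a scale operation is just an ALTER DATABASE, typically run while connected to master (the database name and target objective below are placeholders):
-- Move the database to Standard S2; connections may be dropped briefly when the change completes
ALTER DATABASE YourDatabase MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');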
Also, unless you have a strong reason not to, migrate to V12 to get even better performance and features.
SQL Azure storage is a lot more expensive than Windows Azure Storage. Would implementing a no-sql solution like RavenDB allow me to store data on the cheaper Azure Storage?
Are there other things to consider, like backup, speed or security?
Thank you.
You have to consider that with SQL Azure you not only get the storage, but the database server too. If you implement RavenDB, you will need a worker role to host it in and, in order to allow for failure of that worker role, another worker role (replica), which also doubles up the storage.
Bear in mind that with SQL Azure you get a highly available (3x replicated with failover) SQL solution that surfaces a familiar (ADO.NET) API. Make your choices based on aspects other than storage cost, such as operational effort and development effort. If you choose RavenDB, it should be because of the potential cost savings in development effort (because of the closeness of the document API to the object graph) and operational cost, because RavenDB is 'administered' as part of the application. The cost of storing the actual bytes, particularly at scale, is a marginal consideration.
Adding a bit to @Simon's answer: when considering Table Storage and its low cost, also consider whether you can use it directly, instead of going with an installed-and-managed-by-you NoSQL database engine. As it stands, Table Storage offers a schemaless solution that lets you store essentially a property bag within a row, indexed by PartitionKey + RowKey. Does that work for you? Could you work with a few extra tables to give you additional indexing? If so, your storage cost is going to be really low (and still durable and triple-replicated).
If you find yourself writing significant code to manage Table Storage, then it may be more efficient to invest in the Compute instances needed to run RavenDB. When considering this, also consider that you'll likely want larger VM sizes if you're moving significant data (as you get approx. 100Mbps per core). A database like MongoDB, working with memory-mapped files, really ramps up speed-wise with more RAM. Not sure if this is the same with RavenDB.