I have set up an Azure web site that uses an Azure SQL database. These were placed in different locations (by accident): the web site in Northern Europe and the SQL database in South Central US.
Assume I instead had the SQL database in Northern Europe, so that it is in the same location as the web site: would retrieving data be any faster? If so, by how much? Assume I have a very inefficient query loading too much data that currently takes 15 seconds.
Please ignore the possibility of improving the query. I am just interested in whether anyone has any statistics on the speed improvement gained by moving the SQL Server to the same data center as the web site.
Assume I have a very inefficient query loading too much data that currently takes 15 seconds.
Now it will take about 15.3 seconds (15 seconds + roughly 300 ms for the 3-way TCP handshake across the ocean).
Consider having to run, say, 10,000 queries over one hour - you pay that latency penalty FOR EACH OF THOSE QUERIES.
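To put rough numbers on that (assuming the same ~300 ms of added round-trip time per query): 10,000 queries × 0.3 s ≈ 3,000 s, i.e. roughly 50 minutes of pure network wait added over that hour, on top of the time the queries themselves take.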
In essence, move your database into the same Azure region as your application, or vice versa.
We are using Azure Stream Analytics to build out a new IoT product. The data streams successfully to Power BI, but there is no way to implement Row Level Security so that we can display this data back to a customer, limited to only that customer's data. I am considering adding an Azure SQL DB between ASA and PBI and switching the PBI dataset from a streaming dataset to DirectQuery with a high page-refresh rate, but this seems like it would be a very intense workload for an Azure SQL DB to handle. There is the potential, as the product grows, for multiple inserts per second and querying every couple of seconds. Streaming seems like the better answer, apart from the missing RLS. Any tips?
There is the potential, as the product grows, for multiple inserts per second and querying every couple of seconds.
A small Azure SQL Database should handle that load: 1,000 inserts per second is simple; 100,000 per second is probably too much.
And ASA can ensure that the output streams are not too frequent.
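If you do put an Azure SQL DB in the middle, the SQL-side RLS itself is not hard to set up. A minimal sketch, assuming a hypothetical dbo.Telemetry table with a CustomerId column and a per-customer value placed in SESSION_CONTEXT by the querying layer (all object names here are illustrative, not from the question):

-- Predicate function: a row is visible only when its CustomerId matches
-- the value the caller put into SESSION_CONTEXT for this connection.
CREATE FUNCTION dbo.fn_CustomerFilter(@CustomerId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @CustomerId = CAST(SESSION_CONTEXT(N'CustomerId') AS int);
GO

-- Security policy applying the filter to the (hypothetical) telemetry table.
CREATE SECURITY POLICY dbo.CustomerTelemetryPolicy
    ADD FILTER PREDICATE dbo.fn_CustomerFilter(CustomerId) ON dbo.Telemetry
    WITH (STATE = ON);
GO

-- The querying layer sets the customer id once per session/connection:
EXEC sys.sp_set_session_context @key = N'CustomerId', @value = 42;

Whether Power BI can pass a per-customer value through to SESSION_CONTEXT depends on how the embedding and the DirectQuery connection are set up, so treat this as the database half of the solution only.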
I want to create web apps with Power BI Embedded from the Germany Central datacenter. Unfortunately this service is not available there yet, and I don't know when it will become available.
Therefore my idea is to start with all the other services located in Germany Central and migrate to Power BI Embedded later. Is this possible, or is it strongly recommended to have the Power BI Embedded service and the Azure SQL Data Warehouse in the same place?
When you place your data source (SQL Data Warehouse) and your BI tool (Power BI) in different datacenters, there are two things you should be mindful of:
Latency and network speed between the data centers may affect the performance of your BI solution significantly (in a negative way), especially if you are manipulating and analysing large amounts of data. It depends a little on how you set up your Power BI Embedded. If you use DirectQuery, you will be hit with the latency penalty every time a query runs (whenever you look at your report); if not, you will only be hit with the latency when you refresh your imported data. However, without DirectQuery you may have to import more data in order to aggregate etc. from the imported dataset.
Egress: you pay for traffic going out of a data center. If you continuously send large amounts of data between two data centers, the egress cost can become a factor for you. In a normal setup the traffic charges are almost negligible, but if your BI solution streams a lot of data on every refresh it may add up to a lot of money.
I have an Azure SQL Database, S1 Standard (20 DTU), and I'm seeing vast variations in performance. I have a number of queries that power a set of reports on a small web site. When running these queries through Management Studio, the execution time varies from 0 to 60 seconds. The site isn't public, so there's no traffic yet - only me. Looking at the DTU usage, it spikes at around 50%. Can anyone help me understand where the performance difference is coming from?
You can follow this link to troubleshoot your query performance: http://social.technet.microsoft.com/wiki/contents/articles/1104.troubleshoot-and-optimize-queries-with-azure-sql-database.aspx. Enabling the Query Store is another option if you are on V12.
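As a rough sketch of that Query Store option on V12: you can enable it and then query the standard query_store catalog views for the heaviest statements, for example:

-- Enable the Query Store for the current database.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Later: list the queries with the highest accumulated duration.
SELECT TOP (10)
    qt.query_sql_text,
    SUM(rs.avg_duration * rs.count_executions) AS total_duration_microseconds,
    SUM(rs.count_executions) AS executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_duration_microseconds DESC;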
There are various factors that impact query performance: a cold buffer pool, SQL instance restarts caused by Azure maintenance (which clear the buffer pool), etc.
Looking for the best Azure services for holding and manipulating data for an e-commerce application (an online book store) with millions of books.
As of now the e-commerce application runs on ASP.NET with an on-premises SQL Server. Stock availability and prices change very frequently (every hour), so we are updating millions of rows within a specific timeline: millions of records are updated within 30 minutes using SSIS packages.
Now that we intend to move our application to Azure, can someone help me select the Azure data storage service that best meets our expectations?
Expectations:
1- Can store relational data
2- Data can be updated within a strict timeline - uses the minimum time to complete the full transaction
3- Highly scalable and highly available
As an experiment I am managing this data with an Azure SQL Database (P1 tier), but I'm not fully satisfied: a task that the on-premises SQL Server completes in 30 minutes takes Azure SQL more than 7 hours. I have also tried batching, but I'm still struggling.
Can someone please suggest a solution?
I'd be happy to help.
Unfortunately there is a lot that's different between a P1 in a remote data center - with a 99.99% SLA, automatic HA, and very specific CPU/IOPS/memory resources - and your on-premises server, where the app logic and SQL Server are all running in the same OS context. Putting network latency aside, I would guess that the hardware resources of that server (CPU/IOPS/memory) are many times larger than what a P1 has.
Based on the data you already have: upgrading to a P2 will approximately double the resources available for this test, a P3 will quadruple them, and so on.
Happy to talk offline to help you build a more apples to apples comparison. guyhay at microsoft.com
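On the batching the question mentions: if the load stays on an Azure SQL Database, one common pattern is to apply the hourly price/stock changes in fixed-size chunks so that each transaction (and its log I/O) stays small. This is only a sketch; dbo.Books, dbo.BookStaging and the column names are assumptions, not taken from the post:

DECLARE @batch int = 50000, @rows int = 1;

WHILE @rows > 0
BEGIN
    -- Update at most @batch rows per iteration, and only rows that actually changed.
    UPDATE TOP (@batch) b
        SET b.Price = s.Price,
            b.Stock = s.Stock
    FROM dbo.Books AS b
    JOIN dbo.BookStaging AS s
        ON s.BookId = b.BookId
    WHERE b.Price <> s.Price
       OR b.Stock <> s.Stock;

    SET @rows = @@ROWCOUNT;
END

Whether this fits inside the 30-minute window on a given tier still depends on the resource limits described above.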
I have a big performance problem with the STDistance function on SQL Azure.
I'm testing the same query
SELECT Coordinate
FROM MyTable
WHERE Coordinate.STDistance(@Center) < 50000
on a SQL Azure database (Standard) and on my local machine database.
Same database, same indexes (a spatial index on Coordinate), same data (400k rows), but I get two very different execution times.
The query takes less than 1 second on my local workstation and roughly 9 seconds on SQL Azure.
Does anybody else have the same problem?
Federico
You can try following things to reduce network latency:
Select the data center closest to the majority of your users
Co-Locate your DB with your application if your application is in Windows Azure as well
Minimize network round trips in your app
I would highly recommend you read this Azure SQL DB Perf guidance.
In addition to that, please check the existing service tier of your database and see whether its performance is capping out. If it is, you might want to upgrade the service tier of your DB. If you would like to monitor the performance and adjust the performance levels, please use this link.
Thanks
Silvia Doomra
Query performance depends on various factors; one of them is your performance tier. Verify whether you are hitting your resource limits (the sys.resource_stats DMV in the master database).
Besides that, there are a few other factors worth verifying:
index fragmentation, network latency, locking, etc.
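To check the resource-limit point above, a query along these lines against the logical server's master database shows the recent usage history (standard sys.resource_stats columns; substitute your own database name):

-- Run in the master database; history is kept in roughly 5-minute intervals.
SELECT start_time,
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'   -- hypothetical database name
ORDER BY end_time DESC;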
Application-level caching helps avoid hitting the database when the same query repeats.
You may also have to investigate which service tier and performance level are required based on the benchmarks here: AzureSQL-ServiceTier_PerformanceLevel