I upgraded my Azure SQL DB from P2 (250 DTUs) to P4 (500 DTUs).
But during heavy load we are again facing dropped connections and overall performance degradation.
My understanding is that the number of concurrent requests becomes too high and the database starts dropping connections.
From what I understood, P2 has 400 concurrent workers whereas P4 has 800.
https://learn.microsoft.com/en-us/azure/azure-sql/database/resource-limits-dtu-single-databases?view=azuresql
These concurrent worker limits do not appear to be tied to DTUs, as my DTU usage on P4 stays at 40-45% even under heavy load.
Can we get some data or logs to check the current concurrent workers?
Is there any other way to check it?
Is that the main reason for the dropped connections and performance degradation?
You can monitor and fetch logs and metrics for your Azure SQL DB by selecting the relevant metrics in the portal's Metrics blade.
Here, I am checking successful connections to understand the session metrics, and worker percentage to understand whether the number of requests or queries is affecting performance.
You can also use various other metrics based on your DTU model by changing the metric selection.
To troubleshoot the performance degradation, you can make use of Azure Diagnostics to get insights and suggested fixes.
I selected High CPU utilization, which gave me a recommendation for diagnosing the issue along with a T-SQL query that can be run directly inside the Azure SQL query editor or SSMS.
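The exact query the portal generates may differ, but a typical equivalent for finding the top CPU consumers from sys.dm_exec_query_stats looks like this (a sketch, not the portal's own script):

-- Top 10 statements by total CPU time (total_worker_time is reported in microseconds)
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;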
Since you have faced issues after scaling, you can also try to diagnose the problem by selecting the corresponding scaling option.
You can connect to your Azure SQL server in SSMS and query the log data directly to get the worker or sessions.
You can query the sys.dm_db_resource_stats view in SSMS or the query editor and look at max_worker_percent,
which gives you the maximum concurrent workers (requests) as a percentage of the limit of the database's service tier.
Refer to the query below:
SELECT * FROM sys.dm_db_resource_stats;
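For example, a more targeted query (the view keeps roughly one hour of data, one row per 15-second interval) that returns only the worker and session percentages:

-- Recent resource usage, newest first: workers and sessions as a percentage of the tier limit
SELECT end_time,
       avg_cpu_percent,
       max_worker_percent,   -- concurrent workers (requests) vs. the service tier limit
       max_session_percent   -- concurrent sessions vs. the service tier limit
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;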
You can also review the query execution graphs and metrics there.
You can also find insights and improve performance with the portal's performance recommendation options.
You can enable Azure Monitor to monitor your SQL server and all its databases together and get more insight into the concurrent workers and all of the sys.dm_db_resource_stats data without having to log into SSMS.
Go to Azure Monitor > select SQL from the left tab > Create new profile.
You can add one Ubuntu 18.04 Linux VM to fetch the logs and data from all your SQL databases for monitoring.
In this manner, all your data is managed in a centralized monitoring pane in Azure Monitor.
Reference:
sys.dm_db_resource_stats (Azure SQL Database and Azure SQL Managed Instance) - SQL Server | Microsoft Learn
Related
How can I monitor the following metrics for Azure SQL Database:
- Buffer Cache hit ratio.
- Page life expectancy.
- Page Splits.
- Lock waits.
- Batch requests.
- SQL compilation.
The new Azure SQL Analytics
Azure SQL Analytics is a cloud monitoring solution for monitoring
performance of Azure SQL Databases at scale across multiple elastic
pools and subscriptions. It collects and visualizes important Azure
SQL Database performance metrics with built-in intelligence for
performance troubleshooting on top.
Performance counters on SQL Azure only collect SQL Server counters of a specific database and do not show Windows performance counters (like Page Life Expectancy). For some performance counters you need to take a first snapshot, then a second snapshot, and then subtract the counter values between the snapshots to get the actual counter value.
Please use the script provided on the following article to properly collect those counters.
Collecting performance counter values from a SQL Azure database.
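As a rough sketch of that snapshot-and-subtract approach (the linked script is more complete; 'Batch Requests/sec' is just an example of a cumulative counter):

-- Snapshot a cumulative counter, wait, snapshot again, then subtract to get a per-second rate
SELECT counter_name, cntr_value
INTO #snapshot1
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';

WAITFOR DELAY '00:00:10';   -- sampling interval of 10 seconds

SELECT RTRIM(s2.counter_name) AS counter_name,
       (s2.cntr_value - s1.cntr_value) / 10.0 AS value_per_second
FROM sys.dm_os_performance_counters AS s2
JOIN #snapshot1 AS s1
    ON s1.counter_name = s2.counter_name
WHERE s2.counter_name = 'Batch Requests/sec';

DROP TABLE #snapshot1;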
You are probably looking for dynamic management views. A good starting point is
Monitoring Azure SQL Database using dynamic management views.
Regarding Buffer Cache Hit Ratio, Page Life Expectancy, etc., check this blog:
SQL Server memory performance metrics – Part 4 – Buffer Cache Hit Ratio and Page Life Expectancy
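For a quick look at those two counters without the full walkthrough, a sketch against sys.dm_os_performance_counters (ratio counters have to be divided by their matching '... base' counter; the Buffer Manager object name is matched loosely because its prefix can vary):

-- Buffer cache hit ratio (ratio / base) and page life expectancy in one result
SELECT CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(5, 2)) AS buffer_cache_hit_ratio_pct,
       ple.cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
    ON b.object_name = r.object_name
   AND b.counter_name = 'Buffer cache hit ratio base'
JOIN sys.dm_os_performance_counters AS ple
    ON ple.object_name LIKE '%Buffer Manager%'
   AND ple.counter_name = 'Page life expectancy'
WHERE r.object_name LIKE '%Buffer Manager%'
  AND r.counter_name = 'Buffer cache hit ratio';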
We are looking to host our Azure web application in 2-3 locations globally to reduce load latency and for BCP if an application server fails (we will use Traffic Manager to direct traffic).
We will be co-locating the Azure SQL DB databases along with the web app. We want to get the databases synced in near real time. The data volumes will be under 1 GB on any given day. There will be no on-premises database. The intent here is not to have a master-slave setup but rather active-active databases.
Given Azure Data Sync is now in GA,
a) What kind of sync delay should I plan for? (I can tolerate a few seconds of latency.)
b) Will there be any performance issues in either DB during these sync periods? How do conflicts get resolved - latest timestamp?
c) Can I use out-of-the-box Azure portal functionality, or will I need additional tools?
Minimum sync frequency is 5 minutes.
Check this out https://learn.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
With a Small instance worker role containing a WCF service, I want it to auto-scale if memory usage goes to n%. The WCF application uses Azure SQL Database, which is a singleton in my application. If/when the application tier autoscales, what is "different" between the two systems that can be tracked by the database? Is there a way to alter the "Application Name" in a DB connection string when things scale up? Is there an Azure-specific ID that can be trapped and logged in the DB? I could fall back on hacking the connection string and passing that into SQL myself, but I am hoping there is something built-in I can use now.
I tried looking around on the Azure team's site(s) but have seen nothing clear/definitive.
Thanks.
Connections to SQL Azure are tracked by the host name, which is different from machine to machine. Is this what you're trying to achieve by passing the machine name into the connection string?
You can monitor the connections to SQL Azure database by executing the following query:
SELECT
e.connection_id,
s.session_id,
s.login_name,
s.last_request_end_time,
s.cpu_time,
s.host_name
FROM
sys.dm_exec_sessions s
INNER JOIN sys.dm_exec_connections e
ON s.session_id = e.session_id
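On the Application Name part of the question: the Application Name keyword you set in the connection string is exposed as program_name in the same DMV, so each scaled-out instance can identify itself without hacking the string further. A sketch (the instance naming below is just an example):

-- Group current sessions by the Application Name each instance put in its connection string,
-- e.g. "Application Name=MyWcfService-Instance0" (hypothetical naming)
SELECT s.program_name,   -- reflects the connection string's Application Name keyword
       s.host_name,
       COUNT(*) AS session_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.program_name, s.host_name
ORDER BY session_count DESC;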
I also want to mention that Azure's native auto-scaling feature does not support auto-scaling based on memory utilization, only on CPU utilization. To auto-scale based on anything other than CPU or queue counts, you'll need to use the WASABi API or AzureWatch.
How do I see if an SQL Azure database is being throttled?
I want to see data like: what percentage of time it was throttled, the count of throttles, and the top reasons for throttling.
See https://stackoverflow.com/questions/2711868/azure-performance/13091125#13091125
Throttling is the least of your troubles. If you need performance then you would be best served to build your own DB servers using VM roles. I found that the performance of these is vastly improved over SQL Azure. For fault tolerance you can provision a primary and a failover in a different VM in a different region if necessary. Make sure that the DB resides on the local drive.
I don't believe that information is currently available. However, the team does share reasons why you could be throttled and how to handle it (see here).
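If your server exposes the sys.event_log view in the logical master database (it may not be available on older servers, and the exact event_type values used below are an assumption), a sketch like the following surfaces recent throttling events:

-- Run while connected to the logical master database
SELECT start_time,
       end_time,
       database_name,
       event_type,           -- e.g. throttling, throttling_long_transaction (assumed values)
       event_subtype_desc,   -- which resource triggered the throttle
       event_count,
       description
FROM sys.event_log
WHERE event_type LIKE 'throttling%'
ORDER BY start_time DESC;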
As far as I know the key points to migrate an existing database to SQL Azure are:
- Tables have to contain a clustered index. This is mandatory (see the query after this list for finding tables that still need one).
- Schema and data migration should be done through Data Sync, bulk copy, or the SQL Azure Migration Wizard, but not with the restore option in SSMS.
- The .NET code should handle the transient conditions related to SQL Azure.
- Logins are created in the master database.
- Some T-SQL features may not be supported.
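As a quick pre-migration check for the clustered index requirement, a sketch using the standard catalog views (in sys.indexes, index_id 0 means the table is a heap):

-- List tables without a clustered index; these need one before migrating to SQL Azure
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0   -- index_id 0 = heap (no clustered index)
ORDER BY s.name, t.name;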
And I think that's all, am I right? Am I missing any other consideration before starting a migration?
Kind regards.
Update 2015-08-06
The Web and Business editions are no longer available; they have been replaced by the Basic, Standard and Premium tiers.
CLR stored procedure support is now available.
New: SQL Server support for Linked Server and Distributed Queries against Windows Azure SQL Database, more info.
Additional considerations:
Basic tier allows 2 GB
Standard tier allows 250 GB
Premium tier allows 500 GB
The following features are NOT supported:
Distributed Transactions, see feature request on UserVoice
SQL Service broker, see feature request on UserVoice
I'd add in bandwidth considerations (for the initial population and on-going traffic). This has both cost and performance implications.
Another potential consideration is any long running processes or large transactions that could be subject to SQL Azure's rather cryptic throttling techniques.
Another key area to point out is SQL Jobs. Since SQL Agent is not running, SQL Jobs are not supported.
One way to migrate these jobs is to refactor them so that a worker role can kick off the tasks. The content of the job might be moved into a stored procedure to reduce re-architecture. The worker role could then be designed to wake up at the appropriate time and kick off the stored procedure.