We're having trouble with the reliability of our SQL Azure database, and in an effort to see whether we are consuming excessive resources I tried to view our stats records. MSDN offers some initial guidance on querying sys.resource_stats.
Our sys.resource_stats view returns no records at all.
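For reference, the kind of query the documentation suggests (run against the master database; 'MyDatabase' is a placeholder for our database name) comes back completely empty for us:

-- Run against the master database of the logical server;
-- 'MyDatabase' is a placeholder name.
SELECT TOP (10) *
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'
ORDER BY start_time DESC;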
If I try to view the monitoring in the Azure portal, I get an error that the server could not retrieve metrics.
We have no idea why this view would not return any records or how to fix it.
There is a known issue that causes this telemetry not to be displayed. A fix for this issue is being rolled out.
You can query sys.dm_db_resource_stats in the database itself (not master) to get utilization data for the last hour. This view also has the advantage of showing finer-grained data than sys.resource_stats in master.
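For example, a minimal query, run while connected to the user database, that returns the utilization history the view keeps:

-- Run in the user database (not master).
-- The view retains roughly the last hour of data in 15-second intervals.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       max_worker_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;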
Related
I upgraded my Azure SQL database from P2 (250 DTUs) to P4 (500 DTUs).
But during heavy load we are again facing dropped connections and overall performance degradation.
It seems to me that the number of concurrent requests becomes too high and the database starts dropping connections.
What I understood is that P2 allows 400 concurrent workers whereas P4 allows 800.
https://learn.microsoft.com/en-us/azure/azure-sql/database/resource-limits-dtu-single-databases?view=azuresql
These concurrent workers do not seem to be tied to DTUs, since my DTU usage on P4 stays at 40-45% even under heavy load.
Can we get some data or logs to check the current concurrent workers?
Is there any other way to check this?
Is that the main reason for the dropped connections and performance degradation?
> Can we get some data or logs to check the current concurrent workers? Is there any other way to check this? Is that the main reason for the dropped connections and performance degradation?
You can monitor and fetch logs and metrics of your Azure SQL database by selecting Metrics, as shown below:
Here, I am checking successful connections to understand the session metrics, and workers percentage to understand whether the number of requests or queries is affecting performance.
You can also view various metrics relevant to your DTU tier by changing the selected metric, as below:
In order to troubleshoot the performance degradation, you can make use of Azure Diagnostics to resolve the issue or get insight, as below:
I selected High CPU utilization, which gave me a recommendation for diagnosing the issue along with a T-SQL query that can be run directly in the Azure SQL query editor or SSMS.
As you have faced an issue with scaling, you can also try to diagnose it by selecting the option below:
You can connect to your Azure SQL server in SSMS and query the data directly to get the worker and session usage.
You can query the sys.dm_db_resource_stats view in SSMS or the query editor and read max_worker_percent, which gives you the maximum concurrent workers (requests) as a percentage of the limit of the database's service tier.
Refer to the query below:
SELECT * FROM sys.dm_db_resource_stats;
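As a small follow-up sketch, you can aggregate the same view to see the peak worker and session usage over the window it retains:

-- Peak concurrent-worker and session usage, as a percentage of the
-- service tier's limits, over the ~1 hour the view retains.
SELECT MAX(max_worker_percent)  AS peak_worker_percent,
       MAX(max_session_percent) AS peak_session_percent
FROM sys.dm_db_resource_stats;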
And view the query-execution graphs and metrics, as below:
You can also find insights and improve performance with the options below:
You can enable Azure Monitor to monitor your SQL server and all its databases together, and get more insight into concurrent workers and all of the data from sys.dm_db_resource_stats without having to log into SSMS:
Go to Azure Monitor > select SQL from the left tab > Create new profile:
You can add an Ubuntu 18.04 Linux VM to fetch the logs and data from all your SQL Server databases for monitoring, as below:
In this manner, all your data is managed in a centralized monitoring pane in Azure Monitor.
Reference:
sys.dm_db_resource_stats (Azure SQL Database and Azure SQL Managed Instance) - SQL Server | Microsoft Learn
We are storing our Windows/Linux VM metrics and logs in an Azure diagnostics storage account for long-term retention. We keep this data in Log Analytics as well, but being cost conscious we keep only the minimal essential set, and only for one month. However, there seems to be no way to efficiently query the Table storage data when we need it, e.g. checking historical CPU usage for a particular machine over a specific period in the past, or checking the logs captured during that period. The partition key and row key are highly convoluted, with only very basic help available for the WAD table schemas and none at all for the LinuxsyslogVer2v0 table schema.

Is anyone else using the diagnostic-logs table storage for querying or reporting? If so, how do you query it for a specific host and time period? I can query on columns other than the partition/row key, but besides being time consuming, that will eventually get very expensive, considering it is a full table scan. I'd really appreciate any advice.
You should consider using Azure Data Explorer (ADX) for your long-term storage solution. It allows KQL queries over your long-term data and is the preferred method for keeping log/security data past the default retention of services like Log Analytics and Sentinel.
The pricing page for ADX can be a bit confusing; there is a website to help you estimate costs here: https://dataexplorer.azure.com/AzureDataExplorerCostEstimator.html
By default, logs ingested into Azure Sentinel are stored in Azure Monitor Log Analytics. This article explains how to reduce retention costs in Azure Sentinel by sending them to Azure Data Explorer for long-term retention.
Storing logs in Azure Data Explorer reduces costs while retaining your ability to query your data, and is especially useful as your data grows. For example, while security data may lose value over time, you may be required to retain logs for regulatory requirements or to run periodic investigations on older data.
https://learn.microsoft.com/en-us/azure/sentinel/store-logs-in-azure-data-explorer?tabs=adx-event-hub
I was trying to configure the default Cosmos DB metrics in Azure Monitor to get requests, throughput, and other related info, as given in the documentation.
One issue I found is that if I have a collection named test in a database in my Cosmos DB account, I sometimes see two collections under my database in Azure Monitor: Test and test.
But this is intermittent: if I change the time range, it sometimes starts showing only one collection. I have checked that there is no collection named "Test" (with a capital T) in my database.
Also, the reported results are actually split between the two metrics.
I could not find anything in the documentation about this.
Is this something on Azure's side, or is something wrong with my configuration?
I have the following setup in a Logic App for deleting entries from an Azure Storage Table. It works fine, but there is a problem when the storage table holds more than 1K entities: only the oldest 1K entities are deleted and the rest remain in the table.
I found that this is caused by the 1K batch limit, and that a "continuation token" is provided in this case.
The question is: how can I include this continuation token in my workflow?
Thank you very much for your help.
So ... I don't have enough reputation points to post an image, so I'll describe it instead:
Get Entities ([Table])
->
For each ([Get entities result List of Entities])
->
Delete Entity
It only returns 1000 records because Pagination is off by default. Go to the action's Settings, turn Pagination on, and set the Threshold to a large enough number. I tested with 2000 and it returned all records.
Even though the official doc doesn't mention Azure Table, the action does have this limit. For more information about pagination, refer to this doc: Get more data, items, or records by using pagination in Azure Logic Apps.
Based on my test, we cannot get the continuationToken header with the Azure Table Storage action; this capability might not be implemented for it.
A workaround could be to use a loop action and repeatedly check for remaining entities.
The continuationToken is included in some actions, for example the Azure Cosmos DB action, and you can utilize it with those. Here is a tutorial for how to use it.
How can I monitor the following metrics for Azure SQL Database:
- Buffer Cache hit ratio.
- Page life expectancy.
- Page Splits.
- Lock waits.
- Batch requests.
- SQL compilations.
The new Azure SQL Analytics

Azure SQL Analytics is a cloud monitoring solution for monitoring the performance of Azure SQL Databases at scale, across multiple elastic pools and subscriptions. It collects and visualizes important Azure SQL Database performance metrics, with built-in intelligence for performance troubleshooting on top.
Performance counters on SQL Azure collect only the SQL Server counters of a specific database and do not show Windows performance counters (like Page Life Expectancy). For some performance counters you need to take a first snapshot, then a second snapshot, and then subtract the counter values between the snapshots to get the actual counter value.
Please use the script provided in the following article to properly collect those counters.
Collecting performance counter values from a SQL Azure database.
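As a minimal sketch (not the article's full script), a ratio-style counter such as Buffer Cache Hit Ratio can be derived from sys.dm_os_performance_counters by dividing the counter by its "base" counter; cumulative counters such as Batch Requests/sec still need the two-snapshot subtraction described above:

-- Buffer Cache Hit Ratio = counter value / its "base" counter value.
-- Counter names in the DMV are space-padded, hence the RTRIM.
SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON b.object_name = a.object_name
   AND RTRIM(b.counter_name) = 'Buffer cache hit ratio base'
WHERE RTRIM(a.counter_name) = 'Buffer cache hit ratio';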
You are probably looking for dynamic management views. A good starting point is
Monitoring Azure SQL Database using dynamic management views.
Regarding Buffer Cache Hit Ratio, Page Life Expectancy, etc., check this blog:
SQL Server memory performance metrics – Part 4 – Buffer Cache Hit Ratio and Page Life Expectancy
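For the lock waits item specifically, a minimal sketch against sys.dm_db_wait_stats (the database-scoped wait-statistics DMV in Azure SQL Database):

-- Cumulative lock-related waits since the last reset.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_db_wait_stats
WHERE wait_type LIKE 'LCK%'
ORDER BY wait_time_ms DESC;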