I have an App Service that just runs a WebJob continuously, pulling work off a queue as it arrives. Recently I noticed that the metrics on the App Service Overview blade no longer show anything:
I used to have those charts on my dashboard. If I go into Monitoring and then Metrics per instance, I can see the CPU time for the same period as the chart above.
Am I missing a setting somewhere? I'm wondering whether the metrics on the Overview now only show details for the website itself and not the WebJobs running on it. Alternatively, has all this functionality been moved into the Monitoring area, and should I stop using the Overview? That would be a shame, as the Monitoring area doesn't appear to give you much control over the time range.
Just to close this off: it appears that Azure was having a bad day, as the metrics on the Overview tab have now reappeared after a few days!
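For anyone who hits this later: a quick way to confirm the platform is still collecting the data, even while the Overview charts are blank, is to query the metric directly. Below is a minimal sketch using the azure-monitor-query SDK; the resource ID is a placeholder and it assumes azure-identity credentials are available.

```python
# Minimal sketch: pull CpuTime for an App Service via azure-monitor-query,
# independent of the portal's Overview charts. Resource ID is a placeholder.
# pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<app-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

response = client.query_resource(
    resource_id,
    metric_names=["CpuTime"],
    timespan=timedelta(days=7),          # same window as the Overview chart
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.total:
                print(point.timestamp, point.total)
```

If this returns data points while the blade stays empty, the gap is in the portal charts rather than in metric collection.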
I wanted to monitor Azure Logic Apps with the help of Azure Monitor alerts. In the alerts, I came across a metric, Run Throttled Events, which has been showing some numbers in recent days, but I couldn't find the actual events anywhere to resolve the issue. Is it possible to view the actual run throttled events in the Azure Portal?
You will need to set up diagnostic logging for Logic Apps; see here.
When you are done with the setup and an initial run-through of the logs, and you are interested in more advanced queries over this log data, go here.
Specifically on throttling, see this. Also take a look at the limits set for Logic Apps here.
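As a rough illustration, once the diagnostics are routed to a Log Analytics workspace you can query the workflow runtime events from code as well. This is a minimal sketch with the azure-monitor-query SDK; the workspace ID is a placeholder and the column names and filter values in the Kusto query are assumptions to check against what actually appears in your AzureDiagnostics table.

```python
# Minimal sketch: query Logic Apps runtime events (including throttling) from
# a Log Analytics workspace. Column names/filter values are assumptions --
# verify them against the AzureDiagnostics rows for your workflow.
# pip install azure-monitor-query azure-identity
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder

query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.LOGIC"
| where Category == "WorkflowRuntime"
| where status_s == "Throttled"            // assumption: adjust to your data
| project TimeGenerated, resource_workflowName_s, OperationName, status_s
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(days=7))

for table in response.tables:
    for row in table.rows:
        print(row)
```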
I have an App Service Plan which is hosting 2 Web APIs. The issue I am facing is that I am unable to view details such as CPU Usage, Memory Percentage, Requests, Average Response Time, etc.
These can be found under the Overview tab for both the App Service and the App Service Plan, but no data is being recorded, even if I retrieve data for the whole week rather than just the last hour.
I have also confirmed that I am hitting the correct App hosted on the correct Plan. Have I missed anything? Do I need to enable something?
I have also generated around 20k requests in the last few hours so I expect something to show up.
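One way to rule out a portal-only display problem would be to ask the Azure Monitor REST API for the same metrics directly, as in the sketch below. The resource ID and timespan are placeholders, and api-version 2018-01-01 is an assumption; other metric names from the Overview charts can be added to the list.

```python
# Minimal sketch: fetch App Service metrics straight from the Azure Monitor
# REST API to see whether any data is being recorded at all.
# pip install requests azure-identity
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<api-app-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.get(
    f"https://management.azure.com{resource_id}/providers/microsoft.insights/metrics",
    headers={"Authorization": f"Bearer {token}"},
    params={
        "api-version": "2018-01-01",               # assumption
        "metricnames": "Requests,CpuTime",
        "timespan": "2024-01-01T00:00:00Z/2024-01-08T00:00:00Z",  # placeholder week
        "interval": "PT1H",
        "aggregation": "Total",
    },
)
resp.raise_for_status()

for metric in resp.json().get("value", []):
    name = metric["name"]["value"]
    points = metric["timeseries"][0]["data"] if metric["timeseries"] else []
    print(name, "data points:", len(points))
```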
I have an MVC-based web app running on Azure. Its CPU performance has been very predictable over the past five months. However, over the past 24 hours, and most recently from 1:00 pm to 1:30 pm Eastern time today, I have had CPU spikes nearing 100%. The image below, which covers the past 7 days, shows this.
This CPU spike is not coming from my app or my users. There has not been an abnormal increase in users, user activity or queries. I also checked Google Analytics to see if perhaps my site was getting hammered by random users etc. It showed nothing out of the ordinary.
There was also a corresponding huge jump in data going out of my site, which is highly unusual. The second image shows data egress for the past week. However, as I said, I checked my Azure SQL Database Query Store and it shows absolutely nothing out of the ordinary. Furthermore, my DTU percentage never even neared 100% during this time, which it certainly would have if this much data had been pulled from the database.
I have basically ruled out anything amiss on my end. Is there some way I can check to see if there were issues with Azure causing this?
If you suspect an underlying Azure platform issue, both Azure Service Health and Azure Resource Health are useful resources for determining whether you are being impacted by a platform issue.
Azure Service Health provides personalized service health information when Azure platform issues impact your resources.
https://learn.microsoft.com/en-us/azure/service-health/service-health-overview
Azure Resource Health provides visibility into whether your Azure resources are healthy or unhealthy.
https://learn.microsoft.com/en-us/azure/service-health/resource-health-overview
For a list of supported Azure resources, you can refer to this article which also describes the set of health checks being performed.
https://learn.microsoft.com/en-us/azure/service-health/resource-health-checks-resource-types
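If you would rather check programmatically, the current availability status can also be pulled from the Microsoft.ResourceHealth provider. This is a minimal sketch; the resource ID is a placeholder and the api-version is an assumption to adjust for your subscription.

```python
# Minimal sketch: read the current Resource Health availability status for a
# resource. Resource ID is a placeholder; api-version is an assumption.
# pip install requests azure-identity
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<app-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

resp = requests.get(
    f"https://management.azure.com{resource_id}"
    "/providers/Microsoft.ResourceHealth/availabilityStatuses/current",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2020-05-01"},  # assumption
)
resp.raise_for_status()

props = resp.json().get("properties", {})
print("availabilityState:", props.get("availabilityState"))  # e.g. Available / Degraded / Unavailable
print("summary:", props.get("summary"))
```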
I've recently been playing around with Bing's Image Search API; however, I have a concern I hope to resolve.
It has to do with the limit on the number of API requests allowed per month. After doing some reading, it seems that if I were to exceed this limit, my Azure account would be billed based on how far over the limit my API call count goes. Is it possible to set up some kind of alert through the Azure management portal that will stop the API from processing any more calls once a specific threshold has been passed?
If anyone has experience using the Search API and can enlighten me, that would be great.
Try metrics monitoring. Go to the service within the Azure Portal, scroll down to Monitoring -> Metrics, and then click Add Metric Alert.
You can create an alert based on the number of successful calls or total calls, and the alert can notify you via e-mail. Additionally, if you want to take action automatically once the threshold is reached, you can use a webhook to call out to a web application, or an Azure Automation runbook to run PowerShell scripts or some code to prevent overuse. You can also use Logic Apps for that. Check the following link for further details and examples at the end of the page:
https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/insights-webhooks-alerts
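As a rough illustration of the webhook route, a minimal receiver could look like the sketch below. The payload keys ("context", "name", "condition") are assumptions based on the classic metric alert webhook schema, so log the raw body first and adjust to whatever your alert actually sends; the reaction to the alert (disabling the key, flipping a flag, etc.) is up to you.

```python
# Minimal sketch of a webhook receiver for a metric alert notification.
# The payload keys below are assumptions -- inspect the real payload and adjust.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        context = payload.get("context", {})  # assumption: classic alert schema
        print("Alert fired:", context.get("name"), context.get("condition"))

        # React to the threshold being crossed here, e.g. regenerate/disable
        # the API key or set a flag so your app stops calling the Search API.

        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```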
For some reason I have no monitoring metrics on the dashboard or monitor page for my Azure Website.
This is how my metrics appear:
There may be a transient issue with the management functionality. Please try again later, and also try a different duration (1 week, for example).
It seems to have resolved itself after moving from the free account to a subscription.