Azure web apps used to have a Metrics per Instance option in the Monitoring group, which has now disappeared. It allowed you to look at the memory and CPU usage of a specific app within an App Service plan.
According to this article, the troubleshooting tools (including Metrics per Instance (Apps)) have now moved into Diagnose and Solve Problems.
You can find them under Diagnose and Solve Problems, as shown below:
I too was disappointed to see this was removed/moved. The only way I now know how to access this page is:
Go into your App Service Plan > Diagnose and Solve Problems > Under "Solutions to common problems" expand "My app is low on CPU" > Click the link to "CPU/Memory" and it will take you to the Metrics Per Instance (ASP) page.
I hope this is not a permanent solution, or hopefully I am just overlooking the "easy" way to get to this now. If anyone has a better way, please share!
Hope this helps!
Brian
You are probably looking at the Monitoring tile of your web app resource. Look in the Monitoring tile of the App Service plan your web app is running on, and you will see the CPU and Memory metrics.
As of November 2019 you have to use "Apply splitting" under Monitoring > Metrics and split by the Instance dimension to see per-instance values.
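If you prefer to pull the per-instance numbers programmatically rather than clicking through the portal, here is a minimal sketch using the Azure.Monitor.Query and Azure.Identity packages. The resource ID is a placeholder, and the metric names (CpuPercentage, MemoryPercentage) and the Instance dimension are what I believe apply to Microsoft.Web/serverfarms; double-check them against your plan's Metrics blade.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class PerInstanceMetrics
{
    static async Task Main()
    {
        // Placeholder: the full ARM resource ID of the App Service plan.
        var planResourceId =
            "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan>";

        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // Filtering on the Instance dimension is the programmatic equivalent of
        // "Apply splitting" by instance in the portal's Metrics blade.
        var result = await client.QueryResourceAsync(
            planResourceId,
            new[] { "CpuPercentage", "MemoryPercentage" },
            new MetricsQueryOptions
            {
                TimeRange = new QueryTimeRange(TimeSpan.FromHours(4)),
                Granularity = TimeSpan.FromMinutes(5),
                Aggregations = { MetricAggregationType.Average },
                Filter = "Instance eq '*'"
            });

        foreach (MetricResult metric in result.Value.Metrics)
        {
            foreach (MetricTimeSeriesElement series in metric.TimeSeries)
            {
                // Metadata holds the dimension value(s), i.e. the instance name.
                var instance = string.Join(",", series.Metadata.Values);
                foreach (MetricValue point in series.Values)
                {
                    Console.WriteLine($"{metric.Name} [{instance}] {point.TimeStamp:t} avg={point.Average}");
                }
            }
        }
    }
}
```

The filter "Instance eq '*'" asks Azure Monitor to return one time series per instance, which is the same thing "Apply splitting" does in the UI.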
Currently my app service in Azure performs well in staging environments, but in production I see unusual spikes in response times, and in a few scenarios it requires an app service restart.
I am trying to analyse this issue and tried to generate a thread dump using Kudu lite, but the container crashes when we try this; I am currently working with Microsoft on that.
Meanwhile, what are the best practices or approaches to understand this? I have tried to dig into the Application Insights logs, but there was not much info about worker threads that are hung or whether the thread pool is exhausted.
Please advise me on how to analyse and reverse engineer this to get to the bottom of the problem.
Thanks in advance!
In your app service plan, open "Diagnose and solve problems", analyze the CPU usage of your app on all instances, and see a breakdown of usage of all apps on your server. Then check the CPU utilization on each instance serving your app and identify the app, and the corresponding process, causing high CPU as a percentage. Also check Troubleshoot performance degradation.
You can use the Kudu console to download a diagnostic dump via Kudu -> Tools -> Diagnostic dump.
Once you download the diagnostic dump, you have the log files and a Deployment directory.
You can check the log files for details about the spike.
Refer here for more information
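If you'd rather script the dump download than click through the Kudu UI, something along these lines should work. It is a sketch that assumes Kudu's /api/dump endpoint (which, as far as I know, returns the same zip of diagnostic log files as Tools -> Diagnostic dump) and the site-level publishing credentials from your publish profile; the site name and password are placeholders.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class DownloadKuduDump
{
    static async Task Main()
    {
        // Placeholders: your site name and its publishing credentials
        // (the $sitename user and password from the publish profile).
        var site = "<your-site>";
        var user = "$<your-site>";
        var password = "<publish-profile-password>";

        using var http = new HttpClient();
        var basic = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basic);

        // Kudu's /api/dump returns a zip of the diagnostic log files,
        // the same content as Tools -> Diagnostic dump in the Kudu UI.
        var bytes = await http.GetByteArrayAsync($"https://{site}.scm.azurewebsites.net/api/dump");
        File.WriteAllBytes("diagnostic-dump.zip", bytes);
        Console.WriteLine($"Saved {bytes.Length} bytes to diagnostic-dump.zip");
    }
}
```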
I am trying to track down when our frontend started to be this slow. Recently I created new app services within the same service plan,
so now I have six apps (2 frontend, 4 backend) running under the same App Service plan on the Basic pricing tier. We also use Kudu for deployments.
Could that be the reason? Or how can I look for the reason?
This is the overview of that service plan:
I'd appreciate any ideas and suggestions.
#user122222 This is a high CPU issue and not a slow request issue as others have pointed out.
An immediate action you can take is to scale up. If you are using a B1 instance in the Basic tier, try to scale up to a B3, which will provide you with more CPU cores and RAM. See if that gives you relief. If so, then you likely need to remain at this instance level. At this point it would also be worthwhile to analyze your number of requests: you should scale up when you are running many sites or resource-intensive sites, and you should scale out when you are receiving a high number of requests.
My money is on the fact that you likely have an issue with your code that is causing a deadlock or similar. Your CPU usage graph is stuck at 100% usage over many hours. Even an overloaded ASP will see a few dips over the course of a few hours.
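To make the "deadlock or similar" point concrete: a classic way ASP.NET apps end up with hung requests and pegged worker processes is blocking on async code. This is only a hypothetical illustration (the controller and URL are made up), not the asker's actual code:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical Web API controller showing the classic "sync over async" deadlock.
// .Result blocks the request's SynchronizationContext while the awaited
// continuation is queued to run on that same context, so the request hangs.
public class ReportController : ApiController
{
    public string Get()
    {
        // BAD: blocks the request thread until the task completes -- which it never does here.
        return FetchDataAsync().Result;
    }

    private static async Task<string> FetchDataAsync()
    {
        using (var http = new HttpClient())
        {
            // Without ConfigureAwait(false), the continuation needs the captured
            // request context, which .Result above is still holding.
            return await http.GetStringAsync("https://example.com/api/data");
        }
    }
}
```

The fix is to make the call chain async end to end (an async Task<string> Get() that awaits), or at minimum use ConfigureAwait(false) in library code. A pile-up of requests stuck like this exhausts threads; combined with retry loops or lock contention it can also pin the CPU, and only a restart clears it.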
To troubleshoot high CPU usage, start by using the Diagnose and Solve Problems blade in your app service plan. This is the same troubleshooting tool that a support engineer would use in a paid technical support case. Use it to troubleshoot high CPU, not slow requests (based on your screenshot, it would appear the CPU is the culprit behind the slow requests).
This can tell you what app in the ASP is causing the issue and sometimes even tell you the process in that app that is causing the issue. Beyond this, I'd suggest creating and analyzing a memory dump of the problematic web app. More steps on how to do that here.
Please try to restart the worker instance.
https://learn.microsoft.com/en-us/rest/api/appservice/app-service-plans/reboot-worker#code-try-0
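For reference, the linked reboot-worker operation is just an ARM POST, so you can call it with any HTTP client once you have a management token. A minimal sketch follows; the IDs are placeholders, the exact path and api-version should be taken from the linked reference, and the worker name is, I believe, the RDxxxx... instance machine name shown in the portal or Kudu.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class RebootWorker
{
    static async Task Main()
    {
        // All of these values are placeholders for your own environment.
        var subscriptionId = "<subscription-id>";
        var resourceGroup  = "<resource-group>";
        var planName       = "<app-service-plan>";
        var workerName     = "<instance-name>";   // e.g. the RDxxxxxxxx instance name

        // Acquire an ARM token; DefaultAzureCredential picks up az login, managed identity, etc.
        var credential = new DefaultAzureCredential();
        var token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  $"/resourceGroups/{resourceGroup}/providers/Microsoft.Web/serverfarms/{planName}" +
                  $"/workers/{workerName}/reboot?api-version=2022-03-01"; // check the reference for a current api-version

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

        // The reboot-worker operation is a POST with an empty body.
        var response = await http.PostAsync(url, content: null);
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}
```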
For about a month now, one of our web applications hosted as a WebApp on Azure has been having some kind of problem, and I cannot find the root cause of it.
This WebApp is hosted on Azure on a 2 x B2 App Service Plan. On the same App Service Plan there is another WebApp that is currently working without any issue.
This WebApp is an ASP.NET WebApi application and exposes a set of REST APIs.
Effect: for no apparent reason (at least as far as I can tell so far), the ThreadCount metric starts to climb, sometimes very slowly, sometimes within a few minutes. When that happens, no requests seem to be served and the service is dead.
Solution: a simple restart of the application (and this means a restart of the AppPool) causes an immediate, obvious drop in ThreadCount, and everything starts working as usual.
Other observations: there is no "periodicity" to this event. It has happened in the evening, in the morning and in the afternoon. It seems that evening is a preferred timeframe, but I wouldn't say there is any correlation.
What I measured through Azure Monitor metrics:
- Request Count seems to oscillate normally. There is no peak that corresponds to the increase in ThreadCount.
- CPU and Memory seem to be normal, nothing strange.
- Response time looks like the other metrics.
- Connections (which should be related to sockets) oscillate normally, so I'd exclude something related to DB connections.
What can I do to understand what's going on?
After a lot of research, this turned out to be related to incorrect usage of Dependency Injection (using Ninject) in an application that wasn't designed to use it.
In order to diagnose it, I discovered a very helpful feature in Azure. You can reach it by entering the app that is having the problem, clicking "Diagnose and solve problems", then clicking "Diagnostic tools" and selecting "Collect .NET profiler report". In that panel, after configuring the storage for the diagnostic files, you can select "Add thread report".
In those reports you can easily see what's going wrong.
Hope this helps.
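The answer doesn't spell out exactly which DI mistake it was, so the following is only a hypothetical sketch of the kind of Ninject misuse that makes ThreadCount creep up: building a new kernel per request and resolving services that own their own threads, without ever disposing anything. IQueueWorker, QueueWorker and AppModule are made-up names.

```csharp
using System.Threading;
using Ninject;
using Ninject.Modules;

// Hypothetical service that owns a background thread for its whole lifetime.
public interface IQueueWorker { string Peek(); }

public class QueueWorker : IQueueWorker
{
    private readonly Thread _pump;
    public QueueWorker()
    {
        // Each instance starts (and keeps) its own background thread.
        _pump = new Thread(() => Thread.Sleep(Timeout.Infinite)) { IsBackground = true };
        _pump.Start();
    }
    public string Peek() => "head-of-queue";
}

public class AppModule : NinjectModule
{
    // BETTER: one composition root for the whole app, with a shared instance
    // (or InRequestScope via Ninject.Web.Common) so threads are not multiplied.
    public override void Load() => Bind<IQueueWorker>().To<QueueWorker>().InSingletonScope();
}

public static class BadCompositionExample
{
    // BAD: a new kernel, and therefore a new thread-owning QueueWorker, per call,
    // never disposed -- ThreadCount grows until the app is recycled.
    public static string HandleRequest()
    {
        var kernel = new StandardKernel(new AppModule());
        return kernel.Get<IQueueWorker>().Peek();
    }
}
```

The point of the "better" binding is that thread-owning services are created once by a single composition root (or once per request and then released), instead of once per call.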
I deployed an Azure web app back in July and it had been running flawlessly up until about three weeks ago. At that point I noticed my CPU utilization constantly between 80% and 100%, with no corresponding increase in traffic. The first time I saw this, after concluding it wasn't my app, or increased traffic, causing it, I restarted the web app service and the CPU utilization returned to its normal 5% to 15%. Then after a couple of days it started to do it again. And, again, a restart solved the issue.
My question is this. Is this normal to have to restart the web service every day or so? And, if so, why?
Assuming no changes have been made to your code and you have not seen a corresponding increase in traffic, it is not normal. An Azure Web App with no app deployed should almost always stay at 0% CPU utilization. I say "almost always" because Microsoft does run diagnostic and monitoring tools in the background that can cause some very temporary spikes. See here for a thread on that particular issue.
My recommendations are:
1. When CPU pegs and stays pegged, log into your SCM site. Check the Process Explorer and confirm that it's your w3wp.exe (note there's a separate w3wp.exe for your SCM site) that's pegging the CPU.
2. Ensure that you don't have any Site Extensions or WebJobs that are losing their mind. You can check your installed Site Extensions on the SCM site under the Site Extensions -> Installed tab. Any WebJobs will show up in your SCM process explorer as separate processes from step #1.
3. Log into the Azure Portal and browse to your Web App's management blade. Go to the Diagnose and Solve Problems blade. From here, you can try "Metrics per Instance" and go through all of the perf counters to see if they give you a clue as to what's wrong. For example, I had SignalR go nuts once and only found it by seeing that my thread count was out of control.
4. On the Diagnose and Solve Problems blade, you can also check Application Events.
5. You may have some light shed on this by installing Application Insights on your web application. It has a free tier that will likely have enough space to troubleshoot for a few days. If this is something going bananas with your code, you may get some insight here (see the sketch at the end of this answer).
6. I'm including failed request tracing logs here for completeness, but these would likely show up in Application Insights.
7. If you've exhausted all of these possibilities, file a support ticket with Microsoft. As the above link shows, they have access to diagnostic tools that we don't and can eliminate the possibility of a runaway diagnostics or infrastructure process. I don't know how much help they can be if the CPU spike is due to your own w3wp.exe spiking the CPU.
Of course, if your app is seriously easy to redeploy and it's not a ridiculous hassle, you can just re-provision it and see if you see the same behavior.
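To make step 5 a bit more concrete, here is a rough sketch of custom telemetry that makes a runaway thread count (like the SignalR case in step 3) show up as an obvious trend in Application Insights. It assumes the Microsoft.ApplicationInsights package; the metric names and the one-minute interval are arbitrary choices, and the connection string comes from your own Application Insights resource.

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch: pushes the process thread count and the free ThreadPool capacity
// to Application Insights once a minute, so a runaway component shows up
// as an obvious trend. Metric names and interval are arbitrary.
public static class ThreadTelemetry
{
    public static Timer Start(string connectionString)
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.ConnectionString = connectionString;   // from your Application Insights resource
        var client = new TelemetryClient(config);

        return new Timer(_ =>
        {
            ThreadPool.GetAvailableThreads(out var workerFree, out var ioFree);
            client.GetMetric("Process Threads").TrackValue(Process.GetCurrentProcess().Threads.Count);
            client.GetMetric("Free Worker Threads").TrackValue(workerFree);
            client.GetMetric("Free IO Threads").TrackValue(ioFree);
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }
}
```

You'd call Start once at application startup (e.g. Application_Start) and hold on to the returned timer for the lifetime of the app.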
I currently have four websites hosted in an S2 hosting plan, and this evening I received a CPU percentage alert. I went to the management portal and checked all of the sites hosted in the plan, but found no reason for it to be so high. After checking site by site and finding no evidence of what could be causing this problem, I stopped every site; much to my surprise, the CPU usage did not drop and has been at a staggering 50% for the last 30 minutes. Is there any way to find out what is causing this? Do you guys have any idea whether it could be a bug in the Azure sites service?
Thanks in advance.
A couple of things to check for:
- Do you have any WebJobs on that system? They also consume resources but don't show up in all reports.
You can also check the Kudu process explorer to see if there are any other processes running (maybe you've been hacked and someone is running something on your box?). If you've never used the Kudu tool, it is quite handy; to get to it in your browser, put '.scm' after the site name in your URL. For example, if your site is
'mysite.azurewebsites.net'
the Kudu tools are at
'mysite.scm.azurewebsites.net'
There is a process explorer in there where you can see what processes are running under your account.
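That same process list is also exposed over Kudu's REST API, which is handy if you want to capture it from a script while the CPU is stuck high. A minimal sketch, assuming the /api/processes endpoint and the site-level publishing credentials (site name and password are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ListKuduProcesses
{
    static async Task Main()
    {
        // Placeholders: your site name and its publishing credentials.
        var site = "<your-site>";
        var user = "$<your-site>";
        var password = "<publish-profile-password>";

        using var http = new HttpClient();
        var basic = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basic);

        // /api/processes is the REST counterpart of the Kudu process explorer;
        // it returns a JSON array with (at least) the id and name of each process.
        var json = await http.GetStringAsync($"https://{site}.scm.azurewebsites.net/api/processes");
        Console.WriteLine(json);
    }
}
```

If you need an actual memory dump of one of those processes, the process explorer UI has a download option for that (and, if I remember right, a corresponding per-process endpoint under /api/processes).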