The Prometheus windows_exporter can export IIS metrics if the IIS collector is explicitly enabled when the service is started. The project's main documentation page lists the exported IIS metrics: IIS Docs
There are no examples and no notes about what each metric is supposed to tell you, and I'm not very familiar with them. I need to understand whether these metrics can tell me how much memory (RAM) each app pool active on a specific machine is using, or whether I have to write a custom exporter.
I already have the exporter running with IIS metrics enabled on the machine I need to monitor, and I have tried some queries with these metrics:
windows_iis_worker_file_cache_memory_bytes
windows_iis_worker_file_cache_max_memory_bytes
But I can't tell whether these are what I'm looking for. Can anyone point me in the right direction? And if these metrics already do the job I need, please reply with the correct query.
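For what it's worth, the two file-cache metrics above describe IIS's internal file cache, not app-pool RAM. A hedged sketch of an alternative, assuming the exporter's process collector is also enabled and assuming recent windows_exporter metric and label names (`windows_process_working_set_bytes`, `process="w3wp"`; older wmi_exporter builds used different names):

```promql
# Working-set memory of each IIS worker process (w3wp.exe).
# By default each app pool runs in its own w3wp.exe instance, so the
# per-process series roughly approximate per-app-pool RAM usage.
# Metric/label names are assumptions for recent windows_exporter versions.
windows_process_working_set_bytes{process="w3wp"}
```

Mapping each w3wp.exe series back to a specific app pool name would still require correlating process IDs, since the process collector does not know about app pools.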
For years I have kept IIS logging always active on my sites. I currently use Windows Server 2008 R2 and Windows Server 2012. The official documentation I follow is this:
https://learn.microsoft.com/en-us/windows/desktop/http/iis-logging
I have looked for official guidance from Microsoft on whether it is recommended to keep this feature always active, or better to enable it only when you want to trace a problem.
Do you know if there is any official information?
Is there any study that quantifies how much response times or general site speed degrade when logging is active?
If I use an architecture with an F5, A10, or Apache load balancer connecting to my nodes, is it recommended to always enable logging on the load balancer if it is deactivated on the nodes?
thanks!
IIS logging is processed on threads separate from the gateway services and app pools, which means it should not degrade performance.
Don't just take my word for it. If you want to confirm this, you can use a capacity-testing tool (not on your production server, of course). Test your capacity with logging turned on and with it turned off; you will see that the results are comparable.
Azure Web Apps used to have a "Metrics per Instance" option in the Monitoring group, which has since disappeared. It allowed you to look at the memory and CPU usage of a specific app within an App Service.
According to this article, the troubleshooting tools (including Metrics per Instance (Apps)) have now moved into Diagnose and solve problems.
You can find them under Diagnose and solve problems as shown below:
I too was disappointed to see this was removed/moved. The only way I now know how to access this page is:
Go into your App Service Plan > Diagnose and Solve Problems > Under "Solutions to common problems" expand "My app is low on CPU" > Click the link to "CPU/Memory" and it will take you to the Metrics Per Instance (ASP) page.
I hope this is not a permanent solution, or hopefully I am just overlooking the "easy" way to get to this now. If anyone has a better way, please share!
Hope this helps!
Brian
You are probably looking at the Monitoring tile of your web app resource. Look at the Monitoring tile of the App Service plan your web app is running on, and you will see the CPU and memory metrics.
As of November 2019 you have to use "Apply splitting" under Monitoring > Metrics.
What I am trying to achieve is similar to what you can see in the mongo-azure repo. Specifically, I want to write an Azure diagnostics configuration file in such a way that I can get all instances of a performance counter, for example
\Processor(*)\% Processor Time
and it doesn't seem to be working: no data is visible in the table in my storage account.
Is it achievable at all with configuration, and if so, how?
UPD: We were able to get this working for a simple single VM (so it is possible!), but for some reason it still doesn't work for VMs in a VMSS where a Service Fabric cluster is running.
UPD #2: We upgraded to VS 2015 tools 1.5 and now it magically works. I am not really sure whether that was the root cause or we had made a mistake somewhere else.
Is it achievable at all with configuration, and if so, how?
Based on the documentation here, it seems it is not possible. From this link:
Performance counters available for Microsoft Azure
Azure provides a subset of the performance counters available for Windows Server, IIS, and the ASP.NET stack. The following table lists some of the performance counters of particular interest for Azure applications.
The table that follows only includes Processor(_Total).
This is possible. We do this in CloudMonix when monitoring VMSS, and it works.
How are you instrumenting Diagnostics, and which tables are you looking at?
Specifying \LogicalDisk(*)\Available Megabytes yields all drives and their free space, for example.
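A wildcard counter is declared the same way as a single-instance counter in the diagnostics configuration. A minimal sketch of a diagnostics.wadcfg fragment, assuming the classic Azure Diagnostics schema; the quota and sample rates are illustrative values, not recommendations:

```xml
<!-- Sketch only: element names per the classic diagnostics.wadcfg schema. -->
<PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
  <!-- (*) collects every instance, e.g. each CPU core plus _Total -->
  <PerformanceCounterConfiguration counterSpecifier="\Processor(*)\% Processor Time"
                                   sampleRate="PT15S" />
  <!-- yields one series per logical drive -->
  <PerformanceCounterConfiguration counterSpecifier="\LogicalDisk(*)\Available Megabytes"
                                   sampleRate="PT15S" />
</PerformanceCounters>
```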
I am looking to find out the response times of my Node.js application when, let's say, 1000 users use it simultaneously. I believe this is called stress testing. How can I achieve this?
I am new to testing and have yet to learn the tools that are used.
Edit: I need to know how to have virtual users for the application.
In order to understand the response times for an application under load, you need two components:
A load driver
There are a number of available tools that will simulate users making HTTP requests. A simple open-source option is Apache JMeter. The following page in the documentation shows you how to create a web test plan, including adding virtual users:
http://jmeter.apache.org/usermanual/build-web-test-plan.html
A performance monitoring tool
In order to understand how your application performs under load and to measure its responsiveness, you need a performance monitoring tool. One option is to track the performance of 'http' monitoring events using Node Application Metrics, either through its API directly or with an open-source monitoring stack like Elasticsearch/Kibana via the ELK Connector for Node Application Metrics. There's a getting-started guide to monitoring Node.js application performance with Elasticsearch/Kibana here:
https://developer.ibm.com/open/2015/09/07/using-elasticsearch-and-kibana-with-node-application-metrics/
I am relatively new to Azure. I have a website that has been running for a couple of months with not much traffic: when users are on the system, the various dashboard monitors go up, and they flatline the rest of the time. This week, CPU time went way up while there were no requests and no data going in or out of the site. Is there a way to determine the cause of this CPU activity when the site is not active? It doesn't make sense to me that CPU activity should be attributed to my site when there is no site activity.
If your website does significant processing at application start, it is possible that your VM was rebooted or your app pool was recycled, so your start-up handler executed again (which would cause CPU to spike without any requests).
You can analyze this by adding application logging to your Application_Start event (after initializing tracing). There is another answer detailing how to enable logging, but you can also consult this link.
You need to collect data to understand what's going on, so the first things I would do are:
1. Go to the Azure management portal -> your website (assuming you are using Azure Websites) -> Dashboard -> Operation logs, and check whether there is any suspicious activity.
2. Download the logs for your site using any FTP client and analyze what is happening. If there is not much data, I would suggest adding more logging to your application to see what is happening or which module is spinning.
A great way to detect CPU spikes, and even to pinpoint slow areas of your application, is to use a profiler like New Relic. It's a free add-on for Azure that collects data and provides you with a dashboard. You might find it useful for determining the exact cause of the CPU spike.
We regularly use it to monitor the performance of our applications. I would recommend it.