Question about IIS logging and response-time degradation

For years I have kept IIS logging active on all my sites. I currently use Windows Server 2008 R2 and Windows Server 2012. The official information I follow is this:
https://learn.microsoft.com/en-us/windows/desktop/http/iis-logging
I have looked for official guidance from Microsoft on whether it is recommended to keep this feature always active, or whether it is better to enable it only when you want to trace a problem.
Do you know if there is any official information?
Is there any study that quantifies how much response times, or the general speed of the site, degrade when logging is active?
If my architecture uses a load balancer (F5, A10, or Apache) in front of my nodes, is it recommended to always enable logging on the load balancer if it is disabled on the nodes?
thanks!

IIS logging is processed on separate threads from the gateway services and application pools, which means it will not degrade performance.
Don't just take my word for it. If you want to confirm this, you can use a capacity testing tool (not recommended on your prod server, of course). Test your capacity with logging turned on and logging turned off. You will see that they are comparable.
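For a rough comparison without a full capacity-testing tool, something as simple as the sketch below will do: sample response times against a test box once with logging enabled and once with it disabled, and compare the numbers. This is only an assumption-laden illustration, not a proper load test; it assumes Node.js 18+ with the built-in fetch, and the URL and sample count are placeholders.

```typescript
// Minimal latency sampler (hypothetical URL and counts) to compare
// response times with IIS logging enabled vs. disabled.
// Assumes Node.js 18+ (built-in fetch and global performance).

const target = "http://your-test-server/health"; // placeholder URL
const samples = 200;                              // placeholder count

async function measure(): Promise<void> {
  const timings: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(target);
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  const avg = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  const p95 = timings[Math.floor(timings.length * 0.95)];
  console.log(`avg ${avg.toFixed(1)} ms, p95 ${p95.toFixed(1)} ms`);
}

measure().catch(console.error);
```

Run it twice, once per logging setting, and compare the averages and 95th percentiles rather than single requests, since individual samples will always vary.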

Related

Which load balancer would perform better: mod_jk, mod_proxy, or another open source option?

I am planning to host an application on a single machine, without any failover or load balancing at the hardware level, due to my budget.
But, to my knowledge, as the number of hits to Tomcat increases, it tends to go down. To avoid that, I want to run multiple instances of the same application. Which load balancer would be better for this: mod_jk or mod_proxy? You can also suggest any other open source tool that helps with load balancing the application's traffic.
My application uses Struts (not Spring), and my OS is RHEL 6.x. Please take performance into account in your suggestion.
Thanks in advance.
Running multiple instances of one application on the same machine only means the instances share the same resources, minus the overhead of the additional Tomcat instances and the load balancer.
It is a bit like redistributing cargo across the individual tires because the car slows down when it is overloaded. To stay with that picture: what you actually want is a turbocharger.
Translated back to your setup, that would be a reverse proxy cache.
I'd suggest using Varnish. You can configure it to serve static resources like images and stylesheets from RAM after they have been delivered once, drastically reducing the number of requests passed on to your application.
It may be configured as a classical load balancer, too.
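Once the cache is in place, a quick probe of a static asset is enough to confirm it is doing its job. This is a minimal sketch, assuming Node.js 18+ with the built-in fetch and a placeholder asset URL; on a cache hit Varnish normally reports a non-zero Age header and a much lower response time than the first, uncached request.

```typescript
// Quick check (hypothetical URL) that a static asset is being served
// from the cache: the second request should show a non-zero Age header
// and a much lower latency than the first, uncached one.
// Assumes Node.js 18+ with built-in fetch and global performance.

const asset = "http://your-balancer/static/logo.png"; // placeholder

async function probe(label: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(asset);
  const ms = (performance.now() - start).toFixed(1);
  console.log(`${label}: ${res.status}, Age=${res.headers.get("age") ?? "0"}, ${ms} ms`);
}

(async () => {
  await probe("first request");  // expected miss, fills the cache
  await probe("second request"); // expected hit, served from memory
})().catch(console.error);
```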

Web Site Availability in Windows Azure [duplicate]

Possible Duplicate:
App pool timeout for azure web sites
I am working on an asp.net mvc 4 app that is hosted in Windows Azure. This app will not have a lot of traffic as people will intermittently (once an hour) use it. I wanted to try using Windows Azure.
My app is currently set to use the FREE web site mode. I noticed that after 30 minutes, the site takes a long time (> 5 seconds) to load. After that initial load, it's fast. Then, if no one uses it for another 30 minutes, it takes > 5 seconds to load again.
I then tried upping the web site mode to a SHARED instance. I experienced the same problem there.
I then tried upping the web site mode to a RESERVED instance. The problem then goes away.
While I'd like to use Windows Azure, paying $50+ a month for a RESERVED instance is pretty expensive for a site that few have used up to this point. However, I can't have the initial lag. That will just deter the few users I have. You could say you get what you pay for. At the same time, I have a hard time believing others are experiencing this problem and not complaining. There has to be something I'm missing.
I figure the problem has to do with the application pool being reset. However, I can't seem to find a way around this. Is anyone familiar with this issue? Is there a way to fix it on a FREE or SHARED instance?
Thank you!
This is expected behavior based on how Windows Azure Web Sites work. The app pool they live in is spun up "on demand" and then hangs around for a time period.
For a detailed explanation (and shameless plug) you can check out my article on this: http://www.simple-talk.com/dotnet/.net-framework/windows-azure-websites-%e2%80%93-a-new-hosting-model-for-windows-azure/
In summary:
Web Sites are hosted in a process on a farm of machines running IIS. If a site is idle for some time, the process is torn down automatically. Also, if the box is under a lot of pressure from the other sites on it, the idle timeout may come down quite a bit (even as low as five minutes). When the next call comes in, you'll see the process spun up again (likely on a completely different server). This is because you are in a shared environment (similar to how Heroku works). Once you move to Reserved, you are the ONLY person on that virtual machine, and if you suffer from noisy neighbor issues in processing, it's because of your own stuff.
There are ways to keep your site "up", such as having a job that pings the URL frequently; however, given that the idle timeout is somewhat fluid, it may not solve every case. You can check out a recent post by Sandrino on how to use Azure Mobile Services as a job scheduler: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/. There are also third-party services available that can do the ping for you automatically.
To be honest, Web Sites are a great feature for quick development and testing, or even relatively low-traffic sites like the one you are describing. If you need a high level of uptime and better performance, then you'll want to look at Reserved, or at another option if the cost isn't in line with expectations.
This isn't an Azure problem. It is a "feature" of any web site hosted in IIS. The default timeout for app pools is 20 minutes. Read about app pool timeouts here - http://technet.microsoft.com/en-us/library/cc771956(v=ws.10).aspx - one method is to create a keep-alive page and ping it every 10 minutes or so.
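As a rough illustration of that keep-alive approach, here is a minimal sketch, assuming Node.js 18+ with the built-in fetch; the URL and interval are placeholders, and you would run it from any always-on machine.

```typescript
// Hypothetical keep-alive pinger: requests a lightweight page on a
// schedule so the site's idle timeout never fires.
// Assumes Node.js 18+ with built-in fetch; URL and interval are placeholders.

const keepAliveUrl = "http://yoursite.azurewebsites.net/keepalive"; // placeholder
const intervalMs = 10 * 60 * 1000; // every 10 minutes

setInterval(async () => {
  try {
    const res = await fetch(keepAliveUrl);
    console.log(`${new Date().toISOString()} ping -> ${res.status}`);
  } catch (err) {
    console.error("ping failed:", err);
  }
}, intervalMs);
```

Keep in mind, as noted above, that the idle timeout on the shared tiers is fluid, so even a frequent ping will not cover every case.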

Monitoring Node.js application running into Windows Azure

Is there a way to enable Performance Counters to monitor Node.js application performance in Windows Azure?
I haven't experimented with it myself yet, but there is node-perfmon, which is a wrapper around typeperf. It claims to let you write performance counters, as well as do simple memory/CPU monitoring. Is this the sort of monitoring you were looking for?
Just adding to the answers above:
For application stats monitoring on Node.js you can use Hummingbird. It supports status over HTTP, so you can integrate the code into your Node.js app and expose one port to get the monitoring data over HTTP; a minimal sketch of that idea follows after the links below. There is no need to use Azure Storage diagnostics, and all the information is available in real time on the same machine. It's still in pre-alpha, but it handles a few tasks really well.
http://projects.nuttnet.net/hummingbird/
I also know about the Node.js "monitor" plugin, which is great on Linux machines for system-specific performance and also uses HTTP to expose system-specific data. I am not sure whether it can be ported to Windows Server, but if it can, it is a great choice. Read more about monitor usage here:
http://www.sys-con.com/node/2275314
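Neither Hummingbird nor the monitor plugin is shown here, but the "status over HTTP" idea they rely on is simple enough to sketch. The snippet below is an assumption-laden illustration, not either tool's API: it exposes the Node.js process's own memory and CPU figures on a separate port (9100 is a placeholder) so an external monitor can poll them.

```typescript
// Sketch of the "status over HTTP" idea: serve the process's own
// memory/CPU figures as JSON on a separate port for an external poller.
// Port 9100 is a placeholder; this is not Hummingbird's or monitor's API.
import { createServer } from "node:http";

createServer((_req, res) => {
  const stats = {
    uptimeSeconds: process.uptime(),
    memory: process.memoryUsage(), // rss, heapTotal, heapUsed, external
    cpu: process.cpuUsage(),       // user/system microseconds since start
  };
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(stats));
}).listen(9100);
```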
You may also want to look at these; they don't use perfmon directly, but they allow you to monitor the performance of your Node.js server:
http://search.npmjs.org/#/Probes.js
http://search.npmjs.org/#/nodetime
The NPM registry is a great tool for finding Node.js packages.

How do I confirm whether Application Warm-Up plugin works?

I have a web application that consumes a WCF service. Both are slow to warm up after an IIS reset or app pool recycle. So, as a possible solution, I installed Application Warm-Up for IIS 7.5 and set it up for both the web site and the WCF service.
My concern is that it doesn't seem to make any difference - the first time I hit the site, it still takes a long time to come up. I checked the event logs and there are no errors. So I'm wondering if anything special needs to be done for that module to work.
In IIS manager, when you go into the site, then into Application Warm-Up, the right-hand side has an "Actions" pane. I think you need the following two things:
Click Add Request and add at least one URL, e.g. /YourService.svc
Click Settings, and check "Start Application Pool 'your pool' when service started"
Do you have both of these? If you don't have the second setting checked, then I think the warmup won't happen until a user hits the site (which probably defeats the purpose of the warmup module in your case).
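If you want to verify the warm-up from the outside, one rough check (not an official tool; the URLs are placeholders) is to recycle the app pool and then time the first request against the following ones. If warm-up is working, the first hit should be roughly as fast as the rest; a multi-second first hit suggests the pool was still cold. This sketch assumes Node.js 18+ with the built-in fetch.

```typescript
// After recycling the app pool, time a few requests to each URL.
// A slow first attempt followed by fast ones suggests warm-up did not run.
// Assumes Node.js 18+ with built-in fetch; URLs are placeholders.

const urls = ["http://yoursite/", "http://yoursite/YourService.svc"]; // placeholders

(async () => {
  for (const url of urls) {
    for (let attempt = 1; attempt <= 3; attempt++) {
      const start = performance.now();
      const res = await fetch(url);
      const ms = (performance.now() - start).toFixed(0);
      console.log(`${url} attempt ${attempt}: ${res.status} in ${ms} ms`);
    }
  }
})().catch(console.error);
```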
There is a new module from Microsoft, part of IIS 8.0, that supersedes the previous warm-up module. This Application Initialization Module for IIS 7.5 is available as a separate download.
The module will create a warm-up phase where you can specify a number of requests that must complete before the server starts accepting requests. Most importantly it will provide overlapping processes so that the user will not be served by the newly started process before it is ready.
I have answered a similar question with more details at How to warm up an ASP.NET MVC application on IIS 7.5?.
After you have applied the possible software/code optimizations, allow me to point out that all code ultimately needs processing by the hardware CPU. Our server's performance skyrocketed when we moved to a multi-core CPU, installed more gigabytes of RAM, and connected the server with Cat 6 UTP cable instead of standard Cat 5e... That doesn't fix your problem directly, but if you are as obsessed with speed as we are, you will be interested in the various dimensions that can bottleneck it.

Isolating a rampant process in IIS

I have a web server that is pegged, and I've been able to isolate the problem to a particular website instance. I'd like to dig deeper and isolate the particular page/process that is causing the issue. Any tips?
You can take a memory dump of the process and poke around in it with WinDbg.
There are posts on this topic on Tess Ferrandez's blog; just follow her instructions.
Which version of IIS are you using? Some of the newer versions let you separate which worker process handles which requests, so you can isolate things a bit more that way. I'd also suggest reading through the IIS logs to see which requests were being handled, how long they took, and so on (see the sketch below for one way to comb them).
Each IIS version has its own quirks. The really old ones just had start/stop functionality, but the newer ones give administrators much more control and power, IMO.
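As a starting point for digging through the logs, here is a hedged sketch for scanning an IIS W3C log for slow requests. The log path is a placeholder, and the script assumes the default W3C format, where a "#Fields:" header names the columns and time-taken is reported in milliseconds.

```typescript
// Scan an IIS W3C log and print requests slower than a threshold.
// Assumes the default W3C format with a "#Fields:" header line;
// the log path and threshold are placeholders.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

const logPath = "C:/inetpub/logs/LogFiles/W3SVC1/u_ex250101.log"; // placeholder
const thresholdMs = 2000;                                          // placeholder

async function findSlowRequests(): Promise<void> {
  let fields: string[] = [];
  const lines = createInterface({ input: createReadStream(logPath) });
  for await (const line of lines) {
    if (line.startsWith("#Fields:")) {
      // Remember the column layout declared by the log itself.
      fields = line.replace("#Fields:", "").trim().split(" ");
      continue;
    }
    if (line.startsWith("#") || fields.length === 0) continue;
    const cols = line.split(" ");
    const taken = Number(cols[fields.indexOf("time-taken")]);
    if (taken >= thresholdMs) {
      console.log(`${taken} ms  ${cols[fields.indexOf("cs-uri-stem")]}`);
    }
  }
}

findSlowRequests().catch(console.error);
```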
You should try using a profiler to identify what is using up the most resources. I've used dotTrace Profiler, although that can be expensive if you're on a tight budget.
It shows you exactly which processes and method calls use the most processing time within a request, so you can isolate the most resource-intensive operations.
You should really be able to use any profiler to do this, not just dotTrace. I just happen to only have experience with this one in particular.
Change your web garden setting to 10 or greater. Then watch your CPU and memory utilization on the web server.
Continue to increase the web garden setting until either the app is completely responsive with less than 5% average utilization OR you have actually maxed your web server's memory.
UPDATE
It's not about diagnosing, it's about properly configuring the IIS server. Web gardens are one of the most misunderstood features of IIS. By increasing the number of worker processes available to handle new requests, you remove the appearance of contention at the web server level and place it squarely where it belongs, in this case at your database. Instead of masking the problem, it actually highlights exactly where the problem is.
This turned out to be a SQL problem (SQL Server 2005). The solution was found by using SQL Activity Monitor to identify a suspended process with an ASYNC_NETWORK_IO wait type. We then ran SQL Profiler to narrow it down to two massive queries that were returning an overabundance of results.
