I'm running an ASP.NET MVC application on Windows Server 2012 with IIS 8.5. This morning, users started complaining that our site was down. Logging onto the server, I saw no sign of memory or CPU pressure, and restarting the website in IIS resolved the issue.
Looking into the IIS logs, I noticed that request performance started degrading at around 23:00. From that point on, only certain requests appear in the log as successful; all other requests to the server are not logged there at all.
I can see our Pingdom monitoring was still calling our uptime page and getting a successful response, but no other client requests appear to have succeeded.
Checking the HTTPERR log, I can see that all my other requests appear to be failing with the error Connection_Dropped during this period. The issue seems to have started at exactly 23:00, which is when the application pool recycled.
There are no other errors in our event logs, and no application-level errors either, so I'm wondering what other logs I could look into to see why requests appear to stop getting through to IIS?
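One concrete thing to check in a situation like this is whether WAS is logging the recycles and their reasons to the System event log; if it isn't, appcmd can turn that on. A rough sketch, with "MyAppPool" standing in for the real pool name:

    rem Log every recycle reason to the System event log (pool name is a placeholder)
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" ^
        /recycling.logEventOnRecycle:Time,Requests,Schedule,Memory,IsapiUnhealthy,OnDemand,ConfigChange,PrivateMemory

    rem Dump the pool's current recycling and queue settings for review
    %windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:*

After that, each recycle should show up as a WAS event in the System log along with its trigger, which together with the HTTPERR entries usually narrows things down.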
Related
I have a web application hosted under IIS. It is a data warehouse, and its startup process requires instantiating a large set of items in memory (it takes roughly 20 minutes to fully set up). Because this website is critical to our company, the system must be online 100% of the time during the daytime and can only be restarted during off-work hours.
For some reason, this web application seems to go "offline" when there is no usage for some time. I know this because the cache is not fully instantiated when the website is visited. This is unacceptable.
It is not clear to me why the website is shut off. The application pool is only set to recycle daily at 4:00 AM (it is 11 AM now).
Are there other settings I'm not aware of on the IIS side that cause the website to be shut down automatically?
Additional note: the website does not shut off automatically when running in IIS Express from Visual Studio. Only the production version hosted on IIS shuts off.
Here's a screenshot of the Advanced Settings for the application pool the web site is running under. (Not sure if it's useful.)
I'm on IIS 7.5, Server 2008 R2. It is an ASP.NET 5 Web App.
Check the Idle Time-out setting under Process Model in your screenshot. That setting causes the app pool to shut down when it remains idle for 20 minutes. You can set it to 0 to keep it running at all times, even when it is idle, i.e. not processing any requests.
Note: keeping the app pool running at all times will consume the server's precious memory. This can become critical, especially if the application is leaking memory.
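If you prefer scripting it over the GUI, the same change can be made with appcmd; a minimal sketch, with "MyAppPool" as a placeholder for the real pool name:

    rem Set Idle Time-out to zero so the worker process is never shut down for being idle
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00

This writes the same processModel/idleTimeout value you would otherwise set in the Advanced Settings dialog.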
I deployed a Node.js server to an Azure Web App, and it worked fine. However, I see that the response time is sometimes very slow. Also, somewhere above 500 requests/second the server starts failing to handle requests, while using only about 15% CPU. I checked, and the server returns a 500 error because the pipe is busy (according to the Win32 error code). That's why I was wondering if there is something I can change in the iisnode config to improve the server's request capacity.
I already enabled the Always On feature, and I also added a check in Pingdom to keep the site alive. I also changed nodeProcessCountPerApplication to 0 so it uses all the available processes.
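For context, these iisnode settings live under the <iisnode> element in web.config. A minimal sketch of the relevant section (the server.js handler path and the maxConcurrentRequestsPerProcess value are just illustrative, not my exact file):

    <configuration>
      <system.webServer>
        <handlers>
          <!-- entry point is an example; point this at your actual startup script -->
          <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
        </handlers>
        <!-- nodeProcessCountPerApplication="0" means one node.exe per CPU core;
             maxConcurrentRequestsPerProcess is the per-process request cap (1024 shown only as an example) -->
        <iisnode nodeProcessCountPerApplication="0"
                 maxConcurrentRequestsPerProcess="1024" />
      </system.webServer>
    </configuration>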
Thank you,
Omer
One thing you can do is enable Always On. Without it, when your site hasn't been visited for 20 minutes, it gets taken down. The next time someone makes a request to your site, Azure Web Apps warms it up (sets it up again), but this process takes a few seconds.
Note that Always On is only available for sites in the Basic, Standard, or Premium SKUs.
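If you want to set it outside the portal, the Azure CLI can toggle the same switch; a sketch, with the resource group and app name as placeholders:

    rem Enable Always On for the web app (names are placeholders)
    az webapp config set --resource-group MyResourceGroup --name MyWebApp --always-on true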
Also, check out this page for tips on debugging Node.js apps in Azure Web Apps: https://azure.microsoft.com/en-us/documentation/articles/web-sites-nodejs-debug/
We have a web application that runs on 6 web servers with HAProxy as the load balancer. We use Web Deploy to sync our IIS configuration and application across all web servers. Starting in January, some of our customers started reporting application slowdowns. After a lot of work we found that requests coming into IIS at random times get stuck in the BeginRequest state of IIS Web Core. I am attaching a screenshot from one of my servers. Any insight into the issue would be really appreciated.
Thanks,
Fahad
I have a similar issue, but my requests are stuck in a different part of IIS. If anyone has any input there, you may also find it useful: Debugging requests which are 'stuck' in an IIS worker process
In the screenshot you give, your oldest request is 16s old. Do the requests stay there forever, or are they just very slow to process? If they don't complete, is the oldest request in the queue always exactly the same URL, and if so, can you trigger the issue with that URL?
Whether or not they do eventually process, a good first step for you would be to run Failed Request Tracing/logging; you can configure it directly in inetmgr at the site level. Looking at the compact output will give you an overview of whether your requests are being sent in loops around IIS, or whether anything you wouldn't expect is being triggered during the life of the request.
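If you would rather script it than click through inetmgr, appcmd can create the same rule; a rough sketch, assuming the site is called "Default Web Site" and you want to capture anything that takes longer than 30 seconds:

    rem Turn on Failed Request Tracing log output for the site (site name is a placeholder)
    %windir%\system32\inetsrv\appcmd.exe configure trace "Default Web Site" /enablesite

    rem Add a rule: trace any request that takes longer than 30 seconds to complete
    %windir%\system32\inetsrv\appcmd.exe configure trace "Default Web Site" /enable /path:* /timeTaken:00:00:30

By default the resulting traces land under %SystemDrive%\inetpub\logs\FailedReqLogFiles.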
If they do eventually process, also look into resource exhaustion: maybe IIS is just crawling because it's struggling for CPU/RAM/IO. Check out the usual suspects.
I have an IIS 7.5 server hosting a web application on .NET 4.0. I have enabled IIS logging to log requests on a daily basis. However, I have observed that after 23:59:59 the request timestamps freeze within the log, generating a lot of log entries (~1000) with that timestamp, when in reality there were hardly 5-10 requests at that point in time. Any clues?
In the last couple of weeks my site has hung. Connections increase but never seem to be released. Once it hits 700+ current connections (using the performance tool), the whole site hangs and I have no choice but to do an iisreset to get it working again. Normally it's around 100 concurrent connections at peak time. There are no errors or warnings in the event log when it stops releasing connections, so that isn't helpful.

This problem has been happening after copying new DLLs over the old ones to do a site update. I have a single server, so I have no choice but to copy over the live site. In today's case, though, it was fine after the update and the problem only appeared two hours later. It's a .NET 4.5 site on Windows 2008 R2 64-bit.

Is there a way I can find out what's causing this, such as other log files somewhere, or something I should try doing when it happens? What I have tried: recycling the app pool (doesn't help), turning the app pool off and back on (it doesn't turn back on, it gives an exception), restarting the site (doesn't help), and iisreset (works every time).
Usually some requests are "stuck" when you see these symptoms. An app pool recycle or a site restart will generally wait for the pending requests to finish (a graceful exit), while iisreset will actually kill the w3wp.exe process after 20 (or 30?) seconds if it doesn't exit on its own (a non-graceful exit); that's why iisreset works for you.
Listing the active requests (appcmd.exe list request /?) should give you a clue about why this is happening.
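For example, something along these lines shows which URLs are stuck, which pipeline module they are sitting in, and for how long (the 30000 ms threshold and pool name are placeholders):

    rem List all requests that have been executing for more than 30 seconds
    %windir%\system32\inetsrv\appcmd.exe list requests /elapsed:30000

    rem Or narrow it to a single application pool
    %windir%\system32\inetsrv\appcmd.exe list requests /apppool.name:"MyAppPool" /elapsed:30000

If the same URL keeps showing up with a growing elapsed time, that handler (or whatever it is waiting on) is the place to start digging.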