I haven't found a definitive list out there, but hopefully someone's got one going or we can come up with one ourselves. What causes disruptions for .NET applications, or general service disruption, running on IIS? For instance, web.config changes restart the application domain and force recompilation (while deploying just a single page doesn't affect the whole app), and iisresets halt everything (natch, but you see where I'm going). How about things like creating a new virtual directory under a current web app?
It's helpful to know all the cases so you know whether you can effect a change on a server without causing issues for the whole thing.
EDIT: I had IIS 6 in mind when I asked, but of course a list of any differences in other versions would be helpful to people as well.
It depends on what exactly you mean by disruptions. IISReset can cause a Service Unavailable message to display for a short time as IIS is shut down and restarted.
Changes to the web.config, or adding a .dll file to the bin directory of an application, cause a recycle of the application domain, but that is not a disruption exactly, more of a "delay" in responding: the user will NOT see an error, just a delayed response from the server. You can also get this from changing any files in App_Code, or .vb files on sites not developed as Web Application Projects (WAP).
You can also get IIS worker process shutdowns due to inactivity; the default setting is 20 minutes. Again, this is a delay, not a lack of service.
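If you need to see which of these triggers actually recycled a given application, ASP.NET records the reason and exposes it at shutdown. Below is a minimal diagnostic sketch, assuming a classic ASP.NET site with a Global.asax; the log location under App_Data is an illustrative choice (file writes there don't themselves trigger a recycle):

```csharp
// Global.asax.cs - logs why the app domain was torn down
// (ConfigurationChange, BinDirChangeOrDirectoryRename, IdleTimeout, ...).
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End(object sender, EventArgs e)
    {
        // HostingEnvironment.ShutdownReason reports what triggered this recycle.
        ApplicationShutdownReason reason = HostingEnvironment.ShutdownReason;
        string log = HostingEnvironment.MapPath("~/App_Data/recycles.log");
        File.AppendAllText(log, string.Format("{0:u} shutdown: {1}{2}",
            DateTime.UtcNow, reason, Environment.NewLine));
    }
}
```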
Related
I'm hosting an ASP.NET MVC website and it is really slow at the start; it takes 40 seconds to 1 minute. How do I fix this?
I have tried bundling and minification, and set debug mode to false in web.config.
My website is www.mmfujistore.com
My first thought on seeing this is that your hosting provider shuts down your server when it is idle for more than a few seconds in order to make room for other customers. It takes so long to start up because your shared application is literally booting up when you make a request after the site has been down for a while.
If it works fine locally, I am going to bet you are best off finding a better hosting provider. If you update to .NET Core you can use any Linux host out there. I'm going to bet this isn't a .NET issue.
Edit: I agree. I used to work there, and I can confirm that they do this to manage load by considerably limiting the application pool memory for each client. Considering the number of customers on the shared server it gets really difficult to manage; it does not cause issues for most apps, but if your application is memory hungry they won't do anything to fix it permanently. The best they can do is free your app's memory pool temporarily, but it will fill up again. Your best option is to move to another provider if this persists.
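One way to confirm the cold-start theory is to record every application start; frequent entries mean the host really is shutting the process down. A minimal sketch, again assuming a Global.asax (the log path is illustrative):

```csharp
// Global.asax.cs - timestamps each cold start of the application.
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // A new entry appears every time the app domain boots from scratch.
        string log = HostingEnvironment.MapPath("~/App_Data/starts.log");
        File.AppendAllText(log,
            DateTime.UtcNow.ToString("u") + " cold start" + Environment.NewLine);
    }
}
```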
By looking at my Pingdom reports I have noted that my website instance is getting recycled. Basically, Pingdom is used to keep my site warm. When I look deeper into the Azure logs, i.e. /LogFiles/kudu/trace, I notice a number of small XML files with "shutdown" or "startup" suffixes, e.g.:
2015-07-29T20-05-05_abc123_002_Shutdown_0s.xml
While I suspect this might be to do with MS patching VMs, I am not sure. My application is not raising any exceptions, hence my suspicion that it is happening at the OS level. Is there a way to find out why my instance is being shut down?
I should also mention that I am using one S2 instance, scalable to three depending on CPU usage. We may have to review this and use a 2-3 setup, but obviously that doubles the cost.
EDIT
I have looked at my operation logs and all I see is "UpdateWebsite" with a status of "succeeded"; there is nothing for the times I saw the above files. So it seems that the instance is being shut down, but the event is not appearing in the operation log. Why would this be? I had about 5 shutdowns yesterday, yet the last operation log entry was 29/7.
An example of one of yesterday's shutdown XML files:
2015-08-05T13-26-18_abc123_002_Shutdown_1s.xml
You should see entries regarding backend maintenance in the operation logs.
As for keeping your site alive, Standard plans allow you to use the "Always On" feature, which does pretty much what Pingdom is doing to keep your website warm. Just enable it on the Configure tab of the portal.
Configure web apps in Azure App Service
https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
Every site on Azure runs two applications: one is yours and the other is the SCM endpoint (a.k.a. Kudu). These "shutdown" traces are for the Kudu app, not for your site.
If you want similar traces for your site, you'll have to implement them yourself, just like Kudu does. If you don't have Always On enabled, Kudu gets shut down after an hour of inactivity (as far as I remember).
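A rough sketch of what implementing such traces yourself could look like, writing a marker file on every start and stop; the folder and naming convention here are assumptions loosely modeled on the Kudu files quoted above:

```csharp
// Global.asax.cs - writes Kudu-style startup/shutdown marker files.
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e) { Mark("Startup"); }
    protected void Application_End(object sender, EventArgs e) { Mark("Shutdown"); }

    private static void Mark(string kind)
    {
        string dir = HostingEnvironment.MapPath("~/App_Data/trace");
        Directory.CreateDirectory(dir);
        // Produces e.g. 2015-08-05T13-26-18_Startup.txt
        string name = DateTime.UtcNow.ToString("yyyy-MM-ddTHH-mm-ss") + "_" + kind + ".txt";
        File.WriteAllText(Path.Combine(dir, name), kind + " at " + DateTime.UtcNow.ToString("o"));
    }
}
```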
Aside from that, as you mentioned, Azure will shut down your app during machine upgrades, though I don't think those shutdowns result in operation log events.
Are you seeing any side effects? Is this causing downtime?
When upgrades to the service are going on, your site might get moved to a different machine. We bring the site up on the new machine before shutting it down on the old one and letting connections drain, so this should not result in any perceivable downtime.
I have a number of small MVC apps deployed as Microsoft Windows Azure websites. This has been working for several months.
Yesterday I rolled out a new one, and the deployment was unremarkable, everything worked fine. But a couple of hours later, access to the site was unavailable. The symptoms were that when the browser tried to navigate to the URL for that site, it would try to load for several minutes and then just give up with a completely blank page.
I attempted to stop and restart the site, and it worked once, but the symptoms came back several minutes later. Then I tried stopping and restarting again, and it didn't work.
I deployed the identical app to three additional URLs. Again, immediately after deployment they all work fine; however, they each fail at some point in the future, and not all at once. Sometimes restarting the site fixes the problem, and sometimes it doesn't.
IMPORTANT: If I wait for some period of time, the site may start to work again on its own.
However, deploying four copies of the app so that our users can go to a backup if the primary one is not working is not optimal.
Any words of wisdom as to how I might go about debugging this?
ADDITIONAL INFO NOV 25, 2013:
When the sites are failing, the IIS logs show either 500 (Internal Server Error) or 502 (Bad Gateway) responses. Our own MVC code is never hit, not even Application_Start.
You can start by checking the logs and remote debugging:
http://www.drdobbs.com/windows/azure-sdk-22-supports-visual-studio-2013/240163499
Are the apps working locally?
Might not be the same problem, but from time to time our Azure instances will get the blue question mark of death as their status.
The reason, we found out, is that Microsoft does upgrades on instances from time to time. If you have just one instance in a cloud service/role, then during that maintenance it will be dead.
I have confirmed this with their support.
The only way around this that I know of is to create two instances; then Microsoft guarantees ~99% availability.
Of course, I also confirmed with them that this means twice the cost. =/
If that's not the issue I would enable RDP and get onto the machine to see what the problem is. Microsoft has these tools to help debug problems: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx
First, you should always run multiple instances of your web role, with more than one upgrade domain. This is configurable in the service definition (CSDEF). Without this you don't get an SLA from Microsoft, so you can't really complain when the VMs go down.
Second, to figure out what might be going on with these boxes, you should have both logging (my preference is to roll my own with page blobs or table storage; see the sketch after this list) AND RDP access to a pre-production environment (production as well, if you're not too fussed about security). Once on the box, look through the Event Viewer for errors.
Third, when an outage occurs, check the Azure service dashboard (http://www.windowsazure.com/en-us/support/service-dashboard/) for known outages.
Lastly, contact Microsoft support. It may take a few hours, but they are pretty good.
Given that it is happening repeatedly and for extended periods of time (more than 5 minutes), I would bet there's something wrong with your hosted service. Again, RDP in and poke around. Good luck.
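For reference, a minimal sketch of the roll-your-own table storage logging mentioned in the second point, using the classic Microsoft.WindowsAzure.Storage SDK; the table name, entity shape, and helper are illustrative assumptions:

```csharp
// Minimal roll-your-own logger to Azure Table Storage
// (requires the WindowsAzure.Storage NuGet package).
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntry : TableEntity
{
    public LogEntry() { }                       // parameterless ctor required by the SDK
    public LogEntry(string message)
    {
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd"); // one partition per day
        RowKey = Guid.NewGuid().ToString();
        Message = message;
    }
    public string Message { get; set; }
}

public static class DiagnosticLog
{
    public static void Write(string connectionString, string message)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTable table = account.CreateCloudTableClient().GetTableReference("weblogs");
        table.CreateIfNotExists();              // idempotent; cheap after the first call
        table.Execute(TableOperation.Insert(new LogEntry(message)));
    }
}
```

Calling something like DiagnosticLog.Write from your error handlers and startup/shutdown events gives you a timeline you can inspect even when the VM itself is gone.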
To debug your sites, try enabling diagnostic logs:
http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics-logging-and-instrumentation/
Another nice way to look around your site is to use the Kudu debug console:
https://github.com/projectkudu/kudu/wiki/Kudu-console
Application pools in IIS are recycled very frequently and I can't figure out why. I remember reading about a possible issue in IIS 6 that meant you were forced to recycle, but a quick search now turns up empty. On IIS 6 or 7 you can turn off the idle-time, duration, and specific-time recycle options, so no problems there.
So why does every .NET site recycle the application pool? If a site didn't have any memory leaks, could you set up a site that never needed to recycle?
Failing this, what would be the best way to ensure background tasks keep getting called? Are there any auto-restart modules for IIS, or should an external service be used to make those calls?
It sounds like it is possible to do if you really want/need to.
Websites are intended to keep running (albeit in a stateless nature). There is a myriad of reasons why app pool recycling can be beneficial to the hosting platform, to ensure both the website and the server run at their optimum. These include (but are not limited to): dynamically compiled assemblies remaining in the app domain, use of session caching (with no guarantee of cleanup), and other websites running amok and consuming resources over time. An app pool can typically serve more than one website, so recycling can be beneficial to ensure everything runs smoothly.
Besides the initial boot when the app fires up again, the effect should be minimal: http.sys holds onto requests while a new worker process is started, so no requests should be dropped.
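If you did want to try a never-recycle setup on IIS 7+, the triggers can be zeroed out programmatically. A sketch using Microsoft.Web.Administration (the pool name is an assumption; the code must run elevated):

```csharp
// Disables the periodic, scheduled, and idle recycle triggers for one pool.
// Reference Microsoft.Web.Administration.dll (found in %windir%\System32\inetsrv).
using System;
using Microsoft.Web.Administration;

class DisableRecycling
{
    static void Main()
    {
        using (ServerManager server = new ServerManager())
        {
            ApplicationPool pool = server.ApplicationPools["MyAppPool"]; // assumed name
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;   // no fixed-interval recycle
            pool.Recycling.PeriodicRestart.Schedule.Clear();       // no recycle at specific times
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;         // no idle shutdown
            server.CommitChanges();
        }
    }
}
```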
From https://weblogs.asp.net/owscott/why-is-the-iis-default-app-pool-recycle-set-to-1740-minutes
You may ask whether a fixed recycle is even needed. A daily recycle is just a band-aid to freshen IIS in case there is a slight memory leak or anything else that slowly creeps into the worker process. In theory you don’t need a daily recycle unless you have a known problem. I used to recommend that you turn it off completely if you don’t need it. However, I’m leaning more today towards setting it to recycle once per day at an off-peak time as a proactive measure.

My reason is that, first, your site should be able to survive a recycle without too much impact, so recycling daily shouldn’t be a concern. Secondly, I’ve found that even well behaving app pools can eventually have something sneak in over time that impacts the app pool. I’ve seen issues from traffic patterns that cause excessive caching or something odd in the application, and I’ve seen the very rare IIS bug (rare indeed!) that isn’t a problem if recycled daily. Is it a band-aid? Possibly, but if a daily recycle keeps a non-critical issue from bubbling to the top then I believe that it’s a good proactive measure to save a lot of troubleshooting effort on something that probably isn’t important to troubleshoot. However, if you think you have a real issue that is being suppressed by recycling then, by all means, turn off the auto-recycling so that you can track down and resolve your issue. There’s no black and white answer. Only you can make the best decision for your environment.
There's a lot more useful/interesting info in the post for someone relatively unlearned in the IIS world (like me); I recommend you read it.
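If you follow the quoted advice of one recycle per day at an off-peak time, the same Microsoft.Web.Administration API can set that up; the 3 AM slot and pool name below are assumptions:

```csharp
// Swaps the default 1740-minute interval for one fixed daily recycle.
using System;
using Microsoft.Web.Administration;

class ScheduleDailyRecycle
{
    static void Main()
    {
        using (ServerManager server = new ServerManager())
        {
            ApplicationPool pool = server.ApplicationPools["MyAppPool"]; // assumed name
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;         // drop the interval trigger
            pool.Recycling.PeriodicRestart.Schedule.Clear();
            pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0)); // recycle at 03:00
            server.CommitChanges();
        }
    }
}
```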
We have a Windows Server 2003 web server, and on that server run about 5-6 top-level SharePoint sites, with a different application pool for each one.
There is one w3wp process that keeps pegging 100% CPU for most of the day (it happened yesterday and today), and it's connected (found by running "Cscript iisapp.vbs" at the command line and matching the process ID) to a particular SharePoint site... which is nearly unusable.
What kind of corrective action can I take? These are the ideas I had:
1) Stopping and restarting the web site in IIS. For some reason this didn't stop the offending w3wp process. Any ideas why not?
2) Stopping and restarting the associated Application Pool.
3) Recycling the associated Application Pool.
Do any of those sound like the right idea? If not, what are some good things to try? I can't do an iisreset, since I don't want to disrupt service to the other, much more heavily used, SharePoint sites.
If I truly NEED to do some diagnostic work, please point me in the right direction. I'm not the SharePoint admin guy (he's out of town, so I'm filling in even though I'm just a developer), but I'll do my best.
If you need any information, just let me know and I'll look it up (slowly, though, as that one process is pegging the entire machine).
It's not an IISReset that you need. You have a piece of code that is running amok with your memory. Most likely it's not actually a CPU problem but a paging problem. I've encountered this a few times with data structures in memory that grow too large to page in/out effectively; eventually the attempt to page data just begins consuming everything. The steps I would recommend are:
1) Get the IIS Debug Diagnostics tools, and learn how to use them.
2) If possible, move the session state from InProc to a state server or a SQL server (since this requires serialization of all classes that go into session, it may not be possible). This will help alleviate some process-related memory issues.
3) Go to your application pool and adjust the number of worker processes upward, and turn off rapid-fail protection (this will allow the site to continue serving pages even if rapid catastrophic errors occur). A programmatic sketch follows this list.
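For reference, on IIS 7 or later step 3 can be scripted via Microsoft.Web.Administration; on the IIS 6 box in the question the same settings live in the app pool's Properties dialog in IIS Manager. The pool name and process count below are assumptions:

```csharp
// IIS 7+ version of step 3: more worker processes, no rapid-fail protection.
using Microsoft.Web.Administration;

class TunePool
{
    static void Main()
    {
        using (ServerManager server = new ServerManager())
        {
            ApplicationPool pool = server.ApplicationPools["SharePointPool"]; // assumed name
            pool.ProcessModel.MaxProcesses = 2;        // web garden: multiple worker processes
            pool.Failure.RapidFailProtection = false;  // keep serving despite repeated crashes
            server.CommitChanges();
        }
    }
}
```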
The IIS Debug Diagnostics tools will record a LOT of data, but you can specify specific "catch" alerts that will detect hangs, excessive CPU usage, etc. They will capture gigs of data, so be ready for a long wait when attempting to view the logs.
Turns out someone had tried to install some features that went haywire, so he wrote an stsadm script to uninstall those features.
The processor was still pegged after that.
I restarted the IIS application pool for that process, which didn't fix it.
So then I restarted IIS for that site, and that resolved the processor issue.