Recently, content staging became extremely slow for our Kentico 8.2 application (moving a page now takes 30 minutes or more). Similar staging tasks previously took seconds to complete. We have restarted the website, and that had no effect.
Before, we just had the one website in the Kentico instance. We recently deployed another website to the same instance. This could be a coincidence, but it is the only thing we can think of that might be affecting the staging performance. However, we do not understand why. Why would adding a second website slow down the content staging of a different website? How do we fix it? Also, if the addition of another website is just a coincidence, what are other things to check in the event of slow content staging? We don't really know where to start with this one.
EDIT
The sites are hosted on-premises (not Azure), on the same server.
Look into table index fragmentation. It grows over time and makes the staging application slow.
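A minimal sketch of that check, assuming a small C# console app and ADO.NET (the connection string is a placeholder): it queries sys.dm_db_index_physical_stats for heavily fragmented indexes in the Kentico database. In Kentico 8 the staging queue lives in tables such as CMS_StagingTask, so pay particular attention to those rows.

    using System;
    using System.Data.SqlClient;

    class StagingIndexFragmentationCheck
    {
        // Placeholder connection string; point it at the Kentico database.
        const string ConnectionString =
            "Server=YOUR_SQL_SERVER;Database=YOUR_KENTICO_DB;Integrated Security=true";

        // Lists indexes that are more than 30% fragmented and big enough to matter.
        const string Query = @"
            SELECT OBJECT_NAME(ips.object_id) AS TableName,
                   i.name                     AS IndexName,
                   ips.avg_fragmentation_in_percent,
                   ips.page_count
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
            JOIN sys.indexes i
              ON i.object_id = ips.object_id AND i.index_id = ips.index_id
            WHERE ips.avg_fragmentation_in_percent > 30 AND ips.page_count > 100
            ORDER BY ips.avg_fragmentation_in_percent DESC;";

        static void Main()
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(Query, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}.{1}: {2:F1}% fragmented ({3} pages)",
                            reader["TableName"], reader["IndexName"],
                            reader["avg_fragmentation_in_percent"], reader["page_count"]);
                    }
                }
            }
        }
    }

Rebuilding or reorganizing the worst offenders (or setting up a regular maintenance plan) usually brings staging times back down.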
Another thing to check: synchronize tasks/changes to the higher environment frequently, so the staging tables don't accumulate a large backlog of records to track.
Hope this helps in resolving your issue.
UPDATE: I've figured it out. See the end of this question.
I have an Azure App Service running four sites. One of the sites has two deployment slots in addition to the primary one. Recently I've been seeing really high CPU utilization for the App Service plan as a whole.
The dark orange line shows the CPU percentage. This is just after restarting all my sites, which brought it down to this level.
However, when I look at the CPU use reported by each site, it's really low.
The darker blue line shows the CPU time, which is basically nothing. I did this for all of my sites, and all the graphs look the same. Basically, it seems that none of my sites are causing the issue.
A couple of the sites have web jobs, so I took a look at the logs but everything is running fine there. The jobs run for a few seconds every few hours.
So my question is: how can I determine the source of this CPU utilization? Any pointers would be greatly appreciated.
UPDATE: Thanks to the replies below, I was able to get more detail into what was happening. I ended up getting what I needed from the SCM / Kudu tools. You can get there by going to your web app in Azure and choosing Advanced Tools from the side nav. From the Kudu dashboard, choose Process Explorer. The value in the Total CPU Time column is not directly useful on its own, because it's cumulative since the process started, which might have been minutes or days ago.
However, if you make a record of the value at intervals, you can look at the change over time, and one process might jump out at you. In my case, it was my WebJobs process. Every 60 seconds, this one process was consuming about 10 seconds of processor time, just within one environment.
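If you'd rather not record the numbers by hand, Kudu also exposes the same data over its REST API at /api/processes. A rough sketch of sampling it on a timer follows; the site name and deployment (publish profile) credentials are placeholders, and you should check the exact JSON property names your instance returns before parsing them.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class KuduCpuSampler
    {
        // Placeholders: your SCM (Kudu) URL and the deployment credentials
        // from the site's publish profile.
        const string ScmBaseUrl = "https://yourapp.scm.azurewebsites.net";
        const string UserName = "$yourapp";
        const string Password = "publish-profile-password";

        static async Task Main()
        {
            using (var client = new HttpClient { BaseAddress = new Uri(ScmBaseUrl) })
            {
                var token = Convert.ToBase64String(
                    Encoding.ASCII.GetBytes(UserName + ":" + Password));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", token);

                while (true)
                {
                    // /api/processes lists the processes Kudu can see; per-process
                    // details (including cumulative CPU time) are under /api/processes/{id}.
                    var json = await client.GetStringAsync("/api/processes");
                    Console.WriteLine("--- {0:u} ---", DateTime.UtcNow);
                    Console.WriteLine(json);

                    // Diff the cumulative CPU values between samples to see which
                    // process is actually burning CPU right now.
                    await Task.Delay(TimeSpan.FromSeconds(60));
                }
            }
        }
    }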
The great thing about this Kudu dashboard is, if you can catch the problem while it is actually happening, you can hit the Start Profiling button and capture a diagnostic session. You can then open this up in Visual Studio and get some nice details about where the CPU time is being spent.
Just in case anyone else is seeing similar issues, I'll provide more details about my particular case. As I mentioned, my WebJobs exe was the culprit, and I found that all the CPU time was being spent in StackExchange.Redis.SocketManager, which manages connections to Azure Redis Cache. In my main web app, I create only one connection, as recommended. But since my web jobs only run every once in a while, I was creating a new connection to Azure Redis Cache each time one ran, which apparently can lead to issues. I changed my code to create the Redis Cache connection once when the WebJob process starts up and to use that existing connection whenever an individual WebJob runs.
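For reference, this is roughly the shape of the change, following the usual StackExchange.Redis guidance of one shared ConnectionMultiplexer per process. The class names and the connection-string environment variable below are placeholders, not the actual code from my project.

    using System;
    using StackExchange.Redis;

    // One multiplexer for the whole WebJobs host process, created lazily on first use.
    public static class RedisConnection
    {
        private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
            new Lazy<ConnectionMultiplexer>(() =>
                ConnectionMultiplexer.Connect(
                    Environment.GetEnvironmentVariable("REDIS_CONNECTION_STRING")));

        public static ConnectionMultiplexer Connection
        {
            get { return LazyConnection.Value; }
        }
    }

    public class SomeWebJob
    {
        // Each WebJob invocation reuses the process-wide connection instead of
        // creating (and never cleaning up) a new one per run.
        public static void Run()
        {
            IDatabase cache = RedisConnection.Connection.GetDatabase();
            cache.StringSet("last-run-utc", DateTime.UtcNow.ToString("o"));
        }
    }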
Time will tell if this really fixes the issue, but I think it will. When the problem occurred, it always fit the same pattern: After a few days of running fine, my CPU would slowly ramp up over the course of about 12 hours. My thinking is that each time a WebJob ran, it created a connection object, which at first didn't produce trouble, but gradually as WebJobs ran every hour or two, cruft was building up until finally some critical threshold was met and the CPU usage would take off.
Hope this helps someone out there. Best wishes!
Maybe you should go to the web app's SCM (Kudu) site?
%yourAppName%.scm.azurewebsites.net
There is a page there that shows all the processes currently running on your web app (something like Console > Process Explorer).
You can also open the support page (top-right corner of the SCM site).
You can find some more information about your performance there and take a memory dump (not for this problem specifically, but it is useful for performance issues).
Based on your description, you could leverage the Crash Diagnoser extension to capture dump files from your Web Apps and WebJobs when the CPU usage percentage is higher than a specific threshold, to help isolate this issue. For more details, you could refer to this official blog.
I have an Umbraco website set up in Azure. The front-end website loads fine, but the back end takes more than 15 seconds from when you hit "Save and Publish" to when it shows the check mark denoting success. I set up a test copy on an Azure VM, pointing at the same Azure SQL database that the Azure-hosted Umbraco site uses, and I don't get this problem there.
I just spent some time debugging this same scenario. We had a situation where once someone saved a node, it would take 30 seconds before the UI would become responsive again. Network trace confirmed that we were waiting on API calls back from Umbraco.
We were on an S0 SQL instance, so I bumped it to S1 and the performance actually got worse (indexes rebuilding, perhaps?).
We already had a few Azure-specific config options set (like useTempStorage="Sync" in our ExamineSettings.config). We ended up adding the line below, and our saves went from 30-35 seconds to 1-2 seconds!
<add key="umbracoContentXMLUseLocalTemp" value="true" />
This is from the load balancing guide available here - https://our.umbraco.org/documentation/Getting-Started/Setup/Server-Setup/load-balancing/flexible
Umbraco Save and publish is fairly DB intensive. An S1 instance is possibly not beefy enough. We use S2 for our dev sites, and that can take up to 5 seconds to save and publish depending on whether anything else is hitting the database.
You may also have issues with other code running that's slowing things down. Things like Examine indexing can be quite slow on Azure. It could also be a plugin that's slowing things down.
How complex are your DocTypes? Also, which version of Umbraco are you running? Some older versions have bugs that cause performance issues on Azure.
I have a number of small MVC apps deployed as Microsoft Windows Azure websites. This has been working for several months.
Yesterday I rolled out a new one, and the deployment was unremarkable, everything worked fine. But a couple of hours later, access to the site was unavailable. The symptoms were that when the browser tried to navigate to the URL for that site, it would try to load for several minutes and then just give up with a completely blank page.
I attempted to stop and restart the site, and it worked once, but the symptoms came back several minutes later. Then I tried to stop and restart, and it didn't work.
I deployed the identical app to three additional URLs. Again, immediately after deployment they all work fine; however, they fail at some point later. They don't all fail at once. Sometimes restarting the site fixes the problem, and sometimes it doesn't.
IMPORTANT: If I wait for some period of time, the site may start to work again on its own.
However, deploying four versions of the app so that our users can go to a backup one if the primary one is not working is not optimal.
Any words of wisdom as to how I might go about debugging this?
ADDITIONAL INFO NOV 25, 2013:
When sites are failing, the IIS logs show either 500 (Internal Server Error) or 502 (Bad Gateway) errors. Our own MVC code is never hit, not even app_start.
You can start by checking the logs and remote debugging
http://www.drdobbs.com/windows/azure-sdk-22-supports-visual-studio-2013/240163499
Are the apps working locally?
Might not be the same problem, but from time to time our Azure instances will get the blue question mark of death as a status.
The reason, we found out, is that Microsoft does maintenance and upgrades on instances from time to time. If you have just one instance in a cloud service/role, it will be down during that maintenance window.
I have confirmed this with their support.
The only way to get around this that I know of is to create two instances. Then Microsoft guarantees ~99% availability.
Of course I also confirmed with them that this means twice the cost. =/
If that's not the issue I would enable RDP and get onto the machine to see what the problem is. Microsoft has these tools to help debug problems: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx
First, you should always run multiple instances of your web role with more than 1 upgrade domain. This is configurable in the service definition (CSDEF). Without this, you don't get an SLA from Microsoft, so you can't really complain that the VMs go down.
Second, to figure out what might be going on with these boxes, you should have both logs (my preference is to roll my own with page blobs or table storage), AND you should always have RDP access to a pre-production environment (production as well if you're not too fussed about security). Once on the box, look through the event viewer for errors.
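For the "roll my own" logging, here is a bare-bones sketch of writing log rows to Azure Table Storage with the classic WindowsAzure.Storage SDK; the table name, connection-string variable, and entity shape are placeholders rather than a prescribed design.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // One row per log message, partitioned by role/site name.
    public class LogEntry : TableEntity
    {
        public LogEntry() { }

        public LogEntry(string role, string message)
        {
            PartitionKey = role;
            RowKey = DateTime.UtcNow.Ticks.ToString("d19"); // sorts chronologically
            Message = message;
        }

        public string Message { get; set; }
    }

    public static class TableLogger
    {
        private static readonly CloudTable Table = InitTable();

        private static CloudTable InitTable()
        {
            var account = CloudStorageAccount.Parse(
                Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
            var table = account.CreateCloudTableClient().GetTableReference("sitelogs");
            table.CreateIfNotExists();
            return table;
        }

        public static void Write(string role, string message)
        {
            Table.Execute(TableOperation.Insert(new LogEntry(role, message)));
        }
    }

Querying by PartitionKey plus a RowKey time range then gives you a cheap per-role log you can read even when the site itself is unresponsive.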
Third, when an outage occurs check out the azure service dashboard (http://www.windowsazure.com/en-us/support/service-dashboard/) for outages.
Lastly, contact Microsoft support. It may take a few hours, but they are pretty good.
Given that it is happening repeatedly and for extended periods of time (more than 5 minutes), I would bet there's something wrong with your hosted service. Again, RDP in and poke around. Good luck.
To debug your sites try to enable diagnostic logs:
http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics-logging-and-instrumentation/
Another nice way to look around your site is using the debug console:
https://github.com/projectkudu/kudu/wiki/Kudu-console
Application pools in IIS are recycled very frequently and I can't figure out why. I remember reading about a possible issue in IIS6 that meant you were forced to recycle but a quick search now turns up empty. On IIS6 or 7 you can turn off the idle time, duration and specific time recycle options so no problems there.
So why does every .net site recycle the application pool? If a site didn't have any memory leaks could you set up a site that never needed to recycle?
Failing that, what would be the best way to ensure background tasks still get called? Are there any auto-restart modules for IIS, or should an external service be used to make those calls?
It sounds like it is possible to do if you really wanted/needed to?
Websites are intended to keep running (albeit in a stateless nature). There are a myriad of reasons why app pool recycling can be beneficial to the hosting platform to ensure both the website and the server run at optimum. These include (but not limited to) dynamically compiled assemblies remaining in the appdomain, use of session caching (with no guarantee of cleanup), other websites running amok and resources getting consumed over time etc. An app pool can typically serve more than one website, so app pool recycling can be beneficial to ensure everything runs smoothly.
Besides the initial boot when the app fires up again, the effect should be minimal. Http.sys holds onto requests while a new worker process is started, so no requests should be dropped. (For the background-task part of the question, see the sketch after the quote below.)
From https://weblogs.asp.net/owscott/why-is-the-iis-default-app-pool-recycle-set-to-1740-minutes
You may ask whether a fixed recycle is even needed. A daily recycle is just a band-aid to freshen IIS in case there is a slight memory leak or anything else that slowly creeps into the worker process. In theory you don’t need a daily recycle unless you have a known problem. I used to recommend that you turn it off completely if you don’t need it. However, I’m leaning more today towards setting it to recycle once per day at an off-peak time as a proactive measure.
My reason is that, first, your site should be able to survive a recycle without too much impact, so recycling daily shouldn’t be a concern. Secondly, I’ve found that even well behaving app pools can eventually have something sneak in over time that impacts the app pool. I’ve seen issues from traffic patterns that cause excessive caching or something odd in the application, and I’ve seen the very rare IIS bug (rare indeed!) that isn’t a problem if recycled daily. Is it a band-aid? Possibly, but if a daily recycle keeps a non-critical issue from bubbling to the top then I believe that it’s a good proactive measure to save a lot of troubleshooting effort on something that probably isn’t important to troubleshoot. However, if you think you have a real issue that is being suppressed by recycling then, by all means, turn off the auto-recycling so that you can track down and resolve your issue. There’s no black and white answer. Only you can make the best decision for your environment.
There's a lot more useful/interesting info there for someone relatively unlearned in the IIS world (like me), so I recommend reading the whole post.
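On the background-task part of the question: work running inside the worker process will always be interrupted by a recycle, so the robust options are an external scheduler or service hitting your endpoints. If the work has to stay in-process, one common mitigation is to register it with the hosting environment so ASP.NET at least tells it when a shutdown is coming. A minimal sketch with placeholder names, not a complete solution:

    using System;
    using System.Threading;
    using System.Web.Hosting;

    // Background worker that is notified when the app domain is shutting down
    // (for example, an app pool recycle), so it can finish or checkpoint its work.
    public class BackgroundWorkHost : IRegisteredObject
    {
        private readonly ManualResetEvent _stopRequested = new ManualResetEvent(false);

        public BackgroundWorkHost()
        {
            HostingEnvironment.RegisterObject(this);
            ThreadPool.QueueUserWorkItem(_ => WorkLoop());
        }

        private void WorkLoop()
        {
            // Wake up once a minute until a shutdown is requested.
            while (!_stopRequested.WaitOne(TimeSpan.FromMinutes(1)))
            {
                DoBackgroundTask(); // placeholder for the periodic work
            }
        }

        private void DoBackgroundTask() { /* ... */ }

        // Called by ASP.NET before the app domain goes away.
        public void Stop(bool immediate)
        {
            _stopRequested.Set();
            HostingEnvironment.UnregisterObject(this);
        }
    }

Note that after the recycle nothing restarts the worker until a request warms the app back up, which is exactly why an external trigger is usually paired with this.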
I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time (more than 15 minutes, sometimes failing due to timeouts) to create a subsite, even with the built-in site templates, while on the development server it only takes 1 to 2 minutes.
Both servers have the same configuration: 8 CPU cores and 8 GB of RAM. Each uses its own database server, also with the same configuration. The content database is around 100 GB and holds more than a hundred subsites.
What could be the reason the production server takes so much longer? Is there any configuration or anything else I need to take care of?
Update:
Today I had the chance to check the environment with my clients, but site creation was fast, even though they said they hadn't changed any configuration on the server.
I also used the chance to examine the database. Disk fragmentation was quite high at 49%, so I suggested they run a defrag. I also asked for the database file growth to be increased to 100 MB, up from the default 1 MB.
So my suspicion is that some processes were previously running heavily on the server, and that's why it took so much time.
Update 2:
Yesterday my client reported that site creation was slow again, so I went to check. When I looked at the database, I found that instead of the reported 100 GB, the content database is only around 30 GB, so it's still well below the recommended size.
One thing that got my attention: the site collection recycle bin was holding almost 5 million items. Whenever I tried to browse it, it took a long time to open and the whole site collection became inaccessible.
Since the web application setting is set to the default (30 days before cleaning up, and 50% size for the second stage recycle bin), is this normal or is this a potential problem also?
There is actually also another web application using the same database server with a 100 GB content database, and it's always fast, but the one with 30 GB is slow. Both have the same setup, just different data.
What should I check next?
Yes, it's normal out-of-the-box behavior if you haven't turned the second-stage recycle bin off or set a site quota. If a site quota has not been set, then the growth of the second-stage recycle bin is not limited.
The second-stage recycle bin is by default limited to 50% of the site quota; in other words, if you have a site quota of 100 GB, you would have a second-stage recycle bin of 50 GB. If a site quota has not been set, there are no growth limitations.
I second everything Nat has said and emphasize splitting the content database. There are instructions on how to do this, provided you have multiple site collections and not a single massive one.
Also check your SharePoint databases are in good shape. Have you tried DBCC CHECKDB? Do you have SQL Server maintenance plans configured to reindex and reduce fragmentation? Read these resources on TechNet (particularly the database maintenance article) for details.
Finally, see if there is anything more you can do to isolate the SQL Server as the problem. Are there any other applications with databases on the same SQL Server and are they having problems? Are you running performance monitoring on the SQL Server or SharePoint servers that show any bottlenecks?
Back up the production database, restore it to dev, and attach it to your dev SharePoint server.
Try to create a site. If it does not take forever there, you can assume the problem lies with the production database server rather than with the content itself.
Regardless, at 100 GB you are running up against the limit for a content database and should be planning to put content into more than one. You will know why when you try to back up the database. Searching should also be starting to take a good long time by now.
So, long term, you are going to have to plan on splitting your websites out into different content databases.
--Responses--
Yeah, database size is really just about whether SQL Server can handle it. 100 GB is just the "any more than this and it starts to be a pain" rule of thumb. Full search crawls will also start to take a while.
Given that you do not have access to the production database and that creating a sub-site is primarily a database operation, there is nothing you can really do to figure out what the issue is.
You could try creating a subsite while doing a trace of the Dev database and look at the tables those commands reference to see if there is a smoking gun, but without production access you are really hampered.
Does the production system serve pages and documents at a reasonable speed?
See if you can start getting some stats from the database during site creation to find out what work is being done. SQL Server has some great tools for that now.
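As a starting point, here is a sketch of the kind of query that shows what SQL Server is doing while the subsite is being created: it reads sys.dm_exec_requests plus the statement text for everything currently executing. The connection string is a placeholder; run it while the site-creation request is in flight.

    using System;
    using System.Data.SqlClient;

    class ActiveRequestsSnapshot
    {
        // Placeholder; point it at the SQL Server hosting the content database.
        const string ConnectionString =
            "Server=YOUR_SQL_SERVER;Database=master;Integrated Security=true";

        const string Query = @"
            SELECT r.session_id, r.status, r.wait_type, r.cpu_time,
                   r.total_elapsed_time, t.text AS sql_text
            FROM sys.dm_exec_requests r
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
            WHERE r.session_id <> @@SPID
            ORDER BY r.total_elapsed_time DESC;";

        static void Main()
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(Query, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("spid {0} [{1}] wait={2} cpu={3} ms elapsed={4} ms",
                            reader["session_id"], reader["status"], reader["wait_type"],
                            reader["cpu_time"], reader["total_elapsed_time"]);
                        Console.WriteLine("  " + reader["sql_text"]);
                    }
                }
            }
        }
    }

Long-running requests stuck on a wait type, or the same statements repeatedly hitting the recycle bin data, would be the kind of smoking gun mentioned above.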