I need to increase the transaction timeout in Orchard CMS. I have one process that takes more than ten minutes, and after ten minutes Orchard throws a timeout exception. I noticed that the default timeout is 30 minutes. I don't know how to fix this. Any help would be appreciated.
Thank you
If your process takes ten minutes, I would strongly recommend executing it in a process other than the IIS worker.
Furthermore, check whether your transactions really need to take longer than the default timeout.
Neither IIS workers nor transactions are designed to be used this way.
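If the work truly must run inside one long transaction, here is a minimal System.Transactions sketch, not Orchard-specific, with DoWork as a hypothetical placeholder. Note that whatever timeout you request is still capped by the machine-wide maxTimeout in machine.config, which defaults to 10 minutes and would explain the failure at the ten-minute mark:

```csharp
using System;
using System.Transactions;

public class LongRunningJob
{
    public void Run()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            // Requested timeout; System.Transactions silently caps this at
            // the machine-wide maxTimeout (default 00:10:00 in machine.config).
            Timeout = TimeSpan.FromMinutes(30)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            DoWork(); // hypothetical placeholder for the long-running processing
            scope.Complete();
        }
    }

    private void DoWork()
    {
        // the actual ten-minute process goes here
    }
}
```

Raising maxTimeout is possible, but it is a machine-wide setting that affects every application on the box, which is one more reason to prefer moving the work out of the IIS worker.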
I've inherited an ASP.NET MVC app that takes between twenty seconds and a minute to display every single page. The vast majority of that time is spent in ActiveDirectoryClient. It looks like the original author may have read this Q&A about how to check group membership.
Calling ActiveDirectoryClient.Users.Where(...).ExecuteAsync() takes 3-10 seconds on a good day. Calling IUserFetcher.MemberOf.ExecuteAsync() takes another 5-7 seconds. Basically, every use of ActiveDirectoryClient takes several seconds, and there are many of them.
I tried using ActiveDirectoryClient.IsMemberOfAsync(...), but this just consumes 1.5 GB of RAM and never returns. (By "never" I mean I waited five minutes before stopping the debugger.)
I suspect that the problem is not with these bits of code, but some overall misconfiguration of Azure or the graph client. So maybe this question isn't even on the right site. Where do I begin troubleshooting this?
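To pin down where the time actually goes, a rough sketch of timing the two calls named above with Stopwatch might look like this. The client construction, the upn value, and the cast to IUserFetcher are assumptions; the shapes follow the Azure AD Graph client library this app appears to use:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.ActiveDirectory.GraphClient;

public static class GraphTimings
{
    // Sketch: time the two calls described above to confirm where the
    // seconds go. "client" and "upn" are assumed to be supplied by the app.
    public static async Task MeasureAsync(ActiveDirectoryClient client, string upn)
    {
        var sw = Stopwatch.StartNew();
        var users = await client.Users
            .Where(u => u.UserPrincipalName == upn)
            .ExecuteAsync();
        Console.WriteLine("Users.Where + ExecuteAsync: {0} ms", sw.ElapsedMilliseconds);

        var fetcher = (IUserFetcher)users.CurrentPage.First();
        sw.Restart();
        var groups = await fetcher.MemberOf.ExecuteAsync();
        Console.WriteLine("MemberOf.ExecuteAsync: {0} ms", sw.ElapsedMilliseconds);
    }
}
```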
I am researching the IIS Application Initialization module, and from what I can see, when using the AlwaysRunning option for the application pool's Start Mode setting, it basically starts a new worker process that will always run, even if there aren't any requests. When this option is applied, the process starts automatically.
My concern is memory management and CPU usage, specifically how this is handled, since the process always runs.
How does this compare to setting the Start Mode to OnDemand and increasing the Idle Time-out to a couple of days? That way, I guess, the process will sit idle for x days before it's terminated, then be reinitialized on the next request and keep running for another couple of days. If I set it to, say, 1.5 days, someone is bound to use the application at least once a day, so the process will never be terminated.
Can anyone share experience regarding this topic?
Thanks
I have a multisite application that runs a few sites under separate app pools. All are set to OnDemand Start Mode with an Idle Time-out of 1740 minutes, and I also use Page Output Cache with different durations for different page types. There is also NHibernate behind the scenes, and the DB is MySQL.
The most active site has more than 100k visits per day and is almost never idle. When it starts after a recycle, it needs 30 seconds to 2 minutes to become fully operable, depending on the requests at that moment, and CPU usage goes from 40% to 70%. Once the site is up, CPU usage is very low (0-4%) if there are no new entries in the DB, and memory usage is around 3GB when everything is cached. CPU sometimes goes to 20% when new requests arrive (for uncached content) and a new entry is being saved.
Page Output Cache also works on a first-come, first-served basis, so this can cause a small problem while the caching is being done: that user must wait, and a little more CPU is spent doing the caching.
The biggest problem in my case is using NHibernate and MySQL, but Page Output Cache resolved it for me once I decided to cache the page modules and content. I have realized it is better for an application to be starved for memory than for CPU.
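For reference, the plain ASP.NET MVC flavor of output caching looks like this; the controller, duration, and vary key here are hypothetical, and as described above you would use different durations per page type:

```csharp
using System.Web.Mvc;

public class ContentController : Controller
{
    // Sketch: cache the rendered page for 10 minutes, varying by id,
    // so repeat requests skip NHibernate and the DB entirely.
    [OutputCache(Duration = 600, VaryByParam = "id")]
    public ActionResult Show(int id)
    {
        return View();
    }
}
```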
3.5k visitors at one moment, with everything cached, gave me the same memory usage (3GB) and overall server CPU around 40%.
The other sites use around 1-1.5GB of memory, and CPU never goes above 20% at startup.
Another application with the same app pool settings, but using MSSQL with EF, I can't even notice running on the server. It is used by 10-60 users per minute, there is not much content except embed codes, and it uses 1-5% CPU and never more than 8MB of memory. On recycle it is back up in less than 10 seconds.
From this experience I can tell you that it all depends on what the application serves, how it works :), and how much content you have.
If you use OnDemand with a long IdleTime, it is effectively the same as AlwaysRunning during the stretches when the process is not being used. If you use OnDemand with a short IdleTime, you will more often need CPU to restart the process.
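To make the two configurations concrete, here is a sketch using Microsoft.Web.Administration (the pool name is hypothetical, and ApplicationPool.StartMode requires IIS 7.5 or later):

```csharp
using System;
using Microsoft.Web.Administration;

class PoolConfig
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            var pool = serverManager.ApplicationPools["MyAppPool"]; // hypothetical name

            // Option A: always running, never idles out.
            pool.StartMode = StartMode.AlwaysRunning;
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

            // Option B (what's described above): on demand, with a long idle timeout.
            // pool.StartMode = StartMode.OnDemand;
            // pool.ProcessModel.IdleTimeout = TimeSpan.FromMinutes(1740);

            serverManager.CommitChanges();
        }
    }
}
```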
I am using MVC3, ASP.NET4.5, C#, Razor, EF6.1, SQL Azure
I have been doing some load testing using JMeter, and I have found some surprising results.
I have a test of 30 concurrent users, ramping up over 10 secs. The test plan is fairly simple:
Login
Navigate to page
Do query
Navigate back
Logout
I am using "Small" instances in the "Standard" tier.
I have noticed that web instances may be waiting on external processes, such as database queries, so the web CPU can be low while the web tier is still the bottleneck. The CPU could be idling at 40% while waiting for a result set from the DB, which may also be why the extra instance is never triggered. This is a real issue: how do you trigger an extra instance based on longer wait times? At the moment the only way around it is to keep two instances up permanently, or to proactively scale against a schedule.
Use async calls and you won't have to worry about scaling up. While a request waits on the database, its thread is returned to the pool, freeing up resources to handle other users.
If you still see lengthened response times after that, it's probably the external process that's choking and in need of being scaled up.
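A minimal sketch of what that looks like in an MVC action; the controller, context, and entity here are hypothetical, and with EF6 the ToListAsync extension comes from System.Data.Entity:

```csharp
using System.Data.Entity;
using System.Threading.Tasks;
using System.Web.Mvc;

public class Item
{
    public int Id { get; set; }
}

public class AppDbContext : DbContext // hypothetical EF6 context
{
    public DbSet<Item> Items { get; set; }
}

public class QueryController : Controller
{
    // Sketch: while the query is in flight against SQL Azure, the request
    // thread goes back to the pool instead of blocking, so fewer threads
    // can serve more concurrent users.
    public async Task<ActionResult> Index()
    {
        using (var db = new AppDbContext())
        {
            var results = await db.Items.ToListAsync();
            return View(results);
        }
    }
}
```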
I have an Azure WebJob running continuously, but the logs indicate that over the weekend its status changed to Aborted, then Stopped. Although the website was not used over the weekend, I am not sure why this would happen, as there are still a lot of messages on the queue that need to be processed.
What can cause a continuous web job to stop or abort?
Does it have a timeout period?
Can the occurrence of multiple errors also cause it to stop or abort?
The job itself doesn't have a timeout period, but the website does. Unless you enable the Always On option, the website (and its jobs) will unload after some idle time.
Another reason a continuous job could stop is if you are running on the free tier and the job uses too much CPU time (I think you get 2.5 minutes of CPU time for every 5 minutes).
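For context, the skeleton of a continuous WebJob from this era looks roughly like the following; the queue name and function are hypothetical. RunAndBlock keeps the process alive while it dispatches messages, but without Always On the host site can still unload it:

```csharp
using System;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // Continuous WebJob host: blocks forever, dispatching queue messages.
        // Storage connection strings are read from the app's configuration.
        var host = new JobHost();
        host.RunAndBlock();
    }

    // Hypothetical function: invoked for each message on the "tasks" queue.
    public static void ProcessQueueMessage([QueueTrigger("tasks")] string message)
    {
        Console.WriteLine(message);
    }
}
```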
Maybe someone can shed some light on this simple question:
I have a .NET web application that has been thoroughly vetted. It loads a cache per appdomain (process) whenever one starts, and it cannot fully reply to requests until this cache loading completes.
I have been examining the settings on my application pools and have started wondering why I was even recycling so often (once every 1,000,000 calls or 2 hours).
What would prevent me from setting auto-recycle to once every 24 hours, or even longer? Why not remove the option completely and just recycle when memory spins out of control for the appdomain?
If your application runs reliably for longer than the threshold set for app pool recycling, then by all means increase the threshold. There is no downside if your app is stable.
For us, we have recycling turned off altogether, and instead have a task that loads a test page every minute and runs an iisreset if it fails to load five times in a row.
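A rough sketch of the kind of watchdog described above; the URL, probe timeout, and failure threshold are all hypothetical:

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading;

class Watchdog
{
    static void Main()
    {
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
        int failures = 0;

        while (true)
        {
            try
            {
                // Probe a known test page; any non-success status counts as a failure.
                var response = client.GetAsync("http://localhost/health").Result;
                failures = response.IsSuccessStatusCode ? 0 : failures + 1;
            }
            catch (Exception)
            {
                failures++;
            }

            if (failures >= 5)
            {
                // Five consecutive failures: restart IIS and reset the counter.
                Process.Start("iisreset").WaitForExit();
                failures = 0;
            }

            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}
```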
You should probably look at recycling from the point of view of reliability. Based on historical data, you should have an idea of how much memory, CPU and so on your app uses, what the usage patterns look like, and when trouble starts to occur. Knowing that, you can configure recycling to counter those issues. For example, if you know your app has an increasing memory usage pattern* that leads to it running out of memory after several days, you could configure it to recycle before that would have happened.
* Obviously, you would also want to fix this bug if possible, but recycling can be used to increase reliability for the customer.
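As a sketch of configuring those proactive thresholds with Microsoft.Web.Administration; the pool name and limits are hypothetical, and PrivateMemory is expressed in kilobytes:

```csharp
using System;
using Microsoft.Web.Administration;

class RecycleConfig
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            var recycling = serverManager.ApplicationPools["MyAppPool"].Recycling;

            // Recycle on a 24-hour period instead of every 2 hours...
            recycling.PeriodicRestart.Time = TimeSpan.FromHours(24);

            // ...and, as a backstop, recycle if private memory exceeds ~2 GB.
            recycling.PeriodicRestart.PrivateMemory = 2 * 1024 * 1024;

            serverManager.CommitChanges();
        }
    }
}
```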
The reason they do it is that an application can be "not working" even though its CPU and memory look fine (think deadlock). App pool recycling is a final failsafe measure that can keep flawed code from taking the application down for good.
Also, any objects whose owners failed to call Dispose will have their finalizers run on the recycle, which may release held resources.
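A sketch of the dispose pattern that point alludes to: if callers never call Dispose, the finalizer still runs eventually, for example when the appdomain unloads during a recycle, and releases the resource as a last resort. The native buffer here is a stand-in:

```csharp
using System;
using System.Runtime.InteropServices;

public class NativeBuffer : IDisposable
{
    private IntPtr _buffer = Marshal.AllocHGlobal(4096); // stand-in native resource

    public void Dispose()
    {
        Free();
        GC.SuppressFinalize(this); // disposed properly, no finalizer needed
    }

    ~NativeBuffer()
    {
        Free(); // safety net: runs at GC or appdomain unload if Dispose was skipped
    }

    private void Free()
    {
        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);
            _buffer = IntPtr.Zero;
        }
    }
}
```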