I have a question about Unity Container. My MVC application creates a Unity container in Application_Start in Global.asax, which works like below:
_container = new UnityContainer();
_container.RegisterType(typeof(MainBCUnitOfWork), new PerResolveLifetimeManager());
From what I know, IIS will instantiate the type MainBCUnitOfWork only one time during its life cycle and will use the same instance for all requests, which is why I am using a lifetime manager of type PerResolveLifetimeManager.
My application has always worked well in this mode; however, I am now trying to implement shared / cross-database access, where the required database comes from a session or query string value and is switched with the method below:
public void ChangeDatabase(string database)
{
Database.Connection.ConnectionString = "server=localhost;User Id=root;password=mypassword;Persist Security Info=True;database=" + database;
}
In my local testing everything works OK, but I have concerns about production, where IIS may process many requests at the same time.
I did some research and found references stating that IIS only processes one request at a time, and that to process more than one request I should enable Web Garden, but this would bring other problems. See this link: IIS and HTTP pipelining, processing requests in parallel
My question is: does the IIS server process only one request at a time, regardless of the source?
Can changing the database at run time interfere with earlier requests that are still in progress?
I use Unity 2, which does not have PerRequestLifetimeManager, which would instantiate MainBCUnitOfWork once per request, as suggested here: MVC, EF - DataContext singleton instance Per-Web-Request in Unity
If I upgrade to a newer version and use one instance per request, what would be the impact on performance?
What is recommended for this situation?
From what I know, IIS will instantiate the type MainBCUnitOfWork only one time during its life cycle and will use the same instance for all requests, which is why I am using a lifetime manager of type PerResolveLifetimeManager.
This is wrong. Look at these articles (one and two). PerResolveLifetimeManager is not a singleton lifetime manager. You'll get a new instance of MainBCUnitOfWork for every Resolve.
What is recommended for this situation?
Using PerRequestLifetimeManager is the best choice for web applications. You will get a new, independent instance of your UnitOfWork for every request.
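For illustration, a minimal registration sketch, assuming an upgrade to Unity 3+ with the Unity.Mvc bootstrapper package (which, as far as I know, supplies PerRequestLifetimeManager and the UnityPerRequestHttpModule it depends on):

using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Mvc;

var container = new UnityContainer();
// One MainBCUnitOfWork per HTTP request, disposed when the request ends,
// so a database switch in one request cannot leak into another.
container.RegisterType<MainBCUnitOfWork>(new PerRequestLifetimeManager());

Since every request gets its own UnitOfWork, the ChangeDatabase call above only affects the connection owned by the current request.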
Context
In an ASP.NET Core application I would like to execute an operation which takes, say, 5 seconds (like sending an email). I do know async/await and its purpose in ASP.NET Core; however, I do not want to wait for the end of the operation. Instead, I would like to return to the client immediately.
Issue
So it is kind of a Fire and Forget operation, either homebrew or via Hangfire's BackgroundJob.Enqueue<IEmailSender>(x => x.Send("hangfire#example.com"));
Suppose I have some more complex method with an injected ILogger and other stuff, and I would like to Fire and Forget that method. The method contains error handling and logging. (Note: Hangfire is not required; the issue is agnostic to how the background worker is implemented.) My problem is that the method will run completely out of context: probably nothing will work inside it, there is no HttpContext (I mean HttpContextAccessor will give null, etc.), so no User, no Session, etc.
Question
How do I correctly solve, say, this particular email sending problem? No one wants to wait 5 seconds for the response, and at the same time no one wants to throw away an email, without even logging it, if the send operation returned an error...
How do I correctly solve, say, this particular email sending problem?
This is a specific instance of the "run a background job from my web app" problem.
there is no universal solution
There is - or at least, there is a universal pattern; it's just that many developers try to avoid it because it's not easy.
I describe it pretty fully in my blog post series on the basic distributed architecture. I think one important thing to acknowledge is that since your background work (sending an email) is done outside of an HTTP request, it really should be done outside of your web app process. Once you accept that, the rest of the solution falls into place:
You need a durable storage queue for the work. Hangfire uses your database; I tend to prefer cloud queues like Azure Storage Queues.
This means you'll need to copy over all the data that you will need, since it has to be serialized into that queue. The same restriction applies to Hangfire; it's just not obvious because Hangfire runs in the same web application process.
You need a background process to execute your work queue. I tend to prefer Azure Functions, but another common approach is to run an ASP.NET Core Worker Service as a Win32 service or Linux daemon. Hangfire has its own ad-hoc in-process thread. Running an ASP.NET Core hosted service in-process would also work, though that has some of the same drawbacks as Hangfire since it also runs in the web application process.
Finally, your work queue processor application has its own service injection, and you can code it to create a dependency scope per work queue item if desired.
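As a rough illustration of that last point (my sketch, not from the blog series): a minimal ASP.NET Core worker that creates one dependency scope per queue item. IEmailSender and EmailJob are assumed names, and an in-process Channel stands in for the durable queue, which in a real deployment would be Azure Storage Queues or similar:

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical work item: carries everything the job needs, because the
// real thing must be serialized into the durable queue.
public record EmailJob(string To, string Subject, string Body);

public interface IEmailSender { Task Send(string to); }

public class EmailQueueProcessor : BackgroundService
{
    private readonly ChannelReader<EmailJob> _queue; // stand-in for the durable queue
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly ILogger<EmailQueueProcessor> _logger;

    public EmailQueueProcessor(ChannelReader<EmailJob> queue,
        IServiceScopeFactory scopeFactory, ILogger<EmailQueueProcessor> logger)
    {
        _queue = queue;
        _scopeFactory = scopeFactory;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (EmailJob job in _queue.ReadAllAsync(stoppingToken))
        {
            // One DI scope per work item, analogous to a request scope.
            using IServiceScope scope = _scopeFactory.CreateScope();
            var sender = scope.ServiceProvider.GetRequiredService<IEmailSender>();
            try
            {
                await sender.Send(job.To);
            }
            catch (Exception ex)
            {
                // Errors are logged here, in the worker, where they happen.
                _logger.LogError(ex, "Failed to send email to {To}", job.To);
            }
        }
    }
}

Wiring it up would be a matter of registering the channel and calling AddHostedService<EmailQueueProcessor>() in the host; with a real durable queue, the ChannelReader gives way to the queue client.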
IMO, this is a normal threshold that's reached as your web application "grows up". It's more complex than a simple web app: now you have a web app, a durable queue, and a background processor. So your deployment becomes more complex, you need to think about things like versioning your worker queue schema so you can upgrade without downtime (something Hangfire can't handle well), etc. And some devs really balk at this because it's more complex when "all" they want to do is send an email without waiting for it, but the fact is that this is the necessary step upwards when a baby web app becomes distributed.
Recently we wanted to address the slow loading problem of IIS for the first request. After some research, I found that IIS 7.5+ has a feature named "Application Initialization" which may be what I need.
However, I have to understand the mechanism before I try to apply it, and here is my understanding:
With default IIS settings:
1. The application pool idles out after 20 minutes
2. The corresponding worker process is killed
3. The first request comes in
4. IIS starts to create a new worker process
5. IIS starts to load the application
6. The client gets a response once the application is loaded
And steps 4 and 5 make the first request not so responsive.
With Application Initialization set:
1. The application pool idles out after 20 minutes
2. The corresponding worker process is killed
3. IIS starts to create a new worker process
4. IIS starts to load the application through a "fake" request
5. The first request comes in
6. The client gets a response once the application is loaded
Now the first request is responsive, as it is in fact not the first request to the server: some time before, there was a "fake" request which kicked off the loading of the application.
What I would like to know is:
Is my understanding correct?
When Application Initialization is set, the worker process is still killed, but a new one is created right after it; is that the case?
That's pretty much how it works. Without Application Initialization, as you mentioned, once the worker process is killed, it is not restarted until a request is sent to it. Upon the first request, a new worker process (W3WP.exe) is started and it begins to load the application. This cold start of the application is what typically makes the first request less responsive. For example, if it's an ASP.NET application, the first request triggers the recompilation of the temporary ASP.NET files, and this can take several seconds in a moderately large enterprise application.
If you look at the setup of Application Initialization, you will see that there are two main parts to it:
1. You need to set the startMode of the application pool associated with the website to AlwaysRunning
2. You need to set preloadEnabled to true on the application path of the website
Step 1 is what tells IIS to automatically restart the IIS worker process whenever there is a reboot or IISReset. (You can easily see this in action in Task Manager: do only step 1 and an IISReset, and you should see the existing W3WP.exe process being removed and a new one being created.)
Step 2 is what tells IIS to make the initial fake/dummy request that will do all the required initialisation of your web application. For example, for an ASP.NET application, this essentially triggers the compilation of all the ASP.NET files, so that the next request - the actual first request to the page - does not experience the long delays associated with app initialisation.
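For reference, a minimal applicationHost.config sketch of those two settings, with hypothetical pool and site names (IIS 8.0 built-in syntax; on IIS 7.5 the Application Initialization module has to be installed separately):

<applicationPools>
    <!-- Step 1: start the worker process without waiting for a request -->
    <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>

<sites>
    <site name="MySite" id="1">
        <!-- Step 2: issue the warmup ("fake") request as soon as the pool starts -->
        <application path="/" preloadEnabled="true" applicationPool="MyAppPool" />
    </site>
</sites>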
While it is true that the traditional approach of using a script to poll the app, preventing it from going idle, can do the job, the ApplicationInitialization module makes the job much easier. You can even have IIS issue the dummy request to a custom warmup script that does much more than a simple page load: preloading a cache of several web pages, generating ahead of time anything that might otherwise take longer, etc.
Official documentation here:
IIS 7.5
IIS 8.0
Your understanding is correct based on my experiences. I first ran into this capability in a performance testing scenario way back in 2014. I was custom coding the ping portion of this into monitoring jobs :O
"The Application Initialization Module basically allows you to turn on
Preloading on the Application Pool and the Site/IIS App, which
essentially fires a request through the IIS pipeline as soon as the
Application Pool has been launched. This means that effectively your
ASP.NET app becomes active immediately, Application_Start is fired
making sure your app stays up and running at all times." - Rick Strahl
Official detailed docs are on the MSDN site; from what I can see, not much has changed between IIS 7.5 and 8.0 in the way of config.
I am running a load test using JMeter on my Azure web services.
I scaled my service to S2 with 4 instances, and I run 4 JMeter instances with 500 threads each.
It starts perfectly fine, but after a while calls start failing with a timeout error (HTTP status 500).
I checked the HTTP request queue on Azure and found that it is very high on the 2nd instance and very low on two other instances.
Please help me get my load test to succeed.
I assume you are using Azure App Service. If you check the settings of your app, you will notice that ARR's Instance Affinity is enabled by default. A brief explanation:
ARR cleverly keeps track of connecting users by giving them a special cookie (known as an affinity cookie), which allows it to know, upon subsequent requests, which server instance they were talking to. This way, we can be sure that once a client establishes a session with a specific server instance, it will keep talking to the same server as long as his session is active.
This is an important feature for session-sensitive applications, but if it's not your case then you can safely disable it to improve the load balance between your instances and avoid situations like the one you've described.
Disabling ARR’s Instance Affinity in Windows Azure Web Sites
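If you prefer to disable it from the application side, a minimal web.config sketch; the custom response header below is the documented opt-out switch:

<system.webServer>
    <httpProtocol>
        <customHeaders>
            <!-- Tells the ARR front end not to issue an affinity cookie -->
            <add name="Arr-Disable-Session-Affinity" value="true" />
        </customHeaders>
    </httpProtocol>
</system.webServer>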
It might be due to caching of network name resolution at the JVM or OS level, so all your requests are hitting only one server. If that is the case, add a DNS Cache Manager to your Test Plan and it should resolve your issue.
See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for more detailed explanation and configuration instructions.
This is a best practices question.
Per this best practices article and per MSDN, the OrganizationServiceProxy is not thread safe.
If you have a multi-threaded client application in which you create an instance of an OrganizationServiceContext on a per-thread basis (its constructor accepts an IOrganizationService instance), and you pass in a global instance of the OrganizationServiceProxy (i.e. a static instance allocated once at the "process level"), will this cause threading issues? And if the OrganizationServiceProxy instance faults, will it affect operations that the threads try to perform on their own "local" instance of the OrganizationServiceContext?
My belief is that it will, and that an OrganizationServiceProxy instance needs to be created on a per-thread basis, with each OrganizationServiceContext in a multi-threaded application having its own corresponding OrganizationServiceProxy instance.
I'm posting this to get confirmation of the above.
Also, the article indicates
The service proxy class performs the metadata download and user authentication by using the following class methods
IServiceManagement<IOrganizationService> orgServiceManagement =
    ServiceConfigurationFactory.CreateManagement<IOrganizationService>(
        new Uri(organizationUrl));
AuthenticationCredentials authCredentials = orgServiceManagement.Authenticate(credentials);
By caching the service management and authenticated credential objects, your application can more efficiently construct the service proxy objects more than one time per application session
If I try to execute the above API calls manually in Active Directory authentication mode, authCredentials.SecurityTokenResponse is null, as indicated by MSDN.
Is there a way to perform the authentication just once for AD mode and pass an authenticated SecurityTokenResponse to a newly created OrganizationServiceProxy via the following constructor?
OrganizationServiceProxy (IServiceConfiguration, SecurityTokenResponse)
so that you don't have to take the authentication and metadata download hit on a per-thread basis when constructing the OrganizationServiceProxy instance for each thread, and instead take the hit just once?
Yes, you will definitely have issues if you attempt multi-threaded operations on a single IOrganizationService.
We have two basic multi-threaded CRM applications: batch processors, and a web app. For the batch programs, I've found it works better to have only 10 different threads and to split the work among them. So if you're inserting 100,000 records, split them into 10 batches of 10,000, with a single organization service for each thread.
We also have a website that does a lot of CRM interactions, where there is no real way to batch the requests, so we created a CRM connection pool to reuse any open, already-authenticated connections.
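To illustrate the caching the question quotes (a sketch under the assumption of AD authentication; organizationUrl and credentials are assumed to exist), the expensive metadata download and authentication happen once, and each thread then constructs its own cheap proxy. In AD mode SecurityTokenResponse is null, so the ClientCredentials-based constructor is used instead:

// Done once per process: metadata download + authentication.
IServiceManagement<IOrganizationService> mgmt =
    ServiceConfigurationFactory.CreateManagement<IOrganizationService>(
        new Uri(organizationUrl));
AuthenticationCredentials authCredentials = mgmt.Authenticate(credentials);

// Done once per thread: building a proxy from the cached objects is cheap.
OrganizationServiceProxy CreateProxyForThread()
{
    return new OrganizationServiceProxy(mgmt, authCredentials.ClientCredentials);
}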
Of course this won't work at all if you're not using some system service account.
Background:
I have a system that hosts WCF services inside a Windows Service with NetTCP binding. To add a new service to the collection, you simply add the standard WCF config entries under <system.serviceModel>/<services> and then add a line inside a custom configuration section that tells the hosting framework it needs to initialize the service. Each service is initialized with its own background thread and AppDomain instance to keep everything isolated.
Here is an example of how the services are initialized:
Host
  - ServerManager
    - ServiceManager
      - BaseServerHost
The ServerManager instance has a collection of ServiceManagers that each correlate to a single service instance which is where the standard WCF implementation lies (ServiceHost.Open/Close, etc). The ServiceManager instance instantiates (based on the config - it has the standard assembly/type definition) an instance of the service by use of the BaseServerHost base class (abstract). Every service must inherit from this for the framework to be able to use it. As part of the initialization process BaseServerHost exposes a couple of events, specifically an UnhandledException event that the owning ServiceManager attaches itself to. (This part is critical in relation to the question below.)
This entire process works exceptionally well for us (one instance is running 63 services) as I can bring someone on who doesn't know anything about WCF and they can create services very quickly.
Question:
The problem I have run into is with background threading. A majority of the exposed methods on our endpoints do a significant amount of work after a standard insert/update/delete call, such as sending messages to other systems. To keep performance up (the front end is web-based), we let the initial insert/update/delete method do its thing and then fire off a background thread to handle all the stuff an end user doesn't need to wait on. This works great until something in that background thread goes unhandled and brings the entire Windows service down, which I understand is by design (and I'm OK with that).
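For illustration, a minimal sketch of such a fire-and-forget call with a guard that keeps an exception from escaping the thread (DoPostProcessing and Log are placeholder names):

ThreadPool.QueueUserWorkItem(_ =>
{
    try
    {
        DoPostProcessing(); // placeholder for the post-insert/update work
    }
    catch (Exception ex)
    {
        // Swallow and log: an unhandled exception on a background thread
        // tears down the whole Windows service.
        Log.Error("Post-processing failed", ex);
    }
});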
Based on all of my research, I have found that there is no way to implement a global try/catch (short of the legacy config switch that restores .NET 1.1 behavior for unhandled exceptions on background threads), so my team will have to go back and add them in the appropriate places. That aside, what I've found is that each call on the WCF endpoint side appears to run on its own thread, and getting that thread to talk to the "parent" has been a nightmare. From the service viewpoint, here is the layout:
Endpoint (svc - inherits from BaseServerHost, mentioned above)
  - Business Layer
    - Data Layer
When I catch an exception on a background thread in the business layer I bubble it up to the Endpoint instance (which inherits from BaseServerHost) which then attempts to fire BaseServerHost's UnhandledException event for this particular service (which was attached to by the owning ServiceManager that instantiated it). Unfortunately the event handler is no longer there so it does nothing at all. I've tried numerous things to get this to work and thus far all of my efforts have been in vain.
When looking at the full model (shown below), I need to make the Business layer know about its parent Endpoint (this works) and the endpoint needs to know about the running BaseServerHost instance which needs to know about the ServiceManager that is hosting it so the errors can be bubbled up to this for use in our standard logging procedures.
Host
  - ServerManager
    - ServiceManager        <==========
      - BaseServerHost              ||
        - Endpoint (svc)            ||
          - Business Layer  <========
            - Data Layer
I've tried static classes with no luck, and even went as far as making ServerManager static and exposing its previously internal collection of ServiceManagers (so they can be shut down), but that collection is always empty or null too.
Thoughts on making this work?
EDIT: After digging a little further I found an example of exactly how I envision this working. In a standard ASP.NET website, on any page/handler etc. you can use the HttpContext.Current property to access the current context for that request. This is exactly how I would want this to work with a "ServiceManager.Current" returning the owning ServiceManager for that service. Perhaps that helps?
Maybe you should look into doing something with CallContext:
http://msdn.microsoft.com/en-us/library/system.runtime.remoting.messaging.callcontext.aspx
You can use either SetData/GetData or LogicalSetData/LogicalGetData, depending on whether you want your ServiceManager to be associated with one physical thread (SetData) or a "logical" thread (LogicalSetData). With LogicalSetData you could make the same ServiceManager instance available within a thread as well as within that thread's "child" threads. Will try to post a couple of potentially useful links later when I can find them.
Here is a link to the "Virtual Singleton Pattern" on codeproject.
Here is a link to "Thread Singleton"
Here is a link to "Ambient Context"
All of these ideas are similar. Essentially, you have an object with a static Current property (can be get or get/set). Current puts its value in (and gets it from) the CallContext using either SetData (to associate the "Current" value with the current thread) or LogicalSetData (to associate the "Current" value with the current thread and to flow the value to any "child" threads).
HttpContext is implemented in a similar fashion.
System.Diagnostics.CorrelationManager is another good example that is implemented in a similar fashion.
I think the Ambient Context article does a pretty good job of explaining what you can accomplish with this idea.
Whenever I discuss CallContext, I try to also include this link to this entry from Jeffrey Richter's blog.
Ultimately, I'm not sure if any of this will help you or not. One case where it would be useful is a multithreaded server application (maybe each request is fulfilled by a thread, and multiple requests can be fulfilled at the same time on different threads) where you might have a ServiceManager per thread. In that case, you could have a static Current property on ServiceManager that would always return the correct ServiceManager instance for a particular thread, because it stores the ServiceManager in the CallContext. Something like this:
public class ServiceManager
{
    // Slot name under which the instance is stored in the CallContext.
    private const string serviceManagerSlot = "ServiceManager";

    public static ServiceManager Current
    {
        get
        {
            // Lazily create and cache one ServiceManager per thread.
            object o = CallContext.GetData(serviceManagerSlot);
            if (o == null)
            {
                o = new ServiceManager();
                CallContext.SetData(serviceManagerSlot, o);
            }
            return (ServiceManager)o;
        }
        set
        {
            CallContext.SetData(serviceManagerSlot, value);
        }
    }
}
Early in your process, you might configure a ServiceManager for use in the current thread (or current "logical" thread) and then store it in the Current property:
ServiceManager sm = new ServiceManager(/* thread-specific properties? */);
ServiceManager.Current = sm;
Now, whenever you retrieve ServiceManager.Current in your code, it will be the correct ServiceManager for the thread in which you are currently executing.
This whole idea might not really be what you want.
From your comment, you say that the CallContext data you try to retrieve in the event of an exception is null. That probably means the exception is being raised and/or caught on a different thread than the one on which the CallContext data was set. You might try using LogicalSetData to see if that helps.
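To illustrate the difference (a sketch; the slot names are arbitrary): data stored with LogicalSetData flows with the logical call context into child threads, while SetData stays with the physical thread that stored it:

using System.Runtime.Remoting.Messaging;
using System.Threading;

CallContext.SetData("perThread", "physical-thread-only");
CallContext.LogicalSetData("perLogicalThread", "flows-to-children");

new Thread(() =>
{
    // Null here: SetData is tied to the original physical thread.
    object physical = CallContext.GetData("perThread");

    // "flows-to-children": the logical call context is copied to child threads.
    object logical = CallContext.LogicalGetData("perLogicalThread");
}).Start();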
As I said, I don't know if any of this will help you, but hopefully I have been clear enough (and the examples have also been clear enough) so you can tell if these ideas apply to your situation or not.
Good luck.