WCF NetTCP with Background Threading

Background:
I have a system that hosts WCF services inside a Windows Service with NetTCP binding. To add a new service to the collection, you simply add the standard WCF config entries under <system.serviceModel>/<services> and then add a line to a custom configuration section that tells the hosting framework it needs to initialize the service. Each service is initialized with its own background thread and its own AppDomain instance to keep everything isolated.
Here is an example of how the services are initialized:
Host
  - ServerManager
    - ServiceManager
      - BaseServerHost
The ServerManager instance has a collection of ServiceManagers that each correlate to a single service instance, which is where the standard WCF implementation lies (ServiceHost.Open/Close, etc.). The ServiceManager instance instantiates an instance of the service (based on the config, which has the standard assembly/type definition) through the abstract BaseServerHost base class. Every service must inherit from BaseServerHost for the framework to be able to use it. As part of the initialization process, BaseServerHost exposes a couple of events, specifically an UnhandledException event that the owning ServiceManager attaches itself to. (This part is critical in relation to the question below.)
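A stripped-down sketch of that relationship (member names other than BaseServerHost, ServiceManager and UnhandledException are illustrative, not our exact code):

using System;

public abstract class BaseServerHost
{
    public event EventHandler<UnhandledExceptionEventArgs> UnhandledException;

    // Called by the derived service (the svc endpoint) to bubble an error up.
    protected void RaiseUnhandledException(Exception ex)
    {
        var handler = UnhandledException;
        if (handler != null)
            handler(this, new UnhandledExceptionEventArgs(ex, false));
    }
}

public class ServiceManager
{
    private readonly BaseServerHost _host;

    public ServiceManager(BaseServerHost host)
    {
        _host = host;
        // Standard logging hook for anything the service bubbles up.
        _host.UnhandledException += (sender, e) =>
            Console.WriteLine("Logged: " + ((Exception)e.ExceptionObject).Message);
    }
}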
This entire process works exceptionally well for us (one instance is running 63 services) as I can bring someone on who doesn't know anything about WCF and they can create services very quickly.
Question:
The problem I have run into is with background threading. A majority of the exposed methods on our endpoints do a significant amount of activity after a standard insert/update/delete method call such as sending messages to other systems. To keep performance up (the front-end is web-based) we let the initial insert/update/delete method do its thing and then fire off a background thread to handle all the stuff an end-user doesn't need to wait for to complete. This option works great until something in that background thread goes unhandled and brings the entire Windows service down, which I understand is by design (and I'm OK with).
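(For reference, the "hacked config" mentioned in the next paragraph is the runtime switch that restores the .NET 1.1 policy of not tearing the process down when a background thread throws. I'm only showing it for completeness; it goes in the Windows service's app.config:)

<configuration>
  <runtime>
    <!-- Revert to .NET 1.1 behavior: unhandled exceptions on background
         threads no longer terminate the process. Generally discouraged. -->
    <legacyUnhandledExceptionPolicy enabled="1"/>
  </runtime>
</configuration>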
Based on all of my research I have found that there is no way to implement a global try/catch (short of the hacked config that re-enables the .NET 1.1 handling of background-thread crashes), so my team will have to go back and put try/catch blocks in the appropriate places. That aside, what I've found is that each call on the endpoint side of the WCF hosting appears to run on its own thread, and getting that thread to talk to the "parent" has been a nightmare. From the service viewpoint here is the layout:
Endpoint (svc - inherits from BaseServerHost, mentioned above)
  - Business Layer
    - Data Layer
When I catch an exception on a background thread in the business layer, I bubble it up to the Endpoint instance (which inherits from BaseServerHost), which then attempts to fire BaseServerHost's UnhandledException event for this particular service (the event the owning ServiceManager attached to when it instantiated it). Unfortunately, the event handler is no longer there, so it does nothing at all. I've tried numerous things to get this to work and thus far all of my efforts have been in vain.
Looking at the full model (shown below), I need the Business layer to know about its parent Endpoint (this works), the Endpoint to know about the running BaseServerHost instance, and that instance to know about the ServiceManager that is hosting it, so errors can be bubbled up and fed into our standard logging procedures.
Host
  - ServerManager
    - ServiceManager       <=================
      - BaseServerHost                     ||
        - Endpoint (svc)                   ||
          - Business Layer <================
            - Data Layer
I've tried static classes with no luck, and even went as far as making ServerManager static and exposing its previously internal collection of ServiceManagers (so they can be shut down), but that collection is always empty or null too.
Thoughts on making this work?
EDIT: After digging a little further I found an example of exactly how I envision this working. In a standard ASP.NET website, on any page/handler etc. you can use the HttpContext.Current property to access the current context for that request. This is exactly how I would want this to work with a "ServiceManager.Current" returning the owning ServiceManager for that service. Perhaps that helps?

Maybe you should look into doing something with CallContext:
http://msdn.microsoft.com/en-us/library/system.runtime.remoting.messaging.callcontext.aspx
You can use either SetData/GetData or LogicalSetData/LogicalGetData, depending on whether you want your ServiceManager to be associated with one physical thread (SetData) or a "logical" thread (LogicalSetData). With LogicalSetData you could make the same ServiceManager instance available within a thread as well as within that thread's "child" threads. Will try to post a couple of potentially useful links later when I can find them.
Here is a link to the "Virtual Singleton Pattern" on codeproject.
Here is a link to "Thread Singleton"
Here is a link to "Ambient Context"
All of these ideas are similar. Essentially, you have an object with a static Current property (can be get or get/set). Current puts its value in (and gets it from) the CallContext using either SetData (to associate the "Current" value with the current thread) or LogicalSetData (to associate the "Current" value with the current thread and to flow the value to any "child" threads).
HttpContext is implemented in a similar fashion.
System.Diagnostics.CorrelationManager is another good example that is implemented in a similar fashion.
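For instance, CorrelationManager hands out ambient, per-call state through static properties, roughly like this (hypothetical usage, not code from the question):

using System;
using System.Diagnostics;

class CorrelationDemo
{
    static void Main()
    {
        // Ambient state is set once at the start of a logical operation...
        Trace.CorrelationManager.ActivityId = Guid.NewGuid();
        Trace.CorrelationManager.StartLogicalOperation("HandleRequest");

        DoWork(); // ...and anything called from here can see it.

        Trace.CorrelationManager.StopLogicalOperation();
    }

    static void DoWork()
    {
        // No parameters were passed, yet the ambient values are available.
        Console.WriteLine(Trace.CorrelationManager.ActivityId);
        Console.WriteLine(Trace.CorrelationManager.LogicalOperationStack.Peek());
    }
}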
I think the Ambient Context article does a pretty good job of explaining what you can accomplish with this idea.
Whenever I discuss CallContext, I try to also include a link to this entry from Jeffrey Richter's blog.
Ultimately, I'm not sure if any of this will help you or not. One case where it would be useful is a multithreaded server application (maybe each request is fulfilled by a thread, and multiple requests can be fulfilled at the same time on different threads) where you have a ServiceManager per thread. In that case, you could have a static Current property on ServiceManager that always returns the correct ServiceManager instance for a particular thread, because it stores the ServiceManager in the CallContext. Something like this:
using System.Runtime.Remoting.Messaging;

public class ServiceManager
{
    // Name of the CallContext slot that holds the current ServiceManager.
    private const string serviceManagerSlot = "ServiceManager";

    public static ServiceManager Current
    {
        get
        {
            // Return the ServiceManager already associated with this thread,
            // creating and storing one on first access.
            object o = CallContext.GetData(serviceManagerSlot);
            if (o == null)
            {
                o = new ServiceManager();
                CallContext.SetData(serviceManagerSlot, o);
            }
            return (ServiceManager)o;
        }
        set
        {
            CallContext.SetData(serviceManagerSlot, value);
        }
    }
}
Early in your process, you might configure a ServiceManager for use in the current thread (or current "logical" thread) and then store it in the Current property:
ServiceManager sm = new ServiceManager(/* thread-specific properties? */);
ServiceManager.Current = sm;
Now, whenever you retrieve ServiceManager.Current in your code, it will be the correct ServiceManager for the thread on which you are currently executing.
This whole idea might not really be what you want.
From your comment, you say that the CallContext data you try to retrieve in the event of an exception is null. That probably means the exception is being raised and/or caught on a different thread than the one on which the CallContext data was set. You might try using LogicalSetData to see if that helps.
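A tiny sketch of the difference, in case it helps (purely illustrative, not your ServiceManager code): data stored with LogicalSetData flows to threads started from the current one, while data stored with SetData stays on the thread that set it.

using System;
using System.Runtime.Remoting.Messaging;
using System.Threading;

class CallContextDemo
{
    static void Main()
    {
        CallContext.SetData("PerThread", "only visible on the main thread");
        CallContext.LogicalSetData("Logical", "flows to child threads");

        Thread child = new Thread(() =>
        {
            // Expected: "<null>" - the per-thread slot does not flow.
            Console.WriteLine(CallContext.GetData("PerThread") ?? "<null>");
            // Expected: "flows to child threads" - the logical slot does flow.
            Console.WriteLine(CallContext.LogicalGetData("Logical") ?? "<null>");
        });
        child.Start();
        child.Join();
    }
}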
As I said, I don't know if any of this will help you, but hopefully I have been clear enough (and the examples have also been clear enough) so you can tell if these ideas apply to your situation or not.
Good luck.

Related

Synchronisation of managed instances

I want to implement a sort of multiton class (maybe also known as the Manager design pattern) that loads (and manages) objects according to user configuration (the key of each object in the multiton is the primary key of the configuration record). These objects are disposed and recreated (i.e. reloaded) if changes in configuration are detected.
Other objects (external to the managed objects) interact/communicate with these "managed" objects.
E.g.
ManagerA manages configured instances of ClassA.
ObjectB retrieves an instance of ClassA via ManagerA and starts interacting with the instance.
The problem is that the interaction between ObjectB and the managed instance of ClassA can potentially happen on a different thread than the one on which ManagerA disposes the instance of ClassA and creates a new instance of ClassA (for the new, changed configuration). I.e. the managed instance could be disposed just as (or just before) ObjectB interacts with it.
My question is how should one synchronise the instance management and interaction with these managed instances by external objects?
This is pretty difficult with no code, pseudocode, or the like provided, but...
If the client interacts with the managed protocol objects by enqueuing actions, and that queue of actions is longer-lived than the managed protocol object, perhaps it would be better to separate that queue from the managed protocol object and just pass a reference to the queue-like object to the client.
When something is enqueued, have your queue-like object check out a properly configured protocol object and use it. I assume that while it is in use (meaning bytes are flying across the wire) it cannot be changed/reconfigured. After that one action has been completed, have the queue-like object check the protocol object back in to your manager. Upon check-in, if the manager has detected changes in the configuration, it can dispose of and recreate the protocol object; if not, the object is still sitting there ready for its next use. If the object is not checked out when a configuration change is detected, that recreation step can happen immediately.
The client is shielded from these details because it never accesses the protocol object directly. (If direct access were a requirement, you could still apply the check-in/check-out concept to the protocol objects to make sure they are up to date, but it is harder to enforce since the client can forget to check the object back in and check it out again.)
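A rough sketch of the check-out/check-in idea, with completely made-up type names (ProtocolManager, ProtocolObject), just to show the shape of it:

public class ProtocolObject { /* wraps the configured resource */ }

public class ProtocolManager
{
    private readonly object _gate = new object();
    private ProtocolObject _current = new ProtocolObject();
    private bool _checkedOut;
    private bool _configChanged;

    public void NotifyConfigurationChanged()
    {
        lock (_gate)
        {
            if (_checkedOut)
                _configChanged = true;           // rebuild at the next check-in
            else
                _current = new ProtocolObject(); // rebuild immediately
        }
    }

    public ProtocolObject CheckOut()
    {
        lock (_gate)
        {
            _checkedOut = true;
            return _current;
        }
    }

    public void CheckIn()
    {
        lock (_gate)
        {
            _checkedOut = false;
            if (_configChanged)
            {
                _current = new ProtocolObject();
                _configChanged = false;
            }
        }
    }
}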

Multithread operations in WCF Service contract implementation

I've recently seen a project that uses a background worker to perform some operations (getting data from other web services) and push the data to the client using events. The project is a WCF service consumed by an ASP.NET web site through another class library that plays the WCF client role and in turn raises events to the application. This whole multithreaded chain made me curious enough to examine it. I've seen that it uses a basicHttpBinding, and the only behavior applied to the service is UseSynchronizationContext=false, which I found out they added after an unexplained exception, which is normal :)
Now I'm asking about the default ConcurrencyMode for basicHttpBinding. Shouldn't they make it Reentrant, or is that the default behavior?
Will this scenario keep failing, given that they already get an unexplained "Object reference not set to an instance of an object" on the client when the WCF service is down?
I believe using multithreaded operations in a WCF service consumed by an ASP.NET project, which relies on IIS request handling, is bad, because the page could be sent to the client before the WCF service returns its data to the client class library and appends it to the page.
Can you discuss the above and explain your thoughts?
When you need such an asynchronous programming style, wouldn't it be better to notify WCF consumers after the long operation completes using callback contracts and built-in WCF facilities, rather than multithreaded operations?
I need clarification to correct the design, and some proof that this really is a bad service architecture, if it is, which I suspect!
Thank you.
It is not inherently bad architecture, but it sounds like it does create a number of possible pitfalls.
The WCF client library is leaving all the coordination up to the ASP.NET application. If the ASP.NET app isn't checking that a call to the WCF service has completed, then it risks using variables before they have been set with values from the service, and other such race conditions, unless it explicitly sets up some way of coordinating the initial call with the completion events.
My recommendation would be to rewrite the WCF client asynchronous methods to return Task objects, from the System.Threading.Tasks namespace (MSDN reference). In this way you can spin off the background processing calling the WCF service, and use the Result property of the Task to ensure the service has completed.
An example:
protected void Page_Load(object sender, EventArgs e)
{
    // Start the WCF call on a background thread via the Task Parallel Library.
    Task<string> t = Task<string>.Factory.StartNew(() =>
    {
        return MyWcfClientClass.StaticAsyncMethod(MyArguments);
    });

    /* other control initialization stuff here, while the task
       and WCF call continue processing in the background */

    /* Calling Result causes the thread to wait for the task to
       complete as necessary, to ensure we have our correct value */
    MyLabel1.Text = t.Result;
}

Async thread from WCF RESTful Service

We have created a WCF RESTful service for a WPF (UI) application. The UI sends a request to the WCF service, which then invokes a suitable method in the BLL, which in turn invokes a method in the DAL. All these layers have been separated using IoC/DI.
Now, for a new feature, we want that when a new object of a certain type is added to the database, it should go through 3 steps which would be performed in a separate thread.
That is, if the service sends a request to the BLL to add a new object OBJ to the database, the BLL should save the object to the database through the DAL and then start a new thread to perform some actions on the object without blocking the WCF request.
But whenever we try to do so by starting a new thread in the BLL, the application crashes. This is because the 'InRequestScope' database context object has been disposed, so the thread cannot update the database. Also, the WCF request does not end until the thread has completed, even though the return value has been provided and the BLL method has finished executing.
Any help would be much valued.
I have figured out the solution and explanation for this behavior. Turns out to be a rather silly one.
Since I was creating a thread from the BLL (with IsBackground = true;), the parent thread (originated by the service request) was waiting for this thread to end. And when both threads ended, the response was sent back to the client. The solution: well, use a BackgroundWorker instead. No rocket science, just common sense.
As for the disposing of the context: since the objects were InRequestScope and the request had ended, every time a repository required a unit of work (uow/context) it would generate a new context and dispose it as soon as the database request was complete. The solution was to create one uow instance, store it in a variable, pass it to every repository that needs it, and force all repositories to use that same uow instance rather than each creating a new one for itself.
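A rough sketch of what that ended up looking like; IUnitOfWork, uowFactory and the class/method names are placeholders, not the actual project types:

using System.ComponentModel;

public interface IUnitOfWork : System.IDisposable
{
    void Commit();
}

public class OrderBll
{
    public void AddAndPostProcess(object newObject, System.Func<IUnitOfWork> uowFactory)
    {
        // 1. Save synchronously so the WCF request can return right away.
        using (IUnitOfWork uow = uowFactory())
        {
            // ...repositories here all share this same uow instance...
            uow.Commit();
        }

        // 2. Run the follow-up steps on a BackgroundWorker so the request
        //    thread is not kept alive, and give that work its OWN unit of
        //    work instead of the (already disposed) request-scoped one.
        var worker = new BackgroundWorker();
        worker.DoWork += (sender, args) =>
        {
            using (IUnitOfWork uow = uowFactory())
            {
                // ...the three post-processing steps share this uow...
                uow.Commit();
            }
        };
        worker.RunWorkerAsync();
    }
}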
This seems more of a client-side concern than a service-side concern. Why not have the client make asynchronous requests to the WCF service, since this automatically provides multi-threaded access to the service?
The built-in System.Net.WebClient (since you're accessing a webHttpBinding or WCF Web API endpoint) can be used asynchronously. This blog post gives a quick overview of how it is done. Although this MSDN article seems to apply to file I/O, about three quarters of the way down there is a detailed explanation of coding asynchronous WebClient usage.
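For what it's worth, the callback-based pattern looks roughly like this (the endpoint URL is obviously made up):

using System;
using System.Net;

class AsyncRestCall
{
    static void Main()
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (sender, e) =>
        {
            if (e.Error == null)
                Console.WriteLine(e.Result);        // handle the JSON/XML payload
            else
                Console.WriteLine(e.Error.Message); // service down, timeout, etc.
        };
        client.DownloadStringAsync(new Uri("http://localhost:8000/MyService/GetData"));

        Console.ReadLine(); // keep the console alive while the call completes
    }
}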

ColdFusion singleton object pool

In our ColdFusion application we have stateless model objects.
All the data I want I can get with one method call (it calls others internally without saving any state).
Methods usually ask the database for the data. All methods are read only, so I don't have to worry about thread safety (please correct me if I'm wrong).
So there is no need to instantiate objects at all. I could call them statically, but ColdFusion doesn't have static methods - calling the method would mean instantiating the object first.
To improve performance I have created singletons for every Model object.
So far it works great - each object is created once and then accessed as needed.
Now my worry is that all requests for data would go through only 1 model object.
Should I worry? Say my object has a method getOfferData() and it's time-consuming.
What if a couple of clients want to access it?
Will the second one wait for the first request to finish, or is it executed in a separate thread?
It's the same object after all.
Should I implement some kind of object pool for this?
The singleton pattern you are using won't cause the problem you are describing. If getOfferData() is still running when another request calls that function, the second call will not queue unless you do one of the following:
Use cflock to grant an exclusive lock
Get queuing when connecting to your database because of locking / transactions
Have too many things running so that you use up all the concurrent threads available to ColdFusion
So the way you are going about it is fine.
Hope that helps.

Releasing resources in an Application?

I am working on an application in which I need a connection to a server. I also need to access this connection from different activities.
To achieve this I was going to override the Application class and create the connection there. This would allow for easy interaction from every Activity as I could simply call getApplicationContext().getConnection() to get access to my own connection class.
The problem with this approach is that the Application class does not have any onDestroy() method or similar in which I can release the connection and any related resources. I do not think that leaving it idle until onLowMemory() is called is the best approach here.
I cannot add a custom release() method, as I don't know when to call it (there are two Activities that can be the last one to be active, and depending on the user's actions neither knows whether the other will be started when the active one is shut down).
Is there a good solution to this? Should I just ignore releasing resources (before onLowMemory()), or is there a better way to achieve what I want (possibly a Service, though since there will be several calls to an underlying class it might get overly problematic with a Service)?
Just use the Singleton design pattern. Making your Connection class a singleton gives you a way to access the connection from different activities, and don't forget to handle multithreading.
