Long-running WCF service in shared hosting - multithreading

I have a WCF service (on top of IIS) that will be hosted in a shared hosting environment, so I don't have access to Windows services or permissions to install anything.
This WCF service performs a long-running computation (a spatial interpolation), so my question is which architecture to use in order not to hurt performance; in particular, I don't want to grab threads from the ASP.NET thread pool for such a long task.
I know that a possible solution would be a Windows service for the multi-threaded computation and MSMQ for communicating between the WCF service and the Windows service, but as I said, I don't have the ability to install a service.
Can anybody suggest a solution?
Thanks in advance.

You could simply use an asynchronous/one-way method on your WCF service and call that.
We use a similar method to upload data and kick off the import process. The client then polls using another WCF method; when the initial process has finished, we update the relevant data to indicate that and return it to the client in the poll response.
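A minimal sketch of that pattern, assuming a contract shaped for the asker's interpolation scenario (the interface, method and enum names are hypothetical, not from the original answer):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IInterpolationService
{
    // One-way: WCF acknowledges the message and returns to the caller
    // immediately, without waiting for the computation to finish.
    [OperationContract(IsOneWay = true)]
    void StartInterpolation(string jobId);

    // The client polls this until the job reports completion.
    [OperationContract]
    JobStatus GetStatus(string jobId);
}

public enum JobStatus { Pending, Running, Completed, Failed }
```

Note that a one-way operation still occupies a service-side thread while it executes, so for a truly long computation `StartInterpolation` should still hand the actual work to a background thread and return quickly.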

Related

WCF service, launch a task every X hours

I have a WCF service which needs to perform some actions on the database every hour and also needs to generate a file with some information.
So which is better: doing it with a timer or with a thread?
The problem with a thread would be constantly iterating (with a small delay) in a loop, checking whether the time has elapsed and, if so, performing the action.
Any ideas on how to achieve this scenario as efficiently as possible?
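For what it's worth, a `System.Threading.Timer` avoids the sleep-and-check loop entirely: the thread pool invokes your callback on schedule. A minimal sketch (the interval and the console host are illustrative; in a Windows service the timer would live between `OnStart` and `OnStop`):

```csharp
using System;
using System.Threading;

class HourlyJob
{
    // Keep a reference so the timer isn't garbage-collected.
    private static Timer _timer;

    static void Main()
    {
        // First callback after one hour, then every hour; no polling loop.
        _timer = new Timer(DoWork, null,
                           TimeSpan.FromHours(1), TimeSpan.FromHours(1));
        Console.ReadLine(); // keep the host process alive
    }

    private static void DoWork(object state)
    {
        // database actions and file generation go here
    }
}
```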
Sounds like you need a long-running service.
WCF by itself is not a good solution for that.
You should look at Windows Services, or WCF + WF hosted in AppFabric.
One of the reasons: WCF does not support autostart, so you would have to start it again after every pool recycle (if you host in IIS or any other hosting process).

Lots of database access makes a WCF service unresponsive

I have a WCF duplex service (session instancing with a net.tcp binding) that can potentially do a lot of database access. When it goes into its high-db-access routine, it becomes unresponsive to any WCF service call.
I've checked the CPU utilization and, while high, it is not 100%. This is on a 4 core box.
It seems that no actual requests are getting handled by WCF. Not only are no service operations called, but I have some security custom behaviors further down the WCF stack that are not getting invoked either. This makes me think that somehow WCF is getting starved of threads and so is not able to allocate new threads for access.
I don't think this is a throttling problem, since I can make this happen when there are a few current sessions with the service (less than 5).
The DB operations in question are thousands of single-row inserts. This is done with a single Entity Framework context that is disposed and reconstructed every 500 calls to reduce the memory accumulated by Entity Framework's internal caches. The target database is SQL Express 2008 R2.
There are a fair number of worker threads in this service. All are instantiated by the Task Parallel Library: some as regular short-lived tasks (which use the CLR thread pool), others as long-running tasks (which get their own CLR thread), but none of these are I/O threads (unless the CLR is doing some magic I don't know about). The DB writes happen on a long-running Task.
Are there any WCF or debugging diagnostic tools that can report or visualize the current state of WCF threads, worker threads, I/O threads and WCF throttling?
Environment Summary
WCF service is hosted inside a Windows service
WCF service uses a net.tcp duplex binding
WCF service is session based (required by duplex contract)
There is only a single instance of the Windows service, but multiple instances of the WCF service object are instantiated as required by WCF.
WCF service operations quickly delegate work to background threads that are persistent inside the Windows service. All WCF operations quickly return and then trickle additional results generated by the background threads down the WCF duplex callback channel.
All DB access is done on background worker threads. No WCF threads are used for long-running work.
WCF service has a single custom security behavior. There are no other behaviors (such as reliable messaging, etc)
If you're using session instancing, then each client connection has its own service instance. If that instance is busy accessing the database, then I'd say it's blocking and won't be able to handle any other calls from its client until it's done. You might be able to change the way it works to be asynchronous: call a start method that kicks off the database activity on a worker thread and returns immediately. Then either have a progress method to check the status or, since you're using duplex, have the host signal the client when it's done.
Or is the WCF service not accepting new clients at all?
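A rough sketch of that start-method shape, assuming the duplex contract described above (all names here are hypothetical):

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract(CallbackContract = typeof(IImportCallback))]
public interface IImportService
{
    [OperationContract(IsOneWay = true)]
    void StartImport(string batchId);
}

public interface IImportCallback
{
    [OperationContract(IsOneWay = true)]
    void ImportCompleted(string batchId);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class ImportService : IImportService
{
    public void StartImport(string batchId)
    {
        var callback = OperationContext.Current
                                       .GetCallbackChannel<IImportCallback>();
        // Return immediately; do the heavy inserts on a dedicated thread,
        // then signal the client over the duplex callback channel.
        Task.Factory.StartNew(() =>
        {
            // ... thousands of single-row inserts happen here ...
            callback.ImportCompleted(batchId);
        }, TaskCreationOptions.LongRunning);
    }
}
```

Because the operation returns at once, the session's WCF dispatcher thread is free to accept further calls from the client while the work runs.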

WF4 Affinity on Windows Azure and other NLB environments

I'm using Windows Azure and WF4, and my workflow service is hosted in a web role (with N instances). My task now is to find out how
to implement affinity, in a way that lets me send messages to the right workflow instance. To explain this scenario: my workflow (attached) starts with a "StartWorkflow" receive activity, creates 3 "Person" entries and, in a parallel for-each, waits for the confirmation of these 3 people (the "ConfirmCreation" receive activity).
I then started to research how affinity is handled in other NLB environments (mainly looking for information about how this works on Windows Server AppFabric), but I didn't find a precise answer. So how is it done in other NLB environments?
My next task is to find out how I could implement a system to handle this affinity on Windows Azure, and how much this solution would cost (in price, time and amount of work), to see if it's viable or if it's better to work with only one web-role instance while we wait for the WF4 host for the Azure AppFabric. The only way I found was to persist the workflow instance. Are there other ways of doing this?
My third, but not last, task is to find out how WF4 handles multiple messages received at the same time. In my scenario, this means how it would behave if the 3 people confirmed at the same time and the confirmation messages were also received at the same time. Since the most logical answer to this problem seems to be a queue, I started looking for information about queues in WF4 and found people talking about MSMQ. But what is WF4's native message-handling system? Is this handler really a queue, or is it another system? How is this concurrency handled?
You shouldn't need any affinity. In fact, that's kinda the whole point of durable workflows: whilst your workflow is waiting for this confirmation, it should be persisted and unloaded from any one server.
As far as persistence goes for Windows Azure you would either need to hack the standard SQL persistence scripts so that they work on SQL Azure or write your own InstanceStore implementation that sits on top of Azure Storage. We have done the latter for a workflow we're running in Azure, but I'm unable to share the code. On a scale of 1 to 10 for effort, I'd rank it around an 8.
As far as multiple messages go, the messages will be received and delivered to the workflow instance one message at a time. Now, it's possible that every one of those messages goes to the same server, or maybe each one goes to a different server. Either way, the workflow runtime will attempt to load the workflow from the instance store, see that it is currently locked, and block/retry until the workflow becomes available to process the next message. So you don't have to worry about concurrent access to the same workflow instance, as long as you configure everything correctly and the InstanceStore implementation is doing its job.
Here's a few other suggestions:
Make sure you use the PersistBeforeSend option on your SendReply activities
Configure the following workflow service options
<workflowIdle timeToUnload="00:00:00" />
<sqlWorkflowInstanceStore ... instanceLockedExceptionAction="AggressiveRetry" />
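For context, those two settings sit inside a service behavior in web.config, roughly like this (the behavior and connection-string names are hypothetical):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="workflowService">
        <!-- Unload the workflow as soon as it goes idle -->
        <workflowIdle timeToUnload="00:00:00" />
        <!-- Retry aggressively when another node holds the instance lock -->
        <sqlWorkflowInstanceStore connectionStringName="InstanceStore"
                                  instanceLockedExceptionAction="AggressiveRetry" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```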
Using the out-of-the-box SQL instance store with SQL Azure is a bit of a problem at the moment with the Azure 1.3 SDK: each deployment, even one with zero code changes, results in a new service deployment, meaning that already-persisted workflows can't continue. That is a bug that will be fixed, but it's a PITA for now.
As Drew said, your workflow instance should just move from server to server as needed; there's no need to pin it to a specific machine. And even if you could, that would hurt scalability and reliability, so it's something to be avoided.
Sending messages through MSMQ using the WCF NetMsmqBinding works just fine. Internally, WF uses a completely different mechanism called bookmarks, which allows a workflow to stop and resume. Each Receive activity, as well as others like Delay, will create a bookmark and wait for it to be resumed. You can only resume existing bookmarks. Even resuming a bookmark is not a direct action; it is put into an internal queue (not MSMQ) by the workflow scheduler and executed through a SynchronizationContext. You get no control over the scheduler, but you can replace the SynchronizationContext when using the WorkflowApplication and so get some control over how and where activities are executed.
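A minimal self-hosted illustration of the bookmark mechanism (the activity and bookmark names are made up; a Receive activity creates its bookmark in essentially the same way under the covers):

```csharp
using System;
using System.Activities;

// A custom activity that goes idle at a bookmark.
class WaitForConfirmation : NativeActivity
{
    protected override bool CanInduceIdle => true;

    protected override void Execute(NativeActivityContext context)
    {
        // The workflow idles here until the bookmark is resumed.
        context.CreateBookmark("ConfirmCreation",
            (ctx, bookmark, value) =>
                Console.WriteLine("Resumed with: " + value));
    }
}

class Program
{
    static void Main()
    {
        var app = new WorkflowApplication(new WaitForConfirmation());
        app.Run();

        // This does not execute the continuation inline; it is queued to
        // the workflow scheduler and run via its SynchronizationContext.
        app.ResumeBookmark("ConfirmCreation", "person-1");
        Console.ReadLine();
    }
}
```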

worker process in IIS shared hosting

Can anyone tell me whether there is a way to run a process in an IIS shared hosting environment?
Suppose the scenario is "I want to send emails to a list of email IDs every 3 hours"; the challenge here is that the process should not be invoked by an HTTP request. It should be automatic.
I think we can do this with IIS worker processes.
Also, this will all be happening on a shared server (like GoDaddy) with IIS7 and .NET 3.5.
Can anyone please point me in a direction?
Thanks in advance.
This question was asked ages ago, but for what it's worth - I ended up using Hangfire to handle my long running tasks in my ASP app.
You can easily configure it for shared hosting and, later, for a dedicated server if you need to scale up or out.
It's super easy to use, just follow the doc step by step.
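For the every-3-hours email scenario above, a Hangfire recurring job looks roughly like this (the `EmailSender` class, connection-string name and cron expression are illustrative, not from the answer; the calls go in application startup):

```csharp
using Hangfire;

// Persist job state in the database so schedules survive app-pool recycles.
GlobalConfiguration.Configuration.UseSqlServerStorage("HangfireDb");

// Run EmailSender.SendBatch at minute 0 of every 3rd hour.
RecurringJob.AddOrUpdate("send-emails",
    () => EmailSender.SendBatch(),
    "0 */3 * * *");
```

On shared hosting you may still need to keep the application from being idled out by IIS; the Hangfire documentation covers making an ASP.NET application "always running".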
Cheers,
Andrew
You should write and run this as a Windows service, assuming you have access to install a service.
You could run a background worker thread from your ASP.NET code-behind, but the problem is that IIS will terminate the thread after it has been idle for a relatively short period of time. I tried going this route, trying to geocode a list of addresses (800+, from a SharePoint list), and IIS kept timing out my thread and stopping it. We ended up adding events to the SharePoint list that would geocode when an item was changed or added to the list.
One other option you could look into is the Windows Workflow Foundation API; it was designed for this kind of thing.

How to create a job in IIS capable of running an extended process

I have a web service app with one web service call that could take anything from 1 hour to 14 hours, depending on the data that needs to be processed and the time of the month.
Is there any way to create a job in IIS capable of running this extended process? I also need job management and reporting to be able to see whether jobs are running, so that new jobs aren't created on top of others.
I will be working with IIS6 primarily. And would like to use C# code.
Right now I am using a web service call, but I don't like the idea of having web services run for such a long time, and due to the nature of the web service, I can't split the functionality any more.
IIS jobs would be awesome if they are available. Any ideas?
If I were you, I would make a command-line app that is kicked off by the web service. Running a command-line app is pretty straightforward, basically:
// Launch the worker executable as a separate process, so the long job
// runs outside the web service request.
Process p = new Process();
p.StartInfo.UseShellExecute = false; // start the exe directly, without a shell
p.StartInfo.FileName = "appname.exe";
p.Start();
There are a limited number of worker processes per machine; they aren't really meant for long-running jobs.
One possibility, with a bit of setup cost, is to have your processing run as a Windows service that listens to a message queue (MSMQ or similar), and have your web service simply post the request onto the message queue to be handled by the processing service.
Monitoring jobs is more difficult; your web service would need to have a way of querying your processing service to find out its state. This is an IPC (interprocess communication) problem, which has many different solutions with various tradeoffs that depend on your environment and circumstances.
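A bare-bones sketch of the queue handoff described above, using System.Messaging (the queue path and class are hypothetical; the two methods run in different processes):

```csharp
using System;
using System.Messaging; // reference the System.Messaging assembly (MSMQ)

static class JobQueue
{
    const string Path = @".\private$\jobs"; // queue name is illustrative

    // Called by the web service: just posts the request and returns.
    public static void Enqueue(string jobId)
    {
        using (var queue = new MessageQueue(Path))
            queue.Send(jobId);
    }

    // Runs inside the Windows service: drains the queue, does the long work.
    public static void ProcessLoop()
    {
        using (var queue = new MessageQueue(Path))
        {
            while (true)
            {
                Message msg = queue.Receive(); // blocks until a message arrives
                msg.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                var jobId = (string)msg.Body;
                Console.WriteLine("Processing job " + jobId);
                // the 1-to-14-hour processing happens here, outside IIS
            }
        }
    }
}
```

Because MSMQ persists the messages, a queued job survives restarts of either process, which also gives you a natural place to record job state for reporting.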
That said, for simple cases, Matt's solution is probably sufficient.
