Is it possible to gracefully shut down a Self-Hosted ServiceStack service?

In a ServiceStack self-hosted service, is it possible to gracefully shut down the service when pending requests exist?
Should I use AppHost.Stop()? (The AppHost is derived from AppHostHttpListenerBase.)

I don't think there is a built-in mechanism for this, though it would be nice to see one. I use my own simplistic graceful shutdown method.
Essentially I have a static bool IsShuttingDown flag that is checked before each request is started, at the first possible opportunity in the service pipeline (RawHttpHandlers).
If this flag is set to true it means I don't want the service to handle any more requests, and it will instead send HTTP status 503 Service Unavailable to the client.
My graceful shutdown method simply sets the IsShuttingDown flag and starts a 60-second timeout timer to give any currently processing requests time to complete, after which the service is stopped by calling AppHost.Stop(). (See the end of the answer for how to do it without a timer.)
My code is for ServiceStack v3; you may have to modify it slightly to get it to work with v4 if you are using that version.
In your AppHost:
public static bool IsShuttingDown = false;
public override void Configure(Funq.Container container)
{
// Other configuration options ...
// Handle the graceful shutdown response
var gracefulShutdownHandler = new CustomActionHandler((httpReq, httpRes) => {
httpRes.StatusCode = 503;
httpRes.StatusDescription = "Unavailable";
httpRes.Write("Service Unavailable");
httpRes.EndRequest();
});
SetConfig(new EndpointHostConfig {
// Other EndPoint configuration options ...
RawHttpHandlers = { httpReq => IsShuttingDown ? gracefulShutdownHandler : null }
});
}
The CustomActionHandler is just copied from here; it is responsible for handling the request. (A custom action handler is already included in v4, so this class wouldn't be needed there.)
public class CustomActionHandler : IServiceStackHttpHandler, IHttpHandler
{
public Action<IHttpRequest, IHttpResponse> Action { get; set; }
public CustomActionHandler(Action<IHttpRequest, IHttpResponse> action)
{
if (action == null)
throw new Exception("Action was not supplied to ActionHandler");
Action = action;
}
public void ProcessRequest(IHttpRequest httpReq, IHttpResponse httpRes, string operationName)
{
Action(httpReq, httpRes);
}
public void ProcessRequest(HttpContext context)
{
ProcessRequest(context.Request.ToRequest(GetType().Name),
context.Response.ToResponse(),
GetType().Name);
}
public bool IsReusable
{
get { return false; }
}
}
I appreciate that using a timer doesn't guarantee that all requests will have finished within the 60 seconds, but it works for my needs, where most requests are handled in far less time.
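For completeness, here is a rough sketch of what such a timer-based shutdown method could look like in your AppHost; the Shutdown method name and the use of System.Threading.Timer are my own assumptions, not part of the original answer:
private static System.Threading.Timer _shutdownTimer;
public void Shutdown()
{
    // Stop accepting new requests; the RawHttpHandler above now returns 503.
    IsShuttingDown = true;
    // Give in-flight requests up to 60 seconds to complete, then stop the host.
    // Keep the timer in a field so it isn't garbage collected before it fires.
    _shutdownTimer = new System.Threading.Timer(
        _ => this.Stop(), null, 60000, System.Threading.Timeout.Infinite);
}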
To avoid using a timer (immediate shutdown when all connections closed):
Because there is no access to the underlying connection pool, you would have to keep track of what connections are active.
For this method I would use the PreExecuteServiceFilter and PostExecuteServiceFilter to increment and decrement an active-connection counter, using Interlocked.Increment and Interlocked.Decrement to keep the count thread-safe. I haven't tested this, and there is probably a better way.
In your AppHost:
public static int ConnectionCount;
// Configure Method
// As above but with additional count tracking.
ConnectionCount = 0;
SetConfig(new EndpointHostConfig {
// Other EndPoint configuration options ...
RawHttpHandlers = { httpReq => IsShuttingDown ? gracefulShutdownHandler : null },
// Track active connection count
PreExecuteServiceFilter = () => Interlocked.Increment(ref ConnectionCount),
PostExecuteServiceFilter = (obj, req, res) => {
Interlocked.Decrement(ref ConnectionCount);
// Check if shutting down, and if there are no more connections, stop
if(IsShuttingDown && ConnectionCount==0){
res.EndRequest(); // Ensure the last request gets its data before the service stops.
this.Stop();
}
},
});
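And a sketch (my own addition, same caveats as above) of the method in your AppHost that would initiate the shutdown in this timer-less variant:
public void BeginGracefulShutdown()
{
    IsShuttingDown = true;
    // If nothing is in flight, stop immediately; otherwise the
    // PostExecuteServiceFilter above calls Stop() when the last request completes.
    if (Interlocked.CompareExchange(ref ConnectionCount, 0, 0) == 0)
        this.Stop();
}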
Hope some of this helps anyway.

Related

Azure Cloud Service: RoleEnvironment.StatusCheck event not firing

I am maintaining a legacy Cloud Services application hosted on Azure targeting .NET 4.6.1. Inside the Application_Start method of the Global.asax on the Web Role we are registering an event handler for RoleEnvironment.StatusCheck; however, our logs show that this event callback is never being called or triggered.
According to this blog: https://convective.wordpress.com/2010/03/18/service-runtime-in-windows-azure/ we were expecting this event to be triggered every 15 seconds, and we believe this was happening but it has since stopped. We suspect it stopped working around the time we installed some new DLLs into the solution (some of these DLLs include: Microsoft.Rest.ClientRuntime.dll, Microsoft.Azure.Storage.Common.dll, Microsoft.Azure.Storage.Blob.dll, Microsoft.Azure.KeyVault.dll).
We've tried RDP-ing onto the VM to check the event logs but nothing obvious is there. Any suggestions on where we may be able to search for clues?
It seems your event handler is not registered. Try the code below, which takes a different approach:
public class WorkerRole : RoleEntryPoint
{
public override bool OnStart()
{
RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
return base.OnStart();
}
// Use the busy object to indicate that the status of the role instance must be Busy
private volatile bool busy = true;
private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
if (this.busy)
{
// Sets the status of the role instance to Busy for a short interval.
// If you want the role instance to remain busy, add code to
// continue to call the SetBusy method
e.SetBusy();
}
}
public override void Run()
{
Trace.TraceInformation("Worker entry point called", "Information");
while (true)
{
Thread.Sleep(10000);
}
}
public override void OnStop()
{
base.OnStop();
}
}
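If the role in question is a Web Role rather than a Worker Role, the same registration can also be tried in the Web Role's RoleEntryPoint (WebRole.cs) instead of Global.asax, since with full IIS the RoleEntryPoint runs in the role host process rather than in the IIS worker process. A minimal sketch, where the logging line is just illustrative:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.StatusCheck += (sender, e) =>
        {
            // Illustrative only: log so you can confirm the event is firing.
            Trace.TraceInformation("StatusCheck fired at {0}", DateTime.UtcNow);
        };
        return base.OnStart();
    }
}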

Passing user and auditing information in calls to Reliable Services in Service Fabric transport

How can I pass along auditing information between clients and services in an easy way without having to add that information as arguments for all service methods? Can I use message headers to set this data for a call?
Is there a way to allow service to pass that along downstream also, i.e., if ServiceA calls ServiceB that calls ServiceC, could the same auditing information be send to first A, then in A's call to B and then in B's call to C?
There is actually a concept of headers that are passed between client and service if you are using fabric transport for remoting. If you are using Http transport then you have headers there just as you would with any http request.
Note: the proposal below is not the simplest solution, but once it is in place it solves the issue and is easy to use. If you are looking for something easy across the whole code base, this might not be the way to go; in that case I suggest you simply add a common audit-info parameter to all your service methods. The big caveat there is of course that some developer may forget to add it, or it may not be set properly when calling downstream services. It's all about trade-offs, as always in code :).
Down the rabbit hole
In fabric transport there are two classes involved in the communication: an instance of an IServiceRemotingClient on the client side, and an instance of IServiceRemotingListener on the service side. In each request from the client the message body and ServiceRemotingMessageHeaders are sent. Out of the box these headers include information about which interface (i.e. which service) and which method are being called (and that's also how the underlying receiver knows how to unpack the byte array that is the body). For calls to Actors, which go through the ActorService, additional Actor information is also included in those headers.
The tricky part is hooking into that exchange and actually setting and then reading additional headers. Please bear with me here; there are a number of classes involved behind the curtains that we need to understand.
The service side
When you setup the IServiceRemotingListener for your service (example for a Stateless service) you usually use a convenience extension method, like so:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
yield return new ServiceInstanceListener(context =>
this.CreateServiceRemotingListener(this.Context));
}
(Another way to do it would be to implement your own listener, but that's not really what we want to do here; we just want to add things on top of the existing infrastructure. See below for that approach.)
This is where we can provide our own listener instead, similar to what that extension method does behind the curtains. Let's first look at what that extension method does. It looks for a specific assembly-level attribute on your service project: ServiceRemotingProviderAttribute. That one is abstract, but the one you can use, and of which you will get a default instance if none is provided, is FabricTransportServiceRemotingProviderAttribute. Set it in AssemblyInfo.cs (or any other file; it's an assembly attribute):
[assembly: FabricTransportServiceRemotingProvider()]
This attribute has two interesting overridable methods:
public override IServiceRemotingListener CreateServiceRemotingListener(
ServiceContext serviceContext, IService serviceImplementation)
public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(
IServiceRemotingCallbackClient callbackClient)
These two methods are responsible for creating the listener and the client factory. That means the attribute is also inspected by the client side of the transaction; that is why it is an assembly-level attribute on the service assembly, so the client side can pick it up together with the IService-derived interface of the service we want to communicate with.
The default CreateServiceRemotingListener ends up creating an instance of FabricTransportServiceRemotingListener; however, in that implementation we cannot set our own specific IServiceRemotingMessageHandler. If you create your own subclass of FabricTransportServiceRemotingProviderAttribute and override that method, you can make it create an instance of FabricTransportServiceRemotingListener that takes a dispatcher in its constructor:
public class AuditableFabricTransportServiceRemotingProviderAttribute :
FabricTransportServiceRemotingProviderAttribute
{
public override IServiceRemotingListener CreateServiceRemotingListener(
ServiceContext serviceContext, IService serviceImplementation)
{
var messageHandler = new AuditableServiceRemotingDispatcher(
serviceContext, serviceImplementation);
return (IServiceRemotingListener)new FabricTransportServiceRemotingListener(
serviceContext: serviceContext,
messageHandler: messageHandler);
}
}
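With this subclass in place, the assembly-level registration shown earlier points at the custom attribute instead (same pattern as before):
[assembly: AuditableFabricTransportServiceRemotingProvider()]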
The AuditableServiceRemotingDispatcher is where the magic happens. It is our own ServiceRemotingDispatcher subclass. Override RequestResponseAsync (ignore HandleOneWay; it is not supported by service remoting and throws a NotImplementedException if called), like this:
public class AuditableServiceRemotingDispatcher : ServiceRemotingDispatcher
{
public AuditableServiceRemotingDispatcher(ServiceContext serviceContext, IService service) :
base(serviceContext, service) { }
public override async Task<byte[]> RequestResponseAsync(
IServiceRemotingRequestContext requestContext,
ServiceRemotingMessageHeaders messageHeaders,
byte[] requestBodyBytes)
{
byte[] userHeader = null;
if (messageHeaders.TryGetHeaderValue("user-header", out userHeader))
{
// Deserialize from byte[] and handle the header
}
else
{
// Throw exception?
}
byte[] result = null;
result = await base.RequestResponseAsync(requestContext, messageHeaders, requestBodyBytes);
return result;
}
}
Another, easier but less flexible, way would be to create an instance of FabricTransportServiceRemotingListener with our custom dispatcher directly in the service:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
yield return new ServiceInstanceListener(context =>
new FabricTransportServiceRemotingListener(this.Context, new AuditableServiceRemotingDispatcher(context, this)));
}
Why is this less flexible? Because using the attribute supports the client side as well, as we will see below.
The client side
OK, so now we can read custom headers when receiving messages; how about setting them? Let's look at the other method of that attribute:
public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(IServiceRemotingCallbackClient callbackClient)
{
return (IServiceRemotingClientFactory)new FabricTransportServiceRemotingClientFactory(
callbackClient: callbackClient,
servicePartitionResolver: (IServicePartitionResolver)null,
traceId: (string)null);
}
Here we cannot just inject a specific handler as we did for the service; we have to supply our own custom factory. In order not to have to reimplement the particulars of FabricTransportServiceRemotingClientFactory, I simply encapsulate it in my own implementation of IServiceRemotingClientFactory:
public class AuditedFabricTransportServiceRemotingClientFactory : IServiceRemotingClientFactory, ICommunicationClientFactory<IServiceRemotingClient>
{
private readonly ICommunicationClientFactory<IServiceRemotingClient> _innerClientFactory;
public AuditedFabricTransportServiceRemotingClientFactory(ICommunicationClientFactory<IServiceRemotingClient> innerClientFactory)
{
_innerClientFactory = innerClientFactory;
_innerClientFactory.ClientConnected += OnClientConnected;
_innerClientFactory.ClientDisconnected += OnClientDisconnected;
}
private void OnClientConnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
{
EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientConnected = this.ClientConnected;
if (clientConnected == null) return;
clientConnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
{
Client = e.Client
});
}
private void OnClientDisconnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
{
EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientDisconnected = this.ClientDisconnected;
if (clientDisconnected == null) return;
clientDisconnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
{
Client = e.Client
});
}
public async Task<IServiceRemotingClient> GetClientAsync(
Uri serviceUri,
ServicePartitionKey partitionKey,
TargetReplicaSelector targetReplicaSelector,
string listenerName,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
var client = await _innerClientFactory.GetClientAsync(
serviceUri,
partitionKey,
targetReplicaSelector,
listenerName,
retrySettings,
cancellationToken);
return new AuditedFabricTransportServiceRemotingClient(client);
}
public async Task<IServiceRemotingClient> GetClientAsync(
ResolvedServicePartition previousRsp,
TargetReplicaSelector targetReplicaSelector,
string listenerName,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
var client = await _innerClientFactory.GetClientAsync(
previousRsp,
targetReplicaSelector,
listenerName,
retrySettings,
cancellationToken);
return new AuditedFabricTransportServiceRemotingClient(client);
}
public Task<OperationRetryControl> ReportOperationExceptionAsync(
IServiceRemotingClient client,
ExceptionInformation exceptionInformation,
OperationRetrySettings retrySettings,
CancellationToken cancellationToken)
{
return _innerClientFactory.ReportOperationExceptionAsync(
client,
exceptionInformation,
retrySettings,
cancellationToken);
}
public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientConnected;
public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientDisconnected;
}
This implementation simply delegates any heavy lifting to the underlying factory, while returning its own auditable client that similarly encapsulates an IServiceRemotingClient:
public class AuditedFabricTransportServiceRemotingClient : IServiceRemotingClient, ICommunicationClient
{
private readonly IServiceRemotingClient _innerClient;
public AuditedFabricTransportServiceRemotingClient(IServiceRemotingClient innerClient)
{
_innerClient = innerClient;
}
~AuditedFabricTransportServiceRemotingClient()
{
if (this._innerClient == null) return;
var disposable = this._innerClient as IDisposable;
disposable?.Dispose();
}
Task<byte[]> IServiceRemotingClient.RequestResponseAsync(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
{
messageHeaders.SetUser(ServiceRequestContext.Current.User);
messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
return this._innerClient.RequestResponseAsync(messageHeaders, requestBody);
}
void IServiceRemotingClient.SendOneWay(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
{
messageHeaders.SetUser(ServiceRequestContext.Current.User);
messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
this._innerClient.SendOneWay(messageHeaders, requestBody);
}
public ResolvedServicePartition ResolvedServicePartition
{
get { return this._innerClient.ResolvedServicePartition; }
set { this._innerClient.ResolvedServicePartition = value; }
}
public string ListenerName
{
get { return this._innerClient.ListenerName; }
set { this._innerClient.ListenerName = value; }
}
public ResolvedServiceEndpoint Endpoint
{
get { return this._innerClient.Endpoint; }
set { this._innerClient.Endpoint = value; }
}
}
Now, this is where we actually (and finally) set the audit information that we want to pass along to the service.
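To wire the audited factory in, the custom attribute shown earlier can override CreateServiceRemotingClientFactory as well, wrapping the default factory. A sketch following the same pattern as the default implementation above (not part of the original answer):
public class AuditableFabricTransportServiceRemotingProviderAttribute :
    FabricTransportServiceRemotingProviderAttribute
{
    // CreateServiceRemotingListener override as shown earlier ...
    public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(
        IServiceRemotingCallbackClient callbackClient)
    {
        // Wrap the default factory so every client it hands out is audited.
        return new AuditedFabricTransportServiceRemotingClientFactory(
            new FabricTransportServiceRemotingClientFactory(
                callbackClient: callbackClient,
                servicePartitionResolver: null,
                traceId: null));
    }
}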
Call chains and service request context
One final piece of the puzzle is the ServiceRequestContext, a custom class that gives us an ambient context for a service request call. This is relevant because it provides an easy way to propagate context information, like the user or a correlation id (or any other header information we want to pass between client and service), through a chain of calls. The implementation of ServiceRequestContext looks like this:
public sealed class ServiceRequestContext
{
private static readonly string ContextKey = Guid.NewGuid().ToString();
public ServiceRequestContext(Guid correlationId, string user)
{
this.CorrelationId = correlationId;
this.User = user;
}
public Guid CorrelationId { get; private set; }
public string User { get; private set; }
public static ServiceRequestContext Current
{
get { return (ServiceRequestContext)CallContext.LogicalGetData(ContextKey); }
internal set
{
if (value == null)
{
CallContext.FreeNamedDataSlot(ContextKey);
}
else
{
CallContext.LogicalSetData(ContextKey, value);
}
}
}
public static Task RunInRequestContext(Func<Task> action, Guid correlationId, string user)
{
Task<Task> task = null;
task = new Task<Task>(async () =>
{
Debug.Assert(ServiceRequestContext.Current == null);
ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
try
{
await action();
}
finally
{
ServiceRequestContext.Current = null;
}
});
task.Start();
return task.Unwrap();
}
public static Task<TResult> RunInRequestContext<TResult>(Func<Task<TResult>> action, Guid correlationId, string user)
{
Task<Task<TResult>> task = null;
task = new Task<Task<TResult>>(async () =>
{
Debug.Assert(ServiceRequestContext.Current == null);
ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
try
{
return await action();
}
finally
{
ServiceRequestContext.Current = null;
}
});
task.Start();
return task.Unwrap<TResult>();
}
}
This last part was much influenced by an SO answer by Stephen Cleary. It gives us an easy way to carry the ambient information down a hierarchy of calls, whether they are synchronous or asynchronous over Tasks. Now, with this in place we have a way of setting that information in the dispatcher on the service side as well:
public override Task<byte[]> RequestResponseAsync(
IServiceRemotingRequestContext requestContext,
ServiceRemotingMessageHeaders messageHeaders,
byte[] requestBody)
{
var user = messageHeaders.GetUser();
var correlationId = messageHeaders.GetCorrelationId();
return ServiceRequestContext.RunInRequestContext(async () =>
await base.RequestResponseAsync(
requestContext,
messageHeaders,
requestBody),
correlationId, user);
}
(GetUser() and GetCorrelationId() are just helper methods that get and unpack the headers set by the client.)
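For reference, here is a minimal sketch of what those helpers could look like; the header names and the encodings are my own assumptions, not from the original answer:
// Assumes the System.Text and Microsoft.ServiceFabric.Services.Remoting namespaces.
public static class AuditHeaderExtensions
{
    // Header names are arbitrary choices for this sketch.
    private const string UserHeaderName = "audit-user";
    private const string CorrelationIdHeaderName = "audit-correlation-id";
    public static void SetUser(this ServiceRemotingMessageHeaders headers, string user)
    {
        headers.AddHeader(UserHeaderName, Encoding.UTF8.GetBytes(user ?? string.Empty));
    }
    public static string GetUser(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue(UserHeaderName, out value)
            ? Encoding.UTF8.GetString(value)
            : null;
    }
    public static void SetCorrelationId(this ServiceRemotingMessageHeaders headers, Guid correlationId)
    {
        headers.AddHeader(CorrelationIdHeaderName, correlationId.ToByteArray());
    }
    public static Guid GetCorrelationId(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue(CorrelationIdHeaderName, out value)
            ? new Guid(value)
            : Guid.Empty;
    }
}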
Having this in place means that any new client created by the service for any additional call will also have the same headers set, so in the scenario ServiceA -> ServiceB -> ServiceC we will still have the same user set in the call from ServiceB to ServiceC.
What, that easy? Yes ;)
From inside a service, for instance a stateless OWIN Web API where you first capture the user information, you create a service proxy via ServiceProxyFactory and wrap that call in a ServiceRequestContext:
var task = ServiceRequestContext.RunInRequestContext(async () =>
{
var serviceA = ServiceProxyFactory.CreateServiceProxy<IServiceA>(new Uri($"{FabricRuntime.GetActivationContext().ApplicationName}/ServiceA"));
await serviceA.DoStuffAsync(CancellationToken.None);
}, Guid.NewGuid(), user);
OK, so to sum it up: you can hook into service remoting to set your own headers. As we see above, there is some work that needs to be done to get a mechanism for that in place, mainly creating your own subclasses of the underlying infrastructure. The upside is that once you have this in place, you have a very easy way of auditing your service calls.

Do the Request filters get run from BasicAppHost?

I know that the services get wired up by instantiating the BasicAppHost, and the IoC by using the ConfigureContainer property, but where is the right place to add the filters? The test in question never fires the global filter:
[TestFixture]
public class IntegrationTests
{
private readonly ServiceStackHost _appHost;
public IntegrationTests()
{
_appHost = new BasicAppHost(typeof(MyServices).Assembly)
{
ConfigureContainer = container =>
{
//
}
};
_appHost.Plugins.Add(new ValidationFeature());
_appHost.Config = new HostConfig { DebugMode = true };
_appHost.GlobalRequestFilters.Add(ITenantRequestFilter);
_appHost.Init();
}
private void ITenantRequestFilter(IRequest req, IResponse res, object dto)
{
var forTennant = dto as IForTenant;
if (forTennant != null)
RequestContext.Instance.Items.Add("TenantId", forTennant.TenantId);
}
[TestFixtureTearDown]
public void TestFixtureTearDown()
{
_appHost.Dispose();
}
[Test]
public void CanInvokeHelloServiceRequest()
{
var service = _appHost.Container.Resolve<MyServices>();
var response = (HelloResponse)service.Any(new Hello { Name = "World" });
Assert.That(response.Result, Is.EqualTo("Hello, World!"));
}
[Test]
public void CanInvokeFooServiceRequest()
{
var service = _appHost.Container.Resolve<MyServices>();
var lead = new Lead
{
TenantId = "200"
};
var response = service.Post(lead); //Does not fire filter.
}
}
ServiceStack is set at 4.0.40
Updated
After perusing the ServiceStack tests (which I highly recommend BTW) I came across a few examples of the AppHost being used AND tested. It looks like the "ConfigureAppHost" property is the right place to configure the filters, e.g.
ConfigureAppHost = host =>
{
host.Plugins.Add(new ValidationFeature());
host.GlobalRequestFilters.Add(ITenantRequestFilter);
},
ConfigureContainer = container =>
{
}
Updated1
And they still don't fire.
Updated2
After a bit of trial and error I think it's safe to say that NO, the filters are not hooked up while using the BasicAppHost. What I did to solve my problem was to switch these tests to use a class that inherits from AppSelfHostBase, and use the C# ServiceStack clients to invoke the methods on my service. THIS does cause the global filters to be executed.
Thank you,
Stephen
No, the Request and Response filters only fire for integration tests where the HTTP request is executed through the HTTP request pipeline. If you need to test the full request pipeline you'd need to use a self-hosting integration test.
Calling a method on a Service just does that, i.e. it's literally just making a C# method call on an autowired Service - there's no intermediate proxy magic intercepting the call in between.
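For illustration, a self-hosting integration test could look roughly like the sketch below; the TestAppHost class name, the listening URL, and the response type are assumptions for illustration, not code from the original posts:
public class TestAppHost : AppSelfHostBase
{
    public TestAppHost() : base("Integration Tests", typeof(MyServices).Assembly) { }
    public override void Configure(Funq.Container container)
    {
        Plugins.Add(new ValidationFeature());
        // Same logic as the ITenantRequestFilter method above.
        GlobalRequestFilters.Add((req, res, dto) =>
        {
            var forTenant = dto as IForTenant;
            if (forTenant != null)
                RequestContext.Instance.Items.Add("TenantId", forTenant.TenantId);
        });
    }
}
[Test]
public void Filter_fires_when_request_goes_through_http_pipeline()
{
    using (var appHost = new TestAppHost().Init().Start("http://localhost:2000/"))
    {
        var client = new JsonServiceClient("http://localhost:2000/");
        var response = client.Post<object>(new Lead { TenantId = "200" }); // filter fires here
    }
}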

Why is my call to Azure killing HttpContext.Current

I have an MVC application in which I have a controller that receives data from the user and then uploads a file to Azure blob storage. The application is using Unity IoC to handle dependency injection.
During the workflow I have isolated the following code, which demonstrates the problem:
public class MvcController : Controller
{
private IDependencyResolver _dependencyResolver;
public MvcController() : this(DependencyResolver.Current)
{
}
public MvcController(IDependencyResolver dependencyResolver)
{
this._dependencyResolver = dependencyResolver;
}
public T GetService<T>()
{
T resolved = _dependencyResolver.GetService<T>();
if (resolved == null)
throw new Exception(string.Format("Dependency resolver does not contain service of type {0}", typeof(T).Name));
return resolved;
}
}
public class MyController : MvcController
{
[NoAsyncTimeout]
public async Task<ActionResult> SaveFileAsync(/* A bunch of arguments */)
{
/* A bunch of code */
//This line gets a concrete instance from HttpContext.Current successfully...
IMyObject o = GetService<IMyObject>();
await SaveFileToAzure(/* A bunch of parameters */);
.
.
/* Sometime later */
Method2(/* A bunch of parameters */);
}
private void Method2(/* A bunch of parameters */)
{
//This line fails because HttpContext.Current is null
IMyObject o = GetService<IMyObject>();
/* A bunch of other code */
}
private async Task SaveFileToAzure(/* A bunch of parameters */)
{
//Grab a blob container to store the file data...
CloudBlobContainer blobContainer = GetBlobContainer();
ICloudBlob blob = blobContainer.GetBlockBlobReference(somePath);
Stream dataStream = GetData();
System.Threading.CancellationToken cancelToken = GetCancellationToken();
//All calls to DependencyResolver.GetService<T>() after this line of code fail...
await blob.UploadFromStreamAsync(dataStream, cancelToken);
/* A bunch of other code */
}
}
Unity has a registration for my object:
container.RegisterType<IMyObject, MyObject>(new HttpRequestLifetimeManager());
My lifetime manager is defined as follows:
public sealed class HttpRequestLifetimeManager : LifetimeManager
{
public Guid Key { get; private set; }
public HttpRequestLifetimeManager()
{
this.Key = Guid.NewGuid();
}
public override object GetValue()
{
return HttpContext.Current.Items[(object)this.Key];
}
public override void SetValue(object newValue)
{
HttpContext.Current.Items[(object)this.Key] = newValue;
}
public override void RemoveValue()
{
HttpContext.Current.Items.Remove((object)this.Key);
}
}
Nothing complicated.
Stepping into the HttpRequestLifetimeManager on the failing GetService() calls shows that after the UploadFromStreamAsync() call HttpContext.Current is null...
Has anyone else come across this problem? If so, is this a bug? Is this expected behaviour? Am I doing something out of the ordinary? What should I do to resolve it?
I can hack around it by storing a reference to HttpContext.Current prior to the offending call and restoring it after, but that doesn't seem like the right approach.
Any ideas?
To echo #Joachim: the HTTP context may not be available on the thread your continuation runs on. Compare the thread id where you can see HttpContext.Current is available to the thread id where it isn't; I'm assuming you will see two different threads. If so, this may be a sign that your main thread (the one with the HttpContext) does not have a SynchronizationContext (see http://blogs.msdn.com/b/pfxteam/archive/2012/01/20/10259049.aspx for more details on how that works). That would mean the code immediately after your await statement is not actually running on the same thread as the code before it, so one moment you have an HttpContext and the next you don't, because execution has switched to another thread. If that's the case, you should look at implementing or setting a SynchronizationContext on your main thread, so that control is returned to the original thread (which has the HttpContext) after the await; that should fix your problem. Alternatively, you could retrieve your object from the HttpContext on the original thread and find a way to pass it as a parameter to the async methods, so that they don't need to access HttpContext to get their state.
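To illustrate the second suggestion (this is my own sketch, not the poster's code): resolve what you need from HttpContext.Current before the first await and pass it down, so nothing after the await touches HttpContext.Current at all.
[NoAsyncTimeout]
public async Task<ActionResult> SaveFileAsync(/* A bunch of arguments */)
{
    // HttpContext.Current is still available here, before the await.
    IMyObject o = GetService<IMyObject>();
    await SaveFileToAzure(/* A bunch of parameters */);
    // Use the instance captured above instead of resolving it again.
    Method2(o /*, other parameters */);
    return new EmptyResult();
}
private void Method2(IMyObject o /*, other parameters */)
{
    // No GetService<IMyObject>() call needed here; the dependency was passed in.
    /* A bunch of other code */
}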

Silverlight - limit application to one WCF call at a time

Silverlight can only send a certain number of simultaneous WCF requests at a time. I am trying to serialize the requests that a particular section of my application is performing because I don't need them to run concurrently.
The problem is as follows (summary below):
"WCF proxies in Silverlight applications use the SynchronizationContext of the thread from which the web service call is initiated to schedule the invocation of the async event handler when the response is received. When the web service call is initiated from the UI thread of a Silverlight application, the async event handler code will also execute on the UI thread."
http://tomasz.janczuk.org/2009/08/improving-performance-of-concurrent-wcf.html
Summary: basically, if you block the thread that initiated the async call, the completion handler will never get invoked.
I can't figure out a threading model for this that would give me what I want in a reasonable way.
My only other requirement is that I don't want the UI thread to block.
As far as I can see, what should work is for the UI thread to hand off to a worker thread which queues up the calls as Action delegates, then uses an AutoResetEvent to execute the tasks one at a time in yet another worker thread. There are two problems:
1) The thread that initiates the async call can't block, because then the completion handler will never get called. In fact, if you put that thread into a wait loop, I've noticed it doesn't get called either.
2) You need a way to signal from the completed method of the async call that it is done.
Sorry that was so long, thanks for reading. Any ideas?
I have used a class that I built myself to execute load operations sequentially (one at a time). With the class you can register multiple load operations from different DomainContexts and then execute them one by one. You can provide an Action to the constructor of the class that gets called when all operations are finished (successfully or not).
Here's the code for the class. It's not complete and you may have to change it to match your expectations, but maybe it can help you in your situation.
public class DomainContextQueryLoader {
private List<LoadOperation> _failedOperations;
private Action<DomainContextQueryLoader> _completeAction;
private List<QueuedQuery> _pendingQueries = new List<QueuedQuery>();
public DomainContextQueryLoader(Action<DomainContextQueryLoader> completeAction) {
if (completeAction == null) {
throw new ArgumentNullException("completeAction", "completeAction is null.");
}
this._completeAction = completeAction;
}
/// <summary>
/// Expose the count of failed operations
/// </summary>
public int FailedOperationCount {
get {
if (_failedOperations == null) {
return 0;
}
return _failedOperations.Count;
}
}
/// <summary>
/// Expose an enumerator for all of the failed operations
/// </summary>
public IList<LoadOperation> FailedOperations {
get {
if (_failedOperations == null) {
_failedOperations = new List<LoadOperation>();
}
return _failedOperations;
}
}
public IEnumerable<QueuedQuery> QueuedQueries {
get {
return _pendingQueries;
}
}
public bool IsExecuting {
get;
private set;
}
public void EnqueueQuery<T>(DomainContext context, EntityQuery<T> query) where T : Entity {
if (IsExecuting) {
throw new InvalidOperationException("Query cannot be queued, cause execution of queries is in progress");
}
var loadBatch = new QueuedQuery() {
Callback = null,
Context = context,
Query = query,
LoadOption = LoadBehavior.KeepCurrent,
UserState = null
};
_pendingQueries.Add(loadBatch);
}
public void ExecuteQueries() {
if (IsExecuting) {
throw new InvalidOperationException("Executing of queries is in progress");
}
if (_pendingQueries.Count == 0) {
throw new InvalidOperationException("No queries are queued to execute");
}
IsExecuting = true;
var query = DequeueQuery();
ExecuteQuery(query);
}
private void ExecuteQuery(QueuedQuery query) {
System.Diagnostics.Debug.WriteLine("Load data {0}", query.Query.EntityType);
var loadOperation = query.Load();
loadOperation.Completed += new EventHandler(OnOperationCompleted);
}
private QueuedQuery DequeueQuery() {
var query = _pendingQueries[0];
_pendingQueries.RemoveAt(0);
return query;
}
private void OnOperationCompleted(object sender, EventArgs e) {
LoadOperation loadOperation = sender as LoadOperation;
loadOperation.Completed -= new EventHandler(OnOperationCompleted);
if (loadOperation.HasError) {
FailedOperations.Add(loadOperation);
}
if (_pendingQueries.Count > 0) {
var query = DequeueQuery();
ExecuteQuery(query);
}
else {
IsExecuting = false;
System.Diagnostics.Debug.WriteLine("All data loaded");
if (_completeAction != null) {
_completeAction(this);
_completeAction = null;
}
}
}
}
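For completeness, a usage sketch (the domain context and the query names are made up for illustration):
var loader = new DomainContextQueryLoader(l =>
{
    if (l.FailedOperationCount > 0)
    {
        // Inspect l.FailedOperations and report errors.
    }
    // All queued queries have completed at this point.
});
loader.EnqueueQuery(myDomainContext, myDomainContext.GetCustomersQuery());
loader.EnqueueQuery(myDomainContext, myDomainContext.GetOrdersQuery());
loader.ExecuteQueries();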
Update:
I've just noticed that you are not using WCF RIA Services, so maybe this class will not help you.
There are some options:
- You can take a look at the Agatha-RRSL framework, either by inspecting its implementation or by just using it instead of pure WCF. The framework allows you to queue requests. You can read more here.
- Another option is to use the Reactive Extensions (Rx). There is an SO example here and more info here and here.
- You can try the Power Threading library from Jeffrey Richter. He describes it in his book CLR via C#. You can find the library here. This webcast gives you some info about it.
- You can always roll your own implementation. The yield statement is a good help here (a rough sketch of that idea follows below). Error handling makes it very difficult to get the solution right.
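As a rough sketch of the roll-your-own idea (my own code, under the same caveats; MyServiceClient and its events are hypothetical): an iterator yields one async step at a time, and a driver only advances to the next step from the previous step's completion callback, so the calls never overlap and the UI thread never blocks.
public static class SequentialRunner
{
    // Each step receives an Action it must invoke from its completion callback.
    public static void Run(IEnumerable<Action<Action>> steps)
    {
        var enumerator = steps.GetEnumerator();
        Action moveNext = null;
        moveNext = () =>
        {
            if (enumerator.MoveNext())
                enumerator.Current(moveNext); // start the step; it calls moveNext when done
            else
                enumerator.Dispose();         // all steps finished
        };
        moveNext();
    }
}
// Usage, with a hypothetical generated WCF proxy:
// SequentialRunner.Run(Steps());
//
// static IEnumerable<Action<Action>> Steps()
// {
//     yield return done =>
//     {
//         var client = new MyServiceClient();
//         client.DoWorkCompleted += (s, e) => done();
//         client.DoWorkAsync();
//     };
//     yield return done =>
//     {
//         var client = new MyServiceClient();
//         client.DoOtherWorkCompleted += (s, e) => done();
//         client.DoOtherWorkAsync();
//     };
// }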
