Porting from GridGain to Ignite - what are the Ignite equivalents for these GridGain methods?

In porting our code base from GridGain to Ignite, I've found similar or renamed equivalents for most of the GridGain methods. There are a few that I need to clarify though.
What is the Ignite equivalent for
// Listener for asynchronous local node grid events. You can subscribe for local node grid event notifications via {@link GridEventStorageManager#addLocalEventListener}.
public interface GridLocalEventListener extends EventListener {}
What is the recommended way to invoke a compute future? See the picture for compile failures.
Apart from that, it looks like future.listenAsync() should be future.listen()
final ProcessingTaskAdapter taskAdapter = new ProcessingTaskAdapter(task, manager, node);
ComputeTaskFuture<ProcessingJob> future = grid.cluster()
.forPredicate(this) //===> what should this be
.compute().execute(taskAdapter, job);
future.listen(new IgniteInClosure<IgniteFuture<ProcessingJob>>() {
    @Override
    public void apply(IgniteFuture<ProcessingJob> future) {
        try {
            // Need this to extract the remote exception, if one occurred
            future.get();
        } catch (IgniteException e) {
            manager.fail(e.getCause() != null ? e.getCause() : e);
        } finally {
            manager.finishJob(job);
            jobDistributor.distribute(taskAdapter.getSelectedNode());
        }
    }
});

There is no special class anymore, you simply use IgnitePredicate as a listener. Refer to [1] for details.
Refer to [2] for information about async support. Also note that projections were replaced with cluster groups [3] (one of your compile errors is because of that). And you're correct, listenAsync was renamed to listen.
[1] https://apacheignite.readme.io/docs/events
[2] https://apacheignite.readme.io/docs/async-support
[3] https://apacheignite.readme.io/docs/cluster-groups

Related

ServiceStack with MiniProfiler for .Net 6

I was attempting to add Profiling into ServiceStack 6 with .Net 6 and using the .Net Framework MiniProfiler Plugin code as a starting point.
I noticed that ServiceStack still has Profiler.Current.Step("Step Name") in the Handlers, AutoQueryFeature and others.
What is currently causing me some stress is the following:
In ServiceStackHandlerBase.GetResponseAsync(IRequest httpReq, object request) the async Task is not awaited. This causes the step to be disposed of when it reaches the first async method it must await, so all the subsequent nested steps are not recorded as children. Is there something simple I'm missing here, or is this just a bug in a seldom-used feature?
In SqlServerOrmLiteDialectProvider most of the async methods use an Unwrap function that drills down to the underlying SqlConnection or SqlCommand. This causes an issue when attempting to wrap a command to enable profiling, because it ignores the override methods in the wrapper in favour of the IHasDbCommand.DbCommand nested within. Not implementing IHasDbCommand on the wrapping command makes it attempt to use the wrapping command, but then it hits a snag because of the forced cast to SqlCommand. Is there an easy way to combat this issue, or do I have to extend each OrmLiteDialectProvider I wish to use that has this problem so that it takes the wrapping command into account when it is present?
Any input would be appreciated.
Thanks.
Extra Information Point 1
Below is the code from ServiceStackHandlerBase that appears (to me) to be a bug:
public virtual Task<object> GetResponseAsync(IRequest httpReq, object request)
{
using (Profiler.Current.Step("Execute " + GetType().Name + " Service"))
{
return appHost.ServiceController.ExecuteAsync(request, httpReq);
}
}
I made a small example that shows what I am looking at:
using System;
using System.Threading.Tasks;
public class Program
{
public static async Task<int> Main(string[] args)
{
Console.WriteLine("App Start.");
await GetResponseAsync();
Console.WriteLine("App End.");
return 0;
}
// Async method with a using and non-awaited task.
private static Task GetResponseAsync()
{
using(new Test())
{
return AdditionAsync();
}
}
// Placeholder async method.
private static async Task AdditionAsync()
{
Console.WriteLine("Async Task Started.");
await Task.Delay(2000);
Console.WriteLine("Async Task Complete.");
}
}
public class Test : IDisposable
{
public Test()
{
Console.WriteLine("Disposable instance created.");
}
public void Dispose()
{
Console.WriteLine("Disposable instance disposed.");
}
}
My Desired Result:
App Start.
Disposable instance created.
Async Task Started.
Async Task Complete.
Disposable instance disposed.
App End.
My Actual Result:
App Start.
Disposable instance created.
Async Task Started.
Disposable instance disposed.
Async Task Complete.
App End.
This to me shows that even though the task is awaited at a later point in the code, the using has already disposed of the contained object.
MiniProfiler was coupled to System.Web, so it isn't supported in ServiceStack on .NET 6.
To view the generated SQL you can use a BeforeExecFilter to inspect the IDbCommand before it's executed.
This is what PrintSql() uses to write all generated SQL to the console:
OrmLiteUtils.PrintSql();
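If you'd rather hook the filter yourself than print everything, here is a minimal sketch of the BeforeExecFilter approach (the SqlLogging class name and the parameter printing are illustrative, not part of OrmLite):
using System;
using System.Data;
using ServiceStack.OrmLite;

public static class SqlLogging // illustrative helper
{
    public static void Enable()
    {
        // Inspect each IDbCommand just before OrmLite executes it.
        OrmLiteConfig.BeforeExecFilter = dbCmd =>
        {
            Console.WriteLine(dbCmd.CommandText);
            foreach (IDbDataParameter p in dbCmd.Parameters)
                Console.WriteLine("  " + p.ParameterName + " = " + p.Value);
        };
    }
}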
Note: when you return a non-awaited task it just means it doesn't get awaited at that point; it still gets executed when the returned task is eventually awaited.
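On the original nesting question, here is a hedged sketch of what an awaited variant of the handler method might look like (this is not ServiceStack's actual implementation, just the shape of the change), so the profiler step only disposes once the service call has completed:
public virtual async Task<object> GetResponseAsync(IRequest httpReq, object request)
{
    using (Profiler.Current.Step("Execute " + GetType().Name + " Service"))
    {
        // Awaiting inside the using keeps the step open until ExecuteAsync finishes,
        // so nested steps stay attached as children.
        return await appHost.ServiceController.ExecuteAsync(request, httpReq);
    }
}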
To avoid the explicit casting you should be able to override a SQL Server Dialect Provider where you'll be able to replace the existing implementation with your own.

Redis Connections May Not be Closing with c#

I'm connecting to Azure Redis and they show me the number of open connections to my redis server. I've got the following c# code that encloses all my Redis sets and gets. Should this be leaking connections?
using (var connectionMultiplexer = ConnectionMultiplexer.Connect(connectionString))
{
lock (Locker)
{
redis = connectionMultiplexer.GetDatabase();
}
var o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
if (o != null)
{
return o;
}
lock (Locker)
{
// get lock but release if it takes more than 60 seconds to complete to avoid deadlock if this app crashes before release
//using (redis.AcquireLock(cacheKeyName + "-lock", TimeSpan.FromSeconds(60)))
var lockKey = cacheKeyName + "-lock";
if (redis.LockTake(lockKey, Environment.MachineName, TimeSpan.FromSeconds(10)))
{
try
{
o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
if (o == null)
{
o = func();
redis.StringSet(cacheKeyName, CacheSerializer.Serialize(o),
TimeSpan.FromSeconds(cacheTimeOutSeconds));
}
redis.LockRelease(lockKey, Environment.MachineName);
return o;
}
finally
{
redis.LockRelease(lockKey, Environment.MachineName);
}
}
return o;
}
}
}
You can keep the connectionMultiplexer in a static variable instead of creating it for every get/set. That will keep one connection to Redis always open and make your operations faster.
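A minimal sketch of that shared-multiplexer pattern (the class name and the placeholder connection string are illustrative; Lazy<T> just makes the one-time connect thread-safe):
using System;
using StackExchange.Redis;

public static class RedisConnection // illustrative name
{
    // One multiplexer for the whole app; replace the placeholder connection string.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("your-cache.redis.cache.windows.net:6380,password=...,ssl=True"));

    public static ConnectionMultiplexer Connection
    {
        get { return LazyConnection.Value; }
    }

    public static IDatabase Database
    {
        get { return Connection.GetDatabase(); }
    }
}
With this in place, the get/set code above would use RedisConnection.Database instead of creating and disposing a multiplexer inside a using block.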
Update:
Please, have a look at StackExchange.Redis basic usage:
https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/Basics.md
"Note that ConnectionMultiplexer implements IDisposable and can be disposed when no longer required, but I am deliberately not showing using statement usage, because it is exceptionally rare that you would want to use a ConnectionMultiplexer briefly, as the idea is to re-use this object."
It works nicely for me, keeping a single connection to Azure Redis (sometimes it creates 2 connections, but that's by design). Hope it helps.
I was suggesting trying the Close (or CloseAsync) method explicitly. In a test setting you may be using different connections for different test cases and not want to share a single multiplexer. A search of public code using the Redis client shows a pattern of Close followed by Dispose calls.
Note from the XML documentation of the Redis client that the Close method is described as doing more:
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
public void Close(bool allowCommandsToComplete = true);
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
[AsyncStateMachine(typeof(<CloseAsync>d__183))]
public Task CloseAsync(bool allowCommandsToComplete = true);
...
//
// Summary:
// Release all resources associated with this object
public void Dispose();
And then I looked up the code for the client, found it here:
https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/ConnectionMultiplexer.cs
And we can see the Dispose method calling Close (not the usual overridable protected Dispose(bool)), furthermore with the wait for in-queue commands to complete set to true. It is an atypical dispose pattern implementation: by closing all the connections and waiting on them, it risks throwing an exception, whereas the Dispose contract says it should never throw.
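For instance, a sketch of the explicit close-then-dispose pattern in a test teardown (the connectionString variable and the probe key are illustrative):
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect(connectionString);
try
{
    var db = mux.GetDatabase();
    db.StringSet("probe", "value");
}
finally
{
    // Drain in-queue commands before releasing resources.
    mux.Close(allowCommandsToComplete: true);
    mux.Dispose();
}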

Nested IMessageQueueClient publish using Servicestack InMemoryTransientMessageService

We are using InMemoryTransientMessageService to chain several one-way notifications between services. We cannot use the Redis provider, and we do not really need it so far. Synchronous dispatching is enough.
We are experiencing problems when using a publish inside a service that is handling another publish. In pseudo-code:
FirstService.Method()
_messageQueueClient.Publish(obj);
SecondService.Any(obj)
_messageQueueClient.Publish(obj);
ThirdService.Any(obj)
The second message is never handled. In the following code from ServiceStack's TransientMessageServiceBase, when the second message is published the service is already "isRunning", so it does not try to handle it:
public virtual void Start()
{
if (isRunning) return;
isRunning = true;
this.messageHandlers = this.handlerMap.Values.ToList().ConvertAll(
x => x.CreateMessageHandler()).ToArray();
using (var mqClient = MessageFactory.CreateMessageQueueClient())
{
foreach (var handler in messageHandlers)
{
handler.Process(mqClient);
}
}
this.Stop();
}
I'm not sure about the impact of changing this behaviour in order to be able to nest/chain message publications. Do you think it is safe to remove this check? Any other ideas?
After some tests, it seems there is no problem with removing the "isRunning" check. All nested publications are executed correctly.

Async Logger. Can I lose/delay log entries?

I'm implementing my own logging framework. Following is my BaseLogger, which receives the log entries and pushes them to the actual logger, which implements the abstract Log method.
I originally used the C# TPL for logging in an async manner, but now use Threads instead of the TPL. (A TPL task doesn't hold a real thread, so if all threads of the application end, tasks will stop as well, which would cause all 'waiting' log entries to be lost.)
public abstract class BaseLogger
{
// ... Omitted properties constructor .etc. ... //
public virtual void AddLogEntry(LogEntry entry)
{
if (!AsyncSupported)
{
// the underlying logger doesn't support Async.
// Simply call the log method and return.
Log(entry);
return;
}
// Logger supports Async.
LogAsync(entry);
}
private void LogAsync(LogEntry entry)
{
lock (LogQueueSyncRoot) // Make sure we have the lock before accessing the queue.
{
LogQueue.Enqueue(entry);
}
if (LogThread == null || LogThread.ThreadState == ThreadState.Stopped)
{ // either the thread is completed, or this is the first time we're logging to this logger.
LogThread = new Thread(new ThreadStart(() =>
{
while (true)
{
LogEntry logEntry;
lock (LogQueueSyncRoot)
{
if (LogQueue.Count > 0)
{
logEntry = LogQueue.Dequeue();
}
else
{
break;
// is it possible for a message to be added,
// right after the break and I leave the lock {} but
// before I exit the loop and task gets 'completed' ??
}
}
Log(logEntry);
}
}));
LogThread.Start();
}
}
// Actual logger implementations will implement this method.
protected abstract void Log(LogEntry entry);
}
Note that AddLogEntry can be called from multiple threads at the same time.
My question is, is it possible for this implementation to lose log entries ?
Specifically, is it possible for a log entry to be added to the queue right after my thread hits the break statement in the else clause and exits the lock block, while the thread is still in the 'Running' state?
I do realize that, because I'm using a queue, even if I miss an entry, the next request to log will push the missed entry as well. But this is not acceptable, especially if it happens for the last log entry of the application.
Also, please let me know whether and how I can implement the same thing using the new C# 5.0 async and await keywords with cleaner code. I don't mind requiring .NET 4.5.
Thanks in Advance.
While you could likely get this to work, in my experience I'd recommend, if possible, using an existing logging framework :) For instance, there are various options for async logging/appenders with log4net, such as this async appender wrapper thingy.
Otherwise, IMHO, since you're going to be blocking a threadpool thread during your logging operation anyway, I would instead just start a dedicated thread for your logging. You seem to be kind of going for that approach already, just via Task so that you don't hold a threadpool thread when nothing is logging. However, I think the simpler implementation justifies just having the dedicated thread.
Once you have a dedicated logging thread, you then only need an intermediate ConcurrentQueue. At that point, your log method just adds to the queue and your dedicated logging thread just does that while loop you already have. You can wrap it with a BlockingCollection if you need blocking/bounded behavior, as sketched below.
By having the dedicated thread as the only thing that writes, it eliminates any possibility of having multiple threads/tasks pulling off queue entries and trying to write log entries at the same time (painful race condition). Since the log method is now just adding to a collection, it doesn't need to be async and you don't need to deal with the TPL at all, making it simpler and easier to reason about (and hopefully in the category of 'obviously correct' or thereabouts :)
This 'dedicated logging thread' approach is what I believe the log4net appender I linked to does as well, FWIW, in case that helps serve as an example.
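To make that concrete, here is a minimal sketch of the dedicated-thread approach using a BlockingCollection (the class and member names are assumptions, and LogEntry is the type from the question):
using System.Collections.Concurrent;
using System.Threading;

public abstract class QueuedLogger // illustrative name
{
    private readonly BlockingCollection<LogEntry> queue =
        new BlockingCollection<LogEntry>(boundedCapacity: 10000);
    private readonly Thread worker;

    protected QueuedLogger()
    {
        // A single dedicated writer thread: no races between competing writers.
        worker = new Thread(() =>
        {
            foreach (var entry in queue.GetConsumingEnumerable())
                Log(entry);
        }) { IsBackground = false }; // foreground thread, so pending entries are written before exit
        worker.Start();
    }

    public void AddLogEntry(LogEntry entry)
    {
        queue.Add(entry);
    }

    // Call once at shutdown: stop accepting entries and wait for the queue to drain.
    public void Shutdown()
    {
        queue.CompleteAdding();
        worker.Join();
    }

    protected abstract void Log(LogEntry entry);
}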
I see two race conditions off the top of my head:
You can spin up more than one Thread if multiple threads call AddLogEntry. This won't cause lost events but is inefficient.
Yes, an event can be queued while the Thread is exiting, and in that case it would be "lost".
Also, there's a serious performance issue here: unless you're logging constantly (thousands of times a second), you're going to be spinning up a new Thread for each log entry. That will get expensive quickly.
Like James, I agree that you should use an established logging library. Logging is not as trivial as it seems, and there are already many solutions.
That said, if you want a nice .NET 4.5-based approach, it's pretty easy:
using System.Threading.Tasks.Dataflow; // ActionBlock<T> lives in the TPL Dataflow package

public abstract class BaseLogger
{
private readonly ActionBlock<LogEntry> block;
protected BaseLogger(int maxDegreeOfParallelism = 1)
{
block = new ActionBlock<LogEntry>(
entry =>
{
Log(entry);
},
new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = maxDegreeOfParallelism,
});
}
public virtual void AddLogEntry(LogEntry entry)
{
block.Post(entry);
}
protected abstract void Log(LogEntry entry);
}
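One addition worth considering, given the concern about losing the last entries at application exit: ActionBlock has an explicit completion handshake, so a flush method along these lines (a sketch, to be added to the class above) can drain the queue before shutdown:
// Sketch: stop accepting new entries and wait until everything already
// posted has been passed to Log.
public void Flush()
{
    block.Complete();
    block.Completion.Wait();
}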
Regarding losing waiting messages on app crash because of an unhandled exception, I've bound a handler to the AppDomain.CurrentDomain.DomainUnload event. It goes like this:
protected ManualResetEvent flushing = new ManualResetEvent(true);
protected AsyncLogger() // ctor of logger
{
AppDomain.CurrentDomain.DomainUnload += CurrentDomain_DomainUnload;
}
protected void CurrentDomain_DomainUnload(object sender, EventArgs e)
{
if (!IsEmpty)
{
flushing.WaitOne();
}
}
Maybe not too clean, but it works.
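For what it's worth, here is a sketch of how that handshake could be tightened (the field and method names are assumptions layered on the question's BaseLogger; the loop still relies on the existing restart logic for entries queued after it drains):
// The write loop signals 'drained' only when the queue is empty, so the
// DomainUnload handler blocks until pending entries have been written.
protected readonly ManualResetEvent drained = new ManualResetEvent(true);

private void EnqueueEntry(LogEntry entry)
{
    lock (LogQueueSyncRoot)
    {
        LogQueue.Enqueue(entry);
        drained.Reset(); // work is now pending
    }
}

private void WriteLoop()
{
    while (true)
    {
        LogEntry entry;
        lock (LogQueueSyncRoot)
        {
            if (LogQueue.Count == 0) { drained.Set(); break; }
            entry = LogQueue.Dequeue();
        }
        Log(entry);
    }
}

protected void CurrentDomain_DomainUnload(object sender, EventArgs e)
{
    drained.WaitOne(TimeSpan.FromSeconds(5)); // don't hang unload forever
}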

Simpleinjector: Is this the right way to RegisterManyForOpenGeneric when I have 2 implementations and want to pick one?

Using simple injector with the command pattern described here and the query pattern described here. For one of the commands, I have 2 handler implementations. The first is a "normal" implementation that executes synchronously:
public class SendEmailMessageHandler
: IHandleCommands<SendEmailMessageCommand>
{
public SendEmailMessageHandler(IProcessQueries queryProcessor
, ISendMail mailSender
, ICommandEntities entities
, IUnitOfWork unitOfWork
, ILogExceptions exceptionLogger)
{
// save constructor args to private readonly fields
}
public void Handle(SendEmailMessageCommand command)
{
var emailMessageEntity = GetThisFromQueryProcessor(command);
var mailMessage = ConvertEntityToMailMessage(emailMessageEntity);
_mailSender.Send(mailMessage);
emailMessageEntity.SentOnUtc = DateTime.UtcNow;
_entities.Update(emailMessageEntity);
_unitOfWork.SaveChanges();
}
}
The other is like a command decorator, but explicitly wraps the previous class to execute the command in a separate thread:
public class SendAsyncEmailMessageHandler
: IHandleCommands<SendEmailMessageCommand>
{
public SendAsyncEmailMessageHandler(ISendMail mailSender,
ILogExceptions exceptionLogger)
{
// save constructor args to private readonly fields
}
public void Handle(SendEmailMessageCommand command)
{
var program = new SendAsyncEmailMessageProgram
(command, _mailSender, _exceptionLogger);
var thread = new Thread(program.Launch);
thread.Start();
}
private class SendAsyncEmailMessageProgram
{
internal SendAsyncEmailMessageProgram(
SendEmailMessageCommand command
, ISendMail mailSender
, ILogExceptions exceptionLogger)
{
// save constructor args to private readonly fields
}
internal void Launch()
{
// get new instances of DbContext and query processor
var uow = MyServiceLocator.Current.GetService<IUnitOfWork>();
var qp = MyServiceLocator.Current.GetService<IProcessQueries>();
var handler = new SendEmailMessageHandler(qp, _mailSender,
uow as ICommandEntities, uow, _exceptionLogger);
handler.Handle(_command);
}
}
}
For a while Simple Injector was yelling at me, telling me that it found 2 implementations of IHandleCommands<SendEmailMessageCommand>. I found that the following works, but I'm not sure whether it is the best / optimal way. I want to explicitly register this one interface to use the async implementation:
container.RegisterManyForOpenGeneric(typeof(IHandleCommands<>),
(type, implementations) =>
{
// register the async email handler
if (type == typeof(IHandleCommands<SendEmailMessageCommand>))
container.Register(type, implementations
.Single(i => i == typeof(SendAsyncEmailMessageHandler)));
else if (implementations.Length < 1)
throw new InvalidOperationException(string.Format(
"No implementations were found for type '{0}'.",
type.Name));
else if (implementations.Length > 1)
throw new InvalidOperationException(string.Format(
"{1} implementations were found for type '{0}'.",
type.Name, implementations.Length));
// register a single implementation (default behavior)
else
container.Register(type, implementations.Single());
}, assemblies);
My question: is this the right way, or is there something better? For example, I'd like to reuse the existing exceptions thrown by Simple Injector for all other implementations instead of having to throw them explicitly in the callback.
Update reply to Steven's answer
I have updated my question to be more explicit. The reason I have implemented it this way is because as part of the operation, the command updates a System.Nullable<DateTime> property called SentOnUtc on a db entity after the MailMessage is successfully sent.
The ICommandEntities and IUnitOfWork are both implemented by an Entity Framework DbContext class. The DbContext is registered per HTTP context, using the method described here:
container.RegisterPerWebRequest<MyDbContext>();
container.Register<IUnitOfWork>(container.GetInstance<MyDbContext>);
container.Register<IQueryEntities>(container.GetInstance<MyDbContext>);
container.Register<ICommandEntities>(container.GetInstance<MyDbContext>);
The default behavior of the RegisterPerWebRequest extension method in the simpleinjector wiki is to register a transient instance when the HttpContext is null (which it will be in the newly launched thread).
var context = HttpContext.Current;
if (context == null)
{
// No HttpContext: Let's create a transient object.
return _instanceCreator();
...
This is why the Launch method uses the service locator pattern to get a single instance of DbContext, then passes it directly to the synchronous command handler constructor. In order for the _entities.Update(emailMessageEntity) and _unitOfWork.SaveChanges() lines to work, both must be using the same DbContext instance.
NOTE: Ideally, sending the email should be handled by a separate polling worker. This command is basically a queue clearing house. The EmailMessage entities in the db already have all of the information needed to send the email. This command just grabs an unsent one from the database, sends it, then records the DateTime of the action. Such a command could be executed by polling from a different process / app, but I will not accept such an answer for this question. For now, we need to kick off this command when some kind of http request event triggers it.
There are indeed easier ways to do this. For instance, instead of registering a BatchRegistrationCallback as you did in your last code snippet, you can make use of the OpenGenericBatchRegistrationExtensions.GetTypesToRegister method. This method is used internally by the RegisterManyForOpenGeneric methods, and allows you to filter the returned types before you send them to a RegisterManyForOpenGeneric overload:
var types = OpenGenericBatchRegistrationExtensions
.GetTypesToRegister(typeof(IHandleCommands<>), assemblies)
.Where(t => !t.Name.StartsWith("SendAsync"));
container.RegisterManyForOpenGeneric(
typeof(IHandleCommands<>),
types);
But I think it would be better to make a few changes to your design. When you change your async command handler to a generic decorator, you remove the problem altogether. Such a generic decorator could look like this:
public class SendAsyncCommandHandlerDecorator<TCommand>
: IHandleCommands<TCommand>
{
private IHandleCommands<TCommand> decorated;
public SendAsyncCommandHandlerDecorator(
IHandleCommands<TCommand> decorated)
{
this.decorated = decorated;
}
public void Handle(TCommand command)
{
// WARNING: THIS CODE IS FLAWED!!
Task.Factory.StartNew(
() => this.decorated.Handle(command));
}
}
Note that this decorator is flawed because of reasons I'll explain later, but let's go with this for the sake of education.
Making this type generic allows you to reuse it for multiple commands. Because this type is generic, RegisterManyForOpenGeneric will skip it (since it can't guess the generic type argument). This allows you to register the decorator as follows:
container.RegisterDecorator(
typeof(IHandleCommands<>),
typeof(SendAsyncCommandHandlerDecorator<>));
In your case, however, you don't want this decorator to be wrapped around all handlers (as the previous registration does). There is a RegisterDecorator overload that takes a predicate, which allows you to specify when to apply this decorator:
container.RegisterDecorator(
typeof(IHandleCommands<>),
typeof(SendAsyncCommandHandlerDecorator<>),
c => c.ServiceType == typeof(IHandleCommands<SendEmailMessageCommand>));
With this predicate applied, the SendAsyncCommandHandlerDecorator<T> will only be applied to the IHandleCommands<SendEmailMessageCommand> handler.
Another option (which I prefer) is to register a closed generic version of the SendAsyncCommandHandlerDecorator<T>. This saves you from having to specify the predicate:
container.RegisterDecorator(
typeof(IHandleCommands<>),
typeof(SendAsyncCommandHandlerDecorator<SendEmailMessageCommand>));
As I noted however, the code for the given decorator is flawed, because you should always build a new dependency graph on a new thread, and never pass on dependencies from thread to thread (which the original decorator does). More information about this in this article: How to work with dependency injection in multi-threaded applications.
So the answer is actually more complex, since this generic decorator should really be a proxy that replaces the original command handler (or possibly even a chain of decorators wrapping a handler). This proxy must be able to build up a new object graph in a new thread. This proxy would look like this:
public class SendAsyncCommandHandlerProxy<TCommand>
: IHandleCommands<TCommand>
{
Func<IHandleCommands<TCommand>> factory;
public SendAsyncCommandHandlerProxy(
Func<IHandleCommands<TCommand>> factory)
{
this.factory = factory;
}
public void Handle(TCommand command)
{
Task.Factory.StartNew(() =>
{
var handler = this.factory();
handler.Handle(command);
});
}
}
Although Simple Injector has no built-in support for resolving Func<T> factories, the RegisterDecorator methods are the exception. The reason for this is that it would be very tedious to register decorators with Func dependencies without framework support. In other words, when registering the SendAsyncCommandHandlerProxy with the RegisterDecorator method, Simple Injector will automatically inject a Func<T> delegate that can create new instances of the decorated type. Since the proxy only references a (singleton) factory (and is stateless), we can even register it as a singleton:
container.RegisterSingleDecorator(
typeof(IHandleCommands<>),
typeof(SendAsyncCommandHandlerProxy<SendEmailMessageCommand>));
Obviously, you can mix this registration with other RegisterDecorator registrations. Example:
container.RegisterManyForOpenGeneric(
typeof(IHandleCommands<>),
typeof(IHandleCommands<>).Assembly);
container.RegisterDecorator(
typeof(IHandleCommands<>),
typeof(TransactionalCommandHandlerDecorator<>));
container.RegisterSingleDecorator(
typeof(IHandleCommands<>),
typeof(SendAsyncCommandHandlerProxy<SendEmailMessageCommand>));
container.RegisterDecorator(
typeof(IHandleCommands<>),
typeof(ValidatableCommandHandlerDecorator<>));
This registration wraps any command handler with a TransactionalCommandHandlerDecorator<T>, optionally decorates it with the async proxy, and always wraps it with a ValidatableCommandHandlerDecorator<T>. This allows you to do the validation synchronously (on the same thread), and when validation succeeds, spin off handling of the command on a new thread, running in a transaction on that thread.
Since some of your dependencies are registered Per Web Request, this means that they would either get a new (transient) instance or an exception would be thrown when there is no web request (depending on how Per Web Request is implemented in the Simple Injector version you use), as is the case when you start a new thread to run the code. As you are implementing multiple interfaces with your EF DbContext, this also means Simple Injector would create a new instance for each constructor-injected interface, and as you said, this will be a problem.
You'll need to reconfigure the DbContext, since a pure Per Web Request lifestyle will not do. There are several solutions, but I think the best is to make a hybrid PerWebRequest/PerLifetimeScope instance. You'll need the Per Lifetime Scope extension package for this. Also note that there is also an extension package for Per Web Request, so you don't have to use any custom code. When you've done this, you can define the following registration:
container.RegisterPerWebRequest<DbContext, MyDbContext>();
container.RegisterPerLifetimeScope<IObjectContextAdapter,
MyDbContext>();
// Register as hybrid PerWebRequest / PerLifetimeScope.
container.Register<MyDbContext>(() =>
{
if (HttpContext.Current != null)
return (MyDbContext)container.GetInstance<DbContext>();
else
return (MyDbContext)container
.GetInstance<IObjectContextAdapter>();
});
UPDATE
Simple Injector 2 now has an explicit notion of lifestyles, which makes the previous registration much easier. The following registration is therefore advised:
var hybrid = Lifestyle.CreateHybrid(
lifestyleSelector: () => HttpContext.Current != null,
trueLifestyle: new WebRequestLifestyle(),
falseLifestyle: new LifetimeScopeLifestyle());
// Register as hybrid PerWebRequest / PerLifetimeScope.
container.Register<MyDbContext, MyDbContext>(hybrid);
Since Simple Injector only allows registering a type once (it doesn't support keyed registration), it is not possible to register a MyDbContext with both a PerWebRequest lifestyle AND a PerLifetimeScope lifestyle. So we have to cheat a bit: we make two registrations (one per lifestyle) and select different service types (DbContext and IObjectContextAdapter). The service type is not really important, except that MyDbContext must implement/inherit from it (feel free to implement dummy interfaces on your MyDbContext if this is convenient).
Besides these two registrations, we need a third registration, a mapping, that allows us to get the proper instance back. This is the Register<MyDbContext> call, which returns the proper instance based on whether the operation is executed inside an HTTP request or not.
Your AsyncCommandHandlerProxy will have to start a new lifetime scope, which is done as follows:
public class AsyncCommandHandlerProxy<T>
: IHandleCommands<T>
{
private readonly Func<IHandleCommands<T>> factory;
private readonly Container container;
public AsyncCommandHandlerProxy(
Func<IHandleCommands<T>> factory,
Container container)
{
this.factory = factory;
this.container = container;
}
public void Handle(T command)
{
Task.Factory.StartNew(() =>
{
using (this.container.BeginLifetimeScope())
{
var handler = this.factory();
handler.Handle(command);
}
});
}
}
Note that the container is added as dependency of the AsyncCommandHandlerProxy.
Now, any MyDbContext that is resolved when HttpContext.Current is null will be a Per Lifetime Scope instance instead of a new transient instance.
