Hello, I have a command bus and a query bus, which basically hold a key/value map from the name of each command or query to its handler; I then execute the command, which should publish my event.
But I have some doubts about how I could build my event bus.
Is the command bus part of an event bus?
How could I build an event bus with its handlers?
command-bus:
export interface ICommand {
}

export interface ICommandHandler<
  TCommand extends ICommand = any,
  TResult = any
> {
  execute(command: TCommand): Promise<TResult>
}

export interface ICommandBus<CommandBase extends ICommand = ICommand> {
  execute<T extends CommandBase>(command: T): Promise<any>
  register(data: { commandHandler: ICommandHandler; command: ICommand }[]): void
}
command-bus implementation:
export class CommandBus<Command extends ICommand = ICommand>
  implements ICommandBus<Command> {
  private handlers = new Map<string, ICommandHandler<Command>>()

  public execute<T extends Command>(command: T): Promise<any> {
    const commandName = this.getCommandName(command)
    const handler = this.handlers.get(commandName)
    if (!handler) throw new Error(`No handler registered for ${commandName}`)
    return handler.execute(command)
  }

  public register(
    data: { commandHandler: ICommandHandler; command: ICommand }[],
  ): void {
    data.forEach(({ command, commandHandler }) => {
      this.bind(commandHandler, this.getCommandName(command))
    })
  }

  private bind<T extends Command>(handler: ICommandHandler<T>, name: string) {
    this.handlers.set(name, handler)
  }

  // Resolves the class name of a command instance via its prototype.
  private getCommandName(command: ICommand): string {
    const { constructor } = Object.getPrototypeOf(command)
    return constructor.name as string
  }
}
Here another question arises: who should have the responsibility for publishing events to my event DB, or for reading a stream from it? Is it my event-store class?
event-store class:
export class EventStoreClient {
  private readonly eventFactory: EventFactory;
  private client: TCPClient;
  private type: string;

  constructor(private readonly config: TCPConfig) {
    this.type = 'event-store';
    this.eventFactory = new EventFactory();
    this.connect();
  }

  connect() {
    this.client = new TCPClient(this.config);
    return this;
  }

  getClient() {
    return this.client;
  }

  newEvent(name: any, payload: any) {
    return this.eventFactory.newEvent(name, payload);
  }

  close() {
    this.client.close();
    return this;
  }
}
And then I have doubts about how to implement my event bus, with my event handlers and my events.
I would be happy if someone could help me.
event-interface:
export interface IEvent {
  readonly aggregateVersion: number
  readonly aggregateId: string
}

export interface IEventHandler<T extends IEvent = any> {
  handle(event: T): any
}
possible usage:
commandBus.execute(new Command())

class CommandHandler {
  constructor(repository: IRepository, eventBus: ???) {} // what type should this be?

  execute() {
    // how can I publish an event here, after the command-handler logic, using the event bus?
  }
}
I see there's some confusion between the various Buses and the Event Store. Before attempting to implement an Event Bus, you need to answer one important question that lies at the foundation of any Event Sourcing implementation:
How to preserve the Event Store as the Single Source of Truth?
That is, your Event Store contains the complete state of the domain. This also means that the consumers of the Event Bus (whatever it ends up being - a message queue, a streaming platform, Redis, etc.) should only get the events that are persisted. Therefore, the goals become:
1. Only deliver events on the Bus that are persisted to the Store (so if you get an error writing to the Store, or maybe a Concurrency Exception, do not deliver via bus!)
2. Deliver all events to all interested consumers, without losing any events
These two goals intuitively translate to "I want atomic commit between the Event Store and the Event Bus". This is simplest to achieve when they're the same thing!
So, instead of thinking about how to connect an "Event Bus" to command handlers and send events back and forth, think about how to retrieve already persisted events from the Event Store and subscribe to that. This also removes any dependency between command handlers and event subscribers - they live on different sides of the Event Store (writer vs. reader), and could be in different processes, on different machines.
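To make that concrete, here is a minimal sketch of what the subscriber side of an event bus could look like, mirroring the shape of your command bus. This is an illustration rather than a definitive implementation: the wiring that feeds publish from the Event Store's subscription is assumed (it depends entirely on your event-store client), and looking handlers up by event class name is just a convention.
export class EventBus {
  private handlers = new Map<string, IEventHandler[]>()

  public subscribe<T extends IEvent>(eventName: string, handler: IEventHandler<T>): void {
    const existing = this.handlers.get(eventName) ?? []
    this.handlers.set(eventName, [...existing, handler])
  }

  // Called once per event read back from the Event Store subscription -
  // never directly from a command handler.
  public async publish(event: IEvent): Promise<void> {
    const eventName = Object.getPrototypeOf(event).constructor.name
    const handlers = this.handlers.get(eventName) ?? []
    await Promise.all(handlers.map((handler) => handler.handle(event)))
  }
}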
We have a scenario where we must integrate requests with a single destination system, which exposes its operations as REST APIs (provided by a third party, most likely not Azure). So this is a scenario where n messages map to n actions on the same destination system. There is no multicast or broadcast.
We are considering Service Bus to achieve this, based on previous experience with other use cases, and to take advantage of the dead-letter mechanism, among other things.
We need to integrate 6 or 7 different actions with the third party. On Service Bus we can achieve this by creating one topic per action, which matters because the data that travels in the message differs from action to action.
But we are facing a problem when consuming topics. We are able to have a hosted service in Azure (App Service) that listens on a specific topic and does its work.
But since we are trying to listen on several topics, we would like to avoid writing and deploying multiple App Services. If possible, we would like a single App Service in which we trigger one ServiceBusProcessor per topic; even though they all share the limits of the App Service itself, each processor would be independent, listening on its own topic in parallel and processing.
I'll share a code sample of our hosted service below. We have identified two options and would like opinions on them:
Option 1: we send all messages to the same topic, then use filters to determine the appropriate action (see the filter sketch after option 2). This would keep the code simple, but it would put all messages in the same 'line', making the topic an all-purpose topic, which seems wrong.
Option 2: based on our sample below, which represents a single hosted service listening on a single topic, we would break it up and inject a list of listeners that implement the same interface, each working independently on its own topic and message type. We are not sure whether this is feasible and works properly, because the App Service would have to handle multiple ServiceBusProcessors side by side.
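For reference, option 1's filtering could look roughly like this with Azure.Messaging.ServiceBus: one topic, one filtered subscription per action. This is only a sketch; the topic, subscription, and action names are placeholders.
// One topic, one filtered subscription per action (option 1).
var adminClient = new ServiceBusAdministrationClient("connectionString");
await adminClient.CreateSubscriptionAsync(
    new CreateSubscriptionOptions("actions-topic", "action-a"),
    new CreateRuleOptions("ActionAOnly", new CorrelationRuleFilter { Subject = "ActionA" }));

// Senders label each message with its action so the filter can route it:
var sender = new ServiceBusClient("connectionString").CreateSender("actions-topic");
await sender.SendMessageAsync(new ServiceBusMessage("{ \"data\": 1 }") { Subject = "ActionA" });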
We'd like to know if we are missing an option, or if there is a better way to achieve this. I hope I've explained it well.
I include a sample of our hosted service below. Thanks a lot.
public class MyService : IHostedService, IMyService
{
    private readonly ILogger<MyService> _logger;
    private ServiceBusClient _client;
    private ServiceBusProcessor _processor;

    public MyService(ILogger<MyService> logger)
    {
        _logger = logger;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        _client = new ServiceBusClient("connectionString");
        _processor = _client.CreateProcessor("topicName", "subscriptionName");
        _processor.ProcessMessageAsync += ProcessMessageAsync;
        _processor.ProcessErrorAsync += ProcessErrorAsync;
        // Without this call the processor never pumps messages.
        await _processor.StartProcessingAsync(cancellationToken);
        _logger.LogInformation("Listener initialized");
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _processor.StopProcessingAsync(cancellationToken);
        await _client.DisposeAsync();
    }

    public async Task ProcessMessageAsync(ProcessMessageEventArgs args)
    {
        var body = args.Message.Body;
        // Do stuff with this body...
        await args.CompleteMessageAsync(args.Message);
    }

    public Task ProcessErrorAsync(ProcessErrorEventArgs args)
    {
        _logger.LogError($"Error occurred: {args.Exception} with message: {args.Exception.Message}");
        return Task.CompletedTask;
    }
}
Then at ConfigureServices:
services.AddHostedService<MyService>();
So, following option 2, the sample above would be transformed into the following, considering 2 listeners:
public interface IMyService
{
}

public interface IMyListener
{
    Task Initialize();
    Task ProcessMessageAsync(ProcessMessageEventArgs args);
    Task ProcessErrorAsync(ProcessErrorEventArgs args);
}

public class BaseListener
{
    private readonly string _connectionString;
    private readonly string _topicName;
    private readonly string _subscriptionName;
    private readonly ILogger<BaseListener> _logger;
    private ServiceBusClient _client;
    private ServiceBusProcessor _processor;

    public BaseListener(ILogger<BaseListener> logger, string connectionString, string topicName, string subscriptionName)
    {
        this._connectionString = connectionString;
        this._topicName = topicName;
        this._subscriptionName = subscriptionName;
        this._logger = logger;
    }

    public async Task Initialize()
    {
        _client = new ServiceBusClient(this._connectionString);
        _processor = _client.CreateProcessor(this._topicName, this._subscriptionName);
        _processor.ProcessMessageAsync += ProcessMessageAsync;
        _processor.ProcessErrorAsync += ProcessErrorAsync;
        // Each listener owns and starts its own processor.
        await _processor.StartProcessingAsync();
        _logger.LogInformation("Listener initialized");
    }

    public async Task ProcessMessageAsync(ProcessMessageEventArgs args)
    {
        var body = args.Message.Body;
        // Do stuff with this body...
        await args.CompleteMessageAsync(args.Message);
    }

    public Task ProcessErrorAsync(ProcessErrorEventArgs args)
    {
        _logger.LogError(args.Exception, "Error occurred while processing a message");
        return Task.CompletedTask;
    }
}

public class MyListener1 : BaseListener, IMyListener
{
    public MyListener1(ILogger<MyListener1> logger) : base(logger, "connectionString", "topic1", "subscription")
    {
    }
}

public class MyListener2 : BaseListener, IMyListener
{
    public MyListener2(ILogger<MyListener2> logger) : base(logger, "connectionString", "topic2", "subscription")
    {
    }
}
public class MyService : IHostedService, IMyService
{
    private readonly ILogger<MyService> _logger;
    private readonly IEnumerable<IMyListener> _listeners;

    public MyService(ILogger<MyService> logger, IEnumerable<IMyListener> listeners)
    {
        _logger = logger;
        _listeners = listeners;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        foreach (var listener in this._listeners)
        {
            await listener.Initialize();
        }
        _logger.LogInformation("Listeners initialized");
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        return Task.CompletedTask;
    }
}
And on ConfigureServices:
services.AddHostedService<MyService>();
services.AddSingleton<IMyListener, MyListener1>();
services.AddSingleton<IMyListener, MyListener2>();
How can I pass auditing information between clients and services in an easy way, without having to add that information as an argument to every service method? Can I use message headers to set this data for a call?
Is there a way to allow a service to pass it along downstream as well? That is, if ServiceA calls ServiceB, which calls ServiceC, could the same auditing information be sent first to A, then in A's call to B, and then in B's call to C?
There is actually a concept of headers that are passed between client and service if you are using fabric transport for remoting. If you are using HTTP transport, then you have headers there just as you would with any HTTP request.
Note: the proposal below is not the easiest solution, but once it is in place it solves the issue and is easy to use. If you are looking for easy across the overall code base, this might not be the way to go; in that case I suggest you simply add a common audit-info parameter to all your service methods. The big caveat there, of course, is when a developer forgets to add it, or it is not set properly when calling downstream services. It's all about trade-offs, as always in code :).
Down the rabbit hole
In fabric transport there are two classes involved in the communication: an instance of IServiceRemotingClient on the client side, and an instance of IServiceRemotingListener on the service side. In each request from the client, the message body and ServiceRemotingMessageHeaders are sent. Out of the box these headers include information about which interface (i.e. which service) and which method is being called (and that's also how the underlying receiver knows how to unpack the byte array that is the body). For calls to Actors, which go through the ActorService, additional Actor information is also included in those headers.
The tricky part is hooking into that exchange to actually set, and then read, additional headers. Please bear with me here; there are a number of classes involved behind the curtains that we need to understand.
The service side
When you set up the IServiceRemotingListener for your service (this example is for a stateless service), you usually use a convenience extension method, like so:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    yield return new ServiceInstanceListener(context =>
        this.CreateServiceRemotingListener(this.Context));
}
(Another way to do it would be to implement your own listener, but that's not really what we want to do here; we just want to add things on top of the existing infrastructure. See below for that approach.)
This is where we can provide our own listener instead, similar to what that extension method does behind the curtains. Let's first look at what the extension method does: it goes looking for a specific assembly-level attribute on your service project, ServiceRemotingProviderAttribute. That one is abstract, but the one you can use, and of which you get a default instance if none is provided, is FabricTransportServiceRemotingProviderAttribute. Set it in AssemblyInfo.cs (or any other file; it's an assembly attribute):
[assembly: FabricTransportServiceRemotingProvider()]
This attribute has two interesting overridable methods:
public override IServiceRemotingListener CreateServiceRemotingListener(
    ServiceContext serviceContext, IService serviceImplementation)

public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(
    IServiceRemotingCallbackClient callbackClient)
These two methods are responsible for creating the listener and the client factory. That means the attribute is also inspected by the client side of the exchange; that is why it is an assembly-level attribute on the service assembly - the client side can pick it up together with the IService-derived interface for the service we want to communicate with.
CreateServiceRemotingListener ends up creating an instance of FabricTransportServiceRemotingListener; however, with that implementation we cannot set our own specific IServiceRemotingMessageHandler. If you create your own subclass of FabricTransportServiceRemotingProviderAttribute and override that method, then you can make it create an instance of FabricTransportServiceRemotingListener that takes a dispatcher in the constructor:
public class AuditableFabricTransportServiceRemotingProviderAttribute :
    FabricTransportServiceRemotingProviderAttribute
{
    public override IServiceRemotingListener CreateServiceRemotingListener(
        ServiceContext serviceContext, IService serviceImplementation)
    {
        var messageHandler = new AuditableServiceRemotingDispatcher(
            serviceContext, serviceImplementation);

        return (IServiceRemotingListener)new FabricTransportServiceRemotingListener(
            serviceContext: serviceContext,
            messageHandler: messageHandler);
    }
}
The AuditableServiceRemotingDispatcher is where the magic happens. It is our own ServiceRemotingDispatcher subclass. Override RequestResponseAsync (ignore HandleOneWay; it is not supported by service remoting and throws a NotImplementedException if called), like this:
public class AuditableServiceRemotingDispatcher : ServiceRemotingDispatcher
{
    public AuditableServiceRemotingDispatcher(ServiceContext serviceContext, IService service) :
        base(serviceContext, service) { }

    public override async Task<byte[]> RequestResponseAsync(
        IServiceRemotingRequestContext requestContext,
        ServiceRemotingMessageHeaders messageHeaders,
        byte[] requestBodyBytes)
    {
        byte[] userHeader = null;
        if (messageHeaders.TryGetHeaderValue("user-header", out userHeader))
        {
            // Deserialize from byte[] and handle the header
        }
        else
        {
            // Throw exception?
        }

        byte[] result = null;
        result = await base.RequestResponseAsync(requestContext, messageHeaders, requestBodyBytes);
        return result;
    }
}
Another, easier, but less flexible way would be to create an instance of FabricTransportServiceRemotingListener with an instance of our custom dispatcher directly in the service:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    yield return new ServiceInstanceListener(context =>
        new FabricTransportServiceRemotingListener(this.Context, new AuditableServiceRemotingDispatcher(context, this)));
}
Why is this less flexible? Because using the attribute supports the client side as well, as we see below.
The client side
Ok, so now we can read custom headers when receiving messages; how about setting them? Let's look at the other method of that attribute:
public override IServiceRemotingClientFactory CreateServiceRemotingClientFactory(IServiceRemotingCallbackClient callbackClient)
{
    return (IServiceRemotingClientFactory)new FabricTransportServiceRemotingClientFactory(
        callbackClient: callbackClient,
        servicePartitionResolver: (IServicePartitionResolver)null,
        traceId: (string)null);
}
Here we cannot just inject a specific handler or similar as we did for the service; we have to supply our own custom factory. In order not to have to reimplement the particulars of FabricTransportServiceRemotingClientFactory, I simply encapsulate it in my own implementation of IServiceRemotingClientFactory:
public class AuditedFabricTransportServiceRemotingClientFactory : IServiceRemotingClientFactory, ICommunicationClientFactory<IServiceRemotingClient>
{
    private readonly ICommunicationClientFactory<IServiceRemotingClient> _innerClientFactory;

    public AuditedFabricTransportServiceRemotingClientFactory(ICommunicationClientFactory<IServiceRemotingClient> innerClientFactory)
    {
        _innerClientFactory = innerClientFactory;
        _innerClientFactory.ClientConnected += OnClientConnected;
        _innerClientFactory.ClientDisconnected += OnClientDisconnected;
    }

    private void OnClientConnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
    {
        EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientConnected = this.ClientConnected;
        if (clientConnected == null) return;
        clientConnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
        {
            Client = e.Client
        });
    }

    private void OnClientDisconnected(object sender, CommunicationClientEventArgs<IServiceRemotingClient> e)
    {
        EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> clientDisconnected = this.ClientDisconnected;
        if (clientDisconnected == null) return;
        clientDisconnected((object)this, new CommunicationClientEventArgs<IServiceRemotingClient>()
        {
            Client = e.Client
        });
    }

    public async Task<IServiceRemotingClient> GetClientAsync(
        Uri serviceUri,
        ServicePartitionKey partitionKey,
        TargetReplicaSelector targetReplicaSelector,
        string listenerName,
        OperationRetrySettings retrySettings,
        CancellationToken cancellationToken)
    {
        var client = await _innerClientFactory.GetClientAsync(
            serviceUri,
            partitionKey,
            targetReplicaSelector,
            listenerName,
            retrySettings,
            cancellationToken);
        return new AuditedFabricTransportServiceRemotingClient(client);
    }

    public async Task<IServiceRemotingClient> GetClientAsync(
        ResolvedServicePartition previousRsp,
        TargetReplicaSelector targetReplicaSelector,
        string listenerName,
        OperationRetrySettings retrySettings,
        CancellationToken cancellationToken)
    {
        var client = await _innerClientFactory.GetClientAsync(
            previousRsp,
            targetReplicaSelector,
            listenerName,
            retrySettings,
            cancellationToken);
        return new AuditedFabricTransportServiceRemotingClient(client);
    }

    public Task<OperationRetryControl> ReportOperationExceptionAsync(
        IServiceRemotingClient client,
        ExceptionInformation exceptionInformation,
        OperationRetrySettings retrySettings,
        CancellationToken cancellationToken)
    {
        return _innerClientFactory.ReportOperationExceptionAsync(
            client,
            exceptionInformation,
            retrySettings,
            cancellationToken);
    }

    public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientConnected;
    public event EventHandler<CommunicationClientEventArgs<IServiceRemotingClient>> ClientDisconnected;
}
This implementation simply passes any heavy lifting along to the underlying factory, while returning its own auditable client that similarly encapsulates an IServiceRemotingClient:
public class AuditedFabricTransportServiceRemotingClient : IServiceRemotingClient, ICommunicationClient
{
    private readonly IServiceRemotingClient _innerClient;

    public AuditedFabricTransportServiceRemotingClient(IServiceRemotingClient innerClient)
    {
        _innerClient = innerClient;
    }

    ~AuditedFabricTransportServiceRemotingClient()
    {
        if (this._innerClient == null) return;
        var disposable = this._innerClient as IDisposable;
        disposable?.Dispose();
    }

    Task<byte[]> IServiceRemotingClient.RequestResponseAsync(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
    {
        messageHeaders.SetUser(ServiceRequestContext.Current.User);
        messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
        return this._innerClient.RequestResponseAsync(messageHeaders, requestBody);
    }

    void IServiceRemotingClient.SendOneWay(ServiceRemotingMessageHeaders messageHeaders, byte[] requestBody)
    {
        messageHeaders.SetUser(ServiceRequestContext.Current.User);
        messageHeaders.SetCorrelationId(ServiceRequestContext.Current.CorrelationId);
        this._innerClient.SendOneWay(messageHeaders, requestBody);
    }

    public ResolvedServicePartition ResolvedServicePartition
    {
        get { return this._innerClient.ResolvedServicePartition; }
        set { this._innerClient.ResolvedServicePartition = value; }
    }

    public string ListenerName
    {
        get { return this._innerClient.ListenerName; }
        set { this._innerClient.ListenerName = value; }
    }

    public ResolvedServiceEndpoint Endpoint
    {
        get { return this._innerClient.Endpoint; }
        set { this._innerClient.Endpoint = value; }
    }
}
Now, this is where we actually (and finally) set the audit name that we want to pass along to the service.
Call chains and service request context
One final piece of the puzzle: the ServiceRequestContext, a custom class that lets us handle an ambient context for a service request call. This is relevant because it gives us an easy way to propagate context information, like the user or a correlation id (or any other header information we want to pass between client and service), through a chain of calls. The implementation of ServiceRequestContext looks like:
public sealed class ServiceRequestContext
{
    private static readonly string ContextKey = Guid.NewGuid().ToString();

    public ServiceRequestContext(Guid correlationId, string user)
    {
        this.CorrelationId = correlationId;
        this.User = user;
    }

    public Guid CorrelationId { get; private set; }

    public string User { get; private set; }

    public static ServiceRequestContext Current
    {
        get { return (ServiceRequestContext)CallContext.LogicalGetData(ContextKey); }
        internal set
        {
            if (value == null)
            {
                CallContext.FreeNamedDataSlot(ContextKey);
            }
            else
            {
                CallContext.LogicalSetData(ContextKey, value);
            }
        }
    }

    public static Task RunInRequestContext(Func<Task> action, Guid correlationId, string user)
    {
        Task<Task> task = null;
        task = new Task<Task>(async () =>
        {
            Debug.Assert(ServiceRequestContext.Current == null);
            ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
            try
            {
                await action();
            }
            finally
            {
                ServiceRequestContext.Current = null;
            }
        });
        task.Start();
        return task.Unwrap();
    }

    public static Task<TResult> RunInRequestContext<TResult>(Func<Task<TResult>> action, Guid correlationId, string user)
    {
        Task<Task<TResult>> task = null;
        task = new Task<Task<TResult>>(async () =>
        {
            Debug.Assert(ServiceRequestContext.Current == null);
            ServiceRequestContext.Current = new ServiceRequestContext(correlationId, user);
            try
            {
                return await action();
            }
            finally
            {
                ServiceRequestContext.Current = null;
            }
        });
        task.Start();
        return task.Unwrap<TResult>();
    }
}
This last part was much influenced by the SO answer by Stephen Cleary. It gives us an easy way to handle the ambient information down a hierarchy of calls, whether they are synchronous or asynchronous over Tasks. With this, we also have a way of setting that information in the Dispatcher on the service side:
public override Task<byte[]> RequestResponseAsync(
    IServiceRemotingRequestContext requestContext,
    ServiceRemotingMessageHeaders messageHeaders,
    byte[] requestBody)
{
    var user = messageHeaders.GetUser();
    var correlationId = messageHeaders.GetCorrelationId();

    return ServiceRequestContext.RunInRequestContext(async () =>
        await base.RequestResponseAsync(
            requestContext,
            messageHeaders,
            requestBody),
        correlationId, user);
}
(GetUser() and GetCorrelationId() are just helper methods that get and unpack the headers set by the client.)
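Those helpers aren't shown in the original post, but they could be as simple as the sketch below; the header names and UTF-8 serialization are assumptions you'd align with whatever the client writes:
public static class AuditHeaderExtensions
{
    // Header names and encoding are assumptions, not an established contract.
    public static void SetUser(this ServiceRemotingMessageHeaders headers, string user) =>
        headers.AddHeader("user-header", Encoding.UTF8.GetBytes(user));

    public static string GetUser(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue("user-header", out value)
            ? Encoding.UTF8.GetString(value)
            : null;
    }

    public static void SetCorrelationId(this ServiceRemotingMessageHeaders headers, Guid correlationId) =>
        headers.AddHeader("correlation-header", correlationId.ToByteArray());

    public static Guid GetCorrelationId(this ServiceRemotingMessageHeaders headers)
    {
        byte[] value;
        return headers.TryGetHeaderValue("correlation-header", out value)
            ? new Guid(value)
            : Guid.Empty;
    }
}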
Having this in place means that any new client created by the service for any additional call will also have the same headers set, so in the scenario ServiceA -> ServiceB -> ServiceC we will still have the same user set in the call from ServiceB to ServiceC.
what? that easy? yes ;)
From inside a service, for instance a stateless OWIN web API where you first capture the user information, you create an instance of ServiceProxyFactory and wrap the call in a ServiceRequestContext:
var task = ServiceRequestContext.RunInRequestContext(async () =>
{
    var serviceA = ServiceProxyFactory.CreateServiceProxy<IServiceA>(new Uri($"{FabricRuntime.GetActivationContext().ApplicationName}/ServiceA"));
    await serviceA.DoStuffAsync(CancellationToken.None);
}, Guid.NewGuid(), user);
Ok, so to sum it up: you can hook into service remoting to set your own headers. As we saw above, some work is needed to get a mechanism for this in place, mainly creating your own subclasses of the underlying infrastructure. The upside is that once you have it, you have a very easy way of auditing your service calls.
I have been using the ServiceStack MQ Server/Client to power a message-based architecture in my platform, and it has been working flawlessly. I am now trying to do something that I do not believe is supported by the SS Message Producer/Consumer.
Essentially I am firing off messages (events) at a centralized data center, and I have ~2000 decentralized nodes all over the US, on an unreliable network, that potentially need to know about this event, BUT each event needs to be targeted to only one of the ~2000 nodes. I need the flexibility of arbitrarily named channels, as with Pub/Sub, but the durability of an MQ. I started off with Pub/Sub, but the network is too unreliable, so I have moved the solution to RedisMqServer. I have it working, but wanted to make sure I am not missing something in the interface. I am curious whether the creators of SS have thought through this use case, and if so, what the outcome of that discussion was. This does fight the concept of using POCOs to drive the outcomes/actions of message consumption. Maybe that is the reason?
Here is my producer
public ExpressLightServiceResponse Get(ExpressLightServiceRequest query)
{
    var result = new ExpressLightServiceResponse();

    var assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(new AssemblyName("ArbitaryNamespace"), AssemblyBuilderAccess.Run);
    var moduleBuilder = assemblyBuilder.DefineDynamicModule("ModuleName");
    var typeBuilder = moduleBuilder.DefineType(string.Format("EventA{0}", query.Store), TypeAttributes.Public);
    typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);
    var newType = typeBuilder.CreateType();

    using (var messageProducer = _messageService.CreateMessageProducer())
    {
        var message = MessageFactory.Create(newType.CreateInstance());
        messageProducer.Publish(message);
    }

    return result;
}
Here is my consumer
public class ServerAppHost : AppHostHttpListenerBase
{
    private readonly string _store;
    public string StoreQueue => $"EventA{_store}";

    public ServerAppHost(string store) : base("Express Light Server", typeof(PubSubServiceStatsService).Assembly)
    {
        _store = store;
    }

    public override void Configure(Container container)
    {
        container.Register<IRedisClientsManager>(new PooledRedisClientManager(ConfigurationManager.ConnectionStrings["Redis"].ConnectionString));

        var assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(new AssemblyName("ArbitaryNamespace"), AssemblyBuilderAccess.Run);
        var moduleBuilder = assemblyBuilder.DefineDynamicModule("ModuleName");
        var typeBuilder = moduleBuilder.DefineType(StoreQueue, TypeAttributes.Public);
        typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);
        var newType = typeBuilder.CreateType();

        var mi = typeof(Temp).GetMethod("Foo");
        var fooRef = mi.MakeGenericMethod(newType);
        fooRef.Invoke(new Temp(container.Resolve<IRedisClientsManager>()), null);
    }
}
public class Temp
{
    private readonly IRedisClientsManager _redisClientsManager;

    public Temp(IRedisClientsManager redisClientsManager)
    {
        _redisClientsManager = redisClientsManager;
    }

    public void Foo<T>()
    {
        var mqService = new RedisMqServer(_redisClientsManager);
        mqService.RegisterHandler<T>(DoWork);
        mqService.Start();
    }

    private object DoWork<T>(IMessage<T> arg)
    {
        // Do work
        return null;
    }
}
What this gives me is the flexibility of Pub/Sub with the durability of a queue. Does anyone see/know of a more "native" way to achieve this?
There should only be one MQ host registered in your AppHost, so I'd first move it out of your wrapper class and have the wrapper just register the handler, e.g.:
public override void Configure(Container container)
{
    //...
    container.Register<IMessageService>(
        c => new RedisMqServer(c.Resolve<IRedisClientsManager>()));

    var mqServer = container.Resolve<IMessageService>();
    fooRef.Invoke(new Temp(mqServer), null);
    mqServer.Start();
}

public class Temp
{
    private readonly IMessageService mqServer;

    public Temp(IMessageService mqServer)
    {
        this.mqServer = mqServer;
    }

    public void Foo<T>() => mqServer.RegisterHandler<T>(DoWork);
}
But this approach isn't a good fit for ServiceStack, which encourages the use of code-first messages that define the service contract clients and servers use to process the messages that are sent and received. So if you want to use ServiceStack for sending custom messages, I'd recommend either having a separate class per message, or having a generic type like SendEvent where the message or event type is a property on the class.
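For illustration, the generic-type route could look like the sketch below (the property names are my own, not an established ServiceStack contract):
// One well-known POCO for all events; the target node and event type travel
// as data instead of being baked into a dynamically generated type name.
public class SendEvent
{
    public string EventType { get; set; }   // e.g. "EventA"
    public string Store { get; set; }       // which of the ~2000 nodes it targets
    public string Payload { get; set; }     // serialized event body
}
Each consumer then needs only a single RegisterHandler<SendEvent>(...) and can decide from Store/EventType whether and how to act on the message.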
Otherwise, if you want to continue with custom messages, don't use RedisMqServer; either use a dedicated MQ like Rabbit MQ or, if you prefer, use a Redis List directly - which is the data structure all Redis MQs use underneath.
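If you go the Redis List route with ServiceStack.Redis, a rough sketch follows; the list-naming scheme mirrors the queue-per-store idea above and is an assumption:
using (var redis = redisClientsManager.GetClient())
{
    // Producer side: push the serialized event onto the target store's own list.
    redis.AddItemToList($"mq:EventA{store}", eventJson);

    // Consumer side (on the node): block until an item arrives for this store.
    var next = redis.BlockingDequeueItemFromList($"mq:EventA{store}", TimeSpan.FromSeconds(30));
}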
I am using Akka.net and looking to implement a reactive equivalent of a 'DDD repository', along the lines of what I have seen here: http://qnalist.com/questions/5585484/ddd-eventsourcing-with-akka-persistence and https://gitter.im/petabridge/akka-bootcamp/archives/2015/06/25
I understand the idea of having a coordinator that keeps a number of actors in memory according to some live in-memory count or some amount of elapsed time.
As a summary (based on the links above), I am trying to:
1. Create an aggregate coordinator (for each actor type) that returns aggregates on request.
2. Have each aggregate use the Context.SetReceiveTimeout method to detect when it has not been used for some period of time; if so, it receives a ReceiveTimeout message.
3. On receipt of the timeout message, have the child send a Passivate message back to the coordinator (which in turn causes the coordinator to shut the child down).
4. Whilst the child is being shut down, have all messages to the child intercepted by the coordinator and buffered.
5. Once shutdown of the child has been confirmed (in the coordinator), recreate the child if there are buffered messages for it and flush them through.
How would one intercept the messages that are being sent to the child (step 4) and route them to the parent instead? In other words, at the point of sending the Passivate message, I want the child to also say "hey, don't send me any more messages; send them to my parent instead".
This would save me routing everything through the coordinator. Or am I going about it the wrong way: is such message interception impossible, and should I instead proxy everything through the parent?
I have my message contracts:
public class GetActor
{
    public readonly string Identity;

    public GetActor(string identity)
    {
        Identity = identity;
    }
}

public class GetActorReply
{
    public readonly IActorRef ActorRef;

    public GetActorReply(IActorRef actorRef)
    {
        ActorRef = actorRef;
    }
}

public class Passivate // sent from child aggregate to parent coordinator
{
}
Coordinator class, of which there is a unique instance for every aggregate type:
public class ActorLifetimeCoordinator<T> : ReceiveActor where T : ActorBase
{
    protected Dictionary<string, IActorRef> Actors = new Dictionary<string, IActorRef>();
    protected Dictionary<string, List<object>> BufferedMsgs = new Dictionary<string, List<object>>();

    public ActorLifetimeCoordinator()
    {
        Receive<GetActor>(message =>
        {
            var actor = GetActor(message.Identity);
            Sender.Tell(new GetActorReply(actor), Self); // reply with the retrieved actor
        });

        Receive<Passivate>(message =>
        {
            var actorToUnload = Context.Sender;
            var task = actorToUnload.GracefulStop(TimeSpan.FromSeconds(10));
            // between the line above and the line below, we need to intercept messages
            // to the child that is being removed from memory - how to do this?
            task.Wait(); // don't block the thread - use PipeTo instead?
        });
    }

    protected IActorRef GetActor(string identity)
    {
        IActorRef value;
        if (Actors.TryGetValue(identity, out value)) return value;

        // Create as a named child (not a top-level actor) so Passivate reaches
        // this coordinator, and cache it for subsequent lookups.
        var child = Context.ActorOf(Props.Create<T>(identity), identity);
        Actors[identity] = child;
        return child;
    }
}
Aggregate base class from which all aggregates derive:
public abstract class AggregateRoot : ReceivePersistentActor
{
    private readonly DispatchByReflectionStrategy _dispatchStrategy
        = new DispatchByReflectionStrategy("When");

    protected AggregateRoot(Identity identity)
    {
        PersistenceId = Context.Parent.Path.Name + "/" + Self.Path.Name + "/" + identity;

        Recover((Action<IDomainEvent>)Dispatch);

        Command<ReceiveTimeout>(message =>
        {
            Context.Parent.Tell(new Passivate());
        });

        Context.SetReceiveTimeout(TimeSpan.FromMinutes(5));
    }

    public override string PersistenceId { get; }

    private void Dispatch(IDomainEvent domainEvent)
    {
        _dispatchStrategy.Dispatch(this, domainEvent);
    }

    protected void Emit(IDomainEvent domainEvent)
    {
        Persist(domainEvent, success =>
        {
            Dispatch(domainEvent);
        });
    }
}
The easiest (but not simplest) option here is to use the Akka.Cluster.Sharding module, which covers the coordinator pattern and adds support for distributing and balancing actors across the cluster.
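A rough sketch of what that looks like (MyAggregate and AggregateMessageExtractor are hypothetical; the extractor maps each incoming message to an entity id and shard id):
// Start a shard region that creates, routes to, and passivates aggregates for you;
// `system` is your ActorSystem.
var shardRegion = ClusterSharding.Get(system).Start(
    "aggregates",                            // type name
    Props.Create<MyAggregate>(),             // entity props (hypothetical aggregate)
    ClusterShardingSettings.Create(system),
    new AggregateMessageExtractor());        // message -> (entity id, shard id)

// Messages go to the region, which owns each entity's lifetime (including passivation).
shardRegion.Tell(new GetActor("aggregate-1"));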
If you decide you don't need it, then unfortunately you'll need to pass messages through the coordinator - the messages themselves need to carry an identifier used to determine the recipient. Otherwise you may end up sending messages to a dead actor.
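To make the coordinator route concrete, here is a sketch of how it could buffer during passivation, building on the Actors/BufferedMsgs dictionaries from your coordinator. The IdentifiedMessage envelope is an assumption; any message type that carries the target identity works:
// Inside ActorLifetimeCoordinator<T>'s constructor:
Receive<Passivate>(message =>
{
    var identity = Sender.Path.Name;      // assumes the child is named after its identity
    BufferedMsgs[identity] = new List<object>();
    Context.Watch(Sender);                // get a Terminated message instead of blocking
    Context.Stop(Sender);
});

Receive<IdentifiedMessage>(message =>
{
    List<object> buffer;
    if (BufferedMsgs.TryGetValue(message.Identity, out buffer))
        buffer.Add(message);              // child is going down: hold the message
    else
        GetActor(message.Identity).Forward(message);
});

Receive<Terminated>(terminated =>
{
    var identity = terminated.ActorRef.Path.Name;
    Actors.Remove(identity);

    List<object> buffer;
    if (BufferedMsgs.TryGetValue(identity, out buffer) && buffer.Count > 0)
    {
        var recreated = GetActor(identity);   // recreate on demand
        foreach (var buffered in buffer)
            recreated.Forward(buffered);
    }
    BufferedMsgs.Remove(identity);
});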
I was hoping for some guidance on how to use the EventProcessorHost with a worker role. Basically, I am hoping to have the EventProcessorHost process the partitions in parallel, and I'm wondering where I should place this kind of code within the worker role and whether I'm missing anything key.
var manager = NamespaceManager.CreateFromConnectionString(connectionString);
var desc = manager.CreateEventHubIfNotExistsAsync(path).Result;
var client = Microsoft.ServiceBus.Messaging.EventHubClient.CreateFromConnectionString(connectionString, path);

var host = new EventProcessorHost(hostname, path, consumerGroup, connectionString, blobStorageConnectionString);
EventHubProcessorFactory<EventData> factory = new EventHubProcessorFactory<EventData>();
host.RegisterEventProcessorFactoryAsync(factory).Wait(); // wait so registration errors surface here
Everything I've read says the EventProcessorHost will divide up the partitions on its own, but is the above code sufficient to process all the partitions asynchronously?
Here's a simplified version of how we process our event hub from a Worker Role. We keep the EventProcessorHost instance in the main Worker Role and call the IEventProcessor to start processing.
This way we can start it and close it down when the Worker responds to shutdown events, etc.
EDIT:
As for processing in parallel: the IEventProcessor class will just grab 10 more events from the event hub when it has finished processing the current one, handling all the fancy partition leasing for you.
It's a synchronous workflow. When I scale to multiple worker roles, I start to see the partitions get split between instances, and it gets faster, etc. You'd have to roll your own solution if you wanted to process the event hub in a different way.
public class WorkerRole : RoleEntryPoint
{
    private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
    private readonly ManualResetEvent _runCompleteEvent = new ManualResetEvent(false);
    private EventProcessorHost _eventProcessorHost;

    public override bool OnStart()
    {
        ThreadPool.SetMaxThreads(4096, 2048);
        ServicePointManager.DefaultConnectionLimit = 500;
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;

        var eventClient = EventHubClient.CreateFromConnectionString("consumersConnectionString",
            "eventHubName");

        _eventProcessorHost = new EventProcessorHost(Dns.GetHostName(), eventClient.Path,
            eventClient.GetDefaultConsumerGroup().GroupName,
            "consumersConnectionString", "blobLeaseConnectionString");

        return base.OnStart();
    }

    public override void Run()
    {
        try
        {
            RunAsync(this._cancellationTokenSource.Token).Wait();
        }
        finally
        {
            _runCompleteEvent.Set();
        }
    }

    private async Task RunAsync(CancellationToken cancellationToken)
    {
        // starts processing here
        await _eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>();

        while (!cancellationToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromMinutes(1));
        }
    }

    public override void OnStop()
    {
        _eventProcessorHost.UnregisterEventProcessorAsync().Wait();
        _cancellationTokenSource.Cancel();
        _runCompleteEvent.WaitOne();
        base.OnStop();
    }
}
I have multiple processors for specific partitions (you can guarantee FIFO this way), but you can easily implement your own logic, i.e. skip the use of the EventDataProcessor class and dictionary lookup in my example and just implement some logic within the ProcessEventsAsync method.
public class EventProcessor : IEventProcessor
{
    private readonly Dictionary<string, IEventDataProcessor> _eventDataProcessors;

    public EventProcessor()
    {
        _eventDataProcessors = new Dictionary<string, IEventDataProcessor>
        {
            {"A", new EventDataProcessorA()},
            {"B", new EventDataProcessorB()},
            {"C", new EventDataProcessorC()}
        };
    }

    public Task OpenAsync(PartitionContext context)
    {
        return Task.FromResult<object>(null);
    }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (EventData eventData in messages)
        {
            // Implement your own logic here; you could just process the data inline.
            // Remember that all events in this block come from the same partition.
            try
            {
                IEventDataProcessor eventDataProcessor;
                if (_eventDataProcessors.TryGetValue(eventData.PartitionKey, out eventDataProcessor))
                {
                    await eventDataProcessor.ProcessMessage(eventData);
                }
            }
            catch (Exception ex)
            {
                // log exception
            }
        }
        await context.CheckpointAsync();
    }

    public async Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        if (reason == CloseReason.Shutdown)
            await context.CheckpointAsync();
    }
}
Example of one of our EventDataProcessors
public interface IEventDataProcessor
{
    Task ProcessMessage(EventData eventData);
}

public class EventDataProcessorA : IEventDataProcessor
{
    public async Task ProcessMessage(EventData eventData)
    {
        // Do something specific with data from Partition "A"
    }
}

public class EventDataProcessorB : IEventDataProcessor
{
    public async Task ProcessMessage(EventData eventData)
    {
        // Do something specific with data from Partition "B"
    }
}
Hope this helps; it's been rock solid for us so far and scales easily to multiple instances.