Domain events for local consumption in DDD

I'm familiarizing myself with DDD lately, trying to get hold of the key concepts, and I have a question about publishing domain events to local subscribers. Can I assume that the event publisher takes care of both publishing to remote subscribers through AMQP and leveraging an observable to publish to local subscribers? Is this a scalable solution, or is there a well-known pattern to handle this? (Also, please advise if there is a reactive solution to the problem, maybe RxJava or the like.)

The locality of subscribers, local or remote, should be completely transparent to the domain entity raising and publishing the event.
Your thoughts are very similar to the design I use. I use a quite primitive version of local event publishers, which communicate via messaging middleware (AMQP) to propagate events to all other local publishers.
I'm not doing that in Java; for the local event "bus" I use Rx.Net (the reactive extensions have an almost identical API in all languages, so RxJava would work).
A simplified version, with transaction support ripped out, would look like this:
public class EventHub
{
    private readonly ISubject<object> _messages;

    public EventHub()
    {
        // Synchronize() serializes concurrent OnNext calls from multiple publishers.
        _messages = Subject.Synchronize(new Subject<object>());
    }

    public void Publish<T>(T message)
    {
        _messages.OnNext(message);
    }

    public IObservable<T> AsObservable<T>()
    {
        return _messages
            .ObserveOn(ThreadPoolScheduler.Instance) // run subscribers off the publishing thread
            .OfType<T>()                             // type-based filtering per subscriber
            .Synchronize()
            .AsObservable();
    }
}
Adding remote capability would mean adding another subscriber, which would subscribe to all events and route them to other local EventHubs, and would also listen to other EventHubs and publish their events on the local hub. It would not change the EventHub component.
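Since the question explicitly asks about RxJava: a minimal sketch of the same hub in RxJava 3 (the class and method names here are my own, not from any framework) could look like this:

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.schedulers.Schedulers;
import io.reactivex.rxjava3.subjects.PublishSubject;
import io.reactivex.rxjava3.subjects.Subject;

// Local event hub: publishers push events, subscribers filter by event type.
public class EventHub {

    // toSerialized() makes onNext() safe to call from multiple threads,
    // playing the role of Subject.Synchronize in Rx.Net.
    private final Subject<Object> messages = PublishSubject.create().toSerialized();

    public void publish(Object message) {
        messages.onNext(message);
    }

    public <T> Observable<T> observe(Class<T> eventType) {
        return messages
                .observeOn(Schedulers.computation()) // run subscribers off the publishing thread
                .ofType(eventType);                  // type-based filtering, like OfType<T>()
    }
}

A local subscriber is then just hub.observe(SomeEvent.class).subscribe(...), and the remote bridge described above is one more such subscriber, observing Object.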

Related

How to test message-driven-channel-adapter with MockIntegrationContext

I am trying to test a Spring Integration flow that starts off from a message-driven-channel-adapter configured as:
<int-jms:message-driven-channel-adapter id="myAdapter" ... />
My test goes like:
@SpringJUnitConfig(locations = {"my-app-context.xml"})
@SpringIntegrationTest(noAutoStartup = {"myAdapter"})
public class MyIntegrationFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Test
    public void myTest() {
        ...
        MessageSource<String> messageSource = () -> new GenericMessage<>("foo");
        mockIntegrationContext.substituteMessageSourceFor("myAdapter", messageSource);
        ...
    }
}
I am however getting the following error:
org.springframework.beans.factory.BeanNotOfRequiredTypeException: Bean named 'myAdapter' is expected to be of type 'org.springframework.integration.endpoint.SourcePollingChannelAdapter' but was actually of type 'org.springframework.integration.jms.JmsMessageDrivenEndpoint'
How should one specify an alternate source for the channel adapter for testing using the MockIntegrationContext, or by some other method?
The Message-Driven Channel Adapter is really not a Source Polling Channel Adapter, so substituteMessageSourceFor() indeed cannot be used for that type of component, which is essentially a MessageProducerSupport implementation, not a SourcePollingChannelAdapter wrapping a MessageSource.
The difference exists because not all protocols provide listener-like hooks that a self-managed task can subscribe to. A good example is JDBC, which is a purely passive system expecting requests. Therefore a polling channel adapter with a JdbcPollingChannelAdapter (which is a MessageSource) implementation must be used to interact with the DB in an event-driven manner.
Other systems (like JMS in your case) provide a listener (or consumer) API for which we can spawn a long-running task (see MessageListenerContainer in spring-jms) and let its MessageProducerSupport emit messages to the channel.
Therefore you need to work out what type of component you are interacting with before choosing a testing strategy.
Since there is no extra layer in the case of a message-driven channel adapter, just a specific, self-managed MessageProducerSupport implementation, no particular mocking API is provided; you don't need anything beyond standard unit-testing features and the message channel this endpoint produces to in the configuration.
So, the solution for you is something like:
@SpringIntegrationTest(noAutoStartup = {"myAdapter"}) is fully correct in your code: we really have to stop the real channel adapter so it does not pollute the testing environment.
You just need to inject into your test class the MessageChannel that the myAdapter endpoint produces to. In your test code you just build a Message and send it into that channel. No need to worry about a MockIntegrationContext at all.
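A sketch of that approach (the channel name myChannel is a placeholder for whatever output channel your adapter declares in my-app-context.xml):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.test.context.SpringIntegrationTest;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;

@SpringJUnitConfig(locations = {"my-app-context.xml"})
@SpringIntegrationTest(noAutoStartup = {"myAdapter"})
public class MyIntegrationFlowTest {

    // the channel the stopped JMS adapter would normally produce to
    @Autowired
    @Qualifier("myChannel")
    private MessageChannel myChannel;

    @Test
    public void myTest() {
        // send a test message straight into the flow, bypassing JMS entirely
        myChannel.send(new GenericMessage<>("foo"));
        // ... assertions against downstream endpoints ...
    }
}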

IMediatr with Autofac in Domain Objects DDD

I have set my Domain Model objects to be independent of any service and infrastructure logic.
I am also using Domain Events to react to some changes in Domain Models.
Now my problem is how to raise those events from the Domain Model objects itself.
Currently I am using Udi Dahan's DomainEvents static class for this (I need events to be handled exactly when they happen, not at a later time).
The events are used for many things, like logging, updating the data in related services and other Domain Model objects and db, publishing messages to the MassTransit bus etc.
The DomainEvents static class uses an Autofac scope, which I inject at some point, to find the IMediatr instance and publish the events, like this:
public static class DomainEvents
{
    private static ILifetimeScope Scope;

    public static async Task RaiseAsync<TDomainEvent>(TDomainEvent @event) where TDomainEvent : IDomainEvent
    {
        var mediator = Scope?.Resolve<IMediatorBus>();
        if (mediator != null)
        {
            await mediator.Publish(@event).ConfigureAwait(false);
        }
        else
        {
            Debug.WriteLine("Mediator not set for DomainEvents!");
        }
    }

    public static void SetScope(ILifetimeScope scope)
    {
        Scope = scope;
    }
}
This all works OK in a single-threaded environment, but the DomainEvents.SetScope() method is a potential race condition in a multi-threaded environment.
I.e., when I introduce MassTransit and create message consumers, each message consumer will set the current LifetimeScope on DomainEvents via that method, and here is the problem: each consumer will overwrite the lifetime scope with a new one.
Why do I use a DomainEvents static class? Because I don't want to pollute my Domain Model Objects with infrastructure stuff.
I thought about making DomainEvents non-static (defining an interface), but then it would need to be injected into every Domain Model Object. I'm still thinking about this, but maybe there is a better way.
I want to know if there is a better way to handle this.
Maybe some change in the DomainEvents class? Or maybe removing the DomainEvents static class and using an interface or a Domain Service to do this?
The problem is that I don't like static classes, but I also don't like pushing non-domain-specific dependencies into Domain Model Objects.
Please help.
UPDATE
To better clarify the process and what I use DomainEvents for...
I have a long-running process that can take from a few minutes to a few hours or days to complete.
The process goes like this:
I receive a message from MassTransit, e.g. ProcessStartMessage(processId).
Get the ProcessData for (processId) from the DB.
Construct an in-memory Domain Model ProcessTracker (a singleton) and put all the data loaded from the DB in it (an in-memory cache).
I receive another message from MassTransit, e.g. ProcessStatusChanged(processId, data).
Forward this message data to the in-memory singleton ProcessTracker to process.
ProcessTracker processes the data.
For ProcessTracker to be able to process this data, it instantiates many Domain Model Objects, each responsible for processing some part of the data. (Note: there are NO more DB calls or entity hydration from the DB; it all happens in memory. The Domain Model is not mapped to any entity and is not connected to any DB object.)
At some point I need to log what a Domain Model Object in the chain has done: whether its work has finished or started, whether it has reached some milestone, etc. This is done by raising DomainEvents. I also need to notify the GUI of those events, so they are used to send MassTransit messages too.
I.e. (pseudo-code):
public class ProcessTracker
{
    private Step _currentStep;

    public void ProcessData(data)
    {
        _currentStep.ProcessData(data);
        DomainEvents.Raise(new ProcessTrackerDataProcessed());
        ...
    }
}

public class Step
{
    public Phase _currentPhase;

    public void ProcessData(data)
    {
        if (data.IsManual && _someOtherCondition())
        {
            DomainEvents.Raise(new StepDataEvent1());
            ...
        }
        if (data.CanTransition)
        {
            DomainEvents.Raise(new TransitionToNewPhase(this, data));
        }
        _currentPhase.DoSomeWork(data);
        DomainEvents.Raise(new StepDataProcessed(this, data));
        ...
    }
}
About the DB updates: those are not transactional and not important to the process, and the Domain Model Object state is kept only in memory. If the process crashes, it MUST begin from the start (there is NO recovery).
To end the process:
I receive ProcessEnd from MassTransit.
The message data is forwarded to the ProcessTracker.
ProcessTracker handles the data and produces the result of the process.
The result of the process is saved to the DB.
A message is sent to the other parties in the process, notifying them of the process completion.
Ask yourself first: what are you going to do when you raise an event from your domain model?
Normally it works like this:
Get a command
Load a domain object from a repository
Execute behaviour
(here probably) Raise an event
Persist the new domain object state
So where would your extra domain event handlers fit? Are you going to execute some other database calls, or send an email? Remember that it all happens now, before you have even persisted the changed state of your domain object. What if your persistence fails? It will happen after you have executed all the domain handlers.
You should not execute more than one transaction when you handle a single command. The Aggregate pattern clearly tells you that the aggregate is the transaction boundary. You should raise domain events after you complete the transaction, or within the same technical transaction, but that transaction should only persist the aggregate state and the event. Domain event reactions potentially trigger transactions for other domain objects, and that should be done outside the scope of handling the current command.
The issue is not at all technical, it is a design problem.
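To make that ordering concrete, here is a minimal sketch (in Java; every type name is a placeholder for illustration, not from any framework) of raising events only after the aggregate's own transaction:

import java.util.List;

// Sketch: the aggregate records events while executing behaviour;
// they are dispatched only after the aggregate state has been persisted.
public class CommandHandler {

    public interface Command { String aggregateId(); }
    public interface Aggregate { void execute(Command c); List<Object> pullPendingEvents(); }
    public interface Repository { Aggregate load(String id); void save(Aggregate a); }
    public interface EventDispatcher { void publish(Object event); }

    private final Repository repository;
    private final EventDispatcher dispatcher;

    public CommandHandler(Repository repository, EventDispatcher dispatcher) {
        this.repository = repository;
        this.dispatcher = dispatcher;
    }

    public void handle(Command command) {
        Aggregate aggregate = repository.load(command.aggregateId());
        aggregate.execute(command);                          // behaviour records events internally
        List<Object> events = aggregate.pullPendingEvents(); // drain the recorded events
        repository.save(aggregate);                          // the single transaction for this command
        events.forEach(dispatcher::publish);                 // reactions run after the commit
    }
}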
If you use MassTransit, you can only make it (relatively) reliable if you handle the command in a message consumer. Then you can use the in-memory outbox, which will not send an event unless the consumer succeeded. It is still not guaranteed that the event will be published in case of a broker failure.
Unless you go to Event Sourcing, you have two 100% reliable options:
Use a transactional outbox pattern (NServiceBus has one, and it's quite complex). It has limitations on what type of database you can use.
Store the event in the same database as the domain object, in a different table, within the same transaction. Poll that table, emit events to the broker from there, and delete rows as they are published (e.g. with DELETE ... RETURNING in PostgreSQL). A sketch of this option follows.
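A minimal sketch of that second option, written here in Java with plain JDBC (the outbox table, batch size, and the Broker interface are assumptions for illustration, not part of any library):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: the "outbox" table is written in the same transaction as the
// aggregate state; this poller drains it and publishes to the broker.
public class OutboxPoller {

    public interface Broker { void publish(String payload); } // placeholder client

    public void pollOnce(Connection con, Broker broker) throws SQLException {
        con.setAutoCommit(false);
        String sql = "DELETE FROM outbox WHERE id IN "
                   + "(SELECT id FROM outbox ORDER BY id LIMIT 100) "
                   + "RETURNING payload"; // PostgreSQL-specific syntax
        try (PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                broker.publish(rs.getString("payload"));
            }
            con.commit(); // rows are gone only if every publish succeeded
        } catch (SQLException | RuntimeException e) {
            con.rollback(); // events stay in the table and are retried on the next poll
            throw e;
        }
    }
}

Because the delete only commits after publishing succeeds, a crash between publish and commit re-delivers the batch on the next poll, so consumers get at-least-once semantics and must be idempotent.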

Seeking an understanding of ServiceStack.Redis: IRedisClient.PublishMessage vs IMessageQueueClient.Publish

I am having a hard time separating IRedisClient.PublishMessage from IMessageQueueClient.Publish and realize I must be mixing something up.
ServiceStack gives us the option to listen for pub/sub broadcasts like this:
static IRedisSubscription _subscription;
static IRedisClient redisClientSub;
static int received = 0;

static void ReadFromQueue()
{
    redisClientSub = redisClientManager.GetClient();
    _subscription = redisClientSub.CreateSubscription();
    _subscription.OnMessage = (channel, msg) =>
    {
        try
        {
            received++;
        }
        catch (Exception ex)
        {
            // swallowed for this example
        }
    };
    Task.Run(() => _subscription.SubscribeToChannels("Test"));
}
Looks nice, straightforward. But what about the producer?
When looking at the classes available, I thought that one could use either IRedisClient.PublishMessage(string toChannel, string message) or IMessageQueueClient.Publish(string queueName, IMessage message).
redisClient.PublishMessage("Test", json);
// or:
myMessageQueueClient.Publish("Test", new Message<CoreEvent>(testReq));
In both cases, you specify the channel name yourself. This is the behaviour I am seeing:
The subscriber above only receives the message if I use IRedisClient.PublishMessage(string toChannel, string message), and never if I use IMessageQueueClient.Publish(string queueName, IMessage message).
If I publish using IRedisClient.PublishMessage, I expected the "Test" channel to be populated (viewed with a Redis browser), but it is not. I never see any trace of the queue (say I don't start the subscription, but producers add messages).
If I publish using IMessageQueueClient.Publish(string queueName, IMessage message), the channel "Test" is created and the messages are persisted there, but never popped/fetched-and-deleted.
I want to understand the difference between the two. I have looked at source code and read all I can about it, but I haven't found any documentation regarding IRedisClient.PublishMessage.
Mythz answered this on ServiceStack forum, here.
He writes:
These clients should not be used interchangeably, you should only be using ServiceStack MQ clients to send MQ Messages or the Message MQ Message wrapper.
The redis subscription is a low level API to create a Redis Pub/Sub subscription, a more useful higher level API is the Managed Pub/Sub Server which wraps the pub/sub subscription behind a managed thread.
Either way, MQ Server is only designed to process messages from MQ clients, if you're going to implement your own messaging implementation use your own messages & redis clients, not the MQ clients or MQ Message class.
and
No, IRedisClient (& ServiceStack.Redis) APIs are for Redis Server; the PublishMessage API sends the redis PUBLISH command. IRedisSubscription creates a Redis Pub/Sub subscription, see the Redis docs to learn how Redis Pub/Sub works. The ServiceStack.Redis library and all its APIs are just for Redis Server, it doesn't contain any ServiceStack.Messaging MQ APIs.
So just use ServiceStack.Redis for your custom Redis Pub/Sub subscription implementation, i.e. don't use any ServiceStack.Messaging APIs, which are for ServiceStack MQ only.

SI subscription to multiple mqtt topics

I'm trying to learn how to handle MQTT messages in Spring Integration.
I have created a converter that subscribes with a single MqttPahoMessageDrivenChannelAdapter per MQTT topic for consuming and converting the messages.
The problem is that our data provider is planning to "speed up" publishing messages on his side. So instead of having a few (<= 10) topics, each of which carries messages with about 150 fields, the plan is to publish each of those fields to a separate MQTT topic.
This means my converter would have to consume ca. 1000 MQTT topics, but I do not know:
Whether Spring Integration is still a good choice for this. As far as I know, the mentioned adapter uses the Paho MqttClient, which consumes the messages from all of the topics it is subscribed to on one single thread, and creating 1000 instances of those adapters is overkill.
Whether, if we stick with Spring Integration and the provided components, it would be a good idea to create a single inbound adapter for all of the fields that previously were in the messages of one topic, moving the conversion away from the adapter bean to a separate conversion bean connected to the adapter by an executor channel, and thus executing the conversion of those fields on a thread pool in parallel.
Thanks in advance for your answers!
I think your idea makes sense.
For that purpose you need to implement a pass-through MqttMessageConverter that provides the MqttMessage as the payload and the topic as a header:
public class PassThroughMqttMessageConverter implements MqttMessageConverter {

    @Override
    public Message<?> toMessage(String topic, MqttMessage mqttMessage) {
        return MessageBuilder.withPayload(mqttMessage)
                .setHeader(MqttHeaders.RECEIVED_TOPIC, topic)
                .build();
    }

    @Override
    public Object fromMessage(Message<?> message, Class<?> targetClass) {
        return null;
    }

    @Override
    public Message<?> toMessage(Object payload, MessageHeaders headers) {
        return null;
    }
}
So you really will be able to perform the target conversion downstream, after the mentioned ExecutorChannel, in a custom transformer.
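A sketch of that wiring with the Spring Integration 5.x Java DSL (the bean names, pool size, and the inline conversion are placeholders):

import java.util.concurrent.Executors;

import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.mqtt.inbound.MqttPahoMessageDrivenChannelAdapter;

@Configuration
public class MqttFlowConfig {

    @Bean
    public IntegrationFlow mqttInbound(MqttPahoMessageDrivenChannelAdapter adapter) {
        return IntegrationFlows.from(adapter)
                // hand off to a thread pool so conversion runs in parallel,
                // away from the single Paho client thread
                .channel(c -> c.executor(Executors.newFixedThreadPool(8)))
                // the target conversion now happens here, not in the adapter
                .<MqttMessage, String>transform(m -> new String(m.getPayload()))
                .channel("convertedMessages")
                .get();
    }
}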
You may also consider implementing a custom MqttPahoClientFactory (an extension of the DefaultMqttPahoClientFactory may work as well) and providing a custom ScheduledExecutorService to be injected into the MqttClient you are going to create in getClientInstance().
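And a sketch of such a factory, assuming the Paho MqttClient constructor overload that accepts a ScheduledExecutorService (available in recent Paho v3 clients):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

import org.eclipse.paho.client.mqttv3.IMqttClient;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;
import org.springframework.integration.mqtt.core.DefaultMqttPahoClientFactory;

// Every client created by this factory shares one scheduled thread pool.
public class PooledMqttPahoClientFactory extends DefaultMqttPahoClientFactory {

    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(8);

    @Override
    public IMqttClient getClientInstance(String uri, String clientId) throws MqttException {
        return new MqttClient(uri, clientId, new MemoryPersistence(), executor);
    }
}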

How can services written in Java communicate with a ZeroMQ broker written in C

I have written a request-reply broker using ZeroMQ and the C programming language. The broker routes client requests to the appropriate services and then routes the reply back to the client. The services are written in Java.
Can someone please explain how to have the services communicate with the broker? I am sure this must be a common scenario, but I don't have much experience, so can someone please help me make my code interoperable?
Please assume that the services will not be ZeroMQ-aware. Is Node.js to be used in such a scenario? Will I have to write an HTTP front end?
Here's one way you can do it using async PUSH/PULL sockets. I'm pseudo-coding this, so fill in the blanks yourself.
Assuming the Java services are POJOs residing in their own process, let's say we have a simple service with no ZeroMQ dependencies:
public class MyJavaService {
    public Object invokeService(String params) {
        // ... service logic goes here ...
        return null; // placeholder
    }
}
Now we build a Java delegate layer that pulls messages in from the broker, delegates requests to the Java service methods, and returns the response on a separate socket:
ZContext ctx = new ZContext();

// receive on this
ZMQ.Socket pull = ctx.createSocket(SocketType.PULL);
pull.connect("tcp://localhost:5555");

// respond on this
ZMQ.Socket push = ctx.createSocket(SocketType.PUSH);
push.connect("tcp://localhost:5556");

while (true) {
    ZMsg msg = ZMsg.recvMsg(pull);
    // assume the msg has 2 frames:
    // one for the service to invoke,
    // the other with the arguments
    String svcToInvoke = msg.popString();
    String svcArgs = msg.popString();
    if ("MyJavaService".equals(svcToInvoke)) {
        ZMsg respMsg = new ZMsg();
        respMsg.push(String.valueOf(new MyJavaService().invokeService(svcArgs)));
        respMsg.send(push);
    }
}
On the broker side, just create the PUSH/PULL sockets to communicate with the Java services layer (I'm not a C++ programmer, so forgive me):
int main () {
    zmq::context_t context(1);

    // requests out to the Java services layer
    zmq::socket_t push(context, ZMQ_PUSH);
    push.bind("tcp://localhost:5555");

    // replies back from the Java services layer
    zmq::socket_t pull(context, ZMQ_PULL);
    pull.bind("tcp://localhost:5556");

    // code here to handle request/response
    // to and from clients
    ..
}
Using PUSH/PULL works for this approach, but the ideal approach is to use ROUTER on the server and DEALER on the client for fully asynchronous communication, example here.
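For comparison, the client side of that pattern in Java would be a DEALER socket (a JeroMQ sketch; the two-frame layout matches the PUSH/PULL example above):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import org.zeromq.ZMsg;

// DEALER: send requests and receive replies asynchronously
// over a single connection to the broker's ROUTER socket.
public class DealerClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
            dealer.connect("tcp://localhost:5555");

            ZMsg request = new ZMsg();
            request.add("MyJavaService"); // service name frame
            request.add("some-args");     // arguments frame
            request.send(dealer);

            ZMsg reply = ZMsg.recvMsg(dealer); // blocks until the ROUTER replies
            System.out.println(reply.popString());
        }
    }
}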
Hope it helps!
