DomainEventPublisher consistency - domain-driven-design

Having just read Vaughn Vernon's Effective Aggregate Design, I'm wondering about failures related to event publishing.
In the example on page 9 (page 3 of the PDF), we call DomainEventPublisher.publish(). The published event allows other aggregates to execute their behaviours.
What I'm wondering is: what happens if DomainEventPublisher.publish() fails? What happens if DomainEventPublisher.publish() succeeds, but the transaction fails?
How do implementations handle these two cases?

DomainEventPublisher.publish() is synchronous. You'd set up a generic handler (one that handles all events) which stores the events in the same database transaction as the business process, which means your event storage must be able to take part in a transaction with whatever other storage mechanism you rely on to store the state of your aggregates.
Once the events have been written to disk transactionally, you can then put them on a message queue for asynchronous delivery.
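A minimal sketch of such a generic handler, assuming ADO.NET and System.Text.Json; the IDomainEvent interface and the events table schema are illustrative placeholders, not Vernon's actual types:

    using System;
    using System.Data;
    using System.Text.Json;

    public interface IDomainEvent { }

    // Subscribes to ALL events and appends each one to an events table
    // inside the same transaction as the aggregate's state change.
    public sealed class StoreAllEventsHandler
    {
        private readonly IDbConnection _connection;
        private readonly IDbTransaction _transaction; // the business transaction

        public StoreAllEventsHandler(IDbConnection connection, IDbTransaction transaction)
        {
            _connection = connection;
            _transaction = transaction;
        }

        public void Handle(IDomainEvent domainEvent)
        {
            using var command = _connection.CreateCommand();
            command.Transaction = _transaction; // commits or rolls back with the business change
            command.CommandText =
                "INSERT INTO events (type, body, occurred_on) VALUES (@type, @body, @occurredOn)";
            AddParameter(command, "@type", domainEvent.GetType().FullName!);
            AddParameter(command, "@body", JsonSerializer.Serialize(domainEvent, domainEvent.GetType()));
            AddParameter(command, "@occurredOn", DateTime.UtcNow);
            command.ExecuteNonQuery();
        }

        private static void AddParameter(IDbCommand command, string name, object value)
        {
            var parameter = command.CreateParameter();
            parameter.ParameterName = name;
            parameter.Value = value;
            command.Parameters.Add(parameter);
        }
    }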
Are there other known ways to do it?
Well, rather than using a static DomainEventPublisher you could record events in a collection on the AR, just like in event sourcing, and then implement a centralised mechanism to store them (e.g. transaction hooks, aspects, etc.).
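A sketch of that recording approach; the base class and member names are illustrative:

    using System.Collections.Generic;

    public interface IDomainEvent { }

    public abstract class AggregateRoot
    {
        private readonly List<IDomainEvent> _recordedEvents = new();

        public IReadOnlyCollection<IDomainEvent> RecordedEvents => _recordedEvents;

        // Behaviours call this instead of a static DomainEventPublisher.publish().
        protected void Record(IDomainEvent domainEvent) => _recordedEvents.Add(domainEvent);

        // Called by the centralised mechanism (repository, transaction hook, aspect)
        // once it has stored or dispatched the events.
        public void ClearRecordedEvents() => _recordedEvents.Clear();
    }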

What happens if DomainEventPublisher.publish() succeeds, but the transaction fails?
In this case I am against Vernon's approach. I prefer to return the events to the application service. This way I can persist the changes performed by the aggregate using a transaction (if needed) and, if everything is OK, publish the events. This also helps to keep the business layer entirely clean and pure.
In a few words: if the transaction fails, then no event is raised.
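A sketch of what that application service could look like; the unit-of-work and publisher interfaces, and the aggregate's Place method, are all assumptions for illustration:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public interface IDomainEvent { }
    public interface IEventPublisher { Task Publish(IDomainEvent domainEvent); }
    public interface IUnitOfWork : IAsyncDisposable { Task CommitAsync(); }

    public sealed class Order
    {
        // The aggregate performs its behaviour and RETURNS the resulting events.
        public IReadOnlyList<IDomainEvent> Place() => Array.Empty<IDomainEvent>();
    }

    public sealed class PlaceOrderService
    {
        private readonly Func<IUnitOfWork> _uowFactory;
        private readonly IEventPublisher _publisher;

        public PlaceOrderService(Func<IUnitOfWork> uowFactory, IEventPublisher publisher)
        {
            _uowFactory = uowFactory;
            _publisher = publisher;
        }

        public async Task Handle(Order order)
        {
            IReadOnlyList<IDomainEvent> events;
            await using (var uow = _uowFactory())
            {
                events = order.Place();   // pure domain logic, no publishing here
                await uow.CommitAsync();  // if this throws, no event is ever raised
            }
            foreach (var domainEvent in events)
                await _publisher.Publish(domainEvent); // only after a successful commit
        }
    }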
What happens if DomainEventPublisher.publish() fails?
A domain event never fails by business rules, because it is a notification of something that has already happened. If an aggregate said yes to the operation and returned an event expressing the business change, then nothing in the world should say that this operation cannot be done or has to be undone.
If the event fails for infrastructure reasons, then you need the tools to re-raise it (automatically or manually) when the outage is fixed, and eventually restore consistency in your system. Take a look at NServiceBus: it provides retries, error queues, logs and so on, so that events are never lost.
If the messaging system is down, you at least have event logs that you can use to re-raise the events into the messaging system.

Related

How to handle publishing an event when the message broker is down?

I'm wondering how I can handle sending events when the message broker suddenly goes down. Please take a look at this code:
using (var uow = uowProvider.Create())
{
    ...
    ...
    var policy = offer.Buy(customer);
    uow.Policies.Add(policy);

    // DB changes are saved here! But what would happen if...
    await uow.CommitChanges();

    // ...eventPublisher threw an exception?
    await eventPublisher.PublishMessage(PolicyCreated(policy));

    return true;
}
IMHO, if eventPublisher throws an exception, the PolicyCreated event won't be published. I don't know how to deal with this situation; the event must be published in the system. I suppose the only good solution would be creating some kind of retry mechanism, but I'm not sure...
I would like to elaborate a bit on the answers provided by both @Imran Arshad and @VoiceOfUnreason, which are, of course, correct.
There are basically 3 patterns when it comes to publishing messages:
exactly once delivery (requires distributed transactions)
at most once delivery (no distributed transaction but may miss messages - like the actor model)
at least once delivery (no distributed transaction but may have duplicate messages)
The following is all in terms of your example.
For exactly once delivery both the database and the queue would need to be able to enlist in distributed transactions. Some queues do not provide this functionality out of the box (like RabbitMQ), and even though it may be possible to roll your own, it may not be the best option. Distributed transactions are also typically quite slow.
For at most once delivery we have to accept that we may miss messages, and I'm guessing that in most use cases this is quite troublesome. You would get around it by tracking progress and picking up the missed messages, resending them if required.
For at least once delivery we need to ensure that the messages are idempotent: when we get a duplicate message (usually quite an edge case) it should either be ignored or its outcome should be the same as that of the initial message.
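For illustration, a sketch of an idempotent consumer that records processed message ids in the same transaction as its own changes; the processed_messages table and the type names are assumptions:

    using System;
    using System.Data;

    public sealed class IdempotentConsumer
    {
        private readonly IDbConnection _connection;

        public IdempotentConsumer(IDbConnection connection) => _connection = connection;

        public void Handle(Guid messageId, Action<IDbTransaction> applyChanges)
        {
            using var tx = _connection.BeginTransaction();
            using (var check = _connection.CreateCommand())
            {
                check.Transaction = tx;
                check.CommandText = "SELECT COUNT(*) FROM processed_messages WHERE id = @id";
                var p = check.CreateParameter(); p.ParameterName = "@id"; p.Value = messageId;
                check.Parameters.Add(p);
                if (Convert.ToInt32(check.ExecuteScalar()) > 0) return; // duplicate: ignore
            }

            applyChanges(tx); // the handler's actual work

            using (var mark = _connection.CreateCommand())
            {
                mark.Transaction = tx;
                mark.CommandText = "INSERT INTO processed_messages (id) VALUES (@id)";
                var p = mark.CreateParameter(); p.ParameterName = "@id"; p.Value = messageId;
                mark.Parameters.Add(p);
                mark.ExecuteNonQuery();
            }
            tx.Commit(); // work and "processed" marker are saved atomically
        }
    }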
Now, there are a couple of ways around your issue. You could start a database transaction, make your database changes, and perform the message sending just before you commit. Should the sending fail, your transaction would be rolled back. That works fine for sending a single message, but in your case some subscribers may already have received a message when a later send fails. This complicates matters, as all your subscribers need to receive the message or none of them should.
You could have your subscriber check whether the state does in fact hold and whether it should continue processing. This places a burden on the subscriber and introduces some coupling. It could either postpone the action, should the state not allow processing, or ignore the message.
Another option is that instead of publishing the event you send yourself a command that indicates completion of the step. The command handler would perform the publishing and retry until all subscriber queues receive the message. This would require the relevant subscribers to ignore those messages that they had already processed (idempotence).
The outbox is a store-and-forward approach that will eventually send the message to all subscribers. You could perhaps include the outbox in the database transaction. In my Shuttle.Esb service bus, one of the folks using it came across a weird side-effect that I had not planned: he used a SQL-based queue as an outbox, and the queue connection was to the same database, so it was included in the database transaction and would roll back with all the other changes if not committed. Apologies for promoting my own product, but I'm sure other service bus offerings may have the same functionality.
There are therefore quite a few things to consider and various techniques to mitigate the risk of a queue outage. I would, however, move the queue interaction to before the database commit.
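Applied to the snippet from the question, that last suggestion would look roughly like this (same hypothetical uowProvider and eventPublisher as in the question; just a sketch, not a complete solution):

    using (var uow = uowProvider.Create())
    {
        var policy = offer.Buy(customer);
        uow.Policies.Add(policy);

        // Publish first: if the broker is down this throws and the commit
        // below never runs, so the database is not changed...
        await eventPublisher.PublishMessage(PolicyCreated(policy));

        // ...but if the commit itself fails, subscribers have already seen
        // an event for a policy that was never saved, so they must be able
        // to tolerate that (see the discussion above).
        await uow.CommitChanges();

        return true;
    }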
For a reliable system you need to save events locally. If your broker is down, you have to retry and publish the events later.
There are many ways to achieve this, but the most common is the outbox pattern. Just like your mailbox, your event/message stays local, and you keep retrying until it is sent, at which point you mark the message as published in your local DB.
You can read more about it here: Publish Events.
You'll want to review Udi Dahan's discussion of Reliable Messaging without Distributed Transactions.
But very roughly, the PolicyCreated event becomes part of the unit of work; either because it is saved in the Policy representation itself, or because it is saved in an EventRepository that participates in the same transaction as the Policies repository.
Once you've captured the information in your database, retrying the publish is relatively straightforward: read the events from the database, publish them, and optionally mark the events in the database as successfully published so that they can be cleaned up.
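A rough sketch of that retry/relay loop; the store and bus interfaces are assumptions for illustration:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public sealed class StoredEvent
    {
        public Guid Id { get; init; }
        public string Body { get; init; } = "";
    }

    public interface IEventStore
    {
        Task<IReadOnlyList<StoredEvent>> LoadUnpublished();
        Task MarkPublished(Guid id);
    }

    public interface IMessageBus { Task Publish(string body); }

    public sealed class EventRelay
    {
        private readonly IEventStore _store;
        private readonly IMessageBus _bus;

        public EventRelay(IEventStore store, IMessageBus bus)
        {
            _store = store;
            _bus = bus;
        }

        // Run periodically; safe to re-run after a broker outage.
        public async Task Run()
        {
            foreach (var stored in await _store.LoadUnpublished())
            {
                await _bus.Publish(stored.Body);       // at-least-once delivery:
                await _store.MarkPublished(stored.Id); // a crash between these two
                                                       // lines causes a duplicate,
                                                       // hence idempotent consumers
            }
        }
    }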

How to control idempotency of messages in an event-driven architecture?

I'm working on a project where DynamoDB is being used as database and every use case of the application is triggered by a message published after an item has been created/updated in DB. Currently the code follows this approach:
repository.save(entity);
messagePublisher.publish(event);
Udi Dahan has a video called Reliable Messaging Without Distributed Transactions where he talks about a solution for situations where a system can fail right after saving to the DB but before publishing the message, since messages are not part of a transaction. But in his solution I think he assumes using a SQL database, as the process involves saving, as part of the transaction, the correlation id of the message being processed, the entity modification and the messages that are to be published. Using a NoSQL DB, I cannot think of a clean way to store the information about the messages.
A solution would be using DynamoDB Streams and subscribing to the published events, using either a Lambda or another service to transform them into domain-specific events. My problem with this is that I wouldn't be able to send the messages from the domain logic; the logic would be spread across the service processing the message and the Lambda/service reacting to changes, and the solution would be platform-specific.
Is there any other way to handle this?
I can't suggest a specific solution based on DynamoDB since I've never used that engine, but I've built an event-driven system on top of MongoDB, so I can share my learnings, which you might find useful for your case.
You can have different approaches:
1) Based on an event sourcing approach, you can just save the events/messages your use case produces within a transaction. In Mongo, when you are just inserting/appending new items to the same collection, you can ensure atomicity. Anyway, even if the engine does not provide that capability, the query operation is so centralised that you reduce the possibility of an error to a minimum.
Once all the events are stored, you can then consume them and project them to a given state and then persist the updated state in another transaction.
Here you have to deal with eventual consistency as data will be stale in your read model until you have projected the events.
2) Another approach is applying the Unit of Work pattern, where you cache all the query operations (insert/update/delete) needed to save both the events and the state. Once your use case finishes, you execute all the cached queries against the database (flush). This way, although the operations are not atomic, you are again centralising them enough to minimise errors.
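A minimal sketch of that Unit of Work idea; the names are illustrative and the operations are modelled as simple async delegates:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public sealed class UnitOfWork
    {
        private readonly List<Func<Task>> _pending = new();

        // The use case caches its inserts/updates/deletes here instead of
        // executing them against the database immediately.
        public void Enqueue(Func<Task> operation) => _pending.Add(operation);

        // Executed once, at the end of the use case. Not atomic, but all
        // the writes happen in one centralised, short window, which
        // minimises the chance of a partial failure.
        public async Task Flush()
        {
            foreach (var operation in _pending)
                await operation();
            _pending.Clear();
        }
    }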
Of course, the best option is to use an ACID database if you require that capability; any other approach will be a workaround to get close to it.
About publishing the events: I don't know if you mean they are published to a messaging transport such as RabbitMQ, Kafka, etc., but that should be a background process in which you fetch the events from the DB and publish them, in order to avoid a two-phase commit within the same transaction.

DDD - How handle transactions in the "returning domain events" pattern?

In the DDD literature, the returning domain events pattern is described as a way to manage domain events. Conceptually, the aggregate root keeps a list of domain events, populated when you perform operations on it.
When the operation on the aggregate root is done, the DB transaction is completed at the application service layer, and then the application service iterates over the domain events, calling an event dispatcher to handle those messages.
My question concerns the way we should handle transactions at this point. Should the event dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain event dispatcher? When the dispatcher uses an infrastructure mechanism like RabbitMQ, the question is irrelevant, but when the domain events are handled in-process, it matters.
A sub-question related to my question: what is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick off the domain events iteration on the aggregate root instead of manually doing it in the application service? Does it add too much coupling? Is it better because it does not require the same code to be written for each use case (looping over the aggregate's domain events, and potentially creating the new transaction if that is not done inside the dispatcher)?
My question concerns the way we should handle transactions at this point. Should the event dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain event dispatcher?
What you are asking here is really a specialized version of this question: should we ever update more than one aggregate in a single transaction?
You can find a lot of assertions that the answer is "no". For instance, Vaughn Vernon (2014):
A properly designed aggregate is one that can be modified in any way required by the business with its invariants completely consistent within a single transaction. And a properly designed bounded context modifies only one aggregate instance per transaction in all cases.
Greg Young tends to go further, pointing out that adhering to this rule allows you to partition your data by aggregate id. In other words, the aggregate boundaries are an explicit expression of how your data can be organized.
So your best bet is to try to arrange your more complicated orchestrations such that each aggregate is updated in its own transaction.
My question is related to the way we handle the transaction for the event sent after the initial aggregate is altered and the initial transaction is completed. The domain event must be handled, and its processing could need to alter another aggregate.
Right, so if we're going to alter another aggregate, then there should (per the advice above) be a new transaction for the change to the aggregate. In other words, it's not the routing of the domain event that determines if we need another transaction -- the choice of event handler determines whether or not we need another transaction.
Just because event handling happens in-process doesn't mean the originating application service has to orchestrate all transactions happening as a consequence of the events.
If we take in-process event handling via the Observable pattern for instance, each Observer will be responsible for creating its own transaction if it needs one.
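A sketch of that idea: a plain in-process dispatcher that owns no transaction, with each observer that modifies another aggregate opening its own. The TransactionScope usage is just one way to do this, and the observer body is a placeholder:

    using System;
    using System.Collections.Generic;
    using System.Transactions;

    public interface IDomainEvent { }

    public sealed class EventDispatcher
    {
        private readonly List<Action<IDomainEvent>> _observers = new();

        public void Subscribe(Action<IDomainEvent> observer) => _observers.Add(observer);

        // The dispatcher only routes; it does not manage transactions.
        public void Dispatch(IDomainEvent domainEvent)
        {
            foreach (var observer in _observers)
                observer(domainEvent);
        }
    }

    public static class OtherAggregateObserver
    {
        public static void When(IDomainEvent domainEvent)
        {
            // This observer alters another aggregate, so it opens its OWN
            // transaction, independent of the one that produced the event.
            using var scope = new TransactionScope();
            // ... load the other aggregate, apply the change, save it ...
            scope.Complete();
        }
    }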
What is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick off the domain events iteration on the aggregate root instead of manually doing it in the application service?
Wouldn't this have to happen during the original DB transaction, effectively turning everything into immediate consistency (if events are handled in process)?

How to handle projection errors with event sourcing and CQRS?

I want to use event sourcing and CQRS, so I need projections (I hope that is the proper term) to update my query databases. How can I handle database errors?
For example, one of my query cache databases is not available, but I have already updated the others. So the unavailable database won't be in sync with the others when it comes back into business. How will it know that it has to run, for instance, the last 10 domain events from the event storage? I guess I have to store information about the current state of each database, but what if that database state storage fails? Any ideas or best practices on how to solve this kind of problem?
In either case, you must tell your messaging bus that the processing failed and it should redeliver the event later, in the hope that the database will be back online then. This is essentially why we are using message bus systems with an "at least once"-delivery guarantee.
For transactional query databases, you should also rollback the transaction, of course. If your query database(s) do not support transactions, you must make sure on the application side that updates are idempotent - i.e., if your event arrives on the next delivery attempt, your projection code and/or database must be designed such that the repeated processing of the event does not harm the state of the database. This is sometimes trivial to achieve (e.g., when the event leads to a changed person's name in the projection), but often not-so-trivial (e.g., when the projection simply increments view counts). But this is what you pay for when you are using non-transactional databases.
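As a sketch of the not-so-trivial case: a view-count projection can be made idempotent by storing, per row, the last event position applied and guarding the increment with it. The schema and names are illustrative, and the row is assumed to exist already:

    using System.Data;

    public sealed class ViewCountProjection
    {
        private readonly IDbConnection _connection;

        public ViewCountProjection(IDbConnection connection) => _connection = connection;

        public void Handle(long eventPosition, string pageId)
        {
            using var command = _connection.CreateCommand();
            // Increments only when this event has not been applied before,
            // so a redelivered event leaves the count unchanged.
            command.CommandText =
                "UPDATE page_views SET count = count + 1, last_position = @pos " +
                "WHERE page_id = @page AND last_position < @pos";
            var pos = command.CreateParameter(); pos.ParameterName = "@pos"; pos.Value = eventPosition;
            var page = command.CreateParameter(); page.ParameterName = "@page"; page.Value = pageId;
            command.Parameters.Add(pos);
            command.Parameters.Add(page);
            command.ExecuteNonQuery();
        }
    }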

Domain driven design and domain events

I'm new to DDD and I'm reading articles now to get more information. One of the articles focuses on domain events (DE). For example, sending an email is a domain event raised after some criteria are met while executing a piece of code.
The code example shows one way of handling domain events and is followed by this paragraph:
Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services. Instead, prefer using one-way messaging to communicate to something else which does those blocking activities.
My questions are
Is this a general problem in handling DE? Or is it just a concern with the solution in the mentioned article?
If domain events are raised in a transaction and the system will not handle them synchronously, how should they be handled?
When I decide to serialize these events and let a scheduler (or any other mechanism) execute them, what happens when the transaction is rolled back? (In the article, the event is raised in code executed within the transaction.) Who will cancel them (when they are not persisted to the database)?
Thanks
It's a general problem, period, never mind DDD.
In general, in any system which is required to respond in a performant manner (e.g. a web server), any long-running activities should be handled asynchronously to the triggering process.
This means a queue.
Rolling back your transaction should remove the item from the queue.
Of course, you now need additional mechanisms to handle the situation where the item on the queue fails to process - i.e. the email isn't sent - and you also need to allow for this in your triggering code: having a subsequent process RELY on the earlier process having already occurred is going to cause issues at some point.
In short, your queueing mechanism should itself be transactional and allow for retries, and you need to think about the whole chain of events as a workflow.
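A rough sketch of such a retry workflow, with invented queue semantics (a transactional receive that can complete, abandon for a retry, or move a poison message to an error queue):

    using System;
    using System.Threading.Tasks;

    public interface IQueue
    {
        Task<(string Body, int Attempts)?> Receive();
        Task Complete();                     // remove the message
        Task Abandon();                      // return it to the queue for a retry
        Task MoveToErrorQueue(string body);  // park a poison message
    }

    public sealed class EmailSender
    {
        private const int MaxAttempts = 5;
        private readonly IQueue _queue;
        private readonly Func<string, Task> _sendEmail; // the blocking SMTP call

        public EmailSender(IQueue queue, Func<string, Task> sendEmail)
        {
            _queue = queue;
            _sendEmail = sendEmail;
        }

        public async Task ProcessOne()
        {
            var message = await _queue.Receive();
            if (message is null) return;
            try
            {
                await _sendEmail(message.Value.Body);
                await _queue.Complete();
            }
            catch (Exception) when (message.Value.Attempts + 1 < MaxAttempts)
            {
                await _queue.Abandon(); // transactional queue: retry later
            }
            catch (Exception)
            {
                await _queue.MoveToErrorQueue(message.Value.Body); // manual follow-up
            }
        }
    }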
