Imagine an event sourced system where there exists a consuming service that is subscribed to a certain Event A. Once this consumer detects Event A has been emitted in the network, it handles it somehow and dispatches its own Event B.
How would someone replay such a system? Before the replay, both Event A and Event B exist in the event store/database. If we replay Event A and Event B, would this not double count the dispatch of Event B (once deduced from A, and once replayed from our event store)? How do you go about replaying events in general when one event may cause a cascading chain of other dispatched events?
Replaying in this sense is not really a matter of publishing each event again so that it triggers actions. It is more like rehydrating (reconstituting) aggregates from the events stored in the event store.
The implementation could, for instance, involve a specific constructor (or factory method) of an aggregate that takes a list of the stored domain events related to that specific aggregate. The aggregate then simply applies those events to mutate its own state until the current state of the aggregate is reached.
You can take a look at such an implementation in Vaughn Vernon's sample Event Sourcing and CQRS project iddd_collaboration. I directly referenced the implementation of a Forum Aggregate which is derived from Vaughn Vernon's implementation of an EventSourcedRootEntity.
You can look into the Forum constructor
public Forum(List<DomainEvent> anEventStream, int aStreamVersion) {
    super(anEventStream, aStreamVersion);
}
and the related implementations of the different when() methods and the base class functionalities of EventSourcedRootEntity.
Note: If there is a huge number of events and performance during aggregate rehydration becomes a concern, looking into the concept of snapshots might also be of interest.
Events "replay" can easily be handled within the aggregate pattern because applying events does not cause new transactions, but rather the state is rehydrated.
It's important to have only event appliers in the aggregate constructor when it's instantiated out of a list of ordered events.
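A minimal sketch of that idea (C#, with illustrative event and aggregate names, not tied to any particular framework):

using System.Collections.Generic;

public abstract record DomainEvent;
public record OrderPlaced(string Sku) : DomainEvent;
public record OrderPaid : DomainEvent;

public class Order
{
    private readonly List<string> _lines = new();
    private bool _paid;

    // Rehydration constructor: only event appliers run here; no new events,
    // no new transactions. The state is simply rebuilt from the ordered history.
    public Order(IEnumerable<DomainEvent> history)
    {
        foreach (var e in history)
            Apply(e);
    }

    private void Apply(DomainEvent e)
    {
        switch (e)
        {
            case OrderPlaced placed: _lines.Add(placed.Sku); break;
            case OrderPaid _: _paid = true; break;
        }
    }
}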
That's pretty much event sourcing. But there are potential problems when expanding this into event driven architecture (EDA) where an entity/aggregate/microservice/module reacts to an event by initiating another transaction.
In your example, an entity A produces an event A. The entity B reacts to event A by sending a new command, or starting a new transaction that ends up producing an event B.
So right now the event store has event A and event B.
How do you ensure that a replay, or a new read of that stream (or of all streams), doesn't cause write amplification? As soon as the handler for event A reads the event, it can't know whether this is the first time it has handled it (and it therefore has to initiate the next transaction, command B --> event B), or whether it's a replay and it doesn't have to do anything, because the reaction already happened and event B is already in the stream.
I am assuming this is your concern, and it's a big one if the reaction to the event implies making a payment, for example. We wouldn't want to make a new payment each time event A is handled.
There are a few options:
Never replay events in systems that react to events by creating new transactions. In other words, never replay events unless it's for aggregate instantiation (event sourcing), which uses events only to rehydrate state, or for projection/read models that are idempotent, or when the projections are being recreated (because the DB was dropped, for example).
Another option is to react to an event A by appending a command B to a "command stream" (i.e. a queue) and have the command handler receive it asynchronously and create the transaction that produces event B. This way, you can rely on the event store's duplicate check and prevent the append of a command if it already exists. The scenario would be similar to this:
A. Transaction A produces event A which is appended to an event store stream
B. Event Handler A reacts to event A and adds a command B to a command stream
C. Command handler B receives command B and executes the transaction that produces an event B, which is appended to the stream.
Now the first time this would work as expected.
If the projections that use event A and event B to write a read model to the DB replay events, all is good: they read event A and then event B.
If the "reactive" event handler receives event A again, it attempts to append command B to the command stream. The event/command store detects that command B is a duplicate (optimistic concurrency control using some versioning) and doesn't add it. Command handler B never gets the old command again.
It's important to notice that the processed commands should result in a checkpoint that is never deleted, so that commands are never ever replayed. That's the key.
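To make the dedup idea concrete, here is a rough sketch; the command-stream interface and the deterministic-id trick are assumptions, not any particular product's API:

using System;

public record EventA(Guid EventId, decimal Amount);
public record CommandB(Guid CommandId, decimal Amount);

public interface ICommandStream
{
    // Appends the command unless one with the same id is already stored.
    // Returns false when a duplicate was detected.
    bool AppendIfAbsent(CommandB command);
}

public class EventAReaction
{
    private readonly ICommandStream _commands;
    public EventAReaction(ICommandStream commands) => _commands = commands;

    public void Handle(EventA e)
    {
        // Derive the command id deterministically from the event id, so that
        // handling the same event A again (e.g. during a replay) produces the
        // exact same command.
        var command = new CommandB(e.EventId, e.Amount);

        // The store ignores the append if a command with this id already exists,
        // so a replayed event A does not produce a second transaction.
        _commands.AppendIfAbsent(command);
    }
}

The same effect can be achieved with a unique constraint on the command id in whatever storage backs the queue.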
Probably there are also other mechanisms out there.
You're referring to what's called a "Saga Pattern" and in order to resolve it you need to make your commands explicit. This example helps to illustrate the difference between Commands and Events.
Events are the record of what happened. They are immutable, connected with an entity, and describe the original intention of the user.
Commands are a request to do something, which may cause an event to be recorded. They can also cause 'real world' state changes outside of the event-sourced system, but in doing so they should cause an event that records the external change happened.
A few rules will resolve your conundrum:
You cannot record an event without a corresponding command having executed. Every event was caused by a command.
You cannot process commands until the event stream has 'caught up' to the present. Otherwise you are taking action on a partial replay of history.
Back to the Saga Pattern:
In the Saga Pattern, events can lead to more commands. In this way, the system can progress based on a cascade of events and commands and execute a distributed workflow, choreographed by the relations between system state, commands generated, and further events generated.
In your case, as long as you wait for the full event stream to be replayed before issuing the next command, you can then prevent the duplicate cascading event by checking that the action has not already been done.
If event B already exists, there's no need to issue another command to generate event B again.
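As a sketch (all the type names here are illustrative), that check can be as simple as scanning the fully replayed stream before issuing the command:

using System;
using System.Collections.Generic;
using System.Linq;

public abstract record DomainEvent;
public record EventA : DomainEvent;
public record EventB : DomainEvent;

public static class CascadeGuard
{
    // Only issue command B when its outcome (event B) is not already in the history.
    // This assumes the stream has been replayed to the present, per the rule above.
    public static void OnEventA(IReadOnlyList<DomainEvent> history, Action issueCommandB)
    {
        if (history.OfType<EventB>().Any())
            return; // it already happened once; a replay must not repeat the action

        issueCommandB();
    }
}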
Related
When a command emits more than one event, how do you ensure correct rehydration? What is the correct way to mark many events as one atomic change?
The rehydration is complete when all available events are applied. When your command emits multiple events, you just have to ensure that those events get persisted together in one atomic operation.
Example:
Current Event Stream:
1. MyEvent1
Then execute a command that emits multiple events:
MyCommand emits ->
MyEvent1
MyEvent2
These Events will get appended to the Event Stream as an atomic operation.
New Event Stream:
1. MyEvent1
2. MyEvent1
3. MyEvent2
Now, when rehydrating the Aggregate, you just read the entire Event Stream until the end, and you're done.
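A sketch of the writing side (the stream interface is illustrative; most event stores expose something equivalent):

using System.Collections.Generic;

public record MyEvent1;
public record MyEvent2;

public interface IEventStream
{
    // Appends all events in one atomic operation, or none of them; fails if the
    // stream's version no longer matches expectedVersion (optimistic concurrency).
    void Append(string streamId, int expectedVersion, IReadOnlyList<object> events);
}

public class MyCommandHandler
{
    private readonly IEventStream _stream;
    public MyCommandHandler(IEventStream stream) => _stream = stream;

    public void Handle(string streamId, int loadedVersion)
    {
        // MyCommand emits two events; they are persisted together or not at all.
        _stream.Append(streamId, loadedVersion, new object[] { new MyEvent1(), new MyEvent2() });
    }
}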
When a command emits more than one event, how do you ensure correct rehydration? What is the correct way to mark many events as one atomic change?
Typically
Your domain logic returns a sequence of events.
You atomically write the entire sequence into the event history
When reconstituting, you use the event history
Note that the second step implies that your durable event storage will support the atomic write of an event sequence.
Note: we don't usually rely on the broadcast mechanism when reconstituting the state of our domain model, but instead use the history.
In the CQRS world, that would include creating the read model by loading the history, and then playing the events in the fixed order in which they were written.
It depends on your underlying storage. A common way to implement atomic writes for persistent storage that does not support transactions is to create a batch of events and write that as a single operation.
When rehydrating, you can flatten the batches into a flat sequence/stream of events to build your current state. During rehydration you really care about all events, since you will always apply the next command on the fully hydrated current state. Thus, there is no point in keeping the batches at that point.
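For example (a sketch; the batch record is an assumption about how the batches were persisted):

using System.Collections.Generic;
using System.Linq;

public record EventBatch(long Sequence, IReadOnlyList<object> Events);

public static class BatchFlattening
{
    // Batches only exist to make the write atomic; for rehydration we just need
    // one flat, ordered sequence of events to fold into the current state.
    public static IEnumerable<object> Flatten(IEnumerable<EventBatch> batches) =>
        batches.OrderBy(b => b.Sequence).SelectMany(b => b.Events);
}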
There are a number of solutions out there that support writing/rehydration of state like this if you don't feel like building your own.
Event store (https://eventstore.com)
Axon server (https://axoniq.io)
Serialized (https://serialized.io) for full disclosure - this is our product
Good luck
Should event sourcing come at the end of the process, and should there be event handlers?
While following event sourcing, I know that at any time we can derive the application state from the events, but should we save that state in the database as well?
Events are emitted after the domain logic has been executed (some operation was performed on the entity/domain object). First the event is persisted in the store, then it is published to the bus, and other consumers (microservices, the event handlers from your diagram) can subscribe to it. Persisting and publishing the event should behave like a single transaction.
Every time a new operation needs to be executed on a domain object, the whole set of events for that entity is read from the Event Store. Some entities can have a lot of events, so to optimize performance so-called State Snapshots are used. Basically, a snapshot is the state of the domain object after X events; snapshots can be created every X events. They are stored separately in the Event Store, and Event Sourcing libraries usually allow you to configure snapshots. But this is purely a performance optimization.
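A sketch of what loading with a snapshot could look like (the store interface and the account example are illustrative, not a specific library's API):

using System.Collections.Generic;

public record AccountState(decimal Balance);
public record MoneyDeposited(decimal Amount);
public record SnapshotRecord(int Version, AccountState State);

public interface ISnapshottingStore
{
    // Returns null when no snapshot has been taken yet.
    SnapshotRecord LoadLatestSnapshotOrNull(string streamId);
    // Events with a version strictly greater than afterVersion, in order.
    IReadOnlyList<object> LoadEventsAfter(string streamId, int afterVersion);
}

public static class AccountLoader
{
    public static AccountState Load(ISnapshottingStore store, string streamId)
    {
        var snapshot = store.LoadLatestSnapshotOrNull(streamId);
        var state = snapshot?.State ?? new AccountState(0m);
        var fromVersion = snapshot?.Version ?? 0;

        // Only the events recorded after the snapshot still need to be applied.
        foreach (var e in store.LoadEventsAfter(streamId, fromVersion))
            if (e is MoneyDeposited d)
                state = state with { Balance = state.Balance + d.Amount };

        return state;
    }
}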
I've quickly created a diagram to show what usually happens inside a Command Handler.
It looks like you want to use the CQRS + Event sourcing pattern.
Here are some examples:
https://dzone.com/articles/microservices-with-cqrs-and-event-sourcing
https://danielwhittaker.me/2020/02/20/cqrs-step-step-guide-flow-typical-application/
Based on the links above you can adjust your architecture, and I think this answers the first question.
As for the second question, you should have an external event store.
These two patterns are well described in this book.
I am learning about DDD,CQRS and Event-sourcing and there is something I cannot figure out. Commands trigger changes in the aggregates and once the change is performed an event is fired. The event is subsequently handled by other parts of the system and preserved in the event store. However, I do not understand how replaying events would recreate the aggregate, if changes are triggered by commands.
Example: If we have a online shop.
AddItemToCartCommand -> Cart Aggregate adds the item to its cart -> ItemAddedToCartEvent -> The event is handled by whoever.
However, if the event is replayed, the aggregate would not add the item to its cart.
To sum up, my question is: how should I recreate aggregates based on the events in the event store? Also, any general advice on how to replay events the right way would be appreciated.
For simplicity, let's assume a stateless process - our service doesn't try to keep copies of things in memory, but instead reloads aggregates as needed.
The service receives AddItemToCartCommand:{cart:123, ...}. We don't have the current state of cart:123 in memory, so we need to create it. We do that by loading the state of cart:123 from our durable store. Because we chose to use event sourced storage, the "state" we read from the durable store is a representation of the history of events previously written by the service.
Event histories have within them all of the information you need to remember, but not necessarily in a convenient "shape" - append only lists are a great data structure for writes, but not necessarily good for reads.
What this often means is that we will "replay" the events to create an in memory object which we can then use to answer questions about the events we will write next.
The same pattern is used when answering simple queries: we load the history of events from the store, transform the event history into a more convenient shape, and then use that shape to compute the answer.
In circumstances where query latency is more important than timeliness, we might design our query handler to read the convenient shapes from a cache, rather than trying to compute them fresh every time; a concurrently running background thread would be responsible for waking up periodically to compute new contents for the cache.
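A sketch of that split (all names here are illustrative): queries read whatever is in the cache, while a background task periodically recomputes it from the history.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public record ItemAddedToCart(string Sku);

public class CachedCartView
{
    private readonly Func<IReadOnlyList<object>> _loadHistory;
    private volatile IReadOnlyList<string> _items = Array.Empty<string>();

    public CachedCartView(Func<IReadOnlyList<object>> loadHistory) => _loadHistory = loadHistory;

    // Query handlers read the (possibly slightly stale) cached shape.
    public IReadOnlyList<string> Items => _items;

    // Background refresh: wake up periodically and recompute the convenient shape.
    public async Task RefreshLoop(TimeSpan interval, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var items = new List<string>();
            foreach (var e in _loadHistory())
                if (e is ItemAddedToCart added)
                    items.Add(added.Sku);

            _items = items;
            await Task.Delay(interval, ct);
        }
    }
}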
Using an async process to pull updates from an event stream is a common pattern; Greg Young discusses some of the advantages of that approach in his Polyglot Data talk.
In an ideal event sourcing scenario, you would not have an already constructed aggregate structure available in your database. You arrive at the final data structure each time by running through all events stored so far.
Let me illustrate with some pseudocode of adding items to cart, and then fetching the cart data.
# Create a new cart
POST /cart/new
# Store a series of events related to the cart (in database as records, similar to array items)
POST /cart/add -> CartService.AddItem(item_data) -> ItemAddedToCart
A series of events would look like:
* ItemAddedToCart
* ItemAddedToCart
* ItemAddedToCart
* ItemRemovedFromCart
* ItemAddedToCart
When it's time to fetch the cart data from the DB, you construct a new cart instance (or retrieve a cart instance if persisted) and replay the events on it.
cart = Cart(id=ID1)
# Fetch contents of Cart with id ID1
for each event in ID1 cart's events:
    if event is ItemAddedToCart:
        cart.add_item(event.data)
    else if event is ItemRemovedFromCart:
        cart.remove_item(event.data)
return cart
Occasionally, when there are too many events related to the cart, you may want to generate the aggregate structure at that point and save it in the DB. Next time, you can start from that aggregate savepoint and continue applying the newer events. This optimization saves time and improves performance when there are many events to process.
What may help is to not think of the command as changing the state but rather the event as changing the state. In fact, I don't quite see how else one would go about doing so. The command handler in your aggregate would apply the invariants and, if all is OK, would immediately create the event and call some method that would apply it ([Apply|On|Do]MyEvent). The fact that you have an event after the fact does not necessarily mean other parts of your system would handle it. It is however required for event sourcing. Once you have an event you can most certainly pass that on to other parts of your system via, say, publishing on a service bus.
When you replay your events you are calling the same methods that the commands were calling to actually mutate the state of your aggregate:
public MyEvent MyCommand(string data)
{
    if (string.IsNullOrWhiteSpace(data))
    {
        throw new ArgumentException($"Argument '{nameof(data)}' may not be empty.");
    }

    return On(new MyEvent
    {
        Data = data
    });
}

private MyEvent On(MyEvent myEvent)
{
    // change the relevant state
    someState = myEvent.Data;

    return myEvent;
}
Your event sourcing infrastructure would call On(MyEvent) for MyEvent when replaying. Since you have an event it means that it was a valid state transition and can simply be applied; else something went wrong in your initial command processing and you probably have a bug.
All events in an event store would be in chronological order for an aggregate. In addition to this the events should have a global sequence number to facilitate projection processing.
You could have a generic projection that accepts any/all events and then publishes them on a service bus for system integration. You could also place that burden on a client of the event store, having it keep track of the position itself and read events off the store directly. You could combine these: have the client subscribe to service bus events, but ensure it processes them in the correct order by tracking the position (global sequence number) itself and updating it as events are processed.
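A sketch of the client-side position tracking (the store, bus, and checkpoint APIs are all illustrative):

using System;

public record StoredEvent(long GlobalSequence, object Payload);

public interface ICheckpointStore
{
    long LoadLastProcessed();
    void Save(long globalSequence);
}

public class IntegrationProjector
{
    private readonly ICheckpointStore _checkpoint;
    private readonly Action<object> _project;

    public IntegrationProjector(ICheckpointStore checkpoint, Action<object> project)
    {
        _checkpoint = checkpoint;
        _project = project;
    }

    public void Handle(StoredEvent e)
    {
        // Skip anything at or below the checkpoint: it was already processed
        // (a redelivery or a replay). If there were a gap, the missing events
        // would have to be read back from the store before advancing.
        if (e.GlobalSequence <= _checkpoint.LoadLastProcessed())
            return;

        _project(e.Payload);
        _checkpoint.Save(e.GlobalSequence);
    }
}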
I've been studying DDD for a while and stumbled onto design patterns like CQRS and Event Sourcing (ES). These patterns can be used to help achieve some concepts of DDD with less effort.
In the architecture exemplified below, the aggregates know how to handle the commands and events related to itself. In other words, the Event Handlers and Command Handlers are the Aggregates.
Then I started modeling a sample Domain just to understand how the implementation would follow the business logic. For this question, here is my domain (it's based on this):
I know this is a badly modeled example, but I'm using it just as an example.
So, using ES, at the end of the operation, we would save all the events (Green arrows) into the event store (if there were no Exceptions), each event into its given Event Stream (Aggregate Type + Aggregate Id):
Everything seems right until now. So if we want to rebuild the internal state of an instance of any of these Aggregates, we only have to new it up (new()) and apply all the events saved in its respective Event Stream in the correct order.
My question is related to changes in the model. Software development is a process where we never stop learning about our domain, and we always come up with new ideas. So, let's analyze some change scenarios:
Change Scenario 1:
Let's pretend that now, if the Reservation Aggregate checks that the seat is not available, it should emit an event (Seat Not Reserved), and this event should be handled by a new Aggregate that will store all the people whose seats could not be reserved:
In the hypothesis where the old system already handled the initial command (Place order) correctly, and saved all the events to its respective event streams:
When we want to rebuild the internal state of an instance of any of these Aggregates, we only have to new it up (new()) and apply all the events saved in its respective Event Stream in the correct order. (Nothing changed.) The only thing is that the new use case didn't exist back in the old model.
Change Scenario 2:
Let's pretend that now, when the payment is accepted, we handle this event (Payment Accepted) in a new Aggregate (Finance Aggregate) and not in the Order Aggregate anymore. And it sends a new Event (Payment Received) to the Order Aggregate. I know this scenario is not well structured, but something like this could happen.
In the hypothesis where the old system already handled the initial command (Place order) correctly, and saved all the events to its respective event streams:
When we want to rebuild the internal state of an instance of any of these Aggregates, we have a problem when applying the events from the Aggregate's Event Stream to itself:
Now, the Order no longer knows how to handle the Payment Accepted event.
Problems
So, as the examples showed, whenever a system change results in an event being handled by a different event handler (Aggregate), there are major problems, because we cannot rebuild the internal state anymore.
So, this problem can have some solutions:
Possible Solution
When an event is stored in one Aggregate's Event Stream but is no longer handled by that Aggregate, we can find the new handler, create a new instance, and send the event to it. But to keep the internal state correct, we need the last event (Payment Received) to be handled by the Order Aggregate. So, we let it dispatch the event (and possibly commands):
This solution can have some problems. Let’s imagine that a new command (Place Order) arrives and it has to create this order instance and save the new state. Now we would have:
In gray are the events that were already saved in the last call when the system hadn’t already gone through model changes.
We can see that a new Event Stream is created for the new aggregate (Finance W). And we can see that Event Streams are append-only, so the Payment Accepted event in the Order Y Event Stream is still there.
The first Payment Accepted event in Finance W Event Stream is the one that was supposed to be handled by the Order but had to find a new handler.
The yellow Payment Received event in the Order's Event Stream is the event that was generated by the new handler (Finance) when it handled the Payment Accepted event from the Order's Event Stream.
All the other Green Events are new events that were generated by handling the Place Order Command in the new model.
Problem With the Solution
The next time the aggregate needs to be rebuilt, there will be a Payment Accepted event in the stream (because it is append-only), and it will again call the new handler; but this has already been done, and the Payment Received event has already been saved to the stream. So it is not necessary to go through this again; we could ignore this event and continue.
Question
So, my question is: how can we handle model changes that impact who handles each event? How can we rebuild the internal state of an Aggregate after a change like this?
Will we need to build some Event Stream migration that moves the events from one stream to the new schema (one or more streams), just like we would need in a relational database?
Will we never be allowed to remove a handler, so that we can only add new handlers? This would lead to an unmanageable system…
You got almost everything right, except one thing: Aggregates should not handle events from other Aggregates. It's like a non-event-sourced Aggregate sharing a table with another Aggregate: they should not.
In event-driven DDD, Aggregates are the system's building blocks that receive Commands (things that express an intent) and return Events (things that have happened). For every Command type there must exist one and only one Aggregate type that handles it. Before executing a Command, the Aggregate is fed with all of its own previously emitted Events; that is, every Event emitted in the past by this Aggregate instance is applied to this Aggregate instance, in chronological order.
So, if you want to correctly model your system, you are not allowed to send events from one Aggregate as events to another Aggregate (a different type or instance).
If you need to model business processes that involve multiple Aggregates, the correct way of doing it is by using a Saga/Process manager. This is a different component. It is the opposite of an Aggregate.
It receives Events emitted by Aggregates and sends Commands to other Aggregates.
In simplest cases, a Saga manager simply takes properties from one Event and creates+populates a Command with those properties. Then it sends the Command to the destination Aggregate.
In more complicated cases, the Saga waits for multiple Events, and only when all of them have been received does it create and send a Command.
The Saga may also deduplicate or reorder events.
In your case, a Saga could be Sale, whose purpose would be to coordinate the entire sales process, from ordering to product dispatching.
In conclusion, you have that problem because you have not modeled your system correctly. If your Aggregates handled only their own specific Commands (and not somebody else's Events), then even when you must create a new Saga because a new business process emerges, that Saga would still send the same Command to the same Aggregate.
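A minimal sketch of such a Saga (the event, command, and bus types are illustrative):

using System;

public record PaymentAccepted(Guid OrderId, decimal Amount);
public record MarkOrderAsPaid(Guid OrderId, decimal Amount);

public interface ICommandBus
{
    void Send(object command);
}

// The Saga never mutates aggregate state itself: it receives Events emitted by
// one Aggregate and translates them into Commands sent to another Aggregate.
public class SaleSaga
{
    private readonly ICommandBus _bus;
    public SaleSaga(ICommandBus bus) => _bus = bus;

    public void When(PaymentAccepted e) =>
        _bus.Send(new MarkOrderAsPaid(e.OrderId, e.Amount));
}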
Answering briefly
my question is how can we handle model changes that impact who handles each event?
Handling events is generally an easy thing to change, because the handling part is ephemeral. Events have a single writer, but they can have many readers. You just need to arrange for the plumbing to notify each subscriber of the event.
So in scenario #1, it's the PaymentAggregate that writes down the PaymentAccepted event (in its own stream), and then your plumbing notifies the OrderAggregate that the PaymentAccepted event happened, and it does the next thing in its own logic.
To change to scenario #2, we'd leave the Payment Aggregate unchanged, but we'd arrange the plumbing so that it tells the FinanceAggregate about PaymentAccepted, and that it tells the OrderAggregate about PaymentReceived.
Your pictures make it hard to see this; I think you aren't being careful to track that each change of state is stored in the stream of the aggregate that changed. Not your fault - the Microsoft picture is really awful.
In other words, your arrow #3 "Seats Reserved" isn't a SeatsReserved event, it's a Handle(SeatsReserved) command.
I'm building a service using the familiar event sourcing pattern:
1. A request is received.
2. The aggregate's history is loaded.
3. The aggregate is rebuilt (from its history).
4. New events are prepared and the aggregate is updated in response to the incoming request from Step 1.
5. These events are written to the log, and are made available (published) to any subscribers.
In my case, Step 5 is accomplished in two parts. The events are written to the event log. A background process reads from the event log and publishes all events starting from an offset.
In some cases, I need to publish side effects in addition to events related to the aggregate. As far as the system is concerned, these are events too because they are consumed by and affect the state of other services. However, they don't affect the history of the aggregate in this service and are not needed to rebuild it.
How should I handle these in the code?
Option 1-
Don't write side-effecting events to the event log. Publish these in the main process prior to Step 5.
Option 2-
Write everything to the event log and ignore side-effecting events when the history is loaded. (These aren't part of the history!)
Option 3-
Write side-effecting events to a dummy aggregate so they are published, but never loaded.
Option 4-
?
In the first option, there may be trouble if there is a concurrency violation: if the write fails in Step 5, the side effect cannot easily be rolled back. The second option writes events that are not part of the aggregate's history; when loading in Step 2, these side-effecting events would have to be ignored. The third option feels like a hack.
Which of these seems right to you?
Name events correctly
Events are "things that happened". So if you are able to name the events that only trigger side effects in a "X happened" fashion, they become a natural part of the event history.
In my experience, this is always possible, because side-effects don't happen out of thin air. Sometimes the name becomes a bit artificial, but it is still better to name events that way than to call them e.g. "send email to that client event".
In terms of your list of alternatives, this would be option 2.
Example
Instead of calling an event "send status email to customer event", call it "status email triggered event". Of course, if there is a better name for the actual trigger, use that one :-)
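In code, the difference is mostly naming and intent; these types are purely illustrative:

// A command: a request to do something (imperative mood); it may fail or be rejected.
public record SendStatusEmail(string CustomerId, string StatusText);

// An event: a record of something that happened (past tense); it belongs in the
// history and can safely be replayed without re-sending any email.
public record StatusEmailTriggered(string CustomerId, string StatusText);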
Option 4 - Have some other service subscribe to the events and produce the side effects, and any additional events related to them.
Events should be fine-grained.
Option 1- Don't write side-effecting events to the event log. Publish
these in the main process prior to Step 5.
What if you later need this part of the history by building a new bounded context?
Option 2- Write everything to the event log and ignore side-effecting
events when the history is loaded. (These aren't part of the history!)
How to ignore the effect of something which does not have any effect? :D
Option 3- Write side-effecting events to a dummy aggregate so they are
published, but never loaded.
Why do you need consistency boundary around something which you will never change?
What you are talking about is the most common form of domain events, which you use to communicate with other Bounded Contexts. Of course you need to save them.