When you use Node's EventEmitter, you subscribe to a single event. Your callback is executed only when that specific event is fired:
eventBus.on('some-event', function(data){
    // data is specific to 'some-event'
});
In Flux, you register your store with the dispatcher, and then your store gets called every time any event is dispatched. It is the job of the store to filter through every event it receives and determine whether the event is important to it:
eventBus.register(function(data){
    switch(data.type){
        case 'some-event':
            // now data is specific to 'some-event'
            break;
    }
});
In this video, the presenter says:
"Stores subscribe to actions. Actually, all stores receive all actions, and that's what keeps it scalable."
Question
Why and how is sending every action to every store [presumably] more scalable than only sending actions to specific stores?
The scalability referred to here is more about scaling the codebase than scaling in terms of how fast the software is. Data in Flux systems is easy to trace because every store is registered to every action, and the actions define every app-wide event that can happen in the system. Each store can determine how it needs to update itself in response to each action, without the programmer needing to decide which stores to wire up to which actions, and in most cases you can change or read the code for a store without needing to worry about how it affects any other store.
At some point the programmer will need to register the store. The store is very specific to the data it'll receive from the event. How exactly is looking up the data inside the store better than registering for a specific event, and having the store always expect the data it needs/cares about?
The actions in the system represent the things that can happen in a system, along with the relevant data for that event. For example:
A user logged in; comes with user profile
A user added a comment; comes with comment data, item ID it was added to
A user updated a post; comes with the post data
So, you can think about actions as the database of things the stores can know about. Any time an action is dispatched, it's sent to each store. So, at any given time, you only need to think about your data mutations for a single store + action at a time.
For instance, when a post is updated, you might have a PostStore that watches for the POST_UPDATED action, and when it sees it, it will update its internal state to store off the new post. This is completely separate from any other store which may also care about the POST_UPDATED event—any other programmer from any other team working on the app can make that decision separately, with the knowledge that they are able to hook into any action in the database of actions that may take place.
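For instance, a minimal sketch of such a PostStore using Facebook's Dispatcher (the store shape and action payload here are assumptions for illustration, not code from the video):

var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

var PostStore = {
    posts: {},

    // The single callback the store registers; it sees every action.
    dispatchToken: dispatcher.register(function (action) {
        switch (action.type) {
            case 'POST_UPDATED':
                PostStore.posts[action.post.id] = action.post;
                break;
            // actions this store doesn't care about simply fall through
        }
    })
};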
Another reason this is useful and scalable in terms of the codebase is inversion of control; each store decides what actions it cares about and how to respond to each action; all the data logic is centralized in that store. This is in contrast to a pattern like MVC, where a controller is explicitly set up to call mutation methods on models, and one or more other controllers may also be calling mutation methods on the same models at the same time (or different times); the data update logic is spread through the system, and understanding the data flow requires understanding each place the model might update.
Finally, another thing to keep in mind is that registering vs. not registering is sort of a matter of semantics; it's trivial to abstract away the fact that the store receives all actions. For example, in Fluxxor, the stores have a method called bindActions that binds specific actions to specific callbacks:
this.bindActions(
    "FIRST_ACTION_TYPE", this.handleFirstActionType,
    "OTHER_ACTION_TYPE", this.handleOtherActionType
);
Even though the store receives all actions, under the hood it looks up the action type in an internal map and calls the appropriate callback on the store.
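A rough sketch of how such an abstraction could work (this is not Fluxxor's actual source, just an illustration of the map-lookup idea):

function bindActions(/* type1, handler1, type2, handler2, ... */) {
    var handlers = {};
    for (var i = 0; i < arguments.length; i += 2) {
        handlers[arguments[i]] = arguments[i + 1];
    }
    // The single callback that actually gets registered with the dispatcher:
    return function (action) {
        var handler = handlers[action.type];
        if (handler) {
            handler.call(this, action);
        }
    };
}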
I've been asking myself the same question, and can't see technically how registering adds much, beyond simplification. I will pose my understanding of the system so that hopefully, if I am wrong, I can be corrected.
TL;DR: EventEmitter and Dispatcher serve similar purposes (pub/sub) but focus their efforts on different features. Specifically, the 'waitFor' functionality (which allows one event handler to ensure that a different one has already been called) is not available with EventEmitter. Dispatcher has focused its efforts on the 'waitFor' feature.
The final result of the system is to communicate to the stores that an action has happened. Whether the store 'subscribes to all events, then filters' or 'subscribes to a specific event' (with filtering done at the dispatcher) should not affect the final result: data is transferred in your application. (The handler always switches only on the event type and then processes it; i.e. it doesn't want to operate on ALL events.)
As you said, "At some point the programmer will need to register the store." It is just a question of the fidelity of the subscription. I don't think that a change in fidelity has any effect on 'inversion of control', for instance.
The added (killer) feature in Facebook's Dispatcher is its ability to 'waitFor' a different store to handle the event first. The question is, does this feature require that each store has only one event handler?
Let's look at the process. When you dispatch an action on the Dispatcher, it (omitting some details):
iterates all subscribers registered to the dispatcher
calls the registered callback (one per store)
the callback can call 'waitFor()' and pass a 'dispatchToken'. This internally references the callback registered by a different store. It is executed synchronously, causing the other store to receive the action and be updated first. This requires that 'waitFor()' is called before the code that handles the action.
the callback invoked by 'waitFor' switches on the action type to execute the correct code.
the callback can now run its code, knowing that its dependencies (other stores) have already been updated.
the callback switches on the action 'type' to execute the correct code.
This seems a very simple way to allow event dependencies.
Basically all callbacks are eventually called, but in a specific order, and then each switches to execute only the specific code it cares about. So it is as if we only triggered a handler for the 'add-item' event on each store, in the correct order.
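A minimal sketch of this with Facebook's Dispatcher (the store names and the 'add-item' action shape are made up for illustration):

var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// ItemStore must be updated before CartStore.
var itemStoreToken = dispatcher.register(function (action) {
    switch (action.type) {
        case 'add-item':
            // update ItemStore's internal state here
            break;
    }
});

var cartStoreToken = dispatcher.register(function (action) {
    switch (action.type) {
        case 'add-item':
            // block until ItemStore's callback has run for this action
            dispatcher.waitFor([itemStoreToken]);
            // CartStore can now rely on ItemStore's updated state
            break;
    }
});

dispatcher.dispatch({ type: 'add-item', itemId: 42 });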
If subscriptions were at a callback level (not 'store' level), would this still be possible? It would mean:
Each store would register multiple callbacks to specific events, keeping reference to their 'dispatchTokens' (same as currently)
Each callback would have its own 'dispatchToken'
The user would still 'waitFor' a specific callback, but it would be a specific handler for a specific store
The dispatcher would then only need to dispatch to callbacks of a specific action, in the same order
Possibly, the smart people at Facebook have figured out that adding the complexity of individual callbacks would actually be less performant, or possibly it is just not a priority.
Related
I am learning about DDD, CQRS and Event Sourcing and there is something I cannot figure out. Commands trigger changes in the aggregates, and once the change is performed an event is fired. The event is subsequently handled by other parts of the system and preserved in the event store. However, I do not understand how replaying events would recreate the aggregate if changes are triggered by commands.
Example: say we have an online shop.
AddItemToCartCommand -> Cart Aggregate adds the item to its cart -> ItemAddedToCartEvent -> The event is handled by whoever.
However, if the event is replayed, the aggregate would not add the item to its cart.
To sum up, my question is how I should recreate aggregates based on the events in the event store. Also, any general advice on how to replay events the right way would be appreciated.
For simplicity, let's assume a stateless process - our service doesn't try to keep copies of things in memory, but instead reloads aggregates as needed.
The service receives AddItemToCartCommand:{cart:123, ...}. We don't have the current state of cart:123 in memory, so we need to create it. We do that by loading the state of cart:123 from our durable store. Because we chose to use event-sourced storage, the "state" we read from the durable store is a representation of the history of events previously written by the service.
Event histories have within them all of the information you need to remember, but not necessarily in a convenient "shape" - append only lists are a great data structure for writes, but not necessarily good for reads.
What this often means is that we will "replay" the events to create an in memory object which we can then use to answer questions about the events we will write next.
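As a rough sketch (loadCart, apply and initialState are hypothetical names), "replaying" is just a left fold over the history:

function loadCart(history) {
    return history.reduce(function (state, event) {
        // apply() returns a new state that incorporates the event
        return apply(state, event);
    }, initialState());
}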
The same pattern is used when answering simple queries: we load the history of events from the store, transform the event history into a more convenient shape, and then use that shape to compute the answer.
In circumstances where query latency is more important than timeliness, we might design our query handler to read the convenient shapes from a cache, rather than trying to compute them fresh every time; a concurrently running background thread would be responsible for waking up periodically to compute new contents for the cache.
Using an async process to pull updates from an event stream is a common pattern; Greg Young discusses some of the advantages of that approach in his Polyglot Data talk.
In an ideal event-sourcing scenario, you would not have an already constructed aggregate structure available in your database. You repeatedly arrive at the final data structure by running through all the events stored so far.
Let me illustrate with some pseudocode of adding items to cart, and then fetching the cart data.
# Create a new cart
POST /cart/new
# Store a series of events related to the cart (in database as records, similar to array items)
POST /cart/add -> CartService.AddItem(item_data) -> ItemAddedToCart
A series of events would look like:
* ItemAddedToCart
* ItemAddedToCart
* ItemAddedToCart
* ItemRemovedFromCart
* ItemAddedToCart
When it's time to fetch cart data from the DB, you construct a new cart instance (or retrieve a cart instance if persisted) and replay the events on it.
cart = Cart(id=ID1)

# Fetch contents of Cart with id ID1
for each event in ID1 cart's events:
    if event is ItemAddedToCart:
        cart.add_item(event.data)
    else if event is ItemRemovedFromCart:
        cart.remove_item(event.data)

return cart
Occasionally, when there are too many events related to the cart, you may want to generate the aggregate structure and save it in the DB as a savepoint. Next time, you can start from that saved aggregate structure and apply only the events that came after it. This optimization saves time and improves performance when there are many events to process.
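A minimal sketch of that savepoint (snapshot) optimization, in JavaScript for illustration; snapshotStore, eventStore and the Cart helpers are hypothetical names:

function loadCart(cartId) {
    // start from the last snapshot if one exists, otherwise from an empty cart
    var snapshot = snapshotStore.latest(cartId);
    var cart = snapshot ? Cart.fromSnapshot(snapshot) : new Cart(cartId);

    // only replay events recorded after the snapshot's version
    var since = snapshot ? snapshot.version : 0;
    eventStore.eventsAfter(cartId, since).forEach(function (event) {
        cart.apply(event);
    });
    return cart;
}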
What may help is to not think of the command as changing the state but rather the event as changing the state. In fact, I don't quite see how else one would go about doing so. The command handler in your aggregate would apply the invariants and, if all is OK, would immediately create the event and call some method that applies it ([Apply|On|Do]MyEvent). The fact that you have an event does not necessarily mean other parts of your system will handle it; it is, however, required for event sourcing. Once you have an event you can most certainly pass it on to other parts of your system via, say, publishing on a service bus.
When you replay your events you are calling the same methods that the commands were calling to actually mutate the state of your aggregate:
public MyEvent MyCommand(string data)
{
    if (string.IsNullOrWhiteSpace(data))
    {
        throw new ArgumentException($"Argument '{nameof(data)}' may not be empty.");
    }

    return On(new MyEvent
    {
        Data = data
    });
}

private MyEvent On(MyEvent myEvent)
{
    // change the relevant state
    someState = myEvent.Data;

    return myEvent;
}
Your event sourcing infrastructure would call On(MyEvent) for MyEvent when replaying. Since you have an event it means that it was a valid state transition and can simply be applied; else something went wrong in your initial command processing and you probably have a bug.
All events in an event store would be in chronological order for an aggregate. In addition to this the events should have a global sequence number to facilitate projection processing.
You could have a generic projection that accepts any/all events and then publishes the event on a service bus for system integration. You could also place that burden on a client of the event store to have it keep track of the position itself and then read events off the store itself. You could combine these and have the client subscribe to service bus events but ensure that it executes them in the same order by keeping track of the position (global sequence number) itself and update it as the events are processed.
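A rough sketch of a client tracking its own position (the readEventsAfter, handle and checkpoint helpers are hypothetical names):

// last processed global sequence number, persisted by the client itself
var position = loadCheckpoint();

eventStore.readEventsAfter(position, 100).forEach(function (event) {
    handle(event);                        // project / publish to the service bus
    position = event.globalSequenceNumber;
    saveCheckpoint(position);             // so processing can resume after a crash
});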
I've been studying DDD for a while, and stumbled upon design patterns like CQRS and Event Sourcing (ES). These patterns can be used to help achieve some concepts of DDD with less effort.
In the architecture exemplified below, the aggregates know how to handle the commands and events related to itself. In other words, the Event Handlers and Command Handlers are the Aggregates.
Then, I started modeling a sample domain just to understand how the implementation would follow the business logic. For this question, here is my domain (it's based on this):
I know this is a badly modeled example, but I'm using it just as an example.
So, using ES, at the end of the operation, we would save all the events (Green arrows) into the event store (if there were no Exceptions), each event into its given Event Stream (Aggregate Type + Aggregate Id):
Everything seems right until now. So if we want to rebuild the internal state of an instance of any of these Aggregates, we only have to new it up (new()) and apply all the events saved in its respective Event Stream, in the correct order.
My question is related to changes in the model, because software development is a process where we never stop learning about our domain, and we always come up with new ideas. So, let's analyze some change scenarios:
Change Scenario 1:
Let's pretend that now, if the Reservation Aggregate checks that the seat is not available, it should send an event (Seat Not Reserved), and this event should be handled by a new Aggregate that will store all the people who did not get their seat reserved:
In the hypothesis where the old system already handled the initial command (Place order) correctly, and saved all the events to its respective event streams:
When we want to rebuild the internal state of an instance of any of these Aggregates, we only have to new it up (new()) and apply all the events saved in its respective Event Stream in the correct order. (Nothing changed.) The only thing is that the new use case didn't exist back in the old model.
Change Scenario 2:
Let's pretend that now, when the payment is accepted, we handle this event (Payment Accepted) in a new Aggregate (Finance Aggregate) and not in the Order Aggregate anymore, and it sends a new event (Payment Received) to the Order Aggregate. I know this scenario is not well structured, but something like this could happen.
In the hypothesis where the old system already handled the initial command (Place order) correctly, and saved all the events to its respective event streams:
When we want to rebuild the internal state of an instance of any of these Aggregates, we have a problem when applying the events from the Aggregate's Event Stream to itself:
Now, the order doesn’t know anymore how to handle Payment Accepted Event.
Problems
So, as the examples showed, whenever a system change results in an event being handled by a different event handler (Aggregate), there are some major problems, because we cannot rebuild the internal state anymore.
So, this problem can have some solutions:
Possible Solution
When an event is not handled by the aggregate in whose Event Stream it is stored, we can find the new handler, create a new instance, and send the event to it. But to keep the internal state correct, we need the last event (Payment Received) to be handled by the Order Aggregate. So, we let it dispatch the event (and possibly commands):
This solution can have some problems. Let’s imagine that a new command (Place Order) arrives and it has to create this order instance and save the new state. Now we would have:
In gray are the events that were already saved in the last call, before the system went through the model changes.
We can see that a new Event Stream is created for the new aggregate (Finance W). And we can see that Event Streams are append-only, so the Payment Accepted event in the Order Y Event Stream is still there.
The first Payment Accepted event in Finance W Event Stream is the one that was supposed to be handled by the Order but had to find a new handler.
The yellow Payment Received event in the Order's Event Stream is the event that was generated by the new handler of Payment Accepted when the Payment Accepted event from the Order's Event Stream was handled by the Finance Aggregate.
All the other Green Events are new events that were generated by handling the Place Order Command in the new model.
Problem With the Solution
The next time the aggregate needs to be rebuilt, there will be a Payment Accepted event in the stream (because it is append-only), and it will again call the new handler; but this has already been done and the Payment Received event has already been saved to the stream. So it is not necessary to go through this again; we could ignore this event and continue.
Question
So, my question is: how can we handle model changes that impact who handles each event? How can we rebuild the internal state of an Aggregate after a change like this?
Will we need to build some Event Stream migration that converts the events from one stream to the new schema (one or more streams), just like we would need in a relational database?
Will we never be allowed to remove a handler, so we can only add new handlers? That would lead to an unmanageable system…
You got almost everything right, except one thing: Aggregates should not handle events from other Aggregates. It's as if a non-event-sourced Aggregate shared a table with another Aggregate: they should not.
In event-driven DDD, Aggregates are the system's building blocks that receive Commands (things that express intent) and return Events (things that have happened). For every Command type there must exist one and only one Aggregate type that handles it. Before executing a Command, the Aggregate is fed with all its own previously emitted Events; that is, every Event that was emitted in the past by this Aggregate instance is applied to this Aggregate instance, in chronological order.
So, if you want to correctly model your system, you are not allowed to send events from one Aggregate as events to another Aggregate (of a different type or instance).
If you need to model business processes that involve multiple Aggregates, the correct way of doing it is by using a Saga/Process manager. This is a different component; it is, in a sense, the opposite of an Aggregate: it receives Events emitted by Aggregates and sends Commands to other Aggregates.
In the simplest cases, a Saga manager simply takes properties from one Event and creates and populates a Command with those properties. Then it sends the Command to the destination Aggregate.
In more complicated cases, the Saga waits for multiple Events, and only when all of them are received does it create and send a Command.
The Saga may also deduplicate or reorder events.
In your case, a Saga could be Sale, whose purpose would be to coordinate the entire sales process, from ordering to product dispatching.
In conclusion, you have that problem because you have not modeled your system correctly. If your Aggregates handled only their specific Commands (and not somebody else's Events), then even if you must create a new Saga when a new business process emerges, it would still send the same Commands to the same Aggregates.
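A minimal sketch of such a process manager, in JavaScript for illustration (the names SaleSaga, MarkOrderAsPaid and commandBus are hypothetical):

// The Sale saga listens for Events from one Aggregate and
// translates them into Commands for another Aggregate.
function SaleSaga(commandBus) {
    this.commandBus = commandBus;
}

// Called by the plumbing whenever a PaymentAccepted event is published.
SaleSaga.prototype.onPaymentAccepted = function (event) {
    this.commandBus.send({
        type: 'MarkOrderAsPaid',   // a Command handled by the Order Aggregate
        orderId: event.orderId,
        amount: event.amount
    });
};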
Answering briefly
my question is: how can we handle model changes that impact who handles each event?
Handling events is generally an easy thing to change, because the handling part is ephemeral. Events have a single writer, but they can have many readers. You just need to arrange for the plumbing to notify each subscriber of the event.
So in scenario #1, it's the PaymentAggregate that writes down the PaymentAccepted event (in its own stream), and then your plumbing notifies the OrderAggregate that the PaymentAccepted event happened, and it does the next thing in its own logic.
To change to scenario #2, we'd leave the Payment Aggregate unchanged, but we'd arrange the plumbing so that it tells the FinanceAggregate about PaymentAccepted, and that it tells the OrderAggregate about PaymentReceived.
Your pictures make it hard to see this; I think you aren't being careful to track that each change of state is stored in the stream of the aggregate that changed. Not your fault - the Microsoft picture is really awful.
In other words, your arrow #3 "Seats Reserved" isn't a SeatsReserved event, it's a Handle(SeatsReserved) command.
I am developing a simple DDD + Event sourcing based app for educational purposes.
In order to set the event version before storing it to the event store, I would have to query the event store, but my gut tells me this is wrong because it causes concurrency issues.
Am I missing something?
There are different answers to that, depending on what use case you are considering.
Generally, the event store is a dumb, domain agnostic appliance. It's superficially similar to a List abstraction -- it stores what you put in it, but it doesn't actually do any work to satisfy your domain constraints.
In use cases where your event stream is just a durable record of things that have happened (meaning: your domain model does not get a veto; recording the event doesn't depend on previously recorded events), then append semantics are fine, and depending on the kind of appliance you are using, you may not need to know what position in the stream you are writing to.
For instance: the API for GetEventStore understands ExpectedVersion.ANY to mean append these events to the end of the stream wherever it happens to be.
In cases where you do care about previous events (the domain model is expected to ensure an invariant based on its previous state), then you need to do something to ensure that you are appending the event to the same history that you have checked. The most common implementations of this communicate the expected position of the write cursor in the stream, so that the appliance can reject attempts to write to the wrong place (which protects you from concurrent modification).
This doesn't necessarily mean that you need to query the event store to get the position. You are allowed to count the number of events in the stream when you load it, and to remember how many more events you've added, and therefore where the stream "should" be if you are still synchronized with it.
What we're doing here is analogous to a compare-and-swap operation: we get a representation of the original state of the stream, create a new representation, and then compare and swap the reference to the original to point instead to our changes
oldState = stream.get()
newState = domainLogic(oldState)
stream.compareAndSwap(oldState, newState)
But because a stream is a persistent data structure with append only semantics, we can use a simplified API that doesn't require duplicating the existing state.
events = stream.get()
changes = domainLogic(events)
stream.appendAt(count(events), changes)
If the API of your appliance doesn't allow you to specify a write position, then yes - there's the danger of a data race when some other writer changes the position of the stream between your query for the position and your attempt to write. Data obtained in a query is always stale; unless you hold a lock you can't be sure that the data hasn't changed at the source while you are reading your local copy.
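As a rough sketch, reusing the stream.get/appendAt pseudo-API from above (the retry policy and conflict handling are assumptions), the optimistic-concurrency variant could look like this:

function saveWithRetry(command, maxAttempts) {
    for (var attempt = 0; attempt < maxAttempts; attempt++) {
        var events = stream.get();
        var changes = domainLogic(events, command);
        try {
            // appendAt is assumed to fail if the stream has grown past events.length
            stream.appendAt(events.length, changes);
            return;
        } catch (conflict) {
            // another writer appended first; reload the stream and try again
        }
    }
    throw new Error('concurrency conflict: too many retries');
}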
I guess you shouldn't think about the event version.
If you are talking about the event's place in the event stream, in general there is no guaranteed way to determine it at creation time; only at processing time or in the event store.
If it is exactly about the event version (see http://cqrs.nu/Faq, "How do I version/upgrade my events?"), you have it hardcoded in your application. So, I mean the following use case:
First, you have an app generating some events. Next, you update the app and the events change (you add some fields or change the payload structure) but keep their logical meaning. So now you have old events in your event store and new events that differ significantly from the old ones. To distinguish one from another, you use an event version, e.g. 0 and 1.
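A minimal sketch of that idea (the field names are made up): an "upcaster" converts version 0 events to the current version 1 shape while reading, so the rest of the code only ever sees one shape.

function upcast(event) {
    if (event.version === 0) {
        return {
            version: 1,
            type: event.type,
            // supply a default for the field that didn't exist in version 0
            data: Object.assign({}, event.data, { newField: null })
        };
    }
    return event; // already the current shape
}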
I am trying to implement another DDD bounded context with CQRS and ES.
I wonder: given there is a CreateUserCommand that creates a User in my domain model (not a word about saving), it then fires a UserCreatedEvent.
I have two event handlers for that event:
PersistUserEventHandler (updates state of app) and
SendWelcomeEmailEventHandler (sends welcome email to user)
Now, I know, that:
The order of processing events in Event Handlers should not matter
Saving state should be a detail, because the source of truth is in my event store.
But what if I do not want to send the welcome email until my read model is fully updated? Because what if, for example, the process is delayed or some error occurs and I am not able to persist that user into the read model right now? Then I do not want to send that welcome email yet, because if the user clicked, for example, the link to his profile in the mail, he would see "user does not exist".
I have seen people persist changes through a repository directly in command handlers (which would solve this problem), but that does not make sense with Event Sourcing, because I want to be able to replay all events (with only the persisting event handlers enabled, to prevent all other side effects) and get the actual state of the application in the persistence layer.
Or should I listen to UserCreatedEvent only with the event handler that actually persists it into the read model, and then raise in this event handler another event, CreatedUserSavedEvent, so that all emails etc. would be sent by its handlers?
I suppose no, too, because it reminds me of some event hell, and also, if I get the EventBus into some event handler, I run into a circular reference problem, which is just an effect of violating the rule that every dependency should point down to lower components of my system and not the other way.
So, how is this usually solved, or am I missing something?
PersistUserEventHandler (updates state of app)
You might be mistaking Read Models for a homogeneous whole that accurately represents the current state of an application, i.e. a second source of absolute truth besides the event log.
I tend to see them more as a bunch of partial, opinionated parcels of state that may not all be updated at the same time and may reflect different truths.
I don't recommend taking read models as a source of data in another context than the use case they were designed for. In your example, SendWelcomeEmail should probably not rely on the User read model but only on the data contained in the UserCreated event.
Now you can share code between read model projectors and other types of event handlers to avoid duplication, but sharing data seems risky.
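A minimal sketch of that separation (the handler, mailer and event fields are hypothetical): the welcome email is built entirely from the event payload, without touching the User read model.

function onUserCreated(event) {
    // uses only data carried by the UserCreated event itself
    mailer.send({
        to: event.email,
        subject: 'Welcome!',
        body: 'Hi ' + event.name + ', thanks for signing up.'
    });
}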
If users have random UUIDs then it should not be a problem. If a user arrives at a URL and the read model is not up to date, then you could show a "loading in progress, please wait" message.
If you really want to know whether the user exists - for example, you want to see the difference between "user does not exist" and "read model is not synchronized yet" - then you could send a special command that doesn't generate any events (or just test a command, if your command dispatcher supports dry runs of commands) and throw an exception if the user does not exist.
Aggregate B has calculations that need to be eventually consistent with aggregate A. Aggregate A can be mutated using eight methods and each method results in B needing to be updated. It seems an eventually consistent task, but the actual update time frame should be within seconds.
I don't want to rely on the application layer to 'remember' to trigger the update. (Jimmy Bogard says this as well.) What's the best way to model this?
Using a domain service with double dispatch is a pain:
The service will have to be a parameter on every method on A
Multiple mutation methods will usually be called in a row and I don't want to trigger an update in B each time a method is called.
Constructor injection is also a pain:
There are situations where A is not mutated, so being forced to instantiate and inject a domain service to watch for mutation that certainly won't happen feels wrong.
Again, multiple mutation methods will usually be called in a row and I don't want to trigger an update in B each time a method is called.
Domain events sound good, but I'm not sure what that looks like. Does each mutation method raise a domain event?
Again, multiple mutation methods will usually be called in a row and I don't want to trigger an update in B each time a method is called.
How do I model 'knowing' when A is finished being updated and knowing whether it has been updated so I can trigger B's update without relying on the application layer to call methods in a particular order each time?
Or is this really a repository-level or application-level concern, even though it seems to be a domain requirement?
Your option 3 is commonly used and a very straightforward technique:
Raise a domain event AChangedType1, ..., AChangedTypeN on model A updates
Let a saga/process manager listen on AChangedTypeX and issue a corresponding UpdateBTypeX command.
It's loosely coupled (neither A nor B know about each other), it scales well (easy parallelization), and the relation between them is explicitly modeled in the long-running process.
If you don't want to trigger an update to B on every change to A, you can delay the update by some time before you send out the UpdateBTypeX command (as is commonly done in network protocols; see, e.g., TCP's delayed ACKs).
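A minimal sketch of that delay (debounce) idea inside the process manager, in JavaScript for illustration (all names are hypothetical):

var pending = null;

// Called for every AChangedTypeX event.
function onAChanged(event) {
    if (pending) {
        clearTimeout(pending);          // a newer change arrived; restart the timer
    }
    pending = setTimeout(function () {
        commandBus.send({ type: 'UpdateB', aggregateAId: event.aggregateAId });
        pending = null;
    }, 2000);                           // send UpdateB at most once per burst, within seconds
}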