I have an Order aggregate with the following commands:
CreateOrderCommand
PlaceOrderCommand
... (rest redacted as they are not pertinent to the question) ...
The PlaceOrderCommand refers to placing the Order onto an external execution venue. I have captured the behaviour for placing an order onto an external execution venue within a separate (non-CQRS) @Service. However, I am struggling (due to lack of experience with Axon) with how best to connect my @Service with the aggregate.
My normal way of thinking would have me:
Inject the @Service into the aggregate's @Autowired constructor.
When the PlaceOrderCommand is issued, use the service to place the order onto the relevant execution venue and once done emit an event (either OrderPlacedSuccessfullyEvent or ErrorInOrderPlacementEvent).
Change the aggregate's state within the relevant @EventSourcingHandler. (A sketch of this follows below.)
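Roughly, I imagine something like this (a sketch only, with Axon 4 / Spring imports; OrderPlacementService stands for my non-CQRS @Service, and its place(...) method, the getVenue() accessor, and the state change are illustrative). Note that the constructor injection is exactly what collides with the empty-constructor requirement in my third question below:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;
import org.springframework.beans.factory.annotation.Autowired;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

@Aggregate
public class Order {

    @AggregateIdentifier
    private String orderId;

    private final OrderPlacementService placementService; // my non-CQRS @Service

    @Autowired
    public Order(OrderPlacementService placementService) {
        this.placementService = placementService; // this is the part that feels off
    }

    @CommandHandler
    public void handle(PlaceOrderCommand command) {
        try {
            placementService.place(orderId, command.getVenue());
            apply(new OrderPlacedSuccessfullyEvent(orderId));
        } catch (Exception e) {
            apply(new ErrorInOrderPlacementEvent(orderId, e.getMessage()));
        }
    }

    @EventSourcingHandler
    public void on(OrderPlacedSuccessfullyEvent event) {
        // change the aggregate's state here
    }
}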
My question is:
Does my description above of how to handle this particular use case with Axon make sense (in particular, injecting a @Service into an aggregate feels a bit off to me)?
Or is there a different best practice of how to model my scenario when using CQRS/event sourcing with Axon?
Axon requires an empty constructor in the aggregate. How does this reconcile with having the @Autowired constructor?
The other thing I was potentially considering was:
having a PlaceOrderInstructionCommand (instead of the simple PlaceOrderCommand) which emits a ReceivedPlaceOrderInstructionEvent that a separate event listener is listening for.
that event listener would have the relevant @Service injected into it and would do the placement of the Order.
then after placing the Order it would send a command (or should it emit an event?) to the aggregate informing it to update its state (see the sketch below).
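That listener might look something like this (again only a sketch; ConfirmOrderPlacementCommand and the accessor names are made up for illustration):

import org.axonframework.commandhandling.gateway.CommandGateway;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
public class OrderPlacementListener {

    private final OrderPlacementService placementService; // the non-CQRS @Service
    private final CommandGateway commandGateway;

    public OrderPlacementListener(OrderPlacementService placementService,
                                  CommandGateway commandGateway) {
        this.placementService = placementService;
        this.commandGateway = commandGateway;
    }

    @EventHandler
    public void on(ReceivedPlaceOrderInstructionEvent event) {
        // do the actual placement outside the aggregate...
        placementService.place(event.getOrderId(), event.getVenue());
        // ...then inform the aggregate so it can update its state
        commandGateway.send(new ConfirmOrderPlacementCommand(event.getOrderId()));
    }
}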
Could you please advise on what is best practice for modelling this scenario?
The PlaceOrderCommand refers to placing the Order onto an external execution venue.
I'm assuming that placing the Order onto an external execution venue means interacting with an external system. If yes, then it should not be part of your domain. In that case, you would need to raise an Integration Event.
As you mentioned, you could issue a Command like ProcessOrder from your Domain. While handling that Command, you can update your Domain (e.g., set the OrderStatus to Processing) and raise an integration event like OrderArrived, which is then handled by a separate process.
From Microsoft Docs:
The purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications.
Integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications.
You would handle that integration event as a separate process (or worker) outside of your Domain. This is where your @Service would be injected. Once the order is processed successfully, you can then broadcast an integration event called OrderPlaced.
Now, any subscriber which has anything to do with placing the order would subscribe to that event. In your case, your Domain is interested in updating the state once the order is placed. Hence, you would subscribe to the OrderPlaced event within your Domain to update the status of the Order.
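As a rough, framework-agnostic sketch of that worker (everything except OrderArrived and OrderPlaced is a hypothetical name):

// Runs as a separate process/worker outside your Domain.
public class OrderPlacementWorker {

    private final ExecutionVenueService executionVenueService; // your @Service lives here
    private final IntegrationEventBus integrationEventBus;     // hypothetical bus abstraction

    public OrderPlacementWorker(ExecutionVenueService executionVenueService,
                                IntegrationEventBus integrationEventBus) {
        this.executionVenueService = executionVenueService;
        this.integrationEventBus = integrationEventBus;
    }

    // Handles the integration event raised by the Domain.
    public void handle(OrderArrived event) {
        executionVenueService.place(event.getOrderId());
        // Broadcast; your Domain subscribes to this to update the Order's status.
        integrationEventBus.publish(new OrderPlaced(event.getOrderId()));
    }
}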
Hope it helps.
Related
I am creating an application using Domain-Driven Design, and the need for a process manager has come up in order to coordinate multiple use cases from different bounded contexts. I've seen that, in order for the process manager to correlate event response data with specific requests, it uses correlation ids.
So, supposing that the process manager creates this correlation id and also creates a command that triggers a specific use case. Then, it wants to pass this id and/or some other metadata (through the command) to the event that will eventually be produced by the use case.
But where should this info be passed? Is it the Aggregate that has domain logic like, for example, CreateUser(userProps, metadata) and then emits a UserCreated(userProps, metadata) event? It seems ugly, and not the domain's responsibility, to have to add the metadata to every method on the aggregate. (A sketch of what I mean follows.)
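To make the awkwardness concrete (names as above; AggregateRoot, UserProps, MessageMetadata and recordEvent are placeholders):

public class User extends AggregateRoot {

    private final UserProps userProps;

    private User(UserProps userProps) {
        this.userProps = userProps;
    }

    // The metadata is only passed through; the domain itself never uses it.
    public static User createUser(UserProps userProps, MessageMetadata metadata) {
        User user = new User(userProps);
        user.recordEvent(new UserCreated(userProps, metadata));
        return user;
    }
}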
How can this metadata end up in that event in a clean way? This event is eventually an integration event, because the domain event UserCreated is wrapped and sent as an integration one, with a specific schema that other bounded contexts are aware of.
Thank you!
I am confused about where to handle domain events in an application that is based on the hexagonal architecture. I am talking about the bounded-context-internal domain events, and not about inter-context integration/application/public events.
Background
As far as I understand, the application layer (i.e. use case logic, workflow logic, interaction with infrastructure, etc.) is where command handlers belong, because they are specific to a certain application design and/or UI design. Command handlers then call into the domain layer, where all the domain logic resides (domain services, aggregates, domain events). The domain layer should be independent of specific application workflows and/or UI design.
In many resources (blogs, books) I find that people implement domain event handlers in the application layer, similar to command handlers. This is because the handling of a domain event should be done in its own transaction, and since it could influence other aggregates, those aggregates must be loaded via infrastructure first. The key point, however, is this: the domain event is torn apart and turned into a series of method calls to aggregates. This important translation resides in the application layer only.
Question
I consider the knowledge about what domain events cause what effects on other aggregates as an integral part of the domain knowledge itself. If I were to delete everything except my domain layer, shouldn't that knowledge be retained somewhere? In my view, we should place domain event handlers directly in the domain layer itself:
They could be domain services which receive both a domain event and an aggregate that might be affected by it, and transform the domain event into one or many method calls.
They could be methods on aggregates themselves which directly consume the entire domain event (i.e. the signature contains the domain event type) and do whatever they want with it.
Of course, in order to load the affected aggregate, we still need a corresponding handler in the application layer. This handler only starts a new transaction, loads the affected aggregate, and calls into the domain layer.
Since I have never seen this mentioned anywhere, I wonder if I got something wrong about DDD, domain events or the difference between application layer and domain layer.
EDIT: Examples
Let's start with this commonly used approach:
// in application layer service (called by adapter)
public void HandleDomainEvent(OrderCreatedDomainEvent event) {
var restaurant = this.restaurantRepository.getByOrderKind(event.kind);
restaurant.prepareMeal(); // Translate the event into a (very different) command - I consider this important business knowledge that now is only in the application layer.
this.mailService.notifyStakeholders();
}
How about this one instead?
// in application layer service (called by adapter)
public void HandleDomainEvent(OrderCreatedDomainEvent event) {
var restaurant = this.restaurantRepository.getByOrderKind(event.kind);
this.restaurantDomainService.HandleDomainEvent(event, restaurant);
this.mailService.notifyStakeholders();
}
// in domain layer handler (called by above)
public void HandleDomainEvent(OrderCreatedDomainEvent event, Restaurant restaurant) {
restaurant.prepareMeal(); // Now this translation knowledge (call it policy) is preserved in only the domain layer.
}
The problem with most event handler classes is that they are often tied to a specific messaging technology and are therefore often placed in the infrastructure layer.
However, nothing prevents you from writing technology-agnostic handlers and using technology-aware adapters that dispatch to them.
For instance, in one application I've built I had the concept of an Action Required Policy. The policy drove the assignment/un-assignment of a given Work Item to a special workload bucket whenever the policy rule was satisfied/unsatisfied. The policy had to be re-evaluated in many scenarios such as when documents were attached to the Work Item, when the Work Item was assigned, when an external status flag was granted, etc.
I ended up creating an ActionRequiredPolicy class in the domain which had event handling methods such as void when(CaseAssigned event), and I had an event handler in the infrastructure layer that simply informed the policy.
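Simplified, the domain side looked roughly like this (the WorkItem method names are illustrative, not the real ones):

// Domain layer: a technology-agnostic policy with event handling methods.
public class ActionRequiredPolicy {

    public void when(CaseAssigned event) {
        evaluate(event.getWorkItem());
    }

    public void when(DocumentAttached event) {
        evaluate(event.getWorkItem());
    }

    // Re-evaluate the rule and drive assignment/un-assignment of the bucket.
    private void evaluate(WorkItem workItem) {
        if (workItem.satisfiesActionRequiredRule()) {
            workItem.assignToActionRequiredBucket();
        } else {
            workItem.unassignFromActionRequiredBucket();
        }
    }
}

The infrastructure-side handler did nothing more than deserialize the incoming message and call the matching when(...) method.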
I think another reason people put these in the infrastructure or application layers is that the policies often react to events by triggering new commands. Sometimes that approach feels natural, but other times you want to make it explicit that an action must occur in response to an event and otherwise can't happen: translating events to commands makes that less explicit.
Here's an older question I asked related to that.
Your description sounds very much like event-sourcing.
If you are event sourcing (i.e. the state of an aggregate is derived solely from the domain events), then the event handler is in the domain layer. In fact, the general tendency would be to have a port/adapter/anti-corruption layer emit commands; the command handler for an aggregate then (if necessary) uses the event handler to derive the state of the aggregate, and based on that state and the command it emits events, which are persisted so that the event handler can derive the next state. Note that here the event handler definitely belongs in the domain layer, and the command handler likely does, too.
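As a minimal sketch of that loop (a toy domain, invented for illustration and not tied to any framework):

import java.util.ArrayList;
import java.util.List;

record Deposit(long amount) {}          // command
record MoneyDeposited(long amount) {}   // domain event

// Event handler in the domain layer: derives state purely from past events.
final class AccountState {
    private final long balance;

    private AccountState(long balance) { this.balance = balance; }

    static AccountState initial() { return new AccountState(0); }

    AccountState apply(MoneyDeposited event) {
        return new AccountState(balance + event.amount());
    }
}

// Command handler: folds history into state, then emits new events to persist.
final class AccountCommandHandler {
    List<MoneyDeposited> handle(Deposit command, List<MoneyDeposited> history) {
        AccountState state = AccountState.initial();
        for (MoneyDeposited event : history) {
            state = state.apply(event); // the event handler derives the current state
        }
        // Decide based on state + command; the caller persists the result so the
        // event handler can derive the next state.
        List<MoneyDeposited> newEvents = new ArrayList<>();
        newEvents.add(new MoneyDeposited(command.amount()));
        return newEvents;
    }
}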
More general event driven approaches, to my mind, tend to implicitly utilize the fact that one side's event is very often another side's command.
It's worth noting that an event is in some sense often just a reified method call on an aggregate.
I follow this strategy for managing domain events:
First of all, it is good to persist them in an event store, so that you have consistency between the fact that triggered the event (for example, a user was created) and the actions it triggers (for example, sending an email to the user).
Assuming we have a command bus:
I put a decorator around it that persists the events generated by the command (sketched after this list).
A worker processes the event store and publishes the events outside the bounded context (BC).
Other BCs (or the same one that published it) interested in the event subscribe to it. The event handlers are like command handlers; they belong to the application layer.
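The decorator from point 1 can be a very thin wrapper (the interface names here are hypothetical):

// Decorator around the command bus: after the command is handled, persist the
// events it generated, so the fact and the actions it triggers stay consistent.
public class EventPersistingCommandBus implements CommandBus {

    private final CommandBus inner;
    private final EventStore eventStore;

    public EventPersistingCommandBus(CommandBus inner, EventStore eventStore) {
        this.inner = inner;
        this.eventStore = eventStore;
    }

    @Override
    public ExecutionResult dispatch(Command command) {
        ExecutionResult result = inner.dispatch(command);
        eventStore.append(result.generatedEvents()); // the worker publishes these later
        return result;
    }
}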
If you use hexagonal architecture, the hexagon is split into the application layer and the domain.
I am working on a backend and trying to implement CQRS patterns.
I'm pretty clear about events, but sometimes struggle with commands.
I've seen that commands are requested by users, for example a ChangePasswordCommand. However, at the implementation level the user is just calling an endpoint, handled by some controller.
I can inject a UserService into my controller, which will handle the domain logic, and this is what basic tutorials do (I use Nest.js). However, I feel that maybe this is where I should use a command: should I execute a ChangePasswordCommand in my controller and then let the domain module handle it?
The important thing is that I need a return value from the command, which is not a problem from an implementation perspective, but it doesn't look good in terms of CQRS: I would ADD and GET at the same time.
Or maybe the last option is to execute the command in the controller and then emit an event (PasswordChangedEvent) in the command handler. Next, wait till the event comes back and return the value in the controller.
This last option seems quite good to me, but I have problems with a clean implementation inside the request lifecycle.
I am basing this on https://docs.nestjs.com/recipes/cqrs
While the answer by @cperson is technically correct, I would like to add a few nuances to it.
First, something that may not be clear from that answer, where it advises to "emit an event (PasswordChangedEvent) in the command handler". This is what I would prefer as well, but watch out:
The Command is part of the infrastructure layer, and the Event is part of the domain.
So from the command you should trigger code on the AggregateRoot that emits the event.
This can be done with mergeObjectContext or eventBus.publish (see the NestJS docs).
Events can be applied from other domain objects, but the aggregate usually is the emitter (upon commit).
The other point I wanted to address is that an event-sourced architecture is assumed, i.e. applying CQRS/ES. While CQRS is often used in combination with Event Sourcing, there is nothing that prescribes doing so. Event Sourcing can give additional advantages, but it also comes with significant added complexity. You should carefully weigh the pros and cons of having ES.
In many cases you do not need Event Sourcing. Having just CQRS already gives you a lot of benefits, such as having your domain / bounded contexts well-contained: separation between reads and writes, single-responsibility commands + queries (more SOLID in general), a cleaner architecture, and so on. On a higher level, it is easier to shift focus from 'how do I implement this (CRUD-wise)?' to 'how do these user requirements fit in the domain model?'.
Without ES you can have a single relational database and persist using, e.g., TypeORM. You can persist events, but it is not needed. In many scenarios you can avoid the eventual consistency where clients need to subscribe to events (maybe you just use them to drive sagas and update read-side views/projections).
You can always start with just CQRS and add Event Sourcing later, when the need arises.
As your architecture evolves, you may find that you require a command bus if you are using Processes/Sagas to manage workflows and inter-aggregate communication. If and when that is the case, it will naturally make sense to use that bus for all commands.
The following is the method I would prefer:
execute the command in the controller and then emit an event (PasswordChangedEvent) in the command handler. Next, wait till the event comes back and return the value in the controller.
As for implementation details, in .NET, we use a SignalR websockets service that will read the event bus (where all events are published) and will forward events to clients that have subscribed to them.
In this case, the workflow would be:
The user posts to the controller.
The controller appends the command to the command bus.
The controller returns an ID identifying the command.
The client (browser client) subscribes to events relating to this command.
The command is received by the domain service and handled. An event is emitted to the event store.
The event is published to the event bus.
The event listener subscription service receives the event, finds the subscription, and sends the event to the client.
The client receives the event and notifies the user.
In the DDD literature, the returning domain events pattern is described as a way to manage domain events. Conceptually, the aggregate root keeps a list of domain events, populated when you perform operations on it.
When the operation on the aggregate root is done, the DB transaction is completed at the application service layer, and then the application service iterates over the domain events, calling an Event Dispatcher to handle those messages.
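In code, the pattern looks roughly like this (illustrative names throughout):

import java.util.ArrayList;
import java.util.List;

// The aggregate root keeps a list of domain events, populated by its operations.
abstract class AggregateRoot {
    private final List<Object> domainEvents = new ArrayList<>();

    protected void record(Object domainEvent) {
        domainEvents.add(domainEvent);
    }

    public List<Object> pullDomainEvents() {
        List<Object> events = new ArrayList<>(domainEvents);
        domainEvents.clear();
        return events;
    }
}

// Application service: complete the DB transaction, then iterate over the events.
class PlaceOrderService {
    private final OrderRepository orders;
    private final EventDispatcher dispatcher;

    PlaceOrderService(OrderRepository orders, EventDispatcher dispatcher) {
        this.orders = orders;
        this.dispatcher = dispatcher;
    }

    public void placeOrder(String orderId) {
        Order order = orders.byId(orderId);
        order.place();      // populates the order's domain events
        orders.save(order); // the DB transaction completes here
        for (Object event : order.pullDomainEvents()) {
            dispatcher.dispatch(event); // whose transaction is this? (my question)
        }
    }
}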
My question concerns the way we should handle transactions at this moment. Should the Event Dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain Event Dispatcher? When the dispatcher uses an infrastructure mechanism like RabbitMQ, the question is irrelevant, but when the domain events are handled in-process, it is.
A sub-question related to my question: what is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick off the Domain Events iteration on the aggregate root instead of doing it manually in the application service? Does it add too much coupling? Is it better because it does not require the same code being written for each use case (looping over the domain events on the aggregate, and potentially creating the new transaction if that is not inside the dispatcher)?
My question concerns the way we should handle transactions at this moment. Should the Event Dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain event iteration where it calls the domain Event Dispatcher?
What you are asking here is really a specialized version of this question: should we ever update more than one aggregate in a single transaction?
You can find a lot of assertions that the answer is "no". For instance, Vaughn Vernon (2014):
A properly designed aggregate is one that can be modified in any way required by the business with its invariants completely consistent within a single transaction. And a properly designed bounded context modifies only one aggregate instance per transaction in all cases.
Greg Young tends to go further, pointing out that adhering to this rule allows you to partition your data by aggregate id. In other words, the aggregate boundaries are an explicit expression of how your data can be organized.
So your best bet is to try to arrange your more complicated orchestrations such that each aggregate is updated in its own transaction.
My question is related to the way we handle the transaction for the event sent after the initial aggregate is altered and the initial transaction is completed. The domain event must be handled, and its processing could need to alter another aggregate.
Right, so if we're going to alter another aggregate, then there should (per the advice above) be a new transaction for the change to that aggregate. In other words, it's not the routing of the domain event that determines whether we need another transaction; the choice of event handler determines whether or not we need another transaction.
Just because event handling happens in-process doesn't mean the originating application service has to orchestrate all transactions happening as a consequence of the events.
If we take in-process event handling via the Observer pattern, for instance, each Observer will be responsible for creating its own transaction if it needs one.
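For instance (TransactionRunner, the Observer interface, and the inventory types are all hypothetical):

// An in-process observer that owns its transaction; the originating application
// service does not orchestrate it.
public class UpdateInventoryObserver implements Observer<OrderPlacedEvent> {

    private final TransactionRunner transactions;
    private final InventoryRepository inventory;

    public UpdateInventoryObserver(TransactionRunner transactions,
                                   InventoryRepository inventory) {
        this.transactions = transactions;
        this.inventory = inventory;
    }

    @Override
    public void onEvent(OrderPlacedEvent event) {
        // One aggregate per transaction: this change gets its own transaction.
        transactions.inNewTransaction(() -> {
            Inventory item = inventory.byProduct(event.getProductId());
            item.reserve(event.getQuantity());
            inventory.save(item);
        });
    }
}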
What is your opinion about using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener of NHibernate) to kick off the Domain Events iteration on the aggregate root instead of manually doing it in the application service?
Wouldn't this have to happen during the original DB transaction, effectively turning everything into immediate consistency (if events are handled in-process)?
I have an entity called Order and an aggregate root OrderManager that updates order state and some other information based on requests from the application layer (the app layer calls OrderManager, and OrderManager manages internal state, including Orders).
Each order has an expiration time, so I want to schedule an action to handle the expiration. I don't know where to put it. I am thinking of two approaches:
Define an interface IScheduler in the domain model, so that OrderManager uses this interface for task scheduling.
Don't define an interface, but schedule the expiration handling at the application level. This means that the app layer calls some method like OrderManager.HandleExpiration.
Personally I prefer the first approach, but maybe someone has another idea?
First off, the Order entity should be the aggregate root of the Order aggregate. It should encapsulate state-changing behavior such that there is no need for a manager class. The application service would then delegate directly to the Order entity.
As far as handling order expiration goes, a few things have to be considered. Does any of this need to be persistent? In other words, will there be orders that are persisted to the database and not loaded in the application, for which expiration will need to be handled? If so, then you need to implement a workflow which goes outside of application boundaries. One way to do this is to have a continuously running background service which polls the database for orders that expire at the current time. It then sends a command to a handler which handles the order expiration event. This handler would delegate back to your domain, ultimately to the HandleExpiration method on the Order entity. An order expiration is just an event, handled like any other domain event, and the background service is just a part of the infrastructure that makes this event possible. This fits best with your approach #2.
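A sketch of such a background service (handleExpiration mirrors the HandleExpiration method mentioned above; the repository method names and polling interval are illustrative):

import java.time.Duration;
import java.time.Instant;

// Continuously running background service: polls the database for orders that
// have expired and delegates back to the domain.
public class OrderExpirationPoller implements Runnable {

    private final OrderRepository orders;

    public OrderExpirationPoller(OrderRepository orders) {
        this.orders = orders;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            for (Order order : orders.findExpiredAt(Instant.now())) {
                order.handleExpiration(); // the expiration event is handled by the entity
                orders.save(order);
            }
            try {
                Thread.sleep(Duration.ofSeconds(30).toMillis());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}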