I am trying to learn more about DDD and was reading up on domain events. Let's say we have three microservices: Service A, Service B, and Service C.
Service A has an entity Foo defined as below:
public class Foo : AggregateRoot
{
    public string Id { get; private set; }
    public string Name { get; private set; }
    public string Email { get; private set; }
}
Service B depends on the email from Foo, while Service C depends on the name. Whenever the values of Foo change, the data is replicated from Service A to Service B and Service C via a Bus.
Guidelines about Domain Events that I came across:
Do not share excess information as part of the DomainEvent data.
When the consuming BoundedContext knows about the producing BoundedContext, share only the Id; otherwise, share the full information.
Don't use DomainClasses to represent data in Events
Use Primitive types for data in Events
Now the question that arose due to conflicting guidelines:
Does it mean that I should fire two different events when these values change, like FooNameChanged and FooEmailChanged, and only use the Id along with the updated value as part of the event payload?
Or can I just create a single DomainEvent called FooChanged, take the state of Foo, serialize it, and fire the event? Then write a handler as part of the same BoundedContext that takes the data and drops it on the Bus for any service subscribed to the message, and each individual service decides what actions to take based on the attached Id and the event payload (the updated data). (A sketch of both options follows.)
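To make the two options concrete, here is a minimal sketch of the event shapes being compared (the record fields beyond Id are assumptions):

// Option 1: granular events with primitive types and a minimal payload
public record FooNameChanged(string Id, string Name);
public record FooEmailChanged(string Id, string Email);

// Option 2: a single coarse-grained event carrying the full serialized state
public record FooChanged(string Id, string Name, string Email);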
If you need to talk across services, you should perhaps be looking at Integration Events instead of "Domain Events".
From Microsoft Docs
Domain events versus integration events
Semantically, domain and integration events are the same thing: notifications about something that just happened. However, their implementation must be different. Domain events are just messages pushed to a domain event dispatcher, which could be implemented as an in-memory mediator based on an IoC container or any other method.

On the other hand, the purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications. Hence, they should occur only if the entity is successfully persisted, otherwise it's as if the entire operation never happened.

As mentioned before, integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications.

Thus, the event bus interface needs some infrastructure that allows inter-process and distributed communication between potentially remote services. It can be based on a commercial service bus, queues, a shared database used as a mailbox, or any other distributed and ideally push-based messaging system.
What information you send within the integration events really depends on your needs. You have the following choices:
Publish events such as FooNameChanged or FooEmailChanged with only the Id of Foo. In that scenario, if your consumers need further information about what has changed, they would need to make a call to your service (perhaps a REST API call). A disadvantage of this approach is that if you have many subscribers to your event, all those services will call your service to get the details of the event at almost the same time.
Publish the event with the full data that a consuming service may need (note that this is not the same as your Domain model), such as PreviousValue, CurrentValue, etc. If your payload is not huge, this can be a good option. These types of events are typically called "fat events". (A sketch of both variants follows.)
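As a rough illustration of the two choices (the names and fields are assumptions, not from the docs):

// Choice 1: thin event - subscribers must call back for details
public record FooEmailChanged(string FooId);

// Choice 2: "fat" event - carries everything a consumer may need
public record FooEmailChangedFat(string FooId, string PreviousValue, string CurrentValue);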
DomainEvents are not patch documents.
Which is to say, we aren't trying to create general purpose descriptions of changes, but instead to align our messages with the concepts of our domain as we understand them.
So whether two changes belong in the same event, or in different events, gets a big "it depends".
Related
I am creating an application using Domain-Driven Design, and the need for a process manager has come up in order to coordinate multiple use cases from different bounded contexts. I've seen that, in order for the process manager to correlate event response data for specific requests, it uses correlation ids.
So, suppose the process manager creates this correlation id and also creates a command that triggers a specific use case. It then wants to pass this id and/or some other metadata (through the command) to the event that will eventually be produced by the use case.
But where should this info be passed? Is it the Aggregate that has domain logic like, for example, CreateUser(userProps, metadata), and then emits a UserCreated(userProps, metadata) event? It seems ugly, and not the domain's responsibility, to have to add the metadata to every method on the aggregate.
How can this metadata end up in the event in a clean way? The event is eventually an integration event, because the domain event UserCreated is wrapped and sent as an integration event with a specific schema that other bounded contexts are aware of.
Thank you!
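One shape that is often suggested for this (a sketch under assumptions, not an authoritative answer): keep the metadata out of the aggregate entirely, and attach it at the point where the domain event is wrapped into the integration event, since the application layer already knows the current command's correlation id. All names below are hypothetical:

using System.Threading.Tasks;

public interface IEventBus
{
    Task PublishAsync<T>(T message);
}

// Assumed shapes for the domain event and its integration envelope.
public record UserCreated(string UserId, string Name);
public record IntegrationEvent<T>(T Payload, string CorrelationId);

// Hypothetical application-layer publisher: the aggregate emits a plain
// UserCreated domain event; correlation metadata is attached only here.
public class UserCreatedPublisher
{
    private readonly IEventBus bus;

    public UserCreatedPublisher(IEventBus bus) => this.bus = bus;

    public Task PublishAsync(UserCreated domainEvent, string correlationId) =>
        bus.PublishAsync(new IntegrationEvent<UserCreated>(domainEvent, correlationId));
}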
I have been reading the book Patterns, Principles and Practices of Domain-Driven Design, specifically the chapter dedicated to repositories, and in one of the code examples it uses infrastructure interfaces in the use cases. Is it correct that the application layer has knowledge of infrastructure? I thought that use cases should only have knowledge of the domain...
Using interfaces to separate the contract from the implementation is the right way; the use case layer then knows only the interfaces, not the infrastructure details.
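A minimal sketch of that separation, assuming a simple order use case (all names are illustrative):

using System;
using System.Threading.Tasks;

// Domain: business logic only.
public class Order
{
    public Guid Id { get; private set; }
    public bool Placed { get; private set; }
    public void Place() => Placed = true; // invariants would be enforced here
}

// Application layer: owns the abstraction, knows nothing about persistence tech.
public interface IOrderRepository
{
    Task<Order> GetByIdAsync(Guid id);
    Task SaveAsync(Order order);
}

public class PlaceOrderUseCase
{
    private readonly IOrderRepository orders;
    public PlaceOrderUseCase(IOrderRepository orders) => this.orders = orders;

    public async Task ExecuteAsync(Guid orderId)
    {
        var order = await orders.GetByIdAsync(orderId); // no knowledge of how it's loaded
        order.Place();                                  // domain logic
        await orders.SaveAsync(order);
    }
}

// Infrastructure layer: the only place that knows the persistence detail.
public class SqlOrderRepository : IOrderRepository
{
    public Task<Order> GetByIdAsync(Guid id) => throw new NotImplementedException(); // e.g. EF Core / Dapper
    public Task SaveAsync(Order order) => throw new NotImplementedException();
}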
It is the Application Layer's responsibility to invoke the (injected) infrastructure services, call the domain layer methods, and persist/load necessary data for the business logic to be executed. The domain layer is unconcerned about how data is persisted or loaded, yes, but the application layer makes it possible to use the business logic defined in the domain layer.
You would probably have three layers that operate on any request: a Controller that accepts the request and knows which application layer method to invoke, an Application Service that knows what data to load and which domain layer method to invoke, and the Domain Entity (usually an Aggregate) that encloses the business logic (a.k.a. invariants).
The Controller's responsibility is only to gather the request params (gather user input in your case), ensure authentication (if needed), and then make the call to the Application Service method.
Application Services are direct clients of the domain model and act as intermediaries to coordinate between the external world and the domain layer. They are responsible for handling infrastructure concerns like ID Generation, Transaction Management, Encryption, etc.
Let's take the example of an imaginary MessageSender Application Service. Here is an example control flow (sketched in code after the list):
API sends the request with conversation_id, user_id (author), and message.
Application Service loads Conversation from the database. If the Conversation ID is valid, and the author can participate in this conversation (these are invariants), you invoke a send method on the Conversation object.
The Conversation object adds the message to its own data, runs its business logic, and decides which users to send it to.
The Conversation object raises events to be dispatched into a message interface (collected in a temporary variable valid for that session) and returns. These events contain the entire data to reconstruct details of the message (timestamps, audit log, etc.) and don't just cater to what is pushed out to the receiver later.
The Application Service persists the updated Conversation object and dispatches all events raised during the recent processing.
A subscriber listening for the event gathers it, constructs the message in the right format (picking only the data it needs from the event), and performs the actual push to the receiver.
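Here is a minimal sketch of steps 2-5 as code (every name is an assumption made for illustration):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public record MessageSent(Guid AuthorId, string Message);

public class Conversation
{
    private readonly List<object> pendingEvents = new();

    public void Send(Guid authorId, string message)
    {
        // Invariant checks (valid conversation, author may participate)
        // and adding the message to internal state are omitted for brevity.
        pendingEvents.Add(new MessageSent(authorId, message));
    }

    public IReadOnlyCollection<object> DequeueEvents()
    {
        var events = pendingEvents.ToArray();
        pendingEvents.Clear();
        return events;
    }
}

public interface IConversationRepository
{
    Task<Conversation> GetByIdAsync(Guid id);
    Task SaveAsync(Conversation conversation);
}

public interface IEventDispatcher
{
    Task DispatchAsync(object domainEvent);
}

public class MessageSenderService
{
    private readonly IConversationRepository conversations;
    private readonly IEventDispatcher dispatcher;

    public MessageSenderService(IConversationRepository conversations, IEventDispatcher dispatcher)
    {
        this.conversations = conversations;
        this.dispatcher = dispatcher;
    }

    public async Task SendAsync(Guid conversationId, Guid authorId, string message)
    {
        var conversation = await conversations.GetByIdAsync(conversationId);
        conversation.Send(authorId, message);        // domain runs its logic, raises events

        await conversations.SaveAsync(conversation); // persist first
        foreach (var evt in conversation.DequeueEvents())
            await dispatcher.DispatchAsync(evt);     // then dispatch raised events
    }
}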
So you see the interplay between Application Services and Domain Objects is what makes it possible to use the Domain in the first place. With this structure, you also have a good implementation of the Open-Closed Principle.
Your Conversation object changes only if you are changing business logic (like who should receive the message).
Your Application service will seldom change because it simply loads and persists Conversation objects and publishes any raised events to the message broker.
Your Subscriber logic changes only if you are pushing additional data to the receiver.
I am confused about where to handle domain events in an application that is based on the hexagonal architecture. I am talking about the bounded-context-internal domain events, and not about inter-context integration/application/public events.
Background
As far as I understand, application logic (i.e. use case logic, workflow logic, interaction with infrastructure etc.) is where command handlers belong, because they are specific to a certain application design and/or UI design. Command handlers then call into the domain layer, where all the domain logic resides (domain services, aggregates, domain events). The domain layer should be independent of specific application workflows and/or UI design.
In many resources (blogs, books) I find that people implement domain event handlers in the application layer, similar to command handlers. This is because the handling of a domain event should be done in its own transaction, and since it could influence other aggregates, those aggregates must be loaded via infrastructure first. The key point, however, is this: the domain event is torn apart and turned into a series of method calls to aggregates. This important translation resides in the application layer only.
Question
I consider the knowledge of which domain events cause which effects on other aggregates to be an integral part of the domain knowledge itself. If I were to delete everything except my domain layer, shouldn't that knowledge be retained somewhere? In my view, we should place domain event handlers directly in the domain layer itself:
They could be domain services which receive both a domain event and an aggregate that might be affected by it, and transform the domain event into one or many method calls.
They could be methods on aggregates themselves which directly consume the entire domain event (i.e. the signature contains the domain event type) and do whatever they want with it.
Of course, in order to load the affected aggregate, we still need a corresponding handler in the application layer. This handler only starts a new transaction, loads the interested aggregate and calls into the domain layer.
Since I have never seen this mentioned anywhere, I wonder if I got something wrong about DDD, domain events or the difference between application layer and domain layer.
EDIT: Examples
Let's start with this commonly used approach:
// In an application layer service (called by an adapter)
public void HandleDomainEvent(OrderCreatedDomainEvent domainEvent)
{
    var restaurant = this.restaurantRepository.getByOrderKind(domainEvent.kind);
    // Translate the event into a (very different) command - I consider this
    // important business knowledge that now lives only in the application layer.
    restaurant.prepareMeal();
    this.mailService.notifyStakeholders();
}
How about this one instead?
// In an application layer service (called by an adapter)
public void HandleDomainEvent(OrderCreatedDomainEvent domainEvent)
{
    var restaurant = this.restaurantRepository.getByOrderKind(domainEvent.kind);
    this.restaurantDomainService.HandleDomainEvent(domainEvent, restaurant);
    this.mailService.notifyStakeholders();
}

// In a domain layer handler (called by the above)
public void HandleDomainEvent(OrderCreatedDomainEvent domainEvent, Restaurant restaurant)
{
    // Now this translation knowledge (call it a policy) is preserved in the domain layer only.
    restaurant.prepareMeal();
}
The problem with most event handler classes is that they are often tied to a specific messaging technology, and are therefore often placed in the infrastructure layer.
However, nothing prevents you from writing technology-agnostic handlers and using technology-aware adapters that dispatch to them.
For instance, in one application I've built I had the concept of an Action Required Policy. The policy drove the assignment/un-assignment of a given Work Item to a special workload bucket whenever the policy rule was satisfied/unsatisfied. The policy had to be re-evaluated in many scenarios such as when documents were attached to the Work Item, when the Work Item was assigned, when an external status flag was granted, etc.
I ended up creating an ActionRequiredPolicy class in the domain which had event handling methods such as void when(CaseAssigned event), and I had an event handler in the infrastructure layer that simply informed the policy.
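The shape was roughly the following (an illustrative C# sketch; only ActionRequiredPolicy and the when(CaseAssigned) method come from the description above, everything else is an assumption):

using System;

// Domain layer: technology-agnostic policy with event handling methods.
public record CaseAssigned(Guid WorkItemId, Guid AssigneeId);

public class ActionRequiredPolicy
{
    public void When(CaseAssigned evt)
    {
        // Re-evaluate the policy rule and assign/un-assign the work item
        // to the special workload bucket accordingly.
    }
}

// Infrastructure layer: technology-aware adapter that just dispatches.
public class CaseAssignedMessageListener
{
    private readonly ActionRequiredPolicy policy;
    public CaseAssignedMessageListener(ActionRequiredPolicy policy) => this.policy = policy;

    // Called by the messaging framework (e.g. a bus subscription).
    public void OnMessage(CaseAssigned evt) => policy.When(evt);
}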
I think another reason people put these in the infrastructure or application layers is that often the policies react to events by triggering new commands. Sometimes that approach feels natural, but at other times you want to make it explicit that an action must occur in response to an event and cannot happen otherwise: translating events to commands makes that less explicit.
Here's an older question I asked related to that.
Your description sounds very much like event-sourcing.
If you are event-sourcing (the state of an aggregate is derived solely from the domain events), then the event handler is in the domain layer. In fact, the general tendency would be to have a port/adapter/anti-corruption layer emit commands; the command handler for an aggregate then (if necessary) uses the event handler to derive the state of the aggregate, and based on that state and the command it emits events, which are persisted so that the event handler can derive the next state. Note that here the event handler definitely belongs in the domain layer, and the command handler likely does, too.
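In code, that split often looks like a pure fold over past events plus a decision function. A generic sketch, not tied to any particular framework:

using System;
using System.Collections.Generic;

public record CreateOrder(Guid OrderId);
public record OrderCreated(Guid OrderId);

public class OrderState
{
    public bool Created { get; private set; }

    // Event handler (domain layer): state is derived solely from past events.
    public static OrderState From(IEnumerable<object> history)
    {
        var state = new OrderState();
        foreach (var evt in history)
            state.Apply(evt);
        return state;
    }

    private void Apply(object evt)
    {
        if (evt is OrderCreated)
            Created = true;
    }

    // Command handler: decides which new events to emit from state + command.
    public IEnumerable<object> Handle(CreateOrder command)
    {
        if (Created)
            yield break; // already created; emit nothing
        yield return new OrderCreated(command.OrderId);
    }
}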
More general event driven approaches, to my mind, tend to implicitly utilize the fact that one side's event is very often another side's command.
It's worth noting that an event is in some sense often just a reified method call on an aggregate.
I follow this strategy for managing domain events:
First of all, it is good to persist them in an event store, so that you have consistency between the fact that triggered the event (for example, a user was created) and the actions it triggers (for example, sending an email to the user).
Assuming we have a command bus:
I put a decorator around it that persists the events generated by the command (sketched below, after this list).
A worker processes the event store and publishes the events outside the bounded context (BC).
Other BCs (or the same one that published it) that are interested in the event subscribe to it. The event handlers are like command handlers; they belong to the application layer.
If you use hexagonal architecture, the hexagon is split into the application layer and the domain.
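A minimal sketch of such a persisting decorator around a command bus (all interfaces here are assumed abstractions):

using System.Collections.Generic;
using System.Threading.Tasks;

public interface ICommandBus
{
    Task DispatchAsync(object command);
}

public interface IEventStore
{
    Task AppendAsync(IEnumerable<object> events);
}

public interface IRaisedEventsCollector
{
    // Assumed: collects the events raised while a command is being handled.
    IReadOnlyCollection<object> DrainAll();
}

// Decorator: dispatches the command, then persists the events it generated.
public class EventPersistingCommandBus : ICommandBus
{
    private readonly ICommandBus inner;
    private readonly IEventStore eventStore;
    private readonly IRaisedEventsCollector raisedEvents;

    public EventPersistingCommandBus(ICommandBus inner, IEventStore eventStore, IRaisedEventsCollector raisedEvents)
    {
        this.inner = inner;
        this.eventStore = eventStore;
        this.raisedEvents = raisedEvents;
    }

    public async Task DispatchAsync(object command)
    {
        await inner.DispatchAsync(command);
        await eventStore.AppendAsync(raisedEvents.DrainAll()); // a worker later publishes these outside the BC
    }
}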
I have an Order aggregate with the following commands:
CreateOrderCommand
PlaceOrderCommand
... (rest redacted as they are not pertinent to the question) ...
The PlaceOrderCommand refers to placing the Order onto an external execution venue. I have captured the behaviour for placing an order onto an external execution venue within a separate (non-CQRS) @Service. However, I am struggling (due to lack of experience with Axon) with how best to connect my @Service with the aggregate.
My normal way of thinking would have me:
Inject the @Service into the aggregate's @Autowired constructor.
When the PlaceOrderCommand is issued, use the service to place the order onto the relevant execution venue and once done emit an event (either OrderPlacedSuccessfullyEvent or ErrorInOrderPlacementEvent).
Change the aggregate's state within the relevant @EventSourcingHandler.
My question is:
does my description above of how to handle this particular use case with Axon make sense (in particular, injecting a @Service into an aggregate feels a bit off to me)?
Or is there a different best practice of how to model my scenario when using CQRS/event sourcing with Axon?
Axon requires an empty constructor in the aggregate. How does this reconcile with having the @Autowired constructor?
The other thing I was potentially considering was:
having a PlaceOrderInstructionCommand (instead of the simple PlaceOrderCommand) which emits a ReceivedPlaceOrderInstructionEvent that a separate event listener is listening for.
that event listener would have the relevant @Service injected into it and would do the placement of the Order.
then after placing the Order it would send a command (or should it emit an event?) to the aggregate informing it to update its state.
Could you please advise on what is best practice for modelling this scenario?
The PlaceOrderCommand refers to placing the Order onto an external execution venue.
I'm assuming that placing the Order onto an external execution venue means interacting with an external system. If yes, then it should not be part of your domain. In that case, you would need to raise an Integration Event.
As you mentioned, you could raise a Command like ProcessOrder from your Domain. Within that Command, you can update your Domain (e.g., set the OrderStatus to Processing) and raise an integration event like OrderArrived, which is then handled by a separate process.
From Microsoft Docs:
The purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications.
Integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications.
You would handle that integration event in a separate process (or worker) outside of your Domain. This is where your @Service would be injected. Once the order is processed successfully, you can then broadcast an integration event called OrderPlaced.
Now, any subscriber which has anything to do with placing the order would subscribe to that event. In your case, your Domain is interested in updating the state once the order is placed. Hence, you would subscribe to the OrderPlaced event within your Domain to update the status of the Order. (A sketch of this flow follows.)
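To make the flow concrete, here is a rough C# sketch of the worker and the events (the question is Axon/Java, so treat this purely as the shape of the solution; every name below is an assumption):

using System;
using System.Threading.Tasks;

public record OrderArrived(Guid OrderId);
public record OrderPlaced(Guid OrderId);

public interface IExecutionVenueService { Task PlaceAsync(Guid orderId); }
public interface IEventBus { Task PublishAsync<T>(T message); }

// Worker outside the domain: reacts to the integration event,
// talks to the external execution venue, then broadcasts the result.
public class OrderPlacementWorker
{
    private readonly IExecutionVenueService venue; // the injected service
    private readonly IEventBus bus;

    public OrderPlacementWorker(IExecutionVenueService venue, IEventBus bus)
    {
        this.venue = venue;
        this.bus = bus;
    }

    public async Task HandleAsync(OrderArrived evt)
    {
        await venue.PlaceAsync(evt.OrderId);                  // external call
        await bus.PublishAsync(new OrderPlaced(evt.OrderId)); // broadcast result
    }
}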
Hope it helps.
I'm trying to model a news post that contains information about the user who posted it. I believe the best way is to send user summary information along with the message to create a news post, but I'm a little confused about how to update that summary information if the underlying user information changes. Right now I have the following NewsPostActor and UserActor:
public interface INewsPostActor : IActor
{
    Task SetInfoAndCommitAsync(NewsPostSummary summary, UserSummary postedBy);
    Task AddCommentAsync(string content, UserSummary postedBy);
}

public interface IUserActor : IActor, IActorEventPublisher<IUserActorEvents>
{
    Task UpdateAsync(UserSummary summary);
}

public interface IUserActorEvents : IActorEvents
{
    void UserInfoChanged();
}
Where I'm getting stuck is how to have the INewsPostActor implementation subscribe to events published by IUserActor. I've seen the SubscribeAsync method in the sample code at https://github.com/Azure/servicefabric-samples/blob/master/samples/Actors/VS2015/VoiceMailBoxAdvanced/VoicemailBoxAdvanced.Client/Program.cs#L45 but is it appropriate to use this inside the NewsPostActor implementation? Will that keep an actor alive for any reason?
Additionally, I have the ability to add comments to news posts, so should the NewsPostActor also keep a subscription to each IUserActor for each unique user who comments?
Events may not be what you want to be using for this. From the documentation on events (https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-events/)
Actor events provide a way to send best effort notifications from the Actor to the clients. Actor events are designed for Actor-Client communication and should NOT be used for Actor-to-Actor communication.
It's worth considering notifying the relevant actors directly, or having an actor/service that manages this communication.
Service Fabric Actors do not yet support a Publish/Subscribe architecture. (see Azure Feedback topic for current status.)
As already answered by charisk, Actor-Events are also not the way to go because they do not have any delivery guarantees.
This means, the UserActor has to initiate a request when a name changes. I can think of multiple options:
From within IUserAccount.ChangeNameAsync() you can send requests directly to all NewsPostActors (assuming the UserAccount holds a list of its posts). However, this would introduce additional latency, since the client has to wait until all posts have been updated.
You can send the requests asynchronously. An easy way to do this would be to set a "NameChanged" property on your Actor state to true within ChangeNameAsync() and have a Timer that regularly checks this property. If it is true, it sends requests to all NewsPostActors and sets the property to false afterwards (see the sketch after this list). This would be an improvement over the previous version; however, it still implies a very strong coupling between UserAccounts and NewsPosts.
A more scalable solution would be to introduce the "Message Router" pattern. You can read more about this pattern in Vaughn Vernon's excellent book "Reactive Messaging Patterns with the Actor Model". This way you can basically set up your own Pub/Sub model by sending a "NameChanged" message to your Router. NewsPostActors can - depending on your scalability needs - subscribe to that message either directly or through some indirection (maybe a NewsPostCoordinator). And also depending on your scalability needs, the router can forward the messages either directly or asynchronously (by storing them in a queue first).
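A rough sketch of option 2's flag-and-timer logic (Service Fabric specifics such as the Actor base class, StateManager, and RegisterTimer are deliberately simplified away; UpdateAuthorNameAsync is a hypothetical method you would add to INewsPostActor):

using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical addition to the existing INewsPostActor for this sketch.
public interface INewsPostActorWithAuthorUpdate
{
    Task UpdateAuthorNameAsync(string name);
}

public class UserAccountActor // in a real implementation: derives from Actor
{
    private bool nameChanged;         // in a real actor this lives in the StateManager
    private string currentName = "";
    private readonly List<INewsPostActorWithAuthorUpdate> posts = new(); // proxies to this user's posts

    public Task ChangeNameAsync(string newName)
    {
        currentName = newName;
        nameChanged = true; // just flag it; the timer does the fan-out
        return Task.CompletedTask;
    }

    // Registered on activation (e.g. via RegisterTimer in Service Fabric)
    // to run periodically.
    public async Task OnTimerTickAsync()
    {
        if (!nameChanged) return;
        foreach (var post in posts)
            await post.UpdateAuthorNameAsync(currentName);
        nameChanged = false;
    }
}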