How would I load initial data (from the UI let's say) when integrating two Bounded Contexts via messaging?
Example:
Bounded Context #1 - Airport
Bounded Context #2 - User Agent (UI) - Responsible for displaying/updating Airplanes in the airport.
When the UI is just starting, I want to query the "Airport" for ALL airplanes.
How would I go about it?
My current thinking is to simulate a method call:
UI Context - posts a "GetAirplanes" message to the "UI Queue"
Airport Context - subscribes to the "UI Queue", sees the "GetAirplanes" message
Airport Context - posts an "AllAirplanes" message on the "Airport Queue"
UI Context - subscribes to the "Airport" queue
UI Context - receives the "AllAirplanes" message and updates the HTML table.
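Those steps can be sketched with two in-process queues and a correlation id (plain Java standing in for a real broker; all the names here are illustrative, not from any framework):

```java
import java.util.List;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RequestReplySketch {
    // A message carries a correlation id so a reply can be matched to its request.
    record Message(String type, String correlationId, List<String> payload) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> uiQueue = new LinkedBlockingQueue<>();      // Airport subscribes here
        BlockingQueue<Message> airportQueue = new LinkedBlockingQueue<>(); // UI subscribes here

        // UI Context: post GetAirplanes to the UI queue
        String correlationId = UUID.randomUUID().toString();
        uiQueue.put(new Message("GetAirplanes", correlationId, List.of()));

        // Airport Context: sees GetAirplanes, replies with AllAirplanes on its own queue
        Message request = uiQueue.take();
        if (request.type().equals("GetAirplanes")) {
            airportQueue.put(new Message("AllAirplanes", request.correlationId(),
                    List.of("A320", "B737")));
        }

        // UI Context: receives AllAirplanes and updates the table
        Message reply = airportQueue.take();
        System.out.println(reply.type() + " " + reply.payload());
    }
}
```

The correlation id is what lets the UI match the reply to the request it posted, which matters as soon as more than one request is in flight.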
A good approach to this is to build a read model from the events. A read model is just a simple DTO that is suited to your UI. This is what you query. It should be super simple and optimised for the UI.
Generally you don't query your domain at all. It is responsible for handling commands and raising event messages that represent state changes.
You subscribe to these events to ensure your read model is up to date and ready to serve your ui.
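As a rough illustration (plain Java, names invented; a real projection would subscribe to your messaging infrastructure rather than be called directly), a read-model projection might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReadModelSketch {
    // Domain event emitted by the Airport context
    record AirplaneLanded(String tailNumber, String gate) {}

    // A flat DTO shaped exactly for the UI table - no domain logic in here
    record AirplaneRow(String tailNumber, String gate) {}

    // The projection subscribes to events and keeps the read model current
    static final Map<String, AirplaneRow> readModel = new LinkedHashMap<>();

    static void on(AirplaneLanded event) {
        readModel.put(event.tailNumber(), new AirplaneRow(event.tailNumber(), event.gate()));
    }

    public static void main(String[] args) {
        on(new AirplaneLanded("N123", "A4"));
        on(new AirplaneLanded("N456", "B2"));
        // The UI queries the read model directly - no call into the domain
        readModel.values().forEach(row ->
                System.out.println(row.tailNumber() + " at gate " + row.gate()));
    }
}
```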
I have a post that you may find helpful, in which I go into a bit more detail.
Overview of CQRS and event sourcing
Related
I am creating an application using Domain-Driven Design, and the need for a process manager has come up in order to coordinate multiple use cases from different bounded contexts. I've seen that in order for the process manager to correlate event response data with specific requests, it uses correlation ids.
So, suppose the process manager creates this correlation id and also creates a command that triggers a specific use case. It then wants to pass this id and/or some other metadata (through the command) to the event that will eventually be produced by the use case.
But where should this info be passed? Should the Aggregate have domain logic like, for example, CreateUser(userProps, metadata), and then emit a UserCreated(userProps, metadata) event? It seems ugly, and it is not the domain's responsibility to add the metadata to every method on the aggregate.
How can this metadata end up in that event in a clean way? The event is eventually an integration event, because the domain event UserCreated is wrapped and sent as an integration event with a specific schema that other bounded contexts are aware of.
Thank you!
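For what it's worth, a common way to keep the metadata out of the aggregate is to have the application/messaging layer wrap the plain domain event in an envelope when publishing it as an integration event, so the aggregate never sees the correlation id. A rough sketch (plain Java, all names invented):

```java
import java.util.Map;

public class EnvelopeSketch {
    // The aggregate emits a plain domain event with no metadata
    record UserCreated(String userId, String name) {}

    // The application/messaging layer wraps it for cross-context integration
    record IntegrationEvent(Object payload, Map<String, String> metadata) {}

    public static void main(String[] args) {
        // The incoming command carried a correlation id from the process manager
        String correlationId = "pm-42";

        // Domain layer: the aggregate raises the event, knowing nothing about correlation
        UserCreated domainEvent = new UserCreated("u-1", "Alice");

        // Application layer: attach the metadata only when publishing across contexts
        IntegrationEvent outbound = new IntegrationEvent(
                domainEvent, Map.of("correlationId", correlationId));

        System.out.println(outbound.metadata().get("correlationId"));
    }
}
```

The aggregate's method signatures stay clean (CreateUser(userProps) only), and the correlation id travels in the envelope, which is infrastructure's concern.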
How to design async flows in Lagom?
Problem faced: In our product we have a Lead aggregate which has a User id (representing the owner of the lead). A User has a limitation which says one user can have a maximum of 10 leads associated with them. We designed this by creating a separate ResourceManagement service: when a user asks to pick a lead, we send a command to the LeadAggregate, which generates a LeadPickRequested event. A process manager listens to the event and asks ResourceManagement for the resource; on success it sends a MarkAsPicked command to the LeadAggregate, and on that it sends a push notification to the user that the lead is picked. But from a UI-building perspective this is very difficult, and the same cannot be done when exposing our API to third parties.
One solution we have implemented: when a request is received by the service, save a RequestId-to-Future mapping and add the request id to the command. When the LeadAggregate finally transitions into the Picked or PickFailed state, a process manager listens to the event, checks whether a future exists for that request id, and if so completes the future with the correct response. This way it works as a sync API for the end user.
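The RequestId-vs-Future approach described here can be sketched in plain Java (names invented; a real implementation would also time out and clean up abandoned futures):

```java
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SyncOverAsyncSketch {
    // Pending requests, keyed by the request id carried inside the command
    static final ConcurrentMap<String, CompletableFuture<String>> pending =
            new ConcurrentHashMap<>();

    // Called when the HTTP request arrives: register a future, send the command
    static CompletableFuture<String> pickLead(String leadId) {
        String requestId = UUID.randomUUID().toString();
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        sendPickCommand(leadId, requestId); // the command carries the request id
        return future;
    }

    // Simulated async path: eventually the process manager sees the Picked
    // event and completes the matching future
    static void sendPickCommand(String leadId, String requestId) {
        CompletableFuture.runAsync(() ->
                onLeadPicked(requestId, "Lead " + leadId + " picked"));
    }

    static void onLeadPicked(String requestId, String result) {
        CompletableFuture<String> future = pending.remove(requestId);
        if (future != null) future.complete(result);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pickLead("L-7").get()); // blocks until the event arrives
    }
}
```

Note the future is removed from the map on completion; in production you would also want a timeout so a lost event cannot leave the caller blocked forever.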
Any better solution for this?
If you want to provide a synchronous API, I only see two options:
Design your domain model so that Lead creation logic, the "10 leads max" rule and the list of leads for a user are co-located in the same Aggregate root (hint: an AR can spawn another AR).
Accept involving more than one existing Aggregate in the same transaction.
The tradeoff depends on a transactional analysis of the aggregates in question - will reading from them in the same transaction lead to a lot of locking and race conditions?
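Option 1 could look roughly like this (a plain Java sketch; the class and method names are invented, and persistence/transactions are omitted): one aggregate root owns both the list of leads and the invariant, so "10 leads max" is enforced atomically.

```java
import java.util.ArrayList;
import java.util.List;

public class UserLeadsSketch {
    // One aggregate root co-locates the lead list and the business rule,
    // so the invariant can be checked and the lead spawned in one transaction.
    static class UserLeads {
        private static final int MAX_LEADS = 10;
        private final List<String> leadIds = new ArrayList<>();

        // Returns the id of the newly spawned Lead aggregate
        String pickLead(String leadId) {
            if (leadIds.size() >= MAX_LEADS) {
                throw new IllegalStateException("User already has " + MAX_LEADS + " leads");
            }
            leadIds.add(leadId);
            return leadId;
        }
    }

    public static void main(String[] args) {
        UserLeads user = new UserLeads();
        for (int i = 1; i <= 10; i++) user.pickLead("L-" + i);
        try {
            user.pickLead("L-11"); // the 11th pick violates the invariant
        } catch (IllegalStateException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

Because the rule lives inside one consistency boundary, the caller gets an immediate synchronous accept/reject, with no process manager round trip.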
Hi, I have the following scenario:
There are two separate applications:
ShopManagement - handles the registration of shops. Contains the Shop aggregate
and other aggregates.
NotifyService - sends the mail, SMS, and notifications. Contains the Email,
SMS, and Notification aggregates.
Both applications are built using CQRS/ES with DDD.
The technology used to build them is Spring with Axon, with RabbitMQ for messaging.
Step 1 -
A shop is registered by issuing a ShopRegistrationCommand (of course this is handled by the Shop aggregate, which changes its status when the event is fired), which fires a ShopRegistredEvent.
Step 2 -
When the ShopRegistredEvent is fired, an EventHandler listens for it and sends a SendEmailVerificationCommand (you can think of it as a request) to NotifyService.
Step 3 -
The same command (SendEmailVerificationCommand) is also handled by the Shop aggregate, which then fires a MailVerifcationSendedEvent; this event changes the verification status of the Shop to "MailInSendingProcess".
Step 4 -
On the other side, NotifyService handles that command (SendEmailVerificationCommand, or request) and sends the mail; if the email is sent successfully, NotifyService fires a VerificationEmailSentEvent.
Step 5 -
The VerificationEmailSentEvent (fired by NotifyService) is picked up by the ShopManagement application using an event listener; this listener then issues a VerificationMailSendedSuccesfullyCommand for the Shop aggregate, and the Shop aggregate fires a VerificationEmailDeliveredEvent, which changes the verification status to "MailDelivered".
Step 6 -
If the mail sending fails for any reason, NotifyService fires another event, VerificationEmailSendingUnsuccessfullEvent, which is handled by a ShopManagement event listener that issues another command, VerificationEmailUnsuccessfull, to the Shop aggregate; the Shop aggregate then fires a VerficationMailSendingFailedEvent, which changes the verification status to "MailSendingFailed".
Here the two BCs communicate using requests and events.
Question -
1. Can we send commands to another bounded context as I am doing in my application, or is there a different approach?
2. Is tracking the status of the email sending part of the Shop aggregate, or should I create another aggregate such as EmailVerification, given that I have to resend failed mails using a scheduler?
3. Is there any other way to manage this kind of situation?
I have seen this back and forth between services for verification happen before, but it is typically a process I'd prefer to avoid. It requires intricate coordination between services for something relatively simple, and that intricacy will typically cause pain in the future.
Now to answering your questions:
1. This should be fine, I'd say. A command is nothing more than a form of message, just like the queries or events in your system. The downside might be that the command-sending Bounded Context needs to be aware of the 'language' the other Bounded Context speaks. Some form of anti-corruption layer might be in order here: think of this as a service which receives the command-sending request of BC-1 in its language and translates it into the language of BC-2. From an Axon Framework perspective I'd also recommend setting up the DistributedCommandBus, as it contains a component (the CommandRouter, to be precise) which knows which commands each node can handle.
2 & 3. This wholly depends on how your domain is modeled. On face value, I'd say a Shop aggregate typically isn't aware of any emails being sent, so from that angle I'd say 'no, don't include it in the aggregate'. A Saga would likely be a better fit for sending a command to your NotifyService. That Saga would listen to the ShopRegistredEvent and, in response, publish the SendEmailVerificationCommand to the NotifyService. The Saga is able to either act on the callback of the SendEmailVerificationCommand or handle the VerificationEmailSentEvent and VerificationEmailSendingUnsuccessfullEvent to perform the required follow-up logic after a successful or unsuccessful email.
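Framework details aside, the shape of such a Saga/process manager is roughly this (plain Java, no Axon dependency; the Consumer here is just a stand-in for a real command gateway, and the event/command names mirror the question):

```java
import java.util.function.Consumer;

public class RegistrationSagaSketch {
    record ShopRegistredEvent(String shopId, String email) {}
    record SendEmailVerificationCommand(String shopId, String email) {}

    // The saga reacts to the domain event and emits the cross-context command;
    // the Shop aggregate itself never knows about email sending.
    static class EmailVerificationSaga {
        private final Consumer<SendEmailVerificationCommand> commandGateway;

        EmailVerificationSaga(Consumer<SendEmailVerificationCommand> commandGateway) {
            this.commandGateway = commandGateway;
        }

        void on(ShopRegistredEvent event) {
            commandGateway.accept(
                    new SendEmailVerificationCommand(event.shopId(), event.email()));
        }
    }

    public static void main(String[] args) {
        // Wire the saga to a fake gateway that just logs what would be dispatched
        EmailVerificationSaga saga = new EmailVerificationSaga(cmd ->
                System.out.println("To NotifyService: verify " + cmd.email()));
        saga.on(new ShopRegistredEvent("shop-1", "owner@example.com"));
    }
}
```

In Axon the same shape would be a @Saga class with @SagaEventHandler methods dispatching through the CommandGateway, but the responsibility split is the point: the saga owns the process state, not the aggregate.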
Hope this gives you some insights Ashwani!
In the DDD literature, the returning-domain-events pattern is described as a way to manage domain events. Conceptually, the aggregate root keeps a list of domain events, populated when you perform operations on it.
When the operation on the aggregate root is done, the DB transaction is completed at the application service layer, and then the application service iterates over the domain events, calling an event dispatcher to handle those messages.
My question concerns the way we should handle transactions at this moment. Should the event dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain-event iteration where it calls the domain event dispatcher? When the dispatcher uses an infrastructure mechanism like RabbitMQ, the question is irrelevant, but when the domain events are handled in-process, it matters.
A sub-question related to my question: what is your opinion on using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener in NHibernate) to kick off the domain-events iteration on the aggregate root instead of doing it manually in the application service? Does it add too much coupling? Or is it better because it does not require the same code to be written for each use case (looping over the aggregate's domain events, and potentially creating the new transaction if that is not inside the dispatcher)?
My question concerns the way we should handle transactions at this moment. Should the event dispatcher be responsible for managing a new transaction for each event it processes? Or should the application service manage the transaction inside the domain-event iteration where it calls the domain event dispatcher?
What you are asking here is really a specialized version of this question: should we ever update more than one aggregate in a single transaction?
You can find a lot of assertions that the answer is "no". For instance, Vaughn Vernon (2014):
A properly designed aggregate is one that can be modified in any way required by the business with its invariants completely consistent within a single transaction. And a properly designed bounded context modifies only one aggregate instance per transaction in all cases.
Greg Young tends to go further, pointing out that adhering to this rule allows you to partition your data by aggregate id. In other words, the aggregate boundaries are an explicit expression of how your data can be organized.
So your best bet is to try to arrange your more complicated orchestrations such that each aggregate is updated in its own transaction.
My question is about the way we handle the transaction for the event dispatched after the initial aggregate is altered and the initial transaction is completed. The domain event must be handled, and its processing could need to alter another aggregate.
Right, so if we're going to alter another aggregate, then there should (per the advice above) be a new transaction for the change to the aggregate. In other words, it's not the routing of the domain event that determines if we need another transaction -- the choice of event handler determines whether or not we need another transaction.
Just because event handling happens in-process doesn't mean the originating application service has to orchestrate all transactions happening as a consequence of the events.
If we take in-process event handling via the Observable pattern for instance, each Observer will be responsible for creating its own transaction if it needs one.
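That can be sketched like so (plain Java; inTransaction is a made-up stand-in for whatever unit-of-work mechanism your infrastructure provides):

```java
import java.util.List;
import java.util.function.Consumer;

public class ObserverTransactionSketch {
    record DomainEvent(String name) {}

    // Stand-in for a real unit of work / DB transaction
    static void inTransaction(String owner, Runnable work) {
        System.out.println("BEGIN tx for " + owner);
        work.run();
        System.out.println("COMMIT tx for " + owner);
    }

    // Each observer decides for itself whether it needs a transaction
    static final List<Consumer<DomainEvent>> observers = List.of(
            e -> inTransaction("audit-log", () ->
                    System.out.println("  audit: " + e.name())),
            e -> System.out.println("  email: " + e.name() + " (no tx needed)"));

    public static void main(String[] args) {
        // The dispatcher only routes; it opens no transaction of its own
        DomainEvent event = new DomainEvent("OrderShipped");
        observers.forEach(o -> o.accept(event));
    }
}
```

The dispatcher stays transaction-agnostic, which keeps the "one aggregate per transaction" decision where it belongs: inside the handler that actually touches an aggregate.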
What is your opinion on using ORM hooks (i.e. IPostInsertEventListener, IPostDeleteEventListener, IPostUpdateEventListener in NHibernate) to kick off the domain-events iteration on the aggregate root instead of doing it manually in the application service?
Wouldn't this have to happen during the original DB transaction, effectively turning everything into immediate consistency (if events are handled in process)?
I'm trying to model a news post that contains information about the user that posted it. I believe the best way is to send user summary information along with the message to create a news post, but I'm a little confused about how to update that summary information if the underlying user information changes. Right now I have the following NewsPostActor and UserActor:
public interface INewsPostActor : IActor
{
Task SetInfoAndCommitAsync(NewsPostSummary summary, UserSummary postedBy);
Task AddCommentAsync(string content, UserSummary postedBy);
}
public interface IUserActor : IActor, IActorEventPublisher<IUserActorEvents>
{
Task UpdateAsync(UserSummary summary);
}
public interface IUserActorEvents : IActorEvents
{
void UserInfoChanged();
}
Where I'm getting stuck is how to have the INewsPostActor implementation subscribe to events published by IUserActor. I've seen the SubscribeAsync method in the sample code at https://github.com/Azure/servicefabric-samples/blob/master/samples/Actors/VS2015/VoiceMailBoxAdvanced/VoicemailBoxAdvanced.Client/Program.cs#L45 but is it appropriate to use this inside the NewsPostActor implementation? Will that keep an actor alive for any reason?
Additionally, I have the ability to add comments to news posts, so should the NewsPostActor also keep a subscription to each IUserActor for each unique user who comments?
Events may not be what you want to be using for this. From the documentation on events (https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-events/)
Actor events provide a way to send best effort notifications from the Actor to the clients. Actor events are designed for Actor-Client communication and should NOT be used for Actor-to-Actor communication.
It's worth considering notifying the relevant actors directly, or having an actor/service that will manage this communication.
Service Fabric Actors do not yet support a Publish/Subscribe architecture. (see Azure Feedback topic for current status.)
As charisk already answered, Actor events are also not the way to go because they do not have any delivery guarantees.
This means the UserActor has to initiate a request when a name changes. I can think of multiple options:
From within IUserAccount.ChangeNameAsync() you can send requests directly to all NewsPostActors (assuming the UserAccount holds a list of its posts). However, this would introduce additional latency, since the client has to wait until all posts have been updated.
You can send the requests asynchronously. An easy way to do this would be to set a "NameChanged" property on your Actor state to true within ChangeNameAsync() and have a Timer that regularly checks this property. If it is true, the timer sends requests to all NewsPostActors and sets the property to false afterwards. This would be an improvement on the previous version; however, it still implies a very strong coupling between UserAccounts and NewsPosts.
A more scalable solution would be to introduce the "Message Router" pattern. You can read more about this pattern in Vaughn Vernon's excellent book "Reactive Messaging Patterns with the Actor Model". This way you can basically set up your own Pub/Sub model by sending a "NameChanged" message to your Router. NewsPostActors can - depending on your scalability needs - subscribe to that message either directly or through some indirection (maybe a NewsPostCoordinator). And also depending on your scalability needs, the router can forward the messages either directly or asynchronously (by storing them in a queue first).
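A bare-bones sketch of that router idea (plain Java rather than Service Fabric actors; all the type names are invented for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class MessageRouterSketch {
    record NameChanged(String userId, String newName) {}

    // The router owns the subscriptions; publishers and subscribers
    // never reference each other directly.
    static class Router {
        private final Map<String, List<Consumer<NameChanged>>> subscribers =
                new ConcurrentHashMap<>();

        void subscribe(String userId, Consumer<NameChanged> handler) {
            subscribers.computeIfAbsent(userId, k -> new CopyOnWriteArrayList<>())
                       .add(handler);
        }

        void publish(NameChanged message) {
            subscribers.getOrDefault(message.userId(), List.of())
                       .forEach(h -> h.accept(message));
        }
    }

    public static void main(String[] args) {
        Router router = new Router();
        // Two news posts subscribe to changes of the same user
        router.subscribe("u-1", m -> System.out.println("post-1 now shows " + m.newName()));
        router.subscribe("u-1", m -> System.out.println("post-2 now shows " + m.newName()));
        // The UserActor only ever talks to the router
        router.publish(new NameChanged("u-1", "Alice Smith"));
    }
}
```

In a real actor system each subscriber entry would be an actor reference rather than a lambda, and publish would enqueue a message instead of invoking the handler inline, but the decoupling is the same: the user side never needs to know which posts exist.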