Async Flows Design in Lagom or Microservices

How do you design async flows in Lagom?
Problem faced: in our product we have a Lead aggregate which has a User Id (representing the owner of the lead). Users have a limitation: one user can have at most 10 leads associated with them. We designed this by creating a separate ResourceManagement service. When a user asks to pick a lead, we send a command to the Lead aggregate, which generates a LeadPickRequested event. A process manager listens to that event and asks ResourceManagement for the resource; on success it sends a MarkAsPicked command to the Lead aggregate, and on that we send a push notification to the user that the lead is picked. But from a UI-building perspective this is very difficult, and the same approach cannot be used when exposing our API to third parties.
One solution we have implemented: when a request is received on the service, we save a mapping of RequestId to a request Future and add the request id to the command. When the Lead aggregate finally transitions into the Picked (or PickFailed) state, a process manager listens to the event, checks whether a Future exists for that request id, and completes it with the correct response. This way the flow works as a synchronous API for the end user.
Is there a better solution for this?
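The RequestId-to-Future correlation described above can be sketched as follows. This is a language-agnostic illustration (Lagom itself is Java/Scala), and all class and method names here are hypothetical:

```python
import asyncio
import uuid

class LeadService:
    def __init__(self):
        self.pending = {}  # request_id -> Future awaiting the terminal event

    async def pick_lead(self, lead_id, user_id, timeout=5.0):
        request_id = str(uuid.uuid4())
        future = asyncio.get_running_loop().create_future()
        self.pending[request_id] = future
        # Fire the command; the aggregate/process manager runs asynchronously.
        await self.send_command("PickLead", lead_id, user_id, request_id)
        try:
            # The caller blocks here until the terminal event arrives,
            # so the async flow looks synchronous from the outside.
            return await asyncio.wait_for(future, timeout)
        finally:
            self.pending.pop(request_id, None)

    def on_terminal_event(self, event):
        # Called by the process manager when LeadPicked / LeadPickFailed arrives.
        future = self.pending.get(event["request_id"])
        if future and not future.done():
            future.set_result(event)

    async def send_command(self, name, lead_id, user_id, request_id):
        # Stand-in for the real command dispatch; here we simulate the async
        # round-trip by emitting the terminal event on the next loop tick.
        asyncio.get_running_loop().call_soon(
            self.on_terminal_event,
            {"request_id": request_id, "status": "Picked", "lead_id": lead_id},
        )
```

The timeout matters: if the process manager never produces a terminal event, the caller gets a failure rather than hanging forever.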

If you want to provide a synchronous API, I only see two options:
Design your domain model so that Lead creation logic, the "10 leads max" rule and the list of leads for a user are co-located in the same Aggregate root (hint: an AR can spawn another AR).
Accept involving more than one existing Aggregate in the same transaction.
The tradeoff depends on a transactional analysis of the aggregates in question: will reading from them in the same transaction lead to a lot of locking and race conditions?
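Option 1 can be sketched as follows, with assumed names: a UserLeads aggregate root owns the "10 leads max" invariant and spawns Lead aggregates, so the check and the creation happen in one transaction:

```python
import uuid

MAX_LEADS = 10

class UserLeads:
    def __init__(self, user_id):
        self.user_id = user_id
        self.lead_ids = []

    def pick_lead(self):
        # The invariant lives where the data lives: checked and updated atomically.
        if len(self.lead_ids) >= MAX_LEADS:
            raise ValueError(f"user {self.user_id} already owns {MAX_LEADS} leads")
        lead_id = str(uuid.uuid4())
        self.lead_ids.append(lead_id)
        # An AR spawning another AR: the new Lead is returned to be saved in
        # the same transaction (or a LeadPicked event carrying its id is emitted).
        return Lead(lead_id, self.user_id)

class Lead:
    def __init__(self, lead_id, owner_id):
        self.lead_id = lead_id
        self.owner_id = owner_id
```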

Related

How can correlation id from a process manager be passed to an integration event?

I am creating an application using Domain-Driven Design, and the need for a process manager has come up in order to coordinate multiple use cases from different bounded contexts. I've seen that, for the process manager to correlate event response data with specific requests, it uses correlation ids.
So, suppose the process manager creates this correlation id and also creates a command that triggers a specific use case. It then wants to pass this id and/or some other metadata (through the command) to the event that will eventually be produced by the use case.
But where should this info be passed? Should the Aggregate have domain logic like, for example, CreateUser(userProps, metadata) and then emit a UserCreated(userProps, metadata) event? It seems ugly, and not the domain's responsibility, to have to add the metadata to every method on the aggregate.
How can this metadata end up in that event in a clean way? The event is eventually an integration event, because the domain event UserCreated is wrapped and sent as an integration event with a specific schema that other bounded contexts are aware of.
Thank you!
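One common way to keep the metadata out of the aggregate's method signatures is to attach it at the messaging layer: the command travels in an envelope, and the dispatcher copies the correlation id from the command envelope onto the integration-event envelope. A minimal sketch, with all names illustrative:

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    payload: dict          # the domain message itself
    correlation_id: str    # set by the process manager
    causation_id: str = ""

class UserAggregate:
    # Domain logic knows nothing about correlation ids.
    def create_user(self, user_props):
        return {"type": "UserCreated", "props": user_props}

def handle_command(envelope: Envelope) -> Envelope:
    aggregate = UserAggregate()
    domain_event = aggregate.create_user(envelope.payload["props"])
    # The infrastructure wraps the domain event as an integration event,
    # propagating the metadata without touching the domain model.
    return Envelope(
        payload=domain_event,
        correlation_id=envelope.correlation_id,
        causation_id=envelope.payload.get("message_id", ""),
    )
```

The aggregate's API stays clean; only the dispatching infrastructure knows about correlation and causation ids.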

How to send message to Microsoft EventHub with Db Transaction?

I want to send an event to Microsoft Event Hubs within a DB transaction.
Explanation:
The user hits an order-creation endpoint.
OrderService accepts the order and puts it into the DB.
Now OrderService wants to send that orderId as an event to other services using Event Hubs.
How can I achieve transactional behaviour for steps 2 and 3?
I know these solutions:
Outbox pattern: I put the message in another table within the order-creation transaction. A cron/scheduler takes messages from that table, publishes them, and marks them delivered; the next run only takes undelivered messages.
Use the database's change/audit log and a library that takes care of this. The library binds the database table to Event Hubs, and on every update it sends the change to Event Hubs.
I wanted to know: is there any built-in transactional feature in Event Hubs?
Or is there a better way to handle this?
There is no concept of transactions within Event Hubs at present. I'm not sure, given the limited context that was shared, that Event Hubs is the best fit for your scenario. Azure Service Bus has transaction support and may be a more natural fit for your intended flow.
In this kind of distributed scenario, regardless of which message broker you decide on, I would advise embracing eventual consistency and considering a pattern similar to:
Your order creation endpoint receives a request
The order creation endpoint assigns a unique identifier for the request and emits the event to Event Hubs; if the send was successful, it returns a 202 (Accepted) to the caller along with a Retry-After header indicating how long the caller should wait before checking the status of the order's creation.
Some process is responsible for reading events from the Event Hub and creating that order within the database. Depending on your ecosystem's tolerance, this may be a dedicated process or could be something like an Azure Function with an Event Hubs trigger.
Other event consumers interested in orders will also see the creation request and will call into your order service or database for the details using the unique identifier that was assigned by the order creation endpoint; this may or may not be the official order number within the system.
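The accept-then-process flow above can be sketched as follows, with in-memory stand-ins for the database and the Event Hub and all names illustrative:

```python
import uuid

orders = {}        # stand-in for the database
event_stream = []  # stand-in for the Event Hub

def create_order_endpoint(payload):
    # Assign the id up front, emit, and answer 202 + Retry-After immediately.
    request_id = str(uuid.uuid4())
    event_stream.append(
        {"type": "OrderRequested", "id": request_id, "payload": payload}
    )
    return 202, {"Retry-After": "5", "Location": f"/orders/{request_id}"}

def consumer():
    # e.g. an Azure Function with an Event Hubs trigger: reads events and
    # creates the order within the database.
    while event_stream:
        event = event_stream.pop(0)
        orders[event["id"]] = {"status": "Created", **event["payload"]}

def order_status_endpoint(request_id):
    # Polled by the caller after the Retry-After interval.
    order = orders.get(request_id)
    return (200, order) if order else (404, None)
```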

CQRS/ES - Way of Communication between two Bounded Context

Hi, I have the following scenario.
There are two separate applications:
ShopManagement – handles the registration of shops; contains the Shop aggregate and other aggregates.
NotifyService – sends mail, SMS and notifications; contains the Email, SMS and Notification aggregates.
Both applications are built using CQRS/ES with DDD.
The technology used is Spring with Axon, and RabbitMQ for messaging.
Step 1 -
A shop is registered by issuing a ShopRegistrationCommand (handled by the Shop aggregate, which changes its status when the event is fired), which fires a ShopRegistredEvent.
Step 2 -
When the ShopRegistredEvent is fired, an event handler listens to it and sends a SendEmailVerificationCommand (you could call it a request, or say it acts as one) to NotifyService.
Step 3 -
The same SendEmailVerificationCommand is also handled by the Shop aggregate, which then fires a MailVerificationSentEvent; this event changes the verification status of the Shop to "MailInSendingProcess".
Step 4 -
On the other side, NotifyService handles that SendEmailVerificationCommand (or request) and sends the mail. If the email is sent successfully, NotifyService fires a VerificationEmailSentEvent.
Step 5 -
The VerificationEmailSentEvent (fired by NotifyService) is picked up by the ShopManagement application via an event listener, which issues a VerificationMailSentSuccessfullyCommand to the Shop aggregate; the aggregate then fires a VerificationEmailDeliveredEvent, which changes the verification status to "MailDelivered".
Step 6 -
If sending the mail fails for any reason, NotifyService fires another event, VerificationEmailSendingUnsuccessfullEvent, which is handled by a ShopManagement event listener that issues a VerificationEmailUnsuccessfull command to the Shop aggregate; the aggregate then fires a VerificationMailSendingFailedEvent, which changes the verification status to "MailSendingFailed".
Here the two BCs communicate using requests and events.
Questions -
Can we send commands to another bounded context, as I am doing in my application, or is there a better approach?
Is tracking the status of the email sending part of the Shop aggregate, or should I create another aggregate like EmailVerification, since I have to resend failed mails using a scheduler?
Is there any other way to manage this kind of situation?
I have seen this back and forth between services for verification happen before, but it is typically a process I'd prefer to avoid. It requires intricate teamwork with services for something relatively simple; the intricacy will typically cause pain in the future.
Now to answering your questions:
1. This should be fine, I'd say. A command is nothing more than a form of message, just like the queries or events in your system. The downside might be that the command-sending bounded context has to be aware of the 'language' the other bounded context speaks. Some form of anti-corruption layer might be in order here: think of it as a service which receives the command-sending request of BC-1 in its language and translates it into the language of BC-2. From an Axon Framework perspective, I'd also recommend setting up the DistributedCommandBus, as it contains a component (the CommandRouter, to be precise) which is aware of which commands each node can handle.
2 & 3. This wholly depends on how your domain is modeled. On face value, I'd say a Shop aggregate typically isn't aware of any emails being sent, so from that angle I'd say: no, don't include it in the aggregate. A Saga would likely be a better fit for sending a command to your NotifyService. That Saga would listen to the ShopRegistredEvent and, in response, publish the SendEmailVerificationCommand to the NotifyService. The Saga is able to either act on the callback of the SendEmailVerificationCommand or handle the VerificationEmailSentEvent and VerificationEmailSendingUnsuccessfullEvent to perform the required follow-up logic after a successful (or unsuccessful) email.
Hope this gives you some insights Ashwani!
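The anti-corruption-layer idea from the answer can be sketched as a thin translator between the two contexts. This is a framework-agnostic illustration, not Axon API; field names are assumptions:

```python
def shop_event_to_notify_command(event: dict) -> dict:
    # ShopManagement side: ShopRegistredEvent carries shop-centric fields.
    if event["type"] != "ShopRegistredEvent":
        raise ValueError(f"unexpected event type: {event['type']}")
    # NotifyService side: it only understands generic email commands, so
    # neither bounded context needs to know the other's domain model.
    return {
        "type": "SendEmailVerificationCommand",
        "recipient": event["ownerEmail"],
        "template": "shop-verification",
        "correlationId": event["shopId"],
    }
```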

CQRS and DDD boundaries

I have a couple of questions to which I am not finding any exact answer. I've used CQRS before, but probably I was not using it properly.
Say there are five services in the domain: Gateway, Sales, Payments, Credit and Warehouse. During the process of a user registering with the application, the front end submits a few commands; once the user is registered, the same front end then sends a few more commands to create an order and apply for credit.
Now, what I usually do is create a gateway which receives all public commands; these are validated and, if valid, transformed into domain commands. I only use events to store data, and if one service needs some action to be performed in another service, a domain command is sent directly from one service to the other. But I've seen other systems where event handlers are used for more than storing data. So my questions are: what are the limits to what event handlers can do? And is it correct to send commands between services when a specific service requires that another service performs an action, or is it more correct to have the initial service raise an event and let the handler in the other service perform that action? I am asking this because I've seen events like INeedCreditApproved when I was expecting to see a domain command like ApproveCredit.
Any input is welcome.
You're missing an important concept here: Sagas (process managers). You have a long-running workflow, and it's better expressed centrally.
Sagas listen to events and emit commands. So an OrderAccepted event will start a Saga, which then emits ApproveCredit and ReserveStock commands, to be sent to the Credit and Warehouse services respectively. The Saga can then listen to command success/failure events and compensate appropriately, for example by emitting a SendEmail command or whatever else.
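The Saga just described can be sketched framework-agnostically; the event/command names follow the answer, everything else is illustrative:

```python
class OrderSaga:
    def __init__(self, send_command):
        self.send = send_command   # hands commands to the bus
        self.credit_ok = None
        self.stock_ok = None

    def on_order_accepted(self, order_id):
        # The Saga starts here and fans out commands to both services.
        self.order_id = order_id
        self.send({"type": "ApproveCredit", "orderId": order_id})
        self.send({"type": "ReserveStock", "orderId": order_id})

    def on_credit_approved(self):
        self.credit_ok = True
        self._maybe_finish()

    def on_stock_reserved(self):
        self.stock_ok = True
        self._maybe_finish()

    def on_stock_rejected(self):
        # Compensate: undo what already succeeded, notify the customer.
        self.stock_ok = False
        self.send({"type": "RefundCredit", "orderId": self.order_id})
        self.send({"type": "SendEmail", "orderId": self.order_id,
                   "reason": "out of stock"})

    def _maybe_finish(self):
        if self.credit_ok and self.stock_ok:
            self.send({"type": "ConfirmOrder", "orderId": self.order_id})
```

The key point is that the workflow logic lives in one place; the Credit and Warehouse services stay ignorant of each other.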
One year ago I was sending commands the way you describe ("send commands between services by event handlers when a specific service requires that some other service performs an action"), but then I made the stupid decision to switch to using events, as you said, "to have the initial event raise an event and let the handler in the other service perform that action in the event handler", and it worked at first. It was the most stupid decision I could have made; now I am switching back to sending commands from event handlers.
You can see that other people, like Rinat, do similar things with event ports/receptors, and it is working for them, I think:
http://abdullin.com/journal/2012/7/22/bounded-context-is-a-team-working-together.html
http://abdullin.com/journal/2012/3/31/anatomy-of-distributed-system-a-la-lokad.html
Good luck

Domain driven design and domain events

I'm new to DDD, and I'm reading articles now to get more information. One of the articles focuses on domain events (DEs). For example, sending an email is a domain event raised after some criterion is met while executing a piece of code.
A code example shows one way of handling domain events and is followed by this paragraph:
Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services. Instead, prefer using one-way messaging to communicate to something else which does those blocking activities.
My questions are
Is this a general problem in handling DEs, or is it just a concern of the solution in the article?
If domain events are raised in a transaction and the system does not handle them synchronously, how should they be handled?
If I decide to serialize these events and let a scheduler (or some other mechanism) execute them, what happens when the transaction is rolled back? (In the article, the event is raised in code executed within the transaction.) Who will cancel them when they are not persisted to the database?
Thanks
It's a general problem, period, never mind DDD.
In general, in any system which is required to respond in a performant manner (e.g. a web server), any long-running activities should be handled asynchronously to the triggering process.
This means a queue.
Rolling back your transaction should remove the item from the queue.
Of course, you now need additional mechanisms to handle the situation where the item on the queue fails to process (i.e. the email isn't sent). You also need to allow for this in your triggering code: having a subsequent process rely on the earlier process having already occurred is going to cause issues at some point.
In short, your queueing mechanism should itself be transactional and allow for retries, and you need to think about the whole chain of events as a workflow.
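The "rollback removes the item from the queue" point can be sketched with a unit of work that buffers domain events and only hands them to the dispatcher on commit; names are illustrative:

```python
class UnitOfWork:
    def __init__(self, dispatch):
        self.dispatch = dispatch   # hands events to the async queue/worker
        self.pending_events = []

    def raise_event(self, event):
        # Called from domain code; nothing leaves the transaction yet.
        self.pending_events.append(event)

    def commit(self):
        # Only now do events reach the queue (e.g. for the SMTP worker).
        for event in self.pending_events:
            self.dispatch(event)
        self.pending_events.clear()

    def rollback(self):
        # A rollback discards the events before they ever reach the queue,
        # so nothing needs to be "cancelled" downstream.
        self.pending_events.clear()
```

For full safety the dispatch itself should be transactional with the business data (see the outbox pattern discussed in the Event Hubs question above).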
