How can a correlation id from a process manager be passed to an integration event?

I am creating an application using Domain-Driven Design, and the need for a process manager has come up in order to coordinate multiple use cases from different bounded contexts. I've seen that, in order for the process manager to correlate event responses with specific requests, it uses correlation ids.
So, suppose the process manager creates this correlation id and also creates a command that triggers a specific use case. It then wants to pass this id and/or some other metadata (through the command) to the event that will eventually be produced by the use case.
But where should this info be passed? Should the Aggregate have domain logic like, for example, CreateUser(userProps, metadata) that then emits a UserCreated(userProps, metadata) event? It seems ugly, and not the domain's responsibility, to have to add the metadata to every method on the aggregate.
How can this metadata end up in that event in a clean way? The event is ultimately an integration event, because the domain event UserCreated is wrapped and sent as an integration event with a specific schema that other bounded contexts are aware of.
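For illustration, here is a rough sketch of the kind of wrapping described above (all names are made up, not taken from any particular framework), where the correlation metadata travels on the integration-event envelope built in the infrastructure layer rather than through the aggregate's methods:

```typescript
// Hypothetical types: the aggregate emits plain domain events, and the
// correlation metadata is only attached when the event leaves the bounded context.
interface DomainEvent {
  readonly name: string;     // e.g. 'UserCreated'
  readonly payload: unknown; // e.g. the user props
}

interface IntegrationEvent {
  readonly type: string;          // schema name known to other bounded contexts
  readonly occurredAt: string;
  readonly correlationId: string; // taken from the triggering command, not from the aggregate
  readonly data: unknown;
}

// Infrastructure-level wrapping step, performed when publishing outward.
function toIntegrationEvent(event: DomainEvent, correlationId: string): IntegrationEvent {
  return {
    type: event.name,
    occurredAt: new Date().toISOString(),
    correlationId,
    data: event.payload,
  };
}
```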
Thank you!

Related

How to send a message to Microsoft Event Hub with a DB transaction?

I want to send an event to Microsoft Event Hub within a DB transaction.
Explanation:
1. The user hits an order-creation endpoint.
2. OrderService accepts the order and puts it into the DB.
3. Now OrderService wants to send that orderId as an event to other services using Event Hub.
How can I achieve transactional behaviour for steps 2 and 3?
I know of these solutions:
Outbox pattern: I put the message in another table within the order-creation transaction. A cron job/scheduler then takes the messages from that table and marks them as delivered; on the next run it only picks up undelivered messages. (See the sketch after this list.)
Use a database audit log (change data capture) and a library that takes care of this: the library binds the database table to Event Hub, and on every update it sends that change to Event Hub.
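A rough sketch of that outbox approach (the data-access interface, table names, and helpers here are made up, just to illustrate the idea):

```typescript
import { randomUUID } from "node:crypto";

// Minimal, hypothetical data-access interface.
interface Tx {
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}
interface Db {
  transaction(work: (tx: Tx) => Promise<void>): Promise<void>;
  query(sql: string): Promise<Array<{ id: string; payload: string }>>;
  execute(sql: string, params: unknown[]): Promise<void>;
}

// Step 2 and the outbox write share one database transaction.
async function createOrder(db: Db, order: { id: string }): Promise<void> {
  await db.transaction(async (tx) => {
    await tx.insert("orders", order);
    await tx.insert("outbox", {
      id: randomUUID(),
      type: "OrderCreated",
      payload: JSON.stringify({ orderId: order.id }),
      delivered: false,
    });
  });
}

// The cron/scheduler relays undelivered rows (step 3) and marks them delivered.
async function relayOutbox(db: Db, publish: (event: unknown) => Promise<void>): Promise<void> {
  const pending = await db.query("SELECT id, payload FROM outbox WHERE delivered = 0");
  for (const row of pending) {
    await publish(JSON.parse(row.payload)); // e.g. send to Event Hub
    await db.execute("UPDATE outbox SET delivered = 1 WHERE id = ?", [row.id]);
  }
}
```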
I wanted to know: is there any built-in transactional feature in Event Hub?
Or
Is there any better way to handle this?
There is no concept of transactions within Event Hubs at present. I'm not sure, given the limited context that was shared, that Event Hubs is the best fit for your scenario. Azure Service Bus has transaction support and may be a more natural fit for your intended flow.
In this kind of distributed scenario, regardless of which message broker you decide on, I would advise embracing eventual consistency and considering a pattern similar to:
Your order creation endpoint receives a request
The order creation endpoint assigns a unique identifier to the request and emits the event to Event Hubs; if the send was successful, it returns a 202 (Accepted) to the caller along with a Retry-After header indicating how long the caller should wait before checking the status of that order's creation.
Some process is responsible for reading events from the Event Hub and creating that order within the database. Depending on your ecosystem's tolerance, this may be a dedicated process or could be something like an Azure Function with an Event Hubs trigger.
Other event consumers interested in orders will also see the creation request and will call into your order service or database for the details, using the unique identifier that was assigned by the order creation endpoint; this may or may not be the official order number within the system.
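A minimal sketch of that endpoint, assuming Node.js with an Express-style response object and the @azure/event-hubs client (connection string, hub name, and handler names are placeholders):

```typescript
import { EventHubProducerClient } from "@azure/event-hubs";
import { randomUUID } from "node:crypto";

// Placeholders: substitute your own connection string and Event Hub name.
const producer = new EventHubProducerClient("<connection-string>", "<event-hub-name>");

// Accept the request, emit the event, answer 202 + Retry-After.
async function createOrderEndpoint(req: { body: unknown }, res: any): Promise<void> {
  const requestId = randomUUID();
  try {
    await producer.sendBatch([{ body: { requestId, order: req.body } }]);
    res.set("Retry-After", "5"); // tell the caller when to poll for the order's status
    res.status(202).json({ requestId });
  } catch (err) {
    // The send failed, so nothing was accepted.
    res.status(503).json({ error: "Could not accept the order right now" });
  }
}
```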

How to publish an event of a non-AggregateRoot object?

So I am using NestJS with CQRS and DDD in a microservices environment, with Eventstore and MySQL as databases. In NestJS, in order to publish an event, the object needs to be of type AggregateRoot. So I am publishing the object returned after saving it to the database, in such a way that it is of type AggregateRoot.
What I need to do now is simply publish an incoming request as-is in an event, and have other services listen to that event, without the object needing to be of type AggregateRoot.
Example: I have an incoming Order to the Order-Microservice containing objects needed in other microservices (like Delivery-Microservice and Assembly-Microservice). I don't need to save it in the order-ms to be able to publish it, because it contains data that I don't necessarily need in the order-ms.
The NestJs EventPublisher requires an object to be of type AggregateRoot. How should I publish that event to EventStore?
It doesn't always require an AggregateRoot.
It is true that you can trigger events from different locations (as @maciej-sikorski mentions in his reference to the NestJS documentation).
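For example, a plain class can be published straight through the EventBus, without any AggregateRoot involved (a minimal NestJS sketch, names made up):

```typescript
import { Injectable } from '@nestjs/common';
import { EventBus } from '@nestjs/cqrs';

// Hypothetical event: any plain class can be published through the EventBus.
export class OrderReceivedEvent {
  constructor(public readonly payload: unknown) {}
}

@Injectable()
export class OrderIntake {
  constructor(private readonly eventBus: EventBus) {}

  acceptIncomingOrder(payload: unknown): void {
    this.eventBus.publish(new OrderReceivedEvent(payload));
  }
}
```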
But you have to be careful here, as doing this can easily break common DDD / CQRS concepts and best-practices. In general:
A Command conceptually changes state in an AggregateRoot, and an Event represents that state change.
The AggregateRoot serves as the public entry point (i.e. the 'public API') of your Orders bounded context.
Events are part of the domain and should be related to the aggregate root; they are published once it is committed.
So don't emit Events anywhere outside your domain objects. But it is okay to, for example, apply a DeliveryAddressValidated event to the aggregate root from a value object called DeliveryAddress that ensures all address fields are filled in and the address exists. Upon committing the Order aggregate root - when the aggregate is in a consistent state - this event could be picked up by the Delivery-Microservice.
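A minimal NestJS-flavoured sketch of that idea (names are made up; the value object only validates, while the aggregate root applies the event):

```typescript
import { AggregateRoot } from '@nestjs/cqrs';

// Value object: construction fails unless the address is complete.
export class DeliveryAddress {
  constructor(
    public readonly street: string,
    public readonly city: string,
    public readonly zip: string,
  ) {
    if (!street || !city || !zip) {
      throw new Error('All address fields must be filled in');
    }
  }
}

export class DeliveryAddressValidated {
  constructor(public readonly orderId: string, public readonly address: DeliveryAddress) {}
}

export class Order extends AggregateRoot {
  constructor(public readonly id: string) {
    super();
  }

  assignDeliveryAddress(address: DeliveryAddress): void {
    // The address is already valid if construction succeeded.
    this.apply(new DeliveryAddressValidated(this.id, address));
  }
}

// In a command handler: merge the aggregate with the publisher context and
// commit only once the aggregate is in a consistent state, e.g.:
//   const order = this.publisher.mergeObjectContext(new Order(id));
//   order.assignDeliveryAddress(new DeliveryAddress(street, city, zip));
//   order.commit();
```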
Alternatively - given that your Order receives information it does not need - you could have a look at rearranging your bounded contexts. I don't know your design, but sticking with the Delivery domain, you could have a Saga that waits for an OrderRequested event (indicating the Order is valid, while delivery address data + validation is not part of this process) and then triggers a DeliverOrder command (or e.g. a ValidateDelivery command) to the Delivery-Microservice.
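Sketched roughly in NestJS (event and command names are hypothetical), such a Saga could look like:

```typescript
import { Injectable } from '@nestjs/common';
import { Saga, ofType, ICommand, IEvent } from '@nestjs/cqrs';
import { Observable, map } from 'rxjs';

export class OrderRequestedEvent {
  constructor(public readonly orderId: string, public readonly deliveryAddress: unknown) {}
}

export class DeliverOrderCommand implements ICommand {
  constructor(public readonly orderId: string, public readonly deliveryAddress: unknown) {}
}

@Injectable()
export class DeliverySaga {
  // Wait for OrderRequested, then trigger the delivery command.
  @Saga()
  orderRequested = (events$: Observable<IEvent>): Observable<ICommand> =>
    events$.pipe(
      ofType(OrderRequestedEvent),
      map((event) => new DeliverOrderCommand(event.orderId, event.deliveryAddress)),
    );
}
```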

@Service injection into aggregates?

I have an Order aggregate with the following commands:
CreateOrderCommand
PlaceOrderCommand
... (rest redacted as they are not pertinent to the question) ...
The PlaceOrderCommand refers to placing the Order onto an external execution venue. I have captured the behaviour for placing an order onto an external execution venue within a separate (non-CQRS) @Service. However, I am struggling (due to lack of experience with Axon) with how best to connect my @Service with the aggregate.
My normal way of thinking would have me:
Inject the @Service into the aggregate's @Autowired constructor.
When the PlaceOrderCommand is issued, use the service to place the order onto the relevant execution venue and once done emit an event (either OrderPlacedSuccessfullyEvent or ErrorInOrderPlacementEvent).
Change the aggregate's state within the relevant @EventSourcingHandler.
My question is:
Does my description above of how to handle this particular use case with Axon make sense (in particular, injecting a @Service into an aggregate feels a bit off to me)?
Or is there a different best practice of how to model my scenario when using CQRS/event sourcing with Axon?
Axon requires an empty constructor in the aggregate. How does this reconcile with having an @Autowired constructor?
The other thing I was potentially considering was:
having a PlaceOrderInstructionCommand (instead of the simple PlaceOrderCommand) which emits a ReceivedPlaceOrderInstructionEvent that a separate event listener is listening for.
that event listener would have the relevant #Service injected into it and would do the placement of the Order.
then after placing the Order it would send a command (or should it emit an event?) to the aggregate informing it to update its state.
Could you please advise on what is best practice for modelling this scenario?
The PlaceOrderCommand refers to placing the Order onto an external execution venue.
I'm assuming that placing the Order onto an external execution venue means interacting with an external system. If so, then it should not be part of your domain. In that case, you would need to raise an Integration Event.
As you mentioned, you could raise a Command like ProcessOrder from your Domain. Within that Command's handler, you can update your Domain (e.g. set the OrderStatus to Processing) and raise an integration event like OrderArrived, which is then handled by a separate process.
From Microsoft Docs:
The purpose of integration events is to propagate committed transactions and updates to additional subsystems, whether they are other microservices, Bounded Contexts or even external applications.
Integration events must be based on asynchronous communication between multiple microservices (other Bounded Contexts) or even external systems/applications.
You would handle that integration event in a separate process (or worker) outside of your Domain. This is where your @Service would be injected. Once the order is processed successfully, you can then broadcast an integration event called OrderPlaced.
Now, any subscriber which has anything to do with placing the order would subscribe to that event. In your case, your Domain is interested in updating the state once the order is placed. Hence, you would subscribe to the OrderPlaced event within your Domain to update the status of the Order.
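A framework-agnostic TypeScript sketch of that flow (the interfaces and topic names are purely illustrative, not Axon APIs):

```typescript
// Hypothetical message-bus abstraction for integration events.
interface MessageBus {
  subscribe(topic: string, handler: (message: any) => Promise<void>): void;
  publish(topic: string, message: unknown): Promise<void>;
}

// The equivalent of the injected @Service that talks to the execution venue.
interface ExecutionVenueService {
  placeOrder(orderId: string): Promise<void>;
}

// Worker outside the domain: handles OrderArrived, interacts with the external
// system, then broadcasts OrderPlaced.
function startOrderPlacementWorker(bus: MessageBus, venue: ExecutionVenueService): void {
  bus.subscribe('OrderArrived', async (message) => {
    await venue.placeOrder(message.orderId);
    await bus.publish('OrderPlaced', { orderId: message.orderId });
  });
}

// The domain, in turn, subscribes to OrderPlaced and updates the Order's status.
```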
Hope it helps.

Scheduled tasks in DDD

I have an entity called Order, and an aggregate root OrderManager that updates order state and some other information based on requests from the application layer (the app layer calls OrderManager, and OrderManager manages internal state, including Orders).
Each order has an expiration time, so I want to schedule an action to handle the expiration, but I don't know where to put it. I'm thinking of two approaches:
Define an IScheduler interface in the domain model, so that OrderManager uses this interface for task scheduling.
Don't define an interface, but schedule the expiration handling at the application level. This means the app layer calls some method like OrderManager.HandleExpiration.
Personally I prefer the first approach, but does anyone have another idea?
First off, the Order entity should be the aggregate root of the Order aggregate. It should encapsulate state changing behavior such that there is no need for a manager class. The application service would then delegate directly to the Order entity.
As far as handling order expiration, a few things have to be considered. Does any of this need to be persistent? In other words, will there be orders that are persisted to the database but not loaded in the application, for which expiration will need to be handled? If so, then you need to implement a workflow that goes outside the application's boundaries. One way to do this is to have a continuously running background service which polls the database for orders that expire at the current time. It then sends a command to a handler which handles the order expiration event. This handler would delegate back to your domain, ultimately to the HandleExpiration method on the Order entity. An order expiration is just an event, handled like any other domain event, and the background service is just the piece of infrastructure that makes this event possible. This seems to fit best with your approach #2.
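A rough sketch of such a background poller (repository and method names are hypothetical):

```typescript
// Minimal, made-up repository contract for the poller.
interface OrderRecord { id: string; expiresAt: Date; }

interface OrderRepository {
  findExpired(now: Date): Promise<OrderRecord[]>;
  load(id: string): Promise<{ handleExpiration(): void }>; // the Order entity
  save(order: unknown): Promise<void>;
}

// Continuously running poller: finds expired orders and delegates back to the domain.
function startExpirationPoller(repo: OrderRepository, intervalMs = 60_000): ReturnType<typeof setInterval> {
  return setInterval(async () => {
    const expired = await repo.findExpired(new Date());
    for (const record of expired) {
      const order = await repo.load(record.id);
      order.handleExpiration(); // handled like any other domain event
      await repo.save(order);
    }
  }, intervalMs);
}
```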

Custom Logging mechanism: Master Operation with n-Operation Details or Child operations

I'm trying to implement a logging mechanism in a Service-Workflow-hybrid application. The requirement for logging is that, instead of independent log actions, each log must be treated as a detail operation and placed against a parent/master operation. So it's a parent-child relationship and it goes into database table(s). This is the primary reason NLog failed.
To help understand better, I'm going into generic detail. This is how the application flow goes:
Now, the main entry point of the application (normally called Program.cs) is Platform. It initializes an engine that is capable of listening to incoming calls from ISDN lines, VoIP, or web services. The interface is generic, so any call that reaches the Platform triggers OnConnecting(). OnConnecting() is a thread-safe event and can be triggered as many times as the system requires.
Within OnConnecting(), a new instance of our custom Workflow manager is launched and the context is a custom object called ProcessingInfo:
new WorkflowManager<ZeProcessingInfo>();
Where, ZeProcessingInfo:
var ZeProcessingInfo = new ProcessingInfo(this, new LogMaster());
As you can see, the ProcessingInfo is composed of Platform itself and a new instance of LogMaster. LogMaster is defined in an independent assembly.
Now this LogMaster is available throughout the WorkflowManager, all the Workflows it launches, all the activities within any running Workflow, and passed on to external code called from within any Activity. Now, when a new LogMaster is initialized, a Master Operation entry is created in the database and this LogMaster object now lives until this call is ended after a series of very serious roller coaster rides through different workflows. Upon every call of OnConnecting(), a new Master Operation is created and maintained.
The LogMaster exposes an AddDetail() method that adds a new child detail under the internally stored Master Operation (distinguished through a Guid primary key). The LogMaster is built upon Entity Framework.
And I'm able to log under the same Master Operation as many times as I require. But the application requirements are changing, and there is now a need to log from other assemblies. There is a Platform Server assembly, which is a Windows Service that acts as a server listening for web-service-based calls; once a client calls a method, OnConnecting() in Platform is triggered.
I need a mechanism to somehow retrieve the related LogMaster object so that I can add detail to the same Master Operation. But Platform Server is the one triggering OnConnecting() on the Platform and thus instantiating LogMaster. This creates a redundancy loop.
Failure scenarios are being considered as well. If LogMaster fails, I need to fall back from Database Logging to Event Logging. If Event Logging fails (or is not allowed through the unified configuration), I need to fall back to file-based (XML) logging.
I hope I have given a rough idea. I don't expect code, but I need a strategy for a seamless, pluggable, configurable logging mechanism that supports Master-Child operations.
Thanks for reading. Any help would be much appreciated.
I've read this question a number of times and it was pretty hard to figure out what was going on. I don't think your diagram helps at all. If your question is about trying to retrieve the master log record when writing child log records then I would forget about trying to create normalised data in the log tables. You will just slow down the transactional system in trying to do so. You want the log/audit records to write as fast as possible and you can later aggregate them when you want to read them.
Create a de-normalised table for the log entries and use a single Guid in that table to track the session/parent log master. Yes, this will be a big table, but it will write fast.
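For example, the shape of such a de-normalised entry could be as simple as this (illustrative only; column/field names are made up):

```typescript
// Every row carries the parent/master Guid directly, so writes need no joins.
interface LogEntry {
  id: string;                // Guid of this detail entry
  masterOperationId: string; // Guid of the session / parent master operation
  timestamp: Date;
  source: string;            // assembly or component that wrote the entry
  message: string;
  payload?: string;          // optional serialized detail data
}
```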
As for guaranteed delivery of log messages to a destination, I would try not to create multiple destinations, as combining them later will be a nightmare; rather, use something like MSMQ to emit the audit logs as fast as possible and have another service pick them up and process them in a guaranteed-delivery manner. ETW (Event Tracing for Windows) is not guaranteed under load, and you will not know that it has failed.
