In DDD: Commands that produce events that no one consumes

I am applying Domain-Driven Design in my project and I am running into a scenario where a user action translates into a command in one of my bounded contexts and thus produces an event. However, I see that none of my other bounded contexts would care about consuming this event. Essentially, all this command does is save/update state in my bounded context.
My questions are :
Does a command have to produce an event?
If so, does it matter that nobody is listening?

Does a command have to produce an event?
Absolutely not.
If you were using event sourcing as your persistence strategy, then all of your changes of state would be "events". But there's no particular reason that you must expose the event elsewhere.
Hyrum's Law is one reason that you might prefer not to broadcast an event
With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.
Don't guess what information should be included in the event until you have enough data at hand to make a good guess.
Does it matter that nobody is listening?
In an ideal world, not really -- in practice, costs may well figure into the decision.
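To make the point concrete, here is a minimal sketch (all names hypothetical) of a command that only mutates aggregate state, with no event published to other contexts:

```python
from dataclasses import dataclass


@dataclass
class ShippingAddress:
    street: str
    city: str


class Customer:
    """Hypothetical aggregate: a command mutates state, no event is broadcast."""

    def __init__(self, customer_id: str, address: ShippingAddress):
        self._id = customer_id
        self._address = address

    def change_shipping_address(self, new_address: ShippingAddress) -> None:
        # The command simply updates state; nothing forces us to publish a
        # CustomerAddressChanged event for other bounded contexts to consume.
        self._address = new_address

    @property
    def shipping_address(self) -> ShippingAddress:
        return self._address


customer = Customer("c-1", ShippingAddress("1 Main St", "Springfield"))
customer.change_shipping_address(ShippingAddress("2 Oak Ave", "Shelbyville"))
print(customer.shipping_address.city)  # Shelbyville
```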

It is not a necessity for a command to produce an event, though it is preferable.
One of the most significant benefits of events is the ability to regenerate the state of the system as of a particular date/time from scratch. If you were to religiously bubble up events for all changes in the system, you would be able to apply the events sequentially and bring the system to whatever state you require.
IMHO, it does not matter that nobody is listening at that point in time. I am assuming that you are persisting your event into an event sink/log as part of your application. Doing so has two benefits:
a. If you were to discover a future requirement which would benefit from an event log, you are already a step ahead.
b. Even if nobody ever consumes the event, as mentioned in the first response, being able to generate the state of the system as of a particular date/time is a cool ability to have.
The pattern of persisting events and deriving a system state has its roots in Event Sourcing. You would want to take a deeper look at it to see its benefits.
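The replay idea above can be sketched in a few lines (event names and dates are illustrative): persist events to a log, then fold them in order to rebuild state "as of" any moment.

```python
from datetime import datetime

# A hypothetical append-only event log for an inventory item.
event_log = [
    {"type": "StockAdded", "qty": 10, "at": datetime(2023, 1, 1)},
    {"type": "StockRemoved", "qty": 3, "at": datetime(2023, 2, 1)},
    {"type": "StockAdded", "qty": 5, "at": datetime(2023, 3, 1)},
]


def stock_as_of(events, moment):
    """Replay events up to `moment` to derive the stock level at that time."""
    qty = 0
    for e in events:  # events are stored in chronological order
        if e["at"] > moment:
            break
        qty += e["qty"] if e["type"] == "StockAdded" else -e["qty"]
    return qty


print(stock_as_of(event_log, datetime(2023, 2, 15)))   # 7
print(stock_as_of(event_log, datetime(2023, 12, 31)))  # 12
```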

Does a command have to produce an event?
No, it's not a requirement; a command can just modify the state of an entity without dispatching an event. As said in other responses, you should require every state change to produce a domain event if you use Event Sourcing.
If so, does it matter that nobody is listening?
Yes and No.
Yes, because there is no good or bad model, only useful models; if no one is listening, then your modeling is not really useful to anyone.
No, because domain events are meant to be invariant: they reflect business events and change only when the business requirements change, and so they are part of your API (see the concept of a published language).

Related

Domain event naming for things that haven't happened yet

Okay, I appreciate the title sounds odd. An event should always be for something that has happened, e.g. OrderCreated, ParcelShipped, etc.
However, I wonder if anyone has any thoughts on the following problem.
Consider an HR application which models people and their jobs. For simplicity's sake, a person can have a bunch of jobs and they can be ended at a date. The person has an EndJob operation which takes an endDate.
If the endDate is in the future, what would the domain event be?
JobEndedEvent (this is not true)
JobEndDateAddedEvent (this is quite technical)
Consumers in other bounded contexts will be interested to know that the Job will be ending, but may also wish to be informed at the point the job ends as well. I feel that the latter should be the consumer's responsibility, rather than the source's.
Any thoughts would be welcomed... Thanks.
Well, from a Domain perspective, you're probably talking about JobTerminationScheduledEvent because from the language point of view, you're notifying other contexts about a scheduling of a job's ending.
It's not the actual ending: schedules can change, and you leave it up to the other contexts how they will handle such information. A given context might consider the scheduling to be enough information to assume the job will end by the given date.
Another context, having been notified that such an event happened, might want to double-check when the date comes to make sure no changes happened before taking further action.
In the end, your context is actually expressing what happened which is: nothing concrete. You have a scheduled date defined for this action to happen, but it didn't happen yet.
If the endDate is in the future, what would the domain event be?
JobCompletionScheduled?
We made the decision now, but its effective date is in the future. That's a perfectly normal thing to do in a line of business, and the decision itself is useful business intelligence to capture.
Dig around with your domain experts, and listen closely - there may already be vocabulary in your domain that describes this case.
Although there is an event that happens when you have specified that someone's job is 'going to' end (let's call it "jobEndIntentionEvent"), there is also an implicit event that happens when the person's job actually ends: "jobEndEvent".
Now, the source bounded context possibly doesn't need to raise this "jobEndEvent" to act on it itself. You may have multiple bounded contexts that are only really interested in knowing about this event though. So should it raise it at all? Or do the multiple other bounded contexts all have to play the cards they're dealt - i.e. listen for the "jobEndIntentionEvent" and implement code that fires when the event they would have liked to have heard ("jobEndEvent") would have been received?
Or should the origin bounded context be nice and fire this 'integration event' for everyone.
Or alternatively a nicer solution would be we have a scheduling bounded context that is a subscriber to "jobEndIntentionEvents" and similar ones to that, and it knows to convert them into the REAL events that people actually care about - "jobEndEvents".
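The scheduling-context idea can be sketched as follows (a toy, in-process model with hypothetical names; a real one would subscribe to a message bus and fire on a timer):

```python
import heapq
from datetime import datetime


class Scheduler:
    """Hypothetical scheduling context: it subscribes to intention events
    and emits the concrete event once the scheduled date arrives."""

    def __init__(self):
        self._pending = []  # min-heap ordered by due date

    def on_job_end_intention(self, job_id: str, end_date: datetime) -> None:
        heapq.heappush(self._pending, (end_date, job_id))

    def poll(self, now: datetime):
        """Emit a jobEndEvent for every job whose end date has passed."""
        emitted = []
        while self._pending and self._pending[0][0] <= now:
            _, job_id = heapq.heappop(self._pending)
            emitted.append({"type": "jobEndEvent", "job_id": job_id})
        return emitted


s = Scheduler()
s.on_job_end_intention("job-1", datetime(2024, 6, 30))
print(s.poll(datetime(2024, 6, 1)))  # [] -- not due yet
print(s.poll(datetime(2024, 7, 1)))  # [{'type': 'jobEndEvent', 'job_id': 'job-1'}]
```

Downstream contexts then subscribe only to "jobEndEvents", the event they actually care about, instead of each re-implementing the waiting logic.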

DDD - how to rehydrate

Question: what is the best, efficient and future proof way to
rehydrate an aggregate from a repository? What are the pro's and con's of the provided ways and are my perceptions correct?
Let's say we have an Aggregate Root with private setters but public getters for accessing state
Behaviour is done through methods on the aggregate root.
A repository is instructed to load an aggregate.
At the moment I see a couple of possible ways to achieve this:
set the state through reflection (manual or automatic, e.g. Automapper)
make constructors that accept properties so state is set
load the aggregate with a state object
1) Jimmy Bogard alludes that his tool Automapper isn't meant for two-way mapping. But some people argue that we have to be pragmatic, use tools in a way it helps you.
For me, I don't like a full rehydration through reflection. Maybe Automapper ceases to exist, or aggregate roots get bent in such a way that the mapping can be done (see some of Vaughn's comments on his article).
2) creating constructors for rehydration, with a couple of parameters so the state of the aggregate is rehydrated in a correct way.
These couple of parameters can expand (= new constructors) or the definition can change. I like this approach, except the part of having a bunch of parameters.
3) the state is a property of the aggregate root. The state is encapsulated in a new object, and this object is built by the repository and then given to the aggregate root for proper initialization.
Some people argue that building this state object is more work (a new class, exposure of state properties on the entity and aggregate root to enforce business rules), but it provides a clean way to initialize the state.
Say that we need event sourcing: does the loading of a state resemble loading events? And does the state object provide a way of handling events? Is it more future-proof?
I would argue that trying to future-proof too much represents a trap that many people fall into, adding undue complexity to a codebase. There is a fine balancing act between sound architectural decisions and over-architecting a solution to a problem that is not guaranteed to exist.
That being said, I fully agree with what Jimmy says, in regards to AutoMapper not being intended for two-way mapping. Your domain represents the "truth" in your application, and should not be directly mutable. I have worked on projects with two-way mappings, and while they do work, there is a tendency to start treating the domain objects as nothing more than DTOs. It becomes painful when you start having read-only properties, having to reflect to do your setting - tooling or not. From a DDD perspective, we should not be allowing for outside influences to simply say what a property value should be, because it will lead to an anemic domain model over time, most likely.
Internal states do work well, but they are at the cost of additional overhead and complexity. There is a legitimate trade-off, as you mention, in that you are adding a fair amount of work. However, you can use that opportunity to allow the aggregate to validate the state against the self-contained business rules within the aggregate, prior to allowing the state to be set. That addresses the largest concern that I have with two-way mapping. You can at least enforce that a state object contains valid data and then only construct the aggregate if it is valid. It is more testable, as well. The largest problem that I have seen with this approach is that the skill level of your team will have a direct bearing on the success of this being utilized correctly. It could be argued that the complexity does not add enough value to implement domain-wide, as you will likely have aggregates that have different levels of churn. A couple of projects that I have been involved in have used this approach, and I found little advantage over straight constructor usage.
Normally, I use constructors for rehydration in most cases. It walks the line between not being overly complex, plus it leaves responsibility with the aggregate to allow or disallow the construction of the object - again, allowing the domain to be in control of whether the hydration attempt would result in a valid object. A good compromise to constructor bloat is to use a mutable DTO as a parameter for the constructor, essentially acting as a data structure to maintain a consistent constructor signature over time. In that sense, it is also somewhat future-proof. It takes the most attractive perk of the state object approach, which is the clean signatures, but removes the additional layer of an internal abstraction.
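A minimal sketch of that compromise (all names illustrative): the repository builds a plain state object, and the aggregate's constructor validates it against its own business rules before accepting it, so hydration cannot bypass the domain.

```python
from dataclasses import dataclass


@dataclass
class OrderState:
    """Mutable DTO built by the repository; keeps the ctor signature stable."""
    order_id: str
    total: float
    status: str


class Order:
    VALID_STATUSES = {"pending", "paid", "shipped"}

    def __init__(self, state: OrderState):
        # The aggregate stays in control: invalid snapshots are rejected,
        # so a corrupt row in the database cannot produce an invalid object.
        if state.total < 0:
            raise ValueError("total cannot be negative")
        if state.status not in self.VALID_STATUSES:
            raise ValueError(f"unknown status: {state.status}")
        self._state = state

    @property
    def status(self) -> str:
        return self._state.status


order = Order(OrderState("o-1", 99.5, "paid"))
print(order.status)  # paid
```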
You mention event sourcing as a possibility down the road. State loading is not very similar to what you would be doing, at all (in my opinion). With a state object, you are snapshotting the state of the aggregate at a given point in time. With event sourcing, you will be replaying events, each of which represents the data required to mutate the state, as opposed to the state, itself. As such, your constructor will likely be a collection of events, representing a chain of deltas to mutate the state repeatedly, until it reaches the current state. When you want to hydrate your aggregate, you will supply it with the events that are related to that aggregate, and it will replay them to get to the current state. This is one of the true strengths of event sourcing, as well. You are forcing the hydration of your domain objects to go through the business logic required to create them, each time. Given a list of events, the aggregate will enforce that each state change is valid by applying the event in a consistent fashion, whether the event is being applied in real-time, or replayed to get to the current state.
Back to the future-proof aspect, as it relates to event sourcing, there is a conscious effort required when events require change. Since you have to replay an event to get to the current state, you will very likely have to deprecate events and bring up new events to transition to as your business logic changes. You may (read as "likely will") find yourself versioning events. Not only does your aggregate need to understand current state change requirements, but it also needs to understand previous state change requirements. So, if you change an event handler, you will have to ensure that it will be valid for existing events, as well. When you are adding additional data to an event, it is usually not too involved. But when you start removing data from an event signature, you instantly make that event at risk for being incompatible with earlier structures. Likewise, even changing the names of the data structures inside of an event can cause backwards compatibility issues. If you start event sourcing, you do not need to worry as much about future-proofing as you do backwards compatibility. Event sourcing is great, but be prepared for additional complexity.

Diagnosing Azure stateful actors

I'm still trying to get my mind around Azure Service Fabric Stateful Actors. So, my (current) problem is best put into an example like this:
I've got a helpdesk system, where each ticket is a stateful actor. The actor knows about the state it's in (posted, dealt with, rejected, ...), can access the associated data and all that.
I find I have made a mistake and a bunch of those 50.000 tickets are in the wrong state. So, I need to
fix the code
publish the solution
fix the data content of a subset of those 50.000 actors.
Now, how can I query the state of those actors, like "give me each actor that is in 'rejected' and belongs to a user whose name starts with a German umlaut"? How can I then patch the state data of those actors?
Do I really have to add a query method to each actor and wake up each single actor? Or is there a way to query those state dictionaries outside of the actors sitting on top of them?
The short answer is yes, in a situation like that you'd have to wake up each single actor (eventually).
If you are already in that state, I think JoshL's suggestion makes sense.
To avoid this sort of situation, you could keep an index dictionary in a stateful service, holding the information you'll want to query on, e.g. the actor id and the status (posted, dealt with, etc.). You then only have to wake up those actors that are relevant.
There are two approaches you can take for that:
Have the stateful service direct the flow of information - be responsible for updating the index dictionary and telling actors what to do (e.g. change status).
Have the actors responsible for notifying the stateful service for state updates (this could be done periodically through reminders for example).
Perhaps you could consider overriding OnActivateAsync in your actor class(es) and implement the cleanup logic there, then upgrade your SF application?
This would prevent the need to iterate every single instance externally (as the SF runtime will call OnActivateAsync for you), and would ensure that the logic runs for each instance only if/when needed (only upon next activation for a given instance).
more on Actor activate/deactivate/etc.
Best of luck!
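The lazy-migration idea behind the OnActivateAsync suggestion can be sketched like this (plain Python standing in for the Service Fabric actor API; the state shape and version field are assumptions):

```python
CURRENT_SCHEMA_VERSION = 2


class TicketActor:
    """Toy actor: repairs its own state the next time it is activated,
    so no external process has to enumerate all 50.000 instances."""

    def __init__(self, state: dict):
        self.state = state

    def on_activate(self) -> None:
        """The runtime would call this on activation; fix stale state lazily."""
        if self.state.get("version", 1) < CURRENT_SCHEMA_VERSION:
            # Hypothetical bug: some tickets were wrongly moved to "rejected".
            if self.state["status"] == "rejected" and self.state.get("bugged"):
                self.state["status"] = "posted"
                del self.state["bugged"]
            self.state["version"] = CURRENT_SCHEMA_VERSION


actor = TicketActor({"status": "rejected", "bugged": True})
actor.on_activate()
print(actor.state["status"])  # posted
```

The trade-off: migration cost is paid per activation rather than up front, and tickets that are never activated again are never fixed, which may or may not matter for your queries.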

What if domain event failed?

I am new to DDD. Now I was looking at the domain event. I am not sure if I understand this domain event correctly, but I am just thinking what will happen if domain event published failed?
I have a case here. When a buyer orders something from my website, we first create an Order object with line items. The domain event, OrderWasMade, will be published to deduct the stock in Inventory. So here is the case: when the event is handled, the item quantity should be deducted, but what if, when the system tries to deduct the stock, it finds out that there is no stock remaining for the item (amount = 0)? The item amount can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
Sorry to have squeezed in 2 other questions here.
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS Server, I must enable DTC, am I correct?
Is there any relationship between domain-events and domain-services?
A domain event never fails because it's a notification of things that happened (note the past tense). But the operation which will generate that event might fail and the event won't be generated.
The scenario you told us shows that you're not really doing DDD, you're doing CRUD using DDD words. Yes, I know you're new to it, don't worry, everybody misunderstood DDD until they got it (but it might take some time and plenty of practice).
DDD is about identifying the domain model abstraction, which is not code. Code is when you're implementing that abstraction. It's very obvious you haven't done the proper modelling, because the domain expert should tell you what happens if products are out of stock.
Next, there are no db/ACID transactions at this level. Those are an implementation detail. The way DDD works is identifying where the business needs things to be consistent together, and that's called an aggregate.
The order was submitted, and this is where that use case stops. When you publish the OrderWasMade event, another use case (deducting the inventory or whatever) is triggered. This is a different business scenario, related to but not part of "submit order". If there isn't enough stock, then another event is published, NotEnoughInventory, and another use case will be triggered. We follow the business here and we identify each step that the business takes in order to fulfill the order.
The art of DDD consists in understanding and identifying granular business functionality, the involved aggregates, the business behaviour which makes decisions, etc., and this has nothing to do with the database or transactions.
In DDD the aggregate is the only place where a unit of work needs to be used.
To answer your questions:
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS Server, I must enable DTC, am I correct?
No; transactions, events and distributed transactions are different things. IIS is a web server; I think you meant to say SqlServer. You're always opening multiple connections to the db in a web app; DTC has nothing to do with it. Actually, the question tells me that you need to read a lot more about DDD and not just Evans' book. To be honest, from a DDD point of view, what you're asking doesn't make much sense. You know one of the principles of DDD: the db (as in persistence details) doesn't exist.
Is there any relationship between domain-events and domain-services?
They're both part of the domain but they have different roles:
Domain events tell the world that something changed in the domain
Domain services encapsulate domain behaviour which doesn't have its own persisted state (like Calculate Tax)
Usually an application service (which acts as a host for a business use case) will use a domain service to verify constraints or to gather data required to change an aggregate, which in turn will generate one or more events. Aggregates are the ones persisted, and an aggregate is always persisted in an atomic manner, i.e. a db transaction / unit of work.
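The collaboration just described can be sketched as follows (all names hypothetical): an application service hosts the use case, a domain service supplies stateless behaviour, and the aggregate makes the change and records the resulting event.

```python
class TaxCalculator:
    """Domain service: behaviour with no persisted state of its own."""

    def tax_for(self, amount: float) -> float:
        return round(amount * 0.2, 2)  # illustrative flat 20% rate


class Order:
    """Aggregate: owns the state change and records the domain event."""

    def __init__(self, order_id: str):
        self.order_id = order_id
        self.total = 0.0
        self.events = []

    def add_line(self, net: float, tax: float) -> None:
        self.total += net + tax
        self.events.append({"type": "LineAdded", "net": net, "tax": tax})


def add_line_use_case(order: Order, calc: TaxCalculator, net: float) -> None:
    """Application service: gathers data via the domain service, then asks
    the aggregate to change; persisting atomically is omitted here."""
    order.add_line(net, calc.tax_for(net))


order = Order("o-1")
add_line_use_case(order, TaxCalculator(), 100.0)
print(order.total)              # 120.0
print(order.events[0]["type"])  # LineAdded
```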
what will happen if domain event published failed?
MikeSW already described this - publishing the event (which is to say, making it part of the history) is a separate concern from consuming the event.
what if, when the system tries to deduct the stock, it finds out that there is no stock remaining for the item (amount = 0)? So, the item amount can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
So the DDD answer is: ask your domain experts!
If you sit down with your domain experts, and explore the ubiquitous language, you are likely to discover that this is a well understood exception to the happy path for ordering, with an understood mitigation ("we mark the status of the order as pending, and we check to see if we've already ordered more inventory from the supplier..."). This is basically a requirements discovery exercise.
And when you understand these requirements, you go do it.
Go do it typically means a "saga" (a somewhat misleading and overloaded use of the term); a business process/workflow/state machine implementation that keeps track of what is going on.
Using your example: OrderWasMade triggers an OrderFulfillment process, which tracks the "state" of the order. There might be an "AwaitingInventory" state where OrderFulfillment parks until the next delivery from the supplier, for example.
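A process manager of that kind is, at its core, a small state machine reacting to events. A minimal sketch of the OrderFulfillment flow above (event and state names are illustrative):

```python
class OrderFulfillment:
    """Toy process manager: tracks where an order is in its fulfillment."""

    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = "New"

    def handle(self, event: str) -> None:
        transitions = {
            ("New", "OrderWasMade"): "CheckingInventory",
            ("CheckingInventory", "StockReserved"): "ReadyToShip",
            ("CheckingInventory", "OutOfStock"): "AwaitingInventory",
            ("AwaitingInventory", "StockReplenished"): "ReadyToShip",
        }
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = transitions.get((self.state, event), self.state)


saga = OrderFulfillment("o-1")
saga.handle("OrderWasMade")
saga.handle("OutOfStock")
print(saga.state)  # AwaitingInventory -- parked until the supplier delivers
saga.handle("StockReplenished")
print(saga.state)  # ReadyToShip
```

A real implementation would persist this state and subscribe to the event stream, but the shape - states, events, allowed transitions - is exactly what you elicit from the domain experts.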
Recommended reading:
http://udidahan.com/2010/08/31/race-conditions-dont-exist/
http://udidahan.com/2009/04/20/saga-persistence-and-event-driven-architectures/
http://joshkodroff.com/blog/2015/08/21/an-elegant-abandoned-cart-email-using-nservicebus/
If you need the stock to be immediately consistent at all times, a common way of handling this in event-sourced systems (it can also be done in non-event-based systems; this is really orthogonal) is to rely on optimistic locking at the event store level.
Events basically have a revision number that they expect the stream of events to be at to take effect. Once the event hits the persistent store, its revision number is checked against the real stream number and if they don't match, a conflict exception is raised and the transaction is aborted.
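An in-memory sketch of that check (a toy event store; real stores such as EventStoreDB expose an equivalent expected-revision parameter on append):

```python
class ConcurrencyError(Exception):
    pass


class EventStream:
    """Toy event stream with optimistic concurrency on append."""

    def __init__(self):
        self.events = []

    @property
    def revision(self) -> int:
        return len(self.events)

    def append(self, event: dict, expected_revision: int) -> None:
        # The writer states which revision it last saw; a mismatch means
        # another writer appended first, so this transaction is aborted.
        if expected_revision != self.revision:
            raise ConcurrencyError(
                f"expected {expected_revision}, stream is at {self.revision}"
            )
        self.events.append(event)


stream = EventStream()
rev = stream.revision  # both writers read revision 0
stream.append({"type": "StockDeducted"}, expected_revision=rev)
try:
    stream.append({"type": "StockDeducted"}, expected_revision=rev)  # stale
except ConcurrencyError as e:
    print("conflict:", e)
```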
Now as #MikeSW pointed out, depending on your business requirements, stock checking can be an out-of-band process that handles the problem in an eventually consistent way. Eventually can range from milliseconds if another part of the process takes over immediately, to hours if an email is sent with human action needing to be taken.
In other words, if your domain requires it, you can choose to trade this sequence of events
(OrderAbortedOutOfStock)
for
(OrderMade, <-- Some amount of time --> OrderAbortedOutOfStock)
which amounts to the same aggregate state in the end.

Should entity contain only behavior that modifies the state?

I had a discussion recently with a co-worker, where he insisted that in Domain-Driven Design entities should not have behavior that does not modify their state. In my experience to date, I have never heard of this limitation. Is it a valid DDD rule?
To give some context (simplified scenario) - in our domain we have Computer entity, on which you can start Processes, our integration layer will actually delegate it to a remote physical Computer and start a process there.
So, should StartProcess be a behaviour of the Computer entity? Or should it be included in a Domain Service, as it does not affect the state of the Computer entity directly? (It modifies the state indirectly: once the process is over, data is synchronized back to our system.)
To me Entity is a natural place for it, as it follows the ubiquitous language, but I am wondering if someone has good reasons against (or other reasons for).
IMO an entity behavior does not need to modify state, but at the very least should emit an event. In this case, the event would be something like ProcessStarted. CQRS/event-sourcing views aggregates essentially as command handlers - they handle commands and emit events. State is made explicit when required for behavior or when denormalized for query purposes.
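A sketch of that answer (hypothetical names; the real remote call lives in the integration layer): the behaviour changes no field on the Computer itself, but it does record a ProcessStarted event.

```python
class Computer:
    """Toy aggregate: the command handler emits an event without
    mutating any of the entity's own fields."""

    def __init__(self, computer_id: str):
        self.computer_id = computer_id
        self.events = []

    def start_process(self, executable: str) -> None:
        # No state on Computer changes here; we only record the event that
        # the integration layer will pick up to start the remote process.
        self.events.append(
            {"type": "ProcessStarted", "computer": self.computer_id,
             "executable": executable}
        )


pc = Computer("pc-42")
pc.start_process("backup.exe")
print(pc.events[0]["type"])  # ProcessStarted
```

This keeps StartProcess in the ubiquitous language of the entity while leaving the side effect to whoever consumes the event.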
