Extra boundary data in command execution - domain-driven-design

I have a design problem that has been turning around in my mind for a while, and I haven't found a good solution for it. It's about CQRS and domain boundaries.
Let's say I have a context that is about taking bookings, and consequently events, for a system. The system allows a single booking linked to a single event (that's already done, no problem), and weekly bookings linked to a collection of events. Weekly bookings are defined by a day of the week (the extra data is not relevant); a weekly booking always has a starting and an ending day (half a year apart).
The system also has two types of days: normal days and non-working days, on which an event cannot be held.
As a business requirement, the users want the system to automatically cancel, for every weekly booking, the particular events that fall on non-working days.
Currently, bookings and events are stored in two tables. An event is cancelled when it's stored with a special flag. I have no link to the table of days because I've never needed it in my business context. As a business boundary (along with other small data, not relevant here) this was working great, up to now.
Here is my problem: to satisfy the users' request (create a cancelled event for every non-working day), I need information about all the days of the half year (just the ones falling on the same weekday are enough). But how can I obtain this information?
My possible solutions:
1. Load all the days of the half year into the root entity. This could be really heavy, and I'd have to extend my business boundary.
2. Preprocess the command, creating a new one with extra information. That would be a command inside a command, something I've read is dangerous. That's enough to rule it out for me.
3. Extend the command with the list of invalid days. But how do I check that a day is invalid? I'd have to access data outside my current boundary, which is the same as 1.
4. Create a service that is used in the command handler to get the list of non-working days. The information about days would be moved into a common (or shared) context.
5. Create an event listener for weekly bookings. When a weekly booking is created, it loads the list of non-working days (for that weekday of that half year) and fires a sequence of commands to cancel those particular days. This would seal the boundaries, avoid adding extra data to a common context, and reuse the same code (cancel event) for an extra purpose.
Which would be the best solution?

Litmus test: ask your stakeholders whether it ever happens that a working day becomes a non-working day, and what is supposed to happen to weekly bookings on those days. Also, does it ever happen that a non-working day becomes a working day, and what is supposed to happen to bookings on those days?
Create an event listener for weekly bookings. When a weekly booking is created, it loads the list of non-working days (for that weekday of that half year) and fires a sequence of commands to cancel those particular days. This would seal the boundaries, avoid adding extra data to a common context, and reuse the same code (cancel event) for an extra purpose.
Close, based on my understanding of what you have written.
To my mind, you really have two different aggregates; you have the definition of the weekly booking, and you have daily schedules which collect events from different bookings.
When you create a booking, your inputs are a start date, an end date, a day of the week, and probably a domain service that can return a list of days of the week in that range. Think schedule or itinerary -- we're defining the candidate days for this particular booking.
Your event listener, upon seeing a new booking, fires a command to the schedule aggregate for each candidate day, adding the event requested by the weekly booking. Because the schedule knows whether or not it is a "non-working day", it can mark each of those events as scheduled or cancelled (if you want that information to be explicit; you could leave it implied by the state of the working day in some systems).
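A rough sketch of that flow in TypeScript (every name here is hypothetical, just to make the shape concrete):

```typescript
// Hypothetical types; none of these names come from a framework.
interface WeeklyBookingCreated {
  bookingId: string;
  candidateDays: Date[]; // produced by the domain service mentioned above
}

class DailySchedule {
  private readonly events: { bookingId: string; status: "scheduled" | "cancelled" }[] = [];

  constructor(private readonly isWorkingDay: boolean) {}

  // The schedule decides, because it knows whether it is a working day.
  addEvent(bookingId: string): void {
    this.events.push({
      bookingId,
      status: this.isWorkingDay ? "scheduled" : "cancelled",
    });
  }
}

// The listener reacts to the booking event and commands each day's schedule.
async function onWeeklyBookingCreated(
  event: WeeklyBookingCreated,
  schedules: {
    load(day: Date): Promise<DailySchedule>; // created on demand if missing
    save(schedule: DailySchedule): Promise<void>;
  },
): Promise<void> {
  for (const day of event.candidateDays) {
    const schedule = await schedules.load(day);
    schedule.addEvent(event.bookingId);
    await schedules.save(schedule);
  }
}
```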
Empty schedules can be created in advance, or on demand using some generic recipe to determine whether or not they are working days, and can support changes to their own working status if that's part of your current domain.
The key ideas here are that non-working days are a part of your domain model, and since they span multiple booking objects, they clearly are an entity that sits outside of the booking aggregate.

Related

What notation marks that an activity must be completed within two days in an activity diagram?

Customers can pay for an order instantly or later. When the order is pay-later, I want to draw a notation that signifies that the customer must pay within two days in an activity diagram. If the customer does not pay within two days, the system will mark the order as canceled.
In this attached image, the first swimlane is for the actor Customer, and the second swimlane is for the actor System. I created a time event notation that signifies that the customer must pay within 48 hours. Then, I placed the merge/branch node on the customer swimlane to signify that the customer is the actor that must make the payment.
One issue I see with my current diagram is that someone might misunderstand the time event notation. Someone might read the notation as saying that the system will always wait 48 hours before marking the order as canceled or awaiting shipment. In reality, the system will mark the order as awaiting shipment as soon as the customer pays. However, if the customer doesn't pay within 48 hours, the system will mark the order as canceled.
How can I draw a better diagram to signify the above description?
An accept time event action (e.g. an AcceptEventAction with a single TimeEvent trigger) cannot have an input flow, so your diagram is invalid and therefore does not show what you want.
The guards of the flows after a decision node must be written in square brackets ([]).
I placed the merge/branch node on the customer swimlane to signify that the customer is the actor that must make the payment.
but this is checked by the system independently of the customer, so this is wrong/unclear.
The fact that the two actions creating the order are not in the customer swimlane is also wrong/unclear to me.
After the action that creates an order with the awaiting-payment status, you can create a new timer dedicated to the customer's current order. If the customer pays within 2 days, the corresponding timer is deleted.
But that can produce a lot of timers. Alternatively, you can memorize each order plus its timeout in a FIFO and use a single timer. If a customer pays within 2 days, the corresponding order is removed from the FIFO.
That single timer can periodically check the memorized orders, but that polling wakes the system up even when nothing needs to be done.
Better, the single timer can be started when the first order is memorized; when the system wakes up, it handles the orders that have become too old; then, if the FIFO has become empty, the timer is stopped, otherwise it is re-armed based on the deadline of the first (oldest) order in the FIFO. A sketch of this follows.
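A small sketch of that last variant, assuming Node-style setTimeout and a constant 48-hour timeout (so deadlines in the FIFO are already in order); all names are illustrative:

```typescript
// Sketch of the single-timer FIFO. Because every order gets the same
// timeout, the deadlines in the FIFO are already sorted.
type PendingOrder = { orderId: string; deadline: number };

const fifo: PendingOrder[] = [];
let timer: ReturnType<typeof setTimeout> | null = null;

function memorizeOrder(orderId: string, timeoutMs = 48 * 3600 * 1000): void {
  fifo.push({ orderId, deadline: Date.now() + timeoutMs });
  if (timer === null) scheduleWakeUp(); // first order starts the timer
}

function orderPaid(orderId: string): void {
  const i = fifo.findIndex((o) => o.orderId === orderId);
  if (i >= 0) fifo.splice(i, 1); // paid in time: forget the order
}

function scheduleWakeUp(): void {
  const head = fifo[0];
  timer = setTimeout(onWakeUp, Math.max(0, head.deadline - Date.now()));
}

function onWakeUp(): void {
  // Handle every order whose deadline has passed (they sit at the front).
  while (fifo.length > 0 && fifo[0].deadline <= Date.now()) {
    const expired = fifo.shift()!;
    console.log(`mark order ${expired.orderId} as cancelled`);
  }
  if (fifo.length === 0) {
    timer = null; // FIFO empty: stop the timer
  } else {
    scheduleWakeUp(); // re-arm for the new oldest order
  }
}
```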
Per @qwerty_so's comment, I have decided to use an interruptible region. The interruption trigger of this region is the system accepting payment. Here's the new diagram.
EDIT
As per @bruno's and @Axel Scheithauer's comments, I have attached a more complete image of my activity diagram. An Accept Time Event Action seems to be able to have incoming edges/flows, contrary to @bruno's comments. Furthermore, I believe the incomplete screenshot was what caused the confusion about my diagram.
I also revised my diagram so that the interruptible region's signal comes from the Accept Time Event Action instead of the Accept Event Action.
Diagram 1:
Diagram 2:

How should I guarantee consistency in a database involving financial transaction operations

I am trying to figure out how to handle consistency in the database.
Scenario:
User A has an accounting document in the database that includes a balance field representing his current amount of money (suppose he initially has $100).
My system has many methods that charge his account.
Suppose two methods run at the same time, each charging him $10, and the steps interleave in the following order:
Method 1 READs his balance and stores it in memory ($100)
Method 2 READs his balance and stores it in memory ($100)
... some business logic ...
Method 1 UPDATEs his balance by subtracting $10 from the in-memory value ($100 - $10) and saves it
Method 2 UPDATEs his balance by subtracting $10 from the in-memory value ($100 - $10) and saves it
This means he has been charged only $10 instead of $20.
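Here is a minimal sketch of that interleaving in TypeScript, with an in-memory variable standing in for the database (illustration only):

```typescript
// Minimal reproduction of the lost update, with an in-memory "database".
let storedBalance = 100;

async function chargeNaively(amount: number): Promise<void> {
  const inMemory = storedBalance;              // READ
  await new Promise((r) => setTimeout(r, 10)); // ... some business logic
  storedBalance = inMemory - amount;           // UPDATE from the stale read
}

async function main(): Promise<void> {
  // Both methods read 100 before either writes back.
  await Promise.all([chargeNaively(10), chargeNaively(10)]);
  console.log(storedBalance); // 90, not 80 -- one charge was lost
}
main();
```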
I've searched about this situation for a while and still can't get it clear (sorry for my stupidity).
I'd really appreciate your help enlightening me. :)
You just discovered why financial transactions are complicated :-)
Have you ever wondered why it takes time for you to have an updated balance in your bank account? Or why you actually have two balances, instead of one?
That's because your account can actually go negative and (up to a certain point) that will be fine.
So in a real-life scenario, what happens is that you have a balance of $100, you pay $10, and until that transaction is processed and confirmed by the receiver, you still have your $100. If you do 20 transactions of $10 each, you'll be able to complete them because the system will most likely not notice.
And honestly, it shouldn't. Think of credit cards, you might not have enough money now, but maybe you know you'll have enough when the credit is due.
So, the race condition you describe only occurs if you read the value and then update it based on the stale read.
There are a few approaches:
Read the current balance, and update the row using the old balance as a condition in the WHERE clause. That way, if the update affects no rows, you know you need to re-read and retry (see the sketch after this list).
Don't update the balance immediately; do it time-based instead, say once per hour. Yes, you might still have to do some checks, but the system will be more responsive overall.
Lock the database row as your first step. This would work, but there's a chance it will make the app slower.
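A minimal sketch of the first approach, assuming a hypothetical `Db` helper where `query` returns rows and `execute` returns the affected-row count (any real driver exposes equivalents):

```typescript
// Optimistic update: the old balance in the WHERE clause guards against
// lost updates; zero affected rows means a concurrent write happened.
interface Db {
  query(sql: string, params: unknown[]): Promise<Array<{ balance: number }>>;
  execute(sql: string, params: unknown[]): Promise<number>;
}

async function charge(db: Db, accountId: string, amount: number): Promise<void> {
  for (;;) {
    const [row] = await db.query(
      "SELECT balance FROM accounts WHERE id = ?",
      [accountId],
    );
    const affected = await db.execute(
      "UPDATE accounts SET balance = ? WHERE id = ? AND balance = ?",
      [row.balance - amount, accountId, row.balance],
    );
    if (affected === 1) return; // nobody wrote in between: done
    // 0 rows affected: the balance changed concurrently -- re-read and retry.
  }
}
```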
The race condition you describe is a low-level design concern. With a backend engine like Node, which handles incoming requests in a first-come, first-served fashion, you don't need to think about this case. The race condition you describe is not possible if you respect the order in which database update callbacks are fired: they are fired in the same order they were issued. So you should issue the next update only when the previous one has finished. Promises are a great way to do this, as sketched below.
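For illustration, a minimal sketch of that serialization with a promise chain (`applyCharge` is a hypothetical stand-in for the actual read-then-update logic; note this only serializes work within a single process, not across multiple servers):

```typescript
// A promise chain: each charge starts only after the previous one has
// finished, so the read-then-update steps never interleave.
let queue: Promise<void> = Promise.resolve();

function enqueueCharge(accountId: string, amount: number): Promise<void> {
  queue = queue.then(() => applyCharge(accountId, amount));
  return queue;
}

async function applyCharge(accountId: string, amount: number): Promise<void> {
  // read the balance, subtract `amount`, write it back -- the next charge
  // waits for this promise to resolve first (handle rejections in practice,
  // or one failure will break the chain)
}
```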

DDD handling Aggregate updates over time

Using Event Sourcing, I have a domain in which aggregates should be updated from time to time. When I create an aggregate, I give it an expiry time (this can be arbitrary), and after that time I have to update some properties of the entity. (This can also be forced using an UpdateCommand.) I have a few processes in mind:
After the aggregate creation, I store the aggregate ID and the expiry time in an RDBMS.
In a cron job, I query the database for expired aggregates and submit an UpdateCommand.
Others include emitting UpdateCommands (or events?) from the read side.
Using a saga to coordinate updates; this is similar to the first. Either way, I have to store the expiry times.
So I have to store the events and write into a database on the write side transactionally. However, I am not sure whether creating a read side for the write side (?) is the correct solution in the DDD world, or whether it is applicable at all. What are the recommended solutions?
I also need to run some commands after some time expires.
For example, I need to emit a ContractExpiredEvent after 1 year (the ContractAggregate decides when, but usually it is 1 year). The problem is that the Aggregate must be the one that decides when and what command to execute, so this is a Domain concern more than an Infrastructure one.
How did I do that? I was inspired by Udi Dahan's video in which he introduces the term Timeout. Long story short, the Aggregate requests that a command be sent back to itself after a period of time passes. It does that by yielding the command from a command handler. The underlying CQRS framework takes that scheduled command and persists it in a special repository. Then a cron job processes all scheduled commands when their time comes.
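A rough sketch of that shape in TypeScript (the framework types, the generator-based yielding, and the repository are all hypothetical stand-ins for whatever your CQRS framework provides):

```typescript
// Hypothetical framework types; the `yield` mirrors the "command sent to
// itself later" idea, not any specific framework API.
interface ScheduledCommand {
  aggregateId: string;
  command: { type: string };
  dueAt: Date;
}

class ContractAggregate {
  constructor(public readonly id: string, private readonly createdAt: Date) {}

  // The command handler yields a command to be delivered back to this
  // aggregate later; the aggregate, not the infrastructure, decides when.
  *handleCreate(): Generator<ScheduledCommand> {
    const dueAt = new Date(this.createdAt);
    dueAt.setFullYear(dueAt.getFullYear() + 1); // usually 1 year
    yield { aggregateId: this.id, command: { type: "ExpireContract" }, dueAt };
  }
}

// The framework persists yielded commands; a cron job dispatches due ones.
async function cronTick(
  repo: { dueCommands(now: Date): Promise<ScheduledCommand[]> },
  send: (c: ScheduledCommand) => Promise<void>,
): Promise<void> {
  for (const cmd of await repo.dueCommands(new Date())) {
    await send(cmd); // the aggregate will then emit ContractExpiredEvent
  }
}
```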
There's good compatibility between ES and DDD.
However, I am not sure if creating a read-side for the write-side (?) is the correct solution in the DDD world, or is it applicable?
Yes, in your case it's part of the domain aggregate (if you're talking about storing expiry times on the write side).
So, I have to store the events and write into a database on the write side transactionally.
I suggest you use a saga for writing into the database.
John Carmack, 1998:
If you don't consider time an input value, think about it until you do -- it is an important concept
The pattern you should be looking for is that the real world (where time is) tells the aggregate the current time, and the aggregate decides whether or not to expire itself.
With that pattern in place, you can use any strategy you like for scheduling when the real world tells the aggregate what time it is.
You don't need immediately consistent scheduling in the aggregate; you just need idempotent message handling and an "at least once" delivery process.
the aggregate has a method which can cause an update if it is necessary based on the current time, not blindly. At some point I have to fetch the right aggregate from the store, call that method, and store the changes back (if any), or retry later, right?
Yes, that's the right idea.
Notice that if you call that method twice after the expiration time, the first call will load the history, append the expiration events, and store the updated history. The second call loads the history, sees that the aggregate is already expired, and returns without making any change to the history.
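A minimal sketch of that idempotence (names are illustrative; in a real event-sourced system the `expired` flag would be rebuilt by replaying the history rather than held in a field):

```typescript
type DomainEvent = { type: string; occurredAt: Date };

class Contract {
  private expired = false;

  constructor(private readonly expiresAt: Date) {}

  // The real world tells the aggregate the current time; the aggregate
  // decides whether anything needs to happen.
  checkExpiration(now: Date): DomainEvent[] {
    // Already expired, or not yet due: append nothing. Calling this twice
    // after the expiry time therefore changes the history only once.
    if (this.expired || now < this.expiresAt) return [];
    this.expired = true;
    return [{ type: "ContractExpired", occurredAt: now }];
  }
}
```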
You can also use bi-temporal event sourcing. When events are stored, there are two dates:
the date when the event is added to the database (createdAt)
the date when the event has to be applied (validFrom)
The events are then applied in the order defined by the validFrom property.
Using this, you can:
"fix the past" by adding a new event (createdAt = now and validFrom = now - x)
schedule events in the future by adding a new event (createdAt = now and validFrom = now + y)
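A small sketch of the replay rule implied by these two dates (names are illustrative):

```typescript
// Events are appended in createdAt order but applied in validFrom order;
// future-dated events are simply not visible yet.
interface BiTemporalEvent {
  type: string;
  createdAt: Date; // when the event was stored
  validFrom: Date; // when the event takes effect
}

function eventsToApply(stream: BiTemporalEvent[], asOf: Date): BiTemporalEvent[] {
  return stream
    .filter((e) => e.validFrom <= asOf)
    .sort((a, b) => a.validFrom.getTime() - b.validFrom.getTime());
}
```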
I suggest watching this great talk by Thomas Pierrain at DDD Europe 2018: https://www.youtube.com/watch?v=xzekp1RuZbM

DDD how to model time tracking?

I am developing an application that has an employee time tracking module. When an employee starts working (e.g. at some abstract machine), we need to save information about his work. Each day, lots of employees work at lots of machines, and they switch between them. When they start working, they notify the system that they have started working. When they finish working, they notify the system about it as well.
I have an aggregate Machine and an aggregate Employee. These two are aggregate roots with their own behavior. Now I need a way to build reports for any given Employee or any given Machine over any given period of time. For example, I want to see which machines a given employee used over a period of time and for how long. Or I want to see which employees worked at a given machine, and for how long, over a period of time.
Ideally (I think) my aggregate Machine should have methods startWorking(Employee employee) and finishWorking(Employee employee).
I created another aggregate, EmployeeWorkTime, that stores information about the Machine, the Employee, and start/finish timestamps. Now I need a way to modify one aggregate and create another at the same time (or ideally some other approach, since this way it's somewhat difficult).
Also, employees have a Shift that describes how many hours a day they must work. The information from a Shift should be saved in the EmployeeWorkTime aggregate in order to stay consistent in case the Shift is changed for a given Employee.
Rephrased question
I have a Machine, I have an Employee. HOW the heck can I save this information:
This Employee worked at this Machine from 1.05.2017 15:00 to 1.05.2017 18:31.
I could do this simply using CRUD, saving multiple aggregates in one transaction, going database-first. But I want to use DDD methods to be able to manage complexity since the overall domain is pretty complex.
From what I understand about your domain, you must model the process of an Employee working on a Machine. You can implement this using a process manager/saga. Let's name it EmployeeWorkingOnAMachineSaga. It works like this (using CQRS; you can adapt it to other architectures), with a sketch after the list:
When an employee wants to start working on a machine, the EmployeeAggregate receives the command StartWorkingOnAMachine.
The EmployeeAggregate checks that the employee is not already working on another machine, and if not, it raises EmployeeWantsToWorkOnAMachine and changes the employee's status to wantingToWorkOnAMachine.
This event is caught by the EmployeeWorkingOnAMachineSaga, which loads the MachineAggregate from the repository and sends it the command TryToUseThisMachine. If the machine is not vacant, it rejects the command and the saga sends the RejectWorkingOnTheMachine command to the EmployeeAggregate, which in turn changes its internal status (by raising an event, of course).
If the machine is vacant, it changes its internal status to occupiedByAnEmployee (by raising an event),
and similarly when the worker stops working on the machine.
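A rough sketch of the saga's reaction in TypeScript (the repository and messaging shapes are hypothetical; only the aggregate, saga, and command names come from the steps above):

```typescript
class MachineAggregate {
  private occupiedBy: string | null = null;

  // In a real event-sourced aggregate this would raise an event instead
  // of mutating state directly.
  tryToUse(employeeId: string): void {
    if (this.occupiedBy !== null) throw new Error("machine is not vacant");
    this.occupiedBy = employeeId; // status -> occupiedByAnEmployee
  }
}

class EmployeeWorkingOnAMachineSaga {
  constructor(
    private readonly machines: { load(id: string): Promise<MachineAggregate> },
    private readonly sendToEmployee: (employeeId: string, command: { type: string }) => Promise<void>,
  ) {}

  async onEmployeeWantsToWorkOnAMachine(e: { employeeId: string; machineId: string }): Promise<void> {
    const machine = await this.machines.load(e.machineId);
    try {
      machine.tryToUse(e.employeeId); // vacant: machine becomes occupied
    } catch {
      // not vacant: tell the employee aggregate the attempt was rejected
      await this.sendToEmployee(e.employeeId, { type: "RejectWorkingOnTheMachine" });
    }
  }
}
```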
Now I need a way to build reports for any given Employee or any given Machine over any given period of time. For example, I want to see which machines a given employee used over a period of time and for how long. Or I want to see which employees worked at a given machine, and for how long, over a period of time.
This should be implemented by read-models that just listen to the relevant events and build the reports that you need.
Also, employees have a Shift that describes how many hours a day they must work. The information from a Shift should be saved in the EmployeeWorkTime aggregate in order to stay consistent in case the Shift is changed for a given Employee
Depending on how you want the system to behave, you can implement it using a saga (if you want the system to do something when an employee works more or less than they should) or as a read-model/report if you just want to see the employees who do not conform to their daily shift.
I am developing an application that has an employee time tracking module. When an employee starts working (e.g. at some abstract machine), we need to save information about his work. Each day, lots of employees work at lots of machines, and they switch between them. When they start working, they notify the system that they have started working. When they finish working, they notify the system about it as well.
A critical thing to notice here is that the activity you are tracking is happening in the real world. Your model is not the book of record; the world is.
Employee and Machine are real world things, so they probably aren't aggregates. TimeSheet and ServiceLog might be; these are the aggregates (documents) that you are building by observing the activity in the real world.
If event sourcing is applicable there, how can I store domain events efficiently to build reports faster? Should each important domain event be its own aggregate?
Fundamentally, yes -- your event stream is going to be the activity that you observe. Technically, you could call it an aggregate, but it's a pretty anemic one; it's easier to just think of it as a database, or a log.
In this case, it's probably just full of events like
TaskStarted {badgeId, machineId, time}
TaskFinished {badgeId, machineId, time}
Having recorded these events, you forward them to the domain model. For instance, you would take all of the events with Bob's badgeId and dispatch them to his Timesheet, which starts trying to work out how long he was at each workstation.
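A small sketch of that fold, assuming the TaskStarted/TaskFinished shape above and that the events passed in are already filtered to one badgeId:

```typescript
interface TaskEvent {
  kind: "TaskStarted" | "TaskFinished";
  badgeId: string;
  machineId: string;
  time: Date;
}

// Pair each TaskStarted with the next TaskFinished on the same machine
// and accumulate elapsed milliseconds per machineId.
function timeByMachine(events: TaskEvent[]): Map<string, number> {
  const open = new Map<string, Date>(); // machineId -> start time
  const totals = new Map<string, number>();
  const ordered = [...events].sort((a, b) => a.time.getTime() - b.time.getTime());
  for (const e of ordered) {
    if (e.kind === "TaskStarted") {
      open.set(e.machineId, e.time);
    } else {
      const start = open.get(e.machineId);
      if (start !== undefined) {
        const elapsed = e.time.getTime() - start.getTime();
        totals.set(e.machineId, (totals.get(e.machineId) ?? 0) + elapsed);
        open.delete(e.machineId);
      }
    }
  }
  return totals;
}
```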
Given that Machine and Employee are aggregate roots (they have their own invariants and business logic in a complex net of interrelations; the time-tracking feature is only one of the modules)
You are likely to get yourself into trouble if you assume that your digital model controls a real world entity. Digital shopping carts and real world shopping carts are not the same thing; the domain model running on my phone can't throw things out of my physical cart when I exceed my budget. It can only signal that, based on the information that it has, the contents are not in compliance with my budgeting policy. Truth, and the book of record are the real world.
Greg Young discusses this in his talk at DDDEU 2016.
You can also review the Cargo DDD Sample; in particular, pay careful attention to the distinction between Cargo and HandlingHistory.
Aggregates are information resources; they are documents with internal consistency rules.

Recurrent workflow stops after a number of iterations

A workflow is started upon instantiation of the entity Hazaa. It waits for a while and then creates a new instance of Hazaa. After that, it's put to sleep as successful.
I'd expect it to fire perpetually, creating a bunch of Hazaas. However, I only get 15 new ones before the procreation ceases. Together with the original one that I create manually to set off the chain, there are 16 instances in total. I've tested with longer delays (up to several hours), but the behavior is consistent.
That's for CRM Online. On premise, the behavior is similar but limited to 8 instances in total.
According to the links I've harvested, there's a setting in CRM to control the number of iterations. The problem is that my solution will mainly be deployed for online customers, so unless I own the cloud, that's a show-stopper.
I understand it's CRM protecting itself against infinite recurrence. What can I do about it?
The best solution I can think of at the moment is to set up a super-workflow that fires the sub-workflow 16 times. Then I'd need a super-super-workflow, etc. Not something to brag about, in my view.
A CorrelationToken contains a counter and a one-hour "self-destruct" timer.
When the first workflow runs, a new CorrelationToken is created. The counter is set to 1 and the timer is set to one hour.
When the second workflow is started from the first workflow (even indirectly, such as in your case), this same CorrelationToken is used if its self-destruct timer has not already expired. If it has, a new CorrelationToken is created. If it hasn't, the counter is incremented and the timer is reset. Lather, rinse, repeat.
The second (and subsequent) workflows will only execute if the counter is 8 or less (On-Premise) or 16 or less (CRM Online).
What this means in practice is that if your child workflows execute sooner than one hour apart, the CorrelationToken never gets a chance to expire, which means the counter eventually increments past the limit. It does not mean that you can execute up to 8 (or 16) of these workflows every hour.
It sounds like you already figured most of this out, but I wanted to give other readers background. So, to answer your question: if your design includes looping workflows that are executed sooner than one hour apart, you will need to consider an alternate design. It will definitely involve an external process or service.
If I'm understanding you correctly, it sounds like you're creating an infinite loop, which is why CRM kills workflows like these: otherwise they would never end. On what condition would you stop making more Hazaa records? You could add a number field, increment it on each new Hazaa, and stop the workflow when it reaches a certain number.
