I'm working on an invoicing web application which uses event sourcing and CQRS.
I have two denormalizers for different queries (one for a list of invoice summaries, and one for a single invoice with full details). I find it strange that I need to duplicate a lot of the logic for these two denormalizers - for example, listening to the events that change the total, subtotal, taxes, etc.
I ended up passing the aggregate itself, which contains the real calculated data, on the messaging bus instead of just the events and had the denormalizers listen to it instead of the events.
This made it simpler for me, but seems different from the pattern. This approach was not mentioned in any of the articles I've read.
I liked the idea of passing just the events on the bus and having each denormalizer react to what it needs, but in practice it felt more cumbersome.
I'd love to hear what you think.
Thanks for your help!
As suggested by guillaume31 in the comment above, you may simply enrich your domain model with special events such as NewTotalComputed. Depending on the number of events, however, this might soon clutter your domain model.
Alternatively, you can refactor the computation logic into a special strategy class that is used from both the domain model (aggregate root) and the read model.
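As a rough sketch of that refactoring, the totals computation could live in one strategy class that both the aggregate root and the denormalizers call, so neither side duplicates the math. All names here (`InvoiceTotalsCalculator`, `InvoiceTotals`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InvoiceTotals:
    subtotal: float
    taxes: float
    total: float

class InvoiceTotalsCalculator:
    """Shared strategy: one implementation of the totals logic,
    used by the write model (aggregate) and the read models alike."""

    def __init__(self, tax_rate: float):
        self.tax_rate = tax_rate

    def calculate(self, line_amounts) -> InvoiceTotals:
        subtotal = sum(line_amounts)
        taxes = round(subtotal * self.tax_rate, 2)
        return InvoiceTotals(subtotal, taxes, subtotal + taxes)
```

The aggregate would call `calculate` while applying commands, and each denormalizer would call the same method while projecting events, so the calculation exists exactly once.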
I'm currently working on a CMMS software. Most of these packages have some way of managing your inventory, being able to create Purchase Orders to buy more stock, etc.
Part of our system works perfectly with Event Sourcing because there are clearly defined events. But there is also a bunch of attached data that are almost completely unrelated to that whole process and are just used by companies for reporting at a later date.
For example, a Purchase Order might have a department attached to it. We don't use this department in any stage of the Purchase Order, but a company might come along and want a report for 'How much money has each department spent'. On the other hand, another company might be so small/specialized that it doesn't even have separate departments, so it doesn't care about this field.
That's just one example, but there can be a decent number of fields like that (think 5-10 per entity). In this kind of situation, is it better to just have a 'PurchaseOrderUpdated' event that covers all these fields? Or is it considered better to have individual update events like 'PurchaseOrderDepartmentChanged'?
I personally think there should be an event per property, but I think the rest of my team would complain about having to set up that many events.
It sounds like you may be mixing up two different approaches.
An event in an event sourced system describes a change in state that HAS already happened. It is typically triggered by a command issued to an aggregate root. A key feature of these domain objects is that they don't have any public properties, so it wouldn't make sense to create an 'Update' event per property. Here is a blog post which talks about how events are named and used:
6 Code Smells with your CQRS Events - and how to avoid them
Regarding the payload of an event: I have found it better to ensure events are rich in data. A key value of doing event sourcing is the ability to create new read models that you hadn't considered when you initially built the system - or, as you rightly pointed out, to create invaluable reporting for the business. IMHO, make rich events which include contextual information as well as the key information about the state change.
A better way to think about events is that they are transactional boundaries. So if by default you have a potential to update a few fields, you shouldn't need to emit more than one event. For this event-based thinking, event modeling helps communicate this with the rest of the organization: https://eventmodeling.org/posts/what-is-event-modeling/
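One hedged way to sketch that "one event per transactional boundary" idea: a single coarse-grained event carries the handful of reporting fields, with unchanged fields left empty so projections only overwrite what actually changed. The event and field names below are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PurchaseOrderUpdated:
    """One event for the whole update transaction.
    Fields that were not touched stay None."""
    purchase_order_id: str
    department: Optional[str] = None
    cost_center: Optional[str] = None

def apply_to_read_model(row: dict, event: PurchaseOrderUpdated) -> dict:
    # Overwrite only the fields the event actually carries.
    updated = dict(row)
    for name in ("department", "cost_center"):
        value = getattr(event, name)
        if value is not None:
            updated[name] = value
    return updated
```

A per-department spend report would simply group these rows by `department`; companies without departments just never set the field.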
I am just starting out with ES/DDD and I have a question how one is supposed to do reporting in this architecture. Lets take a typical example, where you have a Customer Aggregate, Order Aggregate, and Product Aggregate all independent.
Now if I want to run a query across all 3 aggregates and/or services, but each aggregate's data is in a separate DB - maybe one is SQL, one is a MongoDB, and one something else - how is one supposed to design, or be able to run, a query that would require a join across these aggregates?
You should design the Reporting as a simple read-model/projection, possibly in its own bounded context (BC), that just listens to the relevant events from the other bounded contexts (Customer BC, Ordering BC and Inventory BC) and builds the needed reports with full data denormalization (i.e. at query time you won't need to query the original sources).
Because of the events, you won't need any joins: you can maintain a private local state attached to the Reporting read-model in which you store temporary external models, and query those temporary read-models as needed, thus avoiding additional external queries to the other BCs.
An anti-corruption layer would not be necessary in this case as there would be no write-model involved in the Reporting BC.
Things are really as simple as that because you already have an event-driven architecture (you use Event sourcing).
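A minimal sketch of such a projection, with made-up event shapes: it keeps a small local lookup of customer data so that order events from another bounded context can be enriched without querying the Customer BC at report time.

```python
class SalesReportProjection:
    """Denormalizing reporting projection (illustrative names).
    The private customer lookup is the 'temporary external model'."""

    def __init__(self):
        self._customers = {}   # local state: customer_id -> name
        self.report_rows = []  # fully denormalized report

    def handle(self, event: dict):
        if event["type"] == "CustomerRegistered":
            self._customers[event["customer_id"]] = event["name"]
        elif event["type"] == "OrderPlaced":
            self.report_rows.append({
                "order_id": event["order_id"],
                "customer_name": self._customers.get(event["customer_id"], "?"),
                "amount": event["amount"],
            })
```

At query time the report rows are served as-is; no join against the Customer or Ordering stores is needed.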
UPDATE:
This particular solution is very handy for creating new reports that you hadn't thought of ahead of time. Every time you think of a new report, you just create a new read-model (as in, you write its source code) and then replay all the relevant events on it. Read-models are side-effect free; you can replay all the events (from the beginning of time) any time and as many times as you want.
Read-model rebuilding is done in two situations:
you create a new Read-model
you modify an existing one, by listening to a new event or because the algorithm differs too much from the initial version
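The rebuilding step above is mechanically simple; a hedged sketch (all names invented): because a read-model is side-effect free, rebuilding is just "new empty instance plus every historical event in order".

```python
def rebuild(read_model_factory, event_history):
    """Replay the full event history onto a fresh read-model."""
    model = read_model_factory()
    for event in event_history:
        model.apply(event)
    return model

class InvoiceCountByStatus:
    """Illustrative read-model counting invoices per status."""

    def __init__(self):
        self.counts = {}

    def apply(self, event):
        if event["type"] == "InvoiceIssued":
            self.counts["issued"] = self.counts.get("issued", 0) + 1
        elif event["type"] == "InvoicePaid":
            self.counts["issued"] = self.counts.get("issued", 0) - 1
            self.counts["paid"] = self.counts.get("paid", 0) + 1
```

Both rebuilding situations reduce to the same call: deploy the new or modified read-model class, then `rebuild(InvoiceCountByStatus, history)`.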
You can read more here:
DDD/CQRS specialized forum - Greg Young is there!
Event sourcing applied – the read model
Writing an Event-Sourced CQRS Read Model
A post in the first group describing read-model rebuilding
Or you can search about this using this text: event sourcing projection rebuilding
Domain-Driven Design is more concerned with the command side of things. You should not attempt to query your domain as that leads to pain and suffering.
Each bounded context may have its own data store and that data store may be a different technology as you have stated.
For reporting you would use a reporting store. How you get data into that store would either require each bounded context to publish events that the reporting BC would pick up and use to update the reporting store or you could make use of event sourcing where the reporting store would project the events into the relevant reporting structures.
There are known practices to solve this.
One might be having a reporting context, which, as Eben has pointed out, will listen to domain events from other contexts and update its store. This of course will lead to issues, since this reporting context will be coupled to all services it reports from. Some might say this is a necessary evil but this is not always the case.
Another technique is to aggregate on-demand. This is not very complex and can be done on different layers/levels. Consider aggregation on the web API level or even on the front-end level, if your reporting is on the screen (not sent by mail as PDF, for example).
This is well known as UI composition, and Udi Dahan has written an article about it which is worth reading: UI Composition Techniques for Correct Service Boundaries. Also, Mauro Servienti wrote a blog post about this recently: The secret of better UI composition.
Mauro mentions the two types of composition I listed above. The API/server-side composition is called ViewModel Composition in his post, and front-end (JavaScript) composition is covered in the Client side composition process section; server-side composition is illustrated with a diagram in his post.
DDD strategic modeling says to design two different models:
1. Write models (handling the command side)
2. Read models (POCOs/POJOs, whatever you call them)
I am new to DDD and have just been looking at domain events. I am not sure I understand them correctly, and I am wondering: what happens if publishing a domain event fails?
I have a case here. When a buyer orders something from my website, we first create an Order object with its line items. The domain event, OrderWasMade, is then published to deduct the stock in Inventory. Here is the case: when the event is handled, the item quantity should be deducted, but what if, when the system tries to deduct the stock, it finds there is no stock remaining for the item (amount = 0)? The amount can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
Sorry to squeeze in 2 other questions here.
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS Server, I must enable DTC, am I correct?
Is there any relationship between domain-events and domain-services?
A domain event never fails because it's a notification of things that happened (note the past tense). But the operation which will generate that event might fail and the event won't be generated.
The scenario you told us about shows that you're not really doing DDD; you're doing CRUD using DDD words. Yes, I know you're new to it - don't worry, everybody misunderstands DDD until they get it (but it might take some time and plenty of practice).
DDD is about identifying the domain model abstraction, which is not code. Code is when you're implementing that abstraction. It's very obvious you haven't done the proper modelling, because the domain expert should tell you what happens if products are out of stock.
Next, there are no db/ACID transactions at this level. Those are an implementation detail. The way DDD works is by identifying where the business needs things to be consistent together, and that's called an aggregate.
The order was submitted, and this is where that use case stops. When you publish the OrderWasMade event, another use case (deducting the inventory or whatever) is triggered. This is a different business scenario, related to but not part of "submit order". If there isn't enough stock, then another event, NotEnoughInventory, is published and yet another use case is triggered. We follow the business here, and we identify each step the business takes in order to fulfill the order.
The art of DDD consists in understanding and identifying granular business functionality, the involved aggregates, the business behaviour which makes decisions, etc., and this has nothing to do with the database or transactions.
In DDD the aggregate is the only place where a unit of work needs to be used.
To answer your questions:
It seems like each event will be in its own transaction scope, which means the system needs to open multiple connections to the database at once. So if I am using IIS Server, I must enable DTC, am I correct?
No; transactions, events and distributed transactions are different things. IIS is a web server - I think you mean SqlServer. You're always opening multiple connections to the db in a web app, and DTC has nothing to do with it. Actually, the question tells me that you need to read a lot more about DDD, and not just Evans' book. To be honest, from a DDD point of view what you're asking doesn't make much sense. You know one of the principles of DDD: the db (as in persistence details) doesn't exist.
Is there any relationship between domain-events and domain-services
They're both part of the domain but they have different roles:
Domain events tell the world that something changed in the domain
Domain services encapsulate domain behaviour which doesn't have its own persisted state (like Calculate Tax)
Usually an application service (which acts as a host for a business use case) will use a domain service to verify constraints or to gather data required to change an aggregate which in turn will generate one or more events. Aggregates are the ones persisted and always, an aggregate is persisted in an atomic manner i.e db transaction / unit of work.
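The collaboration just described can be sketched as follows; every name here (`TaxCalculator`, `Order`, `AddOrderLineService`) is hypothetical, and persistence is only hinted at in a comment:

```python
class TaxCalculator:
    """Domain service: behaviour with no persisted state of its own."""

    def tax_for(self, amount):
        return round(amount * 0.2, 2)  # assumed flat 20% rate for the sketch

class Order:
    """Aggregate root: records what happened as pending events."""

    def __init__(self, order_id):
        self.id = order_id
        self.pending_events = []

    def add_line(self, amount, tax):
        self.pending_events.append(
            {"type": "OrderLineAdded", "amount": amount, "tax": tax})

class AddOrderLineService:
    """Application service hosting the use case."""

    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def execute(self, order, amount):
        tax = self.tax_calculator.tax_for(amount)
        order.add_line(amount, tax)
        # Here the aggregate would be persisted and its pending events
        # published, atomically, inside one unit of work.
```

The point of the shape is the direction of dependencies: the application service orchestrates, the domain service computes, and only the aggregate produces events and is persisted.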
what will happen if domain event published failed?
MikeSW already described this - publishing the event (which is to say, making it part of the history) is a separate concern from consuming the event.
what if, when the system tries to deduct the stock, it finds out that there is no stock remaining for the item (amount = 0)? The amount can't be deducted, but the order has already been committed.
Will this kind of scenario happen?
So the DDD answer is: ask your domain experts!
If you sit down with your domain experts, and explore the ubiquitous language, you are likely to discover that this is a well understood exception to the happy path for ordering, with an understood mitigation ("we mark the status of the order as pending, and we check to see if we've already ordered more inventory from the supplier..."). This is basically a requirements discovery exercise.
And when you understand these requirements, you go do it.
"Go do it" typically means a "saga" (a somewhat misleading and overloaded use of the term): a business process/workflow/state machine implementation that keeps track of what is going on.
Using your example: OrderWasMade triggers an OrderFulfillment process, which tracks the "state" of the order. There might be an "AwaitingInventory" state where OrderFulfillment parks until the next delivery from the supplier, for example.
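A hedged sketch of that process manager as a small state machine; the states and event names are invented for illustration, and a real saga would also be persisted between events:

```python
class OrderFulfillment:
    """Tracks where an order is in the fulfillment flow.
    Unknown (state, event) pairs leave the state unchanged."""

    def __init__(self):
        self.state = "New"

    def on(self, event_type):
        transitions = {
            ("New", "OrderWasMade"): "CheckingInventory",
            ("CheckingInventory", "InventoryReserved"): "ReadyToShip",
            ("CheckingInventory", "InventoryMissing"): "AwaitingInventory",
            ("AwaitingInventory", "InventoryReceived"): "CheckingInventory",
        }
        self.state = transitions.get((self.state, event_type), self.state)
```

"Parking" at the supplier delivery is just the AwaitingInventory state: the saga sits there until an InventoryReceived event moves it back into checking.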
Recommended reading:
http://udidahan.com/2010/08/31/race-conditions-dont-exist/
http://udidahan.com/2009/04/20/saga-persistence-and-event-driven-architectures/
http://joshkodroff.com/blog/2015/08/21/an-elegant-abandoned-cart-email-using-nservicebus/
If you need the stock to be immediately consistent at all times, a common way of handling this in event-sourced systems (it also works in non-event-based systems; it's orthogonal, really) is to rely on optimistic locking at the event store level.
Events basically carry a revision number at which they expect the stream of events to be in order to take effect. Once the event hits the persistent store, its revision number is checked against the real stream number, and if they don't match, a conflict exception is raised and the transaction is aborted.
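An in-memory sketch of that check, assuming a toy store where a stream's revision is simply its current length (real event stores expose an equivalent expected-version parameter):

```python
class ConflictError(Exception):
    """Raised when the stream moved on since the writer last read it."""

class InMemoryEventStore:
    def __init__(self):
        self._streams = {}  # stream_id -> list of events

    def append(self, stream_id, expected_revision, events):
        stream = self._streams.setdefault(stream_id, [])
        if len(stream) != expected_revision:
            raise ConflictError(
                f"expected revision {expected_revision}, was {len(stream)}")
        stream.extend(events)
        return len(stream)  # the new revision
```

A command handler that loses the race gets the ConflictError, reloads the stream, and either retries or rejects the command (e.g. the stock really is gone).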
Now as #MikeSW pointed out, depending on your business requirements, stock checking can be an out-of-band process that handles the problem in an eventually consistent way. Eventually can range from milliseconds if another part of the process takes over immediately, to hours if an email is sent with human action needing to be taken.
In other words, if your domain requires it, you can choose to trade this sequence of events
(OrderAbortedOutOfStock)
for
(OrderMade, <-- Some amount of time --> OrderAbortedOutOfStock)
which amounts to the same aggregate state in the end.
I am wondering how granular should a domain event be?
For example I have something simple, like changing the firstName, the secondName and the email address on a profile page. Should I have 3 different domain events or just a single one?
With coarse-grained domain events, when I add a new feature I have to create a new version of the event, so I have to add a new event type or store event versions in the event storage. With fine-grained domain events I don't have these problems, but I end up with many small classes. What do you think - what is the best practice in this case?
What's the problem with many classes? Really, why are so many devs afraid of having too many classes? You define as many classes as needed.
A domain event signals that the domain changed in a certain way. It should contain all the relevant information and it should be taken into consideration the fact that an event is also a DTO. You want clear events but it's up to the developer to decide how granular or generic an event is.
Size is not a concern; however, if your event 'weighs' 1 MB, maybe you have a problem. And the number of classes is not a domain event design criterion.
I can agree with MikeSW's answer, but applying SRP during the modeling, you can end up with small classes, which is not a problem at all.
According to Greg Young, domain events should always express something that the user does from a business perspective. For example, if the user has two different reasons to change her phone number, and this distinction is important from a business perspective, then instead of a single PhoneNumberChanged we should create two event types, PhoneNumberMigrated and PhoneNumberCorrected, which store technically the same data. This clearly violates DRY, but that is not a problem: SRP can override DRY in these cases, just as it does when aggregates and their properties (most frequently the aggregate id) are shared between multiple bounded contexts.
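A small sketch of that pair of events, assuming simple dataclasses: the payloads are identical, and a projection may even treat them identically, but the type name preserves the business reason for later analysis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhoneNumberMigrated:
    """The user moved to a new number."""
    user_id: str
    phone_number: str

@dataclass(frozen=True)
class PhoneNumberCorrected:
    """The stored number was simply wrong."""
    user_id: str
    phone_number: str

def apply_phone_event(profile: dict, event) -> dict:
    # This read model doesn't care about the reason; auditing or
    # analytics projections would, by dispatching on the event type.
    return {**profile, "phone_number": event.phone_number}
```

The duplication between the two classes is the deliberate DRY violation discussed above.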
In my example:
I have something simple, like changing the firstName, the
secondName and the email address on a profile page.
We should ask the following: why would the user want that, and does it have any importance from the perspective of our business?
her account was stolen (security, not business issue)
she moved to another email address
she got married
she hated her previous name
she gave the account to somebody else on purpose
etc...
Well, if we run dating agency services, then logging divorces probably has business importance. So if this information is really important, then we should put it into the domain model and create a UserProbablyDivorced event. If none of these reasons are important, then we can simply say that she just wanted to change her profile page - we don't care why - so I think in that case either a UserProfileChanged or a UserSecondNameChanged event is acceptable.
Domain events can be in a 1:1 or a 1:n relation with commands. In a 1:1 relation, their name is usually the same as the command's, but in the past tense - for example, ChangeUserProfile -> UserProfileChanged. In a 1:n relation, we usually split up the behavior which the command represents into a series of smaller domain events.
So to summarize: it is the domain developer's decision how granular the domain events should be, but it should clearly be influenced by the business perspective and not just by a data-schema modeling perspective. I think this is evident, because we are modeling a business, not just a data structure.
Suppose we have a situation when we need to implement some domain rules that requires examination of object history (event store). For example we have an Order object with CurrentStatus property, and we need to examine Order.CurrentStatus changes history.
Most likely you will answer that I need to move this knowledge into the domain and introduce an Order.StatusHistory property that contains a collection of status records, and that I should not query the event store. And I will agree with you.
What I question is the need of Event Store.
We write to the event store events that have business meaning (domain value); we do not record UserMovedMouse events (in most cases). And as with the OrderStatusChanged event, there is a high chance that most of the events from the EventStore will be needed at some point for domain logic, and we end up with a domain object that has an EventHistory property containing the collection of events.
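The Order example above can be sketched to show what this looks like in code (the event shape is invented): CurrentStatus is just a fold over the status events, so the "history property" is derived from the stream rather than maintained by hand.

```python
class Order:
    """Rebuilds its status history from the event stream."""

    def __init__(self, events):
        self.status_history = []
        for event in events:
            if event["type"] == "OrderStatusChanged":
                self.status_history.append(event["status"])

    @property
    def current_status(self):
        return self.status_history[-1] if self.status_history else "New"
```

Whether this derived history lives on the domain object or only in a projection is exactly the design question being raised here.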
I can see value in a separate event store for patterns such as CQRS, where you have a single write-only event store and multiple read-only query stores, which gives you some scalability. However, the need to introduce such a thing in code is in question for me too. All decent databases support single-write-server, multiple-read-server scalability (master-slave replication). Why should I introduce such a thing at the source code level? Why not forget about web services and message buses, and just write your own wrappers around sockets?
I have great respect for "old school" DDD as it was described by Eric Evans, and I see some fresh and good ideas in the new wave of DDD+CQRS+Event Sourcing patterns. However, the main idea of CQRS is under big question for me. Am I missing something?
In short: if event sourcing is not needed (for its added benefits or as workarounds for some quirks), then you definitely shouldn't bring it into your system just for the sake of it.
ES is just one of many ways to augment CQRS architectural style within a bounded context. It is not a requirement.