Contrived example, but say I have an Order aggregate with OrderLine entities. The order line contains cost, quantity and the name of the product. The aggregate is persisted as events to be event sourced. Typically, a single product could have millions of orders.
I now have to update the name of the product. Updating the product aggregate is simple enough but I now have millions of orders with the old product name.
This is a contrived example, but assume I need to update this copied state in my orders. What's the best approach here? Applying the state change to such a huge volume of records seems like an incredibly expensive operation.
In general, when event sourcing, events are immutable: they represent things that have definitively happened and you can't change the past. If somebody ordered a product "Foo", what is the benefit of later saying they ordered "Bar"?
There's a minor exception around schema migration, which would likely be an offline process manipulating the event store, after ensuring that anything reading the events can use either the old or the new encoding. This doesn't change the semantic meaning of the events.
If you really needed to correct every aggregate, new events can be recorded (e.g. removing the old product from an order and adding the new product). This would tend to be done order by order: remember that aggregates tend to define consistency boundaries, so this would be eventually consistent (you could perform arbitrarily many aggregate updates in parallel: event sourced systems tend to be trivially massively parallelizable for that sort of thing).
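To make the compensating-event approach concrete, here is a minimal C# sketch. Everything in it (the IEventStore interface, the ProductNameCorrected event, the stream naming) is a hypothetical illustration, not a prescribed API:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical minimal event-store abstraction for the sketch.
public interface IEventStore
{
    Task<long> GetVersionAsync(string streamId);
    Task AppendAsync(string streamId, long expectedVersion, object @event);
}

public record ProductNameCorrected(Guid OrderId, Guid ProductId, string NewName);

public class ProductNameCorrector
{
    private readonly IEventStore _store;
    public ProductNameCorrector(IEventStore store) => _store = store;

    public Task CorrectAsync(IReadOnlyList<Guid> orderIds, Guid productId, string newName) =>
        // Each order is its own consistency boundary, so corrections can run in
        // parallel; each append is a separate transaction and the whole run is
        // eventually consistent. (A real implementation would retry on a conflict.)
        Parallel.ForEachAsync(orderIds, async (orderId, ct) =>
        {
            var streamId = $"order-{orderId}";
            var version = await _store.GetVersionAsync(streamId);
            await _store.AppendAsync(streamId, version,
                new ProductNameCorrected(orderId, productId, newName));
        });
}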
I'm aware of the general rule that only a single aggregate should be modified per transaction, mostly for concurrency and transactional consistency reasons.
I have a use case where I want to create multiple aggregates in a single transaction: a RestaurantManager, a Restaurant, and a Menu. They seem like a single aggregate because their life-cycles begin and end together: it doesn't make sense within the domain to create a RestaurantManager without a Restaurant, or vice versa; the same goes for a Restaurant and a Menu. Further, if the Restaurant or the RestaurantManager is deleted (unregistered), they should all be deleted together.
However, I've split them into separate aggregates because, once created, they are updated separately, maintain their own invariants, and I don't want to load them all into memory just to update one property on the Restaurant, for example.
The only thing that ties them together is their life-cycle.
My question is whether this represents a case where it is okay to go against the "rule" that each transaction should only operate on a single aggregate.
I'd also like to know if I should enforce their shared life-cycle in the domain model by having each aggregate root hold the identifier of the aggregate root it depends on, i.e. by having Restaurant require a MenuId as a constructor parameter, and likewise for Menu and RestaurantId, so that neither can be created without the other. However, this still wouldn't enforce that they should be saved together by the application service anyway, since it could create them all in memory, then only save the Menu, for example.
Your requirement is a pretty normal use case in DDD, IMHO. There are always multiple aggregates working in tandem to support the application, and they are interlinked in their lifecycles. But the modeling concepts still stand true. Let me attempt to explain what your model would look like with the help of a few DDD rules:
Aggregates are transaction boundaries
Aggregates ensure that no business invariants are broken at any point. This means that if you have multiple aggregates strung together as part of one transaction, you have to load all of them into memory for the validation.
This is especially a problem when your application is data-rich and stores data in a partitioned, distributed database cluster (think MongoDB or Elasticsearch). You would have the problem of loading data from potentially different partitions as part of a single transaction.
Aggregates are loaded in their entirety
Aggregates and their associated data objects are loaded in their entirety into memory. This means that objects unnecessary for the transaction (say, the restaurant's schedule for the upcoming month) may be loaded into memory. By itself, this is not a problem. But when multiple aggregates come together, the amount of data loaded into memory needs to be considered.
Aggregates refer to each other by their unique identifiers
This one is straightforward and means that each aggregate stores its referenced aggregates by their identifiers instead of enclosing the other aggregate's data within it.
State changes across Aggregates are handled through Domain Events
In cases where you want a state change in one aggregate to have side-effects on other aggregates, you publish a domain event, and a subscriber handles the change on other aggregates in the background. This is how you would want to handle your requirement for cascade deletes.
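As a rough illustration of that last rule, here is what a cascade-delete subscriber might look like in C#. All the type and member names are assumptions for the sake of the example:

using System;
using System.Threading.Tasks;

public record RestaurantDeleted(Guid RestaurantId, Guid MenuId, Guid ManagerId);

public interface IMenuRepository { Task DeleteAsync(Guid menuId); }
public interface IRestaurantManagerRepository { Task DeleteAsync(Guid managerId); }

// Subscriber that reacts to the domain event in the background; the cascade
// is therefore eventually consistent rather than part of the original transaction.
public class CascadeDeletionHandler
{
    private readonly IMenuRepository _menus;
    private readonly IRestaurantManagerRepository _managers;

    public CascadeDeletionHandler(IMenuRepository menus, IRestaurantManagerRepository managers)
        => (_menus, _managers) = (menus, managers);

    public async Task HandleAsync(RestaurantDeleted e)
    {
        await _menus.DeleteAsync(e.MenuId);
        await _managers.DeleteAsync(e.ManagerId);
    }
}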
By following these rules, you are essentially zooming in on a single aggregate at a time and ensuring that the complexity remains low. When you string together multiple aggregates, though it may be clear and understandable on day 1, the application eventually tends towards becoming a big ball of mud, as dependencies and invariants start crisscrossing each other.
"only a single aggregate should be modified per transaction"
Contention at creation doesn't matter as much. You can create many ARs in a single transaction without problems, because the only other operation that could conflict is another duplicate creation process.
Another reason to avoid involving many ARs in a single transaction is coupling between modules, but you can always keep things loosely coupled using synchronously dispatched domain events.
As for the deletion, it's probably less problematic to make it eventually consistent. Does it really matter that Restaurant is closed while RestaurantManager remains registered for a short period of time?
The fact you are asking this question tells me your system is not distributed? If your system runs on a single DB server and is used by a few people, eventual consistency may make things more complex for scalability you don't actually need.
Start simple and refactor as needed, but crossing AR boundaries is not something that should be done consistently or else your boundaries are clearly wrong.
Furthermore, if you want to communicate that a RestaurantManager can't be spawned from nowhere and associated with an invalid RestaurantId by mistake, you may want to look at your ubiquitous language for guidance.
e.g.
"A RestaurantManager is registered for a given Restaurant": not sure it truly aligns with your UL, but it's just for the sake of the example.
RestaurantManager manager = restaurant.registerManager(...);
This obviously increases coupling and could affect performance, but it aligns well with the UL and makes it more difficult to misuse the model. Also note that with a single DB, you could enforce referential integrity, which takes care of these uninteresting referential constraints.
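For the sake of illustration, a minimal C# sketch of that factory-method idea might look like this (RegisterManager and the constructor visibility are assumptions, not a prescribed design):

using System;

public class Restaurant
{
    public Guid Id { get; } = Guid.NewGuid();

    // The only way to obtain a RestaurantManager is through a Restaurant,
    // so a manager can never be associated with an invalid RestaurantId.
    public RestaurantManager RegisterManager(string name)
        => new RestaurantManager(Guid.NewGuid(), Id, name);
}

public class RestaurantManager
{
    public Guid Id { get; }
    public Guid RestaurantId { get; }
    public string Name { get; }

    // internal: only the domain assembly (i.e. Restaurant) can construct one.
    internal RestaurantManager(Guid id, Guid restaurantId, string name)
        => (Id, RestaurantId, Name) = (id, restaurantId, name);
}

Usage then reads naturally in the UL: var manager = restaurant.RegisterManager("Alice");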
As pointed out by @plalx, contention doesn't matter as much when creating aggregates in terms of transactions, since they don't yet exist and so can't be involved in contention.
As for enforcing the mutual life cycle of multiple aggregates in the domain, I've come to think that this is the responsibility of the application layer (i.e. an application service, or use case).
Maybe my thinking is closer to Clean or Hexagonal architecture, but I don't think it's possible or even sensible to try and push every single business rule down into the "domain model". The point of the domain model for me is to partition the problem domain into small chunks (aggregates), which encapsulate common business data/operations that change together, but it's the application layer's responsibility to use these aggregates properly in order to achieve the business' end goal (which is the application as a whole), including mediating operations between the aggregates and controlling their life cycles.
As such, I think this stuff belongs in an application service. That being said, frequently updating multiple aggregates in each use case could be a sign of incorrect domain boundaries.
So I'm trying to figure out the structure behind general use cases of a CQRS+ES architecture and one of the problems I'm having is how aggregates are represented in the event store. If we divide the events into streams, what exactly would a stream represent? In the context of a hypothetical inventory management system that tracks a collection of items, each with an ID, product code, and location, I'm having trouble visualizing the layout of the system.
From what I could gather on the internet, the approach could be described succinctly as "one stream per aggregate." So I would have an Inventory aggregate: a single stream with ItemAdded, ItemPulled, ItemRestocked, etc. events, each with serialized data containing the item ID, quantity changed, location, etc. The aggregate root would contain a collection of InventoryItem objects (each with their respective quantity, product code, location, etc.). That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this: when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems like it would be very inefficient with a large number of items.
Another method would be to have one stream per InventoryItem, tracking all events pertaining to only that item. Each stream is named with the ID of that item. That seems like the simpler route, but now how would you enforce domain rules like ensuring product codes are unique, or that you're not putting multiple items into the same location? It seems like you would now have to bring in a Read model, but isn't the whole point to keep commands and queries separate? It just feels wrong.
So my question is 'which is correct?' Partially both? Neither? Like most things, the more I learn, the more I learn that I don't know...
In a typical event store, each event stream is an isolated transaction boundary. Any time you change the model you lock the stream, append new events, and release the lock. (In designs that use optimistic concurrency, the boundaries are the same, but the "locking" mechanism is slightly different).
You will almost certainly want to ensure that any aggregate is enclosed within a single stream -- sharing an aggregate between two streams is analogous to sharing an aggregate across two databases.
A single stream can be dedicated to a single aggregate, to a collection of aggregates, or even to the entire model. Aggregates that are part of the same stream can be changed in the same transaction -- huzzah! -- at the cost of some contention and a bit of extra work to do when loading an aggregate from the stream.
The most commonly discussed design assigns each logical stream to a single aggregate.
That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this: when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems like it would be very inefficient with a large number of items.
There are a couple of possibilities; in some models, especially those with a strong temporal component, it makes sense to model some "entities" as a time series of aggregates. For example, in a scheduling system, rather than Bob's Calendar you might instead have Bob's March Calendar, Bob's April Calendar, and so on. Chopping the life cycle into smaller installments can keep the event count in check.
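As a tiny sketch of that idea, the stream identity simply includes the installment (names are illustrative only):

using System;

public static class StreamNaming
{
    // One stream per owner per calendar month keeps any single stream short.
    public static string CalendarStreamId(string owner, DateOnly month)
        => $"calendar-{owner}-{month:yyyy-MM}";
}

// StreamNaming.CalendarStreamId("bob", new DateOnly(2024, 3, 1)) => "calendar-bob-2024-03"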
Another possibility is snapshots, with an additional trick to it: each snapshot is annotated with metadata that describes where in the stream the snapshot was made, and you simply read the stream forward from that point.
This, of course, depends on having an implementation of an event stream that supports random access, or an implementation of a stream that allows you to read last-in, first-out.
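Here is a hedged C# sketch of that snapshot-plus-replay loading scheme; the store interfaces are hypothetical stand-ins, not a real event-store API:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Each snapshot remembers the stream version at which it was taken.
public record Snapshot<TState>(TState State, long Version);

public interface ISnapshotStore<TState> { Task<Snapshot<TState>?> TryLoadAsync(string streamId); }
public interface IEventReader { IAsyncEnumerable<object> ReadForwardAsync(string streamId, long fromVersion); }

public class AggregateLoader<TState> where TState : new()
{
    private readonly ISnapshotStore<TState> _snapshots;
    private readonly IEventReader _events;

    public AggregateLoader(ISnapshotStore<TState> snapshots, IEventReader events)
        => (_snapshots, _events) = (snapshots, events);

    public async Task<TState> LoadAsync(string streamId, Func<TState, object, TState> apply)
    {
        var snapshot = await _snapshots.TryLoadAsync(streamId);
        var state = snapshot is null ? new TState() : snapshot.State;
        var from = snapshot is null ? 0 : snapshot.Version + 1;

        // Replay only the events recorded after the snapshot was taken.
        await foreach (var e in _events.ReadForwardAsync(streamId, from))
            state = apply(state, e);

        return state;
    }
}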
Keep in mind that both of these are really performance optimizations, and the first rule of optimization is... don't.
So I'm trying to figure out the structure behind general use cases of a CQRS+ES architecture and one of the problems I'm having is how aggregates are represented in the event store
The event store in a DDD project is designed around event-sourced Aggregates:
it provides the efficient loading of all events previously emitted by an Aggregate root instance (having a given, specified ID)
those events must be retrieved in the order they were emitted
it must not permit concurrent appends for the same Aggregate root instance
all events emitted as a result of a single command must be appended atomically: they should all succeed or all fail
The 4th point could be implemented using transactions, but this is not a necessity. In fact, for scalability reasons, if you can, you should choose a persistence mechanism that provides atomicity without the use of transactions. For example, you could store the events in a MongoDB document, as MongoDB guarantees document-level atomicity.
The 3rd point can be implemented using optimistic locking: a version column with a unique index on (version x AggregateType x AggregateId).
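A minimal sketch of that optimistic-locking append, with a hypothetical store interface (the unique index lives in whatever persistence backs it):

using System;
using System.Threading.Tasks;

// A duplicate (AggregateType, AggregateId, version) insert violates the unique
// index and surfaces here as a concurrency conflict.
public class ConcurrencyException : Exception { }

public interface IAppendOnlyStore
{
    // Throws ConcurrencyException when (streamId, version) already exists.
    Task AppendAsync(string streamId, long version, byte[] payload);
}

public class AggregateWriter
{
    private readonly IAppendOnlyStore _store;
    public AggregateWriter(IAppendOnlyStore store) => _store = store;

    public async Task<bool> TrySaveAsync(string streamId, long loadedVersion, byte[] payload)
    {
        try
        {
            // Expected next version = the version observed when the aggregate was loaded + 1.
            await _store.AppendAsync(streamId, loadedVersion + 1, payload);
            return true;
        }
        catch (ConcurrencyException)
        {
            // Another command won the race: reload the aggregate,
            // re-check invariants, and retry the command.
            return false;
        }
    }
}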
At the same time, there is a DDD rule regarding Aggregates: don't mutate more than one Aggregate per transaction. This rule helps you A LOT to design a scalable system. Break it only if you don't need one.
So, the solution to all these requirements is something called an Event stream, which contains all the events previously emitted by an Aggregate instance.
So I would have an Inventory aggregate
DDD takes precedence over the Event store. So, if you have business rules that force you to decide that you must have a (big) Inventory aggregate, then yes, it would load ALL the previous events it generated. The InventoryItem would then be a nested entity that cannot emit events by itself.
That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this: when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems like it would be very inefficient with a large number of items.
Yes, indeed. The simplest design would be a single Aggregate with a single instance; then consistency would be the strongest possible. But this is not efficient, so you need to think more carefully about the real business requirements.
Another method would be to have one stream per InventoryItem, tracking all events pertaining to only that item. Each stream is named with the ID of that item. That seems like the simpler route, but now how would you enforce domain rules like ensuring product codes are unique, or that you're not putting multiple items into the same location?
There is another possibility. You should model the assigning of product codes as a Business Process. For this you could use a Saga/Process manager that would orchestrate the entire process. This Saga could use a collection with a unique index added to the product code column in order to ensure that only one product uses a given product code.
You could design the Saga to permit the allocation of an already-taken code to a product and to compensate later, or to reject the invalid allocation in the first place.
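A rough sketch of such a Saga in C#, with all names and interfaces being assumptions for illustration:

using System;
using System.Threading.Tasks;

public record ProductCodeRequested(Guid ProductId, string ProductCode);
public record AssignProductCode(Guid ProductId, string ProductCode);
public record RejectProductCode(Guid ProductId, string ProductCode);

public interface ICodeReservations
{
    // Backed by a collection with a unique index on the code column:
    // returns false when the code is already taken.
    Task<bool> TryReserveAsync(string productCode, Guid productId);
}

public interface ICommandBus { Task SendAsync(object command); }

public class ProductCodeAllocationSaga
{
    private readonly ICodeReservations _reservations;
    private readonly ICommandBus _bus;

    public ProductCodeAllocationSaga(ICodeReservations reservations, ICommandBus bus)
        => (_reservations, _bus) = (reservations, bus);

    public async Task HandleAsync(ProductCodeRequested e)
    {
        if (await _reservations.TryReserveAsync(e.ProductCode, e.ProductId))
            await _bus.SendAsync(new AssignProductCode(e.ProductId, e.ProductCode));
        else
            // Reject up front instead of allocating a duplicate and compensating later.
            await _bus.SendAsync(new RejectProductCode(e.ProductId, e.ProductCode));
    }
}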
It seems like you would now have to bring in a Read model, but isn't the whole point to keep commands and queries separate? It just feels wrong.
The Saga does indeed use private state maintained from the domain events in an eventually consistent manner, just like a Read model, but this does not feel wrong to me. It may use whatever it needs in order to (eventually) bring the system as a whole to a consistent state. It complements the Aggregates, whose purpose is to prevent the building blocks of the system from getting into an invalid state.
We have an aggregate root in our system and it has child entities in a collection. The problem is that the container needs to be updated very frequently, on a per-transaction basis, while the child entities hardly ever change; they are more configuration-like in nature.
My first reflex was to separate them into two different aggregate roots because of our application requirements. But I was reminded of the cascade-delete rule: if we delete the one, then the delete should cascade, so their lifetimes are linked.
We stumbled over this problem when we discovered that we have a caching problem. Changes to the child entities (configuration) were not being reflected in the system at runtime because the parent was unaware of the changes (we had them as one aggregate root, but someone had created a repository for its children).
The main driver for aggregate boundaries is the invariants of your domain - or in other terms, aggregate boundaries should be consistency boundaries. Things that must change together atomically must be in the same aggregate.
The cascading delete is (with regard to aggregate boundaries) a nice-to-have rather than a rule. You can always enforce the fact that a Parent still lives by requiring one at the place where you load Child entities. With this design, you can make Parent and Child different aggregates while still enforcing the rule that no "free floating" Child aggregates can be requested; see the sketch after the note below. And deleting Child aggregates in response to a deleted Parent is easy if you have domain events in place.
Note: All this is under the assumption that your domain invariants allow separating the aggregates in the first place.
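As a sketch of the "require a live Parent" rule mentioned above (types and names are illustrative only):

using System;
using System.Collections.Generic;

public class Parent { public Guid Id { get; init; } }
public class Child { public Guid ParentId { get; init; } }

public interface IChildRepository
{
    // The signature itself enforces the rule: you must hold a live Parent
    // to get at its children, so no "free floating" Child can be requested.
    IReadOnlyList<Child> GetChildrenOf(Parent parent);
}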
This might be better in a discussion format, rather than a Q&A format. I'd recommend trying the audience at DomainDrivenDesign or DDDCQRS
Are you sure that you have a business requirement to delete data in your domain model? That's really unusual -- in most domain models I've seen, an aggregate will reach an "end of life" state (example: AccountClosed), but doesn't actually get removed from the system.
A common trap in aggregate design is to think about the structure of the entities. "A has a B" does not necessarily mean that they are part of the same aggregate; the key idea is "A needs to keep B and C consistent". You can think about it like a graph; state B and state C are nodes in the graph, the consistency rules are the edges. If you can't traverse the graph from B to C, then they don't need to be part of the same aggregate, and probably shouldn't be.
My instinct is that caching should be the right answer here. If you are processing millions of transactions per day, and the collection only changes once per month, then simply using a cached value of the collection should produce the right answer most of the time.
In this, I'm influenced by Udi Dahan's essay Race Conditions Don't Exist; by coupling this configuration collection with the rest of the aggregate, you are essentially asserting that changes to the configuration (which are rare) are understood by the business to be happening precisely between two other changes to the aggregate. 3M transactions per day averages 1 per 30ms; are you really scheduling your configuration changes that precisely?
The usual pattern here would be that the consistency rule is removed from the domain model; instead, you monitor for changes that introduce an inconsistency, and mitigate them. That depends upon there being a reasonable way to detect the errors, an efficient way to mitigate them, and a mechanic for keeping the rate under control.
The latter of these would normally be done by having the clients/the application check their local copy of the collection, and making sure the command sent is consistent with that before dispatching the command to the domain model. (Possible questions for your domain experts: how quickly do the configuration changes need to be applied? Do the configuration changes happen when the aggregate is changing frequently or when it is quiet?)
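A toy sketch of that client-side check, purely illustrative (the version-comparison scheme is an assumption):

using System;

// The command records which configuration version the client built it against.
public record UpdateAggregate(Guid AggregateId, int ConfigurationVersion);

public class ClientCommandGate
{
    private readonly int _localConfigurationVersion;

    public ClientCommandGate(int localConfigurationVersion)
        => _localConfigurationVersion = localConfigurationVersion;

    // Check the command against the local copy of the configuration before
    // dispatching; stale commands get rebuilt against fresh configuration instead.
    public bool ShouldDispatch(UpdateAggregate command)
        => command.ConfigurationVersion == _localConfigurationVersion;
}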
Another possibility might be to change your persistence strategy; if the collection doesn't change often, then there are not a lot of change events related to it. So maybe instead of persisting the aggregate, you look into persisting its history - in other words, using event sourcing here. Maybe if this aggregate lived in a microservice, you could limit the risk of the change? Hard to say; at a million transactions per day, this aggregate sounds pretty important.
Take the domain proposed in Effective Aggregate Design: a Product which has multiple Releases. In this article, Vaughn arrives at the conclusion that Product and Release should each be their own aggregate root.
Now suppose that we add a feature
As a release manager I would like to be able to sort releases so that I can create timelines for rolling out larger epics to our users
I'm not a PM with a specific need but it seems reasonable that they would want the ability to sort releases in the UI.
I'm not exactly sure how this should work. It's natural for each Release to have an order property, but re-ordering would involve changing multiple aggregates in the same transaction. On the other hand, if that information is stored in the Product aggregate, you have to have a method like product.setReleaseOrder(ReleaseId[]), which seems like a weird bit of data to store in a completely different place than the Releases. Worse, adding a release would again involve modifying two different aggregates! What else can we do? ProductReleaseSortOrder could be its own aggregate, but that sounds downright absurd!
So what to do? At the moment I'm still leaning toward the let-product-manage-it option but what's correct here?
I have found that it is in fact best to create a new aggregate root (e.g., ProductReleaseSorting, as suggested) for each individual sorting and/or ordering purpose.
This is because releaseOrder is clearly not actually a property of the Product, i.e., something that has a meaning for a product on its own. Rather, it is a property of a "view" on a collection of products, and this view should be modeled on its own.
The reason I tend to introduce a new aggregate root for each individual view on a collection of items becomes clear if you think of what happens when you introduce additional orderings in the future, say a "marketing order", or when multiple product managers want to keep their own ordering. Here one easily sees that "marketing order" and "release order" are two different concepts that should be treated independently, and if multiple people want to order the same items using different orderings, you'll need individual "per person views". Furthermore, there could be multiple order criteria to take into account when sorting (an example, from a different context, would be fastest route vs. shortest route), all of which depends on the view you have on the collection, and not on individual properties of its items.
If you now handle the Product Manager's sorting in a ProductReleaseSorting aggregate, you
have a single source of truth for the ordering (the AR),
the ProductReleaseSorting AR can enforce constraints such as no two releases having the same order number, and you
don't face the issue of having to update multiple ARs in a single transaction when changing the order.
Note that your ProductReleaseSorting aggregate most probably has a unique identity ("Singleton") in your domain, i.e., all product managers share the same sorting. If however all team members would like to have their own ProductReleaseSorting, it's trivial to support this by giving the ProductReleaseSorting a corresponding ID. Similarly, a more generic ProductSorting can be fetched by a per-team ID (marketing vs. product management) from the repository. All of this is easy with a new, separate aggregate root for ordering purposes, but hard if you add properties to the underlying items/entities.
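A bare-bones C# sketch of such an aggregate, with invariant enforcement inside it (names and the specific invariant are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

public class ProductReleaseSorting
{
    // A singleton identity such as "release-order", or a per-team / per-person ID.
    public string Id { get; }

    private readonly List<Guid> _orderedReleaseIds = new();
    public IReadOnlyList<Guid> OrderedReleaseIds => _orderedReleaseIds;

    public ProductReleaseSorting(string id) => Id = id;

    // The AR enforces its own invariant: no release appears twice in the ordering.
    public void Reorder(IEnumerable<Guid> releaseIds)
    {
        var ids = releaseIds.ToList();
        if (ids.Distinct().Count() != ids.Count)
            throw new InvalidOperationException("A release cannot appear twice in the ordering.");
        _orderedReleaseIds.Clear();
        _orderedReleaseIds.AddRange(ids);
    }
}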
So, Product and Release are both ARs. Release has an association to Product via AggregateId. You want to get a list of all releases for a given product, ordered by something?
Since ordering is an attribute of an aggregate, it should be set on Product; but Releases are ARs too, and you shouldn't access the Release repository from within the Product AR (every AR should have its own repository).
I would simply make a ReleaseQueryService that takes productId and order parameter and call ReleaseRepository.loadOrderedReleasesForProduct(productId, order).
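For illustration, such a query service could be as thin as this (all interfaces are assumptions):

using System;
using System.Collections.Generic;

public class Release { /* Release AR, elided */ }

public interface IReleaseRepository
{
    IReadOnlyList<Release> LoadOrderedReleasesForProduct(Guid productId, string order);
}

public class ReleaseQueryService
{
    private readonly IReleaseRepository _releases;
    public ReleaseQueryService(IReleaseRepository releases) => _releases = releases;

    // A read-side service: it never touches the Product AR, it only queries.
    public IReadOnlyList<Release> GetOrderedReleases(Guid productId, string order)
        => _releases.LoadOrderedReleasesForProduct(productId, order);
}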
I would also think about separating contexts; maybe the model for release presentation should be in another context? For example, an additional AR ProductReleases that would be used only for querying.
I have a couple of questions regarding references between two aggregate roots in a DDD model. Refer to the typical Customer/Order model diagrammed below.
First, should references between the actual object implementations of aggregates always be done through ID values and not object references? For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct? I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would be violating the boundaries between aggregate roots/repositories.
Second, where and how should a cascade on delete relationship be enforced for two aggregate roots? For example, say I want all the associated orders to be deleted when a customer is deleted. The ICustomerRepository.DeleteCustomer() method should not be referencing the IOrderRepository, should it? That seems like it would be breaking the boundaries between the aggregates/repositories. Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
First, should references between aggregates always be done through ID values and not actual object references?
Not really - though some would make that change for performance reasons.
For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct?
Generally, you'd model one side of the relationship (e.g., Customer.Orders or Order.Customer) for traversal. The other can be fetched from the appropriate Repository (e.g., CustomerRepository.GetCustomerFor(Order) or OrderRepository.GetOrdersFor(Customer)).
Wouldn't that mean that the OrderRepository would have to know something about how to create a Customer? Wouldn't that be beyond what OrderRepository should be responsible for...
The OrderRepository would know how to use an ICustomerRepository.FindById(int). You can inject the ICustomerRepository. Some may be uncomfortable with that, and choose to put it into a service layer - but I think that's overkill. There's no particular reason repositories can't know about and use each other.
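A minimal sketch of that injection, with simplified interfaces that are assumptions rather than a prescribed API:

using System.Threading.Tasks;

public class Customer { public int Id { get; init; } }
public class Order { public int CustomerId { get; init; } }

public interface ICustomerRepository { Task<Customer> FindById(int id); }

public class OrderRepository
{
    private readonly ICustomerRepository _customers;

    // The ICustomerRepository is injected; OrderRepository never constructs Customers itself.
    public OrderRepository(ICustomerRepository customers) => _customers = customers;

    public Task<Customer> GetCustomerFor(Order order) => _customers.FindById(order.CustomerId);
}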
I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would be violating the boundaries between aggregate roots/repositories.
Aggregate roots are allowed to hold references to other aggregate roots. In fact, anything is allowed to hold a reference to an aggregate root. An aggregate root cannot hold a reference to a non-aggregate root entity that doesn't belong to it, though.
E.g., Customer cannot hold a reference to OrderLines - since OrderLines properly belongs as an entity on the Order aggregate root.
Second, where and how should a cascade on delete relationship be enforced for two aggregate roots?
If (and I stress if, because it's a peculiar requirement) that's actually a use case, it's an indication that Customer should be your sole aggregate root. In most real-world systems, however, we wouldn't actually delete a Customer that has associated Orders - we may deactivate them, move their Orders to a merged Customer, etc. - but not out and out delete the Orders.
That being said, while I don't think it's pure-DDD, most folks will allow some leniency in following a unit of work pattern where you delete the Orders and then the Customer (which would fail if Orders still existed). You could even have the CustomerRepository do the work, if you like (though I'd prefer to make it more explicit myself). It's also acceptable to allow the orphaned Orders to be cleaned up later (or not). The use case makes all the difference here.
Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
I probably wouldn't go the service route for something so intimately tied to the repository. As for how to make sure a service is used... you just don't put a public Delete on the CustomerRepository. Or you throw an error if deleting a Customer would leave orphaned Orders.
Another option would be to have a ValueObject describing the association between the Order and the Customer ARs, a VO which would contain the CustomerId and any additional information you might need: name, address, etc. (something like ClientInfo or CustomerData).
This has several advantages:
Your ARs are decoupled, and can now be partitioned, stored as event streams, etc.
In the Order AR you usually need to keep the information you had about the customer at the time of order creation, and not reflect any future changes made to the customer.
In almost all cases the information in the value object will be enough to perform the read operations (display customer info with the order).
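A quick sketch of the value-object approach (names like CustomerData are just illustrative):

using System;

// The VO is a snapshot of the customer as of order creation, not a live reference.
public record CustomerData(Guid CustomerId, string Name, string Address);

public class Order
{
    public Guid Id { get; }
    public CustomerData Customer { get; }

    public Order(Guid id, CustomerData customer) => (Id, Customer) = (id, customer);
}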
To handle the deletion/deactivation of a Customer you have the freedom to choose any behavior you like. You can use DomainEvents and publish a CustomerDeleted event, for which you can have a handler that moves the Orders to an archive, or deletes them, or whatever you need. You can also perform more than one operation on that event.
If, for whatever reason, DomainEvents are not your choice, you can implement the Delete operation as a service operation rather than a repository operation, and use a UoW to perform the operations on both ARs.
I have seen a lot of problems like this when trying to do DDD, and I think the source of the problem is that developers/modelers have a tendency to think in DB terms. You (we :)) have a natural tendency to remove redundancy and normalize the domain model. Once you get over it, allow your model to evolve, and involve the domain expert(s) in its evolution, you will see that it's not that complicated and is in fact quite natural.
UPDATE: a similar VO, OrderInfo, can be placed inside the Customer AR if needed, with only the needed information: order total, order item count, etc.