I have a "large" set of AggregateRoots with a property that should be unique in its context. But where do I validate this? I guess it depends on what the context is and as I see it I have two options:
Either I implement the validation within a repository service so the persistence logic can validate unique properties before saving aggregates (which would then also have to synchronize all saves of this AR type).
Or I move the "unique index" inside another aggregate as a dictionary of aggregate references and let this dictionary validate unique properties. Since I have a very large set of ARs, this approach could be problematic unless it is implemented so that the index can stay on disk as much as possible.
But is there any true winner here? Are both methods valid and safe to use? Any major drawbacks to consider? Other variants?
My thoughts:
The first method is a bit simpler, perhaps, but more limited as well. It is, for instance, more complicated to have multiple indexes for the same AR type, should that ever be needed. The other method is more localized to a single aggregate, which is more in line with how aggregates should be handled, I guess. The first method requires all aggregates of this type to be saved by the same process, since all saves have to be synchronized. The other method does not require this, but instead introduces an index aggregate that all saves have to pass through in order to validate new and updated values of the property. This method also does not check whether multiple aggregates with the same property value already exist in the database; it only ensures that the referenced aggregates have unique properties.
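To make the first option concrete, here is a minimal, in-memory sketch of a repository that validates uniqueness before saving; the User aggregate, its Email property, and all other names are invented for illustration, not taken from my actual model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative aggregate with a property that must be unique.
public class User
{
    public Guid Id { get; }
    public string Email { get; private set; }

    public User(Guid id, string email)
    {
        Id = id;
        Email = email;
    }
}

// Hypothetical repository that guards uniqueness at save time.
// The lock serializes saves of this aggregate type within one process,
// which is exactly the limitation discussed above.
public class UserRepository
{
    private readonly object _saveLock = new object();
    private readonly Dictionary<Guid, User> _store = new Dictionary<Guid, User>();

    public void Save(User user)
    {
        lock (_saveLock)
        {
            bool emailTaken = _store.Values
                .Any(u => u.Id != user.Id && u.Email == user.Email);

            if (emailTaken)
                throw new InvalidOperationException(
                    $"Email '{user.Email}' is already in use.");

            _store[user.Id] = user;
        }
    }
}
```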
The aggregate only cares for its own consistency. It doesn't really have an interest how it correlates with or relates to anything else in the system, outside of its own boundaries.
If you need to do any cross-aggregate checks, there are two options. One is to reconsider your aggregate boundaries: maybe your current aggregates are really just entities of a larger aggregate. However, that won't work if your transaction scope is what you currently have as an aggregate (although the uniqueness constraint kind of contradicts this statement).
The other option comes up with things like the infamous unique user name paradox. It is clear that the entire set of users cannot be a single aggregate, but you still need to ensure that user names are unique. The solution is to check for uniqueness in the application service, before even going to the aggregate. If your query store is fully consistent, this should never be a problem. If your write and read sides are not guaranteed to be in sync, you can still use the read side to ensure uniqueness, while accepting the possibility that the constraint is occasionally violated. If there will be no major blast and no kittens will die, you can probably accept such a situation and deal with the constraint violation when it actually happens, which might be never.
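A rough sketch of that application-service check against the read side; the IUserNameLookup interface and the service name are assumptions made up for the example, not a prescribed API:

```csharp
using System;
using System.Threading.Tasks;

// Assumed read-side lookup; with an eventually consistent read model
// this check is best-effort, and a unique constraint in the store (or a
// compensating action) still has to catch the rare race.
public interface IUserNameLookup
{
    Task<bool> IsTakenAsync(string userName);
}

public class RegisterUserService
{
    private readonly IUserNameLookup _lookup;

    public RegisterUserService(IUserNameLookup lookup)
    {
        _lookup = lookup;
    }

    public async Task RegisterAsync(string userName)
    {
        // Validate against the read side before touching the aggregate.
        if (await _lookup.IsTakenAsync(userName))
            throw new InvalidOperationException(
                $"User name '{userName}' is already taken.");

        // ...create the User aggregate and persist it here...
    }
}
```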
Related
I have seen many different approaches, and I am fairly new to the domain-driven design approach. What I am struggling with is understanding one complex (at least for me) thing. I know the whole of DDD is complex to understand at first, but I am trying to find any resources I can on it.
Example: I have an order, and an order can have operations. Operations cannot be accessed without an order and make no sense without one. So the Order entity will be my aggregate root. Operations will be entities too, because each operation will have an id (am I right on this one?). Each operation can have subitems (an array of strings, for example, and these can be added to or removed from any operation).
Now, what I am struggling to understand, and what I found everywhere, is that every modification should be made only through the aggregate root... But is it okay to have private methods like setters and getters on the Operation entity itself, as long as they are called only through the aggregate root (the Order entity)?
Sorry if I missed something basic; the whole DDD concept is new to me and I am trying to explore it.
Thanks.
A couple of DDD concepts to arrive at the answer:
Aggregates are Transaction Boundaries.
Aggregates act as gatekeepers for all changes to domain elements enclosed within itself.
Data changes to an Aggregate and its enclosed domain elements are committed atomically. Either everything within the Aggregate stays in sync, or the whole state change operation fails.
The rule also means that one should not access Domain Elements within the Aggregate directly. It would be best if you did not manipulate the domain objects outside the context of the Aggregate.
If Operation is an entity under Order aggregate, then Order is responsible for ensuring operations satisfy the business invariants (a.k.a validations).
Aggregates are loaded in entirety.
Since an Aggregate represents the transaction and consistency boundary of a domain concept, its data is loaded in entirety to guarantee that all Business Invariants are satisfied. Data here means data of all underlying entities and value objects.
If you cannot load the entire data, you cannot guarantee that the change satisfies all business invariants. It may also mean that a data-intensive entity within the Aggregate may need to become an Aggregate itself.
You are protecting the data sanctity and operational consistency of the system if you adhere to these rules. Within the Aggregate itself, how you organize state changes is wholly left to you.
IMHO, I would go with your approach of enclosing all Operation-related behaviors, data attributes, and invariants within the Operation entity. Order is responsible for protecting the data within its boundary, but it need not own the methods/logic for doing everything.
You can create state change methods within the Operation entity too, just like you would have done in the Order aggregate, but invoke them from the order object.
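A minimal sketch of that shape, using the Order/Operation names from the question; the Rename behavior and the rest of the members are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    private readonly List<Operation> _operations = new List<Operation>();

    public IReadOnlyCollection<Operation> Operations => _operations;

    public void AddOperation(Operation operation) => _operations.Add(operation);

    // All changes go through the aggregate root...
    public void RenameOperation(Guid operationId, string newName)
    {
        var operation = _operations.SingleOrDefault(o => o.Id == operationId)
            ?? throw new InvalidOperationException("Unknown operation.");

        // ...but the root delegates the actual state change to the entity.
        operation.Rename(newName);
    }
}

public class Operation
{
    public Guid Id { get; }
    public string Name { get; private set; }

    public Operation(Guid id, string name)
    {
        Id = id;
        Name = name;
    }

    // 'internal' keeps callers outside the assembly from bypassing Order;
    // stricter encapsulation is possible but needs more ceremony.
    internal void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("Name is required.", nameof(newName));

        Name = newName;
    }
}
```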
I have recently dived into DDD and this question started bothering me. For example, take a look at the scenario mentioned in the following article:
Let's say that a user made a mistake while adding an EstimationLogEntry to the Task aggregate and now wants to correct that mistake. What would be the correct way of doing this? Value objects by nature don't have identifiers; they are identified by their structure. If this were a Web application, we would have to send the whole EstimationLogEntry value object as a request parameter, along with the new values, just so we could replace the old value object with the new one. Should EstimationLogEntry be an entity?
It really depends. If it's a sequence of estimations which you append to every time, you can quite possibly envision an operation that updates only the value of the VO. This would use VO semantics (the VO is asked to clone itself in memory with the updated value on the specific property), and the command can just be the estimation (along with a Task id).
If you have an array of VOs which all semantically apply to the Task (instead of just the "latest" one or something), it's a different matter. In that case, you'd probably have to send all of them in the request, and you'd have to include all properties too; but I'd say that the need to change just one probably implies a need to reference them, which in turn implies a need for an Entity instead of a VO.
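For the "update only the latest value" flavour, a hedged sketch might look like this; Task and EstimationLogEntry come from the quoted scenario, while the properties and method names are assumptions:

```csharp
using System;
using System.Collections.Generic;

// Immutable value object: "changing" it means creating a new instance.
public sealed class EstimationLogEntry
{
    public DateTime LoggedAt { get; }
    public decimal Hours { get; }

    public EstimationLogEntry(DateTime loggedAt, decimal hours)
    {
        LoggedAt = loggedAt;
        Hours = hours;
    }

    public EstimationLogEntry WithHours(decimal hours) =>
        new EstimationLogEntry(LoggedAt, hours);
}

public class Task
{
    private readonly List<EstimationLogEntry> _estimationLog =
        new List<EstimationLogEntry>();

    public void LogEstimation(EstimationLogEntry entry) =>
        _estimationLog.Add(entry);

    // Correcting a mistake: replace the latest entry with a copy
    // carrying the corrected value, keeping VO semantics.
    public void CorrectLatestEstimation(decimal correctedHours)
    {
        if (_estimationLog.Count == 0)
            throw new InvalidOperationException("Nothing to correct.");

        var latest = _estimationLog[_estimationLog.Count - 1];
        _estimationLog[_estimationLog.Count - 1] = latest.WithHours(correctedHours);
    }
}
```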
DDD emphasizes the Ubiquitous Language, and many modelling questions like this one derive their answer straight from that language.
First things first: if there's an aggregate that contains a value object, there's a good chance that the value object isn't directly created by the user. That is, the factory that creates the value object lives on the aggregate's API. The value object(s) might even be derived directly from the aggregate's state instead of from any direct method call. In this case, do you want to just discard the aggregate and create a new one? That might make sense depending on your UL.
In some cases, like if you have immutable value objects (based on your UL), you could simply add a new entry to the log that "reverses" the old entry. An example of this would be bank accounts and transactions, where bank accounts are aggregate roots and transactions are the value objects. If a transaction is erroneously entered, you simply write a reversing transaction to void it.
It is definitely possible that you want to update the value object, but that must make sense in your UL, and its implementation must also be framed around your UL. For example, say you have a scheduling application where an aggregate root is a person's schedule and the value objects are meetings. If a user erroneously enters a meeting, what your aggregate root should do is invalidate the old meeting (flip a flag, mark its state cancelled, etc.) and create a new one. These actions fit the UL of your scheduling app, and they amount to the same thing as what you are calling "updating the entry" above.
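A loose sketch of that scheduling example, with every name invented for illustration; cancelling plus re-booking replaces any in-place edit of the meeting value object:

```csharp
using System;
using System.Collections.Generic;

// Meeting as an immutable value object; cancelling produces a new copy.
public sealed class Meeting
{
    public string Title { get; }
    public DateTime StartsAt { get; }
    public bool Cancelled { get; }

    public Meeting(string title, DateTime startsAt, bool cancelled = false)
    {
        Title = title;
        StartsAt = startsAt;
        Cancelled = cancelled;
    }

    public Meeting AsCancelled() => new Meeting(Title, StartsAt, cancelled: true);
}

public class Schedule
{
    private readonly List<Meeting> _meetings = new List<Meeting>();

    public void Book(Meeting meeting) => _meetings.Add(meeting);

    // "Correcting" a mistake in UL terms: cancel the wrong meeting
    // and book the corrected one, rather than editing in place.
    public void Reschedule(Meeting wrongMeeting, Meeting correctedMeeting)
    {
        var index = _meetings.FindIndex(m =>
            m.Title == wrongMeeting.Title && m.StartsAt == wrongMeeting.StartsAt);

        if (index < 0)
            throw new InvalidOperationException("Meeting not found.");

        _meetings[index] = _meetings[index].AsCancelled();
        _meetings.Add(correctedMeeting);
    }
}
```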
In CQRS + ES and DDD, is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
For example, in order validation (in the Order aggregate), there is a business rule that validates an order only if the customer is not flagged. The flag information is put in a read model (specific to the aggregate) via synchronous domain events.
What do you think about this?
is it a good thing to have a small read model in an aggregate to get data from another aggregate or bounded context?
It's not ideal. Aggregates, due to their nature, are not good at enforcing consistency that involves state outside of themselves.
What this usually means is that the business is going to need some way to respond when two aggregates produce an unacceptable state.
You also have the option of checking for the flag before you run the placeOrder command on the aggregate. That check could be done in the command handler, or in the client -- basically, you have ways of "validating" that the command should succeed before passing it to the aggregate.
That said, if it were critical to try to consult the read model while processing the command, a way to do it would be to use a "domain service"; you pass a service provider to the aggregate as part of the command, and let the interface abstract away the fact that running the query requires looking outside of the aggregate.
That gives you some of the decoupling you need to keep the aggregate testable.
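A skeletal version of that domain-service idea; the ICustomerStanding interface and the Place method are invented names, not an established API:

```csharp
using System;

// Domain service abstraction: the aggregate doesn't know (or care)
// that answering this question means consulting a read model.
public interface ICustomerStanding
{
    bool IsFlagged(Guid customerId);
}

public class Order
{
    public Guid CustomerId { get; }
    public bool Placed { get; private set; }

    public Order(Guid customerId)
    {
        CustomerId = customerId;
    }

    // The service is passed in with the command, keeping the aggregate
    // free of infrastructure and easy to test with a fake implementation.
    public void Place(ICustomerStanding customerStanding)
    {
        if (customerStanding.IsFlagged(CustomerId))
            throw new InvalidOperationException("Customer is flagged.");

        Placed = true;
    }
}
```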
It's doable, but not in the form of a read model; rather, as a Value Object in the Aggregate (since we're on the Write side).
If you already have a CustomerId in Order, you just have to compose a VO with it and a Flagged member.
Of course, this remains prone to all the problems of cross-aggregate communication since the data originates from Customer. Order has to be kept in sync with the flagged status of its Customer, which can require quite a bit of work.
In any case, you should probably first determine with your domain expert whether immediate consistency is an absolute requirement (in which case you have to somehow wrap Customer + Order in a transaction) or if you can afford a small delay in Flagged freshness when enforcing that invariant.
If the latter, you can choose between duplicating Flagged in the Order aggregate or the first option given by #VoiceOfUnreason - the main difference probably being that if the data is in the aggregate, you'll get it for free at the Domain level should you need it on multiple occasions, instead of duplicating the check in multiple use cases/command handlers at the application level.
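A bare-bones sketch of the VO-in-the-aggregate option; CustomerId and Flagged come from the answer above, while the CustomerSnapshot type and its members are assumptions:

```csharp
using System;

// Value object composing the customer reference with the copied flag.
public sealed class CustomerSnapshot
{
    public Guid CustomerId { get; }
    public bool Flagged { get; }

    public CustomerSnapshot(Guid customerId, bool flagged)
    {
        CustomerId = customerId;
        Flagged = flagged;
    }

    public CustomerSnapshot WithFlagged(bool flagged) =>
        new CustomerSnapshot(CustomerId, flagged);
}

public class Order
{
    public CustomerSnapshot Customer { get; private set; }

    public Order(CustomerSnapshot customer)
    {
        Customer = customer;
    }

    public void Place()
    {
        // The invariant is checked against locally held data, accepting
        // whatever staleness the chosen sync mechanism allows.
        if (Customer.Flagged)
            throw new InvalidOperationException("Customer is flagged.");
        // ...rest of order placement...
    }

    // Called by whatever keeps the flag in sync (e.g. a Customer event handler).
    public void UpdateCustomerFlag(bool flagged) =>
        Customer = Customer.WithFlagged(flagged);
}
```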
As part of my domain model, let's say I have a WorkItem object. The WorkItem object has several relationships to lookup values, such as:
WorkItemType:
UserStory
Bug
Enhancement
Priority:
High
Medium
Low
And there could possibly be more, such as Status, Severity, etc...
DDD states that if something exists within an aggregate root, you shouldn't attempt to access it outside of that aggregate root. So if I want to be able to add new WorkItemTypes like Task, or new Priorities like Critical, do those lookup values need to be aggregate roots with their own repositories? That seems a little overkill, especially if they are only key/value pairs. How can I allow a user to modify these values and still comply with the aggregate root encapsulation rule?
While the repository pattern as described in the blue book does emphasize its use being exclusive to aggregates, it does leave room open for exceptions. To quote the book:
Although most queries return an object or a collection of objects, it also fits within the concept to return some types of summary calculations, such as an object count, or a sum of a numerical attribute that was intended by the model to be tallied. (pg. 152)
This states that a repository can be used to return summary information, which is not an aggregate. This idea extends to using a repository to look up value objects, just as your use case requires.
Another thing to consider is the read-model pattern which essentially allows for a query-only type of repository which effectively decouples the behavior-rich domain model from query concerns.
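A small sketch of such a lookup-style repository returning value objects; WorkItemType comes from the question, while the interface and its members are assumptions:

```csharp
using System;
using System.Collections.Generic;

// Simple value object for a lookup entry, compared by its content.
public sealed class WorkItemType
{
    public string Name { get; }

    public WorkItemType(string name)
    {
        Name = name ?? throw new ArgumentNullException(nameof(name));
    }

    public override bool Equals(object obj) =>
        obj is WorkItemType other && other.Name == Name;

    public override int GetHashCode() => Name.GetHashCode();
}

// Query-only repository: it returns values, not aggregates,
// which keeps the behavior-rich model free of lookup plumbing.
public interface IWorkItemTypeLookup
{
    IReadOnlyCollection<WorkItemType> GetAll();
    bool Exists(string name);
}
```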
Landon, I think that the only way is to make those value pairs aggregate roots. I know that it might look like overkill, but that's DDD breaking things into small components.
The reasons why I think using a repository is the right way are:
A user needs to be able to add those value pairs independently of a Work Item.
The value pairs don't have a local, unique identity
Remember that DDD is just a set of guidelines, not hard truths. If you think this is overkill, you might want to create a lookup that returns the pairs as value objects. This might work out especially well if you don't have a feature to add value pairs in the application, but instead add them through the database.
As a side note, good question! There are quite a few blog posts about this situation... but not all of them agree on the best way to do it.
Not everything should be modeled using DDD. The complexity of managing the reference data most likely wouldn't justify creating aggregate roots. A common solution would be to use CRUD to manage reference data, and have a Domain Service to interface with that data from the domain.
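One possible shape for that domain service, with all names invented for illustration:

```csharp
using System;
using System.Collections.Generic;

// Reference data is maintained with plain CRUD elsewhere (admin screens,
// straight table edits); the domain only sees this narrow service.
public interface IWorkItemReferenceData
{
    IReadOnlyCollection<string> WorkItemTypes { get; }
    IReadOnlyCollection<string> Priorities { get; }

    bool IsValidWorkItemType(string value);
    bool IsValidPriority(string value);
}

public class WorkItem
{
    public string Type { get; private set; }

    // The aggregate validates against the service without caring how
    // the reference data is stored or edited.
    public void ChangeType(string newType, IWorkItemReferenceData referenceData)
    {
        if (!referenceData.IsValidWorkItemType(newType))
            throw new ArgumentException($"Unknown work item type '{newType}'.");

        Type = newType;
    }
}
```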
Do these lookups have IDs? If not, you could consider making them Value Objects...
I have a couple of questions regarding references between two aggregate roots in a DDD model. Refer to the typical Customer/Order model diagrammed below.
First, should references between the actual object implementations of aggregates always be done through ID values and not object references? For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct? I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and it is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would violate the boundaries between aggregate roots/repositories.
Second, where and how should a cascade-on-delete relationship be enforced for two aggregate roots? For example, say I want all associated orders to be deleted when a customer is deleted. The ICustomerRepository.DeleteCustomer() method should not be referencing the IOrderRepository, should it? That seems like it would break the boundaries between the aggregates/repositories. Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
First, should references between aggregates always be done through ID values and not actual object references?
Not really - though some would make that change for performance reasons.
For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct?
Generally, you'd model one side of the relationship (e.g., Customer.Orders or Order.Customer) for traversal. The other side can be fetched from the appropriate Repository (e.g., CustomerRepository.GetCustomerFor(Order) or OrderRepository.GetOrdersFor(Customer)).
Wouldn't that mean that the OrderRepository would have to know something about how to create a Customer? Wouldn't that be beyond what OrderRepository should be responsible for...
The OrderRepository would know how to use an ICustomerRepository.FindById(int). You can inject the ICustomerRepository. Some may be uncomfortable with that, and choose to put it into a service layer - but I think that's overkill. There's no particular reason repositories can't know about and use each other.
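For instance, a sketch along those lines; ICustomerRepository.FindById(int) is the call mentioned above, everything else is illustrative:

```csharp
public interface ICustomerRepository
{
    Customer FindById(int customerId);
}

public class Customer
{
    public int Id { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
}

// The customer repository is injected, so OrderRepository can populate
// Order.Customer without knowing how Customers are built or persisted.
public class OrderRepository
{
    private readonly ICustomerRepository _customers;

    public OrderRepository(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public Order FindById(int orderId)
    {
        // Placeholder for the real data access that reads the order row.
        int customerId = 42;

        return new Order
        {
            Id = orderId,
            Customer = _customers.FindById(customerId)
        };
    }
}
```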
I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and it is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would violate the boundaries between aggregate roots/repositories.
Aggregate roots are allowed to hold references to other aggregate roots. In fact, anything is allowed to hold a reference to an aggregate root. An aggregate root cannot hold a reference to a non-aggregate root entity that doesn't belong to it, though.
E.g., Customer cannot hold a reference to OrderLines - since OrderLines properly belong as entities on the Order aggregate root.
Second, where and how should a cascade-on-delete relationship be enforced for two aggregate roots?
If (and I stress if, because it's a peculiar requirement) that's actually a use case, it's an indication that Customer should be your sole aggregate root. In most real-world systems, however, we wouldn't actually delete a Customer that has associated Orders - we may deactivate them, move their Orders to a merged Customer, etc. - but not out and out delete the Orders.
That being said, while I don't think it's pure-DDD, most folks will allow some leniency in following a unit of work pattern where you delete the Orders and then the Customer (which would fail if Orders still existed). You could even have the CustomerRepository do the work, if you like (though I'd prefer to make it more explicit myself). It's also acceptable to allow the orphaned Orders to be cleaned up later (or not). The use case makes all the difference here.
Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, and which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the Service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
I probably wouldn't go a service route for something so intimately tied to the repository. As for how to make sure a service is used...you just don't put a public Delete on the CustomerRepository. Or, you throw an error if deleting a Customer would leave orphaned Orders.
Another option would be to have a ValueObject describing the association between the Order and the Customer ARs: a VO which contains the CustomerId and any additional information you might need - name, address, etc. (something like ClientInfo or CustomerData).
This has several advantages:
Your ARs are decoupled - and now can be partitioned, stored as event streams etc.
In the Order AR you usually need to keep the information you had about the customer at the time the order was created, and not reflect any future changes made to the customer.
In almost all cases, the information in the value object will be enough to perform the read operations (e.g. display customer info with the order).
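A minimal sketch of such a value object; the ClientInfo/CustomerData naming comes from the answer, and the specific fields are only examples:

```csharp
using System;

// Captures the customer as it was at order-creation time.
public sealed class CustomerData
{
    public Guid CustomerId { get; }
    public string Name { get; }
    public string Address { get; }

    public CustomerData(Guid customerId, string name, string address)
    {
        CustomerId = customerId;
        Name = name;
        Address = address;
    }
}

public class Order
{
    public Guid Id { get; }
    public CustomerData Customer { get; }

    public Order(Guid id, CustomerData customer)
    {
        Id = id;
        Customer = customer;
    }

    // Reads that only need the customer's name/address can be served
    // from here without loading the Customer aggregate at all.
}
```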
To handle the deletion/deactivation of a Customer, you have the freedom to choose any behavior you like. You can use DomainEvents and publish a CustomerDeleted event, for which you can have a handler that moves the Orders to an archive, or deletes them, or whatever you need. You can also perform more than one operation on that event.
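A rough outline of that event-based cleanup, with all types assumed for illustration:

```csharp
using System;
using System.Collections.Generic;

public class Order { /* order details omitted for the sketch */ }

public sealed class CustomerDeleted
{
    public Guid CustomerId { get; }

    public CustomerDeleted(Guid customerId)
    {
        CustomerId = customerId;
    }
}

public interface IOrderRepository
{
    IReadOnlyCollection<Order> GetOrdersFor(Guid customerId);
    void Archive(Order order);
}

// Reacts to the domain event; archiving could just as well be
// deleting or reassigning, depending on the actual requirement.
public class ArchiveOrdersWhenCustomerDeleted
{
    private readonly IOrderRepository _orders;

    public ArchiveOrdersWhenCustomerDeleted(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void Handle(CustomerDeleted @event)
    {
        foreach (var order in _orders.GetOrdersFor(@event.CustomerId))
            _orders.Archive(order);
    }
}
```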
If for whatever reason DomainEvents are not your choice, you can have the Delete operation implemented as a service operation rather than a repository operation, and use a UoW to perform the operations on both ARs.
I have seen a lot of problems like this when trying to do DDD, and I think the source of the problem is that developers/modelers have a tendency to think in DB terms. You (we :)) have a natural tendency to remove redundancy and normalize the domain model. Once you get over that and allow your model to evolve, involving the domain expert(s) in its evolution, you will see that it's not that complicated and it's quite natural.
UPDATE: a similar VO - OrderInfo - can be placed inside the Customer AR if needed, with only the required information: order total, order item count, etc.