Domain-driven design method duplication

I am currently working through the domain driven design book by Eric Evans, and there is one concept that I am having trouble with...
According to the book, all aggregates should have an aggregate root, and all members of the aggregate should only be accessed via this root. The root should also be responsible for enforcing invariants. Will this not lead to a lot of method duplication though? Take for example the following scenario:
I have a class Order that consists of a set of OrderLines. The Order class is the aggregate root in this case, and it must enforce the invariant that all OrderLines of a single Order have unique order numbers. To ensure that this invariant is not violated, the Order class does not expose its OrderLines, and instead offers a method updateOrderLineOrderNumber(long orderLineId, int newOrderNumber) through which OrderLines must be updated. This method simply checks that the newOrderNumber does not conflict with an existing order number, and then calls the method updateOrderNumber(int newOrderNumber) on the appropriate OrderLine. This is fine while it is only one method, but what happens when the OrderLine class has a couple of methods? Since an Order does not expose its OrderLines, all properties of OrderLines will have to be updated via the Order class, even if the property changes don't need any invariant checking. This will undoubtedly lead to a lot of method duplication, which will only get worse as more classes are added to the aggregate.
Am I understanding this wrong? Are there any alternative mechanisms or design patterns that I could use to prevent this scenario?
One possible strategy that I thought of using is the concept of validators. Whenever a property of an OrderLine is changed, it must first check with a set of validators whether this change is allowed. The Order can then add an appropriate validator to an OrderLine's name property whenever the OrderLine is added to an Order. What do you guys think of this strategy?
Any help or thoughts would be greatly appreciated!

I don't see a problem here, to be honest. Firstly, why would you want to change the orderId? An Id should be set once; a different Id means a different entity.
Usually, if you want to update an entity inside an AR, you just get it and update it:
orderLine = order.getOrderLine(index)
orderLine.changeProduct(someProduct)
If you need to keep some invariants in the AR, for example that OrderLine.product must be unique, then you call an AR method:
order.changeOrderLineProduct(orderLineIndex, someProduct)
Internally, that method checks whether someProduct is unique and, if it is, calls the code above.
There is no DRY violation in this: the AR method checks invariants, the OrderLine method does the update.
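To make the split concrete, here is a minimal Java sketch along the lines of the question's updateOrderLineOrderNumber example (class shapes and field names are assumptions): the root checks the uniqueness invariant, the child applies the change.

class OrderLine {
    private final long id;
    private int orderNumber;

    OrderLine(long id, int orderNumber) {
        this.id = id;
        this.orderNumber = orderNumber;
    }

    long id() { return id; }
    int orderNumber() { return orderNumber; }

    // Package-private: only the aggregate (same package) can change it.
    void updateOrderNumber(int newOrderNumber) { this.orderNumber = newOrderNumber; }
}

class Order {
    private final java.util.List<OrderLine> lines = new java.util.ArrayList<>();

    public void addOrderLine(OrderLine line) {
        // (the same uniqueness check would apply to new lines)
        lines.add(line);
    }

    public void updateOrderLineOrderNumber(long orderLineId, int newOrderNumber) {
        // Invariant enforced by the root: order numbers are unique within this Order.
        boolean taken = lines.stream()
                .anyMatch(l -> l.id() != orderLineId && l.orderNumber() == newOrderNumber);
        if (taken) {
            throw new IllegalArgumentException("Order number already in use: " + newOrderNumber);
        }
        lines.stream()
                .filter(l -> l.id() == orderLineId)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("No such order line: " + orderLineId))
                .updateOrderNumber(newOrderNumber);
    }
}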
I would also think about using more of the ubiquitous language (UL) in this, like "Client changes product on his order line on an order":
client.changeOrderLineProductOnOrder(orderLineIndex, product, order)
This way you can check whether the client is the owner of that order.

Related

What is the purpose of child entity in Aggregate root?

[Follow-up from this question & comments: Should an entity have methods, and if so, how to prevent them from being called outside the aggregate?]
As the title says: I am not clear about the actual/precise purpose of an entity as a child in an aggregate.
According to what I've read in many places, these are the properties of an entity that is a child of an aggregate:
It has identity local to aggregate
It cannot be accessed directly but through aggregate root only
It should have methods
It should not be exposed from aggregate
In my mind, that translates to several problems:
Entity should be private to aggregate
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
So, why do we have an entity at all instead of Value Objects only? It seems much more convenient to have only value objects, keep all methods on the aggregate, and expose value objects (which we already do when copying entity info).
PS.
I would like to focus to child entity on aggregate, not collections of entities.
[UPDATE in response to Constantin Galbenu answer & comments]
So, effectively, you would have something like this?
public class Aggregate {
    ...
    private _someNestedEntity;

    public SomeNestedEntityImmutableState EntityState {
        get {
            return this._someNestedEntity.getState();
        }
    }

    public ChangeSomethingOnNestedEntity(params) {
        this._someNestedEntity.someCommandMethod(params);
    }
}
You are thinking about data. Stop that. :) Entities and value objects are not data. They are objects that you can use to model your problem domain. Entities and Value Objects are just a classification of things that naturally arise if you just model a problem.
Entity should be private to aggregate
Yes. Furthermore all state in an object should be private and inaccessible from the outside.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. We don't expose information that is already available. If the information is already available, that means somebody is already responsible for it. So contact that object to do things for you, you don't need the data! This is essentially what the Law of Demeter tells us.
"Repositories" as often implemented do need access to the data, you're right. They are a bad pattern. They are often coupled with ORM, which is even worse in this context, because you lose all control over your data.
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
The trick is, you don't have to. Every object (class) you create is there for a reason: as described previously, to create an additional abstraction, to model a part of the domain. If you do that, an "aggregate" object that exists on a higher level of abstraction will never want to offer the same methods as the objects below it. That would mean that there is no abstraction whatsoever.
This use case only arises when creating data-oriented objects that do little more than hold data. Obviously you would wonder how you could do anything with these if you can't get the data out. It is, however, a good indicator that your design is not yet complete.
Entity should be private to aggregate
Yes. And I do not think it is a problem. Continue reading to understand why.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
No. Make every method of your aggregate return the data that needs to be persisted and/or raised in an event.
A raw example. The real world would need a more fine-grained response, and maybe the performMove function needs to use the output of game.performMove to build proper structures for persistence and the eventPublisher:
public void performMove(String gameId, String playerId, Move move) {
    Game game = this.gameRepository.load(gameId); // Game is the AR
    List<Event> events = game.performMove(playerId, move); // do something
    persistence.apply(events); // the events contain the IDs of the entities, so the persistence layer can apply each event and save the changes using those IDs and the changed data carried in the event
    this.eventPublisher.publish(events); // notify the rest of the system that something happened
}
Do the same with inner entities. Let the entity return the data that changed because of its method call, including its ID; capture this data in the AR and build the proper output for persistence and the eventPublisher. This way the entity does not even need to expose a public read-only ID property to the AR, and the AR does not need to expose its internal data to the application service. This is the way to get rid of getter/setter bag objects.
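A hedged sketch of that idea in Java, using made-up Game/BoardCell/PieceMoved names: the inner entity reports what it changed (including its own id), and the AR turns that into the events returned to the application service above.

import java.util.List;

class Move {                         // hypothetical command data: which cell, which piece
    final String cellId;
    final String piece;
    Move(String cellId, String piece) { this.cellId = cellId; this.piece = piece; }
}

class PieceMoved {                   // hypothetical event: IDs plus the changed data
    final String gameId;
    final String cellId;
    final String piece;
    PieceMoved(String gameId, String cellId, String piece) {
        this.gameId = gameId; this.cellId = cellId; this.piece = piece;
    }
}

class CellChanged {                  // what the inner entity reports back to the AR
    final String cellId;
    final String piece;
    CellChanged(String cellId, String piece) { this.cellId = cellId; this.piece = piece; }
}

class BoardCell {                    // inner entity, never exposed outside the aggregate
    private final String id;
    private String piece;
    BoardCell(String id) { this.id = id; }

    // Returns the changed data, including its own id, instead of exposing getters.
    CellChanged place(String piece) {
        this.piece = piece;
        return new CellChanged(id, piece);
    }
}

class Game {                         // the aggregate root
    private final String id;
    private final BoardCell cell = new BoardCell("a1"); // one cell keeps the sketch tiny;
                                                        // a real Game would look it up by move.cellId
    Game(String id) { this.id = id; }

    public List<PieceMoved> performMove(String playerId, Move move) {
        // ...game-level rules involving playerId would be checked here...
        CellChanged changed = cell.place(move.piece);   // capture the entity's change
        // The AR builds the output for persistence and the event publisher.
        return List.of(new PieceMoved(id, changed.cellId, changed.piece));
    }
}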
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Sometimes the business rules to check and apply belong exclusively to one entity and its internal state, and the AR just acts as a gateway. That is OK, but if you find this pattern occurring too often, it is a sign of wrong AR design. Maybe the inner entity should be the AR instead of an inner entity, maybe you need to split the AR into several ARs (and one of them is the old inner entity), etc. Do not be afraid of having classes that just have one or two methods.
In response to dee zg's comments:
What does persistence.apply(events) precisely do? Does it save the whole aggregate or the entities only?
Neither. Aggregates and entities are domain concepts, not persistence concepts; you can have a document store, column store, relational database, etc. that does not need to match your domain concepts one-to-one. You do not read aggregates and entities from persistence; you build aggregates and entities in memory with data read from persistence. The aggregate itself does not need to be persisted; that is just a possible implementation detail. Remember that the aggregate is just a construct to organize business rules, it's not meant to be a representation of state.
Your events have context (user intents) and the data that has been changed (along with the IDs needed to identify things in persistence), so it is incredibly easy to write an apply function in the persistence layer that knows what to execute (e.g. which SQL statements, in the case of a relational DB) in order to apply the event and persist the changes.
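For illustration only, a sketch of what such an apply function could look like against a relational store, reusing the hypothetical PieceMoved event from the sketch above (table and column names are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

class EventPersistence {
    private final Connection connection;

    EventPersistence(Connection connection) { this.connection = connection; }

    // One branch (or registered handler) per event type the write side can emit.
    public void apply(List<Object> events) throws SQLException {
        for (Object event : events) {
            if (event instanceof PieceMoved) {
                PieceMoved e = (PieceMoved) event;
                try (PreparedStatement ps = connection.prepareStatement(
                        "UPDATE board_cell SET piece = ? WHERE game_id = ? AND cell_id = ?")) {
                    ps.setString(1, e.piece);
                    ps.setString(2, e.gameId);
                    ps.setString(3, e.cellId);
                    ps.executeUpdate();
                }
            }
            // ...other event types would map to their own statements...
        }
    }
}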
Could you please provide an example of when & why it's better (or even inevitable?) to use a child entity instead of a separate AR referenced by its Id as a value object?
Why do you design and model a class with state and behaviour?
To abstract, encapsulate, reuse, etc. Basic SOLID design. If the entity has everything needed to ensure the domain rules and invariants for an operation, then the entity is the AR for that operation. If you need extra domain rule checks that cannot be done by the entity (i.e. the entity does not have enough inner state to accomplish the check, or the check does not naturally fit the entity and what it represents), then you have to redesign; sometimes that could mean modelling an aggregate that does the extra domain rule checks and delegates the other checks to the inner entity, sometimes it could mean changing the entity to include the new things. It is too dependent on the domain context, so I cannot say that there is a fixed redesign strategy.
Keep in mind that you do not model aggregates and entities in your code. You model just classes with the behaviour to check domain rules and the state needed to do those checks, and which respond with the changes. These classes can act as aggregates or entities for different operations. These terms are used just to help communicate and understand the role of the class in each operation context. Of course, you can be in the situation where the operation does not fit into an entity, and you could model an aggregate with a V.O. persistence ID, and that is OK (sadly, in DDD, without knowing the domain context almost everything is OK by default).
Do you want some more enlightenment from someone who explains things much better than me? (Not being a native English speaker is a handicap for these complex issues.) Take a look here:
https://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-1
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-2
http://blog.sapiensworks.com/post/2016/07/14/DDD-Aggregate-Decoded-3
It has identity local to aggregate
In a logical sense, probably, but concretely implementing this with the persistence means we have is often unnecessarily complex.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
Not necessarily, you could have read-only entities for instance.
The repository part of the problem was already addressed in another question. Reads aren't an issue, and there are multiple techniques to prevent write access from the outside world but still allow the persistence layer to populate an entity directly or indirectly.
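As a sketch of the read-only entity idea mentioned above (the name and fields are mine, not from the answer): the object keeps identity-based equality like any entity, but exposes no mutators to the outside world.

// Sketch: a "read-only" entity. It has identity (entity semantics) but no mutators,
// so the outside world can query it without being able to change it.
final class ShippingAddress {
    private final long id;          // identity: two instances with the same id are the same entity
    private final String street;
    private final String city;

    ShippingAddress(long id, String street, String city) {
        this.id = id;
        this.street = street;
        this.city = city;
    }

    String street() { return street; }
    String city() { return city; }

    @Override public boolean equals(Object other) {
        return other instanceof ShippingAddress && ((ShippingAddress) other).id == id;
    }

    @Override public int hashCode() { return Long.hashCode(id); }
}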
So, why do we have an entity at all instead of Value Objects only?
You might be somewhat hastily putting in the same basket concerns which are really slightly different:
Encapsulation of operations
Aggregate level invariant enforcement
Read access
Write access
Entity or VO data integrity
Just because Value Objects are best made immutable and don't enforce aggregate-level invariants (they do enforce their own data integrity though) doesn't mean Entities can't have a fine-tuned combination of some of the same characteristics.
These questions that you have do not exist in a CQRS architecture, where the Write model (the Aggregate) is different from a Read model. In a flat architecture, the Aggregate must expose read/query methods, otherwise it would be pointless.
Entity should be private to aggregate
Yes, in this way you are clearly expressing the fact that they are not for external use.
We need a read only copy Value-Object to expose information from an entity (at least for a repository to be able to read it in order to save to db, for example)
The Repositories are a special case and should not be seen in the same way as Application/Presentation code. They could be part of the same package/module; in other words, they should be able to access the nested entities.
The entities can be viewed/implemented as objects with an immutable ID and a Value Object representing their state, something like this (in pseudocode):
class SomeNestedEntity
{
    private readonly ID;
    private SomeNestedEntityImmutableState state;

    public getState(){ return state; }
    public someCommandMethod(){ state = state.mutateSomehow(); }
}
So you see? You could safely return the state of the nested entity, as it is immutable. There would be some tension with the Law of Demeter, but this is a decision that you would have to make: if you break it by returning the state, the code is simpler to write at first, but the coupling increases.
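A small Java sketch of what that immutable state object could look like; the field names are illustrative, and withQuantity plays the role of mutateSomehow from the pseudocode above.

// Sketch: the nested entity's state as an immutable value object; "mutation" returns a new instance.
final class SomeNestedEntityImmutableState {
    private final String name;      // illustrative fields, not from the original
    private final int quantity;

    SomeNestedEntityImmutableState(String name, int quantity) {
        this.name = name;
        this.quantity = quantity;
    }

    String name() { return name; }
    int quantity() { return quantity; }

    // Plays the role of mutateSomehow() above: the old instance is never modified.
    SomeNestedEntityImmutableState withQuantity(int newQuantity) {
        return new SomeNestedEntityImmutableState(name, newQuantity);
    }
}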
Methods that we have on entity are duplicated on Aggregate (or, vice versa, methods we have to have on Aggregate that handle entity are duplicated on entity)
Yes, this protects the Aggregate's encapsulation and also permits the Aggregate to protect its invariants.
I won't write too much. Just an example: a car and a gear. The car is the aggregate root; the gear is a child entity.

Can an aggregate be part of a domain-event?

Consider an aggregate with many properties. For example UserGroup. If I want to publish a UserGroupCreatedEvent I can do 2 things:
duplicate the properties from the just-created UserGroup to the UserGroupCreatedEvent and copy their values, OR
refer to the new UserGroup within the UserGroupCreatedEvent
In many examples, like Axon's Contacts App, I've seen property duplication. I wonder why, and whether in real-world CQRS applications this is not a lot of overhead, and whether developers prefer to reference the aggregate instead.
An event is a DTO and it's meant to cross boundaries. Including the aggregate directly has the following problems:
The aggregate is a concept that makes sense only in its own bounded context;
An event should contain only the relevant changes, not the whole state of the concept;
Because an event is a DTO, at one point it will be (de)serialized and that would be a technical problem with properly encapsulated objects;
Every component/context receiving or handling the event would have a dependency on the component where the aggregate is defined;
These are the main reasons why a Domain Event should be just a flattened representation of the relevant state changes.
P.S.: If you need to include the whole state in your event, maybe the event is improperly designed or you're dealing with a simple data structure. Usually an aggregate contains some value objects and/or encapsulates some business constraints.
An important property of domain events is that they are immutable. Bearing that in mind, the two possibilities you mention differ greatly:
Duplicating properties records their value at the time the UserGroup was created.
Referencing the UserGroup by ID just tells you that a UserGroup was created, but not the properties it had at the time. If the UserGroup has been deleted in the meantime, this means that the information is lost.
Which properties you copy depends on just that difference. Do you need to be able to look up e.g. the name of a UserGroup at its creation time? Add it as a property. If not (and if it's not expected that it ever will be required), don't.
Also, domain events have a global scope (i.e. they are meaningful outside of your BC), so you should include all information that clients outside of your BC need to make sense of the domain event.
Note that attaching the whole aggregate root object to a domain event violates the immutability rule of domain events, so this is most probably a bad idea.
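To make the property-duplication option concrete, a minimal sketch (in Java, with invented field names) of a flattened, immutable event that records the values at creation time instead of referencing the aggregate:

import java.time.Instant;

// Sketch: a flat, immutable domain event; it copies the relevant values
// rather than carrying a reference to the UserGroup aggregate.
final class UserGroupCreatedEvent {
    private final String userGroupId;
    private final String name;          // illustrative property copied from the new UserGroup
    private final Instant occurredOn;

    UserGroupCreatedEvent(String userGroupId, String name, Instant occurredOn) {
        this.userGroupId = userGroupId;
        this.name = name;
        this.occurredOn = occurredOn;
    }

    String userGroupId() { return userGroupId; }
    String name() { return name; }
    Instant occurredOn() { return occurredOn; }
}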

How should I enforce relationships and constraints between aggregate roots?

I have a couple of questions regarding references between two aggregate roots in a DDD model, using the typical Customer/Order model as an example.
First, should references between the actual object implementations of aggregates always be done through ID values and not object references? For example, if I want details on the customer of an Order, I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer, rather than setting up the Order object to return a Customer directly, correct? I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and is not much harder to set up if I am using an ORM like NHibernate. Yet I'm fairly certain this would be violating the boundaries between aggregate roots/repositories.
Second, where and how should a cascade on delete relationship be enforced for two aggregate roots? For example, say I want all the associated orders to be deleted when a customer is deleted. The ICustomerRepository.DeleteCustomer() method should not be referencing the IOrderRepository, should it? That seems like it would be breaking the boundaries between the aggregates/repositories. Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the Service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
First, should references between aggregates always be done through ID values and not actual object references?
Not really - though some would make that change for performance reasons.
For example if I want details on the customer of an Order I would need to take the CustomerId and pass it to an ICustomerRepository to get a Customer rather than setting up the Order object to return a Customer directly, correct?
Generally, you'd model one side of the relationship (e.g., Customer.Orders or Order.Customer) for traversal. The other can be fetched from the appropriate Repository (e.g., CustomerRepository.GetCustomerFor(Order) or OrderRepository.GetOrdersFor(Customer)).
Wouldn't that mean that the OrderRepository would have to know something about how to create a Customer? Wouldn't that be beyond what OrderRepository should be responsible for...
The OrderRepository would know how to use an ICustomerRepository.FindById(int). You can inject the ICustomerRepository. Some may be uncomfortable with that, and choose to put it into a service layer - but I think that's overkill. There's no particular reason repositories can't know about and use each other.
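A short Java sketch of that injection (interface and method names are assumptions, not taken from any particular framework):

// Sketch: one repository collaborating with another; the ICustomerRepository-style
// dependency is constructor-injected, with no service layer in between.
interface CustomerRepository {
    Customer findById(int customerId);
}

class Customer { /* the Customer aggregate root; details omitted */ }

class Order {
    private final int customerId;   // the Order stores the id of its Customer
    Order(int customerId) { this.customerId = customerId; }
    int customerId() { return customerId; }
}

class OrderRepository {
    private final CustomerRepository customerRepository;   // injected collaborator

    OrderRepository(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    // Traversal from the Order side: fetch the Customer by the stored id.
    public Customer getCustomerFor(Order order) {
        return customerRepository.findById(order.customerId());
    }
}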
I'm confused because returning a Customer directly seems like it would make writing code against the model easier, and is not much harder to setup if I am using an ORM like NHibernate. Yet I'm fairly certain this would be violating the boundaries between aggregate roots/repositories.
Aggregate roots are allowed to hold references to other aggregate roots. In fact, anything is allowed to hold a reference to an aggregate root. An aggregate root cannot hold a reference to a non-aggregate root entity that doesn't belong to it, though.
E.g., Customer cannot hold a reference to OrderLines, since OrderLines properly belong as entities on the Order aggregate root.
Second, where and how should a cascade on delete relationship be enforced for two aggregate roots?
If (and I stress if, because it's a peculiar requirement) that's actually a use case, it's an indication that Customer should be your sole aggregate root. In most real-world systems, however, we wouldn't actually delete a Customer that has associated Orders - we may deactivate them, move their Orders to a merged Customer, etc. - but not out and out delete the Orders.
That being said, while I don't think it's pure-DDD, most folks will allow some leniency in following a unit of work pattern where you delete the Orders and then the Customer (which would fail if Orders still existed). You could even have the CustomerRepository do the work, if you like (though I'd prefer to make it more explicit myself). It's also acceptable to allow the orphaned Orders to be cleaned up later (or not). The use case makes all the difference here.
Should I instead have a CustomerManagement service which handles deleting Customers and their associated Orders, which would reference both an IOrderRepository and an ICustomerRepository? In that case, how can I be sure that people know to use the Service and not the repository to delete Customers? Is that just down to educating them on how to use the model correctly?
I probably wouldn't go a service route for something so intimately tied to the repository. As for how to make sure a service is used...you just don't put a public Delete on the CustomerRepository. Or, you throw an error if deleting a Customer would leave orphaned Orders.
Another option would be to have a ValueObject describing the association between the Order and the Customer ARs: a VO which will contain the CustomerId and any additional information you might need, such as name, address, etc. (something like ClientInfo or CustomerData).
This has several advantages:
Your ARs are decoupled - and now can be partitioned, stored as event streams etc.
In the Order AR you usually need to keep the information you had about the customer at the time the order was created, and not reflect in it any future changes made to the customer.
In almost all cases the information in the value object will be enough to perform the read operations (display customer info with the order).
To handle the deletion/deactivation of a Customer you have the freedom to choose any behavior you like. You can use DomainEvents and publish a CustomerDeleted event, for which you can have a handler that moves the Orders to an archive, or deletes them, or whatever you need. You can also perform more than one operation on that event.
If for whatever reason DomainEvents are not your choice, you can have the Delete operation implemented as a service operation rather than a repository operation, and use a UoW to perform the operations on both ARs.
I have seen a lot of problems like this when trying to do DDD, and I think the source of the problems is that developers/modelers have a tendency to think in DB terms. You (we :) ) have a natural tendency to remove redundancy and normalize the domain model. Once you get over it and allow your model to evolve, involving the domain expert(s) in its evolution, you will see that it's not that complicated and it's quite natural.
UPDATE: a similar VO, OrderInfo, can be placed inside the Customer AR if needed, with only the needed information: order total, order item count, etc.
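For illustration, a minimal Java sketch of such a value object inside the Order AR (the fields are assumed, not prescribed):

import java.util.Objects;

// Sketch: the Order keeps a snapshot of customer details as a value object, plus the CustomerId.
final class CustomerData {
    final String customerId;
    final String name;              // snapshot taken at order-creation time
    final String shippingAddress;

    CustomerData(String customerId, String name, String shippingAddress) {
        this.customerId = customerId;
        this.name = name;
        this.shippingAddress = shippingAddress;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof CustomerData)) return false;
        CustomerData other = (CustomerData) o;
        return customerId.equals(other.customerId)
                && name.equals(other.name)
                && shippingAddress.equals(other.shippingAddress);
    }

    @Override public int hashCode() { return Objects.hash(customerId, name, shippingAddress); }
}

class Order {
    private final String orderId;
    private final CustomerData customer;    // no object reference to the Customer AR

    Order(String orderId, CustomerData customer) {
        this.orderId = orderId;
        this.customer = customer;
    }

    CustomerData customer() { return customer; }  // enough for most read operations
}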

Simple aggregate root and repository

I'm one of many trying to understand the concept of aggregate roots, and I think that I've got it!
However, when I started modeling this sample project, I quickly ran into a dilemma.
I have the two entities ProcessType and Process. A Process cannot exist without a ProcessType, and a ProcessType has many Processes. So a process holds a reference to a type, and cannot exist without it.
So should ProcessType be an aggregate root? New processes would be created by calling processType.AddProcess(new Process());
However, I have other entities that only hold a reference to the Process and access its type through Process.Type. In this case it makes no sense going through ProcessType first.
But AFAIK entities outside the aggregate are only allowed to hold references to the root of the aggregate, and not entities inside the aggregate. So do I have two aggregates here, each with their own repository?
I largely agree with what Sisyphus has said, particularly the bit about not constricting yourself to the 'rules' of DDD that may lead to a pretty illogical solution.
In terms of your problem, I have come across the situation many times, and I would term 'ProcessType' as a lookup. Lookups are objects that 'define', and have no references to other entities; in DDD terminology, they are value objects. Other examples of what I would term a lookup may be a team member's 'RoleType', which could be a tester, developer, project manager for example. Even a person's 'Title' I would define as a lookup - Mr, Miss, Mrs, Dr.
I would model your process aggregate as:
public class Process
{
    public ProcessType ProcessType { get; }
}
As you say, these types of objects typically need to populate dropdowns in the UI and therefore need their own data access mechanism. However, I have personally NOT created 'repositories' as such for them, but rather a 'LookupService'. This for me retains the elegance of DDD by keeping 'repositories' strictly for aggregate roots.
Here is an example of a command handler on my app server and how I have implemented this:
Team Member Aggregate:
public class TeamMember : Person
{
    public Guid TeamMemberID
    {
        get { return _teamMemberID; }
    }

    public TeamMemberRoleType RoleType
    {
        get { return _roleType; }
    }

    public IEnumerable<AvailabilityPeriod> Availability
    {
        get { return _availability.AsReadOnly(); }
    }
}
Command Handler:
public void CreateTeamMember(CreateTeamMemberCommand command)
{
    TeamMemberRoleType role = _lookupService.GetLookupItem<TeamMemberRoleType>(command.RoleTypeID);

    TeamMember member = TeamMemberFactory.CreateTeamMember(command.TeamMemberID,
                                                           role,
                                                           command.DateOfBirth,
                                                           command.FirstName,
                                                           command.Surname);

    using (IUnitOfWork unitOfWork = UnitOfWorkFactory.CreateUnitOfWork())
        _teamMemberRepository.Save(member);
}
The client can also make use of the LookupService to populate dropdowns, etc.:
ILookup<TeamMemberRoleType> roles = _lookupService.GetLookup<TeamMemberRoleType>();
Not so simple. ProcessType is most likely a knowledge-layer object; it defines a certain process. Process, on the other hand, is an instance of a process defined by that ProcessType. You probably really don't need or want the bidirectional relationship. Process is probably not a logical child of a ProcessType. Processes typically belong to something else, like a Product, or Factory, or Sequence.
Also, by definition, when you delete an aggregate root you delete all members of the aggregate. When you delete a Process, I seriously doubt you really want to delete the ProcessType. If you deleted a ProcessType you might want to delete all Processes of that type, but that relationship is already not ideal, and chances are you will never be deleting definition objects once you have a historical Process that is defined by a ProcessType.
I would remove the Processes collection from ProcessType and find a more suitable parent if one exists. I would keep the ProcessType as a member of Process since it probably defines Process. Operational layer (Process) and Knowledge Layer (ProcessType) objects rarely work as a single aggregate, so I would either have Process be an aggregate root or possibly find an aggregate root that is a parent for Process. Then ProcessType would be an external class. Process.Type is most likely redundant since you already have Process.ProcessType. Just get rid of that.
I have a similar model for healthcare. There is Procedure (Operational layer) and ProcedureType (knowledge layer). ProcedureType is a standalone class. Procedure is a child of a third object Encounter. Encounter is the aggregate root for Procedure. Procedure has a reference to ProcedureType but it is one way. ProcedureType is a definition object it does not contain a Procedures collection.
EDIT (because comments are so limited)
One thing to keep in mind through all of this. Many are DDD purists and adamant about rules. However if you read Evans carefully he constantly raises the possibility that tradeoffs are often required. He also goes to pretty great lengths to characterize logical and carefully thought out design decisions versus things like teams that do not understand the objectives or circumvent things like aggregates for the sake of convenience.
The important thing is to understand and apply the concepts as opposed to the rules. I see many DDD practitioners shoehorn an application into illogical and confusing aggregates, etc., for no other reason than that a literal rule about repositories or traversal is being applied. That is not the intent of DDD, but it is often the product of the overly dogmatic approach many take.
So what are the key concepts here:
Aggregates provide a means to make a complex system more manageable by reducing the behaviors of many objects into higher level behaviors of the key players.
Aggregates provide a means to ensure that objects are created in a logical and always valid condition that also preserves a logical unit of work across updates and deletes.
Let's consider the last point. In many conventional applications someone creates a set of objects that are not fully populated because they only need to update or use a few properties. The next developer comes along and he needs these objects too, and someone has already made a set somewhere in the neighborhood for a different purpose. Now this developer decides to just use those, but he then discovers they don't have all the properties he needs. So he adds another query and fills out a few more properties. Eventually the team stops adhering to OOP, taking the common attitude that OOP is "inefficient and impractical for the real world and causes performance issues such as creating full objects to update a single property". What they end up with is an application full of embedded SQL code and objects that essentially materialize at random anywhere. Even worse, these objects are bastardized, invalid proxies. A Process appears to be a Process, but it is not; it is partially populated in different ways at any given point depending on what was needed. You end up with a ball of mud of numerous queries continuously partially populating objects to varying degrees, and often a lot of extraneous crap like null checks that should not exist but are required because the object is never truly valid.
Aggregate rules prevent this by ensuring objects are created only at certain logical points and always with a full set of valid relationships and conditions. So now that we fully understand exactly what aggregate rules are for and what they protect us from, we also want to understand that we do not want to misuse these rules and create strange aggregates that do not reflect what our application is really about, simply because these aggregate rules exist and must be followed at all times.
So when Evans says create Repositories only for aggregates he is saying create aggregates in a valid state and keep them that way instead of bypassing the aggregate for internal objects directly. You have a Process as a root aggregate so you create a repository. ProcessType is not part of that aggregate. What do you do? Well if an object is by itself and it is an entity, it is an aggregate of 1. You create a repository for it.
Now the purist will come along and say you should not have that repository because ProcessType is a value object, not an entity. Therefore ProcessType is not an aggregate at all, and therefore you do not create a repository for it. So what do you do? What you don't do is shoehorn ProcessType into some kind of artificial model for no other reason than that you need to get it, so you need a repository, but to have a repository you have to have an entity as an aggregate root. What you do is carefully consider the concepts. If someone tells you that the repository is wrong, but you know that you need it and that, whatever they may call it, your repository system is valid and preserves the key concepts, you keep the repository as is instead of warping your model to satisfy dogma.
Now in this case, assuming I am correct about what ProcessType is, as the other commenter noted it is in fact a Value Object. You say it cannot be a Value Object. That could be for several reasons. Maybe you say that because you use NHibernate, for example, but the NHibernate model for implementing value objects in the same table as another object does not work, so your ProcessType requires an identity column and field. Often, because of database considerations, the only practical implementation is to have value objects with IDs in their own table. Or maybe you say that because each Process points to a single ProcessType by reference.
It does not matter. It is a Value Object because of the concept. If you have 10 Process objects that are of the same ProcessType, you have 10 Process.ProcessType members and values. Whether each Process.ProcessType points to a single shared reference, or each got its own copy, they should still by definition all be exactly the same thing and all be completely interchangeable with any of the other 10. THAT is what makes it a Value Object. The person who says "it has an Id, therefore it cannot be a Value Object, you have an entity" is making a dogmatic error. Don't make the same error: if you need an ID field, give it one, but don't say "it can't be a Value Object" when it in fact is, albeit one that for other reasons you had to give an Id to.
So how do you get this one right or wrong? ProcessType is a Value Object, but for some reason you need it to have an Id. The Id per se does not violate the rules. You get it right by having 10 Processes that all have a ProcessType that is exactly the same. Maybe each has a local deep copy, maybe they all point to one object, but each is identical either way, ergo each has an Id = 2, for example. You get it wrong when you do this: 10 Processes each have a ProcessType, and this ProcessType is identical and completely interchangeable, EXCEPT now each also has its own unique Id as well. Now you have 10 instances of the same thing, but they vary only in Id and will always vary only in Id. Now you no longer have a Value Object, not because you gave it an Id, but because you gave it an Id with an implementation that reflects the nature of an entity: each instance is unique and different.
Make sense?
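One way to express that point in code, a sketch with assumed fields: the ProcessType carries a surrogate Id for persistence, but equality is based on its defining values, so instances with the same definition stay interchangeable.

import java.util.Objects;

// Sketch: a value object that happens to carry a surrogate Id for persistence.
final class ProcessType {
    private final long id;          // surrogate Id required by the database, not part of the concept
    private final String name;      // the value that actually defines the type (assumed field)

    ProcessType(long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Equality is based on the defining value, not the surrogate Id:
    // two ProcessTypes with the same name are completely interchangeable.
    @Override public boolean equals(Object o) {
        if (!(o instanceof ProcessType)) return false;
        ProcessType other = (ProcessType) o;
        return name.equals(other.name);
    }

    @Override public int hashCode() { return Objects.hash(name); }
}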
Look, I think you have to restructure your model. Use ProcessType as a Value Object and Process as the Aggregate Root.
This way every Process has a ProcessType:
public class Process
{
    public Process()
    {
    }

    public ProcessType ProcessType { get; }
}
For this you just need one aggregate root, not two.

DDD - Aggregate Root - Example Order and OrderLine

I am trying to get my hands dirty learning DDD (by developing a sample eCommerce site with entities like Order, OrderLines, Product, Categories etc).
From what I could perceive about the Aggregate Root concept, I thought the Order class should be an aggregate root for OrderLine.
Things went fine so far; however, I am confused when it comes to defining a create-order flow from the UI.
When I want to add an order line to my order object, how should I get/create an instance of an OrderLine object:
Should I hardcode the new OrderLine() statement in my UI/Service class
Should I define a method with parameters like productID, quantity etc in Order class?
Also, what if I want to remove the hardcoded instantiations from the UI or the Order class using DI? What would be the best approach for this?
From what I could perceive about the Aggregate Root concept, I thought the Order class should be an aggregate root for OrderLine.
Yes, OrderLines should most likely be under an Order root, since OrderLines likely make no sense outside of a parent Order.
Should I hardcode the new OrderLine() statement in my UI/Service class?
Probably not, though this is how it happens often and it is made to work. The problem, as I see it, is that object construction often happens in different contexts, and the validation constraints differ depending on that context.
Should I define a method with parameters like productID, quantity etc in Order class?
As in:
public OrderLine AddOrderLine(Product product, int Quantity ... )
This is one way to do it. Notice I used a Product class instead of a ProductId. Sometimes one is preferable to the other. I find I use both a lot for various reasons - sometimes I have the ID and there's no good reason to pull the aggregate root, sometimes I need the other root to validate the operation.
Another way I do this is to implement a custom collection for the children.
So I have:
order.OrderLines.Add(product, quantity);
This feels a little more natural or OO, and in particular if an entity root has many child collections it avoids clutter.
order.AddOrderLine(), order.AddXXX(), order.AddYYY(), order.AddZZZ()
versus
order.OrderLines.Add(), order.ZZZs.Add(), order.YYYs.Add()
Also, what if I want to remove the hardcoded instantiations from the UI or the Order class using DI? What would be the best approach for this?
This would be a textbook case for the Factory pattern. I inject such a Factory into my custom collections to support instantiation in those Add() methods.
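A rough Java sketch of that arrangement (the OrderLineFactory abstraction and names are mine, not from the answer): the factory is injected into the child collection, so callers never new up OrderLines themselves.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Product { }

class OrderLine {
    final Product product;
    final int quantity;
    OrderLine(Product product, int quantity) { this.product = product; this.quantity = quantity; }
}

interface OrderLineFactory {                    // hypothetical factory abstraction
    OrderLine create(Product product, int quantity);
}

class OrderLineCollection {
    private final OrderLineFactory factory;     // injected, keeps instantiation out of the UI
    private final List<OrderLine> lines = new ArrayList<>();

    OrderLineCollection(OrderLineFactory factory) { this.factory = factory; }

    // Supports the order.OrderLines.Add(product, quantity) style from the answer above.
    public OrderLine add(Product product, int quantity) {
        OrderLine line = factory.create(product, quantity);
        lines.add(line);
        return line;
    }

    public List<OrderLine> asReadOnly() { return Collections.unmodifiableList(lines); }
}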
You could use an OrderLine factory to get instances of OrderLines. You would "new up" an OrderLine object in the factory with parameters passed into the factory method, and then return the new instance to your Order object. Always try to isolate instantiations and don't do it in the UI. There is a question here that uses this technique.
Here is a great book you will find useful on DDD.
