I've just run into a problem while trying to re-design our existing business objects to adopt a more domain-driven design approach.
Currently, I have a Product Return aggregate root, which handles data related to a return for a specific product. As part of this aggregate, a date needs to be provided to say what month (and year) the return is currently for.
Each product return MUST be sequential, and so each product return must be the next month after the previous. Attempting to create a product return that doesn't follow this pattern should result in an exception.
I had thought about passing along a Domain Service to the method (or constructor) that sets the PeriodDate for the return, but I'm at a loss for how I would do this. Even if the domain service had a reference to a repository, I can't see it being appropriate to put a "GetNextReturnDate()" on that repository.
For background, each product return is associated with a product. I was reluctant to make the product the aggregate root, as loading up all the product returns just to add one seemed like an extremely non-performant way of doing things (considering this library is going to be used with a RESTful Web API).
Can anyone provide suggestions as to how I should model this? Is it a matter of just changing the aggregate root and dealing with the performance? Is there some place in the Domain that 'query' type services can be placed?
As an example, the current constructor for the product return looks like this:
public ProductReturn(int productID, int estimateTypeID, IProductService productService)
{
// This doesn't feel right, and I'm not sure how to implement it...
_periodDate = productService.GetNextReturnDate(productID);
// Other initialization code here...
}
The IProductService (and its implementation) sit in the Domain layer, so there's no ability to call SQL directly from there (and I feel that's not what I should be doing here anyway).
Again, in all likelihood I've modelled this terribly, or I've missed something when designing the aggregate, so any help would be appreciated!
I think my broader problem here is understanding how to implement constraints (be they foreign key, uniqueness, etc.) inside a domain entity without fetching an entire list of returns via a domain service, when a simple SQL query would give me the information required.
EDIT: I did see this answer to another question: https://stackoverflow.com/a/48202644/9303178, which suggests having 'Domain Query' interfaces in the domain, which sound like they could return the sort of data I'm looking for.
However, I'm still worried I'm missing something here with my design, so again, I'm all open to suggestions.
EDIT 2: In response to VoiceOfUnreason's answer below, I figured I'd clarify a few things RE the PeriodDate property.
The rules on it are as follows:
CANNOT be NULL
It MUST be in sequence with the other product returns, and the return cannot be in a valid state unless this is fulfilled
It's a tricky one. I can't rely on the date passed in, because it could very well be out of sequence, but I can't figure out the date without the service being injected. I am going to transform the constructor into a method on a Factory to remove the 'constructor doing work' anti-pattern.
I may be overly defensive about the way the code currently is, but the number of times the ReturnService has to be injected feels wrong. Mind you, there are lots of scenarios where the return value must be recalculated, but it feels as though it would be easy to do this just before a save (though I couldn't think of a clean way to do that).
Overall I just feel this class has a bit of a smell to it (with the injected services and whatnot), but I may be needlessly worrying.
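For illustration, here's a minimal sketch of the factory direction I'm describing, assuming the constructor is changed to accept the computed date directly (ProductReturnFactory and its CreateNextReturn method are just my working names, not existing code):

public class ProductReturnFactory
{
    private readonly IProductService _productService;

    public ProductReturnFactory(IProductService productService)
    {
        _productService = productService;
    }

    public ProductReturn CreateNextReturn(int productID, int estimateTypeID)
    {
        // The factory consults the domain service, so the ProductReturn
        // constructor no longer does any work of its own.
        DateTime periodDate = _productService.GetNextReturnDate(productID);
        return new ProductReturn(productID, estimateTypeID, periodDate);
    }
}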
I had thought about passing along a Domain Service to the method (or constructor) that sets the PeriodDate for the return, but I'm at a loss for how I would do this.
I strongly suspect that passing a domain service to the method is the right approach to take.
A way of thinking about this: fundamentally, the aggregate root is a bag of cached data, plus methods for changing the contents of that bag. During any given function call, its entire knowledge of the world is that bag, plus the arguments that have been passed to the method.
So if you want to tell the aggregate something that it doesn't already know -- data that isn't currently in the bag -- then you have to pass that data as an argument.
That in turn comes in two forms: if you can know, without looking in the aggregate bag, what data to pass in, you just pass it as an argument. If you need some of the information hidden in the aggregate bag, then you pass a domain service, and let the aggregate (which has access to the contents of the bag) pass in the necessary data.
public ProductReturn(int productID, int estimateTypeID, IProductService productService)
{
// This doesn't feel right, and I'm not sure how to implement it...
_periodDate = productService.GetNextReturnDate(productID);
// Other initialization code here...
}
This spelling is a little bit odd; a constructor that does work is usually an anti-pattern, and it's a bit weird to pass in a service to compute a value when you could just compute the value and pass it in.
If you feel like it's part of the business logic to decide how to compute _periodDate, which is to say if you think the rules for choosing the periodDate belong to the ProductReturn, then you would normally use a method on the object to encapsulate those rules. On the other hand, if periodDate is really decided outside of this aggregate (like productID is, in your example), then just pass in the right answer.
One idea that may be crossing you up: time is not something that exists in the aggregate bag. Time is an input; if the business rules need to know the current time to do some work, then you will pass that time to the aggregate as an argument (again, either as data, or as a domain service).
The user can’t pass a date in, because at any given time, the date for a return can ONLY be the next date from the last return.
Typically, you have a layer sitting between the user and the domain model -- the application; it's the application that decides what arguments to pass to the domain model. For instance, it would normally be the application that is passing the "current time" to the domain model.
On the other hand, if the "date of the last return" is something owned by the domain model, then it probably makes more sense to pass a domain service along.
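A rough sketch of the two forms described above, using invented names (IReturnCalendar and SchedulePeriod are placeholders, not from the question):

public class ProductReturn
{
    private readonly int _productID;
    private DateTime _periodDate;

    public ProductReturn(int productID) { _productID = productID; }

    // Form 1: the caller already knows the answer, so it passes plain data.
    public void SchedulePeriod(DateTime periodDate)
    {
        _periodDate = periodDate;
    }

    // Form 2: the answer depends on state hidden inside the aggregate (_productID),
    // so the caller passes a domain service and the aggregate supplies its own data.
    public void SchedulePeriod(IReturnCalendar calendar)
    {
        _periodDate = calendar.GetNextReturnDate(_productID);
    }
}

public interface IReturnCalendar
{
    DateTime GetNextReturnDate(int productID);
}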
I should also mention - a return is invalid without a date, so I can’t construct the entity, then hope that the method is called at a later time
Are you sure? Effectively, you are introducing an ordering constraint on the domain model - none of these messages is permitted unless that one has been received first, which means you've got a race condition. See Udi Dahan's Race Conditions Don't Exist
More generally, an entity is valid or invalid based on whether or not it is going to be able to satisfy the postconditions of its methods; you can loosen the constraints during construction if the postconditions are broader.
Scott Wlaschin's Domain Modeling Made Functional describes this in detail; in summary, _periodDate might be an Or Type, and interacting with it has explicit choices: do this thing if valid, do this other thing if invalid.
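In C# terms (names invented here, and C# 9 records assumed), that 'Or Type' might be sketched as:

// A choice type: a period date is either valid or invalid, and callers must
// decide explicitly what to do in each case instead of relying on nulls.
public abstract record PeriodDateResult
{
    public sealed record Valid(DateTime Value) : PeriodDateResult;
    public sealed record Invalid(string Reason) : PeriodDateResult;
}

public static class PeriodDateExample
{
    public static string Describe(PeriodDateResult periodDate) => periodDate switch
    {
        PeriodDateResult.Valid v => $"Return scheduled for {v.Value:yyyy-MM}",
        PeriodDateResult.Invalid i => $"Cannot schedule return: {i.Reason}",
        _ => throw new InvalidOperationException("Unknown case")
    };
}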
The idea that constructing a ProductReturn requires a valid _periodDate isn't wrong; but there are tradeoffs to consider that will vary depending on the context you are operating in.
Finally, if any date is saved to the database that ISN’T the next sequential date, the calculation of subsequent returns will fail as we require a sequence to do the calculation properly.
If you have strong constraints between data stored here and data stored somewhere else, then this could indicate a modeling problem. Make sure you understand the implications of Set Validation before you get too deeply invested into a design.
Your problem is querying across multiple aggregates (product returns) for decision making (creating a new product return aggregate).
Decision making based on querying across aggregates using a repository will always be wrong; we will never be able to guarantee consistency, as the state read from the repository will always be a little old. (Aggregates are transaction boundaries. The state read from the repository is only correct at that very instant; in the next instant the aggregate's state may change.)
In your domain, what I would do is create a ProductReturnManager AggregateRoot, which manages the returns for a particular product, and a ProductReturn Aggregate, which represents one specific return of the product. The ProductReturnManager AggregateRoot manages the lifecycle of the ProductReturnAggregate to ensure consistency.
The logic for assigning the next sequential month's date to a ProductReturn is in ProductReturnManager (basically ProductReturnManager acts as a constructor). The behaviour of the product return is in ProductReturnAggregate.
The ProductReturnManager can be modeled as a Saga, which is created on the first CreateProductReturnCommand (for a productId), and the same saga is loaded for further CreateProductReturn commands (correlated by productId). It handles ProductReturnCreatedEvent to update its state. The saga creation logic will depend on your business rules (e.g. saga creation is done on an InvoiceRaisedForProduct event and the saga handles CreateProductReturn commands).
sample code:
class ProductReturnManagerSagaState
{
    ProductId productId;
    // can cache details about the last product return
    ProductReturnDetails lastProductReturnDetails;
}

class ProductReturnManagerSaga : Saga<ProductReturnManagerSagaState>,
    IAmStartedByMessages<CreateProductReturn>,
    IHandleMessages<ProductReturnCreatedEvent>
{
    void Handle(CreateProductReturn message)
    {
        // calculate the next product return date from the saga's cached state
        Date productReturnDate = getNextReturnDate(Data.lastProductReturnDetails.productReturnDate);

        // create the product return
        ProductReturnAggregateService.createProductReturn(Data.productId, productReturnDate);
    }

    void Handle(ProductReturnCreatedEvent message)
    {
        // logic for updating the last product return details in the saga state
    }
}

class ProductReturnAggregate
{
    ProductId productId;
    Date productReturnDate;
    ProductPayment productPayment;
    ProductReturnState productReturnState;

    // commands for the product return
    void markProductReturnAsProcessing() { /* ... */ }
}
This is an excellent video by Udi Dahan on working across multiple aggregates.
Related
I have recently dived into DDD and this question started bothering me. For example, take a look at the scenario mentioned in the following article:
Let's say that a user made a mistake while adding an EstimationLogEntry to the Task aggregate, and now wants to correct that mistake. What would be the correct way of doing this? Value objects by nature don't have identifiers, they are identified by their structure. If this was a Web application, we would have to send the whole EstimationLogEntry value object as a request parameter, along with the new values, just so we could replace the old value object with the new one. Should EstimationLogEntry be an entity?
It really depends. If it's a sequence of estimations, which you append to every time, you can quite possibly envision an operation that updates only the value of the VO. This would use VO semantics (the VO is asked to clone itself in memory with the updated value on the specific property), and the command can just be the estimation (along with a Task id).
If you have an array of VOs which all semantically apply to Task (instead of just the "latest" or something)... it's a different matter. In that case, you'd probably have to send all of them in the request, and you'd have to include all properties too, but I'd say that the need to change just one probably implies a need to reference them, which in turn implies a need to have an Entity instead of a VO.
DDD emphasizes the Ubiquitous Language, and many modelling questions like this one will derive their answer straight from that language.
First things first, if there's an aggregate that contains a value object, there's a good chance that the value object isn't directly created by the user. That is, the factory that creates the value object lives on the aggregate's API. The value object(s) might even be derived directly from the aggregate's state instead of from any direct method call. In this case, do you want to just discard the aggregate and create a new one? That might make sense depending on your UL.
In some cases, like if you have immutable value objects (based on your UL), you could simply add a new entry into the log entry that "reverses" the old entry. An example of this would be bank accounts and transactions. If bank accounts are aggregate roots and transactions are the value objects. If a transaction is erroneously entered, you can simply write a reversing transaction to void it.
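A rough sketch of that reversing-entry idea (a deliberately simplified model, not code from the question):

public sealed record Transaction(DateTime OccurredAt, decimal Amount, string Description);

public class BankAccount
{
    private readonly List<Transaction> _transactions = new List<Transaction>();

    public decimal Balance => _transactions.Sum(t => t.Amount);

    public void Post(Transaction transaction) => _transactions.Add(transaction);

    // Instead of editing or deleting the erroneous value object,
    // append a compensating entry that cancels it out.
    public void Reverse(Transaction erroneous, DateTime now)
    {
        _transactions.Add(new Transaction(now, -erroneous.Amount,
            "Reversal of: " + erroneous.Description));
    }
}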
It is definitely possible that you want to update the value object, but that must make sense in your UL and its implementation must also be framed around your UL. For example, suppose you have a scheduling application where an aggregate root is a person's schedule and the value objects are meetings. If a user erroneously enters a meeting, what your aggregate root should do is invalidate the old meeting (flip a flag, mark its state as cancelled, etc.) and create a new one. These actions fit the UL for your scheduling app. That is the same thing as what you are calling "updating the entry" above.
I am in the process of refactoring an application and am trying to figure out where certain logic should fit. For example, during the registration process I have to check if a user exists based upon their email address. As this requires testing if the user exists in the database it seems as if this logic should not be tied to the model as its existence is dictated by it being in the database.
However, I will have a method on the repository responsible for fetching the user by email, etc. This handles the part about retrieval of the user if they exist. From a use case perspective, registration seems to be a use case scenario, and accordingly it seems there should be a UserService (application service) with a register method that would call the repository method and perform if/then logic to determine whether the user entity returned was null or not.
Am I on the right track with this approach, in terms of DDD? Am I viewing this scenario the wrong way and if so, how should I revise my thinking about this?
This link was provided as a possible solution, Where to check user email does not already exits?. It does help but it does not seem to close the loop on the issue. The thing I seem to be missing from this article would be who would be responsible for calling the CreateUserService, an application service or a method on the aggregate root where the CreateUserService object would be injected into the method along with any other relevant parameters?
If the answer is the application service, that seems like you are losing some encapsulation by taking the domain service out of the domain layer. On the other hand, going the other way would mean having to inject the repository into the domain service. Which of those two options would be preferable and more in line with DDD?
I think the best fit for that behaviour is a Domain Service. A domain service can access persistence, so you can check for existence or uniqueness there.
Check this blog entry for more info.
For example:
public class TransferManager
{
private readonly IEventStore _store;
private readonly IDomainServices _svc;
private readonly IDomainQueries _query;
private readonly ICommandResultMediator _result;
public TransferManager(IEventStore store, IDomainServices svc,IDomainQueries query,ICommandResultMediator result)
{
_store = store;
_svc = svc;
_query = query;
_result = result;
}
public void Execute(TransferMoney cmd)
{
//interacting with the Infrastructure
var accFrom = _query.GetAccountNumber(cmd.AccountFrom);
//Setup value objects
var debit=new Debit(cmd.Amount,accFrom);
//invoking Domain Services
var balance = _svc.CalculateAccountBalance(accFrom);
if (!_svc.CanAccountBeDebitted(balance, debit))
{
//return some error message using a mediator
//this approach works well inside monoliths where everything happens in the same process
_result.AddResult(cmd.Id, new CommandResult());
return;
}
//using the Aggregate and getting the business state change expressed as an event
var evnt = Transfer.Create(/* args */);
//storing the event
_store.Append(evnt);
//publish event if you want
}
}
from http://blog.sapiensworks.com/post/2016/08/19/DDD-Application-Services-Explained
The problem that you are facing is called Set based validation. There are a lot of articles describing the possible solutions. I will give here an extract from one of them (the context is CQRS but it can be applied to some degree to any DDD architecture):
1. Locking, Transactions and Database Constraints
Locking, transactions and database constraints are tried and tested tools for maintaining data integrity, but they come at a cost. Often the code/system is difficult to scale and can be complex to write and maintain. But they have the advantage of being well understood with plenty of examples to learn from. By implication, this approach is generally done using CRUD based operations. If you want to maintain the use of event sourcing then you can try a hybrid approach.
2. Hybrid Locking Field
You can adopt a locking field approach. Create a registry or lookup table in a standard database with a unique constraint. If you are unable to insert the row then you should abandon the command. Reserve the address before issuing the command. For these sorts of operations, it is best to use a data store that isn't eventually consistent and can guarantee the constraint (uniqueness in this case). Additional complexity is a clear downside of this approach, but less obvious is the problem of knowing when the operation is complete. Read side updates are often carried out in a different thread or process or even machine to the command and there could be many different operations happening.
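As an illustration only (not from the article), a reservation table with a unique constraint might be used like this, assuming SQL Server, where error numbers 2627/2601 signal a unique-key violation:

using Microsoft.Data.SqlClient;

public class EmailReservationService
{
    private readonly string _connectionString;

    public EmailReservationService(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns true if the value was reserved; false if someone already holds it,
    // in which case the command should be abandoned.
    public bool TryReserve(string email)
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(
            "INSERT INTO EmailReservations (Email) VALUES (@email)", connection);
        command.Parameters.AddWithValue("@email", email);
        connection.Open();
        try
        {
            command.ExecuteNonQuery();
            return true; // reservation succeeded, safe to issue the command
        }
        catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
        {
            return false; // unique constraint violated: abandon the command
        }
    }
}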
3. Rely on the Eventually Consistent Read Model
To some this sounds like an oxymoron, however, it is a rather neat idea. Inconsistent things happen in systems all the time. Event sourcing allows you to handle these inconsistencies. Rather than throwing an exception and losing someone’s work all in the name of data consistency. Simply record the event and fix it later.
As an aside, how do you know a consistent database is consistent? It keeps no record of the failed operations users have tried to carry out. If I try to update a row in a table that has been updated since I read from it, then the chances are I’m going to lose that data. This gives the DBA an illusion of data consistency, but try to explain that to the exasperated user!
Accepting these things happen, and allowing the business to recover, can bring real competitive advantage. First, you can make the deliberate assumption these issues won’t occur, allowing you to deliver the system quicker/cheaper. Only if they do occur and only if it is of business value do you add features to compensate for the problem.
4. Re-examine the Domain Model
Let’s take a simplistic example to illustrate how a change in perspective may be all you need to resolve the issue. Essentially we have a problem checking for uniqueness or cardinality across aggregate roots because consistency is only enforced with the aggregate. An example could be a goalkeeper in a football team. A goalkeeper is a player. You can only have 1 goalkeeper per team on the pitch at any one time. A data-driven approach may have an ‘IsGoalKeeper’ flag on the player. If the goalkeeper is sent off and an outfield player goes in the goal, then you would need to remove the goalkeeper flag from the goalkeeper and add it to one of the outfield players. You would need constraints in place to ensure that assistant managers didn’t accidentally assign a different player resulting in 2 goalkeepers. In this scenario, we could model the IsGoalKeeper property on the Team, OutFieldPlayers or Game aggregate. This way, maintaining the cardinality becomes trivial.
You seem to be on the right track; the only thing I didn't get is what your UserService.register does.
It should take all the values needed to register a user as input, validate them (using the repository to check the existence of the email) and, if the input is valid, store the new User.
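Roughly, something like this (the repository and factory methods are illustrative names, not a prescribed API):

public class UserService
{
    private readonly IUserRepository _userRepository;

    public UserService(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public User Register(string email, string name, string password)
    {
        // Validation that needs the database goes through the repository.
        if (_userRepository.GetByEmail(email) != null)
            throw new InvalidOperationException("A user with this email already exists.");

        var user = User.Create(email, name, password); // the domain enforces the rest
        _userRepository.Save(user);
        return user;
    }
}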
Problems can arise when the validation involves complex queries. In that case maybe you need to create a secondary store with special indexes suited to queries that you can't do with your domain model, so you will have to manage two different stores that can be out of sync (a user exists in one but isn't replicated in the other one yet).
This kind of problem happens when you store your aggregates in something like a key-value store, where you can search only by the id of the aggregate. If you are using something like a SQL database that lets you search on your entities' fields, you can do a lot with simple queries.
The only thing you need to take care of is to avoid mixing query logic and command logic. In your example the lookup you need is easy: it's just one field and the result is a boolean. Sometimes it can be harder, such as time-based operations or queries spanning multiple tables and aggregating results; in those cases it is better to make your (command) service use a (query) service that offers a simple API to do the calculation, like:
interface UserReportingService {
ComplexResult aComplexQuery(AComplexInput input);
}
You can implement that with a class that uses your repositories, or with an implementation that executes the query directly against your database (SQL, or whatever).
The difference is that if you use the repositories you "think" in terms of your domain objects, whereas if you write the query directly you think in terms of your database abstractions (tables/sets in the case of SQL, documents in the case of Mongo, etc.). Which one to use depends on the query you need to do.
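For example, the direct-query flavour of such a service could look roughly like this (assuming SQL Server and a Users table; the names are made up):

using Microsoft.Data.SqlClient;

// A read-only query service that goes straight to the database,
// bypassing the domain model entirely (no invariants are at risk on reads).
public class SqlUserQueryService
{
    private readonly string _connectionString;

    public SqlUserQueryService(string connectionString)
    {
        _connectionString = connectionString;
    }

    public bool EmailExists(string email)
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand(
            "SELECT COUNT(1) FROM Users WHERE Email = @email", connection);
        command.Parameters.AddWithValue("@email", email);
        connection.Open();
        return (int)command.ExecuteScalar() > 0;
    }
}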
It is fine to inject a repository into the domain.
A repository should have a simple interface, so that domain objects can use it as a simple collection or storage. The main idea of repositories is to hide data access behind a simple and clear interface.
I don't see any problem in calling domain services from a use case. The use case is supposed to be an orchestrator, and domain services are actions. It is fine (and even unavoidable) to trigger domain actions from a use case.
To decide, you should analyze where this restriction comes from.
Is it a business rule? Or maybe the user shouldn't be a part of the model at all?
Usually "User" means authorization and authentication, i.e. behaviour that to my mind should be placed in the use case. I prefer to create a separate entity for the domain (e.g. Buyer) and relate it to the use case's user. So when a new user is registered it is possible to trigger the creation of a new buyer.
If you're using CQRS and creating an entity, and the values of some of its properties are generated as part of its constructor (e.g. a default active value for the status property, or the current datetime for createdAt), how do you include that as part of your response if your command handlers can't return values?
You would need to create the guid before creating the entity, then use this guid to query it. This way your command handlers always return void.
[HttpPost]
public ActionResult Add(string name)
{
Guid guid = Guid.NewGuid();
_bus.Send(new CreateInventoryItem(guid, name));
return RedirectToAction("Item", new { id = guid});
}
public ActionResult Item(Guid id)
{
ViewData.Model = _readmodel.GetInventoryItemDetailsByGuid(id);
return View();
}
Strictly speaking, I don't believe CQRS has a precise hard and fast rule about command handlers not returning values. Greg Young even mentions Martin Fowler's stack.pop() anecdote as a valid counter-example to the rule.
CQS - Command Query Separation, upon which CQRS is based - by Bertrand Meyer does have that rule, but it takes place in a different context and has exceptions, one of which can be interesting for the question at hand.
CQS reasons about objects and the kinds of instructions the routine (execution context) can give them. When issuing a command, there's no need for a return value because the routine already has a reference to the object and can query it whenever it likes as a follow-up to the command.
Still, an important distinction made by Meyer in CQS is the one between a command sent to a known existing object and an instruction that creates an object and returns it.
Functions that create objects

A technical point needs to be clarified before we examine further consequences of the Command-Query Separation principle: should we treat object creation as a side effect?

The answer is yes, as we have seen, if the target of the creation is an attribute a: in this case, the instruction !! a changes the value of an object's field. The answer is no if the target is a local entity of the routine. But what if the target is the result of the function itself, as in !! Result or the more general form !! Result.make (...)?

Such a creation instruction need not be considered a side effect. It does not change any existing object and so does not endanger referential transparency (at least if we assume that there is enough memory to allocate all the objects we need). From a mathematical perspective we may pretend that all of the objects of interest, for all times past, present and future, are already inscribed in the Great Book of Objects; a creation instruction is just a way to obtain one of them, but it does not by itself change anything in the environment.

It is common, and legitimate, for a function to create, initialize and return such an object.

(in Object Oriented Software Construction, p.754)
In other places in the book, Meyer defines this kind of functions as creator functions.
As CQRS is an extension of CQS and maintains the viewpoint that [Commands and Queries] should be pure, I would tend to say that exceptions that hold for CQS are also true in CQRS.
In addition, one of the main differences between CQS and CQRS is the reification of the Command and Query into objects of their own.
In CQRS, there's an additional level of indirection, the "routine" doesn't have a direct reference to the domain object. Object lookup and modification are delegated to the command handler. It weakens, IMO, one of the original reasons that made the "Commands return nothing" precept possible, because the context now can't check the outcome of the operation on its own - it's basically left high and dry until some other object lets it know about the result.
Some ideas:
1. Let your command handlers return values. This is the simplest option - just return what was created inside the entity. There is some disagreement about whether this is 'allowed' in CQRS, though.
2. The preferred approach is to create your defaults (i.e. the id) and pass them into your command. For example, in https://github.com/gregoryyoung/m-r/blob/master/CQRSGui/Controllers/HomeController.cs, in the Add method, a Guid is created and passed in to the CreateInventoryItem command - this could be returned in the response. This could get quite ugly if you have lots of things to pass in, though.
3. If you can't do 1 or 2, you could try having some async way of handling this, but you haven't said what your use case is so it's difficult to advise. If you're using some sort of socket technology you could do sync-over-async style, where you return immediately and then push some value back to the client once the entity has been created. You could also have some sort of workflow where you accept the command, then poll / query on the thing being created.
According to my understanding of CQRS, you cannot query the aggregate and the command handler cannot return any value. The only permitted way of interrogating the aggregate is by listening to the raised events. You can do that by simply querying the read model, if the changes are reflected synchronously from the events to the read model.
In the case where the changes to the read model are asynchronous, things get complicated, but solutions exist.
Note: the "command handler" in my answer is the method on the Aggregate, not some Application layer service.
What I ended up doing is creating a third type: CommandQuery. Everything gets pushed into either a Command or a Query whenever possible, but when you have a scenario where running the command produces data you need (as above), simply turn to the CommandQuery. That way, you know these are special circumstances - like needing an auto-generated id from a create, or needing something back from a stack pop - and you have a clear, easy way to deal with them without the overhead that creating some random dummy guid or relying on events (difficult when you are in a web request context) would cause. In a business setting, you could then discuss as a team whether a CommandQuery is really warranted for the situation.
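As an illustration only (this isn't from any particular framework), the third handler shape can be as simple as:

// Commands and queries stay pure; the rare cases that genuinely need both
// are made explicit by giving them their own handler shape.
public interface ICommandHandler<in TCommand>
{
    void Handle(TCommand command);
}

public interface IQueryHandler<in TQuery, out TResult>
{
    TResult Handle(TQuery query);
}

public interface ICommandQueryHandler<in TCommandQuery, out TResult>
{
    // Performs the state change and returns the data it produced,
    // e.g. a database-generated id or a popped stack value.
    TResult Handle(TCommandQuery commandQuery);
}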
I'm one of many trying to understand the concept of aggregate roots, and I think that I've got it!
However, when I started modeling this sample project, I quickly ran into a dilemma.
I have the two entities ProcessType and Process. A Process cannot exist without a ProcessType, and a ProcessType has many Processes. So a process holds a reference to a type, and cannot exist without it.
So should ProcessType be an aggregate root? New processes would be created by calling processType.AddProcess(new Process());
However, I have other entities that only holds a reference to the Process, and accesses its type through Process.Type. In this case it makes no sense going through ProcessType first.
But AFAIK entities outside the aggregate are only allowed to hold references to the root of the aggregate, and not entities inside the aggregate. So do I have two aggregates here, each with their own repository?
I largely agree with what Sisyphus has said, particularly the bit about not constricting yourself to the 'rules' of DDD that may lead to a pretty illogical solution.
In terms of your problem, I have come across the situation many times, and I would term 'ProcessType' as a lookup. Lookups are objects that 'define', and have no references to other entities; in DDD terminology, they are value objects. Other examples of what I would term a lookup may be a team member's 'RoleType', which could be a tester, developer, project manager for example. Even a person's 'Title' I would define as a lookup - Mr, Miss, Mrs, Dr.
I would model your process aggregate as:
public class Process
{
public ProcessType ProcessType { get; }
}
As you say, these types of objects typically need to populate dropdowns in the UI and therefore need their own data access mechanism. However, I have personally NOT created 'repositories' as such for them, but rather a 'LookupService'. This for me retains the elegance of DDD by keeping 'repositories' strictly for aggregate roots.
Here is an example of a command handler on my app server and how I have implemented this:
Team Member Aggregate:
public class TeamMember : Person
{
    // Backing fields (presumably populated by the TeamMemberFactory)
    private Guid _teamMemberID;
    private TeamMemberRoleType _roleType;
    private List<AvailabilityPeriod> _availability = new List<AvailabilityPeriod>();

    public Guid TeamMemberID
    {
        get { return _teamMemberID; }
    }

    public TeamMemberRoleType RoleType
    {
        get { return _roleType; }
    }

    public IEnumerable<AvailabilityPeriod> Availability
    {
        get { return _availability.AsReadOnly(); }
    }
}
Command Handler:
public void CreateTeamMember(CreateTeamMemberCommand command)
{
TeamMemberRoleType role = _lookupService.GetLookupItem<TeamMemberRoleType>(command.RoleTypeID);
TeamMember member = TeamMemberFactory.CreateTeamMember(command.TeamMemberID,
role,
command.DateOfBirth,
command.FirstName,
command.Surname);
using (IUnitOfWork unitOfWork = UnitOfWorkFactory.CreateUnitOfWork())
_teamMemberRepository.Save(member);
}
The client can also make use of the LookupService to populate dropdown's etc:
ILookup<TeamMemberRoleType> roles = _lookupService.GetLookup<TeamMemberRoleType>();
Not so simple. ProcessType is most likely a knowledge-layer object - it defines a certain process. Process, on the other hand, is an instance of a process of that ProcessType. You probably really don't need or want the bidirectional relationship. Process is probably not a logical child of a ProcessType. They typically belong to something else, like a Product, or Factory, or Sequence.
Also, by definition, when you delete an aggregate root you delete all members of the aggregate. When you delete a Process I seriously doubt you really want to delete the ProcessType. If you deleted a ProcessType you might want to delete all Processes of that type, but that relationship is already not ideal, and chances are you will never be deleting definition objects once you have a historical Process that is defined by a ProcessType.
I would remove the Processes collection from ProcessType and find a more suitable parent if one exists. I would keep the ProcessType as a member of Process, since it probably defines Process. Operational layer (Process) and Knowledge layer (ProcessType) objects rarely work as a single aggregate, so I would either have Process be an aggregate root or possibly find an aggregate root that is a parent for Process. Then ProcessType would be an external class. Process.Type is most likely redundant since you already have Process.ProcessType. Just get rid of that.
I have a similar model for healthcare. There is Procedure (Operational layer) and ProcedureType (knowledge layer). ProcedureType is a standalone class. Procedure is a child of a third object Encounter. Encounter is the aggregate root for Procedure. Procedure has a reference to ProcedureType but it is one way. ProcedureType is a definition object it does not contain a Procedures collection.
EDIT (because comments are so limited)
One thing to keep in mind through all of this. Many are DDD purists and adamant about rules. However if you read Evans carefully he constantly raises the possibility that tradeoffs are often required. He also goes to pretty great lengths to characterize logical and carefully thought out design decisions versus things like teams that do not understand the objectives or circumvent things like aggregates for the sake of convenience.
The important thing is to understand and apply the concepts as opposed to the rules. I see many DDD practitioners shoehorn an application into illogical and confusing aggregates etc. for no other reason than that a literal rule about repositories or traversal is being applied. That is not the intent of DDD, but it is often the product of the overly dogmatic approach many take.
So what are the key concepts here:
Aggregates provide a means to make a complex system more manageable by reducing the behaviors of many objects into higher level behaviors of the key players.
Aggregates provide a means to ensure that objects are created in a logical and always valid condition that also preserves a logical unit of work across updates and deletes.
Let's consider the last point. In many conventional applications someone creates a set of objects that are not fully populated because they only need to update or use a few properties. The next developer comes along and he needs these objects too, and someone has already made a set somewhere in the neighborhood for a different purpose. Now this developer decides to just use those, but he then discovers they don't have all the properties he needs. So he adds another query and fills out a few more properties. Eventually the team stops adhering to OOP, taking the common attitude that OOP is "inefficient and impractical for the real world and causes performance issues such as creating full objects to update a single property". What they end up with is an application full of embedded SQL code and objects that essentially randomly materialize anywhere. Even worse, these objects are bastardized invalid proxies. A Process appears to be a Process but it is not; it is partially populated in different ways at any given point depending on what was needed. You end up with a ball of mud of numerous queries continuously partially populating objects to varying degrees, and often a lot of extraneous crap like null checks that should not exist but are required because the object is never truly valid.
Aggregate rules prevent this by ensuring objects are created only at certain logical points and always with a full set of valid relationships and conditions. So now that we fully understand exactly what aggregate rules are for and what they protect us from, we also want to understand that we also do not want to misuse these rules and create strange aggregates that do not reflect what our application is really about simply because these aggregate rules exists and must be followed at all times.
So when Evans says create Repositories only for aggregates he is saying create aggregates in a valid state and keep them that way instead of bypassing the aggregate for internal objects directly. You have a Process as a root aggregate so you create a repository. ProcessType is not part of that aggregate. What do you do? Well if an object is by itself and it is an entity, it is an aggregate of 1. You create a repository for it.
Now the purist will come along and say you should not have that repository because ProcessType is a value object, not an entity. Therefore ProcessType is not an aggregate at all, and therefore you do not create a repository for it. So what do you do? What you don't do is shoehorn ProcessType into some kind of artificial model for no other reason than you need to get it so you need a repository but to have a repository you have to have an entity as an aggregate root. What you do is carefully consider the concepts. If someone tells you that repository is wrong, but you know that you need it and whatever they may say it is, your repository system is valid and preserves the key concepts, you keep the repository as is instead of warping your model to satisfy dogma.
Now in this case assuming I am correct about what ProcessType is, as the other commentor noted it is in fact a Value Object. You say it cannot be a Value Object. That could be for several reasons. Maybe you say that because you use NHibernate for example, but the NHibernate model for implementing value objects in the same table as another object does not work. So your ProcessType requires an identity column and field. Often because of database considerations the only practical implementation is to have value objects with ids in their own table. Or maybe you say that because each Process points to a single ProcessType by reference.
It does not matter. It is a Value Object because of the concept. If you have 10 Process objects that are of the same ProcessType, you have 10 Process.ProcessType members and values. Whether each Process.ProcessType points to a single reference, or each got a copy, they should still by definition all be exactly the same thing and all be completely interchangeable with any of the other 10. THAT is what makes it a Value Object. The person who says "It has an Id, therefore it cannot be a Value Object, you have an entity" is making a dogmatic error. Don't make the same error: if you need an Id field, give it one, but don't say "it can't be a Value Object" when it in fact is, albeit one that for other reasons you had to give an Id to.
So how do you get this one right and wrong? ProcessType is a Value Object, but for some reason you need it to have an Id. The Id per se does not violate the rules. You get it right by having 10 Processes that all have a ProcessType that is exactly the same. Maybe each has a local deep copy, maybe they all point to one object, but each is identical either way; ergo each has an Id = 2, for example. You get it wrong when you do this: 10 Processes each have a ProcessType, and this ProcessType is identical and completely interchangeable EXCEPT now each also has its own unique Id as well. Now you have 10 instances of the same thing but they vary only in Id, and will always vary only in Id. Now you no longer have a Value Object, not because you gave it an Id, but because you gave it an Id with an implementation that reflects the nature of an entity - each instance is unique and different.
Make sense?
Look, I think you have to restructure your model. Use ProcessType as a Value Object and Process as the Aggregate Root.
This way every Process has a ProcessType:
public class Process
{
    public Process()
    {
    }

    public ProcessType ProcessType { get; }
}
For this you just need one aggregate root, not two.
After much reading and thinking as I begin to get my head wrapped around DDD, I am a bit confused about the best practices for dealing with complex hierarchies under an aggregate root. I think this is a FAQ but after reading countless examples and discussions, no one is quite talking about the issue I'm seeing.
If I am aligned with the DDD thinking, entities below the aggregate root should be immutable. This is the crux of my trouble, so if that isn't correct, that is why I'm lost.
Here is a fabricated example...hope it holds enough water to discuss.
Consider an automobile insurance policy (I'm not in insurance, but this matches the language I hear when on the phone w/ my insurance company).
Policy is clearly an entity. Within the policy, let's say we have Auto. Auto, for the sake of this example, only exists within a policy (maybe you could transfer an Auto to another policy, so this has the potential to be an aggregate as well, which changes Policy... but assume it's simpler than that for now). Since an Auto cannot exist without a Policy, I think it should be an Entity but not a root. So Policy in this case is an aggregate root.
Now, to create a Policy, let's assume it has to have at least one auto. This is where I get frustrated. Assume Auto is fairly complex, including many fields and maybe a child for where it is garaged (a Location). If I understand correctly, a "create Policy" constructor/factory would have to take as input an Auto or be restricted via a builder to not be created without this Auto. And the Auto's creation, since it is an entity, can't be done beforehand (because it is immutable? maybe this is just an incorrect interpretation). So you don't get to say new Auto and then setX, setY, add(Z).
If Auto is more than somewhat trivial, you end up having to build a huge hierarchy of builders and such to try to manage creating an Auto within the context of the Policy.
One more twist to this is later, after the Policy is created and one wishes to add another Auto...or update an existing Auto. Clearly, the Policy controls this...fine...but Policy.addAuto() won't quite fly because one can't just pass in a new Auto (right!?). Examples say things like Policy.addAuto(VIN, make, model, etc.) but are all so simple that that looks reasonable. But if this factory method approach falls apart with too many parameters (the entire Auto interface, conceivably) I need a solution.
From that point in my thinking, I'm realizing that having a transient reference to an entity is OK. So, maybe it is fine to have a entity created outside of its parent within the aggregate in a transient environment, so maybe it is OK to say something like:
auto = AutoFactory.createAuto();
auto.setX
auto.setY
or if sticking to immutability, AutoBuilder.new().setX().setY().build()
and then have it get sorted out when you say Policy.addAuto(auto)
This insurance example gets more interesting if you add Events, such as an Accident with its PolicyReports or RepairEstimates...some value objects but most entities that are all really meaningless outside the policy...at least for my simple example.
The lifecycle of Policy with its growing hierarchy over time seems the fundamental picture I must draw before really starting to dig in...and it is more the factory concept or how the child entities get built/attached to an aggregate root that I haven't seen a solid example of.
I think I'm close. Hope this is clear and not just a repeat FAQ that has answers all over the place.
Aggregate Roots exist for the purpose of transactional consistency.
Technically, all you have are Value Objects and Entities.
The difference between the two is immutability and identity.
A Value Object should be immutable and its identity is the sum of its data.
Money // A value object
{
string Currency;
long Value;
}
Two Money objects are equal if they have equal Currency and equal Value. Therefore, you could swap one for the other and conceptually, it would be as if you had the same Money.
An Entity is an object with mutability over time, but whose identity is immutable throughout its lifetime.
Person // An entity
{
PersonId Id; // An immutable Value Object storing the Person's unique identity
string Name;
string Email;
int Age;
}
So when and why do you have Aggregate Roots?
Aggregate Roots are specialized Entities whose job is to group a set of domain concepts under one transactional scope for purpose of data change only. That is, say a Person has Legs. You would need to ask yourself, should changes on Legs and changes on Person be grouped together under a single transaction? Or can I change one separately from the other?
Person // An entity
{
PersonId Id;
string Name;
string Ethnicity;
int Age;
Pair<Leg> Legs;
}
Leg // An entity
{
LegId Id;
string Color;
HairAmount HairAmount; // none, low, medium, high, chewbacca
int Length;
int Strength;
}
If Leg can be changed by itself, and Person can be changed by itself, then they are both Aggregate Roots. If Leg cannot be changed alone, and Person must always be involved in the transaction, then Leg should be composed inside the Person entity. At which point, you would have to go through Person to change Leg.
This decision will depend on the domain you are modeling:
Maybe the Person is the sole authority on his legs; they grow longer and stronger based on his age, the color changes according to his ethnicity, etc. These are invariants, and Person will be responsible for making sure they are maintained. If someone else wants to change this Person's legs, say you want to shave his legs, you'd have to ask him to either shave them himself, or hand them to you temporarily for you to shave.
Or you might be in the domain of archeology. Here you find Legs, and you can manipulate the Legs independently. At some point, you might find a complete body and guess who this person was historically; now you have a Person, but the Person has no say in what you'll do with the Legs you found, even if it was shown they were his Legs. The color of the Leg changes based on how much restoration you've applied to it, or other things. These invariants would be maintained by another Entity; this time it won't be Person, but maybe Archaeologist instead.
TO ANSWER YOUR QUESTION:
I keep hearing you talk about Auto, so that's obviously an important concept of your domain. Is it an entity or a value object? Does it matter if the Auto is the one with serial #XYZ, or are you only interested in brand, colour, year, model, make, etc.? Say you care about the exact identity of the Auto and not just its features; then it would need to be an Entity of your domain. Now, you talk about Policy; a policy dictates what is covered and not covered on an Auto. This depends on the Auto itself, and probably the Customer too, since based on his driving history, the type and year and what not of Auto he has, his Policy might be different.
So I can already conceive having:
Auto : Entity, IAggregateRoot
{
    AutoId Id;
    string Serial;
    int Year;
    Colour Colour;
    string Model;
    bool IsAtGarage;
    Garage Garage;
}
Customer : Entity, IAggregateRoot
{
CustomerId Id;
string Name;
DateTime DateOfBirth;
}
Policy : Entity, IAggregateRoot
{
string Id;
CustomerId customer;
AutoId[] autos;
}
Garage : IValueObject
{
string Name;
string Address;
string PhoneNumber;
}
Now, the way you make it sound, you can change a Policy without having to change an Auto and a Customer together. You say things like, what if the Auto is at the garage, or we transfer an Auto from one Policy to another. This makes me feel like Auto is its own Aggregate Root, and so is Policy and so is Customer. Why is that? Because it sounds like, in the usage of your domain, you would change an Auto's garage without caring that the Policy be changed with it. That is, if someone changes an Auto's Garage and IsAtGarage state, you don't need the Policy to change in the same transaction. I'm not sure if I'm being clear: you wouldn't want to change the Customer's Name and DateOfBirth in a non-transactional way, because maybe you change his name but it fails to change the date, and now you have a corrupt customer whose date of birth doesn't match his name. On the other hand, it's fine to change the Auto without changing the Policy. Because of this, Auto should not be in the aggregate of Policy. Effectively, Auto is not a part of Policy, but only something that the Policy keeps track of and might use.
Now we see that it then totally make sense that you are able to create an Auto on it's own, as it is an Aggregate Root. Similarly, you can create Customers by themselves. And when you create a Policy, you simply must link it to a corresponding Customer and his Autos.
aCustomer = Customer.Make(...);
anAuto = Auto.Make(...);
anotherAuto = Auto.Make(...);
aPolicy = Policy.Make(aCustomer, { anAuto, anotherAuto }, ...);
Now, in my example, Garage isn't an Aggregate Root. This is because it doesn't seem to be something that the domain directly works with. It is always used through an Auto. This makes sense: insurance companies don't own garages; they aren't in the business of garages. You wouldn't ever need to create a Garage that existed on its own. It's easy then to have an anAuto.SentToGarage(name, address, phoneNumber) method on Auto which creates a Garage and assigns it to the Auto. You wouldn't delete a Garage on its own. You would do anAuto.LeftGarage() instead.
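Roughly, those methods might look like this (a sketch; it assumes Garage gets a constructor taking its three fields):

public class Auto
{
    public Garage Garage { get; private set; }
    public bool IsAtGarage => Garage != null;

    // The Garage value object only ever comes into existence through the Auto.
    public void SentToGarage(string name, string address, string phoneNumber)
    {
        Garage = new Garage(name, address, phoneNumber);
    }

    public void LeftGarage()
    {
        Garage = null;
    }
}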
entities below the aggregate root should be immutable.
No. Value objects are supposed to be immutable. Entities can change their state.
Just need to make sure You do proper encapsulation:
entities modify themselves
entities are modified through aggregate root only
but Policy.addAuto() won't quite fly because one can't just pass in a new Auto (right!?)
Usually it's supposed to be so. The problem is that the auto creation task might become way too large. If You are lucky and, knowing that entities can be modified, are able to divide it smoothly into smaller tasks like SpecifyEngine, the problem is resolved.
However, "real world" does not work that way and I feel Your pain.
I had a case where the user uploads 18 Excel sheets' worth of data (with an additional fancy rule - it should be "imported" no matter how invalid the data is (as I say - that's like saying true==false)). This upload process is considered one atomic operation.
What I do in this case...
First of all - I have an Excel document object model, mappings (e.g. Customer.Name == 1st sheet, "C24") and readers that fill the DOM. Those things live in infrastructure, far far away.
Next thing - entities and value objects in my domain that look similar to the DOM DTOs, but only the projection I'm interested in, with proper data types and corresponding validation. Plus, I have a 1:1 association in my domain model that isolates the dirty mess (luckily enough, it kind of fits with the ubiquitous language).
Armed with that - there's still one tricky part left - the mapping from Excel DOM DTOs to domain objects. This is where I sacrifice encapsulation - I construct the entity with its value objects from outside. My thought process is kind of simple - this overexposed entity can't be persisted anyway, and validness can still be enforced (through constructors). It lives underneath the aggregate root.
Basically - this is the part where You can't run away from CRUDiness.
Sometimes the application is just editing a bunch of data.
P.S. I'm not sure that I'm doing the right thing. It's likely I've missed something important on this issue. Hopefully there will be some insight from other answerers.
Part of my answer seems to be captured in these posts:
Domain Driven Design - Parent child relation pattern - Specification pattern
Best practice for Handling NHibernate parent-child collections
how should i add an object into a collection maintained by aggregate root
To summarize:
It is OK to create an entity outside its aggregate if it can manage its own consistency (you may still use a factory for it). So having a transient reference to Auto is OK and then a new Policy(Auto) is how to get it into the aggregate. This would mean building up "temporary" graphs to get the details spread out a bit (not all piled into one factory method or constructor).
I'm seeing my alternatives as either:
(a) Build a DTO or other anemic graph first and then pass it to a factory to get the aggregate built.
Something like:
autoDto = new AutoDto();
autoDto.setVin(..);
autoDto.setEtc...
autoDto.setGaragedLocation(new Location(..));
autoDto.addDriver(...);
Policy policy = PolicyFactory.getInstance().createPolicy(x, y, autoDto);
auto1Dto...
policy.addAuto(auto1Dto);
(b) Use builders (potentially compound):
builder = PolicyBuilder.newInstance();
builder = builder.setX(..).setY(..);
builder = builder.addAuto(vin, new Driver()).setGaragedLocation(new Location());
Policy = builder.build();
// and how would update work if have to protect the creation of Auto instances?
auto1 = AutoBuilder.newInstance(policy, vin, new Driver()).build();
policy.addAuto(auto1);
As this thing twists around and around a couple things seem clear.
In the spirit of ubiquitous language, it makes sense to be able to say:
policy.addAuto
and
policy.updateAuto
The arguments to these, and how the aggregate and the entity creation semantics are managed, are not quite clear, but having to look at a factory to understand the domain seems a bit forced.
Even if Policy is an aggregate and manages how things are put together beneath it, the rules about how an Auto looks seem to belong to Auto or its factory (with some exceptions for where Policy is involved).
Since Policy is invalid without a minimally constructed set of children, those children need to be created prior to or within its creation.
And that last statement is the crux. It looks like for the most part these posts handle the creation of children as separate affairs and then glue them. The pure DDD approach would seem to argue that Policy has to create Autos but the details of that spin wildly out of control in non-trivial cases.