I'm struggling with the following domain design that seems not to fit within the concepts of DDD as I understand them.
On one hand, I have the Device->Sensor->Measurement hierarchy, modeled as an Aggregate with Device as the root, Sensor as an Entity, and Measurement as a VO. So far so good.
Now, each Device has a type, as does the Sensor. At the same time, each Measurement refers to a variable being measured (e.g. temperature). Here is where things break apart.
I initially modeled types as Value Objects, but there's a limited set of types and many devices and sensors share the same types.
I then decided to model them as an aggregate, following a structure similar to the device aggregate: DeviceType->SensorType->Variable. However, this doesn't work, as Sensors may reference SensorType and Measurements may reference Variable, breaking the rule that only the root of an aggregate may be referenced from inside another aggregate. Besides, it may happen that more than one DeviceType includes the same type of sensor (e.g. a battery charge sensor) and also that more than one Sensor measures the same variable (e.g. battery charge level).
This leads me to having each of those entities (DeviceType, SensorType, Variable) as independent entities, each in its own (degenerate) aggregate.
My concrete question is: have I correctly interpreted the notions of Aggregate, Entity, and VO, or is having such anemic aggregates with solely the root an anti-pattern?
There are no hard and fast rules in modeling, you should do whatever fits your use case best. That being said, aggregates are mainly used for maintaining invariants across a group of entities. I don't see any such constraints between DeviceType, SensorType and Variable, hence I don't see any reason to put them into an aggregate. It should be fine to keep them as independent entities (or even value objects).
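For illustration, a minimal sketch of that option (hypothetical names, not your actual model): the type lives on its own, and the Sensor entity inside the Device aggregate holds only its identifier.

using System;

// Hypothetical sketch: SensorType lives outside the Device aggregate and is
// referenced by id from the Sensor entity inside it.
public sealed record SensorTypeId(Guid Value);

public sealed class SensorType            // standalone entity (or even a value object)
{
    public SensorTypeId Id { get; }
    public string Name { get; }
    public SensorType(SensorTypeId id, string name) { Id = id; Name = name; }
}

public sealed class Sensor                // entity inside the Device aggregate
{
    public int LocalId { get; }            // unique only within its Device
    public SensorTypeId TypeId { get; }    // reference by id, not by object
    public Sensor(int localId, SensorTypeId typeId) { LocalId = localId; TypeId = typeId; }
}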
Related
I've been doing a lot of research on DDD and have been stumbling a bit on the concept of Aggregates vs Aggregate Roots.
Say I'm trying to model a space game that exists in a strict hierarchy, with a Sector Entity that wraps around everything contained within it. It can have a collection of PlanetarySystem, each of those can have a collection of Star, the stars can have a collection of Planet, which themselves can have a collection of Moon.
I want each of these objects either to be associated with that Sector, situated nicely in the hierarchy, or to be associated with no sector at all. Each item can have a 0 or 1 relationship with the object in the hierarchy above it... floating in the ether if need be. I want to be able to delete a Planet without deleting all the Moon entities it has, but it can be done optionally.
Am I wrong to think that these should each be their own Aggregate Root in this scenario, with references to the Aggregate Root instances that they contain? Or should the Sector be the actual Aggregate Root, orchestrating everything as the object at the top of the hierarchy?
An aggregate root should never contain a reference to another aggregate root. You should use either an id to reference an associated aggregate or a value object containing the id and some other pertinent data.
Anything that is floating in the ether would be an aggregate root since it has a lifecycle of its own.
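For example, a rough sketch of what that id-based reference could look like (hypothetical types):

using System;

// Sketch (hypothetical types): a Moon aggregate root referencing its optional
// parent Planet by id instead of holding an object reference to it.
public sealed record PlanetId(Guid Value);

public sealed class Moon
{
    public Guid Id { get; }
    public PlanetId? ParentPlanetId { get; private set; }   // the 0..1 relationship

    public Moon(Guid id, PlanetId? parentPlanetId = null)
    {
        Id = id;
        ParentPlanetId = parentPlanetId;
    }

    // Deleting the Planet does not delete the Moon; it just floats in the ether.
    public void DetachFromPlanet() => ParentPlanetId = null;
}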
I reckon the hierarchy should just work in a unidirectional manner as far as the domain is concerned. If you need to query lower level items based on higher level containment then you either need joins or need to denormalize the relevant higher level identifiers/data into the lower level items.
My advice would be to attempt to keep any aggregate a single level deep, but if that isn't practical then try something deeper; it gets unwieldy rather quickly, though.
Remember something can be an Aggregate in one use case and an Aggregate Root in another. If your use case is AddPlanetarySystemToSector, then Sector is your Aggregate Root. If your use case is AddMoonToPlanet, then Planet is the AR.
The difference is that an AR should not be saved unless it and all Aggregates it owns satisfy all business rules. The higher level your AR, the more business rules you must test and satisfy. Therefore your use cases should be as specific as possible and target the proper AR.
If this poses problems, a workaround is to have ARs own value objects instead of aggregates. So a Sector would own PlanetarySystem VOs. If the Sector needs to modify a PlanetarySystem, it can launch a PlanetarySystem use case that performs the proper action.
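A rough sketch of that workaround (hypothetical types), where the Sector owns only lightweight value objects that carry the id plus some pertinent data:

using System;
using System.Collections.Generic;

// Sketch (hypothetical types): Sector owns small VOs referencing each
// PlanetarySystem, not the PlanetarySystem aggregates themselves.
public sealed record PlanetarySystemRef(Guid PlanetarySystemId, string Name);

public sealed class Sector
{
    private readonly List<PlanetarySystemRef> _systems = new();

    public Guid Id { get; }
    public IReadOnlyList<PlanetarySystemRef> Systems => _systems;

    public Sector(Guid id) { Id = id; }

    public void AddPlanetarySystem(PlanetarySystemRef system)
    {
        // Only Sector-level rules are checked here; changes inside a
        // PlanetarySystem go through that aggregate's own use case.
        _systems.Add(system);
    }
}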
No, it is not a duplicate question.
I have read many sources on the subject, but I still feel like I don't fully understand it.
This is the information I have so far (from multiple sources, be it articles, videos, etc...) about what is an Aggregate and Aggregate Root:
An Aggregate is a collection of multiple Value Object/Entity references and rules.
An Aggregate is always a command model (meant to change business state).
An Aggregate represents a single unit of (database - because essentially the changes will be persisted) work, meaning it has to be consistent.
The Aggregate Root is the interface to the external world.
An Aggregate Root must have a globally unique identifier within the system.
DDD suggests having a Repository per Aggregate Root.
A simple object from an aggregate can't be changed without its AR (Aggregate Root) knowing it.
So with all that in mind, let's get to the part where I get confused:
On this site it says:
The Aggregate Root is the interface to the external world. All interaction with an Aggregate is via the Aggregate Root. As such, an Aggregate Root MUST have a globally unique identifier within the system. Other Entities that are present in the Aggregate but are not Aggregate Roots require only a locally unique identifier, that is, an Id that is unique within the Aggregate.
But then, in this example I can see that an Aggregate Root is implemented by a static class called Transfer that acts as an Aggregate and a static function inside called TransferedRegistered that acts as an AR.
So the questions are:
How can it be that the function is an AR, if there must be a globally unique identifier to it, and there isn't, the reason being that it's a function? What does have a globally unique identifier is the Domain Event that this function produces.
Following question - what does an Aggregate Root look like in code? Is it the event? Is it the entity that is returned? Is it the function of the Aggregate class itself?
In the case that the Domain Event that the function returns is the AR (as stated, it has to have that globally unique identifier), how can we interact with this Aggregate? The first article clearly stated that all interaction with an Aggregate is via the AR; if the AR is an event, then we can do nothing but react to it.
Is it right to say that the aggregate has two main jobs:
Apply the needed changes based on the input it received and rules it knows
Return the data that needs to be persisted from the AR and/or raised in a Domain Event from the AR
Please correct me on any of the bullet points at the beginning if some/all of them are wrong in some way or another, and feel free to add more if I have missed any!
Thanks for clearing things up!
I feel like I don't fully understand it.
That's not your fault. The literature sucks.
As best I can tell, the core ideas of implementing solutions using domain driven design came out of the world of Java circa 2003. So the patterns described by Evans in chapters 5 and 6 of the blue book were understood to be object oriented (in the Java sense) domain modeling done right.
Chapter 6, which discusses the aggregate pattern, is specifically about life cycle management: how do you create new entities in the domain model, how does the application find the right entity to interact with, and so on.
And so we have Factories, that allow you to create instances of domain entities, and Repositories, that provide an abstraction for retrieving a reference to a domain entity.
But there's a third riddle, which is this: what happens when you have some rule in your domain that requires synchronization between two entities in the domain? If you allow applications to talk to the entities in an uncoordinated fashion, then you may end up with inconsistencies in the data.
So the aggregate pattern is an answer to that; we organize the coordinated entities into graphs. With respect to change (and storage), the graph of entities becomes a single unit that the application is allowed to interact with.
The notion of the aggregate root is that the interface between the application and the graph should be one of the members of the graph. So the application shares information with the root entity, and then the root entity shares that information with the other members of the aggregate.
The aggregate root, being the entry point into the aggregate, plays the role of a coarse grained lock, ensuring that all of the changes to the aggregate members happen together.
It's not entirely wrong to think of this as a form of encapsulation -- to the application, the aggregate looks like a single entity (the root), with the rest of the complexity of the aggregate being hidden from view.
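To make that concrete, here is a small sketch (my own generic example, not something from the book) of an aggregate where the root is the only entry point and enforces the invariant for the whole graph:

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: the application only talks to Order, the root; OrderLine instances
// are changed exclusively through it, so the invariant over the whole graph
// is enforced in one place.
public sealed class OrderLine
{
    public int LineNumber { get; }            // locally unique within its Order
    public decimal Amount { get; }
    internal OrderLine(int lineNumber, decimal amount) { LineNumber = lineNumber; Amount = amount; }
}

public sealed class Order
{
    private readonly List<OrderLine> _lines = new();

    public Guid Id { get; }
    public decimal CreditLimit { get; }

    public Order(Guid id, decimal creditLimit) { Id = id; CreditLimit = creditLimit; }

    public void AddLine(decimal amount)
    {
        if (_lines.Sum(l => l.Amount) + amount > CreditLimit)
            throw new InvalidOperationException("Order total would exceed the credit limit.");
        _lines.Add(new OrderLine(_lines.Count + 1, amount));
    }
}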
Now, over the past 15 years, there's been some semantic drift; people have tried to adapt the pattern in ways that better fit their problems, or better fit their preferred designs. So you have to exercise some care in deciding how to translate the labels that they are using.
In simple terms an aggregate root (AR) is an entity that has a life-cycle of its own. To me this is the most important point. One AR cannot contain another AR but can reference it by Id or some value object (VO) containing at least the Id of the referenced AR. I tend to prefer to have an AR contain only other VOs instead of entities (YMMV). To this end the AR is responsible for consistency and invariants w.r.t. the AR. Each VO can have its own invariants, such as an EMailAddress requiring a valid e-mail format. Even if one were to call the contained classes entities I would call that semantics, since one could get the same thing done with a VO. A repository is responsible for AR persistence.
The example implementation you linked to is not something I would do or recommend. I followed some of the comments and I too, as one commenter alluded to, would rather use a domain service to perform something like a Transfer between two accounts. The registration of the transfer is not something that may necessarily be permitted and, as such, the domain service would be required to ensure the validity of the transfer. In fact, the registration of a transfer request would probably be a Journal in an accounting sense as that is my experience. Once the journal is approved it may attempt the actual transfer.
At some point in my DDD journey I thought that there had to be something wrong, since it shouldn't be so difficult to understand aggregates. There are many opinions and interpretations w.r.t. DDD and aggregates, which is why it can get confusing. The other aspect is, IMHO, that there is a fair amount of design involved that requires some creativity and which is based on an understanding of the domain itself. Creativity cannot be taught and design falls into the realm of tacit knowledge. The popular example of tacit knowledge is learning to ride a bike. Now, we can read all we want about how to ride a bike and it may or may not help much. Once we are on the bike and we teach ourselves to balance, then we can make progress. Then there are people who end up doing absolutely crazy things on a bike, and even if I read how to, I don't think I'll try :)
Keep practicing and modelling until it starts to make sense or until you feel comfortable with the model. If I recall correctly Eric Evans mentions in the Blue Book that it may take a couple of designs to get the model closer to what we need.
Keep in mind that Mike Mogosanu is using an event sourcing approach, but in any case (without ES) his approach is very good for avoiding unwanted artifacts in mainstream OOP languages.
How can it be that the function is an AR, if there must be a globally unique identifier to it, and there isn't, the reason being that it's a function? What does have a globally unique identifier is the Domain Event that this function produces.
TransferNumber acts as a natural unique ID; there is also a GUID to avoid the need for a full Value Object in some cases.
There is no unique ID state in the computer's memory because it is an argument, but think about it: why do you want a globally unique ID? It is just to locate the root element and its (non-uniquely identified) children for persistence purposes (find, modify or delete it).
Order A has 2 order lines (1 and 2) while Order B has 4 order lines (1, 2, 3, 4); the unique identifier of an order line is a composition of its own ID and the Order ID: A1, B3, etc. It is just like relational schemas in relational databases.
So you need that ID just for persistence, and the element that goes to persistence is a domain event expressing the changes: all the changes needed to keep consistency. So if you persist the domain event, using the globally unique ID to find in persistence what you have to modify, the system will be in a consistent state.
You could do
var newTransfer = new Transfer(transferNumber); // newTransfer is now an AR with a globally unique ID
var changes = newTransfer.RegisterTransfer(debit, credit);
persistence.ApplyChanges(changes);
but what is the point of instantiating an object to create state in the computer's memory if you are not going to do more than one thing with this object? It is pointless, and most OOP detractors use this kind of bad OOP design to criticize OOP and lean towards functional programming.
Following question - what does an Aggregate Root look like in code? Is it the event? Is it the entity that is returned? Is it the function of the Aggregate class itself?
It is the function itself. You can read in the post:
AR is a role, and the function is the implementation.
An Aggregate represents a single unit of work, meaning it has to be consistent. You can see how the function honors this. It is a single unit of work that keeps the system in a consistent state.
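For illustration only, a minimal sketch in that spirit (my own hypothetical names, not the code from the post): the static function plays the AR role, validates the change as one unit of work, and returns a domain event describing it.

using System;

// Sketch: no aggregate state lives in memory; the function validates and
// returns the event to be persisted.
public sealed record Debit(Guid AccountId, decimal Amount);
public sealed record Credit(Guid AccountId, decimal Amount);
public sealed record TransferRegistered(Guid TransferNumber, Debit Debit, Credit Credit);

public static class Transfer
{
    public static TransferRegistered RegisterTransfer(Guid transferNumber, Debit debit, Credit credit)
    {
        if (debit.Amount != credit.Amount)
            throw new InvalidOperationException("Debit and credit must balance.");
        return new TransferRegistered(transferNumber, debit, credit);
    }
}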
In the case that the Domain Event that the function returns is the AR (as stated, it has to have that globally unique identifier), how can we interact with this Aggregate? The first article clearly stated that all interaction with an Aggregate is via the AR; if the AR is an event, then we can do nothing but react to it.
Answered above because the domain event is not the AR.
Is it right to say that the aggregate has two main jobs: apply the needed changes based on the input it received and the rules it knows, and return the data that needs to be persisted from the AR and/or raised in a Domain Event from the AR?
Yes; again, you can see how the static function honors this.
You could try to contact Mike Mogosanu. I am sure he could explain his approach better than I can.
I'm struggling with how/if to define "a set of aggregates". Aggregates are supposed to be standalone and isolated, but it's easy to think of a bigger set of aggregates that belong together. But is this a trap?
Using this "set of aggregates" it would be possible to for instance enumerate and index aggregates on a unique property within the set and have other domain rules that could be validated across all aggregates in the set. It's tempting but also feels a bit wrong.
Another approach would be to avoid this thinking completely, not allow/define a set of aggregates, and not allow enumerating aggregates but only load/save on aggregate-id. Using this option it would be necessary to reference aggregates from other aggregates and, by doing this, build up an interconnected graph of aggregates.
The approaches are similar to having aggregates in a folder on disk versus having an "internet" of aggregates where the references between them define the bigger set of aggregates. In any case I'm really stuck on this problem. I have never read anywhere about this and I guess nobody really cares that much? I'm not sure I'm explaining this very well, but my question is whether there are any definitions of the "set of aggregates", or whether we should think of aggregates as totally isolated/on their own, with only a unique aggregate-id (UUID)?
The set of aggregates could for instance be the database being used under the surface. But what I'm wondering is whether this database, as in the information about which aggregates it contains, has any definition in DDD, or whether we should think about a set of aggregates as an interconnected graph where only traversal of this graph can be used to enumerate all "associated" aggregates.
Aggregates are connected
In any application with sufficient complexity, Aggregates end up referencing one-another. And it is perfectly reasonable to use their unique identifiers as reference IDs to refer to each other.
But take care to load and persist aggregates outside the domain layer, typically in repositories. If you want to traverse links across aggregates and load them into memory, you will be doing that upfront before handing over control to the domain layer for the actual processing.
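For instance, a sketch of that division of labour (hypothetical types and repositories): the application service resolves the id-based link between two aggregates upfront, then hands plain objects to the domain.

using System;
using System.Threading.Tasks;

// Sketch: repositories resolve the cross-aggregate reference before the
// domain layer does its work.
public sealed class Customer { public Guid Id { get; init; } public bool IsBlocked { get; init; } }

public sealed class Order
{
    public Guid Id { get; init; }
    public Guid CustomerId { get; init; }      // reference to another aggregate by id
    public bool Confirmed { get; private set; }

    public void Confirm(Customer customer)
    {
        if (customer.IsBlocked) throw new InvalidOperationException("Customer is blocked.");
        Confirmed = true;
    }
}

public interface IOrderRepository { Task<Order> Get(Guid id); Task Save(Order order); }
public interface ICustomerRepository { Task<Customer> Get(Guid id); }

public sealed class ConfirmOrderHandler
{
    private readonly IOrderRepository _orders;
    private readonly ICustomerRepository _customers;

    public ConfirmOrderHandler(IOrderRepository orders, ICustomerRepository customers)
    {
        _orders = orders;
        _customers = customers;
    }

    public async Task Handle(Guid orderId)
    {
        var order = await _orders.Get(orderId);
        var customer = await _customers.Get(order.CustomerId);  // traverse the link upfront
        order.Confirm(customer);                                 // domain logic sees only loaded objects
        await _orders.Save(order);
    }
}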
Traversing the graph to get all related aggregates is correct, but this rarely spans across too many aggregate boundaries. You rarely find a single change or rule to be applied throughout the application. If you do have such a transaction, it is probably a sign of the domain design needing improvement, simply because you are spreading one responsibility/change amongst many aggregates.
The connectivity is so usual that you should watch out for aggregates that have no linkages with the rest of the system. They are either standalone libraries, or they probably belong to a different bounded context.
Aggregates can morph into different forms
They are aggregates because they form a clear invariant boundary, with their primary responsibility being to enforce invariants across state changes for all the entities within themselves. But they can morph into different kinds of DDD objects based on the requirement.
A good example is a single currency note. In most applications, notes are value objects. But for the federal bank, they are aggregates with clear-cut invariant rules. They are aggregates when they are created and referenced, but in a transaction that ships printed notes to banks, they may become value objects.
So you may have to evaluate whether you are talking about a domain entity in its aggregate form, or as a value object when you consider each linkage.
Aggregates are invariant boundaries
It is wrong to validate domain rules across aggregates.
Your aggregate boundary is an invariant boundary, meaning all the domain rules within it should be satisfied at all times. By that logic, you would be incorrectly building a structure that needs to ensure that all domain rules across aggregates are valid at all times. Doing so would impose a considerable performance burden, not to mention complexity in the business logic.
But this is not to say that there can't be domain rules that span aggregates. The correct way to accomplish this would be using eventual consistency and an event-driven approach.
The primary changing aggregate would validate and persist the data, and bubble up an event containing the state change. Other aggregates would then act on the event and bring themselves up-to-date. If an aggregate's domain rules break because of the change, there is usually a supplementary mechanism that allows correction of the problem (a preferred mechanism) or a rollback of the first state change (happens very rarely).
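As a rough sketch of that event-driven flow (hypothetical types): one aggregate publishes an event about its own, already persisted, state change, and another aggregate catches up later in its own transaction.

using System;

// Sketch: eventual consistency between two aggregates via a domain event.
public sealed record OrderShipped(Guid OrderId, Guid CustomerId);

public sealed class CustomerLoyalty           // a different aggregate
{
    public Guid CustomerId { get; }
    public int Points { get; private set; }

    public CustomerLoyalty(Guid customerId, int points) { CustomerId = customerId; Points = points; }

    // Invoked by an event handler, eventually consistent with the Order aggregate.
    public void When(OrderShipped e)
    {
        if (e.CustomerId == CustomerId)
            Points += 10;   // bring this aggregate up to date with the change
    }
}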
Perhaps you can find a common denominator among the aggregate sets and use that to work with?
A simplified example: there is a set of Books and a set of Users that have nothing in common except that you want to know when they were first registered. One option would be to have an interface FirstRegistration and then choose either to expand Books/Users or to create a specific entity instead.
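Something along these lines (a sketch with hypothetical members), where the otherwise unrelated aggregates share only the one small contract they have in common:

using System;

// Sketch: Books and Users share nothing but the registration timestamp contract.
public interface IFirstRegistration
{
    DateTime FirstRegisteredAt { get; }
}

public sealed class Book : IFirstRegistration
{
    public Guid Id { get; init; }
    public DateTime FirstRegisteredAt { get; init; }
}

public sealed class User : IFirstRegistration
{
    public Guid Id { get; init; }
    public DateTime FirstRegisteredAt { get; init; }
}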
I'm struggling with how/if to define "a set of aggregates". Aggregates are supposed to be standalone and isolated, but it's easy to think of a bigger set of aggregates that belong together. But is this a trap?
I think you're struggling because indeed the idea of a set of aggregates (instances) is very generic, and the uses of such things are contextual and domain-specific. People don't talk specifically about it because of course you may have behaviors that operate on a collection of multiple aggregates, but that doesn't give such collections any particular common properties or requirements that would allow you, from a general DDD perspective, to characterize such collections more specifically than "a set of aggregates", "a list of distinct aggregates", or similar.
Using this "set of aggregates" it would be possible to for instance
enumerate and index aggregates on a unique property within the set and
have other domain rules that could be validated across all aggregates
in the set. It's tempting but also feels a bit wrong.
Tempting why? You've couched the question in very abstract terms, so it's pretty much impossible to contradict you about the "it would be possible", but just because something may be possible doesn't mean it would be useful. In practice, I think you'll find that rules or behaviors that operate on collections of aggregates most naturally belong not to collections of aggregates in an abstract sense, but rather to other aggregate types in your domain model, to domain repositories, or to domain services.
It is entirely plausible that your domain model might want to handle particular sets of aggregates characterized by some rule. For example, if you're an airline, then one of the aggregates in your domain model might be a single seat on a flight, since that's the unit you sell. It makes sense in that case that there would be operations on all the seats on a particular flight, for example, but whatever rules and behaviors you might have about that are specifically about that kind of aggregate, selected in that particular way.
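In code, that might look no more special than a repository query (a sketch with hypothetical types): the "set of aggregates" is simply whatever the query for one flight returns, and the rules belong to the Seat aggregate and its repository.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sketch: a set of Seat aggregates selected by flight.
public sealed class Seat
{
    public Guid Id { get; init; }
    public string FlightNumber { get; init; } = "";
    public bool Blocked { get; private set; }

    public void Block() => Blocked = true;    // e.g. when the flight is cancelled
}

public interface ISeatRepository
{
    Task<IReadOnlyList<Seat>> FindByFlight(string flightNumber);
    Task Save(Seat seat);
}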
Another approach would be to avoid this thinking completely, not allow/define a set of aggregates, and not allow enumerating aggregates but only load/save on aggregate-id.
It's surely counterproductive to forbid working with sets of aggregates. Just don't attribute more significance to it than is warranted. There is nothing particularly special about sets of aggregates in general.
Using this option it would be necessary to reference aggregates from other aggregates and, by doing this, build up an interconnected graph of aggregates.
I don't follow that. One certainly must be able to retrieve and store individual aggregates from persistence, as that's more or less the defining property of aggregates -- they are the unit of persistence. But that doesn't mean that you must reject the ability to work with collections of aggregates. However, sets of aggregates do not have identity in the same way that individual aggregates do, so yes, relationships between aggregates need to be modeled in terms of individual aggregates. Nevertheless, that does not inherently preclude 1:m or n:m relationships among aggregates.
I'm really stuck on this problem. I have never read anywhere about this and I guess nobody really cares that much?
You'll find all sorts of uses of various sets of aggregates in applications built and maintained based on DDD ideas, but there's not much to talk about at the level of abstraction of your question, and what there is is already summed up in the words "set" and "aggregate".
The set of aggregates could for instance be the database being used under the surface. But what I'm wondering is whether this database, as in the information about which aggregates it contains, has any definition in DDD
Not to my knowledge. I suspect most DDD practitioners would just call it "the data", or something similar.
or whether we should think about a set of aggregates as an interconnected graph where only traversal of this graph can be used to enumerate all "associated" aggregates.
I'm still not seeing why you set that up as a thing. Sure, depending on the domain model, you might be able to traverse all or substantial chunks of the data by traversing associations between aggregates, and that might be appropriate for some purposes, but DDD doesn't have to give a special name or special rules for sets of aggregates for you to work with them.
Like any useful methodology, DDD exists to solve problems. Its bread & butter is complex applications with complex data and evolving requirements. It is not to be interpreted as a straitjacket preventing designers and developers from (thoughtfully) writing designs and code that incorporate aspects of other design approaches, much less designs and code that provide for the application's idiosyncratic needs.
If we are working on a sub-domain where we're only dealing with a read-only scenario, meaning that our entities and value objects will not be changed, does it make sense to create aggregates composed of roots and their children, or should each entity of this context map to a single aggregate?
Imagine that we've entity A and entity B.
In a context where modifications are made, we create an aggregate composed of entity A and entity B, where A is the aggregate root (let's say that B can't live without A and there are some invariants involved).
If we move the same entities to a different context where no modifications are made, does it make sense to keep this aggregate or should we create an aggregate for entity A and a different one for entity B?
In 2019, there's fairly large support for the idea that in a read only scenario, you don't bother with the domain model at all.
Just load the data directly into whatever read only data structure makes sense to support the use case.
See also: CQRS.
The first thing is that if B can't live without A and there are some invariants involved, then to me A is an Aggregate Root, with B being an entity that belongs to it.
Aggregate roots represent a real-world concept and don't just exist for the convenience of modification. In many of our applications, we don't modify the state of our aggregate roots once created - i.e. we in effect have immutable aggregate roots. These would have some logic for design-by-contract checks, invariant checks, etc., but they are in effect anaemic as there are no "update" methods, due to their immutability. Since the "blue book" was written by Eric Evans, a lot of things have changed: for example, NoSQL databases have become very popular, and functional programming concepts have become very influential, leading to more advanced DDD-style architectures such as CQRS being recommended. So, for example, rather than doing updates to a database I can append (i.e. insert) instead. This leads to aggregates no longer having to be "updated", which leads to leaner, anaemic types - but this is what we want in this context. The issue before with anaemic types was that the "update logic" for a given type was put elsewhere in the codebase instead of being put into the type itself. However, if you do not require "update logic" in the first place then you don't have that problem!
If, for example, there is an Order with many OrderItems, we would create an Order aggregate root and an OrderItem entity. It's a very important concept to distill your domain and properly identify what are aggregates, entities and value types.
Then the creation of domain services, repositories, etc. just flows naturally. For example, aggregate roots and repositories are 1 to 1, i.e. in the example above we would have an Order repository and not an OrderItem repository. That way your main domain concepts are spread throughout your code in a predictable and easy-to-understand way.
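A small sketch of that one-repository-per-root shape (hypothetical types):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sketch: one repository per aggregate root, so there is an Order repository
// but deliberately no OrderItem repository.
public sealed class OrderItem
{
    public int LineNumber { get; init; }      // local id, unique within its Order
    public decimal Amount { get; init; }
}

public sealed class Order
{
    public Guid Id { get; init; }
    public List<OrderItem> Items { get; } = new();
}

public interface IOrderRepository
{
    Task<Order> Get(Guid orderId);   // loads the root together with its items
    Task Save(Order order);          // persists the whole aggregate as one unit
}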
Finally, regarding your specific question: I would not treat them as the same entities. In one context you seem to need modification logic, in the other you don't - they are separate domain concepts to me.
In the context where modifications are made: A = aggregate root, B = entity.
In the context without modifications: A = aggregate root (immutable), B = entity (immutable).
In Domain-Driven Design, domain objects fall into two categories: entity and value object. It is very easy to make out which is an entity and which is a value object, but I don't know why we have to do that. Before the advent of DDD, we modeled the domain without entities and value objects. After DDD was put forward, we use entity and value object to classify domain objects, so what is the advantage of such a classification?
domain objects fall into two categories
Actually, no. Objects are a possible implementation of a domain concept, which is basically just information, not code. A concept can be an Entity because it makes sense to identify it in a unique and consistent manner, regardless of how it changes over time (e.g. a Customer can change their name but it's the same customer; 2 customers with the same name are not necessarily the same person).
A Value Object (a name that still reminds us that DDD started a bit too coupled to OOP) represents a domain concept that is just a value. Or more precisely, the business only cares about its value. If you change it, it's another value altogether. Basically, 5 dollars is 5 dollars; you don't really care which is which, any of them is good enough, because only the value is important.
Another thing is, as a domain modeler you identify the nature of a concept based on how the business looks at a concept. The business tells you what they care about.
Now that we know a concept can be an Entity, we can select a certain instance of it (the User with Id 3). You can't do that with a VO, because a VO doesn't have an identity.
Even more, when we identify aggregates, most of the time, the aggregate components (other concepts) are mostly VOs, because they usually are just values (but they do respect business constraints).
So, in conclusion, we classify concepts into Entity and VO, because
The business sees them in this manner: uniquely identifiable or just a value
Entities keep their identity regardless of how they change (obviously the identity itself is read-only); we treat each one as unique
VOs are values that can be used interchangeably; we don't care which is which as long as they represent the same value (which itself, as an implementation detail, can be a complex - composite - value). Also, a VO by its nature is immutable, so we know that we can't change it without it becoming another value (sketched in code below).
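To illustrate the distinction in code (a sketch with hypothetical classes): an Entity compares by identity, a Value Object compares by value.

using System;

// Entity: same Id means same customer, even after a rename.
public sealed class Customer
{
    public Guid Id { get; }
    public string Name { get; private set; }

    public Customer(Guid id, string name) { Id = id; Name = name; }

    public void Rename(string newName) => Name = newName;   // identity is unchanged

    public override bool Equals(object obj) => obj is Customer other && other.Id == Id;
    public override int GetHashCode() => Id.GetHashCode();
}

// VO: records compare by value and are immutable; any "5 USD" is as good as another.
public sealed record Money(decimal Amount, string Currency);

// new Money(5, "USD") == new Money(5, "USD")   // true
// customer1.Equals(customer2)                  // true only if their Ids match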
Before the advent of DDD, we modeled the domain without entities and value objects. After DDD was put forward, we use entity and value object to classify domain objects, so what is the advantage of such a classification?
You should review Chapter 5 ("A Model Expressed in Software") of the Blue Book.
Evans writes:
Defining objects that clearly follow one pattern or the other makes the objects less ambiguous and lays out the path toward specific choices for robust design.
Tracking the identity of ENTITIES is essential, but attaching identity to other objects can hurt system performance, add analytical work, and muddle the model by making all objects look the same.
... bidirectional associations between two VALUE OBJECTS just make no sense. Without identity, it is meaningless to say that an object points back to the same VALUE OBJECT that points to it. The most you could say is that it points to an object that is equal to the one pointing to it, but you would have to enforce that invariant somewhere.
My own summary would be this: recognizing that some domain concepts are entities is useful, because it encourages the designer to acknowledge that identity, continuity, and life cycle are important concerns for the entity in this domain. Similarly, recognizing that a concept is a value immediately frees you from those concerns, bringing your attention to their immutable nature and equivalence.