I am fairly new to DDD and here is my dilemma:
I have to persist an entity A that holds a reference to entity B (let us consider both to be aggregate roots). The UI layer gathers all this information (at the controller) through A_DTO (the DTO class for A) and maps the attributes from the DTO onto a new instance of A. For the reference to B in A, the UI sends an ID. Since I am using an ORM behind the repositories, I want to look up the instance of B from BRepository, populate the reference on the new A instance we are building, and finally call ARepository.save(A instance).
I have a few options here
Do all of this in the UI layer (either in the controller or in some kind of service facade), or
Do it in an application service, say createA, or even in a domain service.
Which of these options is correct? What really stands out here is the process of looking up B by its ID to get the reference to set on the A object; this can equally be argued to be a step to keep the ORM satisfied or a step to keep the domain model consistent. There might be implicit business rules and validations around setting the reference to B on A, and these, I think, are the driving points for the decision here.
Also, what could complicate the picture further is validation: should validations be woven into the process of creating an entity, say into the constructors and/or setters, raising specific errors that can be bubbled up to the client through the UI, with another level of validation in the repositories? Or should validation happen as an explicit step in the controllers?
The DTO is merely a convenience class for transporting data within the UI layer. The fact that you use an ID to refer to B is an implementation detail of the UI layer. So it should be the job of the UI layer / controller to map the DTO to a domain object, including translating IDs to references.
Validation, on the other hand, belongs rightly in the domain layer. In this regard, the only job of the UI is to set values in the domain object and display any errors arising from this.
Both options can be viewed as correct, but I tend to prefer option 2, because the encapsulation provided by application services helps in reading and understanding the code. It also makes it easier to consolidate the API of your domain. The argument in favor of option 1 over option 2 is that the additional layer of encapsulation resulting from application services is needless complexity, but you be the judge of that.

Validation usually manifests in several layers of an application, including the presentation layer and the domain layer. While it seems ideal to write validation logic once and have it reused everywhere else, in practice it is usually easier to duplicate it. That means the presentation layer, such as ASP.NET MVC, has its own validation declarations for view models, and the application service and domain entities should also perform any validation that is needed in their context. Take a look at my posts on services in DDD as well as validation in DDD for an in-depth discussion of these topics.
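For illustration, here is a minimal sketch of option 2 in C#; the type names (ADto, IARepository, IBRepository, and so on) are hypothetical, not taken from the question:

public class CreateAService
{
    private readonly IARepository aRepository;
    private readonly IBRepository bRepository;

    public CreateAService(IARepository aRepository, IBRepository bRepository)
    {
        this.aRepository = aRepository;
        this.bRepository = bRepository;
    }

    public void CreateA(ADto dto)
    {
        // Translate the ID sent by the UI into an actual reference to B.
        B b = bRepository.GetById(dto.BId);
        if (b == null)
            throw new InvalidOperationException("No B exists for the given ID.");

        // Any implicit business rules around linking A to B live here,
        // or better still inside A's constructor.
        var a = new A(dto.Name, b);

        aRepository.Save(a);
    }
}

The controller then shrinks to mapping the request onto ADto and calling CreateA, which keeps the ORM-satisfying lookup out of the UI layer.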
Related
While practicing DDD in my software projects, I have always faced the question: "Why should I implement my business rules in the entities? Aren't they supposed to be pure data models?"
Note that, from my understanding of DDD, domain models can consist of persistent models as well as value objects.
I have come up with a solution in which I separate my persistent models from my domain models. On the other hand we have data transfer objects (DTOs), so we end up with three layers of data mapping: database to persistence model, persistence model to domain model, and domain model to DTO. In my opinion this is not an efficient solution, as too much effort has to be put into it.
Is there a better practice to achieve this goal?
Disclaimer: this answer is a little longer than the question, but that is needed to understand the problem; it is also 100% based on my experience.
What you are feeling is normal; I had the same feeling some time ago. It comes from the combination of architecture, programming language, and framework. You should try to choose these tools such that they give you the code that is easiest to change. If you have to change 3 classes for each field added to an entity, it becomes a nightmare in a large project (e.g. 50+ entity types).
The problem is that you have multiple DTOs per entity/concept.
The heaviest architecture I have used was the classic layered architecture; the strict version was the hardest (in the strict version, a layer may access only the layer directly beneath it; e.g. the User Interface may access only the Application layer). It involved a lot of DTOs and translations as the data moved from the Infrastructure to the UI. Testing was also hard, as I had to use a lot of mocking.
Then I inverted the dependency: the Domain would not depend on the Infrastructure. For this I defined interfaces in the Domain layer that were implemented in the Infrastructure. But I still needed to mock them, and the Aggregates were not pure; they had side effects (because they called the Infrastructure, even if it was abstracted behind interfaces).
Then I moved the Domain to the very bottom. This made my Aggregates pure. I no longer needed to use mocking. But I still needed DTOs (returned by the Application layer to the UI and those used by the ORM).
Then I made the first leap: CQRS. This splits the models in two: the write model and the read model. The important thing is that you don't need to use DTOs for models anymore. The Aggregate (the write model) can be serialized as it is or converted to JSON and stored in almost any database. Vaughn Vernon has a blog post about this.
But the nicest part is the read models. You can create a read model for each use case. Being a model used only for reads/queries, it can be as simple/dumb as possible; the read entities contain only query-related behavior. With the right persistence they can be persisted as they are. For example, if you use MongoDB (or any document database), a simple reflection-based serializer gives you a very thin architecture. Thanks to domain events, you won't need JOINs; you can fully denormalize the data (each read entity includes all the data it needs).
The second leap is Event sourcing. With this you don't need a flat persistence for the Aggregates. They are rehydrated from the Event store each time they handle a command.
You still have DTOs (commands, events, read models) but there is only one DTO per entity/concept.
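As a rough illustration of such a denormalized read model kept up to date by a domain event (all the type names here are invented for the example):

public class OrderSummaryReadModel
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; } // denormalized: no JOIN needed
    public decimal Total { get; set; }
}

public class OrderSummaryProjection
{
    private readonly IReadModelStore store; // e.g. a wrapper over a MongoDB collection

    public OrderSummaryProjection(IReadModelStore store)
    {
        this.store = store;
    }

    // Each domain event updates the read model, so queries stay trivial.
    public void Handle(OrderPlaced placed)
    {
        store.Insert(new OrderSummaryReadModel
        {
            OrderId = placed.OrderId,
            CustomerName = placed.CustomerName,
            Total = placed.Total
        });
    }
}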
Regarding the elimination of the DTOs used by the Presentation: you can use something like GraphQL.
All of the above can be made worse by the programming language and framework. Strongly typed languages force you to create a type for each custom returned value. Some frameworks force you to return a custom serializable type in order to answer REST-over-HTTP requests (this way you can have self-describing REST endpoints using reflection). In PHP you can simply return arrays with string keys from a REST controller.
P.S.
By DTO I mean a class with data and no behavior.
I'm not saying that we all should use CQRS, just that you should know that it exists.
Why should I implement my business rules in the entities? Aren't they supposed to be pure data models?
Your persistence entities should be pure data models. Your domain entities describe behaviors. They aren't the same thing; it is a common pattern to have a bit of logic within the repository to change one into the other.
The cleanest way I know of to manage things is to treat the persistent entity as a value object to be managed by the domain entity, and to use something like a data mapper for transitions between domain and persistence.
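A minimal sketch of that shape, with invented names (OrderState, Order): the persistence model is pure data, and the domain entity guards it.

public class OrderState // persistence model: pure data, no behavior
{
    public Guid Id { get; set; }
    public string Status { get; set; }
}

public class Order // domain entity: behavior only, wrapping the state
{
    private readonly OrderState state;

    public Order(OrderState state)
    {
        this.state = state;
    }

    public void Ship()
    {
        // The business rule lives with the behavior, not in the data model.
        if (state.Status != "Paid")
            throw new InvalidOperationException("Only paid orders can be shipped.");
        state.Status = "Shipped";
    }
}

The data mapper (or repository) builds the OrderState from the database and hands it to Order, so the transition between domain and persistence stays in one place.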
On the other hand we have data transfer objects (DTOs), so we end up with three layers of data mapping: database to persistence model, persistence model to domain model, and domain model to DTO. In my opinion this is not an efficient solution, as too much effort has to be put into it.
CQRS offers some simplification here, based on the idea that if you are implementing a query, you don't really need the "domain model", because you aren't actually going to change the supporting data. In that case, you can take the "domain model" out of the loop altogether.
DDD and data are very different things. The aggregate's data (an outcome) will be persisted somehow, depending on what you're using. Personally, I think in domain events, so the resulting Domain Event is the DTO (technically it is one) that can be stored directly in an Event Store (if you're using Event Sourcing) or act as a data source for your persistence model.
A domain model represents relevant domain behaviour, with the domain state being the 'result'. An entity is a concept which has an identity, as opposed to a Value Object, which represents a business-meaningful value only. An entity usually groups related value objects and consistency rules. Not all business rules live here; some of them make sense as a service.
Now, there is the case of a CRUD domain, or CRUD modelling, where basically all you have is some data structures plus some validation rules. No need to complicate your life here if the modelling is correct; implement things as simply as possible.
Always think of DDD as a methodology to gather requirements and to structure information. Implementation as in code (design) is something different.
I am using Jimmy Bogard's lovely AutoMapper to map my API model to my POCO classes (the domain model).
The API model does not contain certain attributes of the domain model which are necessary for the domain model to be in a valid state. In this scenario, when AutoMapper finishes its job, I have a fully constructed domain model in an invalid state.
Not using AutoMapper is not an option, because there are too many attributes to hand-code.
So, here is my question:
How do I use AutoMapper to create the object in this manner while leaving the domain model in a valid state?
Thanks
This is probably too long for a comment:
Although Jimmy's answer is correct, it may imply that your design is not as encapsulated as it could be, since you are going to have to expose a lot of state. This may be why there are comments about "anemic" models and models "not fit for purpose".
If you are focused on behaviour, then you may have something, totally hypothetical, like the following to make your customer a Gold customer:
customer.MakeGold();
However, using state, you would now have to expose attributes that allow you to map to a Gold status. Internally your customer may have checked certain other state to determine the validity of the Gold status, whereas that validity check has now moved out of the domain; this is the sense in which Jimmy says he makes sure the state is correct before passing it to the domain.
This is not so much an auto-mapper issue as it is a design issue. It also seems to indicate that your API/Integration model may be more data-centric.
On the other hand, if you are mapping to, say, command-style value objects that are handed to the domain it may not be as bad as that :)
// map my APIActivationDetails to Activate --- however AutoMapper does this :)
// (mapper being an AutoMapper IMapper instance)
var activate = mapper.Map<Activate>(apiActivationDetails);
customer.Activate(activate);
Come to think of it... this seems to be what Jimmy is saying :P
"to map my API model to the POCO classes (domain model)"
Why would you ever want to map command-object (API model) attributes onto domain objects? If that's what you are aiming for, you will mostly end up with over-engineered CRUD, certainly not a DDD solution.
Behaviors should be explicitly declared on aggregates and aggregates are responsible for mutating their own internal state according to the business invariants they protect.
That way, domain model clients can clearly state their intention, and this intention isn't lost in the business process.
You shouldn't try to shove data into aggregates, instead you should ask them to perform a task.
I validate the API model before it gets mapped onto my domain model, whether I'm using AutoMapper or not. The API model represents a command or action, so I'm validating the command before I apply it to my entity, rather than mutating my entity and then trying to validate it afterwards. This also means all my validation attributes live on my API model.
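A sketch of what that can look like with DataAnnotations; the command, handler, and repository types are invented for the example:

using System.ComponentModel.DataAnnotations;

// The API model is a command; the validation attributes live here, not on the entity.
public class ActivateCustomerCommand
{
    [Required]
    public Guid CustomerId { get; set; }

    [Required, EmailAddress]
    public string ConfirmationEmail { get; set; }
}

public class ActivateCustomerHandler
{
    private readonly ICustomerRepository customers;

    public ActivateCustomerHandler(ICustomerRepository customers)
    {
        this.customers = customers;
    }

    public void Handle(ActivateCustomerCommand command)
    {
        // Validate the command first; throws ValidationException on bad input.
        Validator.ValidateObject(command, new ValidationContext(command), true);

        // Then ask the entity to act; it is never put into an invalid state.
        Customer customer = customers.GetById(command.CustomerId);
        customer.Activate(command.ConfirmationEmail);
        customers.Save(customer);
    }
}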
From what I've read and implemented, a DTO is an object that holds a subset of values from a data model; in most cases these are immutable objects.
What about the case where I need to pass new values or changes back to the database?
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
Or should I create a DTO that can be passed from the presentation layer to the business layer, converted to an entity there, and then updated in the DB via an ORM call? Is this writing too much code? I'm assuming this is needed if the presentation layer has no concept of the data model. If we go with this approach, should I fetch the object again in the BLL before committing the change?
A few thoughts:
DTO is a loaded term, but as it stands for Data Transfer Object, I see it more as a purely technical, potentially serializable container used to get data from one point to another, usually across tiers or layers. Inside a layer that deals with business concerns, such as the Domain layer in DDD, these little data structures that circulate tend to be named Value Objects instead, because they have a business meaning and are part of the domain's Ubiquitous Language. There are all sorts of subtle differences between DTOs and Value Objects; for instance, you usually don't need to compare DTOs, while comparison and equality are important concerns for VOs (two VOs are equal if their encapsulated data is equal).
DDD places an emphasis on the idea of a rich domain model. That means you usually don't simply map DTOs one-to-one to domain entities, but instead try to model business actions as intention-revealing methods on your entities. For instance, you wouldn't use setters to modify a User's Street, City and ZipCode, but would rather call a moveTo(Address newAddress) method, Address being a Value Object declared in the Domain layer.
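For example, a quick sketch of that idea (the C# record gives Address the value-equality semantics mentioned above; all names are illustrative):

// Value object: immutable, compared by its data.
public sealed record Address(string Street, string City, string ZipCode);

public class User
{
    public Guid Id { get; private set; }
    public Address Address { get; private set; }

    // One intention-revealing method instead of three setters.
    public void MoveTo(Address newAddress)
    {
        if (newAddress is null)
            throw new ArgumentNullException(nameof(newAddress));
        Address = newAddress;
    }
}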
DTOs usually don't reach the Domain layer but go through the filter of an Application layer; this can be Controllers or dedicated Application Services. It is the Application layer objects that know how to turn the DTOs they get from the client into the correct calls to Domain layer Entities (usually Aggregate Roots loaded from Repositories). A further level of refinement above that is to build task-based UIs, where the user doesn't send data-centric DTOs but Commands that reflect their end goal.
So, mapping DTOs to Entities is not really the DDD way of doing things; it denotes more of a CRUD-oriented approach.
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
This is okay for small to medium projects. But when you have a large project with more than 5 developers where different layers are assigned to different teams, then the project benefits from using a DTO to separate the Data Layer from the Presentation Layer.
With a DTO in the middle, any changes in the presentation layer won't affect the data layer, and vice versa.
Or should I create a DTO that can be passed from the presentation layer to the business layer then convert it to an entity, then be updated in the DB via an ORM call. Is this writing too much code? I'm assuming that this is needed if the presentation layer has no concept of the data model. If we are going with this approach, should I fetch the object again at the BLL layer before committing the change?
For creating a new entity, that is the usual way to go (for example, "new user"). For updating an existing entity, you don't convert the DTO to an entity; rather, you fetch the existing entity, map the new values onto it, and initiate an ORM update:
void UpdateUser(UserDto userDto)
{
    // Fetch the existing entity
    User user = userRepository.GetById(userDto.ID);

    // Map the new values onto it
    user.FirstName = userDto.FirstName;
    user.LastName = userDto.LastName;

    // ORM update
    userRepository.Update(user);
    userRepository.Commit();
}
For large projects with many developers, the disadvantage of writing too much code is minimal compared to the huge advantage of decoupling it provides.
See my post about Why use a DTO
My opinion is that DTOs represent the contracts (or messages, if you will) that form the basis for interaction between an Aggregate Root and the outside world. They are defined in the domain, and the AR needs to be able to both handle incoming instances and provide outgoing instances. (Note that in most cases, the DTO instances will be either provided by the AR or handled by the AR, but not both, because having one DTO that flows both ways is usually a violation of separation of concerns.)
At the same time, the AR is responsible for providing the business logic through which the data contained in the DTOs is processed. The presentation layer (or any other actor, including the data access layer, for that matter) is free to put whatever gibberish it wants into a DTO and request that the AR process it, and the AR must be able to recognize the contents of the DTO as gibberish and raise an exception.
Because of this requirement, it is never appropriate to simply map a DTO back to its Entity counterpart.
The DTO must always be processed through the logic in the AR in order to effect changes in the Entity that may bring it to the state described by the DTO.
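A small sketch of that contract, with invented names: the AR runs the DTO through its invariants and rejects gibberish rather than letting it overwrite state.

public class Account // the Aggregate Root
{
    public decimal Balance { get; private set; }

    // The DTO is processed through business logic, never mapped straight onto state.
    public void Process(WithdrawalDto dto)
    {
        if (dto.Amount <= 0)
            throw new ArgumentException("Withdrawal amount must be positive.");
        if (dto.Amount > Balance)
            throw new InvalidOperationException("Insufficient funds.");
        Balance -= dto.Amount;
    }
}

public class WithdrawalDto
{
    public decimal Amount { get; set; }
}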
When you are developing an architecture in OO/DDD style and modeling some domain entity, e.g. an Order entity, you put all the logic related to the order into the Order entity.
But when the application becomes more complicated, the Order entity collects more and more logic, and the class becomes really huge.
Compared with the anemic model (yes, it's obviously an anti-pattern), all that huge logic there is at least separated into different services.
Is it OK to deal with huge domain entities, or am I misunderstanding something?
When you are trying to create rich domain models, focus entities on identity and lifecycle, and thus try to keep them from becoming bloated with either properties or behavior.
Domain services potentially are a place to put behavior, but I tend to see a lot of domain service methods with behavior that would be better assigned to value objects, so I wouldn't start refactoring by moving the behavior to domain services. Domain services tend to work best as straightforward facades/adaptors in front of connections to things outside of the current domain model (i.e. masking infrastructure concerns).
You can also put behavior in Application services, but ask yourself whether that behavior belongs outside of the domain model or not. As a general rule, try to focus application services more on orchestration-style tasks that cross entities, domain services, repositories.
When you encounter a bloated entity, the first thing to do is look for cohesive sets of entity properties and related behavior, and make these implicit concepts explicit by extracting them into value objects. The entity can then delegate its behavior to these value objects.
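As an invented illustration of that refactoring: a cohesive pair of properties (amount and currency) plus the discount math becomes an explicit Money value object, and the entity delegates to it.

// The value object makes the implicit "money" concept explicit and immutable.
public sealed record Money(decimal Amount, string Currency)
{
    public Money ApplyDiscount(decimal percent) =>
        this with { Amount = Amount * (1 - percent / 100m) };
}

public class Order
{
    public Guid Id { get; private set; }
    public Money Total { get; private set; }

    // The entity sheds the math by delegating to the value object.
    public void ApplyDiscount(decimal percent) =>
        Total = Total.ApplyDiscount(percent);
}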
Since we all tend to be more comfortable with entities, try to be more biased towards value objects so that you get the benefits of immutability, encapsulation and composability that value objects provide - moving you towards a more supple design.
Value objects enable you to incorporate a more functional style (e.g. side-effect-free functions) into your domain model, and thus free your entities from having to pile complicated behavior on top of the burden of managing identity and lifecycle. See the pattern summaries for entities and value objects in Eric Evans's http://domainlanguage.com/ddd/patterns/ and the Blue Book for more details.
When you are developing an architecture in OO/DDD style and modeling some domain entity, e.g. an Order entity, you put all the logic related to the order into the Order entity. But when the application becomes more complicated, the Order entity collects more and more logic, and the class becomes really huge.
Classes that have a tendency to become huge are often classes with overlapping responsibilities. Order is a typical example of a class that could have multiple responsibilities and could play different roles in your application.
Given the context the Order appears in, it might be an Entity with mutable state (e.g. if you're managing the Order's commercial conditions during a negotiation phase), but if your application is managing logistics, an Order might play a different role, and an immutable Value Object might be the best implementation in the logistics context.
Compared with the anemic model (yes, it's obviously an anti-pattern), all that huge logic there is at least separated into different services.
...and separation is a good thing. :-)
I have a feeling that the original model is probably data-centric, with data serving different purposes (order creation, payment, order fulfillment, order delivery) piled up in the same container (the Order class). I can't really say from here, but it's a very frequent pattern. Not all of this data is useful for the same purpose at the same time.
Often, a bloated class like the one you're describing is a smell of a missing separation between Bounded Contexts, and/or an incomplete Aggregate separation within the same bounded context. I'd have a look at:
things that change together;
things that change for the same reason;
information needed to fulfill behavior;
and try to re-define the aggregate boundaries accordingly. And also at:
different purposes for the application;
different stakeholders;
different implicit models/languages;
when it comes to discovering the involved contexts.
In a large application you might have more than one model, thus leading to more than a single representation of a single domain concept, at least for concepts that are playing many roles.
This is complementary to Paul's approach.
It's fine to use services in DDD. You will commonly see services at the Domain, Application or Infrastructure layers.
Eric uses these guidelines in his book for spotting when to use services:
The operation relates to a domain concept that is not a natural part of an ENTITY or VALUE OBJECT.
The interface is defined in terms of other elements of the domain model.
The operation is stateless.
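The classic example that fits all three guidelines is a funds transfer: it is not a natural part of Account alone, its interface is defined in terms of domain objects, and it holds no state of its own. A sketch, with invented names:

// A stateless domain service defined in terms of domain model elements.
public class FundsTransferService
{
    public void Transfer(Account source, Account destination, decimal amount)
    {
        // The operation spans two entities, so neither is its natural home.
        source.Withdraw(amount);
        destination.Deposit(amount);
    }
}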
An interesting thread came up when I typed in this question just now. I don't think it answers my question though.
I've been working a lot with .NET MVC3, where it's desirable to have an anemic model. View models and edit models are best off as dumb data containers that you can just pass off from a controller to a view. Any kind of application flow should come from the controllers, and the views handle UI concerns. In MVC, we don't want any behavior in the model.
However, we don't want any business logic in the controllers either. For larger applications it's best to keep the domain code separate from, and independent of, the models, views, and controllers (and HTTP in general, for that matter). So there is a separate project which provides, before anything else, a domain model (with entities and value objects, composed into aggregates according to DDD).
I've made a few attempts to move away from an anemic model towards a richer one in domain code, and I'm thinking of giving up. It seems to me like having entity classes which both contain data and behavior violates the SRP.
Take for example one very common scenario on the web, composing emails. Given some event, it is the domain's responsibility to compose an EmailMessage object given an EmailTemplate, EmailAddress, and custom values. The template exists as an entity with properties, and custom values are provided as input by the user. Let's also say for the sake of argument that the EmailMessage's FROM address can be provided by an external service (IConfigurationManager.DefaultFromMailAddress). Given those requirements, it seems like a rich domain model could give the EmailTemplate the responsibility of composing the EmailMessage:
public class EmailTemplate
{
    public EmailMessage ComposeMessageTo(EmailAddress to,
        IDictionary<string, string> customValues, IConfigurationManager config)
    {
        var emailMessage = new EmailMessage(); // internal constructor

        // ApplyCustomValues is an extension method
        emailMessage.Body = this.BodyFormat.ApplyCustomValues(customValues);
        emailMessage.From = this.From ?? config.DefaultFromMailAddress;
        // bla bla bla
        return emailMessage;
    }
}
This was one of my attempts at a rich domain model. After adding this method, though, it became the EmailTemplate's responsibility both to contain the entity data properties and to compose messages. It was about 15 lines long, and it seemed to distract the class from what it really means to be an EmailTemplate -- which, IMO, is to just store data (subject format, body format, attachments, and optional from/reply-to addresses).
I ended up refactoring this method into a dedicated class whose sole responsibility is composing an EmailMessage from the previous arguments, and I am much happier with it. In fact, I'm starting to prefer anemic domains, because it helps me keep responsibilities separate, making classes and unit tests shorter, more concise, and more focused. It seems that making entities and other data objects "devoid of behavior" can be good for separating responsibility. Or am I off track here?
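Roughly, the dedicated class looks like this (a simplified reconstruction; the original code isn't shown here):

public class EmailMessageComposer
{
    private readonly IConfigurationManager config;

    public EmailMessageComposer(IConfigurationManager config)
    {
        this.config = config;
    }

    // Sole responsibility: compose an EmailMessage from a template and inputs.
    public EmailMessage Compose(EmailTemplate template, EmailAddress to,
        IDictionary<string, string> customValues)
    {
        var message = new EmailMessage();
        message.To = to;
        message.Body = template.BodyFormat.ApplyCustomValues(customValues);
        message.From = template.From ?? config.DefaultFromMailAddress;
        return message;
    }
}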
The argument in favor of a rich domain model over an anemic model hinges on one of the value propositions of OOP: keeping behavior and data next to each other. The core benefit is encapsulation and cohesion, which assist in reasoning about the code. A rich domain model can also be viewed as an instance of the information expert pattern. The value of all of these patterns, however, is to a great extent subjective. If it is more helpful for you to keep data and behavior separate, then so be it, though you might also consider the other people who will be looking at the code. I prefer to encapsulate as much as I can. The other benefit of a richer domain model in this case would be the possibility of making certain properties private. If a property is only used by one method on the class, why make it public?
Whether a rich domain model violates SRP depends on your definition of responsibility. According to SRP, a responsibility is a reason to change, which itself calls for a definition. This definition will generally depend on the use case at hand. You can declare that the responsibility of the template class is to be a template, with all of the implications that arise, one of which is generating a message from the template. A change to one of the template properties can affect the ComposeMessageTo method, which indicates that perhaps these form a single responsibility. Moreover, the ComposeMessageTo method is the most interesting part of the template. Clients of the template don't care how the method is implemented or what properties are present on the template class; they only want to generate a message based on the template. This also weighs in favor of keeping the data next to the method.
Well, it depends on how you want to look at it.
Another way to put it is: "Can the Single Responsibility Principle violate a rich domain model?"
Both are guidelines. There is no "principle" anywhere in software design. There are, however, good designs and bad designs. Both of these concepts can be used, in different ways, to achieve a good design.