I have been trying the Abp framework recently and was happy to find that it is a wonderful implementation of DDD. But since it uses AutoMapper to translate DTOs into Entities/Aggregates, I have noticed it is able to short-circuit the private setters of my Aggregates, which obviously violates a core rule of DDD. AutoMapper's goal is to reduce manual mapping work, but DDD enforces invariants through private setters.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
I believe AutoMapper is not an anti-pattern of DDD, since it's very popular in the community. In other words, if AutoMapper can use reflection (as far as I know) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
Thanks to anyone who can help me or give me a hint.
AutoMapper is: A convention-based object-object mapper in .NET.
In itself, AutoMapper does not violate the principle of DDD. It is how you use it that possibly does.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
No, you don't have to give up AutoMapper.
You can specify .IgnoreAllPropertiesWithAnInaccessibleSetter for each map.
Related: How to configure AutoMapper to globally Ignore all Properties With Inaccessible Setter(private or protected)?
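As a sketch of that option (the DTO and aggregate names here are hypothetical, and the AutoMapper NuGet package is assumed), the per-map configuration looks like this:

```csharp
using AutoMapper;

public class OrderDto { public string Name { get; set; } }
public class Order { public string Name { get; private set; } }

public class OrderProfile : Profile
{
    public OrderProfile()
    {
        // Skip every destination property whose setter is private or
        // protected, so AutoMapper cannot bypass the aggregate's encapsulation.
        CreateMap<OrderDto, Order>()
            .IgnoreAllPropertiesWithAnInaccessibleSetter();
    }
}
```

The ignored properties then have to be set through the aggregate's own constructors or behavior methods, which is exactly where the invariants live.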
In other words, if AutoMapper can use reflection (as far as I know) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
No, that means that reflection is very powerful.
I don't know a lot about the Abp framework. Private setters are just good old traditional OOP, which DDD uses for encapsulation. You should expose public methods from your aggregate that change its state. AutoMapper can be used in your application layer, where you map the DTOs to domain building blocks (like value objects) and pass them as parameters to your aggregate's public methods, which change its own state and enforce invariants. Having said that, not everyone loves AutoMapper :)
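A minimal sketch of that flow (all names here are hypothetical): the application layer maps a DTO onto a value object, then the aggregate mutates itself only through a public behavior method.

```csharp
using System;

// A value object the application layer can map a DTO onto.
public record Address(string Street, string City);

public class Customer
{
    // Private setter: state only changes through behavior methods.
    public Address ShippingAddress { get; private set; }

    public Customer(Address shippingAddress)
    {
        ShippingAddress = shippingAddress
            ?? throw new ArgumentNullException(nameof(shippingAddress));
    }

    // Public behavior that enforces the invariant on every change.
    public void Relocate(Address newAddress)
    {
        if (newAddress is null)
            throw new ArgumentException("An address is required.");
        ShippingAddress = newAddress;
    }
}
```

AutoMapper only ever produces the `Address` value object here; it never touches `Customer` directly, so the invariant check in `Relocate` cannot be skipped.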
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly?
By configuring AutoMapper's profile to construct the aggregate root using a custom expression that uses the aggregate's factory methods or constructors. Here is an example from one of my projects:
public class BphNomenclatureManagerApplicationAutoMapperProfile : Profile
{
    public BphNomenclatureManagerApplicationAutoMapperProfile()
    {
        CreateMap<BphCompany, BphCompanyDto>(MemberList.Destination);
        CreateMap<CreateUpdateBphCompanyDto, BphCompany>(MemberList.Destination)
            // invariants preserved by use of AR constructor:
            .ConstructUsing(dto => new BphCompany(
                dto.Id,
                dto.BphId,
                dto.Name,
                dto.VatIdNumber,
                dto.TradeRegisterNumber,
                dto.IsProducer
            ));
    }
}
Related
I know you can't create a program that adheres 100% to the Dependency Inversion Principle. All of us violate it by instantiating strings in our programs without thinking about it. Since String is a class and not a primitive datatype, we always become dependent on a concrete class.
I was wondering if there are any solutions for this (purely theoretically speaking). Since String is pretty much a black box with very few 'leaks', and has a complex background algorithm, I don't expect an actual implementation, of course :)
The intent of the principle is not to avoid creating instances within a class, or to avoid using the "new" keyword. Therefore instantiating objects (or strings) does not violate the principle.
The principle is also not about always creating a higher-level abstraction (e.g. an interface or base class) in order to inject it and promote looser coupling. If an abstraction is already reasonable, there is no reason to try to improve on it. What benefit would you ever gain by swapping out the implementation of string?
I actually posted this question a few years ago (semi-relevant): IOC/DI: Is Registering a Concrete Type a Code Smell?
So what is the principle about? It's about writing components that are highly focused on their own responsibilities, and injecting components that are highly focused on their own responsibilities. These components are usually services when using a dependency injection framework and constructor injection, but can also be datatypes when doing other types of injection (e.g. method injection).
Note that there is no requirement for these services or datatypes to be interfaces or base classes--they can absolutely be concrete types without violating the principle.
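A brief sketch of that point (the types here are hypothetical): a concrete, narrowly focused component is injected directly, and no interface is needed to satisfy the principle.

```csharp
using System;

// A small, focused component: its single responsibility is price calculation.
public class PriceCalculator
{
    public decimal Total(decimal unitPrice, int quantity) => unitPrice * quantity;
}

// The higher-level service depends on the concrete type. Because the
// dependency is injected and has a single, clear responsibility, this
// does not by itself violate the Dependency Inversion Principle.
public class OrderService
{
    private readonly PriceCalculator _calculator;

    public OrderService(PriceCalculator calculator)
    {
        _calculator = calculator ?? throw new ArgumentNullException(nameof(calculator));
    }

    public decimal Quote(decimal unitPrice, int quantity) =>
        _calculator.Total(unitPrice, quantity);
}
```

An interface would only earn its keep here if there were a real second implementation (or a test double that a concrete class cannot serve as).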
Dependency inversion is not about the creation of objects; it's about high-level/low-level module dependencies and about who defines the domain abstractions (objects and interfaces).
You are talking about dependency injection, a sub-part of the Inversion of Control principle.
In DDD, should any class that is not an Entity or a Value Object be a Service?
For example, in libraries some classes are named FileReader (which read a File object), Cache interface that is implemented by MemcachedCache or FileCache, XXXManager, ...
I understand outside of DDD, you can name your classes however you want to.
But in DDD (and with the same examples), should I name my classes like FileReadingService, CacheService implemented by FileCacheService, XXXService, etc ?
I think this is really something which is only relevant to your projects naming standards. DDD does not dictate that level of detail.
My only advice would be to make sure something like FileReader is clearly segregated away from your domain, possibly inside your infrastructure library.
There are additional types of objects in DDD, albeit in a more supporting role than Entity, Service, or ValueObject. Things like Repositories and Factories spring to mind. But in general, 'real' objects such as physical objects, or nouns in a problem description, should fall into one of those categories.
Well, I will say yes to that. Even though there are other kinds of objects you might encounter, those will probably turn out to be Value Objects after all. I think of it like this: if it is not an object that needs storing, or an object that is managed by an Aggregate Root, then it must be a service managing them.
In domain-driven design, the POCO class has behaviors such as a Validate() method - is that true?
Yes - the "Entity" encapsulates the data and the behaviour of the object - so it isn't a plain old CLR object any longer, it is a domain object.
One way to think of it is to imagine that none of your other code could see the properties of the object, so they can't do...
if (myDomainObject.Name != null) ...
They have to call
if (myDomainObject.IsValid()) ...
When you change the rules about what makes it valid, the change only needs to be done in the domain object as you have stopped the logic from leaking outside into the code that uses it.
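A small sketch of that idea (the class and rule are hypothetical): the validity rule lives inside the domain object, so callers never inspect the raw fields themselves.

```csharp
using System;

public class RegistrationForm
{
    // Kept private: callers cannot reach in and test the field themselves.
    private readonly string _name;

    public RegistrationForm(string name) => _name = name;

    // The single place where the validity rule is defined. Changing the
    // rule later touches only this class, not every caller.
    public bool IsValid() => !string.IsNullOrWhiteSpace(_name);
}
```

If the rule later grows (say, a minimum length), only `IsValid` changes; every `if (form.IsValid())` call site stays untouched.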
Yes, the classes of the Domain Model in Domain-Driven Design should focus on behavior, if that is what you mean.
No. They do not have methods like Validate().
A DDD entity should always be in a valid state. That's why we use behaviors (methods) on the classes and not public property setters.
Does this approach cause the POCO to have dependencies?
No. Typically everything depends on the DDD model and not vice versa.
So meta-programming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters as mentioned by #Sjoerd:
Both getter and setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach I know of that natively supports custom code directly as a template parameter.
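As a toy illustration of "a program that writes programs" (this is not ABSE, just a minimal hand-rolled template), here is a generator that stamps the getter/setter pattern out of a list of member names:

```csharp
using System;
using System.Linq;

public static class AccessorTemplate
{
    // Applies the getter/setter pattern to each (type, name) pair and
    // returns the generated source code as a string.
    public static string Generate(params (string Type, string Name)[] members) =>
        string.Join(Environment.NewLine, members.Select(m =>
            $"private {m.Type} _{m.Name};" + Environment.NewLine +
            $"public {m.Type} Get{Capitalize(m.Name)}() {{ return _{m.Name}; }}" + Environment.NewLine +
            $"public void Set{Capitalize(m.Name)}({m.Type} value) {{ _{m.Name} = value; }}"));

    private static string Capitalize(string s) => char.ToUpper(s[0]) + s.Substring(1);
}
```

The pattern (field + getter + setter) is fixed in the template; the member types and names are the variability parameters.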
Meta programming is not only adding methods at runtime, it can also be automatically creating code at compile time. I.e. code generating code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.
I've read a blog post about DDD by Matt Petters,
and according to it, we create a repository (interface) for each entity, and after that we create a RepositoryFactory that gives out instances (declared as interfaces) of the repositories.
Is this how projects are done using DDD?
I mean, I saw projects that I thought used DDD, but they were calling each repository directly; there was no factory involved.
And also:
why do we need to create so many repository classes? Why not use something like
public interface IRepository<T> : IDisposable
{
    T[] GetAll();
    T[] GetAll(Expression<Func<T, bool>> filter);
    T GetSingle(Expression<Func<T, bool>> filter);
    T GetSingle(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
    void Delete(T entity);
    void Add(T entity);
    int SaveChanges();
}
I guess it could be something to do with violating the SOLID principles, or something else?
There are many different ways of doing it. There is no single 'right' way. Most people prefer a Repository per Entity because it lets them vary Domain Services in a more granular way. This definitely fits the 'S' in SOLID.
When it comes to factories, they should only be used when they add value. If all they do is to wrap a new operation, they don't add value.
Here are some scenarios in which factories add value:
Abstract Factories let you vary Repository implementations independently of client code. This fits well with the 'L' in SOLID, but you could also achieve the same effect by using DI to inject the Repository into the Domain Service that requires it.
When the creation of an object is in itself such a complex operation (i.e. it involves much more than just creating a new instance) that it is best encapsulated behind an API.
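A sketch of that second scenario (all names here are hypothetical): creation involves more than `new` -- validation, code generation, and timestamping -- so it is hidden behind a static factory method.

```csharp
using System;

public class Shipment
{
    public string TrackingCode { get; }
    public DateTime CreatedAtUtc { get; }

    // Private constructor: callers are steered toward the factory.
    private Shipment(string trackingCode, DateTime createdAtUtc)
    {
        TrackingCode = trackingCode;
        CreatedAtUtc = createdAtUtc;
    }

    // The factory encapsulates the multi-step creation logic (validation,
    // tracking-code generation, timestamping) behind a single API.
    public static Shipment Create(string origin, string destination)
    {
        if (string.IsNullOrEmpty(origin) || string.IsNullOrEmpty(destination))
            throw new ArgumentException("Origin and destination are required.");
        var code = $"{origin.Substring(0, 3).ToUpper()}-{destination.Substring(0, 3).ToUpper()}-{Guid.NewGuid():N}";
        return new Shipment(code, DateTime.UtcNow);
    }
}
```

If creation were ever just `new Shipment(...)`, the factory would add nothing and should be dropped, which is the first point above.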