I'm having a really difficult time mapping from a DynamicObject using AutoMapper -- the properties on my destination type always end up null, even though each one maps to a source property of the same name. I assume this has to do with how reflection works against DynamicObjects... Is AutoMapper able to do this?
Similar question to Allow mapping of dynamic types using AutoMapper or similar?
In summary, AutoMapper does not support this. It's easy to write your own mapper to do this using reflection, though. The accepted answer to Allow mapping of dynamic types using AutoMapper or similar? has an example of this.
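As a rough illustration of that reflection approach, here is a minimal hand-rolled mapper. It assumes the dynamic source is an ExpandoObject (or anything else exposing IDictionary<string, object>); a general DynamicObject would need GetDynamicMemberNames plus TryGetMember instead, and the DynamicMapper name is just for this sketch.
using System;
using System.Collections.Generic;
using System.Linq;

public static class DynamicMapper
{
    // Copies entries from a dynamic source (ExpandoObject implements
    // IDictionary<string, object>) onto matching writable properties of T.
    public static T Map<T>(IDictionary<string, object> source) where T : new()
    {
        var destination = new T();
        foreach (var property in typeof(T).GetProperties().Where(p => p.CanWrite))
        {
            if (source.TryGetValue(property.Name, out var value) && value != null)
            {
                // Convert.ChangeType covers simple primitive conversions only;
                // nested objects would need their own mapping pass.
                property.SetValue(destination, Convert.ChangeType(value, property.PropertyType));
            }
        }
        return destination;
    }
}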
I have been trying the Abp framework recently and happily found that it is a wonderful implementation of DDD. But since it uses AutoMapper to translate DTOs into Entities/Aggregates, I have noticed that it is able to bypass the private setters of my aggregates, which obviously violates a core rule of DDD. The goal of AutoMapper is to reduce manual mapping work, but DDD emphasizes protecting invariants through private setters.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
I believe AutoMapper is not an anti-pattern in DDD, since it is very popular in the community. In other words, if AutoMapper can use reflection (as far as I know) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
Thanks to anyone who can help me or give me a hint.
AutoMapper is: A convention-based object-object mapper in .NET.
In itself, AutoMapper does not violate the principles of DDD. It is how you use it that possibly does.
How can I reconcile these two seemingly conflicting concepts and use this framework smoothly? Does that mean I have to give up AutoMapper to keep DDD principles, or vice versa?
No, you don't have to give up AutoMapper.
You can specify .IgnoreAllPropertiesWithAnInaccessibleSetter for each map.
Related: How to configure AutoMapper to globally Ignore all Properties With Inaccessible Setter(private or protected)?
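For what it's worth, a minimal sketch of the per-map version (the OrderDto and Order types are placeholders, not from the question); the linked question covers applying the same setting to every map at once, and that API has moved between AutoMapper versions:
using AutoMapper;

public class OrderDto { public decimal Total { get; set; } }

public class Order
{
    // Private setter: only the aggregate itself may change this.
    public decimal Total { get; private set; }
}

public static class MappingSetup
{
    public static IMapper Build()
    {
        var configuration = new MapperConfiguration(cfg =>
        {
            // Destination members with private/protected setters are ignored,
            // so state changes stay behind the aggregate's own methods.
            cfg.CreateMap<OrderDto, Order>()
               .IgnoreAllPropertiesWithAnInaccessibleSetter();
        });
        return configuration.CreateMapper();
    }
}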
In other words, if AutoMapper can use reflection (as far as I know) to set private setters, anybody else can. Does that mean private setters are essentially unsafe?
No, that means that reflection is very powerful.
I don't know a lot about the Abp framework. Private setters are just good old traditional OOP encapsulation, which DDD relies on. You should expose public methods from your aggregate that change its state. AutoMapper can be used in your application layer, where you map the DTOs to domain building blocks (like value objects) and pass those as parameters to the aggregate's public methods, which change its state and enforce its invariants. Having said that, not everyone loves AutoMapper :)
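A minimal sketch of that shape, with hypothetical Order, Address and ShippingAddressDto types (none of these come from Abp or the question):
using System;
using AutoMapper;

// A value object built in the application layer from a DTO
// (assumes CreateMap<ShippingAddressDto, Address>() is registered in a profile).
public record Address(string Street, string City);

public class ShippingAddressDto
{
    public string Street { get; set; }
    public string City { get; set; }
}

// The aggregate keeps private setters and exposes an intention-revealing method.
public class Order
{
    public Address ShippingAddress { get; private set; }

    public void ChangeShippingAddress(Address address)
    {
        if (address is null) throw new ArgumentNullException(nameof(address));
        // ...enforce any other invariants here...
        ShippingAddress = address;
    }
}

// Application layer: AutoMapper only maps DTO -> value object;
// the aggregate method performs the actual state change.
public class OrderAppService
{
    private readonly IMapper _mapper;
    public OrderAppService(IMapper mapper) => _mapper = mapper;

    public void ChangeShippingAddress(Order order, ShippingAddressDto dto)
    {
        var address = _mapper.Map<Address>(dto);
        order.ChangeShippingAddress(address);
    }
}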
How can I make there two seemingly conflicting concept clear and use this framework smoothly?
By configuring AutoMapper's profile to construct the aggregate root using a custom expression that uses the aggregate's factory methods or constructors. Here is an example from one of my projects:
public class BphNomenclatureManagerApplicationAutoMapperProfile : Profile
{
    public BphNomenclatureManagerApplicationAutoMapperProfile()
    {
        CreateMap<BphCompany, BphCompanyDto>(MemberList.Destination);

        CreateMap<CreateUpdateBphCompanyDto, BphCompany>(MemberList.Destination)
            // invariants preserved by use of AR constructor:
            .ConstructUsing(dto => new BphCompany(
                dto.Id,
                dto.BphId,
                dto.Name,
                dto.VatIdNumber,
                dto.TradeRegisterNumber,
                dto.IsProducer
            ));
    }
}
According to the documentation, AutoMapper performs automatic flattening in addition to the "normal" mapping (property to property between mappable types).
However, this behaviour caused some unintended side effects when dealing with DTOs for Entity Framework entities, sometimes triggering data loads through navigation properties, so I thought about disabling it altogether (i.e. at the MapperConfiguration level).
I know that changing some names or using [NotMapped] might do the trick, but this requires paying attention to each case.
Question: Does Automapper allow disabling the (auto-)flattening?
No, but you can write a naming convention that doesn't do anything. See this PR for an example.
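One way to sketch that idea is a convention whose splitting regex never matches, so member names are never broken into parts and there is nothing for flattening to work with. Treat the exact INamingConvention members as an assumption here, since that interface has changed between AutoMapper versions, and DoNothingNamingConvention is just a made-up name:
using System.Text.RegularExpressions;
using AutoMapper;

// NOTE: adjust the members below to your AutoMapper version's INamingConvention.
public class DoNothingNamingConvention : INamingConvention
{
    public Regex SplittingExpression { get; } = new Regex("(?!)"); // never matches
    public string SeparatorCharacter => string.Empty;
    public string ReplaceValue(Match match) => match.Value;
}

public static class MapperSetup
{
    public static MapperConfiguration Build() =>
        new MapperConfiguration(cfg =>
        {
            // Applied globally, i.e. at MapperConfiguration level.
            cfg.SourceMemberNamingConvention = new DoNothingNamingConvention();
            cfg.DestinationMemberNamingConvention = new DoNothingNamingConvention();
            // cfg.CreateMap<...>() calls go here
        });
}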
I am new to the Groovy programming language and I am trying to fully understand its dynamic nature and capabilities. What I do know is that every class created in Groovy, in its most basic form, looks like this (it extends java.lang.Object and implements GroovyObject):
public class Foo extends java.lang.Object implements groovy.lang.GroovyObject { }
A Groovy object also carries a MetaClass, which extends MetaObjectProtocol. It is this class hierarchy that provides some of Groovy's dynamic capabilities, including the ability to introspect itself (getProperties, getMethods) and to intercept methods (invokeMethod, methodMissing).
I also understand the different types of meta programming available in Groovy. These give you the ability to add or override functionality at runtime or compile time.
Runtime
Categories
Expando / MetaClass / ExpandoMetaClass
Compile Time
AST Transformations
Extension Module
Now that we have some of that out of the way, we can get to the meat of this question. When someone or a book refers to the "Meta-Object Protocol" in Groovy, are we talking about a specific class or a collection of things? I have a hard time grasping something that isn't concretely defined or set in stone. One of my books defines it as:
A protocol is a collection of rules and formats. The Meta-Object Protocol (MOP) is the collection of rules of how a request for a method call is handled by the Groovy runtime system and how to control the intermediate layer. The "format" of the protocol is defined by the respective APIs.
I also have Venkat's Programming Groovy 2 book, and in it there is a diagram that defines this method-lookup process. So I am guessing these are the rules for how a method request is handled (at least for a POGO; I understand a POJO is different).
Anyways I think I am going down the right path but I feel like I am still missing that "ahhhaa" moment. Can anyone fill me in on what I am missing? Or at the very least tell me my ramblings here made some sort of sense :) Thank you!!
This is the answer: "The Meta-Object Protocol (MOP) is the collection of rules of how a request for a method call is handled by the Groovy runtime system and how to control the intermediate layer." Once you understand the process a method call goes through and the API that comes with it, I think it all makes sense. I was just overthinking it all. Thanks!!
As suggested in this answer, Using Profiles in AutoMapper to map the same types with different logic, can anybody provide an example of using AutoMapper with different configuration objects?
I need to have two or more configurations for mapping between the same two types. How can I register multiple configuration objects, and how do I use them?
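For what it's worth, a minimal sketch of one way to do it, simply keeping two independent MapperConfiguration instances side by side (the Source/Target types and the Summary/Detailed naming are invented for illustration):
using AutoMapper;

public class Source { public string Name { get; set; } public string Details { get; set; } }
public class Target { public string Name { get; set; } public string Details { get; set; } }

public static class Mappers
{
    // Two separate configurations for the same pair of types, each with its own logic.
    public static readonly IMapper Summary = new MapperConfiguration(cfg =>
        cfg.CreateMap<Source, Target>()
           .ForMember(d => d.Details, opt => opt.Ignore()))
        .CreateMapper();

    public static readonly IMapper Detailed = new MapperConfiguration(cfg =>
        cfg.CreateMap<Source, Target>())
        .CreateMapper();
}

// Usage: pick whichever mapper matches the logic you need.
// var summary = Mappers.Summary.Map<Target>(source);
// var full    = Mappers.Detailed.Map<Target>(source);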
I'm playing around with the SimpleRepository provider (with automigrations) in SubSonic 3 and I have an annoying problem:
The only way I can control the string length in my database tables is by adding the SubSonicStringLength or SubSonicLongString attributes to the properties of the objects that need to be persisted.
I don't really want a dependency on SubSonic anywhere except in my repository class, and certainly not in my model objects if I can avoid it.
Is there any way to get around this, or am I stuck using SubSonicStringLength and the other attributes?
Basically the only way around this would be to have a DTO object that you map to/from your SimpleRepository classes inside your repository. You could use a mapping tool like AutoMapper to convert between your DTOs and your SimpleRepository objects.
This would isolate your application from SubSonic dependencies outside of your repository, but it would obviously involve a non-trivial amount of work.
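A rough sketch of that layering, with invented Product and ProductRecord names; the SubSonic attributes stay on the persistence class only:
using AutoMapper;

// Clean domain model: no SubSonic dependency anywhere in here.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Persistence class used only inside the repository; the SubSonicStringLength /
// SubSonicLongString attributes would be applied here, confined to this type.
public class ProductRecord
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductRepository
{
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Product, ProductRecord>();
        cfg.CreateMap<ProductRecord, Product>();
    }).CreateMapper();

    // The SimpleRepository and its attributed records never leak past this class,
    // e.g. repo.Add(Mapper.Map<ProductRecord>(product));
}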