AutoMapper: Mapping hierarchy

For my ASP.NET web application, I'm currently using AutoMapper to map from models (DTOs) -> view models. My view models contain only string properties, because I'm using Mustache, a logic-less template engine.
I'm exposing an API to my website (via JSON, etc.), and what I'd like to do is perform the following mapping:
Model -> Base ViewModel -> Web ViewModel
Then, the "Base ViewModel" can be serialized for my API (e.g. with numerical values for currency). From there, I'll do a simple mapping to my "Web ViewModel" (e.g. with formatted currency strings, links, etc.).
The problem is, I can't seem to get this to work. Defining the Model -> Base ViewModel and Base ViewModel -> Web ViewModel mappings separately doesn't seem to be enough to get my Web ViewModel, and if I explicitly add a Model -> Web ViewModel mapping, AutoMapper just tries to map directly, skipping the intermediate step that I rely on.
Can/should AutoMapper be used like this? I realize that I could probably just do two sequential conversions explicitly to achieve the correct result, but I thought I'd ask here to see whether I can get AutoMapper to handle the conversion in one step.

Well, I don't believe it's possible (or, to be honest, I don't know how it would be). But you could try the following.
Create your mappings:
Mapper.CreateMap<Model, BaseViewModel>()...
Mapper.CreateMap<BaseViewModel, WebViewModel>()...
and then use a generic helper like this, adjusted to your needs:
public static void TwoStepMapping<TSource, TIntermediate, TDest>(TSource source, TDest dest) where TIntermediate : new()
{
    // Map source -> intermediate, then intermediate -> dest (dest is populated in place)
    Mapper.Map(Mapper.Map(source, new TIntermediate()), dest);
}
Call it like this:
TwoStepMapping<Model, BaseViewModel, WebViewModel>(model, webViewModel);
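If you want callers to get a Web ViewModel from a Model in a single call, another option you could try (an untested sketch against the static Mapper API used above) is to register the direct map but have it delegate to the two existing maps via ConvertUsing, so the intermediate step isn't skipped:
Mapper.CreateMap<Model, BaseViewModel>();
Mapper.CreateMap<BaseViewModel, WebViewModel>();

// The direct Model -> WebViewModel map is defined in terms of the two maps above,
// so AutoMapper no longer tries to map the two types directly.
Mapper.CreateMap<Model, WebViewModel>()
      .ConvertUsing(m => Mapper.Map<BaseViewModel, WebViewModel>(Mapper.Map<Model, BaseViewModel>(m)));

// Usage:
WebViewModel webViewModel = Mapper.Map<Model, WebViewModel>(model);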

Related

Extending a JOOQ Table class

I have a 'document' table (very original) that I need to dynamically subset at runtime so that my API consumers can't see data that isn't legal to view, given some temporal constraints between the application and the database. JOOQ generated a nice Document class for me that represents this table.
Ideally, I'd like to create an anonymous subclass of Document that actually translates to
SELECT document.* FROM document, other_table
WHERE document.id = other_table.doc_id AND other_table.foo = 'bar'
Note that 'bar' is dynamic at runtime, hence the desire to extend it anonymously. I can extend the Document class anonymously and everything looks great to my API consumers, but I can't figure out how to actually restrict the data. accept() is final, and toSQL doesn't seem to have any effect.
If this isn't possible and I need to extend CustomTable, what method do I override to provide my custom SQL? The JOOQ docs say to override accept(), but that method is marked final in TableImpl, which CustomTable extends from. This is on JOOQ 3.5.3.
Thanks,
Kyle
UPDATE
I built 3.5.4 from source after removing the "final" modifier on TableImpl.accept() and was able to do exactly what I wanted. Given that the docs imply I should be able to override accept perhaps it's just a simple matter of an erroneous final declaration.
Maybe you can implement one of these interfaces:
1) TableLike (and delegate all methods to a JOOQ implementation instance such as TableImpl), perhaps with a dynamic field using a HashMap to store the Fields, or
2) the Field interface (and make it dynamic).
Anyway, you will need to keep in mind that there are different phases while JOOQ builds the query, binds values, executes it, etc. You should probably avoid changing the "foo" Field once query building has started.
It's been a while since I worked with JOOQ. My team ended up building a customized JOOQ. Another (dirty) trick to hook into the JOOQ library was to use the same packages, since the protected modifier makes everything visible within the same package as well as to subclasses...

Spring Integration: Custom header enricher that records header values to database

I would like to have a custom header enricher that takes the header values to be added, adds them to the message headers, and also records them in the database. I was trying to create a custom Spring tag, say db-recording-header-enricher, and use it instead of the header-enricher tag wherever I am interested in recording the headers to the database.
And here's what I have so far:
I have a custom Spring XML namespace with the custom element db-recorder-header-enricher correctly configured. I have a test Spring Integration XML file that I am using to check whether the parser is functioning correctly. The test loads the XML correctly, except that I want to use my custom parser below instead of the HeaderEnricher, which it picks up by default as the transformer.
The processor for db-recording-header-enricher looks like:
public class DbRecorderHeaderEnricherParser implements BeanDefinitionParser {
    @Override
    public BeanDefinition parse(Element element, ParserContext parserContext) {
        BeanDefinition beanDefinition = new StandardHeaderEnricherParser().parse(element, parserContext);
        // Set the header enricher processor to be my custom processor
        // beanDefinition.setHeaderEnricherProcessor(dbRecordingHeaderEnricher);
        return beanDefinition;
    }
}
The problem I am facing is this:
Based on the parser definition above, if I use StandardHeaderEnricherParser to parse my XML, I cannot find a way to associate DbRecordingHeaderEnricher as the transformer for the header-enricher parsing. Even if I extend StandardHeaderEnricherParser, the method below is final, so again I cannot seem to give it my custom class for transforming purposes.
@Override
protected final String getTransformerClassName() {
    return HeaderEnricher.class.getName();
}
All I want to do in my custom parser is associate my custom header enricher (which extends HeaderEnricher class) for the parsing of the headers and creating records into the database for the headers added. If it's not possible the way I am thinking about it, what are some of the other alternatives? Can I use AOP/advice on a transformer?
This is fairly advanced. You will need a schema, a namespace handler that associates the parser with the namespace element and the parser itself.
It might be simpler to use a <transformer/> and simply reference your bean that adds the headers (and stores them).
If you want to learn how to write your own namespace, a good place to get started is the STS project templates, which will create all of the boilerplate for you.
EDIT:
In response to your updates...
Since it's still a bean definition, and not yet a bean, you can simply change the beanClassName property...
BeanDefinition beanDefinition = new StandardHeaderEnricherParser().parse(element, parserContext);
beanDefinition.setBeanClassName(Foo.class.getName());

Preventing StackOverflowException while serializing Entity Framework object graph into Json

I want to serialize an Entity Framework Self-Tracking Entities full object graph (parent + children in one to many relationships) into Json.
For serializing I use ServiceStack.JsonSerializer.
This is what my database looks like (for simplicity, I dropped all irrelevant fields):
I fetch a full profile graph in this way:
public Profile GetUserProfile(Guid userId)
{
    using (var db = new AcmeEntities())
    {
        return db.Profiles.Include("ProfileImages").Single(p => p.UserId == userId);
    }
}
The problem is that attempting to serialize it:
Profile profile = GetUserProfile(userId);
ServiceStack.JsonSerializer.SerializeToString(profile);
produces a StackOverflowException.
I believe that this is because EF provides an infinite model that screws the serializer up. That is, I can technically call profile.ProfileImages[0].Profile.ProfileImages[0].Profile... and so on.
How can I "flatten" my EF object graph or otherwise prevent ServiceStack.JsonSerializer from running into stack overflow situation?
Note: I don't want to project my object into an anonymous type (like these suggestions), because that would introduce a very long and hard-to-maintain fragment of code.
You have conflicting concerns: the EF model is optimized for storing your data model in an RDBMS, not for serialization - which is the role that separate DTOs would play. Otherwise your clients will be bound to your database, where every change to your data model has the potential to break your existing service clients.
With that said, the right thing to do would be to maintain separate DTOs that you map to which defines the desired shape (aka wireformat) that you want the models to look like from the outside world.
ServiceStack.Common includes built-in mapping functions (i.e. TranslateTo/PopulateFrom) that simplifies mapping entities to DTOs and vice-versa. Here's an example showing this:
https://groups.google.com/d/msg/servicestack/BF-egdVm3M8/0DXLIeDoVJEJ
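For illustration, a minimal sketch of that approach (ProfileDto, ProfileImageDto and their fields are invented here; TranslateTo comes from the ServiceStack.Common assembly mentioned above):
// DTOs shaped for the wire - note there is no back-reference from image to profile
public class ProfileImageDto
{
    public int Id { get; set; }
    public string Url { get; set; }   // hypothetical field
}

public class ProfileDto
{
    public Guid UserId { get; set; }
    public List<ProfileImageDto> ProfileImages { get; set; }
}

// Map the EF graph onto the flat DTO graph, then serialize the DTO instead
var dto = profile.TranslateTo<ProfileDto>();
dto.ProfileImages = profile.ProfileImages.Select(i => i.TranslateTo<ProfileImageDto>()).ToList();
var json = ServiceStack.JsonSerializer.SerializeToString(dto);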
The alternative is to decorate the fields you want serialized on your data model with [DataContract] / [DataMember] attributes. Any properties not attributed with [DataMember] won't be serialized - so you would use this to hide the cyclical references that are causing the StackOverflowException.
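Roughly, a decorated entity would look like this (field names are illustrative only):
using System.Runtime.Serialization;

[DataContract]
public partial class ProfileImage
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Url { get; set; }   // hypothetical field

    // No [DataMember]: the back-reference to the parent is not serialized,
    // which breaks the Profile <-> ProfileImages cycle
    public Profile Profile { get; set; }
}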
For the sake of my fellow StackOverflowers that get into this question, I'll explain what I eventually did:
In the case I described, you have to use the standard .NET serializer (rather than ServiceStack's): System.Web.Script.Serialization.JavaScriptSerializer. The reason is that you can decorate navigation properties you don't want the serializer to handle with a [ScriptIgnore] attribute.
By the way, you can still use ServiceStack.JsonSerializer for deserializing - it's faster than .NET's and you don't have the StackOverflowException issues I asked this question about.
The other problem is how to get the Self-Tracking Entities to decorate relevant navigation properties with [ScriptIgnore].
Explanation: Without [ScriptIgnore], serializing (using the .NET JavaScript serializer) will also raise an exception, about circular references (similar to the issue that raises the StackOverflowException in ServiceStack). We need to eliminate the circularity, and this is done using [ScriptIgnore].
So I edited the .TT file that came with the ADO.NET Self-Tracking Entity Generator template and set it to emit [ScriptIgnore] in the relevant places (if someone wants the code diff, leave me a comment). Some say it's bad practice to edit these "external", not-meant-to-be-edited files, but heck - it solves the problem, and it's the only way that doesn't force me to re-architect my whole application (use POCOs instead of STEs, use DTOs for everything, etc.).
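For reference, the net effect of the .TT change is that the generated navigation property comes out decorated roughly like this (simplified; the real generated setter contains change-tracking logic), after which the standard serializer no longer follows the back-reference:
using System.Web.Script.Serialization;   // JavaScriptSerializer, [ScriptIgnore]

public partial class ProfileImage
{
    // Back-reference to the parent entity; [ScriptIgnore] stops
    // JavaScriptSerializer from re-entering the cycle
    [ScriptIgnore]
    public Profile Profile { get; set; }
}

// Serializing with the standard .NET serializer:
var json = new JavaScriptSerializer().Serialize(profile);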
@mythz: I don't entirely agree with your argument about using DTOs - see my comments on your answer. I really appreciate your enormous effort building ServiceStack (all of the modules!) and making it free to use and open source. I just encourage you to either respect the [ScriptIgnore] attribute in your text serializers or come up with an attribute of your own. Otherwise, even if one actually can use DTOs, they can't add navigation properties from a child object back to a parent one, because they'll get a StackOverflowException.
I do mark your answer as "accepted" because, after all, it helped me find my way through this issue.
Be sure to detach the entity from the ObjectContext before serializing it.
I also used the Newtonsoft Json.NET serializer:
JsonConvert.SerializeObject(EntityObject, Formatting.Indented, new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects });
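Putting those two suggestions together, a rough sketch (AcmeEntities and the query come from the question; Detach is the ObjectContext method that STE contexts expose - with a DbContext you would set the entry's state to EntityState.Detached instead):
using (var db = new AcmeEntities())
{
    var profile = db.Profiles.Include("ProfileImages").Single(p => p.UserId == userId);
    db.Detach(profile);   // stop change tracking before handing the graph to the serializer
    return JsonConvert.SerializeObject(profile, Formatting.Indented,
        new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects });
}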

Entity Framework turn off caching

I am trying to figure out how to turn off caching of the DbContext when using the repository pattern. Right now my view and CRUD functions use the same context, so putting .AsNoTracking() on the DbSet is not working, because updating the data then no longer happens as it did before.
_context.Entry(e).State = e.Id == 0 ? EntityState.Added : EntityState.Modified;
_context.SaveChanges();
Can someone explain caching in Entity Framework, so that I can provide dynamic functionality where, if a user updates a record and then clicks a link to view other data, the data presented on the new grid reflects the change from the previous controller action... hope that makes sense.
View Orders -> Update Order -> Save Order -> View Users -> View correctly shows Item count aggregate based off of order changes.
The question you are asking is not really easy.
What you would do depends on what type of entities you are using. As you probably know, there are several generic options - EF entities, self-tracking entities, POCO without proxies, POCO with proxies.
Depending on what you have, you would either:
1) reattach the entity and call Load on the entity's navigation property,
2) reattach the entity and call LoadProperty on the context, or
3) just call Load / LoadProperty if the context remains the same.
What you refer to as caching is in fact entity tracking, so you can turn it off either by detaching entities or by setting MergeOption to MergeOption.NoTracking on the ObjectQuery.
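As a rough sketch of the tracking-off option with the DbContext API from the question (the ObjectContext equivalent is setting MergeOption.NoTracking on the ObjectQuery, as noted above; the set and entity names are placeholders):
// Read-only grids: run the query without tracking, so results are not merged
// with (possibly stale) entities already held by the change tracker
var users = _context.Users.AsNoTracking().ToList();

// Writes: keep using a tracked context, exactly as before
_context.Entry(order).State = order.Id == 0 ? EntityState.Added : EntityState.Modified;
_context.SaveChanges();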

How can I combine/split properties with AutoMapper?

We're using AutoMapper (http://automapper.codeplex.com/) to map between entities and DTOs. We have a situation where one property on an entity corresponds to three different properties on the DTO, and we need to write some custom logic to do the mapping. Does anyone know how we can do that? (We need to map both ways, from entity and from DTO.)
I notice that AutoMapper supports custom resolvers to do custom mapping logic, but as far as I can tell from the documentation, they only allow you to map a single property to another single property.
Thx
You can create a custom type converter. It allows you to define a converter for an entire type, not just a single property.
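As a quick illustration with the static API of that AutoMapper era: suppose (hypothetically) an entity has a single FullName property that corresponds to Title, FirstName and LastName on the DTO. You can register a converter for the whole type in both directions, either with a ConvertUsing lambda as below or by implementing ITypeConverter and registering it via ConvertUsing<TConverter>():
// Entity -> DTO: split one property into three
Mapper.CreateMap<Person, PersonDto>().ConvertUsing(p =>
{
    var parts = p.FullName.Split(' ');
    return new PersonDto { Title = parts[0], FirstName = parts[1], LastName = parts[2] };
});

// DTO -> Entity: combine the three properties back into one
Mapper.CreateMap<PersonDto, Person>().ConvertUsing(d =>
    new Person { FullName = string.Format("{0} {1} {2}", d.Title, d.FirstName, d.LastName) });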
