According to the documentation, AutoMapper performs automatic flattening in addition to the "normal" mapping (property to property between mappable types).
However, this functionality caused some unintended behavior when dealing with DTOs in Entity Framework, sometimes triggering data loads through navigation properties, so I thought about disabling it altogether (i.e. at the MapperConfiguration level).
I know that changing some names or using [NotMapped] might do the trick, but this requires paying attention to each case.
Question: Does Automapper allow disabling the (auto-)flattening?
No, but you can write a naming convention that doesn't do anything. See this PR for an example.
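For illustration, a minimal sketch, assuming a recent AutoMapper version that ships ExactMatchNamingConvention (the Order/OrderDto types are made up). The convention performs no name splitting, so a destination member like CustomerName is never matched to the source chain Customer.Name:

    var config = new MapperConfiguration(cfg =>
    {
        // No name splitting on either side, so convention-based
        // flattening such as CustomerName -> Customer.Name is
        // effectively disabled.
        cfg.SourceMemberNamingConvention = new ExactMatchNamingConvention();
        cfg.DestinationMemberNamingConvention = new ExactMatchNamingConvention();
        cfg.CreateMap<Order, OrderDto>();
    });

On older versions, the equivalent is a custom INamingConvention whose splitting regex never matches anything.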
After reading this SO question, I noticed that the link in the question made a reference to Microsoft.Xrm.Client.CodeGeneration.CodeCustomization,Microsoft.Xrm.Client.CodeGeneration.
What advantages does it have over the standard code gen? According to LameCoder, it changes all the entities to inherit from Microsoft.Xrm.Client.CrmEntity rather than Microsoft.Xrm.Sdk.Entity. What difference does that make, and what other changes does it introduce?
Here is the best site I could currently find on what it does:
CrmSvcUtil & OrganizationServiceContext enhancements such as lazy loading
Simplified Connection Management with Connection Dialog UI
Client Side caching extensions
Utility Extension functions for common tasks to speed up client development
Organization Service Message utility functions to make it easy to call common messages such as BulkDelete, Add Member to Team etc.
Objects to support the Microsoft.Xrm.Portal extensions
The only real downside I can see to inheriting from CrmEntity is that it requires the Microsoft.Xrm.Client dll to either be GAC'd on the server, or ILMerged into the entities dll.
Besides that one downside, here are the features I see it adding:
Moves INotifyPropertyChanging and INotifyPropertyChanged into the base class, making the resulting code smaller (see the sketch after this list)
Defines additional class Attributes
System.Data.Services.Common.DataServiceKeyAttribute
System.Data.Services.IgnorePropertiesAttribute (I'm assuming this one sends less data over the wire?)
Microsoft.Xrm.Client.Metadata.EntityAttribute (I believe this is used to support lazy loading)
Option Set properties are changed to nullable ints
Money properties are now nullable decimals
Setting a property value to the value it already has will not trigger a property changing/changed event
SetPrimaryIdAttributeValue results in smaller code.
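To illustrate the "smaller code" points above, here is a rough, hypothetical sketch of the difference in generated property code (the real CrmSvcUtil output is more verbose, and the CrmEntity helper signatures may differ):

    // Generated against Microsoft.Xrm.Sdk.Entity: every setter raises
    // the change notification events itself.
    public string Name
    {
        get { return this.GetAttributeValue<string>("name"); }
        set
        {
            this.OnPropertyChanging("Name");
            this.SetAttributeValue("name", value);
            this.OnPropertyChanged("Name");
        }
    }

    // Generated against Microsoft.Xrm.Client.CrmEntity: the base class
    // raises the events (and can skip them when the value is unchanged).
    public string Name
    {
        get { return this.GetAttributeValue<string>("name"); }
        set { this.SetAttributeValue("name", "Name", value); }
    }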
I recently switched to Entity Framework 5. Now I want to generate the POCO classes from an existing database, and I also need both lazy loading and change tracking. So all the scalar properties, as well as the navigation properties, should be virtual.
Adding a new ADO.NET Entity Data Model results in an .edmx file and some accompanying .cs and .tt files.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Secondly, how can I generate proxy-enabled POCO classes?
PS: I accepted Slauma's answer as the best and the only answer so far, but I don't agree with the first part of it. Here is my argument:
Slauma talks about two problems with proxies: restrictions and performance.
About the restrictions on the proxy-enabled entities:
When the classes are generated with the DB-First method by Entity Framework, the rules that the classes must follow to enable change-tracking proxies hardly matter, because they are not restrictive at all. Who really cares whether the navigation collections are IList or HashSet? Talking about the restrictions makes sense only when the application contains previously designed classes and the tables are to be generated from them.
Complex properties are not supported in DB first. So we can exclude them from our discussion.
About the performance:
In the referenced article, and in the other experiments I have studied so far, the results are not convincing enough to reject proxies in favor of snapshots. First, the experiments were done on a large number of entities, i.e. 10,000. A batch process in your application (not in the database) may well work on that many entities, but better approaches, such as stored procedures, are usually preferred there anyway.
Second, depending on the type of the application and its needs, we usually deal with a small number of entities, for example when the Repository pattern is implemented and used; at that scale there is no difference between the performance of proxies and snapshots.
Interestingly, in the referenced experiment, re-assigning the same value to a property was the only case where the performance of proxies failed dramatically. But who really does this? It is very easy to avoid notifying the change tracker repeatedly. Again, that only becomes a significant problem when a large number of entities is involved.
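For instance, a one-line guard avoids the pathological case entirely (a sketch; the entity and property names are made up):

    // Only touch the tracked property when the value actually changes,
    // so the proxy's change notification never fires needlessly.
    if (order.Status != newStatus)
        order.Status = newStatus;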
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Using change tracking proxies is not recommended as the default change tracking strategy. It is explained in more detail in this blog post. In essence, the main reason to use change tracking proxies (better performance compared to snapshot-based change tracking) is not always guaranteed (sometimes it's even worse), and the list of disadvantages is longer than for snapshot-based change tracking.
In the past the T4 templates that generated POCO entities indeed marked all properties - including scalar properties - as virtual and prepared the entities for proxy based change tracking. For the reasons described in the blog this has been changed for the newer templates, including the DbContext Generator for EF 5, as mentioned in this comment below the blog post linked above. Now, only navigation properties are marked as virtual, but not scalar properties, which allows lazy loading but is not sufficient for change tracking proxies.
Secondly, how can I generate proxy-enabled POCO classes?
I am not aware of any available T4 template that would do this, but it is quite easy to modify the default template to also mark the scalar properties as virtual:
In your project you should have two files with a .tt extension: YourModelContainer.tt and YourModelContainer.Context.tt. Open the YourModelContainer.tt file.
In this file you'll find a method called Property:
    public string Property(EdmProperty edmProperty)
    {
        return string.Format(
            CultureInfo.InvariantCulture,
            "{0} {1} {2} {{ {3}get; {4}set; }}",
            Accessibility.ForProperty(edmProperty),
            _typeMapper.GetTypeName(edmProperty.TypeUsage),
            _code.Escape(edmProperty),
            _code.SpaceAfter(Accessibility.ForGetter(edmProperty)),
            _code.SpaceAfter(Accessibility.ForSetter(edmProperty)));
    }
Change the line with...

    Accessibility.ForProperty(edmProperty),

...to...

    AccessibilityAndVirtual(Accessibility.ForProperty(edmProperty)),
That's it.
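After saving the template and regenerating, the scalar properties in the entity classes should come out virtual, along these lines (the property names are just placeholders):

    public virtual int OrderId { get; set; }
    public virtual string CustomerName { get; set; }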
Just to mention it, in case you are not familiar with it: there is a second kind of Database-First approach available, namely Reverse Engineering an existing database to a Code-First model. This approach doesn't use a T4 template at all, but creates a Code-First model and a context with Fluent API mapping. It is useful if you want to customize and extend the model classes (you could then also add the virtual modifiers manually) and proceed with the Code-First workflow (and Code-First Migrations) in the future to update and evolve your database schema.
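For a rough idea of the output (names assumed, heavily simplified): reverse engineering produces plain Code-First POCOs plus Fluent API mapping classes, both of which you are free to edit, e.g. to add virtual modifiers:

    public class Order
    {
        public int OrderId { get; set; }
        public virtual ICollection<OrderDetail> OrderDetails { get; set; }
    }

    // Uses EntityTypeConfiguration from System.Data.Entity.ModelConfiguration
    public class OrderMap : EntityTypeConfiguration<Order>
    {
        public OrderMap()
        {
            HasKey(o => o.OrderId);
            ToTable("Orders");
        }
    }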
My question here is whether I should place a RowVersion [Timestamp] property on every entity in my domain model.
For example: I have an Order class and an OrderDetails navigation (reference) property; should I use a RowVersion property on both entities, or is it enough for the parent object?
These classes are POCOs meant to be used with the Entity Framework Code First approach.
Thank you.
The answer, as often, is "it depends".
Since it will almost always be possible to have an Order without any OrderDetails, you're right that the parent object should have a RowVersion property.
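In Code First that is a single annotated property, e.g. (a minimal sketch; the class and property names are mine):

    public class Order
    {
        public int Id { get; set; }

        // System.ComponentModel.DataAnnotations; maps to a rowversion
        // column on SQL Server and enables optimistic concurrency checks.
        [Timestamp]
        public byte[] RowVersion { get; set; }

        public virtual ICollection<OrderDetail> OrderDetails { get; set; }
    }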
Is it possible to modify an OrderDetail without also modifying the Order? Should it be?
If it isn't possible and shouldn't be, a RowVersion property at the detail level doesn't add anything. You already catch all possible modifications by checking the Order's RowVersion. In that case, only add the property at the top level, and stop reading here.
Otherwise, if two independent contexts load the same order and details, both modify a different OrderDetail, and both try to save, do you want to treat this as a conflict? In some cases, this makes sense. In other cases, it doesn't. To treat it as a conflict, the simplest solution is to actually mark the Order as modified too if it is unchanged (using ObjectStateEntry.SetModified, not ObjectStateEntry.ChangeState). EF will check and cause an update to the Order's RowVersion property, and complain if anyone else made any modifications.
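A sketch of that trick, assuming a DbContext-based context and variable names of my choosing:

    // Escalate the detail change to the parent Order, so EF checks and
    // updates the Order's RowVersion on SaveChanges. As noted above,
    // use SetModified, not ChangeState.
    var objectContext = ((IObjectContextAdapter)context).ObjectContext;
    objectContext.ObjectStateManager.GetObjectStateEntry(order).SetModified();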
If you do want to allow two independent contexts to modify two different OrderDetails of the same Order, yes, you need a RowVersion attribute at the detail level.
That said: if you load an Order and its OrderDetails into the same context, modify an OrderDetail, and save your changes, Entity Framework may also check and update the Order's RowVersion, even if you don't actually change the Order, causing bogus concurrency exceptions. This has been labelled a bug, and a hotfix is available, or you can install .NET Framework 4.5 (currently available in release candidate form), which fixes it even if your application uses .NET 4.0.
So meta-programming: the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getter and setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code-generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
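As a tiny illustration of the template/variability idea, here is a hand-rolled sketch in C# (not how ABSE or any particular tool actually works):

    static class PropertyTemplate
    {
        // The fixed pattern is the template; type, name and onSet are the
        // variability parameters. onSet lets the caller inject custom code
        // into the generated setter.
        public static string Generate(string type, string name, string onSet = "")
        {
            return $"private {type} _{name};\n" +
                   $"public {type} {name} {{ get {{ return _{name}; }} " +
                   $"set {{ _{name} = value; {onSet} }} }}";
        }
    }

    // PropertyTemplate.Generate("string", "Name", "OnNameChanged();") emits a
    // backing field plus a property whose setter also calls OnNameChanged().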
Meta-programming is not only adding methods at runtime; it can also mean automatically creating code at compile time, i.e. code that generates code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be generated automatically for most properties.
I have an object that gets instantiated in a LINQ to SQL method. When the object's fields are being assigned, I want to check a date field, and if the date is old, retrieve data from another table and perform calculations before continuing to populate the object.
Is there anything wrong with triggering such logic through the property setter, or should I independently check the date through some service and make the changes if necessary at some point afterwards?
There's nothing wrong with doing some logic from within your setters, but you should be careful about just how much logic you put within your setters. One of the fundamental problems of setters is that since they act like attributes, but have backing code, it's easy to forget that there are potentially some non-trivial actions going on behind the scenes.
This sort of thing can cause problems if you have accessors which use accessors which use accessors; you can rapidly end up causing unexpected performance problems. Generally, it's a good idea to keep the actions of setters (or getters, for that matter) to a relatively small set of actions. For example, validation can work perfectly fine in a setter, but I'd generally advise against doing validation against external resources, because of two things: first, resource delays can cause problems with expected access speed, and secondly, the number of external resource accesses can destroy your performance.
Generally, the rule is this: keep it simple. It's not unreasonable to do complicated things in a setter, but if you do, it's really important to understand the consequences of all of the actions you'll be causing, and it's EXTREMELY important to document what it does extremely well, so the next guy (or girl) to use the code doesn't just try to naively use the accessor and end up causing massive resource contention issues unexpectedly.
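For example, a setter like the following keeps its side effect cheap, local, and easy to document (a sketch; the staleness rule and the IsStale flag are invented for illustration):

    private DateTime _effectiveDate;

    // Setting EffectiveDate also refreshes IsStale. The check is a pure
    // in-memory computation; anything that queries another table should
    // be an explicit service call rather than hiding behind assignment.
    public DateTime EffectiveDate
    {
        get { return _effectiveDate; }
        set
        {
            _effectiveDate = value;
            IsStale = value < DateTime.UtcNow.AddDays(-30);
        }
    }

    public bool IsStale { get; private set; }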
Half the point of using setters instead of, say, public fields, is to be able to trigger events associated with setting certain data.
Keyword: associated. If you're just using the setter as a "convenient" time to do some other stuff because it happens to work, you're doing it wrong. If setting this value requires other work to be done, then by all means, use the setter to do it.