Convention-based object-graph synchronization - AutoMapper

I'm planning my first architecture that uses DTOs. I'm now exploring how to map the modified client-side domain objects back to the DTOs that were originally retrieved from the data service. I must map back to the original object graph, instead of instantiating a new one, in order to use WCF Data Services Client Library's change tracking feature.
To put it in general terms, I need a tool that maps instances and (recursively) their sub-instances (collectively called the "source graph") to existing instances and (recursively) sub-instances (collectively called the "target graph") in a manner that is (nearly) 100% convention, rather than configuration, based.
The specific required functionality that I can think of is:
1. Replace single-valued properties within the target graph with their corresponding values from the source graph.
2. Synchronize collection pairs: elements that were added to a collection within the source graph should then be added to the corresponding collection within the target graph; elements removed from a collection within the source graph should then be removed from the corresponding collection within the target graph.
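To illustrate #2, here's a rough sketch of the behaviour I'm after, assuming elements can be matched by a key; the helper and its signature are my own illustration, not from any library:

using System;
using System.Collections.Generic;
using System.Linq;

static class GraphSync
{
    // Sketch of requirement #2: make the target collection's membership
    // match the source collection's, matching elements by key.
    public static void SyncCollection<T, TKey>(
        ICollection<T> source, ICollection<T> target, Func<T, TKey> key)
    {
        var sourceKeys = new HashSet<TKey>(source.Select(key));
        var targetKeys = new HashSet<TKey>(target.Select(key));

        // Remove elements that disappeared from the source graph.
        foreach (var stale in target.Where(t => !sourceKeys.Contains(key(t))).ToList())
            target.Remove(stale);

        // Add elements that are new in the source graph (a real tool would
        // map them to the target element type rather than add them as-is).
        foreach (var added in source.Where(s => !targetKeys.Contains(key(s))).ToList())
            target.Add(added);
    }
}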
When it comes to mapping DTOs, it seems many people use AutoMapper, so I had assumed this task would be easy with that tool. Upon looking at the details, though, I have doubts it will fit my requirements: from what I've read, AutoMapper won't handle #1 so well, and it won't help much with #2 either.
I don't want to try bending AutoMapper to my purposes if it will lead to a lot of configuration code. That would defeat the purpose of using a convention-based tool in the first place. So I'm wondering: what's a better tool for the job?

Related

Showing data on the UI in the Hexagonal architecture

I'm learning DDD and Hexagonal architecture, and I think I've got the basics. However, there's one thing I'm not sure how to solve: how do I show data to the user?
So, for example, I have a simple domain with a Worker entity with some functionality (some methods cause the entity to change) and a WorkerRepository so I can persist Workers. I have an application layer with some commands and a command bus to manipulate the domain (like creating Workers and updating their work hours, persisting the changes), and an infrastructure layer which has the implementation of the WorkerRepository and a GUI application.
In this application I want to show all workers with some of their data, and be able to modify them. How do I show the data?
I could give it a reference to the implementation of WorkerRepository.
I think that's not a good solution, because this way I could insert new Workers into the repository, skipping the command bus. I want all changes to go through the command bus.
Okay then, I'd split the WorkerRepository into WorkerQueryRepository and WorkerCommandRepository (as per CQRS), and give a reference only to the WorkerQueryRepository. That's still not a good solution, because the repo gives back Worker entities which have methods that change them, and how will those changes be persisted?
Should I create two types of Repositories? One would be used in the domain and application layers, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing just the data the GUI needs. This way, the GUI would have no way to change Workers except through the command bus.
Is the third approach the right way? Or am I wrong to force all changes to go through the command bus?
Should I create two types of Repositories? One would be used in the domain and application layers, and the other would be used only for providing data to the outside world. The second one wouldn't return full-fledged Worker entities, only WorkerDTOs containing just the data the GUI needs.
That's the CQRS approach; it works pretty well.
Greg Young (2010)
CQRS is simply the creation of two objects where there was previously only one. The separation occurs based upon whether the methods are a command or a query (the same definition that is used by Meyer in Command and Query Separation, a command is any method that mutates state and a query is any method that returns a value).
The current term for the WorkerDTO you propose is "Projection". You'll often have more than one; that is to say, you can have a separate projection for each view of a worker in the GUI. (That has the neat side effect of making the view easier -- it doesn't need to think about the data that it is given, because the data is already formatted usefully).
Another way of thinking of this, is that you have a "write-only" representation (the aggregate) and "read-only" representations (the projections). In both cases, you are reading the current state from the book of record (via the repository), and then using that state to construct the representation you need.
As the read models don't need to be saved, you are probably better off thinking factory, rather than repository, on the read side. (In 2009, Greg Young used "provider", for this same reason.)
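A hedged sketch of what that separation might look like, with hypothetical names (nothing here comes from a particular framework):

using System;
using System.Collections.Generic;

public class Worker { /* the aggregate: command methods live here */ }

// Write side: the repository reads and saves the book of record.
public interface IWorkerRepository
{
    Worker GetById(Guid id);
    void Save(Worker worker);
}

// A projection: plain, immutable data shaped for one GUI view.
public sealed class WorkerSummary
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public int HoursThisMonth { get; private set; }

    public WorkerSummary(Guid id, string name, int hoursThisMonth)
    {
        Id = id;
        Name = name;
        HoursThisMonth = hoursThisMonth;
    }
}

// Read side: a factory/provider builds projections; nothing here can
// mutate Workers, so all changes still go through the command bus.
public interface IWorkerProjectionFactory
{
    WorkerSummary GetSummary(Guid workerId);
    IReadOnlyList<WorkerSummary> GetAll();
}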
Once you've taken the first step of separating the two objects, you can start to address their different use cases independently.
For instance, if you need to scale out read performance, you have the option to replicate the book of record to a bunch of slave copies, and have your projection factory load from the slaves, instead of the master. Or to start exploring whether a different persistence store (key value store, graph database, full text indexer) is more appropriate. Udi Dahan reviews a number of these ideas in CQRS - but different (2015).
"read models don't need to be saved" Is not correct.
It is correct, but perhaps not as clear and specific as it could be.
We don't need to create a durable representation of a read model, because all of the information that describes the variance between instances of the read model has already been captured by our writes.
We will often want to cache the read model (or a representation of it), so that we can amortize the work of creating the read model across many queries. And various trade-offs may indicate that the cached representations should be stored durably.
But if a meteor comes along and destroys our cache of read models, we lose a work investment, but we don't lose information.

DDD: Referencing non aggregate roots

I'm trying to improve my design using some DDD concepts. Currently I have 4 simple EF entities, as shown in the following image:
There are multiple TaskTemplates, each of them storing multiple TaskItemTemplates. The TaskItemTemplates contain various information (description, images, default processing times).
Users can create new concrete Tasks based on a TaskTemplate. In the current implementation this also creates a TaskItem for every TaskItemTemplate, but in the future it might be possible to select only some relevant TaskItemTemplates.
I wonder how to model this requirement in DDD. A reference from TaskItem to TaskItemTemplate is not allowed, because TaskItemTemplate is not an aggregate root. But without this reference it is not possible to get the properties of the TaskItemTemplate.
Of course I could just drop the reference and copy all properties from TaskItemTemplate to TaskItem, but I actually like the ability to update TaskItems by updating the TaskItemTemplates.
Update: Expected behaviour on Task(Item)Template updates
It should be possible to edit a TaskTemplate and its TaskItemTemplates, e.g. to fix typos in Name or Description. I expect these changes to be reflected in the Task/TaskItem.
On the other hand, if the DefaultProcessingTime is modified, this should not change the persisted DueDate of a TaskItem.
In my current implementation it is not possible to add/remove TaskItemTemplates on a persisted TaskTemplate, but this would be a nice improvement. How would I implement something like this? Add another entity, TaskTemplateVersion, between TaskTemplate and TaskItemTemplate?
Update2: TaskItemTemplateId as ValueObject
After reading Vaughn's slides again, I think that with a simple modification my model is correct according to DDD:
Unfortunately I do not really understand why this design is better (is it better?). Okay, there won't be unnecessary DB queries for TaskItemTemplates. But on the other hand, I almost always need a TaskItemTemplate when working with a TaskItem, so everything gets more complicated. I can no longer do something like
public string Description
{
    get { return this.taskItemTemplate.Description; }
}
Based on the properties that you list beneath TaskItem and TaskItemTemplate, I'd say they should be value objects instead of entities. So if there isn't a reason to make them entities (based on the information in your question, there isn't), change them to immutable value objects.
With that solution, you just create a TaskItem from a TaskItemTemplate by copying its data.
Regarding the update scenario that you describe, I see the following solution (sketched in code after this list):
TaskItems are created from a specific version of the TaskItemTemplate. Record that version with a TaskItem.
The TaskTemplate is responsible for updating its items and keeping track of their versions.
If a template changes, notify all Tasks that are derived from the template if immediate action is required. If you just want to be able to "pull in" the template changes at a later time (instead of acting when the template changes), you just compare the versions.
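A minimal sketch of that idea, with hypothetical members (Version, SyncWith) that are not in your current model:

using System;

public sealed class TaskItemTemplate
{
    public string Description { get; set; }
    public TimeSpan DefaultProcessingTime { get; set; }
    public int Version { get; set; }
}

public sealed class TaskItem
{
    public string Description { get; private set; }
    public DateTime DueDate { get; private set; }
    public int TemplateVersion { get; private set; }

    private TaskItem() { }

    // A TaskItem is created from a specific version of the template;
    // the data is copied and the version is recorded.
    public static TaskItem CreateFrom(TaskItemTemplate template, DateTime now)
    {
        return new TaskItem
        {
            Description = template.Description,
            // Copied once at creation: a later change to DefaultProcessingTime
            // does not touch the persisted DueDate.
            DueDate = now + template.DefaultProcessingTime,
            TemplateVersion = template.Version
        };
    }

    // "Pull in" later template fixes (e.g. typo corrections) on demand,
    // by comparing versions.
    public void SyncWith(TaskItemTemplate template)
    {
        if (template.Version == TemplateVersion) return;
        Description = template.Description;
        TemplateVersion = template.Version;
    }
}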
To make informed decisions, it is very important that you fully understand the pros and cons of immutability; only then will you see the benefit of modelling things as value objects. One source on the topic that I find very valuable is Eric Lippert's series on immutability.
Also, the book Implementing Domain-Driven Design by Vaughn Vernon explains the concepts of value objects and entities very well.

How to generate POCO proxies from an existing database

I recently switched to Entity Framework 5. Now I want to generate the POCO classes from an existing database, and I need both lazy loading and change tracking, so all the scalar properties should be virtual as well as the navigation properties.
Adding a new ADO.NET Entity Data Model results in an .edmx file and some other .cs and .tt files.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Secondly, how can I generate proxy-enabled POCO classes?
PS: I accepted Slauma's answer as the best and only answer so far, but I don't agree with the first part of it. Here is my argument.
Slauma talks about two problems with proxies: restrictions and performance.
About the restrictions on proxy-enabled entities:
When the classes are generated by Entity Framework in the DB-First method, the rules that the classes must follow to enable change-tracking proxies are not that important, because they are not restrictive at all. Who really cares whether the navigation collections are IList or HashSet? Talking about the restrictions only makes sense when there are previously designed classes in the application and the tables are to be generated from them.
Complex properties are not supported in DB-First, so we can exclude them from our discussion.
About the performance:
In the referenced article, and in the other experiments I have studied so far, the results are not convincing enough to reject proxies in favor of snapshots. First, the experiments were done on a large number of entities (10,000). It is not improbable that a batch process in your application (not in the database) works on a large number of entities, but better approaches, such as stored procedures, are usually assumed for that.
Second, depending on the type of the application and its needs, we usually deal with a small number of entities, for example when the Repository pattern is implemented and used; in that case there is no difference between the performance of proxies and snapshots.
Interestingly, in the referenced experiment, re-assigning the same value to the properties was the only case where proxy performance dramatically failed. But who really does this? It is very easy to be careful and avoid repeatedly notifying the change tracker. Again, that only becomes a significant problem when a large number of entities is involved.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Using change tracking proxies is not recommended as the default change tracking strategy; this is explained in more detail in this blog post. In essence, the main reason to use change tracking proxies (better performance compared to snapshot-based change tracking) is not always guaranteed, and sometimes it's even worse; also, the list of disadvantages is longer than for snapshot-based change tracking.
In the past, the T4 templates that generated POCO entities indeed marked all properties, including scalar properties, as virtual and prepared the entities for proxy-based change tracking. For the reasons described in the blog post, this has been changed for the newer templates, including the DbContext Generator for EF 5, as mentioned in this comment below the blog post linked above. Now only navigation properties are marked as virtual, but not scalar properties, which allows lazy loading but is not sufficient for change tracking proxies.
Secondly, how can I generate proxy-enabled POCO classes?
I am not aware of any available T4 template that would do this, but it is quite easy to modify the default template to mark the scalar properties as virtual as well:
In your project you should have two files with a .tt extension: YourModelContainer.tt and YourModelContainer.Context.tt. Open the YourModelContainer.tt file.
In this file you'll find a method called Property:
public string Property(EdmProperty edmProperty)
{
    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1} {2} {{ {3}get; {4}set; }}",
        Accessibility.ForProperty(edmProperty),
        _typeMapper.GetTypeName(edmProperty.TypeUsage),
        _code.Escape(edmProperty),
        _code.SpaceAfter(Accessibility.ForGetter(edmProperty)),
        _code.SpaceAfter(Accessibility.ForSetter(edmProperty)));
}
Change the line with...
Accessibility.ForProperty(edmProperty),
...to...
AccessibilityAndVirtual(Accessibility.ForProperty(edmProperty)),
That's it.
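With that change, the template emits scalar properties with the virtual modifier. For a hypothetical Customer entity, the generated class would then look roughly like this:

using System.Collections.Generic;

public partial class Order { }  // stub, just so the sketch compiles

public partial class Customer
{
    public virtual int CustomerId { get; set; }             // scalar, now virtual
    public virtual string Name { get; set; }                // scalar, now virtual
    public virtual ICollection<Order> Orders { get; set; }  // navigation, virtual as before
}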
Just to mention it, in case you are not familiar with it: there is a second kind of Database-First approach available, namely reverse engineering an existing database to a Code-First model. This approach doesn't use a T4 template at all, but creates a Code-First model and a context with Fluent API mapping. It is useful if you want to customize and extend the model classes (you could then also add the virtual modifiers manually) and proceed with a Code-First workflow (and Code-First Migrations) in the future to update and evolve your database schema.

Mass-Replacing Objects using Unity's label tags?

I'm currently working on an exercise for which I want to create technical design documentation.
Therefore, I need to evaluate possible solutions to a bunch of problems that come with my fictional project.
Here's a quick glance at the exercise:
The game's art & core game design are split up very harshly - basically, the core system, game mechanics and design are created to be very abstract, in order to allow them to work with a very wide variety of art settings. Also, one of the restrictions is to re-use as many assets, levels & designs as possible.
Now to my question:
I want the level designers to create levels using "template" objects (objects which have all the technical features that are required, i.e. slots for attachments, correct scale, textures etc.) and later replace these objects with the sets of assets I receive from my outsourcer.
Since I don't want to manually replace all objects whenever I get a new set of assets, this is what I wanted to do:
Each template object gets a descriptive label, and each asset delivered by the outsourcer needs to have the exact same label name as its corresponding template-counterpart stored in it as well (for example as a custom attribute, a channel, or simply in its name).
I now want to replace all templates with the related asset using a script.
This would be done for each set of assets. I would also keep several deployments of my engine, one per set, but initially they'd all start out with the templates that need to be replaced (since each setting will need some modifications, both visually and from a game design perspective, keeping all assets in one trunk/project didn't make any sense to me).
To make this easier I'd use a "database" of some sort (probably a simple dictionary which the engine script could query, and which would be filled out beforehand by another script upon delivery of new assets?).
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective? I have only limited knowledge in this field, so I'd love to hear what you lads & ladies think about this.
Also (very important): do you know of a better way to achieve this "replaceability" of assets, or simply an easier way to achieve what I want to do? I appreciate any feedback! Thank you!
Quick edit: this would not only be applied to 3D objects; textures would also need to be replaced, obviously.
I think you are looking for Prefabs.
Basically, prefabs implement a sort of prototype pattern.
Instead of putting a GameObject directly into the scene's hierarchy, you can make it a prefab and put into the scene a GameObject that is an instance of that prefab.
When a GameObject in the scene is linked to a prefab and the prefab is modified, the linked object is modified too.
If you have several instances of the same prefab, all instances will be updated as well.
The only strong limitation of this feature is that, as of now, nested prefabs aren't supported.
I want the level designers to create levels using "template" objects (objects which have all the technical features that are required, i.e. slots for attachments, correct scale, textures etc.) and later replace these objects with the sets of assets I receive from my outsourcer.
This is the typical use case. You have a placeholder in the scene (e.g. a Cube) that will be substituted by a model when the artists provide it.
If you instantiate 100 cubes in the scene without a prefab, you would have to substitute every one of them manually when the time comes.
If instead you have created a prefab (let's call it ModelPrefab) and the cubes in the scene are instances of that prefab, then when you get the new 3D model you can simply update the prefab and all linked instances will be updated too.
My question is: is this possible? If yes, how difficult would this be from a programmer's perspective?
If you can work without nested prefabs, you don't have to do anything; it's already implemented. If you need to implement nested prefabs, it might not be so straightforward.
Quick edit: this would not only be applied to 3D objects; textures would also need to be replaced, obviously.
I made the example above using models, but you can make a prefab out of any GameObject, which is really just a collection of Components (have a look at Component Based Object Management if you are interested).
EDIT
Yes, it is possible to update prefabs through script. The required functions are in the UnityEditor namespace, so they must be used from an editor extension.
You can find everything you need in the PrefabUtility class.
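For example, a rough sketch of such an editor extension might look like this. The tag-to-prefab mapping and the menu path are hypothetical stand-ins for your asset "database"; PrefabUtility, AssetDatabase and FindGameObjectsWithTag are real editor/engine APIs:

using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

public static class TemplateReplacer
{
    // Hypothetical "database": tag of the template object -> path of the
    // delivered prefab. Filled out when a new asset set arrives.
    static readonly Dictionary<string, string> prefabPathsByTag =
        new Dictionary<string, string>
        {
            { "TemplateCrate", "Assets/Prefabs/SciFiCrate.prefab" } // example entry
        };

    [MenuItem("Tools/Replace Templates")]
    static void ReplaceAll()
    {
        foreach (var entry in prefabPathsByTag)
        {
            var prefab = AssetDatabase.LoadAssetAtPath<GameObject>(entry.Value);
            foreach (var template in GameObject.FindGameObjectsWithTag(entry.Key))
            {
                // Instantiate as a prefab instance so later prefab updates
                // still propagate to the placed objects.
                var replacement = (GameObject)PrefabUtility.InstantiatePrefab(prefab);
                replacement.transform.position = template.transform.position;
                replacement.transform.rotation = template.transform.rotation;
                replacement.transform.localScale = template.transform.localScale;
                Object.DestroyImmediate(template);
            }
        }
    }
}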

How to go about creating an Aggregate

This time the question I have in mind is: what level of abstraction is necessary to construct an Aggregate?
e.g.
Order is composed of OrderWorkflowHistory and Comments
Do I go with
Order <>- OrderWorkflowHistory <>- WorkflowActivity
Order <>- CommentHistory <>- Comment
OR
Order <>- WorkflowActivity
Order <>- Comment
Here OrderWorkflowHistory is just an object which encapsulates all the workflow activities that took place; it maintains a list, and Order simply delegates the job of maintaining the list of activities to this object.
CommentHistory is similarly a wrapper around the list of comments appended by users.
When it comes to the database, ultimately the Order gets written to the ORDER table and the list of workflow activities gets written to the WORKFLOW_ACTIVITY table. The OrderWorkflowHistory has no importance when it comes to persistence.
From a DDD perspective, which would be most optimal? Please share your experiences!
As you describe it, the containers (OrderWorkflowHistory, CommentHistory) don't seem to encapsulate much behaviour. On that basis I'd vote to omit them and manage the lists directly in Order.
One caveat: you may find increasing amounts of behaviour required of the list (e.g. sophisticated searches). If that occurs, it may make sense to introduce one or both containers to encapsulate that logic and stop Order becoming bloated.
I'd likely start with the simple solution (no containers) and only introduce them if justified as above. As long as external clients make all calls through Order's interface you can refactor Order internally without impacting the clients.
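A minimal sketch of that simple solution (the member names are mine, for illustration):

using System.Collections.Generic;

public class WorkflowActivity { }  // stubs, just so the sketch compiles
public class Comment { }

public class Order
{
    private readonly List<WorkflowActivity> activities = new List<WorkflowActivity>();
    private readonly List<Comment> comments = new List<Comment>();

    // Behaviour stays on Order's interface, so the internal representation
    // (plain lists now, container objects later) can change freely.
    public void RecordActivity(WorkflowActivity activity)
    {
        // invariants/validation for the order's workflow would live here
        activities.Add(activity);
    }

    public void Append(Comment comment)
    {
        comments.Add(comment);
    }

    // Read-only views keep external clients from mutating the lists directly.
    public IReadOnlyList<WorkflowActivity> WorkflowHistory
    {
        get { return activities; }
    }

    public IReadOnlyList<Comment> Comments
    {
        get { return comments; }
    }
}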
hth.
This is a good question: how to model and enrich your domain. But it is hard to answer, since it varies so much between domains.
My experience has been that when I started with DDD, I ended up with a lot of repositories and a few value objects. I re-read some books and looked into several DDD code examples with an open mind (there are so many different ways you can implement DDD, and not all of them suit your current project scenario).
I started to keep in mind: "more value objects, more value objects, more value objects". Why?
Well, value objects give you looser dependencies and more behaviour.
For a one-to-many (1-n) relationship like in your example, I have solved it in different ways depending on how my use cases use the domain.
(1) Sometimes I create a wrapper class (like your OrderWorkflowHistory) that is a value object. The whole list of child objects is set when the object is created. This scenario is good when you have a set of child objects that must all be set during one request, for example question weights on a Questionaire form. Then all questions get their question weight through a method Questionaire.ApplyTuning(QuestionaireTuning), where QuestionaireTuning is, like your OrderWorkflowHistory, a wrapper around a list (see the sketch after this list). This adds a lot to the domain:
a) The Questionaire can never get into an invalid state: once we apply tuning, we do it against all questions in the questionnaire.
b) The QuestionaireTuning can provide good access/search methods to retrieve the weight for a specific question or to calculate the average weight score, etc.
(2) Another approach is to have the 1-n wrapper class not be a value object. This suits better when you want to add a child object now and then, and the parent cannot be put into an invalid state by the number of child objects. This typical wrapper class has an Add(Child...) method and several search/contains/exists/check methods.
(3) The third approach is just exposing the IList as a read-only collection. You can add some search functionality with extension methods (new in C# 3.0), but I think it's a design smell; it is better to encapsulate the list access methods in a list-wrapper class.
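A rough sketch of approach (1), with hypothetical question ids and weights standing in for the real members:

using System.Collections.Generic;
using System.Linq;

// Immutable value-object wrapper: the whole set of weights is supplied
// in one operation, so the questionnaire is never left partially tuned.
public sealed class QuestionaireTuning
{
    private readonly Dictionary<int, double> weightsByQuestionId;

    public QuestionaireTuning(IDictionary<int, double> weightsByQuestionId)
    {
        // Copy on construction so the value object stays immutable.
        this.weightsByQuestionId = new Dictionary<int, double>(weightsByQuestionId);
    }

    // The wrapper is a natural home for access/search behaviour (point b).
    public double WeightFor(int questionId)
    {
        return weightsByQuestionId[questionId];
    }

    public double AverageWeight()
    {
        return weightsByQuestionId.Values.Average();
    }
}

public class Questionaire
{
    private QuestionaireTuning tuning;

    // Applying tuning is a single operation against all questions (point a).
    public void ApplyTuning(QuestionaireTuning newTuning)
    {
        tuning = newTuning;
    }
}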
Look at http://dddsamplenet.codeplex.com/ for an example of approach one.
I believe the whole discussion of modelling value objects and entities, and who is responsible for what behaviour, is the most central one in DDD. Please share your thoughts on this topic.
