Is there any way to list all the AutoMapper mapping expression trees that have been configured in an application?
The reason is that we are seeing a few edge cases that cause a StackOverflowException while mapping objects, and we want to look at all the mapping trees. That could point to circular references.
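One way to start, sketched below: iterate the configured type maps and dump the source/destination pairs. `GetAllTypeMaps()` exists on the configuration in many AutoMapper releases (it was removed from the public surface in recent major versions, so check yours); `Order`/`OrderDto` are placeholder types.

```csharp
using System;
using AutoMapper;

class Order { public int Id { get; set; } }
class OrderDto { public int Id { get; set; } }

class Program
{
    static void Main()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<Order, OrderDto>();
        });

        // List every configured map; a pair appearing on both sides of
        // another pair is a candidate for a cycle.
        foreach (var typeMap in config.GetAllTypeMaps())
            Console.WriteLine($"{typeMap.SourceType.FullName} -> {typeMap.DestinationType.FullName}");
    }
}
```

If a cycle turns out to be the cause, putting `.MaxDepth(n)` or `.PreserveReferences()` on the offending `CreateMap` usually stops the runaway recursion, and `config.AssertConfigurationIsValid()` is worth running as well.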
This post is more like an RFC. I'm collecting opinions before creating the issue on GitHub, to be sure it's reasonable.
The subject is this piece of documented behavior:
When mapping a collection property, if the source value is null
AutoMapper will map the destination field to an empty collection
rather than setting the destination value to null. This aligns with
the behavior of Entity Framework and Framework Design Guidelines that
believe C# references, arrays, lists, collections, dictionaries and
IEnumerables should NEVER be null, ever.
https://media.readthedocs.org/pdf/automapper/latest/automapper.pdf
I find it quite unexpected. I don't think that forcing a developer to follow the Framework Design Guidelines is AutoMapper's concern. The behavior could be opt-in, but it shouldn't be the default setting, because a null value and an empty collection are different things.
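For reference, AutoMapper does expose a switch for this; a minimal sketch (`AllowNullCollections` is the real configuration flag, `Source`/`Dest` are placeholder types):

```csharp
using System;
using System.Collections.Generic;
using AutoMapper;

class Source { public List<int> Items { get; set; } }
class Dest   { public List<int> Items { get; set; } }

class Program
{
    static void Main()
    {
        var config = new MapperConfiguration(cfg =>
        {
            cfg.AllowNullCollections = true;   // null source collection stays null
            cfg.CreateMap<Source, Dest>();
        });

        var mapper = config.CreateMapper();
        var dest = mapper.Map<Dest>(new Source { Items = null });
        Console.WriteLine(dest.Items == null); // True with the setting enabled
    }
}
```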
What do you think?
I have several schemas that inherit one or more elements from a collection of 'common' schemas. In this particular instance, I'm importing one of these schemas to make use of a single complex type defined in it.
When I generate the java objects from the schema, I get my schema types, and the element I referenced as expected, however I also get objects generated for the 30+ other types from the common schema.
I want to use the common schema, because I want to rely on automated builds for updating my schema when the common schema changes, but I do not want the extra java classes generated.
Suggestions?
There's no out-of-the-box way to achieve what you want. The reason I'm offering an opinion here is to point out (maybe for others) some issues one needs to take into account no matter which route one goes.
The 'extra' label is not always straightforward. Substitution group members are interesting. In Java, think about a class (A) using an interface (I), and a class (B:I) implementing (I). Some may say there's no dependency between A and B, while others would require B in the distribution. If you replace (I) with a concrete class, things become even less clear - consider that the substitution group head doesn't need to be abstract; or if the type of the substitution group head is anyType (Object in Java).
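The interface analogy can be made concrete with a tiny Java sketch (names are purely illustrative): A compiles and runs knowing only I, yet a B may be exactly what flows through it at runtime, which is why reasonable people disagree about whether B is a dependency of A's distribution.

```java
interface I { }

class B implements I { }           // a substitution-group member, by analogy

class A {                          // references only the interface
    private final I member;
    A(I member) { this.member = member; }
    String describe() { return member.getClass().getSimpleName(); }
}

public class Demo {
    public static void main(String[] args) {
        // A has no compile-time dependency on B, but here B is what it gets:
        A a = new A(new B());
        System.out.println(a.describe()); // prints "B"
    }
}
```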
More so, if the XML processing was designed to accommodate xsi:type then it is even harder to tell (by looking at the schema) what is expected to work where.
Tools such as QTAssistant (I am associated with it) have a default setting that will pull in all strict dependencies (A and I above); and either ALL that might work (B above), or nothing else. Anything in between, the user needs to manually define what goes in the release. This is called automatic XSD refactoring and could be used easily in your scenario.
I'm planning my first architecture that uses DTOs. I'm now exploring how to map the modified client-side domain objects back to the DTOs that were originally retrieved from the data service. I must map back to the original object graph, instead of instantiating a new one, in order to use WCF Data Services Client Library's change tracking feature.
To put it in general terms, I need a tool that maps instances and (recursively) their sub-instances (collectively called the "source graph") to existing instances and (recursively) sub-instances (collectively called the "target graph") in a manner that is (nearly) 100% convention, rather than configuration, based.
The specific required functionality that I can think of is:
Replace single-valued properties within the target graph with their corresponding values from the source graph.
Synchronize collection pairs: elements that were added to a collection within the source graph should then be added to the corresponding collection within the target graph; elements removed from a collection within the source graph should then be removed from the corresponding collection within the target graph.
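To make requirement #2 concrete, here is a hand-rolled sketch (not a feature of any particular mapping library): synchronize a target collection with a source collection by key, leaving matching target instances in place so change tracking still sees them. All names here are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CollectionSync
{
    public static void SyncBy<T, TKey>(
        ICollection<T> target, IEnumerable<T> source, Func<T, TKey> key)
    {
        var sourceKeys = new HashSet<TKey>(source.Select(key));
        // Remove elements that disappeared from the source graph...
        foreach (var stale in target.Where(t => !sourceKeys.Contains(key(t))).ToList())
            target.Remove(stale);
        // ...and add elements that are new in the source graph.
        var targetKeys = new HashSet<TKey>(target.Select(key));
        foreach (var added in source.Where(s => !targetKeys.Contains(key(s))))
            target.Add(added);
    }
}
```

A real implementation would also recurse into the surviving pairs to apply requirement #1, but the add/remove skeleton is the part most mappers don't give you.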
When it comes to mapping DTOs, many people seem to use AutoMapper, so I had assumed this task would be easy with that tool. Looking at the details, though, I have doubts it will fit my requirements: from what I've read, AutoMapper won't handle #1 well, and it won't help much with #2 either.
I don't want to try bending AutoMapper to my purposes if it will lead to a lot of configuration code. That would defeat the purpose of using a convention-based tool in the first place. So I'm wondering: what's a better tool for the job?
I'm writing a graphical editor for a "model" (i.e. a collection of boxes and lines with some kind of semantics, such as UML, the details of which don't matter here). So I want to have a data structure representing the model, and a diagram where an edit to the diagram causes a corresponding change in the model. So if, for instance, a model element has some text in an attribute, and I edit the text in the diagram, I want the model element to be updated.
The model will probably be represented as a tree, but I want to have the diagram editor know as little about the model representation as possible. (I'm using the diagrams framework, so associating arbitrary information with a graphical element is easy). There will probably be a "model" class to encode the interface, if I can just figure out what that should be.
If I were doing this in an imperative language it would be straightforward: I'd just have a reference from the graphical element in the diagram back to the model element. In theory I could still do this by building up the model from a massive collection of IORefs, but that would be writing a Java program in Haskell.
Clearly, each graphical element is going to have some kind of cookie associated with it that will enable the model update to happen. One simple answer would be to give each model element a unique identifier and store the model in a Data.Map lookup table. But that requires significant bookkeeping to ensure that no two model elements get the same identifier. It also strikes me as a "stringly typed" solution: you have to handle cases where an object is deleted but a dangling reference to it remains elsewhere, and it's difficult to say anything about the internal structure of the model in your types.
On the other hand, Oleg's writings about zippers with multiple holes, and cursors with clear transactional sharing, sound like a better option, if only I could understand them. I get the basic idea of list and tree zippers and the differentiation of a data structure. Would it be possible for every element in a diagram to hold a cursor into a zipper of the model? So that if a change is made it can then be committed to all the other cursors? Including tree operations (such as moving a subtree from one place to another)?
It would particularly help me at this point if there were some kind of tutorial on delimited continuations, and an explanation of how they make Oleg's multi-cursor zippers work, that is a bit less steep than Oleg's postings.
I think you're currently working from a design in which each node in the model tree is represented by a separate graphical widget, and each of these widgets may update the model independently. If so, I don't believe a multi-hole zipper will be very practical. The problem is that the complexity of the zipper grows quickly with the number of holes you wish to support. As you get much beyond two holes, the zipper type will get quite large. From a differentiation point of view, a two-hole zipper is a zipper over one-hole zippers, so the complexity grows by operation of the chain rule.
Instead, you can borrow an idea from MVC. Each node is still associated with a widget, but they don't communicate directly. Rather they go through an intermediary controller, which maintains a single zipper. When widgets are updated, they notify the controller, which serializes all updates and modifies the zipper accordingly.
The widgets will still need some sort of identifier to reference model nodes. I've found it's often easiest to use the node's path, e.g. [0] for the root, [1,0] for the root's second child, etc. This has a few advantages. It's easy to determine the node a path refers to, and it's also easy for a zipper to calculate the shortest path from the current location to a given node. For a tree structure, they're also unique up to deletion and reinsertion. Even that isn't usually a problem because, when the controller is notified that nodes should be deleted, it can delete the corresponding widgets and disregard any associated updates. As long as the widget lifetime is tied to each node's lifetime, the path will be sufficiently unique to identify any modifications.
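A sketch of why paths are convenient, using Data.Tree as a stand-in model (here the empty path denotes the root and indices are read root-to-leaf, which differs slightly from the bracket notation above):

```haskell
import Data.Tree (Tree(..))

type Path = [Int]

-- Follow a path of child indices; Nothing means the path has gone stale
-- (e.g. the node was deleted), which the controller can simply discard.
lookupPath :: Path -> Tree a -> Maybe (Tree a)
lookupPath []     t             = Just t
lookupPath (i:is) (Node _ kids)
  | i >= 0 && i < length kids = lookupPath is (kids !! i)
  | otherwise                 = Nothing

-- Apply an edit at the addressed node, rebuilding only the spine above it.
modifyAt :: Path -> (a -> a) -> Tree a -> Tree a
modifyAt []     f (Node x kids) = Node (f x) kids
modifyAt (i:is) f (Node x kids) =
  Node x [ if j == i then modifyAt is f k else k
         | (j, k) <- zip [0 ..] kids ]
```

A single-hole zipper in the controller can play the role of `modifyAt` more efficiently when successive edits are near each other, but the path is what the widgets hand over the wire.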
For tree operations, I would probably destroy then recreate graphical widgets.
As an example, I have some code which does this sort of thing. In this model there aren't separate widgets for each node; rather, I render everything using diagrams, then query the diagram based on the click position to get the path into the data model. It's far from complete, and I haven't looked at it for a while, so it might not build, but the code may give you some ideas.
I have loaded an XMI file containing a UML model. As a result I get an org.eclipse.uml2.uml.Package.
Now I want to programmatically convert it to Ecore (ePackage).
I've already taken a look at the UML2EcoreConverter from org.eclipse.uml2.uml.util.UMLUtil, but its convert method is not clear to me.
Instead of going directly for the UML2EcoreConverter, take a look at
org.eclipse.uml2.uml.util.UMLUtil.convertToEcore(Package, Map)
It takes a package and a map of options and returns the converted EPackage(s). The options map accepts the UMLUtil.UML2EcoreConverter.OPTION__* constants as keys. Possible values are UMLUtil.OPTION__DISCARD, OPTION__IGNORE, OPTION__PROCESS and OPTION__REPORT. All options default to OPTION__IGNORE.
Most of these options are for processing concepts of UML2 class diagrams that don't map cleanly to Ecore, so you can control how they should be handled.
For extended feature mappings (subsets/unions, redefines, ...), see OPTION__REDEFINING_*, OPTION__SUBSETTING_*, OPTION__UNION_PROPERTIES and OPTION__DUPLICATE_*. It should be okay to set all of these to OPTION__PROCESS.
One option you might want to disable is OPTION__SUPER_CLASS_ORDER, which reorders the generalizations and interface realizations alphabetically; that can cause implementation concerns when you want to inherit from a specific super implementation. Another is OPTION__CAMEL_CASE_NAMES, which processes class and feature names to force a strict camel-case scheme; this only makes sense when your UML artifacts don't have valid Java names. Just set them to OPTION__IGNORE, or, to see where they would change something, to OPTION__REPORT.
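Putting it together, a minimal sketch (it assumes the Eclipse UML2 and EMF jars are on the classpath; the option choices simply mirror the advice above):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.eclipse.emf.ecore.EPackage;
import org.eclipse.uml2.uml.Package;
import org.eclipse.uml2.uml.util.UMLUtil;

public class Uml2EcoreDemo {
    public static Collection<EPackage> convert(Package umlPackage) {
        Map<String, String> options = new HashMap<>();
        // Process the extended feature concepts instead of silently ignoring them:
        options.put(UMLUtil.UML2EcoreConverter.OPTION__REDEFINING_PROPERTIES,
                UMLUtil.OPTION__PROCESS);
        options.put(UMLUtil.UML2EcoreConverter.OPTION__SUBSETTING_PROPERTIES,
                UMLUtil.OPTION__PROCESS);
        options.put(UMLUtil.UML2EcoreConverter.OPTION__UNION_PROPERTIES,
                UMLUtil.OPTION__PROCESS);
        // Keep the declared generalization order and the original names:
        options.put(UMLUtil.UML2EcoreConverter.OPTION__SUPER_CLASS_ORDER,
                UMLUtil.OPTION__IGNORE);
        options.put(UMLUtil.UML2EcoreConverter.OPTION__CAMEL_CASE_NAMES,
                UMLUtil.OPTION__IGNORE);
        return UMLUtil.convertToEcore(umlPackage, options);
    }
}
```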
There's also a convertFromEcore(...) for the reverse.
In case you would like to understand the inner workings of UML2EcoreConverter better: it's basically a simple recursive visitor that traverses the UML model, converting each artifact to its Ecore equivalent and doing some cleanup. It extends UMLSwitch to handle the different metaclasses. So to see, for example, how a UML Property is converted to an EStructuralFeature, have a look at caseProperty(...).
You can only convert one way from Ecore to UML.