When I read the MapStruct documentation, it says: MapStruct is a Java annotation processor for the generation of type-safe bean mapping classes.
https://mapstruct.org/documentation/stable/reference/html/#introduction
Which leaves me with the question: why do I need MapStruct? JHipster uses it, and I have no clue what they need it for in the first place. Why do you need a mapping inside JHipster?
They also mention: "Compared to writing mapping code by hand, MapStruct saves time by generating code which is tedious and error-prone to write." So it saves time, but it does not explain why you need it, right?
Thanks. I hope they can update the documentation to address the doubts and explanations written down here.
JHipster uses MapStruct to generate code for mapping entities to/from DTOs as explained in https://www.jhipster.tech/using-dtos/
You can get rid of it by copying generated Mapper classes into your source tree and then evolving them manually. This could be useful if you don't plan to use JHipster beyond project bootstrapping and/or want to build DTOs that are too complex for MapStruct.
It might sound like more work at first, but it's simple code, and you will need to do the same anyway in the frontend code.
Basically, a Mapper is a simple service that maps an entity to/from a Data Transfer Object. It does not require any library to do so, nor does it have to implement any specific interface; you just call setters with the values you got from getters.
If you don't want to start from scratch, let's say you have defined a Book entity, you can find an example by searching for the BookMapperImpl.java class generated by MapStruct in your target directory. Then you can copy it to your src directory, remove the MapStruct imports in BookMapperImpl, delete the BookMapper interface, and rename BookMapperImpl to BookMapper.
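For illustration, such a hand-written mapper might look roughly like the sketch below. The id/title fields and the @Service annotation are assumptions for the example (JHipster registers mappers as Spring beans); your generated BookMapperImpl will use your entity's real fields.

import org.springframework.stereotype.Service;

// Hand-written mapper sketch; Book/BookDTO and their id/title fields are
// assumptions for the example, as is registering it as a Spring @Service.
@Service
public class BookMapper {

    public BookDTO toDto(Book book) {
        if (book == null) {
            return null;
        }
        BookDTO dto = new BookDTO();
        dto.setId(book.getId());
        dto.setTitle(book.getTitle());
        return dto;
    }

    public Book toEntity(BookDTO dto) {
        if (dto == null) {
            return null;
        }
        Book book = new Book();
        book.setId(dto.getId());
        book.setTitle(dto.getTitle());
        return book;
    }
}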
Related
I did a web site project in PHP using MySQL at school. It was not object oriented, but it displayed my content the way I wanted. Then I converted the same project into object-oriented classes, where I use the same CRUD queries inside the methods of those classes, and they interact with a DBWrapper class. You could say I just cut the PHP code, pasted it into those methods, and called that functionality through objects. All of that was done without documentation.
Now I am making a project in .NET, with documentation; it is actually a web-based app, and I have the idea of getting data from the database through queries. Of course C# is different, but CRUD is similar in any language. I have already decided which thing will be displayed first, which thing next, and so on. When it comes to coding, how do I know that my class diagram matches what I am actually building? I will be connecting the classes according to what I want to display. Also, do we write an object of one class as an attribute of a second class if it is going to be used in it?
Most class diagrams I've seen and made include only the business model entities. Most of the time UML diagrams are used to communicate and document the workings of the system. I like to think of them as pseudo-code.
Please refer to this other question as well: https://softwareengineering.stackexchange.com/questions/190429/what-classes-to-put-exactly-in-a-class-diagram
However, if you feel your implementation ended up with a lot of helper classes then it's probably good to review your system's structure to make sure you are coding "object oriented". Actually making the class diagram is supposed to help you realize what you can improve.
I suggest you also take a look at design patterns. This link might be useful, since you mention experience with C#: http://www.dotnettricks.com/learn/designpatterns
I'm trying to build a RESTEasy service with Hibernate. Is it good to have both Hibernate and JAXB annotations on the same class, or should there be two different classes: one for the Hibernate data object with its annotations, and another, similar class for the REST request and response with JAXB annotations?
The question is basically whether you need extra transfer objects next to your entities.
If you don't, the structure of your transfer data (JSON, XML, whatever) will be more or less dictated by how your entities are structured. You can achieve a lot with annotations, but you'll still be somewhat bound. As a consequence, changes in the entities may need to be propagated to your outer interfaces. Basically, if you change your entities and/or your database schema, you may also need to change the structure of the JSON returned by your REST interface.
Having separate DTOs is safer in cases when you need to provide stability of your interfaces. The downside is that you'll need mapping code to convert between DTOs and entities.
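As a rough sketch of that second option, assuming invented Customer/CustomerDTO classes with a single name field: the entity carries only the JPA/Hibernate annotations, the transfer class carries only the JAXB ones, and a small piece of mapping code sits between them.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlRootElement;

// JPA/Hibernate side: shaped after the database schema.
@Entity
public class Customer {
    @Id
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// JAXB side: shaped after the XML/JSON you promise to clients.
@XmlRootElement
class CustomerDTO {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// The mapping code you pay for in exchange for stable interfaces.
class CustomerMapper {
    static CustomerDTO toDto(Customer entity) {
        CustomerDTO dto = new CustomerDTO();
        dto.setId(entity.getId());
        dto.setName(entity.getName());
        return dto;
    }
}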
From my experience, you can get away with just entities most of the time.
I'm planning my first architecture that uses DTOs. I'm now exploring how to map the modified client-side domain objects back to the DTOs that were originally retrieved from the data service. I must map back to the original object graph, instead of instantiating a new one, in order to use WCF Data Services Client Library's change tracking feature.
To put it in general terms, I need a tool that maps instances and (recursively) their sub-instances (collectively called the "source graph") to existing instances and (recursively) sub-instances (collectively called the "target graph") in a manner that is (nearly) 100% convention, rather than configuration, based.
The specific required functionality that I can think of is (a rough sketch of both operations follows the list):
Replace single-valued properties within the target graph with their corresponding values from the source graph.
Synchronize collection pairs: elements that were added to a collection within the source graph should then be added to the corresponding collection within the target graph; elements removed from a collection within the source graph should then be removed from the corresponding collection within the target graph.
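To make those two requirements concrete, here is a minimal hand-rolled sketch of the intended merge semantics. The Order/OrderLine types are invented for the example, and Java is used purely for illustration; the real graphs in my case are .NET objects tracked by the WCF Data Services client.

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Illustration only: Order/OrderLine stand in for the client-side domain
// objects (source graph) and the originally retrieved DTOs (target graph).
class GraphMerger {

    // Requirement 1: copy single-valued properties onto the existing target
    // instance instead of creating a new one.
    static void mergeOrder(Order source, Order target) {
        target.setCustomerName(source.getCustomerName());
        target.setTotal(source.getTotal());
        mergeLines(source.getLines(), target.getLines());
    }

    // Requirement 2: synchronize a collection pair by key, keeping the
    // existing (change-tracked) target elements alive.
    static void mergeLines(List<OrderLine> source, List<OrderLine> target) {
        Map<Long, OrderLine> targetById = target.stream()
            .collect(Collectors.toMap(OrderLine::getId, Function.identity()));

        // Elements removed from the source graph are removed from the target.
        target.removeIf(t -> source.stream().noneMatch(s -> s.getId().equals(t.getId())));

        for (OrderLine s : source) {
            OrderLine t = targetById.get(s.getId());
            if (t == null) {
                target.add(s);                   // added in the source graph
            } else {
                t.setQuantity(s.getQuantity());  // updated in place
            }
        }
    }
}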
When it comes to mapping DTOs, it seems many people use AutoMapper. So I had assumed this task would be easy using that tool. Upon looking at the details, though, I have doubts it will fit my requirements. This indicates AutoMapper won't handle #1 so well. Equally so, this indicates AutoMapper won't help much with #2 either.
I don't want to try bending AutoMapper to my purposes if it will lead to a lot of configuration code. That would defeat the purpose of using a convention-based tool in the first place. So I'm wondering: what's a better tool for the job?
I recently switched to Entity Framework 5. Now I want to generate the POCO classes from an existing database, and I also need both lazy loading and change tracking, so all the scalar properties as well as the navigation properties should be virtual.
Adding a new ADO.NET Entity Data Model results in an .edmx file and some other .cs and .tt files.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change-tracking proxy, i.e. scalar properties are not virtual.
Secondly, how can I generate proxy-enabled POCO classes?
PS: I accepted Slauma's answer as the best and the only answer so far, but I don't agree with the first part of it. Here is my argument.
Slauma talks about two problems with proxies: restrictions and performance.
About the restrictions on the proxy-enabled entities:
When the classes are generated in the DB First approach by Entity Framework, the rules that the classes must follow to enable change-tracking proxies are not that important, because they are not restrictive at all. Who really cares whether the navigation collections are IList or HashSet? Talking about the restrictions is sensible only when there are previously designed classes in the application and the tables are to be generated from them.
Complex properties are not supported in DB first. So we can exclude them from our discussion.
About the performance:
In the referenced article, and also in some other experiments I have studied so far, the results are not convincing enough to reject proxies in favor of snapshots. First, the experiments were done on a large number of entities, i.e. 10,000. It is not improbable that a batch process in your application (not in the database) works on a large number of entities, but better approaches, such as stored procedures, are usually assumed for that.
Second, depending on the type of the application and its needs, we usually deal with a small number of entities, for example when the Repository pattern is implemented and used; in that case there is no difference between the performance of proxies and snapshots.
Interestingly, in the referenced experiment, re-assigning the same value to the properties was the only case where the performance of proxies dramatically failed. But who really does this? It is very easy to avoid repeatedly notifying the change tracker. Again, in this case the problem only becomes significant when a large number of entities is involved.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change-tracking proxy, i.e. scalar properties are not virtual.
Using change tracking proxies is not recommended as the default change tracking strategy. It is explained in more detail in this blog post. In essence, the main reason to use change tracking proxies - better performance compared to snapshot-based change tracking - is not always guaranteed (sometimes it's even worse), and the list of disadvantages is longer than for snapshot-based change tracking.
In the past the T4 templates that generated POCO entities indeed marked all properties - including scalar properties - as virtual and prepared the entities for proxy based change tracking. For the reasons described in the blog this has been changed for the newer templates, including the DbContext Generator for EF 5, as mentioned in this comment below the blog post linked above. Now, only navigation properties are marked as virtual, but not scalar properties, which allows lazy loading but is not sufficient for change tracking proxies.
Secondly, how can I generate proxy-enabled POCO classes?
I am not aware of any available T4 template that would do this, but it is quite easy to modify the default template to mark also the scalar properties as virtual:
In your project you should have two files with a .tt extension: YourModelContainer.tt and YourModelContainer.Context.tt. Open the YourModelContainer.tt file.
In this file you'll find a method called Property:
public string Property(EdmProperty edmProperty)
{
    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1} {2} {{ {3}get; {4}set; }}",
        Accessibility.ForProperty(edmProperty),
        _typeMapper.GetTypeName(edmProperty.TypeUsage),
        _code.Escape(edmProperty),
        _code.SpaceAfter(Accessibility.ForGetter(edmProperty)),
        _code.SpaceAfter(Accessibility.ForSetter(edmProperty)));
}
Change the line with...
Accessibility.ForProperty(edmProperty),
...to...
AccessibilityAndVirtual(Accessibility.ForProperty(edmProperty)),
That's it.
Just to mention it, in case you are not familiar with it: there is a second kind of Database-First approach available, namely reverse engineering an existing database into a Code-First model. This approach doesn't use a T4 template at all but creates a Code-First model and a context with Fluent API mapping. It is useful if you want to customize and extend the model classes (you could then also add the virtual modifiers manually) and proceed with the Code-First workflow (and Code-First Migrations) in the future to update and evolve your database schema.
So Meta Programming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by @Sjoerd:
Both getter and setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
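As a toy illustration of the idea (not taken from ABSE or any particular tool), a "template" can be as simple as code that emits code, with the extra processing block passed in as the variability parameter:

// Toy code generator: a "template" for the getter/setter pattern.
// The extraGetterCode parameter is the variability point described above.
public class AccessorTemplate {

    static String generate(String type, String field, String extraGetterCode) {
        String cap = Character.toUpperCase(field.charAt(0)) + field.substring(1);
        return "    public " + type + " get" + cap + "() {\n"
             + (extraGetterCode.isEmpty() ? "" : "        " + extraGetterCode + "\n")
             + "        return " + field + ";\n"
             + "    }\n"
             + "    public void set" + cap + "(" + type + " " + field + ") {\n"
             + "        this." + field + " = " + field + ";\n"
             + "    }\n";
    }

    public static void main(String[] args) {
        // Same pattern, two applications: one plain, one with custom code mixed in.
        System.out.println(generate("String", "name", ""));
        System.out.println(generate("int", "age", "System.out.println(\"age read\");"));
    }
}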
Meta programming is not only adding methods at runtime; it can also mean automatically creating code at compile time, i.e. code generating code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object; a small sketch of this runtime flavour follows below)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.
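For the runtime side of this (the findByName example in the question, or client stubs whose methods have no hand-written body), Java's dynamic proxies give a small taste. This is only a sketch of the general trick, not how Grails or any WSDL tooling actually implements it; UserFinder and the fake query string are invented for the example:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// An interface whose method has no hand-written implementation; a dynamic
// proxy supplies the behavior at runtime by interpreting the method name.
interface UserFinder {
    String findByName(String name);
}

public class RuntimeProxyDemo {
    public static void main(String[] args) {
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            if (method.getName().startsWith("findBy")) {
                String property = method.getName().substring("findBy".length());
                // Derive a (fake) query from the method name and argument.
                return "SELECT * FROM User WHERE " + property + " = '" + methodArgs[0] + "'";
            }
            throw new UnsupportedOperationException(method.getName());
        };

        UserFinder finder = (UserFinder) Proxy.newProxyInstance(
            UserFinder.class.getClassLoader(),
            new Class<?>[] { UserFinder.class },
            handler);

        // Prints the query the proxy derived at runtime from findByName("Alice").
        System.out.println(finder.findByName("Alice"));
    }
}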