Is it a good practice to use a converter inside a populator? - sap-commerce-cloud

Hybris tells us that converters should use populators and not vice versa, because the reverse can be critical for performance.
But when digging in the Hybris code you can see populators like DefaultAbstractOrderEntryPopulator and ProductFeatureListPopulator which
are wiring converters. And I have also found populators using other populators, such as ProductPopulator.
I read the following links but I cannot find anything about using a converter inside a populator, or a populator inside another populator:
Wiki Hybris - Converters and Populators
Wiki Hybris - DTOS best practice
Wiki Hybris 6
So, can we use converters inside populators like Hybris does? And populators inside populators?

I would like to give my point of view to answer this question. One common mistake when working with converters and populators is to confuse them:
converters create a DTO, and populators fill it.
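To make the distinction concrete, here is a minimal sketch of the two contracts, plus a populator that wires a converter in the way the question describes. The interfaces are simplified from the Hybris ones, and ProductModel, ProductData and PriceData are hypothetical stand-ins:

// Simplified sketch of the Hybris Converter/Populator contracts.
interface Populator<S, T> {
    void populate(S source, T target);
}

interface Converter<S, T> {
    T convert(S source); // creates the DTO, then runs its populator list over it
}

// Hypothetical model and DTO classes, just enough to compile.
class ProductModel { String getCode() { return "HW1210-3422"; } }
class PriceData { }
class ProductData {
    private String code;
    private PriceData price;
    void setCode(String code) { this.code = code; }
    void setPrice(PriceData price) { this.price = price; }
}

// A plain populator: it only fills fields on the DTO it is given.
class ProductBasicPopulator implements Populator<ProductModel, ProductData> {
    public void populate(ProductModel source, ProductData target) {
        target.setCode(source.getCode());
    }
}

// The pattern under discussion: a populator that wires a converter, as
// DefaultAbstractOrderEntryPopulator does. Each populate() call here starts
// a nested convert->populate chain (the C1->P1->C2->P2... risk below).
class ProductPricePopulator implements Populator<ProductModel, ProductData> {
    private final Converter<ProductModel, PriceData> priceConverter;

    ProductPricePopulator(Converter<ProductModel, PriceData> priceConverter) {
        this.priceConverter = priceConverter;
    }

    public void populate(ProductModel source, ProductData target) {
        target.setPrice(priceConverter.convert(source));
    }
}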
We have to be very careful when we are going to use a converter inside a populator, and be completely sure that we really need to do that.
If we have a long chain of populators using converters, we run a performance risk. For instance:
C1->P1->C2->P2->C3->P3....
I think the best practice to follow is:
1) Be aware of the converters that already exist, and check whether we have to add our populator to an existing converter
(for example using the modifyPopulatorList)
2) If our DTO has a dependency on another DTO,
we should ask ourselves whether that dependency is really necessary. I would decide this according to whether that second DTO is used in many places
or not, because if you are the only one who uses it, you can probably merge the properties into a single DTO and avoid having two different converters.
3) Another possibility is to use different converters in parallel, as we can see in
Wiki Hybris - Facades and DTOs
To sum up, the design of our converters and populators is our responsibility, and we have to come up with the best possible design to avoid
performance problems.

Basically the way to do it is: never write a concrete converter class and never call a populator directly.
But this is how the product is built for extensibility and frankly you can do whatever you like.

Related

How to implement a proper layer separation in XPages (i.e. talk to Java objects, not DominoViews and DominoDocuments)

I'm trying to implement a proper layer separation in my XPage project. Ideally I'm trying to get to a point where the XML in the XPage contains no SSJS and uses only EL to access Java objects.
So far I've worked out how to load all my data from the domino database into Java Beans (where 1 document = 1 Object, more or less), I'm reading view contents into Java Maps or Lists, and I've managed to display the content of these collections in repeat controls.
What I'm unsure of is how to display the content of a 'form', of a single document, without referencing the domino document. In particular, I'm unsure of how to deal with the 'new document' case. I suppose I create an empty object, then set that object as a data source for the XPage.
I'm aware that I have to use an ObjectDataSource for this, but I'm unsure where to actually store it. I read an article from Stephan Wissel stating that one shouldn't put them in a managed bean, so where can I put it? In one of the scoped variables like viewScope?
Right now I've written an 'ApplicationBean', which is a session-scoped managed bean where I'm storing all my objects.
What is the best practice? It seems that there are many different ways to meet that goal. Currently I'm exploring Christian Güdemann's XPages Toolkit, which sounds very promising. I know that Samir Pipalia, John Daalsgard and Frank van der Linden have worked up their own frameworks.
So how should I go about it? Any pitfalls?
This is a large topic indeed. As Paul mentioned, Tim's document model classes are a great example of how to do that clearly, and Tim goes into more detail in later episodes in that NotesIn9 series. My own Framework's model objects are fairly similar, though I also added collection managers to handle the dirty business of accessing views. For better or for worse, almost every XPage developer solves this problem in a unique way.
There are a number of ways you can go about implementing this sort of thing, and some of the differences aren't terribly important in normal cases (for example, whether you preload all data from the document when constructing the model object or do lazy fetching to the back-end only as needed), but there are definitely a couple overarching questions to tackle.
Model Access
As you mentioned in the question, one of the big problems is how you actually access model objects from the XPage - how the objects are fetched from the DB or created anew. My Framework's model objects use a conceit of "Manager" objects, which are application-scoped beans that allow getting either named collections (which map to views), model objects by UNID, or a new model object via the keyword "new". Generally, these models (which are Serializable) are then stored in the view scope of the page using them either via <xp:dataContext/>, <xe:objectData/>, or the Framework's own <ff:modelObjectData/>.
I've found it very wise to avoid using managed beans to represent individual objects (like "CurrentWhatever" that you then fill with data on each page), since that muddies up your faces-config in the best case or runs into session problems in the worst (if you put it in session scope, which I rarely use).
How you implement "new" vs. "fetched" model objects depends largely on the tack you take to write your models in the first place, but most boil down to having two constructors: one to take a UNID (at least) to point to an existing document and one to create a new one. If you go the "write every property explicitly in the object with getters and setters" route, the latter would also initialize all of the fields with default values instead of reading them from a document. Internally, you should have fields to store the UNID of the document, which can indicate whether it's new or not - then, your save method can check if this field is empty and create a new document if needed (and then store the new doc's UNID in the field).
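A bare-bones sketch of that two-constructor shape (Person is a hypothetical example; the actual Domino calls are elided as comments, since they depend on how you handle sessions and recycling):

import java.io.Serializable;

// Sketch of the two-constructor model object described above. Real code
// would read/write a lotus.domino.Document in the marked places.
public class Person implements Serializable {
    private static final long serialVersionUID = 1L;

    private String unid;      // null/empty means the document is not saved yet
    private String firstName;
    private String lastName;

    // "New document" case: initialize fields with default values.
    public Person() {
        this.firstName = "";
        this.lastName = "";
    }

    // "Existing document" case: point at a stored document by its UNID.
    public Person(String unid) {
        this.unid = unid;
        // ...open the back-end document by UNID and read the fields here...
    }

    public boolean isNew() { return unid == null || unid.isEmpty(); }

    public void save() {
        if (isNew()) {
            // create a new back-end document, write the fields, then
            // remember its UNID: this.unid = doc.getUniversalID();
        } else {
            // re-open the document by this.unid and write the fields back
        }
    }

    // Getters/setters so EL bindings on the XPage can reach the fields.
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}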
Views
It sounds like you're already reading your model collections into Lists, which is good. One down side there, though, is scalability: with small (less than 100) collections, you're likely to not run into any load-speed problems, but afterwards things are going to slow down on initial page load as your code reads in the entire view ahead of time. You can mitigate this somewhat with efficient view reading, but there's a limit. The built-in views are generally speedy because they only load data as needed (they also cheat like hell to do so, but that's another issue).
This is a noble goal to aim for yourself, but doing so to cover all cases is no small feat: you end up running into questions of FT searching, column resorting, efficient data preloading (you don't want to re-open the View object only to read in one entry at a time, but you also don't want to read the whole thing), use in viewPanel and maybe others (which require specialized interfaces), expanding/collapsing categories, and so forth. It's a large sub-topic on its own.
Esoterica
You're also liable to run across other areas that are more difficult than you'd think at first, such as "proper" rich text handling and file attachments. Attachments, in particular, require direct conflict with the XSP framework to get to function properly with custom model objects and the standard upload/download controls.
Case-sensitivity in field names is another potential area of trouble. If you're writing getters and setters for all of your fields, it's a moot point, but if you're going the "thin wrapper" route (which I prefer), it's important to code any intermediary caches/lookups in a way that deal with the fact that "FOO" and "foo" are (basically) the same as item names to Notes, but are distinct in Java. The tack I take is to make extensive use of TreeMaps: if you pass String.CASE_INSENSITIVE_ORDER as the parameter to the constructor, it handles treating Strings as generally case-insensitive when used as keys.
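For instance (plain Java, nothing Notes-specific):

import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveKeys {
    public static void main(String[] args) {
        // Keys compare case-insensitively, matching how Notes treats item names.
        Map<String, Object> items = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        items.put("FOO", 42);
        System.out.println(items.get("foo")); // prints 42: "FOO" and "foo" collide
    }
}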
Having your model objects work with all the standard controls like that may or may not be a priority - I find it very valuable, so I did a lot of legwork to make it happen with my framework, but if you're just going to do some basic Strings-and-numbers models, you don't necessarily need to worry.
So... yeah, it's a big topic! Depending on how confident you are with Java and the XPages undergirdings, I would suggest either going the route of fairly-simple "beans with getters and setters" for your objects or by looking into the implementation details of one of the existing frameworks (my own or the ones you mentioned). Sadly, there are a lot of little things that will crop up as your code gets more complicated, many of which are non-obvious to deal with.
Jesse Gallagher's Scaffolding framework also accesses Java objects rather than dominoDocument datasources. I've used (and modified) Tim Tripcony's Java objects in his "Using Java in XPages" series from NotesIn9 (the first is episode 132). The source code for what he does is on BitBucket. This basically converts the dominoDocument to a Map. It doesn't cover MIME (Rich Text) or attachments. If it's useful, I can share the amendments I made (e.g. making field names all the same case, to ensure it still works for existing documents, where fields may have been created with LotusScript).
Andrew - Jesse's one of the experts here so I'd read his response carefully.
What I do is I took one of the key pieces of Jesse's bigger framework - the "pageControllers" - and I use that HEAVILY. So I end up with a Java class for each XPage to act as the controller. "All" Jesse's page controller framework does is make it a little easier to consume. So you can reference it on each page as "controller" and don't need to make individual managed beans for them.
I still will use SOME SSJS on the XPage if I really need to, for things like button events... some methods that don't have proper getters and setters... HashMap.size() for instance. But the vast bulk of the code goes into the Java class. No real need for viewScope variables any more as well.
In the case of a "New Document": in the controller I'll create a new Java object that represents the "current document". I'll bind all the fields to that. If it's new, I create a new object and assign it to the private variable. If I'm loading from somewhere, then I take that variable and load the document I want.
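A skeletal version of that controller pattern might look something like this (all names are hypothetical; Jesse's framework is what wires the class to the page so EL can reach it as "controller"):

// Skeletal page controller: one class per XPage, referenced from EL as
// "controller". PersonModel is a stand-in for the model object sketched
// in the earlier answer.
class PersonModel implements java.io.Serializable {
    PersonModel() { }             // "new document" case
    PersonModel(String unid) { }  // load an existing document by UNID
}

public class PersonPageController implements java.io.Serializable {
    private PersonModel currentDocument;

    // Called when the page loads; documentId would come from the query string.
    public void init(String documentId) {
        if (documentId == null || documentId.isEmpty()) {
            currentDocument = new PersonModel();           // new document
        } else {
            currentDocument = new PersonModel(documentId); // load what we want
        }
    }

    // XPage fields bind to controller.currentDocument.* via this getter.
    public PersonModel getCurrentDocument() { return currentDocument; }
}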
I've started to really try and detail this in more recent NotesIn9s, especially the little series on Java for XPages developers. I think I got far enough there to show you what you need to know. I do plan on doing a lot more on this topic as soon as I can.

How to generate POCO proxies from an existing database

I recently switched to Entity Framework 5. Now, I want to generate the POCO classes from an existing database and also I need both lazy loading and change tracking. So all the scalar properties should be virtual as well as navigation properties.
Adding a new ADO.NET Entity Data Model results in an .edmx file and some other .cs and .tt files.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Secondly, how can I generate proxy-enabled POCO classes?
PS: I accepted Slauma's answer as the best and the only answer so far, but I don't agree with the first part of it. Here is my argument:
Slauma talks about two problems with proxies: restrictions and performance.
About the restrictions on the proxy-enabled entities:
When the classes are generated in the DB First method by Entity Framework, the rules that the classes must follow to enable change-tracking proxies are not that important, because they are not restrictive at all. Who really cares whether the navigation collections are IList or HashSet? Talking about the restrictions is sensible only when there are previously designed classes in the application and tables are to be generated from them.
Complex properties are not supported in DB First, so we can exclude them from our discussion.
About the performance:
In the referenced article, and in other experiments I have studied so far, the results are not convincing enough to reject proxies in favor of snapshots. First, the experiments were done on a large number of entities, namely 10,000. It is not improbable that a batch process in your application (not in the database) works on a large number of entities, but better approaches, such as stored procedures, are usually assumed for those cases.
Second, depending on the type of the application and its needs, we usually deal with a small number of entities, for example when the Repository pattern is implemented and used; then there is no difference between the performance of proxies and snapshots.
Interestingly, in the referenced experiment, re-assigning the same value to the properties was the only case where the performance of proxies dramatically fails. But who really does this? It is very easy to avoid repeatedly notifying the change tracker. Again, the significant problem arises only when a large number of entities are dealt with.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Using change tracking proxies is not recommended as the default change tracking strategy. It is explained in more detail in this blog post. In essence, the main reason to use change tracking proxies - better performance compared to snapshot based change tracking - is not always guaranteed - and sometimes it's even worse - and the list of disadvantages is longer than for snapshot based change tracking.
In the past the T4 templates that generated POCO entities indeed marked all properties - including scalar properties - as virtual and prepared the entities for proxy based change tracking. For the reasons described in the blog this has been changed for the newer templates, including the DbContext Generator for EF 5, as mentioned in this comment below the blog post linked above. Now, only navigation properties are marked as virtual, but not scalar properties, which allows lazy loading but is not sufficient for change tracking proxies.
Secondly, how can I generate proxy-enabled POCO classes?
I am not aware of any available T4 template that would do this, but it is quite easy to modify the default template to mark also the scalar properties as virtual:
In your project you should have two files with a .tt extension: YourModelContainer.tt and YourModelContainer.Context.tt. Open the YourModelContainer.tt file.
In this file you'll find a method called Property:
public string Property(EdmProperty edmProperty)
{
    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1} {2} {{ {3}get; {4}set; }}",
        Accessibility.ForProperty(edmProperty),
        _typeMapper.GetTypeName(edmProperty.TypeUsage),
        _code.Escape(edmProperty),
        _code.SpaceAfter(Accessibility.ForGetter(edmProperty)),
        _code.SpaceAfter(Accessibility.ForSetter(edmProperty)));
}
Change the line with...
Accessibility.ForProperty(edmProperty),
...to...
AccessibilityAndVirtual(Accessibility.ForProperty(edmProperty)),
That's it.
Just to mention it in case you are not familiar with it: there is a second kind of Database-First approach available, namely Reverse Engineering an existing database to a Code-First model. This approach doesn't use a T4 template at all but creates a Code-First model and a context with Fluent API mapping. It is useful if you want to customize and extend the model classes (you could then also add the virtual modifiers manually) and proceed with the Code-First workflow (and Code-First Migrations) in future to update and evolve your database schema.

ServiceStack Swapping ORMLite to Entity Framework

I want to replace ORMLite with EF5, and please don't ask me why :P ... So I searched around the net and had no luck finding much information on how to actually do this.
Do I need to rewrite ORMLiteConnectionFactory into an EFConnectionFactory that registers in global.asax.cs? It seems like a lot to implement and very complex because it is linked to IOrmLiteDialectProvider, OrmLiteConfig and all that, and it doesn't seem right because SS normally has a simple answer to all questions. For example, it is rather easy if I want to change Funq to another DI provider.
Is ORMLite the fixed choice of weapon, or is this a flexible option that I can tune? Please help.
For all intents and purposes you're better off pretending OrmLite doesn't exist. OrmLite simply provides extension methods off ADO.NET's raw IDbConnection interfaces, which works similarly to (and is why it is able to be used alongside) Dapper and other Micro ORMs.
Entity Framework, in contrast, manages its own heavy abstraction that's by design not substitutable with other Micro ORMs, so you shouldn't attempt this route.
Simply ignore that OrmLite exists and use Entity Framework as you normally would. Last I heard, EF doesn't play too nicely with IoC containers, so you probably have to resort to the normal case of instantiating a new EF DataContext whenever you want to use it.

Meta Programming, what's it good for?

So Meta Programming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters, as mentioned by Sjoerd:
Both getter and setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know of that natively supports custom code directly as a template parameter.
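As a toy illustration of "a program that writes programs", here is a small Java generator that stamps out the getter/setter pattern from a template, with the field's type and name as its variability parameters (all names here are made up for the example; real MDSD tooling is far richer):

import java.util.List;

public class GetterSetterGenerator {
    // The "template": the well-defined getter/setter pattern, with the
    // field's type (%1$s), name (%2$s) and capitalized name (%3$s) as
    // variability points.
    private static final String TEMPLATE =
          "    private %1$s %2$s;%n"
        + "    public %1$s get%3$s() { return %2$s; }%n"
        + "    public void set%3$s(%1$s value) { this.%2$s = value; }%n";

    // Generates the source of a class with a getter/setter pair per field.
    static String generate(String className, List<String[]> fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String[] field : fields) { // field[0] = type, field[1] = name
            String cap = Character.toUpperCase(field[1].charAt(0)) + field[1].substring(1);
            src.append(String.format(TEMPLATE, field[0], field[1], cap));
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        // Prints the generated source of a User class with name and age.
        System.out.println(generate("User", List.of(
            new String[] { "String", "name" },
            new String[] { "int", "age" })));
    }
}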
Meta programming is not only adding methods at runtime; it can also be automatically creating code at compile time, i.e. code generating code. Two examples:
Web services (the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be generated automatically for most properties.

Subsonic: Bring me to tiers

This is an embarrassingly basic n-tier question.
I've created a DAL project in VS2008 with subsonic. It's got a widget class, a widgetcollection class, and a widgetcontroller class.
I've created my Business logic project (no I can't put it in the same tier) that references it. Using certain business criteria, it selects a collection of widgets in a function that returns a widgetcollection.
My question is: how does my GUI layer bind the collection to a grid? I know that the widgetcollection is a valid datasource for a datagrid, but how does the GUI layer know what a widget and widgetcollection are? Surely I don't have to reference the DAL from the GUI, that negates the whole point.
Firstly, I don't think this is an embarrassingly basic n-tier question.
It is a very interesting subject and one I attempted to stimulate discussion for in the old Subsonic Forums.
I share your reluctance to expose my GUI layer to the DAL.
My GUI layer only talks to BLL using the vocabulary and topics of my own Entity Model and only returns my own entities or lists or in some cases Data Tables.
My BLL only talks to a MAPping layer which maps Fetches, Saves, etc. to the appropriate DAL CRUD methods and converts the returned Subsonic types to my Entity types.
In doing this I was surprised at how much of Subsonic I had to duplicate, and at times I felt I was going down the wrong road. I am feeling more comfortable with it now, though it still needs refactoring and refining.
For example, finding a flexible, generic means of indicating to my BLL which row(s) I wanted returned in a fetch was a challenge I hadn't expected, and I finished up writing a generic query class with a fluent interface which looks a lot like a Subsonic Select.
FWIW, I think you are headed down the right track; I guess what you have to do, though, is decide how you want to define those Subsonic types to your GUI.
Rob has an interesting discussion you may be interested in.
(using SubSonic 2.x) In my BLL classes I have a property which gives an object reference to the relevant DAL class. My UI form has a reference to the BLL class, so from the form I can address the DAL properties and methods via .BLL.DAL.xxxx
FWIW, I have never managed to successfully bind a SubSonic collection to a DataGridView. As alternatives, I sometimes use the collection's .ToTable() method to create a DataTable and then bind to that, or I bind manually using .AddRow().
Look at the documentation for the IBindingList interface on MSDN; it has a pretty good sample.
Create, for example, a CustomersList class in your model that uses a Customer class in your BLL. Bind the grid to an instance of the CustomersList class. The presentation layer has no knowledge of the subsonic table classes.
You probably need to use an interface. You can easily create an interface based on the Widget in your DAL (right-click on the class and create an interface from the class). Next, take the interface and add it to your Business Logic Layer or a separate project just for interfaces. Once you have done that, you can add a reference to the interface both in the DAL and in the GUI. This can also help if you ever change your data storage from a database to XML, etc.
