I'm wondering how to use CDI to build multiple independent object trees representing the same type of data. Here is an example:
I have a Car, in which I want to inject GearShift and Engine.
I also want to inject Engine into GearShift.
This Car + GearShift + Engine is my tree.
If I want to have several cars at the same time, what would be the best way to do this using CDI?
I would expect to be able to define a kind of scope or a qualifier for each tree.
But CDI scopes and qualifiers are defined statically, while the number of cars is dynamic.
As an additional requirement, I would like to inject another dependency that would be shared between cars.
For example, all cars would share the same Road for their whole lifetime (couldn't find something else that makes more sense).
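To make the structure concrete, here is a rough sketch of one tree (class and member names are only for illustration; each class would be in its own file):

```java
import javax.inject.Inject;

public class Engine { }

public class GearShift {
    @Inject Engine engine;   // must be the same Engine instance as the Car's
}

public class Car {
    @Inject GearShift gearShift;
    @Inject Engine engine;   // one Engine per car tree
    @Inject Road road;       // shared across all cars for their whole lifetime
}

public class Road { }
```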
Thanks in advance
I am not sure if I understand you properly, but I think you can at least use the session scope, because you could create several sessions with different IDs. Every session will have its own session map.
In that way you can manage a separate set of Car objects.
If you use Weld as your CDI implementation, you can use a BoundSessionContext, which can be backed by any Map you supply. You associate the context with the map and activate it while you work with that tree, and destroy the tree later by calling invalidate() before deactivating.
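Roughly like this, assuming Weld's org.jboss.weld.context.bound API and that you make Car, GearShift and Engine @SessionScoped (CarFactory is only a sketch of the mechanism, not a complete implementation):

```java
import java.util.Map;
import javax.enterprise.inject.Instance;
import javax.inject.Inject;
import org.jboss.weld.context.bound.Bound;
import org.jboss.weld.context.bound.BoundSessionContext;

public class CarFactory {

    @Inject @Bound
    BoundSessionContext sessionContext;

    @Inject
    Instance<Car> cars;

    // One map per car tree; keep the map around for as long as the tree should live.
    public Car createCar(Map<String, Object> storage) {
        sessionContext.associate(storage);
        sessionContext.activate();
        try {
            // Car, GearShift and Engine (all @SessionScoped) end up in this map.
            // The returned reference is a client proxy, so the same map must be
            // associated and activated again whenever this car is used later.
            return cars.get();
        } finally {
            sessionContext.deactivate();
            sessionContext.dissociate(storage);
        }
    }

    // When a car is no longer needed: activate its map, invalidate, deactivate.
    public void destroyCar(Map<String, Object> storage) {
        sessionContext.associate(storage);
        sessionContext.activate();
        sessionContext.invalidate();
        sessionContext.deactivate();
        sessionContext.dissociate(storage);
    }
}
```

The shared Road can then simply stay in a wider scope such as @ApplicationScoped, so every car tree sees the same instance.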
Suppose I acquire a contextual reference of a bean programmatically, using BeanManager#getBeans(Type, Annotation...) and BeanManager#resolve(Set) and BeanManager#getReference(Bean, Type, CreationalContext). (This is standard stuff.)
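For reference, the acquisition looks roughly like this (MyBean is just a stand-in class for whatever bean I'm actually resolving):

```java
import java.util.Set;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;

public class Lookup {

    // stand-in for whatever @Singleton-scoped bean I'm actually resolving
    public static class MyBean { }

    static MyBean acquire(BeanManager beanManager) {
        Set<Bean<?>> beans = beanManager.getBeans(MyBean.class);
        Bean<?> bean = beanManager.resolve(beans);
        CreationalContext<?> cc = beanManager.createCreationalContext(bean);
        MyBean reference = (MyBean) beanManager.getReference(bean, MyBean.class, cc);
        // ...use the reference... and then: destroy? release? nothing?
        return reference;
    }
}
```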
Suppose further that the bean in question is in @Singleton scope (it could be any scope other than @Dependent; I've picked @Singleton just (a) to make this concrete and (b) to get client proxies out of the picture to keep it simple). (Note that strictly speaking I should be unaware of what scope applies to the bean I'm working with, but my question may be directly related to this.)
Suppose further that the bean in question has a @Dependent-scoped contextual reference "inside" it. (This helps to highlight the necessary pairing of bean destruction with CreationalContext#release(); see below.)
Suppose further that the reference I get back is the first such reference. That is, my acquiring this reference is what causes the underlying contextual instance singleton to be created. All fine and simple and good so far. Yay; I have my object.
I do whatever I'm going to do with this contextual reference. Now I have just created something and don't want to leak memory, so…what should I do?
Should I simply try to destroy the existing contextual instance underlying the reference, perhaps using AlterableContext#destroy(Contextual)? But don't I now have to know intimate details about the scope and whether it is suitable to destroy any existing instance? (And of course if I do this I'd better call CreationalContext#release() too.)
Should I simply release the CreationalContext used for construction (without explicitly destroying the singleton instance)? But this will destroy the singleton's dependent references. That seems like something that would result in inconsistent state.
Should I do nothing? In the case of @Singleton this does no harm, since a singleton lasts for as long as the application, but, again, now I have to know intimate details about the scope. Is this by design?
Is the proper answer really that I need to know lots of things about the particular underlying scope (which, if user-supplied, it may be impossible to do), and need to tailor my destruction/releasing actions to the semantics of the scope?
I'm trying to implement a proper layer separation in my XPage project. Ideally I'm trying to get to a point where the XML in the XPage contains no SSJS and uses only EL to access Java objects.
So far I've worked out how to load all my data from the Domino database into Java beans (where 1 document = 1 object, more or less), I'm reading view contents into Java Maps or Lists, and I've managed to display the content of these collections in repeat controls.
What I'm unsure of is how to display the content of a 'form', of a single document, without referencing the Domino document. In particular, I'm unsure of how to deal with the 'new document' case. I suppose I create an empty object, then set that object as a data source for the XPage.
I'm aware that I have to use an ObjectDataSource for this, but I'm unsure where to actually store it. I read an article by Stephan Wissel stating that one shouldn't put them in a managed bean, so where can I put it? In one of the scoped variables like viewScope?
Right now I've written an 'ApplicationBean', which is a session-scoped managed bean where I'm storing all my objects.
What is the best practice? It seems that there are many different ways to meet that goal. Currently I'm exploring Christian Güdemann's XPages Toolkit, which sounds very promising. I know that Samir Pipalia, John Daalsgard and Frank van der Linden have worked up their own frameworks.
So how should I go about it? Any pitfalls?
This is a large topic indeed. As Paul mentioned, Tim's document model classes are a great example of how to do that clearly, and Tim goes into more detail in later episodes in that NotesIn9 series. My own Framework's model objects are fairly similar, though I also added collection managers to handle the dirty business of accessing views. For better or for worse, almost every XPage developer solves this problem in a unique way.
There are a number of ways you can go about implementing this sort of thing, and some of the differences aren't terribly important in normal cases (for example, whether you preload all data from the document when constructing the model object or do lazy fetching to the back-end only as needed), but there are definitely a couple overarching questions to tackle.
Model Access
As you mentioned in the question, one of the big problems is how you actually access model objects from the XPage - how the objects are fetched from the DB or created anew. My Framework's model objects use a conceit of "Manager" objects, which are application-scoped beans that allow getting either named collections (which map to views), model objects by UNID, or a new model object via the keyword "new". Generally, these models (which are Serializable) are then stored in the view scope of the page using them either via <xp:dataContext/>, <xe:objectData/>, or the Framework's own <ff:modelObjectData/>.
I've found it very wise to avoid using managed beans to represent individual objects (like "CurrentWhatever" that you then fill with data on each page), since that muddies up your faces-config in the best case or runs into session problems in the worst (if you put it in session scope, which I rarely use).
How you implement "new" vs. "fetched" model objects depends largely on the tack you take to write your models in the first place, but most boil down to having two constructors: one that takes a UNID (at least) to point to an existing document and one that creates a new one. If you go the "write every property explicitly in the object with getters and setters" route, the latter would also initialize all of the fields with default values instead of reading them from a document. Internally, you should have a field to store the UNID of the document, which can indicate whether it's new or not - then, your save method can check if this field is empty and create a new document if needed (and then store the new doc's UNID in the field).
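As a bare-bones illustration of that shape (a hypothetical Person model on the explicit-getters-and-setters route, not the Framework's actual classes):

```java
import java.io.Serializable;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.NotesException;

public class Person implements Serializable {
    private static final long serialVersionUID = 1L;

    private String unid;   // empty/null means "new, not yet saved"
    private String name;

    public Person() {
        // "new document" case: initialize defaults instead of reading a doc
        this.name = "";
    }

    public Person(Database db, String unid) throws NotesException {
        // "existing document" case: read the fields we care about, then let go of the doc
        this.unid = unid;
        Document doc = db.getDocumentByUNID(unid);
        this.name = doc.getItemValueString("Name");
    }

    public void save(Database db) throws NotesException {
        Document doc = (unid == null || unid.isEmpty())
                ? db.createDocument()
                : db.getDocumentByUNID(unid);
        doc.replaceItemValue("Form", "Person");
        doc.replaceItemValue("Name", name);
        doc.save();
        unid = doc.getUniversalID(); // remember which document we belong to from now on
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public boolean isNew() { return unid == null || unid.isEmpty(); }
}
```

Keeping only Strings and other serializable values in the object (rather than holding the Document itself) is what lets it sit safely in the view scope.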
Views
It sounds like you're already reading your model collections into Lists, which is good. One downside there, though, is scalability: with small collections (fewer than 100 entries or so), you're not likely to run into any load-speed problems, but beyond that things are going to slow down on initial page load as your code reads in the entire view ahead of time. You can mitigate this somewhat with efficient view reading, but there's a limit. The built-in views are generally speedy because they only load data as needed (they also cheat like hell to do so, but that's another issue).
This is a noble goal to aim for yourself, but doing so to cover all cases is no small feat: you end up running into questions of FT searching, column resorting, efficient data preloading (you don't want to re-open the View object only to read in one entry at a time, but you also don't want to read the whole thing), use in viewPanel and maybe others (which require specialized interfaces), expanding/collapsing categories, and so forth. It's a large sub-topic on its own.
Esoterica
You're also liable to run across other areas that are more difficult than you'd think at first, such as "proper" rich text handling and file attachments. Attachments, in particular, require fighting directly with the XSP framework to get them to function properly with custom model objects and the standard upload/download controls.
Case-sensitivity in field names is another potential area of trouble. If you're writing getters and setters for all of your fields, it's a moot point, but if you're going the "thin wrapper" route (which I prefer), it's important to code any intermediary caches/lookups in a way that deals with the fact that "FOO" and "foo" are (basically) the same item name to Notes, but are distinct in Java. The tack I take is to make extensive use of TreeMaps: if you pass String.CASE_INSENSITIVE_ORDER as the parameter to the constructor, the map treats Strings as case-insensitive when used as keys.
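For example (plain Java, nothing framework-specific):

```java
import java.util.Map;
import java.util.TreeMap;

public class CaseInsensitiveDemo {
    public static void main(String[] args) {
        Map<String, Object> items = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        items.put("FOO", "bar");
        System.out.println(items.get("foo"));          // bar - "FOO" and "foo" hit the same entry
        System.out.println(items.containsKey("Foo"));  // true
    }
}
```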
Having your model objects work with all the standard controls like that may or may not be a priority - I find it very valuable, so I did a lot of legwork to make it happen with my framework, but if you're just going to do some basic Strings-and-numbers models, you don't necessarily need to worry.
So... yeah, it's a big topic! Depending on how confident you are with Java and the XPages undergirdings, I would suggest either going the route of fairly-simple "beans with getters and setters" for your objects or by looking into the implementation details of one of the existing frameworks (my own or the ones you mentioned). Sadly, there are a lot of little things that will crop up as your code gets more complicated, many of which are non-obvious to deal with.
Jesse Gallagher's Scaffolding framework also accesses Java objects rather than dominoDocument datasources. I've used (and modified) Tim Tripcony's Java objects from his "Using Java in XPages" series on NotesIn9 (the first is episode 132). The source code for what he does is on BitBucket. This basically converts the dominoDocument to a Map. It doesn't cover MIME (rich text) or attachments. If it's useful, I can share the amendments I made (e.g. making field names all the same case, to ensure it still works for existing documents, where fields may have been created with LotusScript).
Andrew - Jesse's one of the experts here so I'd read his response carefully.
What I do is take one of the key pieces of Jesse's bigger framework - the "page controllers" - and use that HEAVILY. So I end up with a Java class for each XPage to act as the controller. "All" Jesse's page controller framework does is make it a little easier to consume. So you can reference it on each page as "controller" and don't need to make individual managed beans for them.
I still will use SOME SSJS on the XPage if I really need to, for things like button events... some methods that don't have proper getters and setters... HashMap.size() for instance. But the vast bulk of the code goes into the Java class. No real need for viewScope variables any more as well.
In the case of a "New Document": in the controller I'll create a new Java object that represents the "current document". I'll bind all the fields to that. If it's new I create a new object and assign it to the private variable. If I'm loading from somewhere then I take that variable and load the document I want.
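Very roughly, the controller ends up shaped something like this (hypothetical names; PersonModel.load() just stands in for however your model layer fetches an existing document):

```java
import java.io.Serializable;

// Hypothetical controller for one XPage; the page refers to it as "controller"
// and binds its fields to controller.currentDocument.*
public class PersonPageController implements Serializable {
    private static final long serialVersionUID = 1L;

    private PersonModel currentDocument;  // the model object the page edits

    // Called when the page loads, e.g. from beforePageLoad, with the UNID (if any)
    public void init(String unid) {
        if (unid == null || unid.isEmpty()) {
            currentDocument = new PersonModel();        // "new document" case
        } else {
            currentDocument = PersonModel.load(unid);   // existing document case
        }
    }

    public PersonModel getCurrentDocument() {
        return currentDocument;
    }
}
```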
I've started to really try and detail this in more recent NotesIn9s, especially the little series on Java for XPages developers. I think I got far enough there to show you what you need to know. I do plan on doing a lot more on this topic as soon as I can.
I might be an idiot here, but I have a question regarding XPages and managed beans. I'm trying to separate logic and presentation by moving logic to a bean corresponding to an entity (a document, more or less). I have a data-provider class fetching and setting data. This is fine and all with one XPage, but as the application gets more advanced, with relations and multiple XPages, I run into a problem (I'm looking at http://blog.mindoo.com/web/blog.nsf/dx/18.03.2011104725KLEDH8.htm?opendocument&comments#anc1 for inspiration).
If I'm not wrong, I can't assign different managed beans to different XPages, so setting different data-provider classes and business-logic beans for different XPages cannot be done in faces-config.xml. Now I might be going about this in the wrong way, but any pointers are most appreciated.
Best regards,
Olof
You can't assign managed beans (as in, define them in faces-config) for specific XPages (as far as I know); they are application-specific. I think you are looking for something like the factory pattern/creator pattern. These are design patterns used to create instances of a specific class. For more info see the Wikipedia articles on the factory method pattern and creational patterns.
When you create, for instance, a pizzeria website, you could have a factory create specific types of pizzas depending on the button you are pressing. Each pizza is then created in memory (a bean) and used as the data source of your custom control. When the customer is done ordering, the pizza is saved to a Notes document (saved state) and transformed, together with all the other products ordered, into an order for that customer.
Whenever you want to retrieve that particular pizza again (for instance, when you want to check which pizza the customer has ordered), you only need to ask the factory for the pizza with that number/ID and the factory will return that pizza from the Notes document. Build once, use many.
So basically you don't have several managed beans per page but per application and you use them across your application wherever you need them.
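In code, the factory idea is roughly this (hypothetical names; the Notes document persistence is only hinted at in comments):

```java
import java.util.HashMap;
import java.util.Map;

// Application-scoped factory: build once, use many
public class PizzaFactory {

    private final Map<String, Pizza> cache = new HashMap<>();

    // Called from a button: create a specific type of pizza in memory
    public Pizza createPizza(String type) {
        Pizza pizza = new Pizza(type);
        cache.put(pizza.getId(), pizza);
        return pizza;
    }

    // Later: ask the factory for the pizza with that ID
    // (in the real thing this would re-read it from the saved Notes document)
    public Pizza getPizza(String id) {
        return cache.get(id);
    }
}

class Pizza {
    private static int counter = 0;
    private final String id = "PIZZA-" + (++counter);
    private final String type;

    Pizza(String type) { this.type = type; }

    String getId() { return id; }
    String getType() { return type; }
}
```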
Look at beans as "global variables", so you can have different functions by defining different names. For example: "invoice", "customer", "order", "orderItem" and so on. It's up to you.
Please share your view on this kind of thing I'm currently testing out:
Have a JPA entity inside my JSF managed bean
Bind the entity's properties to the JSF form elements like input text, combo, even a datatable for the entity's list of detail objects, for example
Have the entity processed by a service object, meaning the entity object itself is passed, perhaps along with some other simple variables/objects
The service will do some basic validation or simple processing, and deliver the entity object to the DAO layer to be persisted
And the JSF view will reflect the detached entity
Is this kind of solution, with passing the entities between tiers, OK?
Forgive me for my inexperience in this matter, since I was used to playing with 'variables' in a webapp (using a map-based form bean in Struts 1), but I've read about transforming the entity objects into some other format, and I'm not sure what that is for.
If the relations between entities are defined, we can bind them to JSF components, and therefore render based on and populate the entity's properties.
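To make it concrete, here is a stripped-down sketch of what I mean (Order and OrderService are hypothetical placeholders for my entity and my service/DAO):

```java
import java.io.Serializable;
import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;

@ManagedBean
@ViewScoped
public class OrderBean implements Serializable {

    @EJB
    private OrderService orderService;   // service layer doing validation + handing off to the DAO

    private Order order = new Order();   // JPA entity, bound directly to the form fields

    public Order getOrder() {
        return order;                    // e.g. #{orderBean.order.customerName} in the page
    }

    public String save() {
        orderService.save(order);        // the entity object itself is passed between the tiers
        return "orderSaved";             // the view keeps reflecting the (now detached) entity
    }
}
```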
Yes, this is perfectly fine and in fact the recommended way to do it nowadays.
This "transforming the entity objects into some other format" refers probably to the Data Transfer Object pattern, which was necessary in the bad old days before annotations, when entity classes usually had to inherit from some framework-specific base class, undergo bytecode manipulation or were implemented as proxy objects by an EJB container.
Such entity objects were either impossible to serialize or contained much more state than the actual entity data and therefore would waste a lot of space when serialized. So if you wanted to have a separate app server tier, you had to use the DTO pattern to have it communicate efficiently with the web tier.
So meta-programming -- the idea that you can modify classes/objects at runtime, injecting new methods and properties. I know it's good for framework development; I've been working with Grails, and that framework adds a bunch of methods to your classes at runtime. You have a name property on a User object, and bam, you get a findByName method injected at runtime.
Has my description completely described the concept?
What else is it good for (specific examples) other than framework development?
To me, meta-programming is "a program that writes programs".
Meta-programming is especially good for reuse, because it supports generalization: you can define a family of concepts that belong to a particular pattern. Then, through variability you can apply that concept in similar, but different scenarios.
The simplest example is Java's getters and setters as mentioned by @Sjoerd:
Both getter and setter follow a well-defined pattern: a getter returns a class member, and a setter sets a class member's value. Usually you build what is called a template to allow application and reuse of that particular pattern. How a template works depends on the meta-programming/code generation approach being used.
If you want a getter or setter to behave in a slightly different way, you may add some parameters to your template. This is variability. For instance, if you want to add additional processing code when getting/setting, you may add a block of code as a variability parameter. Mixing custom code and generated code can be tricky. ABSE is currently the only MDSD approach that I know that natively supports custom code directly as a template parameter.
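As a toy illustration of "a program that writes programs": a getter/setter template with one variability parameter (extra code mixed into the generated setter). The names are made up for the example:

```java
public class AccessorTemplate {

    // The "template": generates getter/setter source for a class member.
    // extraSetterCode is the variability parameter - custom code mixed into the generated code.
    static String generateAccessors(String type, String name, String extraSetterCode) {
        String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
        return "public " + type + " get" + cap + "() {\n"
             + "    return this." + name + ";\n"
             + "}\n"
             + "public void set" + cap + "(" + type + " " + name + ") {\n"
             + "    " + extraSetterCode + "\n"
             + "    this." + name + " = " + name + ";\n"
             + "}\n";
    }

    public static void main(String[] args) {
        // Same pattern, two applications of it: generalization plus variability
        System.out.println(generateAccessors("String", "name", ""));
        System.out.println(generateAccessors("int", "age",
                "if (age < 0) throw new IllegalArgumentException();"));
    }
}
```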
Meta-programming is not only adding methods at runtime; it can also be automatically creating code at compile time, i.e. code that generates code.
Web services (i.e. the methods are defined in the WSDL, and you want to use them as if they were real methods on an object)
Avoiding boilerplate code. For example, in Java you should use getters and setters, but these can be made automatically for most properties.
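Tying this back to the Grails example in the question: in plain Java you can approximate a runtime-resolved finder with a dynamic proxy. This is only a toy sketch of the idea, not how Grails actually implements its dynamic finders:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.Map;

public class DynamicFinderDemo {

    interface UserFinder {
        List<String> findByName(String name);
        List<String> findByCity(String city);
    }

    public static void main(String[] args) {
        // Fake "data store": property name -> (value -> matching users)
        Map<String, Map<String, List<String>>> store = Map.of(
                "name", Map.of("alice", List.of("alice@example.com")),
                "city", Map.of("berlin", List.of("bob@example.com")));

        InvocationHandler handler = (proxy, method, arguments) -> {
            // Derive the property from the method name: findByName -> "name"
            String property = method.getName().substring("findBy".length()).toLowerCase();
            return store.getOrDefault(property, Map.of())
                        .getOrDefault((String) arguments[0], List.of());
        };

        // No hand-written implementation of UserFinder exists anywhere;
        // the methods are "resolved" only when they are called.
        UserFinder finder = (UserFinder) Proxy.newProxyInstance(
                UserFinder.class.getClassLoader(),
                new Class<?>[] { UserFinder.class },
                handler);

        System.out.println(finder.findByName("alice"));  // [alice@example.com]
        System.out.println(finder.findByCity("berlin")); // [bob@example.com]
    }
}
```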