I'm learning the JSF lifecycle from the Oracle website and ran into an ambiguous point concerning component tree rebuilding.
According to my understanding, the entire component tree is rebuilt after every postback request (including ajax), based on the latest saved view state. My question is: after the component tree has been successfully rebuilt from the saved view state, what does the server do with the old component tree and old view state? Are they discarded, or stored somewhere (like a view pool) for later reuse?
It depends on the state saving mode you are using. If you are using client-side state saving, the information related to the view is stored in the javax.faces.ViewState hidden field. When the server receives the request, it creates the view from that state, processes it, and writes the field into the response. If you are using server-side state saving, the state is stored in the session, so in some cases the old state is still there, but there is an algorithm that discards old views from the session.
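For reference, the mode is selected with the standard javax.faces.STATE_SAVING_METHOD context parameter in web.xml:

```xml
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <!-- "client" stores the state in the javax.faces.ViewState hidden field;
         "server" keeps it in the HTTP session -->
    <param-value>server</param-value>
</context-param>
```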
With JSF 2.0 Partial State Saving (PSS), the view is derived from two things: the initial state and the delta state. The initial state is derived by building the view again using the Facelets algorithm, so what is stored as view state is just a small fraction of the overall state. The trick is very effective; after that improvement, people don't need to worry about state size anymore with JSF. That has resulted in very good performance compared with stateless frameworks. See Understanding JSF Performance part 3 on JSFCentral.
In Apache MyFaces 2.2 there is a view pool algorithm. The idea is to take advantage of the state saving algorithm and use it to reuse views that have already been built. It can give you a performance boost of about 8-10%, but third-party libraries need to be compatible with this approach. See How to configure View Pool in Apache MyFaces. This was conceived as a solution for getting the ultimate performance, but most of the time you'll do fine without it.
With PSS enabled, the Facelets algorithm is called at two points: when the view is built or restored, and before the render response phase to refresh components like c:if and so on.
Spring Session provides a Hazelcast4IndexedSessionRepository that stores the session data (represented as MapSession) in a Hazelcast cache. Instead of returning the MapSession directly from the repository, it uses its own wrapper class HazelcastSession (both MapSession and HazelcastSession implement the same Session interface). One reason is surely so that it can implement the different flush and save modes and support updating the principal name index. But it also remembers all changes to the session as deltas, and when the save() method on the repository is called, it uses a Hazelcast4SessionUpdateEntryProcessor to update the corresponding map entry of the Hazelcast IMap.
Why does the session repository not just set the MapSession object on the IMap directly via put without using an EntryProcessor? What is the benefit of the current implementation of recording the change deltas?
To my understanding of the Hazelcast EntryProcessor documentation, an entry processor is useful when a map entry should be updated often without the client having to retrieve the existing value first. Instead of first getting the old value (which might require a network round-trip), the entry processor can be executed directly on the Hazelcast member that holds the data.
But in the case of a Spring Session, the session data is loaded from the Hazelcast map at the beginning of each incoming web request anyway (or at the latest when the application code wants to read/modify the session content) and then held in local memory. All changes to the session during the processing of such a request are made on the local session object, and it is then saved back to the Hazelcast cache when the request ends (or earlier, depending on the flush/save mode). That means the saving can be done without executing an extra get request on the IMap first. So why not just call map.put(MapSession) instead of using an EntryProcessor to update only the attributes noted in the delta list?
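To make the concurrency difference concrete, here is a minimal plain-Java sketch of the delta idea. This is not the Spring Session or Hazelcast code; `ConcurrentHashMap.compute` stands in for an EntryProcessor, which Hazelcast likewise executes atomically on the member owning the entry, and the class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DeltaUpdateSketch {
    // Stands in for the distributed IMap: session id -> session attributes.
    static final ConcurrentHashMap<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();

    // Analogous to the EntryProcessor: apply only the recorded deltas
    // atomically on the stored entry instead of put()-ing a full stale copy.
    static void applyDeltas(String sessionId, Map<String, Object> deltas) {
        sessions.compute(sessionId, (id, stored) -> {
            Map<String, Object> result = stored == null ? new HashMap<>() : new HashMap<>(stored);
            result.putAll(deltas);
            return result;
        });
    }

    public static void main(String[] args) {
        sessions.put("s1", new HashMap<>(Map.of("a", 1, "b", 2)));
        // Request 1 changed only "a"; a concurrent request 2 changed only "b".
        applyDeltas("s1", Map.of("a", 10));
        applyDeltas("s1", Map.of("b", 20));
        // Both updates survive. A full put() of either request's stale
        // local copy would have clobbered the other request's change.
        System.out.println(sessions.get("s1"));
    }
}
```

With a full `map.put(MapSession)`, request 1 would write back its stale copy of `b` and silently undo request 2's update; applying deltas confines each request's write to the attributes it actually touched.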
The only explanation I can think of is an attempt to minimize concurrent modification of the same attributes. By saving only the deltas in the EntryProcessor, instead of storing the whole MapSession that was loaded earlier, overwriting an attribute value that was modified concurrently in a parallel process becomes less likely. But the chance is not zero: especially if my application code stores and updates only the same couple of attributes in the session all the time, the update will not be safe even with the EntryProcessor, because there is no optimistic locking scheme in place.
Thanks for the insight!
An excerpt taken from a book,
For a stateless view, the component tree cannot be dynamically generated/changed (for example, JSTL and bindings are not available in the stateless mode). You can't create/manipulate views dynamically.
I perfectly understand the concept of going stateless as in a login form.
What I don't understand is the author's point on, JSTL and bindings are not available in the stateless mode. Please elucidate.
The author seems to be confused himself, or is overgeneralizing a bit too much.
The component tree can certainly still be dynamically generated/changed. This does not depend on stateful/stateless mode. The only difference from stateful mode is that those dynamic actions won't be remembered in the JSF state, so they can't be restored on postback.
In stateless mode, this will continue to work fine as long as those dynamic changes are initiated by a non-user event during view build time, such as the @PostConstruct of a request scoped bean referenced via the binding attribute, or a postAddToView event listener method; they will simply be re-executed. If, however, the method logic in turn depends on user-controlled variables/actions, such as request parameters or actions invoked during previous postbacks, or if it is executed too late, such as during the preRenderView event, then it is no longer guaranteed that, during the apply request values phase of the subsequent postback, the view will be identical to the one from which the submitted form was rendered. In such a case, processing the form submit may behave "unexpectedly" differently compared to a stateful view.
I have a background process receiving and applying changes to Core Data entities from a back end server using Restkit which works really well. All the entities have a version number property which updates when the back end accepts changes and publishes a new version. If an entity the user is viewing is changed I need to update the view with the latest version information.
Using KVO to observe version number for the current entity and refreshing the view when it changes works really well as long as version number is the last property.
That is, the 'column order' matters: property updates are applied one at a time, in order. If the version number is the last property, then by the time the observer is invoked, all changes to all other entity properties will have been applied.
If version number is not the last property defined, then when the observer is invoked, the updated values of the properties after it will not yet have been applied.
The solution is to change the model so that version number is always last. This works; however, I cannot find anything in the documentation suggesting that the sequence of property changes is guaranteed.
I assume the only way to get a watertight non-atomic notification is to register for managed object context change notifications and then process those notifications looking for changes to objects of interest. My concern is that this is not fine-grained, and there will be a lot of unnecessary processing to find relatively few things of interest.
Is this correct, or is there a way to ensure a non-atomic view of an object when using KVO?
If you wanted to use KVO, you would need to layer some change management on top: for example, when the managed object is saved, check the version number and change another (non-persistent) attribute that is being observed. You can be sure that everything has been updated as a logical set when the object is saved.
Generally, the context save notification is the approved approach. As long as you aren't making thousands of changes or a few very large saves to the context, it shouldn't be an issue. You can also look at using predicates to filter the changes and/or a fetched results controller (which does the observation for you).
I am working on a RichFaces-based JSF application that has com.sun.faces.numberOfViewsInSession and com.sun.faces.numberOfLogicalViews parameters set to 1 but has most of the managed beans set to a "session" scope. If reducing the memory footprint is the prime objective (with no significant deterioration to the page rendering times as well), what would be a better option?
Changing the scope to "request" so that the view state is not held for too long (unlike when the scope is set to "session").
I read somewhere that the scope of the beans could have a bearing on the size of the view (and "request"-scoped beans may not necessarily be available for GC at the end of the request). I have, however, seen an immediate performance degradation in this case.
Changing the scope to "application", since a number of pages are user-agnostic and don't really change based on the authenticated user. The application scope would result in a singleton; therefore, would the overall memory associated with a bean be significantly lower, as it is not tied to a user?
Also, would this result in the JSF View lingering around for a little too long? If yes, this would make it worse than how it is currently with the session scoped beans.
Last but not least, there are multiple forms within a view. Could this play a role as well in increasing the memory footprint?
If the beans don't really change for different users and are going to be needed most of the time, set them to application scope. That way only one instance of the object will be instantiated, and all requests will use it.
For objects that are not shared by all users, using request scope should make them eligible for garbage collection immediately, instead of their hanging around until the user's session expires.
That doesn't mean the collector will run immediately, but when collection is done they will be removed.
What does just-in-time initialization mean?
This is "lazy" initialization, i.e. initialization performed only when/if the underlying module or feature is needed for the first time.
The purpose of this practice is to save time and, to a lesser extent, memory or other run-time resources by not loading modules which are not systematically needed in a given session of the application.
It is particularly useful for HTML pages, where only the essential resources are loaded along with the main page; all other resources are merely marked with a placeholder in the DOM, containing just enough information for some (typically JavaScript) snippet to replace the placeholder so that the underlying image or other resource gets loaded when needed, following some action from the user (or a timer event). See this article for more info about the use of JITI with web pages.
With HTML this makes for a faster page load, which makes the application feel snappier.
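The pattern described above can be sketched in a few lines of Java. This is a generic, hand-rolled example (the class name `Lazy` and its factory-dropping trick are my own choices, not a standard API): the expensive work runs only on the first access, and subsequent accesses return the cached value.

```java
import java.util.function.Supplier;

public class Lazy<T> {
    private Supplier<T> supplier;   // dropped after first use
    private T value;

    public Lazy(Supplier<T> supplier) { this.supplier = supplier; }

    public synchronized T get() {
        if (supplier != null) {      // first access: initialize just in time
            value = supplier.get();
            supplier = null;         // allow the factory (and its captures) to be GC'd
        }
        return value;
    }

    public static void main(String[] args) {
        Lazy<String> config = new Lazy<>(() -> {
            System.out.println("loading...");   // runs at most once, on demand
            return "loaded";
        });
        // Nothing has been loaded yet; the first get() triggers the work.
        System.out.println(config.get());
        System.out.println(config.get());       // cached, no second "loading..."
    }
}
```

If `get()` is never called, the cost of `supplier.get()` is never paid, which is exactly the saving described above.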
Just-in-time initialization loads an object only when one of its attributes is read or written, i.e. when its fields are actually accessed.
Non-lazy initialization retrieves an object and all of its related objects at load time.
Just-in-time initialization improves performance and the effective utilization of resources.
If you are looking for Hibernate's just-in-time (lazy) initialization, check out this document.
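The mechanism behind Hibernate-style lazy loading can be illustrated with a plain-Java dynamic proxy. This is only a sketch of the idea, not Hibernate's actual implementation (Hibernate generates bytecode proxies of entity classes); the `Customer` interface, `loadFromDb`, and the `dbHits` counter are invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical entity type; Hibernate proxies entity classes similarly.
interface Customer {
    String getName();
}

public class LazyProxySketch {
    static int dbHits = 0;   // counts simulated database loads

    // Simulated database fetch, deferred until a getter is first called.
    static Customer loadFromDb(long id) {
        dbHits++;
        return () -> "customer-" + id;
    }

    // Returns a proxy that only hits the "database" on first method call.
    static Customer lazyCustomer(long id) {
        return (Customer) Proxy.newProxyInstance(
                Customer.class.getClassLoader(),
                new Class<?>[]{Customer.class},
                new InvocationHandler() {
                    Customer target;   // null until first access
                    @Override
                    public Object invoke(Object proxy, Method m, Object[] args) throws Exception {
                        if (target == null) {
                            target = loadFromDb(id);   // just-in-time initialization
                        }
                        return m.invoke(target, args);
                    }
                });
    }

    public static void main(String[] args) {
        Customer c = lazyCustomer(42L);
        System.out.println(dbHits);      // 0: nothing loaded yet
        System.out.println(c.getName()); // triggers the load
        System.out.println(dbHits);      // 1
    }
}
```

This is why, in ORMs, merely obtaining a lazy reference is cheap; the real fetch (and its cost) happens only when a field of the object is first accessed.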