Utility methods in an application scoped bean - JSF

Do you think it is a good idea to put all widely used utility methods in an application scoped bean?
In the current implementation of the application I'm working on, all utility methods (manipulating Strings and cookies, checking the URL, checking which page the user is currently on, etc.) are put in one big request scoped bean and referenced from every XHTML page.
I couldn't find any information on Stack Overflow on whether putting utility methods in an application scoped bean would be a good or a bad choice.
I came across this idea because I need to reuse those methods in a bean of a wider scope than request scope (such as a view or session scoped bean). Correct me if I'm wrong, but you should only inject beans of the same or a wider scope, i.e. you shouldn't inject a request scoped bean into a view scoped one.
I think using utility methods from an application scoped bean should be beneficial (there won't be any new object creation; one object will be created and reused across the whole application), but I would still like confirmation, or for someone to tell me whether that is the wrong approach and why.

As to the bean scope: if the bean doesn't have any state (i.e. the class doesn't have any mutable instance variables), then it can safely be application scoped. See also How to choose the right bean scope? This all applies regardless of the purpose of the bean (utility or not). Given that utility functions are by definition stateless, you should definitely be using an application scoped bean. It saves the cost of instantiating the bean on every single request.
As to having utility methods in a managed bean: from an object oriented perspective this is poor practice, because in order to access them from EL, those methods cannot be static while they really should be. You can't use them as real utility methods in other normal Java classes. Static code analyzers such as Sonar will mark them all with a big red flag. This is thus an anti-pattern. The correct approach is to keep using a true utility class (a public final class with a private constructor and solely static methods) and register all those static methods as EL functions in a your.taglib.xml file, as described in How to create a custom EL function to invoke a static method?
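For illustration, a minimal sketch of such a utility class (the package, class and method names are placeholders, not from the question):

package com.example;

public final class Utils {

    private Utils() {
        // Prevent instantiation; this class only holds static utility methods.
    }

    public static String abbreviate(String text, int maxLength) {
        if (text == null || text.length() <= maxLength) {
            return text;
        }
        return text.substring(0, maxLength) + "...";
    }
}

Each static method you want to use in a page is then registered once as an EL function in the your.taglib.xml file, as in the linked answer.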
At least, this is what you should be doing when you intend to have a publicly reusable library such as JSTL fn:xxx(), PrimeFaces p:xxx() or OmniFaces of:xxx(). If you happen to use OmniFaces, then you could, instead of creating a your.taglib.xml file, reference the class in <o:importFunctions>. It will automatically export all public static methods of the given type into EL function scope.
<o:importFunctions type="com.example.Utils" var="u" />
...
<x:foo attr="#{u:foo(bean.property)}" />
If you don't use OmniFaces, and this all is for internal usage only, then I can imagine that it becomes tiresome to redo all that your.taglib.xml registration boilerplate for every tiny utility function which suddenly pops up. I can rationalize and forgive abusing an application scoped bean for such "internal usage only" cases. Only when you start to externalize/modularize/publicize it should you really register them as EL functions and not expose poor practices to the public.
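If you go that route, a minimal sketch of such an "internal usage only" bean could look like this (the bean and method names are placeholders; the key point is that it holds no mutable instance state, which is what makes application scope safe here):

package com.example;

import javax.faces.bean.ApplicationScoped;
import javax.faces.bean.ManagedBean;

@ManagedBean
@ApplicationScoped
public class UtilBean {

    // No instance state: a single instance can safely serve all requests and sessions.

    public String abbreviate(String text, int maxLength) {
        if (text == null || text.length() <= maxLength) {
            return text;
        }
        return text.substring(0, maxLength) + "...";
    }
}

With EL 2.2+ it can then be referenced from any page as #{utilBean.abbreviate(bean.title, 20)}.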

Related

Thread-safety of @FlowScoped beans

I have done a small experiment with @FlowScoped beans, whose purpose, as I understand it, is to make it easier to create "wizard-type" web applications that gradually accumulate data over a sequence of pages and then, once all the data is ready, write it to persistent storage (this is just an example; nothing prevents writing to persistent storage during intermediate steps, of course). As far as I saw, calls to a @FlowScoped bean are not synchronized, so there is in principle the possibility of corrupting the data stored in the bean (by doing a double submit, or by launching by any other means two almost simultaneous HTTP requests which invoke methods of the bean). This is unlike @ConversationScoped beans, calls to which are synchronized.
What puzzles me is that about @SessionScoped beans I have found several links which speak about the need to synchronize access to a @SessionScoped bean (or recommend not using them at all, apart from user data which rarely changes), but I have not found anything like that about @FlowScoped beans.
What is then considered to be "best practice" for using @FlowScoped beans? Am I missing something?
EDIT
@FlowScoped seems, at least to me, to be motivated in part by Spring WebFlow, with which I have some experience and which, as far as I know, offers integration with JSF 2 (not all JSF 2.2 features seem to be implemented, but PrimeFaces seems usable, for example). I know that Spring WebFlow + JSF is actually used in "real world" applications, and the issue of thread safety of flow scoped objects is handled there elegantly, together with double submit issues (a flow execution id must be supplied with each HTTP request, and it expires and a new one is returned after an HTTP request which invokes a Spring WebFlow "action" method; therefore one cannot concurrently invoke more than one "action" method for the same user and flow id).
So I want to understand what the best practice is in the case of JSF 2.2 if I wish to use @FlowScoped beans to construct an application "flow" (without using Spring WebFlow). Do I really need to synchronize access to @FlowScoped beans myself, or is there some standard way to deal with such issues?
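For concreteness, manually guarding a flow scoped bean would look roughly like the sketch below (purely illustrative; the flow id, bean and data are made up):

package com.example;

import java.io.Serializable;
import javax.faces.flow.FlowScoped;
import javax.inject.Named;

@Named
@FlowScoped("wizard")
public class WizardBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private String accumulatedData = ""; // stand-in for the data gathered across the wizard steps

    // Synchronizing the mutators serializes near-simultaneous requests
    // (e.g. a double submit) against this particular flow instance.
    public synchronized void addStepData(String stepData) {
        accumulatedData += stepData;
    }

    public synchronized String getAccumulatedData() {
        return accumulatedData;
    }
}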

XAgent importPackage vs. managed bean in none scope

What is your preferred way to run application (business) logic in an XAgent?
XAgent using importPackage:
XAgent
importPackage(com.test.model.configuration);
FolderConfiguration.updateFolders(
    facesContext.getExternalContext().getRequest().getReader());
XAgent using a managed bean in none scope:
faces-config.xml
<managed-bean>
    <managed-bean-name>folderConfig</managed-bean-name>
    <managed-bean-class>
        com.test.model.configuration.FolderConfiguration
    </managed-bean-class>
    <managed-bean-scope>none</managed-bean-scope>
</managed-bean>
XAgent
folderConfig.updateFolders(
    facesContext.getExternalContext().getRequest().getReader());
I am not sure about the pros and cons of each approach.
Thanks for any hints.
Both versions won't be significantly different in runtime performance.
So, it's more a matter of code design.
Managed bean's pros:
- The Java class reference is defined in a central place.
- If you change the Java package later, you only have to change the managed bean definition.
- The JavaScript code is shorter.
importPackage's pros:
- Use of the Java class is independent of outside managed bean settings.
- The Java class doesn't need to be instantiated if only static methods are called.
I'd go for the managed bean version if you use this Java class on several XPages or custom controls. Otherwise I'd use importPackage or the direct call
com.test.model.configuration.FolderConfiguration.updateFolders(...)
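For reference, the static-call variants presuppose roughly this shape for the class (a sketch only; the real FolderConfiguration is not shown in the question):

package com.test.model.configuration;

import java.io.IOException;
import java.io.Reader;

public class FolderConfiguration {

    // A public no-arg constructor is all faces-config.xml needs to create the managed bean.
    public FolderConfiguration() {
    }

    // Declared static so it can be called via importPackage or the fully qualified
    // class name without instantiating the class first.
    public static void updateFolders(Reader requestBody) throws IOException {
        // ... read the request body and update the folder configuration ...
    }
}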

Inject ConversationScoped beans within an asynchronous method

I need to call a method annotated with @Asynchronous in an EJB from a @ConversationScoped bean. Inside this method I create instances of some classes that use @Inject to inject @ConversationScoped beans.
Is it somehow possible to set the context of the asynchronous method to a given Conversation?
I hope you can help me.
No, absolutely not. EJBs per definition do not run in the web container, but in the EJB container. In essence, having any web-related artifact/dependency (including javax.faces.* classes) inside an EJB class is a red alert. You're not supposed to inject/access any class from the client tier (the WAR) in the business tier (the EJB/EAR). Moreover, conversation scoped beans are tied to an HTTP request parameter, and this information is nowhere available in an EJB container.
Whatever problem you're trying to solve, and for which you incorrectly thought this would be the right solution, it has to be solved differently. As an educated guess, I think you just need to let the EJB fire a CDI event or take a callback argument.
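As a rough illustration of the CDI event approach (the service, event class and method names here are hypothetical, and only plain data is passed into the EJB):

package com.example;

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.enterprise.event.Event;
import javax.inject.Inject;

@Stateless
public class ReportService {

    // Plain event payload; carries only business data, no web-tier types.
    public static class ReportFinished {
        private final long reportId;
        public ReportFinished(long reportId) { this.reportId = reportId; }
        public long getReportId() { return reportId; }
    }

    @Inject
    private Event<ReportFinished> reportFinished;

    @Asynchronous
    public void generateReport(long reportId) {
        // ... long running business work, using only the data passed in as parameters ...
        reportFinished.fire(new ReportFinished(reportId));
    }
}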
See also:
JSF Service Layer

java.io.NotSerializableException: org.primefaces.model.DefaultStreamedContent

I have a simple JSF 2.1 application that used to work fine on Java EE 6 using PrimeFaces 3.4.
When I migrated to GlassFish 4.0 and PrimeFaces 5.1, I got the following exceptions each time I redeployed the project in NetBeans:
java.io.NotSerializableException: org.primefaces.model.DefaultStreamedContent
java.io.NotSerializableException: org.primefaces.component.datatable.DataTable
Even though this exception is thrown, the project is deployed and runs correctly!
What could be wrong?
You've declared those types as a property of a view or session scoped managed bean. You should absolutely not do that. You should declare them as a property of a request scoped bean.
View and session scoped beans must be Serializable because view scoped beans are reused/shared across multiple requests on the same view in the same session, and session scoped beans are reused/shared across multiple requests in the same session. Anything tied to a specific HTTP session must be Serializable, because that enables the server to store sessions on disk so they can be shared among other servers in a cluster, or survive server restarts.
The DefaultStreamedContent (and the InputStream it is wrapping, if any) may absolutely not be created and assigned as a view/session scoped bean property, not only because it's not serializable, but also because it can be read only once. You need to create it in the getter method only. This is indeed a rather special case, which is fleshed out further in this answer: Display dynamic image from database with p:graphicImage and StreamedContent
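A minimal sketch in the spirit of the linked answer (using the constructor-style API of PrimeFaces 5.x; the bean name, request parameter and byte lookup are illustrative, not from the question):

package com.example;

import java.io.ByteArrayInputStream;
import javax.enterprise.context.ApplicationScoped;
import javax.faces.context.FacesContext;
import javax.faces.event.PhaseId;
import javax.inject.Named;
import org.primefaces.model.DefaultStreamedContent;
import org.primefaces.model.StreamedContent;

@Named
@ApplicationScoped
public class ImageStreamer {

    public StreamedContent getImage() {
        FacesContext context = FacesContext.getCurrentInstance();

        if (context.getCurrentPhaseId() == PhaseId.RENDER_RESPONSE) {
            // The view is being rendered: return a stub so p:graphicImage can build the URL.
            return new DefaultStreamedContent();
        }
        else {
            // The browser is requesting the image itself: stream the real content.
            String id = context.getExternalContext().getRequestParameterMap().get("id");
            byte[] bytes = loadImageBytes(Long.valueOf(id));
            return new DefaultStreamedContent(new ByteArrayInputStream(bytes), "image/png");
        }
    }

    private byte[] loadImageBytes(Long id) {
        // Placeholder for the actual data access code (database, file system, ...).
        return new byte[0];
    }
}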
The DataTable is a JSF component which you most likely referenced via the binding attribute. It may absolutely not be assigned as a view/session scoped bean property, because UI components are inherently request scoped. Reusing the same UI component instance across multiple restored views in the same session may cause its state to be shared across multiple requests (thus NOT threadsafe!) and/or potential "Duplicate Component ID" errors. See also a.o. How does the 'binding' attribute work in JSF? When and how should it be used?
NotSerializableException is thrown when an instance of a class that does not implement the Serializable interface needs to be serialized.
If the class that throws the exception does not belong to a third-party library, find the class and make it implement the Serializable interface.
If you do not want certain fields of the class to be serialized, you can mark them as transient so that the serialization runtime ignores them.
You can read about it here
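For example (a hypothetical class, just to show the transient keyword):

import java.io.InputStream;
import java.io.Serializable;

public class ReportHolder implements Serializable {

    private static final long serialVersionUID = 1L;

    private String title; // serialized along with the bean/session as usual

    // Not serializable and cheap to recreate, so excluded from serialization.
    private transient InputStream reportStream;
}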

Accessing FacesContext in servlet

I'm working on a JSF (v1.2) application. In my application I need a generic servlet which can serve any resource (PDF, images, Excel, etc.). My idea is to ask the caller to send the required information so that I can find the correct delegator class using some configuration.
This delegator class will take care of serving the correct resource.
For example, this is the request URL:
http://example.com/servlet?delegatorid=abcd
My servlet code is something like this:
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String delegatorID = request.getParameter("delegatorid");
    // Get the configuration from the configuration table.
    Object configuration = getConfiguration(delegatorID);
    // Invoke the method of the delegator class based on this configuration.
    Object result = invokeMethod(configuration);
    // Write the response to the stream.
}
My question is what is the best way to do this in a JSF project?
1. Should I completely avoid the JSF dependency in this operation? I can find the delegator method and class and invoke it using reflection. Will there be any potential restrictions in the future if I avoid the JSF dependency? [One problem I can think of is that in one part of the code I need to get the user information from the session. I'm doing this through FacesContext. Since FacesContext is not available there, it will fail, so I would need another way to get the session.]
2. If I have to introduce the JSF dependency, how do I get the FacesContext here? As far as I know, only beans that are stored in application scope can be accessed here. I don't want to do that. Is there any other way of getting it?
3. Instead of using a servlet, can I do this by invoking a managed bean method directly via the URL? That would give me the FacesContext. I think I would need a dummy JSP page for the managed bean method to be invoked.
Could you please let me know your thoughts on this?
The FacesContext (and ExternalContext) is just a facade over HttpServletRequest, HttpServletResponse, HttpSession, ServletContext, etcetera, along with some JSF specifics which you don't need at all in a plain vanilla servlet. The ExternalContext#getSessionMap() is nothing more than an abstract mapping of HttpSession#get/setAttribute().
In a plain vanilla servlet, the session is just available via request.getSession() and the application via getServletContext(), the usual way. See also, among others, this related question: Get JSF managed bean by name in any Servlet related class.
You can also just refactor code which needs to be shared by JSF and the servlet into a utility method which doesn't have any dependencies on javax.faces.* or javax.servlet.* classes (or at most only javax.servlet.*) and finally let the callers each pass the necessary information through.
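For example, assuming the logged-in user was stored in the session under the attribute name "user" (that attribute name and the surrounding servlet are assumptions for illustration, not taken from the question), the servlet can reach it without any JSF API:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class ResourceServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The very same session that ExternalContext#getSessionMap() wraps in JSF.
        HttpSession session = request.getSession(false);
        Object user = (session != null) ? session.getAttribute("user") : null;

        String delegatorID = request.getParameter("delegatorid");
        // Hand the plain values (user, configuration, request reader) to the shared,
        // JSF-agnostic delegator code and write its result to response.getOutputStream().
    }
}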
