I have been thinking about this for a while now and have yet to come up with a best practice for how to organize my beans/classes in a JSF project for the presentation tier. Obviously there are many factors that come into play, but I would like to discuss it. Here is my current line of thought:
Consider a basic JSF (still stuck on JSF 1.xx here unfortunately) application that contains a view page (view the data) and an edit page (add, update, delete the data). Here is how I would organize the project:
Request scoped BackingBean:
View related stuff (save status, render logic, etc.). Stuff that is only needed in one request
Actions, action listeners, and value change listeners. If they are applicable to more than one view, they can be separated into their own file.
Session scoped BackingBean:
Anything that needs to remain around longer than one request. Database data, SelectItems, etc.
This bean is injected into the request bean, and stores instances of any data objects
Data Objects:
It doesn't seem to make sense to make the data objects into beans, so they are stored separately. This would be things like User, Book, Car, any object that you are going to display on the page. The object can contain view helper methods as well such as getFormattedName(), etc.
DAO:
A non-bean object that handles all interaction with the business logic tier. It loads the data objects and prepares submissions, etc. I usually make this a class of public static methods.
Converters, Validators:
Separate files
This seems to be all that is needed in your average JSF application. I have read through this: http://java.dzone.com/articles/making-distinctions-between, as well as the replies here: JSF backing bean structure (best practices), but I never felt like we got a complete picture. BalusC's response was helpful, but didn't seem to quite cover a full app. Let me know your thoughts!
I think you are on the right track generally, however I would do a few things differently:
I would take your DAO layer and split it into two separate layers: one pure DAO layer that is responsible solely for fetching data from various sources (e.g. database calls, web service calls, file reads, etc.). I would then have a Business Logic layer that contains pass-throughs to DAO calls as well as any additional calculations, algorithms, or other general business logic that isn't specific to any one JSF view.
In the MVC pattern, your ManagedBean plays the role of the Controller, and as such should also be the repository for Presentation Logic (logic that is specific to manipulating the view or interacting between various View components). It will also tie your business logic into the event behavior as well.
I would not use public static methods for your Business Logic or DAO layer. This doesn't lend itself well to automated unit tests and prevents your app from utilizing Dependency Injection frameworks like CDI or Spring. Instead, create an interface for your BOs and DAOs and then an implementation class for each.
On that note, utilize a Dependency Injection framework like CDI or Spring :) It will allow you to automatically inject Business Logic objects or DAOs into your ManagedBeans as well as your unit tests. It will also allow you to swap implementations of your DAOs without any coupling to code in other layers of your application.
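For illustration, here is a rough sketch of that layering with CDI; all of the names (BookDao, BookService and so on) are placeholders I made up, and each type would live in its own file:

// DAO layer: only fetches/stores data, no business rules.
public interface BookDao {
    List<Book> findAll();
}

public class JpaBookDao implements BookDao {

    @PersistenceContext
    private EntityManager em;

    @Override
    public List<Book> findAll() {
        return em.createQuery("SELECT b FROM Book b", Book.class).getResultList();
    }
}

// Business logic layer: pass-throughs to the DAO plus any additional rules.
public interface BookService {
    List<Book> findAvailableBooks();
}

public class DefaultBookService implements BookService {

    @Inject
    private BookDao bookDao;    // implementation can be swapped without touching this class

    @Override
    public List<Book> findAvailableBooks() {
        List<Book> available = new ArrayList<Book>();
        for (Book book : bookDao.findAll()) {
            if (book.isAvailable()) {    // a business rule applied on top of the raw DAO result
                available.add(book);
            }
        }
        return available;
    }
}

// Managed bean (controller): presentation logic only, delegates to the service.
@Named
@RequestScoped
public class BookBean {

    @Inject
    private BookService bookService;

    public List<Book> getBooks() {
        return bookService.findAvailableBooks();
    }
}

The same structure also makes unit testing straightforward: in a test you can hand DefaultBookService a mock BookDao instead of the JPA-backed one.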
I have done a small experiment with @FlowScoped beans, whose purpose, as I understand it, is to make it easier to create "wizard-type" web applications that gradually accumulate data over a sequence of pages and then, once all the data is ready, write it to persistent storage (this is just an example; nothing of course prevents writing to persistent storage during intermediate steps). As far as I saw, the calls to a @FlowScoped bean are not synchronized, so there is in principle the possibility of corrupting the data stored in the bean (by doing a double submit, or by launching by any other means two almost simultaneous HTTP requests which invoke the methods of the bean). This is unlike @ConversationScoped beans, calls to which are synchronized.
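For context, this is roughly the kind of wizard bean I am experimenting with (a simplified sketch; the flow id, the page split and the RegistrationService are made up):

@Named
@FlowScoped("registration")
public class RegistrationWizard implements Serializable {

    private String name;       // collected on page 1 of the flow
    private String address;    // collected on page 2 of the flow

    @Inject
    private RegistrationService registrationService;    // persistence facade

    public String save() {
        // only at the end of the flow is the accumulated data written out
        registrationService.persist(name, address);
        return "/confirmation.xhtml";    // navigation that leaves the flow; the exact target depends on the flow definition
    }

    // getters and setters omitted
}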
What puzzles me is that for @SessionScoped beans I have found several links which speak about the need to synchronize access to a @SessionScoped bean (or which recommend not using them at all, apart from user data which changes rarely), but I have not found anything like that about @FlowScoped beans.
What is considered then to be a "best practice" for using @FlowScoped beans? Am I missing something?
EDIT
@FlowScoped seems, at least to me, to be motivated in part by Spring WebFlow, with which I have some experience and which, as I know, offers integration with JSF 2 (not all JSF 2.2 features seem to be implemented, but it seems that PrimeFaces is usable, for example). I know that Spring WebFlow + JSF is actually used in "real world" applications, and the issue of thread safety of flow scoped objects is handled there elegantly, together with double submit issues (a flow execution id must be supplied with each HTTP request, and it expires and a new one is returned after an HTTP request which invokes a Spring WebFlow "action" method: therefore one cannot invoke more than one "action" method concurrently for the same user and flow id).
So I want to understand what the best practice is in the case of JSF 2.2 if I wish to use @FlowScoped beans to construct an application "flow" (without using Spring WebFlow). Do I really need to synchronize access to @FlowScoped beans myself, or is there some standard way to deal with such issues?
Do you think it is a good idea to put all widely used utility methods in an application scoped bean?
In the current implementation of the application I'm working on, all utility methods (manipulating Strings, cookies, checking URLs, checking the current page the user is on, etc.) are put in one big request scoped bean and referenced from every XHTML page.
I couldn't find any information on Stack Overflow on whether putting utility methods in an application scoped bean would be a good or a bad choice.
Why I came across this idea is the need to reuse those methods in a bean of a wider scope than a request scoped bean (like a view or session scoped bean). Correct me if I'm wrong, but you should always inject beans of the same or a wider scope, i.e. you shouldn't inject a request scoped bean into a view scoped one.
I think using utility methods from an application scoped bean should be beneficial (there won't be any new object creation; one object will be created and reused across the whole application), but still I would like confirmation, or for someone to tell me whether that is a wrong approach and why.
As to the bean scope, if the bean doesn't have any state (i.e. the class doesn't have any mutable instance variables), then it can safely be application scoped. See also How to choose the right bean scope? This all is regardless of the purpose of the bean (utility or not). Given that utility functions are by definition stateless, you should definitely be using an application scoped bean. It saves the cost of instantiating one on every single request.
As to having utility methods in a managed bean, from an object oriented perspective this is poor practice, because in order to access them from EL those methods cannot be static while they should be. You can't use them as real utility methods in other normal Java classes. Static code analyzers like Sonar will mark them all with a big red flag. This is thus an anti-pattern. The correct approach would be to keep using a true utility class (a public final class with a private constructor and solely static methods) and register all those static methods as EL functions in your.taglib.xml, as described in How to create a custom EL function to invoke a static method?
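For example, such a true utility class could look like the following minimal sketch (class and method names are made up); its static methods would then be registered as EL functions in your.taglib.xml:

// A true utility class: final, not instantiable, only static methods.
public final class Texts {

    private Texts() {
        // prevent instantiation
    }

    public static String abbreviate(String text, int maxLength) {
        if (text == null || text.length() <= maxLength) {
            return text;
        }
        return text.substring(0, maxLength) + "...";
    }
}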
At least, this is what you should be doing when you intend to have a publicly reusable library such as JSTL fn:xxx(), PrimeFaces p:xxx() or OmniFaces of:xxx(). If you happen to use OmniFaces, then you could, instead of creating a your.taglib.xml file, reference the class in <o:importFunctions>. It will automatically export all public static methods of the given type into EL function scope.
<o:importFunctions type="com.example.Utils" var="u" />
...
<x:foo attr="#{u:foo(bean.property)}" />
If you don't use OmniFaces, and this all is for internal usage, then I can imagine that it becomes tiresome to redo all that your.taglib.xml registration boilerplate for every tiny utility function which suddenly pops up. I can rationalize and forgive abusing an application scoped bean for such "internal usage only" cases. Only when you start to externalize/modularize/publicize it should you really register them as EL functions and not expose poor practices to the public.
I have a drop down menu from which the user selects an item (in my case, the project he is working on); most of the data displayed by a web page depends on that selection. So I have a couple of view scoped beans that call EJB beans which do database queries that depend on the selected project.
I want to cache most of that data in order to reduce database queries, but when the user changes the project, the other beans have to be notified that the change has happened and that new data needs to be fetched.
So I had an idea:
projectChangeManager (session scoped managed bean): saves which project is selected and notifies its subscribers when the project changes. It has a @PreDestroy method where I clean up the observers.
project observers (view scoped managed beans): they get data from the EJB based on the project selection and have an onProjectChange() method which fetches new data from the EJB. They have a @PreDestroy method where projectChangeManager.detach(this) is called to unsubscribe from projectChangeManager (a rough sketch of this wiring follows below).
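Roughly, the wiring I have in mind looks like this (a simplified sketch; the names are mine, and the view scoped observers would attach themselves in @PostConstruct):

public interface ProjectObserver {
    void onProjectChange(Project newProject);
}

@ManagedBean
@SessionScoped
public class ProjectChangeManager implements Serializable {

    private Project selectedProject;
    private final List<ProjectObserver> observers = new ArrayList<ProjectObserver>();

    public void attach(ProjectObserver observer) {
        observers.add(observer);
    }

    public void detach(ProjectObserver observer) {
        observers.remove(observer);
    }

    public void setSelectedProject(Project project) {
        this.selectedProject = project;
        for (ProjectObserver observer : observers) {
            observer.onProjectChange(project);    // push the change to each subscribed view scoped bean
        }
    }

    public Project getSelectedProject() {
        return selectedProject;
    }
}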
Is this a reasonable approach in JSF? Or is it better not to implement the observer pattern, and instead, when the user changes the project, fetch all cached data and save it in a session bean, and then in the ViewScoped bean just access that data from the SessionScoped bean? Or is there a better way?
To me this looks like overengineering. Initially, I would probably just let every managed bean call the repository/EJB whenever it needs data. Then rely on caching in the persistence layer (JPA/Hibernate/whatever you use).
If this turns out to be a performance problem, you can consider some hand-rolled caching solution.
Even in that case I still don't quite see the advantage of your observer approach. Your second approach (cache in session bean, access from ViewScoped bean) looks simpler and should work just as well.
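A minimal sketch of that simpler variant (names are invented): the session scoped bean holds the selection plus the cached data, and the view scoped bean just reads from it via a @ManagedProperty.

@ManagedBean
@SessionScoped
public class ProjectSession implements Serializable {

    @EJB
    private ProjectService projectService;    // assumed EJB facade

    private Project selectedProject;
    private List<Item> cachedItems;

    public void setSelectedProject(Project project) {
        this.selectedProject = project;
        this.cachedItems = projectService.findItems(project);    // refresh the cache on change
    }

    public List<Item> getCachedItems() {
        return cachedItems;
    }

    public Project getSelectedProject() {
        return selectedProject;
    }
}

@ManagedBean
@ViewScoped
public class ProjectViewBean implements Serializable {

    @ManagedProperty("#{projectSession}")
    private ProjectSession projectSession;    // same or wider scope, so injection is allowed

    public List<Item> getItems() {
        return projectSession.getCachedItems();
    }

    public void setProjectSession(ProjectSession projectSession) {
        this.projectSession = projectSession;
    }
}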
Finally, if you decide to use caching, consider how to avoid a stale cache. If you cache per session, changes in one session will not be visible in another session. Incidentally, I think this is another reason to leave caching to the persistence layer.
In a JSF application I have a table with summarized data. If I'm interested in the details, I can click on a row and see the details on another page.
If the managed bean of the 'master' page is in view scope, it is re-created every time I return from the 'detail' page, which I don't think is a good idea if the user is supposed to check the details several times. I can solve this by putting the bean in session scope, but that way the bean (and the data) are kept in memory even when the user is interacting with a completely different section of the application. Probably I would need a custom scope, but:
the documentation about custom scopes is poor, and I'm a bit put off by people complaining that they have bugs and don't work well.
the scenario I'm dealing with seems to me quite general, so I wonder why there is no ready solution for it.
Thanks
Filippo
If the detail page has to be idempotent (i.e. it's permalinkable, bookmarkable, searchbot-crawlable), just use two request or view scoped beans and use a GET link with the entity ID as request parameter to go from master page to detail page. See also Creating master-detail pages for entities, how to link them and which bean scope to choose for a concrete example.
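As a rough illustration of the idempotent variant (all names here are made up): the detail page receives the entity ID as a GET request parameter, e.g. detail.xhtml?id=42, and loads the row once per request.

@ManagedBean
@RequestScoped
public class OrderDetailBean {

    @ManagedProperty("#{param.id}")    // bound from the ?id= request parameter
    private Long id;

    @EJB
    private OrderService orderService;    // assumed service/DAO

    private Order order;

    @PostConstruct
    public void init() {
        order = orderService.findById(id);    // load the detail record for this request
    }

    public Order getOrder() {
        return order;
    }

    public void setId(Long id) {
        this.id = id;
    }
}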
If the detail page does not need to be idempotent, then you can always conditionally render the master and detail in the very same view or even display the detail in some modal dialog from the master page on. This way you can continue with a single view scoped bean.
On the JSF side you should not worry too much about the DB performance cost. Rather, configure and fine-tune it in the persistence layer. In JPA, for example, you can set up a second level cache. If you have much more than 500~1000 items, then consider database-level pagination.
It may be valid to reload the master page each time, e.g. if the data could have changed after viewing the details page. However, if you want to keep the data available for longer than @ViewScoped, your options are:
You should be using Java EE 6, of which JSF 2.0 is a part, so look at the conversation scope (part of CDI)
Some additional scopes for Java EE 6 CDI are available through MyFaces CODI
Potentially use session scope and make sure you tidy up when a request arrives that is not for the master or details page
Rework your design to use Ajax, so that clicking a record on the master page loads its details in the same view. You could then use @ViewScoped
My preference would be to look at the conversation scope. You don't mention which JSF implementation you are running or in which environment.
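If it helps, here is a rough sketch of the conversation scope option (CDI, with names I invented): the bean survives the master-to-detail navigation for as long as the conversation is running and is discarded once it ends.

@Named
@ConversationScoped
public class MasterBean implements Serializable {

    @Inject
    private Conversation conversation;

    @EJB
    private SummaryService summaryService;    // assumed data source

    private List<SummaryRow> rows;

    public void load() {
        if (conversation.isTransient()) {
            conversation.begin();    // promote to a long-running conversation
        }
        if (rows == null) {
            rows = summaryService.findSummaries();    // fetched once per conversation
        }
    }

    public String leave() {
        conversation.end();    // release the state
        return "home?faces-redirect=true";
    }

    public List<SummaryRow> getRows() {
        return rows;
    }
}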
I'm developing a Java EE web app using JSF with a shopping cart style process, so I want to collect user input over a number of pages and then do something with it.
I was thinking of using an EJB 3 stateful session bean (SFSB) for this, but my research leads me to believe that a SFSB is not tied to a client's HTTP session, so I would have to manually keep track of it via the HttpSession. Some side questions here:
1) Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
2) What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
So back to the main issue: I see it written all over that JSF is a presentation technology, so it should not be used for logic, but it seems the perfect option for collecting user input.
I can set a JSF session scoped bean as a managed property of all of my request beans, which means it's injected into them; but unlike a SFSB, the JSF managed session scoped bean is tied to the HTTP session, so the same instance is always injected as long as the HTTP session hasn't been invalidated.
So I have multiple tiers:
1st tier) JSF managed request scoped beans that deal with presentation, 1 per page.
2nd tier) A JSF managed session scoped bean that has values set in it by the request beans.
3rd tier) A stateless session EJB that executes logic on the data in the JSF session scoped bean.
Why is this so bad?
An alternative option is to use a SFSB, but then I have to inject it into my initial request bean, store it in the HTTP session, and grab it back in each subsequent request bean - it just seems messy.
Or I could just store everything in the session, but this isn't ideal since it involves the use of literal keys and casting, etc., which is error prone... and messy!
Any thoughts appreciated I feel like I'm fighting this technology rather than working with it.
Thanks
Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
From the old J2EE 1.3 tutorial:
What Is a Session Bean?
A session bean represents a single client inside the J2EE server. To access an application that is deployed on the server, the client invokes the session bean's methods. The session bean performs work for its client, shielding the client from complexity by executing business tasks inside the server.

As its name suggests, a session bean is similar to an interactive session. A session bean is not shared--it may have just one client, in the same way that an interactive session may have just one user. Like an interactive session, a session bean is not persistent. (That is, its data is not saved to a database.) When the client terminates, its session bean appears to terminate and is no longer associated with the client.
So it has to do with a "session". But "session" does not necessarily mean "HTTP session".
What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
Well, first of all, you don't inject a SFSB into a stateless component (injection into another SFSB would be OK), you have to do a lookup. Secondly, choosing between the HTTP session and a SFSB really depends on your application and your needs. From a purely theoretical point of view, the HTTP session should be used for presentation logic state (e.g. where you are in your multi-page form) while the SFSB should be used for business logic state. This is nicely explained in the "old" HttpSession vs. Stateful session beans thread on TSS, which also has a nice example where a SFSB would make sense:
You may want to use a stateful session bean to track the state of a particular transaction, i.e. someone buying a railway ticket.

The web session tracks the state of where the user is in the HTML page flow. However, if the user then gained access to the system through a different channel, e.g. a WAP phone or through a call centre, you would still want to know the state of the ticket-buying transaction.
But SFSBs are not simple, and if you don't have needs justifying their use, my practical advice would be to stick with the HTTP session (especially if all this is new to you). Just in case, see:
Stateless and Stateful Enterprise Java Beans
Stateful EJBs in web application?
So back to the main issue: I see it written all over that JSF is a presentation technology, so it should not be used for logic, but it seems the perfect option for collecting user input.
That's not business logic, that's presentation logic.
So I have multiple tiers (...)
No. You probably have a client tier, a presentation tier, a business tier, and a data tier. What you're describing looks more like layers (and even that I'm not sure of). See:
Can anybody explain these words: Presentation Tier, Business Tier, Integration Tier in java EE with example?
Spring, Hibernate, Java EE in the 3 Tier architecture
Why is this so bad?
I don't know, I don't know what you're talking about :) But you should probably just gather the multi-page form information into a SessionScoped bean and call a Stateless Session Bean (SLSB) at the end of the process.
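A minimal sketch of what that could look like (the names are invented): the session scoped bean accumulates the input across the pages, and the SLSB is called once at the end of the process.

@ManagedBean
@SessionScoped
public class CheckoutBean implements Serializable {

    @EJB
    private OrderService orderService;    // the stateless session bean (SLSB)

    // data collected over several pages
    private String shippingAddress;    // page 1
    private String paymentMethod;      // page 2

    public String confirm() {
        // end of the process: hand everything to the business tier in one call
        orderService.placeOrder(shippingAddress, paymentMethod);
        return "confirmation?faces-redirect=true";
    }

    // getters and setters omitted for brevity
}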
1) Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
Correction: an EJB session has nothing to do with an HTTP session. In EJB, roughly said, the client is the servlet container and the server is the EJB container (both running in a web/application server). In HTTP, the client is the web browser and the server is the web/application server.
Does it make more sense now?
2) What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
Use EJB for transactional business tasks. Use a session scoped managed bean to store HTTP session specific data. Neither of the two are POJOs, by the way. Just JavaBeans.
Why shouldn't I use a JSF SessionScoped bean for logic?
If you aren't taking advantage of transactional business tasks and the abstraction EJB provides around them, then just doing it in a simple JSF managed bean is indeed not a bad alternative. That's also the normal approach in basic JSF applications. The actions, however, usually take place in a request scoped managed bean into which the session scoped one is injected as a @ManagedProperty.
But since you're already using EJB, I'd question if there wasn't a specific reason for using EJB. If that's the business requirement from upper hand, then I'd just stick to it. At least, your session-confusion should now be cleared up.
Just in case you're not aware of this, and as a small contribution to the answers you have: you could indeed annotate a SFSB with @SessionScoped, and CDI will handle the life cycle of the EJB... This would tie the EJB to the HTTP session that CDI manages. Just letting you know, because in your question you say:
but my research leads me to believe that a SFSB is not tied to a client's HTTP session, so I would have to manually keep track of it via the HttpSession, some side questions here . . .
Also, you could do what you suggest, but it depends on your requirements: until CDI beans get declarative transaction support, extended persistence contexts, etc., you'll find yourself writing a lot of boilerplate code that makes your bean less clean. Of course, you can also use frameworks like Seam (now moving to DeltaSpike) to enhance certain capabilities of your beans through their extensions.
So I'd say yes, at first glance you may feel it's not necessary to use a stateful EJB, but certain use cases may be better solved through one. If a user adds a product to his cart, and another user adds the same product later, but there is only one unit in stock, who gets it? The one who does the checkout faster, or the one who added it first? What if you want to access your entity manager to persist the cart in case the user decides to randomly close his browser, or what if you have transactions that span multiple pages and you want every step to be synchronized to the DB? (Keeping a transaction open for so long is not advisable, but maybe there could be a scenario where this is needed?) You could use a SLSB, but sometimes it's better and cleaner to use a SFSB.
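For what it's worth, here is a minimal sketch of such a CDI-managed stateful EJB (hypothetical names; the details depend on your setup): one instance per HTTP session, destroyed by CDI when the session ends.

@Named
@SessionScoped
@Stateful
public class ShoppingCart implements Serializable {

    // an extended persistence context can keep managed entities across requests
    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    private EntityManager em;

    private final List<OrderLine> lines = new ArrayList<OrderLine>();

    public void add(Product product, int quantity) {
        lines.add(new OrderLine(product, quantity));
    }

    public void checkout() {
        Order order = new Order(lines);
        em.persist(order);    // transactional business task
        lines.clear();
    }

    public List<OrderLine> getLines() {
        return lines;
    }
}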