Does load balancing on the server side of a web app generate multiple instances of a singleton ApplicationScoped JSF bean?

Suppose I deploy my web app on AWS or GAE, and my JSF app has a singleton ApplicationScoped bean with methods void setList(List) and List getList().
I call these methods from a SessionScoped bean when a user modifies the list.
I want to ensure that all users see the changes in their own session by pushing a notification message to them, so that they can fetch the list again.
If the load balancer of AWS or GAE splits the app across several instances, how do they manage this singleton ApplicationScoped bean? Are there multiple instances of the singleton? How are they synchronized? Is there any risk that one instance holds different information?
I suppose all instances of the app on each server participating in the load balancing need to be updated somehow, but this would defeat the purpose of load balancing, since the work would be replicated everywhere. It might be that the singleton's work is not load balanced, but I don't know. The documentation is very extensive and hard to get familiar with.
@ManagedBean( name = "theModelBean",
              eager = true )
@Singleton
@ApplicationScoped
public class ModelBean {

    private List<Data> theList;

    public List<Data> getList() {
        return theList;
    }

    public void setList( List<Data> aList ) {
        this.theList = aList;
    }
}
@ManagedBean( name = "theController" )
@SessionScoped
public class Controller {

    @ManagedProperty(value = "#{theModelBean}")
    private ModelBean theModelBean;

    // Setter is required for @ManagedProperty injection.
    public void setTheModelBean( ModelBean theModelBean ) {
        this.theModelBean = theModelBean;
    }

    public void foo() {
        ArrayList<Data> list = new ArrayList<>();
        list.add( new Data() );
        theModelBean.setList( list );
    }
}
I wish load balancing would not interfere with my logic and would handle everything transparently for me. Otherwise I might have to make theModelBean write the list to the database every time it changes, and read it from there every time a session requests it.

I'll ignore the "load balancing" / "load balanced" terms in your question and assume that you actually meant "clustering" / "clustered". As in: the same WAR file is being deployed to multiple servers which are all behind a single proxy (which does the actual load balancing, but load balancing itself is not the cause of the observed problem).
Yes, each server of the cluster will get its own instance of any "application scoped" bean. This not only includes the JSF @javax.faces.bean.ApplicationScoped, but also the CDI @javax.enterprise.context.ApplicationScoped and @javax.inject.Singleton, and the EJB @javax.ejb.Singleton.
The normal approach is indeed to keep track of shared data in a single common data source which is used by all servers of the cluster. Usually a SQL-based RDBMS is used for that. Usually you fire a SQL query to obtain the most recent data from the DB on every request/view. If you're using JPA for that, you usually use the 2nd level cache to cache the data so that the amount of DB hits is reduced. This can be configured cluster-wide.
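For illustration, a minimal sketch of what the question's ModelBean could look like when backed by the shared data source; DataService and its findAll() are hypothetical placeholders for a JPA-based service:

import java.util.List;
import javax.ejb.EJB;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named("theModelBean")
@RequestScoped
public class ModelBean {

    @EJB
    private DataService dataService; // hypothetical JPA-backed service

    // Each request re-reads the shared source of truth, so every server
    // of the cluster observes the same list; the 2nd level cache keeps
    // the number of actual DB hits down.
    public List<Data> getList() {
        return dataService.findAll();
    }
}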
If the data is immutable (i.e. read-only after creation), then the alternative to saving it in a DB is to rely on session persistence. Have a @SessionScoped bean which reads from the @ApplicationScoped one during writeObject() and writes-if-absent to the @ApplicationScoped one during readObject(). One real world example is implemented in the code behind the JSF <f:websocket> and OmniFaces <o:socket>.
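A rough sketch of that idea, assuming the ModelBean from the question is the CDI @ApplicationScoped holder and all types involved are Serializable (the real mechanics behind <f:websocket>/<o:socket> are more elaborate):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.List;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;

@SessionScoped
public class DataCarrier implements Serializable {

    @Inject
    private ModelBean theModelBean; // the @ApplicationScoped holder

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        // Piggyback the application scoped data on the serialized session.
        out.writeObject(theModelBean.getList());
    }

    @SuppressWarnings("unchecked")
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Write-if-absent on the server which deserializes this session.
        List<Data> list = (List<Data>) in.readObject();
        if (theModelBean.getList() == null) {
            theModelBean.setList(list);
        }
    }
}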

Related

Ad-Hoc type conversion in JOOQ DSL query

Scenario:
We store some encrypted data in the DB as a blob. When reading/saving it, we need to decrypt/encrypt it using an external service.
Because it is actually a Spring bean using an external service, we cannot use the code generator the way we would for enums.
I don't want to use dslContext.select(field1, field2.convertFrom).from(TABLE_NAME), because you need to specify every field of the table.
It is convenient to use dslContext.selectFrom(TABLE_NAME). I wonder if there is any way to register the converter bean in such a query to perform encryption and decryption on the fly.
Thanks
Edit: I ended up using a service to encrypt/decrypt the value when it is actually used. Calling an external service is relatively expensive, and sometimes the value isn't used in the request, so it may not make sense to always decrypt the value when reading from the DB using the converter.
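The lazy variant from the edit could look roughly like this; LazyEncryptedValue and CryptoService are hypothetical names, the latter standing for the Spring bean that calls the external service:

public class LazyEncryptedValue {

    private final byte[] raw;
    private final CryptoService crypto;
    private String decrypted;

    public LazyEncryptedValue(byte[] raw, CryptoService crypto) {
        this.raw = raw;
        this.crypto = crypto;
    }

    // The expensive external call happens at most once, and only if the
    // request actually uses the value.
    public synchronized String get() {
        if (decrypted == null) {
            decrypted = crypto.decrypt(raw);
        }
        return decrypted;
    }
}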
Because it is actually a Spring bean using an external service, we cannot use the code generator the way we would for enums.
Why not? Just because Spring favours dependency injection, and you currently (as of jOOQ 3.15) cannot inject anything into jOOQ Converter and Binding instances, doesn't mean you can't use other means of looking up such a service. Depending on what you have available, you could use a JNDI lookup, or other means to discover that service when needed, from within your Converter.
Another option would be to use a ConverterProvider and register your logic inside of that. That wouldn't produce your custom type inside of jOOQ records, but it would apply whenever you convert your blob to your custom data type, e.g. using reflection.
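A hedged sketch of that ConverterProvider route; DecryptedValue and CryptoLookup are hypothetical placeholders (CryptoLookup stands for whatever static lookup you end up using, e.g. the holder class shown below):

import org.jooq.Converter;
import org.jooq.ConverterProvider;
import org.jooq.impl.DefaultConverterProvider;

public class CryptoConverterProvider implements ConverterProvider {

    private final ConverterProvider delegate = new DefaultConverterProvider();

    @Override
    @SuppressWarnings("unchecked")
    public <T, U> Converter<T, U> provide(Class<T> tType, Class<U> uType) {
        if (tType == byte[].class && uType == DecryptedValue.class) {
            return (Converter<T, U>) Converter.ofNullable(
                    byte[].class, DecryptedValue.class,
                    blob -> new DecryptedValue(CryptoLookup.service().decrypt(blob)),
                    value -> CryptoLookup.service().encrypt(value.asString()));
        }
        return delegate.provide(tType, uType); // everything else behaves as usual
    }
}

You would then register it on your Configuration, e.g. via DefaultConfiguration.set(new CryptoConverterProvider()).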
How to access Spring Beans without Dependency Injection?
You don't need dependency injection to access your Spring beans. Simply create the following class, and you can get beans from the static method getBean():

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

@Component
public class ApplicationContextHolder implements ApplicationContextAware {

    private static ApplicationContext applicationContext;

    public static <T> T getBean(Class<T> type) {
        return applicationContext.getBean(type);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        ApplicationContextHolder.applicationContext = applicationContext;
    }
}
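Putting the two together, a Converter could then look up the service statically instead of having it injected; a minimal sketch, again with a hypothetical CryptoService:

import org.jooq.Converter;

public class EncryptedBlobConverter implements Converter<byte[], String> {

    @Override
    public String from(byte[] dbValue) {
        return dbValue == null ? null
                : ApplicationContextHolder.getBean(CryptoService.class).decrypt(dbValue);
    }

    @Override
    public byte[] to(String userValue) {
        return userValue == null ? null
                : ApplicationContextHolder.getBean(CryptoService.class).encrypt(userValue);
    }

    @Override
    public Class<byte[]> fromType() { return byte[].class; }

    @Override
    public Class<String> toType() { return String.class; }
}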

JSF Singleton Services/DAO/.. vs ApplicationScope [duplicate]

I'm trying to get used to how JSF works with regard to accessing data (coming from a Spring background).
I'm creating a simple example that maintains a list of users. I have something like
<h:dataTable value="#{userListController.userList}" var="u">
    <h:column>#{u.userId}</h:column>
    <h:column>#{u.userName}</h:column>
</h:dataTable>
Then the "controller" has something like
@Named(value = "userListController")
@SessionScoped
public class UserListController {

    @EJB
    private UserListService userListService;

    private List<User> userList;

    public List<User> getUserList() {
        userList = userListService.getUsers();
        return userList;
    }
}
And the "service" (although it seems more like a DAO) has
public class UserListService {

    @PersistenceContext
    private EntityManager em;

    public List<User> getUsers() {
        Query query = em.createQuery("SELECT u from User as u");
        return query.getResultList();
    }
}
Is this the correct way of doing things? Is my terminology right? The "service" feels more like a DAO? And the controller feels like it's doing some of the job of the service.
Is this the correct way of doing things?
Apart from performing business logic the inefficient way in a managed bean getter method, and using a too broad managed bean scope, it looks okay. If you move the service call from the getter method to a @PostConstruct method and use either @RequestScoped or @ViewScoped instead of @SessionScoped, it will look better.
See also:
Why JSF calls getters multiple times
How to choose the right bean scope?
Is my terminology right?
It's okay. As long as you're consistent with it and the code is readable in a sensible way. Only your way of naming classes and variables is somewhat awkward (illogical and/or duplication). For instance, I personally would use users instead of userList, and use var="user" instead of var="u", and use id and name instead of userId and userName. Also, a "UserListService" sounds like it can only deal with lists of users instead of users in general. I'd rather use "UserService" so you can also use it for creating, updating and deleting users.
See also:
JSF managed bean naming conventions
The "service" feels more like a DAO?
It isn't exactly a DAO. Basically, JPA is the real DAO here. Previously, when JPA didn't exist, everyone homegrew DAO interfaces so that the service methods could keep using them even when the underlying implementation ("plain old" JDBC, or "good old" Hibernate, etc) changes. The real task of a service method is transparently managing transactions. This isn't the responsibility of the DAO.
See also:
I found JPA, or alike, don't encourage DAO pattern
DAO and JDBC relation?
When is it necessary or convenient to use Spring or EJB3 or all of them together?
And the controller feels like it's doing some of the job of the service.
I can imagine that it does that in this relatively simple setup. However, the controller is in fact part of the frontend not the backend. The service is part of the backend which should be designed in such way that it's reusable across all different frontends, such as JSF, JAX-RS, "plain" JSP+Servlet, even Swing, etc. Moreover, the frontend-specific controller (also called "backing bean" or "presenter") allows you to deal in a frontend-specific way with success and/or exceptional outcomes, such as in JSF's case displaying a faces message in case of an exception thrown from a service.
See also:
JSF Service Layer
What components are MVC in JSF MVC framework?
All in all, the correct approach would be like below:
<h:dataTable value="#{userBacking.users}" var="user">
    <h:column>#{user.id}</h:column>
    <h:column>#{user.name}</h:column>
</h:dataTable>
@Named
@RequestScoped // Use @ViewScoped once you bring in ajax (e.g. CRUD)
public class UserBacking {

    private List<User> users;

    @EJB
    private UserService userService;

    @PostConstruct
    public void init() {
        users = userService.listAll();
    }

    public List<User> getUsers() {
        return users;
    }
}
@Stateless
public class UserService {

    @PersistenceContext
    private EntityManager em;

    public List<User> listAll() {
        return em.createQuery("SELECT u FROM User u", User.class).getResultList();
    }
}
You can find a real world kickoff project utilizing the canonical Java EE / JSF / CDI / EJB / JPA practices here: Java EE kickoff app.
See also:
Creating master-detail pages for entities, how to link them and which bean scope to choose
Passing a JSF2 managed pojo bean into EJB or putting what is required into a transfer object
Filter do not initialize EntityManager
javax.persistence.TransactionRequiredException in small facelet application
It is a DAO (well, actually a repository, but don't worry about that difference too much), as it accesses the database using the persistence context.
You should create a Service class that wraps that method and is where the transactions are invoked.
Sometimes the service classes feel unnecessary, but when you have a service method that calls many DAO methods, their use is more warranted.
I normally end up creating the service anyway, even if it does feel unnecessary, to ensure the patterns stay the same and the DAO is never injected directly.
This adds an extra layer of abstraction, making future refactoring more flexible.
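A minimal sketch of that arrangement (UserFacade is a hypothetical name, chosen to avoid clashing with the UserService from the first answer), treating the question's UserListService as the DAO:

import java.util.List;
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless // the container starts/commits the transaction at this layer
public class UserFacade {

    @Inject
    private UserListService userListService; // the question's class, acting as the DAO

    public List<User> getUsers() {
        // Pure delegation for now; pays off once one service method needs
        // to combine several DAO calls in a single transaction.
        return userListService.getUsers();
    }
}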

Where should I create my ConnectionPool in a JSF application?

I am new to JSF/Facelets and I am creating an application that does the usual CRUD operations over a (NoSQL) database. The database has an API that allows the creation of a pool of connections, from which my operations can take and release connections. I suppose this pool has to be created only once for the whole application when it is deployed, be shared (as static?), and closed once the application is destroyed. Is my approach correct? What is the best practice here? I have no idea where I should place my code and how I should call it.
With my old SQL database I used to configure a "testOnBorrow" and a "validationQuery" in the context.xml Resource so I didn't have to create an explicit pool programmatically.
I found two great tutorials (here and here) but I can't figure out from them where to put the code that creates the pool.
Note: I know this might be only a Servlet problem, but I am tagging it as JSF since I don't know if there is a way to do this in JSF (like an application scoped bean). Thanks.
EDIT
Looking at the fact that I cannot find a JAR with a DataSource for the database to be loaded via context.xml, perhaps my question should be more specific: where can I run code once when a JSF application is deployed, and where can I run code when it is destroyed?
You can implement a web listener (i.e. a ServletContextListener) and use its contextInitialized and contextDestroyed methods to create and destroy your connection pool.
Sample:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ContextWebListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Initialize connection pool.
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // Destroy connection pool.
    }
}
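To make the "where do I place my code and how do I call it" part concrete, here is a hedged sketch that parks the pool in the ServletContext; NoSqlPool and its create()/close() methods are hypothetical stand-ins for your database's pool API:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class PoolBootstrap implements ServletContextListener {

    public static final String POOL_ATTRIBUTE = "connectionPool";

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Created exactly once per deployed application.
        NoSqlPool pool = NoSqlPool.create();
        event.getServletContext().setAttribute(POOL_ATTRIBUTE, pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // Released exactly once, on undeploy/shutdown.
        NoSqlPool pool = (NoSqlPool) event.getServletContext().getAttribute(POOL_ATTRIBUTE);
        if (pool != null) {
            pool.close();
        }
    }
}

A JSF bean can then reach the pool via ExternalContext#getApplicationMap().get("connectionPool"), since the application map is backed by the ServletContext attributes.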

SessionScope and Scheduled threads

In my application I have a service that performs heavy loading (parsing of different files) upon creation. The data is metadata, so it won't change during runtime (localized strings, key/value mappings, etc.). Therefore I decided to make this service SessionScoped, so I don't need to parse the values with every request, and not ApplicationScoped, to make sure the data is refreshed when the user logs in again.
This works pretty well, but now I need to access that service inside a job that is run with the @Schedule annotation. Of course Weld does not like that and says: org.jboss.weld.context.ContextNotActiveException: WELD-001303 No active contexts for scope type javax.enterprise.context.SessionScoped
@Singleton
public class DailyMails {

    @Inject
    MailService mailService; // just @Named

    @Inject
    GroupDataService groupDataService; // @Stateless

    @Inject
    LocalizationService localizationService; // @SessionScoped

    @Schedule(hour = "2", minute = "0", second = "0", dayOfWeek = "Mon,Tue,Wed,Thu,Fri", persistent = false)
    public void run() {
        // do work
    }
}
Can I manually create a session at this point, so that I can use the SessionScoped service?
Edit: I know that a service should not be SessionScoped, nor should it hold any data (collections). However, in this situation it seems legitimate to me in order to avoid multiple file system accesses. I thought about making the service unscoped and "caching" the data in a session scoped bean. However, then I would need to inject the session bean into that service, which would again make the service kind of "session scoped".
Shouldn't this work:
@Inject @New
LocalizationService localizationService;
At least, that's how I interpret the specification.

How can I initialize a Java FacesServlet

I need to run some code when the FacesServlet starts, but as FacesServlet is declared final, I cannot extend it and override the init() method.
In particular, I want to write some data to the database during development and testing, after Hibernate has dropped and created the data model.
Is there a way to configure Faces to run some method, e.g. in faces-config.xml?
Or is it best to create a singleton bean that does the initialization?
Use an eagerly initialized application scoped managed bean.
@ManagedBean(eager = true)
@ApplicationScoped
public class App {

    @PostConstruct
    public void startup() {
        // ...
    }

    @PreDestroy
    public void shutdown() {
        // ...
    }
}
(The class and method names actually don't matter; they are free to your choice. It's all about the annotations.)
This is guaranteed to be constructed after the startup of the FacesServlet, so the FacesContext will be available whenever necessary. This is in contrast to the ServletContextListener suggested in the other answer.
You could implement your own ServletContextListener that gets notified when the web application is started. Since it's container managed, you can inject resources there and do whatever you want to do. The other option is to create a @Singleton EJB with @Startup and do the work in its @PostConstruct method. Usually the ServletContextListener works fine; however, if you have more than one web application inside an EAR and they all share the same persistence context, you may consider using a @Singleton bean.
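A minimal sketch of that EJB alternative: an eagerly instantiated singleton whose @PostConstruct runs once at deployment, after the persistence unit (and thus Hibernate's schema export) is available. SomeEntity is a hypothetical entity:

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Singleton
@Startup
public class TestDataLoader {

    @PersistenceContext
    private EntityManager em;

    @PostConstruct
    public void seed() {
        // Insert development/test data after the schema has been recreated.
        em.persist(new SomeEntity("development seed data")); // hypothetical entity
    }
}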
You may want to use some aspects here. Just set the advice to run before
void init(ServletConfig servletConfig)
(the FacesServlet method whose body begins with the comment "Acquire the factory instances we will..."). Maybe this will help you.
