How to inject a @Normal (@ApplicationScoped) bean into a @Dependent scope if the bean does not have a no-arg constructor - cdi

This post is related to an older SO post of mine, wherein I was trying to understand Weld's requirement for a no-args constructor.
Right now, I'm trying to figure out if there is a way in CDI to inject an @ApplicationScoped bean (@Normal) into a @Dependent scope. From what I've read about Weld, a bean must have a non-private no-arg constructor to be proxyable. However, I do not have control over the bean definition as it is provided by a library. My code is doing the following:
@Produces
@ApplicationScoped
@Named("keycloakAdmin")
public Keycloak getKeycloakAdminClient(@Named("keycloakDeployment") final KeycloakDeployment deployment) {
    String clientId = deployment.getResourceName();
    Map<String, Object> clientCredentials = deployment.getResourceCredentials();
    // need to set the resteasy client connection pool size > 0 to ensure thread safety (https://access.redhat.com/solutions/2192911)
    ResteasyClient client = new ResteasyClientBuilder()
            .connectionPoolSize(CONNECTION_POOL_SIZE)
            .maxPooledPerRoute(CONNECTION_POOL_SIZE)
            .defaultProxy("localhost", 8888)
            .build();
    KeycloakBuilder builder = KeycloakBuilder.builder()
            .clientId(clientId)
            .clientSecret((String) clientCredentials.get(CredentialRepresentation.SECRET))
            .realm(deployment.getRealm())
            .serverUrl(deployment.getAuthServerBaseUrl())
            .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
            .resteasyClient(client);
    return builder.build();
}
@Produces
@Dependent
@Named("keycloakRealm")
// error thrown here: cannot inject the @Normal scoped Keycloak bean as it is not proxyable because it has no no-arg constructor
public RealmRepresentation getKeycloakRealm(@Named("keycloakAdmin") final Keycloak adminClient) {
    return adminClient.realm(resolveKeycloakDeployment().getRealm()).toRepresentation();
}
The problem is that I do not control the Keycloak bean; it is provided by the library. Consequently, I have no way of providing a no-argument constructor to the bean.
Does this mean it is impossible to do? Are there any workarounds one can use? This would seem like a significant limitation in Weld, particularly when it comes to producing third-party beans with @Produces.
My goal is to have a single Keycloak bean for the application as it is thread-safe and only needs to be initialized once. However, I want to be able to inject it into non-application-scoped beans.
There is a @Singleton scope which may address my issue, but if @Singleton works for this case, what is the purpose of the two different scopes? Under what circumstances would one want a non-proxied singleton (@Singleton) vs a proxied one (@ApplicationScoped)? Or is @Singleton for the entire container, whereas @ApplicationScoped is for the application (WAR) only? How does this apply to an EAR or multiple EARs?
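For illustration, what I have in mind with @Singleton is roughly the following sketch (untested); since @Singleton is a pseudo-scope, the bean would be injected without a client proxy, so the missing no-arg constructor should no longer matter:
@Produces
@Singleton // javax.inject.Singleton instead of @ApplicationScoped
@Named("keycloakAdmin")
public Keycloak getKeycloakAdminClient(@Named("keycloakDeployment") final KeycloakDeployment deployment) {
    Map<String, Object> clientCredentials = deployment.getResourceCredentials();
    return KeycloakBuilder.builder()
            .clientId(deployment.getResourceName())
            .clientSecret((String) clientCredentials.get(CredentialRepresentation.SECRET))
            .realm(deployment.getRealm())
            .serverUrl(deployment.getAuthServerBaseUrl())
            .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
            .build();
}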

Related

JSF Singleton Services/DAO/.. vs ApplicationScope [duplicate]

I'm trying to get used to how JSF works with regard to accessing data (coming from a Spring background).
I'm creating a simple example that maintains a list of users; I have something like:
<h:dataTable value="#{userListController.userList}" var="u">
    <h:column>#{u.userId}</h:column>
    <h:column>#{u.userName}</h:column>
</h:dataTable>
Then the "controller" has something like
@Named(value = "userListController")
@SessionScoped
public class UserListController {

    @EJB
    private UserListService userListService;

    private List<User> userList;

    public List<User> getUserList() {
        userList = userListService.getUsers();
        return userList;
    }
}
And the "service" (although it seems more like a DAO) has
public class UserListService {

    @PersistenceContext
    private EntityManager em;

    public List<User> getUsers() {
        Query query = em.createQuery("SELECT u from User as u");
        return query.getResultList();
    }
}
Is this the correct way of doing things? Is my terminology right? The "service" feels more like a DAO? And the controller feels like it's doing some of the job of the service.
Is this the correct way of doing things?
Apart from performing business logic the inefficient way in a managed bean getter method, and using a too broad managed bean scope, it looks okay. If you move the service call from the getter method to a @PostConstruct method and use either @RequestScoped or @ViewScoped instead of @SessionScoped, it will look better.
See also:
Why JSF calls getters multiple times
How to choose the right bean scope?
Is my terminology right?
It's okay. As long as you're consistent with it and the code is readable in a sensible way. Only your way of naming classes and variables is somewhat awkward (illogical and/or duplication). For instance, I personally would use users instead of userList, and use var="user" instead of var="u", and use id and name instead of userId and userName. Also, a "UserListService" sounds like it can only deal with lists of users instead of users in general. I'd rather use "UserService" so you can also use it for creating, updating and deleting users.
See also:
JSF managed bean naming conventions
The "service" feels more like a DAO?
It isn't exactly a DAO. Basically, JPA is the real DAO here. Previously, when JPA didn't exist, everyone homegrew DAO interfaces so that the service methods can keep using them even when the underlying implementation ("plain old" JDBC, or "good old" Hibernate, etc) changes. The real task of a service method is transparently managing transactions. This isn't the responsibility of the DAO.
See also:
I found JPA, or alike, don't encourage DAO pattern
DAO and JDBC relation?
When is it necessary or convenient to use Spring or EJB3 or all of them together?
And the controller feels like it's doing some of the job of the service.
I can imagine that it does that in this relatively simple setup. However, the controller is in fact part of the frontend not the backend. The service is part of the backend which should be designed in such way that it's reusable across all different frontends, such as JSF, JAX-RS, "plain" JSP+Servlet, even Swing, etc. Moreover, the frontend-specific controller (also called "backing bean" or "presenter") allows you to deal in a frontend-specific way with success and/or exceptional outcomes, such as in JSF's case displaying a faces message in case of an exception thrown from a service.
See also:
JSF Service Layer
What components are MVC in JSF MVC framework?
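To make that last point concrete, here is a hedged sketch (the class and method names are illustrative, not from the original answer) of a backing bean action that translates a service exception into a faces message:
public String save() {
    try {
        userService.create(user);
        return "users?faces-redirect=true";
    }
    catch (ServiceException e) { // hypothetical exception type thrown by the service
        FacesContext.getCurrentInstance().addMessage(null,
            new FacesMessage(FacesMessage.SEVERITY_ERROR, "Could not save user", null));
        return null;
    }
}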
All in all, the correct approach would be like below:
<h:dataTable value="#{userBacking.users}" var="user">
    <h:column>#{user.id}</h:column>
    <h:column>#{user.name}</h:column>
</h:dataTable>
@Named
@RequestScoped // Use @ViewScoped once you bring in ajax (e.g. CRUD)
public class UserBacking {

    private List<User> users;

    @EJB
    private UserService userService;

    @PostConstruct
    public void init() {
        users = userService.listAll();
    }

    public List<User> getUsers() {
        return users;
    }
}
@Stateless
public class UserService {

    @PersistenceContext
    private EntityManager em;

    public List<User> listAll() {
        return em.createQuery("SELECT u FROM User u", User.class).getResultList();
    }
}
You can find a real world kickoff project here utilizing the canonical Java EE / JSF / CDI / EJB / JPA practices: Java EE kickoff app.
See also:
Creating master-detail pages for entities, how to link them and which bean scope to choose
Passing a JSF2 managed pojo bean into EJB or putting what is required into a transfer object
Filter do not initialize EntityManager
javax.persistence.TransactionRequiredException in small facelet application
It is a DAO (well, actually a repository, but don't worry about that difference too much), as it accesses the database using the persistence context.
You should create a Service class, that wraps that method and is where the transactions are invoked.
Sometimes the service classes feel unnecessary, but when you have a service method that calls many DAO methods, their use is more warranted.
I normally end up just creating the service, even if it does feel unnecessary, to ensure the patterns stay the same and the DAO is never injected directly.
This adds an extra layer of abstraction, making future refactoring more flexible.
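A rough sketch of that layering (the class and method names are only illustrative, not from the original answer):
@Stateless // the container starts and commits a transaction around each service method
public class UserService {

    @Inject
    private UserDao userDao; // hypothetical DAO that only talks to the EntityManager

    public void register(User user) {
        // several DAO calls share the same transaction here
        userDao.save(user);
        userDao.saveAuditEntry(user);
    }
}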

Removing JSF managed beans on session timeout

I'm having trouble working out how to correctly handle automatic destruction of a session in JSF. When the session times out, it gets invalidated by the container, resulting in @PreDestroy methods being called on the session scoped beans as well.
In the @PreDestroy of some session scoped beans, we're unregistering some listeners, like below:
@PreDestroy
public void destroy() {
    getWS().removeLanguageChangeListener(this);
}
However, the getWS() method actually attempts to get a reference to another session scoped bean, but that fails, as FacesContext.getCurrentInstance() returns null.
The latter appears to be normal JSF behaviour, according to Ryan Lubke:
We're true to the specification here. I'm not sure it's safe to assume that the FacesContext will be available in all @PreDestroy cases. Consider session scoped beans. The session could be timed out by the container due to inactivity. The FacesContext cannot be available at that time.
Fine by me, but how should one then make sure all objects are correctly cleared? Is it bad practice to remove yourself as a listener in @PreDestroy?
Or would we only have to do this for request/view scoped beans, since their lifespan is shorter than the session scope of WS (from getWS())?
Note that I get this behaviour on Tomcat7, but I expect this problem happens on every container.
I think session beans are cleaned up in a dedicated thread on a servlet container and thus are outside of the FacesContext (which is associated with a JSF request). You could use an HttpSessionListener to overcome the problem and clean up session resources. Something like:
@WebListener
public class LifetimeHttpSessionListener implements HttpSessionListener {

    @Override
    public void sessionCreated(final HttpSessionEvent e) {
        // create some instance here and save it in the HttpSession map
        Object someInstance = new Object(); // placeholder for whatever resource needs a per-session lifecycle
        HttpSession session = e.getSession();
        session.setAttribute("some_key", someInstance);
        // or elsewhere in a JSF context:
        // FacesContext.getCurrentInstance().getExternalContext().getSessionMap().put("some_key", someInstance);
    }

    @Override
    public void sessionDestroyed(final HttpSessionEvent e) {
        // retrieve the resources and clean them up here
        HttpSession session = e.getSession();
        Object someInstance = session.getAttribute("some_key");
    }
}
Hope this can be helpful for you

JSF application backend architecture with JPA and CDI

I'm working on a JSF application with JPA and CDI; I use the following backend architecture:
Controllers (CDI annotation for JSF process)
Services (CDI annotations to be injected into Controllers and other Services)
DAOs (handled with EntityManager)
My question is: how exactly should the EntityManager and transactions be handled?
For example, transactions (I don't use EJB or DeltaSpike, so no declarative transactions available) should be managed by the Service layer (am I right?), but every other data-related operation should be handled by the DAOs. So where should the EntityManager be injected?
Also, should EntityManager be request (or session or method) scoped?
Thanks,
krisy
I would use the service layer to manage business logic and the data access layer to manage the object-relational model. As a consequence, the entity manager and transactions should be part of the DAO. It's important to keep transactions as short as possible.
The decision which type of scope to choose is not so obvious, as it depends on the nature of your bean/application. Example usages, following this presentation, slide #15:
@RequestScoped: DTO/Models, JSF backing beans
@ConversationScoped: multi-step workflow, Shopping cart
@SessionScoped: User login credentials
@ApplicationScoped: Data shared by entire app, Cache
As you can see, the scope of a given bean and the related entity manager is specific to the problem it concerns. If a given bean is request scoped, its state is preserved for a single HTTP request in the same HTTP session. For a session scoped bean the state is maintained throughout the HTTP session. An example approach may look somewhat like the following (pseudocode):
@SessionScoped // conversation or application scoped would work as well
public class ServiceImpl implements Service {

    @Inject
    private Dao dao;

    public void createSomething(SomeDto dto) {
        // dto -> entity transformation
        dao.create(entity);
    }

    public SomeDto getSomething(int id) {
        SomeEntity entity = dao.findById(id); // go through the DAO, not the EntityManager, here
        // entity -> dto transformation
        return dto;
    }
}

@RequestScoped
@Transactional
public class DaoImpl implements Dao {

    @Inject
    private EntityManager em; // creating an EntityManager is cheap

    // TxType.REQUIRED by default
    public void create(SomeEntity entity) {
        em.persist(entity);
    }

    @Transactional(TxType.NOT_SUPPORTED)
    public SomeEntity findById(int id) {
        return em.find(SomeEntity.class, id);
    }
}
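As a side note, a plain @Inject EntityManager only resolves if something in the application produces it. A minimal sketch of such a producer, assuming a persistence unit named "myPU" (the name is just a placeholder), could look like:
@ApplicationScoped
public class EntityManagerProducer {

    // container-managed persistence context; "myPU" is a hypothetical unit name
    @PersistenceContext(unitName = "myPU")
    private EntityManager em;

    @Produces
    @RequestScoped
    public EntityManager entityManager() {
        return em;
    }
}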

How can I initialize a Java FacesServlet

I need to run some code when the FacesServlet starts, but as FacesServlet is declared final I cannot extend it and override the init() method.
In particular, I want to write some data to the database during development and testing, after Hibernate has dropped and created the data model.
Is there a way to configure Faces to run some method, e.g. in faces-config.xml?
Or is it best to create a singleton bean that does the initialization?
Use an eagerly initialized application scoped managed bean.
@ManagedBean(eager=true)
@ApplicationScoped
public class App {

    @PostConstruct
    public void startup() {
        // ...
    }

    @PreDestroy
    public void shutdown() {
        // ...
    }
}
(The class and method names don't actually matter; they're free to your choice, it's all about the annotations.)
This is guaranteed to be constructed after the startup of the FacesServlet, so the FacesContext will be available whenever necessary. This is in contrast to the ServletContextListener suggested by the other answer.
You could implement your own ServletContextListener that gets notified when the web application is started. Since it's container managed, you can inject resources there and do whatever you want to do. The other option is to create a @Singleton EJB with @Startup and do the work in its @PostConstruct method. Usually the ServletContextListener works fine; however, if you have more than one web application inside an EAR and they all share the same persistence context, you may consider using a @Singleton bean.
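A minimal sketch of that @Singleton/@Startup approach (the class name is illustrative):
@Singleton
@Startup
public class StartupInitializer {

    @PostConstruct
    public void init() {
        // runs once when the application starts, e.g. to seed test data
    }
}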
Hey, you may want to use some aspects here. Just set the aspect to run before
void init(ServletConfig servletConfig)
which is where the FacesServlet acquires the factory instances it will need.
Maybe this will help you.

StackOverflow error when initializing JSF SessionScoped bean in HttpSessionListener

Continuing on my previous question, I'm trying to initialize a session-scoped JSF bean when the application's session first starts, so the bean will be available to a user, regardless of which page they access on my web application first. My custom listener:
public class MyHttpSessionListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        if (FacesContext.getCurrentInstance().getExternalContext().getSessionMap()
                .get("mySessionBean") == null) {
            FacesContext.getCurrentInstance().getExternalContext().getSessionMap()
                .put("mySessionBean", new MySessionBean());
        }
    }
}
However, this is giving me a stack overflow error. It appears that the put() method in the SessionMap class tries to create a new HttpSession, thus causing an infinite loop to occur with my listener. How can I initialize a JSF session-scoped bean when my application's session first starts, without running into this issue?
I'm using JSF 2 with Spring 3, running on WebSphere 7.
Thanks!
The session hasn't fully finished being created at that point. Only when the listener method returns is the session put into the context and made available by request.getSession(), which JSF's getSessionMap() uses under the covers.
Instead, you should grab the session from the event argument and use its setAttribute() method. JSF looks up and stores session scoped managed beans right there and won't create a new one if one is already present.
public void sessionCreated(HttpSessionEvent event) {
    event.getSession().setAttribute("mySessionBean", new MySessionBean());
}
Note that I removed the superfluous nullcheck as it's at that point impossible that the session bean is already there.
Unrelated to the concrete problem, you should actually never rely on the FacesContext being present in an implementation which isn't managed by JSF. It is quite possible that the session can be created during a non-JSF request.
