Where should I create my ConnectionPool in a JSF application? - jsf

I am new to JSF/Facelets and I am creating an application that does the usual CRUD operations over a (No SQL) database. The database has an API that allows the creation of a pool of connections, and then from this object my operations can take and release connections. I suppose this pool has to be created only once for the whole application when it is deployed, be shared (as static?) and closed once the application is destroyed. Is my approach correct? What is the best practice to do this? I have no idea of where I should place my code and how I should call it.
With my old SQL database I used to configure a "testOnBorrow" and a "validationQuery" in the context.xml Resource so I didn't have to create an explicit pool programmatically.
I found two great tutorials (here and here) but I can't figure out from them where to put the code that creates the pool.
Note: I know this might be only a Servlet problem, but I am tagging it as JSF since I don't know if there is a way to do this in JSF (like an application scoped bean). Thanks.
EDIT
Given that I cannot find a JAR providing a DataSource for this database that could be loaded via context.xml, perhaps my question should be more specific: where can I run code once when a JSF application is deployed, and where can I run code when a JSF application is destroyed?

You can implement a web listener (i.e. a ServletContextListener) and use its contextInitialized and contextDestroyed methods to create and destroy your connection pool.
Sample :
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ContextWebListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Initialize the connection pool.
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // Destroy the connection pool.
    }
}
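To make the lifecycle concrete, here is a minimal, generic blocking pool that such a listener could create in contextInitialized() and close in contextDestroyed(). This is only a sketch, not your database's API: the connection type and the factory are placeholders for whatever the NoSQL client actually provides.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal pool: pre-creates N connections, hands them out and takes them back.
public class SimplePool<C> {
    private final BlockingQueue<C> idle;

    public SimplePool(int size, Supplier<C> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());  // create connections up front
        }
    }

    public C take() {
        try {
            return idle.take();       // blocks until a connection is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a connection", e);
        }
    }

    public void release(C connection) {
        idle.offer(connection);       // return the connection to the pool
    }

    public int available() {
        return idle.size();
    }
}
```

Rather than making the pool static, the listener would typically expose it to the rest of the application via event.getServletContext().setAttribute("pool", pool), and retrieve it from the ServletContext wherever a connection is needed.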

Related

Spring Reactive Cassandra, use custom CqlSession

How do we use a custom CqlSession in a Spring WebFlux application combined with the Spring Boot starter for reactive Cassandra?
I am currently doing the following, which is working perfectly:
public class BaseCassandraConfiguration extends AbstractReactiveCassandraConfiguration {

    @Bean
    @NonNull
    @Override
    public CqlSessionFactoryBean cassandraSession() {
        final CqlSessionFactoryBean cqlSessionFactoryBean = new CqlSessionFactoryBean();
        cqlSessionFactoryBean.setContactPoints(contactPoints);
        cqlSessionFactoryBean.setKeyspaceName(keyspace);
        cqlSessionFactoryBean.setLocalDatacenter(datacenter);
        cqlSessionFactoryBean.setPort(port);
        cqlSessionFactoryBean.setUsername(username);
        cqlSessionFactoryBean.setPassword(passPhrase);
        return cqlSessionFactoryBean;
    }
}
However, I would like to use a custom session, something like:
CqlSession session = CqlSession.builder().build();
How do we tell this configuration to use it?
Thank you
Option 1:
If you are looking to completely override the auto-configured CqlSession bean, you can do so by providing your own CqlSession bean, i.e.:
@Bean
public CqlSession cassandraSession() {
    return CqlSession.builder().withClientId(MyClientId).build();
}
The downside of overriding the entire bean is that you lose the ability to configure the session via application properties, as well as the defaults Spring Boot ships with.
Option 2:
If you want to keep the default values provided by Spring Boot and retain the ability to configure the session via application properties, you can use a CqlSessionBuilderCustomizer to apply specific customizations to the CqlSession. This can be achieved by defining a bean of that type, i.e.:
@Bean
public CqlSessionBuilderCustomizer myCustomiser() {
    return cqlSessionBuilder -> cqlSessionBuilder.withClientId(MyClientId);
}
My personal preference is option 2, as it maintains the functionality provided by Spring Boot, which in my opinion results in an application that is easier to maintain over time.

Does load balancing in the server side of a web app generate multiple instances of a singleton AppScoped bean of JSF?

Suppose I deploy my web app on AWS or GAE, and the app has a singleton @ApplicationScoped JSF bean with methods void setList(List) and List getList().
I call these methods from a @SessionScoped bean when a user modifies the list.
I want to ensure that all users see the changes in their own session, by pushing a notification message to them so that they can fetch the list again.
If the AWS or GAE load balancer splits the app across several instances, how do they manage this singleton @ApplicationScoped bean? Are there multiple instances of the singleton? How are they synchronized? Is there any risk that one instance holds different information?
I suppose all instances of the app on each server participating in the load balancing need to be updated somehow, but this would defeat the purpose of load balancing, since the work would be replicated everywhere. It might be possible that the singleton's work is not load balanced, but I don't know. The documentation is very extensive and hard to get familiar with.
@ManagedBean(name = "theModelBean", eager = true)
@Singleton
@ApplicationScoped
public class ModelBean {

    private ArrayList<Data> theList;

    public List<Data> getList() {
        return theList;
    }

    public void setList(List<Data> aList) {
        this.theList = aList;
    }
}
@ManagedBean(name = "theController")
@SessionScoped
public class Controller {

    @ManagedProperty(value = "#{theModelBean}")
    private ModelBean theModelBean;

    public void foo() {
        ArrayList<Data> list = new ArrayList<>();
        list.add(new Data());
        theModelBean.setList(list);
    }
}
I wish load balancing would not interfere with my logic and would handle everything transparently for me. Otherwise I might have to make theModelBean write the list to the database every time it changes, and fetch it from there every time it is requested from a session.
I'll ignore the "load balancing" / "load balanced" terms in your question and assume that you actually meant "clustering" / "clustered". As in: the same WAR file is being deployed to multiple servers which are all behind a single proxy (who does the actual load balancing, but load balancing itself is not the cause of the observed problem).
Yes, each server in the cluster will get its own instance of any "application scoped" bean. This not only includes the JSF @javax.faces.bean.ApplicationScoped, but also the CDI @javax.enterprise.context.ApplicationScoped and @javax.inject.Singleton, and the EJB @javax.ejb.Singleton.
The normal approach is indeed to keep track of shared data in a single common data source used by all servers in the cluster. Usually a SQL-based RDBMS is used for that, and you fire a SQL query to obtain the most recent data from the DB on every request/view. If you're using JPA, you can use the 2nd-level cache to reduce the number of DB hits; this can be configured cluster-wide.
If the data is immutable (i.e. read-only after creation), then an alternative to saving it in a DB is to rely on session persistence. Have a @SessionScoped bean which reads from the @ApplicationScoped one during writeObject() and writes-if-absent to the @ApplicationScoped one during readObject(). One real-world example is implemented in the code behind the JSF <f:websocket> and the OmniFaces <o:socket>.
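A stripped-down illustration of that writeObject()/readObject() idea: a plain static map stands in for the @ApplicationScoped store, and an ordinary Serializable class stands in for the @SessionScoped bean (the class and key names here are made up for the sketch). When the session state is serialized on one node and deserialized on another, the immutable data travels along with it and is written back if that node doesn't have it yet.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Stand-in for the @ApplicationScoped store; one copy per JVM / cluster node.
class AppScopedStore {
    static final ConcurrentMap<String, Object> DATA = new ConcurrentHashMap<>();
}

// Would be @SessionScoped in a real app; its state travels between nodes
// via session serialization, carrying the application data with it.
public class SessionBean implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String key;

    public SessionBean(String key) {
        this.key = key;
    }

    public Object getData() {
        return AppScopedStore.DATA.get(key);
    }

    // On passivation: copy the application-scoped value into the session state.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeObject(AppScopedStore.DATA.get(key));
    }

    // On activation (possibly on another node): restore it if absent there.
    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        Object value = in.readObject();
        if (value != null) {
            AppScopedStore.DATA.putIfAbsent(key, value);
        }
    }
}
```

Because putIfAbsent() never overwrites, this only works safely for data that is immutable after creation, which is exactly the restriction stated above.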

Threads in JSF?

I am new to JSF, and I need to use threads for Google Maps. I am using PrimeFaces for Google Maps, but I need to execute a background thread to get latitude and longitude from the database and then plot the markers on the map.
Your question is not specific to JSF, but to Java web applications in general. So, how do you perform tasks asynchronously in a Java web application? Definitely NOT by creating your own threads.
A Java web application runs in an application server (for example JBoss). It is the responsibility of the application server to manage Java threads for you. For instance, it will use a separate thread for each incoming web request. The application server creates a pool of threads and reuses them, since it is somewhat expensive to create new ones all the time. That's why you should not create your own, especially if it's done for every web request, since that will directly impact scalability.
To execute tasks asynchronously, you can use the EJB @Asynchronous annotation (assuming the app is running in a Java EE container like JBoss, but not plain Tomcat).
import javax.ejb.Asynchronous;
import javax.ejb.Singleton;

@Singleton
public class AsyncBean {

    @Asynchronous
    public void doSomethingAsynchronously() {
        // When this EJB is injected somewhere and this method is called, it
        // returns to the caller immediately and its logic runs in the background.
    }
}
If the app is not running in a Java EE container, take a look at this answer which nicely lays out some other options for async processing in web apps.
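As one illustration of such an option (a plain-JDK sketch, not taken from the linked answer): a CompletableFuture running on an explicitly managed pool can fetch the coordinates in the background and hand back the result when it completes. fetchCoordinates() is a placeholder for the real database call.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncFetchDemo {

    // Placeholder for the real data-access call (e.g. reading marker coordinates).
    static List<String> fetchCoordinates() {
        return List.of("48.8584,2.2945", "40.6892,-74.0445");
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Run the fetch in the background and transform the result when done.
            CompletableFuture<Integer> markerCount =
                    CompletableFuture.supplyAsync(AsyncFetchDemo::fetchCoordinates, pool)
                                     .thenApply(List::size);

            System.out.println("markers: " + markerCount.join()); // prints "markers: 2"
        } finally {
            pool.shutdown();
        }
    }
}
```

In a servlet environment the pool itself should still be created and shut down by the container-aware code (e.g. a ServletContextListener, as shown below in this answer), never ad hoc per request.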
JSF is completely unrelated to your problem. In this case, JSF acts as a mere HTML generator. Your specific problem is how to prepare data asynchronously and consume it from your web app.
You can create the thread manually when the application starts on a class that implements ServletContextListener interface, like this:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class ApplicationListener implements ServletContextListener {

    private final ExecutorService executor;

    public ApplicationListener() {
        executor = Executors.newSingleThreadExecutor();
    }

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Runnable task = new Runnable() {
            @Override
            public void run() {
                // Process the data here...
            }
        };
        executor.submit(task);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdownNow();
    }
}
Improve the design above to fulfill your requirements. Take into account that creating threads in an application server should only be done if you know what you're doing.
Another implementation would be to use a separate application to do the processing (let's call it the Data Processor), which by default runs in its own threads and environment. Then have your web application communicate with this Data Processor through a cache or NoSQL product like EhCache, Infinispan, or Hazelcast.

How can I initialize a Java FacesServlet

I need to run some code when the FacesServlet starts, but as FacesServlet is declared final I cannot extend it and override the init() method.
In particular, I want to write some data to the database during development and testing, after hibernate has dropped and created the datamodel.
Is there a way to configure Faces to run some method, e.g. in faces-config.xml?
Or is it best to create a singleton bean that does the initialization?
Use an eagerly initialized application scoped managed bean.
@ManagedBean(eager = true)
@ApplicationScoped
public class App {

    @PostConstruct
    public void startup() {
        // ...
    }

    @PreDestroy
    public void shutdown() {
        // ...
    }
}
(The class and method names don't actually matter and are free to your choice; it's all about the annotations.)
This is guaranteed to be constructed after the startup of the FacesServlet, so the FacesContext will be available whenever necessary. This is in contrast to the ServletContextListener approach suggested in the other answer.
You could implement your own ServletContextListener, which gets notified when the web application is started. Since it's container managed, you can inject resources there and do whatever you want. The other option is to create a @Singleton EJB with @Startup and do the work in its @PostConstruct method. Usually the ServletContextListener works fine; however, if you have more than one web application inside an EAR and they all share the same persistence context, you may consider using a @Singleton bean.
You may want to use some aspects here. Just set the advice to run before
void init(ServletConfig servletConfig)
(which is where the servlet acquires its factory instances). Maybe this will help you.

Ninject dependency injection in SharePoint Timer Job

I have successfully implemented an enterprise SharePoint solution using Ninject for dependency injection and other infrastructure such as NLog for logging, following an Onion architecture. With an HttpModule as the Composition Root for the injection framework, it works great for normal web requests:
public class SharePointNinjectHttpModule : IHttpModule, IDisposable
{
    private HttpApplication _httpApplication;

    public void Init(HttpApplication context)
    {
        if (context == null) throw new ArgumentNullException("context");
        _httpApplication = context;
        Ioc.Container = IocContainerFactory.CreateContainer();
    }

    public void Dispose()
    {
        if (_httpApplication == null) return;
        _httpApplication.Dispose();
        Ioc.Container.Dispose();
    }
}
The CreateContainer method loads the Ninject modules from a separate class library, and my IoC container is abstracted.
For normal web application requests I used a shared static class for the injector, called Ioc. The UI layer has an MVP pattern implementation; e.g. in the .aspx page the presenter is constructed as follows:
presenter = Ioc.Container.Get<SPPresenter>(new Ninject.Parameters.ConstructorArgument("view", this));
I'm still reliant on a Ninject reference for the parameters. Is there any way to abstract this, other than mapping a lot of methods in an interface? Can't I just pass in simple types as arguments?
The injection itself works great; however, my difficulty comes in when using external processes such as SharePoint timer jobs. It would obviously be a terrible idea to reuse the IoC container from here, so the job needs to bootstrap the dependencies itself. In addition, it needs to load the configuration from the web application pool, not the admin web application; otherwise the job would only be able to run on the application server. This way the job can run on any web server, and your SharePoint feature only has to deploy configurations etc. to the web application.
Here is the execute method of my timer job. It opens the associated web application's configuration and passes it to the logging service (NLog), which reads its configuration from the external web config service. I have written code that reads a custom section in the configuration file and initializes the NLog logging infrastructure.
public override void Execute(Guid contentDbId)
{
    try
    {
        using (var ioc = IocContainerFactory.CreateContainer())
        {
            // Open configuration from the web application.
            var configService = ioc.Get<IConfigService>(new ConstructorArgument("webApplicationName", this.WebApplication.Name));

            // Get the logging service and set it up with the web application configuration.
            var loggingService = ioc.Get<ILoggingService>();
            loggingService.SetConfiguration(configService);

            // Reapply bindings.
            ioc.Rebind<IConfigService>().ToConstant(configService);
            ioc.Rebind<ILoggingService>().ToConstant(loggingService);

            try
            {
                loggingService.Info("Test Job started.");

                // Use services etc...
                var productService = ioc.Get<IProductService>();
                var products = productService.GetProducts(5);
                loggingService.Info("Got products: " + products.Count() + " Config from web application: " + configService.TestConfigSetting);

                loggingService.Info("Test Job completed.");
            }
            catch (Exception exception)
            {
                loggingService.Error(exception);
            }
        }
    }
    catch (Exception exception)
    {
        EventLog.WriteError(exception, "Exception thrown in Test Job.");
    }
}
This does not make the timer jobs robust enough, and there is a lot of boilerplate code. My question is: how do I improve on this design? It's not the most elegant; I'm looking for a way to abstract the timer job operation code and have its dependencies injected into it for each timer job. I would just like to hear your comments on whether you think this is a good approach, or whether someone has faced similar problems. Thanks
I think I've answered my own question with the presenter construction code above. When using dependency injection in a project, the injection itself is not that important, but the way it changes the way you write code is far more significant. I need to use a similar pattern such as command for my SharePoint timer job operations. I'd just like the bootstrapping to be handled better.
