Correct design and concurrency issues of servlet/jdbc - multithreading

I have a couple of questions regarding correct design and concurrency. As an example, I created a simple application that takes parameters via a servlet and adds them to a database. The process is as follows:
1) Send firstname/lastname to servlet
2) Servlet calls PersonDao.createPerson(firstname, lastname).
Classes involved...
PersonDao(Interface)
PersonDaoImpl(Concrete Class)
AbstractDao(Abstract class)
PersonController(Servlet)
I would like your opinions on whether this is correctly designed, connection-pooled code. Is the static creation of the data source correct? Would you change anything in the AbstractDao class that could pose a concurrency issue?
public interface PersonDao {
public void createPerson(String firstname, String lastname);
}
_
public class PersonDaoImpl extends AbstractDao implements PersonDao {
@Override
public void createPerson(String firstname, String lastname) {
String query = " insert into persons values (?,?) ";
Connection connection = null;
PreparedStatement ps = null;
try {
connection = getConnection();
ps = connection.prepareStatement(query);
ps.setString(1, firstname);
ps.setString(2, lastname);
ps.executeUpdate();
} catch (SQLException e) {
System.out.println(e.toString());
} finally {
close(connection, ps, null);
}
}
}
_
public abstract class AbstractDao {
protected static DataSource dataSource;
static{
try {
dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDataSource");
} catch (NamingException e) {
throw new ExceptionInInitializerError("'jdbc/MyDataSource' not found in JNDI");
}
}
protected Connection getConnection() throws SQLException {
return dataSource.getConnection();
}
protected void close(Connection connection) {
close(connection, null, null);
}
protected void close(Connection connection, Statement ps) {
close(connection, ps, null);
}
protected void close(Connection connection, Statement ps, ResultSet rs) {
try {
if (rs != null)
rs.close();
if (ps != null)
ps.close();
if (connection != null)
connection.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
-
@WebServlet("/PersonController")
public class PersonController extends HttpServlet {
private static final long serialVersionUID = 1L;
public PersonController() {
super();
}
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
}
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
String firstname = request.getParameter("firstname");
String lastname = request.getParameter("lastname");
PersonDao personDao = new PersonDaoImpl();
personDao.createPerson(firstname, lastname);
}
}
My other question is whether there are concurrency issues here, specifically in the servlet. Imagine 1000 requests hitting the servlet simultaneously. What worries me is PersonDaoImpl:
1000 different threads, each with its own stack, so 1000 different instances of PersonDaoImpl. If we go to AbstractDao, each of them calls getConnection() on the data source.
So questions would be...
Does getConnection() pose a concurrency issue?
Can the 1000 different requests pose a threat to the DataSource object in the code above?
What if there were a private PersonDao personDao = new PersonDaoImpl() as an instance field in the servlet? What happens then?
What I'm really confused about is what happens inside doPost when the PersonDaoImpl is instantiated. Can someone give me a walkthrough, please? The gist of my question is whether the code above is thread-safe.

Ironically, I just answered a question from October exactly like this.
See my answer here.
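On the thread-safety question itself: the DAO as written holds no mutable per-request state (everything is either a local variable inside createPerson or the shared, pool-backed DataSource, and a pooled DataSource is designed to serve getConnection() calls from many threads concurrently), so a new PersonDaoImpl per request and a single shared instance are both safe. A minimal sketch of the shared-instance variant, under that assumption:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/PersonController")
public class PersonController extends HttpServlet {
    private static final long serialVersionUID = 1L;

    // One stateless DAO shared by every request thread; safe because it has no
    // mutable fields, only the pooled DataSource held by AbstractDao.
    private final PersonDao personDao = new PersonDaoImpl();

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Parameters and the Connection used inside createPerson are local to this request/thread.
        personDao.createPerson(request.getParameter("firstname"),
                               request.getParameter("lastname"));
    }
}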

Related

Attempt to invoke virtual method...on a null object reference problem

I have a problem with my code where sometimes I get a null object reference and sometimes I don't (on the same thing), and I can't understand what the problem is because, as I said, one time it works fine and the next time it shows me the null object reference message. What I'm trying to do is get the club of the logged-in trainer from my Azure table.
private MobileServiceTable<trainer> TrainerTable=null;
private MobileServiceClient mService=null;
private users cl=null;
private ProgressDialog prg;
private void Trainer_Club_String(final String username)
{
AsyncTask<Void,Void,Void> task = new AsyncTask<Void, Void, Void>()
{
@Override
protected Void doInBackground(Void... voids)
{
try
{
List<trainer> chosen_manager = TrainerTable.where().field("username").eq(username).execute().get(); // the problem is in this line. Sometimes it tells me that the user is null and sometimes it works well
if(chosen_manager.size()>0)
{
trainer_club=chosen_manager.get(0).getClub().toString();
}
}
catch (Exception e)
{
final String message=e.getMessage().toString();
runOnUiThread(new Runnable()
{
@Override
public void run()
{
Toast.makeText(trainer_home_page.this, message, Toast.LENGTH_LONG).show();
}
});
}
return null;
}
}.execute();
}
In onCreate:
cl=StaticObjects.GetClient();
trainer_username=cl.getUsername().toString();
Trainer_Club_String(trainer_username);
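No answer is recorded for this question in this thread. As a hedged sketch only: one common cause of an intermittent NullPointerException in this pattern is calling toString() on a value that can be null, for example getClub() returning null for some rows, or e.getMessage() returning null inside the catch block. Guarding those calls at least pinpoints which value is missing:
// Inside doInBackground (a sketch of defensive checks, not a confirmed fix):
try {
    List<trainer> chosenManager = TrainerTable.where().field("username").eq(username).execute().get();
    if (chosenManager.size() > 0) {
        Object club = chosenManager.get(0).getClub();
        if (club != null) {
            trainer_club = club.toString();
        } else {
            Log.w("Trainer_Club_String", "club is null for user " + username); // android.util.Log
        }
    }
} catch (Exception e) {
    // getMessage() itself can be null, which would make getMessage().toString() throw in turn
    final String message = e.getMessage() != null ? e.getMessage() : e.toString();
    // ... show the toast on the UI thread as in the original code
}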

Can you access a Hazelcast Queue from within an ItemListener?

I have a use case where I have a set of items, DiagnosticRuns, that are submitted to my cluster. I want to process them serially (to avoid conflicts). I am trying to use a Hazelcast Queue protected by a Lock to make sure the items are processed one at a time. Hazelcast is running in embedded mode in my cluster. If I register an ItemListener with the Queue, is it safe to call take() on the Queue from within the itemAdded() method? For example:
@Component
public class DistributedQueueListener
{
public static final String DIAGNOSTICS_RUN_QUEUE_NAME = "diagnosticRun";
@Autowired
private HazelcastInstance hazelcast;
@Autowired
private ProductVersioningService productVersioningService;
private IQueue<DiagnosticRun> diagnosticRunQueue;
private ILock diagnosticRunLock;
private String diagnosticRunListenerId;
@PostConstruct
public void init()
{
diagnosticRunQueue = hazelcast.getQueue(DIAGNOSTICS_RUN_QUEUE_NAME);
diagnosticRunLock = hazelcast.getLock("diagnosticRunLock");
diagnosticRunListenerId = diagnosticRunQueue.addItemListener(new DiagnosticRunListener(), false);
}
@PreDestroy
public void stop()
{
diagnosticRunQueue.removeItemListener(diagnosticRunListenerId);
}
public class DiagnosticRunListener implements ItemListener<DiagnosticRun>
{
@Override
public void itemAdded(ItemEvent<DiagnosticRun> item)
{
diagnosticRunLock.lock(5, TimeUnit.SECONDS);
try
{
DiagnosticRun diagnosticRun = diagnosticRunQueue.poll();
if(diagnosticRun != null)
{
productVersioningService.updateProductDeviceTable(diagnosticRun);
}
}
finally
{
diagnosticRunLock.unlock();
}
}
@Override
public void itemRemoved(ItemEvent<DiagnosticRun> item)
{
}
}
}
I'm not sure whether it's thread-safe to call take() on the Queue from that location and thread.
If that is not allowed, I'll have to set up my own long-running loop to poll() the Queue, and I'm not sure of the best way to set up a long-running thread in a Spring Boot application. Assuming the approach above does not work, would the code below be thread-safe? Or is there a better way to do this?
@Component
public class DistributedQueueListener
{
public static final String DIAGNOSTIC_RUN_QUEUE_NAME = "diagnosticRun";
@Autowired
private HazelcastInstance hazelcast;
@Autowired
private ProductVersioningService productVersioningService;
private IQueue<DiagnosticRun> diagnosticRunQueue;
private ILock diagnosticRunLock;
private ExecutorService executorService;
@PostConstruct
public void init()
{
diagnosticRunQueue = hazelcast.getQueue(DIAGNOSTIC_RUN_QUEUE_NAME);
diagnosticRunLock = hazelcast.getLock("diagnosticRunLock");
executorService = Executors.newFixedThreadPool(1);
executorService.submit(() -> listenToDiagnosticRuns());
}
@PreDestroy
public void stop()
{
executorService.shutdown();
}
private void listenToDiagnosticRuns()
{
while(!executorService.isShutdown())
{
diagnosticRunLock.lock(5, TimeUnit.SECONDS);
try
{
DiagnosticRun diagnosticRun = diagnosticRunQueue.poll(1L, TimeUnit.SECONDS);
productVersioningService.updateProductDeviceTable(diagnosticRun);
}
catch(InterruptedException e)
{
logger.error("Interrupted polling diagnosticRun queue", e);
}
finally
{
diagnosticRunLock.unlock();
}
}
}
}
First, I'll qualify that I'm not exactly an expert on which threads these callbacks are executed on and when, so some may disagree, but here are my thoughts; please chime in, as this looks to be an interesting case. Your first solution mixes Hazelcast's event threading with its operation threading. In fact, you're triggering three operations as a result of a single event. If you put some arbitrary latency into your call to updateProductDeviceTable, you'll see that it eventually slows down but picks back up again after some time; this will cause your local event queue to pile up while the operations are invoked. You could put everything you're doing into a separate thread that you "wake" up on @itemAdded, or, if you can afford a bit of latency, do what you're doing in your second solution. I would, however, make a couple of changes in the
listenToDiagnosticRuns() method:
private void listenToDiagnosticRuns()
{
while(!executorService.isShutdown())
{
if(diagnosticRunQueue.peek() != null)
{
diagnosticRunLock.lock(5, TimeUnit.SECONDS);
try
{
DiagnosticRun diagnosticRun = diagnosticRunQueue.poll(1L, TimeUnit.SECONDS);
if(diagnosticRun != null)
{
productVersioningService.updateProductDeviceTable(diagnosticRun);
}
}
catch(InterruptedException e)
{
logger.error("Interrupted polling diagnosticRun queue", e);
}
finally
{
diagnosticRunLock.unlock();
}
} // peek != null
else
{
try
{
Thread.sleep(5000);
}
catch (InterruptedException e)
{
//do nothing
}
}
}
}
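For completeness, a minimal sketch (not from the original answer) of the other option mentioned above, waking a dedicated worker thread from itemAdded() so the listener itself never touches the queue. The class and method names are illustrative, and the distributed lock from the question is omitted for brevity:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;
import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;

public class WakeUpDiagnosticRunWorker {

    private final IQueue<DiagnosticRun> queue; // DiagnosticRun is the item type from the question
    private final Semaphore wakeUp = new Semaphore(0);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private volatile boolean running = true;

    public WakeUpDiagnosticRunWorker(HazelcastInstance hazelcast) {
        this.queue = hazelcast.getQueue("diagnosticRun");
        // The listener only signals; all queue operations happen on the worker thread.
        queue.addItemListener(new ItemListener<DiagnosticRun>() {
            @Override
            public void itemAdded(ItemEvent<DiagnosticRun> event) {
                wakeUp.release();
            }
            @Override
            public void itemRemoved(ItemEvent<DiagnosticRun> event) {
            }
        }, false);
        worker.submit(this::processLoop);
    }

    private void processLoop() {
        while (running) {
            try {
                // Sleep until itemAdded signals, with a timeout so shutdown is still noticed.
                wakeUp.tryAcquire(5, TimeUnit.SECONDS);
                DiagnosticRun run = queue.poll();
                if (run != null) {
                    // process the item here, e.g. productVersioningService.updateProductDeviceTable(run);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public void stop() {
        running = false;
        worker.shutdownNow();
    }
}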

jdbctemplate Query : Exception in thread "File Watcher" java.lang.OutOfMemoryError: Java heap space

I'm getting an out-of-memory error that happens when I run a query with 500k+ results.
I've tried using a ResultSet as a "pointer" to the DB results, and I set the fetch size to 1000.
What else can I do?
I'm using Spring Boot and the MariaDB driver, thanks.
ExcelComponent.java
@Component
public class ExcelComponent {
@Autowired
DatabaseChooser databaseChooser;
public String create(DbModel dbModel, boolean isTemp) {
JdbcTemplate jdbc = databaseChooser.getJdbcTemplateByDatabaseId(dbModel);
return createFile(query, jdbc, isTemp);
}
@SuppressWarnings("unchecked")
private String createFile(String query, JdbcTemplate jdbc) {
PreparedStatementSet psc = new PreparedStatementSet(query);
jdbc.query(psc, new ResultSetExcelWriter(options));
return "";
}
DatabaseChooser.java
#Component
public class DatabaseChooser {
public JdbcTemplate getJdbcTemplateByDatabaseId(SettingsDbsModel dbModel){
DataSource dataSource = createDatasource(dbModel);
try {
dataSource.getConnection().setAutoCommit(false);
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource, true);
jdbcTemplate.setFetchSize(100);
return jdbcTemplate;
}
ResultSetExcelWriter.java
public class ResultSetExcelWriter implements ResultSetExtractor{
ExcelWriterNew writer;
public ResultSetExcelWriter(WriterOptions csvWriterOptions) {
writer = new ExcelWriterNew(csvWriterOptions);
}
@Override
public Object extractData(ResultSet rs) throws SQLException, DataAccessException {
while (rs.next()) {
System.out.println(rs.getObject(1));
}
writer.make();
return null;
}
}
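No answer is recorded for this question in this thread. As a hedged, general sketch of the streaming approach (an assumption, not taken from an answer): set the fetch size on the JdbcTemplate and hand each row to the writer through a RowCallbackHandler, so nothing accumulates in memory. Whether the driver actually streams depends on the driver; MySQL Connector/J, for example, only streams with a fetch size of Integer.MIN_VALUE, and PostgreSQL requires auto-commit to be off on the very connection that runs the query. Note also that in the code above, setAutoCommit(false) is called on a throwaway connection obtained from the pool, not on the connection the JdbcTemplate later uses.
import java.sql.ResultSet;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowCallbackHandler;

public class StreamingExport {

    // Sketch only: hand each row to the writer as it arrives instead of buffering the full result.
    public void export(DataSource dataSource, String query) {
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);
        jdbc.setFetchSize(1000); // hint to the driver to fetch rows in chunks

        // RowCallbackHandler is invoked once per row; nothing is retained between rows.
        RowCallbackHandler writeRow = (ResultSet rs) -> writeRowToExcel(rs.getObject(1));
        jdbc.query(query, writeRow);
    }

    private void writeRowToExcel(Object value) {
        // placeholder for the real Excel writer (e.g. a streaming SXSSFWorkbook)
    }
}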

Finalize method causing a memory leak?

I cannot resolve a problem and need your help. When I click on the menu I open the customer account, and afterwards I close it. Every time I open the customer account the memory increases. It should go back down when I close the account, but that does not happen.
Class Menu
mnItemCL_Cust.setOnAction(new EventHandler<ActionEvent>() {
@Override
public void handle(ActionEvent t) {
try {
panCenterPrev = (Pane) root.getCenter();
panCenterAct = Customer.listCustomer();
root.setCenter(null);
root.setCenter(panCenterAct);
Customer.btCanc.setOnAction(new EventHandler<ActionEvent>() {
@Override public void handle(ActionEvent e) {
try {
Customer.Fim();
panCenterAct.getChildren().clear();
panCenterAct = null;
root.setCenter(null);
root.setCenter(panCenterPrev);
} catch (Throwable ex) {
Logger.getLogger(Customer.class.getName()).log(Level.SEVERE, null, ex);
}
}
});
Class Customer
public class Customer
{
public static Pane listCustomer() throws SQLException, ClassNotFoundException
{
...
final ObservableList<MyCustomer> data = FXCollections.observableArrayList();
...
}
public static class MyCustomer {
private final SimpleIntegerProperty idcl;
private MyCustomer(Integer pIdcl ) {
this.idcl = new SimpleIntegerProperty(pIdcl);
}
public Integer getIdcl() {
return idcl.get();
}
public void setIdcl(Integer pIdcl) {
idcl.set(pIdcl);
}
}
public static void Fim() throws Throwable {
...
rs = null;
tbViewCL.getItems().clear();
tbViewCL = null;
colIDCL.getColumns().clear();
colIDCL = null;
}
...
protected void finalize() throws Throwable {
try{
...
rs.close();
...// Never happens... why??
} catch(Throwable t) {
throw t;
} finally {
JOptionPane.showMessageDialog(null,"End?");
super.finalize();
}
}
Regards
Java usually reclaims the memory you used when it sees fit, so even if you finalize the object, the memory may still be held for a while. However, if rs.close() never executes, it is probably because something before it is throwing an exception. I recommend you check the code before it just to be sure nothing is doing so. Also, if you catch an exception, it is good practice to log it so you can know what is happening.
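As a minimal sketch of that logging advice, applied to the finalize() method from the question (an assumption about where the failure might be, not a confirmed fix):
@Override
protected void finalize() throws Throwable {
    try {
        if (rs != null) {
            rs.close();
        }
    } catch (Throwable t) {
        // Log instead of silently rethrowing, so a failure before rs.close() becomes visible.
        Logger.getLogger(Customer.class.getName()).log(Level.SEVERE, "cleanup in finalize failed", t);
        throw t;
    } finally {
        super.finalize();
    }
}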

How to intercept methods of EntityManager with Seam 3?

I'm trying to intercept the persist and update methods of javax.persistence.EntityManager in a Seam 3 project.
In a previous version (Seam 2) of the micro-framework I'm trying to make, I did this using an implementation of org.hibernate.Interceptor and declaring it in persistence.xml.
But I want something more "CDI-like" now that we are in a Java EE 6 environment.
I want an event @BeforeTrackablePersist to be fired just before entering an EntityManager.persist call. In the same way, I want an event @BeforeTrackableUpdate to be fired before entering an EntityManager.merge call. Trackable is an interface that some of my entities can implement in order to be intercepted before persist or merge.
I'm using the Seam 3 (3.1.0.Beta3) extended persistence manager:
public class EntityManagerHandler {
@SuppressWarnings("unused")
@ExtensionManaged
@Produces
@PersistenceUnit
private EntityManagerFactory entityManagerFactory;
}
So I've made a javax.enterprise.inject.spi.Extension and tried many ways to do it:
public class TrackableExtension implements Extension {
@Inject @BeforeTrackablePersisted
private Event<Trackable> beforeTrackablePersistedEvent;
@Inject @BeforeTrackableMerged
private Event<Trackable> beforeTrackableMergedEvent;
@SuppressWarnings("unchecked")
public void processEntityManagerTarget(@Observes final ProcessInjectionTarget<EntityManager> event) {
final InjectionTarget<EntityManager> injectionTarget = event.getInjectionTarget();
final InjectionTarget<EntityManager> injectionTargetProxy = (InjectionTarget<EntityManager>) Proxy.newProxyInstance(event.getClass().getClassLoader(), new Class[] {InjectionTarget.class}, new InvocationHandler() {
@Override
public Object invoke(final Object proxy, final Method method, final Object[] args) throws Throwable {
if ("produce".equals(method.getName())) {
final CreationalContext<EntityManager> ctx = (CreationalContext<EntityManager>) args[0];
final EntityManager entityManager = decorateEntityManager(injectionTarget, ctx);
return entityManager;
} else {
return method.invoke(injectionTarget, args);
}
}
});
event.setInjectionTarget(injectionTargetProxy);
}
public void processEntityManagerType(@Observes final ProcessAnnotatedType<EntityManager> event) {
final AnnotatedType<EntityManager> type = event.getAnnotatedType();
final AnnotatedTypeBuilder<EntityManager> builder = new AnnotatedTypeBuilder<EntityManager>().readFromType(type);
for (final AnnotatedMethod<? super EntityManager> method : type.getMethods()) {
final String name = method.getJavaMember().getName();
if (StringUtils.equals(name, "persist") || StringUtils.equals(name, "merge")) {
builder.addToMethod(method, TrackableInterceptorBindingLiteral.INSTANCE);
}
}
event.setAnnotatedType(builder.create());
}
public void processEntityManagerBean(@Observes final ProcessBean<EntityManager> event) {
final AnnotatedType<EntityManager> annotatedType = (AnnotatedType<EntityManager>)event.getAnnotated();
// not even called
}
public void processEntityManager(@Observes final ProcessProducer<?, EntityManager> processProducer) {
processProducer.setProducer(decorate(processProducer.getProducer()));
}
private Producer<EntityManager> decorate(final Producer<EntityManager> producer) {
return new Producer<EntityManager>() {
@Override
public EntityManager produce(final CreationalContext<EntityManager> ctx) {
return decorateEntityManager(producer, ctx);
}
@Override
public Set<InjectionPoint> getInjectionPoints() {
return producer.getInjectionPoints();
}
@Override
public void dispose(final EntityManager instance) {
producer.dispose(instance);
}
};
}
private EntityManager decorateEntityManager(final Producer<EntityManager> producer, final CreationalContext<EntityManager> ctx) {
final EntityManager entityManager = producer.produce(ctx);
return (EntityManager) Proxy.newProxyInstance(entityManager.getClass().getClassLoader(), new Class[] {EntityManager.class}, new InvocationHandler() {
@Override
public Object invoke(final Object proxy, final Method method, final Object[] args) throws Throwable {
final String methodName = method.getName();
if (StringUtils.equals(methodName, "persist")) {
fireEventIfTrackable(beforeTrackablePersistedEvent, args[0]);
} else if (StringUtils.equals(methodName, "merge")) {
fireEventIfTrackable(beforeTrackableMergedEvent, args[0]);
}
return method.invoke(entityManager, args);
}
private void fireEventIfTrackable(final Event<Trackable> event, final Object entity) {
if (entity instanceof Trackable) {
event.fire(Reflections.<Trackable>cast(entity));
}
}
});
}
}
Of all those observer methods, only the second one (processEntityManagerType(@Observes ProcessAnnotatedType<EntityManager>)) is called! And even with that interceptor binding added to the persist and merge methods, my interceptor is never called (I have of course enabled it with the correct lines in beans.xml, and enabled my extension with the services/javax.enterprise.inject.spi.Extension file).
Something I thought would be simple with CDI turns out to be really hard after all... or perhaps Seam 3 does something that prevents this code from executing correctly...
Does anyone know how to handle this?
I think you're making this a little harder than it needs to be. First, though, JPA and CDI integration isn't very good in Java EE 6; we're very much hoping that changes in Java EE 7 and JPA 2.1.
What you'll want to do is create your own producer for the EntityManager that delegates to an actual EntityManager instance but also fires your own events when the methods you're interested in are called. Take a look at the Seam Persistence source to see one way this can be done.
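A rough sketch of that producer idea, reusing the Trackable interface and event qualifiers from the question; the class name, the scopes, and the choice to wrap the delegate with a java.lang.reflect.Proxy are assumptions for illustration, not part of the answer:
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.event.Event;
import javax.enterprise.inject.Disposes;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

@ApplicationScoped
public class TrackingEntityManagerProducer {

    @PersistenceUnit
    private EntityManagerFactory emf;

    @Inject @BeforeTrackablePersisted
    private Event<Trackable> beforePersist;

    @Inject @BeforeTrackableMerged
    private Event<Trackable> beforeMerge;

    @Produces @RequestScoped
    public EntityManager produceEntityManager() {
        final EntityManager delegate = emf.createEntityManager();
        return (EntityManager) Proxy.newProxyInstance(
                EntityManager.class.getClassLoader(),
                new Class<?>[] { EntityManager.class },
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        // Fire the matching qualified event before delegating persist/merge.
                        if ("persist".equals(method.getName()) && args[0] instanceof Trackable) {
                            beforePersist.fire((Trackable) args[0]);
                        } else if ("merge".equals(method.getName()) && args[0] instanceof Trackable) {
                            beforeMerge.fire((Trackable) args[0]);
                        }
                        return method.invoke(delegate, args);
                    }
                });
    }

    public void closeEntityManager(@Disposes EntityManager em) {
        em.close();
    }
}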
Since my little patch for Seam Persistence was finally applied in SEAMPERSIST-75, it should in theory be possible to do this by extending org.jboss.seam.persistence.HibernatePersistenceProvider and overriding the proxyEntityManager(EntityManager) method.
