To the folks of the Stack Overflow community: I was looking for some help with an issue I am facing with HikariCP connection pooling.
High level:
I am trying to create several threads using a thread pool, and my plan is to give each worker thread its own separate connection from the HikariCP pool. Instead, HikariCP appears to be sharing a common connection across multiple threads. I am using
public synchronized Connection getConnection() throws SQLException {
    synchronized (dataSource) {
        return dataSource.getConnection();
    }
}
to retrieve a DB connection. Now, when I close a connection, I see errors in other threads saying that the connection is closed, and the batch of records those threads are processing gets dropped.
Here are the statements from my log file:
2016-06-08 11:52:11,000 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-6:) Closing DB connection ConnectionJavassistProxy(1551909727) wrapping oracle.jdbc.driver.T4CConnection#7b4f840f
2016-06-08 11:52:11,002 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-9:) Closing DB connection ConnectionJavassistProxy(1511839669) wrapping oracle.jdbc.driver.T4CConnection#343b8714
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) SQLException trying to Execute pstmt batch
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) Connection is closed
java.sql.SQLException: Connection is closed
at com.zaxxer.hikari.proxy.ConnectionProxy.checkClosed(ConnectionProxy.java:275)
at com.zaxxer.hikari.proxy.ConnectionJavassistProxy.commit(ConnectionJavassistProxy.java)
at com.boxer.delete.SmartProcessWorker.processRecords(SmartProcessWorker.java:219)
at com.boxer.delete.SmartProcessWorker.run(SmartProcessWorker.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Now, can someone help me with how to get a DB connection from the Hikari DataSource that is not shared by any other thread?
Short answer: don't do that. JDBC connections are not meant to be shared between threads.
From the author of HikariCP (source):
Multithreaded access to Connections was deprecated in JDBC and is not supported by HikariCP either.

HikariCP is fast enough that you can obtain a Connection, execute SQL, and then return it back to the pool many times in the course of a request.

It is a Best Practice to only hold Connections in local variables, preferably in a try-with-resources block, or possibly passed on the stack, but never in a class member field. If you follow that pattern it is virtually impossible to leak a Connection or accidentally share across threads.
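A minimal sketch of that pattern (the worker class, pool field, table, and SQL below are placeholders, not taken from your code):

import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DeleteWorker implements Runnable {

    private final HikariDataSource dataSource; // only the pool is shared, never a Connection

    public DeleteWorker(HikariDataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void run() {
        // Each task borrows its own Connection and returns it to the pool
        // when the try-with-resources block exits.
        String sql = "DELETE FROM records WHERE id = ?"; // placeholder SQL
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, 42L); // placeholder value
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace(); // or proper logging
        }
    }
}

Because each task borrows and returns its own connection locally, closing it only hands it back to the pool and can never pull a connection out from under another thread.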
Related
I have written a client which tries to connect to Azure Service Bus. As soon as the server starts up I get the errors below, and I receive none of the messages present in the queue. I tried replacing the sb protocol with amqpwss, but it didn't help.
2020-05-25 21:23:11 [ReactorThreadeebf108d-444b-4acd-935f-c2c2c135451d] INFO c.m.a.s.p.RequestResponseLink - Internal send link 'RequestResponseLink-Sender_0480eb_c31e1cc239bf471e811e53a30adc6488_G51' of requestresponselink to '$cbs' encountered error.
com.microsoft.azure.servicebus.primitives.ServiceBusException: com.microsoft.azure.servicebus.amqp.AmqpException: The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:c31e1cc239bf471e811e53a30adc6488_G51, SystemTracker:gateway7, Timestamp:2020-05-25T21:23:10
at com.microsoft.azure.servicebus.primitives.ExceptionUtil.toException(ExceptionUtil.java:55)
at com.microsoft.azure.servicebus.primitives.RequestResponseLink$InternalSender.onClose(RequestResponseLink.java:759)
at com.microsoft.azure.servicebus.amqp.BaseLinkHandler.processOnClose(BaseLinkHandler.java:66)
at com.microsoft.azure.servicebus.amqp.BaseLinkHandler.onLinkRemoteClose(BaseLinkHandler.java:42)
at org.apache.qpid.proton.engine.BaseHandler.handle(BaseHandler.java:176)
at org.apache.qpid.proton.engine.impl.EventImpl.dispatch(EventImpl.java:108)
at org.apache.qpid.proton.reactor.impl.ReactorImpl.dispatch(ReactorImpl.java:324)
at org.apache.qpid.proton.reactor.impl.ReactorImpl.process(ReactorImpl.java:291)
at com.microsoft.azure.servicebus.primitives.MessagingFactory$RunReactor.run(MessagingFactory.java:491)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.azure.servicebus.amqp.AmqpException: The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:c31e1cc239bf471e811e53a30adc6488_G51, SystemTracker:gateway7, Timestamp:2020-05-25T21:23:10
... 10 common frames omitted
There is a similar issue open on GitHub:
what you posted here is the trace, not the error. Yes, the service closes idle connections after 10 minutes. The client traces it and reopens the connection. It is seamless; it doesn't throw any exceptions to the application. That can't be your problem. If your sends are failing, there may be another problem, but not this one.
As I see it, the second line is about the 60000 ms (60-second) idle timeout; can you check the troubleshooting page to see if it helps? Also this.
we recommend adding "sync-publish=true" to the connection url
The ClientWebSocketContainer Spring class can provide a WebSocket connection session to a remote endpoint. However, if an attempt is made to re-establish a closed connection (after a failed attempt) by using the ClientWebSocketContainer stop(), start(), and then getSession() methods, the connection is established, but the ClientWebSocketContainer thinks it isn't connected because of the openConnectionException set during the failed attempt.
@Override
public void onFailure(Throwable t) {
    logger.error("Failed to connect", t);
    ClientWebSocketContainer.this.openConnectionException = t;
    ClientWebSocketContainer.this.connectionLatch.countDown();
}
Should I be able to use the ClientWebSocketContainer in this fashion or should I create my own client connection manager?
I think it's just a bug, some kind of omission in the ClientWebSocketContainer logic.
I've just raised a JIRA issue on the matter; it will be fixed today.
Meanwhile, give us more information: what is your task?
The ClientWebSocketContainer is based on ConnectionManagerSupport, another implementation of which is WebSocketConnectionManager. So, consider using the latter for obtaining the session.
If you use the Spring Integration WebSocket adapters, you have no choice but to implement your own ClientWebSocketContainer variant; it may well be based on the existing one.
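For reference, a minimal sketch of obtaining the session through a WebSocketConnectionManager; the endpoint URI and the session-capturing handler are assumptions made for this example:

import java.util.concurrent.atomic.AtomicReference;

import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.client.WebSocketConnectionManager;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class WebSocketClientExample {

    public static void main(String[] args) {
        // Capture the session once the connection is established.
        AtomicReference<WebSocketSession> sessionRef = new AtomicReference<>();
        TextWebSocketHandler handler = new TextWebSocketHandler() {
            @Override
            public void afterConnectionEstablished(WebSocketSession session) {
                sessionRef.set(session);
            }
        };

        // "wss://example.org/ws" is a placeholder endpoint.
        WebSocketConnectionManager manager = new WebSocketConnectionManager(
                new StandardWebSocketClient(), handler, "wss://example.org/ws");
        manager.start(); // opens the connection
        // ... use sessionRef.get() once afterConnectionEstablished has run ...
        manager.stop();  // closes the connection again
    }
}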
I'm implementing a multi-threaded server socket application. In detail, when a client connects to the server, the handling of the request is passed to a thread. In this thread I do these steps:
// inside the worker thread
entityManager = Persistence.createEntityManagerFactory("SensorPersistence").createEntityManager();
MeasurementsManager mm = new MeasurementsManager(entityManager);
Measurement meas = new Measurement();
// ... set values on meas ...
mm.add(meas);
entityManager.close(); // close the entity manager
Everything works fine, but after the thread ends, the connection to MySQL is still active. If I terminate the server application, then the connections are closed.
Any suggestions? Thanks.
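For what it's worth, here is a minimal lifecycle sketch, under the assumption that the open connections belong to the JPA provider's pool, which is owned by the EntityManagerFactory rather than the EntityManager; the class and method names are made up for illustration:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class PersistenceLifecycleSketch {

    // One factory for the whole server; it owns the underlying connections/pool.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("SensorPersistence");

    // Called from each worker thread.
    public static void handleRequest() {
        EntityManager em = EMF.createEntityManager();
        try {
            // ... MeasurementsManager work goes here ...
        } finally {
            em.close(); // releases the EntityManager, not the factory
        }
    }

    // Called once on server shutdown.
    public static void shutdown() {
        EMF.close(); // closing the factory releases its database connections
    }
}

Closing only the EntityManager leaves the factory, and whatever connections it manages, open until the factory itself is closed.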
Using Groovy RestClient I am getting the following exception:
java.lang.IllegalStateException: Invalid use of BasicClientConnManager: connection still allocated.
Make sure to release the connection before allocating another one.
As I understand it, one connection has not been released, so I cannot make another one.
What are the possible solutions?
Make a new RestClient for every call?
Or maybe there is some pool?
Thanks!
By default the RESTClient uses the BasicClientConnManager, which only handles one connection at a time. In order to make concurrent connections, you need to use the AsyncHTTPBuilder:
def httpClient = new AsyncHTTPBuilder(
    poolSize: 20,
    uri: 'https://www.mysite.com'
)
Architecturally what is the best way to handle JDBC with multiple threads? I have many threads concurrently accessing the database. With a single connection and statement I get the following error message:
org.postgresql.util.PSQLException: This ResultSet is closed.
Should I use multiple connections, multiple statements, or is there a better method? My preliminary thought was to use one statement per thread, which would guarantee a single ResultSet per statement.
You should use one connection per task. If you use connection pooling, you can't use prepared statements prepared by some other connection. All objects created by a connection (ResultSet, PreparedStatement) are invalid for use after the connection is returned to the pool.
So it looks something like this:
public void getSomeData() throws SQLException {
    Connection conn = datasource.getConnection();
    PreparedStatement st = null;
    try {
        st = conn.prepareStatement(...); // "..." stands for your SQL
        st.execute();
    } finally {
        close(st);   // quiet-close helper (not shown)
        close(conn); // returns the connection to the pool
    }
}
So in this case all your DAO objects take not a Connection but a DataSource object (javax.sql.DataSource), which is in fact a poolable connection factory. In each method you first get a connection, do all your work, and close the connection. You should return the connection to the pool as fast as possible. After the connection is returned, it may not be physically closed but rather reinitialized (all active transactions closed, all session variables destroyed, etc.).
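As an illustration of that shape, a minimal sketch of a DAO that holds the DataSource rather than a Connection; the class name, query, and column are invented for the example:

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class UserDao {

    private final DataSource dataSource; // the pool, not a Connection, is the field

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public List<String> findNames() throws SQLException {
        List<String> names = new ArrayList<>();
        // Borrow a connection, use it, and return it to the pool immediately.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement st = conn.prepareStatement("SELECT name FROM users");
             ResultSet rs = st.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }
}

Each call borrows a connection, reads what it needs, and hands the connection back to the pool before the method returns.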
Yes, use multiple connections with a connection pool. Open the connection for just long enough to do what you need, then close it as soon as you're done. Let the connection pool take care of the "physical" connection management for efficiency.