Multithreaded JDBC

Architecturally what is the best way to handle JDBC with multiple threads? I have many threads concurrently accessing the database. With a single connection and statement I get the following error message:
org.postgresql.util.PSQLException: This ResultSet is closed.
Should I use multiple connections, multiple statements or is there a better method? My preliminary thought was to use one statement per thread which would guarantee a single result set per statement.

You should use one connection per task. If you use connection pooling, you can't use prepared statements prepared by another connection: all objects created by a connection (ResultSet, PreparedStatement) are invalid for use after the connection is returned to the pool.
So the pattern looks like this:
public void getSomeData() throws SQLException {
    try (Connection conn = datasource.getConnection();
         PreparedStatement st = conn.prepareStatement(...)) {
        st.execute();
    }
}
So in this case all your DAO objects take not a Connection but a DataSource object (javax.sql.DataSource), which is effectively a poolable connection factory. In each method you first get a connection, do all your work, and close the connection. You should return the connection to the pool as fast as possible. After the connection is returned, it may not be physically closed but instead reinitialized (all active transactions closed, all session variables destroyed, etc.).

Yes, use multiple connections with a connection pool. Open the connection for just long enough to do what you need, then close it as soon as you're done. Let the connection pool take care of the "physical" connection management for efficiency.
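The "open just long enough, then return to the pool" pattern can be illustrated without a live database. The sketch below is a toy stand-in (the `ToyPool` and `Lease` names are hypothetical, not a real pooling API): leases implement AutoCloseable and return themselves to the pool on close(), so try-with-resources gives the open-late/close-early discipline automatically.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LeaseDemo {
    // Toy stand-in for a pooled DataSource; not a real connection pool.
    static class ToyPool {
        private final BlockingQueue<String> connections = new ArrayBlockingQueue<>(2);

        ToyPool() {
            connections.add("conn-1");
            connections.add("conn-2");
        }

        Lease lease() throws InterruptedException {
            return new Lease(connections.take());
        }

        int available() {
            return connections.size();
        }

        class Lease implements AutoCloseable {
            final String conn;
            Lease(String conn) { this.conn = conn; }
            @Override
            public void close() { connections.add(conn); } // return to the pool
        }
    }

    public static void main(String[] args) throws Exception {
        ToyPool pool = new ToyPool();
        try (ToyPool.Lease lease = pool.lease()) {
            // do the work while holding the lease
            System.out.println("working with " + lease.conn);
            System.out.println("available during work: " + pool.available()); // 1
        }
        // lease.close() ran automatically; the connection is back in the pool
        System.out.println("available after work: " + pool.available()); // 2
    }
}
```

With a real DataSource the shape is the same: Connection and PreparedStatement are both AutoCloseable, so a single try-with-resources header covers acquisition and release.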

Related

Scala: Apache HttpClient in a multi-threaded environment

I am writing a singleton class (an `object` in Scala) that uses Apache HttpClient (4.5.2) to post some file content and return the status to the caller.
object HttpUtils {
  protected val retryHandler = new HttpRequestRetryHandler() {
    def retryRequest(exception: IOException, executionCount: Int, context: HttpContext): Boolean = {
      // retry logic
      true
    }
  }

  private val connectionManager = new PoolingHttpClientConnectionManager()

  // Reusing the same client for each request, which may come from different threads.
  // Is this correct?
  val httpClient = HttpClients.custom()
    .setConnectionManager(connectionManager)
    .setRetryHandler(retryHandler)
    .build()

  def restApiCall(url: String, rDD: RDD[SomeMessage]): Boolean = {
    // Creating a new context for each request
    val httpContext: HttpClientContext = HttpClientContext.create
    val post = new HttpPost(url)
    // convert RDD to a text file using rDD.collect
    // add this file as a MultipartEntity to post
    var response = None: Option[CloseableHttpResponse] // Is this the correct way to use it?
    try {
      response = Some(httpClient.execute(post, httpContext))
      val responseCode = response.get.getStatusLine.getStatusCode
      EntityUtils.consume(response.get.getEntity) // Is this required?
      responseCode == 200
    } finally {
      if (response.isDefined) response.get.close()
      post.releaseConnection() // Is this required?
    }
  }

  def onShutDown(): Unit = {
    connectionManager.close()
    httpClient.close()
  }
}
Multiple threads (more specifically, from a Spark streaming context) are calling the restApiCall method. I am relatively new to Scala and Apache HttpClient. I have to make frequent connections to only a few fixed servers (i.e. 5-6 fixed URLs with different request parameters).
I went through multiple online resources but am still not confident about it.
Is this the best way to use HttpClient in a multi-threaded environment?
Is it possible to keep connections alive and reuse them for various requests? Would that be beneficial in this case?
Am I using and releasing all resources efficiently? If not, please suggest improvements.
Is it fine to use this from Scala, or does a better library exist?
Thanks in advance.
It seems the official docs have answers to all your questions:
2.3.3. Pooling connection manager
PoolingHttpClientConnectionManager is a more complex implementation
that manages a pool of client connections and is able to service
connection requests from multiple execution threads. Connections are
pooled on a per route basis. A request for a route for which the
manager already has a persistent connection available in the pool will
be serviced by leasing a connection from the pool rather than creating
a brand new connection.
PoolingHttpClientConnectionManager maintains a maximum limit of
connections on a per route basis and in total. Per default this
implementation will create no more than 2 concurrent connections per
given route and no more than 20 connections in total. For many real-world
applications these limits may prove too constraining, especially if
they use HTTP as a transport protocol for their services.
2.4. Multithreaded request execution
When equipped with a pooling connection manager such as
PoolingClientConnectionManager, HttpClient can be used to execute
multiple requests simultaneously using multiple threads of execution.
The PoolingClientConnectionManager will allocate connections based on
its configuration. If all connections for a given route have already
been leased, a request for a connection will block until a connection
is released back to the pool. One can ensure the connection manager
does not block indefinitely in the connection request operation by
setting 'http.conn-manager.timeout' to a positive value. If the
connection request cannot be serviced within the given time period
ConnectionPoolTimeoutException will be thrown.
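The blocking-with-timeout behavior the docs describe can be sketched in a self-contained way. The snippet below is not HttpClient code; it just models leased connections as Semaphore permits to show what happens when the per-route limit (default 2) is exhausted and a request waits up to a timeout before giving up, which is the point at which HttpClient would throw ConnectionPoolTimeoutException.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        // Model the default limit of 2 concurrent connections per route.
        Semaphore perRoute = new Semaphore(2);

        boolean first = perRoute.tryAcquire(100, TimeUnit.MILLISECONDS);
        boolean second = perRoute.tryAcquire(100, TimeUnit.MILLISECONDS);
        // Pool exhausted: this waits up to the timeout, then fails.
        boolean third = perRoute.tryAcquire(100, TimeUnit.MILLISECONDS);

        System.out.println(first + " " + second + " " + third); // true true false

        // Consuming a response releases its connection back to the pool.
        perRoute.release();
        System.out.println(perRoute.tryAcquire(100, TimeUnit.MILLISECONDS)); // true
    }
}
```

This is also why consuming the response entity (as the question's `EntityUtils.consume` call does) matters: an unconsumed response keeps its connection leased.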

Node.js: How to handle MongoDB Write Error?

How do I handle a MongoDB write error (due to the Mongo connection dropping)? Because I have to update multiple documents, it basically needs to be transactional: an "all or nothing" approach. I thought I could catch the error and revert the inserted data if one of the updates failed. But if the MongoDB connection drops, the error goes directly to the application's "uncaughtException" handler. So how can I handle this scenario? All I need is "all or nothing" on a multi-document update.
Transactions don't exist in MongoDB. There are some workarounds, such as the one @Alex-Blex posted in the comments, where you do two-stage commits to progressively perform your update. (It's a bit of a misnomer; their example has seven db ops.)
if the MongoDB connection dropped, it's directly caught by the "uncaughtException" of application
You can listen for this with the connection disconnected and error events:
mongoose.connection.on("disconnected", function () {
  // Lost connection to the database
});
mongoose.connection.on("reconnected", ...);
Usually those listeners are application-wide, although I suppose you could set one up for the lifetime of your "transaction". You'd have to wait for reconnected, then retry or resume your database op. In any case, you'll still be relying on your application not crashing for any reason for the entire duration of your "transaction".
If you need transactions to be reliable, you probably need to look at another database.
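The catch-and-revert idea from the question can be made a little more systematic with an undo log: record how to reverse each write before performing it, and replay the log backwards on failure. The sketch below (in Java, using an in-memory map as a stand-in for documents) is only a best-effort illustration of the compensation idea, not a real transaction: the rollback itself can still fail, e.g. if the connection drops mid-undo.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class CompensationDemo {
    public static void main(String[] args) {
        // In-memory stand-in for two documents.
        Map<String, Integer> docs = new HashMap<>();
        docs.put("a", 1);
        docs.put("b", 2);

        // Undo log: each entry knows how to reverse one completed write.
        Deque<Runnable> undoLog = new ArrayDeque<>();
        try {
            // Update "a", remembering its previous value first.
            Integer oldA = docs.put("a", 10);
            undoLog.push(() -> docs.put("a", oldA));

            // Simulate the second update failing (e.g. connection dropped).
            throw new RuntimeException("write to b failed");
        } catch (RuntimeException e) {
            // Roll back completed writes in reverse order.
            while (!undoLog.isEmpty()) {
                undoLog.pop().run();
            }
        }
        System.out.println("a=" + docs.get("a") + " b=" + docs.get("b")); // a=1 b=2
    }
}
```

Against a real database each undo entry would itself be a write that can fail, which is exactly why the answer above recommends a database with real transactions when reliability matters.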

HikariCP multithreading separate connection for each thread

To the folks of the Stack Overflow community: I am looking for some help with an issue I am facing with HikariCP connection pooling.
High level:
I am trying to create several threads using a thread pool, and my plan is to give each worker thread its own separate connection from the HikariCP pool, but HikariCP appears to be sharing a common connection across multiple threads. I am using
public synchronized Connection getConnection() throws SQLException {
    synchronized (dataSource) {
        return dataSource.getConnection();
    }
}
to retrieve a DB connection. Now when I close a connection, I see errors in other threads saying that the connection is closed, and the batch of records that thread is processing gets dropped.
Here are the statements from my log file:
2016-06-08 11:52:11,000 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-6:) Closing DB connection ConnectionJavassistProxy(1551909727) wrapping oracle.jdbc.driver.T4CConnection@7b4f840f
2016-06-08 11:52:11,002 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-9:) Closing DB connection ConnectionJavassistProxy(1511839669) wrapping oracle.jdbc.driver.T4CConnection@343b8714
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) SQLException trying to Execute pstmt batch
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) Connection is closed
java.sql.SQLException: Connection is closed
at com.zaxxer.hikari.proxy.ConnectionProxy.checkClosed(ConnectionProxy.java:275)
at com.zaxxer.hikari.proxy.ConnectionJavassistProxy.commit(ConnectionJavassistProxy.java)
at com.boxer.delete.SmartProcessWorker.processRecords(SmartProcessWorker.java:219)
at com.boxer.delete.SmartProcessWorker.run(SmartProcessWorker.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Now, can someone help me with how to get a DB connection from the Hikari datasource that is not being shared by any other thread?
Short answer: don't do that. JDBC connections are not meant to be shared between threads.
From the author of HikariCP (source):
Multithreaded access to Connections was deprecated in JDBC and is not
supported by HikariCP either.
HikariCP is fast enough that you can obtain a Connection, execute SQL,
and then return it back to the pool many times in the course of a
request.
It is a Best Practice to only hold Connections in local variables,
preferably in a try-with-resources block, or possibly passed on the
stack, but never in a class member field. If you follow that pattern
it is virtually impossible to leak a Connection or accidentally share
across threads.
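The quoted best practice can be illustrated with a self-contained toy (the `pool` of strings below is a hypothetical stand-in for a HikariCP DataSource, not real HikariCP code): every task takes its own "connection" into a local variable, uses it, and returns it, so no connection is ever stored in a field or shared across threads.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PerTaskConnectionDemo {
    public static void main(String[] args) throws Exception {
        // Toy "pool" of two connections; with HikariCP this would be the DataSource.
        BlockingQueue<String> pool = new ArrayBlockingQueue<>(2);
        pool.add("conn-1");
        pool.add("conn-2");

        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            workers.submit(() -> {
                try {
                    String conn = pool.take(); // local variable, never a field
                    // ... execute SQL with conn; no other thread can see it ...
                    pool.add(conn);            // return to the pool when done
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("connections back in pool: " + pool.size()); // 2
    }
}
```

Note what is absent: there is no shared getConnection() wrapper handing one connection to everybody, and no synchronized block is needed, because each task's connection lives only on that task's stack.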

ClientWebSocketContainer - can it be used on the client side to create a websocket connection?

The ClientWebSocketContainer Spring class can provide a websocket connection session to a remote endpoint. However, if an attempt is made to re-establish a closed connection (after a failed attempt) by using the ClientWebSocketContainer stop(), start(), and then getSession() methods, the connection is established, but ClientWebSocketContainer thinks it isn't connected because of the openConnectionException set during the failed attempt.
@Override
public void onFailure(Throwable t) {
    logger.error("Failed to connect", t);
    ClientWebSocketContainer.this.openConnectionException = t;
    ClientWebSocketContainer.this.connectionLatch.countDown();
}
Should I be able to use the ClientWebSocketContainer in this fashion or should I create my own client connection manager?
I think it's just a bug, some kind of omission in the ClientWebSocketContainer logic.
I've just raised a JIRA on the matter; it will be fixed today.
Meanwhile, can you give us more information about your task?
The ClientWebSocketContainer is based on ConnectionManagerSupport, another implementation of which is WebSocketConnectionManager. So, consider using the latter for obtaining the session.
If you use Spring Integration WebSocket adapters, you don't have a choice unless you implement your own ClientWebSocketContainer variant. Yes, it may well be based on the existing one.

JPA - Threads: Connection remains open after close

I'm implementing a server socket application with multi-thread features. In detail, when a client connects to the server, handling of the request is passed to a thread. In this thread I do these steps:
entityManager = Persistence.createEntityManagerFactory("SensorPersistence").createEntityManager();
MeasurementsManager mm = new MeasurementsManager(entityManager);
Measurement meas = new Measurement();
// set values
mm.add(meas);
entityManager.close();
All works well, but after the thread ends, the connection to MySQL is still active. If I terminate the server application, then the connections are closed.
Any suggestion? Thanks.
