I'm implementing a server socket application with multi-threading. In detail, when a client connects to the server, handling of the request is passed to a thread. In this thread I perform these steps:
entityManager = Persistence.createEntityManagerFactory("SensorPersistence").createEntityManager();
MeasurementsManager mm = new MeasurementsManager(entityManager);
Measurement meas = new Measurement();
// set values on meas
mm.add(meas);
entityManager.close();
Everything works fine, but after the thread ends, the connection to MySQL is still active. Only when I terminate the server application are the connections closed.
Any suggestions? Thanks.
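For what it's worth, a JPA EntityManagerFactory owns the underlying connection pool, so closing an EntityManager releases only that session, while the pooled MySQL connections stay open until the factory itself is closed. Here is a stand-in sketch of that lifecycle (illustrative classes, not the JPA API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for an EntityManagerFactory: it owns the pooled connections,
// which live until the factory itself is closed.
class SensorFactory implements AutoCloseable {
    static final AtomicInteger pooledConnections = new AtomicInteger();
    SensorFactory() { pooledConnections.incrementAndGet(); } // pool opened
    SensorSession createSession() { return new SensorSession(); }
    @Override public void close() { pooledConnections.decrementAndGet(); }
}

// Stand-in for an EntityManager: closing it releases the session only,
// not the factory's connection pool.
class SensorSession implements AutoCloseable {
    @Override public void close() { }
}

public class FactoryLifecycle {
    public static void main(String[] args) {
        SensorFactory factory = new SensorFactory(); // created ONCE per application
        try (SensorSession session = factory.createSession()) {
            // per-thread work goes here
        }
        System.out.println(SensorFactory.pooledConnections.get()); // still 1
        factory.close();                             // at application shutdown
        System.out.println(SensorFactory.pooledConnections.get()); // now 0
    }
}
```

The same reasoning suggests creating the real factory once and sharing it across threads, rather than calling createEntityManagerFactory per request.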
I am making a server application in Kotlin, and the server does the following things:
Bind a ServerSocket port, let's say 10001.
This port accepts TCP connections from clients (users). A thread is used per connection. This works as intended.
It also opens and binds a local port 10002, from localhost only.
This port allows an external application on the local host to connect and communicate as the manager thread.
It initiates a remote connection to another server over UDP, translating TCP data from port 10001 to UDP by restructuring the data packets, and vice versa.
This thread is created on demand by the thread handling the port 10001 connection in #1 above.
Now, we have 3 connections as shown below (Manager & User connections are two different Threads):
Manager(10002) --> | Kotlin | --> Remote Server
User(10001) <-----> | Server | <-- (UDP Port)
So, I want to send some commands from the Manager thread to a User thread by specifying a certain thread identifier, and that will trigger a code block in the User thread to send some JSON data to the user terminal.
And one of the commands from the Manager thread will start a remote server connection (UDP, assume a thread too) to communicate and translate data between the User thread and the remote server connection.
So in this case, how do I manage the communication between threads, especially between the Manager and User threads?
I have been able to create threads to accept user-side connections, and that works fine now.
val socketListener_User = ServerSocket(10001)
socketListener_User.use {
    while (true) {
        val socket_User = socketListener_User.accept()
        thread(start = true) {
            /** SOME CODE HERE **/
            /** THIS PART IS FOR USER THREAD **/
        }
    }
}
The user can send data at any time to the Server as well as the Manager, so the server must be on standby for both sides, and neither of them may block the other.
It should be similar to an instant-messenger server, although IMs usually store data in an external database and trigger the receiver to read it, don't they?
So I believe there must be some way to establish communication channels between threads to do the above tasks, which I have not figured out yet.
After digging through some docs, I ended up using nested MutableMaps to store the data that needs to be shared between threads.
In the main class file, before main(), I declared vals for the data storage maps.
val msgToPeers : MutableMap<String, MutableMap<Int, Pair<String, String>>> = HashMap()
// Format: < To_ClientID, < Sequence_Num, < From_ClientID, Message_Body >>>
Next, in the thread for each server socket connection, create two sub-threads:
Sender Thread
Receiver Thread
In the Sender thread, construct the data map and add it into msgToPeers using
msgToPeers.set() or msgToPeers["$receiverClientID"] = .......
Then in the Receiver thread, run a loop that scans the map, picks up only the data you need, and writes it to the socket writer.
Remember to use msgToPeers["$receiverClientID"]?.remove(sequenceID) to remove each processed message before going to the next loop iteration.
Oh, I also added a 50 ms pause per loop, as I needed to make sure the sender threads have enough time to queue the message before the scanner takes it and clears it.
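A scan-and-sleep loop over a shared map works, but java.util.concurrent (callable directly from Kotlin) has blocking queues that remove both the fixed pause and the hand-rolled cleanup. A minimal Java sketch of one "mailbox" queue per client thread (names here are illustrative, not from the code above):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class Mailboxes {
    // One blocking "mailbox" per client ID: manager threads put, user threads take.
    static final Map<String, BlockingQueue<String>> mailboxes = new ConcurrentHashMap<>();

    static BlockingQueue<String> mailboxFor(String clientId) {
        return mailboxes.computeIfAbsent(clientId, id -> new LinkedBlockingQueue<>());
    }

    public static void main(String[] args) throws InterruptedException {
        // Manager thread: enqueue a command for client "42".
        new Thread(() -> mailboxFor("42").add("SEND_JSON")).start();

        // User thread (here: main) blocks until a message arrives --
        // no polling loop, no sleep, and take() already removes the message.
        String cmd = mailboxFor("42").take();
        System.out.println(cmd); // SEND_JSON
    }
}
```

Because take() blocks until a message is available and removes it atomically, there is no window where the scanner can miss or double-process a message, which is what the 50 ms pause was compensating for.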
In order to improve the pool design of an application, I would like to be notified (ideally with an event) when the connection pool size of an application is reached. This way I can add a log; if this log occurs too often, I will increase the pool size.
With a mongo client initialized this way :
const client = new MongoClient(url, {
poolSize: 10,
});
Is there a way to be notified when the 10 connections are reached within my application?
Use connection pool events. These should be implemented by all recent MongoDB drivers.
Node documentation
Documentation/example in Ruby
For your question, you would track the pool size using the ConnectionCheckOut*/ConnectionCheckedIn events if your driver does not expose the pool size, or the pool itself, directly.
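If the driver only emits check-out/check-in events, the "pool fully used" signal can be derived with a simple counter. A stand-in Java sketch of that tracking logic (the two method names mirror the driver's events; the class itself is illustrative, not a driver API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Counts checked-out connections and fires a callback the moment the
// configured pool size is reached.
class PoolUsageTracker {
    private final int maxSize;
    private final Runnable onPoolExhausted;
    private final AtomicInteger checkedOut = new AtomicInteger();

    PoolUsageTracker(int maxSize, Runnable onPoolExhausted) {
        this.maxSize = maxSize;
        this.onPoolExhausted = onPoolExhausted;
    }

    // Call from the driver's ConnectionCheckedOut event.
    void connectionCheckedOut() {
        if (checkedOut.incrementAndGet() == maxSize) {
            onPoolExhausted.run(); // e.g. emit the log line you want to count
        }
    }

    // Call from the driver's ConnectionCheckedIn event.
    void connectionCheckedIn() {
        checkedOut.decrementAndGet();
    }
}

public class PoolUsageDemo {
    public static void main(String[] args) {
        PoolUsageTracker tracker =
                new PoolUsageTracker(10, () -> System.out.println("pool size of 10 reached"));
        for (int i = 0; i < 10; i++) {
            tracker.connectionCheckedOut(); // 10th call prints the message
        }
        tracker.connectionCheckedIn();
    }
}
```

Wiring those two methods to the driver's pool-event listener gives exactly the "notify me when all 10 connections are in use" behavior asked about.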
To the folks of the Stack Overflow community: I was looking for some help with an issue I am facing with HikariCP connection pooling.
High level:
I am trying to create several threads using a thread pool, and my plan is to give each worker thread its own separate connection from HikariCP, but HikariCP appears to be sharing a common connection across multiple threads. I am using
public synchronized Connection getConnection() throws SQLException
{
    synchronized (dataSource) {
        return dataSource.getConnection();
    }
}
to retrieve a DB connection. Now when I close a connection, I see errors in other threads saying that the connection got closed, and the batch of records that thread is processing gets dropped.
Here are the statements from my log file:
2016-06-08 11:52:11,000 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-6:) Closing DB connection ConnectionJavassistProxy(1551909727) wrapping oracle.jdbc.driver.T4CConnection#7b4f840f
2016-06-08 11:52:11,002 DEBUG [com.boxer.delete.SmartProcessWorker] (pool-1-thread-9:) Closing DB connection ConnectionJavassistProxy(1511839669) wrapping oracle.jdbc.driver.T4CConnection#343b8714
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) SQLException trying to Execute pstmt batch
2016-06-08 11:52:11,014 ERROR [com.boxer.delete.SmartProcessWorker] (pool-1-thread-5:) Connection is closed
java.sql.SQLException: Connection is closed
at com.zaxxer.hikari.proxy.ConnectionProxy.checkClosed(ConnectionProxy.java:275)
at com.zaxxer.hikari.proxy.ConnectionJavassistProxy.commit(ConnectionJavassistProxy.java)
at com.boxer.delete.SmartProcessWorker.processRecords(SmartProcessWorker.java:219)
at com.boxer.delete.SmartProcessWorker.run(SmartProcessWorker.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Now, can someone help me with how to get a DB connection from the Hikari DataSource that is not shared with any other thread?
Short answer: don't do that. JDBC connections are not meant to be shared between threads.
From the author of HikariCP (source):
Multithreaded access to Connections was deprecated in JDBC and is not supported by HikariCP either.

HikariCP is fast enough that you can obtain a Connection, execute SQL, and then return it back to the pool many times in the course of a request.

It is a Best Practice to only hold Connections in local variables, preferably in a try-with-resources block, or possibly passed on the stack, but never in a class member field. If you follow that pattern it is virtually impossible to leak a Connection or accidentally share across threads.
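The pattern the quote recommends, checkout, use, and return all within one method, can be sketched with a stand-in pool (plain objects in place of java.sql.Connection so the sketch stays self-contained; real code would call dataSource.getConnection() and connection.close() instead):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PerTaskCheckout {
    // Stand-in pool of 2 "connections"; HikariCP plays this role in real code.
    static final BlockingQueue<Object> pool = new ArrayBlockingQueue<>(2);
    static {
        pool.add(new Object());
        pool.add(new Object());
    }

    static void doWork() throws InterruptedException {
        Object conn = pool.take();  // each task checks out its OWN connection...
        try {
            // ...uses it only through this local variable (never a field)...
        } finally {
            pool.add(conn);         // ...and returns it as soon as it is done
        }
    }

    public static void main(String[] args) throws InterruptedException {
        doWork();
        System.out.println(pool.size()); // 2 -- everything was returned
    }
}
```

Since every task holds its connection only in a local variable for the duration of one unit of work, no close() in one thread can invalidate a statement another thread is using.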
I have a very long running query that takes too long to keep my client connected. I want to make a call into my DomainService, create a new worker thread, then return from the service so that my client can then begin polling to see if the long running query is complete.
The problem I am running into is that since my calling thread exits right away, exceptions are thrown when my worker tries to access any entities, because the ObjectContext gets disposed of when the original thread ends.
Here is how I create the new context and call from my Silverlight client:
MyDomainContext context = new MyDomainContext();
context.SearchAndStore(_myParm, SearchQuery,
    p => {
        if (p.HasError) {
            // Do some work and return to start
            // polling the server for completion...
        }
    }, null);
The entry method on the server:
[Invoke]
public int SearchAndStore(object parm)
{
    Thread t = new Thread(new ParameterizedThreadStart(WorkerProc));
    t.Start(parm);
    return 0;
    // Once this method returns, I get ObjectContext already Disposed exceptions
}
Here is the WorkerProc method that gets called with the new Thread. As soon as I try to iterate through my query1 object, I get the ObjectContext already Disposed exception.
private void WorkerProc(object o)
{
HashSet<long> excludeList = new HashSet<long>();
var query1 = from doc in this.ObjectContext.Documents
join filters in this.ObjectContext.AppliedGlobalFilters
.Where(f => f.FilterId == 1)
on doc.FileExtension equals filters.FilterValue
select doc.FileId;
foreach (long fileId in query1) // Here occurs the exception because the
{ // Object Context is already disposed of.
excludeList.Add(fileId);
}
}
How can I prevent this from happening? Is there a way to create a new context for the new thread? I'm really stuck on this one.
Thanks.
Since you're using WCF RIA, I have to assume that you're implementing two parts:
A WCF Web Service
A Silverlight client which consumes the WCF Service.
So, this means that you have two applications. The service running on IIS, and the Silverlight running on the web browser. These applications have different life cycles.
The Silverlight application starts living when it's loaded in the web page, and it dies when the page is closed (or an unhandled exception happens). On the other hand (at the server side), the WCF web service's life is quite short: your application starts living when the service is requested, and it dies once the request has finished.
In your case, the server request finishes when the SearchAndStore method finishes. Thus, when this particular method starts, you create a thread which begins running in the background (on the server), and your method continues its execution, which most likely finishes a couple of lines later.
If I'm right, you don't need to do this. You can call your method without using a thread; in theory it does not matter if it takes a while to respond, because the Silverlight application (on the client) won't be blocked waiting. In Silverlight all service operations are asynchronous (they run on their own thread). Therefore, when you call the service method from the client, you only have to wait until the callback is invoked.
If it's really taking a long time, you more likely need a mechanism to keep the connection between your Silverlight client and your web server alive for longer, I think by modifying the service configuration.
Here is a sample of what I'm saying:
https://github.com/hmadrigal/CodeSamples/tree/master/wcfria/SampleWebApplication01
In the sample you can see the different times on client and server side. You click the button and have to wait 30 seconds to receive a response from the server.
I hope this helps,
Best regards,
Herber
Architecturally, what is the best way to handle JDBC with multiple threads? I have many threads concurrently accessing the database. With a single connection and statement I get the following error message:
org.postgresql.util.PSQLException: This ResultSet is closed.
Should I use multiple connections, multiple statements, or is there a better method? My preliminary thought was to use one statement per thread, which would guarantee a single result set per statement.
You should use one connection per task. If you use connection pooling, you can't use prepared statements prepared by some other connection. All objects created by a connection (ResultSet, PreparedStatement) are invalid for use after the connection is returned to the pool.
So it looks like this:
public void getSomeData() throws SQLException {
    Connection conn = datasource.getConnection();
    PreparedStatement st = null;
    try {
        st = conn.prepareStatement(...);
        st.execute();
    } finally {
        close(st);   // null-safe close helpers
        close(conn);
    }
}
So in this case all your DAO objects take not a Connection but a DataSource object (javax.sql.DataSource), which is in fact a pooled connection factory. In each method you first get a connection, do all your work, and close the connection. You should return the connection to the pool as fast as possible. After the connection is returned, it may not be physically closed, but reinitialized (all active transactions closed, all session variables destroyed, etc.).
Yes, use multiple connections with a connection pool. Open the connection for just long enough to do what you need, then close it as soon as you're done. Let the connection pool take care of the "physical" connection management for efficiency.