I am writing a multithreaded application for the Liberty JVM. The application uses TCP/IP communication: it opens a socket, writes to it, and closes it. I am receiving a ClosedChannelException after an OutOfMemoryError.
Question 1:
Is the ClosedChannelException caused by the OutOfMemoryError?
Question 2:
How can I solve this issue?
I am using python 3.8 and I am trying to connect to an mqtt broker. This connection follows the path below:
Client (spawned with multiprocessing) -> thread (spawned by the client) -> thread tries to connect
I see the threads getting stuck in the socket create_connection function when the socket is created. Curiously enough, if I turn things around in this way:
Client (spawned with multithreading) -> process (spawned by the client) -> process tries to connect
it works. Is there any reason why, in the first case, threads can't create threads that connect to the server? I can't really debug this, as all the exceptions are swallowed by the process.
Thanks
It turned out that the process driving the threads and the threads themselves were all daemons. For some strange reason, even if you start the processes and the threads and put a sleep after starting the process, the threads won't connect to the server/broker even though they run. The solution is to not declare the threads as daemons and to join them; then they are able to connect to the server. The error wasn't clear at all, because I would have expected the threads not to run up to that point, or at least some clear indication of what was happening.
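The fix described above (non-daemon threads plus an explicit join) can be sketched with a minimal analogue. It is shown here in Java, where daemon threads behave similarly in that the runtime does not wait for them before exiting; the class and method names are illustrative only:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class DaemonJoinDemo {
    // Spawns a worker thread that simulates a slow connection attempt and,
    // if join is true, waits for it. Returns whatever the worker recorded.
    static List<String> runWorker(boolean daemon, boolean join) throws InterruptedException {
        List<String> results = new CopyOnWriteArrayList<>();
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(100); // pretend this is the broker connection setup
                results.add("connected");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(daemon);
        worker.start();
        if (join) {
            worker.join(); // guarantees the connection attempt completes
        }
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        // Non-daemon + join: the "connection" always completes.
        System.out.println(runWorker(false, true)); // prints [connected]
        // Daemon + no join: we move on immediately, before the worker has
        // connected, which mirrors the silently-failing daemon threads above.
        System.out.println(runWorker(true, false)); // prints []
    }
}
```

The key point is the join: without it, nothing guarantees the worker gets far enough to connect before the parent moves on or exits.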
How long do the threads take to stop and exit for ActiveMQConsumer? I get a segmentation fault on closing my application, which I figured out was due to the ActiveMQ threads: if I comment out the consumer, the issue is no longer present. Currently I am using cms::MessageConsumer in activemq-cpp-library-3.9.4.
I see that activemq::core::ActiveMQConsumer has an isClosed() function that I can use to confirm the consumer is closed, and then move forward with deleting the objects, thereby avoiding the segmentation fault. I assume this will solve my issue, but I wanted to know: what is the correct approach with these ActiveMQ objects to avoid issues with threads?
I was using the same session for the consumer and producer, but when the broker was stopped and started, the ActiveMQ reconnect was adding threads. I am not using failover.
So I have separated the sessions for sending and receiving, and have instantiated a connection factory, connection, and session for each separately. This design had no issues until the application's memory stopped being cleaned up because of the segmentation fault above.
That's why I wanted to know: when should I use cms::MessageConsumer vs. ActiveMQConsumer?
The ActiveMQ website has documentation with examples for the CMS client. I'd suggest reading those and following the example code for how it shuts down the connection and the library resources prior to application shutdown, to ensure that resources are cleaned up appropriately.
As with JMS, a CMS consumer instance is linked to the thread in the session that created it, so if you are shutting down, a good rule to follow is to close the session first, to ensure that message deliveries are stopped before you delete any consumer instances.
I am seeing strange behavior in my server logs: every time a Full GC happens, I see a SocketException thrown. Is this expected behavior?
JDK 1.7
JBoss 6.1
Here's a scenario where this is expected behaviour:
A socket is opened.
The application wraps the SocketOutputStream in a BufferedOutputStream.
The application writes some data to the BufferedOutputStream.
The application leaks the Socket > SocketOutputStream > BufferedOutputStream stack ... without closing it.
Time passes ...
The remote server / client times out the interaction and closes the TCP stream.
Time passes ...
The GC runs, finds the BufferedOutputStream, and attempts to finalize it.
The finalize() method attempts to flush the buffered data.
That triggers an exception, because the data cannot be flushed to a closed TCP/IP connection.
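The failure mode in the steps above comes from leaving the flush to the finalizer. A minimal sketch of the non-leaky alternative, using a loopback socket and try-with-resources so the buffer is flushed while the connection is still alive (class and method names are illustrative):

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class FlushBeforeCloseDemo {
    // Writes "hello" through a BufferedOutputStream over a loopback socket,
    // closing it explicitly so the buffer is flushed while the connection is
    // still open, instead of leaking it for the finalizer to flush later.
    static String roundTrip() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     BufferedOutputStream out =
                             new BufferedOutputStream(s.getOutputStream())) {
                    out.write("hello".getBytes(StandardCharsets.UTF_8));
                    // try-with-resources closes 'out' here, flushing the buffer
                    // deterministically, before the peer can time out.
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(
                         conn.getInputStream(), StandardCharsets.UTF_8))) {
                return in.readLine(); // "hello", then end of stream
            } finally {
                client.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints hello
    }
}
```

With the stream closed at a well-defined point, there is nothing left for finalize() to flush during a Full GC, and the SocketException disappears.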
The following problem has occurred for the second time in a few months. A session that tries to open and execute a query using the Java driver hangs the particular thread. As a result, this thread waits forever and causes a thread-locking problem. This was resolved with an app server restart, but one cannot manually intervene for these kinds of driver problems. Does anyone have an idea about this?
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:905)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1217)
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:292)
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:181)
com.datastax.driver.core.Session.execute(Session.java:111)
com.datastax.driver.core.Session.execute(Session.java:80)
There is an open ticket on this issue (https://datastax-oss.atlassian.net/browse/JAVA-268). Your best bet would be to add any information you have to that ticket.
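Until the ticket is resolved, one defensive option is to avoid blocking a request thread indefinitely on the driver. The stack trace shows the thread parked in an unbounded getUninterruptibly; some driver versions also expose timed variants of that call. The sketch below uses plain java.util.concurrent rather than the driver's own API, and slowQuery is a hypothetical stand-in for the hanging call:

```java
import java.util.concurrent.*;

public class BoundedQueryDemo {
    // Hypothetical stand-in for a driver call that hangs indefinitely.
    static String slowQuery() throws InterruptedException {
        Thread.sleep(5_000);
        return "rows";
    }

    // Runs the query on a worker thread and bounds the wait, so a hung
    // driver call cannot park the application thread forever.
    static String queryWithTimeout(long millis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(BoundedQueryDemo::slowQuery);
        try {
            return result.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true); // interrupt the stuck call and recover
            return "timed out";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(queryWithTimeout(200)); // prints timed out
    }
}
```

This does not fix the underlying driver bug, but it converts an indefinite hang into an error the application can log and handle without an app server restart.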
In the article about Node's domains, they say I should not ignore errors:
"The better approach is send an error response to the request that
triggered the error, while letting the others finish in their normal
time, and stop listening for new requests in that worker."
So my question is: on what types of errors should I close the process?
Should I close the process on any error?
What if the error is not part of the req/res cycle? Should I still close the process? Let's say I was doing some calculations on data from the DB, and then got an error when saving it back to the DB; should I close the process then?
Should I close the process only when I get an uncaught exception?
So, in general, I would be happy for some general guidelines about when to close a Node.js process.
Thanks.
This is something that is primarily about uncaught exceptions.
If your code throws an exception that isn't handled, some parts of your application may be left in an invalid state because the code couldn't finish what it was doing. This is why it's recommended to close/restart processes that do this.
If your process encounters an error which your code handles, then there's no reason to do a restart - you specifically added handling code for the error so that the application does not go into an invalid state and can gracefully handle the error scenario.
So, the answer to the specific question of when you should close the process is: when there's an uncaught exception.
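The same rule carries over to other runtimes: a last-resort uncaught-exception hook should record the failure and trigger an orderly shutdown rather than resume normal work, because the thread's state after an unhandled throw is unknown. A minimal Java analogue of that pattern (class and method names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class UncaughtHandlerDemo {
    // Installs a last-resort handler, runs a thread that throws, and
    // returns the message the handler captured.
    static String captureUncaught() throws InterruptedException {
        AtomicReference<Throwable> seen = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);
        Thread.setDefaultUncaughtExceptionHandler((t, e) -> {
            // In a real service: log, stop accepting new work, and exit so a
            // supervisor can restart the process in a known-good state.
            seen.set(e);
            done.countDown();
        });
        new Thread(() -> {
            throw new IllegalStateException("boom"); // unhandled
        }).start();
        done.await();
        return seen.get().getMessage();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(captureUncaught()); // prints boom
    }
}
```

As in the Node advice above, the handler's job is diagnostics and shutdown, not recovery: handled errors are dealt with where they occur, and only the unhandled ones end the process.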