ActiveMQ threads - multithreading

How long do the threads take to stop and exit for ActiveMQConsumer? I get a segmentation fault when closing my application, which I figured out was due to the ActiveMQ threads. If I comment out the consumer, the issue is no longer present. Currently I am using cms::MessageConsumer in activemq-cpp-library-3.9.4.
I see that activemq::core::ActiveMQConsumer has an isClosed() function that I can use to confirm the consumer is closed before deleting the objects, thereby avoiding the segmentation fault. I am assuming this will solve my issue, but I wanted to know: what is the correct approach with these ActiveMQ objects to avoid threading issues?
I was using the same session for the consumer and producer, but when the broker was stopped and started, the ActiveMQ reconnect kept adding threads. I am not using failover.
So I have separated the sessions for sending and receiving and have instantiated a connection factory, connection, and session for each. This design had no issues until the application's memory was not getting cleaned up because of the segmentation fault above.
That's why I wanted to know: when should I use cms::MessageConsumer vs. ActiveMQConsumer?

The ActiveMQ website has documentation with examples for the CMS client. I'd suggest reading those and following the example code to see how it shuts down the connection and the library resources prior to application shutdown, to ensure that resources are cleaned up appropriately.
As with JMS, the CMS consumer instance is linked to the thread in the session that created it, so when you are closing down, a good rule to follow is to close the session first to ensure that message deliveries are stopped before you delete any consumer instances.

Related

Azure Service Bus ReceiveMessages with Sub processes

I thought my question was related to the post "Azure Service Bus: How to Renew Lock?", but I have already tried RenewLockAsync.
Here is the concern: I am receiving messages from the Service Bus with sessions enabled, so I accept the session and then receive messages. All good; here's the rub.
There are two additional processes to complete per message: a manual transform/harvest of the message into some other object, which is then sent out to a Kafka topic (stream). Note it's all async on top of this craziness. My team lead is insistent that the two sub-processes can just be added into the receive process (ReceiveAsync), with session.CompleteAsync() finally called after the other two processes complete.
Well, needless to say, with that architecture I'm consistently getting the error "The session lock has expired on the MessageSession. Accept a new MessageSession." I haven't even fleshed out the send-to-Kafka part; it's just mocked, so it's going to take even longer once fleshed out.
Is it even remotely plausible to call session.CompleteAsync() after the sub-processes, or shouldn't that be done when the message is successfully received, before moving on to other processing? I thought separate tasks would be more appropriate, but again he didn't dig that idea.
I appreciate all insight and opinions, thank you!
"The session lock has expired on the MessageSession. Accept a new MessageSession." indicates one of 2 things:
The lock has been open for too long, in which case calling "RenewLockAsync" before it expires would help.
The message lock has been explicitly released, through a call to CompleteAsync, AbandonAsync, DeadLetterAsync, etc. That would indicate a bug, since the lock can not be used after it has been released
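The question uses the .NET client, but to illustrate the renew-before-expiry idea, here is a rough sketch with the Node.js @azure/service-bus package; the topic and subscription names are made up, and transformAndSendToKafka stands in for the two sub-processes:

const { ServiceBusClient } = require("@azure/service-bus");

async function processOneSession(connectionString) {
  const client = new ServiceBusClient(connectionString);
  // accept the next available session on the subscription
  const receiver = await client.acceptNextSession("my-topic", "my-subscription");

  // keep the session lock alive while the slow sub-processes run
  const keepAlive = setInterval(() => {
    receiver.renewSessionLock().catch((err) => console.error("renew failed", err));
  }, 30 * 1000);

  try {
    const messages = await receiver.receiveMessages(10);
    for (const message of messages) {
      await transformAndSendToKafka(message.body); // placeholder for transform + Kafka send
      await receiver.completeMessage(message);     // complete only after the work succeeded
    }
  } finally {
    clearInterval(keepAlive);
    await receiver.close();
    await client.close();
  }
}

Whether to complete before or after the sub-processes then becomes mostly a question of how much reprocessing you can tolerate if the downstream send fails.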

Clustered socket.io server hangs

I'm writing a socket.io based server in Node.js (6.9.0). I am using the built-in cluster module to enable multiple processes. For now, there are only two processes: a master and a worker. The master receives the connections and maintains an in-memory global data structure (which the worker can query via IPC). The worker process does the majority of the work by handling each incoming connection.
I am finding a hanging condition that I cannot attribute to any internal failure when the server is stressed with 300 concurrent users. Under lower concurrency, I don't see the hanging condition.
I'm enabling all forms of debugging (using the debug module: socket.io:socket and socket.io:client, as well as my own custom calls to debug).
The last activity I can see is in socket.io; however, the messages indicate that sockets are closing ("reason client namespace disconnect") due to their own "end of test" cycle. It just seems like incoming connections are not being serviced.
I'm using Artillery.io as the test client.
In the server application, I have handlers for uncaught exceptions and try-catch blocks around everything.
In a prior iteration, I also used cluster, but reversed the responsibilities so that the master process handled the connections (with the worker handling global data). That didn't exhibit the same failure. Not sure if something is wrong with the connection distribution. For that, I have also dumped internalMessage events to monitor the internal workings of cluster.
I am not using any other module for connection distribution or sticky sessions. As there is only a single process handling connections (at this time), it doesn't seem relevant.
I was able to remove the hanging condition by changing the cluster scheduling policy from round-robin (SCHED_RR) to none (SCHED_NONE), which leaves connection distribution to the operating system. I can't tell whether the hang is due to a bug in connection distribution (or something else inherent in the scheduling policy), but this one change seems to prevent it.
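For reference, the policy can be switched in code before any workers are forked (or with the NODE_CLUSTER_SCHED_POLICY environment variable); a minimal sketch:

const cluster = require('cluster');

// must be set before cluster.fork() is called; the default on
// non-Windows platforms is cluster.SCHED_RR (round-robin)
cluster.schedulingPolicy = cluster.SCHED_NONE; // leave distribution to the OS

if (cluster.isMaster) {
  cluster.fork();
} else {
  // worker: create the socket.io server here
}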

Distributing topics between worker instances with minimum overlap

I'm working on a Twitter project, using their streaming API, built on Heroku with Node.js.
I have a collection of topics that my app needs to process, which are pulled from MongoDB. I need to track each of these topics via the API; however, it needs to be done such that each topic is tracked only once. As each worker process expires after approximately one hour, when a worker receives SIGTERM it needs to untrack each topic it was assigned and release it back to the pool.
I've been using RabbitMQ to communicate between the app and worker processes, but with this part I'm a little stuck. Are there any good examples, or advice you can offer, on the correct way to do this?
Couldn't the worker just send a message via the message queue to the application when it receives a SIGTERM? According to the Heroku docs on shutdown, the process is allowed a few seconds (10) before it will be forcefully killed.
So you can do something like this:
// listen for SIGTERM sent by heroku
process.on('SIGTERM', function () {
  // notify the app that this worker is shutting down;
  // sendSomeMessageAboutShuttingDown is a placeholder for your own publish call
  messageQueue.sendSomeMessageAboutShuttingDown(function (err) {
    // only exit once the message has actually been handed to the broker,
    // so delivery isn't cut off by the process exiting too early
    process.exit(err ? 1 : 0);
  });
});
Alternatively you could break your work up into much smaller chunks and have workers only 'take' work that will run for a couple of minutes, or even seconds, at most. Your main application should be the bookkeeper: if a process doesn't complete its task within a specified time, assume it has gone missing and make the task available for another process to handle. You can probably also implement this behaviour using message acknowledgements in RabbitMQ, as sketched below.
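A rough illustration of that idea with the amqplib package and manual acknowledgements (trackTopic is a placeholder for the short-lived piece of work): a message stays unacknowledged until the work finishes, so if the worker dies first, RabbitMQ requeues it for another worker.

const amqp = require('amqplib');

async function startWorker() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('topics', { durable: true });
  channel.prefetch(1); // only take one small piece of work at a time

  channel.consume('topics', async (msg) => {
    try {
      await trackTopic(msg.content.toString()); // placeholder for the actual task
      channel.ack(msg);                         // done: remove it from the queue
    } catch (err) {
      channel.nack(msg, false, true);           // failed: requeue for another worker
    }
  }, { noAck: false });
}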
RabbitMQ won't do this for you.
It will allow you to distribute the work to another process and/or computer, but it won't provide the kind of mechanism you need to prevent more than one process / computer from working on a particular topic.
What you want is a semaphore - a way to control access to a particular "resource" from multiple processes... a way to ensure only one process is working on a particular resource at a given time. In your case the "resource" will be the topic... but it will still be the resource that you want to control access to.
FWIW, there has been discussion of using RabbitMQ to implement a distributed semaphore in the past:
https://www.rabbitmq.com/blog/2014/02/19/distributed-semaphores-with-rabbitmq/
https://aphyr.com/posts/315-call-me-maybe-rabbitmq
but the general consensus is that this is a bad idea. There are too many edge cases and scenarios in which RabbitMQ will fail to work as a proper semaphore.
There are some Node.js semaphore libraries available. I would recommend looking at them and using one of them. Have a single process manage the semaphore and decide which other processes can and cannot work on which topic.
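As a very small sketch of the "single process manages the semaphore" idea (no particular library; acquireTopic and releaseTopic are hypothetical names, and the transport between processes is up to you, e.g. RabbitMQ or IPC):

// runs inside the single coordinating process
const assignments = new Map(); // topic -> workerId

function acquireTopic(topic, workerId) {
  if (assignments.has(topic)) {
    return false;               // another worker already tracks this topic
  }
  assignments.set(topic, workerId);
  return true;
}

function releaseTopic(topic, workerId) {
  if (assignments.get(topic) === workerId) {
    assignments.delete(topic);  // back in the pool for the next worker
  }
}

Adding a lease timeout (re-releasing a topic whose worker hasn't checked in for a while) guards against workers that are killed before they can release.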

zmq_connect() a socket while waiting for a zmq_send() or zmq_recv()

I'm working on an application where I want to use ZeroMQ to connect nodes of different types which may be added and removed while the system is running. This means that I want to call zmq_connect() or zmq_disconnect() at any time as nodes come and go.
Some connections use sockets of type ZMQ_REQ, which block when no peers are available. Thus, it may happen that one node is blocked in zmq_recv() without any node available to process the request. If a new node then becomes available, I would like to connect the socket using zmq_connect(). The only way I can see to do that is to call zmq_connect() from a different thread, but the documentation states pretty clearly that zmq_socket instances cannot be used from multiple threads simultaneously.
How can I solve this problem: sending messages on a ZMQ_REQ socket with no connections (or with connections that cannot be established), and then later adding connections and having the waiting requests processed?
You should not call zmq_recv() when no messages are ready; that way you avoid blocking your thread. Instead, check that there is indeed a message to receive. The easiest way to achieve this is using a poller. Since you haven't stated which library or language you're using, I can't give you the exact example, but the C examples from the ZeroMQ Guide could be of use.
Building ZeroMQ-based applications is, in my experience, most effective when you build single-threaded nodes that react to messages and, if necessary, run methods at time intervals.
For building a system like the one you describe, I suggest you look at the Service Discovery chapter of the awesome ZeroMQ Guide.

Properly handle Azure MessagingCommunicationException?

I've got several long-running processes that listen on the same Azure Service Bus topic. After running for an extended time (usually a few days), I get one of these exceptions in one of the processes (and they all seem to stop working). The message itself and the documentation suggest that the answer is to retry the connection. At first I was just trying to create a new TopicClient, but then found out the actual connection is held by the MessagingFactory. I have now tried creating a whole new MessagingFactory as well, but that doesn't seem to be working either.
What is the proper way to handle this exception? An example (even pseudocode) would be great.
