Unexpected zero-byte message received on ZMQ-SUB socket - multicast

I am trying to exchange messages between 4 processes using multicast (epgm over the loopback interface).
All 4 processes belong to the same group. Two processes send data and the other two receive it. In both subscribing processes I am receiving one zero-byte message in between the actual publisher messages. As far as I know, no process is sending an empty message on that port.
I am not able to figure out why this zero-byte message is received. Any help is much appreciated.
Note: if I have only one publisher process, then I don't receive any zero-byte messages in the subscriber processes.
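For reference, a minimal pyzmq sketch of one subscriber in this setup (the multicast group, port, and interface address are hypothetical placeholders, and libzmq must be built with PGM support); a defensive guard in the receive loop simply skips the stray empty frames:

```python
import zmq

# epgm endpoint format: epgm://interface;multicast-group:port
# (the group and port below are hypothetical placeholders)
ENDPOINT = "epgm://127.0.0.1;239.192.0.1:5555"

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect(ENDPOINT)
sub.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all messages

while True:
    msg = sub.recv()
    if not msg:
        # Workaround: drop the unexplained zero-byte frames that
        # appear when more than one publisher is active.
        continue
    print("received %d bytes" % len(msg))
```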

Related

What are outgoing messages in Service Bus?

I need help interpreting this graph:
It looks like messages are coming in, then being processed, and going back to 0. The graph is not continually rising.
However, the outgoing messages metric shows "--". Does this mean 0?
Does an outgoing message represent a message being read by a service?
If the messages are not being read, then what is happening to them? The dead letter queue has 0 messages.
Yes, an outgoing message represents a message being read by a service.
In your case, I see you've disabled the dead letter queue (in your screenshot, the Dead lettering option is disabled), so there are no messages in the DLQ. With the DLQ disabled, messages are simply deleted once they expire.
The docs describe them like this:
The number of events or messages received from Service Bus over a specified period.
So, the incoming messages are messages that are sent to the service bus. The outgoing messages are messages that are picked up by message processors (your application). So the answer to your question
Does an outgoing message represent a message being read by a service?
is: Yes!

I want to re-queue into RabbitMQ when there is error with added values to queue payload

I have a peculiar problem to solve.
I have configured RabbitMQ as the message broker and it is working, but when processing fails in the consumer I currently acknowledge with a nack, which blindly re-queues the message with whatever payload it already had. I want to add some more fields to the payload and re-queue it in a simple way.
For Example:
When the consumer gets payload data from RabbitMQ, it processes it and performs some work based on it across multiple host machines; if one machine is not reachable, I need to process that part alone after some time.
Hence I'm planning to re-queue the failed data back to the queue with one more field (the machine name), so it will be processed again by the existing logic.
How do I achieve this? Can someone help me?
When a message is requeued, it will be placed back at its original position in its queue, if possible. If not (due to concurrent deliveries and acknowledgements from other consumers when multiple consumers share a queue), it will be requeued at a position closer to the queue head. Either way, you will end up in an infinite loop (consuming and requeuing the same message). To avoid this, you can positively acknowledge the message and publish it to the queue with the updated fields. Publishing the message puts it at the end of the queue, so you will be able to process it again after some time.
Reference: https://www.rabbitmq.com/nack.html
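A minimal pika sketch of that ack-and-republish pattern (the queue name, the added payload field, and the failure exception are hypothetical placeholders):

```python
import json
import pika

class HostUnreachable(Exception):
    """Hypothetical error raised when a target machine is down."""
    def __init__(self, host):
        self.host = host

def do_work(payload):
    pass  # placeholder for the existing processing logic

def on_message(ch, method, properties, body):
    payload = json.loads(body)
    try:
        do_work(payload)
    except HostUnreachable as err:
        # Instead of nack/requeue, ack the original and publish an
        # augmented copy; the copy lands at the tail of the queue.
        payload["machine_name"] = err.host
        ch.basic_publish(exchange="",
                         routing_key="tasks",
                         body=json.dumps(payload),
                         properties=pika.BasicProperties(delivery_mode=2))
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_consume(queue="tasks", on_message_callback=on_message)
channel.start_consuming()
```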

RabbitMQ - Single concurrent worker per routing key

Quite new to RabbitMQ and I'm trying to see if I can achieve what I need with it.
I am looking for the Worker Queues pattern but with one caveat. I want to have only a single worker running concurrently per routing key.
An example for clarification:
If I send the following messages with routing keys, in order: a, a, b, c, I want to have only 3 workers running concurrently. When the first a message is received, a worker picks it up and handles it.
When the next a message is received and the previous a message is still being handled (not acknowledged), the new a message should wait in the queue. When the b and c messages are received, they each get a worker handling them. When the first a message is acknowledged, any worker can pick up the next a message.
Would that pattern be possible using RabbitMQ in a natural way (without writing any application code on my side to handle the locking and such)?
Edit:
Another clarification. All workers can and should handle all messages, and I don't want to have a queue per Worker as I want to share the load between them, and the Publisher doesn't know which Worker should process the message. But I do want to make sure that no 2 Workers are working on messages sharing the same key at the same time.
For example, if I have a Publisher publishing messages with a userId field, I want to make sure no 2 Workers are handling messages with the same userId at the same time.
Edit 2
Expanding on the userId example. Let's say I have a single Publisher and 3 Workers. The publisher publishes messages like { userId: 1, text: 'Hello' }, with varying userIds. My 3 Workers all do the same thing with these messages, so I can have any of them handle the messages coming in. But what I'm trying to achieve is to have only a single worker processing messages from a certain user at any one time. If a Worker has received a message with userId 1 and is still processing it, and another message with userId 1 arrives, I want to make sure no other Worker picks up that message. But messages coming in with different userIds should be processed by other available Workers.
userIds are not known beforehand, and the publisher doesn't know how many workers there are or anything specific about them; it just wants to schedule the messages for processing.
What you're asking is not possible with routing keys, but it is built into queues, with a few settings.
If you define "queue_a" for a messages, "queue_b" for b messages, etc., you can then have as many consumers connect to each queue as you want.
RabbitMQ will only deliver a given message to a single consumer of a given queue.
The way it works with multiple consumers on a single queue is basic round-robin dispatch of the messages. That is, the first message will be delivered to one of the consumers, and the next message (assuming the first consumer is still busy) will be delivered to the next consumer.
So, that should satisfy the need to deliver the message to any given consumer of the queue.
To ensure your messages have an equal chance of getting to any of the consumers (and are not all delivered to the same consumer all the time), there are a few other settings you should put in place.
First, make sure to set the consumer's "no ack" setting to false (sometimes called "auto ack"). This forces you to ack the message from your code.
Lastly, set the "consumer prefetch" limit of the consumer to 1.
With this combination of settings, a single consumer will retrieve a single message and begin working on it. While that consumer is working, any message waiting in the queue will be delivered to other consumers if any are available. If there are none available, the message will wait in the queue until a consumer is available.
With this, you should be able to achieve the behavior you are wanting, on a given queue.
...
Keep in mind this only applies to queues, though. Routing keys cannot be managed this way. Every matched routing key on an exchange causes a copy of the message to be sent to the destination queue.
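A minimal pika sketch of that combination, with manual acks and a prefetch of 1 (the queue name and handler are hypothetical placeholders):

```python
import pika

def handle(body):
    pass  # placeholder for the per-message work

def on_message(ch, method, properties, body):
    handle(body)
    # With prefetch 1, the broker will not deliver another message to
    # this consumer until this one has been acknowledged.
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="queue_a", durable=True)
channel.basic_qos(prefetch_count=1)   # at most one unacked message
channel.basic_consume(queue="queue_a",
                      on_message_callback=on_message,
                      auto_ack=False)  # "no ack" off: ack from code
channel.start_consuming()
```

Run one copy of this consumer on each worker, for each queue (queue_a, queue_b, ...), to share the load across them.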

How to implement non-blocking PUSH messages in Rabbit.js?

My application currently uses RabbitMQ to queue and process messages to initiate data streams and to pass the streamed data to a processing area.
Because we only want one client to consume the data stream and only one client to process the streamed data, I am currently using PUSH messages.
The issue I am finding is that if I acknowledge the PUSH message to initiate the data stream and that process fails, the message will not be requeued. If I do not acknowledge the message, none of my other PUSH messages will be received until after I either acknowledge the data stream message or the process dies.
I have looked at REQUEST/REPLY messages, however I think the same issue may apply here, where I need to requeue automatically should the process/server die.
Is it possible to use non-blocking PUSH messages?
Perhaps of value is the "qos" / prefetch setting for consumers: https://www.rabbitmq.com/consumer-prefetch.html. It's possible to set the value to greater than one, allowing a single consumer to get more than one message at a time. Would this give you the non-blocking PUSH (read-only) behaviour you're looking for?
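For illustration, in raw AMQP terms (sketched with Python's pika here, since rabbit.js layers its socket types over AMQP), a prefetch above one lets further messages be delivered while an earlier one is still unacknowledged, and a nack requeues a failed message; the queue name and handler are hypothetical:

```python
import pika

def start_stream(body):
    return True  # placeholder for the real stream-initiation logic

def on_message(ch, method, properties, body):
    if start_stream(body):
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # The process failed: requeue so the message is redelivered.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=10)  # up to 10 unacked deliveries in flight
channel.basic_consume(queue="streams", on_message_callback=on_message)
channel.start_consuming()
```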

MultiThread or Multi Lists?

I have seen a topic recommending no more than 200 threads on a server machine.
I am trying to implement a Listener class which listens to 1000 devices; that is, 1000 devices sending different types of messages to the application.
I tried 2 different ways:
1. Create a thread for each device at runtime, plus a dynamic list which holds the messages for that device, and start the thread to process those messages from the list.
But my machine would not create more than 50 threads :), and I agree it's a bad idea...
2. I created 10 different lists which hold the 10 different types of messages, and I created 10 processor threads for those lists, each of which goes to its relevant list, processes a message, and then deletes it.
But here is the problem: say I receive 50 messages from 50 devices in list 1. By the time list 1's processor thread gets to the last (50th) message, that message's time will have expired; the limit is 10 seconds.
Any ideas for the best architecture to talk to more than 500 devices and process their different types of messages within 10 seconds?
I am working in C#. My application connects to the server as a client using TCP/IP; that server further connects to the online devices, which send messages to the server with a device id, message data, and message type. I then receive those messages from the server and reply back through it using the device id.
I think you need to partition the system differently. The listeners should be high priority but should only enqueue the requests. The queue should then be processed by a pool of workers. You could add prioritisation and other optimisations on the dequeuing side. In terms of getting every request done in 10s, it is really the second half of the system you will be optimising.
Think of the traditional queuing system. You have a queue of work requests to process. Each request has a series of attributes, let's say Name (string) and Priority (int). Once the work request has been queued, other workers (threads/processes, etc.) can interrogate the queue to pull out items based on priority and process them.
To get the 10s, I'd say that as soon as a worker has started processing the request, a timer comes into play and will mark that request as timed out after 10s unless the worker completes the task. Other workers can watch for results of the work in the queue and then handle the response behaviours.
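A minimal sketch of that partitioning (in Python for brevity; the same shape maps onto C# with a BlockingCollection and a pool of Tasks, and the handler and pool size here are placeholders):

```python
import queue
import threading
import time

work_queue = queue.Queue()  # listeners enqueue, workers dequeue
DEADLINE_SECONDS = 10

def listener(device_id, message):
    # High-priority side: only parse and enqueue, never process here.
    work_queue.put({"device": device_id,
                    "message": message,
                    "received_at": time.monotonic()})

def process(request):
    pass  # placeholder for the real message handling

def worker():
    while True:
        request = work_queue.get()
        age = time.monotonic() - request["received_at"]
        if age > DEADLINE_SECONDS:
            # Too old: mark as timed out instead of doing stale work.
            print("timed out:", request["device"])
        else:
            process(request)
        work_queue.task_done()

# A small, fixed pool of workers drains the queue; the listeners
# stay responsive no matter how many devices are connected.
for _ in range(10):
    threading.Thread(target=worker, daemon=True).start()
```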
Use other highly concurrent programming models besides threading, though threading is one of the highly concurrent models too.
For socket/TCP-IP/network messaging, use epoll on Linux 2.6.x and completion ports on Windows/MSVC.
See the document named EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list to learn more about highly concurrent programming models. We only use 2 or 3 threads for multi-listeners and >1000 clients.
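For illustration, a readiness-based event loop of that kind might look like this (Python's selectors module, which picks epoll on Linux automatically; the port and handler are hypothetical):

```python
import selectors
import socket

# One event loop on one thread can service thousands of sockets.
sel = selectors.DefaultSelector()

def handle_message(data):
    pass  # placeholder for the per-device message handling

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    data = conn.recv(4096)
    if data:
        handle_message(data)
    else:
        sel.unregister(conn)  # client closed the connection
        conn.close()

server = socket.socket()
server.bind(("0.0.0.0", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)  # dispatch to accept() or read()
```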
