I have created a queue in Azure Queue storage and enqueued two items in it. Using the Node.js SDK, I create a timer that executes every 5 seconds and calls:
azure.createQueueService("precondevqueues", "<key>").getMessages(queueName, {numOfMessages : 1, visibilityTimeout: 1 }, callback)
I expect the same message of the two in the queue to show up on every call, but that does not seem to be the case: the output of this call alternates between the two messages.
This should not happen, since visibilityTimeout is set to 1 and hence, after 1 second, the message dequeued in the first call should be visible again before the next getMessages call is made.
As noted here, FIFO ordering is not guaranteed. So it may be the case that most of the time messages are fetched in FIFO order, but that is not guaranteed, and Azure can give you the messages in whatever order is best for its implementation.
Messages are generally added to the end of the queue and retrieved
from the front of the queue, although first in, first out (FIFO)
behavior is not guaranteed.
Aha, my mistake! I read the getMessages documentation again very carefully and realized that getMessages dequeues the message but retains an invisible copy outside of the queue. If the message processor does not delete the message before the visibility timeout expires, the copy is re-enqueued, and therefore it goes to the end of the queue.
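For completeness, a minimal sketch of the fix, assuming the azure-storage Node.js SDK (the messageId/popReceipt property names come from that SDK; adjust to your version). Deleting the message before the visibility timeout expires is what stops the invisible copy from reappearing:

var azure = require('azure-storage');
var queueService = azure.createQueueService('precondevqueues', '<key>');

setInterval(function () {
  queueService.getMessages(queueName, {numOfMessages: 1, visibilityTimeout: 1}, function (err, messages) {
    if (err || messages.length === 0) return;
    var msg = messages[0];
    // process msg.messageText here ...
    // delete before the visibility timeout expires so the copy is not re-enqueued
    queueService.deleteMessage(queueName, msg.messageId, msg.popReceipt, function (delErr) {
      if (delErr) console.error(delErr);
    });
  });
}, 5000);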
I need to limit the rate of consuming messages from rabbitmq queue.
I have found many suggestions, but most of them offer to use the prefetch option. That option doesn't do what I need: even if I set prefetch to 1, the rate is about 6000 messages/sec. This is too many for the consumer.
I need to limit for example about 70 to 200 messages per second. This means consuming one message every 5-14ms. No simultaneous messages.
I'm using Node.JS with amqp.node library.
Implementing a token bucket might help:
https://en.wikipedia.org/wiki/Token_bucket
You can write a producer that produces to the "token bucket queue" at a fixed rate, with a TTL on the message (maybe expiring after a second?) or a maximum queue length equal to your rate per second. Consumers that receive a "normal queue" message must also receive a "token bucket queue" message in order to process the message, effectively rate-limiting the application.
NodeJS + amqplib Example:
var queueName = 'my_token_bucket';
var bucket = {ratePerSecond: 100}; // illustrative rate; not defined in the original snippet
rabbitChannel.assertQueue(queueName, {durable: true, messageTtl: 1000, maxLength: bucket.ratePerSecond});
writeToken();

function writeToken() {
    // Buffer.from replaces the deprecated new Buffer(...)
    rabbitChannel.sendToQueue(queueName, Buffer.from(new Date().toISOString()), {persistent: true});
    setTimeout(writeToken, 1000 / bucket.ratePerSecond);
}
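The consuming side is not shown above; a minimal sketch under the same assumptions (queue names, processMessage, and the retry interval are illustrative; amqplib's promise API assumed) pairs each work message with a token pulled from the bucket queue via channel.get:

function consumeWithToken(workMsg) {
  rabbitChannel.get(queueName).then(function (token) {
    if (token === false) {
      // no token available yet: try again shortly
      return setTimeout(function () { consumeWithToken(workMsg); }, 50);
    }
    rabbitChannel.ack(token);   // spend the token
    processMessage(workMsg);    // your actual work
    rabbitChannel.ack(workMsg);
  });
}

rabbitChannel.prefetch(1);
rabbitChannel.consume('my_work_queue', consumeWithToken);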
I've already found a solution.
I use the nanotimer module from npm to time the delays.
I calculate delay = 1 / [messages_per_second] (in nanoseconds).
Then I consume messages with prefetch = 1.
Then I calculate the real delay as delay - [message_processing_time].
Then I wait that real delay before sending the ack for the message.
It works perfectly. Thanks to all.
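A minimal sketch of that approach, using plain setTimeout for readability (nanotimer works the same way, only with finer resolution); messagesPerSecond and handle are illustrative names:

var messagesPerSecond = 100;
var delayMs = 1000 / messagesPerSecond;

channel.prefetch(1); // only one unacked message at a time
channel.consume(queueName, function (msg) {
  var start = Date.now();
  handle(msg); // your processing
  // ack only after the remaining part of the per-message delay has elapsed
  var remaining = Math.max(0, delayMs - (Date.now() - start));
  setTimeout(function () { channel.ack(msg); }, remaining);
});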
See 'Fair Dispatch' in RabbitMQ Documentation.
For example in a situation with two workers, when all odd messages are heavy and even messages are light, one worker will be constantly busy and the other one will do hardly any work. Well, RabbitMQ doesn't know anything about that and will still dispatch messages evenly.
This happens because RabbitMQ just dispatches a message when the message enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It just blindly dispatches every n-th message to the n-th consumer.
In order to defeat that we can use the prefetch method with the value of 1. This tells RabbitMQ not to give more than one message to a worker at a time. Or, in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, it will dispatch it to the next worker that is not still busy.
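With amqplib this is a one-line setting; a minimal sketch (queueName and doWork are illustrative):

channel.prefetch(1); // at most one unacknowledged message per consumer
channel.consume(queueName, function (msg) {
  doWork(msg, function () {
    channel.ack(msg); // the next message is dispatched only after this ack
  });
});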
I don't think RabbitMQ can provide this feature out of the box.
If you have only one consumer, then the whole thing is pretty easy, you just let it sleep between consuming messages.
If you have multiple consumers, I would recommend using some "shared memory" to keep the rate. For example, you might have 10 consumers consuming messages. To keep a rate of 70-200 messages per second across all of them, make a call to Redis before processing each message to check whether you are eligible to process it. If yes, update Redis to show the other consumers that one message is currently in process.
If you have no control over the consumer, then implement option 1 or 2 and publish the message back to Rabbit. This way the original consumer will consume messages at the desired pace.
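A minimal sketch of option 2, using a fixed-window counter in Redis (ioredis here; the key scheme, limitPerSecond, and handle are illustrative assumptions):

var Redis = require('ioredis');
var redis = new Redis();
var limitPerSecond = 100;

function tryAcquire(callback) {
  var windowKey = 'rate:' + Math.floor(Date.now() / 1000); // one key per second
  redis.incr(windowKey, function (err, count) {
    if (err) return callback(err, false);
    redis.expire(windowKey, 2); // let old windows expire
    callback(null, count <= limitPerSecond);
  });
}

channel.prefetch(1);
channel.consume(queueName, function (msg) {
  (function attempt() {
    tryAcquire(function (err, ok) {
      if (err || !ok) return setTimeout(attempt, 20); // over the limit: back off
      handle(msg);
      channel.ack(msg);
    });
  })();
});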
This is how I fixed mine with just setTimeout.
I set mine to consume a message every 200 ms, which consumes 5 messages per second. In my case the handler does an update if the record exists.
channel.prefetch(1); // required: without it, every queued message is dispatched at once and the delayed acks throttle nothing
channel.consume(transactionQueueName, async (data) => {
    let dataNew = JSON.parse(data.content);
    const processedTransaction = await seperateATransaction(dataNew);
    // delay the ack to avoid duplicate entries - !important, don't remove the setTimeout
    setTimeout(function () {
        channel.ack(data);
    }, 200);
});
Done
This is more of a conceptual question and doesn't apply to any particular programming language.
I have two entities communicating with each other, with three types of messages allowed:
Command Message: An unsolicited message commanding the other entity to do something.
Query Message: An unsolicited message asking the other entity for information.
Response Message: A solicited message answering a query message from the other entity.
Now each entity has two threads:
Reader Thread: Reads messages.
Worker Thread: Sends messages and does useful things
The two possible communication scenarios are:
Entity A sends a command to Entity B, and Entity A doesn't care what happens after.
Entity A sends a query to Entity B, and Entity A must wait until Entity B responds with the answer.
So the question is, how does the reader thread handle both solicited and unsolicited messages?
Unsolicited messages are easy to handle through events. The reader thread can just fire an event on the worker thread saying it received a query or a command, and the worker thread can react accordingly.
Solicited messages are hard to handle though. The worker thread sends a query, and must block until it receives a response or times out. How does the worker thread let the reader thread know it is waiting for a response, and how does the reader thread tie a response back to a specific query from the worker thread and deliver that response back to the worker thread's execution?
I know this has been done a million times in other programs, so what's the standard practice?
[I used Windows Azure Service Bus messaging entities as I am familiar with it, but in general this should be true with any Messaging system.]
Let's say your entity names are A and B.
Have one Topic (a pub-sub entity) and one Queue for communication between A and B (as you need bidirectional communication): Topic-A2B and Queue-B2A. Topic-A2B is for Commands or Queries from A to B, and Queue-B2A, as the name says, is for Responses from B to A.
Typical messaging systems offer a message-type property that you can set and later use to distinguish which kind of message you are reading and route it accordingly; see, for example, the properties on a Windows Azure Service Bus BrokeredMessage. Use that property to mark whether a message is a Query, a Command, or a Response.
The idea here is that B receives messages using Subscriptions: one reading thread handles only Commands, the other only Queries.
For unsolicited messages - as you said, it's easy to handle. All you need to do is:
A sends the message to B with BrokeredMsg.ContentType = "Cmd", and B creates a Subscription with a matching filter, then reads and processes it.
For solicited messages - like Queries - a feature called Sessions comes in handy.
A sends the message to B with something like: BrokeredMessage.ContentType = "Query"
A also sets a correlation id on the message it sends to B: BrokeredMessage.SessionId = "ABC456" <-- the correlation id A will use to match the response to this query
Now A waits for the response and expects B to set the exact same value it had set earlier:
BrokeredMessage.SessionId = "ABC456"
A waits using the AcceptMessageSession API, with the session id and a timeout, e.g. Q_B2A_QClient.AcceptMessageSession("ABC456", 2 mins)
At the receiving end, B creates a Subscription with a filter to be able to receive these messages.
Once B receives the query, it processes it and puts the result back in Queue-B2A.
If B succeeds in putting the message back in Queue-B2A in less than 2 minutes, A will receive it, and you can then orchestrate further with a callback method (as all of these are async methods, you will not need the reader or worker threads you mentioned above - a huge performance boost).
HTH!
Sree
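More generally, the standard practice behind this is a correlation id: the worker tags each query with a unique id and parks a completion handle in a shared map, and the reader looks up that id on each incoming response. A language-agnostic sketch in JavaScript (transport and emitEvent are assumed stand-ins for your messaging layer):

const pending = new Map(); // correlationId -> {resolve, timer}
let nextId = 0;

function sendQuery(payload, timeoutMs) {
  return new Promise(function (resolve, reject) {
    const id = ++nextId;
    const timer = setTimeout(function () {
      pending.delete(id);
      reject(new Error('query timed out'));
    }, timeoutMs);
    pending.set(id, {resolve: resolve, timer: timer});
    transport.send({type: 'query', correlationId: id, payload: payload});
  });
}

// reader side: route solicited messages back to the waiting caller
function onMessage(msg) {
  if (msg.type === 'response') {
    const entry = pending.get(msg.correlationId);
    if (entry) {
      clearTimeout(entry.timer);
      pending.delete(msg.correlationId);
      entry.resolve(msg.payload);
    }
  } else {
    emitEvent(msg); // unsolicited command/query: hand off to the worker as an event
  }
}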
My application (.NET-based) gets messages from a queue in a multithreaded fashion, and I'm worried that I may receive messages out of order because one thread can be quicker than another. For instance, given the following queue state:
[Message-5 | Message-4 | Message-3 | Message-2 | Message-1]
In a multithreaded operation, msg #2 may arrive before msg #1, even though msg #1 was first in the queue, due to threading issues (thread time slices, thread scheduling, etc.).
In such a situation, it would be great if each message in the queue were stamped with an ordinal/sequence number when it was enqueued; even if I get the messages out of order, I could still order them at some point within my application using that ordinal-number attribute.
Any known mechanism to achieve this in a WebSphere MQ environment?
You have 2 choices:
(1) Use Message Grouping in MQ as whitfiea mentioned or
(2) Change your application to be single threaded.
Note: If the sending application does not set the MQMD MsgId field then the queue manager will generate a unique number (based on queue manager name, date & time) and store it in the message's MQMD MsgID field.
You can obtain the MessageSequenceNumber from the MQMessage if the messages are put to the queue in a message group. The MessageSequenceNumber will either be the order in which the messages were put to the queue (by default) or be defined by the application that put the messages to the queue.
See the MessageSequenceNumber here for more details
Yes, if the originating message has an ordinal, then as you receive your data you could buffer and reorder it, e.g. with a sorted dictionary keyed by the ordinal (guarded by a lock, since SortedDictionary itself is not thread-safe):
SortedDictionary<int,Message>
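As a conceptual sketch (in JavaScript, since the idea is language-neutral): hold early arrivals in a buffer and release them only when the next expected ordinal shows up. deliver is an assumed downstream handler.

const buffer = new Map(); // ordinal -> message
let nextExpected = 1;

function onReceive(ordinal, message) {
  buffer.set(ordinal, message);
  // release every message that is now in sequence
  while (buffer.has(nextExpected)) {
    deliver(buffer.get(nextExpected));
    buffer.delete(nextExpected);
    nextExpected++;
  }
}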
Now, suppose we are designing an application that consists of two Erlang nodes. On Node A there will be very many processes, on the order of thousands. These processes access resources on Node B by sending a message to a registered process on Node B. At Node B, say you have a process started by executing the following function:
start_server() ->
    register(zeemq_server, spawn(?MODULE, server, [])),
    ok.

server() ->
    receive
        {{CallerPid, Ref}, {Module, Func, Args}} ->
            Result = (catch erlang:apply(Module, Func, Args)),
            CallerPid ! {Ref, Result},
            server();
        _ -> server()
    end.
On Node A, any process that wants to execute a function in a given module on Node B uses the following piece of code:
call(Node, Module, Func, Args) ->
    Ref = make_ref(),
    Me = self(),
    {zeemq_server, Node} ! {{Me, Ref}, {Module, Func, Args}},
    receive
        {Ref, Result} -> Result
    after timer:minutes(3) ->
        error_logger:error_report(["Call to server took so long"]),
        {error, remote_call_failed}
    end.
So, assuming that the zeemq_server process on Node B will never be down, and that the network connection between Node A and B is always up, please answer the following questions:
Qn 1: Since there is only one receiving process on Node B, its mailbox is most likely to be full all the time. This is because there are many processes on Node A, and at a given interval, say 2 seconds, every process makes at least one call to the Node B server. In what ways can the reception on Node B be made redundant, e.g. process groups etc.? Explain (conceptually) how this would replace the server-side code above, and show what changes would happen on the client side.
Qn 2: In a situation where there is only one receiver on Node B, is there a maximum number of messages allowed in a process mailbox? How would Erlang respond if a single process mailbox is flooded with too many messages?
Qn 3: In what ways, using the very concept shown above, can I guarantee that every process which sends a request gets back an answer as soon as possible, before the timeout occurs? Could converting the reception part on Node B into a parallel operation help? Like this:
start_server() ->
    register(zeemq_server, spawn(?MODULE, server, [])),
    ok.

server() ->
    receive
        {{CallerPid, Ref}, {Module, Func, Args}} ->
            spawn(?MODULE, child, [Ref, CallerPid, {Module, Func, Args}]),
            server();
        _ -> server()
    end.
child(Ref, CallerPid, {Module, Func, Args}) ->
    Result = (catch erlang:apply(Module, Func, Args)),
    CallerPid ! {Ref, Result},
    ok.
The method shown above may increase the instantaneous number of processes running on Node B, and this may affect the service greatly due to memory use. However, it looks good and makes the server() loop return immediately to handle the next request. What is your take on this modification?
Lastly: illustrate how you would implement a pool of receiver processes on Node B that still appears as a single name to Node A, such that incoming messages are multiplexed amongst the receivers and the load is shared within this group of processes. Keep the meaning of the problem the same.
The maximum number of messages in a process mailbox is unbounded, except by the amount of memory.
Also, if you need to inspect the mailbox size, use
erlang:process_info(self(),[message_queue_len,messages]).
This will return something like:
[{message_queue_len,0},{messages,[]}]
What I suggest is that you first convert your server above into a gen_server. This is your worker.
Next, I suggest using poolboy (https://github.com/devinus/poolboy) to create a pool of instances of your server as poolboy workers (there are examples in their GitHub Readme.md). Lastly, I suggest creating a module for callers with a helper method that creates a poolboy transaction and applies a Worker argument from the pool to a function. Example below cribbed from their GitHub:
squery(PoolName, Sql) ->
    poolboy:transaction(PoolName, fun(Worker) ->
        gen_server:call(Worker, {squery, Sql})
    end).
That said, would Erlang RPC suit your needs better? Details on Erlang RPC at http://www.erlang.org/doc/man/rpc.html. A good treatment of Erlang RPC is found at http://learnyousomeerlang.com/distribunomicon#rpc.
IMO spawning a new process to handle each request may be overkill, but it's hard to say without knowing what has to be done with each request.
You can have a pool of processes to handle the messages, using a round-robin method to distribute the requests, or, based on the type of request, either handle it, send it to a child process, or spawn a new process. You can also monitor the load of the pooled processes by looking at their message queues and starting new children if they are overloaded, using a supervisor: just use send_after in init to check the load every few seconds and act accordingly. Use OTP if you can; there's overhead, but it is worth it.
I wouldn't use HTTP for a dedicated line of communication; I believe it's too much overhead. You can control the load using a pool of processes to handle it.