Consuming multiple queues on one channel - node.js

I use RabbitMQ to manage and work with queues. I have multiple queues, and their count isn't fixed. I use a direct exchange for publishing messages.
How can I consume all the messages of each queue (based on routing_key) using only one channel?
For now, assume I have 5 queues. I've used a loop and created a channel per queue, like this:
stuff = ["shoes", "pants", "hats", "jewels", "glasses"];
stuff.forEach(cnt => {
  var ex = 'stuff';
  var cq = cnt;
  amqp
    .connect('amqp://localhost')
    .then(conn => conn.createChannel())
    .then(ch => {
      ch.assertExchange(ex, 'x-delayed-message', {
        durable: true,
        arguments: { 'x-delayed-type': 'direct' }
      });
      return ch
        .assertQueue(cq, { durable: true })
        .then(() => {
          ch.bindQueue(cq, ex, cq); // second cq is the routing key
        })
        .then(() => {
          ch.consume(cq, (msg) => {
            console.log("['%s'] '%s'", cq, msg.content.toString());
            if (msg.content.toString() != null)
              console.log(cq);
            reciveMSG = JSON.parse(msg.content.toString());
          }, { noAck: true });
        });
    });
});
But I want to do it with only one channel, because I assume it's more efficient and uses less memory (I don't know whether that's true or not). Is there a way to handle an unspecified number of queues?

You can use one channel to consume from several queues, but you'll receive messages one-by-one, even if they are coming from different queues. I'm pretty sure a channel exception on one queue will stop consuming from all queues.
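For illustration, here is a minimal sketch of that approach with amqplib (the promise API), assuming the same stuff queue names and 'stuff' exchange from the question: one connection, one channel, and one consume call per queue on that channel.
const amqp = require('amqplib');

const stuff = ['shoes', 'pants', 'hats', 'jewels', 'glasses'];
const ex = 'stuff';

amqp.connect('amqp://localhost')
  .then(conn => conn.createChannel())
  .then(async ch => {
    await ch.assertExchange(ex, 'x-delayed-message', {
      durable: true,
      arguments: { 'x-delayed-type': 'direct' }
    });
    // One assertQueue/bindQueue/consume per queue, all on the same channel.
    for (const cq of stuff) {
      await ch.assertQueue(cq, { durable: true });
      await ch.bindQueue(cq, ex, cq); // routing key equals the queue name
      await ch.consume(cq, msg => {
        if (msg !== null) {
          console.log("['%s'] '%s'", cq, msg.content.toString());
        }
      }, { noAck: true });
    }
  })
  .catch(console.error);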
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

RabbitMQ: Ack/Nack a message on a channel that is closed and reopened

I'm getting this error from the RabbitMQ server:
Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - unknown delivery tag 80"
This happens because the connection is lost during the consumer's task, and at the end, when the message is acked/nacked, I get this error because I cannot ack a message on a different channel than the one I received it from.
Here is the code for the RabbitMQ connection:
async connect({ prefetch = 1, queueName }) {
this.queueName = queueName;
console.log(`[AMQP][${this.queueName}] | connecting`);
return queue
.connect(this.config.rabbitmq.connstring)
.then(conn => {
conn.once('error', err => {
this.channel = null;
if (err.message !== 'Connection closing') {
console.error(
`[AMQP][${this.queueName}] (evt:error) | ${err.message}`,
);
}
});
conn.once('close', () => {
this.channel = null;
console.error(
`[AMQP][${this.queueName}] (evt:close) | reconnecting`,
);
this.connect({ prefetch, queueName: this.queueName });
});
return conn.createChannel();
})
.then(ch => {
console.log(`[AMQP-channel][${this.queueName}] created`);
ch.on('error', err => {
console.error(
`[AMQP-ch][${this.queueName}] (evt:error) | ${err.message}`,
);
});
ch.on('close', () => {
console.error(`[AMQP-ch][${this.queueName}] (evt:close)`);
});
this.channel = ch;
return this.channel;
})
.then(ch => {
return this.channel.prefetch(prefetch);
})
.then(ch => {
return this.channel.assertQueue(this.queueName);
})
.then(async ch => {
while (this.buffer.length > 0) {
const request = this.buffer.pop();
await request();
}
return this.channel;
})
.catch(error => {
console.error(error);
console.log(`[AMQP][${this.queueName}] reconnecting in 1s`);
return this._delay(1000).then(() =>
this.connect({ prefetch, queueName: this.queueName }),
);
});
}
async ack(msg) {
try {
if (this.channel) {
console.log(`[AMQP][${this.queueName}] ack`);
await this.channel.ack(msg);
} else {
console.log(`[AMQP][${this.queueName}] ack (buffer)`);
this.buffer.push(() => {
this.ack(msg);
});
}
} catch (e) {
console.error(`[AMQ][${this.queueName}] ack error: ${e.message}`);
}
}
As you can see, after the connection is established a channel is created; when I hit a connection issue, the channel is set to NULL and after 1 second the connection retries, recreating a new channel.
To manage the offline period I'm using a buffer that collects all the ack calls made while the channel was NULL; after the connection is re-established I drain the buffer.
So basically I have to find a way to send an ACK after a connection is lost or a channel is closed for whatever reason.
Thanks for any help
You cannot acknowledge a message once the channel is closed (whatever the reason). The broker will automatically re-deliver the same message to another consumer.
This is well documented in the RabbitMQ message confirmation section:
When Consumers Fail or Lose Connection: Automatic Requeueing
When manual acknowledgements are used, any delivery (message) that was not acked is automatically requeued when the channel (or connection) on which the delivery happened is closed. This includes TCP connection loss by clients, consumer application (process) failures, and channel-level protocol exceptions (covered below).
...
Due to this behavior, consumers must be prepared to handle redeliveries and otherwise be implemented with idempotence in mind. Redeliveries will have a special boolean property, redeliver, set to true by RabbitMQ. For first time deliveries it will be set to false. Note that a consumer can receive a message that was previously delivered to another consumer.
As the documentation suggests, you need to handle such issues at the consumer side by implementing a message idempotency design pattern. In other words, your architecture should be ready to deal with message re-delivery due to errors.
Alternatively, you can disable message acknowledgment and obtain an "at most once" delivery pattern. This implies that in case of errors you will have to deal with message loss.
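A hedged sketch of what that idempotency can look like on the consumer side with amqplib: track processed message IDs (here an in-memory Set, purely for illustration; a real system would use a durable store) and check the redelivered flag so a redelivered message is not processed twice. The queue name 'work', the messageId property, and the handle() function are assumptions for the example.
const processed = new Set(); // illustration only; use a durable store in production

channel.consume('work', msg => {
  if (msg === null) return;

  const id = msg.properties.messageId; // assumes the producer sets a messageId
  if (msg.fields.redelivered && processed.has(id)) {
    // Already handled before the old channel died; just ack the redelivery.
    channel.ack(msg);
    return;
  }

  handle(msg);        // hypothetical business logic
  processed.add(id);
  channel.ack(msg);
}, { noAck: false });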
Further readings in the matter:
https://bravenewgeek.com/you-cannot-have-exactly-once-delivery/
And the follow up once Kafka introduced new semantics:
https://bravenewgeek.com/you-cannot-have-exactly-once-delivery-redux/
There is no way to send an ACK if the connection is dropped or broken for some reason, because acknowledgments travel over the connection at the socket level, and once it is closed there is no way to recreate it as the same socket.
When the connection drops, the message remains unacknowledged, and therefore another listener can process it, or it will be processed again by the disconnected listener when it reconnects.
In my opinion you are trying to solve a problem that is caused not by RabbitMQ but by the underlying socket implementation.
You could solve this by not managing a message buffer at all and instead taking advantage of RabbitMQ's behavior of re-presenting the last unprocessed message as soon as your listener connects again.

RabbitMQ data lost on crash

I'm using RabbitMQ to store and retrieve data. I referred to this article. I have set the durable flag to true and the noAck flag to false (I need to keep the messages on the queue even after consuming).
I created these scenarios:
I updated the stock data 3 times with the consumer off (inactive). Then I activated the consumer. It consumed all three messages from the queue. [Works well.]
Then I produced three messages again (consumer inactive again) and turned off the RabbitMQ server. When I restarted the server and activated the consumer, it doesn't seem to consume the data (have the messages that were on the queue been lost?)
Consumer :
connection.createChannel(function (error1, channel) {
if (error1) {
throw error1;
}
var queue = "updateStock2";
channel.assertQueue(queue, {
durable: true,
});
console.log(
" [*] Waiting for stockData messages in %s. To exit press CTRL+C",
queue
);
channel.consume(
queue,
function (data) {
stock = JSON.parse(data.content.toString());
console.log(" [x] Received Stock:", stock.name + " : " + stock.value);
},
{
noAck: false,
}
  );
});
Producer :
connection.createChannel(function (error1, channel) {
if (error1) {
throw error1;
}
var queue = "updateStock2";
channel.assertQueue(queue, {
durable: true,
});
channel.sendToQueue(queue, Buffer.from(data));
console.log(" [x] Sent %s", data);
});
setTimeout(function () {
  connection.close();
  //process.exit(0);
}, 500);
});
Aren't they persistent? If the server crashes, are all the messages in the queue gone forever?
How can I retrieve data that was in the queue when the server crashed?
Thanks in advance.
Why were your messages lost?
Unfortunately, you did not declare {persistent: true} when you sent the message. Check https://www.rabbitmq.com/tutorials/tutorial-two-javascript.html; you should use channel.sendToQueue(queue, Buffer.from(msg), {persistent: true});
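For completeness, a hedged sketch of the corrected producer, reusing the queue name and the data variable from the question (a durable queue plus a persistent message is what survives a broker restart):
connection.createChannel(function (error1, channel) {
  if (error1) {
    throw error1;
  }
  var queue = "updateStock2";
  channel.assertQueue(queue, { durable: true });
  // persistent: true marks the message as delivery mode 2,
  // so it is written to disk and recovered on node boot.
  channel.sendToQueue(queue, Buffer.from(data), { persistent: true });
});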
Aren't they persistent?
Durable queues will be recovered on node boot, including messages in them published as persistent. Messages published as transient will be discarded during recovery, even if they were stored in durable queues.
Which middleware might be better for you?
If you want middleware that retains messages even after they are consumed by consumers, you may need Kafka.

AMQPlib nodejs consumer task concurrency

I'm building a background task management system with RabbitMQ and Node.js using the amqplib module.
Some of the tasks are really CPU-consuming, so if I'm launching a lot of them and I have only a few workers up, my server can get killed (using too much CPU).
I'm wondering if there is a way to create an amqp queue so that my consumers will only consume one task of this queue at a time (i.e. Before an ack or a reject, do not send a task of this kind to this consumer).
Or should I handle this myself in the code (maybe keeping a reference in my worker that I'm handling a task of this queue and rejecting all tasks of this queue while I'm executing the task ?).
Here is my sample code.
I'm creating the amqp connection like this:
const amqpConn = require('amqplib').connect('amqp://localhost');
My queue name is tasks :
amqpConn.then((conn) => {
  return conn.createChannel();
}).then((ch) => {
  return ch.assertQueue('tasks').then((ok) => {
    // 'i' comes from a surrounding loop that is not shown here
    ch.sendToQueue('tasks', Buffer.from(`something to do ${i}`));
  });
}).catch(console.warn);
And here is my consumer (I guess this is where I should do the work to limit only one concurrent task of this queue) :
amqpConn.then((conn) => {
return conn.createChannel();
}).then((ch) => {
return ch.assertQueue('tasks').then((ok) => {
return ch.consume('tasks', (msg) => {
if (msg !== null) {
console.log(msg.content.toString());
ch.ack(msg);
}
});
});
}).catch(console.warn);
Thanks a lot !
I'm wondering if there is a way to create an amqp queue so that my consumers will only consume one task of this queue at a time
If this is what you really need then yes, simply have exactly one consumer and declare the queue exclusive. In that way one task is consumed at a time.
I think I got it going by :
creating a Channel per queue
using the prefetch_count of the channel to limit the concurrency on a per-consumer basis (see the sketch after the link below)
https://www.rabbitmq.com/consumer-prefetch.html
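A minimal hedged sketch of that second point with amqplib, reusing the 'tasks' queue from the question: prefetch(1) tells RabbitMQ not to deliver another message to this consumer until the previous one has been acked or rejected.
amqpConn.then((conn) => {
  return conn.createChannel();
}).then((ch) => {
  return ch.assertQueue('tasks')
    .then(() => ch.prefetch(1)) // at most one unacked task per consumer
    .then(() => ch.consume('tasks', (msg) => {
      if (msg !== null) {
        // ... run the CPU-heavy task ...
        ch.ack(msg); // only after this ack will the next task be delivered
      }
    }));
}).catch(console.warn);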

High performance on Nodejs RabbitMQ server

I'm building an analysis system with a million users online at the same time. I use RabbitMQ as a message broker to reduce the load on the server.
Here is my diagram
My system includes 3 components.
Publisher server (Producer):
This system was built on Node.js. Its purpose is to publish messages into the queue.
RabbitMQ queue: This stores the messages that the publisher server sends. A connection is then opened to deliver messages from the queue to the subscriber server.
Subscriber server (Consumer): This system receives the messages from the queue.
Publisher server source code
var amqp = require('amqplib/callback_api');
amqp.connect("amqp://localhost", function(error, connect) {
if (error) {
return callback(-1, null);
} else {
connect.createChannel(function(error, channel) {
if (error) {
return callback(-3, null);
} else {
var q = 'logs';
var msg = data; // object
// convert msg object to buffer
var new_msg = Buffer.from(JSON.stringify(msg), 'binary');
channel.assertExchange(q, 'fanout', { durable: false });
channel.publish(q, 'message_queues', new Buffer(new_msg));
console.log(" [x] Sent %s", new_msg);
return callback(null, msg);
}
});
}
});
This creates an exclusive exchange "message_queues" with "fanout" type to send a broadcast to all consumers.
Subscriber server source code
var amqp = require('amqplib/callback_api');
amqp.connect("amqp://localhost", function(error, connect) {
if (error) {
console.log('111');
} else {
connect.createChannel(function(error, channel) {
if (error) {
console.log('1');
} else {
var ex = 'logs';
channel.assertExchange(ex, 'fanout', { durable: false });
channel.assertQueue('message_queues', { exclusive: true }, function(err, q) {
if (err) {
console.log('123');
} else {
console.log(" [*] Waiting for messages in %s. To exit press CTRL+C", q.queue);
channel.bindQueue(q.queue, ex, 'message_queues');
channel.consume(q.queue, function(msg) {
console.log(" [x] %s", msg.content.toString());
}, { noAck: true });
}
});
}
});
}
});
receive messages from the "message_queues" exchange
When I send a single message, the system works well. However, when I benchmarked the performance of this system (with ~1000 users sending requests per second), it ran into issues. The system seems to be overloaded / overflowing a buffer (or something else isn't working well).
I only started reading about RabbitMQ 2 days ago. I know its tutorials are basic examples, so I need help building a real-world system. Any solution & suggestion is welcome.
Hope that my question makes sense.
Your question is general. You should probably provide more details to help identify the bottleneck and help you out.
So, first of all, I think you should check RabbitMQ itself - whether it's the bottleneck or not.
There are many things that can go wrong:
The number of consumers that can consume the message is too low (I assume you use a pool of consumers)
The network is too slow
The queues and messages are replicated between too many nodes of RabbitMQ and go to disk (it's possible to use RabbitMQ like this)
The consumer can't really handle a message and it gets constantly re-queued
So, in general during your tests you should check rabbit mq and see what happens there.
Once a message arrives in the queue, it is in the Ready state. It will stay there until one of the consumers connected to the queue takes the message for handling.
When one of the consumers (RabbitMQ does round-robin between them) picks the message for processing, its state turns to Unacknowledged.
If the consumer fails to handle the message, it will be re-queued by RabbitMQ so that another consumer gets a chance to handle it.
Of course, if consumer handles the message successfully, the message disappears from rabbit mq server.
Assuming you've installed rabbit mq web ui (I highly recommend it especially for beginners) - you can visually see what happens in your queue - you'll see how many messages are in ready state, and how many are unacknowledged.
This will help to identify a bottleneck.
For example - if you see that only one message is usually in the unacknowledged state, this can mean that the consumer can't handle the message and keeps sending it back to RabbitMQ. Meanwhile, new messages keep arriving from the producer, so the number of ready messages will increase very fast.
It can also point to the fact that you use only one consumer that can handle only one message at a time. So you can consider parallelizing here, by running many consumers in different threads, or even clustering your application (in RabbitMQ, consumers can reside on different machines).
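If you prefer the command line to the management UI, a standard rabbitmqctl invocation shows the same counters while you run the load test:
rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers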
Hope this helps in general, of course, as I've said before if you have more specific questions - please provide more information about what exactly happens during the test

RabbitMQ / AMQP: single queue, multiple consumers for same message?

I am just starting to use RabbitMQ and AMQP in general.
I have a queue of messages
I have multiple consumers, which I would like to do different things with the same message.
Most of the RabbitMQ documentation seems to be focused on round-robin, ie where a single message is consumed by a single consumer, with the load being spread between each consumer. This is indeed the behavior I witness.
An example: the producer has a single queue, and send messages every 2 sec:
var amqp = require('amqp');
var connection = amqp.createConnection({ host: "localhost", port: 5672 });
var count = 1;
connection.on('ready', function () {
var sendMessage = function(connection, queue_name, payload) {
var encoded_payload = JSON.stringify(payload);
connection.publish(queue_name, encoded_payload);
}
setInterval( function() {
var test_message = 'TEST '+count
sendMessage(connection, "my_queue_name", test_message)
count += 1;
}, 2000)
})
And here's a consumer:
var amqp = require('amqp');
var connection = amqp.createConnection({ host: "localhost", port: 5672 });
connection.on('ready', function () {
connection.queue("my_queue_name", function(queue){
queue.bind('#');
queue.subscribe(function (message) {
var encoded_payload = unescape(message.data)
var payload = JSON.parse(encoded_payload)
console.log('Recieved a message:')
console.log(payload)
})
})
})
If I start the consumer twice, I can see that each consumer is consuming alternate messages in round-robin behavior. Eg, I'll see messages 1, 3, 5 in one terminal, 2, 4, 6 in the other.
My question is:
Can I have each consumer receive the same messages? Ie, both consumers get message 1, 2, 3, 4, 5, 6? What is this called in AMQP/RabbitMQ speak? How is it normally configured?
Is this commonly done? Should I just have the exchange route the message into two separate queues, with a single consumer, instead?
Can I have each consumer receive the same messages? Ie, both consumers get message 1, 2, 3, 4, 5, 6? What is this called in AMQP/RabbitMQ speak? How is it normally configured?
No, not if the consumers are on the same queue. From RabbitMQ's AMQP Concepts guide:
it is important to understand that, in AMQP 0-9-1, messages are load balanced between consumers.
This seems to imply that round-robin behavior within a queue is a given, and not configurable. Ie, separate queues are required in order to have the same message ID be handled by multiple consumers.
Is this commonly done? Should I just have the exchange route the message into two separate queues, with a single consumer, instead?
No it's not; a single queue with multiple consumers where each consumer handles the same message ID isn't possible. Having the exchange route the message into two separate queues is indeed better.
As I don't require too complex routing, a fanout exchange will handle this nicely. I didn't focus too much on Exchanges earlier as node-amqp has the concept of a 'default exchange' allowing you to publish messages to a connection directly, however most AMQP messages are published to a specific exchange.
Here's my fanout exchange, both sending and receiving:
var amqp = require('amqp');
var connection = amqp.createConnection({ host: "localhost", port: 5672 });
var count = 1;
connection.on('ready', function () {
connection.exchange("my_exchange", options={type:'fanout'}, function(exchange) {
var sendMessage = function(exchange, payload) {
console.log('about to publish')
var encoded_payload = JSON.stringify(payload);
exchange.publish('', encoded_payload, {})
}
// Recieve messages
connection.queue("my_queue_name", function(queue){
console.log('Created queue')
queue.bind(exchange, '');
queue.subscribe(function (message) {
console.log('subscribed to queue')
var encoded_payload = unescape(message.data)
var payload = JSON.parse(encoded_payload)
console.log('Recieved a message:')
console.log(payload)
})
})
setInterval( function() {
var test_message = 'TEST '+count
sendMessage(exchange, test_message)
count += 1;
}, 2000)
})
})
The last couple of answers are almost correct - I have tons of apps that generate messages that need to end up with different consumers so the process is very simple.
If you want multiple consumers to the same message, do the following procedure.
Create multiple queues, one for each app that is to receive the message; in each queue's properties, "bind" a routing tag to the amq.direct exchange. Change your publishing app to send to amq.direct and use the routing tag (not a queue). AMQP will then copy the message into each queue with the same binding. Works like a charm :)
Example: Let's say I have a JSON string I generate. I publish it to the "amq.direct" exchange using the routing tag "new-sales-order". I have a queue for my order_printer app that prints orders, a queue for my billing system that will send a copy of the order and invoice the client, a web archive system where I archive orders for historic/compliance reasons, and a client web interface where orders are tracked as other info comes in about an order.
So my queues are: order_printer, order_billing, order_archive and order_tracking
All have the binding tag "new-sales-order" bound to them, all 4 will get the JSON data.
This is an ideal way to send data without the publishing app knowing or caring about the receiving apps.
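A minimal amqplib sketch of that setup, assuming the queue and routing-key names from the example above (amq.direct is a pre-declared exchange, so it only needs to be bound, not asserted):
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // One queue per consuming app, all bound with the same routing key.
  const queues = ['order_printer', 'order_billing', 'order_archive', 'order_tracking'];
  for (const q of queues) {
    await ch.assertQueue(q, { durable: true });
    await ch.bindQueue(q, 'amq.direct', 'new-sales-order');
  }

  // The publisher only knows the exchange and the routing key;
  // the broker copies the message into every bound queue.
  const order = { id: 1, item: 'widget', qty: 2 };
  ch.publish('amq.direct', 'new-sales-order', Buffer.from(JSON.stringify(order)));
})();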
Just read the RabbitMQ tutorial. You publish a message to an exchange, not to a queue; it is then routed to the appropriate queues. In your case, you should bind a separate queue for each consumer. That way, they can consume messages completely independently.
Yes, each consumer can receive the same messages. Have a look at
http://www.rabbitmq.com/tutorials/tutorial-three-python.html
http://www.rabbitmq.com/tutorials/tutorial-four-python.html
http://www.rabbitmq.com/tutorials/tutorial-five-python.html
for different ways to route messages. I know they are for Python and Java, but it's good to understand the principles, decide what you are doing, and then find how to do it in JS. It sounds like you want to do a simple fanout (tutorial 3), which sends the messages to all queues connected to the exchange.
The difference between what you are doing and what you want to do is basically that you are going to set up an exchange of type fanout. Fanout exchanges send all messages to all connected queues. Each queue will have a consumer that will have access to all the messages separately.
Yes, this is commonly done; it is one of the features of AMQP.
The send pattern is a one-to-one relationship. If you want to "send" to more than one receiver you should be using the pub/sub pattern. See http://www.rabbitmq.com/tutorials/tutorial-three-python.html for more details.
RabbitMQ / AMQP: single queue, multiple consumers for same message and page refresh.
rabbit.on('ready', function () { });
sockjs_chat.on('connection', function (conn) {
conn.on('data', function (message) {
try {
var obj = JSON.parse(message.replace(/\r/g, '').replace(/\n/g, ''));
if (obj.header == "register") {
// Connect to RabbitMQ
try {
conn.exchange = rabbit.exchange(exchange, { type: 'topic',
autoDelete: false,
durable: false,
exclusive: false,
confirm: true
});
conn.q = rabbit.queue('my-queue-'+obj.agentID, {
durable: false,
autoDelete: false,
exclusive: false
}, function () {
conn.channel = 'my-queue-'+obj.agentID;
conn.q.bind(conn.exchange, conn.channel);
conn.q.subscribe(function (message) {
console.log("[MSG] ---> " + JSON.stringify(message));
conn.write(JSON.stringify(message) + "\n");
}).addCallback(function(ok) {
ctag[conn.channel] = ok.consumerTag; });
});
} catch (err) {
console.log("Could not create connection to RabbitMQ. \nStack trace -->" + err.stack);
}
} else if (obj.header == "typing") {
var reply = {
type: 'chatMsg',
msg: utils.escp(obj.msga),
visitorNick: obj.channel,
customField1: '',
time: utils.getDateTime(),
channel: obj.channel
};
conn.exchange.publish('my-queue-'+obj.agentID, reply);
}
} catch (err) {
console.log("ERROR ----> " + err.stack);
}
});
// When the visitor closes or reloads a page we need to unbind from RabbitMQ?
conn.on('close', function () {
try {
// Close the socket
conn.close();
// Close RabbitMQ
conn.q.unsubscribe(ctag[conn.channel]);
} catch (er) {
console.log(":::::::: EXCEPTION SOCKJS (ON-CLOSE) ::::::::>>>>>>> " + er.stack);
}
});
});
As I assess it, your case is:
I have a queue of messages (your source for receiving messages, lets name it q111)
I have multiple consumers, which I would like to do different things with the same message.
Your problem here is that when 3 messages are received by this queue, message 1 is consumed by consumer A, and consumers B and C consume messages 2 and 3. Whereas you need a setup where RabbitMQ delivers the same copies of all three messages (1, 2, 3) to all three connected consumers (A, B, C) simultaneously.
While many configurations can be made to achieve this, a simple way is to use the following two step concept:
Use a dynamic rabbitmq-shovel to pick up messages from the desired queue (q111) and publish them to a fanout exchange (an exchange exclusively created and dedicated for this purpose).
Now re-configure your consumers A, B & C (which were listening to queue q111) to listen to this fanout exchange directly, using an exclusive & anonymous queue for each consumer.
Note: While using this concept, don't consume directly from the source queue (q111), as messages already consumed won't be shovelled to your fanout exchange.
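For reference, a hedged sketch of that first step: enabling the shovel plugin and declaring a dynamic shovel from q111 into a dedicated fanout exchange (the shovel name and the exchange name q111.fanout are just example values).
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
rabbitmqctl set_parameter shovel q111-to-fanout \
  '{"src-uri": "amqp://localhost", "src-queue": "q111",
    "dest-uri": "amqp://localhost", "dest-exchange": "q111.fanout"}'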
If you think this does not satisfy your exact requirement... feel free to post your suggestions :-)
I think you should look at sending your messages using a fanout exchange. That way you will deliver the same message to different consumers; under the hood, RabbitMQ creates a different queue for each of these new consumers/subscribers.
Here is the link to the tutorial example in JavaScript:
https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html
To get the behavior you want, simply have each consumer consume from its own queue. You'll have to use a non-direct exchange type (topic, header, fanout) in order to get the message to all of the queues at once.
If you happen to be using the amqplib library as I am, they have a handy example of an implementation of the Publish/Subscribe RabbitMQ tutorial which you might find handy.
There is one interesting option in this scenario I haven't found in the answers here.
You can Nack messages with the "requeue" feature in one consumer so they can be processed in another.
Generally speaking it is not the right way, but maybe it will be good enough for someone.
https://www.rabbitmq.com/nack.html
And beware of loops (when all consumers nack+requeue a message)!
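For reference, a minimal hedged sketch of that option with amqplib; shouldHandleHere is a placeholder predicate and 'tasks' is just an example queue name.
channel.consume('tasks', msg => {
  if (msg === null) return;
  if (shouldHandleHere(msg)) {  // hypothetical predicate for this consumer
    // ... process the message ...
    channel.ack(msg);
  } else {
    // Reject and requeue so another consumer can pick it up.
    // Beware: if every consumer does this, the message loops forever.
    channel.nack(msg, false, true);
  }
});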
Fanout was clearly what you wanted.
Read the RabbitMQ tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-javascript.html
here's my example:
Publisher.js:
amqp.connect('amqp://<user>:<pass>@<host>:<port>', async (error0, connection) => {
if (error0) {
throw error0;
}
console.log('RabbitMQ connected')
try {
// Create exchange for queues
channel = await connection.createChannel()
await channel.assertExchange(process.env.EXCHANGE_NAME, 'fanout', { durable: false });
await channel.publish(process.env.EXCHANGE_NAME, '', Buffer.from('msg'))
} catch(error) {
console.error(error)
}
})
Subscriber.js:
amqp.connect('amqp://<user>:<pass>@<host>:<port>', async (error0, connection) => {
if (error0) {
throw error0;
}
console.log('RabbitMQ connected')
try {
// Create/Bind a consumer queue for an exchange broker
channel = await connection.createChannel()
await channel.assertExchange(process.env.EXCHANGE_NAME, 'fanout', { durable: false });
const queue = await channel.assertQueue('', {exclusive: true})
channel.bindQueue(queue.queue, process.env.EXCHANGE_NAME, '')
console.log(" [*] Waiting for messages in %s. To exit press CTRL+C", queue.queue);
// consumeMessage is the message handler, defined elsewhere
channel.consume(queue.queue, consumeMessage, {noAck: true});
} catch(error) {
console.error(error)
}
});
Here is an example I found on the internet; maybe it can also help.
https://www.codota.com/code/javascript/functions/amqplib/Channel/assertExchange
You just need to assign different groups to the consumers.
