I am using amqp-connection-manager, given here.
My code for receiver.js is shown below:
var QUEUE_NAME = 'test';
var amqp = require('amqp-connection-manager');

// Handle an incoming message.
var onMessage = function (data) {
    var message = JSON.parse(data.content.toString());
    console.log("receiver: got message", message);
    //channelWrapper.ack(data);
}

// Create a connection manager
var connection = amqp.connect([process.env.CLOUDAMQP_MQTT_URL], {reconnectTimeInSeconds: 2, json: true});
connection.on('connect', function () {
    console.log('Connected!');
});
connection.on('disconnect', function (params) {
    console.log('Disconnected.', params.err.stack);
});

// Set up a channel listening for messages in the queue.
var channelWrapper = connection.createChannel({
    setup: function (channel) {
        // `channel` here is a regular amqplib `ConfirmChannel`.
        return Promise.all([
            channel.assertQueue(QUEUE_NAME, {durable: true}),
            channel.prefetch(1),
            channel.consume(QUEUE_NAME, onMessage)
        ]);
    }
});

channelWrapper.waitForConnect()
    .then(function () {
        console.log("Listening for messages");
    });
Now what happens here is that if I don't call channelWrapper.ack(data), it stops receiving messages. So how can I keep receiving messages without calling channelWrapper.ack(data)?
That is because you are setting the prefetch value to 1. Prefetch is the number of unacknowledged messages you can have on a channel or on a queue (depending on the configuration) before receiving any more messages.
You can change the prefetch value by changing this line in your code:
channel.prefetch(1)
With your current setup, you have to ack each message eventually to be able to get more messages. If you are doing some async work with a message and acknowledging it later when that work is done, but you don't want to wait for it before getting other messages, you can just set the prefetch count to a reasonably higher amount.
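For example, here is a minimal sketch of the handler acknowledging each message once it's done with it, using the channelWrapper.ack call that is commented out in your code:

var onMessage = function (data) {
    var message = JSON.parse(data.content.toString());
    console.log("receiver: got message", message);
    // ack so the broker will deliver the next message
    // (with prefetch 1, nothing more arrives until this runs)
    channelWrapper.ack(data);
}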
If you are really sure that you don't need to ack the messages, you can tell the broker not to expect an acknowledgement by passing noAck: true to the consumer (note it is an option of consume, not assertQueue). Just change this line:
channel.consume(QUEUE_NAME, onMessage, {noAck: true})
Keep in mind that with noAck the broker considers a message delivered as soon as it is sent, so messages in flight are lost if the consumer dies.
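For completeness, the setup function from your receiver then becomes:

var channelWrapper = connection.createChannel({
    setup: function (channel) {
        return Promise.all([
            channel.assertQueue(QUEUE_NAME, {durable: true}),
            channel.prefetch(1),
            channel.consume(QUEUE_NAME, onMessage, {noAck: true})
        ]);
    }
});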
I have a Node.js service that consumes messages from Kafka and processes them through various steps of transformation logic. During processing, the service uses Redis and Mongo for storage and caching purposes. In the end, it sends the transformed message to another destination via UDP packets.
On startup, it starts consuming messages from Kafka; after a while, it crashes with the unhandled error: ERR_CANNOT_SEND unable to send data (see the picture below).
Restarting the application resolves the issue temporarily.
I initially thought it might have to do with the forwarding through UDP sockets, but the forwarding destinations are reachable from the consumer!
I'd appreciate any help; I'm kind of stuck.
Consumer code:
// assuming kafka-node here, since ConsumerGroup and kafkaHost match its API;
// createChildLogger is the app's own logger factory
const { ConsumerGroup } = require('kafka-node');

const readFromKafka = ({host, topic, source}, transformationService) => {
    const logger = createChildLogger(`kafka-consumer-${topic}`);
    const options = {
        // connect directly to the kafka broker (instantiates a KafkaClient)
        kafkaHost: host,
        groupId: `${topic}-group`,
        protocol: ['roundrobin'], // and so on for the other kafka config.
    };
    logger.info(`starting kafka consumer on ${host} for ${topic}`);
    const consumer = new ConsumerGroup(options, [topic]);
    consumer.on('error', (err) => logger.error(err));
    consumer.on('message', async ({value, offset}) => {
        logger.info(`received ${topic}`, value);
        if (value) {
            const final = await transformationService([
                JSON.parse(Buffer.from(value, 'binary').toString()),
            ]);
            logger.info('Message received', {instanceID: final[0].instanceId, trace: final[1]});
        } else {
            logger.error(`invalid message: ${topic} ${value}`);
        }
        return;
    });
    consumer.on('rebalanced', () => {
        logger.info('consumer is rebalancing');
    });
    return consumer;
};
Consumer service startup and error-handling code:
// init is the async function used to initialise the cache and other config and components.
const init = async () => {
    // initialize cache, configs.
};

// startConsumer is the async function that connects to Kafka
// and adds a callback for the onMessage listener, which processes
// the message through the transformation service.
const startConsumer = async ({...config}) => {
    // calls to fetch info like topic, transformationService etc.
    // readFromKafka function defn pasted above
    readFromKafka({topicConfig}, transformationService);
};

init()
    .then(startConsumer)
    .catch((err) => {
        logger.error(err);
    });
Forwarding code through UDP sockets:
The following code throws the unhandled error intermittently; it seemed to work for the first few thousand messages, and then it suddenly crashes:
const dgram = require('dgram');

const udpSender = (msg, destinations) => {
    return Object.values(destinations)
        .map(({id, host, port}) => {
            return new Promise((resolve) => {
                dgram.createSocket('udp4').send(msg, 0, msg.length, port, host, (err) => {
                    resolve({
                        id,
                        timestamp: Date.now(),
                        logs: err || 'Sent successfully',
                    });
                });
            });
        });
};
Based on our comment exchange, I believe the issue is just that you're running out of resources.
Throughout the lifetime of your app, every time you send a message you open up a brand new socket. However, you're not doing any cleanup after sending that message, and so that socket stays open indefinitely. Your open sockets then continue to pile up, consuming resources, until you eventually run out of... something. Perhaps memory, perhaps ports, perhaps something else, but ultimately your app crashes.
Luckily, the solution isn't too convoluted: just reuse existing sockets. In fact, you can just reuse one socket for the entirety of the application if you wanted, as internally socket.send handles queueing for you, so no need to do any smart hand-offs. However, if you wanted a little more concurrency, here's a quick implementation of a round-robin queue where we've created a pool of 10 sockets in advance which we just grab from whenever we want to send a message:
const MAX_CONCURRENT_SOCKETS = 10;
var rrIndex = 0;

// Create a fixed pool of sockets up front.
const rrSocketPool = (() => {
    var arr = [];
    for (let i = 0; i < MAX_CONCURRENT_SOCKETS; i++) {
        let sock = dgram.createSocket('udp4');
        arr.push(sock);
    }
    return arr;
})();

const udpSender = (msg, destinations) => {
    return Object.values(destinations)
        .map(({ id, host, port }) => {
            return new Promise((resolve) => {
                // Grab the next socket in round-robin order.
                var sock = rrSocketPool[rrIndex];
                rrIndex = (rrIndex + 1) % MAX_CONCURRENT_SOCKETS;
                sock.send(msg, 0, msg.length, port, host, (err) => {
                    resolve({
                        id,
                        timestamp: Date.now(),
                        logs: err || 'Sent successfully',
                    });
                });
            });
        });
};
Be aware that this implementation is still naïve in a few ways, mostly because there's still no error handling on the sockets themselves, only on their .send method. You should look at the docs for more info about catching events such as error events, especially if this is a production server that's supposed to run indefinitely.

Basically, the error handling you've put inside your .send callback will only fire if an error occurs in a call to .send. If, while your sockets are idle between messages, some system-level error outside of your control breaks a socket, that socket may then emit an error event, which will go unhandled (this is likely what's happening in your current implementation, with the intermittent errors you see prior to the fatal one). At that point the socket may be permanently unusable, meaning it should be replaced/reinstated or otherwise dealt with (or alternatively, just force the app to restart and call it a day, like I do :-) ).
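As a rough, untested sketch of what that could look like (the close-and-replace recovery policy here is my own assumption), you could arm an 'error' listener on each pool socket:

const attachErrorHandler = (index) => {
    rrSocketPool[index].on('error', (err) => {
        console.error(`socket ${index} failed, replacing it`, err);
        try { rrSocketPool[index].close(); } catch (e) { /* may already be closed */ }
        // swap a fresh socket into the same slot so the round-robin keeps working
        rrSocketPool[index] = dgram.createSocket('udp4');
        attachErrorHandler(index); // re-arm the handler on the replacement
    });
};

rrSocketPool.forEach((_, i) => attachErrorHandler(i));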
I'm using a service written in Node.js to receive messages via MQTT (https://www.npmjs.com/package/mqtt), which then writes to a database (SQL Server, using mssql).
This works very nicely when everything is functioning normally: I create the MQTT listener and subscribe to new-message events.
However, if the connection to the DB fails (which may happen periodically due to a network outage etc.), writing the message to the database will fail and the message will be dropped on the floor.
I would like to tell the MQTT broker - "I couldn't process the message, keep it in the buffer until I can."
var mqtt = require('mqtt')
var client = mqtt.connect('mymqttbroker')
client.on('connect', function () {
    client.subscribe('messagequeue')
})

client.on('message', function (topic, message) {
    writeMessageToDB(message)
        .then((result) => { console.log('success'); })
        .catch((err) => { /* What can I do here? */ });
})
Maybe set a timeout on a resend function? It should probably be improved to only try n times before dropping the message, but it's definitely one way to do it. This isn't tested, obviously, but it should hopefully give you some ideas...
var resend = function (message) {
    writeMessageToDB(message).then((result) => {
        console.log('Resend success!')
    })
    .catch((err) => {
        // Note: don't redeclare `message` as the setTimeout callback's
        // parameter, or it will shadow the outer one and be undefined.
        setTimeout(function () {
            resend(message);
        }, 60000);
    });
}

client.on('message', function (topic, message) {
    writeMessageToDB(message).then((result) => {
        console.log('success')
    })
    .catch((err) => {
        resend(message);
    });
});
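To flesh out the "only try n times" idea, here's an equally untested sketch with a retry counter; MAX_RETRIES and resendWithLimit are names I've made up for illustration:

var MAX_RETRIES = 5;

var resendWithLimit = function (message, attempt) {
    attempt = attempt || 0;
    if (attempt >= MAX_RETRIES) {
        console.log('Dropping message after ' + MAX_RETRIES + ' failed attempts');
        return;
    }
    writeMessageToDB(message).then((result) => {
        console.log('Resend success!')
    })
    .catch((err) => {
        // wait a minute, then try again with the counter bumped
        setTimeout(function () {
            resendWithLimit(message, attempt + 1);
        }, 60000);
    });
}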
I am learning MQTT and facing some issues understanding MQTT with RabbitMQ, going by http://blog.airasoul.io/the-internet-of-things-with-rabbitmq-node-js-mqtt-and-amqp/.
The issue is that when I run the publisher code, a queue named mqtt-subscription-test-qos1 is created, but the message doesn't get added to that queue, even though I've added a binding from amq.topic to this queue with the binding key 'presence'.
This is my publisher code:
var mqtt = require('mqtt');

var payload = {
    message: 'Hello'
};

var client = mqtt.connect(url, { clientId: 'test-', clean: true });

client.on('connect', function () {
    client.publish('presence', JSON.stringify(payload), { qos: 1 }, function () {
        console.log("Sent");
        client.end();
        process.exit();
    });
});
and below is my subscriber code.
var mqtt = require('mqtt');

var client = mqtt.connect(url, { clientId: 'test-', clean: true });

client.on('connect', function () {
    client.subscribe('presence', { qos: 1 });
});

client.on('message', function (topic, message) {
    console.log('received message ', message.toString());
});
This works when I don't pass any options to the connect function in the publisher code. So what I don't get is: isn't the publisher supposed to create a queue and then publish to topics?
What am I doing wrong?
You don't need to create a queue before publishing to the topic. When you publish the first MQTT message, a queue gets created automatically on the default exchange "amq.topic", with a binding key the same as your topic name.
I suspect your subscriber is not receiving the published messages because it starts and subscribes to the topic AFTER the publisher has published them. Try starting your subscriber first and then starting your publisher.
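If you also need the broker to hold on to messages while the subscriber is offline, one option worth trying is a persistent session (a sketch, not tested against your setup; the clientId here is made up, and the exact behaviour depends on your RabbitMQ MQTT plugin configuration): connect with clean: false and a stable clientId, and subscribe at QoS 1, so the broker keeps the subscription queue around and buffers messages for it:

var mqtt = require('mqtt');

// clean: false asks the broker to keep the session between connections;
// this only works with a stable, unique clientId.
var client = mqtt.connect(url, { clientId: 'test-subscriber', clean: false });

client.on('connect', function () {
    // QoS 1 so messages are queued for the session while it is offline
    client.subscribe('presence', { qos: 1 });
});

client.on('message', function (topic, message) {
    console.log('received message ', message.toString());
});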
I am very new to Node.js. I need to send a message to RabbitMQ using the common-mq module, which I installed with the command below:
npm install common-mq
I am not able to write the sender and receiver using this. Can anyone please help me write the sender and receiver in Node.js?
var commonmq = require('common-mq');
var connect = commonmq.connect('amqp://localhost:5672/queue');
How do I proceed after this?
sender.js looks like below:
var commonmq = require("common-mq");

var queue = commonmq.connect('amqp://localhost:5672/queue', { implOptions: { defaultExchangeName: '' } });

var msg = JSON.stringify("Hello world");
console.log("going for ready");

queue.on('ready', function () {
    console.log("inside event");
    setTimeout(function () { queue.publish({ task: 'take out trash' }); }, 1000);
});

//queue.publish({ task: 'sweep floor' });

queue.on('error', function (err) {
    console.log("error is: " + err);
});
The receiver code goes like this:
var commonmq = require("common-mq");

var queue = commonmq.connect('amqp://localhost:5672/queue', { implOptions: { defaultExchangeName: '' } });

queue.on('message', function (message) {
    console.log('Got a new message', message);
});

queue.on('error', function (e) {
    console.log("error ", e);
});
No messages are received. Please suggest where I am messing things up.
After you set up the service, you can listen for new messages or send new ones.
Receiver:
The receiver listens on a queue and performs actions based on the messages:
// set up the service
var queue = commonmq.connect('amqp://localhost:5672/queue');

queue.on('message', function (message) {
    console.log('Got a new message', message);
    // do something
});

// listen for other events as needed (error, ready)
Sender:
The sender publishes new messages. Even a receiver could do it...
//setup the service
var queue = commonmq.connect('amqp://localhost:5672/queue');
queue.publish(yourMessageAsObject);
There are a few other events you could listen to (for example in case of errors). Just check the manual on the npm site.
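Putting it together, a minimal sender sketch (assuming the 'ready' and 'error' events that appear in the question's own code; check the common-mq docs for the exact event names in your version) would wait for the queue to be ready before publishing:

var commonmq = require('common-mq');

var queue = commonmq.connect('amqp://localhost:5672/queue');

// publish only once the queue reports that it is ready
queue.on('ready', function () {
    queue.publish({ task: 'take out trash' });
});

queue.on('error', function (err) {
    console.log('queue error:', err);
});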
Hi, I am using ZeroMQ for my Node application, where I use the publisher and subscriber for message queuing. Below is my code.
Publisher.js
var zmq = require('zmq')

var publisher = zmq.socket('pub')

publisher.bind('tcp://127.0.0.1:7000', function (err) {
    if (err)
        console.log(err)
    else
        console.log("Listening on 7000...")
})

setTimeout(function () {
    console.log('sent');
    publisher.send("hi")
}, 1000)

process.on('SIGINT', function () {
    publisher.close()
    console.log('\nClosed')
})
Subscriber.js
var zmq = require('zmq')

var subscriber = zmq.socket('sub')

subscriber.on("message", function (reply) {
    console.log('Received message: ', reply.toString());
})

subscriber.connect("tcp://localhost:7000")
subscriber.subscribe("")

process.on('SIGINT', function () {
    subscriber.close()
    console.log('\nClosed')
})
The above code works fine if both the publisher and subscriber are running. If I stop my subscriber, I'm not able to receive the data the publisher sends while the subscriber is offline. I want the data to persist even if my subscriber is down. I'm stuck here; any help will be much appreciated.
See the 'Last value caching' pattern on the zmq docs site. You can extend the example with the client first subscribing with a pattern for the latest item it has received, and the LVC proxy resending the missing values (it has to cache them first). But this only really works for a small number of cached items where disconnects happen rarely; otherwise PUSH might be the better option. PUB-SUB is not intended to support buffering.
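To illustrate the idea, here's a minimal, untested single-topic LVC proxy sketch using the same legacy zmq package as the question (the proxy port 7001 and the subscribe-to-everything frontend are my assumptions; the zguide's lvcache example shows the full per-topic version). Subscribers would connect to the proxy on 7001 instead of to the publisher:

var zmq = require('zmq')

// frontend: receive everything the real publisher sends on 7000
var frontend = zmq.socket('sub')
frontend.connect('tcp://127.0.0.1:7000')
frontend.subscribe('')

// backend: subscribers connect here; an xpub socket emits a message
// for every (un)subscription it sees, so we can detect new arrivals
var backend = zmq.socket('xpub')
backend.bind('tcp://127.0.0.1:7001', function (err) {
    if (err)
        console.log(err)
})

var lastMessage = null // single-topic cache of the latest value

frontend.on('message', function (msg) {
    lastMessage = msg   // remember the latest value
    backend.send(msg)   // pass it through to current subscribers
})

backend.on('message', function (event) {
    // first byte 1 = subscribe; replay the cached value so a subscriber
    // that was offline at least gets the most recent message
    if (event[0] === 1 && lastMessage !== null) {
        backend.send(lastMessage)
    }
})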