Node.js concurrency with AMQP

I am writing a Node.js service which receives messages using RabbitMQ, but I am facing an issue when I send concurrent requests to it.
Here is the AMQP subscriber I have written:
const amqp = require('amqplib/callback_api')

let AmqpConnection = {
  // some other methods to make connection
  // ....
  // ....
  subscribe: function () {
    this.withChannel((channel) => {
      let defaultQueueName = "my_queue";
      channel.assertQueue(defaultQueueName, { durable: true }, function (err, _ok) {
        if (err) throw err;
        channel.consume(defaultQueueName, AmqpConnection.processMessage);
        Logger.info("Waiting for requests..");
      });
    })
  },

  processMessage: function (payload) {
    debugger
    try {
      Logger.info("received" + (payload.content.toString()))
    }
    catch (error) {
      Logger.error("ERROR: " + error.message)
      //Channel.ack(payload)
    }
  }
}
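As an aside (an observation added here, not part of the original question): with amqplib's callback API, consume defaults to manual acknowledgement mode (noAck: false), so each message should be acked once handled, or it stays unacknowledged on the channel; the commented-out Channel.ack hints at this. A minimal sketch, reusing the channel and queue above:

// A sketch, assuming the subscriber above: acknowledge explicitly so
// handled messages do not pile up unacknowledged on the channel.
channel.consume(defaultQueueName, function (payload) {
  try {
    Logger.info("received" + payload.content.toString());
    channel.ack(payload);                 // acknowledge on success
  } catch (error) {
    Logger.error("ERROR: " + error.message);
    channel.nack(payload, false, false);  // reject without requeue
  }
}, { noAck: false });                     // the default, shown explicitly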
And now I am trying to publish messages to it using this publisher:
const amqp = require('amqplib/callback_api')

let Publisher = {
  // some other methods to make connection
  // ....
  // ....
  sendMessage: function (message) {
    this.withChannel((channel) => {
      let exchangeName = 'exchange';
      let exchangeType = 'fanout';
      let defaultQueueName = 'my_queue';
      channel.assertExchange(exchangeName, exchangeType)
      channel.publish(exchangeName, defaultQueueName, Buffer.from(message)); // new Buffer() is deprecated
    })
  }
}
let invalidMsg = JSON.stringify({ "content": "" })
let correctMsg = JSON.stringify({ "content": "Test message" })

setTimeout(function () {
  for (let i = 0; i < 2; i++) {
    Publisher.sendMessage(correctMsg)
    Publisher.sendMessage(invalidMsg)
  }
}, 3000)
But when I execute both publisher and subscriber, I get the following output on the subscriber side:
2017-02-18T11:27:55.368Z - info: received{"content":""}
2017-02-18T11:27:55.378Z - info: received{"content":""}
2017-02-18T11:27:55.379Z - info: received{"content":""}
2017-02-18T11:27:55.380Z - info: received{"content":""}
It seems like concurrent requests are overwriting the received message. Can someone help here?
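One detail worth checking (an observation added here, not from the original post): the subscriber asserts my_queue but never binds it to the fanout exchange the publisher asserts, and a fanout exchange ignores the routing key entirely. A sketch of asserting and binding on the consumer side, under those assumptions:

// A sketch: assert exchange and queue together and bind them, so
// messages published to the fanout exchange actually reach my_queue.
channel.assertExchange('exchange', 'fanout');
channel.assertQueue('my_queue', { durable: true }, function (err, ok) {
  if (err) throw err;
  channel.bindQueue(ok.queue, 'exchange', '');  // routing key is ignored for fanout
  channel.consume(ok.queue, AmqpConnection.processMessage);
});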

Related

KafkaJS EventEmitter consumer.on() not generating any output

I am trying to create a Kafka consumer health check using an event emitter. I have tried to implement the following code to check the heartbeat and the consumer.crash event:
const isHealthy = async function () {
  const { HEARTBEAT } = consumer.events;
  let lastHeartbeat;
  let crashVal;
  console.log(consumer);
  consumer.on(HEARTBEAT, ({ timestamp }) => {
    console.log("Inside consumer on HeartbeatVal: " + timestamp);
  });
  console.log("consumer after binding heartbeat" + JSON.stringify(consumer));
  consumer.on('consumer.crash', event => {
    const error = event?.payload?.error;
    crashVal = error;
    console.log("Hello error: " + JSON.stringify(error));
  })
  console.log("diff" + (Date.now() - lastHeartbeat));
  if (Date.now() - lastHeartbeat < SESSION_TIMEOUT) {
    return true;
  }
  // Consumer has not heartbeat, but maybe it's because the group is currently rebalancing
  try {
    console.log("Inside Describe group");
    const flag = await consumer.describeGroup()
    const { state } = await consumer.describeGroup()
    console.log("state: " + state);
    if (state === 'Stable') {
      return true;
    }
    return ['CompletingRebalance', 'PreparingRebalance'].includes(state)
  } catch (ex) {
    return false
  }
}
Could you help provide any solution for the above?
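For what it's worth, the usual KafkaJS pattern registers the HEARTBEAT listener once at startup so it actually records a timestamp, rather than attaching it inside the health check where lastHeartbeat is never assigned. A sketch, assuming SESSION_TIMEOUT matches the consumer's configured sessionTimeout:

const { HEARTBEAT } = consumer.events;
let lastHeartbeat = 0;

// Register once, outside the health check, so every heartbeat
// updates the timestamp continuously.
consumer.on(HEARTBEAT, ({ timestamp }) => {
  lastHeartbeat = timestamp;
});

const isHealthy = async () => {
  // Healthy if we heartbeated within the session timeout.
  if (Date.now() - lastHeartbeat < SESSION_TIMEOUT) {
    return true;
  }
  // No recent heartbeat; the group may just be rebalancing.
  try {
    const { state } = await consumer.describeGroup();
    return ['Stable', 'CompletingRebalance', 'PreparingRebalance'].includes(state);
  } catch (err) {
    return false;
  }
};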

Socket connection congests whole Node.js application

I have a socket connection using the zmq.js client:
// routerSocket.ts
const zmqRouter = zmq.socket("router");

zmqRouter.bind(`tcp://*:${PORT}`);

zmqRouter.on("message", async (...frames) => {
  try {
    const { measurementData, measurementHeader } =
      await decodeL2Measurement(frames[frames.length - 1]);
    addHeaderInfo(measurementHeader);
    // Add cell id to the list
    process.send(
      { measurementData, measurementHeader, headerInfoArrays },
      (e: any) => {
        return;
      },
    );
  } catch (e: any) {
    return;
  }
});
I run this socket connection within a forked process in index.ts:
// index.ts
const zmqProcess = fork("./src/routerSocket");

zmqProcess.on("message", async (data: ZmqMessage) => {
  if (data !== undefined) {
    const { measurementData, measurementHeader, headerInfoArrays } = data;
    headerInfo = headerInfoArrays;
    emitHeaderInfo(headerInfoArrays);
    // Emit the message to subscribers of the rnti
    const a = performance.now();
    io.emit(
      measurementHeader.nrCellId,
      JSON.stringify({ measurementData, measurementHeader }),
    );
    // Emit the message to the all channel
    io.emit("all", JSON.stringify({ measurementData, measurementHeader }));
    const b = performance.now();
    console.log("time to emit: ", b - a); // b is the later timestamp
  }
});
There is data coming in rapidly, about one message per ms, to the zmqRouter object, which it then processes and sends on to the main process, where I use socket.io to distribute the data to clients. But as soon as the stream begins, Node can't do anything else; even a setInterval log stops firing once the stream starts.
Thank you for your help!
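One pattern that can help here (a sketch added for illustration, not from the original post) is to stop doing one process.send() per message and batch instead, so the event loop gets room between flushes. The pending array and the 50 ms interval are assumptions; decodeL2Measurement, addHeaderInfo, and headerInfoArrays are from the original code:

// routerSocket sketch: collect decoded messages and forward them in
// batches over IPC instead of one process.send() per message.
const pending = [];

zmqRouter.on("message", async (...frames) => {
  try {
    const { measurementData, measurementHeader } =
      await decodeL2Measurement(frames[frames.length - 1]);
    addHeaderInfo(measurementHeader);
    pending.push({ measurementData, measurementHeader });
  } catch (e) {
    return; // drop frames that fail to decode, as in the original
  }
});

// Flush every 50 ms (an assumed interval): one IPC message per flush.
setInterval(() => {
  if (pending.length > 0) {
    process.send({ batch: pending.splice(0, pending.length), headerInfoArrays });
  }
}, 50);

The receiving side in index.ts would then iterate the batch, or stringify once per flush, instead of emitting per message.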

GCP-provided code snippets to both subscribe and publish MQTT in the same app don't work

In my Node.js app, I can successfully publish telemetry/state topics or subscribe to config/command topics, but I can't both publish and subscribe.
Both Node.js code snippets that appear below are from
https://cloud.google.com/iot/docs/how-tos/mqtt-bridge
The subscribe code is as follows:
// const deviceId = `myDevice`;
// const registryId = `myRegistry`;
// const region = `us-central1`;
// const algorithm = `RS256`;
// const privateKeyFile = `./rsa_private.pem`;
// const serverCertFile = `./roots.pem`;
// const mqttBridgeHostname = `mqtt.googleapis.com`;
// const mqttBridgePort = 8883;
// const messageType = `events`;
// const numMessages = 5;

// The mqttClientId is a unique string that identifies this device. For Google
// Cloud IoT Core, it must be in the format below.
const mqttClientId = `projects/${projectId}/locations/${region}/registries/${registryId}/devices/${deviceId}`;

// With Google Cloud IoT Core, the username field is ignored, however it must be
// non-empty. The password field is used to transmit a JWT to authorize the
// device. The "mqtts" protocol causes the library to connect using SSL, which
// is required for Cloud IoT Core.
const connectionArgs = {
  host: mqttBridgeHostname,
  port: mqttBridgePort,
  clientId: mqttClientId,
  username: 'unused',
  password: createJwt(projectId, privateKeyFile, algorithm),
  protocol: 'mqtts',
  secureProtocol: 'TLSv1_2_method',
  ca: [readFileSync(serverCertFile)],
};

// Create a client, and connect to the Google MQTT bridge.
const iatTime = parseInt(Date.now() / 1000);
const client = mqtt.connect(connectionArgs);

// Subscribe to the /devices/{device-id}/config topic to receive config updates.
// Config updates are recommended to use QoS 1 (at least once delivery)
client.subscribe(`/devices/${deviceId}/config`, {qos: 1});

// Subscribe to the /devices/{device-id}/commands/# topic to receive all
// commands or to the /devices/{device-id}/commands/<subfolder> to just receive
// messages published to a specific commands folder; we recommend you use
// QoS 0 (at most once delivery)
client.subscribe(`/devices/${deviceId}/commands/#`, {qos: 0});

// The MQTT topic that this device will publish data to. The MQTT topic name is
// required to be in the format below. The topic name must end in 'state' to
// publish state and 'events' to publish telemetry. Note that this is not the
// same as the device registry's Cloud Pub/Sub topic.
const mqttTopic = `/devices/${deviceId}/${messageType}`;

client.on('connect', success => {
  console.log('connect');
  if (!success) {
    console.log('Client not connected...');
  } else if (!publishChainInProgress) {
    publishAsync(mqttTopic, client, iatTime, 1, numMessages, connectionArgs);
  }
});

client.on('close', () => {
  console.log('close');
  shouldBackoff = true;
});

client.on('error', err => {
  console.log('error', err);
});

client.on('message', (topic, message) => {
  let messageStr = 'Message received: ';
  if (topic === `/devices/${deviceId}/config`) {
    messageStr = 'Config message received: ';
  } else if (topic.startsWith(`/devices/${deviceId}/commands`)) {
    messageStr = 'Command message received: ';
  }
  messageStr += Buffer.from(message, 'base64').toString('ascii');
  console.log(messageStr);
});

client.on('packetsend', () => {
  // Note: logging packet send is very verbose
});

// Once all of the messages have been published, the connection to Google Cloud
// IoT will be closed and the process will exit. See the publishAsync method.
and the publish code is:
const publishAsync = (
  mqttTopic,
  client,
  iatTime,
  messagesSent,
  numMessages,
  connectionArgs
) => {
  // If we have published enough messages or backed off too many times, stop.
  if (messagesSent > numMessages || backoffTime >= MAXIMUM_BACKOFF_TIME) {
    if (backoffTime >= MAXIMUM_BACKOFF_TIME) {
      console.log('Backoff time is too high. Giving up.');
    }
    console.log('Closing connection to MQTT. Goodbye!');
    client.end();
    publishChainInProgress = false;
    return;
  }

  // Publish and schedule the next publish.
  publishChainInProgress = true;
  let publishDelayMs = 0;
  if (shouldBackoff) {
    publishDelayMs = 1000 * (backoffTime + Math.random());
    backoffTime *= 2;
    console.log(`Backing off for ${publishDelayMs}ms before publishing.`);
  }

  setTimeout(() => {
    const payload = `${argv.registryId}/${argv.deviceId}-payload-${messagesSent}`;

    // Publish "payload" to the MQTT topic. qos=1 means at least once delivery.
    // Cloud IoT Core also supports qos=0 for at most once delivery.
    console.log('Publishing message:', payload);
    client.publish(mqttTopic, payload, {qos: 1}, err => {
      if (!err) {
        shouldBackoff = false;
        backoffTime = MINIMUM_BACKOFF_TIME;
      }
    });

    const schedulePublishDelayMs = argv.messageType === 'events' ? 1000 : 2000;
    setTimeout(() => {
      const secsFromIssue = parseInt(Date.now() / 1000) - iatTime;
      if (secsFromIssue > argv.tokenExpMins * 60) {
        iatTime = parseInt(Date.now() / 1000);
        console.log(`\tRefreshing token after ${secsFromIssue} seconds.`);
        client.end();
        connectionArgs.password = createJwt(
          argv.projectId,
          argv.privateKeyFile,
          argv.algorithm
        );
        connectionArgs.protocolId = 'MQTT';
        connectionArgs.protocolVersion = 4;
        connectionArgs.clean = true;
        client = mqtt.connect(connectionArgs);

        client.on('connect', success => {
          console.log('connect');
          if (!success) {
            console.log('Client not connected...');
          } else if (!publishChainInProgress) {
            publishAsync(
              mqttTopic,
              client,
              iatTime,
              messagesSent,
              numMessages,
              connectionArgs
            );
          }
        });

        client.on('close', () => {
          console.log('close');
          shouldBackoff = true;
        });

        client.on('error', err => {
          console.log('error', err);
        });

        client.on('message', (topic, message) => {
          console.log(
            'message received: ',
            Buffer.from(message, 'base64').toString('ascii')
          );
        });

        client.on('packetsend', () => {
          // Note: logging packet send is very verbose
        });
      }

      publishAsync(
        mqttTopic,
        client,
        iatTime,
        messagesSent + 1,
        numMessages,
        connectionArgs
      );
    }, schedulePublishDelayMs);
  }, publishDelayMs);
};
I am wondering if anyone has gotten their Node.js app to both successfully publish and subscribe with Google Cloud. If so, what might I be missing?
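One thing worth inspecting (an observation added here, not part of the original question): the token-refresh path calls client.end() and reconnects with clean: true, but never re-issues the config/commands subscriptions, and a clean MQTT session discards them. A sketch of restoring them in the new client's connect handler, under that assumption:

// A sketch: after the token-refresh reconnect, restore the
// subscriptions that the clean session dropped.
client = mqtt.connect(connectionArgs);
client.on('connect', success => {
  if (success) {
    client.subscribe(`/devices/${deviceId}/config`, {qos: 1});
    client.subscribe(`/devices/${deviceId}/commands/#`, {qos: 0});
  }
});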

GCP Pub/Sub batch publishing delivering 3 to 4x more messages than the actual number published

I am trying to publish messages via the Google Pub/Sub batch publishing feature. The batch publishing code looks like this:
const grpc = require("grpc");
const { PubSub } = require("@google-cloud/pubsub");

const createPublishEventsInBatch = (topic) => {
  const pubSub = new PubSub({ grpc });
  const batchPublisher = pubSub.topic(topic, {
    batching: {
      maxMessages: 100,
      maxMilliseconds: 1000,
    },
  });

  return async (logTrace, eventData) => {
    console.log("Publishing batch events for", eventData);
    try {
      await batchPublisher.publish(Buffer.from(JSON.stringify(eventData)));
    } catch (err) {
      console.error("Error in publishing", err);
    }
  };
};
And this batch publisher is called from a service like this:
const publishEventsInBatch1 = publishEventFactory.createPublishEventsInBatch(
  "topicName1"
);
const publishEventsInBatch2 = publishEventFactory.createPublishEventsInBatch(
  "topicName2"
);

events.forEach((event) => {
  publishEventsInBatch1(logTrace, event);
  publishEventsInBatch2(logTrace, event);
});
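As an aside (a sketch added here, not from the original post): the forEach fires off every async publish without awaiting it, so rejections are swallowed and hundreds of thousands of promises are in flight at once. The publishAll helper below is hypothetical; publishEventsInBatch1/2 and logTrace are from the original code:

// A sketch: await the publishes so failures surface and the process
// is not holding ~500k unresolved promises at the same time.
const publishAll = async (events) => {
  for (const event of events) {
    await Promise.all([
      publishEventsInBatch1(logTrace, event),
      publishEventsInBatch2(logTrace, event),
    ]);
  }
};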
I am using a push subscription to receive the messages, with the following settings:
Acknowledgement deadline: 600 seconds
Retry policy: Retry immediately
The issue I am facing: if the total number of events/messages is 250k, the push subscription should receive at most 250k messages. But in my case I am getting 3-4M records on the subscription, and the count varies.
My Fastify and Pub/Sub versions are:
fastify: 3.10.1
@google-cloud/pubsub: 2.12.0
Here is the subscription code:
fastify.post("/subscription", async (req, reply) => {
  const message = req.body.message;
  let event;
  let data;
  let entityType;
  try {
    let payload = Buffer.from(message.data, "base64").toString();
    event = JSON.parse(payload);
    data = event.data;
    entityType = event.entityType;
    if (entityType === "EVENT") {
      if (event.version === "1.0") {
        console.log("Processing subscription");
        await processMessage(fastify, data);
      } else {
        console.error("Unknown version of stock event, being ignored");
      }
    } else {
      console.error("Ignore event");
    }
    reply.code(200).send();
  } catch (err) {
    if (err.status === 409) {
      console.error("Ignoring stock update due to 409: Conflict");
      reply.code(200).send();
    } else {
      console.error("Error while processing event from subscription");
      reply.code(500).send();
    }
  }
});
Can anyone point out where I am making a mistake? It's a simple Fastify application. Is the mistake in my code or in the configuration?
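Worth noting (an illustration added here, not from the original post): Pub/Sub delivery is at-least-once, and a push endpoint that responds slowly will see redeliveries, so handlers are usually made idempotent. A sketch keyed on the push message's messageId field, with the in-memory Set standing in for a real store:

// A sketch: deduplicate redeliveries by messageId. The Set is only a
// stand-in; a production handler would use Redis or a database.
const seenMessageIds = new Set();

fastify.post("/subscription", async (req, reply) => {
  const message = req.body.message;
  if (seenMessageIds.has(message.messageId)) {
    return reply.code(204).send(); // already processed; just ack again
  }
  seenMessageIds.add(message.messageId);
  // ... existing parsing and processMessage logic ...
  reply.code(200).send();
});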

Sharing an object between parent and forked child process in Node.js

I'm working on child processes using fork, but I'm totally confused about a few things:
• Will it pass an app object instance (e.g. let app = express();) to the child process over IPC?
Let me explain my scenario. I have a server.js where I initialize (start) the server, and another file, task.js, where I do heavy work like reading a big file and sending the data to another server. Sending requires authorization from that server, and the authorization logic lives in main.js; if any error occurs, I send an email with a few details to the client. Here is the code for email and authorization in main.js:
let Task = require('./task.js')

app.sendEmail = function (message, emailInfo, attachment) {
  // my email logic
}

app.auth = function (host, port) {
  // Authorization logic
}

new Task(app).run()
In task.js (sample code):
class Task {
  constructor(app) {
    this.app = app
  }

  run() {
    // arrow function keeps `this` bound to the Task instance
    fs.readFile('myfile', (err, data) => {
      if (err) {
        let msg = err;
        let clientInf = {};
        clientInf.to = "client email";
        clientInf.cc = "other user in CC";
        this.app.sendEmail(msg, clientInf, attach);
      } else {
        let host = 'other server url';
        let port = 'port';
        this.app.auth(host, port);
      }
    })
  }
}
I want to run task.js in another thread. Note that I don't want to use cluster or worker threads (I am on Node 10.19, so I am not confident that workers work properly). Is it possible to use fork or spawn to share data between the two processes? If not, how can I achieve my requirement using a thread?
Here are two solutions. The first uses the Worker class from the worker_threads module, but since you don't want to update your Node version, the second uses the fork function from the child_process module. They do pretty much the same thing; to be honest I can't tell which is better, but the worker_threads solution is more recent.
Solution 1:
const path = require('path')
const { Worker } = require('worker_threads')

const task_script = path.join(__dirname, "./task.js")
const obj = { data: "data" }

const worker = new Worker(task_script, {
  workerData: JSON.stringify(obj)
})

worker.on("error", (err) => console.log(err))
worker.on("exit", () => console.log("exit"))
worker.on("message", (data) => {
  console.log(data)
  res.send(data) // assumes this runs inside an HTTP handler with `res` in scope
})
and you have to change the task.js code slightly. Here it is:
const fs = require('fs')
const { parentPort, workerData, isMainThread } = require('worker_threads')

class Task {
  constructor(app) {
    this.app = app
  }

  run() {
    if (!isMainThread) {
      console.log("workerData: ", workerData) // you have the worker data here
      fs.readFile('myfile', (err, data) => {
        if (err) {
          let msg = err;
          let clientInf = {};
          clientInf.to = "client email";
          clientInf.cc = "other user in CC";
          this.app.sendEmail(msg, clientInf, attach);
          parentPort.postMessage(msg) // you can send a message to the parent like this
        } else {
          let host = 'other server url';
          let port = 'port';
          this.app.auth(host, port);
        }
      })
    }
  }
}
And here is the second solution
const { fork } = require('child_process');
const forked = fork('task.js');
forked.on('message', (msg) => {
console.log('Message from child', msg);
});
forked.send({ hello: 'world' });
and the task.js way of sending and receiving data with this method:
class Task {
  constructor(app) {
    this.app = app
  }

  run() {
    // receive
    process.on('message', (msg) => {
      console.log('Message from parent:', msg);
    });

    fs.readFile('myfile', (err, data) => {
      if (err) {
        let msg = err;
        let clientInf = {};
        clientInf.to = "client email";
        clientInf.cc = "other user in CC";
        this.app.sendEmail(msg, clientInf, attach);
        process.send(msg); // send method
      } else {
        let host = 'other server url';
        let port = 'port';
        this.app.auth(host, port);
      }
    })
  }
}
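One caveat to add (an illustration, not part of the original answer): messages passed over fork's IPC channel are serialized, so an object carrying functions, like an Express app instance, cannot cross the boundary intact; send plain data and keep sendEmail/auth defined in each process. A tiny sketch against the forked child above:

// A sketch: only serializable data survives IPC. Function-valued
// properties (like app.sendEmail) are lost or rejected in transit.
forked.send({ host: 'other server url', port: 'port' }); // fine: plain data
// forked.send({ app });  // won't work: the app's methods don't serialize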
