Socket connection congests whole Node.js application

I have a socket connection using the zmq.js client:
// routerSocket.ts
const zmqRouter = zmq.socket("router");

zmqRouter.bind(`tcp://*:${PORT}`);

zmqRouter.on("message", async (...frames) => {
  try {
    const { measurementData, measurementHeader } =
      await decodeL2Measurement(frames[frames.length - 1]);
    addHeaderInfo(measurementHeader);
    // Add cell id to the list
    process.send(
      { measurementData, measurementHeader, headerInfoArrays },
      (e: any) => {
        return;
      },
    );
  } catch (e: any) {
    return;
  }
});
I run this socket connection within a forked process in index.ts:
// index.ts
const zmqProcess = fork("./src/routerSocket");

zmqProcess.on("message", async (data: ZmqMessage) => {
  if (data !== undefined) {
    const { measurementData, measurementHeader, headerInfoArrays } = data;
    headerInfo = headerInfoArrays;
    emitHeaderInfo(headerInfoArrays);
    // Emit the message to subscribers of the rnti
    const a = performance.now();
    io.emit(
      measurementHeader.nrCellId,
      JSON.stringify({ measurementData, measurementHeader }),
    );
    // Emit the message to the all channel
    io.emit("all", JSON.stringify({ measurementData, measurementHeader }));
    const b = performance.now();
    console.log("time to emit: ", b - a);
  }
});
Data comes in rapidly, about one message per millisecond, to the zmqRouter object, which processes it and sends it on to the main process, where I use Socket.IO to distribute the data to clients. But as soon as the stream begins, Node can't do anything else: even a setInterval log stops firing.
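For illustration only (my own sketch, not part of the original post): at one message per millisecond, every frame currently triggers a JSON-serialized process.send and two io.emit calls, so one mitigation worth trying is batching in the child process and flushing on a timer. BATCH_SIZE, FLUSH_MS and flush are made-up names; zmqRouter, decodeL2Measurement, addHeaderInfo and headerInfoArrays come from the post above.

// Batching sketch for routerSocket.ts (illustration; constants are arbitrary)
const BATCH_SIZE = 100;   // flush after this many messages
const FLUSH_MS = 50;      // or after this many milliseconds

let batch = [];

function flush() {
  if (batch.length === 0) return;
  const toSend = batch;
  batch = [];
  // One IPC message per batch instead of one per frame
  process.send({ batch: toSend, headerInfoArrays }, () => {});
}

setInterval(flush, FLUSH_MS);

zmqRouter.on("message", async (...frames) => {
  try {
    const { measurementData, measurementHeader } =
      await decodeL2Measurement(frames[frames.length - 1]);
    addHeaderInfo(measurementHeader);
    batch.push({ measurementData, measurementHeader });
    if (batch.length >= BATCH_SIZE) flush();
  } catch (e) {
    // ignore decode errors, as in the original handler
  }
});

The parent process would then iterate data.batch and emit once per entry (or emit the whole batch), which leaves the event loop free between flushes.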
Thank you for your help!

Related

KafkaJS EventEmitter consumer.on() not generating any output

I am trying to create a Kafka consumer health check using the event emitter. I have implemented the following code to check the heartbeat and the consumer.crash event:
const isHealthy = async function () {
  const { HEARTBEAT } = consumer.events;
  let lastHeartbeat;
  let crashVal;
  console.log(consumer);
  consumer.on(HEARTBEAT, ({ timestamp }) => {
    console.log("Inside consumer on HeartbeatVal: " + timestamp);
  });
  console.log("consumer after binding heartbeat" + JSON.stringify(consumer));
  consumer.on('consumer.crash', event => {
    const error = event?.payload?.error;
    crashVal = error;
    console.log("Hello error: " + JSON.stringify(error));
  })
  console.log("diff" + (Date.now() - lastHeartbeat));
  if (Date.now() - lastHeartbeat < SESSION_TIMEOUT) {
    return true;
  }
  // Consumer has not heartbeat, but maybe it's because the group is currently rebalancing
  try {
    console.log("Inside Describe group");
    const flag = await consumer.describeGroup()
    const { state } = await consumer.describeGroup()
    console.log("state: " + state);
    if (state === 'Stable') {
      return true;
    }
    return ['CompletingRebalance', 'PreparingRebalance'].includes(state)
  } catch (ex) {
    return false
  }
}
Could you help provide a solution for the above?
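For comparison, the pattern in the KafkaJS instrumentation docs registers the HEARTBEAT listener once, up front, and records the timestamp inside it, so the health check only reads the recorded value. A minimal sketch along those lines (SESSION_TIMEOUT and the consumer setup are assumed to exist elsewhere; heartbeats are only emitted while consumer.run() is actively consuming):

const { HEARTBEAT } = consumer.events;
let lastHeartbeat = 0;

// Register once, at startup: remember when the last heartbeat was seen
consumer.on(HEARTBEAT, ({ timestamp }) => {
  lastHeartbeat = timestamp;
});

const isHealthy = async function () {
  // Healthy if a heartbeat arrived within the session timeout
  if (Date.now() - lastHeartbeat < SESSION_TIMEOUT) {
    return true;
  }
  // No recent heartbeat; the group may simply be rebalancing
  try {
    const { state } = await consumer.describeGroup();
    return ['Stable', 'CompletingRebalance', 'PreparingRebalance'].includes(state);
  } catch (ex) {
    return false;
  }
};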

GCP Pub/Sub batch publishing triggering 3 to 4x more messages than the actual number of messages

I am trying to publish messages via the Google Pub/Sub batch publishing feature. The batch publishing code looks like this:
const gRPC = require("grpc");
const { PubSub } = require("@google-cloud/pubsub");

const createPublishEventsInBatch = (topic) => {
  const pubSub = new PubSub({ gRPC });
  const batchPublisher = pubSub.topic(topic, {
    batching: {
      maxMessages: 100,
      maxMilliseconds: 1000,
    },
  });
  return async (logTrace, eventData) => {
    console.log("Publishing batch events for", eventData);
    try {
      await batchPublisher.publish(Buffer.from(JSON.stringify(eventData)));
    } catch (err) {
      console.error("Error in publishing", err);
    }
  };
};
And this batch publisher is called from a service like this:
const publishEventsInBatch1 = publishEventFactory.createPublishEventsInBatch(
  "topicName1"
);
const publishEventsInBatch2 = publishEventFactory.createPublishEventsInBatch(
  "topicName2"
);

events.forEach((event) => {
  publishEventsInBatch1(logTrace, event);
  publishEventsInBatch2(logTrace, event);
});
I am using a push subscription to receive the messages, with the following settings:
Acknowledgement deadline: 600 seconds
Retry policy: Retry immediately
The issue I am facing is that if the total number of events/messages is 250k, the push subscription should receive at most 250k messages. But in my case I am getting 3-4 million records on the subscription, and the number varies.
My Fastify and Pub/Sub versions are:
fastify: 3.10.1
@google-cloud/pubsub: 2.12.0
Here is the subscription code:
fastify.post("/subscription", async (req, reply) => {
  const message = req.body.message;
  let event;
  let data;
  let entityType;
  try {
    let payload = Buffer.from(message.data, "base64").toString();
    event = JSON.parse(payload);
    data = event.data;
    entityType = event.entityType;
    if (entityType === "EVENT") {
      if (event.version === "1.0") {
        console.log("Processing subscription");
        await processMessage(fastify, data);
      } else {
        console.error("Unknown version of stock event, being ignored");
      }
    } else {
      console.error("Ignore event");
    }
    reply.code(200).send();
  } catch (err) {
    if (err.status === 409) {
      console.error("Ignoring stock update due to 409: Conflict");
      reply.code(200).send();
    } else {
      console.error("Error while processing event from subscription");
      reply.code(500).send();
    }
  }
});
Can anyone point out where I am making a mistake? It's a simple Fastify application. Is the mistake in the code or in the configuration?
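One thing worth noting about the publishing loop earlier in the question (my observation, not from the post): the forEach fires the async publish wrappers without awaiting them, so failures and backpressure never reach the caller. A minimal sketch that collects the promises (publishAll is my own name; Promise.allSettled needs Node 12.9 or later):

const publishAll = async (logTrace, events) => {
  const results = await Promise.allSettled(
    events.flatMap((event) => [
      publishEventsInBatch1(logTrace, event),
      publishEventsInBatch2(logTrace, event),
    ])
  );
  const failed = results.filter((r) => r.status === "rejected");
  if (failed.length > 0) {
    console.error(`${failed.length} publish calls failed`);
  }
};

Whether this explains the 3-4x duplication is a separate question; Pub/Sub also redelivers any push message that is not acknowledged with a 2xx response before the acknowledgement deadline, so slow processMessage calls on the subscriber side would multiply deliveries as well.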

Async function inside MQTT message event

I'm using the MQTT.js module in a Node app to subscribe to an MQTT broker.
Upon receiving new messages, I want to store them in MongoDB with async functions.
My code is something like:
client.on('message', (topic, payload, packet) => {
  (async () => {
    await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient)
  })
})
But I can't understand why it does not work: the async function runs, yet the MongoDB queries return without being executed, and apparently no error is raised.
What am I missing?
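One detail worth noting about the first snippet (my observation, not part of the original post): the async arrow function is only defined, never invoked, so nothing awaits the MongoDB work and any rejection disappears silently. A minimal sketch that actually invokes it and logs failures:

client.on('message', (topic, payload, packet) => {
  (async () => {
    await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient)
  })().catch((err) => console.error('MQTT message handling failed:', err))
})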
I modified the code to:
client.on('message', (topic, payload, packet) => {
  try {
    msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient, db)
  } catch (error) {
    console.error(error)
  }
})
Where:
exports.handleMQTT_messages = (topic, payload, storageType, mongoClient, db) => {
  const dateFormat = 'YYYY-MMM-dddd HH:mm:ss'
  // topic is in the form
  //
  const topics = topic.split('/')
  // locations info are at second position after splitting by /
  const coord = topics[2].split(",")
  // build up station object containing GeoJSON + station name
  //`Station_long${coord[0]}_lat${coord[1]}`
  const stationObj = getStationLocs(coord.toString())
  const msg = JSON.parse(payload)
  // what follows report/portici/
  const current_topic = topics.slice(2).join()
  let data_parsed = null
  // parse only messages having a 'd' property
  if (msg.hasOwnProperty('d')) {
    console.log(`${moment().format(dateFormat)} - ${stationObj.name} (topic:${current_topic})\n `)
    data_parsed = parseMessages(msg)
    // date rounded down to the nearest hour
    // https://stackoverflow.com/questions/17691202/round-up-round-down-a-momentjs-moment-to-nearest-minute
    dateISO_String = moment(data_parsed.t).startOf('hour').toISOString();
    // remove AQ from station name using regex
    let station_number = stationObj.name.match(/[^AQ]/).join('')
    let data_to_save = {
      id: set_custom_id(stationObj.name, dateISO_String),
      //`${station_number}${moment(dateISO_String).format('YMDH')}`,
      date: dateISO_String,
      station: stationObj,
      samples: [data_parsed]
    }
    switch (storageType) {
      case 'lowdb':
        update_insertData(db, data_to_save, coll_name)
        break;
      case 'mongodb': // MongoDB Replicaset
        (async () => {
          updateIoTBucket(data_to_save, mongoClient, db_name, coll_name)
        })()
        break;
      default: //ndjson format
        (async () => {
          await fsp.appendFile(process.env.PATH_FILE_NDJSON,
            JSON.stringify(data_to_save) + '\n')
        })()
        //saveToFile(JSON.stringify(data_to_save), process.env.PATH_FILE_NDJSON)
        break;
    }
    // show raw messages (not parsed)
    const show_raw = true
    const enable_console_log = true
    if (msg && enable_console_log) {
      if (show_raw) {
        console.log('----------RAW data--------------')
        console.log(JSON.stringify(msg, null, 2))
        console.log('--------------------------------')
      }
      if (show_raw && data_parsed) {
        console.log('----------PARSED data-----------')
        console.log(JSON.stringify(data_parsed, null, 2))
        console.log('--------------------------------')
      }
    }
  }
}
Only updateIoTBucket(data_to_save, mongoClient, db_name, coll_name) is executed asynchronously, using the mongodb driver.
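If the intent is for the Mongo write to finish (and fail loudly) before the handler returns, one option, assuming updateIoTBucket returns a promise, is to make handleMQTT_messages itself async, await the write instead of wrapping it in a fire-and-forget IIFE, and then await the handler in the message callback. A sketch of the call site under that assumption:

// Sketch: assumes handleMQTT_messages is made async and awaits updateIoTBucket internally
client.on('message', async (topic, payload) => {
  try {
    await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient, db)
  } catch (err) {
    console.error('Failed to store MQTT message:', err)
  }
})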

Node.js long process ran twice

I have a Node.js RESTful API built with the Express.js framework. It is usually hosted by pm2.
One of the services runs a very long process. When the front end called the service, the process started up. Since there is an error in the database, the process won't finish properly and the error is caught. However, before the process reached the error, another identical process started with the same parameters. So for a while two processes were running, one ahead of the other. After a long time, the first process reached the error point and returned an error; then the second one returned exactly the same thing.
I checked the front end's Network tab and noticed there was actually only one request sent. Where did the second request come from?
Edit 1:
The whole process is: first process sends query to db -> long time wait -> second process starts up -> second process sends query to db -> long time wait -> first process receives db response -> long time wait -> second process receives db response
Edit 2:
The code of the service is as follow:
import { Express, Request, Response } from "express";
import * as multer from "multer";
import * as fs from "fs";
import { Readable, Duplex } from "stream";
import * as uid from "uid";
import { Client } from "pg";
import * as gdal from "gdal";
import * as csv from "csv";
import { SuccessPayload, ErrorPayload } from "../helpers/response";
import { postgresQuery } from "../helpers/database";
import Config from "../config";
export default class ShapefileRoute {
constructor(app: Express) {
// Upload a shapefile
/**
* #swagger
* /shapefile:
* post:
* description: Returns the homepage
* responses:
* 200:
*/
app.post("/shapefile", (req: Request, res: Response, next: Function): void => {
// Create instance of multer
const multerInstance = multer().array("files");
multerInstance(req, res, (err: Error) => {
if (err) {
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Multer upload file error.",
errorDetail: err.message,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
return;
}
// Extract files
let files: any = req.files;
// Extract body
let body: any = JSON.parse(req.body.filesInfo);
// Other params
let writeFilePromises: Promise<any>[] = [];
let copyFilePromises: Promise<any>[] = [];
let rootDirectory: string = Config.uploadRoot;
let outputId: string = uid(4);
// Reset index of those files
let namesIndex: string[] = [];
files.forEach((item: Express.Multer.File, index: number) => {
if(item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt" || item.originalname.split(".")[1] === "shp") {
namesIndex.push(item.originalname);
}
})
// Process and write all files to disk
files.forEach((item: Express.Multer.File, outterIndex: number) => {
if(item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt") {
namesIndex.forEach((indexItem, index) => {
if(indexItem === item.originalname) {
ShapefileRoute.csv(item, index, writeFilePromises, body, rootDirectory, outputId,);
}
})
} else if (item.originalname.split(".")[1] === "shp") {
namesIndex.forEach((indexItem, index) => {
if(indexItem === item.originalname) {
ShapefileRoute.shp(item, index, writeFilePromises, body, rootDirectory, outputId,);
}
})
} else {
ShapefileRoute.shp(item, outterIndex, writeFilePromises, body, rootDirectory, outputId,);
}
})
// Copy files from disk to database
ShapefileRoute.copyFiles(req, res, next, writeFilePromises, copyFilePromises, req.reserveSuperPg, () => {
ShapefileRoute.loadFiles(req, res, next, copyFilePromises, body, outputId)
});
})
});
}
// Process csv file
static csv(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
// Streaming file to pivotcsv
writeFilePromises.push(new Promise((resolve, reject) => {
// Get specification from body
let delimiter: string;
let spec: any;
let lrsColumns: string[] = [null, null, null, null, null, null];
body.layers.forEach((jsonItem, i) => {
if (jsonItem.name === file.originalname.split(".")[0]) {
delimiter = jsonItem.file_spec.delimiter;
spec = jsonItem
jsonItem.lrs_cols.forEach((lrsCol) => {
switch(lrsCol.lrs_type){
case "rec_id":
lrsColumns[0] = lrsCol.name;
break;
case "route_id":
lrsColumns[1] = lrsCol.name;
break;
case "f_meas":
lrsColumns[2] = lrsCol.name;
break;
case "t_meas":
lrsColumns[3] = lrsCol.name;
break;
case "b_date":
lrsColumns[4] = lrsCol.name;
break;
case "e_date":
lrsColumns[5] = lrsCol.name;
break;
}
})
}
});
// Pivot csv file
ShapefileRoute.pivotCsv(file.buffer, `${rootDirectory}/${outputId}_${index}`, index, delimiter, outputId, lrsColumns, (path) => {
console.log("got pivotCsv result");
spec.order = index;
resolve({
path: path,
spec: spec
});
}, reject);
}));
}
// Process shapefile
static shp(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
// Write file to disk and then call shp2csv to generate csv
writeFilePromises.push(new Promise((resolve, reject) => {
// Write shapefile to disk
fs.writeFile(`${rootDirectory}/shps/${file.originalname}`, file.buffer, (err) => {
// If it is .shp file, resolve it's path and spec
if(file.originalname.split(".")[1] === "shp") {
// Find spec of the shapefile from body
body.layers.forEach((jsonItem, i) => {
if (jsonItem.name === file.originalname.split(".")[0]) {
let recordColumn: string = null;
let routeIdColumn: string = null;
jsonItem.lrs_cols.forEach((lrsLayer) => {
if (lrsLayer.lrs_type === "rec_id") {
recordColumn = lrsLayer.name;
}
if (lrsLayer.lrs_type === "route_id") {
routeIdColumn = lrsLayer.name;
}
})
// Transfer shp to csv
ShapefileRoute.shp2csv(`${rootDirectory}/shps/${file.originalname}`, `${rootDirectory}/${outputId}_${index}`, index, outputId, recordColumn, routeIdColumn, (path, srs) => {
// Add coordinate system, geom column and index of this file to spec
jsonItem.file_spec.proj4 = srs;
jsonItem.file_spec.geom_col = "geom";
jsonItem.order = index;
// Return path and spec
resolve({
path: path,
spec: jsonItem
})
}, (err) => {
reject(err);
})
}
});
} else {
resolve(null);
}
})
}));
}
// Copy files to database
static copyFiles(req: Request, res: Response, next: Function, writeFilePromises: Promise<any>[], copyFilePromises: Promise<any>[], client: Client, callback: () => void) {
// Take all files generated by writefile processes
Promise.all(writeFilePromises)
.then((results) => {
// Remove null results. They are from .dbf .shx etc of shapefile.
const files: any = results.filter(arr => arr);
// Create promise array. This will be triggered after all files are written to database.
files.forEach((file) => {
copyFilePromises.push(new Promise((copyResolve, copyReject) => {
let query: string = `copy lbo.lbo_temp from '${file.path}' WITH NULL AS 'null';`;
// Create super user call
postgresQuery(client, query, (data) => {
copyResolve(file.spec);
}, copyReject);
}));
});
// Trigger upload query
callback()
})
.catch((err) => {
// Response as error if any file generating is wrong
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Something wrong when processing csv and/or shapefile.",
errorDetail: err.message,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
})
}
// Load layers in database
static loadFiles(req: Request, res: Response, next: Function, copyFilePromises: Promise<any>[], body: any, outputId: string) {
Promise.all(copyFilePromises)
.then((results) => {
// Resort all results by the order assigned when creating files
results.sort((a, b) => {
return a.order - b.order;
});
results.forEach((result) => {
delete result.order;
});
// Create JSON for load layer database request
let taskJson = body;
taskJson.layers = results;
let query: string = `select lbo.load_layers2(p_session_id := '${outputId}', p_layers := '${JSON.stringify(taskJson)}'::json)`;
postgresQuery(req.reservePg, query, (data) => {
// Get result
let result = data.rows[0].load_layers2.result;
// Return 4003 error if no result
if (!result) {
let payload: ErrorPayload = {
code: 4003,
errorMessage: "Load layers error.",
errorDetail: data.rows[0].load_layers2.error ? data.rows[0].load_layers2.error.message : "Load layers returns no result.",
hints: "Check error detail"
};
req.reservePayload = payload;
next();
return;
}
let payload: SuccessPayload = {
type: "string",
content: "Upload files done."
};
req.reservePayload = payload;
next();
}, (err) => {
req.reservePayload = err;
next();
});
})
.catch((err) => {
// Response as error if any file generating is wrong
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Something wrong when copy files to database.",
errorDetail: err,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
})
}
// Pivot csv process. Write output csv to disk and return path of the file.
static pivotCsv(buffer: Buffer, outputPath: string, inputIndex: number, delimiter: string, outputId: string, lrsColumns: string[], callback: (path: string) => void, errCallback: (err: Error) => void) {
let inputStream: Duplex = new Duplex();
// Define output stream
let output = fs.createWriteStream(outputPath, {flags: "a"});
// Callback when output stream is done
output.on("finish", () => {
console.log("output stream finish");
callback(outputPath);
});
// Define parser stream
let parser = csv.parse({
delimiter: delimiter
});
// Close output stream when parser stream is end
parser.on("end", () => {
console.log("parser stream end");
output.end();
});
// Write data when a chunk is parsed
let header = [null, null, null, null, null, null];
let attributesHeader = [];
let i = 0;
let datumIndex: boolean = true;
parser.on("data", (chunk) => {
console.log("parser received on chunck: ", i);
if (datumIndex) {
chunk.forEach((datum, index) => {
if (lrsColumns.includes(datum)) {
header[lrsColumns.indexOf(datum)] = index;
} else {
attributesHeader.push({
name: datum,
index: index
})
}
});
datumIndex = false;
} else {
i ++;
// let layer_id = ;
let rec_id = header[0] ? chunk[header[0]] : i;
let route_id = header[1] ? chunk[header[1]] : null;
let f_meas = header[2] ? chunk[header[2]] : null;
let t_meas = header[3] ? chunk[header[3]] : null;
let b_date = header[4] ? chunk[header[4]] : null;
let e_date = header[5] ? chunk[header[5]] : null;
let attributes = {};
attributesHeader.forEach((attribute) => {
attributes[attribute.name] = chunk[attribute.index];
});
let attributesOrdered = {};
Object.keys(attributes).sort().forEach((key) => {
attributesOrdered[key] = attributes[key];
});
let outputData = `${outputId}\t${inputIndex}\t${rec_id}\t${route_id}\tnull\t${f_meas}\t${t_meas}\t${b_date}\t${e_date}\tnull\t${JSON.stringify(attributesOrdered)}\n`;
output.write(outputData);
}
});
inputStream.push(buffer);
inputStream.push(null);
inputStream.pipe(parser);
}
// Write shp and transfer to database format. Return file path and projection.
static shp2csv(inputPath: string, outputPath: string, i: number, ouputId: string, recordColumn: string, routeIdColumn: string, callback: (path: string, prj: string) => void, errCallback: (err: Error) => void) {
let dataset = gdal.open(inputPath);
let layercount = dataset.layers.count();
let layer = dataset.layers.get(0);
let output = fs.createWriteStream(outputPath, {flags: "a"});
output.on("finish", () => {
callback(outputPath, layer.srs.toProj4());
});
layer.features.forEach((feature, featureId) => {
let geom;
let recordId: number = null;
let routeId: string = null;
try {
let geomWKB = feature.getGeometry().toWKB();
let geomWKBString = geomWKB.toString("hex");
geom = geomWKBString;
if (recordColumn) {
recordId = feature.fields.get(recordColumn);
}
if (routeIdColumn) {
routeId = feature.fields.get(routeIdColumn);
}
}
catch (err) {
console.log(err);
}
let attributes = {};
let attributesOrdered = {};
feature.fields.forEach((value, field) => {
if (field != recordColumn && field != routeIdColumn) {
attributes[field] = value;
}
});
Object.keys(attributes).sort().forEach((key) => {
attributesOrdered[key] = attributes[key];
});
output.write(`${ouputId}\t${i.toString()}\t${recordId ? recordId : (featureId + 1).toString()}\t${routeId}\tnull\tnull\tnull\tnull\tnull\t${geom}\t${JSON.stringify(attributesOrdered)}\n`);
});
output.end();
}
}
The browser retries some requests if the server doesn't send a response and the browser hits its timeout value. Each browser may be configured with its own timeout, but 2 minutes sounds like it's probably the browser timeout.
You can't control the browser's timeout from your server. Two minutes is just too long to ask it to wait. You need a different design that responds sooner and then communicates the eventual result back later, when it's ready: either client polling or server push with a webSocket/socket.io connection.
For client polling, you could have the server respond immediately to the first request with a token (some unique string). The client then asks the server for the response for that token every minute until the server eventually has it. If the server doesn't yet have the response, it immediately returns a code that means "no response yet"; the client sets a timer and tries again in a minute, sending the token each time so the server knows which request it is asking about.
For server push, the client creates a persistent webSocket or socket.io connection to the server. When the client makes its long-running request, the server immediately returns the same type of token described above. Then, when the server is done with the request, it sends the token and the final data over the socket.io connection. The client is listening for incoming messages on that connection and receives the final response there.
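As a rough illustration of the client-polling variant (the route names, the jobs map and runLongShapefileImport are all hypothetical, not from the answer; crypto.randomUUID needs Node 14.17 or later):

// Sketch of the token + polling idea
const crypto = require("crypto");
const express = require("express");

const app = express();
const jobs = new Map(); // token -> { done, result?, error? }

app.post("/shapefile", (req, res) => {
  const token = crypto.randomUUID();
  jobs.set(token, { done: false });

  runLongShapefileImport(req)      // the existing long-running work
    .then((result) => jobs.set(token, { done: true, result }))
    .catch((error) => jobs.set(token, { done: true, error: error.message }));

  res.status(202).json({ token }); // respond right away, well under any browser timeout
});

app.get("/shapefile/status/:token", (req, res) => {
  const job = jobs.get(req.params.token);
  if (!job) return res.status(404).end();
  if (!job.done) return res.status(202).json({ done: false });
  res.json(job);                   // client stops polling once it sees done: true
});

The socket.io variant is the same idea, except the server emits the token's final result over the existing connection instead of waiting to be polled.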

Node.js concurrency with AMQP

I am writing a Node.js service which receives messages using RabbitMQ, but I am facing an issue when I try to send concurrent requests to the service.
Here is the AMQP subscriber I have written:
const amqp = require('amqplib/callback_api')

let AmqpConnection = {
  // some other methods to make connection
  // ....
  //....
  subscribe: function () {
    this.withChannel((channel) => {
      let defaultQueueName = "my_queue";
      channel.assertQueue(defaultQueueName, { durable: true }, function (err, _ok) {
        if (err) throw err;
        channel.consume(defaultQueueName, AmqpConnection.processMessage);
        Logger.info("Waiting for requests..");
      });
    })
  },

  processMessage: function (payload) {
    debugger
    try {
      Logger.info("received" + (payload.content.toString()))
    }
    catch (error) {
      Logger.error("ERROR: " + error.message)
      //Channel.ack(payload)
    }
  }
}
And now I am trying to publish messages to it using this publisher:
const amqp = require('amqplib/callback_api')

let Publisher = {
  // some other methods to make connection
  // ....
  // ....
  sendMessage: function (message) {
    this.withChannel((channel) => {
      let exchangeName = 'exchange';
      let exchangeType = 'fanout';
      let defaultQueueName = 'my_queue';
      channel.assertExchange(exchangeName, exchangeType)
      channel.publish(exchangeName, defaultQueueName, new Buffer(message));
    })
  }
}

let invalidMsg = JSON.stringify({ "content": "" })
let correctMsg = JSON.stringify({ "content": "Test message" })

setTimeout(function () {
  for (let i = 0; i < 2; i++) {
    Publisher.sendMessage(correctMsg)
    Publisher.sendMessage(invalidMsg)
  }
}, 3000)
But when I execute both the publisher and the subscriber, I get the following output on the subscriber side:
2017-02-18T11:27:55.368Z - info: received{"content":""}
2017-02-18T11:27:55.378Z - info: received{"content":""}
2017-02-18T11:27:55.379Z - info: received{"content":""}
2017-02-18T11:27:55.380Z - info: received{"content":""}
It seems like concurrent requests are overwriting the received messages. Can someone help here?
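For what it's worth, here is a hedged sketch of a publisher that asserts and binds the queue to the fanout exchange before publishing, and uses Buffer.from instead of the deprecated new Buffer. It is an illustration under my own assumptions (withChannel is assumed to behave as in the question), not a confirmed fix for the overwritten messages; note that a fanout exchange ignores the routing key.

const amqp = require('amqplib/callback_api')

// Hypothetical publisher: declare and bind the queue itself before publishing
let Publisher = {
  // connection helpers (withChannel) assumed to exist, as in the question
  sendMessage: function (message) {
    this.withChannel((channel) => {
      const exchangeName = 'exchange';
      const defaultQueueName = 'my_queue';
      channel.assertExchange(exchangeName, 'fanout', { durable: true }, (err) => {
        if (err) throw err;
        channel.assertQueue(defaultQueueName, { durable: true }, (err2) => {
          if (err2) throw err2;
          // With a fanout exchange the binding/routing key ('') is ignored
          channel.bindQueue(defaultQueueName, exchangeName, '', {}, (err3) => {
            if (err3) throw err3;
            channel.publish(exchangeName, '', Buffer.from(message));
          });
        });
      });
    });
  }
}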
