Node.js long process ran twice
I have a Node.js RESTful API built with the Express.js framework. It is normally hosted by pm2.
One of the services runs a very long process. When the front end called the service, the process started. Because of an error in the database, the process could not finish properly and the error would eventually be caught. However, before the process reached the error, a second, identical process started with the same parameters. So for a while two processes were running, one ahead of the other. After a long time, the first process hit the error point and returned the error; then the second one returned exactly the same thing.
I checked the front end's Network tab and saw that only one request was actually sent. Where did the second request come from?
Edit 1:
The whole sequence is: first process sends its query to the db -> long wait -> second process starts -> second process sends its query to the db -> long wait -> first process receives the db response -> long wait -> second process receives the db response.
Edit 2:
The code of the service is as follows:
import { Express, Request, Response } from "express";
import * as multer from "multer";
import * as fs from "fs";
import { Readable, Duplex } from "stream";
import * as uid from "uid";
import { Client } from "pg";
import * as gdal from "gdal";
import * as csv from "csv";
import { SuccessPayload, ErrorPayload } from "../helpers/response";
import { postgresQuery } from "../helpers/database";
import Config from "../config";
export default class ShapefileRoute {
constructor(app: Express) {
// Upload a shapefile
/**
* #swagger
* /shapefile:
* post:
* description: Returns the homepage
* responses:
* 200:
*/
app.post("/shapefile", (req: Request, res: Response, next: Function): void => {
// Create instance of multer
const multerInstance = multer().array("files");
multerInstance(req, res, (err: Error) => {
if (err) {
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Multer upload file error.",
errorDetail: err.message,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
return;
}
// Extract files
let files: any = req.files;
// Extract body
let body: any = JSON.parse(req.body.filesInfo);
// Other params
let writeFilePromises: Promise<any>[] = [];
let copyFilePromises: Promise<any>[] = [];
let rootDirectory: string = Config.uploadRoot;
let outputId: string = uid(4);
// Reset index of those files
let namesIndex: string[] = [];
files.forEach((item: Express.Multer.File, index: number) => {
if(item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt" || item.originalname.split(".")[1] === "shp") {
namesIndex.push(item.originalname);
}
})
// Process and write all files to disk
files.forEach((item: Express.Multer.File, outterIndex: number) => {
if(item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt") {
namesIndex.forEach((indexItem, index) => {
if(indexItem === item.originalname) {
ShapefileRoute.csv(item, index, writeFilePromises, body, rootDirectory, outputId,);
}
})
} else if (item.originalname.split(".")[1] === "shp") {
namesIndex.forEach((indexItem, index) => {
if(indexItem === item.originalname) {
ShapefileRoute.shp(item, index, writeFilePromises, body, rootDirectory, outputId,);
}
})
} else {
ShapefileRoute.shp(item, outterIndex, writeFilePromises, body, rootDirectory, outputId,);
}
})
// Copy files from disk to database
ShapefileRoute.copyFiles(req, res, next, writeFilePromises, copyFilePromises, req.reserveSuperPg, () => {
ShapefileRoute.loadFiles(req, res, next, copyFilePromises, body, outputId)
});
})
});
}
// Process csv file
static csv(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
// Streaming file to pivotcsv
writeFilePromises.push(new Promise((resolve, reject) => {
// Get specification from body
let delimiter: string;
let spec: any;
let lrsColumns: string[] = [null, null, null, null, null, null];
body.layers.forEach((jsonItem, i) => {
if (jsonItem.name === file.originalname.split(".")[0]) {
delimiter = jsonItem.file_spec.delimiter;
spec = jsonItem
jsonItem.lrs_cols.forEach((lrsCol) => {
switch(lrsCol.lrs_type){
case "rec_id":
lrsColumns[0] = lrsCol.name;
break;
case "route_id":
lrsColumns[1] = lrsCol.name;
break;
case "f_meas":
lrsColumns[2] = lrsCol.name;
break;
case "t_meas":
lrsColumns[3] = lrsCol.name;
break;
case "b_date":
lrsColumns[4] = lrsCol.name;
break;
case "e_date":
lrsColumns[5] = lrsCol.name;
break;
}
})
}
});
// Pivot csv file
ShapefileRoute.pivotCsv(file.buffer, `${rootDirectory}/${outputId}_${index}`, index, delimiter, outputId, lrsColumns, (path) => {
console.log("got pivotCsv result");
spec.order = index;
resolve({
path: path,
spec: spec
});
}, reject);
}));
}
// Process shapefile
static shp(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
// Write file to disk and then call shp2csv to generate csv
writeFilePromises.push(new Promise((resolve, reject) => {
// Write shapefile to disk
fs.writeFile(`${rootDirectory}/shps/${file.originalname}`, file.buffer, (err) => {
// Reject if the file could not be written
if (err) {
reject(err);
return;
}
// If it is a .shp file, resolve its path and spec
if(file.originalname.split(".")[1] === "shp") {
// Find spec of the shapefile from body
body.layers.forEach((jsonItem, i) => {
if (jsonItem.name === file.originalname.split(".")[0]) {
let recordColumn: string = null;
let routeIdColumn: string = null;
jsonItem.lrs_cols.forEach((lrsLayer) => {
if (lrsLayer.lrs_type === "rec_id") {
recordColumn = lrsLayer.name;
}
if (lrsLayer.lrs_type === "route_id") {
routeIdColumn = lrsLayer.name;
}
})
// Transfer shp to csv
ShapefileRoute.shp2csv(`${rootDirectory}/shps/${file.originalname}`, `${rootDirectory}/${outputId}_${index}`, index, outputId, recordColumn, routeIdColumn, (path, srs) => {
// Add coordinate system, geom column and index of this file to spec
jsonItem.file_spec.proj4 = srs;
jsonItem.file_spec.geom_col = "geom";
jsonItem.order = index;
// Return path and spec
resolve({
path: path,
spec: jsonItem
})
}, (err) => {
reject(err);
})
}
});
} else {
resolve(null);
}
})
}));
}
// Copy files to database
static copyFiles(req: Request, res: Response, next: Function, writeFilePromises: Promise<any>[], copyFilePromises: Promise<any>[], client: Client, callback: () => void) {
// Take all files generated by writefile processes
Promise.all(writeFilePromises)
.then((results) => {
// Remove null results. They are from .dbf .shx etc of shapefile.
const files: any = results.filter(arr => arr);
// Create promise array. This will be triggered after all files are written to database.
files.forEach((file) => {
copyFilePromises.push(new Promise((copyResolve, copyReject) => {
let query: string = `copy lbo.lbo_temp from '${file.path}' WITH NULL AS 'null';`;
// Create super user call
postgresQuery(client, query, (data) => {
copyResolve(file.spec);
}, copyReject);
}));
});
// Trigger upload query
callback()
})
.catch((err) => {
// Response as error if any file generating is wrong
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Something wrong when processing csv and/or shapefile.",
errorDetail: err.message,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
})
}
// Load layers in database
static loadFiles(req: Request, res: Response, next: Function, copyFilePromises: Promise<any>[], body: any, outputId: string) {
Promise.all(copyFilePromises)
.then((results) => {
// Resort all results by the order assigned when creating files
results.sort((a, b) => {
return a.order - b.order;
});
results.forEach((result) => {
delete result.order;
});
// Create JSON for load layer database request
let taskJson = body;
taskJson.layers = results;
let query: string = `select lbo.load_layers2(p_session_id := '${outputId}', p_layers := '${JSON.stringify(taskJson)}'::json)`;
postgresQuery(req.reservePg, query, (data) => {
// Get result
let result = data.rows[0].load_layers2.result;
// Return 4003 error if no result
if (!result) {
let payload: ErrorPayload = {
code: 4003,
errorMessage: "Load layers error.",
errorDetail: data.rows[0].load_layers2.error ? data.rows[0].load_layers2.error.message : "Load layers returns no result.",
hints: "Check error detail"
};
req.reservePayload = payload;
next();
return;
}
let payload: SuccessPayload = {
type: "string",
content: "Upload files done."
};
req.reservePayload = payload;
next();
}, (err) => {
req.reservePayload = err;
next();
});
})
.catch((err) => {
// Response as error if any file generating is wrong
let payload: ErrorPayload = {
code: 4004,
errorMessage: "Something wrong when copy files to database.",
errorDetail: err,
hints: "Check error detail"
};
req.reservePayload = payload;
next();
})
}
// Pivot csv process. Write output csv to disk and return path of the file.
static pivotCsv(buffer: Buffer, outputPath: string, inputIndex: number, delimiter: string, outputId: string, lrsColumns: string[], callback: (path: string) => void, errCallback: (err: Error) => void) {
let inputStream: Duplex = new Duplex();
// Define output stream
let output = fs.createWriteStream(outputPath, {flags: "a"});
// Callback when output stream is done
output.on("finish", () => {
console.log("output stream finish");
callback(outputPath);
});
// Define parser stream
let parser = csv.parse({
delimiter: delimiter
});
// Close output stream when parser stream is end
parser.on("end", () => {
console.log("parser stream end");
output.end();
});
// Write data when a chunk is parsed
let header = [null, null, null, null, null, null];
let attributesHeader = [];
let i = 0;
let datumIndex: boolean = true;
parser.on("data", (chunk) => {
console.log("parser received chunk: ", i);
if (datumIndex) {
chunk.forEach((datum, index) => {
if (lrsColumns.includes(datum)) {
header[lrsColumns.indexOf(datum)] = index;
} else {
attributesHeader.push({
name: datum,
index: index
})
}
});
datumIndex = false;
} else {
i ++;
// let layer_id = ;
let rec_id = header[0] ? chunk[header[0]] : i;
let route_id = header[1] ? chunk[header[1]] : null;
let f_meas = header[2] ? chunk[header[2]] : null;
let t_meas = header[3] ? chunk[header[3]] : null;
let b_date = header[4] ? chunk[header[4]] : null;
let e_date = header[5] ? chunk[header[5]] : null;
let attributes = {};
attributesHeader.forEach((attribute) => {
attributes[attribute.name] = chunk[attribute.index];
});
let attributesOrdered = {};
Object.keys(attributes).sort().forEach((key) => {
attributesOrdered[key] = attributes[key];
});
let outputData = `${outputId}\t${inputIndex}\t${rec_id}\t${route_id}\tnull\t${f_meas}\t${t_meas}\t${b_date}\t${e_date}\tnull\t${JSON.stringify(attributesOrdered)}\n`;
output.write(outputData);
}
});
inputStream.push(buffer);
inputStream.push(null);
inputStream.pipe(parser);
}
// Write shp and transfer to database format. Return file path and projection.
static shp2csv(inputPath: string, outputPath: string, i: number, outputId: string, recordColumn: string, routeIdColumn: string, callback: (path: string, prj: string) => void, errCallback: (err: Error) => void) {
let dataset = gdal.open(inputPath);
let layercount = dataset.layers.count();
let layer = dataset.layers.get(0);
let output = fs.createWriteStream(outputPath, {flags: "a"});
output.on("finish", () => {
callback(outputPath, layer.srs.toProj4());
});
layer.features.forEach((feature, featureId) => {
let geom;
let recordId: number = null;
let routeId: string = null;
try {
let geomWKB = feature.getGeometry().toWKB();
let geomWKBString = geomWKB.toString("hex");
geom = geomWKBString;
if (recordColumn) {
recordId = feature.fields.get(recordColumn);
}
if (routeIdColumn) {
routeId = feature.fields.get(routeIdColumn);
}
}
catch (err) {
console.log(err);
}
let attributes = {};
let attributesOrdered = {};
feature.fields.forEach((value, field) => {
if (field != recordColumn && field != routeIdColumn) {
attributes[field] = value;
}
});
Object.keys(attributes).sort().forEach((key) => {
attributesOrdered[key] = attributes[key];
});
output.write(`${outputId}\t${i.toString()}\t${recordId ? recordId : (featureId + 1).toString()}\t${routeId}\tnull\tnull\tnull\tnull\tnull\t${geom}\t${JSON.stringify(attributesOrdered)}\n`);
});
output.end();
}
}
The browser retries some requests if the server doesn't send a response and the browser hits its timeout value. Each browser may be configured with its own timeout, but 2 minutes sounds like it's probably the browser timeout.
You can't control the browser's timeout from your server. Two minutes is just too long to ask it to wait. You need a different design that responds sooner and then communicates back the eventual result later when it's ready. Either client polling or server push with webSocket/socket.io.
For client polling, you could have the server respond immediately from your first request and return back a token (some unique string). Then, the client can ask the server for the response for that token every minute until the server eventually has the response. If the server doesn't yet have the response, it just immediately returns back a code that means no response yet. If so, the client sets a timer and tries again in a minute, sending the token each time so the server knows which request it is asking about.
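Here is a minimal sketch of that polling flow in TypeScript/Express. The route names, the in-memory jobs map, and startLongTask are illustrative placeholders, not part of the original code; under pm2 with multiple instances, the job store would need to live in shared storage (e.g. Redis) instead.

  import * as express from "express";
  import { randomUUID } from "crypto";

  const app = express();

  // Hypothetical in-memory job store keyed by token.
  const jobs = new Map<string, { done: boolean; result?: unknown; error?: string }>();

  // Stand-in for the real upload/copy/load pipeline from the question.
  async function startLongTask(req: express.Request): Promise<unknown> {
    return null;
  }

  app.post("/shapefile", (req, res) => {
    const token = randomUUID();
    jobs.set(token, { done: false });

    // Kick off the long work in the background and record its outcome.
    startLongTask(req)
      .then((result) => jobs.set(token, { done: true, result }))
      .catch((err) => jobs.set(token, { done: true, error: err.message }));

    // Respond right away so the browser never reaches its retry timeout.
    res.status(202).json({ token });
  });

  // The client polls this endpoint with its token until done is true.
  app.get("/shapefile/status/:token", (req, res) => {
    const job = jobs.get(req.params.token);
    if (!job) {
      res.status(404).json({ error: "unknown token" });
    } else if (!job.done) {
      res.json({ done: false });
    } else {
      res.json({ done: true, result: job.result, error: job.error });
    }
  });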
For server push, the client creates a persistent webSocket or socket.io connection to the server. When the client makes its long-running request, the server just immediately returns the same type of token described above. Then, when the server is done with the request, it sends the token and the final data over the socket.io connection. The client is listening for incoming messages on that socket.io connection and will receive the final response there.
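And a minimal sketch of the server-push variant with socket.io. Again, startLongTask, the port, and the token-as-event-name convention are illustrative; a real implementation would emit only to the requesting client's socket rather than broadcasting to everyone.

  import * as express from "express";
  import { createServer } from "http";
  import { Server } from "socket.io";
  import { randomUUID } from "crypto";

  const app = express();
  const httpServer = createServer(app);
  const io = new Server(httpServer);

  // Stand-in for the real upload/copy/load pipeline from the question.
  async function startLongTask(req: express.Request): Promise<unknown> {
    return null;
  }

  app.post("/shapefile", (req, res) => {
    const token = randomUUID();

    // Acknowledge immediately; the heavy work keeps running in the background.
    res.status(202).json({ token });

    // When the work finishes, push the result over the socket under the token.
    startLongTask(req)
      .then((result) => io.emit(token, { ok: true, result }))
      .catch((err) => io.emit(token, { ok: false, error: err.message }));
  });

  httpServer.listen(3000);

On the client, the code that issued the POST would subscribe with socket.on(token, handler) and treat the payload it receives there as the final response.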
Related
Socket connection congests whole nodejs application
I have a socket connection using zmq.js client:

  // routerSocket.ts
  const zmqRouter = zmq.socket("router");

  zmqRouter.bind(`tcp://*:${PORT}`);

  zmqRouter.on("message", async (...frames) => {
    try {
      const { measurementData, measurementHeader } = await decodeL2Measurement(frames[frames.length - 1]);
      addHeaderInfo(measurementHeader);
      // Add cell id to the list
      process.send(
        { measurementData, measurementHeader, headerInfoArrays },
        (e: any) => {
          return;
        },
      );
    } catch (e: any) {
      return;
    }
  });

I run this socket connection within a forked process in index.ts:

  // index.ts
  const zmqProcess = fork("./src/routerSocket");

  zmqProcess.on("message", async (data: ZmqMessage) => {
    if (data !== undefined) {
      const { measurementData, measurementHeader, headerInfoArrays } = data;
      headerInfo = headerInfoArrays;
      emitHeaderInfo(headerInfoArrays);

      // Emit the message to subscribers of the rnti
      const a = performance.now();
      io.emit(
        measurementHeader.nrCellId,
        JSON.stringify({ measurementData, measurementHeader }),
      );

      // Emit the message to the all channel
      io.emit("all", JSON.stringify({ measurementData, measurementHeader }));

      const b = performance.now();
      console.log("time to emit: ", a - b);
    }
  });

There is data coming in rapidly, about one message per ms, to the zmqRouter object, which it then processes and sends on to the main process, where I use socket.io to distribute the data to clients. But as soon as the stream begins, node can't do anything else. Even a setInterval log stops working when the stream begins. Thank you for your help!
How to get a docx file generated in node saved to firebase storage
Hi, I am quite new to docxtemplater but I absolutely love how it works. Right now I seem to be able to generate a new docx document as follows:

  const functions = require('firebase-functions');
  const admin = require('firebase-admin');
  const {Storage} = require('@google-cloud/storage');
  var PizZip = require('pizzip');
  var Docxtemplater = require('docxtemplater');

  admin.initializeApp();
  const BUCKET = 'gs://myapp.appspot.com';

  exports.test2 = functions.https.onCall((data, context) => {
    // The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
    function replaceErrors(key, value) {
      if (value instanceof Error) {
        return Object.getOwnPropertyNames(value).reduce(function(error, key) {
          error[key] = value[key];
          return error;
        }, {});
      }
      return value;
    }

    function errorHandler(error) {
      console.log(JSON.stringify({error: error}, replaceErrors));
      if (error.properties && error.properties.errors instanceof Array) {
        const errorMessages = error.properties.errors.map(function (error) {
          return error.properties.explanation;
        }).join("\n");
        console.log('errorMessages', errorMessages);
        // errorMessages is a humanly readable message looking like this :
        // 'The tag beginning with "foobar" is unopened'
      }
      throw error;
    }

    let file_name = 'example.docx'; // this is the file saved in my firebase storage
    const File = storage.bucket(BUCKET).file(file_name);
    const read = File.createReadStream();

    var buffers = [];
    readable.on('data', (buffer) => {
      buffers.push(buffer);
    });
    readable.on('end', () => {
      var buffer = Buffer.concat(buffers);
      var zip = new PizZip(buffer);
      var doc;
      try {
        doc = new Docxtemplater(zip);
        doc.setData({
          first_name: 'Fred',
          last_name: 'Flinstone',
          phone: '0652455478',
          description: 'Web app'
        });
        try {
          doc.render();
          doc.pipe(remoteFile2.createReadStream());
        } catch (error) {
          errorHandler(error);
        }
      } catch(error) {
        errorHandler(error);
      }
    });
  });

My issue is that I keep getting an error that doc.pipe is not a function. I am quite new to nodejs, but is there a way to have the newly generated doc after doc.render() be saved directly to the firebase storage?
Taking a look at the type of doc, we find that it is a Docxtemplater object and that doc.pipe is not a function of that class. To get the file out of Docxtemplater, we need to use doc.getZip() to return the file (this will be either a JSZip v2 or Pizzip instance based on what we passed to the constructor). Now that we have the zip's object, we need to generate the binary data of the zip - which is done using generate({ type: 'nodebuffer' }) (to get a Node.JS Buffer containing the data). Unfortunately, because the docxtemplater library doesn't support JSZip v3+, we can't make use of the generateNodeStream() method to get a stream to use with pipe().

With this buffer, we can either reupload it to Cloud Storage or send it back to the client that is calling the function. The first option is relatively simple to implement:

  import { v4 as uuidv4 } from 'uuid';

  /* ... */

  const contentBuffer = doc.getZip()
    .generate({type: 'nodebuffer'});

  const targetName = "compiled.docx";
  const targetStorageRef = admin.storage().bucket()
    .file(targetName);

  await targetStorageRef.save(contentBuffer);

  // send back the bucket-name pair to the caller
  return { bucket: targetBucket, name: targetName };

However, sending back the file itself to the client isn't as easy, because this involves switching to a HTTP Event Function (functions.https.onRequest) - a Callable Cloud Function can only return JSON-compatible data. Here we have a middleware function that takes a callable's handler function but supports returning binary data to the client.

  import * as functions from "firebase-functions";
  import * as admin from "firebase-admin";
  import corsInit from "cors";

  admin.initializeApp();

  const cors = corsInit({ origin: true }); // TODO: Tighten

  function callableRequest(handler) {
    if (!handler) {
      throw new TypeError("handler is required");
    }

    return (req, res) => {
      cors(req, res, (corsErr) => {
        if (corsErr) {
          console.error("Request rejected by CORS", corsErr);
          res.status(412).json({ error: "cors", message: "origin rejected" });
          return;
        }

        // for validateFirebaseIdToken, see https://github.com/firebase/functions-samples/blob/main/authorized-https-endpoint/functions/index.js
        // validateFirebaseIdToken won't pass errors to `next()`
        validateFirebaseIdToken(req, res, async () => {
          try {
            const data = req.body;
            const context = {
              auth: req.user ? { token: req.user, uid: req.user.uid } : null,
              instanceIdToken: req.get("Firebase-Instance-ID-Token"), // this is used with FCM
              rawRequest: req
            };

            let result: any = await handler(data, context);

            if (result && typeof result === "object" && "buffer" in result) {
              res.writeHead(200, [
                ["Content-Type", res.contentType],
                ["Content-Disposition", "attachment; filename=" + res.filename]
              ]);
              res.end(result.buffer);
            } else {
              result = functions.https.encode(result);
              res.status(200).send({ result });
            }
          } catch (err) {
            if (!(err instanceof HttpsError)) {
              // This doesn't count as an 'explicit' error.
              console.error("Unhandled error", err);
              err = new HttpsError("internal", "INTERNAL");
            }

            const { status } = err.httpErrorCode;
            const body = { error: err.toJSON() };

            res.status(status).send(body);
          }
        });
      });
    };
  }

  functions.https.onRequest(callableRequest(async (data, context) => {
    /* ... */

    const contentBuffer = doc.getZip()
      .generate({type: "nodebuffer"});

    const targetName = "compiled.docx";

    return {
      buffer: contentBuffer,
      contentType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
      filename: targetName
    }
  }));

In your current code, there are a number of odd segments where you have nested try-catch blocks and variables in different scopes. To help combat this, we can make use of File#download(), which returns a Promise that resolves with the file contents in a Node.JS Buffer, and File#save(), which returns a Promise that resolves when the given Buffer is uploaded. Rolling this together for reuploading to Cloud Storage gives:

  // This code is based off the examples provided for docxtemplater
  // Copyright (c) Edgar HIPP [Dual License: MIT/GPLv3]
  import * as functions from "firebase-functions";
  import * as admin from "firebase-admin";
  import PizZip from "pizzip";
  import Docxtemplater from "docxtemplater";

  admin.initializeApp();

  // The error object contains additional information when logged with JSON.stringify (it contains a properties object containing all suberrors).
  function replaceErrors(key, value) {
    if (value instanceof Error) {
      return Object.getOwnPropertyNames(value).reduce(function (error, key) {
        error[key] = value[key];
        return error;
      }, {});
    }
    return value;
  }

  function errorHandler(error) {
    console.log(JSON.stringify({ error: error }, replaceErrors));
    if (error.properties && error.properties.errors instanceof Array) {
      const errorMessages = error.properties.errors
        .map(function (error) {
          return error.properties.explanation;
        })
        .join("\n");
      console.log("errorMessages", errorMessages);
      // errorMessages is a humanly readable message looking like this :
      // 'The tag beginning with "foobar" is unopened'
    }
    throw error;
  }

  exports.test2 = functions.https.onCall(async (data, context) => {
    const file_name = "example.docx"; // this is the file saved in my firebase storage
    const templateRef = await admin.storage().bucket()
      .file(file_name);
    const template_content = (await templateRef.download())[0];

    const zip = new PizZip(template_content);

    let doc;
    try {
      doc = new Docxtemplater(zip);
    } catch (error) {
      // Catch compilation errors (errors caused by the compilation of the template : misplaced tags)
      errorHandler(error);
    }

    doc.setData({
      first_name: "Fred",
      last_name: "Flinstone",
      phone: "0652455478",
      description: "Web app",
    });

    try {
      doc.render();
    } catch (error) {
      errorHandler(error);
    }

    const contentBuffer = doc.getZip().generate({ type: "nodebuffer" });

    // do something with contentBuffer
    // e.g. reupload to Cloud Storage
    const targetName = "compiled.docx";
    const targetStorageRef = admin.storage().bucket().file(targetName);
    await targetStorageRef.save(contentBuffer);

    return { bucket: targetStorageRef.bucket.name, name: targetName };
  });

In addition to returning a bucket-name pair to the caller, you may also consider returning an access URL. This could be a signed URL that can last for up to 7 days; a download token URL (like getDownloadURL(), process described here) that can last until the token is revoked; a Google Storage URI (gs://BUCKET_NAME/FILE_NAME) (not an access URL, but it can be passed to a client SDK that can access it if the client passes the storage security rules); or the file's public URL (after the file has been marked public). Based on the above code, you should be able to merge in returning the file directly yourself.
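For the signed-URL option mentioned just above, a minimal sketch could look like the following; the file name and the 7-day expiry here are illustrative, not from the original answer.

  import * as admin from "firebase-admin";

  // Sketch only: returns a time-limited read URL for the uploaded file.
  async function getCompiledDocUrl(): Promise<string> {
    const [signedUrl] = await admin.storage().bucket()
      .file("compiled.docx")
      .getSignedUrl({
        action: "read",
        expires: Date.now() + 7 * 24 * 60 * 60 * 1000, // signed URLs can last up to 7 days
      });
    return signedUrl;
  }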
download <title> of a url with minimal data usage
For the purpose of generating links to other websites I need to download the content of the <title> tag. But I would like to use as little bandwidth as possible. In the hardcore variant, I would process the input stream and close the connection as soon as </title> is reached. Or, e.g., fetch the first 1024 chars on a first attempt and, if they did not contain the whole title, fetch the whole thing as a fallback. What could I use in nodejs to achieve this?
In case someone else is interested, here is what I ended up with (a very initial version, use it just to get a notion of what to use). A few notes though: the core of this solution is to read as few chunks of the http response as possible, until </title> is reached. I am not sure how friendly it is by convention to interrupt the connection with response.destroy() (which probably closes the underlying socket).

  import {get as httpGet, IncomingMessage} from 'http';
  import {get as httpsGet} from 'https';
  import {titleParser} from '../ParseTitleFromHtml';

  async function parseFromStream(response: IncomingMessage): Promise<string|null> {
    let body = '';
    for await (const chunk of response) {
      const text = chunk.toString();
      body += text;
      const title = titleParser.parse(body);
      if (title !== null) {
        response.destroy();
        return title;
      }
    }
    response.destroy();
    return null;
  }

  export enum TitleDownloaderRejectionCodesEnum {
    INVALID_STATUS_CODE = 'INVALID_STATUS_CODE',
    TIMEOUT = 'TIMEOUT',
    NOT_FOUND = 'NOT_FOUND',
    FAILED = 'FAILED', // all other errors
  }

  export class TitleDownloader {
    public async downloadTitle(url: string): Promise<string|null> {
      const isHttps = url.search(/https/i) === 0;
      const method = isHttps ? httpsGet : httpGet;

      return new Promise((resolve, reject) => {
        const clientRequest = method(
          url,
          async (response) => {
            if (!(response.statusCode >= 200 && response.statusCode < 300)) {
              clientRequest.abort();
              reject(response.statusCode === 404
                ? TitleDownloaderRejectionCodesEnum.NOT_FOUND
                : TitleDownloaderRejectionCodesEnum.INVALID_STATUS_CODE
              );
              return;
            }
            const title = await parseFromStream(response);
            resolve(title);
          }
        );

        clientRequest.setTimeout(2000, () => {
          clientRequest.abort();
          reject(TitleDownloaderRejectionCodesEnum.TIMEOUT);
        })
        .on('error', (err: any) => {
          // clear timeout
          if (err && err.message && err.message.indexOf('ENOTFOUND') !== -1) {
            reject(TitleDownloaderRejectionCodesEnum.NOT_FOUND);
          }
          reject(TitleDownloaderRejectionCodesEnum.FAILED);
        });
      });
    }
  }

  export const titleDownloader = new TitleDownloader();
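A quick usage sketch of the class above (the URL is just an example):

  // Usage sketch; the URL is illustrative.
  titleDownloader.downloadTitle('https://example.com')
    .then((title) => console.log('page title:', title))
    .catch((code) => console.error('download failed:', code));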
Async function inside MQTT message event
I'm using the MQTT.js module in a Node app to subscribe to an MQTT broker. I want, upon receiving new messages, to store them in MongoDB with async functions. My code is something like:

  client.on('message', (topic, payload, packet) => {
    (async () => {
      await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE, MongoDBClient)
    })
  })

But I can't understand why it does not work, i.e. it executes the async function but any MongoDB query returns without being executed. Apparently no error is issued. What am I missing?
I modified the code to:

  client.on('message', (topic, payload, packet) => {
    try {
      msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE, MongoDBClient, db)
    } catch (error) {
      console.error(error)
    }
  })

Where:

  exports.handleMQTT_messages = (topic, payload, storageType, mongoClient, db) => {
    const dateFormat = 'YYYY-MMM-dddd HH:mm:ss'
    // topic is in the form
    const topics = topic.split('/')
    // locations info are at second position after splitting by /
    const coord = topics[2].split(",")
    // build up station object containing GeoJSON + station name
    //`Station_long${coord[0]}_lat${coord[1]}`
    const stationObj = getStationLocs(coord.toString())
    const msg = JSON.parse(payload)
    // what follows report/portici/
    const current_topic = topics.slice(2).join()
    let data_parsed = null
    // parse only messages having a 'd' property
    if (msg.hasOwnProperty('d')) {
      console.log(`${moment().format(dateFormat)} - ${stationObj.name} (topic:${current_topic})\n `)
      data_parsed = parseMessages(msg)
      // date rounded down to the nearest hour
      // https://stackoverflow.com/questions/17691202/round-up-round-down-a-momentjs-moment-to-nearest-minute
      dateISO_String = moment(data_parsed.t).startOf('hour').toISOString();
      // remove AQ from station name using regex
      let station_number = stationObj.name.match(/[^AQ]/).join('')
      let data_to_save = {
        id: set_custom_id(stationObj.name, dateISO_String), //`${station_number}${moment(dateISO_String).format('YMDH')}`,
        date: dateISO_String,
        station: stationObj,
        samples: [data_parsed]
      }
      switch (storageType) {
        case 'lowdb':
          update_insertData(db, data_to_save, coll_name)
          break;
        case 'mongodb':
          // MongoDB Replicaset
          (async () => {
            updateIoTBucket(data_to_save, mongoClient, db_name, coll_name)
          })()
          break;
        default:
          //ndjson format
          (async () => {
            await fsp.appendFile(process.env.PATH_FILE_NDJSON, JSON.stringify(data_to_save) + '\n')
          })()
          //saveToFile(JSON.stringify(data_to_save), process.env.PATH_FILE_NDJSON)
          break;
      }
      // show raw messages (not parsed)
      const show_raw = true
      const enable_console_log = true
      if (msg && enable_console_log) {
        if (show_raw) {
          console.log('----------RAW data--------------')
          console.log(JSON.stringify(msg, null, 2))
          console.log('--------------------------------')
        }
        if (show_raw && data_parsed) {
          console.log('----------PARSED data-----------')
          console.log(JSON.stringify(data_parsed, null, 2))
          console.log('--------------------------------')
        }
      }
    }
  }

Only updateIoTBucket(data_to_save, mongoClient, db_name, coll_name) is executed asynchronously, using the mongodb driver.
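One thing worth noting about the original snippet in the question: the async arrow function wrapped in parentheses was never actually invoked (the trailing () is missing), so its body never ran. A minimal sketch of awaiting the async work directly in the message event, assuming handleMQTT_messages returns a Promise and reusing the same variables as above:

  // Sketch: make the listener itself async, invoke the handler, and surface rejections.
  client.on('message', async (topic, payload, packet) => {
    try {
      await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE, MongoDBClient, db);
    } catch (error) {
      console.error('failed to handle MQTT message', error);
    }
  });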
Does a node js worker never time out?
I have an iteration that can take up to hours to complete. Example:

  do {
    //this is an api action
    let response = await fetch_some_data;
    // other database action
    await perform_operation();
    next = response.next;
  } while (next);

I am assuming that the operation doesn't time out. But I don't know that for sure. Any explanation of how Node.js handles this condition is highly appreciated. Thanks.

Update: The actual development code is as follows:

  const Shopify = require('shopify-api-node');
  const shopServices = require('../../../../services/shop_services/shop');
  const { create } = require('../../../../controllers/products/Products');

  exports.initiate = async (redis_client) => {
    redis_client.lpop(['sync'], async function (err, reply) {
      if (reply === null) {
        console.log("Queue Empty");
        return true;
      }
      let data = JSON.parse(reply),
        shopservices = new shopServices(data),
        shop_data = await shopservices.get()
          .catch(error => {
            console.log(error);
          });
      const shopify = new Shopify({
        shopName: shop_data.name,
        accessToken: shop_data.access_token,
        apiVersion: '2020-04',
        autoLimit: false,
        timeout: 60 * 1000
      });

      let params = { limit: 250 };
      do {
        try {
          let response = await shopify.product.list(params);
          if (await create(response, shop_data)) {
            console.log(`${data.current}`);
          };
          data.current += data.offset;
          params = response.nextPageParameters;
        } catch (error) {
          console.log("here");
          console.log(error);
          params = false;
        };
      } while (params);
    });
  }

Everything is working fine so far. I am just trying to make sure whether the execution will run to completion in Node or not. This function is called by a cron every minute, and the data for processing is provided by a queue.