Async function inside MQTT message event - node.js

I'm using the MQTT.js module in a Node app to subscribe to an MQTT broker.
Upon receiving new messages, I want to store them in MongoDB with async functions.
My code is something like:
client.on('message', (topic, payload, packet) => {
  (async () => {
    await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient)
  })
})
But I can't understand why it does not work: the async function appears to run, yet the MongoDB queries return without ever being executed, and apparently no error is issued.
What am I missing?
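For reference, here is a minimal sketch of what that handler would look like with the inline async function actually invoked (note the trailing parentheses) and any rejection logged; it assumes handleMQTT_messages returns a promise:

client.on('message', (topic, payload, packet) => {
  (async () => {
    await msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient)
  })().catch(err => console.error('MQTT message handler failed:', err))
})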

I modified the code in:
client.on('message', (topic, payload, packet) => {
  try {
    msgMQTT.handleMQTT_messages(topic, payload, process.env.STORAGE,
      MongoDBClient, db)
  } catch (error) {
    console.error(error)
  }
})
Where:
exports.handleMQTT_messages = (topic, payload, storageType, mongoClient, db) => {
  const dateFormat = 'YYYY-MMM-dddd HH:mm:ss'
  // topic is in the form
  //
  const topics = topic.split('/')
  // locations info are at second position after splitting by /
  const coord = topics[2].split(",")
  // build up station object containing GeoJSON + station name
  //`Station_long${coord[0]}_lat${coord[1]}`
  const stationObj = getStationLocs(coord.toString())
  const msg = JSON.parse(payload)
  // what follows report/portici/
  const current_topic = topics.slice(2).join()
  let data_parsed = null
  // parse only messages having a 'd' property
  if (msg.hasOwnProperty('d')) {
    console.log(`${moment().format(dateFormat)} - ${stationObj.name} (topic:${current_topic})\n `)
    data_parsed = parseMessages(msg)
    // date rounded down to the nearest hour
    // https://stackoverflow.com/questions/17691202/round-up-round-down-a-momentjs-moment-to-nearest-minute
    dateISO_String = moment(data_parsed.t).startOf('hour').toISOString();
    // remove AQ from station name using regex
    let station_number = stationObj.name.match(/[^AQ]/).join('')
    let data_to_save = {
      id: set_custom_id(stationObj.name, dateISO_String),
      //`${station_number}${moment(dateISO_String).format('YMDH')}`,
      date: dateISO_String,
      station: stationObj,
      samples: [data_parsed]
    }
    switch (storageType) {
      case 'lowdb':
        update_insertData(db, data_to_save, coll_name)
        break;
      case 'mongodb': // MongoDB Replicaset
        (async () => {
          updateIoTBucket(data_to_save, mongoClient, db_name, coll_name)
        })()
        break;
      default: //ndjson format
        (async () => {
          await fsp.appendFile(process.env.PATH_FILE_NDJSON,
            JSON.stringify(data_to_save) + '\n')
        })()
        //saveToFile(JSON.stringify(data_to_save), process.env.PATH_FILE_NDJSON)
        break;
    }
    // show raw messages (not parsed)
    const show_raw = true
    const enable_console_log = true
    if (msg && enable_console_log) {
      if (show_raw) {
        console.log('----------RAW data--------------')
        console.log(JSON.stringify(msg, null, 2))
        console.log('--------------------------------')
      }
      if (show_raw && data_parsed) {
        console.log('----------PARSED data-----------')
        console.log(JSON.stringify(data_parsed, null, 2))
        console.log('--------------------------------')
      }
    }
  }
}
Only updateIoTBucket(data_to_save, mongoClient, db_name, coll_name) is executed asynchronously, using the mongodb driver.
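As an aside, in the 'mongodb' case above the async IIFE calls updateIoTBucket without awaiting it, so a rejection never reaches the surrounding try/catch. A minimal sketch of that branch with the call awaited and failures logged (still assuming updateIoTBucket returns a promise):

switch (storageType) {
  case 'mongodb': // MongoDB Replicaset
    (async () => {
      await updateIoTBucket(data_to_save, mongoClient, db_name, coll_name)
    })().catch(err => console.error('updateIoTBucket failed:', err))
    break;
  // ... other cases unchanged
}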

Related

Socket connection congests whole nodejs application

I have a socket connection using zmq.js client:
// routerSocket.ts
const zmqRouter = zmq.socket("router");
zmqRouter.bind(`tcp://*:${PORT}`);

zmqRouter.on("message", async (...frames) => {
  try {
    const { measurementData, measurementHeader } =
      await decodeL2Measurement(frames[frames.length - 1]);
    addHeaderInfo(measurementHeader);
    // Add cell id to the list
    process.send(
      { measurementData, measurementHeader, headerInfoArrays },
      (e: any) => {
        return;
      },
    );
  } catch (e: any) {
    return;
  }
});
I run this socket connection within a forked process in index.ts:
// index.ts
const zmqProcess = fork("./src/routerSocket");
zmqProcess.on("message", async (data: ZmqMessage) => {
  if (data !== undefined) {
    const { measurementData, measurementHeader, headerInfoArrays } = data;
    headerInfo = headerInfoArrays;
    emitHeaderInfo(headerInfoArrays);
    // Emit the message to subscribers of the rnti
    const a = performance.now();
    io.emit(
      measurementHeader.nrCellId,
      JSON.stringify({ measurementData, measurementHeader }),
    );
    // Emit the message to the all channel
    io.emit("all", JSON.stringify({ measurementData, measurementHeader }));
    const b = performance.now();
    console.log("time to emit: ", a - b);
  }
});
There is data coming in rapidly, about one message per millisecond, to the zmqRouter object, which then processes it and sends it on to the main process, where I use socket.io to distribute the data to clients. But as soon as the stream begins, Node can't do anything else; even a setInterval log stops working once the stream starts.
Thank you for your help!

Error to emit socket: internal/bootstrap/pre_execution.js:308

internal/bootstrap/pre_execution.js:308
get() {
^
RangeError: Maximum call stack size exceeded
    at get (internal/bootstrap/pre_execution.js:308:8)
    at hasBinary (/home/spirit/Documents/ProjectsJS/ProjectBack/node_modules/has-binary2/index.js:44:3)
Hello, I am trying to emit on my socket but I get this error, and I cannot receive the socket event on the back end. Below is all the code I used:
io.on('connection', function (socket) {
  setInterval(() => Queue.searching(), 1000);
  sessionMap.set(socket.id, socket);
  //ADD PLAYER TO QUEUE
  socket.on('addPlayer-Queue', (result) => {
    const player = {
      id: result.id,
      socketid: socket.id,
      name: result.name,
      mmr: result.mmr,
      socket: socket
    }
    const nPlayer = new Player(player);
    Queue.addPlayer(nPlayer);
    /*
    console.log(queue);
    console.log(sessionMap.all());*/
    //socket.emit('match', matches)
  });
});
//
searching = () => {
  const firstPlayer = this.getRandomPlayer();
  const secondPlayer = this.players.find(
    playerTwo =>
      playerTwo.mmr < this.calculateLessThanPercentage(firstPlayer) &&
      playerTwo.mmr > this.calculateGreaterThanPercentage(firstPlayer) &&
      playerTwo.id != firstPlayer.id
  );
  if (secondPlayer) {
    const matchedPlayers = [firstPlayer, secondPlayer];
    this.removePlayers(matchedPlayers);
    Matches.configurePlayersForNewMatch(matchedPlayers);
  }
}
//
getMatchConfigurationFor = players => {
  console.log(sessionMap.all())
  if (players) {
    const match = new Match(players);
    const result = {
      idMatch: match.id,
      playerOne: match.players[0],
      playerTwo: match.players[1]
    }
    return result;
  }
}
configurePlayersForNewMatch = (matchedPlayers) => {
  matchedPlayers.forEach(player =>
    sessionMap.get(player.socketId)
      .broadcast.to(player.socketId)
      .emit('match',
        this.getMatchConfigurationFor(matchedPlayers)));
}
So I assume the problem is in this code snippet:
matchedPlayers.forEach(player =>
  sessionMap.get(player.socketId)
    .broadcast.to(player.socketId)
    .emit('match',
      this.getMatchConfigurationFor(matchedPlayers)));
The session map is a mapping I made of my sockets, storing the socket id as key and the socket itself as value so I can emit to it later. Here is a console.log of my session map and of my socket player: https://pastebin.com/XHDXH9ih
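For illustration only, a session map of the kind described could be built on a plain Map; the shape below is an assumption, not the original implementation. Note also that the queue code above stores the id as socketid while the match code reads player.socketId, so those keys have to line up (possibly via the Player class).

// Hypothetical sketch of a session registry keyed by socket id
const sessionMap = {
  sockets: new Map(),
  set(id, socket) { this.sockets.set(id, socket) },
  get(id) { return this.sockets.get(id) },
  all() { return [...this.sockets.keys()] }
};

io.on('connection', (socket) => {
  sessionMap.set(socket.id, socket);
  socket.on('disconnect', () => sessionMap.sockets.delete(socket.id));
});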

Node.js long process ran twice

I have a Node.js restful API built with the Express.js framework. It is usually hosted by pm2.
One of the services has a very long process. When the front end calls the service, the process starts up. Since there is an error in the database, the process won't finish properly and the error is caught. However, before the process reached the error, another process with exactly the same parameters started. So for a while two processes were running, one ahead of the other. After a long time the first process reached the error point and returned an error; then the second one returned exactly the same thing.
I checked the front end's Network tab and noticed there was actually only one request sent. Where did the second request come from?
Edit 1:
The whole process is: first process sends query to db -> long time wait -> second process starts up -> second process sends query to db -> long time wait -> first process receives db response -> long time wait -> second process receives db response
Edit 2:
The code of the service is as follows:
import { Express, Request, Response } from "express";
import * as multer from "multer";
import * as fs from "fs";
import { Readable, Duplex } from "stream";
import * as uid from "uid";
import { Client } from "pg";
import * as gdal from "gdal";
import * as csv from "csv";
import { SuccessPayload, ErrorPayload } from "../helpers/response";
import { postgresQuery } from "../helpers/database";
import Config from "../config";

export default class ShapefileRoute {
  constructor(app: Express) {
    // Upload a shapefile
    /**
     * #swagger
     * /shapefile:
     *   post:
     *     description: Returns the homepage
     *     responses:
     *       200:
     */
    app.post("/shapefile", (req: Request, res: Response, next: Function): void => {
      // Create instance of multer
      const multerInstance = multer().array("files");
      multerInstance(req, res, (err: Error) => {
        if (err) {
          let payload: ErrorPayload = {
            code: 4004,
            errorMessage: "Multer upload file error.",
            errorDetail: err.message,
            hints: "Check error detail"
          };
          req.reservePayload = payload;
          next();
          return;
        }
        // Extract files
        let files: any = req.files;
        // Extract body
        let body: any = JSON.parse(req.body.filesInfo);
        // Other params
        let writeFilePromises: Promise<any>[] = [];
        let copyFilePromises: Promise<any>[] = [];
        let rootDirectory: string = Config.uploadRoot;
        let outputId: string = uid(4);
        // Reset index of those files
        let namesIndex: string[] = [];
        files.forEach((item: Express.Multer.File, index: number) => {
          if (item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt" || item.originalname.split(".")[1] === "shp") {
            namesIndex.push(item.originalname);
          }
        })
        // Process and write all files to disk
        files.forEach((item: Express.Multer.File, outterIndex: number) => {
          if (item.originalname.split(".")[1] === "csv" || item.originalname.split(".")[1] === "txt") {
            namesIndex.forEach((indexItem, index) => {
              if (indexItem === item.originalname) {
                ShapefileRoute.csv(item, index, writeFilePromises, body, rootDirectory, outputId,);
              }
            })
          } else if (item.originalname.split(".")[1] === "shp") {
            namesIndex.forEach((indexItem, index) => {
              if (indexItem === item.originalname) {
                ShapefileRoute.shp(item, index, writeFilePromises, body, rootDirectory, outputId,);
              }
            })
          } else {
            ShapefileRoute.shp(item, outterIndex, writeFilePromises, body, rootDirectory, outputId,);
          }
        })
        // Copy files from disk to database
        ShapefileRoute.copyFiles(req, res, next, writeFilePromises, copyFilePromises, req.reserveSuperPg, () => {
          ShapefileRoute.loadFiles(req, res, next, copyFilePromises, body, outputId)
        });
      })
    });
  }
  // Process csv file
  static csv(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
    // Streaming file to pivotcsv
    writeFilePromises.push(new Promise((resolve, reject) => {
      // Get specification from body
      let delimiter: string;
      let spec: any;
      let lrsColumns: string[] = [null, null, null, null, null, null];
      body.layers.forEach((jsonItem, i) => {
        if (jsonItem.name === file.originalname.split(".")[0]) {
          delimiter = jsonItem.file_spec.delimiter;
          spec = jsonItem
          jsonItem.lrs_cols.forEach((lrsCol) => {
            switch (lrsCol.lrs_type) {
              case "rec_id":
                lrsColumns[0] = lrsCol.name;
                break;
              case "route_id":
                lrsColumns[1] = lrsCol.name;
                break;
              case "f_meas":
                lrsColumns[2] = lrsCol.name;
                break;
              case "t_meas":
                lrsColumns[3] = lrsCol.name;
                break;
              case "b_date":
                lrsColumns[4] = lrsCol.name;
                break;
              case "e_date":
                lrsColumns[5] = lrsCol.name;
                break;
            }
          })
        }
      });
      // Pivot csv file
      ShapefileRoute.pivotCsv(file.buffer, `${rootDirectory}/${outputId}_${index}`, index, delimiter, outputId, lrsColumns, (path) => {
        console.log("got pivotCsv result");
        spec.order = index;
        resolve({
          path: path,
          spec: spec
        });
      }, reject);
    }));
  }
  // Process shapefile
  static shp(file: Express.Multer.File, index: number, writeFilePromises: Promise<any>[], body: any, rootDirectory: string, outputId: string) {
    // Write file to disk and then call shp2csv to generate csv
    writeFilePromises.push(new Promise((resolve, reject) => {
      // Write shapefile to disk
      fs.writeFile(`${rootDirectory}/shps/${file.originalname}`, file.buffer, (err) => {
        // If it is a .shp file, resolve its path and spec
        if (file.originalname.split(".")[1] === "shp") {
          // Find spec of the shapefile from body
          body.layers.forEach((jsonItem, i) => {
            if (jsonItem.name === file.originalname.split(".")[0]) {
              let recordColumn: string = null;
              let routeIdColumn: string = null;
              jsonItem.lrs_cols.forEach((lrsLayer) => {
                if (lrsLayer.lrs_type === "rec_id") {
                  recordColumn = lrsLayer.name;
                }
                if (lrsLayer.lrs_type === "route_id") {
                  routeIdColumn = lrsLayer.name;
                }
              })
              // Transfer shp to csv
              ShapefileRoute.shp2csv(`${rootDirectory}/shps/${file.originalname}`, `${rootDirectory}/${outputId}_${index}`, index, outputId, recordColumn, routeIdColumn, (path, srs) => {
                // Add coordinate system, geom column and index of this file to spec
                jsonItem.file_spec.proj4 = srs;
                jsonItem.file_spec.geom_col = "geom";
                jsonItem.order = index;
                // Return path and spec
                resolve({
                  path: path,
                  spec: jsonItem
                })
              }, (err) => {
                reject(err);
              })
            }
          });
        } else {
          resolve(null);
        }
      })
    }));
  }
  // Copy files to database
  static copyFiles(req: Request, res: Response, next: Function, writeFilePromises: Promise<any>[], copyFilePromises: Promise<any>[], client: Client, callback: () => void) {
    // Take all files generated by writefile processes
    Promise.all(writeFilePromises)
      .then((results) => {
        // Remove null results. They are from .dbf .shx etc of shapefile.
        const files: any = results.filter(arr => arr);
        // Create promise array. This will be triggered after all files are written to database.
        files.forEach((file) => {
          copyFilePromises.push(new Promise((copyResolve, copyReject) => {
            let query: string = `copy lbo.lbo_temp from '${file.path}' WITH NULL AS 'null';`;
            // Create super user call
            postgresQuery(client, query, (data) => {
              copyResolve(file.spec);
            }, copyReject);
          }));
        });
        // Trigger upload query
        callback()
      })
      .catch((err) => {
        // Respond with an error if generating any file failed
        let payload: ErrorPayload = {
          code: 4004,
          errorMessage: "Something wrong when processing csv and/or shapefile.",
          errorDetail: err.message,
          hints: "Check error detail"
        };
        req.reservePayload = payload;
        next();
      })
  }
  // Load layers in database
  static loadFiles(req: Request, res: Response, next: Function, copyFilePromises: Promise<any>[], body: any, outputId: string) {
    Promise.all(copyFilePromises)
      .then((results) => {
        // Resort all results by the order assigned when creating files
        results.sort((a, b) => {
          return a.order - b.order;
        });
        results.forEach((result) => {
          delete result.order;
        });
        // Create JSON for load layer database request
        let taskJson = body;
        taskJson.layers = results;
        let query: string = `select lbo.load_layers2(p_session_id := '${outputId}', p_layers := '${JSON.stringify(taskJson)}'::json)`;
        postgresQuery(req.reservePg, query, (data) => {
          // Get result
          let result = data.rows[0].load_layers2.result;
          // Return 4003 error if no result
          if (!result) {
            let payload: ErrorPayload = {
              code: 4003,
              errorMessage: "Load layers error.",
              errorDetail: data.rows[0].load_layers2.error ? data.rows[0].load_layers2.error.message : "Load layers returns no result.",
              hints: "Check error detail"
            };
            req.reservePayload = payload;
            next();
            return;
          }
          let payload: SuccessPayload = {
            type: "string",
            content: "Upload files done."
          };
          req.reservePayload = payload;
          next();
        }, (err) => {
          req.reservePayload = err;
          next();
        });
      })
      .catch((err) => {
        // Respond with an error if copying any file to the database failed
        let payload: ErrorPayload = {
          code: 4004,
          errorMessage: "Something wrong when copy files to database.",
          errorDetail: err,
          hints: "Check error detail"
        };
        req.reservePayload = payload;
        next();
      })
  }
  // Pivot csv process. Write output csv to disk and return path of the file.
  static pivotCsv(buffer: Buffer, outputPath: string, inputIndex: number, delimiter: string, outputId: string, lrsColumns: string[], callback: (path: string) => void, errCallback: (err: Error) => void) {
    let inputStream: Duplex = new Duplex();
    // Define output stream
    let output = fs.createWriteStream(outputPath, { flags: "a" });
    // Callback when output stream is done
    output.on("finish", () => {
      console.log("output stream finish");
      callback(outputPath);
    });
    // Define parser stream
    let parser = csv.parse({
      delimiter: delimiter
    });
    // Close output stream when parser stream ends
    parser.on("end", () => {
      console.log("parser stream end");
      output.end();
    });
    // Write data when a chunk is parsed
    let header = [null, null, null, null, null, null];
    let attributesHeader = [];
    let i = 0;
    let datumIndex: boolean = true;
    parser.on("data", (chunk) => {
      console.log("parser received on chunk: ", i);
      if (datumIndex) {
        chunk.forEach((datum, index) => {
          if (lrsColumns.includes(datum)) {
            header[lrsColumns.indexOf(datum)] = index;
          } else {
            attributesHeader.push({
              name: datum,
              index: index
            })
          }
        });
        datumIndex = false;
      } else {
        i++;
        // let layer_id = ;
        let rec_id = header[0] ? chunk[header[0]] : i;
        let route_id = header[1] ? chunk[header[1]] : null;
        let f_meas = header[2] ? chunk[header[2]] : null;
        let t_meas = header[3] ? chunk[header[3]] : null;
        let b_date = header[4] ? chunk[header[4]] : null;
        let e_date = header[5] ? chunk[header[5]] : null;
        let attributes = {};
        attributesHeader.forEach((attribute) => {
          attributes[attribute.name] = chunk[attribute.index];
        });
        let attributesOrdered = {};
        Object.keys(attributes).sort().forEach((key) => {
          attributesOrdered[key] = attributes[key];
        });
        let outputData = `${outputId}\t${inputIndex}\t${rec_id}\t${route_id}\tnull\t${f_meas}\t${t_meas}\t${b_date}\t${e_date}\tnull\t${JSON.stringify(attributesOrdered)}\n`;
        output.write(outputData);
      }
    });
    inputStream.push(buffer);
    inputStream.push(null);
    inputStream.pipe(parser);
  }
  // Write shp and transfer to database format. Return file path and projection.
  static shp2csv(inputPath: string, outputPath: string, i: number, ouputId: string, recordColumn: string, routeIdColumn: string, callback: (path: string, prj: string) => void, errCallback: (err: Error) => void) {
    let dataset = gdal.open(inputPath);
    let layercount = dataset.layers.count();
    let layer = dataset.layers.get(0);
    let output = fs.createWriteStream(outputPath, { flags: "a" });
    output.on("finish", () => {
      callback(outputPath, layer.srs.toProj4());
    });
    layer.features.forEach((feature, featureId) => {
      let geom;
      let recordId: number = null;
      let routeId: string = null;
      try {
        let geomWKB = feature.getGeometry().toWKB();
        let geomWKBString = geomWKB.toString("hex");
        geom = geomWKBString;
        if (recordColumn) {
          recordId = feature.fields.get(recordColumn);
        }
        if (routeIdColumn) {
          routeId = feature.fields.get(routeIdColumn);
        }
      }
      catch (err) {
        console.log(err);
      }
      let attributes = {};
      let attributesOrdered = {};
      feature.fields.forEach((value, field) => {
        if (field != recordColumn && field != routeIdColumn) {
          attributes[field] = value;
        }
      });
      Object.keys(attributes).sort().forEach((key) => {
        attributesOrdered[key] = attributes[key];
      });
      output.write(`${ouputId}\t${i.toString()}\t${recordId ? recordId : (featureId + 1).toString()}\t${routeId}\tnull\tnull\tnull\tnull\tnull\t${geom}\t${JSON.stringify(attributesOrdered)}\n`);
    });
    output.end();
  }
}
The browser retries some requests if the server doesn't send a response and the browser hits its timeout value. Each browser may be configured with its own timeout, but 2 minutes sounds like it's probably the browser timeout.
You can't control the browser's timeout from your server. Two minutes is just too long to ask it to wait. You need a different design that responds sooner and then communicates back the eventual result later when it's ready. Either client polling or server push with webSocket/socket.io.
For client polling, you could have the server respond immediately from your first request and return back a token (some unique string). Then, the client can ask the server for the response for that token every minute until the server eventually has the response. If the server doesn't yet have the response, it just immediately returns back a code that means no response yet. If so, the client sets a timer and tries again in a minute, sending the token each time so the server knows which request it is asking about.
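As an illustration only (the route names, the in-memory job store, and the runLongTask helper are invented for this sketch, not taken from the question), the polling variant on the Express side might look roughly like this:

const express = require("express");
const crypto = require("crypto");

const app = express();
app.use(express.json());

// In-memory job store for this sketch: token -> { done, result }
const jobs = new Map();

app.post("/long-task", (req, res) => {
  const token = crypto.randomUUID();
  jobs.set(token, { done: false, result: null });
  // runLongTask stands in for the slow database work described above
  runLongTask(req.body)
    .then((result) => jobs.set(token, { done: true, result }))
    .catch((err) => jobs.set(token, { done: true, result: { error: err.message } }));
  // Respond right away so the browser never hits its timeout and retries
  res.json({ token });
});

app.get("/long-task/:token", (req, res) => {
  const job = jobs.get(req.params.token);
  if (!job) return res.status(404).json({ error: "unknown token" });
  if (!job.done) return res.json({ done: false }); // client polls again later
  res.json({ done: true, result: job.result });
});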
For server push, the client creates a persistent webSocket or socket.io connection to the server. When the client makes its long-running request, the server just immediately returns the same type of token described above. Then, when the server is done with the request, it sends the token and the final data over the socket.io connection. The client is listening for incoming messages on that socket.io connection and will receive the final response there.
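And a rough sketch of the push variant with socket.io (again, the event names and runLongTask are assumptions; io is an existing socket.io server instance):

const crypto = require("crypto");

io.on("connection", (socket) => {
  socket.on("start-long-task", (params, ack) => {
    const token = crypto.randomUUID();
    ack({ token }); // acknowledge immediately, no HTTP timeout involved
    runLongTask(params)
      .then((result) => socket.emit("long-task-done", { token, result }))
      .catch((err) => socket.emit("long-task-failed", { token, error: err.message }));
  });
});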

Deleting all messages in discord.js text channel

Ok, so I searched for a while, but I couldn't find any information on how to delete all messages in a discord channel. And by all messages I mean every single message ever written in that channel. Any clues?
Try this
async () => {
  let fetched;
  do {
    fetched = await channel.fetchMessages({ limit: 100 });
    message.channel.bulkDelete(fetched);
  }
  while (fetched.size >= 2);
}
Discord does not allow bots to delete more than 100 messages at once, so you can't delete every message in a channel in a single call. You can delete up to 100 messages at a time using bulkDelete.
Example:
const Discord = require("discord.js");
const client = new Discord.Client();
const prefix = "!";

client.on("ready", () => {
  console.log("Successfully logged into client.");
});

client.on("message", msg => {
  if (msg.content.toLowerCase().startsWith(prefix + "clearchat")) {
    async function clear() {
      msg.delete();
      const fetched = await msg.channel.fetchMessages({ limit: 99 });
      msg.channel.bulkDelete(fetched);
    }
    clear();
  }
});

client.login("BOT_TOKEN");
Note: it has to be in an async function for the await to work.
Here's my improved version that is quicker and lets you know when it's done in the console, but you'll have to run it for each username that you used in a channel (if you changed your username at some point):
// Turn on Developer Mode under User Settings > Appearance > Developer Mode (at the bottom)
// Then open the channel you wish to delete all of the messages from (could be a DM) and click the three dots on the far right.
// Click "Copy ID" and paste that instead of LAST_MESSAGE_ID.
// Copy / paste the below script into the JavaScript console.

var before = 'LAST_MESSAGE_ID';
var your_username = ''; //your username
var your_discriminator = ''; //that 4 digit code e.g. username#1234
var foundMessages = false;

clearMessages = function () {
  const authToken = document.body.appendChild(document.createElement`iframe`).contentWindow.localStorage.token.replace(/"/g, "");
  const channel = window.location.href.split('/').pop();
  const baseURL = `https://discordapp.com/api/channels/${channel}/messages`;
  const headers = { "Authorization": authToken };
  let clock = 0;
  let interval = 500;

  function delay(duration) {
    return new Promise((resolve, reject) => {
      setTimeout(() => resolve(), duration);
    });
  }

  fetch(baseURL + '?before=' + before + '&limit=100', { headers })
    .then(resp => resp.json())
    .then(messages => {
      return Promise.all(messages.map((message) => {
        before = message.id;
        foundMessages = true;
        if (
          message.author.username == your_username
          && message.author.discriminator == your_discriminator
        ) {
          return delay(clock += interval).then(() => fetch(`${baseURL}/${message.id}`, { headers, method: 'DELETE' }));
        }
      }));
    }).then(() => {
      if (foundMessages) {
        foundMessages = false;
        clearMessages();
      } else {
        console.log('DONE CHECKING CHANNEL!!!')
      }
    });
}
clearMessages();
The previous script I found for deleting your own messages without a bot...
// Turn on Developer Mode under User Settings > Appearance > Developer Mode (at the bottom)
// Then open the channel you wish to delete all of the messages from (could be a DM) and click the three dots on the far right.
// Click "Copy ID" and paste that instead of LAST_MESSAGE_ID.
// Copy / paste the below script into the JavaScript console.
// If you're in a DM you will receive a 403 error for every message the other user sent (you don't have permission to delete their messages).

var before = 'LAST_MESSAGE_ID';

clearMessages = function () {
  const authToken = document.body.appendChild(document.createElement`iframe`).contentWindow.localStorage.token.replace(/"/g, "");
  const channel = window.location.href.split('/').pop();
  const baseURL = `https://discordapp.com/api/channels/${channel}/messages`;
  const headers = { "Authorization": authToken };
  let clock = 0;
  let interval = 500;

  function delay(duration) {
    return new Promise((resolve, reject) => {
      setTimeout(() => resolve(), duration);
    });
  }

  fetch(baseURL + '?before=' + before + '&limit=100', { headers })
    .then(resp => resp.json())
    .then(messages => {
      return Promise.all(messages.map((message) => {
        before = message.id;
        return delay(clock += interval).then(() => fetch(`${baseURL}/${message.id}`, { headers, method: 'DELETE' }));
      }));
    }).then(() => clearMessages());
}
clearMessages();
Reference: https://gist.github.com/IMcPwn/0c838a6248772c6fea1339ddad503cce
This will work on discord.js version 12.2.0. Just put this inside your client's message event handler and type the command !nuke-this-channel. Every message in the channel will get wiped, then a Kim Jong Un meme will be posted.
if (msg.content.toLowerCase() == '!nuke-this-channel') {
  async function wipe() {
    var msg_size = 100;
    while (msg_size == 100) {
      await msg.channel.bulkDelete(100)
        .then(messages => msg_size = messages.size)
        .catch(console.error);
    }
    msg.channel.send(`<#${msg.author.id}>\n> ${msg.content}`, { files: ['http://www.quickmeme.com/img/cf/cfe8938e72eb94d41bbbe99acad77a50cb08a95e164c2b7163d50877e0f86441.jpg'] })
  }
  wipe()
}
This will work as long as your bot has the appropriate permissions.
module.exports = {
  name: "clear",
  description: "Clear messages from the channel.",
  args: true,
  usage: "<number greater than 0, less than 100>",
  execute(message, args) {
    const amount = parseInt(args[0]) + 1;

    if (isNaN(amount)) {
      return message.reply("that doesn't seem to be a valid number.");
    } else if (amount <= 1 || amount > 100) {
      return message.reply("you need to input a number between 1 and 99.");
    }

    message.channel.bulkDelete(amount, true).catch((err) => {
      console.error(err);
      message.channel.send(
        "there was an error trying to prune messages in this channel!"
      );
    });
  },
};
In case you didn't read the DiscordJS docs, you should have an index.js file that looks a little something like this:
const fs = require("fs");
const Discord = require("discord.js");
const { prefix, token } = require("./config.json");

const client = new Discord.Client();
client.commands = new Discord.Collection();

const commandFiles = fs
  .readdirSync("./commands")
  .filter((file) => file.endsWith(".js"));

for (const file of commandFiles) {
  const command = require(`./commands/${file}`);
  client.commands.set(command.name, command);
}

//client portion:
client.once("ready", () => {
  console.log("Ready!");
});

client.on("message", (message) => {
  if (!message.content.startsWith(prefix) || message.author.bot) return;

  const args = message.content.slice(prefix.length).split(/ +/);
  const commandName = args.shift().toLowerCase();

  if (!client.commands.has(commandName)) return;

  const command = client.commands.get(commandName);

  if (command.args && !args.length) {
    let reply = `You didn't provide any arguments, ${message.author}!`;
    if (command.usage) {
      reply += `\nThe proper usage would be: \`${prefix}${command.name} ${command.usage}\``;
    }
    return message.channel.send(reply);
  }

  try {
    command.execute(message, args);
  } catch (error) {
    console.error(error);
    message.reply("there was an error trying to execute that command!");
  }
});

client.login(token);
Another approach could be cloning the channel and deleting the one with the messages you want deleted:
// Clears all messages from a channel by cloning channel and deleting old channel
async function clearAllMessagesByCloning(channel) {
  // Clone channel
  const newChannel = await channel.clone()
  console.log(newChannel.id) // Do with this new channel ID what you want

  // Delete old channel
  channel.delete()
}
I prefer this method over the ones listed in this thread because it most likely takes less time to process and (I'm guessing) puts the Discord API under less stress. Also, channel.bulkDelete() can only delete messages that are newer than two weeks, which means you won't be able to delete every channel message if your channel has messages older than two weeks.
The possible downside is that the channel id changes. In case you rely on storing ids in a database or such, don't forget to update those documents with the id of the newly cloned channel!
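For example (purely illustrative; the updateStoredChannelId helper and wherever it persists the id are assumptions, not part of the answer):

// Clone the channel, update any stored references, then delete the original
async function clearChannelAndUpdateReferences(channel, updateStoredChannelId) {
  const newChannel = await channel.clone()
  // Hypothetical persistence hook: swap the old id for the new one in your database
  await updateStoredChannelId(channel.id, newChannel.id)
  await channel.delete()
  return newChannel
}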
Here's @Kiyokodyele's answer but with some changes from @user8690818's answer.
(async () => {
  let deleted;
  do {
    deleted = await channel.bulkDelete(100);
  } while (deleted.size != 0);
})();

Is this the proper way to write a multi-statement transaction with Neo4j?

I am having a hard time interpreting the documentation from Neo4j about transactions. Their documentation seems to indicate a preference for doing it this way rather than explicitly calling tx.commit() and tx.rollback().
Does this look like best practice with respect to multi-statement transactions and neo4j-driver?
const register = async (container, user) => {
  const session = driver.session()
  const timestamp = Date.now()
  const saltRounds = 10
  const pwd = await utils.bcrypt.hash(user.password, saltRounds)
  try {
    //Start registration transaction
    const registerUser = session.writeTransaction(async (transaction) => {
      const initialCommit = await transaction
        .run(`
          CREATE (p:Person {
            email: '${user.email}',
            tel: '${user.tel}',
            pwd: '${pwd}',
            created: '${timestamp}'
          })
          RETURN p AS Person
        `)
      const initialResult = initialCommit.records
        .map((x) => {
          return {
            id: x.get('Person').identity.low,
            created: x.get('Person').properties.created
          }
        })
        .shift()
      //Generate serial
      const data = `${initialResult.id}${initialResult.created}`
      const serial = crypto.sha256(data)
      const finalCommit = await transaction
        .run(`
          MATCH (p:Person)
          WHERE p.email = '${user.email}'
          SET p.serialNumber = '${serial}'
          RETURN p AS Person
        `)
      const finalResult = finalCommit.records
        .map((x) => {
          return {
            serialNumber: x.get('Person').properties.serialNumber,
            email: x.get('Person').properties.email,
            tel: x.get('Person').properties.tel
          }
        })
        .shift()
      //Merge both results for complete person data
      return Object.assign({}, initialResult, finalResult)
    })
    //Commit or rollback transaction
    return registerUser
      .then((commit) => {
        session.close()
        return commit
      })
      .catch((rollback) => {
        console.log(`Transaction problem: ${JSON.stringify(rollback, null, 2)}`)
        throw [`reg1`]
      })
  } catch (error) {
    session.close()
    throw error
  }
}
Here is the reduced version of the logic:
const register = (user) => {
  const session = driver.session()
  const performTransaction = session.writeTransaction(async (tx) => {
    const statementOne = await tx.run(queryOne)
    const resultOne = statementOne.records.map((x) => x.get('node')).slice()
    // Do some work that uses data from statementOne
    const statementTwo = await tx.run(queryTwo)
    const resultTwo = statementTwo.records.map((x) => x.get('node')).slice()
    // Do final processing
    return finalResult
  })
  return performTransaction.then((commit) => {
    session.close()
    return commit
  }).catch((rollback) => {
    throw rollback
  })
}
Neo4j experts, is the above code the correct use of neo4j-driver?
I would rather do this because it's more linear and synchronous:
const register = async (user) => {
  const session = driver.session()
  const tx = session.beginTransaction()
  const statementOne = await tx.run(queryOne)
  const resultOne = statementOne.records.map((x) => x.get('node')).slice()
  // Do some work that uses data from statementOne
  const statementTwo = await tx.run(queryTwo)
  const resultTwo = statementTwo.records.map((x) => x.get('node')).slice()
  // Do final processing
  const finalResult = { obj1, ...obj2 }
  let success = true
  if (success) {
    tx.commit()
    session.close()
    return finalResult
  } else {
    tx.rollback()
    session.close()
    return false
  }
}
I'm sorry for the long post, but I cannot find any references anywhere, so the community needs this data.
After much more work, this is the syntax we have settled on for multi-statement transactions:
Start session
Start transaction
Use try/catch block after (to enable proper scope in catch block)
Perform queries in the try block
Rollback in the catch block
const someQuery = async () => {
  const session = Neo4J.session()
  const tx = session.beginTransaction()
  try {
    const props = {
      one: 'Bob',
      two: 'Alice'
    }
    const tx1 = await tx
      .run(`
        MATCH (n:Node)-[r:REL]-(o:Other)
        WHERE n.one = $props.one
        AND n.two = $props.two
        RETURN n AS One, o AS Two
      `, { props })
      .then((result) => {
        return {
          data: '...'
        }
      })
      .catch((err) => {
        throw 'Problem in first query. ' + err
      })
    // Do some work using tx1
    const updatedProps = {
      _id: 3,
      four: 'excellent'
    }
    const tx2 = await tx
      .run(`
        MATCH (n:Node)
        WHERE id(n) = toInteger($updatedProps._id)
        SET n.four = $updatedProps.four
        RETURN n AS One, o AS Two
      `, { updatedProps })
      .then((result) => {
        return {
          data: '...'
        }
      })
      .catch((err) => {
        throw 'Problem in second query. ' + err
      })
    // Do some work using tx2
    if (problem) throw 'Rollback ASAP.'
    await tx.commit()
    session.close()
    return Object.assign({}, tx1, { tx2 })
  } catch (e) {
    tx.rollback()
    session.close()
    throw 'someQuery# ' + e
  }
}
I will just note that if you are passing numbers into Neo4j, you should wrap them inside the Cypher Query with toInteger() so that they are parsed correctly.
I included examples of query parameters also and how to use them. I found it cleans up the code a little.
Besides that, you basically can chain as many queries inside the transaction as you want, but keep in mind 2 things:
Neo4j write-locks all involved nodes during a transaction, so if you have several processes all performing operations on the same node, you will see that only one process can complete a transaction at a time. We made our own business logic to handle write issues and opted not to use transactions at all. It is working very well so far, writing 100,000 nodes and creating 100,000 relationships in about 30 seconds spread over 10 processes. It took 10 times longer to do in a transaction. We experience no deadlocking or race conditions using UNWIND (a minimal sketch follows this list).
You have to await the tx.commit() or it won't commit before it nukes the session.
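For what it's worth, here is a minimal sketch of the UNWIND-based batching mentioned in the first point; the node label, property names, and batch shape are invented for illustration:

// Hypothetical batch write: one auto-committed query per batch, no explicit transaction
const session = driver.session()
const batch = [
  { id: 1, name: 'station-1' },
  { id: 2, name: 'station-2' }
]
session
  .run(
    `UNWIND $rows AS row
     MERGE (n:Station { id: row.id })
     SET n.name = row.name`,
    { rows: batch }
  )
  .then(() => session.close())
  .catch((err) => {
    console.error(err)
    session.close()
  })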
My opinion is that this type of transaction works great if you are using Polyglot (multiple databases) and need to create a node, and then write a document to MongoDB and then set the Mongo ID on the node.
It's very easy to reason about, and extend as needed.
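Purely as an illustration of that polyglot flow (the collection name, labels, and properties here are assumptions, not from the answer):

const registerStation = async (driver, mongoDb, props) => {
  const session = driver.session()
  const tx = session.beginTransaction()
  try {
    // 1. Create the node inside the Neo4j transaction
    const result = await tx.run(
      'CREATE (s:Station { name: $name }) RETURN id(s) AS nodeId',
      { name: props.name }
    )
    const nodeId = result.records[0].get('nodeId')
    // 2. Write the document to MongoDB
    const { insertedId } = await mongoDb
      .collection('stations')
      .insertOne({ name: props.name, neo4jId: nodeId.toNumber() })
    // 3. Store the Mongo ID back on the node, then commit
    await tx.run(
      'MATCH (s:Station) WHERE id(s) = $nodeId SET s.mongoId = $mongoId',
      { nodeId, mongoId: insertedId.toString() }
    )
    await tx.commit()
    return insertedId
  } catch (e) {
    await tx.rollback()
    throw e
  } finally {
    session.close()
  }
}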
