Correct way to organise this process in Node

I need some advice on how to structure this function, as at the moment things are not happening in the correct order due to Node being asynchronous.
This is the flow I want to achieve; I don't need help with the code itself, but with the order needed to achieve the end result, plus any suggestions on how to make it efficient.
1. Node routes a GET request to my controller.
2. The controller reads a .csv file on the local system and opens a read stream using the fs module.
3. The csv-parse module is then used to convert the stream to an array, line by line (many hundreds of thousands of lines).
4. Start a try/catch block.
5. With the current row from the CSV, take a value and try to find it in MongoDB.
6. If found, take the ID and store the line from the CSV along with this ID as a foreign ID in a separate database.
7. If not found, create an entry in the DB, take the new ID and then do step 6.
8. Print to the terminal the row number being worked on (ideally, at some point I would like to be able to send this value to the page and have it update like a progress bar as the rows are completed).
Here is a small part of the code structure that I am currently using:
const fs = require('fs');
const parse = require('csv-parse');

function addDataOne(req, id) {
  const modelOneInstance = new InstanceOne({ ...code });
  const resultOne = modelOneInstance.save();
  return resultOne;
}

function addDataTwo(req, id) {
  const modelTwoInstance = new InstanceTwo({ ...code });
  const resultTwo = modelTwoInstance.save();
  return resultTwo;
}

exports.add_data = (req, res) => {
  const fileSys = 'public/data/';
  const parsedData = [];
  let i = 0;
  fs.createReadStream(`${fileSys}${req.query.file}`)
    .pipe(parse({}))
    .on('data', (dataRow) => {
      let RowObj = {
        one: dataRow[0],
        two: dataRow[1],
        three: dataRow[2],
        // etc.
      };
      try {
        ModelOne.find(
          { propertyone: RowObj.one, propertytwo: RowObj.two },
          '_id, foreign_id'
        ).exec((err, searchProp) => {
          if (err) {
            console.log(err);
          } else {
            if (searchProp.length > 1) {
              console.log('too many returned from find function');
            }
            if (searchProp.length === 1) {
              addDataOne(RowObj, searchProp[0]).then((result) => {
                searchProp[0].foreign_id.push(result._id);
                searchProp[0].save();
              });
            }
            if (searchProp.length === 0) {
              let resultAddProp = null;
              addDataTwo(RowObj).then((result) => {
                resultAddProp = result;
                addDataOne(req, resultAddProp._id).then((result) => {
                  resultAddProp.foreign_id.push(result._id);
                  resultAddProp.save();
                });
              });
            }
          }
        });
      } catch (error) {
        console.log(error);
      }
      i++;
      let iString = i.toString();
      process.stdout.clearLine();
      process.stdout.cursorTo(0);
      process.stdout.write(iString);
    })
    .on('end', () => {
      res.send('added');
    });
};
I have tried to make the functions use async/await, but it seems to conflict with the fs.createReadStream / csv-parse functionality, probably due to my inexperience and incorrect use of the syntax.
I appreciate that this is a long question about the fundamentals of the code, but some tips/advice/pointers on how to get this going would be appreciated. I had it working when the data was sent one record at a time via a POST request from Postman, but I can't implement the next stage, which is to read from the CSV file containing many records.

First of all, you can combine the following checks into one query:
if (searchProp.length === 1) {
if (searchProp.length === 0) {
Use the upsert option in MongoDB's findOneAndUpdate query to update or insert in a single operation.
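A rough sketch of what that could look like with Mongoose (the model and property names are taken from the question; the exact update payload and the follow-up call are assumptions):
ModelOne.findOneAndUpdate(
  { propertyone: RowObj.one, propertytwo: RowObj.two },
  { $setOnInsert: { propertyone: RowObj.one, propertytwo: RowObj.two } }, // only applied when inserting
  { upsert: true, new: true } // new: true returns the upserted/updated document
).exec((err, doc) => {
  if (err) return console.log(err);
  // doc exists whether the row matched an existing document or a new one was created
  addDataOne(RowObj, doc._id).then((result) => {
    doc.foreign_id.push(result._id);
    doc.save();
  });
});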
Secondly, don't do this work on the main thread. Use a queue mechanism; it will be much more efficient.
The queue I personally use is Bull:
https://github.com/OptimalBits/bull#basic-usage
It also provides the progress-reporting functionality you need.
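A minimal sketch of the shape this could take with Bull (the queue name, Redis connection and job payload are assumptions; the processor body would hold the CSV/Mongo logic from the question):
const Queue = require('bull');

// assumes a Redis instance on localhost; pass connection options otherwise
const importQueue = new Queue('csv-import');

// the route handler only enqueues a job and responds immediately
exports.add_data = (req, res) => {
  importQueue.add({ file: req.query.file });
  res.send('import queued');
};

// the processor does the heavy lifting outside the request/response cycle
importQueue.process(async (job) => {
  // ... stream and upsert the CSV named in job.data.file here ...
  await job.progress(42); // report progress as rows are handled
});

// this event is what you could forward to the page as a progress-bar value
importQueue.on('progress', (job, progress) => {
  console.log(`job ${job.id} is ${progress}% done`);
});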
Also, regarding using async/await with a ReadStream, plenty of examples can be found online, such as: https://humanwhocodes.com/snippets/2019/05/nodejs-read-stream-promise/
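One pattern that tends to play well with async/await on the stream side is iterating the parser output with for await, so each row is fully processed before the next one is read. A sketch keeping the names from the question (handleRow is a hypothetical helper wrapping the find/upsert logic):
const fs = require('fs');
const parse = require('csv-parse');

exports.add_data = async (req, res) => {
  const fileSys = 'public/data/';
  const parser = fs.createReadStream(`${fileSys}${req.query.file}`).pipe(parse({}));
  let i = 0;
  // for await pauses the stream until the loop body finishes,
  // so the database work for row n completes before row n + 1 is parsed
  for await (const dataRow of parser) {
    await handleRow(dataRow); // hypothetical helper containing the find/upsert logic
    i++;
  }
  res.send('added');
};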

Related

Dynamic Slash Command Options List via Database Query?

Background:
I am building a discord bot that operates as a Dungeons & Dragons DM of sorts. We want to store game data in a database and during the execution of certain commands, query data from said database for use in the game.
All of the connections between our Discord server, our VPS, and the VPS' backend are functional and we are now implementing slash commands since traditional ! commands are being removed from support in April.
We are running into problems making the slash commands though. We want to set them up to be as efficient as possible which means no hard-coded choices for options. We want to build those choice lists via data from the database.
The problem we are running into is that we can't figure out the proper way to implement the fetch to the database within the SlashCommandBuilder.
Here is what we currently have:
const { SlashCommandBuilder } = require('@discordjs/builders');
const fetch = require('node-fetch');
const { REST } = require('@discordjs/rest');
const test = require('../commonFunctions/test.js');

var options = async function getOptions() {
  let x = await test.getClasses();
  console.log(x);
  return ['test', 'test2'];
};

module.exports = {
  data: new SlashCommandBuilder()
    .setName('get-test-data')
    .setDescription('Return Class and Race data from database')
    .addStringOption(option => {
      option.setName('class')
        .setDescription('Select a class for your character')
        .setRequired(true);
      for (let op of options()) {
        //option.addChoice(op, op);
      }
      return option;
    }),
  async execute(interaction) {
  },
};
This code produces the following error when we start the npm for our bot on our server:
options is not a function or its return value is not iterable
I thought that maybe the function wasn't properly defined, so I replaced its contents with a simple array return; the npm then started without errors and the values I had passed showed up in the server.
This leads me to think that the function call in the module.exports block immediately attempts to get the return value of the function, and since the function is async, it isn't ready yet and returns undefined, a promise, or something else that isn't iterable.
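A minimal illustration of that suspicion, using nothing beyond standard async behaviour:
async function getOptions() {
  return ['test', 'test2'];    // the resolved value is the array...
}

const result = getOptions();   // ...but the call itself hands back a Promise

// for (const op of result) {}  // would throw: result is not iterable

getOptions().then((ops) => {
  for (const op of ops) {      // fine once the Promise has resolved
    console.log(op);
  }
});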
Is there a proper way to implement the code as shown? Or is this way too complex for discord.js to handle?
Is there a proper way to implement the idea at all? Like creating a json object that contains the option data which is built and saved to a file at some point prior to this command being registered and then having the code above just pull in that file for the option choices?
Alright, I found a way. Ian Malcolm would be proud (LMAO).
Here is what I had to do, for those with similar issues:
I had to basically re-write our entire application. It sucks, I know, but it works so who cares?
When you run your index file for your npm, make sure that you do the following things.
Note: you can structure this however you want, this is just how I set up my js files.
Set up a function that will prepare the data you need; it needs to be an async function, as does everything downstream from this point on relating to the creation and registration of the slash commands.
Create a js file to act as your application setup "module". "Module" because we're faking a real module by just using the module.exports method. No package.json files needed.
In the setup file, you will need two requires. The first is an as-yet non-existent data manager file; we'll create that next. The second is a require for node:fs.
Create an async function in your setup file called setup and add it to your module.exports like so:
module.exports = { setup }
In your async setup function, or in a function that it calls, make a call to the function in your still as-yet non-existent data manager file. Use await so that the application doesn't proceed until something is returned. Here is what mine looks like. Note that I am writing my data to a file to read in later because of my use case; you may or may not have to do the same for yours:
async function setup() {
  console.log('test');
  // build option choice lists
  let listsBuilt = await buildChoiceLists();
  if (listsBuilt) {
    return true;
  } else {
    return false;
  }
}

async function buildChoiceLists() {
  let classListBuilt = await buildClassList();
  return true;
}

async function buildClassList() {
  let classData = await classDataManager.getClassData();
  console.log(classData);
  classList = classData;
  await writeFiles();
  return true;
}

async function writeFiles() {
  fs.writeFileSync('./CommandData/classList.json', JSON.stringify(classList));
}
Before we finish off this file: if you want to store anything as a property in this file and then get it later on, you can do so. In order for the data to be returned properly, though, you will need to define a getter function in your exports. Here is an example:
var classList;

module.exports = {
  getClassList: () => classList,
  setup
};
So, with everything above you should have something that looks like this:
const classDataManager = require('./DataManagers/ClassData.js');
const fs = require('node:fs');

var classList;

async function setup() {
  console.log('test');
  // build option choice lists
  let listsBuilt = await buildChoiceLists();
  if (listsBuilt) {
    return true;
  } else {
    return false;
  }
}

async function buildChoiceLists() {
  let classListBuilt = await buildClassList();
  return true;
}

async function buildClassList() {
  let classData = await classDataManager.getClassData();
  console.log(classData);
  classList = classData;
  await writeFiles();
  return true;
}

async function writeFiles() {
  fs.writeFileSync('./CommandData/classList.json', JSON.stringify(classList));
}

module.exports = {
  getClassList: () => classList,
  setup
};
Next, that pesky non-existent DataManager file. For mine, each data type will have its own; you might want to just combine them all into a single .js file for yours.
Same with the folder name: I called mine DataManagers. If you're combining them all into one, you could just call the file DataManager and leave it in the same folder as your appSetup.js file.
For the data manager file, all we really need is a function to get our data and then return it in the format we want it to be in. I am using node-fetch; if you are using some other module for data requests, write your code as needed.
Instead of explaining everything, here are the contents of my file; not much has to be explained here:
const fetch = require('node-fetch');

async function getClassData() {
  return new Promise((resolve) => {
    let data = "action=GetTestData";
    fetch('http://xxx.xxx.xxx.xx/backend/characterHandler.php', {
      method: 'post',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: data
    }).then(response => {
      response.json().then(res => {
        let status = res.status;
        let clsData = res.classes;
        let rcData = res.races;
        if (status == "Success") {
          let text = '';
          let classes = [];
          let races = [];
          if (Object.keys(clsData).length > 0) {
            for (let key of Object.keys(clsData)) {
              let cls = clsData[key];
              classes.push({
                "name": key,
                "code": key.toLowerCase()
              });
            }
          }
          if (Object.keys(rcData).length > 0) {
            for (let key of Object.keys(rcData)) {
              let rc = rcData[key];
              races.push({
                "name": key,
                "desc": rc.Desc
              });
            }
          }
          resolve(classes);
        }
      });
    });
  });
}

module.exports = {
  getClassData
};
This file contacts our backend PHP and requests data from it. It queries the data, returns it, and we then format it into a JSON structure for use later on with the option choices for the slash command.
Once all of your appSetup and data manager files are complete, we still need to create the commands and register them with the server. So, in your index file, add something similar to the following:
async function getCommands() {
  let cmds = await comCreator.appSetup();
  console.log(cmds);
  client.commands = cmds;
}

getCommands();
This should go at or near the top of your index.js file. Note that comCreator refers to a file we haven't created yet; you can name this require const whatever you wish. That's it for this file.
Now, the "comCreator" file. I named mine deploy-commands.js, but you can name it whatever. Once again, here is the full file contents. I will explain anything that needs to be explained after:
const { Collection } = require('discord.js');
const { REST } = require('@discordjs/rest');
const { Routes } = require('discord-api-types/v9');
const app = require('./appSetup.js');
const fs = require('node:fs');
const config = require('./config.json');

async function appSetup() {
  console.log('test2');
  let setupDone = await app.setup();
  console.log(setupDone);
  console.log(app.getClassList());
  return new Promise((resolve) => {
    const cmds = [];
    const cmdFiles = fs.readdirSync('./commands').filter(f => f.endsWith('.js'));
    for (let file of cmdFiles) {
      let cmd = require('./commands/' + file);
      console.log(file + ' added to commands!');
      cmds.push(cmd.data.toJSON());
    }
    const rest = new REST({ version: '9' }).setToken(config.token);
    rest.put(Routes.applicationGuildCommands(config.clientId, config.guildId), { body: cmds })
      .then(() => console.log('Successfully registered application commands.'))
      .catch(console.error);
    let commands = new Collection();
    for (let file of cmdFiles) {
      let cmd = require('./commands/' + file);
      commands.set(cmd.data.name, cmd);
    }
    resolve(commands);
  });
}

module.exports = {
  appSetup
};
Most of this is boilerplate for slash command creation, though I did combine the creation and registering of the commands into the same process. As you can see, we grab our command files, process them into a collection, register the commands, and then resolve the promise with that collection.
You might have noticed that this resolved collection is what was used to set the client commands in the index.js file.
config just contains your connection details for your Discord server app.
Finally, here is how I accessed the data we wrote for the SlashCommandBuilder:
data: new SlashCommandBuilder()
  .setName('get-test-data')
  .setDescription('Return Class and Race data from database')
  .addStringOption(option => {
    option.setName('class')
      .setDescription('Select a class for your character')
      .setRequired(true);
    let ops = [];
    let data = fs.readFileSync('./CommandData/classList.json', 'utf-8');
    ops = JSON.parse(data);
    console.log('test data class options: ' + ops);
    for (let op of ops) {
      option.addChoice(op.name, op.code);
    }
    return option;
  }),
Hopefully this helps someone in the future!

Nodejs write multiple dynamically changing files with fs writefile

I need to write multiple dynamically changing files based on an array consisting of objects passed to a custom writeData() function. This array consists of objects containing the file name and the data to write as shown below:
[
  {
    file_name: "example.json",
    dataObj,
  },
  {
    file_name: "example2.json",
    dataObj,
  },
  {
    file_name: "example3.json",
    dataObj,
  },
  {
    file_name: "example4.json",
    dataObj,
  },
];
My current method is to map this array and read + write new data to each file:
array.map((entry) => {
  fs.readFile(entry.file_name, "utf8", (err, unparsedData) => {
    if (err) console.log(err);
    else {
      var parsedData = JSON.parse(unparsedData);
      parsedData.data.push(entry.dataObj);
      const parsedDataJSON = JSON.stringify(parsedData, null, 2);
      fs.writeFile(entry.file_name, parsedDataJSON, "utf8", (err) => {
        if (err) console.log(err);
      });
    }
  });
});
This, however, does not work. Only a small percentage of the data is written to these files, and often the file is not left in valid JSON format (I think this is because two writeFile calls are writing to the same file at once and that breaks the file). Obviously this does not work the way I expected it to.
The multiple ways I have tried to resolve this problem consisted of attempting to make the fs.writeFile synchronous (delaying the map loop so each write finishes before moving to the next entry), but this is not good practice as synchronous calls hang up the entire app. I have also looked into implementing promises, but to no avail. I am a new learner of Node.js, so apologies for missed details/information. Any help is appreciated!
The same file is often listed multiple times in the array, if that changes anything.
Well, that changes everything. You should have shown that in the original question. If that is the case, then you have to sequence each individual file in the loop so it finishes one before advancing to the next. To prevent conflicts between writes to the same file, you have to assure yourself of two things:
1. You sequence each of the files in the loop so the next one doesn't start until the previous one is done.
2. You don't call this code again while it's still in operation.
You can assure yourself of the first item like this:
async function processFiles(array) {
  for (let entry of array) {
    const unparsedData = await fs.promises.readFile(entry.file_name, "utf8");
    const parsedData = JSON.parse(unparsedData);
    parsedData.data.push(entry.dataObj);
    const json = JSON.stringify(parsedData, null, 2);
    await fs.promises.writeFile(entry.file_name, json, "utf8");
  }
}
This will abort the loop if it gets an error on any of them. If you want it to continue to write the others, you can add a try/catch internally:
async function processFiles(array) {
  let firstError;
  for (let entry of array) {
    try {
      const unparsedData = await fs.promises.readFile(entry.file_name, "utf8");
      const parsedData = JSON.parse(unparsedData);
      parsedData.data.push(entry.dataObj);
      const json = JSON.stringify(parsedData, null, 2);
      await fs.promises.writeFile(entry.file_name, json, "utf8");
    } catch (e) {
      // log error and continue with the rest of the loop
      if (!firstError) {
        firstError = e;
      }
      console.log(e);
    }
  }
  // make sure we communicate back any error that happened
  if (firstError) {
    throw firstError;
  }
}
To assure yourself of the second point above, you will have to either not use setInterval() (replace it with a setTimeout() that you schedule once the promise that processFiles() returns has resolved), or make absolutely sure that the setInterval() time is long enough that it will never fire before processFiles() is done.
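For instance, a sketch of the rescheduling idea (the interval and the surrounding names are placeholders):
async function runBatch() {
  try {
    await processFiles(array);  // waits until every file has been read and rewritten
  } catch (e) {
    console.log(e);
  } finally {
    // schedule the next run only after this one has completely finished,
    // so two runs can never overlap the way setInterval() would allow
    setTimeout(runBatch, 10000);
  }
}
runBatch();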
Also, make absolutely sure that you are not modifying the array used in this function while that function is running.

Best/proper/cleanest syntax for a simple promise?

I am in the process of converting a large system over from standard sequential programming, which means I am planning on converting 1,500+ functions. These are simple functions that (for the most part) do basic select/insert/update/delete operations on tables.
Because I am about to be duplicating SO MANY functions, I want to ensure I am making the best decisions w/ syntax and structure.
My plan so far was to create a functions file for each table in the database and to create simple promise-based functions for these CRUD operations. I export these functions and can require the correct files (by table-name) for any route that needs to use a function on a particular table or tables.
As an example, this is my telephone.js file, which is meant to handle basic CRUD operations for the TELEPHONE table.
const sql = require('mssql');
const { poolPromise } = require('../functions/db');

module.exports = {
  async get(cus_id) {
    const pool = await poolPromise;
    let query;
    return new Promise(function (resolve, reject) {
      try {
        query = ("SELECT TEL_NUMBER, TEL_TYPE, TEL_TYPE_GENERAL FROM TEL WHERE TEL_STATUS = 'A' AND TEL_PRIMARY = 'Y' AND TEL_CUS_ID = @CUS_ID ORDER BY TEL_RECEIVED_SMS_FROM DESC, CASE WHEN (TEL_TYPE_GENERAL = 'mobile') THEN 'Y' ELSE 'N' END, TEL_PRIMARY DESC");
        pool.request()
          .input('CUS_ID', sql.Int, cus_id)
          .query(query, (err, results) => {
            if (err) {
              reject('Query failed in get(): ' + query);
            } else {
              if (results.rowsAffected > 0) {
                resolve(results.recordsets[0]);
              } else {
                resolve(null);
              }
            }
          });
      } catch (err) {
        reject('get() failed: ' + query);
      }
    });
  }
};
This code works fine. I am able to get the records I need through the applicable route file, format the records and send them into a response like so:
const telephone = require('../functions/telephone');

...

telephone.get(cus_id).then(function (tel_recs) {
  // do stuff w/ tel_recs
}).catch(function (err) {
  console.log(err.message);
  res.status(500);
  res.send(err.message);
});
I have this working...BUT is this the best way?
I don't know what I don't know, so I don't know everything to ask. I guess I am attempting to solicit general advice before spending the next 3 months replicating this code 1,500 times:
Any pitfalls this code is introducing.
Should try {} catch {} be used this way inside of a promise?
Is creating a separate functions file for each table the "best" way of handling a store of CRUD operations?
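One possible variation, shown only as a hedged comparison point: the mssql driver returns a promise when query() is called without a callback, which would remove the explicit new Promise wrapper. The names and the null-on-empty convention are kept from the code above; treat this as a sketch, not a recommendation.
const sql = require('mssql');
const { poolPromise } = require('../functions/db');

module.exports = {
  async get(cus_id) {
    const pool = await poolPromise;
    const query =
      "SELECT TEL_NUMBER, TEL_TYPE, TEL_TYPE_GENERAL FROM TEL " +
      "WHERE TEL_STATUS = 'A' AND TEL_PRIMARY = 'Y' AND TEL_CUS_ID = @CUS_ID " +
      "ORDER BY TEL_RECEIVED_SMS_FROM DESC, " +
      "CASE WHEN (TEL_TYPE_GENERAL = 'mobile') THEN 'Y' ELSE 'N' END, TEL_PRIMARY DESC";
    try {
      // query() returns a promise when no callback is supplied
      const results = await pool.request()
        .input('CUS_ID', sql.Int, cus_id)
        .query(query);
      return results.recordset.length > 0 ? results.recordset : null;
    } catch (err) {
      // rethrow so the caller's .catch() still sees the failure
      throw new Error('get() failed: ' + query);
    }
  }
};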

Twilio Node JS - filter sms per phone number

I would like to filter SMS per phone number and the date the SMS was sent using the REST API; however, the output of the following code is not available outside of the client.messages.each() block.
Please advise how I can use the latest SMS code sent to the filtered number:
const filterOpts = {
  to: '+13075550185',
  dateSent: moment().utc().format('YYYY-MM-DD')
};
let pattern = /([0-9]{1,})$/;
let codeCollection = [];
client.messages.each(filterOpts, (record) => {
  codeCollection.push(record.body.match(pattern)[0]);
  console.log(record.body.match(pattern)[0], record.dateSent);
});
console.log(codeCollection, 'I get an empty array here'); // how to get the latest sms and use it
doSomethingWithSMS(codeCollection[0]);
Twilio developer evangelist here.
The each function doesn't actually return a Promise. You can run a callback function after each has completed streaming results by passing it into the options as done, like this:
const codeCollection = [];
const pattern = /([0-9]{1,})$/;
const filterOpts = {
  to: '+13075550185',
  dateSent: moment().utc().format('YYYY-MM-DD'),
  done: (err) => {
    if (err) { console.error(err); return; }
    console.log(codeCollection);
    doSomethingWithSMS(codeCollection[0]);
  }
};
client.messages.each(filterOpts, (record) => {
  codeCollection.push(record.body.match(pattern)[0]);
  console.log(record.body.match(pattern)[0], record.dateSent);
});
Let me know if that helps at all.
Do you have access to the length of the array of messages? If so, you can do something like this:
const filterOpts = {
  to: '+13075550185',
  dateSent: moment().utc().format('YYYY-MM-DD')
};
let pattern = /([0-9]{1,})$/;
let codeCollection = [];
var i = 0;
client.messages.each(filterOpts, (record) => {
  if (i < messages.length) {
    codeCollection.push(record.body.match(pattern)[0]);
    console.log(record.body.match(pattern)[0], record.dateSent);
    i++;
  } else {
    nextFunction(codeCollection);
  }
});

function nextFunction(codeCollection) {
  console.log(codeCollection, 'I get an empty array here');
  doSomethingWithSMS(codeCollection[0]);
}
messages.each() runs asynchronously, so your main thread moves on to the next call while the client.messages() work happens in the background. So, nothing has been pushed to codeCollection by the time you try to access it. You need to somehow wait for each() to finish before moving on. The Twilio client uses backbone-style promises, so you can just add another .then() link to the chain, like below. You could also use a library like async, which lets you use await to write asynchronous code in a more linear-looking fashion.
const filterOpts = {
  to: '+13075550185',
  dateSent: moment().utc().format('YYYY-MM-DD')
};
let pattern = /([0-9]{1,})$/;
let codeCollection = [];
client.messages.each(filterOpts, (record) => {
  codeCollection.push(record.body.match(pattern)[0]);
  console.log(record.body.match(pattern)[0], record.dateSent);
}).then(
  function () {
    console.log(codeCollection, 'I get an empty array here');
    if (codeCollection.length > 0) doSomethingWithSMS(codeCollection[0]);
  }
);

Empty AWS S3 bucket of arbitrary cardinality with NodeJS & TypeScript

My removeObjects function has me stumped. The function is supposed to synchronously get a list of objects in an S3 bucket, then asynchronously remove those objects, repeating if the list was truncated until there are no more objects to remove. (AWS doesn't provide the total count of objects in the bucket, and listObjects pages the results.)
What am I doing wrong / why doesn't my function work? The solution should exploit the single-threaded, async nature of JS. For the bounty I am hoping for an answer specific to the module. The git repo is public if you want to see the entire module.
export function removeObjects(params: IS3NukeRequest): Promise<S3.Types.DeleteObjectsOutput> {
  const requests: Array<Promise<S3.Types.DeleteObjectsOutput>> = [];
  let isMore;
  do {
    listObjectsSync(params)
      .then((objectList: S3.Types.ListObjectsV2Output) => {
        isMore = objectList.ContinuationToken = objectList.IsTruncated ? objectList.NextContinuationToken : null;
        requests.push(params.Client.deleteObjects(listObjectsV2Output2deleteObjectsRequest(objectList)).promise());
      })
      .catch((err: Error) => { Promise.reject(err); });
  } while (isMore);
  return Promise.all(requests);
}

export async function listObjectsSync(params: IS3NukeRequest): Promise<S3.Types.ListObjectsV2Output> {
  try {
    return await params.Client.listObjectsV2(s3nukeRequest2listObjectsRequest(params)).promise();
  } catch (err) {
    return Promise.reject(err);
  }
}
Thanks.
The thing is that the listObjectsSync function returns a Promise, so you need to treat it as an async function and can't just use a plain loop with it. What you need to do is create a chain of promises while isMore is true. I've done it using a recursive approach (I'm not a pro in TS, so please check the code before using it). I also haven't tried the code live, but logically it should work :)
const requests: Array<Promise<S3.Types.DeleteObjectsOutput>> = [];

function recursive(recursiveParams) {
  return listObjectsSync(recursiveParams).then((objectList: S3.Types.ListObjectsV2Output) => {
    let isMore = objectList.ContinuationToken = objectList.IsTruncated ? objectList.NextContinuationToken : null;
    requests.push(params.Client.deleteObjects(listObjectsV2Output2deleteObjectsRequest(objectList)).promise());
    if (isMore) {
      // do we need to change params here?
      return recursive(recursiveParams);
    }
    // this is not necessary, just to indicate that we get out of the loop
    return true;
  });
}

return recursive(params).then(() => {
  // we will have all requests here
  return Promise.all(requests);
});
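For completeness, the same pagination can also be written with async/await, which avoids building the chain by hand. A sketch reusing the helper names and types from the question (untested against the module):
export async function removeObjects(params: IS3NukeRequest): Promise<S3.Types.DeleteObjectsOutput[]> {
  const requests: Array<Promise<S3.Types.DeleteObjectsOutput>> = [];
  let continuationToken: string | undefined;
  do {
    const request = s3nukeRequest2listObjectsRequest(params);
    request.ContinuationToken = continuationToken;
    // wait for each page of keys before asking for the next one
    const objectList = await params.Client.listObjectsV2(request).promise();
    requests.push(params.Client.deleteObjects(listObjectsV2Output2deleteObjectsRequest(objectList)).promise());
    continuationToken = objectList.IsTruncated ? objectList.NextContinuationToken : undefined;
  } while (continuationToken);
  // the deletes themselves still run concurrently
  return Promise.all(requests);
}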
