I can't read file after checking if it exists in fs - node.js

So I'm trying to check if a file exists before I read that file, but if I put the check before the read, the read just comes back empty even though the data exists.
I've tried putting the check after the read instead, but then I get an error from trying to read a file that is still blank.
bot.on('chat', function(username, message) {
    var time = clock.zonedDateTime('SYSTEM').toString()
    if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username)) {
        fs.mkdirSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username);
    }
    if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/')) {
        fs.mkdirSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/');
    }
    if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/')) {
        fs.mkdirSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/');
    }
    if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/' + 'firstwords.txt')) {
        fs.writeFile('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/' + 'lastwords.txt', 'sentat:' + time + ',<' + username + '>' + message, 'utf8', function(err) {
            if (err) throw err;
        });
    }
    if (message.startsWith('!lastwords ')) {
        if (cooldown == 1) {
            var lastwordsplit = message.toString().split(" ")
            var lastwordperson = lastwordsplit[1]
            if (fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)/players/' + lastwordperson + '/lirstwords/lastwords/lastwords.txt')) {
                bot.chat(fs.readFileSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)/players/' + lastwordperson + '/lirstwords/lastwords/lastwords.txt', 'utf8'))
                console.log('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)/players/' + lastwordperson + '/lirstwords/lastwords/lastwords.txt')
            }
            if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)/players/' + lastwordperson + '/lirstwords/lastwords/lastwords.txt')) {
                bot.chat(" does not have any documents that include the user " + lastwordperson)
                if (!fs.existsSync('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/' + 'firstwords.txt')) {
                    fs.writeFile('C:/Users/La Fam/Desktop/kekbot_rewritten(tm)' + '/players/' + username + '/lirstwords/lastwords/' + 'lastwords.txt', 'sentat:' + time + ',<' + username + '>' + message, 'utf8', function(err) {
                        if (err) throw err;
                    });
                }
            }
        }
    }
});

You're not waiting for fs.writeFile() to complete before you try to read that data back in the lines of code below, so you will probably see an empty file, or the file may not even have been created yet when you try to read it.
You're also mixing synchronous logic with asynchronous writes, with no way to communicate success or errors. If this code can use synchronous file I/O (which a server should never use except in startup code), then make all the file I/O synchronous.
If it all needs to be asynchronous, then it needs a complete rewrite and lots of this code can be refactored into a few shared functions.
It's not entirely clear to me everything that you're trying to accomplish in this function, but here's a simplified version of the logic that uses entirely asynchronous file I/O and attempts to DRY up the code a bit:
const mkdirp = require('mkdirp');
const fs = require('fs');
const path = require('path');

bot.on('chat', function(username, message) {
    const time = clock.zonedDateTime('SYSTEM').toString();
    const rootDir = 'C:/Users/La Fam/Desktop/kekbot_rewritten(tm)/players/';
    const wordsDir = path.join(rootDir, username, 'lirstwords/lastwords');
    const wordsFile = path.join(wordsDir, "lastwords.txt");

    // do something useful when there's an error
    // perhaps chat back that an unexpected error was encountered
    function processError(err) {
        console.log(err);
    }

    // make sure the desired directory exists (all the pieces of it)
    mkdirp(wordsDir, function(err) {
        if (err) return processError(err);
        let data = 'sentat:' + time + ',<' + username + '>' + message;
        fs.writeFile(wordsFile, data, {encoding: 'utf8', flag: 'wx'}, function(err) {
            if (err) {
                // we expect an error here if the file already exists; the wx flag
                // prevents overwriting an existing file. We do it this way to avoid
                // a race condition between checking whether the file exists and
                // writing it, and to avoid an extra system call.
                if (err.code !== 'EEXIST') {
                    processError(err);
                }
            }
            if (message.startsWith('!lastwords ') && cooldown === 1) {
                const lastwordperson = message.toString().split(" ")[1];
                const lastWordFile = path.join(rootDir, lastwordperson, 'lirstwords/lastwords/lastwords.txt');
                fs.readFile(lastWordFile, 'utf8', function(err, data) {
                    if (err) {
                        // treat file not found separately
                        if (err.code === 'ENOENT') {
                            bot.chat(" does not have any documents that include the user " + lastwordperson);
                        } else {
                            processError(err);
                        }
                    } else {
                        bot.chat(data);
                    }
                });
            }
        });
    });
});

Related

Canva publish extension API : Endpoint never get call

I'm trying to make a Canva App with a publish extension.
I followed the Quick start (https://docs.developer.canva.com/apps/extensions/publish-extensions/quick-start) with Glitch and it works well there.
But when I put it on my own public host name, on another port (like http://mydomaine.com:3000), Canva NEVER calls my endpoint. I write to a log file on every action in my app's POST handler and it never gets an update, and when I try the app on Canva.com it just shows me an error message.
//Copy from the Quick Start
app.post('/publish/resources/upload', async (request, response) => {
    try {
        writeLog("Uploading file");
        await fs.ensureDir(path.join(__dirname, 'export'));
        // Get the first asset from the "assets" array
        const [asset] = request.body.assets;
        // Download the asset
        const image = await jimp.read(asset.url);
        const filePath = path.join(__dirname, 'export', asset.name);
        await image.writeAsync(filePath);
        // Respond with the URL of the published design
        response.send({
            type: 'SUCCESS',
            url: url.format({
                protocol: request.protocol,
                host: request.get('host'),
                pathname: asset.name,
            }),
        });
    } catch (err) {
        writeLog("ERROR (app.post('/publish/resources/upload'): " + err);
    }
});

//Just log on the log file
function writeLog(log) {
    // fs.appendFile(path.join(__dirname, '/log/' + `${month}/${date}/${year}` + 'log.txt'), dateDisplay + "|" + log + "\n", (err) => {
    //     if (err) throw err;
    // });
    var today = new Date();
    var time = today.getHours() + ":" + today.getMinutes() + ":" + today.getSeconds();
    var date = today.getFullYear() + '-' + (today.getMonth() + 1) + '-' + today.getDate();
    var dateTime = date + ' ' + time;
    natifFS.appendFile('log.txt', dateTime + '| ' + log + "\n", (err) => {
        if (err) throw err;
    });
}
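Incidentally, the hand-assembled timestamp in writeLog is not zero-padded (it yields 9:3:7 rather than 09:03:07), which makes log lines hard to sort. Date.prototype.toISOString gives a padded, sortable stamp in one line; a small sketch (note toISOString is always UTC, and the fixed date below is just for illustration):

```javascript
// Build 'YYYY-MM-DD HH:MM:SS' from a Date; toISOString() is UTC and zero-padded.
const today = new Date(Date.UTC(2020, 0, 5, 9, 3, 7)); // 5 Jan 2020, 09:03:07 UTC
const dateTime = today.toISOString().slice(0, 19).replace('T', ' ');
console.log(dateTime); // '2020-01-05 09:03:07'
```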
One last thing: when I send a POST request to the same endpoint Canva uses (/publish/resources/upload) with Postman, I do get an update in my log.txt file.
If anyone has an idea, thank you.

Unable to Get Row Count Using ibm_db for NodeJS

I have an issue where I'm not able to get the affected-rows result from the following code.
While debugging I noticed it always crashes at conn.querySync(query.sqlUpdate, params);
console.log is not showing anything either.
What did I do wrong here?
CODE
//imports
const format = require('string-format');
const query = require('../db/query');
const message = require('../common/message');
const constant = require('../common/constant');
var ibmdb = require("ibm_db");
require('dotenv').config();

// access the environment variables for this environment
const database = "DATABASE=" + process.env.DATABASE + ";";
const hostname = "HOSTNAME=" + process.env.HOSTNAME + ";";
const uid = "UID=" + process.env.UID + ";";
const pwd = "PWD=" + process.env.PWD + ";";
const dbport = "PORT=" + process.env.DBPORT + ";";
const protocol = "PROTOCOL=" + process.env.PROTOCOL;
const connString = database + hostname + uid + pwd + dbport + protocol;

function updateContact(params) {
    ibmdb.open(connString, function(err, conn) {
        //blocks until the query is completed and all data has been acquired
        var rows = conn.querySync(query.sqlUpdate, params);
        console.log(rows);
    });
}
module.exports.updateContact = updateContact;
I finally understand what the problem is.
The problem lies in my use of the querySync function. This function does not return affected row counts.
https://github.com/ibmdb/node-ibm_db/blob/master/APIDocumentation.md#querySyncApi
The proper way is to use prepare followed by executeNonQuery.
https://github.com/ibmdb/node-ibm_db/blob/master/APIDocumentation.md#executeNonQueryApi
So, following the API docs, I modified my code.
...
conn.prepare(query.SQL_UPDATE, function (err, stmt) {
    if (err) {
        console.log(err);
        return conn.closeSync();
    }
    stmt.executeNonQuery(params, function (err, result) {
        if (err) {
            console.log(err);
        } else {
            console.log("Affected rows = " + result);
        }
        //Close the connection
        conn.close();
    });
});
...
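The nested callbacks above can also be flattened with a small Promise wrapper. `executeNonQueryP` below is a hypothetical helper name (the prepare/executeNonQuery callback shapes follow the ibm_db API docs linked above), demonstrated with a stub connection so the sketch runs without a Db2 instance:

```javascript
// Hypothetical helper: wrap prepare + executeNonQuery in a Promise that
// resolves with the affected-row count.
function executeNonQueryP(conn, sql, params) {
    return new Promise((resolve, reject) => {
        conn.prepare(sql, (err, stmt) => {
            if (err) return reject(err);
            stmt.executeNonQuery(params, (err2, rowCount) => {
                if (err2) return reject(err2);
                resolve(rowCount);
            });
        });
    });
}

// Stub connection standing in for a real ibm_db connection:
const fakeConn = {
    prepare(sql, cb) {
        cb(null, {
            executeNonQuery(params, cb2) { cb2(null, params.length); } // pretend rows = #params
        });
    }
};

executeNonQueryP(fakeConn, 'UPDATE contacts SET name = ? WHERE id = ?', ['Ann', 7])
    .then(rows => console.log('Affected rows = ' + rows)); // Affected rows = 2
```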

express res.download serving up 0 byte files

res.download is serving up the file, but once downloaded it is 0 bytes.
Any ideas?
app.get('/download', function(req, res) {
    console.log("download");
    console.log(req.query.fileID);
    fileDownload(req.query.fileID, function(rep) {
        if (rep.success) {
            console.log("Serving File to User, File: " + rep.data);
            res.download(__dirname + "/" + rep.data, rep.data)
        } else {
            console.log(res);
        }
    });
})
An ls on the folder shows the file is there, ready for download; the name is correct in the download box the browser displays, but the download is always 0 bytes in size.
A check on the file from the file download shows that yes, it is there, and yes, it's all good.
ISSUE FOUND (MAYBE):
I think the issue is that the file is not fully downloaded before it is served to the client; see below. I will try adding a callback to the pipe.
function fileDownload(id, callback) {
    info(id, function(res) {
        if (!res.error) {
            info(id, function(res) {
                if (!res.error) {
                    //console.log(res.data);
                    var d = JSON.parse(res.data);
                    //console.log(d['file_name']);
                    var url2 = baseurl + "/api/file/" + id;
                    var r = request(url2);
                    r.on('response', function (res) {
                        res.pipe(fs.createWriteStream('./' + d['id'] + d['file_name']));
                        console.log("Download Done: " + './' + d['id'] + d['file_name']);
                        return callback({success: true, data: d['id'] + d['file_name']});
                    });
                } else {
                    console.log("ERROR: " + res.data)
                    return callback({success: false, data: res.data});
                }
            });
        } else {
            console.log("ERROR: " + res.data)
            return callback({success: false, data: res.data});
        }
    });
};
The issue was due to the file being served up before the stream had finished writing.
The fix: add an .on('close') handler to the write stream.
r.on('response', function (res) {
    res.pipe(fs.createWriteStream('./' + d['id'] + d['file_name']).on('close', function() {
        console.log('file done');
        console.log("Download Done: " + './' + d['id'] + d['file_name']);
        return callback({success: true, data: d['id'] + d['file_name']});
    }));
    console.log("You should not see this");
});

Batch requests placed via nodejs request.get using rxjs

I am currently using the following function to create a Promise from the result of calling request.get:
function dlPromiseForMeta(meta) {
    return new Promise(function (resolve, reject) {
        meta.error = false;
        var fileStream = fs.createWriteStream(meta.filePath);
        fileStream.on('error', function (error) {
            meta.error = true;
            console.log('filesystem ' + meta.localFileName + ' ERROR: ' + error);
            console.log('record: ' + JSON.stringify(meta));
            reject(meta);
        });
        fileStream.on('close', function () {
            resolve(meta);
        });
        request.get({
            uri: meta.url,
            rejectUnauthorized: false,
            followAllRedirects: true,
            pool: {
                maxSockets: 1000
            },
            timeout: 10000,
            agent: false
        })
        .on('socket', function () {
            console.log('request ' + meta.localFileName + ' made');
        })
        .on('error', function (error) {
            meta.error = true;
            console.log('request ' + meta.localFileName + ' ERROR: ' + error);
            console.log('record: ' + JSON.stringify(meta));
            reject(meta);
        })
        .on('end', function () {
            console.log('request ' + meta.localFileName + ' finished');
            fileStream.close();
        })
        .pipe(fileStream);
    });
}
This works fine except when I call it too many times, as in the example below, where imagesForKeywords returns an rxjs Observable:
imagesForKeywords(keywords, numberOfResults)
    .mergeMap(function (meta) {
        meta.fileName = path.basename(url.parse(meta.url).pathname);
        meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
        meta.filePath = path.join(imagesFolder, meta.localFileName);
        return rxjs.Observable.fromPromise(dlPromiseForMeta(meta));
    });
I start getting ESOCKETTIMEDOUT errors when the source observable becomes sufficiently large.
So what I would like to do is somehow batch what happens in mergeMap for every, say, 100 entries... so I do those 100 in parallel, and each batch serially, and then merge them at the end.
How can I accomplish this using rxjs?
I think the simplest thing to use is bufferTime(), which triggers after a certain number of milliseconds but also has a final parameter for count.
Using a timeout seems useful in case there's a stream pattern that does not reach the batch limit in a reasonable time.
If that does not fit your use case, comment with some more details and I will adjust accordingly.
Your code will look like this:
bufferTime - as described above
forkJoin - run the buffer contents in parallel and emit when all return
mergeMap - coalesce the results
imagesForKeywords(keywords, numberOfResults)
    .map(function (meta) {
        meta.fileName = path.basename(url.parse(meta.url).pathname);
        meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
        meta.filePath = path.join(imagesFolder, meta.localFileName);
        return meta;
    })
    .bufferTime(maxTimeout, null, maxBatch)
    .mergeMap(items => rxjs.Observable.forkJoin(items.map(dlPromiseForMeta)))
    .mergeMap(arr => rxjs.Observable.from(arr))
Here's a runnable mockup to show it working. I have commented out the last mergeMap to show the buffering.
I have assumed a couple of things:
imagesForKeywords breaks keywords into an observable stream of keyword values
there is one keyword per dlPromiseForMeta call
// Some mocking
const imagesForKeywords = (keywords, numberOfResults) => {
    return Rx.Observable.from(keywords.map(keyword => { return {keyword} }))
}
const dlPromiseForMeta = (meta) => {
    return Promise.resolve(meta.keyword + '_image')
}

// Compose meta - looks like it can run at scale, since it is just string manipulation.
const composeMeta = meta => {
    // meta.fileName = path.basename(url.parse(meta.url).pathname);
    // meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
    // meta.filePath = path.join(imagesFolder, meta.localFileName);
    return meta;
}

const maxBatch = 3
const maxTimeout = 50 //ms

const bufferedPromises = (keywords, numberOfResults) =>
    imagesForKeywords(keywords, numberOfResults)
        .map(composeMeta)
        .bufferTime(maxTimeout, null, maxBatch)
        .mergeMap(items => Rx.Observable.forkJoin(items.map(dlPromiseForMeta)))
        //.mergeMap(arr => Rx.Observable.from(arr))

const keywords = ['keyw1', 'keyw2', 'keyw3', 'keyw4', 'keyw5', 'keyw6', 'keyw7'];
const numberOfResults = 1;

bufferedPromises(keywords, numberOfResults)
    .subscribe(console.log)

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>
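If pulling in buffering operators feels heavy, the same "at most N in flight, batches in series" behavior can be sketched with plain Promises and no RxJS at all. `batchedMap` is a hypothetical helper name, and `fakeDownload` stands in for dlPromiseForMeta:

```javascript
// Hypothetical helper: run `worker` over `items`, `batchSize` at a time.
// Each batch runs in parallel (Promise.all); batches run one after another.
async function batchedMap(items, batchSize, worker) {
    const results = [];
    for (let i = 0; i < items.length; i += batchSize) {
        const batch = items.slice(i, i + batchSize);
        results.push(...await Promise.all(batch.map(worker)));
    }
    return results;
}

// Stand-in for dlPromiseForMeta:
const fakeDownload = meta => Promise.resolve(meta + '_done');

batchedMap(['a', 'b', 'c', 'd', 'e'], 2, fakeDownload)
    .then(res => console.log(res)); // ['a_done', 'b_done', 'c_done', 'd_done', 'e_done']
```

Note this keeps result order and, unlike bufferTime, has no time-based flush; it simply never starts batch k+1 until batch k settles.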

Angular download file from byte array sent from node.js

I think I'm very close to what I want to do. I have the following API GET method in node.js that retrieves a file stored as varbinary(MAX) in a SQL Server database. The file was converted from a base64-encoded string before being inserted, so the Content-Type information was stripped from the string.
node.js
router.get('/getFile', (req, res) => {
    console.log("Calling getFile for file " + req.query.serialNumber + ".")
    var serialNumber = req.query.serialNumber;
    let request = new sql.Request(conn);
    request.query('SELECT FileName + \'.\' + FileExtension AS \'File\', FileType, ContentType, SerialNumber, Chart ' +
        'FROM dbo.ChangeFiles ' +
        'WHERE SerialNumber = ' + serialNumber)
        .then(function (recordset) {
            log("Successfully retrieved file " + recordset[0].SerialNumber + " from database.");
            log("Length of blob " + recordset[0].File + " is " + recordset[0].Chart.length)
            res.status(200);
            res.setHeader('Content-Type', recordset[0].ContentType);
            res.setHeader('Content-Disposition', 'attachment;filename=' + recordset[0].File);
            res.end(Buffer.from((recordset[0].Chart)));
        }).catch(function (err) {
            log(err);
            res.status(500).send("Issue querying database!");
        });
});
That works fine, but it has not been clear to me what to do in Angular to retrieve the file and prompt the user for a download, nor has there been much help or many resources online. Here is what I have so far in my service class.
fileDownload.service.ts
downloadFile(serialNumber: string): Observable<any> {
    return this.http.get(this.baseURL + '/getFile', { params: { serialNumber: serialNumber } })
        .map(this.extractFile);
}

private extractFile(response: Response) {
    const file = new Blob([response.blob]);
    FileSaver.saveAs(file);
    // const url = window.URL.createObjectURL(file);
    // window.open(url);
    return file;
}
As you can see, I've tried a couple of approaches. The commented-out portion of the extractFile method didn't work at all, and using the FileSaver.saveAs function produces a file download of an unknown type, so the headers sent from node.js didn't seem to affect the file itself.
Would someone be able to advise how to proceed in Angular with what is successfully being sent from node.js, so that I can download the file regardless of type?
Thanks so much in advance.
I got it working after all. I had to rework the API call so that it sent all of the file information separately, so the MIME type and file name could be assigned to the file on the client side in the component class. For some reason it wouldn't work when I tried to do it all in the API, so this was my workaround. Here is what works for me.
node.js api
router.get('/getFile', (req, res) => {
    console.log("Calling getFile for file " + req.query.serialNumber + ".")
    var serialNumber = req.query.serialNumber;
    let request = new sql.Request(conn);
    request.query('SELECT FileName + \'.\' + FileExtension AS \'File\', FileType, ContentType, SerialNumber, Chart ' +
        'FROM dbo.ChangeFiles ' +
        'WHERE SerialNumber = ' + serialNumber)
        .then(function (recordset) {
            log("Successfully retrieved file " + recordset[0].SerialNumber + " from database.");
            log("Length of blob " + recordset[0].File + " is " + recordset[0].Chart.length)
            res.send(recordset[0]);
        }).catch(function (err) {
            log(err);
            res.status(500).send("Issue querying database!");
        });
});
component class
downloadFile(serialNumber: string): void {
    this.changeService.downloadFile(serialNumber).subscribe((res: any) => {
        const ab = new ArrayBuffer(res.Chart.data.length);
        const view = new Uint8Array(ab);
        for (let i = 0; i < res.Chart.data.length; i++) {
            view[i] = res.Chart.data[i];
        }
        const file = new Blob([ab], { type: res.ContentType });
        FileSaver.saveAs(file, res.File);
        console.log(res);
    });
}
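A side note on the component code: the manual byte-copy loop can be shortened, since Uint8Array.from copies a plain byte array directly, and the typed array's .buffer property is the ArrayBuffer the Blob constructor needs. A minimal sketch with made-up byte values standing in for res.Chart:

```javascript
// res.Chart.data arrives as a plain array of byte values (the JSON form of a
// Node Buffer); the bytes here are made up for illustration.
const chartData = { type: 'Buffer', data: [72, 101, 108, 108, 111] };

const view = Uint8Array.from(chartData.data); // copies the bytes, no manual loop
const ab = view.buffer;                       // ArrayBuffer for new Blob([ab], ...)

console.log(Buffer.from(view).toString('utf8')); // 'Hello'
```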
service class
downloadFile(serialNumber: string): Observable<any> {
    return this.http.get(this.baseURL + '/getFile', { params: { serialNumber: serialNumber } })
        .map(this.extractFile);
}

private extractFile(response: Response) {
    // const file = new Blob([response.blob]);
    // FileSaver.saveAs(file);
    // const url = window.URL.createObjectURL(file);
    // window.open(url);
    const body = response.json();
    return body || {};
}
Update your code to call subscribe instead of map.