Node doesn't write to the file. - node.js

I'm trying to modify the plugin to write to a file rather than to the console.
To do that, I created a file stream in Node like this:
var fs = require('fs');
var fileName = "mochareport.html";
var writeStream;

writeStream = fs.createWriteStream(fileName, function(err) {
    if (err) {
        log.warn('Cannot write HTML Report\n\t' + err.message);
    } else {
        log.debug('HTML report written to "%s".', fileName);
    }
});
When I run it, it creates the file. But on line [132]:
if (failures.length) that.writeFailures(failures);
it calls a method named writeFailures, so I added a line like this inside the writeFailures method:
writeStream.write("Hello")
But the text Hello doesn't get written to the file.
What mistake am I making here?

fs.createWriteStream() doesn't accept a callback. It returns a writable stream that you call .write() on and listen for events on (mostly finish if you're interested in that).
If you want to just write once (e.g. open, write, and then close) and not continue writing (e.g. open, write, write, ..., close), then you could instead use something like fs.writeFile() or fs.appendFile() (depending on your desired behavior), both of which accept a callback that gets called when the file is closed.
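For example, here is a minimal sketch of both approaches (the file name and message text are just placeholders, not the plugin's actual values):

var fs = require('fs');
var fileName = 'mochareport.html';

// Option 1: keep a writable stream open and write to it as results arrive.
var writeStream = fs.createWriteStream(fileName);
writeStream.on('error', function (err) {
    console.error('Cannot write HTML report:', err.message);
});
writeStream.write('Hello');
// ... more writes ...
writeStream.end(function () {
    console.log('HTML report written to "' + fileName + '".');
});

// Option 2: write (or append) everything in one call.
fs.appendFile(fileName, 'Hello', function (err) {
    if (err) {
        console.error('Cannot write HTML report:', err.message);
    }
});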

Related

Best Way to Build a Collection in Node Mongo Driver from a Directory

I asked a similar question to this yesterday, but the solution was very easy and did not really address my fundamental problem of not understanding flow control in asynchronous JavaScript. The short version of what I am trying to do is build a MongoDB collection from a directory of JSON files. I had it working, but I modified something, and now the flow is such that the program runs to completion and therefore closes the connection before the asynchronous insertOne() calls are executed. When the insertOne() calls are finally executed, the data is not inserted and I get warnings about an unhandled exception from using a closed connection.
I am new to this, so if what I am doing is not best practice (it isn't), please let me know and I am happy to change things to get it to be reliable. The relevant code basically looks like this:
fs.readdirSync(dataDir).forEach(async function(file){
    // logic to build data object from JSON file
    console.log('Inserting object ' + obj['ID']);
    let result = await connection.insertOne(obj);
    console.log('Object ' + result.insertedId + ' inserted.');
})
The above is wrapped in an async function that I await. By placing a console.log() message at the end of the program flow, followed by a while(true);, I have verified that all of the "'Inserting object ' + obj['ID']" messages are printed, but none of the following "'Object ' + result.insertedId + ' inserted.'" messages have been printed by the time flow reaches the end of the program. If I remove the while(true); I get all the error messages, because I am no longer blocking and obviously by that point the client is closed. In no case is the database actually built.
I understand that there are always learning curves, but it is really frustrating to not be able to do something as simple as flow control. I am just trying to do something as simple as "loop through each file, perform function on each file, close, and exit", which is remedial programming. So, what is the best way to mark a point that flow control will not pass until all attempts to insert data into the Collection are complete (either successfully or unsuccessfully, because ideally I can use a flag to mark if there were any errors)?
I have found a better answer than my original, so I am going to post it for anyone else who needs this in the future, as there does not seem to be much out there. I will leave my original hack up too, as it is an interesting experiment to run for anyone curious about the asynchronous queue. I will also note that there is a fairly obvious solution using Promise.allSettled(), but it seems that it would put all of the files into memory at once, which is what I am trying to avoid, so I am not going to write up that solution too.
This method uses the Node fs Promises API, specifically the fsPromises readdir method. I'll show the results running three test files I made in the same directory that have console.log() messages peppered throughout to help understand program flow.
This first file (without-fs-prom.js) uses the ordinary read method and demonstrates the problem. As you can see, the asynchronous functions (the doFile() calls) do not terminate until the end. This means anything you wanted to run only after all the files are processed would be run before processing finished.
/*
** This version loops through the files and calls an asynchronous
** function with the traditional fs API (not the Promises API).
*/
const fs = require('fs');

async function doFile(file){
    console.log(`Doing ${file}`);
    return true;
}

async function loopFiles(){
    console.log('Enter loopFiles(), about to loop through the files.');
    fs.readdirSync(__dirname).forEach(async function(file){
        console.log(`About to do file ${file}`);
        const ret = await doFile(file);
        console.log(`Did file ${file}, returned ${ret}`);
        return ret;
    });
    console.log('Done looping through the files, returning from loopFiles()');
}

console.log('Calling loopFiles()');
loopFiles();
console.log('Returned from loopFiles()');
/* Result of run:
> require('./without-fs-prom')
Calling loopFiles()
Enter loopFiles(), about to loop through the files.
About to do file with-fs-prom1.js
Doing with-fs-prom1.js
About to do file with-fs-prom2.js
Doing with-fs-prom2.js
About to do file without-fs-prom.js
Doing without-fs-prom.js
Done looping through the files, returning from loopFiles()
Returned from loopFiles()
{}
> Did file with-fs-prom1.js, returned true
Did file with-fs-prom2.js, returned true
Did file without-fs-prom.js, returned true
*/
The problem can be partially fixed using the fsPromises API, as in with-fs-prom1.js:
/*
** This version loops through the files and calls an asynchronous
** function with the fs/promises API and assures all files are processed
** before termination of the loop.
*/
const fs = require('fs');

async function doFile(file){
    console.log(`Doing ${file}`);
    return true;
}

async function loopFiles(){
    console.log('Enter loopFiles(), read the dir');
    const files = await fs.promises.readdir(__dirname);
    console.log('About to loop through the files.');
    for(const file of files){
        console.log(`About to do file ${file}`);
        const ret = await doFile(file);
        console.log(`Did file ${file}, returned ${ret}`);
    }
    console.log('Done looping through the files, returning from loopFiles()');
}

console.log('Calling loopFiles()');
loopFiles();
console.log('Returned from loopFiles()');
/* Result of run:
> require('./with-fs-prom1')
Calling loopFiles()
Enter loopFiles(), read the dir
Returned from loopFiles()
{}
> About to loop through the files.
About to do file with-fs-prom1.js
Doing with-fs-prom1.js
Did file with-fs-prom1.js, returned true
About to do file with-fs-prom2.js
Doing with-fs-prom2.js
Did file with-fs-prom2.js, returned true
About to do file without-fs-prom.js
Doing without-fs-prom.js
Did file without-fs-prom.js, returned true
Done looping through the files, returning from loopFiles()
*/
In this case, the code after the file iteration loop, inside the asynchronous function itself, runs only after all files have been processed. You can get the same guarantee from any other function context with the following construction (file with-fs-prom2.js):
/*
** This version loops through the files and calls an asynchronous
** function with the fs/promises API and assures all files are processed
** before termination of the loop. It also demonstrates how that can be
** done from another asynchronous call.
*/
const fs = require('fs');

async function doFile(file){
    console.log(`Doing ${file}`);
    return true;
}

async function loopFiles(){
    console.log('Enter loopFiles(), read the dir');
    const files = await fs.promises.readdir(__dirname);
    console.log('About to loop through the files.');
    for(const file of files){
        console.log(`About to do file ${file}`);
        const ret = await doFile(file);
        console.log(`Did file ${file}, returned ${ret}`);
    }
    console.log('Done looping through the files, return from LoopFiles()');
    return;
}

async function run(){
    console.log('Enter run(), calling loopFiles()');
    await loopFiles();
    console.log('Returned from loopFiles(), return from run()');
    return;
}

console.log('Calling run()');
run();
console.log('Returned from run()');
/* Result of run:
> require('./with-fs-prom2')
Calling run()
Enter run(), calling loopFiles()
Enter loopFiles(), read the dir
Returned from run()
{}
> About to loop through the files.
About to do file with-fs-prom1.js
Doing with-fs-prom1.js
Did file with-fs-prom1.js, returned true
About to do file with-fs-prom2.js
Doing with-fs-prom2.js
Did file with-fs-prom2.js, returned true
About to do file without-fs-prom.js
Doing without-fs-prom.js
Did file without-fs-prom.js, returned true
Done looping through the files, return from LoopFiles()
Returned from loopFiles(), return from run()
*/
EDIT
This was my first tentative answer. It is a hack of a solution at best. I am leaving it up because it is an interesting experiment for people who want to peer into the asynchronous queue, and there may be some real use case for this somewhere too. I think my newly posted answer is superior in all reasonable cases, though.
Original Answer
I found a bit of an answer. It is a hack, but further searching on the net and the lack of responses indicate that there may be no real good way to reliably control flow with asynchronous code and callbacks. Basically, the modification is along the lines of:
fs.readdirSync(dataDir).forEach(async function(file){
    jobsOutstanding++;
    // logic to build data object from JSON file
    console.log('Inserting object ' + obj['ID']);
    let result = await connection.insertOne(obj);
    console.log('Object ' + result.insertedId + ' inserted.');
    jobsOutstanding--;
})
Here jobsOutstanding is a variable at the top level of the module, with an accessor, numJobsOutstanding().
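For reference, that counter wiring might look something like the following minimal sketch (the module layout and export are assumptions; only the names come from the code above):

// data module (hypothetical layout)
let jobsOutstanding = 0;

function numJobsOutstanding() {
    return jobsOutstanding;
}

// exported so the caller can poll it before closing the client
module.exports = { numJobsOutstanding };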
I now wrap the close like this (with some logging to watch how the flow works):
async function closeClient(client){
    console.log("Enter closeClient()");
    if(!client || !client.topology || !client.topology.isConnected()){
        console.log("Already closed.");
    }
    else if(dataObject.numJobsOutstanding() == 0){
        await client.close();
        console.log("Closed.");
    }
    else{
        setTimeout(function(){ closeClient(client); }, 100);
    }
}
I got this one to run correctly, and the logging is interesting to visualize the asynchronous queue. I am not going to accept this answer yet to see if anyone out there knows something better.

How can I write a buffer data to a file from readable.stream in Nodejs?

How can I write buffer data to a file from a readable stream in Node.js? I know there are already npm packages for this; I am asking this question for learning purposes only. I am also wondering why there is no method in 'fs' where the user can pass a readable stream and create a file directly.
I tried to write stream.readableBuffer to a file using fs.write by passing the buffer directly, but a small portion of the file is corrupt after writing. I can see the image, but a small portion of it looks black; my guess is the buffer has not been written completely.
I pass FormData from an Ajax XMLHttpRequest to a server-side controller (a Node.js router in this case), and I used the npm package 'parse-formdata' to parse the request. Below is the code:
parseFormdata(req, function (err, data) {
    if (err) {
        logger.error(err);
        throw err;
    }
    console.log('fields:', data.fields); // I have data here but how to write this data to a file?

    /** perhaps a bad way to write the data to a file, looking for a better way **/
    var chunk = data.parts[0].stream.readableBuffer.head.chunk;
    fs.writeFile(data.parts[0].filename, chunk, function (err) {
        if (err) {
            console.log(err);
        } else {
            console.log("The file was saved!");
        }
    });
});
Could somebody tell me a better approach to writing the data that I got from parsing the FormData?
According to the parse-formdata documentation, you may use the provided sample:
var pump = require('pump')
var concat = require('concat-stream')

pump(stream, concat(function (buf) {
    assert.equal(String(buf), String(file), 'file received')
    // then write to your file
    res.end()
}))
But you can do it more concisely:
const ws = fs.createWriteStream('out.txt')
data.parts[0].stream.pipe(ws)
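If you also want to know when the write has finished or failed, a minimal sketch listening for the stream events could look like this (the output file name is just an example):

const fs = require('fs');

const ws = fs.createWriteStream('out.txt');
ws.on('finish', () => console.log('The file was saved!'));
ws.on('error', (err) => console.error(err));

// pipe() writes each chunk in order and ends the write stream
// automatically when the source stream ends.
data.parts[0].stream.pipe(ws);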
Finally, note that the library has not been updated since 2017, so there may be vulnerabilities or other issues.

fs does not Write all received data file

My node.js script:
var net = require('net');
var fs = require('fs');

var client = new net.Socket();
client.connect(PORT, HOST, function () {
    client.write('You can Send file\0');
    client.on('data', function (data) {
        // console.log(data);
        var destinationFile = fs.createWriteStream("destination.txt");
        destinationFile.write(data);
    });
});
It is coded to receive a file from a remote HOST.
When I use console.log(data), the remote file's data is logged to the console correctly.
But when writing the data to a file, only one part of the received file gets written.
How can I write all of the data to the file?
thanks
The cause of what you get:
client.on('data') is called multiple times, as the file is being sent in data chunks, not as a single whole piece of data. As a result, on receiving each piece of data you create a new file stream and write to it.
console.log, on the other hand, works because it does not create a new console each time you write to it.
A quick solution would be:
client.connect(PORT, HOST, function () {
    var destinationFile = fs.createWriteStream("destination.txt");
    client.write('You can Send file\0');
    client.on('data', function (data) {
        // console.log(data);
        destinationFile.write(data);
    });
});
Also note what the net documentation says about the net.connect method:
Normally this method is not needed, as net.createConnection opens the
socket. Use this only if you are implementing a custom Socket.
The problem you have is that you declare the file stream inside the 'data' event handler, which is called each time a packet arrives.
client.connect(PORT, HOST, function () {
    var destinationFile = fs.createWriteStream("destination.txt"); // Should be here
    client.write('You can Send file\0');
    client.on('data', function (data) {
        destinationFile.write(data);
    });
});
Thanks. In a tiny test it's OK, but the written file is 16 bytes larger than the real size; the start of the file has 16 bytes of unknown data. How do I avoid that?
In your original code, since client.on('data') is called multiple times, multiple writes happen at the same time, so it is undefined behavior as to which write happens, in which order, or whether they will be complete.
The 16 bytes of unknown data are probably bytes from a different packet written one after the other in random order. Your tiny test works because the file can be sent in one packet, so the event is called only once.
If you declare the file stream first and then call write inside client.on('data'), the order of the writes and data is preserved, and the file is written successfully.
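An even simpler variant is to pipe the socket into the write stream, which also ends the file once the socket closes. A minimal sketch (PORT and HOST are placeholders, as in the question):

var net = require('net');
var fs = require('fs');

var client = new net.Socket();
client.connect(PORT, HOST, function () {
    client.write('You can Send file\0');
    var destinationFile = fs.createWriteStream("destination.txt");
    // pipe() writes every received chunk in arrival order and ends the
    // file stream when the socket finishes.
    client.pipe(destinationFile);
    destinationFile.on('finish', function () {
        console.log('destination.txt written');
    });
});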

Asynchronous file appends

In trying to learn node.js/socket.io I have been messing around with creating a simple file uploader that takes data chunks from a client browser and reassembles them on the server side.
The socket.io event handler for receiving a chunk looks as follows:
socket.on('sendChunk', function (data) {
    fs.appendFile(path + fileName, data.data, function (err) {
        if (err)
            throw err;
        console.log(data.sequence + ' - The data was appended to file ' + fileName);
    });
});
The issue is that data chunks aren't necessarily appended in the order they were received due to the async calls. Typical console output looks something like this:
1 - The data was appended to file testfile.txt
3 - The data was appended to file testfile.txt
4 - The data was appended to file testfile.txt
2 - The data was appended to file testfile.txt
My question is, what is the proper way to implement this functionality in a non-blocking way while still enforcing sequence? I've looked at libraries like async, but I really want to be able to process each chunk as it comes in rather than building a series and running it once all file chunks are in. I am still trying to wrap my mind around all this event-driven flow, so any pointers are great.
Generally you would use a queue for the data waiting to be written, then whenever the previous append finishes, you try to write the next piece. Something like this:
var parts = [];
var inProgress = false;

function appendPart(data){
    parts.push(data);
    writeNextPart();
}

function writeNextPart(){
    if (inProgress || parts.length === 0) return;
    var data = parts.shift();
    inProgress = true;
    fs.appendFile(path + fileName, data.data, function (err) {
        inProgress = false;
        if (err) throw err;
        console.log(data.sequence + ' - The data was appended to file ' + fileName);
        writeNextPart();
    });
}

socket.on('sendChunk', function (data) {
    appendPart(data);
});
You will need to expand this to keep a queue of parts and inProgress based on the fileName. My example assumes those will be constant for simplicity.
Since you need the appends to be in order, you could use fs.appendFileSync instead of fs.appendFile. This is the quickest way to handle it, but it hurts performance.
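A minimal sketch of that synchronous variant, assuming the same handler shape as in the question:

socket.on('sendChunk', function (data) {
    // Blocks the event loop for each chunk, but the appends happen
    // strictly in the order the chunks arrive.
    fs.appendFileSync(path + fileName, data.data);
    console.log(data.sequence + ' - The data was appended to file ' + fileName);
});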
If you want to handle it asynchronously yourself, use streams, which deal with this problem using EventEmitter. It turns out that the response (as well as the request) objects are streams. Create a writable stream with fs.createWriteStream and write all the pieces to append to the file.
fs.createWriteStream(path, [options])
Returns a new WriteStream object (See Writable Stream).
options is an object with the following defaults:
{ flags: 'w',
  encoding: null,
  mode: 0666 }
In your case you would use flags: 'a'
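For instance, a minimal sketch of the stream approach (it assumes the chunks should be written in the order they arrive):

var fs = require('fs');

// One append-mode stream per upload; write() calls are performed in the
// order they are made.
var out = fs.createWriteStream(path + fileName, { flags: 'a' });

socket.on('sendChunk', function (data) {
    out.write(data.data);
});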

Creating a file only if it doesn't exist in Node.js

We have a buffer we'd like to write to a file. If the file already exists, we need to increment an index on it, and try again. Is there a way to create a file only if it doesn't exist, or should I just stat files until I get an error to find one that doesn't exist already?
For example, I have files a_1.jpg and a_2.jpg. I'd like my method to try creating a_1.jpg and a_2.jpg, and fail, and finally successfully create a_3.jpg.
The ideal method would look something like this:
fs.writeFile(path, data, { overwrite: false }, function (err) {
    if (err) throw err;
    console.log('It\'s saved!');
});
or like this:
fs.createWriteStream(path, { overwrite: false });
Does anything like this exist in node's fs library?
EDIT: My question isn't if there's a separate function that checks for existence. It's this: is there a way to create a file if it doesn't exist, in a single file system call?
As your intuition correctly guessed, the naive solution with a pair of exists / writeFile calls is wrong. Asynchronous code runs in unpredictable ways, and in the given case it goes like this:
Is there a file a.txt? — No.
(File a.txt gets created by another program)
Write to a.txt if it's possible. — Okay.
But yes, we can do that in a single call. We're working with the file system, so it's a good idea to read the developer manual on fs. And hey, here's an interesting part.
'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
'wx' - Like 'w' but fails if path exists.
So all we have to do is add wx to the fs.open call. But hey, we don't like fopen-like IO. Let's read up on fs.writeFile a bit more.
fs.writeFile(filename, data[, options], callback)
filename String
data String | Buffer
options Object
  encoding String | Null default = 'utf8'
  flag String default = 'w'
callback Function
That options.flag looks promising. So we try
fs.writeFile(path, data, { flag: 'wx' }, function (err) {
    if (err) throw err;
    console.log("It's saved!");
});
And it works perfectly for a single write. I guess this code will still fail in some more bizarre ways if you try to solve your whole task with it. You have an atomic "check for a_#.jpg existence, and write there if it's empty" operation, but all the other fs state is not locked, and the a_1.jpg file may spontaneously disappear while you're already checking a_5.jpg. Most file systems are not ACID databases, and the fact that you're able to do at least some atomic operations is miraculous. It's very likely that the wx code won't work on some platform. So for the sake of your sanity, use a database, finally.
Some more info for the suffering
Imagine we're writing something like memoize-fs, which caches the results of function calls to the file system to save us some network/CPU time. Could we open the file for reading if it exists, and for writing if it doesn't, all in a single call? Let's take another look at those flags. After a while of mental exercise we can see that a+ does what we want: if the file doesn't exist, it creates one and opens it for both reading and writing, and if the file exists it does so without clearing the file (as w+ would). But now we cannot use it in either the (smth)File or the create(Smth)Stream functions. And that seems like a missing feature.
So feel free to file it as a feature request (or even a bug) to Node.js github, as lack of atomic asynchronous file system API is a drawback of Node. Though don't expect changes any time soon.
Edit. I would like to link to the articles by Linus and by Dan Luu on why exactly you don't want to do anything smart with your fs calls, because the claim above was otherwise left largely unsupported.
What about using the a option?
According to the docs:
'a+' - Open file for reading and appending. The file is created if it does not exist.
It seems to work perfectly with createWriteStream
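For example, a minimal sketch (the file name and data are just placeholders):

const fs = require('fs');

// 'a+' creates the file if it doesn't exist and never truncates it.
const stream = fs.createWriteStream('a_1.jpg', { flags: 'a+' });
stream.write(Buffer.from('example data'));
stream.end();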
This method is no longer recommended. fs.exists is deprecated. See comments.
Here are some options:
1) Have 2 "fs" calls. The first one is the "fs.exists" call, and the second is "fs.write / read, etc"
// checks if the file exists.
// If it does, it just calls back.
// If it doesn't, then the file is created.
function checkForFile(fileName, callback)
{
    fs.exists(fileName, function (exists) {
        if (exists)
        {
            callback();
        } else
        {
            fs.writeFile(fileName, '', { flag: 'wx' }, function (err)
            {
                callback();
            });
        }
    });
}

function writeToFile()
{
    checkForFile("file.dat", function()
    {
        // It is now safe to write/read to file.dat
        fs.readFile("file.dat", function (err, data)
        {
            // do stuff
        });
    });
}
2) Or create an empty file first:
--- Sync:
// If you want to force the file to be empty then you want to use the 'w' flag:
var fd = fs.openSync(filepath, 'w');
// That will truncate the file if it exists and create it if it doesn't.
// Wrap it in an fs.closeSync call if you don't need the file descriptor it returns.
fs.closeSync(fs.openSync(filepath, 'w'));
--- Async:
var fs = require("fs");
fs.open(path, "wx", function (err, fd) {
    // handle error
    fs.close(fd, function (err) {
        // handle error
    });
});
3) Or use "touch": https://github.com/isaacs/node-touch
To do this in a single call you can use the fs-extra npm module. After this, the file will have been created, as well as the directory it is to be placed in.
const fs = require('fs-extra');
const file = '/tmp/this/path/does/not/exist/file.txt';

fs.ensureFile(file, err => {
    console.log(err); // => null
});
Another way is to use ensureFileSync, which will do the same thing but synchronously.
const fs = require('fs-extra');
const file = '/tmp/this/path/does/not/exist/file.txt'
fs.ensureFileSync(file)
With async / await and TypeScript I would do:
import * as fs from 'fs'

async function upsertFile(name: string) {
    try {
        // try to read file
        await fs.promises.readFile(name)
    } catch (error) {
        // create empty file, because it wasn't found
        await fs.promises.writeFile(name, '')
    }
}
Here's a synchronous way of doing it:
try {
    fs.truncateSync(filepath, 0);
} catch (err) {
    fs.writeFileSync(filepath, "", { flag: "wx" });
}
If the file exists it will get truncated; otherwise the truncate call raises an error and the catch block creates the file.
This works for me.
// Use the file system fs promises
const { access } = require('fs/promises');

// Returns true if the file exists
// (don't use fs.exists, which is deprecated)
const fexists = async (path) => {
    try {
        await access(path);
        return true;
    } catch {
        return false;
    }
}

// Wrapper for your main program
async function mainapp(){
    if (await fexists("./users.json")){
        console.log("File is here");
    } else {
        console.log("File not here - so make one");
    }
}

// run your program
mainapp();
Just keep an eye on your async/awaits so everything plays nicely.
Hope this helps.
You can do something like this:
function writeFile(data, i){
    i = i || 0;
    var fileName = 'a_' + i + '.jpg';
    fs.exists(fileName, function (exists) {
        if (exists){
            writeFile(data, ++i);
        } else {
            fs.writeFile(fileName, data, function (err) {
                if (err) throw err;
            });
        }
    });
}
