I have an application that will log data to a text file anytime the application is used. It may be used once a day or maybe once every second.
Is there anything wrong with opening a handle to the text file, keeping it open for as long as my application is running, and never closing it? That way I can just append data to the log without reopening the file handle. Are there consequences to this decision I should be aware of?
fs.open returns a file descriptor, and if you don't close it you can end up with a file descriptor leak. The descriptor is not closed automatically, even if an error occurs.
On the other hand, fs.readFile, fs.writeFile, fs.appendFile, fs.createReadStream and fs.createWriteStream don't hand you a file descriptor. They open the file, operate on it, and then close it for you.
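For illustration, here is a minimal sketch of both options using only the core fs module; the log file name and helper names are made up, not part of the question:

// Option 1: keep one descriptor open for the life of the process.
const fs = require('fs');

const fd = fs.openSync('app.log', 'a');          // open once, in append mode

function log(line) {
  fs.appendFileSync(fd, line + '\n');            // appendFile also accepts a descriptor
}

process.on('exit', () => fs.closeSync(fd));      // release the descriptor on shutdown

// Option 2: let fs.appendFile open and close the file on every call.
function logSimple(line) {
  fs.appendFile('app.log', line + '\n', (err) => {
    if (err) console.error(err);
  });
}

The trade-off is roughly one held descriptor versus an open/close pair on every write; for once-a-second logging either approach should be fine.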
Related
I am using the Node.js filesystem module to write data to a .csv file. If I open the file while the write operation is still incomplete, it causes data loss because fs is unable to write while the file is open. So is there any way to open the file in read-only mode while the write operation is going on?
The issue was resolved when I switched to fs.createWriteStream for the file operation.
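For anyone landing here, this is roughly what the working version looked like; a minimal sketch, assuming the CSV is built row by row (the file name and rows are just examples):

const fs = require('fs');

// 'a' appends instead of truncating, so an existing file is not wiped out.
const csv = fs.createWriteStream('data.csv', { flags: 'a' });

const writeRow = (row) => csv.write(row.join(',') + '\n'); // rows are queued and flushed in order

writeRow(['id', 'name', 'value']);
writeRow([1, 'example', 42]);

csv.end(); // close the underlying descriptor once the batch is done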
I have an application that streams data to a file. Can I use Node.js to read the file while it's being streamed to?
I tried using createReadStream, but it only read one chunk and then the stream ended.
You could try watching for file changes with fs.watchFile(filename[, options], listener) or node-watch. On each file change you could just read the last line with read-last-lines.
Although I'm not sure how efficient it would be.
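Not the code from the answer above, but a sketch of the watch-and-read-the-tail idea using only core fs, assuming the writer only ever appends; the file name and poll interval are arbitrary:

const fs = require('fs');

const filename = 'stream.log';
let lastSize = 0;

fs.watchFile(filename, { interval: 500 }, (curr) => {
  if (curr.size <= lastSize) return; // nothing new, or the file was truncated

  // Read only the bytes appended since the previous check ('end' is inclusive).
  const tail = fs.createReadStream(filename, { start: lastSize, end: curr.size - 1 });
  tail.on('data', (chunk) => process.stdout.write(chunk));
  lastSize = curr.size;
});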
Hi, and thanks for any help. Is there a way to work with files larger than 10MB? I have to check for updates on items in a file that would be uploaded, but the file contains all items in the system and is approximately 20MB. This 10MB limit is killing me. I see streaming for file save and appending, but not for file reading, so I am open to any suggestions. The provider in this instance doesn't offer the facility to chunk the files. Thanks in advance for your help.
If you are using SuiteScript 2.0 to process a file from the File Cabinet with file.lines.iterator(), the 10MB size limit applies per line rather than to the file as a whole.
I believe returning a file object from a map/reduce script's getInputData stage automatically parses the file into lines.
The 10MB file size limit comes into play if you try to create a file larger than 10MB.
If you are trying to read in an external file via script, then one approach I've used is to proxy the call through an external service, e.g. query an AWS Lambda function that checks for the file and saves it to S3. Return the file path and size to your SuiteScript. The SuiteScript then asks for "pages" of the file that are each less than 10MB and saves those. If you are uploading something like a .csv, the Lambda function can send the header row with each paged request.
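As a rough illustration of the paging idea (not code from a real implementation), the Lambda side could look something like this; the bucket name, page size, and request shape are all assumptions:

// Hypothetical Lambda handler that returns one "page" of an S3 object.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const PAGE_SIZE = 5 * 1024 * 1024; // stay comfortably under the 10MB SuiteScript limit

exports.handler = async (event) => {
  const { key, page } = JSON.parse(event.body || '{}');
  const start = page * PAGE_SIZE;

  // S3 honours byte-range requests, so only this slice is transferred.
  const obj = await s3.getObject({
    Bucket: 'my-staging-bucket', // assumed bucket
    Key: key,
    Range: `bytes=${start}-${start + PAGE_SIZE - 1}`,
  }).promise();

  return { statusCode: 200, body: obj.Body.toString('utf8') };
};

The SuiteScript side would then request page 0, then page 1, and so on with N/https and save each chunk, stopping once the total size reported up front has been retrieved.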
My node.js app runs a function every second to (recursively) read a directory tree for .json files. These files are uploaded to the server via FTP from clients and are placed in the folder the node script is running in.
What I've found (or at least what I think is happening) is that node is not waiting for the .json file to be fully written before trying to read it and, as a result, throws an 'Unexpected end of input' error. It seems the filesystem needs a few seconds (milliseconds maybe) to finish writing the file. It could also have something to do with the file arriving over FTP (overhead possibly, I'm totally guessing here...).
Is there a way to wait for the file to be fully written to the filesystem before trying to read it with node?
var fs = require('fs');

fs.readFile(file, 'utf8', function (err, data) {
  var json = JSON.parse(data); // throws 'Unexpected end of input' while the upload is still in progress
});
You can check to see if the file is still growing with this:
https://github.com/felixge/node-growing-file
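If you'd rather not pull in a dependency, a hand-rolled version of the same idea is to poll the file size until it stops changing and only then parse it; a sketch, with an arbitrary interval and file name:

const fs = require('fs');

const whenFileIsStable = (file, callback) => {
  let lastSize = -1;

  const timer = setInterval(() => {
    fs.stat(file, (err, stats) => {
      if (err) return; // the upload may not have created the file yet
      if (stats.size > 0 && stats.size === lastSize) {
        clearInterval(timer); // size unchanged for one interval: assume the FTP upload finished
        callback();
      } else {
        lastSize = stats.size;
      }
    });
  }, 500);
};

whenFileIsStable('upload.json', () => {
  fs.readFile('upload.json', 'utf8', (err, data) => {
    if (err) throw err;
    const json = JSON.parse(data); // should no longer hit 'Unexpected end of input'
    console.log(json);
  });
});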
I use a simple script for writing to a file that is shared between Node.js processes:
var fs = require('fs');

var stream = fs.createWriteStream('test');
stream.once('open', function (fd) {
  stream.write(progress_percents);
});
In the log I see the error "too many open files". How do I write to the file only when it is not being used by other processes?
You've diagnosed the problem incorrectly. The "too many open files" error means that too many file descriptors are open in this process; it has nothing to do with other processes using the file. Each time this snippet runs it creates a new write stream and never ends it, so descriptors pile up until the limit is hit.
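A minimal sketch of one fix, assuming the snippet above runs repeatedly: create the stream once, reuse it for every write, and end it only when the process is done (the helper name is illustrative):

const fs = require('fs');

// Created once at startup; every later write reuses this single descriptor.
const stream = fs.createWriteStream('test', { flags: 'a' });

const logProgress = (progress_percents) => {
  stream.write(String(progress_percents) + '\n');
};

// When the process has finished reporting progress, release the descriptor.
// stream.end();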