fs.createReadStream() at specific position of file - node.js

Is it possible to create a stream that reads from a specific position of file in node.js?
I know that I could use a more traditional fs.open / seek / read API, but in that case I need to somehow wrap them in a stream for underlying layers of my application.

fs.createReadStream() has an option you can pass it to specify the start position for the stream.
let f = fs.createReadStream("myfile.txt", {start: 1000});
You could also open a normal file descriptor with fs.open(), then fs.read() one byte from a position right before where you want the stream to be positioned (using the position argument to fs.read()), and then pass that file descriptor into fs.createReadStream() as an option; the stream will start with that file descriptor and position. (Obviously the start option to fs.createReadStream() is a bit simpler.)
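A minimal sketch of the file-descriptor variant described above (assuming you still want the stream to begin at byte 1000); passing start alongside fd avoids relying on the descriptor's current position:
let fs = require("fs");
fs.open("myfile.txt", "r", (err, fd) => {
    if (err) throw err;
    // The path argument is ignored when an fd is supplied; the stream reads from
    // byte 1000 onwards and closes the descriptor when it ends (autoClose defaults to true).
    let stream = fs.createReadStream(null, {fd: fd, start: 1000});
    stream.on("data", (chunk) => {
        console.log("got " + chunk.length + " bytes");
    });
});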

Related

How to use Node.js streams to append to end of file?

I have a node.js app, which opens a stream:
outputStream = fs.createWriteStream("output.txt");
I then, asynchronously, add text to the file:
outputStream.write( outputTxt, "utf8" );
This code is run inside a loop, so it happens hundreds of times. However, the loop is asynchronous, so it sometimes pauses, and in the meantime I can edit the output.txt file in an external editor and (for example) add a few chars at the beginning.
However, when I do that, the next time outputStream.write executes, it overwrites the last few chars previously added (the same number of chars that I added externally).
Is there some way to prevent this? Some way to tell the writeStream find the end of the file and then add the text?
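One way to get that behaviour (a minimal sketch, assuming the rest of the loop stays the same) is to open the stream with the append flag, so the operating system places every write at the current end of the file:
var fs = require("fs");
// Append mode: each write lands at the current end of the file, even if the
// file was edited externally between writes.
var outputStream = fs.createWriteStream("output.txt", {flags: "a"});
var outputTxt = "some text\n"; // stand-in for the text produced by the loop
outputStream.write(outputTxt, "utf8");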

tail -f implementation in node.js

I have created an implementation of tail -f in node.js using socket.io and fs.watch function.
I read the file using fs.readFile, convert it into an array of lines, and return it to the client, storing the current length in a variable.
Then, whenever the "file changed" event fires, I re-read the whole file, convert it into an array of lines, compare the old length with the current length, and slice it like
fileContent.slice(oldLength, fileContent.length)
This gives me the changed content, so it runs perfectly fine.
Problem: I am reading the whole file every time it changes, which is not efficient if the file is too large. So is there any way to read a file once, and then get only the changed content whenever there is a change?
I have also tried spawning a child process for "tail -f":
var spawn = require('child_process').spawn;
var child = spawn('tail', ['-f', logfile]);
child.stdout.on('data', function (data) {
    var linesArray = data.toString().split("\n");
    console.log("Data sent" + linesArray[0]);
    io.emit('changed', {
        data: linesArray,
    });
});
The problems with this are:
The on("data") event fires multiple times when I save the logfile by writing some content.
On first load, it correctly returns the last ten lines of the file. But if there is a change, it returns the whole content again and again.
So if you have any idea how to solve this problem, let me know. Till then I will keep digging the internet.
So, I got the solution by reading someone else's code. The solution was to use fs.open, which opens the file, and then, instead of reading the whole file, read a particular block from it using the fs.read() function.
To learn about fs.open/fs.read, read this: nodejs-file-system.
Official doc: fs.read
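A minimal sketch of that approach, assuming the file only ever grows and reusing the io object from the question (the logfile path and the lastSize variable are illustrative):
var fs = require('fs');
var logfile = '/var/log/app.log';         // illustrative path
var lastSize = fs.statSync(logfile).size; // start at the current end of the file
fs.watch(logfile, function () {
    fs.stat(logfile, function (err, stat) {
        if (err || stat.size <= lastSize) return; // nothing new (or the file was truncated)
        fs.open(logfile, 'r', function (err, fd) {
            if (err) return;
            var buffer = Buffer.alloc(stat.size - lastSize);
            // Read only the bytes appended since the last read, starting at lastSize.
            fs.read(fd, buffer, 0, buffer.length, lastSize, function (err, bytesRead) {
                fs.close(fd, function () {});
                if (err) return;
                lastSize += bytesRead;
                var linesArray = buffer.toString('utf8', 0, bytesRead).split("\n");
                io.emit('changed', {data: linesArray});
            });
        });
    });
});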

How to read a file in Node JS into a buffer from end to start (backwards)?

I'm looking for a performance-oriented way to read a file into a buffer backwards (from end to beginning).
The zip file format has a crucial end of central directory record at the end of the file (it could be n bytes back from the end; there is a signature I need to find to know I have got it, so I can't just read the last 22 bytes of the file, since there is an optional 64K comment in there).
I couldn't find any discussion on Stack Overflow or using Google on how to accomplish this.
Check out this module: https://github.com/bnoordhuis/node-buffertools
You could use the reverse function given by the module, which creates a new buffer in memory of equal length and loops through the original buffer from the end, appending each element to the front of the new buffer.
You would be better off simply using a loop with the starting index as buffer.length - 1 and decrementing until you get the data you want.
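A minimal sketch of that decrementing-index scan, assuming the tail of the file is already in a buffer and that the four-byte end-of-central-directory signature (0x50 0x4b 0x05 0x06) is what you are looking for:
// Scan backwards for the zip end-of-central-directory signature ("PK\x05\x06").
// Returns the offset within the buffer, or -1 if it is not present.
function findEocd(buffer) {
    for (var i = buffer.length - 4; i >= 0; i--) {
        if (buffer[i] === 0x50 && buffer[i + 1] === 0x4b &&
            buffer[i + 2] === 0x05 && buffer[i + 3] === 0x06) {
            return i;
        }
    }
    return -1;
}
var fs = require('fs');
var fd = fs.openSync('archive.zip', 'r'); // illustrative path
var size = fs.fstatSync(fd).size;
// Read only the last 64K + 22 bytes (maximum comment length plus the fixed record size).
var tailLength = Math.min(size, 65536 + 22);
var tail = Buffer.alloc(tailLength);
fs.readSync(fd, tail, 0, tailLength, size - tailLength);
fs.closeSync(fd);
console.log('EOCD at file offset', size - tailLength + findEocd(tail));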

Positional write to existing file [Linux, NodeJS]

I'm trying to edit an existing binary file using NodeJS.
My code goes something like this:
file = fs.createWriteStream("/path/to/existing/binary/file", {flags: "a"});
file.pos = 256;
file.write(new Buffer([0, 1, 2, 3, 4, 5]));
In OS X, this works as expected (The bytes at 256..261 get replaced with 0..5).
On Linux, however, the 5 bytes get appended to the end of the file. This is also mentioned in the NodeJS API Reference:
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
How do I get around this?
Open with a mode of r+ instead of a. r+ is the portable way to say that you want to read and/or write to arbitrary positions in the file, and that the file should already exist.
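A minimal sketch of that, reusing the same path and offset; with r+ you can also pass the offset through the stream's start option instead of poking file.pos directly:
var fs = require('fs');
// r+ opens the existing file for reading and writing without truncating it,
// and positional writes work on Linux as well.
var file = fs.createWriteStream("/path/to/existing/binary/file", {flags: "r+", start: 256});
file.write(Buffer.from([0, 1, 2, 3, 4, 5])); // replaces bytes 256..261
file.end();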

How can I provide an input stream to node.js/express' send, or get its raw output stream?

I'm providing a route in my express app that provides the contents of a cloud file as a download. I have access to the cloud file's input stream, and I'd like to pipe that directly into the response's output stream. However, I'm using express, which doesn't seem to support an input stream.
I was hoping I could do this:
res.send (cloudInputStream);
but this doesn't work. Express' send takes a body or a buffer, but apparently not an input stream.
Since that's the case, what I'd like to do is set the headers using res.setHeader(), then get access to the raw output stream, and then:
cloudInputStream.pipe (responseOutputStream);
Is this possible?
Alternatively, I could read the input stream into a buffer and provide that buffer to send. However, this reads the entire cloud file's contents into memory at once, which I'd like to avoid.
Any thoughts?
All you have to do is cloudInputStream.pipe(res) after setting your headers.
You can do anything node can do. Use pipe for streams and res.set for header fields, or res.sendfile.
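A minimal sketch of that inside an Express route, where getCloudInputStream() is a hypothetical stand-in for however you obtain the cloud file's readable stream:
var express = require('express');
var app = express();
app.get('/download', function (req, res) {
    var cloudInputStream = getCloudInputStream(); // hypothetical helper returning a readable stream
    // The response is itself a writable stream: set the headers first,
    // then pipe the cloud stream straight into it.
    res.set({
        'Content-Type': 'application/octet-stream',
        'Content-Disposition': 'attachment; filename="cloudfile.bin"'
    });
    cloudInputStream.pipe(res);
});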
