I'm trying to read the contents of some .gz log files using streams in node.
I've started simply with: fs.createReadStream('log.gz').pipe(zlib.createUnzip()).
This works, and I can pipe to process.stdout to verify. I'd like to pipe this into a new writable stream whose data event lets me actually work with the contents. I guess I just don't fully understand how streams work. I tried creating a new writable stream with var writer = fs.createWriteStream(), but this doesn't work because it requires a path.
Any ideas how I can go about doing this (without creating any other files to write to)?
var zlib = require('zlib');
var fs = require('fs');
var unzipStream = zlib.createUnzip();
unzipStream.on('data', myDataHandler).on('end', myEndHandler);
fs.createReadStream('log.gz').pipe(unzipStream);
That will get you the data and end events. Since it's log data, you may also find the split module useful for getting an event per line; a sketch follows below.
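For illustration, a minimal sketch combining the two, assuming the split package (npm install split) is used to break the decompressed stream into lines; the file name and the line handler are placeholders:

var fs = require('fs');
var zlib = require('zlib');
var split = require('split');

fs.createReadStream('log.gz')
  .pipe(zlib.createUnzip())        // decompress the gzip data
  .pipe(split())                   // emit one 'data' event per line
  .on('data', function (line) {
    console.log(line);             // work with each log line here
  })
  .on('end', function () {
    console.log('done');
  });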
I'm in a bit of an odd situation where I have several images (currently stored as base64 strings in buffers), and I'd like to be able to combine them into a video. The catch is that I need this video file in a buffer, and if I have to write files to the disk, and then read them back as a buffer, the program will run too slowly.
I've looked into several other posts on Stack Overflow, and libraries that I've found online. However, everything I've managed to find uses FFmpeg, which would require me to write everything to a file.
For the sake of clarity, what I'm essentially looking for is something that would be able to make the following pseudocode work:
var frames = ....; // I have this already
var videoEncoder = new VideoEncoder();
for (var frame in frames) {
  videoEncoder.add(frames[frame]);
}
var buffer = videoEncoder.toBuffer();
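For illustration only, a rough sketch of one way to avoid temporary files: spawn FFmpeg as a child process, write the frames to its stdin, and collect the encoded video from its stdout. This assumes FFmpeg is installed, that each frame is a JPEG image held in a Buffer, and that a fragmented MP4 is an acceptable output; the flags and frame rate are assumptions, not a tested recipe:

var spawn = require('child_process').spawn;

// Assumption: frames is an array of Buffers, each holding one JPEG image.
function encodeFrames(frames, callback) {
  var ffmpeg = spawn('ffmpeg', [
    '-f', 'image2pipe', '-framerate', '24', '-c:v', 'mjpeg', '-i', 'pipe:0', // JPEG frames from stdin
    '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
    '-movflags', 'frag_keyframe+empty_moov',   // fragmented MP4 so it can be written to a pipe
    '-f', 'mp4', 'pipe:1'                      // encoded video goes to stdout
  ]);

  ffmpeg.stderr.resume(); // discard FFmpeg's log output so it doesn't block the process

  var chunks = [];
  ffmpeg.stdout.on('data', function (chunk) { chunks.push(chunk); });
  ffmpeg.on('close', function () { callback(Buffer.concat(chunks)); });

  frames.forEach(function (frame) { ffmpeg.stdin.write(frame); });
  ffmpeg.stdin.end();
}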
For a Flask web app, I know I can't read a "file" from request.files multiple times because it's a stream, so once I've read it, it's exhausted. But I need to use the "file" multiple times without saving it locally, and I'm having trouble doing that.
For example, from this
image = request.files["image"]
I'd like to have something like
image2 = image.copy
and perform different operations on image and image2.
Can someone please help me with this?
image = request.files["image"]
first_read = image.read()   # consumes the stream
image.seek(0)               # seek back to the beginning of the file so it can be read again
second_read = image.read()  # same contents as first_read
After reading the file, run f.stream.seek(0). This resets the pointer to the beginning of the file stream, so you can read the file from the beginning again. You can simply put the following snippet in a loop to see it in action.
import csv
import io

f.stream.seek(0)
stream = io.StringIO(f.stream.read().decode("utf-8"), newline=None)
reader = csv.reader(stream)
for row in reader:
    print(row)
I have an application that streams data to a file. Can I use Node.js to read the file while it's still being written to?
I tried using createReadStream, but it only read one chunk and then the stream ended.
You could try watching for file changes with fs.watchFile(filename[, options], listener) or node-watch. On each file change you could just read the last lines with read-last-lines.
Although I'm not sure how efficient it would be.
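As a minimal sketch of the same idea using only core modules, one could remember how many bytes have already been read and, on each change reported by fs.watchFile, read just the newly appended range; the file name and polling interval here are placeholder assumptions:

var fs = require('fs');

var filename = 'output.log';   // assumed path to the file being written
var bytesRead = 0;             // how far into the file we have already read

fs.watchFile(filename, { interval: 500 }, function (curr, prev) {
  if (curr.size <= bytesRead) return;   // nothing new (or the file was truncated)

  // Read only the bytes appended since the last change event.
  fs.createReadStream(filename, { start: bytesRead, end: curr.size - 1 })
    .on('data', function (chunk) {
      process.stdout.write(chunk);      // handle the new data here
    });

  bytesRead = curr.size;
});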
I am trying to figure out the best way to read from a piped stream. Basically, I have a file on my computer that I would like to read, pipe through a crypto cipher, and then upload to an endpoint. I'm using the form-data package, which helps create multi-part form requests and accepts a ReadableStream. But I'm not sure of the best way to pipe the readable stream through the cipher and still give the package a readable stream, without first writing the ciphered output to a file and then reading from that file.
Current Code:
const fs = require('fs');
const FormData = require('form-data');

let myStream = fs.createReadStream('./myfile.txt'),
    form = new FormData();
form.append('contents', myStream);
Hopefully this makes sense. Let me know if any clarification is needed.
Take a look at the Transform class of the Node.js Stream API.
The main thing here is understanding that a transform stream is a kind of duplex stream, that is, both readable and writable.
Basically, your stream pipeline will look like:
read from file -> transform with cipher -> pipe to form-data
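A minimal sketch of that pipeline, assuming an AES-256-CBC cipher from the built-in crypto module; the key, IV, and filenames are placeholders. Note that crypto's Cipher objects are already Transform streams, so no custom Transform subclass is needed:

const fs = require('fs');
const crypto = require('crypto');
const FormData = require('form-data');

// Placeholder key/IV; in a real application these would be derived and stored securely.
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);

// The cipher is a Transform stream: write plaintext in, read ciphertext out.
const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);

// read from file -> transform with cipher -> pipe to form-data
const encrypted = fs.createReadStream('./myfile.txt').pipe(cipher);

const form = new FormData();
// form-data can't infer a filename or length from a Transform stream,
// so a filename is supplied explicitly here.
form.append('contents', encrypted, { filename: 'myfile.txt.enc' });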
My node.js app runs a function every second to (recursively) read a directory tree for .json files. These files are uploaded to the server via FTP from clients and are placed in the folder that the node script runs in.
What I've found (at least what I think is happening) is that node is not waiting for the .json file to be fully written before trying to read it, and as such is throwing an 'Unexpected end of input' error. It seems as though the filesystem needs a few seconds (or maybe milliseconds) to write the file properly. This could also have something to do with the file being written over FTP (overheads possibly; I'm totally guessing here...).
Is there a way to wait for the file to be fully written to the filesystem before trying to read it with node?
const fs = require('fs');

fs.readFile(file, 'utf8', function (err, data) {
  var json = JSON.parse(data); // throws 'Unexpected end of input' when the file is still incomplete
});
You can check to see if the file is still growing with this:
https://github.com/felixge/node-growing-file
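If you'd rather avoid a dependency, here's a rough sketch of the same idea using only core fs calls: poll the file's size and only parse it once the size has stopped changing between two checks. The 500 ms interval and the file name are arbitrary placeholders:

const fs = require('fs');

// Poll the file size and invoke the callback only once it has stopped growing.
function readWhenStable(file, callback) {
  let lastSize = -1;
  const timer = setInterval(function () {
    fs.stat(file, function (err, stats) {
      if (err) return; // file may not be fully visible yet; try again on the next tick
      if (stats.size === lastSize) {
        clearInterval(timer);
        fs.readFile(file, 'utf8', callback);
      } else {
        lastSize = stats.size;
      }
    });
  }, 500);
}

readWhenStable('upload.json', function (err, data) {
  if (err) throw err;
  console.log(JSON.parse(data));
});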