cannot pipe mp4 file in node.js

I am trying to pipe an mp4 file in my root directory to the response, but I cannot get it to work; it starts buffering for streaming but never plays the file.
Here is my code:
var http = require('http'),
    server = http.createServer(),
    fs = require('fs'),
    rs = fs.createReadStream('./Practice/Jumanji.mp4');

server.on('request', function(req, res) {
  res.writeHead(200, {
    'Content-Type': 'video/mp4'
  });
  rs.pipe(res);
  res.end();
});

server.listen(4000);
The movie tries to load but never does.

You have two problems in your code that I can see:
1. Streams are one-time-use objects, so once you fetch one video, a second request will fail because the stream has already been consumed. Your rs = fs.createReadStream('./Practice/Jumanji.mp4'); line should be inside the callback.
2. The likely cause of your error in this case is res.end();. That will immediately close res before your pipe has had time to write the video, so you are essentially saying "Send this stream" followed immediately by "Send nothing". You should delete that line, since the pipe will automatically close res when all the data has been written.
All this said, there is a lot of error handling logic that your example is missing, which you may want to consider.
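For reference, a minimal corrected version of the server incorporating both fixes might look like this (same file path as in the question); treat it as a sketch rather than a complete solution:

var http = require('http'),
    fs = require('fs');

var server = http.createServer(function(req, res) {
  // Create a fresh read stream per request; streams are one-time-use objects.
  var rs = fs.createReadStream('./Practice/Jumanji.mp4');
  rs.on('error', function(err) {
    // Minimal error handling: log and tear the response down if the file can't be read.
    console.error(err);
    res.end();
  });
  res.writeHead(200, {
    'Content-Type': 'video/mp4'
  });
  // pipe() ends the response once all the data has been written, so there is no res.end() here.
  rs.pipe(res);
});

server.listen(4000);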

Related

request: how to pipe file and keep the response object

I use npm request. I want to download a file, write it to the filesystem, and after that use the returned response object for further processing.
I know that I can pipe the response object directly, but afterwards I no longer have a response object for further processing:
var result = await request(builtReq).pipe(fs.createWriteStream(myFilePath));
So my implementation at the moment looks like this:
var result = await request(builtReq);
But I am not able to pipe the result object, because its streamable state is false.
So I need a way to keep the return value of the request and to write the file to the filesystem. Can I reset the stream state, or somehow keep the response object and write the file at the same time?
I tried to write the file manually via fs.writeFile() after I received the response object, but I had problems with file encodings because I can receive anything, and I ended up with broken files.
Does somebody have an idea how to solve this?
Do you want just the response object (this contains the status code and headers), or do you need the response body in memory?
If you only want the response object, and still want the body written to a file, you can grab the response like so:
var response;
await request(builtReq).on('response', r => {
  console.log('Response received: ' + r);
  response = r;
}).pipe(fs.createWriteStream(myFilePath));
console.log('Finished saving file, response is still ' + response);
If you need to process the actual body, you have a couple of options:
1. Keep your existing code as-is, then just read the file off the disk right after it finishes (since you are using await, that would be the next line) and process it in memory.
2. Pipe the input stream to multiple places -- one to the file write stream, and the other to an in-memory buffer (using a standard on('data') handler). Once the stream finishes, the file is saved and you can process the in-memory buffer. See the related question node.js piping the same readable stream for several different examples, and the sketch below.
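A minimal sketch of the second option, assuming the builtReq and myFilePath variables from the question; the chunk-collecting handler and the processBody function are illustrative, not part of the request API:

var fs = require('fs');
var request = require('request');

var chunks = [];
var req = request(builtReq);

// Send the bytes to the file...
req.pipe(fs.createWriteStream(myFilePath));

// ...and collect them in memory as they flow past.
req.on('data', function(chunk) {
  chunks.push(chunk);
});

req.on('end', function() {
  var body = Buffer.concat(chunks);
  // The file is on disk and the full body is available in memory here.
  processBody(body); // hypothetical further-processing step
});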
Remember, promises can only be resolved once.
So to solve your problem:
var promise = request(builtReq);
// now do whatever number of operations you need
promise.pipe(fs.createWriteStream(myFilePath));
promise.then(r => console.log('i will still get data here :)'));
promise.then(r => console.log('and here !'));
promise.then(r => console.log('and here !!'));
var result = await promise; // and here too !!
Once a promise has resolved, .then will always get the result.

watching streaming HTTP response progress in NodeJS, express

i want to stream sizeable files in NodeJS 0.10.x using express#4.8.5 and pipes. currently i'm doing it like this (in CoffeeScript):
app.get '/', ( request, response ) ->
  input = P.create_readstream route
  input
    .pipe P.$split()
    .pipe P.$trim()
    .pipe P.$skip_empty()
    .pipe P.$skip_comments()
    .pipe P.$parse_csv headers: no, delimiter: '\t'
    .pipe response
(P is pipedreams.)
what i would like to have is something like
    .pipe count_bytes # ???
    .pipe response
    .pipe report_progress response
so when i look at the server running in the terminal, i get some indication of how many bytes have been accepted by the client. right now, it is very annoying to see the client loading for ages without having any indication whether the transmission will be done in a minute or tomorrow.
is there any middleware to do that? i couldn't find any.
oh, and do i have to call anything on response completion? it does look like it's working automagically right now.
For your second question, you don't have to close anything. The pipe function handles everything for you, even throttling of the streams: if the source stream has more data than the client can handle (due to a poor download speed), it will pause the source stream until the client can consume again, instead of using a bunch of memory server side by reading the whole source at once.
For your first question, to have some stats server side on your streams, you could use a Transform stream like this:
var Transform = require('stream').Transform;
var inherits = require('util').inherits;

function StatsStream(ip, options) {
  Transform.call(this, options);
  this.ip = ip;
}
inherits(StatsStream, Transform);

StatsStream.prototype._transform = function(chunk, encoding, callback) {
  // here some bytes have been read from the source and are
  // ready to go to the destination, do your logging here
  console.log('flowing ', chunk.length, 'bytes to', this.ip);
  // then tell the transform stream that the bytes it should
  // send to the destination are the same chunk you received...
  // (and that no error occurred)
  callback(null, chunk);
};
Then in your request handlers you can pipe like this (sorry, JavaScript):
input.pipe(new StatsStream(req.ip)).pipe(response)
I did this on top of my head so beware :)
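If you want the running byte count the question asks for, rather than per-chunk logging, the same _transform can keep a cumulative total; this variant is my own addition to the answer's StatsStream, so treat it as a sketch:

StatsStream.prototype._transform = function(chunk, encoding, callback) {
  // Accumulate how many bytes have flowed towards this client so far.
  this.bytesSent = (this.bytesSent || 0) + chunk.length;
  console.log(this.ip, '-', this.bytesSent, 'bytes sent so far');
  callback(null, chunk);
};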

gzipping a file with nodejs streams causes memory leaks

I'm trying to do what should be seemingly quite simple: take a file with filename X, and create a gzipped version as "X.gz". Nodejs's zlib module does not come with a convenient zlib.gzip(infile, outfile), so I figured I'd use an input stream, an output stream, and a zlib gzipper, then pipe them:
var zlib = require("zlib"),
    zipper = zlib.createGzip(),
    fs = require("fs");

var tryThing = function(logfile) {
  var input = fs.createReadStream(logfile, {autoClose: true}),
      output = fs.createWriteStream(logfile + ".gz");
  input.pipe(zipper).pipe(output);
  output.on("end", function() {
    // delete original file, it is no longer needed
    fs.unlink(logfile);
    // clear listeners
    zipper.removeAllListeners();
    input.removeAllListeners();
  });
}
however, every time I run this function, the memory footprint of Node.js grows by about 100kb. Am I forgetting to tell the streams they should just kill themselves off again because they won't be needed any longer?
Or, alternatively, is there a way to just gzip a file without bothering with streams and pipes? I tried googling for "node.js gzip a file" but it's just links to the API docs, and stack overflow questions on gzipping streams and buffers, not how to just gzip a file.
I think you need to properly unpipe and close the streams. Simply calling removeAllListeners() may not be enough to clean things up, as streams may be waiting for more data (and thus staying alive in memory unnecessarily).
Also, you're not closing the output stream, and IMO I'd listen on the input stream's end instead of the output's.
// cleanup
input.once('end', function() {
  zipper.removeAllListeners();
  zipper.close();
  zipper = null;
  input.removeAllListeners();
  input.close();
  input = null;
  output.removeAllListeners();
  output.close();
  output = null;
});
Also I don't think the stream returned from zlib.createGzip() can be shared once ended. You should create a new one at every iteration of tryThing:
var input = fs.createReadStream(logfile, {autoClose: true}),
    output = fs.createWriteStream(logfile + ".gz"),
    zipper = zlib.createGzip();

input.pipe(zipper).pipe(output);
Haven't tested this though, as I don't have a memory profiling tool nearby right now.
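Putting both points together, a corrected tryThing might look roughly like this; it is an untested sketch along the lines of the answer above (fresh streams per call, cleanup driven by stream events), using the write stream's 'finish' event so the original is only unlinked after the .gz has been fully flushed to disk:

var zlib = require("zlib"),
    fs = require("fs");

var tryThing = function(logfile) {
  // Create all three streams per call; a gzip stream cannot be reused once it has ended.
  var input = fs.createReadStream(logfile, {autoClose: true}),
      output = fs.createWriteStream(logfile + ".gz"),
      zipper = zlib.createGzip();

  input.pipe(zipper).pipe(output);

  // Writable streams emit 'finish' (not 'end') once everything has been written.
  output.on("finish", function() {
    // delete the original file, it is no longer needed
    fs.unlink(logfile, function(err) {
      if (err) console.error(err);
    });
  });
};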

How do I close a stream that has no more data to send in node.js?

I am using node.js and reading input from a serial port by opening a /dev/tty file. I send a command and read the result of the command, and I want to close the stream once I've read and parsed all the data. I know that I'm done reading data by an end-of-data marker. I'm finding that once I've closed the stream my program does not terminate.
Below is an example of what I am seeing, but it uses /dev/random to slowly generate data (assuming your system isn't doing much). What I find is that the process will only terminate once the device generates more data after the stream has been closed.
var util = require('util'),
    PassThrough = require('stream').PassThrough,
    fs = require('fs');

// If the system is not doing enough to fill the entropy pool
// /dev/random will not return much data. Feed the entropy pool with:
// ssh <host> 'cat /dev/urandom' > /dev/urandom
var readStream = fs.createReadStream('/dev/random');
var pt = new PassThrough();

pt.on('data', function (data) {
  console.log(data);
  console.log('closing');
  readStream.close(); // expect the process to terminate immediately
});

readStream.pipe(pt);
Update 1:
I am back on this issue and have another sample; this one just uses a pty and is easily reproduced in the node REPL. Log in on 2 terminals and, in the call to createReadStream below, use the pty of the terminal you're not running node in.
var fs = require('fs');
var rs = fs.createReadStream('/dev/pts/1'); // a pty that is allocated in another terminal by my user
//wait just a second, don't copy and paste everything at once
process.exit(0);
At this point node will just hang and not exit. This is on 10.28.
Instead of using readStream.close(), try using readStream.pause().
But if you are using the newest version of node, wrap the read stream with a Readable object from the stream module (by isaacs), like this:
var Readable = require('stream').Readable;
var myReader = new Readable().wrap(readStream);
and use myReader in place of readStream after that.
Best of luck! Tell me if this works.
You are closing the /dev/random stream, but you still have a listener for the 'data' event on the pass-through, which will keep the app running until the pass-through is closed.
I'm guessing there is some buffered data from the read stream and until that is flushed the pass-through is not closed. But this is just a guess.
To get the desired behaviour you can remove the event listener on the pass-through like this:
pt.on('data', function (data) {
  console.log(data);
  console.log('closing');
  pt.removeAllListeners('data');
  readStream.close();
});
I am actually piping to an HTTP request, so for me it's:
pt.on('close', () => {
  req.abort();
});

Tailing a named pipe in node.js

I'm using node-tail to read a file in linux and send it down to a socket.
(see: node.js sending data read from a text file)
var io = require('socket.io');
Tail = require('tail').Tail;
tail = new Tail("/tmp/test.txt");

io.sockets.on('connection', function (socket) {
  tail.on("line", function(data) {
    socket.emit('Message', { test: data });
  });
});
Receiving side
var socket = io.connect();
socket.on('Message', function (data) {
  console.log(data.test);
});
This works but when I try to modify this part
tail = new Tail("/tmp/test.txt");
to this
tail = new Tail("/tmp/FIFOFILE");
I can't get any data from it.
Is there any way to read a named pipe in Linux, or a package that can read a named pipe?
I can get it to work in a silly way:
// app.js
process.stdin.resume();
process.stdin.on('data', function(chunk) {
  console.log('D', chunk);
});
And start like this:
node app.js < /tmp/FIFOFILE
If I create a readable stream for the named pipe, it ends after having read the first piece of data written to the named pipe. Not sure why stdin is special.
The OS will send an EOF when the last process finishes writing to the FIFO. If only one process is writing to the FIFO then you get an EOF when that process finishes writing its stuff. This EOF triggers Node to close the stream.
The trick to avoiding this is given by #JoshuaWalsh in this answer, namely: you open the pipe yourself FOR READING AND WRITING - even though you have no intention of ever writing to it. This means that the OS sees that there is always at least one process writing to the file and so you never get the EOF.
So... just add in something like:
let fifoHandle;
fs.open(fifoPath, fs.constants.O_RDWR, function(err, fd) { fifoHandle = fd; console.log('FIFO open'); });
You don't ever have to do anything with fifoHandle - just make sure it sticks around and doesn't get garbage collected.
In fact... in my case I was using createReadStream, and I found that simply adding fs.constants.O_RDWR to it was enough (even though I have no intention of ever writing to the fifo):
let fifo = fs.createReadStream(fifoPath, {flags: fs.constants.O_RDWR});
fifo.on('data', function(data) {
  console.log('Got data: ' + data.toString());
});
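Tying that back to the original question, a minimal sketch of the socket.io side reading the FIFO line by line could look like the following; the use of readline here is my own assumption, not something from the answer above:

const fs = require('fs');
const readline = require('readline');
const io = require('socket.io')(3000);

// Open the FIFO for reading and writing so the OS never sends EOF
// when the last writer disconnects.
const fifo = fs.createReadStream('/tmp/FIFOFILE', { flags: fs.constants.O_RDWR });
const lines = readline.createInterface({ input: fifo });

io.sockets.on('connection', function (socket) {
  lines.on('line', function (line) {
    socket.emit('Message', { test: line });
  });
});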
