watching streaming HTTP response progress in NodeJS, express

I want to stream sizeable files in NodeJS 0.10.x using express#4.8.5 and pipes. Currently I'm
doing it like this (in CoffeeScript):
app.get '/', ( request, response ) ->
  input = P.create_readstream route
  input
    .pipe P.$split()
    .pipe P.$trim()
    .pipe P.$skip_empty()
    .pipe P.$skip_comments()
    .pipe P.$parse_csv headers: no, delimiter: '\t'
    .pipe response
(P is pipedreams.)
What I would like to have is something like

  .pipe count_bytes # ???
  .pipe response
  .pipe report_progress response
so that when I look at the server running in the terminal, I get some indication of how many bytes have been
accepted by the client. Right now, it is very annoying to watch the client loading for ages without
any indication of whether the transmission will be done in a minute or tomorrow.
Is there any middleware to do that? I couldn't find any.
Oh, and do I have to call anything on response completion? It does look like it's working automagically right now.

For your second question, you don't have to close anything. The pipe function handles everything for you, including throttling of the streams: if the source stream produces more data than the client can handle (for example due to a poor download speed), it pauses the source until the client can consume again, instead of reading the whole source into memory on the server side.
For your first question, to have some stats server side on your streams, what you could use is a Transform stream like:
var Transform = require('stream').Transform;
var inherits = require('util').inherits;

function StatsStream(ip, options) {
  Transform.call(this, options);
  this.ip = ip;
}
inherits(StatsStream, Transform);

StatsStream.prototype._transform = function(chunk, encoding, callback) {
  // here some bytes have been read from the source and are
  // ready to go to the destination, do your logging here
  console.log('flowing', chunk.length, 'bytes to', this.ip);
  // then tell the transform stream that the bytes it should
  // send to the destination are the same chunk you received...
  // (and that no error occurred)
  callback(null, chunk);
};
Then in your request handlers you can pipe like this (sorry, JavaScript):

  input.pipe(new StatsStream(req.ip)).pipe(response)

I did this off the top of my head, so beware :)
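If you want a running total rather than per-chunk sizes, a minimal sketch of a variant _transform could keep a counter on the instance. Note this counts bytes handed to the response stream, which is not exactly the same as bytes already acknowledged by the client:

  StatsStream.prototype._transform = function(chunk, encoding, callback) {
    // keep a running total per connection and log it with each chunk
    this.total = (this.total || 0) + chunk.length;
    console.log('piped', this.total, 'bytes so far to', this.ip);
    callback(null, chunk);
  };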

Related

Nodejs PassThrough Stream

I want to transmit an fs.Readstream over a net.Socket (TCP) stream. For this I use a .pipe.
When the fs.Readstream is finished, I don't want to end the net.Socket stream. That's why I use
readStream.pipe(socket, {
  end: false
})
Unfortunately I don't get 'close', 'finish' or 'end' on the other side. This prevents me from closing my fs.Writestream on the opposite side. However, the net.Socket connection remains open, which I also need, because I would like to receive an ID as a response.
Since I don't get a 'close' or 'finish' on the opposite side, I unfortunately can't end the fs.Writestream and therefore can't send a response with the corresponding ID.
Is there a way to manually send a 'close' or 'finish' event via the net.Socket without closing it?
Using socket.emit, only my own local event listeners react.
Can anyone tell me what I am doing wrong?
var socket : net.Socket; // TCP connection
var readStream = fs.createReadStream('test.txt');

socket.on('connect', () => {
  readStream.pipe(socket, {
    end: false
  })

  readStream.on('close', () => {
    socket.emit('close');
    socket.emit('finish');
  })

  // waiting for answer
  socket.on('data', (c) => {
    console.log('got my answer: ' + c.toString());
  })
})
Well, there's not really much you can do with a single stream except provide some way for the other side to know that the stream has ended programmatically.
When the socket sends an end event it actually flushes the buffer and then closes the TCP connection, which then on the other side is translated into finish after the last byte is delivered. In order to re-use the connection you can consider these two options:
One: Use HTTP keep-alive
As you can imagine you're not the first person having faced this problem. It actually is a common thing and some protocols like HTTP have you already covered. This will introduce a minor overhead, but only on starting and ending the streams - which in your case may be more acceptable than the other options.
Instead of using basic TCP streams you can just as simply use HTTP connections and send your data over HTTP requests; an HTTP POST request would be just fine, and your code wouldn't look any different except for ditching that {end: false}. The socket would need to have its headers sent, so it'd be constructed like this:
const socket = http.request({
  method: 'POST',
  hostname: 'wherever.org',
  port: 9087,
  path: '/somewhere/there',
  headers: {
    'connection': 'keep-alive',
    'transfer-encoding': 'chunked'
  }
}, (res) => {
  // here you can call the code to push more streams, since the connection stays open
});
readStream.pipe(socket); // our socket (i.e. the request) will end, but the underlying channel will stay open.
You actually don't need to wait for the socket to connect and can pipe the stream directly, as in the example above, but do check how this behaves if your connection fails. Waiting for the connect event will also work, since the HTTP request class implements all the TCP connection events and methods (although there may be slight differences in signatures).
More reading:
Wikipedia article on keep-alive - a good explanation of how this works
Node.js http.Agent options - you can control how many connections you have and, more importantly, set the default keep-alive behavior (see the sketch after this list).
Oh and a bit of warning - TCP keep-alive is a different thing, so don't get confused there.
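Since the Agent options mentioned above are the knob for that default behavior, here is a minimal sketch of a keep-alive Agent; the limits are made-up placeholder values:

const http = require('http');

// one shared agent: sockets stay open and get reused between requests
const keepAliveAgent = new http.Agent({
  keepAlive: true, // don't tear the socket down after each request
  maxSockets: 4    // placeholder: at most 4 parallel connections per host
});

// pass it in the same options object as in the request example above:
// http.request({ method: 'POST', agent: keepAliveAgent, ... }, onResponse);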
Two: Use a "magic" end packet
In this case what you'd do is send a simple end packet, for instance \x00 (a NUL character), at the end of the socket. This has a major drawback: you will need to process the stream to make sure a NUL character doesn't otherwise appear in the data, which introduces overhead on the data processing (so more CPU usage).
In order to do it like this, you need to push the data through a transform stream before you send it to the socket. The example below works on strings only, so adapt it to your needs.
const zeroEncoder = new Transform({
  encoding: 'utf-8',
  transform(chunk, enc, cb) {
    // escape any literal NUL bytes so the end marker stays unambiguous
    cb(null, chunk.toString().replace(/\x00/g, '\\x00'));
  },
  flush(cb) {
    // emit the NUL end marker once the source has ended
    cb(null, '\x00');
  }
});
// ... wherever you do the writing:
readStream
  .pipe(zeroEncoder)
  .on('unpipe', () => console.log('this will be your end marker to send in another stream'))
  .pipe(socket, {end: false})
Then on the other side:
tcpStream.on('data', (chunk) => {
  if (chunk.toString().endsWith('\x00')) {
    output.end(decodeZeros(chunk));
    // and rotate output
  } else {
    output.write(decodeZeros(chunk));
  }
});
As you can see this is way more complicated, and it is also just an example - you could simplify it a bit by using JSON, 7-bit transfer encoding or some other scheme, but in all cases it will need some trickery and, most importantly, reading through the whole stream, which costs more memory - so I don't really recommend this approach. If you do though:
Make sure you encode/decode the data correctly
Consider if you can find a byte that won't appear in your data
The above may work with strings, but will be problematic at best with Buffers
Finally there's no error control or flow control - so at least pause/resume logic is needed.
I hope this is helpful.

Buffering a Float32Array to a client

This should be obvious, but for some reason I am not getting any result. I have already spent way too much time just trying different ways to get this working without results.
TLDR: A shorter way to explain this question could be: I know how to stream a sound from a file. How to stream a buffer containing sound that was synthesized on the server instead?
This works:
client:
var stream = ss.createStream();
ss(socket).emit('get-file', stream, data.bufferSource);
var parts = [];
stream.on('data', function(chunk) {
  parts.push(chunk);
});
stream.on('end', function() {
  var blob = new Blob(parts, {type: "audio"});
  if (cb) {
    cb(blob);
  }
});
server (in the 'socket-connected' callback of socket.io)
var ss = require('socket.io-stream');
// ....
ss(socket).on('get-file', (stream:any, filename:any) => {
  console.log("get-file", filename);
  fs.createReadStream(filename).pipe(stream);
});
Now, the problem:
I want to alter this audio buffer and send the modified audio instead of just the file. I converted the ReadStream into a Float32Array and did some processing sample by sample. Now I want to send that modified Float32Array to the client.
In my view, I just need to replace the fs.createReadStream(filename) with (new Readable()).push(modifiedSoundBuffer). However, I get a TypeError: Invalid non-string/buffer chunk. Interestingly, if I convert this modifiedSoundBuffer into a Uint8Array, it doesn't yell at me, and the client gets a large array, which looks good; only that all the array values are 0. I guess it's flooring all the values?
ss(socket).on('get-buffer', (stream:any, filename:any) => {
  let readable = (new Readable()).push(modifiedFloat32Array);
  readable.pipe(stream);
});
I am trying to use streams for two reasons: sound buffers are large, and to allow concurrent processing in the future
What if you convert the Float32Array object to a Buffer before sending, like this: readable.push(Buffer.from(modifiedSoundBuffer))?
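As a sketch of that idea (not verbatim from the answer above, and using a hypothetical helper name): Buffer.from(float32Array) copies each sample as an integer byte, which can be where the zeros come from, whereas wrapping the array's underlying ArrayBuffer keeps the raw IEEE-754 float bytes intact. The names modifiedSoundBuffer and stream come from the question:

const { Readable } = require('stream');

function sendFloat32(modifiedSoundBuffer, stream) {
  // view over the same memory: raw float bytes, no per-sample truncation
  const buf = Buffer.from(modifiedSoundBuffer.buffer,
                          modifiedSoundBuffer.byteOffset,
                          modifiedSoundBuffer.byteLength);

  const readable = new Readable({ read() {} }); // no-op _read: we push manually
  readable.push(buf);   // queue the whole buffer...
  readable.push(null);  // ...and mark end-of-stream
  readable.pipe(stream);
}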

cannot pipe mp4 file in node.js

I am trying to stream an mp4 file in my root directory by piping it, but I cannot get it to work; it starts buffering for streaming but never plays the file.
Here is my code:
var http = require('http'),
    server = http.createServer(),
    fs = require('fs'),
    rs = fs.createReadStream('./Practice/Jumanji.mp4');

server.on('request', function(req, res) {
  res.writeHead(200, {
    'Content-Type': 'video/mp4'
  });
  rs.pipe(res);
  res.end();
});

server.listen(4000);
The movie tries to load but never does.
There are two problems in your code that I can see:
Streams are one-time use objects, so once you fetch one video, a second request would fail because the stream was already closed. Your rs = fs.createReadStream('./Practice/Jumanji.mp4'); line should be inside the callback.
Likely the cause of your error in this case is res.end();. That will immediately close res before your pipe has had time to write the video, so you are essentially saying "Send this stream" followed immediately by "Send nothing". You should delete that line, since the pipe will automatically close res when all the data is written.
All this said, there is a lot of error handling logic that your example is missing, which you may want to consider.
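Putting both fixes together, a minimal corrected sketch might look like the following; the only addition beyond the two fixes is a bare-bones error handler on the read stream:

var http = require('http'),
    fs = require('fs');

var server = http.createServer(function(req, res) {
  // one fresh stream per request; streams are single-use
  var rs = fs.createReadStream('./Practice/Jumanji.mp4');

  rs.on('open', function() {
    res.writeHead(200, { 'Content-Type': 'video/mp4' });
    // no res.end() here: pipe() ends res once the file has been written
    rs.pipe(res);
  });

  rs.on('error', function(err) {
    // e.g. the file is missing: fail the request instead of hanging it
    res.writeHead(500);
    res.end('could not read video');
  });
});

server.listen(4000);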

Basic streams issue: Difficulty sending a string to stdout

I'm just starting to learn about streams in node. I have a string in memory and I want to put it in a stream that applies a transformation and pipe it through to process.stdout. Here is my attempt to do it:
var through = require('through');
var stream = through(function write(data) {
  this.push(data.toUpperCase());
});
stream.push('asdf');
stream.pipe(process.stdout);
stream.end();
It does not work. When I run the script on the cli via node, nothing is sent to stdout and no errors are thrown. A few questions I have:
If you have a value in memory that you want to put into a stream, what is the best way to do it?
What is the difference between push and queue?
Does it matter if I call end() before or after calling pipe()?
Is end() equivalent to push(null)?
Thanks!
Just use the vanilla stream API
var Transform = require("stream").Transform;
// create a new Transform stream
var stream = new Transform({
  decodeStrings: false,
  encoding: "ascii"
});
// implement the _transform method
stream._transform = function _transform(str, enc, done) {
  this.push(str.toUpperCase() + "\n");
  done();
};
// connect to stdout
stream.pipe(process.stdout);
// write some stuff to the stream
stream.write("hello!");
stream.write("world!");
// output
// HELLO!
// WORLD!
Or you can build your own stream constructor. This is really the way the stream API is intended to be used
var Transform = require("stream").Transform;
function MyStream() {
  // call Transform constructor with `this` context
  // {decodeStrings: false} keeps data as `string` type instead of `Buffer`
  // {encoding: "ascii"} sets the encoding for our strings
  Transform.call(this, {decodeStrings: false, encoding: "ascii"});
  // our function to do "work"
  function _transform(str, encoding, done) {
    this.push(str.toUpperCase() + "\n");
    done();
  }
  // export our function
  this._transform = _transform;
}
// extend the Transform.prototype to your constructor
MyStream.prototype = Object.create(Transform.prototype, {
  constructor: {
    value: MyStream
  }
});
Now use it like this
// instantiate
var a = new MyStream();
// pipe to a destination
a.pipe(process.stdout);
// write data
a.write("hello!");
a.write("world!");
Output
HELLO!
WORLD!
Some other notes about .push vs .write.
.write(str) adds data to the writable buffer. It is meant to be called externally. If you think of a stream like a duplex file handle, it's just like fwrite, only buffered.
.push(str) adds data to the readable buffer. It is only intended to be called from within our stream.
.push(str) can be called many times. Watch what happens if we change our function to
function _transform(str, encoding, done) {
  this.push(str.toUpperCase());
  this.push(str.toUpperCase());
  this.push(str.toUpperCase() + "\n");
  done();
}
Output
HELLO!HELLO!HELLO!
WORLD!WORLD!WORLD!
First, you want to use write(), not push(). write() puts data in to the stream, push() pushes data out of the stream; you only use push() when implementing your own Readable, Duplex, or Transform streams.
Second, you'll only want to write() data to the stream after you've set up the pipe() (or added some event listeners). If you write to a stream with nothing wired to the other end, the data you've written will be lost. As #naomik pointed out, this isn't true in general, since a Writable stream will buffer write()s. In your example you do need to write() after pipe(), though; otherwise the process will end before anything is written to STDOUT. This is possibly due to how the through module is implemented, but I don't know that for sure.
So, with that in mind, you can make a couple simple changes to your example to get it to work:
var through = require('through');
var stream = through(function write(data) {
  this.push(data.toUpperCase());
});
stream.pipe(process.stdout);
stream.write('asdf');
stream.end();
Now, for your questions:
The easiest way to get data from memory into a writable stream is to simply write() it, just like we're doing with stream.write('asdf') in your example.
As far as I know, the stream doesn't have a queue() function; did you mean write()? Like I said above, write() is used to put data into a stream, and push() is used to push data out of the stream. Only call push() in your own stream implementations.
Only call end() after all your data has been written to your stream. end() basically says: "Ok, I'm done now. Please finish what you're doing and close the stream."
push(null) is pretty much equivalent to end(). That being said, don't call push(null) unless you're doing it inside your own stream implementation (as stated above). It's almost always more appropriate to call end().
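To make the push(null)-inside-an-implementation point concrete, here's a tiny sketch of a custom Readable; the letters array is just a made-up data source:

var Readable = require("stream").Readable;

var letters = ["a", "b", "c"];
var source = new Readable();

source._read = function() {
  if (letters.length > 0) {
    this.push(letters.shift()); // queue the next chunk for readers
  } else {
    this.push(null);            // inside an implementation, this is how you say "end"
  }
};

source.pipe(process.stdout); // prints: abc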
Based on the examples for stream (http://nodejs.org/api/stream.html#stream_readable_pipe_destination_options)
and through (https://www.npmjs.org/package/through)
it doesn't look like you are using your stream correctly... What happens if you use write(...) instead of push(...)?

Performing piped operations on individual chunks (node-wav)

I'm new to node and I'm working on an audio stream server. I'm trying to process / transform the chunks of a stream as they come out of each pipe.
So, the file stream file = fs.createReadStream(path) is piped into wavy via file.pipe(wavy) (remove headers and output raw PCM), which is piped into .pipe(waver) (add a proper wav header to each chunk), which is piped into .pipe(spark) (output each chunk to the client).
The idea is that each filestream chunk has headers removed if any (only applies to first chunk), then using the node-wav Writer that chunk is endowed with headers and then sent to the client. As I'm sure you guessed this doesn't work.
The pipe operations into node-wav are acting on the entire filestream, not the individual chunks. To confirm I've checked the output client side and it is effectively dropping the headers and re-adding them to the entire data stream.
From what I've read of the Node Stream docs it seems like what I'm trying to do should be possible, just not the way I'm doing it. I just can't pin down how to accomplish this.
Is it possible, and if so what am I missing?
Complete function:
processAudio = (path, spark) ->
  wavy = new wav.Reader()
  waver = new wav.Writer()
  file = fs.createReadStream(path)
  file.pipe(wavy).pipe(waver).pipe(spark)
I don't really know about wavs and headers, but if you're "trying to process / transform the chunks of a stream as they come out of each pipe", you can use a Transform stream.
It permits you to sit between 2 streams and modify the bytes between them:
var util = require('util');
var Transform = require('stream').Transform;

util.inherits(Test, Transform);

function Test(options) {
  Transform.call(this, options);
}

Test.prototype._transform = function(chunk, encoding, cb) {
  // do something with chunk, then pass a modified chunk (or not)
  // to the downstream
  cb(null, chunk);
};
To observe the stream and potentially modify it, pipe like:
file.pipe(wavy).pipe(new Test()).pipe(waver).pipe(spark)
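As a purely illustrative example of the "do something with chunk" part (assuming the raw PCM coming out of wav.Reader is 16-bit little-endian and that chunks happen to land on sample boundaries; a real implementation would have to handle samples split across chunks), the _transform body could halve the volume of whatever audio flows past:

Test.prototype._transform = function(chunk, encoding, cb) {
  // halve each 16-bit signed sample in place
  for (var i = 0; i + 1 < chunk.length; i += 2) {
    chunk.writeInt16LE(chunk.readInt16LE(i) >> 1, i);
  }
  cb(null, chunk); // pass the modified chunk along
};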
