Stream files uploaded by client - node.js

Background: Stream Consumption
This is how you can consume and read a stream of data bytes received on the client:
Get a Response object (for example, a fetch response).
Retrieve the ReadableStream from its body field, i.e. response.body.
Since body is an instance of ReadableStream, we can call body.getReader().
Then use the reader via reader.read().
That is simple enough: we can consume the stream of bytes coming from the server, and we can consume it the same way in Node.js and in the browser through the Web Streams API.
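For concreteness, here is a minimal sketch of those steps (the URL is a placeholder; Node.js 18+ and modern browsers both ship fetch and the Web Streams API):
// Inside an async function (or at top level in an ES module).
const response = await fetch('https://example.com/data'); // placeholder URL
const reader = response.body.getReader();
let received = 0;
while (true) {
  const { done, value } = await reader.read();
  if (done) break;           // stream exhausted
  received += value.length;  // value is a Uint8Array chunk
}
console.log(`received ${received} bytes`);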
Stream from path in NodeJS
Creating a stream from a file path in Node.js is quite easy (you just pass the path to fs.createReadStream). You can also switch from a Node stream to a Web stream with ease (stream.Readable.toWeb()).
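A sketch of both, assuming a local file path (Readable.toWeb() has shipped since Node.js 17 and is still marked experimental):
const fs = require('node:fs');
const { Readable } = require('node:stream');

// Node stream from a file path (the path is a placeholder).
const nodeStream = fs.createReadStream('/tmp/example.bin');

// Convert to a Web ReadableStream, so it can be consumed with
// getReader()/read() exactly like a fetch response body.
const webStream = Readable.toWeb(nodeStream);
const reader = webStream.getReader();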
Problem: Stream from user upload action
Can we get a stream directly from a user's file upload in the browser? Can you give a simple example?
The point would be to process the files as the user is uploading them, not to use the Blob stream.
Is it possible to analyze user data as it is being uploaded, i.e. can the upload itself be a stream?
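On the server side, at least, this is possible: Node's incoming request object is itself a readable stream, so the upload body can be analyzed while it is still in flight. A minimal sketch, assuming the client POSTs the raw file bytes (no multipart parsing; the port and the hash-as-analysis choice are arbitrary):
const http = require('node:http');
const crypto = require('node:crypto');

http.createServer((req, res) => {
  // req is a Readable stream: chunks arrive while the upload is in flight.
  const hash = crypto.createHash('sha256');
  req.on('data', (chunk) => hash.update(chunk)); // analyze data as it uploads
  req.on('end', () => res.end(hash.digest('hex')));
}).listen(3000);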

Related

How to save a webRTC stream into a file on server with nodejs?

I get my stream from my client like this:
webrtc_connection.ontrack = async (e) => {
  // TODO: record the incoming track
}
How can I record / save it into a file on the server? Apparently Node.js does not have MediaRecorder, so I am at a loss as to how to go further.
There are two options. The first is to use MediaRecorder + Socket.io + FFmpeg. Here is an example of how to stream from the browser to RTMP via node.js, but instead of streaming you can just save it to a file (a browser-side sketch follows this list):
draw your video on a canvas and use canvas.captureStream() to get a MediaStream from the canvas;
append your audio to the MediaStream from the previous step using MediaStream.addTrack();
use MediaRecorder to get raw data from the MediaStream;
send this raw data via WebSockets to node.js;
use FFmpeg to decode and save your video data to a file.
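A sketch of the browser side (the WebSocket URL is a placeholder, and canvasStream/audioTrack stand for the canvas capture and WebRTC audio track described above; on the Node side, the binary messages would be fed to FFmpeg or appended to a file):
// Mix the canvas video track and the audio track into one stream.
const mixed = new MediaStream([...canvasStream.getTracks(), audioTrack]);
const ws = new WebSocket('ws://localhost:8080'); // placeholder endpoint
const recorder = new MediaRecorder(mixed, { mimeType: 'video/webm' });

recorder.ondataavailable = (e) => {
  if (e.data.size > 0) ws.send(e.data); // raw webm chunks to node.js
};
recorder.start(1000); // emit a chunk roughly every second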
The second is to use node-webrtc. You can join your WebRTC room from the server as another participant and record the media tracks using FFmpeg. Here is an example.

Cloud storage: how to cancel/destroy a stream created with createReadStream()

How can I cancel a stream created with const stream = file.createReadStream()?
I'm using node.js and sending files via fastify to the client. If the client closes the connection, I'd also like to cancel the reading from the stream, so as to close the connection to cloud storage and stop transferring data.
Currently my server continues reading from the stream, but the content is never used, since the outgoing stream to the client is already destroyed. fastify calls stream.destroy() when the receiving end is prematurely closed, but the read stream happily keeps receiving data from the cloud storage.
From reading the source code, this is a PassThrough stream that gets its content only via events and has no connection to the underlying receiving stream.
Is there a method to stop this cloud stream?
So far, the only way I found to avoid this was to call reply.hijack() (https://www.fastify.io/docs/latest/Reference/Reply/#hijack):
reply.hijack() // take over the raw response; fastify no longer manages it
reply.raw.setHeader('Content-Type', object.contentType)
storage.createReadStream().pipe(reply.raw)
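Building on that, a sketch of tearing down the source when the client goes away, assuming the storage client propagates destroy() to its underlying connection (which, given the PassThrough behaviour described above, may not hold for every library):
reply.hijack()
reply.raw.setHeader('Content-Type', object.contentType)

const cloudStream = storage.createReadStream()
// 'close' fires on the raw response when the client disconnects;
// destroying the source should abort the transfer from cloud storage.
reply.raw.on('close', () => cloudStream.destroy())
cloudStream.pipe(reply.raw)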

How to stream the response using wreck node module?

I have a requirement where I need to stream the response from a GET request using the Wreck node module (I cannot use any other node modules, as Wreck is used as part of our framework). I went through the documentation but couldn't find anything helpful. Is there a way to use streaming here?
As per the Wreck docs:
get(uri, [options])
Returns a promise that resolves into an object with the following properties:
- res - The HTTP Incoming Message object, which is a readable stream that has "ended" and contains no more data to read
- payload - The payload in the form of a Buffer or (optionally) parsed JavaScript object (JSON).
As per the Node docs, once a readable stream has emitted the 'end' event, its data has been consumed.
If you really must have a stream, I suggest you have a look at implementing one yourself to basically fake streaming.
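For instance, a sketch of faking it by wrapping the already-buffered payload back into a readable stream with Node's Readable.from() (the URL is a placeholder):
const Wreck = require('@hapi/wreck');
const { Readable } = require('node:stream');

// Inside an async function: payload is a fully buffered Buffer.
const { payload } = await Wreck.get('https://example.com/file');
// Re-expose the buffer as a stream so downstream code that
// expects a readable stream keeps working.
const fakeStream = Readable.from([payload]);
fakeStream.pipe(process.stdout);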

how to create node server that receiving video stream and save the stream as video file?

I have tried to find examples of something like this, but all I found were examples of receiving a picture, not a continuous stream of video.
I need to receive a video and save it to the server's disk until the client sends a flag indicating that the video stream is done.
How can I do this?
For example, you can convert the stream to binary data and send that: Sending_and_Receiving_Binary_Data
You can also use the MediaStream Recording API if you want to process the video/audio stream: docs
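A minimal server-side sketch, assuming the client POSTs the recorded bytes as the request body and signals completion simply by ending the request (path, route, and port are placeholders):
const http = require('node:http');
const fs = require('node:fs');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload') {
    const out = fs.createWriteStream('/tmp/upload.webm'); // placeholder path
    req.pipe(out);                      // write chunks to disk as they arrive
    req.on('end', () => res.end('ok')); // client finished sending
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);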

Download Video Streams from Remote url using NightmareJs

I am trying to build a scraper to download video streams and save them in a private cloud instance using NightmareJs (http://www.nightmarejs.org/)
I have seen the documentation, and it shows how to download simple files like this -
.evaluate(function ev() {
  // Runs in the page context: find the link and fetch it synchronously.
  var el = document.querySelector("[href*='nrc_20141124.epub']");
  var xhr = new XMLHttpRequest();
  xhr.open("GET", el.href, false); // synchronous request
  xhr.overrideMimeType("text/plain; charset=x-user-defined");
  xhr.send();
  return xhr.responseText;
}, function cb(data) {
  // Back in Node: write the binary payload to disk.
  var fs = require("fs");
  fs.writeFileSync("book.epub", data, "binary");
})
-- based on the SO post here -> Download a file using Nightmare
But I want to download video streams using the Node.js streams API. Is there a way to open a stream from a remote URL and pipe it to a local (or another remote) writable stream using Node's built-in stream APIs?
You can check whether the server sends the "Accept-Ranges" (14.5) and "Content-Length" (14.13) headers via a HEAD request to that file, then request smaller chunks of the file using the "Range" (14.35) request header and write each chunk to the target file (you can use append mode to reduce management of the file stream).
Of course, this will be quite slow if you're requesting very small chunks sequentially. You could build a pool of requestors (e.g. 4) and only write the next correct chunk to the file (so the other requestors would not take on future chunks if they are already done downloading).
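A sketch of the sequential variant (the URL, output file name, and chunk size are placeholders; total would come from the Content-Length obtained via the HEAD request, and a real implementation would add error handling plus the requestor pool described above):
const https = require('node:https');
const fs = require('node:fs');

function getChunk(url, start, end) {
  return new Promise((resolve, reject) => {
    https.get(url, { headers: { Range: `bytes=${start}-${end}` } }, (res) => {
      const parts = [];
      res.on('data', (c) => parts.push(c));
      res.on('end', () => resolve(Buffer.concat(parts)));
    }).on('error', reject);
  });
}

async function download(url, total, chunkSize = 1024 * 1024) {
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize, total) - 1;
    const chunk = await getChunk(url, start, end);
    fs.appendFileSync('video.mp4', chunk); // append mode, as suggested above
  }
}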

Resources