Streaming video with socket io - node.js

I am having some difficulty streaming a video file with socket.io and Node. My video file is on my server, and I am using the fs module to read it into a read stream. I am then passing chunks of data to a MediaSource on the client side, which feeds into an HTML5 video tag.
Although the client is receiving the chunks (I'm logging them) and I am appending them to the media source's buffer, nothing shows up in the video tag.
Anyone know how to fix this?
Here's my code:
Client side:
var mediaSource = new MediaSource();
var mimeCodec = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';
document.getElementById('video').src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function (event) {
  var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
  console.log(sourceBuffer);
  socket.on('chunk', function (data) {
    if (!sourceBuffer.updating) {
      sourceBuffer.appendBuffer(data);
      console.log(data);
    }
  });
  socket.emit('go', {});
});
Server side:
var stream = fs.createReadStream(window.currentvidpath);

socket.on('go', function () {
  console.log('WENT');
  stream.addListener('data', function (data) {
    console.log('VIDDATA', data);
    socket.emit('chunk', data);
  });
});
Thanks a lot.

The problem is that you only append to the source buffer when it is not updating:
if (!sourceBuffer.updating) {
  sourceBuffer.appendBuffer(data);
  console.log(data);
}
Here's my console after I added an else branch and logged the times it doesn't append:
SourceBuffer {mode: "segments", updating: false, buffered: TimeRanges, timestampOffset: 0, appendWindowStart: 0…}
site.html:24 connect
site.html:17 ArrayBuffer {}
30 site.html:20 not appending
So it appended one chunk of the video and ignored 30.
You should store the chunks that aren't appended in an array, then append them from a loop with setInterval (or whenever the buffer finishes updating).
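A minimal sketch of that queue idea. createChunkQueue is a hypothetical helper, not part of any library: it stores chunks and appends exactly one whenever the target SourceBuffer reports it is idle, since appendBuffer() is asynchronous and rejects appends while updating is true.

```javascript
// Queue chunks instead of dropping them; drain one at a time while idle.
// "buffer" is expected to look like a SourceBuffer (updating, appendBuffer).
function createChunkQueue(buffer) {
  const queue = [];
  function drain() {
    // appendBuffer() is asynchronous: only one append may be in flight,
    // so append a single chunk and wait until the buffer is idle again.
    if (queue.length > 0 && !buffer.updating) {
      buffer.appendBuffer(queue.shift());
    }
  }
  return {
    push(chunk) { queue.push(chunk); drain(); },
    drain,
    get length() { return queue.length; }
  };
}
```

In the question's client code this would be wired up roughly as: `const q = createChunkQueue(sourceBuffer); socket.on('chunk', data => q.push(data)); sourceBuffer.addEventListener('updateend', q.drain);` — the 'updateend' event fires each time an append finishes, which avoids polling with setInterval.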

Related

How to inject into nodejs stream

I have the following script. It works, but I can't figure out the best way to handle this: when a metadata tag is triggered, I want to stop/pause the stream, play an mp3 URL, and then reconnect to the stream (as a new connection).
My first idea worked, but it seemed to pause the Icecast stream, insert the mp3, and then continue playing from the paused spot, which is not what I want. If the mp3 is two minutes long, then two minutes of the Icecast stream should also be skipped.
var http = require('http'),
    request = require('request');
var url = 'http://stream.radiomedia.com.au:8003/stream'; // URL to a known Icecast stream

var icecast = require('icecast-stack');
var stream = icecast.createReadStream(url);
// var radio = require("radio-stream");
// var stream = radio.createReadStream(url);

var clients = [];

stream.on("connect", function () {
  console.error("Radio Stream connected!");
  //console.error(stream.headers);
});

// Fired after the HTTP response headers have been received.
stream.on('response', function (res) {
  console.error("Radio Stream response!");
  console.error(res.headers);
});

// When a chunk of data is received on the stream, push it to all connected clients
stream.on("data", function (chunk) {
  if (clients.length > 0) {
    for (client in clients) {
      clients[client].write(chunk);
    }
  }
});

// When a 'metadata' event happens, usually a new song is starting.
stream.on('metadata', function (metadata) {
  var title = icecast.parseMetadata(metadata).StreamTitle;
  console.error(title);
});

// Listen on a web port and respond with a chunked response header.
var server = http.createServer(function (req, res) {
  res.writeHead(200, {
    "Content-Type": "audio/mpeg",
    'Transfer-Encoding': 'chunked'
  });
  // Add the response to the clients array to receive streaming
  clients.push(res);
  console.log('Client connected; streaming');
});
server.listen("9000", "127.0.0.1");
console.log('Server running at http://127.0.0.1:9000');
You can't simply concatenate streams arbitrarily like this. With MP3, the bit reservoir will bite you. Generally it will be a small stream glitch, but pickier clients may outright drop the connection.
To do what you want to do, you actually need to decode everything to PCM, mix the audio as you see fit, and then re-encode a fresh stream.
As an added bonus, you won't be tied to a particular codec and bitrate, and can offer an appropriate array of choices to your listeners. You also won't have to worry about the timing of MPEG frames, as your final stream can be sample-accurate.
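A sketch of the mixing step described above, assuming both sources have already been decoded to signed 16-bit PCM at the same sample rate and channel layout (the choice of decoder and encoder is up to you; mixPcm16 is an illustrative helper, not a library function):

```javascript
// Mix two blocks of signed 16-bit PCM samples by summing and clamping.
// Inputs would come from decoders (e.g. an MP3 decoder emitting PCM);
// the result would be fed to a fresh encoder for the output stream.
function mixPcm16(a, b) {
  const out = new Int16Array(Math.max(a.length, b.length));
  for (let i = 0; i < out.length; i++) {
    const sum = (a[i] || 0) + (b[i] || 0);
    // Clamp so overlapping loud samples saturate instead of wrapping around.
    out[i] = Math.max(-32768, Math.min(32767, sum));
  }
  return out;
}
```

To "skip" the Icecast audio while the mp3 plays, you would keep consuming (decoding and discarding, or ducking) the live PCM during the insert, so that when the mp3 ends you resume at the live position rather than the paused one.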

Cannot get node to pipe data to a download file correctly

I'm fairly new to node and streaming, and I am having an issue when attempting to stream a large amount of data to a file on the client browser.
So for example, if on the server I have a large file, test.txt, I can easily stream it to the client browser by setting the attachment header and piping the file to the response, as follows:
res.setHeader('Content-Type', 'text/csv');
res.setHeader('Content-disposition', 'attachment;filename=myfile.text');
fs.createReadStream('./test.txt')
  .pipe(res);
When the user clicks the button, the download begins, and we see the data getting streamed to the download file. The stream takes several minutes, but during this time the client is not blocked and they can continue to do other things while the file is downloaded by the browser.
However, my data is not stored in a file; I need to retrieve it one string at a time from another server. So I'm attempting to create my own read stream and push my data chunk by chunk, but it does not work when I do something like this:
var s = new Readable();
s.pipe(res);
for (let i = 0; i <= total; i++) {
  dataString = // code here to get next string needed to push
  s.push(dataString);
}
s.push(null);
With this code, once the download begins, the client is blocked and cannot perform any other actions until the download is completed. Also, if the data takes more than 30 seconds to stream, we hit the server timeout and the download fails. With the file stream this is not an issue.
How do I get this to act like a file stream and not block the client from making other requests while it downloads? Any recommendations on the best way to implement this would be appreciated.
I was able to resolve this issue by doing something similar to here:
How to call an asynchronous function inside a node.js readable stream
My basic code is as follows, and this is not blocking the client or timing out on the request as the data is continuously piped to the file download on the client side.
res.setHeader('Content-Type', 'text/csv');
res.setHeader('Content-disposition', 'attachment;filename=myfile.text');

function MyStream() {
  var rs = new Readable();
  var hitsadded = 0;
  rs._read = function () {}; // needed to avoid "Not implemented" exception

  getResults(queryString, function getMoreUntilDone(err, res) {
    if (err) {
      logger.logError(err);
    }
    rs.push(res.data);
    hitsadded += res.records;
    if (res.recordsTotal > hitsadded) {
      getNextPage(query, getMoreUntilDone);
    } else {
      rs.push(null);
    }
  });

  return rs;
}

MyStream().pipe(zlib.createGzip()).pipe(res);

Icecast metadata capture

I've been using the package node-icy to grab metadata from an icecast stream.
What it does is grab the metadata from the stream. Then the stream is decoded using lame and played through the Speaker.
server.listen(port, () => {
  icy.get(url, (res) => {
    // log the HTTP response headers
    console.error(res.headers);

    // log any "metadata" events that happen
    res.on('metadata', (metadata) => {
      var parsed = icy.parse(metadata);
      console.log('Metadata event');
      console.error(parsed);
    });

    // Let's play the music (assuming MP3 data).
    // lame decodes and Speaker sends to speakers!
    res.pipe(new lame.Decoder())
       .pipe(new Speaker());
  });
  console.log(`Server on port: ${port}`);
});
This will give me an output of the titles of the songs:
Metadata event
{ StreamTitle: 'ruby the hatchet - planetary space child - killer' }
If I remove
res.pipe(new lame.Decoder())
   .pipe(new Speaker());
then the metadata is grabbed only once. My guess is that the Speaker() pipeline keeps the stream flowing, so when the metadata changes, icy.get fires res.on('metadata', ...) again.
I'm handling the stream on the server and then sending it to the client in Angular 5. Is there a way to keep icy.get(...) listening without using Speaker()? I'm fairly new to streams. Any help would be appreciated.
I was able to solve this problem by using
var icy = require('icy');
var devnull = require('dev-null');

icy.get(url, function (res) {
  // log any "metadata" events that happen
  res.on('metadata', function (metadata) {
    const parsed = icy.parse(metadata);
    console.log('metadata', parsed);
  });
  res.pipe(devnull());
});
You can see it here:
https://github.com/TooTallNate/node-icy/issues/16

HTML5 WebM streaming using chunks from FFMPEG via Socket.IO

I'm trying to make use of websockets to livestream chunks from a WebM stream. The following is some example code on the server side that I have pieced together:
const command = ffmpeg()
  .input('/dev/video0')
  .fps(24)
  .audioCodec('libvorbis')
  .videoCodec('libvpx')
  .outputFormat('webm')

const ffstream = command.pipe()
ffstream.on('data', chunk => {
  io.sockets.emit('Webcam', chunk)
})
I have the server code structured in this manner so ffstream.on('data', ...) can also write to a file. I am able to open the file and view the video locally, but have difficulty using the chunks to render in a <video> tag in the DOM.
const ms = new MediaSource()
const video = document.querySelector('#video')
video.src = window.URL.createObjectURL(ms)

ms.addEventListener('sourceopen', function () {
  const sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
  // read socket
  // ...sourceBuffer.appendBuffer(data)
})
I have something such as the above on my client side. I am able to receive the exact same chunks from my server but the sourceBuffer.appendBuffer(data) is throwing me the following error:
Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.
Question: How can I display these chunks in an HTML5 video tag?
Note: From my reading, I believe this has to do with getting key-frames. I'm not able to determine how to recognize these though.

Streaming Binary with Node.js and WebSockets

I've been googling this and looking around stackoverflow for a while but haven't found a solution - hence the post.
I am playing around with Node.js and WebSockets out of curiosity. I am trying to stream some binary data (an mp3) to the client. My code so far is below but is obviously not working as intended.
I suspect that my problem is that I am not actually sending binary data from the server and would like some clarification/help.
Here's my server...
var fs = require('fs');
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080, host: "127.0.0.1" });

wss.on('connection', function (ws) {
  var readStream = fs.createReadStream("test.mp3", {
    'flags': 'r',
    'encoding': 'binary',
    'mode': 0666,
    'bufferSize': 64 * 1024
  });
  readStream.on('data', function (data) {
    ws.send(data, { binary: true, mask: false });
  });
});
And my client...
context = new webkitAudioContext();
var ws = new WebSocket("ws://localhost:8080");
ws.binaryType = 'arraybuffer';

ws.onmessage = function (evt) {
  context.decodeAudioData(
    evt.data,
    function (buffer) {
      console.log("Success");
    },
    function (error) {
      console.log("Error");
    });
};
The call to decode always ends up in the error callback. I am assuming this is because it is receiving bad data.
So my question is: how do I correctly stream the file as binary?
Thanks
What your server is doing is sending messages consisting of binary audio data in 64 KB chunks to your client. Your client should rebuild the audio file before calling decodeAudioData.
You are calling decodeAudioData every time your client gets a message on the websocket. You have to create a separate buffer to collect all the chunks; then, when the transfer completes, that buffer should be given as input to decodeAudioData.
You have two options now:
1. You load the entire file (fs.read) without using stream events and send the whole file with ws.send (easy to do).
2. You use stream events, modify your client to accept chunks of data, and assemble them before calling decodeAudioData.
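A sketch of the assembly step from option 2: collect the ArrayBuffer chunks as they arrive and join them into a single buffer only once the transfer is complete. concatChunks is a hypothetical helper, and how the server signals completion is up to your protocol:

```javascript
// Join an array of ArrayBuffers into one ArrayBuffer, preserving order.
function concatChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(new Uint8Array(c), offset); // copy this chunk's bytes in place
    offset += c.byteLength;
  }
  return out.buffer; // one contiguous ArrayBuffer for decodeAudioData
}

// In the question's client, roughly:
// const chunks = [];
// ws.onmessage = evt => chunks.push(evt.data);
// // ...when the server signals the transfer is done:
// // context.decodeAudioData(concatChunks(chunks), onSuccess, onError);
```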
Problem solved.
I fixed this issue with a combination of removing the 'encoding': 'binary' parameter from the options passed to createReadStream() and the solution at...
decodeAudioData returning a null error
As per some of my comments, once I updated the createReadStream options, the first chunk played, but every subsequent chunk hit the onError callback from decodeAudioData(). The solution in the link above fixed this for me.
It seems that decodeAudioData() is rather picky about how the chunks it receives are formatted. Apparently they need to be valid chunks...
Define 'valid mp3 chunk' for decodeAudioData (WebAudio API)
