NodeJS - Live Stream Audio to Specific URL with mp3 audio "chunks"

I'm working on developing an application that will capture audio from the browser in 5 second "chunks" (these are full audio files and not simply partial files), send these 5 second chunks to the server, convert them from webm to mp3 on the server, and then broadcast the mp3 file to clients connected via a websocket or a static URL.
I've successfully managed to do parts 1 and 2; however, I'm not quite sure of the best approach for transmitting the created mp3 audio file to the user. My thinking was to generate a single URL for clients to listen in to, e.g. http://localhost/livestream.mp3 (a live stream URL that would automatically update itself with the latest audio data), or to emit the audio files to the clients over a websocket and attempt to play these sequenced audio files seamlessly without any noticeable gaps as they switch out.
Here's a snippet of my TypeScript code where I create the mp3 file. I've pointed out the area where I would perform the write stream, and from there I would expect to pipe it to users when they make an HTTP request.
private createAudioFile(audioObj: StreamObject, socket: SocketIO.Socket): void {
  const directory: string = `${__dirname}/streams/live`;
  fs.writeFile(`${directory}/audio_${audioObj.time}.webm`, audioObj.stream, (err: NodeJS.ErrnoException) => {
    if (err) logger.default.info(err.toString());
    try {
      const process: childprocess.ChildProcess = childprocess.spawn('ffmpeg', ['-i', `${directory}/audio_${audioObj.time}.webm`, `${directory}/audio_${audioObj.time}.mp3`]);
      process.on('exit', () => {
        // Ideally, this is where I would be broadcasting the audio from
        // the static URL by adding the new stream data to it, or by
        // emitting it out to all clients connected to my websocket
        // const wso = fs.createWriteStream(`${directory}/live.mp3`);
        // const rso = fs.createReadStream(`${directory}/audio_${audioObj.time}.mp3`);
        // rso.pipe(wso);
        if (audioObj.last == true) {
          this.archiveAudio(directory, audioObj.streamName);
        }
      });
    } catch (e) {
      logger.default.error('CRITICAL ERROR: Exception occurred when converting file to mp3:');
      logger.default.error(e);
    }
  });
}
I've seen a number of questions out there that ask for a similar concept, but not quite the final goal that I'm looking for. Is there a way to make this work?
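One way to serve the static-URL option is to keep hold of every connected HTTP response and write each freshly converted mp3 chunk to all of them. The sketch below only illustrates that idea; Express, the /livestream.mp3 route, and the broadcastChunk helper are assumptions, and broadcastChunk would be called from the ffmpeg 'exit' handler above.
// Minimal sketch: broadcast freshly converted mp3 chunks to HTTP listeners.
// Express, the route and the `listeners` array are assumptions, not part of the original code.
const express = require('express');
const fs = require('fs');

const app = express();
const listeners = [];                       // currently connected HTTP responses

app.get('/livestream.mp3', (req, res) => {
  res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
  listeners.push(res);
  req.on('close', () => {
    const i = listeners.indexOf(res);
    if (i !== -1) listeners.splice(i, 1);   // drop listeners that disconnect
  });
});

// Call from the ffmpeg 'exit' handler once audio_<time>.mp3 exists.
function broadcastChunk(mp3Path) {
  fs.readFile(mp3Path, (err, data) => {
    if (err) return console.error(err);
    listeners.forEach((res) => res.write(data));   // append raw mp3 frames to each response
  });
}

app.listen(8080);
Because each 5 second file is encoded separately, small gaps from encoder padding can still be audible at chunk boundaries; feeding all of the incoming webm into one long-running ffmpeg process and serving its single mp3 output stream is one way around that.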

Related

Sending data from Screen over Socket IO?

I have been googling around but cannot find a clear answer to this.
I am making a chrome extension which records tabs. The idea is to stream getUserMedia to the backend using Websockets (specifically Socket.io) where the backend writes to a file until a specific condition (a value in the backend) is set.
The problem is, I do not know how I would call the backend with a specific ID, and also how I would correctly write to the file without corrupting it.
You are sending the output from MediaRecorder, via a websocket, to your back end.
Presumably you do this from within MediaRecorder's ondataavailable handler.
It's quite easy to stuff that data into a websocket:
function ondataavailable(event) {
  event.data.arrayBuffer().then(buf => { socket.emit('media', buf) })
}
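For context, a minimal sketch of the recorder setup that drives that handler; the mediaStream variable, the video/webm mimeType, and the 1000 ms timeslice are assumptions rather than anything from the original answer.
// Sketch of the MediaRecorder that feeds ondataavailable above.
// `mediaStream` is assumed to come from getUserMedia() or chrome.tabCapture.
const recorder = new MediaRecorder(mediaStream, { mimeType: 'video/webm' });
recorder.ondataavailable = ondataavailable;   // the handler shown above
recorder.start(1000);                         // produce a Blob roughly every 1000 ms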
In the backend you must concatenate all the data you receive, in the order of reception, into a media file; appending each received data item to the file as it arrives is enough. This media file likely has a video/webm MIME type; if you give it the .webm filename extension most media players will handle it correctly. Notice that the media file will be useless if it doesn't contain all the Blobs in order: the first few Blobs contain metadata needed to make sense of the stream.
Server-side you can use the socket.id attribute to make up a file name; that gives you a unique file for each socket connection. Something like this sleazy poorly-performing non-debugged not-production-ready code will do that.
io.on("connection", (socket) => {
if (!socket.filename)
socket.filename = path.join(__dirname, 'media', socket.id + '.webm))
socket.on('filename', (name) => {
socket.filename = path.join(__dirname, 'media', name + '.webm))
})
socket.on('media', (buf) => {
fs.appendFile(buf, filename)
})
})
On the client side you could set the filename with this.
socket.emit('filename', 'myFavoriteScreencast')

HTML5 WebM streaming using chunks from FFMPEG via Socket.IO

I'm trying to make use of websockets to livestream chunks from a WebM stream. The following is some example code on the server side that I have pieced together:
const command = ffmpeg()
  .input('/dev/video0')
  .fps(24)
  .audioCodec('libvorbis')
  .videoCodec('libvpx')
  .outputFormat('webm')

const ffstream = command.pipe()
ffstream.on('data', chunk => {
  io.sockets.emit('Webcam', chunk)
})
I have the server code structured in this manner so ffstream.on('data', ...) can also write to a file. I am able to open the file and view the video locally, but have difficulty using the chunks to render in a <video> tag in the DOM.
const ms = new MediaSource()
const video = document.querySelector('#video')
video.src = window.URL.createObjectURL(ms)
ms.addEventListener('sourceopen', function () {
  const sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
  // read socket
  // ...sourceBuffer.appendBuffer(data)
})
I have something such as the above on my client side. I am able to receive the exact same chunks from my server but the sourceBuffer.appendBuffer(data) is throwing me the following error:
Failed to execute 'appendBuffer' on 'SourceBuffer': This SourceBuffer has been removed from the parent media source.
Question: How can I display these chunks in an HTML5 video tag?
Note: From my reading, I believe this has to do with getting key-frames. I'm not able to determine how to recognize these though.
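One common cause of that appendBuffer error is appending before 'sourceopen' has fired or while the SourceBuffer is still updating. A queued-append pattern like the sketch below avoids both; it reuses the 'Webcam' event from the server snippet, and the rest is an assumption rather than a confirmed fix for this exact setup.
// Sketch: append only while the SourceBuffer is idle, queueing chunks otherwise.
const ms = new MediaSource()
const video = document.querySelector('#video')
video.src = window.URL.createObjectURL(ms)

const queue = []
let sourceBuffer = null

ms.addEventListener('sourceopen', () => {
  sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"')
  sourceBuffer.addEventListener('updateend', () => {
    if (queue.length && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift())
    }
  })
})

socket.on('Webcam', (chunk) => {
  const data = new Uint8Array(chunk)
  if (sourceBuffer && !sourceBuffer.updating && queue.length === 0) {
    sourceBuffer.appendBuffer(data)
  } else {
    queue.push(data)                // wait for 'updateend' before appending
  }
})
Note that MSE also needs the stream to begin with the WebM initialization segment, so a client that joins mid-stream has to receive that first chunk (the server could cache and replay it) before key-frames help.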

Record Internet Audio Stream in NodeJS

I have an internet audio stream that's constantly being broadcast (accessible via http url), and I want to somehow record that with NodeJS and write files that consist of one-minute segments.
Every module or article I find on the subject is all about streaming from NodeJS to the browser. I just want to open the stream and record it (time block by time block) to files.
Any ideas?
I think the project at https://github.com/TooTallNate/node-icy makes this easy: just do what you need to with the res object. In the example it is sent to the audio system:
var icy = require('icy');
var lame = require('lame');
var Speaker = require('speaker');

// URL to a known ICY stream
var url = 'http://firewall.pulsradio.com';

// connect to the remote stream
icy.get(url, function (res) {

  // log the HTTP response headers
  console.error(res.headers);

  // log any "metadata" events that happen
  res.on('metadata', function (metadata) {
    var parsed = icy.parse(metadata);
    console.error(parsed);
  });

  // Let's play the music (assuming MP3 data).
  // lame decodes and Speaker sends to speakers!
  res.pipe(new lame.Decoder())
     .pipe(new Speaker());
});
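Since the goal here is one-minute files rather than playback, a hedged variation on the same res object is to rotate a write stream on a timer; the file naming and the plain 60-second interval below are assumptions.
var icy = require('icy');
var fs = require('fs');

var url = 'http://firewall.pulsradio.com';

icy.get(url, function (res) {
  var current = null;

  // close the old file (if any) and start a new one
  function rotate() {
    if (current) current.end();
    current = fs.createWriteStream('segment-' + Date.now() + '.mp3');
  }

  rotate();
  setInterval(rotate, 60 * 1000);   // new file every minute

  res.on('data', function (chunk) {
    current.write(chunk);           // raw MP3 bytes from the stream
  });
});
The cut points will not land exactly on MP3 frame boundaries, so some players may glitch for an instant at the start of a segment; segmenting with ffmpeg instead would give cleaner edges if that matters.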

How to bridge byte array and audio streaming?

I'm creating a relay server for my streaming app. Basically, it should work like this:
Client A streams microphone audio to server through sockets
Server gets the stream and maybe stores it somewhere temporarily? (not sure)
Client B gets a stream from server and plays it.
Basically, I have 1st part done(sending mic audio to server):
while (isStreaming)
{
    minBufSize = recorder.read(buffer, 0, buffer.length);
    mSocket.emit("stream", Arrays.toString(buffer));
}
And 3rd part done, simply playing audio:
mediaplayer.reset();
mediaplayer.setDataSource("http://192.168.1.2:1337/stream");
mediaplayer.prepare();
mediaplayer.start();
Now I'm not sure how to bridge incoming byte array and streaming. Here is my current server code:
var ms = require('mediaserver');

// from server to Client B
exports.letsStream = function(req, res, next) {
  ms.pipe(req, res, "sample_song_music_file.mp3");
};

// from Client A to server
exports.handleSocketConnection = function(socket)
{
  console.log("connected");
  socket.on('stream', function(data)
  {
    var bytes = JSON.parse(data);
    console.log("GETTING STREAM:" + bytes);
  });
}
Any suggestions? How can I directly stream that byte array?
The mediaserver module only supports streaming existing audio files, not a "live" stream, so this approach won't work.
One way to achieve the three tasks would be:
https://www.npmjs.com/package/microphone to read the microphone audio as a byte stream.
http://binaryjs.com/ to handle transmitting the byte stream over websockets to the server and then on to the client. Set up two separate paths, one for sending the data and one for receiving, and pipe the data from one stream to the other on the server (see the sketch after this list).
Use https://github.com/TooTallNate/node-speaker to play the incoming PCM data stream on Client B
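A minimal sketch of that bridging step, using plain Socket.IO rather than BinaryJS; the port and the 'stream'/'audio' event names are assumptions.
// Sketch: relay raw audio chunks from Client A to every listening Client B.
const { Server } = require('socket.io');
const io = new Server(1337);

io.on('connection', (socket) => {
  // Client A emits binary chunks on 'stream'
  socket.on('stream', (chunk) => {
    socket.broadcast.emit('audio', chunk);   // forward to everyone else
  });
});
On the Android side it also helps to emit the raw byte[] instead of Arrays.toString(buffer): Socket.IO can carry binary payloads, and the string form bloats the data and has to be parsed back into bytes.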

Using getUserMedia to get audio from the microphone as it is being recorded?

I'm trying to write a Meteor.JS app that uses peer to peer "radio" communication. When a user presses a button it broadcasts their microphone output to people.
I have some code that gets permission to record audio, and it successfully gets a MediaStream object, but I can't figure out how to get the data from the MediaStream object as it is being recorded.
I see there is a method defined somewhere for getting all of the tracks of the recorded audio. I'm sure I could find a way to write some kind of loop that notifies me when audio has been added, but it seems like there should be a native, event-driven way to retrieve the audio from getUserMedia. Am I missing something? Thanks
What you will want to do is access the stream through the Web Audio API (for the recording part), after assigning a variable to the stream you grabbed through getUserMedia (I call it localStream). You can create as many MediaStreamSource nodes as you want from one stream, so you can record it WHILE sending it to numerous people through different RTCPeerConnections.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var source = audioContext.createMediaStreamSource(localStream);

var AudioRecorder = function (source) {
    var recording = false;
    var worker = new Worker(WORKER_PATH);
    var config = {};
    var bufferLen = 4096;
    this.context = source.context;
    this.node = (this.context.createScriptProcessor ||
                 this.context.createJavaScriptNode).call(this.context,
                 bufferLen, 2, 2);
    this.node.onaudioprocess = function (e) {
        var sample = e.inputBuffer.getChannelData(0);
        // do what you want with the audio sample: push to a blob or send over a WebSocket
    };
    source.connect(this.node);
    this.node.connect(this.context.destination);
};
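As a rough sketch of the 'send over a WebSocket' option mentioned in the comment above, the Float32 samples could be down-converted to 16-bit PCM before they go over the wire; the socket variable and the 'audio' event name are assumptions.
// Sketch: convert a Float32Array of samples to 16-bit PCM and emit it.
function sendSample(sample) {
    var pcm16 = new Int16Array(sample.length);
    for (var i = 0; i < sample.length; i++) {
        var s = Math.max(-1, Math.min(1, sample[i]));   // clamp to [-1, 1]
        pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;     // scale to 16-bit range
    }
    socket.emit('audio', pcm16.buffer);                 // send the raw bytes
}
// inside onaudioprocess: sendSample(e.inputBuffer.getChannelData(0));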
Here is a version I wrote/modified to send audio over websockets for recording on a server.
For sending the audio only when it is available, you COULD use websockets or a webrtc peerconnection.
You will grab the stream through the getUserMedia success object (you should have a global variable that holds the stream for all your connections). When it becomes available, you can use a signalling server to forward the requesting SDPs to the audio supplier, and you can set the requesting SDPs to receive-only on your connection.
PeerConnection example 1
PeerConnection example 2
Try with code like this:
navigator.webkitGetUserMedia({ audio: true, video: false },
  function (stream) { // Success Callback
    var audioElement = document.createElement("audio");
    document.body.appendChild(audioElement);
    audioElement.src = URL.createObjectURL(stream);
    audioElement.play();
  },
  function () { // Error callback
    console.log("error");
  });
You may use the stream from the success callback to create an object URL and pass it into an HTML5 audio element.
Fiddle around in http://jsfiddle.net/veritas/2B9Pq/
