Stream audio simultaneously from a SoundCloud source with Node.js

I am using the SoundCloud API from a Node server. I want to stream an audio track simultaneously to multiple users.
I tried something like this (using the code from this question: Streaming audio from a Node.js server to HTML5 <audio> tag), but it does not work. Any idea how I could do this?
var radio = require("radio-stream");
var http = require('http');
var url = "http://api.soundcloud.com/tracks/79031167/stream?client_id=db10c5086fe237d1718f7a5184f33b51";

var stream = radio.createReadStream(url);
var clients = [];

stream.on("connect", function () {
  console.error("Radio Stream connected!");
  console.error(stream.headers);
});

stream.on("data", function (chunk) {
  if (clients.length > 0) {
    for (client in clients) {
      clients[client].write(chunk);
    }
  }
});

stream.on("metadata", function (title) {
  console.error(title);
});

var server = http.createServer(function (req, res) {
  res.writeHead(200, {
    "Content-Type": "audio/mpeg",
    'Transfer-Encoding': 'chunked'
  });
  clients.push(res);
  console.log('Client connected; streaming');
});

server.listen("8000", "0.0.0.0");
console.log('Server running at http://127.0.0.1:8000');

There are several problems:
Follow Redirects
The radio-stream module that you're using hasn't been updated in 4 years. That's an eternity in the lifetime of a Node.js API. I recommend not using it, as there are undoubtedly compatibility issues with current and future versions of Node.js. At a minimum, there are much better ways of handling this now with the new streams API.
In any case, that module does not follow HTTP redirects. The SoundCloud API is redirecting you to the actual media file.
Besides, the radio-stream module is built to demux SHOUTcast/Icecast style metadata, not MP3 ID3 data. It won't help you.
All you need is a simple http.get(). You can then either follow the redirect yourself, or use the request package. More here: How do you follow an HTTP Redirect in Node.js?
Chunked Encoding
Many streaming clients cannot deal with chunked encoding. Node.js (correctly) adds it when you have streaming output. For our purposes though, let's disable it.
res.useChunkedEncodingByDefault = false;
https://stackoverflow.com/a/11589937/362536
Building a Coherent Stream
In theory, you can just append MPEG stream after MPEG stream and all will work fine. In practice, this doesn't work. ID3 tags will corrupt the stream. One file might be at a different sample rate than the other, and most software will not be able to switch the hardware to the new sample rate on the fly. Basically, you cannot reliably do what you're trying to do.
The only thing you can do is re-encode the entire stream by playing back these audio files, and getting a solid stream out the other end. This gives you the added bonus that you can handle other codecs and formats, not just MP3.
To handle many of your codec issues, you can utilize FFmpeg. However, you're going to need a way to play back those files to FFmpeg for encoding.
Rate Limiting
You must stream audio at the rate of playback. (You can send an initial buffer to get clients started quickly, but you can't keep slamming data at them as fast as possible.) If you don't do this, you will run out of memory on the server very quickly, as clients will lower their TCP window size down to zero and stay there until the audio has caught up enough to allow buffering more data. Since you're not using pipe(), your streams are in flowing mode and will buffer indefinitely on the server. Now, this is actually a good thing in some ways, because it prevents one slow client from slowing down the others. It's a bad thing in that your code streams as fast as possible, not at the rate of playback.
If you play the audio back to another encoder, use the real-time clock, averaged over several seconds, as your pacing clock. It doesn't have to be perfect; that's what client buffers are for. If you're playing back to an audio device, it has its own clock, of course, which will be used.
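A minimal sketch of pacing output against the real-time clock, assuming a constant-bitrate 128 kbps MP3 stream; clients and buffered are placeholders for your connected responses and queued source chunks:

```javascript
var BITRATE = 128000;             // assumed constant 128 kbps MP3
var BYTES_PER_SEC = BITRATE / 8;  // 16000 bytes of audio per second

var clients = [];                 // connected http responses
var buffered = [];                // chunks queued from the source stream

// Send at most one second of audio per call.
function tick() {
  var sent = 0;
  while (sent < BYTES_PER_SEC && buffered.length > 0) {
    var chunk = buffered.shift();
    clients.forEach(function (res) { res.write(chunk); });
    sent += chunk.length;
  }
  return sent;
}

// The real-time clock drives the output; unref() so the timer alone
// doesn't keep the process alive.
setInterval(tick, 1000).unref();
```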
What you should actually do
You've stumbled into a huge project. I strongly recommend using Liquidsoap instead. There are ways you can control it from Node.js. From there, use a server like Icecast for your streaming.

Related

How to save a webRTC stream into a file on server with nodejs?

I get my stream from my client like this:
webrtc_connection.ontrack = async (e) => {
  // TODO: record
};
How can I record / save it into a file on the server? Apparently Node.js does not have MediaRecorder, so I am at a loss for going further.
There are two options. The first is to use MediaRecorder + Socket.io + FFmpeg. Here is an example of how to stream from the browser to RTMP via Node.js, but instead of streaming, you can just save it to a file:
draw your video on canvas, use canvas.captureStream() to get MediaStream from the canvas;
append your audio to MediaStream that you got in the previous step using MediaStream.addTrack();
use MediaRecorder to get raw data from the MediaStream;
send this raw data via WebSockets to node.js;
use FFmpeg to decode and save your video data to a file.
The second is to use node-webrtc. You can join your WebRTC room from the server as another participant and record the media tracks using FFmpeg. Here is an example.

Re-stream icecast stream through nodejs

I want to play our Icecast stream through Node.js, so that I can read metadata and push another audio file at key parts.
What I am wondering is why the following script won't allow a user to hear the stream.
var http = require('http'),
    request = require('request'),
    remote = 'http://stream.radiomedia.com.au:8003/stream';

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'audio/mpeg',
    'Content-Length': 1500
  });

  // http://somewhere.com/noo.bin
  var remoteUrl = remote + req.url;
  request(remoteUrl).pipe(res);
}).listen(8080);
'Content-Length': 1500
That's your primary problem. You need to leave the Content-Length unspecified, as it's indefinite for your stream.
Also, this will cause the server to use chunked transfer encoding, which many clients these days can handle just fine. Some can't, so if legacy client compatibility matters to you, you'll have to disable chunked transfer encoding.
play our icecast stream through nodejs, so that I can read metadata and push another audio file at key parts.
This isn't a trivial thing to do. MP3 uses the concept of a bit reservoir, so you cannot arbitrarily trim the stream, even on frame boundaries, unless you disable the bit reservoir on the encoder which causes a pretty significant degradation in quality.
For more information, see my answer here: Is it possible to splice advertisements or messages dynamically into an MP3 file via a standard GET request?

Stream data from Browser to nodejs server using getUserMedia

I'm trying to send data (video and audio) from the browser to a Node.js server. On the client side, I'm using getUserMedia to get a stream and trying to send it over WebSockets via SockJS. Here is my code so far:
navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true
})
.then(function (stream) {
  // trying to send stream
  var video = window.URL.createObjectURL(stream);

  // send stream
  mystream.send(video.play());
});
Where mystream is a SockJS instance.
My need is to persist the video as it is watched by a peer.
Has anyone ever sent a stream video to a server ? I'm out of ideas on this one. Any help/hint is appreciated.
After hours of researching, I just gave up and used Kurento. It is well documented and there are some pretty interesting examples involving Node.js sample code. I'll leave the question open in case someone comes up with a better idea.

Send MediaStream through NodeJS

I have the MediaStream object returned from getUserMedia and can show it in my own screen.
The thing is, I don't know how to send / pipe / stream that MediaStream from Point A through Node.js using socket.io to Point B.
My code right now is:
// Camera
if (navigator.getUserMedia) {
  navigator.getUserMedia({ audio: true, video: true }, function (stream) {
    video.src = URL.createObjectURL(stream);
    webcamstream = stream;
  }, onVideoFail);
} else {
  alert('failed');
}

function onVideoFail(e) {
  console.log('webcam fail!', e);
}
I need the way to make this stream being sent constantly to other user using NodeJS.
The comments made in the answer for Audio and video conference with NodeJS are still valid:
If you were to send the streams through socket.io instead of a peer connection, it would in any case be the raw video content (bytes). Compared to webRTC, you would lose:
the streaming part (RTP/RTCP), and the corresponding packet loss concealment
the bandwidth adaptation
the encryption (DTLS)
the media engine (jitter correction, …)
Why not implement an SFU in Node.js? Use Node.js/socket.io for the signaling (i.e. the initial handshake), but also use Node.js as a peer connection endpoint which relays/routes the media stream to the other peer(s). You would have an intermediate server like you seem to want, and all the advantages of webRTC.
Another solution is to use an MCU; search for "webrtc mcu" and you will find many.
This might be a little late, but there's now a library called wrtc/node-webrtc. Use that! https://github.com/node-webrtc/node-webrtc

node (socket) live audio stream / broadcast

Please, is there any easy way to stream (broadcast) a media file (ogg, mp3, spx...) from the server to the client (browser) via Node.js and possibly Socket.IO?
I have to record audio input on the server side and then be able to play it in realtime for many clients.
I've been messing with binary.js and socket.io streams but wasn't able to get it right.
I've tried to encode audio input with Speex, Vorbis or LAME and then load it via fs to the client, but I haven't been successful. Or do I have to capture PCM and then decode it in the browser?
Any suggestion on this; nothing I've found has ever helped me.
Many thanks for any tips, links and ideas.
You'll want to look for packages that work on Streams and from there it's just about piping your streams to output as necessary. Using Express or just the built-in HTTP you can accomplish this quite easily. Here's an example built around osx-audio which provides a PCM stream, lame which can encode a stream to mp3, and Express:
var Webcast = function (options) {
  var lame = require('lame');
  var audio = require('osx-audio');

  // create the Encoder instance
  var encoder = new lame.Encoder({
    // input
    channels: 2,        // 2 channels (left and right)
    bitDepth: 16,       // 16-bit samples
    sampleRate: 44100,  // 44,100 Hz sample rate

    // output
    bitRate: options.bitrate,
    outSampleRate: options.samplerate,
    mode: (options.mono ? lame.MONO : lame.STEREO) // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
  });

  var input = new audio.Input();
  input.pipe(encoder);

  // set up an express app
  var express = require('express');
  var app = express();

  app.get('/stream.mp3', function (req, res) {
    res.set({
      'Content-Type': 'audio/mpeg',
      'Transfer-Encoding': 'chunked'
    });
    encoder.pipe(res);
  });

  var server = app.listen(options.port);
};

module.exports = Webcast;
How you get your input stream might be the most interesting part, but that will depend on your implementation. The popular request package is built around Streams as well though, so it might just be an HTTP request away!
On the web browser you have the HTML5 video element and the audio element. Both of them have sources. Each web browser supports different codecs natively. So you'll want to watch out for that if you're trying to stream mp3.
You don't need socket.io; you only need HTTP. Your app reads a file, music.ogg, and for each chunk it reads, it sends it through the HTTP server. It will be a single HTTP request that's kept open until the file is transferred.
Here's how your html will look:
<audio src="http://example.com/music.ogg"></audio>
And your nodejs code will be something like this (haven't tested this):
var http = require('http');
var fs = require('fs');

http.createServer(function (request, response) {
  var inputStream = fs.createReadStream('/path/to/music_file.ogg');
  inputStream.pipe(response);
}).listen(8080);
I'm only using the ReadableStream.pipe method on the inputStream and the http and fs modules for the above code. If you want to transcode the audio file (for example, from mp3 to ogg) you'll want to find a module that does that and pipe the data from the file into the transcoder then into response:
// using some magical transcoder
inputStream.pipe(transcoder).pipe(response);
pipe() will call end() on the response whenever the input stream finishes, so the HTTP request completes as soon as the file is done being read (and transcoded).
You can do this with Node and WebRTC. There are some ready-to-use tools like SimpleWebRTC or EasyRTC. From what I've tested, video is still troublesome, but audio works great.
