Icecast metadata capture - node.js

I've been using the node-icy package to grab metadata from an Icecast stream. What it does is grab the metadata from the stream; the stream is then decoded with lame and played through Speaker.
var http = require('http');
var icy = require('icy');
var lame = require('lame');
var Speaker = require('speaker');

var port = 3000;                           // any free port
var url = 'http://firewall.pulsradio.com'; // URL to a known ICY stream
var server = http.createServer();          // assuming a plain http server

server.listen(port, () => {
    icy.get(url, (res) => {
        // log the HTTP response headers
        console.error(res.headers);
        // log any "metadata" events that happen
        res.on('metadata', (metadata) => {
            var parsed = icy.parse(metadata);
            console.log('Metadata event');
            console.error(parsed);
        });
        // Let's play the music (assuming MP3 data).
        // lame decodes and Speaker sends to the speakers!
        res.pipe(new lame.Decoder())
           .pipe(new Speaker());
    });
    console.log(`Server on port: ${port}`);
});
This gives me the song titles as output:
Metadata event
{ StreamTitle: 'ruby the hatchet - planetary space child - killer' }
If I remove
res.pipe(new lame.Decoder())
.pipe(new Speaker());
Then the metadata is grabbed only once. My guess is that piping to Speaker() keeps the stream flowing, so when the metadata changes, the handler registered with res.on('metadata', ...) runs again.
I'm handling the stream on the server and then sending it to the client in Angular 5. Is there a way to keep icy.get(...) listening without using Speaker()? I'm fairly new to streams. Any help would be appreciated.

I was able to solve this problem by piping the response into a dev-null stream:
var icy = require('icy');
var devnull = require('dev-null');

var url = 'http://firewall.pulsradio.com'; // URL to a known ICY stream

icy.get(url, function (res) {
    // log any "metadata" events that happen
    res.on('metadata', function (metadata) {
        const parsed = icy.parse(metadata);
        console.log('metadata', parsed);
    });
    // consume the audio data without playing it
    res.pipe(devnull());
});
You can see it here:
https://github.com/TooTallNate/node-icy/issues/16
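An alternative that avoids the extra dev-null dependency, if it helps anyone: a plain Node readable stream can be switched into flowing mode with resume(), which discards the data. A minimal sketch, reusing the same url:
var icy = require('icy');

icy.get(url, function (res) {
    res.on('metadata', function (metadata) {
        console.log('metadata', icy.parse(metadata));
    });
    // resume() puts the stream into flowing mode and throws the audio
    // data away, so 'metadata' events keep firing with no consumer
    res.resume();
});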

Related

How to inject into nodejs stream

I have the following script. It works, but I can't figure out the best way to, when a metadata tag is triggered, stop/pause the stream, play an mp3 URL, and then reconnect to the stream (as a new connection).
My first idea worked, but it seemed to pause the Icecast stream, insert the mp3, and afterwards just continue playing from the paused spot (which is not what I want). What I would like is: if the mp3 is 2 minutes long, then 2 minutes of the Icecast stream should also be skipped.
var http = require('http'),
    request = require('request');
var url = 'http://stream.radiomedia.com.au:8003/stream'; // URL to a known Icecast stream
var icecast = require('icecast-stack');
var stream = icecast.createReadStream(url);
// var radio = require("radio-stream");
// var stream = radio.createReadStream(url);

var clients = [];

stream.on("connect", function() {
    console.error("Radio Stream connected!");
    //console.error(stream.headers);
});

// Fired after the HTTP response headers have been received.
stream.on('response', function(res) {
    console.error("Radio Stream response!");
    console.error(res.headers);
});

// When a chunk of data is received on the stream, push it to all connected clients
stream.on("data", function (chunk) {
    if (clients.length > 0) {
        for (var client in clients) {
            clients[client].write(chunk);
        }
    }
});

// When a 'metadata' event happens, usually a new song is starting.
stream.on('metadata', function(metadata) {
    var title = icecast.parseMetadata(metadata).StreamTitle;
    console.error(title);
});

// Listen on a web port and respond with a chunked response header.
var server = http.createServer(function(req, res) {
    res.writeHead(200, {
        "Content-Type": "audio/mpeg",
        'Transfer-Encoding': 'chunked'
    });
    // Add the response to the clients array to receive streaming
    clients.push(res);
    console.log('Client connected; streaming');
});

server.listen("9000", "127.0.0.1");
console.log('Server running at http://127.0.0.1:9000');
You can't simply concatenate MP3 streams arbitrarily like this. With MP3, the bit reservoir will bite you: usually you get a small stream glitch, but pickier clients may drop the connection outright.
To do what you want to do, you actually need to decode everything to PCM, mix the audio as you see fit, and then re-encode a fresh stream.
As an added bonus, you won't be tied to a particular codec and bitrate, and can offer an appropriate array of choices to your listeners. You also won't have to worry about the timing of MPEG frames, as your final stream can be sample-accurate.
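A minimal sketch of that pipeline using the lame package (the encoder settings, the mixer stage, and the stream names icecastStream/clientResponse are illustrative assumptions, not part of the original answer):
var lame = require('lame');

// decode the incoming MP3 to raw PCM
var decoder = new lame.Decoder();

// re-encode the mixed PCM into a fresh MP3 stream
var encoder = new lame.Encoder({
    channels: 2,
    bitDepth: 16,
    sampleRate: 44100,
    bitRate: 128
});

// icecastStream is the ICY response; mixer is a Transform stream where
// you splice in your mp3's PCM (both hypothetical placeholders)
icecastStream.pipe(decoder)
    .pipe(mixer)
    .pipe(encoder)
    .pipe(clientResponse);
Because the final stream is re-encoded from PCM, the mp3 insert replaces stream time instead of pausing it, which is the behaviour the question asks for.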

Streaming video with socket io

I am having some difficulty streaming a video file with socket.io and Node. My video file is on my server, and I am using the fs module to read it into a readable stream. I am then passing chunks of data to a MediaSource on the client side, which feeds into an HTML5 video tag.
Although the client is receiving the chunks (I'm logging them), and I am appending the chunks to the buffer of the media source, nothing shows up in the video tag.
Anyone know how to fix this?
Here's my code:
Client side:
var socket = io(); // socket.io client connection (assumed to be set up)
var mediaSource = new MediaSource();
var mimeCodec = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';
document.getElementById('video').src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function(event) {
    var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
    console.log(sourceBuffer);
    socket.on('chunk', function (data) {
        if (!sourceBuffer.updating) {
            sourceBuffer.appendBuffer(data);
            console.log(data);
        }
    });
    socket.emit('go', {});
});
Server side:
var stream = fs.createReadStream(window.currentvidpath);
socket.on('go', function() {
    console.log('WENT');
    stream.addListener('data', function(data) {
        console.log('VIDDATA', data);
        socket.emit('chunk', data);
    });
});
Thanks a lot.
The problem is that you only append to the source buffer when it is not updating:
if (!sourceBuffer.updating) {
    sourceBuffer.appendBuffer(data);
    console.log(data);
}
Here's my console output after I added an else branch and logged the times it doesn't append:
SourceBuffer {mode: "segments", updating: false, buffered: TimeRanges, timestampOffset: 0, appendWindowStart: 0…}
site.html:24 connect
site.html:17 ArrayBuffer {}
30 site.html:20 not appending
So it appended one chunk of the video and ignored 30.
You should store the chunks that aren't appended in an array, then drain them in a loop with setInterval, as sketched below.
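A rough sketch of that queueing approach (the 100 ms interval is an arbitrary choice):
var queue = [];

socket.on('chunk', function (data) {
    if (!sourceBuffer.updating && queue.length === 0) {
        sourceBuffer.appendBuffer(data);
    } else {
        queue.push(data); // park chunks while the buffer is busy
    }
});

// drain parked chunks whenever the buffer is free again
setInterval(function () {
    if (queue.length > 0 && !sourceBuffer.updating) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}, 100);
Appending the next queued chunk from the source buffer's updateend event instead of polling would also work.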

how to create a blob in node.js to be used in a websocket?

I'm trying to use IBM's websocket implementation of their speech-to-text service. Currently I'm unable to figure out how to send a .wav file over the connection. I know I need to transform it into a blob, but I'm not sure how to do it. Right now I'm getting errors of:
You must pass a Node Buffer object to WebSocketConnec
-or-
Could not read a WAV header from a stream of 0 bytes
...depending on what I try to pass to the service. It should be noted that I am correctly sending the start message and am making it to the state of listening.
Starting from v1.0 (still in beta), the watson-developer-cloud npm module has support for websockets:
npm install watson-developer-cloud@1.0.0-beta.2
Recognize a wav file:
var watson = require('watson-developer-cloud');
var fs = require('fs');

var speech_to_text = watson.speech_to_text({
    username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE',
    password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE',
    version: 'v1'
});

// create the stream
var recognizeStream = speech_to_text.createRecognizeStream({ content_type: 'audio/wav' });

// pipe in some audio
fs.createReadStream('audio-to-recognize.wav').pipe(recognizeStream);

// and pipe out the transcription
recognizeStream.pipe(fs.createWriteStream('transcription.txt'));

// listen for 'data' events for just the final text
// listen for 'results' events to get the raw JSON with interim results, timings, etc.
recognizeStream.setEncoding('utf8'); // to get strings instead of Buffers from `data` events

['data', 'results', 'error', 'connection-close'].forEach(function(eventName) {
    recognizeStream.on(eventName, console.log.bind(console, eventName + ' event: '));
});
See more examples here.

Record Internet Audio Stream in NodeJS

I have an internet audio stream that's constantly being broadcast (accessible via http url), and I want to somehow record that with NodeJS and write files that consist of one-minute segments.
Every module or article I find on the subject is all about streaming from NodeJS to the browser. I just want to open the stream and record it (time block by time block) to files.
Any ideas?
I think the project at https://github.com/TooTallNate/node-icy makes this easy. Just do what you need with the res object; in the example below it is sent to the audio system:
var icy = require('icy');
var lame = require('lame');
var Speaker = require('speaker');

// URL to a known ICY stream
var url = 'http://firewall.pulsradio.com';

// connect to the remote stream
icy.get(url, function (res) {
    // log the HTTP response headers
    console.error(res.headers);

    // log any "metadata" events that happen
    res.on('metadata', function (metadata) {
        var parsed = icy.parse(metadata);
        console.error(parsed);
    });

    // Let's play the music (assuming MP3 data).
    // lame decodes and Speaker sends to speakers!
    res.pipe(new lame.Decoder())
       .pipe(new Speaker());
});
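Since the question asks about recording one-minute segments rather than playback, here is a sketch that swaps Speaker for rotating file writes (the file naming and the 60-second timer are my own assumptions, not part of node-icy):
var icy = require('icy');
var fs = require('fs');

var url = 'http://firewall.pulsradio.com';

function newSegment() {
    // one file per segment, named by timestamp
    return fs.createWriteStream('segment-' + Date.now() + '.mp3');
}

icy.get(url, function (res) {
    var current = newSegment();

    // rotate to a fresh file every 60 seconds
    setInterval(function () {
        var old = current;
        current = newSegment();
        old.end();
    }, 60 * 1000);

    res.on('data', function (chunk) {
        current.write(chunk);
    });
});
Note that the cut points will usually land mid-MP3-frame; most players tolerate that, but sample-accurate segmenting would need the decode-to-PCM and re-encode approach described earlier.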

Streaming Binary with Node.js and WebSockets

I've been googling this and looking around stackoverflow for a while but haven't found a solution - hence the post.
I am playing around with Node.js and WebSockets out of curiosity. I am trying to stream some binary data (an mp3) to the client. My code so far is below but is obviously not working as intended.
I suspect that my problem is that I am not actually sending binary data from the server and would like some clarification/help.
Here's my server...
var fs = require('fs');
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080, host: "127.0.0.1" });

wss.on('connection', function(ws) {
    var readStream = fs.createReadStream("test.mp3", {
        'flags': 'r',
        'encoding': 'binary',
        'mode': 0666,
        'bufferSize': 64 * 1024
    });
    readStream.on('data', function(data) {
        ws.send(data, { binary: true, mask: false });
    });
});
And my client...
context = new webkitAudioContext();
var ws = new WebSocket("ws://localhost:8080");
ws.binaryType = 'arraybuffer';

ws.onmessage = function (evt) {
    context.decodeAudioData(
        evt.data,
        function(buffer) {
            console.log("Success");
        },
        function(error) {
            console.log("Error");
        });
};
The call to decode always ends up in the error callback. I am assuming this is because it is receiving bad data.
So my question is: how do I correctly stream the file as binary?
Thanks
What your server is doing is sending messages consisting of binary audio data in 64 KB chunks to your client. Your client should rebuild the audio file before calling decodeAudioData.
You are calling decodeAudioData every time your client gets a message on the websocket. Instead, create a separate buffer and add all the chunks to it; when the transfer is complete, give that buffer to decodeAudioData.
You have two options now:
You load the entire file (fs.readFile) without using stream events and send the whole file with ws.send (easy to do)
You use stream events and modify your client to accept chunks of data, assembling them before calling decodeAudioData (sketched below)
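A rough sketch of the second option on the client side (it assumes the server sends a final text message after the last chunk, which the code above doesn't do yet):
var chunks = [];

ws.onmessage = function (evt) {
    if (typeof evt.data === 'string') {
        // 'end' marker: reassemble all chunks into one contiguous buffer
        var total = chunks.reduce(function (n, c) { return n + c.byteLength; }, 0);
        var whole = new Uint8Array(total);
        var offset = 0;
        chunks.forEach(function (c) {
            whole.set(new Uint8Array(c), offset);
            offset += c.byteLength;
        });
        context.decodeAudioData(whole.buffer, function (buffer) {
            console.log("Success");
        }, function (error) {
            console.log("Error");
        });
    } else {
        chunks.push(evt.data); // binary chunk; just buffer it
    }
};
The server would then need a matching ws.send('end') once the read stream emits its 'end' event.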
Problem solved.
I fixed this issue with a combination of removing the 'encoding': 'binary' option from the options passed to createReadStream() and the solution at...
decodeAudioData returning a null error
As per some of my comments, when I updated the createReadStream options, the first chunk was playing but all other chunks were executing the onError callback from decodeAudioData(). The solution in the link above fixed this for me.
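For reference, a sketch of the corrected options per that fix (no 'encoding' option, so the chunks stay raw Buffers):
var readStream = fs.createReadStream("test.mp3", {
    'flags': 'r',
    'mode': 0666,
    'bufferSize': 64 * 1024
});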
It seems that decodeAudioData() is a bit picky about how the chunks it receives are formatted. Apparently they need to be valid MP3 chunks...
Define 'valid mp3 chunk' for decodeAudioData (WebAudio API)