I'm working on a node-webkit app that uses the Web Audio API to record microphone data and save it to disk.
I've used the RecordRTC framework, but it doesn't expose a way to stream the data to disk as the recording progresses (which is necessary, given that recordings could run longer than an hour).
I can't seem to find a good way to stream the data to disk using other methods either. If there is a proper way to do this, I would appreciate tips on the right tool for the job.
However, the non-working solution I have now is:
1. Create a ScriptProcessorNode with the Web Audio API to access the PCM data.
2. Create a readable stream buffer (using the stream-buffers module) and pipe it to a FileWriter (made with the wav module).
3. Translate the data to 16-bit ints in the onaudioprocess event and add them to the readable stream so they can be written.
This hasn't worked: the ReadableStreamBuffer only pipes 20 bytes at a time to the FileWriter and, for some reason, doesn't queue all of the bytes coming from the microphone.
var wav = require('wav');
var streamBuffers = require('stream-buffers');

function convertFloat32ToInt16(buffer) {
  var l = buffer.length;
  var buf = new Int16Array(l);
  while (l--) {
    // clamp to [-1, 1] before scaling; the original Math.min only capped the positive side
    var s = Math.max(-1, Math.min(1, buffer[l]));
    buf[l] = s * 0x7FFF;
  }
  return buf.buffer;
}

var filePath = utils.getCwd() + '/recordings/demo.wav';
var fileWriter = new wav.FileWriter(filePath, {
  channels: 1,
  sampleRate: 48000,
  bitDepth: 16
});

var myReadableStreamBuffer = new streamBuffers.ReadableStreamBuffer({
  frequency: 0,   // in milliseconds
  chunkSize: 2048 // in bytes
});

myReadableStreamBuffer.pipe(fileWriter);

source.connect(scriptNode);
scriptNode.connect(context.destination);

scriptNode.onaudioprocess = function(e) {
  var arrayBuffer = convertFloat32ToInt16(e.inputBuffer.getChannelData(0));
  myReadableStreamBuffer.put(arrayBuffer);
  // close myReadableStreamBuffer and run fileWriter.end() when recording is done
};
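For what it's worth, here is a minimal sketch of an alternative worth trying (my assumption, untested here): wav.FileWriter is itself a Writable stream, so the converted chunks can be written to it directly, dropping the ReadableStreamBuffer middleman and letting the writer do its own queueing.

scriptNode.onaudioprocess = function(e) {
  var int16 = convertFloat32ToInt16(e.inputBuffer.getChannelData(0));
  // Buffer.from(arrayBuffer) needs a reasonably recent Node;
  // on older runtimes use new Buffer(new Uint8Array(int16)) instead
  fileWriter.write(Buffer.from(int16));
};

function stopRecording() {
  source.disconnect();
  scriptNode.disconnect();
  fileWriter.end(); // finalizes the WAV header and closes the file
}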
Related
I'm using the lame package [1] to write some MP3 data into a file. The data is sent as raw audio over a socket, and when received it's written into a file stream; every 10 minutes I start writing into a new file. The problem I'm facing is that when this runs for a long time, the system runs out of file handles because the files are never closed. Something like this:
var net = require('net');
var fs = require('fs');
var lame = require('lame');

var stream;

var encoder = lame.Encoder({
  // Input
  channels: 2,
  bitDepth: 16,
  sampleRate: 44100,
  // Output
  bitRate: 128,
  outSampleRate: 22050,
  mode: lame.STEREO // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
});

encoder.on('data', function(data) {
  stream.write(data);
});

var server = net.createServer(function(socket) {
  socket.on('data', function(data) {
    // There is some logic here that decides, based on time, whether
    // it's time to create a new file. When creating a new file it uses
    // the following code.
    stream = fs.createWriteStream(filename);
    // This will write data through the encoder into the file.
    encoder.write(data);
    // Can't close the file here since the encoder might try to write
    // after it's closed.
  });
});

server.listen(port, host);
However, how can I close the file after the last data chunk has been written? Technically a new file can be opened while the previous file still needs its last chunk written.
In this scenario, how do I correctly close the file?
[1] https://www.npmjs.com/package/lame
You need to process the data as a Readable stream, then use socket.io-stream to handle this.
var ss = require('socket.io-stream');

//encoder.on('data', function(data) {
//  stream.write(data);
//});

var server = net.createServer(function(socket) {
  ss(socket).on('data', function(stream) {
    // There is some logic here that decides, based on time, whether
    // it's time to create a new file. When creating a new file it uses
    // the following code.
    stream.pipe(encoder).pipe(fs.createWriteStream(filename));
  });
});
Close the stream (file) after all writes are done:
stream.end();
See the documentation: https://nodejs.org/api/stream.html
writable.end([chunk][, encoding][, callback])
* chunk String | Buffer Optional data to write
* encoding String The encoding, if chunk is a String
* callback Function Optional callback for when the stream is finished
Call this method when no more data will be written to the stream. If supplied, the
callback is attached as a listener on the finish event.
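Applied to the rotation logic in the question, a sketch (my own, not from the answer above): keep a reference to the outgoing stream and call end() on it after the swap. end() flushes anything still queued before closing the descriptor, so handles stop leaking.

var current = null;

function rotate(filename) {
  var previous = current;
  current = fs.createWriteStream(filename);
  if (previous) {
    previous.end(); // writes any buffered data, then closes the old file
  }
}

encoder.on('data', function(data) {
  current.write(data); // always targets the newest file
});

One caveat: if the encoder can still emit a chunk that belongs to the old file after the swap, defer the end() call until that chunk has been written.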
Disclaimer: newbie to Node.js and audio parsing.
I'm trying to proxy a digital radio stream through an expressJS app with the help of node-icecast, which works great. I'm getting the radio's MP3 stream and, via node-lame, decoding the MP3 to PCM and sending it to the speakers. All of this works straight from the github project's readme example:
var lame = require('lame');
var icecast = require('icecast');
var Speaker = require('speaker');

// URL to a known Icecast stream
var url = 'http://firewall.pulsradio.com';

// connect to the remote stream
icecast.get(url, function (res) {
  // log the HTTP response headers
  console.error(res.headers);

  // log any "metadata" events that happen
  res.on('metadata', function (metadata) {
    var parsed = icecast.parse(metadata);
    console.error(parsed);
  });

  // Let's play the music (assuming MP3 data).
  // lame decodes and Speaker sends to speakers!
  res.pipe(new lame.Decoder())
     .pipe(new Speaker());
});
I'm now trying to set up a service to identify the music using the Doreso API. The problem is that I'm working with a stream and don't have a file (and I don't know enough yet about readable and writable streams, and I'm a slow learner). I've been looking for a way to write the stream (ideally to memory) until I have about 10 seconds' worth, then pass that portion of audio to my API; however, I don't know if that's possible, or where to start with slicing 10 seconds off a stream. I thought about passing the stream to ffmpeg, since its -t option for duration could perhaps limit it, but I haven't got that to work yet.
Any suggestions to cut a stream down to 10 seconds would be awesome. Thanks!
Updated: Changed my question as I originally thought I was getting PCM and converting to mp3 ;-) I had it backwards. Now I just want to slice off part of the stream while the stream still feeds the speaker.
It's not that easy... but I managed it this weekend. I would be happy if you could point out how to improve this code. I don't really like the approach of simulating the "end" of a stream. Is there something like "detaching" or "rewiring" parts of a pipe-wiring of streams in Node?
First, you should create your very own Writable Stream class which itself creates a lame encoding instance. This writable stream will receive the decoded PCM data.
It works like this:
var stream = require('stream');
var util = require('util');
var fs = require('fs');
var lame = require('lame');
var streamifier = require('streamifier');
var WritableStreamBuffer = require('stream-buffers').WritableStreamBuffer;

var SliceStream = function(lameConfig) {
  stream.Writable.call(this);
  this.encoder = new lame.Encoder(lameConfig);

  // we need a stream buffer to buffer the PCM data
  this.buffer = new WritableStreamBuffer({
    initialSize: (1000 * 1024),    // start at 1000 KiB...
    incrementAmount: (150 * 1024)  // ...grow by 150 KiB each time the buffer overflows
  });
};
util.inherits(SliceStream, stream.Writable);

// some attributes, initialization
SliceStream.prototype.writable = true;
SliceStream.prototype.encoder = null;
SliceStream.prototype.buffer = null;

// will be called each time the decoded stream emits "data",
// together with a chunk of binary data as a Buffer
SliceStream.prototype.write = function(buf) {
  //console.log('bytes recv: ', buf.length);
  this.buffer.write(buf);
  //console.log('buffer size: ', this.buffer.size());
};

// this method is invoked when the setTimeout function
// emits the simulated "end" event. Let's encode to MP3 again...
SliceStream.prototype.end = function(buf) {
  if (arguments.length) {
    this.buffer.write(buf);
  }
  this.writable = false;
  //console.log('buffer size: ' + this.buffer.size());

  // fetch binary data from the buffer
  var pcmBuffer = this.buffer.getContents();

  // create a stream out of the binary buffer data
  // and pipe it right into the MP3 encoder...
  streamifier.createReadStream(pcmBuffer).pipe(this.encoder);

  // ...but don't forget to pipe the encoder's output
  // into a writable file stream
  this.encoder.pipe(fs.createWriteStream('./fooBar.mp3'));
};
Now you can pipe the decoded stream into an instance of your SliceStream class, like this (in addition to the other pipes):
icecast.get(streamUrl, function(res) {
  var lameEncoderConfig = {
    // input
    channels: 2,        // 2 channels (left and right)
    bitDepth: 16,       // 16-bit samples
    sampleRate: 44100,  // 44,100 Hz sample rate
    // output
    bitRate: 320,
    outSampleRate: 44100,
    mode: lame.STEREO   // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
  };

  var decodedStream = res.pipe(new lame.Decoder());

  // pipe the decoded PCM stream into a SliceStream instance
  decodedStream.pipe(new SliceStream(lameEncoderConfig));

  // now play it...
  decodedStream.pipe(new Speaker());

  setTimeout(function() {
    // after 10 seconds, emulate an end of the stream
    res.emit('end');
  }, 10 * 1000 /* milliseconds */);
});
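On the "detaching" question above: Node readables do support this via readable.unpipe(). A sketch, assuming the SliceStream instance is kept in a variable instead of being constructed inline:

var sliceStream = new SliceStream(lameEncoderConfig);
decodedStream.pipe(sliceStream);

setTimeout(function() {
  decodedStream.unpipe(sliceStream); // detach only the slicer; the Speaker pipe keeps playing
  sliceStream.end();                 // finalize the captured ~10 seconds and trigger the MP3 encode
}, 10 * 1000);

This avoids emitting a fake "end" on the response, so the other pipes are left untouched.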
Can I suggest using removeListener after 10 seconds? That will prevent future events from being sent through the listener.
var request = require('request'),
    fs = require('fs'),
    masterStream = request('-- mp3 stream --');

var writeStream = fs.createWriteStream('recording.mp3'),
    handler = function(bit) {
      writeStream.write(bit);
    };

masterStream.on('data', handler);

setTimeout(function() {
  masterStream.removeListener('data', handler);
  writeStream.end();
}, 1000 * 10);
I'm streaming recorded PCM audio from a browser with web audio api.
I'm streaming it with binaryJS (websocket connection) to a nodejs server and I'm trying to play that stream on the server using the speaker npm module.
This is my client. The audio buffers are at first non-interleaved IEEE 32-bit linear PCM with a nominal range between -1 and +1. To start off, I take one of the two PCM channels and stream it below.
var client = new BinaryClient('ws://localhost:9000');
var Stream = client.send();

recorder.onaudioprocess = function(AudioBuffer) {
  var leftChannel = AudioBuffer.inputBuffer.getChannelData(0);
  Stream.write(leftChannel);
};
Now I receive the data as a buffer and try writing it to a speaker object from the npm package.
var Speaker = require('speaker');

var speaker = new Speaker({
  channels: 1,       // 1 channel
  bitDepth: 32,      // 32-bit samples
  sampleRate: 48000, // 48,000 Hz sample rate
  signed: true
});

server.on('connection', function(client) {
  client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
      speaker.write(leftchannel);
    });
  });
});
The result is a high pitch screech on my laptop's speakers, which is clearly not what's being recorded. It's not feedback either. I can confirm that the recording buffers on the client are valid since I tried writing them to a WAV file and it played back fine.
The docs for speaker and the docs for the AudioBuffer in question
I've been stumped on this for days. Can someone figure out what is wrong or perhaps offer a different approach?
Update with solution
First off, I was using the websocket API incorrectly. I updated the code above to use it correctly.
I needed to convert the audio buffers to an array buffer of integers. I chose Int16Array. Since the given audio buffer has a range between -1 and 1, it was as simple as multiplying by the range of the new ArrayBuffer (-32768 to 32767).
recorder.onaudioprocess = function(AudioBuffer) {
  var left = AudioBuffer.inputBuffer.getChannelData(0);
  var l = left.length;
  var buf = new Int16Array(l);
  while (l--) {
    buf[l] = left[l] * 0x7FFF; // convert to 16 bit; scale by 32767 (0xFFFF would overflow)
  }
  Stream.write(buf.buffer);
};
It looks like you're sending your stream through as the meta object.
According to the docs, BinaryClient.send takes a data object (the stream) and a meta object, in that order. The callback for the stream event receives the stream (as a BinaryStream object, not a Buffer) in the first parameter and the meta object in the second.
You're passing send() the string 'channel' as the stream and the Float32Array from getChannelData() as the meta object. Perhaps if you were to swap those two parameters (or just use client.send(leftChannel)) and then change the server code to pass stream to speaker.write instead of leftchannel (which should probably be renamed to meta, or dropped if you don't need it), it might work.
Note that since Float32Array isn't a stream or buffer object, BinaryJS might try to send it in one chunk. You may want to send leftChannel.buffer (the ArrayBuffer behind that object) instead.
Let me know if this works for you; I'm not able to test your exact setup right now.
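To make that concrete, here is a hedged sketch of the swap (the BinaryJS call shapes are my assumption from its docs, so treat them as unverified):

// client: open a stream explicitly and write the underlying ArrayBuffer
var Stream = client.createStream();
recorder.onaudioprocess = function(AudioBuffer) {
  var leftChannel = AudioBuffer.inputBuffer.getChannelData(0);
  Stream.write(leftChannel.buffer); // the ArrayBuffer, sent as data rather than meta
};

// server: the first callback argument is a readable BinaryStream, so pipe it
server.on('connection', function(client) {
  client.on('stream', function(stream, meta) {
    stream.pipe(speaker);
  });
});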
I'm trying to play some MP3 files in Node.js. I've managed to play them one by one, and even in parallel, as I want. But what I also want is to be able to control the amplitude (gain), so that in the end I can create a crossfade. Could anyone help me understand what I need to do? (I want to use this in node-webkit, so I need a Node.js-based solution with no external dependencies.)
This is what I've got so far:
var lame = require('lame'),
    Speaker = require('speaker'),
    fs = require('fs');

var audioOptions = { channels: 2, bitDepth: 16, sampleRate: 44100 };
var decoder = lame.Decoder();

var stream = fs.createReadStream("music/ge.mp3", audioOptions)
  .pipe(decoder)
  .on("format", function(format) {
    this.pipe(new Speaker(format));
  })
  .on("data", function(data) {
    console.log(data);
  });
I customized the npm package pcm-volume to do that. To crossfade, provide two pcm audio buffers (output of your decoders). Pipe the result to your Speaker object.
Here is the main part of the modifications. In this case the crossfade happens at the scale of the provided buffer, but you can change that.
var pi = Math.PI;
var l = buf.length;
var out = Buffer.alloc(l);

for (var i = 0; i < l; i += 2) {
  var volumeSunrise = 0.5 * this.volume * (1 - Math.cos(pi * i / l)); // fades in
  var volumeSunset = 0.5 * this.volume * (1 + Math.cos(pi * i / l)); // fades out
  var sample = Math.round(volumeSunrise * buf.readInt16LE(i) + volumeSunset * this.sunsetBuffer.readInt16LE(i));
  // you may want to ensure that -32768 <= sample <= 32767 here, in case you use a volume higher than 1
  out.writeInt16LE(sample, i);
}

this.push(out);
callback();
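For context, a sketch of where such a loop could live: a stream.Transform subclass (all names here are hypothetical, not from pcm-volume itself) that fades the piped-in PCM up while fading a second, already-decoded buffer down:

var stream = require('stream');
var util = require('util');

// Hypothetical names: Crossfade, sunsetBuffer. The piped-in PCM is the
// incoming ("sunrise") track; sunsetBuffer holds PCM from the outgoing track.
function Crossfade(sunsetBuffer, volume) {
  stream.Transform.call(this);
  this.sunsetBuffer = sunsetBuffer;
  this.volume = volume || 1;
}
util.inherits(Crossfade, stream.Transform);

Crossfade.prototype._transform = function(buf, encoding, callback) {
  // only mix as many whole 16-bit samples as both buffers provide
  var l = Math.min(buf.length, this.sunsetBuffer.length) & ~1;
  var out = Buffer.alloc(l);
  for (var i = 0; i < l; i += 2) {
    var up = 0.5 * this.volume * (1 - Math.cos(Math.PI * i / l));
    var down = 0.5 * this.volume * (1 + Math.cos(Math.PI * i / l));
    var sample = Math.round(up * buf.readInt16LE(i) + down * this.sunsetBuffer.readInt16LE(i));
    out.writeInt16LE(Math.max(-32768, Math.min(32767, sample)), i);
  }
  this.push(out);
  callback();
};

Usage might then look like decoderA.pipe(new Crossfade(sunsetPcm)).pipe(new Speaker(audioOptions)).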
How can I transform a Node.js buffer into a readable stream using the streams2 interface?
I already found this answer and the stream-buffers module, but that module is based on the streams1 interface.
The easiest way is probably to create a new PassThrough stream instance, and simply push your data into it. When you pipe it to other streams, the data will be pulled out of the first stream.
var stream = require('stream');

// Initiate the source
var bufferStream = new stream.PassThrough();

// Write your buffer
bufferStream.end(Buffer.from('Test data.'));

// Pipe it to something else (e.g. stdout)
bufferStream.pipe(process.stdout);
As natevw suggested, it's even more idiomatic to use a stream.PassThrough, and end it with the buffer:
var stream = require('stream');

var buffer = Buffer.from('foo');
var bufferStream = new stream.PassThrough();
bufferStream.end(buffer);
bufferStream.pipe(process.stdout);
This is also how buffers are converted/piped in vinyl-fs.
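As a side note (mine, not from the answer above): on Node 12.3+ there is an even shorter built-in. stream.Readable.from() accepts any iterable, so wrapping the buffer in an array yields a one-chunk stream:

const { Readable } = require('stream');

// wrap the buffer in an array so it is emitted as a single chunk
// (a bare Buffer would be iterated byte by byte)
Readable.from([Buffer.from('foo')]).pipe(process.stdout);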
A modern, simple approach that is usable everywhere you would use fs.createReadStream(), but without having to first write the file to a path:
const { Duplex } = require('stream'); // native Node module

function bufferToStream(myBuffer) {
  let tmp = new Duplex();
  tmp.push(myBuffer);
  tmp.push(null); // signal end of data
  return tmp;
}

const myReadableStream = bufferToStream(your_buffer);
myReadableStream is re-usable.
The buffer and the stream exist only in memory, without writing to local storage.
I use this approach often when the actual file is stored at some cloud service and our API acts as a go-between, so files never get written to local disk.
I have found this to be very reliable no matter the buffer (up to 10 MB) or the destination, as long as it accepts a readable stream. Larger files should implement