The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead

There is a sound recorder and a recorded-sound player on the site. It works fine in Firefox, but I get the following warning in Chromium-based browsers:
[Deprecation] The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead. (https://developers.google.com/web/updates/2017/12/audio-worklet)
this.config = {
    bufferLen: 4096,
    numChannels: 2,
    mimeType: 'audio/ogg'
};
this.recording = false;
this.callbacks = {
    getBuffer: [],
    exportWAV: []
};
Object.assign(this.config, cfg);
this.context = source.context;
this.node = (this.context.createScriptProcessor || this.context.createJavaScriptNode).call(this.context,
    this.config.bufferLen, this.config.numChannels, this.config.numChannels);
this.node.onaudioprocess = function (e) {
    if (!_this.recording) return;
    var buffer = [];
    for (var channel = 0; channel < _this.config.numChannels; channel++) {
        buffer.push(e.inputBuffer.getChannelData(channel));
    }
    _this.worker.postMessage({
        command: 'record',
        buffer: buffer
    });
};
Full Code on PasteBin
How can I make this change?

The ScriptProcessorNode has been "deprecated" for years, but it isn't going anywhere anytime soon.
However, you don't need it anyway if you want a Vorbis or Opus audio recording in Ogg. Use the MediaRecorder API instead. https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
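For completeness, here is a minimal sketch of what that might look like. The function name, mime types, and timeslice are illustrative, not taken from the question's code:
async function recordWithMediaRecorder() {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    // Prefer Ogg/Opus where supported (Firefox); Chromium records WebM/Opus instead.
    const mimeType = MediaRecorder.isTypeSupported('audio/ogg; codecs=opus')
        ? 'audio/ogg; codecs=opus'
        : 'audio/webm; codecs=opus';

    const chunks = [];
    const recorder = new MediaRecorder(stream, { mimeType: mimeType });

    recorder.ondataavailable = function (e) {
        if (e.data.size > 0) chunks.push(e.data); // each chunk is a Blob
    };
    recorder.onstop = function () {
        const blob = new Blob(chunks, { type: mimeType });
        const url = URL.createObjectURL(blob); // feed this to the recorded-sound player
        console.log('Recording ready:', url);
    };

    recorder.start(1000); // request a chunk roughly every second
    return recorder;      // call recorder.stop() when done
}
The browser does the encoding, so no ScriptProcessorNode, worker, or WAV export is needed at all.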

Related

OverconstrainedError when requesting speakers stream via getUserMedia in Chrome

Please consider the code below:
navigator.mediaDevices.getUserMedia({audio: true}).then(function() {
    navigator.mediaDevices.enumerateDevices().then((devices) => {
        devices.forEach(function(device1, k1) {
            if (device1.kind == 'audiooutput' && device1.deviceId == 'default') {
                const speakersGroupId = device1.groupId;
                devices.forEach(function(device2, k2) {
                    if (device2.groupId == speakersGroupId && ['default', 'communications'].includes(device2.deviceId) === false) {
                        const speakersId = device2.deviceId;
                        const constraints = {
                            audio: {
                                deviceId: {
                                    exact: speakersId
                                }
                            }
                        };
                        console.log('Requesting stream for deviceId ' + speakersId);
                        navigator.mediaDevices.getUserMedia(constraints).then((stream) => { // **this always fails**
                            console.log(stream);
                        });
                    }
                });
            }
        });
    });
});
The code asks for permissions via the first getUserMedia, then enumerates all devices, picks the default audio output, then tries to get a stream for that output.
But it will always throw the error: OverconstrainedError { constraint: "deviceId", message: "", name: "OverconstrainedError" } when getting the audio stream.
There is nothing I can do in Chrome (don't care about other browsers, tested Chrome 108 and 109 beta) to get this to work.
I see a report here that it works, but not for me.
Please tell me that I'm doing something wrong, or if there's another way to get the speaker stream that doesn't involve chrome.tabCapture or chrome.desktopCapture.
Chrome MV3 extension approaches are welcome, not only plain HTML5.
.getUserMedia() is used to get input streams. So, when you tell it to use a speaker device, it can't comply. gUM's error reporting is, umm, confusing (to put it politely).
To use an output device, use element.setSinkId(deviceId). Make an audio or video element, then set its sink id. Here's the MDN example; it creates an audio element. You can also use a preexisting audio or video element.
const devices = await navigator.mediaDevices.enumerateDevices()
const audioDevice = devices.find((device) => device.kind === 'audiooutput')
const audio = document.createElement('audio')
await audio.setSinkId(audioDevice.deviceId)
console.log(`Audio is being played on ${audio.sinkId}`)
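And a small sketch of the "preexisting element" variant mentioned above; 'player', 'stream', and 'outputDeviceId' are illustrative placeholders, not part of the original answer:
// Route an existing MediaStream to a chosen output device.
async function playOnDevice(stream, outputDeviceId) {
    const player = document.querySelector('audio#player'); // any preexisting audio/video element
    player.srcObject = stream;                // e.g. a stream you already have
    await player.setSinkId(outputDeviceId);   // deviceId of an 'audiooutput' device from enumerateDevices()
    await player.play();
}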

Web Audio API: Proper way to play data chunks from a Node.js server via socket

I'm using the following code to decode audio chunks coming from a Node.js socket:
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var delayTime = 0;
var init = 0;
var audioStack = [];
var nextTime = 0;

client.on('stream', function(stream, meta) {
    stream.on('data', function(data) {
        context.decodeAudioData(data, function(buffer) {
            audioStack.push(buffer);
            if ((init != 0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the buffer before starting
                init++;
                scheduleBuffers();
            }
        }, function(err) {
            console.log("err(decodeAudioData): " + err);
        });
    });
});
function scheduleBuffers() {
    while (audioStack.length) {
        var buffer = audioStack.shift();
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        if (nextTime == 0)
            nextTime = context.currentTime + 0.05; // add 50ms latency to work well across systems - tune this if you like
        source.start(nextTime);
        nextTime += source.buffer.duration; // make the next buffer wait the length of the last buffer before being played
    }
}
But it has some gaps/glitches between audio chunks that I'm unable to figure out.
I've also read that with MediaSource it's possible to do the same and have the timing handled by the player instead of doing it manually. Can someone provide an example of handling MP3 data?
Moreover, what is the proper way to handle live streaming with the Web Audio API? I've already read almost all the questions on SO about this subject and none of them seem to work without glitches. Any ideas?
You can take this code as an example: https://github.com/kmoskwiak/node-tcp-streaming-server
It basically uses Media Source Extensions. All you need to do is change from video to audio:
buffer = mediaSource.addSourceBuffer('audio/mpeg');
Yes, @Keyne is right:
const mediaSource = new MediaSource()
player.src = URL.createObjectURL(mediaSource)

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg') // can only be called once the source is open
  sourceBuffer.appendBuffer(chunk) // repeat this for each chunk as ArrayBuffer
  player.play()
})
But do this only if you don't care about iOS support 🤔 (https://developer.mozilla.org/en-US/docs/Web/API/MediaSource#Browser_compatibility).
Otherwise, please let me know how you do it!
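One practical detail: appendBuffer can't be called while the SourceBuffer is still updating, so chunks arriving faster than they are appended need a small queue. A rough sketch under that assumption; the element and the socket 'data' events mirror the question, and the names are illustrative:
const player = document.querySelector('audio');  // illustrative element
const mediaSource = new MediaSource();
const pending = [];                               // chunks waiting to be appended
let sourceBuffer = null;

player.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
    sourceBuffer.addEventListener('updateend', appendNext); // append the next queued chunk when ready
    appendNext();
});

function appendNext() {
    if (sourceBuffer && !sourceBuffer.updating && pending.length) {
        sourceBuffer.appendBuffer(pending.shift());
    }
}

// Assuming the same socket stream as in the question, but delivering raw MP3 bytes:
stream.on('data', function (data) {
    pending.push(data); // data must be an ArrayBuffer (or typed array) of MP3 bytes
    appendNext();
    if (player.paused) player.play();
});
With this, the <audio> element handles the timing, so the manual scheduleBuffers bookkeeping goes away.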

Playing incoming ArrayBuffer audio binary data from binaryjs server simultaneously

Good day! I'm into video chat streaming this morning and I've bumped into a problem with the incoming ArrayBuffer, which contains the binary data of an audio chunk.
Here is the code I found for playing binary audio data (Uint8Array):
function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }
    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}

// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
Now below, I've used MediaStreamRecorder from https://github.com/streamproc/MediaStreamRecorder to record the stream from getUserMedia. This code will continuously send the recorded binary data to the server.
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true, video: true}, function(stream) {
        video.src = (window.URL || window.webkitURL).createObjectURL(stream); // get this for the video stream url
        video.muted = true;
        multiStreamRecorder = new MultiStreamRecorder(stream);
        multiStreamRecorder.canvas = {
            width: video.width,
            height: video.height
        };
        multiStreamRecorder.video = video;
        multiStreamRecorder.ondataavailable = function(blobs) {
            var audioReader = new FileReader();
            audioReader.addEventListener("loadend", function() {
                var arrBuf = audioReader.result;
                var binary = new Uint8Array(arrBuf);
                streamToServ.write(binary);
                // streamToServ is the binaryjs client
            });
            audioReader.readAsArrayBuffer(blobs.audio);
        };
        multiStreamRecorder.start(1);
    }, onVideoFail);
} else {
    alert('failed');
}
The blobs produced (audio and video) are converted to binary and sent through binaryjs, and played on another client with this:
client.on('stream', function (stream, meta) {
    stream.on('data', function(data) {
        playByteArray(new Uint8Array(data));
    });
});
I had no problems transferring the binary data, but there is a noticeable hiccup in the playback at the boundary of every chunk that is played. Is there something wrong with how I play the incoming ArrayBuffers? I'm also thinking of asking streamproc about this.
Thanks in advance!
Cheers.
I found a solution to this problem by implementing an audio buffer queue. Most of the code is from here:
Choppy/inaudible playback with chunked audio through Web Audio API
Thanks.
Not sure if this is the problem, but perhaps instead of source.start(0), you should use source.start(time), where time is where you want to start the source. source.start(0) will start playing immediately. If your byte array comes in faster than real-time, the sources might overlap because you start them as soon as possible.
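A minimal sketch of that idea, keeping a running start time so consecutive buffers line up back-to-back (the variable names are illustrative, not the asker's code):
var nextStartTime = 0; // running schedule position, in AudioContext time

function playDecodedBuffer(audioBuffer) {
    var source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);

    // Start the first buffer slightly in the future, then queue each
    // following buffer exactly where the previous one ends.
    if (nextStartTime < context.currentTime) {
        nextStartTime = context.currentTime + 0.05; // ~50 ms of slack
    }
    source.start(nextStartTime);
    nextStartTime += audioBuffer.duration;
}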

Socket.io with AudioContext: send and receive audio, errors on receiving

I am trying to build something where a user can send audio instantly to many people, using socket.io, AudioContext, and JS for the front end, and Node.js with socket.io for the server.
I can record the audio, send it to the server and send it back to other users, but I cannot play the data. I guess it must be a problem with how I send it, or with how I process the buffer that receives it.
I was originally getting the following error:
The buffer passed to decodeAudioData contains an unknown content type.
Update: the audio is now passed fine and the buffer is created with no errors, but there is no sound feedback.
The user presses record and it starts recording/streaming with the following functions.
This is how it all starts:
navigator.getUserMedia({audio: true, video: false}, initializeRecorder, errorCallback);

function initializeRecorder(stream) {
    var bufferSize = 2048;
    audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var source = audioCtx.createMediaStreamSource(stream);
    var recorder = audioCtx.createScriptProcessor(bufferSize, 1, 1);
    recorder.onaudioprocess = recorderProcess;
    source.connect(recorder);
    recorder.connect(audioCtx.destination);
    recording = true;
    initialized = true;
    play = false;
    stop = true;
}

function recorderProcess(e) {
    var left = e.inputBuffer.getChannelData(0);
    socket.emit('audio-blod-send', convertFloat32ToInt16(left));
}

function convertFloat32ToInt16(buffer) {
    var l = buffer.length;
    var buf = new Int16Array(l);
    while (l--) {
        buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
    }
    return buf.buffer;
}
Then the server uses the socket to broadcast what the original sender sent:
socket.on('audio-blod-send', function(data) {
    socket.broadcast.to(roomName).emit('audio-blod-receive', data);
});
And then the data is played. Update: I was using audioContext.decodeAudioData, which I found out is only meant for decoding complete audio files such as MP3 or WAV, not raw streamed samples. With the new code no errors appear, however there is no audio feedback.
socket.on('audio-blod-receive', function(data) {
    playAudio(data);
});

function playAudio(buffer) {
    var audioCtx;
    var started = false;
    if (!audioCtx) {
        audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    }
    source = audioCtx.createBufferSource();
    audioBuffer = audioCtx.createBuffer(1, 2048, audioCtx.sampleRate);
    audioBuffer.getChannelData(0).set(buffer);
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start(0);
    console.log(buffer);
}
P.S. If anyone is interested further in what I am trying to do, feel free to contact me.
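For reference, the received ArrayBuffer in playAudio still holds Int16 samples, while getChannelData expects Float32 values in the -1..1 range, so a conversion back would be needed before calling set(). A hypothetical inverse of the convertFloat32ToInt16 helper above might look like this:
// Hypothetical inverse of convertFloat32ToInt16: Int16 ArrayBuffer -> Float32Array in -1..1.
function convertInt16ToFloat32(arrayBuffer) {
    var int16 = new Int16Array(arrayBuffer);
    var float32 = new Float32Array(int16.length);
    for (var i = 0; i < int16.length; i++) {
        float32[i] = int16[i] / 0x7FFF;
    }
    return float32;
}

// Then, inside playAudio:
// audioBuffer.getChannelData(0).set(convertInt16ToFloat32(buffer));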

Live streaming using FFMPEG to web audio api

I am trying to stream audio using Node.js + FFmpeg to browsers connected over LAN, using only the Web Audio API.
Not using an <audio> element because it adds its own buffer of 8 to 10 seconds, and I want to keep the latency as low as possible (around 1 to 2 seconds max).
Audio plays successfully, but it is very choppy and noisy.
Here is my node.js (server side) file:
var ws = require('websocket.io'),
    server = ws.listen(3000);
var child_process = require("child_process");
var i = 0;

server.on('connection', function (socket) {
    console.log('New client connected');
    var ffmpeg = child_process.spawn("ffmpeg", [
        "-re", "-i",
        "A.mp3", "-f",
        "f32le",
        "pipe:1" // Output to STDOUT
    ]);
    ffmpeg.stdout.on('data', function(data) {
        var buff = new Buffer(data);
        socket.send(buff.toString('base64'));
    });
});
And here is my HTML:
var audioBuffer = null;
var context = null;
window.addEventListener('load', init, false);

function init() {
    try {
        context = new webkitAudioContext();
    } catch(e) {
        alert('Web Audio API is not supported in this browser');
    }
}

var ws = new WebSocket("ws://localhost:3000/");
ws.onmessage = function(message) {
    var d1 = base64DecToArr(message.data).buffer;
    var d2 = new DataView(d1);
    var data = new Float32Array(d2.byteLength / Float32Array.BYTES_PER_ELEMENT);
    for (var jj = 0; jj < data.length; ++jj) {
        data[jj] = d2.getFloat32(jj * Float32Array.BYTES_PER_ELEMENT, true);
    }
    var audioBuffer = context.createBuffer(2, data.length, 44100);
    audioBuffer.getChannelData(0).set(data);

    var source = context.createBufferSource(); // creates a sound source
    source.buffer = audioBuffer;
    source.connect(context.destination);       // connect the source to the context's destination (the speakers)
    source.start(0);
};
Can anyone advise what is wrong?
Regards,
Nayan
I got it working!!
All I had to do was adjust the number of channels.
I set FFmpeg to output mono audio and it worked like a charm. Here is my new FFmpeg command:
var ffmpeg = child_process.spawn("ffmpeg", [
    "-re", "-i",
    "A.mp3",
    "-ac", "1", "-f",
    "f32le",
    "pipe:1" // Output to STDOUT
]);
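Presumably the client-side buffer creation should then match the mono stream as well, something like the following (an assumption based on the -ac 1 flag above, not part of the original answer):
// With mono f32le output the decoded Float32Array is a single channel,
// so create the AudioBuffer with 1 channel to match.
var audioBuffer = context.createBuffer(1, data.length, 44100);
audioBuffer.getChannelData(0).set(data);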
You are taking chunks of data, creating separate nodes from them, and starting them based on network timing. For audio to sound correct, the playback of buffers must be gapless and sample-accurate. You need to fundamentally change your method.
The way I do this is by creating a ScriptProcessorNode which manages its own buffer of PCM samples. On each process callback, it reads the queued samples into the output buffer. This guarantees smooth playback of audio.
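A rough sketch of that approach, assuming mono Float32 PCM chunks arriving over the WebSocket; the names and buffer size are illustrative, not the answerer's actual code:
var context = new (window.AudioContext || window.webkitAudioContext)();
var sampleQueue = [];   // Float32Array chunks waiting to be played
var queueOffset = 0;    // read position within the first queued chunk

// Push each decoded Float32Array chunk from the WebSocket here.
function enqueueSamples(float32Chunk) {
    sampleQueue.push(float32Chunk);
}

var node = context.createScriptProcessor(4096, 1, 1);
node.onaudioprocess = function (e) {
    var output = e.outputBuffer.getChannelData(0);
    var written = 0;
    // Fill the output buffer from the queue; pad with silence if we run dry.
    while (written < output.length && sampleQueue.length) {
        var chunk = sampleQueue[0];
        var toCopy = Math.min(output.length - written, chunk.length - queueOffset);
        output.set(chunk.subarray(queueOffset, queueOffset + toCopy), written);
        written += toCopy;
        queueOffset += toCopy;
        if (queueOffset >= chunk.length) {
            sampleQueue.shift();
            queueOffset = 0;
        }
    }
    for (; written < output.length; written++) {
        output[written] = 0; // silence when no data is available
    }
};
node.connect(context.destination);
Because the node pulls samples from its own queue on every callback, playback stays continuous regardless of how the chunks arrive over the network.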

Resources