How to enable noise suppression and audio mirroring in WebRTC

How can I enable noise suppression and audio mirroring in WebRTC?
What I tried is putting the following in the media constraints:
audio: {
    mandatory: {
        googNoiseSuppression: true,
        googAudioMirroring: true
    }
}
but it doesn't work. After the browser asks for permission to share the mic and I click "Allow", nothing happens.
I got the options from here: https://chromium.googlesource.com/external/webrtc/+/master/talk/app/webrtc/mediaconstraintsinterface.cc. Is there a list somewhere else of the media constraints that can be used?
I'm using Chrome.

I finally made it work, but only with googNoiseSuppression. Adding googAudioMirroring as well and calling getUserMedia does nothing.
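For reference, a minimal sketch of the call that ended up working, using Chrome's legacy goog* constraints with the old callback-based API; the stream handling in the success callback is illustrative, not from the original post:

// Sketch: legacy Chrome-only goog* constraint via callback-based getUserMedia.
// Only googNoiseSuppression had an observable effect; googAudioMirroring did not.
navigator.webkitGetUserMedia(
    {
        audio: {
            mandatory: {
                googNoiseSuppression: true
            }
        }
    },
    function (stream) {
        // Illustrative: attach the captured stream to an audio element.
        var audio = document.createElement('audio');
        audio.src = window.URL.createObjectURL(stream);
        audio.play();
    },
    function (err) {
        console.error('getUserMedia failed:', err);
    }
);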

Related

Cannot capture system audio output with Electron desktopCapturer in Ubuntu

I want to capture the OS system audio output with Electron's desktopCapturer. It works well on Windows with the following:
constraints = {
    // audio: false,
    audio: {
        mandatory: {
            chromeMediaSource: 'desktop'
        }
    },
    video: {
        mandatory: {
            chromeMediaSource: 'desktop'
            // maxFrameRate: 15
        }
    }
};
Then I use:
navigator.webkitGetUserMedia(constraints, function(dstream) {...
However, on Ubuntu it always shows "could not start audio source". Can anyone tell me how to fix this? Thanks for your help.
Because of a patch merged into Chromium, it's not possible to access system audio without a lot of low-level tinkering. An issue was raised on Electron's GitHub page but has been left unresolved for six years. Quoting a reply from the issue, which offers little hope:
I was searching through the Chromium issue tracker, and found this: https://bugs.chromium.org/p/chromium/issues/detail?id=1143761&q=linux%20streaming&can=2
This may be worth keeping an eye on, since it seems related to this issue. It's possible that this issue may see resolution when the Chromium team starts pushing fixes.
Here is the pulseaudio patch submitted to Chromium, which is the root cause of this issue. As for a potential solution: you could revert to a build from before this commit, and audio capture should then work. I haven't tried this myself, though. Let me know if anyone manages to fix this or tries it out.
Leaving my answer here for the record; it may or may not work for you. I ran into this error while testing my Electron app while I was in a Google Meet at the same time (i.e. Chrome had a lock on my microphone). The error stopped happening once I ended the Meet.
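To tell a locked or missing device apart from a constraints problem, it can help to surface the underlying error. A small sketch, assuming the same constraints object as in the question and the promise-based API:

// Sketch: log the DOMException name to distinguish failure modes.
// 'NotReadableError' typically means another process holds the device;
// 'NotFoundError' means no matching capture source exists.
navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
        console.log('Capture started, tracks:', stream.getTracks());
    })
    .catch(function (err) {
        console.error(err.name + ': ' + err.message);
    });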

Chrome "stalling" when streaming mp3 file from nodejs windows only

We've run into a really annoying bug when trying to send mp3 data. Our setup is as follows:
Webcam producing AAC -> ffmpeg converts to ADTS -> sent to a Node.js server -> ffmpeg on the server converts ADTS to mp3 -> mp3 streamed to the browser.
This works *perfectly* on Linux (Chrome with HTML5 and Flash, Firefox with Flash only).
However, on Windows the sound just "stalls", no matter what combination we use (browser/HTML5/Flash). If we shut down the server, however, the sound immediately starts playing as expected.
For some reason, on Windows machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
Any help would be greatly appreciated.
Relevant code in Node:
res.setHeader('Connection', 'Transfer-Encoding');
res.setHeader('Content-Type', 'audio/mpeg');
res.setHeader('Transfer-Encoding', 'chunked');
res.writeHeader('206');

that.eventEmitter.on('soundData', function (data) {
    debug("Got sound data" + data.cameraId + " " + req.params.camera_id);
    if (req.params.camera_id == data.cameraId) {
        debug("Sending data direct to browser");
        res.write(data.sound);
    }
});
Code in the browser:
soundManager.setup({
    url: 'http://dashboard.agricamera.co.uk/themes/agricamv2/swf/soundmanager2.swf',
    useHTML5Audio: false,
    onready: function () {
        that.log("Sound manager is now ready");
        var mySound = soundManager.createSound({
            url: src,
            autoLoad: true,
            autoPlay: true,
            stream: true
        });
    }
});
If we shut down the server, however, the sound immediately starts playing as expected. For some reason, on Windows machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
That's exactly what's happening.
First off, Chrome can play ADTS streams, so if possible just use that directly and save yourself some audio quality by not running a second lossy codec in the chain.
Next, don't use soundManager, or at least let it use HTML5 audio. You don't need the Flash fallback these days in most cases, and Chrome is perfectly capable of playing your streams. I suspect this is where your problem lies.
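For example, a minimal sketch of that change; useHTML5Audio and preferFlash are standard SoundManager 2 setup options, and the createSound call mirrors the one above:

soundManager.setup({
    useHTML5Audio: true,  // let Chrome play the stream natively
    preferFlash: false,   // only fall back to Flash if HTML5 cannot play the format
    onready: function () {
        var mySound = soundManager.createSound({
            url: src,        // same stream URL variable as in the question
            autoLoad: true,
            autoPlay: true,
            stream: true
        });
    }
});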
Next, try disabling chunked transfer. Many clients don't like transfer encoding on streams.
Finally, I have seen cases where Chrome's built-in media handling (which I believe varies from OS to OS) cannot sync to the stream. There are a few bug tickets out there for Chromium. If your playback timer isn't incrementing, this is likely your problem and you can simply try to reload the stream programmatically to work around it.
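A hedged sketch of that reload workaround using a plain HTML5 audio element; the element id and the five-second check interval are illustrative assumptions, not from the original setup:

// Sketch: watchdog that reloads the stream if the playback timer stalls.
var audio = document.getElementById('stream-player'); // hypothetical element id
var lastTime = -1;

setInterval(function () {
    if (!audio.paused) {
        if (audio.currentTime === lastTime) {
            // The playback timer stopped incrementing: Chrome has lost
            // sync with the stream, so reload it programmatically.
            audio.load();
            audio.play();
        }
        lastTime = audio.currentTime;
    }
}, 5000); // check every 5 seconds (arbitrary interval)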

SoundJS on Safari and mobile browsers

Currently my audio won't play on Safari or on mobile devices.
It works fine on a normal PC in Firefox, Chrome, and IE:
var manifest = [
    { id: "correct", src: "assets/correct.mp3|assets/correct.ogg" },
    { id: "wrong", src: "assets/wrong.mp3|assets/wrong.ogg" }
];
var queue = new createjs.LoadQueue();
queue.installPlugin(createjs.Sound);
queue.loadManifest(manifest, true);
And I'm calling the play function like this:
createjs.Sound.play("correct");
This call is made inside a function that runs when the user presses a div.
That code looks like it should work. Web Audio is initially muted on iOS devices, but it unmutes when play is called inside a user event.
There are a couple of possibilities (without seeing the rest of the code):
1. You are working on an iPad 1, which does not support Web Audio and has HTML audio disabled by default due to severe limitations.
2. You are not waiting for the audio to finish loading before calling play (see the sketch after this list):
   queue.addEventListener("complete", loadComplete);
3. The audio file path is incorrect, so the load is failing; you can detect this by listening for an error event.
4. You are using a non-default encoding for the mp3 files that is not supported by Safari. Generally that would break in other browsers as well, though.
5. Safari requires QuickTime for HTML audio to play, so that could be a problem.
6. Via createjs.Sound.registerPlugins, SoundJS has been set to use a plugin that is not supported on mobile, such as FlashPlugin. You can check the current plugin with:
   createjs.Sound.activePlugin.toString();
You may find the Mobile Safe Tutorial useful. Hope that helps.
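As mentioned in point 2, a minimal sketch wiring the load-complete event to playback; the loadComplete handler name and the div id are illustrative assumptions:

// Sketch: only wire up playback once the queue has finished loading.
var manifest = [
    { id: "correct", src: "assets/correct.mp3|assets/correct.ogg" },
    { id: "wrong", src: "assets/wrong.mp3|assets/wrong.ogg" }
];
var queue = new createjs.LoadQueue();
queue.installPlugin(createjs.Sound);
queue.addEventListener("complete", loadComplete);
queue.loadManifest(manifest, true);

function loadComplete() {
    // Hypothetical div id; playing inside the user's tap is what
    // unmutes Web Audio on mobile Safari.
    document.getElementById("answer-div").addEventListener("click", function () {
        createjs.Sound.play("correct");
    });
}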
There is a way to hack around it: play an empty mp3, then play the real audio. Both plays happen inside the same user gesture, which is what unmutes audio on iOS.
First, the empty mp3 must be loaded in the manifest array:
var manifest = [
    ...
    { id: "empty", src: "assets/empty.mp3|assets/empty.ogg" }
];
...
Then, just before playing the sound, play the empty mp3:
createjs.Sound.play("empty");
createjs.Sound.play("correct");

How to play audio in the background with Firefox OS?

In my manifest file I've added audio-channel-content to the permissions:
"permissions": {
"audio-channel-content":{"description":"Use the audio channel for the music player"}
}
In my index.html I've got an audio tag like:
<audio mozaudiochannel="content" preload="none" src="http://my-stream-url"></audio>
I can play my audio stream for about two minutes in total:
The first minute is while the phone is unlocked.
After one minute the phone auto-locks the screen, and the stream keeps playing for roughly one more minute before it stops.
Is it possible to keep playing this audio stream for more than a minute after the lock?
Thanks in advance.
The code and permission block you have are correct, and I can confirm this works in Firefox OS 1.1. You can also do the whole thing in JavaScript:
audio = new Audio();
audio.preload = 'none';
audio.mozAudioChannelType = 'content';
// Assumed completion (not in the original answer): set a source and play.
audio.src = 'http://my-stream-url';
audio.play();
Could it be that the Wi-Fi is shut down after the screen locks (auto-lock)?
Are you using the dev version of Gaia? Wi-Fi is set to stay connected in the production version.

Record screen (screencast) using HTML5, e.g. getUserMedia or something?

Is there some HTML5 API that can help me record the screen (a screencast)? I know about recording webcams etc., but I need the screen itself. It would help even more if this API were actually implemented in some cross-platform browser.
Screen capture is available as an experimental feature in Chrome; you must enable it in your browser settings (chrome://flags/#enable-usermedia-screen-capture). It also seems to work only over an https connection.
navigator.getUserMedia({
        video: {
            mandatory: {
                chromeMediaSource: 'screen'
                // maxWidth: 640,
                // maxHeight: 480
            }
        }
    },
    function (stream) {
        // Success! Now set up the video.
        var video = document.createElement('video');
        video.src = window.URL.createObjectURL(stream);
        video.autoplay = true;
        // You can choose whether or not to display the video here.
    },
    function () {
        // Failure: the browser is unable to capture, or permission was denied.
    }
);
Once you have the video set up, it is possible to record it. There is a standard drafted for media recording, but it's not implemented in any browsers yet. However, it is still possible. Refer to this demo and sample code for a workaround.
Unfortunately, for now, this solution is available for Chrome only.
Screen capture is also possible via the MediaRecorder API, recording from a hidden canvas. Here's an example of the idea.
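A sketch under stated assumptions: the canvas is captured with canvas.captureStream() and recorded with MediaRecorder. The canvas size, frame rate, and five-second duration are illustrative, and the drawing loop stands in for whatever screen content you can actually render into the canvas:

// Sketch: record a (possibly hidden) canvas via MediaRecorder.
var canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
var ctx = canvas.getContext('2d');

// Capture the canvas as a 25 fps video stream and record it as WebM.
var stream = canvas.captureStream(25);
var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
var chunks = [];

recorder.ondataavailable = function (e) {
    if (e.data.size > 0) { chunks.push(e.data); }
};
recorder.onstop = function () {
    // Assemble the recorded chunks into a playable Blob.
    var blob = new Blob(chunks, { type: 'video/webm' });
    var url = URL.createObjectURL(blob);
    console.log('Recording available at:', url);
};

recorder.start();
// ... draw into ctx here; whatever is drawn gets recorded ...
setTimeout(function () { recorder.stop(); }, 5000); // stop after 5 seconds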
