I want to capture OS system audio output with Electron's desktopCapturer. It works well on Windows, as follows:
constraints = {
  // audio: false,
  audio: {
    mandatory: {
      chromeMediaSource: 'desktop'
    }
  },
  video: {
    mandatory: {
      chromeMediaSource: 'desktop'
      //maxFrameRate: 15
    }
  }
};
Then, I use:
navigator.webkitGetUserMedia(constraints, function(dstream) {...
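For context, here is a rough sketch of how the stream gets consumed once getUserMedia succeeds (the <audio> routing below is just an illustration, not my exact code):
navigator.webkitGetUserMedia(constraints, function (stream) {
  // Route the captured desktop audio into an <audio> element for playback
  var audio = document.createElement('audio');
  audio.srcObject = stream;
  audio.play();
}, function (err) {
  console.error('getUserMedia failed:', err);
});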
However, on Ubuntu it always shows "could not start audio source". Can anyone tell me how to fix this? Thanks for your help.
Because of a patch merged into Chromium, it's not possible to access system audio without a lot of low-level tinkering. Here is an issue raised on Electron's GitHub page, but it has been left unresolved for six years. Quoting a reply from the issue, which offers little hope:
I was searching through the Chromium issue tracker, and found this: https://bugs.chromium.org/p/chromium/issues/detail?id=1143761&q=linux%20streaming&can=2
This may be worth keeping an eye on, since it seems related to this issue. It's possible that this issue may see resolution when the Chromium team starts pushing fixes.
Here is the PulseAudio patch submitted to Chromium, which is the root cause of this issue. Coming to a potential solution: you can revert to a build from before this commit, and audio capture should work fine then. But I haven't tried this solution myself. Let me know if someone manages to fix this or tries it.
Leaving my answer here for the record; it may or may not work for you. I ran into this error while testing my Electron app while being in a Google Meet at the same time (i.e. Chrome had a lock on my microphone). The error stopped happening once I ended the Meet.
I am trying to build myself a digital-to-analog audio converter.
I have the STM32F769I Discovery: https://www.st.com/en/evaluation-tools/32f769idiscovery.html
which has SPDIF RX and SPDIF TX ports.
I found a fairly good starting point here: http://www.tjaekel.com/DiscoveryF7Audio/index.html
The author also posted here: https://www.openstm32.org/forumthread921
But he used the STM32746G Discovery instead: https://www.st.com/en/evaluation-tools/32f746gdiscovery.html
So I tried to get just his SPDIF audio portion working on my board.
My project can be found here (I hope it compiles; with CubeIDE you never know what will happen :)): https://www.mediafire.com/file/n0s2z9p6nn735qg/SPDIF_Example.zip/file
I have no idea what I am doing wrong, but for some reason SPDIF_RX_IRQHandler (in stm32f7xx_it.c) is never called (the LED never turns green; yes, my debugging techniques are primitive, but I will explain why later).
Because of that, HAL_SPDIFRX_ReceiveDataFlow_IT (in spdifrx.c) always returns HAL_TIMEOUT, and of course no audio is ever played on the speakers.
I am not sure what I am doing wrong.
When I start the MCU, I call BSP_SPDIFRX_Init() (defined in spdifrx.c) in main.c right after I take care of the clock:
if (BSP_SPDIFRX_Init() != HAL_OK)
{
Error_Handler();
}
And it appears to initialize all right, because I get HAL_OK back.
Maybe I am not initializing the GPIO properly in HAL_MspInit in stm32f7xx_hal_msp.c.
I am really out of ideas about what I am doing wrong, because the analog side of the audio does initialize (I can hear that as a pop-pop from the speakers when I power up the MCU); it's just the SPDIF side that has problems.
Or is my setup broken?
I am using this component radio as my SPDIF transmitter (Hama DIT2000M): https://de.hama.com/webresources/article-documents/00054/man/00054821man_en.pdf
It says it has SPDIF audio out (digital over coaxial).
I know its S/PDIF output is working fine, because it plays just fine on my component receiver (it reports as 48 kHz stereo).
Is my cable too long? I am using this cable: https://i.imgur.com/JqAxePF.jpg
(not sure who made it)
Now, why do I debug with blinking LEDs? Because there is no computer where my test subject (the Hama receiver) is, so... blinking LEDs it is. I would also like to avoid additional libraries and keep a minimal working example, because you never know what problems they could bring; that's why the LCD is not used right now.
I hope someone has some advice, either on how to get any data into the SPDIF port (because right now, for some reason, I don't get anything) or on what I am doing wrong so that my audio does not play. The use of the STM32F769I Discovery instead of the STM32746G Discovery is probably not responsible.
I hope this is a proper place for this kind of question, because I did ask a question regarding SPDIF on the STM forum: https://community.st.com/s/feed/0D53W00001z0RaqSAE
but didn't get any useful advice there.
Now, SPDIF really does not have many examples. There is only a polling example, which does work (with the same cable); there is no interrupt example, and my interrupt example (you can read the STM forum post I linked) is not working either (interrupts are probably not broken, right?).
So yeah, I am a bit lost, not sure what to do or who to ask, so I tried here.
PS: I know Stack Overflow does not like links to code, but I believe something is wrong with my project (the interrupts don't fire for some reason), and it's really hard to put all of this into the question.
Thanks for answering, and best regards.
I managed to solve this. I guess I did not initialize the SPDIF GPIO properly.
After changing this:
GPIO_InitStructure.Pin = GPIO_PIN_7;
GPIO_InitStructure.Mode = GPIO_MODE_AF_PP;
GPIO_InitStructure.Pull = GPIO_NOPULL;
GPIO_InitStructure.Speed = GPIO_SPEED_FAST;
GPIO_InitStructure.Alternate = DISCOVERY_SPDIFRX_AF;
HAL_GPIO_Init(GPIOD, &GPIO_InitStructure);
to this:
GPIO_InitStructure.Pin = GPIO_PIN_12;
GPIO_InitStructure.Mode = GPIO_MODE_AF_PP;
GPIO_InitStructure.Pull = GPIO_NOPULL;
GPIO_InitStructure.Speed = GPIO_SPEED_FAST;
GPIO_InitStructure.Alternate = GPIO_AF7_SPDIFRX;
HAL_GPIO_Init(GPIOG, &GPIO_InitStructure);
the interrupts started to fire.
How can I enable noise suppression and audio mirroring in WebRTC?
What I tried is to put this in the media constraints:
audio: {
  mandatory: {
    googNoiseSupression: true,
    googAudioMirroring: true
  }
}
but it doesn't work. After the browser asks for permission to share the mic and I click "Allow", nothing happens.
I got the options from here: https://chromium.googlesource.com/external/webrtc/+/master/talk/app/webrtc/mediaconstraintsinterface.cc. Is there a list somewhere else of the media constraints that can be used?
I'm using Chrome.
I finally made it work, but only with googNoiseSupression. Adding googAudioMirroring as well and calling getUserMedia does nothing.
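A minimal sketch of the call that worked for me looks roughly like this (the success and error handlers are just placeholders):
navigator.webkitGetUserMedia(
  {
    audio: {
      mandatory: {
        googNoiseSupression: true
      }
    }
  },
  function (stream) {
    // attach the stream to a peer connection or an <audio> element here
    console.log('got audio stream', stream);
  },
  function (err) {
    console.error('getUserMedia failed:', err);
  }
);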
We've got a really annoying bug when trying to send MP3 data. We've got the following setup:
Webcam producing AAC -> ffmpeg converts to ADTS -> sent to a Node.js server -> ffmpeg on the server converts ADTS to MP3 -> MP3 then streamed to the browser.
This works *perfectly* on Linux (Chrome with HTML5 and Flash, Firefox with Flash only).
However, on Windows the sound just "stalls", no matter what combination we use (browser/HTML5/Flash). If, however, we shut down the server, the sound then immediately starts to play as we expect.
For some reason, on Windows-based machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
Any help would be greatly appreciated.
Relevant code in Node:
res.setHeader('Connection', 'Transfer-Encoding');
res.setHeader('Content-Type', 'audio/mpeg');
res.setHeader('Transfer-Encoding', 'chunked');
res.writeHeader('206');

that.eventEmitter.on('soundData', function (data) {
    debug("Got sound data" + data.cameraId + " " + req.params.camera_id);
    if (req.params.camera_id == data.cameraId) {
        debug("Sending data direct to browser");
        res.write(data.sound);
    }
});
Code in the browser:
soundManager.setup({
    url: 'http://dashboard.agricamera.co.uk/themes/agricamv2/swf/soundmanager2.swf',
    useHTML5Audio: false,
    onready: function () {
        that.log("Sound manager is now ready");
        var mySound = soundManager.createSound({
            url: src,
            autoLoad: true,
            autoPlay: true,
            stream: true,
        });
    }
});
If however we shutdown the server the sound then immediately starts to play as we expect.
For some reason on windows based machines it's as if the sound is being buffered "waiting" for something but we don't know what that is.
That's exactly what's happening.
First off, Chrome can play ADTS streams, so if possible just use that directly and save yourself some audio quality by not running a second lossy codec in the chain.
Next, don't use soundManager, or at least let it use HTML5 audio. You don't need the Flash fallback these days in most cases, and Chrome is perfectly capable of playing your streams. I suspect this is where your problem lies.
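For example, a minimal sketch of letting SoundManager 2 prefer HTML5 audio instead of Flash might look like this (option names are from SM2's setup API; adjust to your build):
soundManager.setup({
    useHTML5Audio: true,
    preferFlash: false,
    onready: function () {
        // create and play the stream exactly as before
    }
});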
Next, try disabling chunked transfer. Many clients don't like transfer encoding on streams.
Finally, I have seen cases where Chrome's built-in media handling (which I believe varies from OS to OS) cannot sync to the stream. There are a few bug tickets out there for Chromium. If your playback timer isn't incrementing, this is likely your problem and you can simply try to reload the stream programmatically to work around it.
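For that last point, a rough sketch of the reload workaround (the element id and the polling interval are arbitrary; this assumes an HTML5 <audio> element is playing the stream):
var audio = document.getElementById('live-audio'); // hypothetical element id
var lastTime = 0;
setInterval(function () {
    // If the playback timer stops advancing while we think we're playing,
    // tear down and reload the stream.
    if (!audio.paused && audio.currentTime === lastTime) {
        var src = audio.src;
        audio.src = '';
        audio.src = src;
        audio.play();
    }
    lastTime = audio.currentTime;
}, 5000);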
I'm currently using child_process and the command-line mplayer to play audio on the local machine from my Node.js application. This works, but it's not really an excellent solution. My biggest issue is that it takes 500 ms from when mplayer is started until audio starts playing.
Are there better ways to play audio? Preferably compressed audio, but I'll take what I can get.
I would suggest using node-speaker, which outputs raw PCM data to your speakers (so basically, it plays audio).
If you're playing something like mp3 files you might need to decode it first to PCM data, which is exactly what node-lame does.
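Roughly, piping the two together could look like this (a sketch based on the typical usage of the 'lame' and 'speaker' npm packages; the file name is arbitrary):
var fs = require('fs');
var lame = require('lame');       // MP3 decoder
var Speaker = require('speaker'); // raw PCM output

fs.createReadStream('song.mp3')
    .pipe(new lame.Decoder())
    .on('format', function (format) {
        // once the decoder knows the PCM format, pipe it to the speakers
        this.pipe(new Speaker(format));
    });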
Hope that helps.
Simplest I've found (on Mac OS) is to use
exec('afplay whatever.mp3', audioEndCallback)
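Spelled out a little more, that is just a child_process call (macOS only, since afplay ships with the OS; the file name is arbitrary):
var exec = require('child_process').exec;

exec('afplay whatever.mp3', function (err) {
    if (err) throw err;
    console.log('Playback finished'); // afplay exits when the file ends
});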
Introducing audic. It doesn't use any native dependencies, so it can't break like the solutions in the answers higher up.
Observe:
import Audic from 'audic';
const audic = new Audic('audio.mp3');
await audic.play();
audic.addEventListener('ended', () => {
  audic.destroy();
});
or more simply:
import {playAudioFile} from 'audic';
await playAudioFile('audio.mp3');
I think what you are asking is whether there are any good modules for working with audio in the Node.js ecosystem.
Whenever you have this type of question, you should first go to npmjs and type in an appropriate keyword.
Here is a list of audio-related modules I found on the npmjs site.
substack's baudio looks good to me.
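For a taste of it, a minimal baudio sketch might look like this (generating a tone from a function of time, along the lines of the package's README; the frequency is arbitrary):
var baudio = require('baudio');

// baudio turns a function of time (in seconds) into an audio stream
var b = baudio(function (t) {
    return Math.sin(2 * Math.PI * 440 * t); // 440 Hz sine wave
});
b.play();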
You can also use the play-sound module.
Install it using npm by running:
npm install play-sound --save
Now use it in your code:
var player = require('play-sound')(opts = {});

player.play('./music/somebody20.flac', function (err) {
    if (err) throw err;
    console.log("Audio finished");
});
Check out sound-play, it's a simple solution that works on Windows and MacOS without using external players:
const sound = require('sound-play')
sound.play('music.mp3')
Disclaimer: I'm the author of this package.
Check out node-groove - Node.js binding to libgroove:
This library provides decoding and encoding of audio on a playlist. It is intended to be used as a backend for music player applications, however it is generic enough to be used as a backend for any audio processing utility.
Disclaimer: I wrote the library, which is free, open source, and not affiliated with any product, service, or company.
You can use the sound-play module together with path to achieve this.
npm install sound-play
import path from 'path';
const __dirname = path.resolve();
import sound from 'sound-play';
const filePath = path.join(__dirname, "file.mp3");
sound.play(filePath);
I made a streaming music player, and it works fine in the foreground.
But in the background on iOS 4, it doesn't play the next song automatically (the remote control works).
The reason is that AudioQueueStart returns -12985.
I already checked the audio session; it's just fine. I use AudioQueueStart when it starts to play the music.
How can I get rid of this AudioQueueStart error?
- (void)play
{
    // calculate the size to use for each audio queue buffer,
    // and the number of packets to read into each buffer
    [self setupAudioQueueBuffers];

    OSStatus status = AudioQueueStart(self.queueObject, NULL);
}
I read the answers on the web about AudioQueueStart failing.
One thing to check is that the AudioSession is active first.
In my case, I had previously set the session to inactive between song changes, before starting a new song: AudioSessionSetActive(false);
Once I removed this, AudioQueueStart works just fine from the background.
In my experience, the -12985 message occurs because another app already has an audio session active when you try to start playback in your app. Options are to 1) instruct the user to close the other app, or 2) set mix mode (see kAudioSessionProperty_OverrideCategoryMixWithOthers).
The disadvantage of mix mode is if you depend on lock screen art or remote controls, they won't work with mix mode set.
I also faced this problem a week ago. I spent two days finding a solution, and I found it. Maybe this link will help (it is the official answer): http://developer.apple.com/library/ios/#qa/qa1668/_index.html
Make sure that you activate the session from the applicationDidEnterBackground task handler. Now my application can play sound in the background.
See this.
You probably need to include the following:
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
Towards the bottom there is a reiteration of how important that line is. As it is not mentioned in any of the three main audio guides (AVFoundation, AudioSession, or AudioQueue), it can easily be missed.
I have the same problem.
I register an AudioSessionInterruptionListener, pause the audio when a phone call comes in, and resume it after the call ends, but I get the -12985 error code when calling AudioQueueStart to resume.
My solution is to call AudioQueueStart again after 0.02 s.
I don't know the reason.
On iOS 7, AudioQueueStart was returning '!int' ('tni!'), though I'm sure no one would be surprised to find that it's not documented in the docs or headers. It was the same issue, though, and the same fix (setting the audio session to active in the background task handler) worked for me.