ALSA, avoiding capture of sound before app starts

I'm working on an app that captures audio using the ALSA API.
A tester of the app noticed the following behavior. If a sound is playing
just before the app starts, and then the app is started, that sound
is captured when snd_pcm_readi() is called. Any sound playing within
about a second before starting the app is captured in this way.
Is this behavior expected?
What I want the app to do is capture only sound that occurs after the application starts. How can I ensure that happens (snd_pcm_drop(), maybe)?

Related

Blocking audio playback for node.js and electron

I'm using nodejs and ubuntu 21.10. I have an electron app that has to play an mp3 file. Presently I do this by launching a system app to play the file, and I want to know when the playback has finished. I have tried mplayer and mpg123; they seem to work similarly. I issue the command to play the file, and the player returns right away, before the mp3 has finished playing.
If you launch one of these players from the command line, the playback is accompanied by some changing textual display on the screen. I think these players play the track in a separate thread. I want something that blocks until the track is done.
The reason I want this is that I'm using the microphone immediately after the playback. I don't want the mic listening to the playback. Presently I use a timer to wait 3 seconds before going to the microphone, but that's not the best, I'm sure.
My code currently looks like this:
const out = execSync("mpg123 ./sample.mp3");
Any help would be appreciated.
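For reference, here is a minimal sketch of one common pattern (assuming mpg123 is on the PATH and ./sample.mp3 is the file in question): spawn the player asynchronously and wrap it in a Promise that resolves on the process's 'close' event, so the caller can await the end of playback without freezing Electron's event loop the way a synchronous call would.

const { spawn } = require("child_process");

// Resolve only when the player process has exited, i.e. when playback is done.
function playAndWait(file) {
  return new Promise((resolve, reject) => {
    const player = spawn("mpg123", ["-q", file]); // -q suppresses the textual display
    player.on("error", reject);                   // e.g. mpg123 is not installed
    player.on("close", (code) => {
      code === 0 ? resolve() : reject(new Error(`mpg123 exited with code ${code}`));
    });
  });
}

// Usage: open the microphone only after playback has actually ended.
async function main() {
  await playAndWait("./sample.mp3");
  // ...start listening on the microphone here...
}
main();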

How do I send a mediaStream from the electron renderer process to a background ffmpeg process?

Goal (to avoid the XY problem):
I'm building a small linux desktop application using webRTC, electron, and create-react-app. The application should receive a mediaStream via a webRTC peer connection, display the stream to the user, create a virtual webcam device, and send the stream to the virtual webcam so it can be selected as the input on most major videoconferencing platforms.
Problem:
The individual parts all work: receiving the stream (webRTC), creating the webcam device (v4l2loopback), creating a child process of ffmpeg from within electron, passing the video stream to the ffmpeg process, streaming the video to the virtual device using ffmpeg, and selecting the virtual device and seeing the video stream in a videoconference meeting.
But I'm currently stuck on tying the parts together.
The problem is, the mediaStream object is available inside electron's renderer process (as state in a deeply nested react component, FWIW). As far as I can tell, I can only create a node.js child process of ffmpeg from within electron's main process. That implies that I need to get the mediaStream from the renderer to the main process. To communicate between processes, electron uses an IPC system. Unfortunately, it seems that IPC doesn't support sending a complex object like a video stream.
What I've tried:
Starting the ffmpeg child process (using child_process.spawn) from within the renderer process throws an 'fs.fileexistssync' error. Browsing SO indicates that only the main process can start these background processes.
creating separate webRTC connection between renderer and main to re-stream the video. I'm using IPC to facilitate the connection, but offer/answer descriptions aren't reaching the other peer over IPC - my guess is this is due to the same limitations on IPC as before.
My next step is to create a separate node server on app startup which ingests the incoming RTC stream and rebroadcasts it to the app's renderer process, as well as to a background ffmpeg process.
Before I try that, though, does anyone have suggestions for approaches I should consider? (this is my first SO question, so any advice on how to improve it is appreciated).
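One rough sketch of a workaround, for illustration only: since Electron's IPC can carry ArrayBuffers even though it cannot carry a MediaStream, the renderer can encode the stream with MediaRecorder and ship the chunks to the main process, which pipes them into ffmpeg's stdin and on to the loopback device. The channel name 'video-chunk', the device path /dev/video10, and the assumption that ipcRenderer is reachable from the page (nodeIntegration or a preload script) are all placeholders, and the extra WebM encode/decode hop adds latency that may or may not be acceptable.

// renderer.js — `stream` is the MediaStream received over the peer connection
const { ipcRenderer } = require("electron");

function forwardStream(stream) {
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8" });
  recorder.ondataavailable = async (e) => {
    if (e.data.size > 0) {
      // ArrayBuffers survive structured-clone IPC; MediaStream objects do not
      ipcRenderer.send("video-chunk", await e.data.arrayBuffer());
    }
  };
  recorder.start(100); // emit a chunk roughly every 100 ms
}

// main.js — decode the incoming WebM and write raw frames to the virtual webcam
const { ipcMain } = require("electron");
const { spawn } = require("child_process");

const ffmpeg = spawn("ffmpeg", ["-i", "pipe:0", "-f", "v4l2", "-pix_fmt", "yuv420p", "/dev/video10"]);

ipcMain.on("video-chunk", (_event, chunk) => {
  ffmpeg.stdin.write(Buffer.from(chunk));
});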

Live streaming from UWP to Linux/Python Server

I have a UWP app that captures a live video stream (webcam), encodes it in h264, and sends it through a TCP socket (in a local network; I need high performance) to a Linux device.
Is there a way to do this? I need the video not to play it but to extract single frames. I could do that with opencv, but it requires a local video file, whereas I'm using a live stream.
I would send photos instead of a video stream if the time needed to capture one were acceptable, but it takes about 250 ms.
Is RTP required? Does UWP (windows) provide a way to achieve this?
Thank you
P.S.: The UWP app runs in Hololens.
You can use WebRTC to transmit live video from the HoloLens to any target fairly easily. That's probably the easiest way to do it without going really low level.
For an introduction, grab this repo and try the sample app, which runs perfectly on the HoloLens: https://github.com/webrtc-uwp/PeerCC/tree/e95f231e1dc9c248ca2ffa040276b8a1265da145/Client
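If you stay with the raw H.264-over-TCP approach instead of WebRTC, one hedged option for the frame-extraction part is to let ffmpeg listen on the socket and decode the live stream straight into image files, so no local video file is ever needed. A rough sketch, shown in Node.js only to match the rest of this page (the equivalent subprocess call works from Python); the port 5000, the Annex-B H.264 assumption, the fps=5 rate, and the ./frames directory are all placeholders.

const { spawn } = require("child_process");

// ffmpeg waits for the HoloLens to connect, decodes the stream, and writes
// one PNG per kept frame into ./frames (the directory must already exist).
const ffmpeg = spawn("ffmpeg", [
  "-f", "h264",                      // raw H.264 elementary stream as input
  "-i", "tcp://0.0.0.0:5000?listen", // listen for the incoming TCP connection
  "-vf", "fps=5",                    // keep a few frames per second (placeholder)
  "./frames/frame-%05d.png",
]);

ffmpeg.stderr.on("data", (d) => process.stderr.write(d)); // ffmpeg logs to stderr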

Manage playback of audio stream from device to chromecast

I have been searching for the best practice for stopping playback on a device when the chromecast is selected. Right now I connect to an audio stream, and it starts playing on the chromecast fine, but it also keeps playing on my phone. I had hoped there was some type of automatic switch that was supposed to occur. Is it up to me to manage all of this? If so, what are the best practices to start/resume playback when switching back and forth between the chromecast and the device? It is a live stream, so there is no way to pause and pick up where it left off.
Are there certain callbacks that I need to watch for to make the switch?
Yes, it is your responsibility to manage the behavior of your app. Our UX Design Checklist outlines the flow that we recommend; for example, when you start streaming to a cast device, you stop the local playback. The details of how you stop the playback locally depend on your application, but what you should use is the set of callbacks that the Android Cast SDK provides for you to learn about the success of your cast control commands and the state changes that happen on the receiver. These callbacks can tell you whether your launch of the application was successful, whether the media is playing or paused, and when the metadata for the media has changed. Look at our SDK documentation to see which ones are appropriate for your case. We also have a number of sample projects that do most of these tasks.

Background Audio and Remote Control Support using MPMusicPlayerController on iOS 4. Is this even possible?

I've spent two days on this and have gotten nowhere. I'm trying to use [MPMusicPlayerController applicationMusicPlayer] to play audio chosen from the user's iPod library and have it run in the background as well as support remote events. Now getting the music actually playing is the easy part. Get the instance, pick the songs, assign the music queue and play. Done and done. BUT... a) I can't get it to play in the background, and b) even when in the foreground I can't get the remote control events to work at all!
And before you ask: yes, I have set the plist entries, set the audio session category, made the call to say I'm interested in getting remote events, and set up a first responder to listen for them. So please know that, yes, I've read every single document on the subject that I could find* (*a task I blame Apple for, for not being clear at all on this topic, nor having ANY example code for it!) and I've watched every one of the WWDC videos relating to it (even freezing the screen to copy the code exactly from their example...), so unless I've missed something not in this list, replying with any of those answers is not going to help.
One more thing... I am explicitly talking about using the MPMusicPlayerController which according to the docs, never uses an application session. It always uses the system session. (Maybe that in itself answers my question, but the docs don't clearly say that so I'm not sure, hence this question.)
That said, after two days, my thoughts are this:
When using the MPMusicPlayerController, regardless of what methods you call or what plist entries you set, your app will never run in the background. Period. If you use the ipodMusicPlayer instance, the music keeps playing, but that's because it's the iPod that's playing, not your app. If you use the applicationMusicPlayer instance instead, when going to the background your music stops. In both cases, your app is suspended.
Regardless of your using the ipodMusicPlayer or applicationMusicPlayer instances, all remote events go to the iPod application itself, not yours, even if you've explicitly asked for them. If you are using the applicationMusicPlayer instance and you use the remote to select 'Play', the iPod app receives the command so your audio ducks out and is interrupted and playback begins in the iPod app. If you've chosen the ipodMusicPlayer instead, then of course it doesn't matter as you have explicitly said you're basically just interested in remotely controlling the iPod app which again, is what actually receives the remote events.
The icon in the quick-switch controls at the bottom never changes to your app's icon because again, your app is never actually set up to receive the events. The iPod application is, which is why its icon does appear there.
So what I want to know is... am I wrong here? Has anyone successfully been able to use MPMusicPlayerController and been able to intercept the remote events? While I'd prefer to use the applicationMusicPlayer with background music support so I don't muck with the user's iPod, the bigger thing is remote control notifications, meaning if I have to use the ipodMusicControl and keep my app in the foreground to intercept those messages, so be it. It's ugly that way, but at least it's something.
Code examples, or at least explicit steps against one of the built-in app templates would be GREATLY appreciated. (Don't even need the implementation... just the steps. Hopefully that will appease the inevitable 'It's still under NDA' thing that people keep answering questions with.)
Mark
I solved it. The info is in my other question over here...
Stack Overflow: Play iPod music while receiving remote control events
...but the short version is you have to use AVPlayer (but not AVAudioPlayer. No idea why that is!) with the asset URL from the MPMediaItem you got from the library, then set the audio session's category to Playable (do NOT enable mixable!) and add the appropriate keys to your info.plist file telling the OS your app wants to support background audio.
This lets you play items from your iPod library (except Audible.com files, for some reason!) and still get remote events. Granted, you have to do more work, and this is your own audio player, separate from the iPod app, which it will interrupt (which may or may not be desirable; and again, don't enable mixing or the iPod app will hijack the remote control events), but those are the breaks!
For anyone who wants to know: I found out that to get the audio playing in the background, you have to set the audio session's category to Playable, and then background audio works just fine. If you also want to play your own sounds at the same time, you have to mark the category as mixable. That solved the background music part. But what I've found is that any time the iPod is playing, it doesn't seem possible for you to get remote notifications.
Here's the updated thread...
How can you play music from the iPod app while still receiving remote control events in your app?
M
