OpenAL play sound, dealloc one instance, all the others' sound disappeared? - audio

I use OpenAL to play sound from a network stream in my NPAPI plugin (browser plugin). When I open more than one browser tab and then close one of them, the sound in the other tabs disappears too. Could anyone give me some help? Or can anyone give me advice on playing streamed audio, such as audio from a mic or similar devices? When I use Audio Queue Services, it still cannot support multiple instances. For work reasons I cannot paste my code here, sorry about that. I use OpenAL in much the same way as Apple's developer example, oalTouch; the link is
https://developer.apple.com/library/ios/samplecode/oalTouch/Introduction/Intro.html
I use the system default device to play sound, which means I use alcOpenDevice(NULL) to open the device.

When a page with your plugin closes, the corresponding instance of your plugin is destroyed; each simultaneously active page in the browser gets its own instance. Presumably the problem is that you are doing something in your plugin instance teardown that tears down something global.
Without being able to see anything of what your code is doing, it's hard to see how anyone can help figure out what that thing is.

Now I have found the reason. OpenAL does support multiple audio sources, but since I used it as in the oalTouch example, I didn't change anything. The reason the sound stopped when I closed one browser tab is that I closed the device when destroying the plugin instance. So to use OpenAL across multiple instances, I have to design it carefully, and this is my design: I use a static variable to indicate whether the default device is already open; if it is, I don't open it again (the alcOpenDevice(NULL) call). But each tab gets its own source ID (from alGenSources(...)). When a browser tab is closed, its plugin instance is destroyed too. I use the static variable to check whether any plugin instance is still using the default audio device; if so, I do nothing, otherwise I close the device.
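For anyone who lands here later, here is a minimal sketch of that reference-counting scheme (the function and variable names are mine, not from the actual plugin, and error checking is omitted):

    // Shared OpenAL state for all plugin instances in this process.
    #include <AL/al.h>
    #include <AL/alc.h>

    static ALCdevice*  g_device        = NULL;
    static ALCcontext* g_context       = NULL;
    static int         g_instanceCount = 0;   // plugin instances sharing the device

    // Call from each plugin instance's constructor; returns the tab's own source.
    ALuint acquireSource(void) {
        if (g_instanceCount++ == 0) {
            g_device  = alcOpenDevice(NULL);             // open default device once
            g_context = alcCreateContext(g_device, NULL);
            alcMakeContextCurrent(g_context);
        }
        ALuint source = 0;
        alGenSources(1, &source);                        // one source per tab
        return source;
    }

    // Call from each plugin instance's destructor.
    void releaseSource(ALuint source) {
        alDeleteSources(1, &source);
        if (--g_instanceCount == 0) {                    // last instance closes the device
            alcMakeContextCurrent(NULL);
            alcDestroyContext(g_context);
            alcCloseDevice(g_device);
            g_context = NULL;
            g_device  = NULL;
        }
    }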

Related

Stream audio from place 1 to place 2 over the internet

So I'm kinda stuck here.
I have a radio station, but we are mobile: a studio on wheels. The problem is that we have an antenna, but we always have to place it really close to the studio. Now I want to build a device that can stream the audio from the audio mixer to the internet, and another device in another network that receives that stream and sends the signal to the antenna (audio output).
To make this clear, I made a diagram with Raspberry Pis;
I want this to be plug and play, so I only have to plug each device into the modem (or whatever network we have) on both sides, and the devices should find each other.
I don't know HOW I can do this, so I need to know a couple of things:
What hardware should I use?
What software should I use?
What is the best configuration to accomplish this?
Can I use 2 raspberry pi's?
How can I let the devices find each other over the internet?
There are some required features:
The system needs to be able to buffer the audio for 5-10 seconds.
It needs to be direct, so it's live and not a file that needs to be played.
The system must not fail (apart from the internet dying).
Plug and play is a must; I don't want a really messy configuration to deal with (if possible, without any kind of port forwarding).
I would really appreciate help and a decent explanation.
regards,
Robin
Well, it depends on your capabilities as a programmer.
If you're really fixated on the RPi for its convenient form factor, there's a ton of community support, so I'd start with something like this project to kick-start you in the right direction. If you already know Python pretty well, modify away and have fun.
If you have no programming experience, you'll probably want to put a desktop in place of the RPi and launch some instances of VLC. It's not necessarily plug and play, but you can get close enough by having a command-line VLC launch at startup.
Either way, the more difficult problem here is the "over the internet" part. This really needs to be a server-client model, but which end is the server depends on which is more stationary (I'm guessing Location 2?), because the client will need to know the IP address of the server somehow. There are dozens of ways to make this happen, but at the end of the day you'll want to use sockets to accomplish the
It needs to be direct, so it's live and not a file
... which unfortunately gets complicated. See this answer for confirmation. I'd love to help with some tips on implementation, but we need more information about your willingness to "dig into the code", the necessity of the RPi, and whether the stationary location has a static web address.
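To give a feel for the sockets part, here is a minimal sketch of the studio (client) side pushing raw PCM over TCP. The address, port, and the idea of piping PCM in on stdin are all assumptions for illustration; a real setup would add the 5-10 second buffer, reconnection logic, and a proper audio codec:

    // Minimal client sketch: forward raw PCM from stdin to a TCP server (POSIX).
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in server{};
        server.sin_family = AF_INET;
        server.sin_port   = htons(9000);                      // illustrative port
        inet_pton(AF_INET, "203.0.113.10", &server.sin_addr); // stationary server's IP

        if (connect(sock, reinterpret_cast<sockaddr*>(&server), sizeof(server)) < 0) {
            perror("connect");
            return 1;
        }

        // e.g. `arecord -f cd | ./streamer` on the studio side.
        char buf[4096];
        ssize_t n;
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            if (write(sock, buf, n) != n) break;              // connection lost
        }
        close(sock);
        return 0;
    }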

Manage playback of audio stream from device to Chromecast

I have been searching for the best practice for stopping playback on the device when the Chromecast is selected. Right now I connect to an audio stream and it starts playing on the Chromecast fine, but it also keeps playing on my phone. I had hoped some type of automatic switch was supposed to occur. Is it up to me to manage all of this? If so, what are the best practices to start/resume playback when switching back and forth between the Chromecast and the device? It is a live stream, so there is no way to pause and pick up where it left off.
Are there certain callbacks that I need to watch for to make the switch?
Yes, it is your responsibility to manage the behavior of your app. Our UX Design Checklist outlines the flow that we recommend; for example, when you start streaming to a Cast device, you stop the local playback. The details of how you stop playback locally depend on your application, but what you should use is the set of callbacks that the Android Cast SDK provides for you to learn about the success of your cast control commands and the state changes that happen on the receiver. These callbacks can tell you whether your application launch was successful, whether the media is playing or paused, or when the metadata for the media has changed. Look at our SDK documentation to see which ones are appropriate for your case. We also have a number of sample projects that do most of these tasks.

Web App with Microphone Input

I'm working on a C++ application which takes microphone input, processes it, and plays back some audio. The processing will incorporate a database located on a server. For ease of creating UI and for maximum portability, I'm thinking it would be nice to have the front end be done in HTML. Essentially, I want to record audio in a browser, send that audio to the server for processing, and then receive audio from the server which will then be played back inside the browser.
Obviously, it would be nice if HTML5 supported microphone input, but it does not. So, I will need to create a plugin of some kind in order to make this happen. NPAPI scares me because of the security issues involved, so I was looking into PPAPI and Native Client. Native Client does not yet support microphone input, and I believe that the PPAPI audio input API would be limited to a dev build of Chrome. FireBreath doesn't look like it supports any microphone function either. So, I believe my options are:
Write my own NPAPI plugin to record the audio
Use Flash to get microphone input
Bail on browsers altogether and just make a native application
The target audience for this is young children and people who aren't computer-adept. I'd like to make it as portable and simple to use as possible. Any suggestions?
If you can do it all in Flash and have the relevant knowledge, that would probably be the best solution:
You can avoid writing platform-specific code, delivery/updating is easy and Flash has broad coverage so users don't need to install any custom plugins.
FireBreath doesn't look like it supports any microphone function either.
You can write your own (platform-dependent) code for audio recording with FireBreath, just like you could in a plain NPAPI plugin. FireBreath just makes it easier to write the plugin; the result is still an NPAPI (and ActiveX) plugin with access to native APIs etc.
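To sketch what "platform-dependent code behind a portable plugin" tends to look like, here is one way to structure it. None of these class names come from FireBreath, and the platform classes are empty stubs where the WASAPI / Core Audio / ALSA calls would go:

    // Illustrative only: hide platform-specific mic capture behind one interface
    // so the plugin's scripting layer stays portable.
    #include <memory>
    #include <vector>

    class AudioRecorder {
    public:
        virtual ~AudioRecorder() = default;
        virtual bool start() = 0;                        // begin mic capture
        virtual void stop() = 0;
        virtual std::vector<short> drainSamples() = 0;   // hand captured PCM to the caller
    };

    #if defined(_WIN32)
    class Win32Recorder : public AudioRecorder {         // would wrap WASAPI
    public:
        bool start() override { return false; }          // stub
        void stop() override {}
        std::vector<short> drainSamples() override { return {}; }
    };
    using PlatformRecorder = Win32Recorder;
    #else
    class PosixRecorder : public AudioRecorder {         // would wrap Core Audio / ALSA
    public:
        bool start() override { return false; }          // stub
        void stop() override {}
        std::vector<short> drainSamples() override { return {}; }
    };
    using PlatformRecorder = PosixRecorder;
    #endif

    // The plugin instance would own one recorder and expose start/stop to JavaScript.
    std::unique_ptr<AudioRecorder> makeRecorder() {
        return std::make_unique<PlatformRecorder>();
    }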
You can use the audio and video capture features in HTML5; see this link for more information.

Record audio from various internal devices in Android (via undocumented API)

I was wondering whether it is possible to capture audio data from other sources like the system output, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all the possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor to allow me to do that without rooting the device. Do you, at least, have any idea how to use other devices (maybe somehow access /dev/audio), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm trying to develop the app for the HTC Desire.)
PS. And for those of you who are against using undocumented APIs, please don't post here. I'm writing an app for my personal use, and even if I ever publish it, I will warn the user of possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you might try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers; there's no standard in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio which decorates the real one. Doing so, you should be able to log what happens and very possibly intercept the FM radio output, provided that it is not handled directly by the hardware.
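A rough sketch of that decoration idea, assuming the real library has been renamed to libaudio_real.so. The exported function name and signature below are purely illustrative; the real entry points are the HAL functions documented at the link above:

    // Illustrative shim: export the same symbol as the real libaudio, log the
    // call, then forward it to the renamed original.
    #include <dlfcn.h>
    #include <android/log.h>

    static void* resolveReal(const char* symbol) {
        static void* realLib = dlopen("/system/lib/libaudio_real.so", RTLD_NOW);
        return realLib ? dlsym(realLib, symbol) : nullptr;
    }

    // Hypothetical HAL entry point; audioflinger would bind to our copy.
    extern "C" int audio_open_output_stream(int format, int channels, int rate) {
        __android_log_print(ANDROID_LOG_DEBUG, "audioshim",
                            "open_output_stream fmt=%d ch=%d rate=%d",
                            format, channels, rate);
        using Fn = int (*)(int, int, int);
        Fn real = reinterpret_cast<Fn>(resolveReal("audio_open_output_stream"));
        return real ? real(format, channels, rate) : -1;  // forward to the real HAL
    }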
Of course, all this requires rooting. Please comment if you manage to do this; it interests me.

Background Audio and Remote Control Support using MPMusicPlayerController on iOS 4. Is this even possible?

I've spent two days on this and have gotten nowhere. I'm trying to use [MPMusicPlayerController applicationMusicPlayer] to play audio chosen from the user's iPod library and have it run in the background as well as support remote events. Now, getting the music actually playing is the easy part: get the instance, pick the songs, assign the music queue and play. Done and done. BUT... a) I can't get it to play in the background, and b) even in the foreground I can't get the remote control events to work at all!
And before you ask: yes, I have set the plist entries, the audio session category, and the call to say I'm interested in getting remote events, and I have set up a first responder to listen for them. So please know, yes, I've read every single document on the subject that I could find* (*a task I blame Apple for, for not being clear at all on this topic, nor having ANY example code for it!) and I've watched every one of the WWDC videos relating to it (even freezing the screen to copy the code exactly from their example...), so unless I've missed something not in this list, replying with any of those answers is not going to help.
One more thing... I am explicitly talking about using the MPMusicPlayerController which according to the docs, never uses an application session. It always uses the system session. (Maybe that in itself answers my question, but the docs don't clearly say that so I'm not sure, hence this question.)
That said, after two days, my thoughts are this:
When using the MPMusicPlayerController, regardless of what methods you call or what plist entries you set, your app will never run in the background. Period. If you use the ipodMusicPlayer instance, the music keeps playing, but that's because it's the iPod that's playing, not your app. If you use the applicationMusicPlayer instance instead, when going to the background your music stops. In both cases, your app is suspended.
Regardless of whether you use the ipodMusicPlayer or applicationMusicPlayer instance, all remote events go to the iPod application itself, not yours, even if you've explicitly asked for them. If you are using the applicationMusicPlayer instance and you use the remote to select 'Play', the iPod app receives the command, so your audio ducks out, gets interrupted, and playback begins in the iPod app. If you've chosen the ipodMusicPlayer instead, then of course it doesn't matter, as you have explicitly said you're basically just interested in remotely controlling the iPod app, which, again, is what actually receives the remote events.
The icon in the quick-switch controls at the bottom never changes to your app's icon because again, your app is never actually set up to receive the events. The iPod application is, which is why its icon does appear there.
So what I want to know is... am I wrong here? Has anyone successfully been able to use MPMusicPlayerController and been able to intercept the remote events? While I'd prefer to use the applicationMusicPlayer with background music support so I don't muck with the user's iPod, the bigger thing is remote control notifications, meaning if I have to use the ipodMusicPlayer and keep my app in the foreground to intercept those messages, so be it. It's ugly that way, but at least it's something.
Code examples, or at least explicit steps against one of the built-in app templates would be GREATLY appreciated. (Don't even need the implementation... just the steps. Hopefully that will appease the inevitable 'It's still under NDA' thing that people keep answering questions with.)
Mark
I solved it. The info is in my other question over here...
Stack Overflow: Play iPod music while receiving remote control events
...but the short version is: you have to use AVPlayer (but not AVAudioPlayer; no idea why that is!) with the asset URL from the MPMediaItem you got from the library, then set the audio session's category to Playback (do NOT enable mixing!) and add the appropriate keys to your Info.plist file telling the OS your app wants to support background audio.
This lets you play items from your iPod library (except Audible.com files, for some reason!) and still get remote events. Granted, you have to do more work, and since this is your own audio player it is separate from, and will interrupt, the iPod app (which may or may not be desirable; and again, don't enable mixing or the iPod app will hijack the remote control events), but those are the breaks!
For anyone who wants to know: to get audio playing in the background, you have to set the audio session's category to Playback, and then background audio works just fine. If you also want to play your own sounds at the same time, you have to mark the category as mixable. That solved the background-music part. But I've found that any time the iPod is playing, it doesn't seem possible to get remote notifications.
Here's the updated thread...
How can you play music from the iPod app while still receiving remote control events in your app?
M
