I need some help because I don't know how to approach this challenge.
I want to build a device that receives a Bluetooth audio signal and forwards it to a Bluetooth speaker. At the same time, it runs some algorithms on the audio data and sends the results via UDP to a different device.
I have already thought about using two or three ESP32s, using one ESP32 with an extra Bluetooth module, or looking for an entirely different MCU with Bluetooth 5.0 or higher and 5 GHz Wi-Fi. But I don't know which approach is best, or whether a completely different one would be better.
Some context on why we want to do this:
We want to create a real-time light show based on the currently playing song. It already works on PC, but we also want to make it accessible to phone users. Sadly, there is no way to capture the internal audio on iPhone or Android phones. Our idea for making the music sync possible with a phone is that you connect the phone via Bluetooth to our "sync box", which is in turn connected to the speaker via Bluetooth or AUX. The "sync box" runs our algorithms for creating the light shows and then sends the data to the microcontrollers of the light strips.
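Whatever MCU we end up on, the UDP results channel itself is the simple part. Here is a minimal sketch using plain BSD sockets, which lwIP on the ESP32 (ESP-IDF) also exposes; the address, port, and payload layout are placeholders for whatever the light-strip controllers actually expect, and it assumes Wi-Fi is already connected:

```c
/*
 * Minimal sketch of the UDP "results" channel. Plain BSD sockets,
 * so it builds on a desktop OS as well as on ESP-IDF (via lwIP).
 * The destination address, port, and payload format are placeholders.
 */
#include <string.h>
#include <stdint.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int send_lightshow_frame(const uint8_t *payload, size_t len) {
    int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock < 0) return -1;

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(4210);                      /* placeholder port */
    dest.sin_addr.s_addr = inet_addr("192.168.4.2");  /* light controller */

    ssize_t sent = sendto(sock, payload, len, 0,
                          (struct sockaddr *)&dest, sizeof(dest));
    close(sock);
    return sent < 0 ? -1 : 0;
}
```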
So maybe you have an idea for syncing the lights to the music in a completely different way, or for how I can approach the challenge with Bluetooth.
Any help is highly appreciated.
Thanks a lot.
I have 5 requirements:
I want to send sounds that other programs output over voice chat programs (e.g. TeamSpeak, Skype, etc.).
I only want to send the sounds of certain programs, not all of my system sounds.
I must still be able to talk to them (mic input should still be used).
I still want to hear the sounds of what I send.
It must be a software solution.
My scenario:
I am playing LoL/DoTA/CoD/BF (whichever makes you happy) and I am on TeamSpeak with some friends. Something happens and I want to play a fitting sound (e.g. from http://www.myinstants.com/), so I want to send the sound from my browser over the chat.
What I tried:
I installed CheVolume (http://www.chevolume.com/Infos.aspx). This is for handling output devices, not sound input.
I set Stereo Mix as my default communication device. This mostly works, but then I also send my game sounds over the chat.
I have installed VB-AUDIO Voicemeeter (http://vb-audio.pagesperso-orange.fr/Voicemeeter/). It can be useful, but it is not what I want; I get similar results to using Stereo Mix.
I installed JACK (http://jackaudio.org/); shame to say, it is too technical for me.
I tried using Virtual Audio Cable (http://software.muzychenko.net/eng/vac.htm). Again, this only enables me to send all of my system sounds.
But Voicemeeter allows you to do that:
Exactly: see the User Manual, case study #1.
It is possible only if the application allows setting its playback device; then you will be able to route that application to a Voicemeeter virtual input or a physical input through a VB-CABLE (the Voicemeeter Banana version is better for that, since it provides more I/O).
3, 4, 5: of course.
I'm trying to create an interactive voice-tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with a diagram.
Unfortunately, this is rather cumbersome and I suspect I can achieve my goal through virtualization or at the VERY least some sort of attenuation cables between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get speaker audio to go into microphone audio from one to the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox audio output to a microphone input which directs to Chrome and vice versa. If I try accomplishing it all through just one virtual machine I have to use two different browsers for the voice-tree webpage and Hangouts call since Hangouts pushes its audio through Chrome (even the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still try to accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?
I'm currently developing a Cydia tweak for speaker recognition on the iPhone. The tweak can identify whether the current user is the phone's owner (after training). It has already been implemented on Android, and we have already compiled and tested the core library. The only difficulty we are facing is how to capture the audio data from Siri. We have tried:
We hooked the methods "- (void)_tellSpeechDelegateRecordingWillBegin" and "- (void)_tellSpeechDelegateRecordingDidEnd" and used AVAudioRecorder to record the audio. This failed because every AVAudioSession is interrupted when Siri is recording.
We hooked the method "- (void)startSpeechRequestWithSpeechFileAtURL:(id)arg1". This method seemed to be related to the audio file, but we could not get it hooked with the Logos tweak framework.
There are two possible ways we are considering:
Implement a low-level audio recorder that can bypass Siri's interruption (something like a call recorder).
Implement an HTTP(S) proxy server inside the iPhone and capture the requests that are forwarded to Siri's server.
But we have little experience with either option. Does anyone have ideas on how to capture the audio from Siri (on the phone, not through an external server)?
Update (Feb 12 2014)
Check this.
I found there was a class named "AFSpeechRecorder". It was used in Siri, so I guess it must be related to the audio data. But unluckily, this class was removed in iOS 7, and I can't get any idea about what changed.
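For reference, the hooking attempt from point 1 looked roughly like this (a minimal Logos sketch; AFSpeechRecorder and the two methods are private Siri internals from iOS 6, so treat the names as illustration, not public API):

```objc
// Tweak.xm -- minimal Logos sketch of the hook we tried. It assumes the
// two delegate-notification methods live on AFSpeechRecorder, the private
// class mentioned above (present on iOS 6, gone in iOS 7).
#import <Foundation/Foundation.h>

%hook AFSpeechRecorder

- (void)_tellSpeechDelegateRecordingWillBegin {
    %orig;
    // Siri is about to record. Starting our own AVAudioRecorder here
    // fails, because Siri's session interrupts every other AVAudioSession.
    NSLog(@"[SpeakerID] Siri recording will begin");
}

- (void)_tellSpeechDelegateRecordingDidEnd {
    %orig;
    NSLog(@"[SpeakerID] Siri recording did end");
}

%end
```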
I've spent two days on this and have gotten nowhere. I'm trying to use [MPMusicPlayerController applicationMusicPlayer] to play audio chosen from the user's iPod library and have it run in the background as well as support remote events. Now getting the music actually playing is the easy part. Get the instance, pick the songs, assign the music queue and play. Done and done. BUT... a) I can't get it to play in the background, and b) even when in the foreground I can't get the remote control events to work at all!
And before you ask: yes, I have set the plist entries, the audio session category, and the call to say I'm interested in getting remote events, and I have set up a first responder to listen for them. So please know that, yes, I've read every single document on the subject that I could find* (*a task I blame Apple for, for not being clear at all on this topic, nor having ANY example code for it!) and I've watched every one of the WWDC videos relating to it (even freezing the screen to copy the code exactly from their example...), so unless I've missed something not in this list, replying with any of those answers is not going to help.
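To be explicit, the "first responder to listen for them" part I have in place is the standard plumbing, roughly:

```objc
#import <UIKit/UIKit.h>

// A view controller that stays on screen and owns the remote-control events.
@interface PlayerViewController : UIViewController
@end

@implementation PlayerViewController

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Step 1: tell the OS we want remote control events at all.
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    // Step 2: become first responder so the events are delivered to us.
    [self becomeFirstResponder];
}

- (BOOL)canBecomeFirstResponder {
    return YES;  // without this, remote events never reach this responder
}

- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    if (event.type == UIEventTypeRemoteControl &&
        event.subtype == UIEventSubtypeRemoteControlTogglePlayPause) {
        // toggle playback here
    }
}

@end
```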
One more thing... I am explicitly talking about using the MPMusicPlayerController which, according to the docs, never uses an application session. It always uses the system session. (Maybe that in itself answers my question, but the docs don't clearly say so, hence this question.)
That said, after two days, my thoughts are this:
When using the MPMusicPlayerController, regardless of what methods you call or what plist entries you set, your app will never run in the background. Period. If you use the ipodMusicPlayer instance, the music keeps playing, but that's because it's the iPod that's playing, not your app. If you use the applicationMusicPlayer instance instead, when going to the background your music stops. In both cases, your app is suspended.
Regardless of your using the ipodMusicPlayer or applicationMusicPlayer instances, all remote events go to the iPod application itself, not yours, even if you've explicitly asked for them. If you are using the applicationMusicPlayer instance and you use the remote to select 'Play', the iPod app receives the command so your audio ducks out and is interrupted and playback begins in the iPod app. If you've chosen the ipodMusicPlayer instead, then of course it doesn't matter as you have explicitly said you're basically just interested in remotely controlling the iPod app which again, is what actually receives the remote events.
The icon in the quick-switch controls at the bottom never changes to your app's icon because again, your app is never actually set up to receive the events. The iPod application is, which is why its icon does appear there.
So what I want to know is... am I wrong here? Has anyone successfully been able to use MPMusicPlayerController and intercept the remote events? While I'd prefer to use the applicationMusicPlayer with background music support so I don't muck with the user's iPod, the bigger thing is remote control notifications, meaning if I have to use the ipodMusicPlayer and keep my app in the foreground to intercept those messages, so be it. It's ugly that way, but at least it's something.
Code examples, or at least explicit steps against one of the built-in app templates would be GREATLY appreciated. (Don't even need the implementation... just the steps. Hopefully that will appease the inevitable 'It's still under NDA' thing that people keep answering questions with.)
Mark
I solved it. The info is in my other question over here...
Stack Overflow: Play iPod music while receiving remote control events
...but the short version is you have to use AVPlayer (but not AVAudioPlayer... no idea why that is!) with the asset URL from the MPMediaItem you got from the library, then set the audio session's category to Playback (do NOT enable mixing!) and add the appropriate keys to your Info.plist file telling the OS your app wants to support background audio.
This lets you play items from your iPod library (except Audible.com files, for some reason!) and still get remote events. Granted, you have to do more work, and since this is your own audio player it is separate from, and will interrupt, the iPod app (which may or may not be desirable; and again, don't enable mixing or the iPod app will hijack the remote control events), but those are the breaks!
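In code, the short version looks roughly like this (a sketch with error handling omitted; it assumes the "audio" background mode is already declared in your Info.plist):

```objc
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <MediaPlayer/MediaPlayer.h>

@interface PlayerController : NSObject
@property (nonatomic, strong) AVPlayer *player;
- (void)playMediaItem:(MPMediaItem *)item;
@end

@implementation PlayerController

- (void)playMediaItem:(MPMediaItem *)item {
    // AVPlayer (not AVAudioPlayer) with the asset URL from the library
    // item. Note: assetURL is nil for some protected items (e.g. the
    // Audible.com files mentioned above), so a real app should check it.
    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    self.player = [AVPlayer playerWithURL:assetURL];

    // Playback category, *without* mixing, so our app (not the iPod app)
    // owns the audio session and therefore the remote control events.
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                           error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    // Register for remote control events (a first responder must handle them).
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];

    [self.player play];
}

@end
```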
For anyone who wants to know: I found out that to get audio playing in the background, you have to set the audio session's category to Playback, and then background audio works just fine. If you also want to play your own sounds at the same time, you have to mark the category as mixable. That solved the background music part. But what I've found is that any time the iPod is playing, it doesn't seem possible for you to get remote notifications.
Here's the updated thread...
How can you play music from the iPod app while still receiving remote control events in your app?
M