The situation goes a little something like this:
I am programming in Xcode while listening to music on my Bluetooth headphones... you know, to block out the world.
Then I go to launch my app in the iOS Simulator and BOOM, all of a sudden my crystal-clear music becomes garbled and super low quality, like it is playing in a bathtub 2 blocks away... in the 1940s.
Note: the quality deterioration does NOT occur if I am playing music on my laptop or cinema display and I launch the sim. It seems to be exclusively a Sim -> Bluetooth issue.
The problem is more than just annoying, because often after stopping the simulator the crappy bathtub-quality music continues. To fix it I have to open Sound preferences in OS X and briefly toggle back to my laptop sound and then back to my Bluetooth headphones.
This is a big deal because I launch the simulator 50x a day and have to do this toggle thing every time, as well as suffer through 40s-era mono ham-radio-quality music.
For your information, the headphones I am using are Plantronics BackBeat Pro and I am up to date on firmware. I am on OSX 10.11.4 and Xcode 7.3... but this problem has persisted through all versions for 2+ years now. Can you save me from the 1940s?
I've managed to fix it, and it actually seems to be a microphone issue. Go to System Preferences -> Sound, select the Input tab and set Internal Microphone as the input (mine was set to my headphones).
Crappy sound goes away after that =)
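If you end up toggling this many times a day, it can also be scripted. A minimal sketch in Python, assuming the third-party switchaudio-osx CLI is installed (brew install switchaudio-osx) and that your internal mic shows up as "MacBook Pro Microphone" (the exact name varies by machine):

```python
# Minimal sketch: force the macOS input device back to the internal mic using
# the third-party SwitchAudioSource CLI (brew install switchaudio-osx).
# "MacBook Pro Microphone" is an assumption; run `SwitchAudioSource -a -t input`
# to see the exact device names on your machine.
import subprocess

INTERNAL_MIC = "MacBook Pro Microphone"  # assumed name; varies by model

subprocess.run(
    ["SwitchAudioSource", "-t", "input", "-s", INTERNAL_MIC],
    check=True,
)
```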
EDIT (May 30 2018):
I've found an easier way to do the same as above. Instead of opening System Preferences, you can just Option (alt) + click on the sound icon in the macOS menu bar and then select "Internal Microphone" from the "Input Device" list. Screenshot as follows.
If you're using Xcode 9 or higher, you can set a default audio input and output for the simulator. This can be done by launching the simulator from Xcode and navigating to I/O > Audio Input within the menu bar and selecting Internal Microphone. This solution will save your audio preference so you won't have to change it on every launch.
On the Simulator, select:
I/O -> Audio Input -> MacBook [Pro]
Done.
Seems like years of suffering are finally over. From the Xcode 12 beta release notes:
Simulator defaults to the internal microphone unless you explicitly choose a different audio source. This avoids triggering phone call mode on Bluetooth headsets which degrades audio quality while listening to music. (59338925, 59803381)
You can also switch to the Mac's internal mic in System Preferences -> Sound; that's how I usually fix this bug (I have Sony WH-1000XM3).
I'm working on some home automation programs and one of the things I want to be able to do is detect when my 4th generation Apple TV has woken from sleep. This will generally only ever happen when someone pressed a button on its Siri remote to wake it up.
I have a PC (connected to the same TV as the Apple TV) that has a Pulse-Eight USB-CEC adapter, so naturally the first thing I tried was using CEC to determine when the Apple TV is awake. Unfortunately it's not reliable, since monitoring the Apple TV's power status to see when it wakes up produces false positives. (I should note that I do not have "Control TVs and Receivers" enabled on the Apple TV, and can't turn it on for the particular project I'm working on because I need the Apple TV to not change the TV's input.)
I'm trying to think of some other way to do this. I'm open to any possibilities, including things like:
Making use of private APIs on the Apple TV
Running an 'always on' program in the background of the Apple TV that sends a signal when the Apple TV wakes up, if that's even possible. (I suspect that it isn't.)
Monitoring the Bluetooth communication between the Siri Remote and the Apple TV, if that's possible
Somehow filtering HDMI-CEC commands so that I can turn on 'Control TVs and Receivers', allow the Apple TV's CEC commands for turning on and off the TV, and exclude commands for changing the TV's input.
Any other method, no matter how hacky or ridiculous, as long as it works!
Does anyone have any suggestions? I'm running out of things to try!
I tried to post the text below on the Apple discussion / support communities but was told I don't have the right to post this content. Maybe someone in this group can succeed in doing it:
Apple TV 4 CEC integration is great when it works, but it doesn't work all the time and not with all the various equipment out there; search across forums and you will see lots of unhappy users. I would like to use a Raspberry Pi to detect when my Apple TV goes to sleep and wakes up, and programmatically turn my TV on or off using its RS232C port or custom CEC commands.
I used a Bonjour services explorer and compared every single result between the sleep and on states, and there are no differences whatsoever. I would have expected Apple to welcome such automation projects and make this information readily available with a variable such as status: sleep or status: on.
Is there a way I could tell the two states apart via the network connection?
If not, could one build a tvOS app which runs in the background and makes this information available to clients somehow?
I finally found a method that seems to work consistently. This method is incredibly hacky and not at all the sort of way I'd prefer to do this, but it's the only one I've found so far that works consistently.
I have taken an old USB webcam and affixed it to the front of my Apple TV so that its lens is directly in front of the Apple TV's front-facing light. Whenever the Apple TV is asleep, I simply check for the light turning on by taking images from the camera and analyzing their average luminosity. Since the lens is right next to the light, when it turns on it creates a huge blown-out white circle in the image that's incredibly easy to detect.
As long as the Apple TV is asleep, the light turning on seems to indicate 100% of the time that it has woken up. I have yet to find a single incident of either a false positive or false negative.
Since pressing buttons on the Siri remote causes this light to blink, this also means that I can detect buttons being pressed by looking for changes in the light while the Apple TV is awake. It's not 100% accurate, since some button presses are faster than the frame rate of my crappy old USB webcam, but it works well enough.
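If anyone wants to replicate the light-detection trick, here is a minimal sketch of the idea in Python with OpenCV; camera index 0 and the brightness threshold are assumptions you would tune for your own webcam and room lighting:

```python
# Minimal sketch of the webcam approach: grab frames and treat a jump in
# average brightness as "the Apple TV's status light turned on".
# Camera index 0 and the threshold of 60 are assumptions; tune for your setup.
import time
import cv2

THRESHOLD = 60.0   # mean pixel value (0-255) that counts as "light on"

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        luminosity = gray.mean()          # average brightness of the frame
        if luminosity > THRESHOLD:
            print("Light on -> Apple TV likely woke up")
        time.sleep(0.1)
finally:
    cap.release()
```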
I would vastly prefer to find a better method of doing this, like making a request over the LAN to the Apple TV where the response clearly indicates it being awake or asleep, but so far it doesn't look like that's possible.
Here I am, six and a half years later, and I've finally found a better way to get the power state of my Apple TV.
I can simply use pyatv, which has a function named power_state that returns the Apple TV's current power state.
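A minimal sketch of what that looks like with pyatv; the IP address is an assumption, and depending on your tvOS version you may need to pair credentials first (e.g. with atvremote wizard):

```python
# Minimal sketch: read the Apple TV's power state with pyatv (pip install pyatv).
# The IP address below is an assumption; you may also need to pair first and
# pass stored credentials, depending on your tvOS version.
import asyncio
import pyatv

APPLE_TV_IP = "192.168.1.50"  # assumed address of the Apple TV

async def main():
    loop = asyncio.get_running_loop()
    confs = await pyatv.scan(loop, hosts=[APPLE_TV_IP])
    if not confs:
        raise SystemExit("No Apple TV found at that address")
    atv = await pyatv.connect(confs[0], loop)
    try:
        # power_state reports PowerState.On or PowerState.Off
        print("Power state:", atv.power.power_state)
    finally:
        atv.close()

asyncio.run(main())
```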
I am using a Lenovo G500 laptop and my sound card has support for Dolby Advanced Audio v2, which works nicely in Windows (i.e. Windows 7, 8 and 8.1). However, I have failed to enable the Dolby sound effect in Linux (I have tried Linux Mint 17 and Fedora 20).
Does anyone have an idea which Linux distribution has support for this feature, or how I can enable it in a Linux OS?
I would appreciate it if you could point me in the right direction.
Thanks.
I've found some very good advice on forums that helped me achieve Dolby-like sound on my Kubuntu 19.04 with a Lenovo G780.
Install PulseEffects https://github.com/wwmm/pulseeffects
(repos with deb files are here: https://github.com/wwmm/pulseeffects/wiki/Package-Repositories#debian--ubuntu)
Restart the user session or reboot after this, because PulseAudio will be upgraded, and it may cause problems if you don't restart.
Run PulseEffects and close it. It'll create all the settings dirs on first launch. They are required for the next step.
Install PulseEffects-Presets from here: https://github.com/JackHack96/PulseEffects-Presets
(I used the suggested script that automatically downloads them to the PulseEffects import dirs; it requires flatpak, which can be installed from the repos with sudo apt install flatpak)
Launch PulseEffects again. Select Convolver. Enable it. Click on the wave button. You'll see a list of presets. Enable:
Dolby ATMOS ((128K MP3)) 1.Default.irs
Close the dialog and that's it. You can toggle the Convolver in PulseEffects on and off while playing music to compare the results. You may play with other presets as well.
For improving sound on a notebook or a tablet, the PulseEffects help pages come with a tutorial on how to achieve this.
The app can be minimized to the tray on GTK-enabled desktops with an additional application: https://github.com/boomshop/pulseffectstray
It's better to enable autostart in the app settings (it will copy its desktop file to ~/.config/autostart with the --gapplication-service command line option, so next time it starts without the GUI).
It is possible to get reasonably close to the Dolby Advanced Audio output on Linux.
TLDR:
Record the result of playing a -0dBFS impulse in Windows with all effects enabled. Save that as a wav file and use it as input to the PulseEffects Convolver.
Step-by-step:
Install Audacity in Windows.
Configure Audacity to use WASAPI
Select the loopback device as input
Select your laptop speakers as output, making sure all Dolby Advanced Audio effects are enabled.
Start recording
Play an impulse audio file (you might need to do this twice; Audacity often doesn't pick up the first impulse). If you need an impulse file, see the sketch right after this list.
Zoom in and select the area around the recorded impulse (see screenshot below)
Export the selection as a WAV file and change the extension to irs
Import this irs file into the Convolver.
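If you don't already have an impulse file for step 6, here is a minimal sketch for generating one in Python; the 48 kHz sample rate is an assumption and should match whatever your Windows output device is set to:

```python
# Minimal sketch: write a one-second WAV containing a single full-scale
# (-0 dBFS) impulse, for playback in step 6.
# The 48 kHz sample rate is an assumption; match your Windows output device.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 48000  # assumed; set to the rate configured in Windows

samples = np.zeros(SAMPLE_RATE, dtype=np.int16)       # one second of silence
samples[SAMPLE_RATE // 2] = np.iinfo(np.int16).max    # single full-scale sample
wavfile.write("impulse.wav", SAMPLE_RATE, samples)
```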
Some notes:
Audacity isn't required; presumably any software capable of recording from the output device will be fine.
To avoid any changes introduced by sample rate conversion, set the sample rate of the output and input devices in Windows to be the same.
When recording, in Step 5, Audacity would not record unless audio was playing. This is probably due to using WASAPI. Just start recording, play the impulse and if you don't see it in the recording output as a single spike, play it again.
The screenshot is quite heavily zoomed in so that you can see the area where there is data. When selecting what to export, try to make sure the selection is roughly centered around the central peak. It doesn't have to be perfect.
As a useful check to make sure what you are recording has been processed by Dolby Advanced Audio, you can disable all effects on the output device in Windows and record the impulse a second time. This should show up as a single peak sample and not the symmetric pattern.
After a bit of research I found this explanation that seems to have satisfied my query. It generally says ...
There isn't going to be an easy fix for this, unless Dolby releases a Linux driver or publishes more information on what exactly their software is doing (which is unlikely).
Haswell-ThinkPad-problems, linux-low-audio-quality
Beware that PulseEffects has recently changed its name to EasyEffects, but PulseEffects-Presets hasn't updated its config files to cover this change; therefore this answer might not be applicable anymore.
I cannot record audio using the monitor source of sink devices, and this has been happening for the last 2 to 3 days. I have reinstalled PulseAudio, but the problem remains. I am using Ubuntu 12.04 with the default PulseAudio. A few days ago I had the same problem; I reinstalled Ubuntu and overcame it, but now the same problem is back.
From my point of view, the monitor of the internal audio does not seem to get any signal, because when I check PulseAudio Volume Control (pavucontrol), the volume bar does not show any level in the Playback tab, and the same is true in the Output Devices tab. However, I can hear audio, and the pavucontrol Playback tab shows the names of the applications that are playing.
Please suggest any way to overcome this problem, because my application needs to record audio from the speakers (in PulseAudio terms, from the monitor source of a sink device).
Thanks...
I got the solution; it was a simple case of the monitor being muted. In pavucontrol go to Input Devices, then in the Show dropdown at the bottom switch it to "All Input Devices". I believe it's normally set to "All Except Monitors", so the monitor doesn't show up. In my case it was this monitor that was muted, but I could still hear sound because the internal audio wasn't muted. Sorry for pasting here, but let's hope it helps someone.
I want to programmatically detect if a (local) computer (not a mobile device) is playing any sound or music, preferably via some high-level API from Java or Python or a similar language.
I have never done it, but as a first approach I would open (open as input, not output) a fake recording stream on the master Windows playback line-out device (instead of the normal use of opening the mic or line-in device for recording).
I would then monitor the captured frames. If for a certain time there are values over some small threshold, I would infer there is sound.
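Here is a minimal sketch of that idea in Python with the sounddevice library; the device selection and threshold are assumptions (on Windows you would pick a WASAPI loopback device, on PulseAudio a ".monitor" source):

```python
# Minimal sketch: record a short chunk from a loopback/monitor device and
# report "sound playing" if its RMS level exceeds a small threshold.
# The device choice and threshold are assumptions; list devices with
# `python -m sounddevice` and pick your loopback/monitor source.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DURATION = 0.5        # seconds captured per check
THRESHOLD = 0.01      # empirical noise-floor threshold

def is_sound_playing(device=None):
    frames = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                    channels=1, dtype="float32", device=device)
    sd.wait()
    rms = float(np.sqrt(np.mean(frames ** 2)))
    return rms > THRESHOLD

if __name__ == "__main__":
    print("sound detected" if is_sound_playing() else "silence")
```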
I'm trying to find resources about the possibility of controlling an audio stream from the multitasking bar in iOS 4+. So, possibly when the app reaches the background, after the home button is pressed twice, the audio controls in the multitasking bar should give me some way to control my audio stream. I don't know if using a simple MPMoviePlayer is enough or if I should go with AVFoundation.
This is a picture of what I mean:
http://bayimg.com/AabFEAAdg
Thank you very much