In Android, one can set the volume of the left and right speakers separately by using the android.media.MediaPlayer.setVolume method which takes two arguments: one float for the left speaker and another for the right speaker.
In Phonegap/Cordova, there is a media.setVolume method which only takes a single argument to set the volume of the media.
How can I set the volume of each speaker individually in Phonegap?
I want to do a couple of things:
- I want to hear sound from all other programs through Max, and Max only.
- I want to edit that sound in real time and hear only the edited sound.
- I want to slow down the sound, while stacking the non-slowed incoming input onto a buffer, which I can then speed through to catch up.
Is this possible in Max? I have had a lot of difficulty getting even step 1 working. Even if I use my speakers as an input device, I am unable to monitor the sound, let alone edit it. I am using Max for Live, for what it's worth.
Step 1 and 2
On Mac, you can use Loopback.
You can set your system output to the loopback driver, then set the loopback driver as the input in Max and then the speakers as the output.
For Windows you would do the same, but with a different internal audio routing system such as JACK.
Step 3
You can do that with the buffer~ object. Of course the buffer will have a finite size, and storing hours of audio might be problematic, but minutes shouldn't be a problem on a decent computer. The buffer~ help file will show you the first steps needed to store and read audio from it.
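To make the catch-up idea concrete outside of Max, here is a rough Python model (not a Max patch; all names here are hypothetical) of the logic: audio is written to the buffer at normal speed while the read position advances at an adjustable rate, so reading slower than real time falls behind and reading faster catches back up.

```python
# Python model of the "slow down, then catch up" idea from step 3.
# Incoming audio is appended at normal speed; the read pointer
# advances at a chosen rate (rate < 1 slows down, rate > 1 catches up).

class CatchUpBuffer:
    def __init__(self):
        self.samples = []    # recorded audio, grows in real time
        self.read_pos = 0.0  # fractional read position into samples

    def write(self, chunk):
        """Append incoming samples at normal (real-time) speed."""
        self.samples.extend(chunk)

    def read(self, n, rate=1.0):
        """Read n output samples, advancing `rate` input samples per
        output sample (no interpolation, to keep the sketch simple)."""
        out = []
        for _ in range(n):
            idx = int(self.read_pos)
            if idx >= len(self.samples):
                out.append(0.0)  # underrun: pad with silence
            else:
                out.append(self.samples[idx])
            self.read_pos += rate
        return out

    def lag(self):
        """How many samples playback is behind the live input."""
        return len(self.samples) - self.read_pos
```

Reading at `rate=0.5` makes the lag grow; reading at `rate=2.0` later shrinks it back toward zero, which is exactly the "speed through to catch up" behaviour described above.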
Is there a way with WASAPI to determine if two devices (an input and an output device) are both synced to the same underlying clock source?
In all the examples I've seen input and output devices are handled separately - typically a different thread or event handle is used for each and I've not seen any discussion about how to keep two devices in sync (or how to handle the devices going out of sync).
For my app I basically need to do real-time input-to-output processing where, each audio cycle, I get a certain number of incoming samples and send the same number of output samples. I.e., I need one triggering event per audio cycle that is correct for both devices, not separate events for each device.
I also need to understand how this works in both exclusive and shared modes. For exclusive I guess this will come down to finding if devices have a common clock source. For shared mode some information on what Windows guarantees about synchronization of devices would be great.
You can use the IAudioClock API to detect drift of a given audio client, relative to QPC; if two endpoints share a clock, their drift relative to QPC will be identical (that is, they will have zero drift relative to each other.)
You can use the IAudioClockAdjustment API to adjust for drift that you can detect. For example, you could correct both sides for drift relative to QPC; you could correct either side for drift relative to the other; or you could split the difference and correct both sides to the mean.
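The arithmetic behind that drift check can be sketched as follows (a Python sketch of the math only; the real position, frequency, and timestamp values would come from IAudioClock::GetPosition and IAudioClock::GetFrequency on Windows, and the exact function names below are hypothetical):

```python
# Sketch of drift measurement relative to QPC. Given two readings of
# (device position, QPC timestamp), the device clock's rate relative
# to QPC is the ratio of elapsed times; its deviation from 1.0 is the
# drift.

def drift_ppm(pos0, qpc0, pos1, qpc1, device_freq, qpc_freq):
    """Drift of an audio clock relative to QPC, in parts per million.

    pos0/pos1: device positions, in device-frequency units
    qpc0/qpc1: QPC counter values captured at those readings
    device_freq / qpc_freq: units per second for each counter
    """
    device_elapsed = (pos1 - pos0) / device_freq  # seconds per the audio clock
    qpc_elapsed = (qpc1 - qpc0) / qpc_freq        # seconds per QPC
    return (device_elapsed / qpc_elapsed - 1.0) * 1e6

def likely_same_clock(drift_a_ppm, drift_b_ppm, tolerance_ppm=1.0):
    """Endpoints sharing a hardware clock show (near-)identical drift
    relative to QPC, so their drifts should agree within measurement
    noise."""
    return abs(drift_a_ppm - drift_b_ppm) < tolerance_ppm
```

If the input and output endpoints report matching drift over a long enough window, they are very likely on the same underlying clock; a persistent difference is the drift you would feed into IAudioClockAdjustment.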
IBasicAudio only lets you control volume and balance; those two properties cannot control each channel's volume individually when there are more than two channels.
Is there any way to do that in DirectShow?
My problem turned out not to be a problem at all; forget about it. The volume sounds the same whether the filter graph uses one instance or more.
IBasicAudio controls volume of entire audio stream, and typical implementations control hardware levels, without data modification. If you need to adjust individual channel volume, you need to convert data to PCM and modify the data (multiply the sample values by a factor of interest).
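As a minimal sketch of that approach (Python here rather than C++, and assuming interleaved 16-bit little-endian PCM), per-channel gain is just a multiply on each channel's samples, with clipping back to the 16-bit range:

```python
import struct

def scale_channels(pcm_bytes, num_channels, gains):
    """Apply a per-channel gain to interleaved 16-bit LE PCM.

    gains: one multiplier per channel, e.g. [0.5, 1.0] halves the
    first channel and leaves the second untouched.
    """
    count = len(pcm_bytes) // 2
    samples = struct.unpack("<%dh" % count, pcm_bytes)
    out = []
    for i, s in enumerate(samples):
        g = gains[i % num_channels]             # channel of this sample
        v = int(s * g)
        out.append(max(-32768, min(32767, v)))  # clip to 16-bit range
    return struct.pack("<%dh" % len(out), *out)
```

In a DirectShow graph you would do the equivalent inside a transform filter sitting on the PCM stream, upstream of the renderer.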
How is it that there is a single input to a headphone, yet the headphone is able to split the signal into separate channels? How does this splitting happen? To be more specific, how is surround sound created by headphones with the same single input?
If you look at the TRS (Tip, Ring, Sleeve) connector jack on the end of your headphone cable, you'll see it is made up of different sections.
The input will normally be a stereo signal, with the left and right channels carried separately.
From memory, I think the tip carries the left channel and the ring the right, but that doesn't matter so much with regard to your question.
As for surround sound, any "surround sound" from headphones is simulated as part of the stereo image.
"Surround sound" is usually achieved via a surrounding array of speakers, rather than via headphones.
I should also add that the above processes are analogue and have nothing whatsoever to do with bytes; any digital signal sent from your computer is converted to analogue before it reaches the headphone socket.
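On the digital side, before that conversion happens, a stereo stream typically carries both channels interleaved sample by sample (L, R, L, R, ...), which is how "one stream" holds two channels. A tiny hypothetical sketch:

```python
# Sketch: an interleaved stereo stream alternates left and right
# samples; splitting it back into channels is just taking every
# other sample.

def deinterleave(samples):
    """Split an interleaved stereo sample list into (left, right)."""
    left = samples[0::2]
    right = samples[1::2]
    return left, right
```

The DAC then converts each channel to an analogue voltage, and the TRS contacts described above deliver those two voltages to the two drivers.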
How do I control the audio volume of an A/V player created by the LiveCode mergExt suite's mergAV external? I need to turn the player's audio on and off, as well as set its volume to a specific value (0 - 100).
It's not currently possible, so it would need to be added to mergAV. Unfortunately, Apple decided not to give AVPlayer a volume or mute property, which is why it isn't implemented yet. There is a workaround, though: MPVolumeView, which presents the system volume control, could be added to mergMP.