NAudio - Using WaveIn and AudioEndpointVolume together

In my app I'm using WaveIn to record from the mic, and I allow my client to adjust the recording level using AudioEndpointVolume. I haven't had any problems so far, but since my client may have a different sound card, I would like to ask whether this combination could cause any issues.

You need to be aware that you are using two fundamentally different audio APIs. WaveIn is the old "MME" audio subsystem, and AudioEndpointVolume is from the new "Core Audio" API introduced with Vista. There is no reason why they shouldn't work together. The main challenge is ensuring that you are definitely controlling the same device with both on systems that have more than one audio input device.
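For illustration, here is a minimal NAudio sketch of one way to line the two APIs up, assuming the device is identified by name. The prefix comparison is only a heuristic (WaveIn product names are truncated to 31 characters), so treat the matching strategy as an assumption rather than a guaranteed mapping:

    using System;
    using NAudio.Wave;
    using NAudio.CoreAudioApi;

    class RecordingLevelSketch
    {
        static void Main()
        {
            // The WaveIn (MME) device to record from (device 0 for brevity).
            int waveInDeviceNumber = 0;
            var waveInCaps = WaveInEvent.GetCapabilities(waveInDeviceNumber);

            // Find the Core Audio capture endpoint that appears to be the same
            // physical device, so the volume control targets the right input.
            var enumerator = new MMDeviceEnumerator();
            MMDevice endpoint = null;
            foreach (var device in enumerator.EnumerateAudioEndPoints(DataFlow.Capture, DeviceState.Active))
            {
                // Heuristic: WaveIn product names are truncated, so compare by prefix.
                if (device.FriendlyName.StartsWith(waveInCaps.ProductName))
                {
                    endpoint = device;
                    break;
                }
            }

            if (endpoint != null)
            {
                // Set the recording level (0.0 to 1.0) on the matched endpoint.
                endpoint.AudioEndpointVolume.MasterVolumeLevelScalar = 0.75f;
            }

            // Record from the WaveIn device as usual.
            var waveIn = new WaveInEvent { DeviceNumber = waveInDeviceNumber };
            waveIn.DataAvailable += (s, e) => { /* write e.Buffer to a file, etc. */ };
            waveIn.StartRecording();

            Console.WriteLine("Recording... press Enter to stop.");
            Console.ReadLine();
            waveIn.StopRecording();
        }
    }

On a machine with a single input device the matching step is trivial, but on multi-device systems it is worth letting the user pick the device explicitly rather than relying on name matching.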

Related

WebRTC - how to synchronize media streams

I'm using WebRTC in a sort of non-conventional way.
I have multiple streams generated by several 'broadcasting' peers being sent to a collection of several 'receiving' peers.
I intend to use an SFU media server (maybe Jitsi or Kurento)
It is very critical that these streams are presented at the receiving peers in a synchronized fashion.
What are the methods I can use for synchronization? Usually this isn't even attempted with WebRTC because there is normally no consistent clock between peers, but in my case there is a common clock for all the stream sources.
The only ways I can imagine doing it are:
Not worrying about it and hoping that WebRTC's low latency will keep everything in sync.
Somehow encoding timestamp metadata in the WebRTC stream frames, and somehow synchronizing the display with JavaScript in the browser.
Using a tool like GStreamer that can perform video synchronization, mixing the streams into a single stream and forwarding that to the media server (and thus to the receiving clients). I don't have a good idea of how I would actually perform the synchronization, though.
Any thoughts and advice would be appreciated.
The only OTT system available (at the time of writing) that is capable of synchronising low-latency streams is the SYE system made by Net Insight. They are able to synchronise any device down to single-digit milliseconds in low-latency mode.
They do not provide any open source that I know of, but you can check it out by downloading an app that uses it.
Primetime
The game starts at 20:00 CET every day; download it on several phones/tablets to verify the sync part.
However, there are other systems I found that can synchronise playback.
HbbTV
HbbTV seems to focus more on IPTV replacement solutions, as I interpret it. They do not seem to target the wild west of the internet. I might be wrong; please correct me if so.
W3C MULTI-DEVICE TIMING COMMUNITY GROUP
I spoke to the researchers a while back. They can synchronise playback, but they target collaborative viewing; the low-latency part is not in scope, as I understand it.
As for WebRTC, LHLS, MPEG-DASH CMAF and all the other solutions: they have no sense of time, so it will not be possible to render the same video frame on different devices using various access technologies such as 4G, WiFi or cable, or even if the devices use the same technology, because rendering is buffer-controlled, not time-controlled.
/Anders

How to send sound of certain applications over chat programs (win OS)

I have 5 requirements:
I want to send sounds that are the output of other programs over voice chat programs (e.g. TeamSpeak, Skype, etc.).
I only want to send the sounds of certain programs, not all my system sounds.
I must still be able to talk to them (mic input should still be used).
I still want to hear the sounds of what I send.
It must be a software solution.
My scenario:
I am playing LoL/DotA/CoD/BF (whichever makes you happy) and I am on TeamSpeak with some friends. Something happens and I want to play a fitting sound (e.g. from http://www.myinstants.com/). So I want to send the sound from my browser over the chat.
What I tried:
I installed CheVolume (http://www.chevolume.com/Infos.aspx). This is for handling output devices, not sound input.
I set Stereo Mix as my default communication device. This works mostly, but then I also send my game sounds over chat.
I have installed VB-AUDIO (http://vb-audio.pagesperso-orange.fr/Voicemeeter/). It can be useful, but it is not what I want; I get results similar to using Stereo Mix.
I installed Jack (http://jackaudio.org/); shame to say, it is too technical for me.
I tried using Virtual Audio Cable (http://software.muzychenko.net/eng/vac.htm). Again, this only enables me to send all my system sounds.
But Voicemeeter allows you to do that:
Exactly: see the User Manual, Case Study #1.
It is possible only if the application allows setting its playback device; then you will be able to route that application to a Voicemeeter virtual input or physical input through a VB-CABLE (the Voicemeeter Banana version is better for that since it provides more I/O).
Requirements 3, 4 and 5: of course.

How to route microphone & speaker audio between virtual machines?

I'm trying to create an interactive voice-tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with a diagram (not reproduced here).
Unfortunately, this is rather cumbersome and I suspect I can achieve my goal through virtualization or at the VERY least some sort of attenuation cables between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get speaker audio to go into microphone audio from one to the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox audio output to a microphone input which directs to Chrome and vice versa. If I try accomplishing it all through just one virtual machine I have to use two different browsers for the voice-tree webpage and Hangouts call since Hangouts pushes its audio through Chrome (even the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still try to accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?

API for manipulating audio output in Windows 8

I want to manipulate audio output data, for all the different running applications, before it is sent to the speakers.
Turn the volume up or down, filter the audio, things like that.
How can I gain access to the audio output in real time?
Is there a way to not depend on the audio driver interface?
Thanks! :)
Windows Store apps allow you to use WASAPI. In WASAPI, there is a concept of "audio sessions", of which there is one for every stream of audio being sent to the soundcard. You can enumerate the audio sessions, which gives you access to IAudioSessionControl. However, this doesn't let you manipulate the audio, which as far as I know WASAPI simply doesn't allow. The best you can hope for is to get hold of ISimpleAudioVolume for each session, but the last time I tried that, I found that you couldn't get hold of the session GUIDs you needed to adjust the volume for other processes. You may also be able to get hold of the audio endpoints and adjust the master volume for the soundcard.
In short, WASAPI is the most powerful audio API for Windows Store apps but unfortunately I don't think it will let you do very much of what you are asking here.
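To make the session concepts concrete, here is a rough desktop sketch using NAudio's Core Audio wrappers. This is classic desktop WASAPI, not Windows Store code, so treat it as an illustration of the session model rather than a solution for the Store-app case above. It enumerates the sessions on the default render endpoint, reads each session's volume via ISimpleAudioVolume, and reads the endpoint's master volume:

    using System;
    using NAudio.CoreAudioApi;

    class SessionVolumeSketch
    {
        static void Main()
        {
            // Default playback device (render endpoint).
            var enumerator = new MMDeviceEnumerator();
            var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);

            // One audio session per stream of audio being sent to this endpoint.
            var sessions = device.AudioSessionManager.Sessions;
            for (int i = 0; i < sessions.Count; i++)
            {
                AudioSessionControl session = sessions[i];
                Console.WriteLine("Session for process {0}: volume {1:P0}",
                    session.GetProcessID,
                    session.SimpleAudioVolume.Volume);

                // Per-session volume can be changed, but the audio data itself
                // cannot be filtered or modified through this interface.
                // session.SimpleAudioVolume.Volume = 0.5f;
            }

            // Master volume for the whole endpoint (the soundcard).
            Console.WriteLine("Master volume: {0:P0}",
                device.AudioEndpointVolume.MasterVolumeLevelScalar);
        }
    }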

Record audio from various internal devices in Android (via undocumented API)

I was wondering whether it is possible to capture audio data from other sources like the system out, FM radio, Bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and have already investigated all possibilities, including trying to sniff the raw Bluetooth communication between the phone and the radio device, with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor that would allow me to do that without rooting the device. Do you at least have any idea how to use other devices (maybe by somehow accessing /dev/audio), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm trying to develop the app for the HTC Desire.)
PS. For those of you who are against using undocumented APIs, please don't post here; I'm writing an app that will be for my personal use, and even if I ever publish it I will warn the user of possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you may try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers. There's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio which decorates the real one. In doing so, you should be able to log what happens and very possibly intercept FM radio output, provided that this is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this, that interests me.
