RTSP/AirPlay: how to poll or monitor for AirPlay audio playing

I'm building a stereo system using a Belkin SoundForm AirPlay adapter. I have a home server, and I want to write an app that monitors the SoundForm and, when it is playing audio, turns the audio amp on or off.
I tried monitoring the mDNS announcements: I can see the adapter announcing itself, but the flags are static regardless of state. It's announcing, but not telling me whether it's playing or not.
There are some good docs on AirPlay (http://nto.github.io/AirPlay.html), but I'm not finding a clean way to poll or monitor "are you playing audio?". The only solution I have hacked together is watching for high UDP traffic and assuming that equates to an audio stream (but this is a hack and I'm not a fan).
How can I monitor/poll this device (or any RTSP device) and get a clear signal for "is playing audio now"?
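For reference, the UDP-traffic hack currently looks roughly like this (a minimal Python sketch; the adapter's IP address and the packet threshold are made-up values you'd tune for your network, and packet sniffing needs root):

    from scapy.all import sniff  # pip install scapy; sniffing requires root

    ADAPTER_IP = "192.168.1.50"  # hypothetical address of the SoundForm adapter
    WINDOW_S = 2                 # sampling window, in seconds
    THRESHOLD = 100              # UDP packets per window that we call "playing"

    def is_playing():
        # Count UDP packets addressed to the adapter over a short window;
        # an active AirPlay audio stream produces a steady flow of them.
        pkts = sniff(filter="udp and dst host %s" % ADAPTER_IP, timeout=WINDOW_S)
        return len(pkts) > THRESHOLD

    print("playing" if is_playing() else "idle")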
Thanks!

Related

Using ffmpeg to stream live video from a raspberry pi to a web server for distribution

I am trying to build a device that will encode H.264 video on a Raspberry Pi and stream it out to a separate web server in the cloud. The main issue I am having is that most implementations I find either run the web server directly on the Pi or have the embedded player pull video directly from the device.
I would like it to be pretty much plug-and-play no matter what network I am on, i.e. no port forwarding of any sort: all I need to do is connect the device to the network, and the stream will be visible on a webpage.
One possible solution is to simply encode frames as base64 JPEGs and send them to an endpoint on the web server; however, this is a huge waste of bandwidth and won't allow the frame rate H.264 would.
Any ideas on possible technologies that could be used to do this?
I feel like it can be done with WebSockets or ZeroMQ and FFmpeg somehow, but I am not sure.
It would be helpful if you could provide more description of the architecture of the device. Since it is an RPi, it is probably also being used for video acquisition via the camera expansion port. If that is the case, you can access the video device and do quite a bit of streaming with a combination of the available command-line tools.
Something like the following will produce an RTMP stream from the video camera host.
raspivid [preferred options] -o - | ffmpeg -i - [preferred options] rtmp://[IP ADDR]/[location]
From there, FFmpeg will do a lot of heavy lifting for you.
This will now enable remote hosts to access the RTMP stream.
Other tools that would complement that architecture include ffserver, which could take the RTMP stream from the RPi host and make it available to a variety of clients, such as a player in a webpage. A quick look shows ffserver is now deprecated, but there are analogous components.
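To make that concrete, here is a minimal sketch of the capture-encode-publish pipeline wrapped in Python so a supervisor can restart it; the resolution, bitrate, and RTMP URL are placeholder values, and the ingest server (e.g. nginx-rtmp) is assumed to exist:

    import subprocess

    RTMP_URL = "rtmp://example.com/live/picam"  # hypothetical ingest endpoint

    # raspivid emits raw H.264 to stdout; -t 0 means run until killed.
    raspivid = subprocess.Popen(
        ["raspivid", "-t", "0", "-w", "1280", "-h", "720",
         "-fps", "25", "-b", "2000000", "-o", "-"],
        stdout=subprocess.PIPE,
    )

    # ffmpeg repackages the already-encoded H.264 into FLV for RTMP;
    # -c:v copy avoids re-encoding on the Pi's CPU.
    subprocess.run(
        ["ffmpeg", "-re", "-i", "-", "-c:v", "copy", "-f", "flv", RTMP_URL],
        stdin=raspivid.stdout,
    )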

Headphones no sound

Today I connected my Bluetooth headphones (Ausdom M08) to my PC (via a Bluetooth dongle).
When I open Skype or Discord, I hear no sound from YouTube, the browser, and so on. Sound only works in Skype and Discord, and it's bad quality, not stereo.
I checked in the sound options, and I have "Ausdom M08 Stereo" and "Ausdom M08 Hands-Free". The first is the default device, and the second is the default communication device.
When I try to force Skype and Discord to use the default device for sound output, I hear no sound there either!
What I tried:
- Disabling Hands-Free Telephony, but then I lose the microphone.
- Uninstalling and reinstalling the drivers. Still the same.
- Disabling enhancements and exclusive control of the devices.
I have literally tried everything I found on the internet or could think of.
Nothing works.
So the question is: how do I make my PC output stereo sound to my headphones while still being able to use their microphone?
Thanks
There is a lot of missing information here: what PC dongle are you using? Which Windows version?
From what I can see of the Ausdom M08's tech specs, it supports a few basic profiles (HSP/HFP/A2DP/AVRCP). The A2DP profile lets you hear stereo audio ("Ausdom M08 Stereo"). HSP/HFP lets you use the microphone with Skype, but the audio is limited to an 8-16 kHz sampling rate ("Ausdom M08 Hands-Free"). You can't use both Bluetooth profiles at the same time.
So to answer your question: you can't have stereo audio from your headphones while also using their microphone.
There is a proprietary codec developed by Qualcomm called aptX, which may support microphone audio over A2DP. But you'll have to make sure both the transmitter and the receiver support this codec.

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need a service to process the audio data?
Thanks in advance.
Basically, voice commands over BLE require:
- some audio codec to reduce the required bandwidth (ADPCM and SBC are common; Opus is emerging),
- some method of streaming audio over BLE,
- decoding, and getting the audio stream from the BLE daemon into a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting the audio from the remote into an ALSA device.
So for audio streaming, you either:
- use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle them, plus a way to inject the audio stream into ALSA, most probably via a "loopback" audio device driver, or
- declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you then add a custom HID driver that processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations don't all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams ADPCM audio, chunked into HID reports. There is a special HID driver, "hid-atv-remote.c", in the Android Linux kernel that exposes an ALSA device in addition to the input device. Bluedroid knows nothing about the audio; all it does is forward HID reports from BLE to UHID.
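To give a feel for why such a lightweight codec suits a battery-powered remote, here is a generic IMA ADPCM decode loop: each 4-bit code expands into one 16-bit PCM sample using only table lookups, shifts, and adds. This is a sketch of the standard algorithm, not the actual hid-atv-remote.c code, and the exact ADPCM variant the Nexus Player uses is an assumption:

    # Standard IMA ADPCM tables (step sizes and index adjustments).
    INDEX_TABLE = [-1, -1, -1, -1, 2, 4, 6, 8,
                   -1, -1, -1, -1, 2, 4, 6, 8]
    STEP_TABLE = [
        7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
        34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143,
        157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449, 494, 544,
        598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411, 1552, 1707,
        1878, 2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871,
        5358, 5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
        15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
    ]

    def decode(nibbles, predictor=0, index=0):
        """Expand 4-bit ADPCM codes into 16-bit PCM samples."""
        samples = []
        for code in nibbles:
            step = STEP_TABLE[index]
            # Rebuild the difference the encoder quantized into 4 bits.
            diff = step >> 3
            if code & 1:
                diff += step >> 2
            if code & 2:
                diff += step >> 1
            if code & 4:
                diff += step
            if code & 8:
                diff = -diff
            predictor = max(-32768, min(32767, predictor + diff))
            index = max(0, min(88, index + INDEX_TABLE[code]))
            samples.append(predictor)
        return samples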

Two-way audio for software ip camera

I am trying to set up a Raspberry Pi box with a USB camera as an IP camera that can be viewed from a generic Android IP-camera monitor app. I've found some examples of how to get the video stream, and that works, but what I also need is two-way audio. This seems to come out of the box in standalone network cameras -- any ideas how that works? I want to set it up in a way that is compatible with typical network cameras, so that my cam can be used by any generic IP camera viewer app.
Well, modern cameras nowadays implement the ONVIF protocol. This protocol specifies that you have an RTSP server that streams audio and video from the camera to the PC, but it also mandates a so-called audio backchannel. It's a bit long to explain fully how it works; check it in the specs.
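In short: the client sends a normal RTSP DESCRIBE but adds a Require header with the ONVIF backchannel tag, and a conforming camera answers with an extra audio track in the SDP that the client may send audio to. A bare-bones sketch (the camera address and path are placeholders, and real cameras will usually also demand authentication):

    import socket

    CAMERA = ("192.168.1.64", 554)  # hypothetical camera address
    URL = "rtsp://%s:%d/stream1" % CAMERA

    request = ("DESCRIBE %s RTSP/1.0\r\n"
               "CSeq: 1\r\n"
               # This tag asks the camera to expose the client-to-camera
               # audio track (the backchannel) in its SDP answer.
               "Require: www.onvif.org/ver20/backchannel\r\n"
               "Accept: application/sdp\r\n\r\n") % URL

    with socket.create_connection(CAMERA) as s:
        s.sendall(request.encode())
        # Expect an SDP with an extra audio media section marked
        # "a=sendonly" for the backchannel (or 401 if auth is required).
        print(s.recv(4096).decode(errors="replace"))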
ONVIF is the standard, but you could also install an existing SIP client and make a video/audio VoIP call rather than implementing ONVIF - it depends on the long-term goals of your project.

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed pretty complex audio software for my client, with plugins for Winamp, Windows Media Player, and VST. Now the client is interested in some way to avoid maintaining this multitude of plugins; we have no way to support all the media players out there.
The client does not care about Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to always reliably intercept as many audio stream protocols as possible (well, except maybe ASIO; that's another story, I guess), pass that audio through our custom effects engine, and then route it back to the default audio device, whatever it is.
Now I am thinking, what options do I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it back to the real default device. Hmm, but what about the various DirectSound audio buffers: do I have to mix them myself, or is there a way to tell the Windows mixer to mix everything for me and hand me a single mixed audio stream?
It seems this custom driver will of course kill all hardware audio acceleration, but we can live with that if we warn our customers about the issue.
As I understand it, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know Vista/7 has a different audio stack from XP.
If I can do it using WDF, what kind of driver should I write: kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It installs a virtual sound card and lets you read and process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html
