VLC Player drops audio on RTSP stream after a short time

I'm streaming audio and video from a HikVision security camera, and the audio drops shortly after I start VLC. It doesn't return unless I close and restart VLC. I'm using VLC 3.0.18 on Windows 10 with an NVIDIA RTX A4000.
I have tried H.264 and H.265 video encoding with MP3, MP2L2, and PCM audio encoding; PCM didn't work at all. The camera offers the following audio encoding options: G.722.1, G.711ulaw, G.711alaw, MP2L2, G.726, PCM, and MP3. I'm streaming a fairly low-res (640x360) feed. I haven't tried streaming a high-res feed, but I doubt that would help. My gigabit switch shows the camera transmitting at only about 300 Kbps on a 100 Mbps link, so a network bandwidth issue seems unlikely, especially since only two cameras, one PC, a phone, and a tablet are connected to this network, and I'm the only one using any of it.

Related

How to stream RTSP on the web?

We generate an RTSP stream (MP4 with the AAC codec for audio) on our server, and we need to send it to a web app and play it.
We could send it over a WebSocket and play it with Media Source Extensions, but those are not supported on iOS.
We could also use WebRTC with a media channel, but that supports only the Opus audio codec, and we cannot afford transcoding from AAC to Opus.
Do you have any idea how we can play RTSP data on iOS devices?
EDIT: we are aiming for low-latency playback (<1 s); HLS has 5 s+ latency.
You need to encode/package your stream as HLS on your server to send it to iOS clients. Look into FFmpeg streaming guides where the input is your RTSP stream and the output is HLS; iOS really only plays HLS.
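To make that concrete, here is a minimal sketch in C that shells out to the ffmpeg CLI to repackage an RTSP input as HLS. The RTSP URL and output path are hypothetical placeholders. Since the source audio is already AAC, both streams can be copied without transcoding; short segments reduce, but do not eliminate, HLS latency.

    /* Sketch: repackage an RTSP input as HLS by shelling out to ffmpeg.
     * The input URL and output path are placeholders. */
    #include <stdlib.h>

    int main(void) {
        return system(
            "ffmpeg -rtsp_transport tcp"        /* TCP is more robust than UDP */
            " -i rtsp://server.example/stream"  /* hypothetical RTSP source */
            " -c:v copy -c:a copy"              /* H.264 + AAC pass through untouched */
            " -f hls -hls_time 1"               /* 1 s segments for lower latency */
            " -hls_list_size 3 -hls_flags delete_segments"
            " /var/www/live/stream.m3u8");      /* served by any web server */
    }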

No audio data sent from PC to STM32F4 USB audio class device

I'm working on an audio project. We use an STM32F407 as a USB audio device to receive audio data from a PC and then send it out via the I2S module. We are using the STM32F4 Discovery kit and STM32CubeMX. After generating code by following this video, I changed nothing and flashed it to the kit; my PC identifies the STM Audio device, but no data is sent to my kit when music plays, except MuteCMD. My questions are:
I don't know which function is the callback when data streams from the PC to the kit.
Why does the PC identify my kit as an audio output device, yet the volume-control callback isn't called when I change the volume on the PC, and no music data is sent to my device? The only callback that fires is the mute-control one, when I mute the PC.
This is my configuration in STM32CubeMX:
[Screenshots: pinout configuration; USB device configuration (three views); the PC identifying the AUDIO device; choosing the PC's audio output device; failure to play a test tone]
You should set USBD_AUDIO_FREQ to 22050 (or 44100, or 11025). Your value is 22100, and it seems Windows or the built-in audio drivers can't use that frequency.
I had the exact same problem.
My project was generated from STM32Cube.
Windows recognized the F7-DISCO board as a sound card but failed to play test sounds.
I changed USBD_AUDIO_FREQ to 48000 and the PID to 0x5730 (22320 in decimal).
After that, everything worked fine.
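As a sketch of what that change looks like: the define names below follow the STM32Cube USB Device library, but exact names and file locations vary between Cube versions, and the reason for the PID change is my assumption; the answer doesn't give one.

    /* In usbd_conf.h: use a sample rate the Windows USB audio driver accepts
     * (48000 here; 22100 is non-standard and gets rejected). */
    #define USBD_AUDIO_FREQ  48000U

    /* In usbd_desc.c: a new PID, presumably so Windows re-enumerates the
     * device instead of reusing descriptors cached for the old PID. */
    #define USBD_PID         0x5730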

Is a sound device necessary for an audio streaming server?

My project is to stream audio online with my PC as the server.
I have an HP ProLiant ML110 G7 server, which has no integrated sound device on the motherboard, nor any other kind of sound device.
I am currently running Ubuntu 16.04 on it, and I cannot configure Icecast and Ices2/Darkice properly, yet I could do so, following the same instructions, on another laptop with the same OS and version that has an integrated sound device.
Is an integrated sound device needed to run an audio streaming server?
Thank you.
Icecast itself just passes data on through; it requires no sound device at all.
Your source client, such as IceS, can read audio from a sound device or from files. If you have no sound device, you'll need to use some other audio source, of course.
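For illustration, here is a minimal source-client sketch using libshout (the library IceS is built on), assuming libshout 2.x and an Icecast server on localhost; the password, mount, and filename are hypothetical placeholders. No sound device is touched anywhere.

    /* Minimal sketch, assuming libshout 2.x: push a prerecorded Ogg file to
     * an Icecast mount. Host, password, mount, and filename are placeholders. */
    #include <shout/shout.h>
    #include <stdio.h>

    int main(void) {
        shout_init();
        shout_t *s = shout_new();
        shout_set_host(s, "localhost");
        shout_set_port(s, 8000);
        shout_set_password(s, "hackme");        /* source password from icecast.xml */
        shout_set_mount(s, "/stream.ogg");
        shout_set_format(s, SHOUT_FORMAT_OGG);  /* sending a file, not a capture */

        if (shout_open(s) != SHOUTERR_SUCCESS) {
            fprintf(stderr, "connect failed: %s\n", shout_get_error(s));
            return 1;
        }

        FILE *f = fopen("music.ogg", "rb");     /* any Ogg file on disk */
        unsigned char buf[4096];
        size_t n;
        while (f && (n = fread(buf, 1, sizeof buf, f)) > 0) {
            shout_send(s, buf, n);              /* pass encoded bytes straight through */
            shout_sync(s);                      /* pace output to the stream bitrate */
        }

        if (f) fclose(f);
        shout_close(s);
        shout_free(s);
        shout_shutdown();
        return 0;
    }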

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) on Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process audio data?
Thanks in advance.
Basically, voice commands over BLE require:
an audio codec to reduce the required bandwidth (ADPCM and SBC are common; Opus is emerging),
a method for streaming audio over BLE,
decoding, and getting the audio stream from the BLE daemon into a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to do is get audio from the remote into an ALSA device.
So for audio streaming, you either:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle them, plus a way to inject the audio stream into ALSA, most probably via a "loop" audio device driver, or
declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you then add a custom HID driver that processes those reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams ADPCM audio, chunked into HID reports. A special HID driver in the Android Linux kernel, hid-atv-remote.c, exposes an ALSA device in addition to the input device. Bluedroid knows nothing about the audio; all it does is forward HID reports from BLE to UHID.
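To give a feel for the codec stage, below is the textbook IMA ADPCM decode step in C. This is illustrative only: the exact ADPCM variant and HID report framing the Nexus Player remote uses are not detailed here, but a driver like hid-atv-remote.c has to do something of this shape for every 4-bit code it pulls out of a report.

    /* Illustrative IMA ADPCM decode step: one 4-bit code in, one 16-bit PCM
     * sample out, with predictor/step state carried between calls. */
    #include <stdint.h>

    static const int8_t index_table[16] = {
        -1, -1, -1, -1, 2, 4, 6, 8,
        -1, -1, -1, -1, 2, 4, 6, 8,
    };

    static const int16_t step_table[89] = {
        7, 8, 9, 10, 11, 12, 13, 14,
        16, 17, 19, 21, 23, 25, 28, 31,
        34, 37, 41, 45, 50, 55, 60, 66,
        73, 80, 88, 97, 107, 118, 130, 143,
        157, 173, 190, 209, 230, 253, 279, 307,
        337, 371, 408, 449, 494, 544, 598, 658,
        724, 796, 876, 963, 1060, 1166, 1282, 1411,
        1552, 1707, 1878, 2066, 2272, 2499, 2749, 3024,
        3327, 3660, 4026, 4428, 4871, 5358, 5894, 6484,
        7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
        15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794,
        32767,
    };

    /* Decode one nibble; *predictor and *step_index persist across calls. */
    int16_t ima_adpcm_decode(uint8_t code, int16_t *predictor, int *step_index)
    {
        int step = step_table[*step_index];
        int diff = step >> 3;                 /* reconstruct delta from the nibble */
        if (code & 1) diff += step >> 2;
        if (code & 2) diff += step >> 1;
        if (code & 4) diff += step;
        if (code & 8) diff = -diff;           /* top bit is the sign */

        int sample = *predictor + diff;       /* clamp to 16-bit PCM range */
        if (sample > 32767)  sample = 32767;
        if (sample < -32768) sample = -32768;
        *predictor = (int16_t)sample;

        *step_index += index_table[code];     /* adapt step size for next nibble */
        if (*step_index < 0)  *step_index = 0;
        if (*step_index > 88) *step_index = 88;
        return *predictor;
    }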

RTSP to Flash Media Server

I have a live RTSP H.264 720p 8 Mbps stream that I need to get to my streaming provider's Flash Media Server. The only way I currently have to stream to the provider's FMS is through Adobe's Flash Media Live Encoder, which works well enough with our SD analog capture card for live streaming.
I can open and re-stream the RTSP stream with VLC; however, VLC will not stream RTMP to the Flash Media Server.
Any thoughts on how to do this without going through the analog hole?
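One software-only route worth noting (my sketch, not from the thread): ffmpeg can ingest RTSP and publish RTMP directly, so nothing analog is needed. Both URLs below are hypothetical placeholders. The video is copied as-is since it is already H.264; the audio is re-encoded to AAC because the FLV container behind RTMP carries only a few audio codecs.

    /* Sketch: pull RTSP and push RTMP to an FMS ingest point via ffmpeg. */
    #include <stdlib.h>

    int main(void) {
        return system(
            "ffmpeg -rtsp_transport tcp"
            " -i rtsp://encoder.example/live"      /* the 720p H.264 source */
            " -c:v copy"                           /* already H.264: no re-encode */
            " -c:a aac"                            /* FLV/RTMP-compatible audio */
            " -f flv"                              /* RTMP uses the FLV muxer */
            " rtmp://fms.example/live/mystream");  /* provider's ingest URL */
    }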
