HDMI ports for audio output

I have a monitor with no audio output.
Can I use an HDMI splitter to connect a Fire Stick to one output and take audio from the other?

Related

Popping noise when piping audio into a virtual mic (Debian)

I have loaded a pipe source with the command
pactl load-module module-pipe-source source_name=VirtualMic file=/tmp/virtualmic format=wav rate=44100 channels=2
I want to use SoX to play a sound file into it. I am doing this with
sox "example.wav" -t wav - > /tmp/virtualmic
I have also tried piping the audio with ffmpeg, with the same result. To confirm that the problem is not my speakers or the file, I played it in audio programs such as VLC, which produces no popping sound.
The number of channels and the sample rate are identical on both ends, and other than the pop the audio plays normally.
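One plausible cause, offered as a guess rather than a confirmed diagnosis: module-pipe-source reads raw samples from the FIFO, so the 44-byte WAV header that sox writes at the start of the stream would be rendered as audio and could sound like a pop. A minimal sketch of a header-free variant, assuming 16-bit signed stereo at 44100 Hz:

# Load the pipe source with a raw sample format instead of wav:
pactl load-module module-pipe-source source_name=VirtualMic file=/tmp/virtualmic format=s16le rate=44100 channels=2
# Have sox emit headerless raw PCM matching those parameters exactly:
sox "example.wav" -t raw -b 16 -e signed -r 44100 -c 2 - > /tmp/virtualmic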

How do I create two separate audio streams using the same virtual audio driver?

I'm currently trying to develop a Node.js application on macOS that routes audio from a camera's RTSP stream to a virtual audio driver (SoundPusher) to be played through the Zoom mic as one stream, and also grabs audio from the Zoom output through the virtual audio driver into an output RTSP stream as a separate stream:
1. Camera RTSP/audio element (SoundPusher speaker) -> Zoom input (SoundPusher mic)
2. Zoom output (SoundPusher speaker) -> pipe audio to output RTSP from SoundPusher mic
1. The implementation I have right now pipes the audio from the camera RTSP stream to an HTTP server with ffmpeg. On the client side, I create an audio element that streams the audio from the HTTP server over HLS. I then call setSinkId on the audio element to direct the audio to the SoundPusher input, and set my microphone in Zoom to the SoundPusher output.
// Create an audio element for the HLS stream and route its output
// to the virtual driver's speaker side via setSinkId:
const audio = document.createElement('audio') as any;
audio.src = 'http://localhost:9999';
audio.setAttribute('type', 'audio/mpeg');
await audio.setSinkId(audioDriverSpeakerId);
audio.play();
2. I also have the SoundPusher input set as the output for my audio in Zoom, so I can obtain audio from Zoom and then pipe it to the output RTSP stream from the SoundPusher output.
ffmpeg -f avfoundation -i "none:SoundPusher Audio" -c:a aac -f rtsp rtsp://127.0.0.1:5554/audio
The problem is that the audio from my camera is being mixed in with the audio from Zoom in the output RTSP stream, but I expect to hear only the audio from Zoom. Does anyone know a way to separate the two streams while using the same audio driver? I'd like to route the audio so that the stream from the audio element to Zoom stays separate from the stream from Zoom to the output RTSP.
I'm very new to audio streaming, so any advice would be appreciated.
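One note, sketched rather than verified: a single stereo virtual device mixes everything that is sent to it, so keeping the two paths apart generally needs either two separate virtual devices or distinct channel pairs of one multichannel device. Assuming a second loopback device existed (the name "Loopback B" below is a placeholder, not a real SoundPusher device), the Zoom-to-RTSP capture would target only that device:

# Zoom's speaker is set to the second device; ffmpeg captures only it, so
# the camera audio feeding SoundPusher never reaches this stream:
ffmpeg -f avfoundation -i "none:Loopback B" -c:a aac -f rtsp rtsp://127.0.0.1:5554/audio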

ALSA Card for ReSpeaker 4-Mic Setup

During the installation, we are supposed to check the sound card by running arecord -L and obtain output like the following:
pi@raspberrypi:~ $ arecord -L
null
    Discard all samples (playback) or generate zero samples (capture)
jack
    JACK Audio Connection Kit
pulse
    PulseAudio Sound Server
default
    Playback/recording through the PulseAudio sound server
ac108
sysdefault:CARD=seeed4micvoicec
    seeed-4mic-voicecard,
    Default Audio Device
dmix:CARD=seeed4micvoicec,DEV=0
    seeed-4mic-voicecard,
    Direct sample mixing device
dsnoop:CARD=seeed4micvoicec,DEV=0
    seeed-4mic-voicecard,
    Direct sample snooping device
hw:CARD=seeed4micvoicec,DEV=0
    seeed-4mic-voicecard,
    Direct hardware device without any conversions
plughw:CARD=seeed4micvoicec,DEV=0
    seeed-4mic-voicecard,
    Hardware device with all software conversions
usbstream:CARD=seeed4micvoicec
    seeed-4mic-voicecard
    USB Stream Output
usbstream:CARD=ALSA
    bcm2835 ALSA
    USB Stream Output
However, the output I actually received is shown below:
[Screenshot of output]
It basically shows that I don't have the ALSA sound card, so I can't move on to the sound localization step. Please advise how I can move forward, thanks!
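A few diagnostic steps that are commonly suggested for this card, sketched under the assumption that the standard seeed-voicecard installer was used (the module and overlay names below come from that project):

arecord -l                      # is any capture hardware visible to ALSA at all?
lsmod | grep ac108              # is the AC108 codec driver loaded?
grep -i seeed /boot/config.txt  # did the installer add its device-tree overlay?
dmesg | grep -iE 'ac108|seeed'  # kernel messages from the driver probe
# If nothing shows up, re-run the installer and reboot; the overlay only
# takes effect after a reboot.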

How to play a video with multiple audio streams with mpv

I am trying to play a movie in two languages:
Audio 1 to the speakers
Audio 2 to a headset
mpv --lavfi-complex="[aid1] [aid2] amix [ao]" "input.mp4"
Plays the video and mixes audio 1 & audio 2; output goes to the standard device.
mpv "input.mp4" --vid=1 --aid=1 --audio-device="wasapi/{d3178b30-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
Plays the video with audio 1.
mpv "input.mp4" --aid=2 --no-video --audio-device="wasapi/{06a44940-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
Plays audio 2 only.
How can I combine these?
I have successfully done this by creating a 5.1 stream with one language mixed down to [FL] and [FR] and the other to [BL] and [BR].
I then sent that either directly to multi-channel hardware (via ALSA) or through JACK to be more flexible with the channel routing.
This might be possible using mpv's --lavfi-complex, but I always prepared the 5.1 stream using ffmpeg.
https://trac.ffmpeg.org/wiki/AudioChannelManipulation
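A sketch of that preparation step, assuming both language tracks are stereo and are streams 0:a:0 and 0:a:1 of input.mp4, and using a quad layout (FL/FR/BL/BR only) for simplicity, in the spirit of the page linked above:

# Language A lands on the front pair, language B on the back pair:
ffmpeg -i input.mp4 -filter_complex "[0:a:0][0:a:1]join=inputs=2:channel_layout=quad:map=0.FL-FL|0.FR-FR|1.FL-BL|1.FR-BR[a]" -map 0:v -map "[a]" -c:v copy -c:a pcm_s16le out.mkv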

How can I stream audio without latency?

I've been playing around with ffmpeg for sending audio to an endpoint and listening to it. This is the command I've used:
ffmpeg -f pulse -i 1 -f pulse 1
Here the two "1"s are the indices of my mic and output device as reported by pacmd list-sources and pacmd list-sinks.
This command lets me speak into my microphone and hear it back through my speakers, but there is noticeable latency. Other flags such as -tune zerolatency don't help.
I know that low-latency audio streaming is possible on Linux, since apps like Discord manage it. Why does my command have latency, and what protocol, program, or library should I use to transmit an audio stream?
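Two common mitigations, sketched rather than tested here: ffmpeg buffers its PulseAudio input in fairly large fragments, and for a purely local mic-to-speaker loop PulseAudio's own loopback module avoids ffmpeg entirely:

# Option 1: let PulseAudio do the loop itself; latency_msec is a
# best-effort target, not a guarantee:
pactl load-module module-loopback latency_msec=10
# Option 2: keep ffmpeg but shrink its capture fragment (value in bytes,
# tune for your device):
ffmpeg -f pulse -fragment_size 1024 -i 1 -f pulse 1

For streaming across a network, low-latency apps like Discord typically send Opus over RTP/UDP rather than piping through a generic transcode.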
