Cannot play video stream in two instances of VLC player - libvlc

We are using libvlc in our Linux application to play an RTP MJPEG stream from an IP camera. We'd like to have two libvlc_media_players playing the video, one playing a full image in a GtkDrawingArea, and another playing a cropped/resized portion of the video (pseudo-zoom) in another GtkDrawingArea.
The problem is that only one of the media players works: whichever instance connects first seems to block the second from binding to the port.
Is there a way to replicate the traffic to two ports, or is there some other approach we should take?

You have at least two ways to achieve what you want, depending on the exact result you need.
While you can only call libvlc_new once, and so have only one libvlc instance running in your app at any time, you may create as many media players as you want from that instance. They will be independent though, so if you need exact sync, this isn't what you want to use.
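For illustration, here is a minimal sketch of that first approach, assuming libVLC 3.x, GTK 3 on X11, and already-realized widgets; the MRL and widget names are placeholders taken from the question's setup. Note the caveat in the comments about the RTP port conflict, which is why the clone/duplicate options below may suit you better.

    /* Minimal sketch: one libvlc instance, two independent media players,
     * each rendered into its own GtkDrawingArea (libVLC 3.x, X11 backend).
     * With an RTP source each player opens its own socket, so the same-port
     * conflict described in the question can still apply to this approach;
     * the clone/duplicate options discussed below use a single input. */
    #include <vlc/vlc.h>
    #include <gtk/gtk.h>
    #include <gdk/gdkx.h>

    static libvlc_media_player_t *make_player(libvlc_instance_t *vlc,
                                              const char *mrl,
                                              GtkWidget *drawing_area)
    {
        libvlc_media_t *media = libvlc_media_new_location(vlc, mrl);
        libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
        libvlc_media_release(media);   /* the player keeps its own reference */

        /* Embed the video output in the GTK widget. */
        GdkWindow *win = gtk_widget_get_window(drawing_area);
        libvlc_media_player_set_xwindow(player, gdk_x11_window_get_xid(win));

        libvlc_media_player_play(player);
        return player;
    }

    /* Usage (after the widgets are realized):
     *   libvlc_instance_t *vlc = libvlc_new(0, NULL);
     *   make_player(vlc, "rtp://@:5004", full_view_area);
     *   make_player(vlc, "rtp://@:5004", zoom_view_area);
     */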
One other way would be to duplicate the video output, using for example
Clone video filter (clone)
Duplicate your video to multiple windows and/or video output modules
--clone-count=<integer> Number of clones
Number of video windows in which to clone the video.
--clone-vout-list=<string> Video output modules
You can use specific video output modules for the clones. Use a comma-separated list of modules.
https://wiki.videolan.org/VLC_command-line_help
with libvlc_media_add_option (replace -- with : when using this function).
You can also use --sout with #duplicate: https://wiki.videolan.org/Documentation:Streaming_HowTo/Command_Line_Examples/
Depending on the approach you choose, VLC might create a new window by itself, which you may have to grab and embed in your app.
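For reference, here is a hedged sketch of attaching either option set to the media with libvlc_media_add_option. The MRL is a placeholder, and the option strings simply mirror the command-line help quoted above with the leading -- replaced by :; on newer VLC versions the clone filter may need to be selected with :video-splitter=clone rather than :video-filter=clone.

    /* Sketch: attach the duplication options to the media before creating
     * the player. Only one of the two approaches is needed. */
    libvlc_media_t *media = libvlc_media_new_location(vlc, "rtp://@:5004");

    /* Approach A: clone video filter, two windows from one decoded stream.
     * (On newer VLC this may be ":video-splitter=clone" instead.) */
    libvlc_media_add_option(media, ":video-filter=clone");
    libvlc_media_add_option(media, ":clone-count=2");

    /* Approach B: duplicate the stream with sout instead. */
    /* libvlc_media_add_option(media,
           ":sout=#duplicate{dst=display,dst=display}"); */

    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);
    libvlc_media_player_play(player);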

Related

How to Playback Multiple Audio Files Starting at Different Times

I have two audio files of different durations. I want to play them simultaneously, with the shorter file starting in the middle of the longer one.
I've enabled media synchronization with the app launch setting --sout-all --sout #display.
Swapping between the input-master and input-slave settings results in either the shorter file not playing or nothing playing back at all.
How can this be done in VLC?
As of the date of this response, offset playback or recording, where files start and end at different times, cannot be done with a single instance of VLC alone without custom add-ons (if any exist). This is not an intended use of the standard VLC application.
It is possible, though cumbersome, to get offset playback with two or more instances of VLC, starting and stopping each audio track manually.
ALTERNATIVES
Alternatively, there are numerous online audio editing tools, many of them free, that let you upload audio files as separate tracks for playback, editing, mixing, or recording, and then download the result; these are far more capable than an uncustomized VLC.
A web search for "audio editors online" will produce a lengthy list of options.

Sending a webcam input to Zoom using a recorded clip

I have an idea that I have been working on, but there are some technical details that I would love to understand before I proceed.
From what I understand, Linux communicates with the underlying hardware through device files under /dev. I was messing around with sending my webcam input to Zoom, and I found someone explaining that I need to create a virtual device and attach it to the output of a module called v4l2loopback.
My questions are
1- How does Zoom detect the webcams available for input? My /dev directory has two video files (/dev/video0 and /dev/video1), yet Zoom only detects one webcam. Is the webcam communication done through these video files or not? If yes, why does simply creating one not affect Zoom's input choices? If not, how does Zoom detect the input and read the webcam feed?
2- Can I create a virtual device and write a kernel module for it that feeds the input from a local file? I have written a lot of kernel modules, and I know they have read, write, and release methods. I want to parse the video whenever Zoom issues a read request. How should the video be encoded? Is it MP4, a raw format, or something else? How fast should I be sending input (in terms of kilobytes)? I think it is a function of my webcam's recording specs: if it is 1920x1080, each pixel is 3 bytes (RGB), and it records at 20 fps, I can calculate how many bytes are generated per second. But how does Zoom expect the input to be fed to it? Assuming it reads the stream in real time, it should be reading input every few milliseconds. How do I get access to such information?
Thank you in advance. This is a learning experiment, I am just trying to do something fun that I am motivated to do, while learning more about Linux-hardware communication. I am still a beginner, so please go easy on me.
Apparently, there are two types of /dev/video* files: one for metadata and the other for the actual stream from the webcam. Creating a virtual device of the stream type in /dev did result in Zoom recognizing it as an independent webcam, even without creating its metadata file. I finally achieved what I wanted, but I used the OBS Studio virtual camera feature that was added in update 26.0.1, and it is working perfectly so far.
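For completeness, here is a hedged sketch of the v4l2loopback route the question mentions, without writing a kernel module yourself: load the module (e.g. "modprobe v4l2loopback"), which creates a virtual capture device, then push raw frames into it from user space with plain write() calls after declaring the format. The device path, resolution, and pixel format below are assumptions; a real producer would decode its source clip into the frame buffer instead of the grey test pattern.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        const int width = 1280, height = 720;
        int fd = open("/dev/video2", O_WRONLY);      /* assumed v4l2loopback device */
        if (fd < 0)
            return 1;

        /* Tell the loopback device what we are going to feed it. */
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        fmt.fmt.pix.width = width;
        fmt.fmt.pix.height = height;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV; /* 2 bytes per pixel */
        fmt.fmt.pix.field = V4L2_FIELD_NONE;
        fmt.fmt.pix.bytesperline = width * 2;
        fmt.fmt.pix.sizeimage = width * height * 2;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
            return 1;

        /* Push frames at roughly 20 fps: here a grey YUYV test frame;
         * Zoom then sees the device as an ordinary webcam. */
        static unsigned char frame[1280 * 720 * 2];
        memset(frame, 0x80, sizeof(frame));
        for (;;) {
            write(fd, frame, sizeof(frame));
            usleep(1000000 / 20);
        }
    }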

How to simultaneously play two PCM audio streams in one DMA

I am working on a project on the STM32F769I-DISCO board, which runs a video game I'm writing. The graphics side is solved; where I have problems is playing two or more WAV files at the same time, and I don't know the correct way to combine them into a single stream.
What is the algorithm that I must follow for more than one stream?
PS: I already know how to play a single stream through DMA with a double buffer.
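In case a sketch helps: a common approach is to decode both WAVs to the same sample format and rate, then refill each half of the DMA double buffer by summing the two sources sample by sample with saturation (clamping instead of wrapping on overflow). The function below is only a sketch of that mixing step under those assumptions, not of the STM32 HAL calls; if a source has ended, treat its samples as zero, and if clipping is audible, scale each source down (e.g. by half) before summing.

    #include <stdint.h>
    #include <stddef.h>

    /* Mix two signed 16-bit PCM streams into one output buffer, saturating
     * on overflow. Call this from the DMA half-transfer and transfer-complete
     * callbacks to refill the half of the double buffer that just finished. */
    static void mix_pcm16(const int16_t *a, const int16_t *b,
                          int16_t *out, size_t samples)
    {
        for (size_t i = 0; i < samples; i++) {
            int32_t s = (int32_t)a[i] + (int32_t)b[i];
            if (s > INT16_MAX) s = INT16_MAX;
            if (s < INT16_MIN) s = INT16_MIN;
            out[i] = (int16_t)s;
        }
    }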

Audio stream mangement in Linux

I have a very complicated audio setup for a project. Here's what we have:
3 applications playing sound
2 applications recording sound
2 sound cards
I don't really have the code to any of these applications. All I want to do is monitor and control the audio streams. Here are a few examples of operations I'd like to perform while the applications are running:
Mute one of the incoming audio streams.
Have one of the incoming audio streams do a "solo" (be the only stream that can "talk").
Get a graph (about 30 seconds worth) of the audio that each stream produced.
Send one of the audio streams to soundcard #1, but all three audio streams to soundcard #2.
I would likely switch audio streams every 2 minutes or so with one of the operations listed above. A GUI would be preferred. I started looking at the sound systems in Linux and it gets extremely complex and I feel like there have been many new advances in the past few years. I see jack, pulseaudio, artsd, and several other packages. They all have some promise but where should I start? Is there something someone already built that can help?
PulseAudio should be able to let you do all that. You'll need to configure a custom pipeline for splitting the app's audio for task 4, and I'm not exactly certain how you'd accomplish task 3, but I do know that it's capable of all sorts of audio stream handling via its volume control (pavucontrol).
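As a small illustration of driving PulseAudio programmatically rather than through pavucontrol, here is a hedged sketch that mutes one playback stream (task 1) using the introspection API. The sink-input index is a placeholder; in practice you would look it up first with pa_context_get_sink_input_info_list() or read it off pavucontrol.

    #include <pulse/pulseaudio.h>
    #include <stdio.h>

    /* Hypothetical sink-input index to mute; look it up via
     * pa_context_get_sink_input_info_list() in a real program. */
    #define SINK_INPUT_INDEX 7

    static void mute_cb(pa_context *c, int success, void *userdata) {
        (void)c;
        printf(success ? "muted\n" : "mute failed\n");
        pa_mainloop_quit((pa_mainloop *)userdata, 0);
    }

    static void state_cb(pa_context *c, void *userdata) {
        pa_context_state_t state = pa_context_get_state(c);
        if (state == PA_CONTEXT_READY) {
            /* Mute one playback stream (task 1 from the question). */
            pa_operation *op = pa_context_set_sink_input_mute(
                c, SINK_INPUT_INDEX, 1, mute_cb, userdata);
            if (op) pa_operation_unref(op);
        } else if (state == PA_CONTEXT_FAILED || state == PA_CONTEXT_TERMINATED) {
            pa_mainloop_quit((pa_mainloop *)userdata, 1);
        }
    }

    int main(void) {
        pa_mainloop *ml = pa_mainloop_new();
        pa_context *ctx = pa_context_new(pa_mainloop_get_api(ml), "stream-control-demo");

        pa_context_set_state_callback(ctx, state_cb, ml);
        pa_context_connect(ctx, NULL, PA_CONTEXT_NOFLAGS, NULL);

        int ret = 0;
        pa_mainloop_run(ml, &ret);

        pa_context_unref(ctx);
        pa_mainloop_free(ml);
        return ret;
    }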
I use Jack, which is quite simple to install and use, even if it requires more effort to configure with Flash and Firefox...
You can try the latest Ubuntu Studio distribution and see if it solves your problem (for the GUI, look at "patchage").

How does youtube support starting playback from any part of the video?

Basically I'm trying to replicate YouTube's ability to begin video playback from any part of a hosted movie. So if you have a 60-minute video, a user could skip straight to the 30-minute mark without streaming the first 30 minutes. Does anyone have an idea how YouTube accomplishes this?
Well, the player opens the HTTP resource as normal. When you hit the seek bar, the player requests a different portion of the file.
It passes a header like this:
Range: bytes=10001-
and the server serves the resource from that byte offset onward. Depending on the codec, the player will need to read until it reaches a sync frame before playback can begin.
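Here is a small sketch of issuing that same range request with libcurl (the URL is a placeholder): the server answers 206 Partial Content and the body, written to stdout by default, starts at byte 10001.

    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/video.mp4");
        curl_easy_setopt(curl, CURLOPT_RANGE, "10001-");  /* Range: bytes=10001- */

        CURLcode rc = curl_easy_perform(curl);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }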
Video is a series of frames played at a frame rate. That said, there are rules about which frames can be decoded when.
Essentially, you have reference frames (called I-frames) and you have modification frames (called P-frames and B-frames)... It is generally true that a properly configured decoder can join a stream at any I-frame (that is, start decoding), but not at P- or B-frames... So, when the user drags the slider, you need to find the closest I-frame and decode from there...
This may of course be hidden under the hood of Flash for you, but that is what it will be doing...
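To make the "seek to the nearest I-frame" idea concrete, here is a hedged sketch using FFmpeg's libavformat (purely as an illustration, not what YouTube itself uses; the file name and target time are placeholders). AVSEEK_FLAG_BACKWARD asks for the nearest keyframe at or before the requested timestamp, which is where decoding can resume cleanly.

    #include <libavformat/avformat.h>

    int main(void)
    {
        AVFormatContext *ctx = NULL;
        if (avformat_open_input(&ctx, "movie.mp4", NULL, NULL) < 0)
            return 1;
        if (avformat_find_stream_info(ctx, NULL) < 0)
            return 1;

        /* Jump to the 30-minute mark; the timestamp is in AV_TIME_BASE
         * units because we pass -1 as the stream index. */
        int64_t target = 30LL * 60 * AV_TIME_BASE;
        if (av_seek_frame(ctx, -1, target, AVSEEK_FLAG_BACKWARD) < 0)
            return 1;

        /* av_read_frame() would now return packets starting at that keyframe. */
        avformat_close_input(&ctx);
        return 0;
    }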
I don't know how YouTube does it, but if you're looking to replicate the functionality, check out Annodex. It's an open standard that is based on Ogg Theora, but with an extra XML metadata stream.
Annodex allows you to have links to named sections within the video or temporal URIs to specific times in the video. Using libannodex, the server can seek to the relevant part of the video and start serving it from there.
If I were to guess, it would be some sort of selective data retrieval, like the Range header in HTTP. That might even be what they use. You can find more about it here.
