GStreamer pipeline of 2 wav files onto single RTSP with 2 channels - audio

I'm trying to build a pipeline that takes 2 wav files and streams them as a single RTP stream with 2 channels, where each channel carries the audio of the corresponding wav file.
I also want to serve the RTP stream over RTSP, so that I can do authentication on the RTSP connection.
I've tried using this pipeline:
gst-launch-1.0 \
  interleave name=i ! audioconvert ! wavenc ! filesink location=file.wav \
  filesrc location=first_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x1" ! queue ! i.sink_0 \
  filesrc location=second_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x2" ! queue ! i.sink_1
This takes the 2 wav files and saves them as a new file.wav in the same directory.
The resulting file.wav combines both inputs and has 2 channels.
I've tried manipulating this pipeline to achieve what I've described, but the main issue is making the sink an RTSP endpoint whose RTP payload is split into 2 channels.
If anyone has a suggestion to solve this, that would be great!
Thanks :)

RTSP is not a streaming transport protocol but a session protocol, so it's completely separate from the actual streaming logic (which you can implement with a GStreamer pipeline). That's also why there is an rtpsink (which you can use to stream over RTP), but no rtspsink, for example.
To get a working RTSP server, you can use for example gst-rtsp-server; you can find multiple examples of how to set it up in its repo. Although the examples are all in C, GStreamer also provides bindings for other languages like Python, JavaScript, ...
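For the pipeline from the question, assuming you have built the test-launch example that ships with gst-rtsp-server (and that rtpL16pay is acceptable as an uncompressed audio payloader), a rough, untested sketch could look like this; note that test-launch expects the payloader to be named pay0:
./test-launch "( interleave name=i ! audioconvert ! rtpL16pay name=pay0 pt=96 \
    filesrc location=first_audio_file.wav ! decodebin ! audioconvert ! audio/x-raw,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_0 \
    filesrc location=second_audio_file.wav ! decodebin ! audioconvert ! audio/x-raw,channels=1,channel-mask=(bitmask)0x2 ! queue ! i.sink_1 )"
By default test-launch serves the stream at rtsp://127.0.0.1:8554/test; the test-auth example in the same repo shows how to attach a GstRTSPAuth with user credentials to the server for authentication.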

Related

Play an audio file through a specific speaker channel using gst-launch-1.0

I want to play an audio clip through only one specific speaker channel, for example only through the right channel, using the gst-launch-1.0 command.
How can I do this? I have 6 channels, so I am planning to play different audio through each of these channels, one by one.
You can use the audiochannelmix element to send audio to a single channel:
gst-launch-1.0 audiotestsrc ! audiochannelmix left-to-left=1 right-to-left=1 right-to-right=0 ! alsasink
Since the audio sink will be the same for both channels, you'll want to use an audio mixer, so that the right-only and left-only audio streams are routed to a single sink device:
gst-launch-1.0 \
audiotestsrc wave=1 ! audiochannelmix right-to-left=1 right-to-right=0 ! mix. \
audiotestsrc wave=5 ! audiochannelmix left-to-right=1 left-to-left=0 right-to-right=0 ! mix. \
audiomixer name=mix ! alsasink
If you have multiple audio devices you want to route your audio to, you'll need to modify the alsasink's device property so that it matches the desired audio sink.
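For example, assuming the desired device is ALSA card 1, device 0 (you can list the actual card/device numbers with aplay -l), the first command above would become:
gst-launch-1.0 audiotestsrc ! audiochannelmix left-to-left=1 right-to-left=1 right-to-right=0 ! alsasink device=hw:1,0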

How to fix image problems when streaming h.264 via gstreamer udpsink

Using gstreamer I want to stream images from several Logitech C920 webcams to a Janus media server in RTP/h.264 format. The webcams produce h.264 encoded video streams, so I can send the streams to a UDP sink without re-encoding data, only payloading it.
I'm using the gst-interpipe plugin to switch between the different webcams, so that the video stream received by Janus stays the same, but with images coming from whatever webcam I choose.
It works but I'm experiencing some problems with broken frames where the colors are gray and details are blurred away, mainly the first 5 - 10 seconds after I switch between webcam source streams. After that the images correct themselves.
(Screenshots: the broken first frames, and the corrected image after 5-10 seconds or more.)
First I thought it was a gst-interpipe specific problem, but I can reproduce it by simply setting up two pipelines - one sending a video stream to a UDP sink and one reading from a UDP source:
gst-launch-1.0 -v -e v4l2src device=/dev/video0 ! queue ! \
  video/x-h264,width=1280,height=720,framerate=30/1 ! \
  rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=8004

gst-launch-1.0 -v udpsrc port=8004 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! \
  rtph264depay ! decodebin ! videoconvert ! xvimagesink
NB: I'm not experiencing this problem if I send the video stream directly to an xvimagesink, i.e. when not using UDP streaming.
Am I missing some important parameters in my pipelines? Is this a buffering issue? I really have no idea how to correct this.
Any help is greatly appreciated.
Due to the temporal dependencies in a video stream you cannot just tune into the stream and expect it to be decodable immediately. Correct decoding can only start at a random-access-point frame (e.g. an I- or IDR-frame). Before that you will get image data that relies on video frames you haven't received, so it will look broken. Some decoders offer some control over what to do in these cases: the libav-based avdec_h264, for example, has an output-corrupt option (although I don't know how it behaves for "correct" frames that are merely missing reference frames), and decoders may have options to skip everything until a RAP frame occurs. This depends on your specific decoder implementation. Note, however, that with any of these options the initial delay before you see any image will increase.
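As a rough sketch of that idea (assuming avdec_h264 is available and that output-corrupt behaves as described; verify with gst-inspect-1.0 avdec_h264 on your version), the receiving pipeline could be:
gst-launch-1.0 -v udpsrc port=8004 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! \
  rtph264depay ! avdec_h264 output-corrupt=false ! videoconvert ! xvimagesink
On the sender side, keeping config-interval=1 (or -1, which re-sends SPS/PPS with every IDR frame) on rtph264pay lets a receiver that joins mid-stream start decoding at the next keyframe.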

Mix multiple audio streams into one playback-sound using Gstreamer

I want to use GStreamer to receive audio streams from multiple points on the same port.
That is, I want to stream audio from different nodes on the network to one device that listens to the incoming audio streams and mixes them before playback.
I know that I should use audiomixer or liveadder for such a task, but I can't get it to work: the mixer doesn't behave correctly, and when two audio streams arrive the output sound is very noisy and corrupted.
I used the following command:
gst-launch-1.0.exe -v udpsrc port=5001 caps="application/x-rtp" ! queue ! \
  rtppcmudepay ! mulawdec ! audiomixer name=mix mix. ! \
  audioconvert ! audioresample ! autoaudiosink
but it doesn't work.
Packets arriving on the same port cannot be demultiplexed the way you wrote it in your command. To receive multiple audio streams on the same port you have to separate them by SSRC, using the rtpssrcdemux element.
However, to receive multiple audio streams on multiple ports and mix them, you can use the liveadder element. An example that receives two audio streams from two ports and mixes them is as follows:
gst-launch-1.0 -v \
  udpsrc name=src5001 caps="application/x-rtp" port=5001 ! rtppcmudepay ! mulawdec ! \
  audioresample ! liveadder name=m_adder ! alsasink device=hw:0,0 \
  udpsrc name=src5002 caps="application/x-rtp" port=5002 ! rtppcmudepay ! mulawdec ! \
  audioresample ! m_adder.
First, you probably want to use audiomixer rather than liveadder, as the former guarantees synchronization of the different audio streams.
Then, about your mixing problem: you mention that the output sound is "noisy and corrupted", which makes me think of a problem with audio levels. Although audiomixer clips the output audio to the maximum allowed amplitude range, this can produce audio artefacts if your sources are too loud. You might therefore want to play with the volume property on both sources.
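For instance, a sketch along those lines that receives two PCMU streams on two ports (the port numbers and volume values are just placeholders), attenuates each branch with a volume element, and mixes them with audiomixer:
gst-launch-1.0 -v \
  udpsrc port=5001 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! \
  audioconvert ! audioresample ! volume volume=0.5 ! mix. \
  udpsrc port=5002 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! \
  audioconvert ! audioresample ! volume volume=0.5 ! mix. \
  audiomixer name=mix ! audioconvert ! autoaudiosink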

Sync audio and video when playing mp4 file with GStreamer

I need to sync video and audio when I play an mp4 file. How can I do that?
Here's my pipeline:
gst-launch-0.10 filesrc location=./big_buck_bunny.mp4 ! \
  qtdemux name=demux demux.video_00 ! queue ! TIViddec2 engineName=codecServer codecName=h264dec ! \
  ffmpegcolorspace ! tidisplaysink2 video-standard=pal display-output=composite \
  demux.audio_00 ! queue max-size-buffers=500 max-size-time=0 max-size-bytes=0 ! TIAuddec1 ! \
  audioconvert ! audioresample ! autoaudiosink
Have you tried playing the video on a regular desktop without using TI's elements? GStreamer should take care of synchronization for playback cases (and many others).
If the video is perfectly synchronized on a desktop then you have a bug on the elements specific to your target platform (TIViddec2 and tidisplaysink2). qtdemux should already put the expected timestamps on the buffers, so it is possible that TIViddec2 isn't copying those to its decoded buffers or tidisplaysink2 isn't respecting them. (The same might apply to the audio part)
I'd first check TIViddec2 by replacing everything after it with a fakesink and running gst-launch in verbose mode. The output from fakesink should show you the output timestamps; check whether those are consistent. You can also put a fakesink right after qtdemux to check the timestamps it produces, and see whether the decoders are respecting them.
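For example, a sketch of such a check (keeping the TI-specific decoder from the question; with -v, a fakesink with silent=false prints the timestamps of the buffers it receives):
gst-launch-0.10 -v filesrc location=./big_buck_bunny.mp4 ! \
  qtdemux name=demux demux.video_00 ! queue ! \
  TIViddec2 engineName=codecServer codecName=h264dec ! fakesink silent=false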
Actually, I had used the wrong video framerate.

gstreamer pipeline to mix two audio source

I want to create a pipeline in GStreamer that will have two audio sources, mix the audio with some scaling factor, and send the output data to alsasink. I have seen the example of "adder" but am not sure whether adder can be used with multiple filesrc elements.
I need your help in constructing this pipeline.
Of course it can!
Here's an example launch line to get you going:
gst-launch-1.0 \
  uridecodebin uri=file:///home/meh/Music/Fonky\ Family-\ L\'amour\ Du\ Risque-MGvSx-foo3E.wav ! adder name=m ! autoaudiosink \
  uridecodebin uri=file:///home/meh/Music/kendrick.wav ! audioconvert ! m.
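Since the question also mentions a scaling factor, a variant that attenuates each source with a volume element before the adder (file paths and volume values are placeholders) could look like:
gst-launch-1.0 \
  filesrc location=first.wav ! decodebin ! audioconvert ! audioresample ! volume volume=0.7 ! m. \
  filesrc location=second.wav ! decodebin ! audioconvert ! audioresample ! volume volume=0.3 ! m. \
  adder name=m ! audioconvert ! autoaudiosink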
Have a nice day :)
