I want to create a GStreamer pipeline that has two audio sources, mixes the audio with some scaling factor, and sends the output to alsasink. I have seen the "adder" example but am not sure whether adder can be used with multiple filesrc elements.
I need your help in constructing this pipeline.
Of course it can!
Here's an example launch line to get you going:
gst-launch-1.0 uridecodebin uri=file:///home/meh/Music/Fonky\ Family-\ L\'amour\ Du\ Risque-MGvSx-foo3E.wav ! audioconvert ! adder name=m ! autoaudiosink uridecodebin uri=file:///home/meh/Music/kendrick.wav ! audioconvert ! m.
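To add the scaling factors you asked about, one option is a volume element in each branch before the mixer. A sketch (the file names are placeholders, and audiomixer is used here instead of adder because it keeps the streams synchronized):

```shell
# first.wav / second.wav are placeholder file names; adjust to your own media.
# volume < 1.0 scales each source down before mixing; the mix goes to alsasink.
gst-launch-1.0 \
  audiomixer name=mix ! audioconvert ! audioresample ! alsasink \
  filesrc location=first.wav  ! decodebin ! audioconvert ! volume volume=0.7 ! mix. \
  filesrc location=second.wav ! decodebin ! audioconvert ! volume volume=0.3 ! mix.
```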
Have a nice day :)
I'm trying to build a pipeline that I'll feed 2 wav files and that streams them as a single RTP stream with 2 channels, each channel carrying the corresponding wav file.
I also want to send the RTP over RTSP, in order to do authentication on the RTSP connection.
I've tried using this pipeline
gst-launch-1.0 interleave name=i ! audioconvert ! wavenc ! filesink location=file.wav filesrc location=first_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x1" ! queue ! i.sink_0 filesrc location=second_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x2" ! queue ! i.sink_1
This takes the 2 wav files and saves them as a new file.wav in the same directory.
The resulting file.wav is the mix of the two, and it has 2 channels.
I've tried manipulating this pipeline to achieve what I described, but the main issue is making the sink an RTSP stream with the RTP split into 2 channels.
If anyone has a suggestion to solve this, that would be great!
Thanks :)
RTSP is not a streaming transport protocol but a session protocol, so it's completely different from the actual streaming logic (which you can implement with a GStreamer pipeline). That's also why there is an rtpsink (which you can use to stream RTP), but no rtspsink, for example.
To get a working RTSP server you can use gst-rtsp-server, for which you can find multiple examples in their repo, like this small example. Although the examples are all in C, GStreamer also provides bindings to other languages like Python, JavaScript, ...
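As a sketch of how the two could be combined: gst-rtsp-server ships a test-launch example (you build it from the repo) that accepts a gst-launch-style description, where the payloader must be named pay0. Reusing the interleave pipeline from the question with an rtpL16pay payloader (file names are placeholders, and the S16BE caps are what rtpL16pay expects), something like this could serve the two files as one 2-channel RTP stream over RTSP:

```shell
# test-launch is built from gst-rtsp-server's examples directory.
# first.wav / second.wav are placeholder file names.
./test-launch "( interleave name=i ! audioconvert ! audio/x-raw,format=S16BE ! \
    rtpL16pay name=pay0 pt=96 \
    filesrc location=first.wav ! decodebin ! audioconvert ! \
    audio/x-raw,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_0 \
    filesrc location=second.wav ! decodebin ! audioconvert ! \
    audio/x-raw,channels=1,channel-mask=(bitmask)0x2 ! queue ! i.sink_1 )"
# The stream is then reachable at rtsp://127.0.0.1:8554/test
```

Authentication can then be added on the server side via gst-rtsp-server's auth API rather than in the pipeline itself.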
I currently know how to blend two videos into one; it was very hard to learn how to do this (more than 30 continuous hours of research). I've used the following pipeline:
gst-launch-1.0 filesrc location=candidate.webm ! decodebin ! videoscale ! video/x-raw,width=680,height=480 ! compositor name=comp sink_1::xpos=453 sink_1::ypos=340 ! vp9enc ! webmmux ! filesink location=out.webm filesrc location=interviewer.webm ! decodebin ! videoscale ! video/x-raw,width=200,height=140 ! comp.
In this case I'm blending the two videos so that the second of them is in the bottom-right corner and the first one is the "background". Does somebody know how I can get both audios into the same file too? I hope someone finds my pipeline useful.
The audiomixer element does take multiple audio streams and mixes them into a single audio stream.
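A sketch of how that could be combined with the question's pipeline: decodebin exposes both the audio and the video pad of each file, so each branch can feed the compositor and an audiomixer, with the mixed audio encoded (Vorbis here, which WebM accepts) into the same mux. Queues keep the demuxed branches from stalling each other; this is untested and may need tuning:

```shell
# Same composition as the question, plus a mixed Vorbis audio track in the same WebM.
gst-launch-1.0 \
  compositor name=comp sink_1::xpos=453 sink_1::ypos=340 ! videoconvert ! vp9enc ! \
    webmmux name=mux ! filesink location=out.webm \
  audiomixer name=mix ! audioconvert ! vorbisenc ! mux. \
  filesrc location=candidate.webm ! decodebin name=d1 \
  d1. ! queue ! videoscale ! video/x-raw,width=680,height=480 ! comp. \
  d1. ! queue ! audioconvert ! audioresample ! mix. \
  filesrc location=interviewer.webm ! decodebin name=d2 \
  d2. ! queue ! videoscale ! video/x-raw,width=200,height=140 ! comp. \
  d2. ! queue ! audioconvert ! audioresample ! mix.
```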
I want to use Gstreamer to receive audio streams from multiple points on the same port.
That is, I want to stream audio from different nodes on the network to one device that listens for incoming audio streams and mixes them before playback.
I know that I should use audiomixer or liveadder for such a task.
But I can't get it to work: the mixer doesn't behave correctly, and when two audio streams arrive, the output sound is noisy and corrupted.
I used the following command :
gst-launch-1.0.exe -v udpsrc port=5001 caps="application/x-rtp" !
queue ! rtppcmudepay ! mulawdec ! audiomixer name=mix mix. !
audioconvert ! audioresample ! autoaudiosink
but it doesn't work.
Packets arriving on the same port cannot be demultiplexed the way your command assumes. To receive multiple audio streams on the same port, you need distinct SSRCs and the rtpssrcdemux element.
However, to receive multiple audio streams on multiple ports and mix them, you can use the liveadder element. An example that receives two audio streams from two ports and mixes them is as follows:
gst-launch-1.0 -v udpsrc name=src5001 caps="application/x-rtp"
port=5001 ! rtppcmudepay ! mulawdec ! audioresample ! liveadder
name=m_adder ! alsasink device=hw:0,0 udpsrc name=src5002
caps="application/x-rtp" port=5002 ! rtppcmudepay ! mulawdec !
audioresample ! m_adder.
First, you probably want to use audiomixer over liveadder, as the former guarantees synchronization of the different audio streams.
Then, about your mixing problem: you mention that the output sound is "noisy and corrupted", which makes me think of a problem with audio levels. Though audiomixer clips the output audio to the maximum allowed amplitude range, sources that are too loud can still produce audible artefacts. Thus, you might want to play with the volume property on both sources. See here and there for more information.
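As a sketch of that level-taming idea applied to your two-port setup (port numbers are placeholders), a volume element per branch attenuates each source before audiomixer:

```shell
# Each branch is scaled to half amplitude so the mixed sum stays within range.
gst-launch-1.0 -v \
  audiomixer name=mix ! audioconvert ! audioresample ! autoaudiosink \
  udpsrc port=5001 caps="application/x-rtp" ! rtppcmudepay ! mulawdec ! \
    audioconvert ! volume volume=0.5 ! mix. \
  udpsrc port=5002 caps="application/x-rtp" ! rtppcmudepay ! mulawdec ! \
    audioconvert ! volume volume=0.5 ! mix.
```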
I have the following GStreamer pipeline, which takes a video stream from UDP and renders an overlay with a custom plugin (myoverlayrendererplugin), which is derived from the GstPushSrc class:
gst-launch-1.0 \
videomixer name=mix ! videoconvert ! \
video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! xvimagesink \
udpsrc do-timestamp=false port=8554 ! decodebin ! \
videoscale ! video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! mix.\
myoverlayrendererplugin ! video/x-raw,format=ARGB,width=800,height=480 ! mix.
Individually, both streams are shown correctly. When mixing, the pipeline only starts playing (i.e. the final XWindow with the composed image opens) when both streams are active, i.e. when the udpsrc reads data. When I don't use the videoscale plugin, the output appears even when no data is coming, but the input is always shown on top of the rendered image instead of below it.
How can I set the videomixer to not wait for incoming data from all streams? Or is it possible to get the videoscale plugin to send empty frames when no input data is available? Or do I have to write an application that regularly checks for available data on the udpsrc and dynamically links it to the mixer when data arrives?
I also tried to add a queue/queue2 between the videoscale and the mixer to the pipeline but that didn't help either. I'm using GStreamer 1.4.5 on Ubuntu 15.04. Later everything will be ported to a custom embedded platform.
Thank you for your support and Best Regards
Stefan
I need to sync video and audio when I play mp4 file. How can I do that?
Here's my pipeline:
gst-launch-0.10 filesrc location=./big_buck_bunny.mp4 ! \
qtdemux name=demux demux.video_00 ! queue ! TIViddec2 engineName=codecServer codecName=h264dec ! ffmpegcolorspace ! tidisplaysink2 video-standard=pal display-output=composite \
demux.audio_00 ! queue max-size-buffers=500 max-size-time=0 max-size-bytes=0 ! TIAuddec1 ! audioconvert ! audioresample ! autoaudiosink
Have you tried playing the video on a regular desktop without using TI's elements? GStreamer should take care of synchronization for playback cases (and many others).
If the video is perfectly synchronized on a desktop then you have a bug on the elements specific to your target platform (TIViddec2 and tidisplaysink2). qtdemux should already put the expected timestamps on the buffers, so it is possible that TIViddec2 isn't copying those to its decoded buffers or tidisplaysink2 isn't respecting them. (The same might apply to the audio part)
I'd first check TIViddec2 by replacing the rest of the pipeline after it with a fakesink and running gst-launch in verbose mode. The output from fakesink should show you the output timestamps; check that they are consistent. You can also put a fakesink right after qtdemux to check the timestamps it produces and see whether the decoders respect them.
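Concretely, that first check might look like this (a sketch reusing the question's pipeline; the TI elements depend on your platform's SDK being installed):

```shell
# Inspect the timestamps TIViddec2 produces: fakesink replaces the display branch,
# and -v plus silent=false makes fakesink log each buffer it receives.
gst-launch-0.10 -v filesrc location=./big_buck_bunny.mp4 ! \
  qtdemux name=demux demux.video_00 ! queue ! \
  TIViddec2 engineName=codecServer codecName=h264dec ! \
  fakesink silent=false
```

If the timestamps here look sane, repeat the same trick with fakesink directly after qtdemux to compare.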
It turned out I had used the wrong video framerate.