I have the following GStreamer pipeline, which takes a video stream from UDP and renders an overlay with a custom plugin (myoverlayrendererplugin) derived from the GstPushSrc class:
gst-launch-1.0 \
videomixer name=mix ! videoconvert ! \
video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! xvimagesink \
udpsrc do-timestamp=false port=8554 ! decodebin ! \
videoscale ! video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! mix.\
myoverlayrendererplugin ! video/x-raw,format=ARGB,width=800,height=480 ! mix.
Individually, both streams are shown correctly. When mixing, the pipeline only starts playing (i.e. the final XWindow with the composed image opens) when both streams are active, i.e. when udpsrc is receiving data. When I drop the videoscale element, the output is activated even when no data is arriving, but the input is always shown on top of the rendered image instead of below it.
How can I set the videomixer to not wait for incoming data from all streams? Or is it possible to get the videoscale plugin to send empty frames when no input data is available? Or do I have to write an application that regularly checks the available data on the udpsrc and dynamically links it to the mixer if data is available?
I also tried to add a queue/queue2 between the videoscale and the mixer to the pipeline but that didn't help either. I'm using GStreamer 1.4.5 on Ubuntu 15.04. Later everything will be ported to a custom embedded platform.
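Regarding the stacking order: videomixer sink pads expose a zorder property (alongside xpos, ypos, and alpha), and gst-launch can set pad properties with the name::property syntax. A sketch, assuming that syntax is available in your GStreamer version (higher zorder is drawn on top):

```shell
# Sketch: pin the UDP branch below the overlay by assigning explicit
# z-order values to the mixer's sink pads.
gst-launch-1.0 \
  videomixer name=mix sink_0::zorder=1 sink_1::zorder=2 ! videoconvert ! \
  video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! xvimagesink \
  udpsrc do-timestamp=false port=8554 ! decodebin ! \
  videoscale ! video/x-raw,width=800,height=480,pixel-aspect-ratio=1/1 ! mix.sink_0 \
  myoverlayrendererplugin ! video/x-raw,format=ARGB,width=800,height=480 ! mix.sink_1
```

This only addresses the layering; it does not change the mixer waiting for data on all connected pads.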
Thank you for your support and Best Regards
Stefan
So, I want to receive a video stream encoded as RTP/H.264, decode it, and write the raw data to shared memory so that another application can read and display it.
To do the decoding, I use gstreamer, with the following pipeline:
gst-launch-1.0 udpsrc port=5004 caps="application/x-rtp,media=video,encoding-name=H264,clock-rate=90000,payload=96" ! rtpjitterbuffer latency=0 ! rtph264depay ! h264parse ! omxh264dec use-dmabuf=false ! video/x-raw,format=I420 ! shmsink socket-path=/tmp/gst0 sync=false wait-for-connection=true
I use the following pipeline to get the data from shared memory and display on screen :
gst-launch-1.0 shmsrc socket-path=/tmp/gst0 ! "video/x-raw, format=I420, color-matrix=sdtv, chroma-site=mpeg2, width=(int)1920, height=(int)720, framerate=(fraction)30/1" ! queue ! videoconvert ! fbdevsink
I'm at an impasse and I don't really understand what the problem is. When I use the omxh264dec decoder as-is, I get a color-shifted image but the data stream arrives without any problem. When I run it with DMA deactivated (use-dmabuf=false), the images are decoded and rendered correctly, but the stream stalls and I receive only the first frames.
Has anyone experienced this kind of problem ?
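One way to narrow this down is to take the decoder out of the equation and test the shared-memory leg on its own. This is only a debugging sketch; the videotestsrc caps and the shm-size value are assumptions chosen to match the receiver's caps:

```shell
# Sender side: push synthetic I420 frames through the same shm socket,
# using the same resolution and framerate the receiver expects.
gst-launch-1.0 videotestsrc is-live=true ! \
  video/x-raw,format=I420,width=1920,height=720,framerate=30/1 ! \
  shmsink socket-path=/tmp/gst0 sync=true wait-for-connection=true shm-size=20000000
```

If the receiver pipeline plays this smoothly, the shm transport is fine and the stall is on the decoder side, e.g. omxh264dec not pushing buffers continuously when use-dmabuf=false.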
I'm trying to build a pipeline to which I'll give two WAV files and stream them as a single RTP stream with two channels, where each channel carries the corresponding WAV file.
I want to send the RTP using RTSP as well in order to do authentication for the RTSP connection.
I've tried using this pipeline
gst-launch-1.0 interleave name=i ! audioconvert ! wavenc ! filesink location=file.wav filesrc location=first_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x1" ! queue ! i.sink_0 filesrc location=second_audio_file.wav ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x2" ! queue ! i.sink_1
This takes the two WAV files and saves them as a new file.wav in the same directory. The resulting file.wav contains both inputs and has two channels.
I've tried manipulating this pipeline to achieve what I've described, but the main issue is making the sink an RTSP endpoint that carries the RTP stream split into two channels.
If anyone has a suggestion to solve this, that would be great!
Thanks :)
RTSP is not a streaming transport protocol but a session protocol, so it is completely separate from the actual streaming logic (which you can implement with a GStreamer pipeline). That's also why there is an rtpsink (which you can use to stream over RTP), but no rtspsink, for example.
To get a working RTSP server, you can use for example gst-rtsp-server; you can find multiple examples showing how to set it up in their repo, like this small example. Although the examples are all in C, GStreamer also provides bindings to other languages like Python, JavaScript, and more.
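As a sketch of how that could look with gst-rtsp-server's bundled test-launch example (which serves whatever launch description you pass and expects the payloader to be named pay0): the rtpL16pay choice and the launch line below are my assumptions for 16-bit interleaved stereo, not a tested setup:

```shell
# Serve the interleaved two-channel stream at rtsp://127.0.0.1:8554/test.
# test-launch is built from gst-rtsp-server's examples directory.
./test-launch "( interleave name=i ! audioconvert ! audioresample ! rtpL16pay name=pay0 pt=96 \
    filesrc location=first_audio_file.wav ! decodebin ! audioconvert ! \
      audio/x-raw,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_0 \
    filesrc location=second_audio_file.wav ! decodebin ! audioconvert ! \
      audio/x-raw,channels=1,channel-mask=(bitmask)0x2 ! queue ! i.sink_1 )"
```

For the authentication part, gst-rtsp-server exposes an RTSPAuth API on the server object; the examples in the repo show how to attach credentials to mount points.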
I want to use Gstreamer to receive audio streams from multiple points on the same port.
Specifically, I want to stream audio from different nodes on the network to one device that listens for incoming audio streams and mixes them before playback.
I know that I should use audiomixer or liveadder for such a task.
But I can't get it to work: the mixer doesn't behave correctly, and when two audio streams come in, the output sound is noisy and corrupted.
I used the following command :
gst-launch-1.0.exe -v udpsrc port=5001 caps="application/x-rtp" !
queue ! rtppcmudepay ! mulawdec ! audiomixer name=mix mix. !
audioconvert ! audioresample ! autoaudiosink
but it doesn't work.
Packets arriving on the same port cannot be demultiplexed the way your command assumes. To receive multiple audio streams on the same port, you have to demultiplex by SSRC, using the rtpssrcdemux element.
However, to receive multiple audio streams on multiple ports and mix them, you can use the liveadder element. An example that receives two audio streams from two ports and mixes them:
gst-launch-1.0 -v udpsrc name=src5001 caps="application/x-rtp"
port=5001 ! rtppcmudepay ! mulawdec ! audioresample ! liveadder
name=m_adder ! alsasink device=hw:0,0 udpsrc name=src5002
caps="application/x-rtp" port=5002 ! rtppcmudepay ! mulawdec !
audioresample ! m_adder.
First, you probably want to use audiomixer rather than liveadder, as the former guarantees synchronization of the different audio streams.
Then, about your mixing problem: you mention that the output sound is "noisy and corrupted", which makes me think of a problem with audio levels. Although audiomixer clips the output audio to the maximum allowed amplitude range, clipping produces audio artefacts if your sources are too loud. Thus, you might want to play with the volume property on both sources. See here and there for more information.
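For instance, taking the two-port pipeline from the other answer and switching to audiomixer with a volume element in each branch might look like this (the 0.5 values are placeholders to experiment with, not tuned settings):

```shell
# Attenuate each branch before mixing to reduce clipping artefacts.
gst-launch-1.0 -v \
  udpsrc caps="application/x-rtp" port=5001 ! rtppcmudepay ! mulawdec ! \
    volume volume=0.5 ! audioresample ! audiomixer name=mix ! \
    audioconvert ! autoaudiosink \
  udpsrc caps="application/x-rtp" port=5002 ! rtppcmudepay ! mulawdec ! \
    volume volume=0.5 ! audioresample ! mix.
```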
I need to sync video and audio when I play mp4 file. How can I do that?
Here's my pipeline:
gst-launch-0.10 filesrc location=./big_buck_bunny.mp4 ! \
qtdemux name=demux demux.video_00 ! queue ! TIViddec2 engineName=codecServer codecName=h264dec ! ffmpegcolorspace !tidisplaysink2 video-standard=pal display-output=composite \
demux.audio_00 ! queue max-size-buffers=500 max-size-time=0 max-size-bytes=0 ! TIAuddec1 ! audioconvert ! audioresample ! autoaudiosink
Have you tried playing the video on a regular desktop without using TI's elements? GStreamer should take care of synchronization for playback cases (and many others).
If the video is perfectly synchronized on a desktop then you have a bug on the elements specific to your target platform (TIViddec2 and tidisplaysink2). qtdemux should already put the expected timestamps on the buffers, so it is possible that TIViddec2 isn't copying those to its decoded buffers or tidisplaysink2 isn't respecting them. (The same might apply to the audio part)
I'd first check TIViddec2 by replacing the rest of the pipeline after it with a fakesink and running gst-launch in verbose mode. The output from fakesink should show you the output timestamps; check whether those are consistent. You can also put a fakesink right after qtdemux to check the timestamps it produces and see whether the decoders are respecting them.
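Concretely, the timestamp check could look something like this (a sketch based on your pipeline; in verbose mode gst-launch prints fakesink's last-message, which includes each buffer's timestamp):

```shell
# Inspect decoder output timestamps: -v prints fakesink's last-message
# for every buffer that reaches it.
gst-launch-0.10 -v filesrc location=./big_buck_bunny.mp4 ! \
  qtdemux name=demux demux.video_00 ! queue ! \
  TIViddec2 engineName=codecServer codecName=h264dec ! \
  fakesink silent=false
```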
It turned out I had used the wrong video framerate.
I have a TV tuner card that shows up as /dev/video1. I am trying to digitize some old VHS tapes. The TV tuner doesn't capture audio, so I have a wire connected to my microphone input.
This is the GStreamer pipeline I'm using to capture video and audio and save them to a file. I'm using Motion JPEG because I don't want it to drop frames and lose content; I'll re-encode it properly later.
gst-launch-0.10 v4l2src device=/dev/video1 ! \
queue ! \
video/x-raw-yuv,width=640,height=480 ! \
ffmpegcolorspace ! \
jpegenc ! \
avimux name=mux ! \
filesink location=output.avi \
pulsesrc ! \
queue ! \
audioconvert ! \
audio/x-raw-int,rate=44100,channels=2 ! \
mux.
This all works well and good. I get files that play with both video and audio. However, when playing the output files, the audio and video sometimes go out of sync. It happens at the same place in the video, in numerous different media players (Totem, MPlayer), so I think the problem is in how I'm recording and saving the file.
Is there anything I can do to the pipeline to make it less likely to suffer from audio/video sync problems? I'm a bit of a newbie to gstreamer and video/audio codecs, so I might be doing something stupid here (please point out!). Is there any video/audio/muxer codec that would be better?
Try adding an audiorate element in the audio branch, and a videorate element in the video branch, to see if that makes a difference, or try a different muxer, like qtmux or matroskamux.
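Applied to the pipeline above, that could look like this (a sketch; the framerate value is an assumption and depends on your capture standard, e.g. 30000/1001 for NTSC or 25/1 for PAL):

```shell
# videorate/audiorate fill in or drop frames/samples so both branches
# carry continuous, consistent timestamps into the muxer.
gst-launch-0.10 v4l2src device=/dev/video1 ! \
  queue ! \
  videorate ! \
  video/x-raw-yuv,width=640,height=480,framerate=30000/1001 ! \
  ffmpegcolorspace ! \
  jpegenc ! \
  avimux name=mux ! \
  filesink location=output.avi \
  pulsesrc ! \
  queue ! \
  audiorate ! \
  audioconvert ! \
  audio/x-raw-int,rate=44100,channels=2 ! \
  mux.
```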