How to solve a RAW stream playback problem with GStreamer and VAAPI - linux

I am currently experiencing a problem with GStreamer; here are the details:
Configuration:
Intel i7-6700
Intel HD Graphics 530
Ubuntu 18.04 LTS
GStreamer 1.0
VAAPI plugin
I receive a UDP stream from a video source, this stream is sent in RAW UYVY format. Here is my command line to decode it:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! decodebin ! queue ! videoconvert ! xvimagesink
Problem: as a screenshot in the original post showed, the CPU load is far too high for this kind of task, while the GPU load is almost zero.
To overcome this problem, I want to use VAAPI graphics acceleration, as I did in a previous project with H264. Here is that command line:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, profile-level-id=(string)640028, payload=(int)96, ssrc=(uint)2665415388, timestamp-offset=(uint)3571350145, seqnum-offset=(uint)18095, a-framerate=(string)25" ! rtph264depay ! queue ! vaapih264dec low-latency=1 ! autovideosink
The line above works perfectly, and the CPU load drops to almost nothing. So I adapted this command line for the RAW stream; here is the command:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! vaapidecodebin ! videoconvert ! xvimagesink
It is the same line as the one at the beginning, except that I replaced the decodebin element with vaapidecodebin, just as I had replaced avdec_h264 with vaapih264dec for my H264 stream. Unfortunately it doesn't work, and I end up with this error:
WARNING: wrong pipeline: unable to connect rtpvrawdepay0 to vaapidecodebin0
How can I solve this problem? Do you have any leads?

What exactly are you trying to accelerate here? The CPU load is probably due either to videoconvert, which runs in software to convert UYVY into a format your renderer supports (hopefully another YUV format and not RGB), or to the transfer of the uncompressed data from CPU memory to GPU memory.
Note that transferring uncompressed image data is a much higher data rate than compressed H.264 video.
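To put a rough number on that: 1080p at 25 fps in UYVY (2 bytes per pixel) is over 100 MB/s of pixel data before any RTP overhead:

```shell
# 1920 x 1080 pixels, 2 bytes/pixel (UYVY 4:2:2), 25 frames/s:
echo $((1920 * 1080 * 2 * 25))             # bytes per second -> 103680000
echo $((1920 * 1080 * 2 * 25 / 1000000))   # -> 103 (about 100 MB/s)
```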
If you think videoconvert is the expensive part, you may want to try using OpenGL for conversion and display: .. ! glupload ! glcolorconvert ! glimagesink.
Maybe vaapipostproc can help you with color conversion if you don't want to go the OpenGL route.
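Note also why the link fails: rtpvrawdepay already outputs raw video, so there is nothing for vaapidecodebin to decode. A sketch of a vaapipostproc-based pipeline (untested; it assumes the gstreamer-vaapi elements vaapipostproc and vaapisink are installed and work with your HD Graphics 530 driver):

```shell
# Hypothetical sketch: keep conversion and display on the GPU by replacing
# videoconvert + xvimagesink with vaapipostproc + vaapisink.
# Reuse the full RTP caps string from the question; it is abbreviated here.
CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, payload=(int)96'
CMD="gst-launch-1.0 -v udpsrc port=1234 caps=\"$CAPS\" ! rtpvrawdepay ! vaapipostproc ! vaapisink"
echo "$CMD"
```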

Related

udpsink doesn't seem to stream anything but filesink works

I'm having trouble streaming a PulseAudio monitor via RTP to an audio player like VLC or gst-launch with udpsrc.
This command works, and the resulting file contains the audio currently being played:
gst-launch-1.0 -v pulsesrc device = "alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! opusenc ! oggmux ! filesink location=test.ogg
But when I use this:
gst-launch-1.0 -v pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! opusenc ! rtpopuspay ! udpsink host=0.0.0.0 port=4000
VLC (on an Android phone) tells me that it cannot play the stream with URI rtp://ip-addr:4000,
and gst-launch from the same machine starts, but the resulting file is empty.
gst-launch-1.0 -v udpsrc uri=rtp://0.0.0.0:4000 ! rtpopusdepay ! oggmux ! filesink location=test.ogg
The GStreamer version is:
$ gst-launch-1.0 --version
gst-launch-1.0 version 1.16.0
GStreamer 1.16.0
I just started this account; I wanted to add this as a comment but couldn't because of reputation limitations.
I'm not experienced with VLC, but I can at least get your GStreamer pipelines working if I add the caps (the definitions of the RTP stream parameters) before rtpopusdepay.
So instead of:
gst-launch-1.0 -v udpsrc uri=rtp://0.0.0.0:4000 ! rtpopusdepay ! oggmux ! filesink location=test.ogg
you'll need to use:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:4000 ! application/x-rtp,payload=96,encoding-name=OPUS ! rtpopusdepay ! opusdec ! autoaudiosink
for GStreamer. The mandatory parts are the payload and encoding-name; the others you can find via gst-inspect-1.0 rtpopuspay/rtpopusdepay. You might need to change the numbers depending on what you define on the server side or what the defaults are on your machine.
So in conclusion, I got that GStreamer pipeline working by moving the RTP definitions into the caps before rtpopusdepay. As I said, I'm not familiar with VLC, so I don't know how to define those GStreamer caps there, or whether it even depends on them, but I hope this gives some insight into your problem.
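If the goal is still the original one of saving to an Ogg file rather than playing back, the same caps fix should carry over. A sketch (untested; opusparse is added on the assumption that oggmux wants parsed Opus headers, and the payload number may differ on your setup):

```shell
# Hypothetical sketch: receiver with the RTP caps fix, muxing to an Ogg file.
CMD='gst-launch-1.0 udpsrc uri=udp://0.0.0.0:4000 ! application/x-rtp,payload=96,encoding-name=OPUS ! rtpopusdepay ! opusparse ! oggmux ! filesink location=test.ogg'
echo "$CMD"
```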

GStreamer audio problem on embedded Linux

I work on embedded Linux. I want to play video with minimal CPU usage. After I finished compiling, I tried playing video with MPlayer and GStreamer. MPlayer uses 10-20% CPU on average, and I want to get the same performance from GStreamer. So I tried these commands:
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2-gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
These commands use 10-20% CPU on average, which is the number I want. But audio did not work with these commands; I tried adding audio elements but could not get it to work.
I also tried gst-launch-1.0 playbin uri=file:///video_path.mpeg. Audio works with that command, but the CPU usage is very high, so I'd rather not use it.
How can I get audio working with commands 1 or 2?
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
With the above two pipelines you are asking GStreamer to play only video; as a result, you aren't getting any audio.
gst-launch filesrc location=video_path.mpeg ! mpegdemux name=demuxer demuxer. ! queue ! mpeg2dec ! autovideosink demuxer. ! queue ! mad ! audioconvert ! audioresample ! autoaudiosink
The above pipeline should play both audio and video.
Note: If you have support for hardware decoding that would reduce further CPU usage.
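For reference, the same split-demuxer idea could be written for GStreamer 1.0 along these lines (a sketch, untested; the element names mpegpsdemux, mpeg2dec and mpg123audiodec are assumptions that depend on which plugin sets your embedded build includes):

```shell
# Hypothetical sketch: one demuxer feeding separate video and audio branches,
# as in the 0.10 pipeline above, but with GStreamer 1.0 element names.
CMD='gst-launch-1.0 filesrc location=video_path.mpeg ! mpegpsdemux name=demuxer demuxer. ! queue ! mpeg2dec ! autovideosink demuxer. ! queue ! mpg123audiodec ! audioconvert ! audioresample ! autoaudiosink'
echo "$CMD"
```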

gst-launch camera gets wrong color space

I am using the following command to preview the Raspberry Pi camera on a Tinker Board (Tinker OS V2.0.8):
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=NV12,width=640,height=480 ! videoconvert ! autovideosink
But the images show a green tint (they are supposed to be white).
So what could the problem be? Is there any way to adjust the colour balance?
I'm guessing the problem is the format of the output images: the NV12 format is making the image look green.
Problem solved:
Based on the tutorial https://tinkerboarding.co.uk/wiki/index.php?title=CSI-camera, from Tinker OS V2.0.8 onwards, the following command is used to stream video:
gst-launch-1.0 rkcamsrc device=/dev/video0 io-mode=4 isp-mode=2A tuning-xml-path=/etc/cam_iq/IMX219.xml ! videoconvert ! video/x-raw,format=NV12,width=1800,height=960 ! rkximagesink

Low-latency audio streaming with GStreamer: dropped samples when lowering the buffer-time on a Pepper robot

I'm trying to send audio through an RTP stream using GStreamer with the lowest latency possible, from a Pepper (GStreamer 0.10) to my computer (GStreamer 0.10 or 1.0).
I can send audio with low latency (20 ms) from the computer to the Pepper; however, it doesn't work as well from the Pepper to the computer. When I try to set the buffer-time below 200 ms, I get this type of error:
WARNING: Can't record audio fast enough
Dropped 318 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
I used the answers from here, and so far I have been working with the following pipelines:
Sender
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Receiver
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! autoaudiosink buffer-time=80000 sync=false
I don't really know how to tackle this issue, as the CPU usage is not abnormal.
To be frank, I am quite new at this, so I don't know which parameters to play with to get low latency. I hope someone can help me! (And that it is not a hardware problem, too. ^^)
Thanks a lot!
Thanks a lot!
I don't think gst-launch-0.10 is made to work in real time (RT).
Please consider writing your own program (even one using GStreamer) to perform the streaming from an RT thread. NAOqi OS has the RT patches included and supports this.
But with network involved in your pipeline, you may not be able to guarantee it is going to be processed in time.
So maybe the simplest solution could be to keep a queue before the audio processing of the sender:
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
queue ! audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Also consider that the receiver may cause a drop if it cannot process the audio in time, and that a queue might help too:
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! queue ! autoaudiosink buffer-time=80000 sync=false
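For a sense of scale of the buffers involved: at 16 kHz, 16-bit mono, a 20 ms buffer-time holds only 640 bytes, and the 318 dropped samples in the warning amount to roughly 20 ms of audio, so even a small scheduling hiccup overruns the buffer:

```shell
# 16000 samples/s * 2 bytes/sample * 20 ms buffer-time:
echo $((16000 * 2 * 20 / 1000))   # bytes held by a 20 ms buffer -> 640
# Duration of the 318 dropped samples, in milliseconds (integer division):
echo $((318 * 1000 / 16000))      # -> 19
```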

GStreamer video recording memory leak

Hi, I am trying to record an RTSP stream coming from a camera (H264 format).
I am using the following gst command to record in MPEG4 format:
gst-launch -e rtspsrc location=rtsp://10.17.8.136/mediainput/h264 latency=100 ! decodebin ! ffenc_mpeg4 ! avimux ! filesink location=test.mp4
and this one for H264 format:
gst-launch-0.10 -e rtspsrc location="rtsp://10.17.8.136/mediainput/h264" latency=100 ! rtph264depay byte-stream=false ! capsfilter caps="video/x-h264,width=1920,height=1080,framerate=(fraction)25/1" ! mp4mux ! filesink location=testh264.mp4
Both record successfully, but I have observed that RAM usage gradually increases.
Does GStreamer have a memory leak, or is there a problem in my pipeline command?
That is not a leak: the MP4 muxer builds its index table in memory before writing it out to disk on EOS.
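If the growing memory is a problem for long recordings, one common workaround (a sketch, untested) is to mux into Matroska instead, which writes its clusters to disk incrementally rather than holding a full index until EOS:

```shell
# Hypothetical sketch: same H264 recording, but with matroskamux so memory
# use should stay roughly flat over long recordings.
CMD='gst-launch-0.10 -e rtspsrc location=rtsp://10.17.8.136/mediainput/h264 latency=100 ! rtph264depay ! matroskamux ! filesink location=testh264.mkv'
echo "$CMD"
```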
