Failed delayed linking some pad of GstDecodeBin named decodebin to some pad of GstAudioConvert named audioconvert0 - audio

I have created a GStreamer pipeline like the one below:
gst-launch-1.0 rtpbin name=rtpbin latency=200 rtp-profile=avpf \
  rtspsrc location="rtsp://url" protocols=2 timeout=50000000 ! decodebin name=decodebin \
  ! audioconvert ! audioresample ! opusenc ! rtpopuspay pt=111 ssrc=11111111 \
  ! rtprtxqueue max-size-time=1000 max-size-packets=0 ! rtpbin.send_rtp_sink_0 \
  rtpbin.send_rtp_src_0 ! udpsink host=10.7.50.43 port=12785 \
  rtpbin.send_rtcp_src_0 ! udpsink host=10.7.50.43 port=12900 sync=false async=false \
  funnel name=rtp_funnell ! udpsink host=10.7.50.43 port=14905 \
  funnel name=rtcp_funnell ! udpsink host=10.7.50.43 port=13285 sync=false async=false \
  decodebin. ! videoconvert ! tee name=video_tee \
  video_tee. ! queue ! videoconvert ! videoscale ! videorate \
  ! video/x-raw,format=I420,width=320,height=180,framerate=24/1 \
  ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=512 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 \
  ! h264parse ! rtph264pay ssrc=33333333 pt=101 \
  ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_1 \
  rtpbin.send_rtp_src_1 ! rtp_funnell.sink_0 rtpbin.send_rtcp_src_1 ! rtcp_funnell.sink_0 \
  video_tee. ! queue ! videoconvert ! videoscale ! videorate \
  ! video/x-raw,format=I420,width=640,height=360,framerate=24/1 \
  ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=1024 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 \
  ! h264parse ! rtph264pay ssrc=33333334 pt=101 \
  ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_2 \
  rtpbin.send_rtp_src_2 ! rtp_funnell.sink_1 rtpbin.send_rtcp_src_2 ! rtcp_funnell.sink_1 \
  video_tee. ! queue ! videoconvert ! videoscale ! videorate \
  ! video/x-raw,format=I420,width=960,height=540,framerate=24/1 \
  ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=2048 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 \
  ! h264parse ! rtph264pay ssrc=33333335 pt=101 \
  ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_3 \
  rtpbin.send_rtp_src_3 ! rtp_funnell.sink_2 rtpbin.send_rtcp_src_3 ! rtcp_funnell.sink_2 \
  video_tee. ! queue ! videoconvert ! videoscale ! videorate \
  ! video/x-raw,format=I420,width=1280,height=720,framerate=24/1 \
  ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=4096 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 \
  ! h264parse ! rtph264pay ssrc=33333336 pt=101 \
  ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_4 \
  rtpbin.send_rtp_src_4 ! rtp_funnell.sink_3 rtpbin.send_rtcp_src_4 ! rtcp_funnell.sink_3
It returns the following warning and the audio is not transcoded properly:
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin: Delayed linking failed.
Additional debug info:
gst/parse/grammar.y(544): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decodebin:
failed delayed linking some pad of GstDecodeBin named decodebin to some pad of GstAudioConvert named audioconvert0
But if I change the source to a static file source like below (replacing the rtspsrc part of the pipeline):
filesrc location="BigBuckBunny.mp4"
It works fine.
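For reference, when this pipeline is built in code rather than with gst-launch, the delayed linking can be made explicit by connecting to decodebin's pad-added signal and logging the caps of every pad it exposes before linking the audio branch. Below is a minimal diagnostic sketch, assuming PyGObject with GStreamer 1.0; the fakesink stands in for the real Opus branch and the element names are illustrative:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Hypothetical audio-only skeleton of the pipeline, kept small for diagnosis.
pipeline = Gst.parse_launch(
    'rtspsrc location="rtsp://url" protocols=2 ! decodebin name=dec '
    'audioconvert name=conv ! audioresample ! fakesink')
dec = pipeline.get_by_name('dec')
conv = pipeline.get_by_name('conv')

def on_pad_added(element, pad, sinkpad):
    # Log the caps of each pad decodebin exposes, then link the audio one.
    caps = pad.get_current_caps() or pad.query_caps(None)
    name = caps.get_structure(0).get_name()
    print('decodebin exposed:', name)
    if name.startswith('audio/') and not sinkpad.is_linked():
        print('link result:', pad.link(sinkpad))

dec.connect('pad-added', on_pad_added, conv.get_static_pad('sink'))
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()

This shows whether decodebin ever exposes an audio/x-raw pad for the RTSP source at all, and if it does, why the link to audioconvert is refused.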
The differences between these two sources are as follows.
Working Source
> gst-discoverer-1.0 "BigBuckBunny.mp4"
Analyzing file:BigBuckBunny.mp4
Done discovering file:BigBuckBunny.mp4
Properties:
Duration: 0:09:56.473333333
Seekable: yes
Live: no
container #0: Quicktime
video #1: H.264 (High Profile)
Stream ID: 786017c5b5a8102940e7912e1130363236dc5ce24cb9a0f981d989da87e36cbe/002
Width: 1280
Height: 720
Depth: 24
Frame rate: 24/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 1991280
Max bitrate: 5372792
audio #2: MPEG-4 AAC
Stream ID: 786017c5b5a8102940e7912e1130363236dc5ce24cb9a0f981d989da87e36cbe/001
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 44100
Depth: 32
Bitrate: 125488
Max bitrate: 169368
Non-working Source
> gst-discoverer-1.0 rtsp://ip:port
Analyzing rtsp://ip:port
Done discovering rtsp://ip:port
Properties:
Duration: 99:99:99.999999999
Seekable: no
Live: yes
container #0: application/rtsp
unknown #2: application/x-rtp
audio #1: MPEG-4 AAC
Stream ID: 9053093890e06258c9ebd10a484943f40698af07428b21e6d4e07cc150314b0b/audio:0:0:RTP:AVP:97
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 48000
Depth: 32
Bitrate: 0
Max bitrate: 0
unknown #4: application/x-rtp
video #3: H.264 (Main Profile)
Stream ID: 9053093890e06258c9ebd10a484943f40698af07428b21e6d4e07cc150314b0b/video:0:0:RTP:AVP:96
Width: 1920
Height: 1080
Depth: 24
Frame rate: 60/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 0
Max bitrate: 0
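The discoverer output also shows why the live case is harder: the RTSP streams arrive as application/x-rtp and only become raw audio and video after depayloading and decoding, all through dynamically added pads. One experiment worth trying (a sketch, not a confirmed fix) is to let uridecodebin own the whole RTSP connection, depayloading, and decoding chain, so the delayed linking happens inside one element; the fakesinks below stand in for the real audio and video branches of the original pipeline:

gst-launch-1.0 uridecodebin uri="rtsp://url" name=decodebin \
  decodebin. ! queue ! audioconvert ! audioresample ! fakesink \
  decodebin. ! queue ! videoconvert ! fakesink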

Related

Gstreamer: Could not capture whole duration audio with pipeline

I have tried to capture audio and video together in a dynamic pipeline, but it seems the audio is only captured for a fragment of the time (not the full length of the file). I used gst-discoverer to find this problem.
File Tested
gst-discoverer-1.0 xxxx.mp4
Analyzing file:///xxxx.mp4
====== AIUR: 4.6.1 build on May 11 2021 03:19:55. ======
Core: AVI_PARSER_03.06.08 build on Sep 15 2020 02:45:45
file: /usr/lib/imx-mm/parser/lib_avi_parser_arm_elinux.so.3.1
------------------------
Track 00 [video_0] Enabled
Duration: 0:04:41.160000000
Language: und
Mime:
video/x-h264, parsed=(boolean)true, alignment=(string)au, stream-format=(string)byte-stream, width=(int)720, height=(int)480, framerate=(fraction)25/1
------------------------
====== VPUDEC: 4.6.1 build on May 11 2021 03:19:55. ======
wrapper: 3.0.0 (VPUWRAPPER_ARM64_LINUX Build on Jun 3 2021 04:20:32)
vpulib: 1.1.1
firmware: 1.1.1.65535
------------------------
Track 01 [audio_0] Enabled
Duration: 0:00:12.750476000
Language: und
Mime:
audio/x-raw, format=(string)S16LE, channels=(int)2, layout=(string)interleaved, rate=(int)44100, bitrate=(int)1411200
------------------------
Done discovering file:///xxxx.mp4
Properties:
Duration: 0:04:41.160000000
Seekable: yes
Live: no
container: Audio Video Interleave (AVI)
audio: Raw 16-bit PCM audio
Stream ID: b6e7e8a9768c340295f0f67833e05ab2e8fe2243b4d7ec4e5d6152cbe76dc8af/1
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 44100
Depth: 16
Bitrate: 0
Max bitrate: 0
video: H.264 (Constrained Baseline Profile)
Stream ID: b6e7e8a9768c340295f0f67833e05ab2e8fe2243b4d7ec4e5d6152cbe76dc8af/0
Width: 720
Height: 480
Depth: 24
Frame rate: 25/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 0
Max bitrate: 0
From the output given by gst-discoverer, it seems that the audio was only recorded for 12+ seconds. I then constructed the static pipeline and tested it out with gst-launch:
gst-launch-1.0 -e -v \
v4l2src \
! video/x-raw,width=720,height=480,framerate=30/1,is-live=true \
! clockoverlay \
! videorate \
! video/x-raw,framerate=25/1 \
! tee name=tv \
tv. \
! queue name=q1a \
! vpuenc_h264 \
! h264parse \
! mux.video_0 \
tv. \
! queue name=q1b \
! vpuenc_h264 \
! tee name=tv2 \
tv2. \
! queue \
! rtph264pay pt=96 \
! udpsink host="x.x.x.x" port=3456 \
pulsesrc volume=8.0 \
! audioconvert \
! audioresample \
! volume volume=1.0 \
! audio/x-raw,rate=8000,channels=1,depth=8,format=S16LE \
! tee name=ta \
! queue \
! alawenc \
! tee name=ta2 \
ta2. \
! queue \
! rtppcmapay pt=8 \
! udpsink host="x.x.x.x" port=3458 \
ta2. \
! queue \
! mux.audio_0 \
avimux name=mux \
! queue \
! filesink location=file%02d.avi
The file recorded by this pipeline, analyzed with gst-discoverer:
Analyzing file:///home/root/file%2502d.avi
====== AIUR: 4.6.1 build on May 11 2021 03:19:55. ======
Core: AVI_PARSER_03.06.08 build on Sep 15 2020 02:45:45
file: /usr/lib/imx-mm/parser/lib_avi_parser_arm_elinux.so.3.1
------------------------
Track 00 [video_0] Enabled
Duration: 0:00:40.520000000
Language: und
Mime:
video/x-h264, parsed=(boolean)true, alignment=(string)au, stream-format=(string)byte-stream, width=(int)720, height=(int)480, framerate=(fraction)25/1
------------------------
====== VPUDEC: 4.6.1 build on May 11 2021 03:19:55. ======
wrapper: 3.0.0 (VPUWRAPPER_ARM64_LINUX Build on Jun 3 2021 04:20:32)
vpulib: 1.1.1
firmware: 1.1.1.65535
Track 01 [audio]: Disabled
Codec: 2, SubCodec: 0
------------------------
Done discovering file:///home/root/file%2502d.avi
Properties:
Duration: 0:00:40.520000000
Seekable: yes
Live: no
container: Audio Video Interleave (AVI)
video: H.264 (Constrained Baseline Profile)
Stream ID: 2f44a8a002c570424bca50bdc0bc9c743ea882e7cd3f855918368cd108ff977f/0
Width: 720
Height: 480
Depth: 24
Frame rate: 25/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 0
Max bitrate: 0
It seems to me that no audio was recorded, but on opening the file some static noise can be heard (so I assume audio was perhaps recorded after all).
So my questions are:
1) For the static pipeline, is there really audio recorded? gst-discoverer and actually opening the file seem to show different outcomes.
2) For the dynamic pipeline (coded in C), what could the problem be? I would be grateful if anyone with similar prior experience could point out areas I could look into.
Thanks
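One way to narrow down question 1) is to record the audio branch on its own (a diagnostic sketch, assuming the same pulsesrc device): if an audio-only capture reaches the full duration, the stall lies in the queues or the muxing of the combined pipeline rather than in the source. Note that the depth=8 field from the original caps is dropped here; audio/x-raw in GStreamer 1.0 has no depth field, and it would contradict format=S16LE anyway.

gst-launch-1.0 -e pulsesrc volume=8.0 \
  ! audioconvert ! audioresample \
  ! audio/x-raw,rate=8000,channels=1,format=S16LE \
  ! alawenc ! avimux ! filesink location=audio_only.avi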

Duplicate camera stream not working with custom frame width and height

I have been able to duplicate my webcam stream on Ubuntu with
gst-launch-1.0 v4l2src device=/dev/video0 ! tee name=t ! queue ! v4l2sink device=/dev/video2 t. ! queue ! v4l2sink device=/dev/video3
I'm able to launch two simultaneous streams with
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! ximagesink
gst-launch-1.0 v4l2src device=/dev/video3 ! videoconvert ! ximagesink
But if I try to change the stream's width and height, it doesn't work:
gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, width=640,height=480,framerate=15/1' ! videoconvert ! ximagesink
Error ---
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Even this isn't working:
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! video/x-raw, width=640,height=480,framerate=15/1 ! ximagesink -v
UPDATE
It's working now with this command, but only with framerate=30/1. If I change the framerate to anything else it just doesn't work at all:
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! video/x-raw, width=640,height=480, framerate=30/1 ! ximagesink -v
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, format=(string)YUY2, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoScale:videoscale0.GstPad:src: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstXImageSink:ximagesink0.GstPad:sink: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoScale:videoscale0.GstPad:sink: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, format=(string)YUY2, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
The internal data stream error appears when you pass a wrong height, width, or another camera parameter. Check your camera's supported resolutions and framerates first. One note: you can't change the framerate just by passing a different value, because the camera supports only a fixed set of resolution and framerate combinations. If you want a different framerate, use videorate, as sketched below.
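Following that advice, a pipeline like the following (assuming the camera's fixed mode is 800x600 at 30/1, as the caps above show) lets videorate derive 15/1 from the camera's native rate:

gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! videorate \
  ! video/x-raw,width=640,height=480,framerate=15/1 ! ximagesink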

Raspberry Pi Camera streaming Video to Nvidia Xavier NX using python OpenCv loses Color Information

The Raspberry Pi Zero W camera runs:
raspivid -n -t 0 -rot 180 -w 640 -h 480 -fps 30 -b 1000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.1.242 port=5000
Testing on the Xavier NX from the OS, it runs fine with color:
gst-launch-1.0 -v udpsrc port=5000 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! autovideoconvert ! ximagesink
The Xavier NX Python OpenCV code below is missing color (gray video). When I print the frame's .shape I get only height and width, with no channel count:
import cv2

cam0 = cv2.VideoCapture('udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=96"'
                        ' ! rtph264depay'
                        ' ! decodebin'
                        ' ! autovideoconvert'
                        ' ! appsink', cv2.CAP_GSTREAMER)

while True:
    _, frameCam0 = cam0.read()
    print(frameCam0.shape)
    cv2.imshow("Camera 0", frameCam0)
    cv2.moveWindow("Camera 0", 0, 0)
    if cv2.waitKey(1) == ord('q'):
        break

cam0.release()
cv2.destroyAllWindows()
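A frame shape with only height and width means appsink negotiated a single-channel format (typically GRAY8). A common remedy, sketched here under the assumption that the rest of the setup is unchanged, is to force three-channel BGR caps in front of appsink, since OpenCV expects BGR frames:

import cv2

# Forcing BGR before appsink makes cam0.read() yield (height, width, 3) frames.
cam0 = cv2.VideoCapture('udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=96"'
                        ' ! rtph264depay'
                        ' ! decodebin'
                        ' ! videoconvert'
                        ' ! video/x-raw, format=BGR'
                        ' ! appsink drop=true', cv2.CAP_GSTREAMER)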

Creating a mulaw audio file from L16

I have a wave file with these properties.
sampling rate = 16000 Hz
encoding = L16
channels = 1
bit resolution = 16
I want to make two pipelines:
1) The first throws this file's contents as RTP packets to port=5000.
2) The second listens on port=5000, catches the RTP packets, and makes an audio file with the following properties:
sampling rate = 8000 Hz
encoding = PCMU
channels = 1
bit resolution = 8
What I have tried is:
Sender:
gst-launch-1.0 filesrc location=/path/to/test_l16.wav ! wavparse ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=192.168.xxx.xxx port=5000
Receiver:
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, channels=(int)1" ! rtppcmudepay ! mulawdec ! filesink location=/path/to/test_pcmu.ulaw
But I am getting an L16 (raw PCM) file at test_pcmu.ulaw and not PCMU.
Any suggestions?
Inspect what the mulawdec element does:
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
audio/x-mulaw
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]
SRC template: 'src'
Availability: Always
Capabilities:
audio/x-raw
format: S16LE
layout: interleaved
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]
So basically it decodes mu-law to PCM. If you want to save the raw mu-law stream instead, remove the mulawdec element, as in the sketch below.
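Applied to the receiver above, that means writing the depayloaded mu-law bytes straight to disk (a sketch based on the original command, with only mulawdec removed):

gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, channels=(int)1" ! rtppcmudepay ! filesink location=/path/to/test_pcmu.ulaw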

Gstreamer - opus caps parsing error, anyone know how to fix it?

What is wrong with my escaping? It fails to parse the Opus caps properly (but not Speex), leaving the pipeline non-functional. Does anyone know where I have to add more \ or / or " or ' symbols to make the caps valid?
$ gst-launch-0.10 -v gstrtpbin name=rtpbin latency=100 udpsrc caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)\"audio/x-opus\\,\\ multistream\\=\\(boolean\\)false\\,\\ streamheader\\=\\(buffer\\)\\<\\ 4f707573486561640101000080bb0000000000\\,\\ 4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501\\ \\>\", ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001" port=5002 ! rtpbin.recv_rtp_sink_1 rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! alsasink device=2 name=uudpsink0 udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=%s sync=false async=false
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstsimsyn.so': /usr/lib/gstreamer-0.10/libgstsimsyn.so: undefined symbol: gst_controller_sync_values
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstaudiodelay.so': /usr/lib/gstreamer-0.10/libgstaudiodelay.so: undefined symbol: gst_base_transform_set_gap_aware
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstbml.so': /usr/lib/gstreamer-0.10/libgstbml.so: undefined symbol: gst_base_src_set_format
WARNING: erroneous pipeline: could not set property "caps" in element "udpsrc0" to "application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)"audio/x-opus\,\\ multistream\=\(boolean\)false\,\\ streamheader\=\(buffer\)\<\\ 4f707573486561640101000080bb0000000000\,\\ 4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501\\ \>", ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001"
I don't think special escaping is needed. If your pipeline is correct then this should work:
gst-launch-0.10 -v gstrtpbin name=rtpbin latency=100 udpsrc caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)audio/x-opus, multistream=(boolean)false, streamheader=(buffer)<4f707573486561640101000080bb0000000000,4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501>, ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001" port=5002 ! rtpbin.recv_rtp_sink_1 rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! alsasink device=2 name=uudpsink0 udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=%s sync=false async=false
If you need to take care of special characters that bash could be interpreting, change caps="..." to caps='...'.
Here is a clumsy python version:
import subprocess

args = ['gst-launch-0.10',
        '-v',
        'gstrtpbin', 'name=rtpbin', 'latency=100',
        'udpsrc',
        'caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)audio/x-opus, multistream=(boolean)false, streamheader=(buffer)<4f707573486561640101000080bb0000000000,4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501>, ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001"',
        'port=5002',
        '!', 'rtpbin.recv_rtp_sink_1',
        'rtpbin.', '!', 'rtpopusdepay', '!', 'opusdec', '!', 'audioconvert',
        '!', 'audioresample', '!', 'alsasink', 'device=2', 'name=uudpsink0',
        'udpsrc', 'port=5003', '!', 'rtpbin.recv_rtcp_sink_1',
        'rtpbin.send_rtcp_src_1', '!', 'udpsink', 'port=5007', 'host=%s',
        'sync=false', 'async=false']

child = subprocess.Popen(args, stdout=subprocess.PIPE)
streamdata = child.communicate()[0]  # streamdata will contain the output of gst-launch-0.10
rc = child.returncode  # rc will contain the return code of gst-launch-0.10
print streamdata
print "\nprocess returned %d" % (rc)
I think you are better off finding a good Python module for GStreamer than using subprocess and the like; see the sketch below.
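For example, here is a minimal receive-only sketch with the modern PyGObject bindings (GStreamer 1.0 rather than the question's 0.10, and using the current OPUS encoding-name instead of the draft name; the port and caps are illustrative):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Build the receive pipeline from a launch string, as gst-launch would.
pipeline = Gst.parse_launch(
    'udpsrc port=5002 caps="application/x-rtp, media=(string)audio, '
    'clock-rate=(int)48000, encoding-name=(string)OPUS" '
    '! rtpjitterbuffer latency=100 ! rtpopusdepay ! opusdec '
    '! audioconvert ! audioresample ! autoaudiosink')
pipeline.set_state(Gst.State.PLAYING)

# Quit the main loop on error or end-of-stream.
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message::error', lambda bus, msg: loop.quit())
bus.connect('message::eos', lambda bus, msg: loop.quit())
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)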
Hope this helps!
