raspberrypi gstreamer1.0 alsasrc0: Internal data flow error - linux

I have a problem trying to stream RTMP from a Logitech C210 webcam with sound.
I've installed gstreamer1.0-omx and the other needed packages, but when I try to capture video (for simplicity, let's write it to an FLV file):
gst-launch-1.0 v4l2src ! "video/x-raw,width=640,height=480,framerate=30/1" ! \
omxh264enc target-bitrate=1000000 control-rate=variable ! video/x-h264,profile=high ! \
h264parse ! queue ! flvmux name=mux alsasrc device=plughw:1 ! audioresample \
! audio/x-raw,rate=48000,channels=1 ! queue ! voaacenc bitrate=32000 ! queue ! mux. mux. ! filesink location=1.flv
I get an error like:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
ERROR: from element /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2812): gst_base_src_loop (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 69507879 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
libv4l2: warning v4l2 mmap buffers still mapped on close()
Freeing pipeline ...
Here is the same run with more debug info (the -vvv option):
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: actual-buffer-time = 200000
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: actual-latency-time = 10000
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstAudioResample:audioresample0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstVoAacEnc:voaacenc0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstVoAacEnc:voaacenc0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)1
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstVoAacEnc:voaacenc0.GstPad:src: caps = audio/mpeg, mpegversion=(int)4, channels=(int)1, rate=(int)48000, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)1188
/GstPipeline:pipeline0/GstOMXH264Enc-omxh264enc:omxh264enc-omxh264enc0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)30/1
ERROR: from element /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2812): gst_base_src_loop (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 561957256 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
libv4l2: warning v4l2 mmap buffers still mapped on close()
Freeing pipeline ...
I suppose there is a problem with the ALSA rate or buffer size, so is there any solution to make this work?
I can record audio alone with arecord or GStreamer without any problem, and video alone also works. The problem appears only when capturing video and sound together.
Thanks

The problem is likely that the ALSA plugin for GStreamer 1.0 either hasn't been installed properly or isn't installed at all. I would try the following:
First, try to additionally install the ALSA plugin for GStreamer 1.0:
sudo apt-get install gstreamer1.0-alsa
Sometimes a package appears to be installed, but its plugins are not registered properly.
If the above doesn't work, try installing all the packages your pipeline needs for the GStreamer 0.10 series and run your pipeline with gst-launch-0.10.
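If reinstalling doesn't help, a not-negotiated failure on the audio branch can also mean no element is available to convert alsasrc's native format to the requested mono/48 kHz caps. A variant of the original pipeline that inserts audioconvert before audioresample is worth trying (a sketch, not a confirmed fix):

```shell
# Hypothetical variant: audioconvert bridges alsasrc's native format
# (e.g. stereo S16LE) to the mono/48000 caps required downstream.
gst-launch-1.0 v4l2src ! "video/x-raw,width=640,height=480,framerate=30/1" ! \
  omxh264enc target-bitrate=1000000 control-rate=variable ! video/x-h264,profile=high ! \
  h264parse ! queue ! flvmux name=mux \
  alsasrc device=plughw:1 ! audioconvert ! audioresample ! \
  "audio/x-raw,rate=48000,channels=1" ! queue ! voaacenc bitrate=32000 ! queue ! mux. \
  mux. ! filesink location=1.flv
```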

Related

Failed delayed linking some pad of GstDecodeBin named decodebin to some pad of GstAudioConvert named audioconvert0

I have created a GStreamer pipeline like the one below:
gst-launch-1.0 rtpbin name=rtpbin latency=200 rtp-profile=avpf rtspsrc location="rtsp://url" protocols=2 timeout=50000000 ! decodebin name=decodebin ! audioconvert ! audioresample ! opusenc ! rtpopuspay pt=111 ssrc=11111111 ! rtprtxqueue max-size-time=1000 max-size-packets=0 ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! udpsink host=10.7.50.43 port=12785 rtpbin.send_rtcp_src_0 ! udpsink host=10.7.50.43 port=12900 sync=false async=false funnel name=rtp_funnell ! udpsink host=10.7.50.43 port=14905 funnel name=rtcp_funnell ! udpsink host=10.7.50.43 port=13285 sync=false async=false decodebin. ! videoconvert ! tee name=video_tee video_tee. ! queue ! videoconvert ! videoscale ! videorate ! video/x-raw , format=I420 , width=320 , height=180 , framerate=24/1 ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=512 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 ! h264parse ! rtph264pay ssrc=33333333 pt=101 ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_1 rtpbin.send_rtp_src_1 ! rtp_funnell.sink_0 rtpbin.send_rtcp_src_1 ! rtcp_funnell.sink_0 video_tee. ! queue ! videoconvert ! videoscale ! videorate ! video/x-raw , format=I420 , width=640 , height=360 , framerate=24/1 ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=1024 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 ! h264parse ! rtph264pay ssrc=33333334 pt=101 ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_2 rtpbin.send_rtp_src_2 ! rtp_funnell.sink_1 rtpbin.send_rtcp_src_2 ! rtcp_funnell.sink_1 video_tee. ! queue ! videoconvert ! videoscale ! videorate ! video/x-raw , format=I420 , width=960 , height=540 , framerate=24/1 ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=2048 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 ! h264parse ! rtph264pay ssrc=33333335 pt=101 ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! 
rtpbin.send_rtp_sink_3 rtpbin.send_rtp_src_3 ! rtp_funnell.sink_2 rtpbin.send_rtcp_src_3 ! rtcp_funnell.sink_2 video_tee. ! queue ! videoconvert ! videoscale ! videorate ! video/x-raw , format=I420 , width=1280 , height=720 , framerate=24/1 ! x264enc tune=zerolatency speed-preset=9 dct8x8=false bitrate=4096 insert-vui=true key-int-max=10 b-adapt=true qp-max=40 qp-min=21 pass=17 ! h264parse ! rtph264pay ssrc=33333336 pt=101 ! rtprtxqueue max-size-time=2000 max-size-packets=100 ! rtpbin.send_rtp_sink_4 rtpbin.send_rtp_src_4 ! rtp_funnell.sink_3 rtpbin.send_rtcp_src_4 ! rtcp_funnell.sink_3
It returns the following warning, and the audio is not transcoded properly:
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin: Delayed linking failed.
Additional debug info:
gst/parse/grammar.y(544): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:decodebin:
failed delayed linking some pad of GstDecodeBin named decodebin to some pad of GstAudioConvert named audioconvert0
But if we change the source to a static file source like below (replacing the rtspsrc part of the pipeline):
filesrc location="BigBuckBunny.mp4"
It works fine.
The differences between the two sources are as follows.
Working Source
> gst-discoverer-1.0 "BigBuckBunny.mp4"
Analyzing file:BigBuckBunny.mp4
Done discovering file:BigBuckBunny.mp4
Properties:
Duration: 0:09:56.473333333
Seekable: yes
Live: no
container #0: Quicktime
video #1: H.264 (High Profile)
Stream ID: 786017c5b5a8102940e7912e1130363236dc5ce24cb9a0f981d989da87e36cbe/002
Width: 1280
Height: 720
Depth: 24
Frame rate: 24/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 1991280
Max bitrate: 5372792
audio #2: MPEG-4 AAC
Stream ID: 786017c5b5a8102940e7912e1130363236dc5ce24cb9a0f981d989da87e36cbe/001
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 44100
Depth: 32
Bitrate: 125488
Max bitrate: 169368
Not Working Source
>gst-discoverer-1.0 rtsp://ip:port
Analyzing rtsp://ip:port
Done discovering rtsp://ip:port
Properties:
Duration: 99:99:99.999999999
Seekable: no
Live: yes
container #0: application/rtsp
unknown #2: application/x-rtp
audio #1: MPEG-4 AAC
Stream ID: 9053093890e06258c9ebd10a484943f40698af07428b21e6d4e07cc150314b0b/audio:0:0:RTP:AVP:97
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 48000
Depth: 32
Bitrate: 0
Max bitrate: 0
unknown #4: application/x-rtp
video #3: H.264 (Main Profile)
Stream ID: 9053093890e06258c9ebd10a484943f40698af07428b21e6d4e07cc150314b0b/video:0:0:RTP:AVP:96
Width: 1920
Height: 1080
Depth: 24
Frame rate: 60/1
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 0
Max bitrate: 0
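One way to narrow the delayed-linking failure down (a debugging sketch, not a confirmed fix; it reuses the placeholder rtsp://url) is to run only the audio branch with -v and see which caps decodebin actually exposes for the live stream:

```shell
# Debugging sketch: isolate the audio path from the RTSP source.
# If this also fails to link, the depayloader/decoder for the stream's
# audio payload type may be missing (check gst-inspect-1.0 rtpmp4gdepay).
gst-launch-1.0 -v rtspsrc location="rtsp://url" protocols=2 ! \
  decodebin name=db db. ! queue ! audioconvert ! audioresample ! \
  opusenc ! fakesink
```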

Duplicate camera stream not working with custom frame width and height

I have been able to duplicate my webcam stream on Ubuntu with
gst-launch-1.0 v4l2src device=/dev/video0 ! tee name=t ! queue ! v4l2sink device=/dev/video2 t. ! queue ! v4l2sink device=/dev/video3
I'm able to launch two simultaneous streams with
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! ximagesink
gst-launch-1.0 v4l2src device=/dev/video3 ! videoconvert ! ximagesink
But if I try to change the stream's width and height, it doesn't work:
gst-launch-1.0 v4l2src device=/dev/video2 ! 'video/x-raw, width=640,height=480,framerate=15/1' ! videoconvert ! ximagesink
Error ---
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Even this isn't working:
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! video/x-raw, width=640,height=480,framerate=15/1 ! ximagesink -v
UPDATE
It's working now with this command, but only with framerate=30. If I change the framerate to anything else, it doesn't work at all:
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! video/x-raw, width=640,height=480, framerate=30/1 ! ximagesink -v
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, format=(string)YUY2, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoScale:videoscale0.GstPad:src: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstXImageSink:ximagesink0.GstPad:sink: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoScale:videoscale0.GstPad:sink: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, width=(int)800, height=(int)600, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/2147483647, format=(string)YUY2, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
An internal data stream error appears when you pass a wrong width, height, or another camera parameter. Check your camera's supported resolutions and framerates first. One note: you can't change the framerate just by requesting a different value; the camera supports a fixed set of resolution and framerate combinations. If you want to change the framerate, use videorate.
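The videorate suggestion can be sketched as follows (assuming the /dev/video2 device from the question and a camera that natively delivers 30 fps):

```shell
# videorate drops or duplicates frames so the camera's fixed 30/1 rate
# becomes the requested 15/1 downstream
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! videoscale ! videorate ! \
  "video/x-raw,width=640,height=480,framerate=15/1" ! ximagesink
```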

Raspberry Pi Camera streaming Video to Nvidia Xavier NX using python OpenCv loses Color Information

The Raspberry Pi Zero W camera side runs:
raspivid -n -t 0 -rot 180 -w 640 -h 480 -fps 30 -b 1000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.1.242 port=5000
The Xavier NX, tested from the OS, runs fine with color:
gst-launch-1.0 -v udpsrc port=5000 ! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 ! rtph264depay ! decodebin ! autovideoconvert ! ximagesink
Xavier NX Python OpenCV code - missing color (gray video). When I print the frame's .shape I get height and width, but no color info:
import cv2
cam0 = cv2.VideoCapture('udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=96"'
                        ' ! rtph264depay'
                        ' ! decodebin'
                        ' ! autovideoconvert'
                        ' ! appsink', cv2.CAP_GSTREAMER)
while True:
    _, frameCam0 = cam0.read()
    print(frameCam0.shape)
    cv2.imshow("Camera 0", frameCam0)
    cv2.moveWindow("Camera 0", 0, 0)
    if cv2.waitKey(1) == ord('q'):
        break
cam0.release()
cv2.destroyAllWindows()
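A commonly tried change for gray frames out of CAP_GSTREAMER (an assumption to verify, not a confirmed fix for this exact stream) is to force 3-channel BGR caps before appsink, so OpenCV receives color frames in the layout it expects:

```python
# Sketch: force 3-channel BGR output before appsink. The caps filter after
# videoconvert is the assumed fix; everything else mirrors the question.
PIPELINE = (
    'udpsrc port=5000 caps="application/x-rtp, media=(string)video, '
    'clock-rate=(int)90000, encoding-name=(string)H264, payload=96" '
    '! rtph264depay '
    '! decodebin '
    '! videoconvert '
    '! video/x-raw, format=BGR '
    '! appsink'
)

def open_stream():
    # Imported lazily so the pipeline string can be inspected without OpenCV.
    # Requires an OpenCV build with GStreamer support.
    import cv2
    return cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
```

With these caps, frame.shape should report three values (height, width, 3) instead of two.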

Creating a mulaw audio file from L16

I have a wave file with these properties.
sampling rate = 16000 Hz
encoding = L16
channels = 1
bit resolution = 16
I want to make 2 pipelines:
1) throw this file's contents as RTP packets on port=5000
2) listen on port=5000, catch the RTP packets, and make an audio file with the following properties
sampling rate = 8000 Hz
encoding = PCMU
channels = 1
bit resolution = 8
What I have tried is:
Sender:
gst-launch-1.0 filesrc location=/path/to/test_l16.wav ! wavparse ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=192.168.xxx.xxx port=5000
Receiver:
gst-launch-1.0 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU, channels=(int)1" ! rtppcmudepay ! mulawdec ! filesink location=/path/to/test_pcmu.ulaw
But I am getting an L16 file at test_pcmu.ulaw, not PCMU.
Any suggestion?
Inspect what the mulawdec element does:
Pad Templates:
SINK template: 'sink'
Availability: Always
Capabilities:
audio/x-mulaw
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]
SRC template: 'src'
Availability: Always
Capabilities:
audio/x-raw
format: S16LE
layout: interleaved
rate: [ 8000, 192000 ]
channels: [ 1, 2 ]
So basically it decodes mu-law to PCM. If you want to save the raw mu-law data instead, remove the mulawdec element.
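For context, the conversion mulawenc performs can be sketched in plain Python. This is an illustrative implementation of the standard G.711 mu-law encoder, not GStreamer's actual code: each signed 16-bit PCM sample becomes one 8-bit mu-law byte, which is exactly what a raw .ulaw file contains.

```python
def linear_to_mulaw(sample: int) -> int:
    """Encode one signed 16-bit PCM sample as a G.711 mu-law byte."""
    BIAS = 0x84   # 132, added so every magnitude has a predictable leading bit
    CLIP = 32635  # largest magnitude representable after adding BIAS
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # Segment (exponent) = position of the highest set bit among bits 14..8
    exponent = 7
    mask = 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1
    # 4-bit mantissa taken just below the leading bit of the segment
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # mu-law bytes are stored/transmitted bit-inverted
    return ~(sign | (exponent << 4) | mantissa) & 0xFF
```

For example, silence (sample 0) encodes to 0xFF and full-scale positive (32767) encodes to 0x80, matching the standard G.711 tables.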

Gstreamer - opus caps parsing error, anyone know how to fix it?

What is wrong with my parsing? It fails to parse the Opus caps properly (but not Speex), leaving the pipeline non-functional. Does anyone know where I have to add more \, /, " or ' symbols to make the caps valid?
$ gst-launch-0.10 -v gstrtpbin name=rtpbin latency=100 udpsrc caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)\"audio/x-opus\\,\\ multistream\\=\\(boolean\\)false\\,\\ streamheader\\=\\(buffer\\)\\<\\ 4f707573486561640101000080bb0000000000\\,\\ 4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501\\ \\>\", ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001" port=5002 ! rtpbin.recv_rtp_sink_1 rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! alsasink device=2 name=uudpsink0 udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=%s sync=false async=false
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstsimsyn.so': /usr/lib/gstreamer-0.10/libgstsimsyn.so: undefined symbol: gst_controller_sync_values
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstaudiodelay.so': /usr/lib/gstreamer-0.10/libgstaudiodelay.so: undefined symbol: gst_base_transform_set_gap_aware
(gst-plugin-scanner:25672): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstbml.so': /usr/lib/gstreamer-0.10/libgstbml.so: undefined symbol: gst_base_src_set_format
WARNING: erroneous pipeline: could not set property "caps" in element "udpsrc0" to "application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)"audio/x-opus\,\\ multistream\=\(boolean\)false\,\\ streamheader\=\(buffer\)\<\\ 4f707573486561640101000080bb0000000000\,\\ 4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501\\ \>", ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001"
I don't think special escaping is needed. If your pipeline is otherwise correct, this should work:
gst-launch-0.10 -v gstrtpbin name=rtpbin latency=100 udpsrc caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)audio/x-opus, multistream=(boolean)false, streamheader=(buffer)<4f707573486561640101000080bb0000000000,4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501>, ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001" port=5002 ! rtpbin.recv_rtp_sink_1 rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! alsasink device=2 name=uudpsink0 udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 rtpbin.send_rtcp_src_1 ! udpsink port=5007 host=%s sync=false async=false
If you need to take care of special characters that bash could be interpreting, change caps="..." to caps='...'.
Here is a clumsy python version:
import subprocess
args=[ 'gst-launch-0.10',
'-v',
'gstrtpbin',
'name=rtpbin',
'latency=100',
'udpsrc',
'caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)X-GST-OPUS-DRAFT-SPITTKA-00, caps=(string)audio/x-opus, multistream=(boolean)false, streamheader=(buffer)<4f707573486561640101000080bb0000000000,4f707573546167731e000000456e636f6465642077697468204753747265616d6572204f707573656e63010000001a0000004445534352495054494f4e3d617564696f74657374207761766501>, ssrc=(uint)3090172512, payload=(int)96, clock-base=(uint)4268257583, seqnum-base=(uint)10001"',
'port=5002',
'!',
'rtpbin.recv_rtp_sink_1',
'rtpbin.',
'!',
'rtpopusdepay',
'!',
'opusdec',
'!',
'audioconvert',
'!',
'audioresample',
'!',
'alsasink',
'device=2',
'name=uudpsink0',
'udpsrc',
'port=5003',
'!',
'rtpbin.recv_rtcp_sink_1',
'rtpbin.send_rtcp_src_1',
'!',
'udpsink',
'port=5007',
'host=%s',
'sync=false',
'async=false',
]
child = subprocess.Popen(args, stdout=subprocess.PIPE)
streamdata = child.communicate()[0]  # streamdata will contain the output of gst-launch-0.10
rc = child.returncode  # rc will contain the return code of gst-launch-0.10
print(streamdata)
print("\nprocess returned %d" % rc)
I think you are better off finding a good Python module for GStreamer than using subprocess or the like.
Hope this helps!
