GStreamer RTSP pipeline freezes in cv2.VideoCapture, maybe related to audio channel - python-3.x

I'm connecting to a DVR RTSP feed with Python + GStreamer for image processing in OpenCV (cv2).
When I pass a GStreamer pipeline to cv2.VideoCapture, the connection freezes (possibly waiting for a signal). The RTSP feed carries H.264 video (which I want to process) plus PCMA audio (which I want to discard).
Here is the function; it freezes on the cv2.VideoCapture line:
import logging
import time

import cv2

def simple_test2():
    # cc is a camera-configuration dict defined elsewhere; only cc['name'] is used here
    rtsp_path = 'rtsp://admin:XXXX#10.0.0.50:554/cam/realmonitor?channel=1&subtype=0'
    rtsp_path2 = ('rtspsrc location="{}" ! decodebin name=dcd '
                  'dcd. ! videoconvert ! appsink max-buffers=1 drop=true '
                  'dcd. ! audioconvert ! fakesink').format(rtsp_path)
    logging.info('connecting to {} : {}'.format(cc['name'], rtsp_path2))
    vcap = cv2.VideoCapture(rtsp_path2)
    for i in range(2):
        logging.info('read frame #%s' % i)
        succ, frame = vcap.read()
        if succ:
            logging.info('%s got frame %d : %s open? %s'
                         % (cc['name'], i, str(frame.shape[0:2]), str(vcap.isOpened())))
        else:
            logging.info('%s fail frame %d open? %s'
                         % (cc['name'], i, str(vcap.isOpened())))
        time.sleep(1)
    vcap.release()
Further investigation:
1) Oddly enough, running the same pipeline with gst-launch works fine!
gst-launch-1.0 -v rtspsrc location="rtsp://admin:XXXX#10.0.0.50:554/cam/realmonitor?channel=2&subtype=0" ! decodebin name=dcd dcd. ! videoconvert ! appsink max-buffers=1 drop=true dcd. ! audioconvert ! fakesink
It returns a steady stream of log messages.
2) If I use a simpler pipeline in the code, e.g.:
rtspsrc location="{}" ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true
I get a connection error caused by the audio pad failing to link, which is why I tried a fakesink to discard the audio. Other approaches would be appreciated.
3) I also tried inserting ! queue ! in various places and combinations; nothing helps...
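One thing worth checking: OpenCV's GStreamer backend expects the pipeline to end in a single appsink that delivers raw BGR video, and without an explicit caps filter appsink can negotiate a format cv2 cannot consume and block. A minimal sketch under that assumption, decoding only the H.264 branch (URL and credentials are placeholders, not the asker's real values):

```python
# Sketch: build a cv2-friendly GStreamer pipeline string that decodes only
# the video branch and hands BGR frames to a single appsink.

def build_pipeline(rtsp_url: str) -> str:
    return (
        'rtspsrc location="{}" latency=0 '
        '! rtph264depay ! h264parse ! avdec_h264 '   # link only the H.264 pad; the PCMA pad is never connected
        '! videoconvert ! video/x-raw,format=BGR '   # the raw format cv2 expects from appsink
        '! appsink max-buffers=1 drop=true sync=false'
    ).format(rtsp_url)

pipeline = build_pipeline(
    "rtsp://user:pass@10.0.0.50:554/cam/realmonitor?channel=1&subtype=0")
# vcap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)  # needs OpenCV built with GStreamer
```

Because only the H.264 depayloader is linked to rtspsrc, the audio pad is simply never connected, so no fakesink is needed; passing cv2.CAP_GSTREAMER explicitly also stops OpenCV from trying the FFmpeg backend first.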

Related

GStreamer problem with separating audio and video

I'm new to GStreamer, basically a newbie.
I want to receive an RTMP video, process it, re-encode it, merge it with the sound from the received video, and then send it out as a new RTMP video. Somehow I cannot get the sound working:
Receiver:
"rtmpsrc location=rtmp://xx.yy.10.40:1935/orig/1 do-timestamp=true ! queue ! flvdemux name=demux demux.video ! h264parse ! video/x-h264 ! nvh264dec ! videoconvert ! appsink"
"demux.audio ! aacparse ! queue ! mp4mux streamable=true ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=100000000"
Please note, I separated the two strings simply for better readability; together they form the receiver pipeline. I get no error or warning up to GST_DEBUG=3. I used mp4mux because some claim that I need a container.
Sender:
"appsrc ! videoconvert ! nvh264enc ! h264parse ! queue ! mux.video"
" shmsrc socket-path=/tmp/foo ! qtdemux ! aacparse ! queue ! mux.audio"
" flvmux name=mux ! rtmpsink location=rtmp://xx.yy.10.50:1935/result/1"
Please note I separated the strings for better readability. Again I get no error, but reading the sound buffer from shared memory (shmsrc) simply stalls. If I remove this line, everything seems to work perfectly well, stable even for hours.
Any ideas, anyone? All the working solutions seem to use raw audio and caps. But actually I'm not interested in the audio at all; I just need it copied over to the sender...
An update from our side:
We tried many things, but the answer is that this problem is inherent to the shmsink and shmsrc elements of GStreamer.
When using shm communication between pipelines you lose all metadata; the audio stream coming out of shmsrc is effectively no longer an audio stream. You can test this with gst-launch:
gst-launch-1.0 --verbose rtmpsrc location=rtmp://xx.yy.10.40:1935/ai/1 do-timestamp=true timeout=10 ! flvdemux name=demux demux.video ! h264parse ! video/x-h264 ! fakesink demux.audio ! queue ! faad ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false shm-size=100000
This will produce a lot of info, in particular the audio and video caps:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, rate=(int)48000, channels=(int)2, codec_data=(buffer)1190 ...
/GstPipeline:pipeline0/GstShmSink:shmsink0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)avc, codec_data=(buffer)014d4029ffe10015674d402995900780227e5c04400000fa40002ee02101000468eb8f20, pixel-aspect-ratio=(fraction)1/1, width=(int)1920, height=(int)1080, framerate=(fraction)24000/1001, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, alignment=(string)au, profile=(string)main, level=(string)4.1
When you do the same on the shmsrc side (please note we only transfer the audio via shm):
gst-launch-1.0 --verbose shmsrc socket-path=/tmp/foo ! queue ! fakesink
you will get nothing; to GStreamer, "nothing" looks like this:
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
If you want to use shmsrc, you need to set the metadata via caps manually:
gst-launch-1.0 --verbose shmsrc socket-path=/tmp/foo ! 'audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2' ! queue ! fakesink
This will give you:
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, rate=(int)48000, channels=(int)2
Which is correct, but for our purposes totally useless. So we decided to move to libav instead of GStreamer.
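If you do stay with shm, one way to keep the two sides consistent is to hold the caps string in a single shared constant and restate it on the receiving side, since shmsink/shmsrc transfer only raw bytes. A minimal sketch; the socket path and caps mirror the example above and are assumptions about the deployment:

```python
# Sketch: agree on the raw-audio caps out of band, because shm carries no metadata.
# Caps are written without spaces so the string survives naive tokenization.
AUDIO_CAPS = "audio/x-raw,format=S16LE,layout=interleaved,rate=48000,channels=2"

def shmsrc_pipeline(socket_path: str, caps: str) -> str:
    # A capsfilter right after shmsrc restores the stream's identity
    return 'shmsrc socket-path={} ! {} ! queue ! fakesink'.format(socket_path, caps)

print(shmsrc_pipeline("/tmp/foo", AUDIO_CAPS))
```

The same AUDIO_CAPS constant would be used when building the sender pipeline, so a rate or format change only has to be made in one place.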

How to fix 'Losing Stream Before End of Stream' in GStreamer-0.10

I streamed video over RTSP with the VLC player and displayed it with GStreamer-0.10. However, while VLC was streaming, I suddenly lost the stream within the first minute, before end of stream.
I used the following pipeline:
GST_DEBUG=2 gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test !
gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! videorate ! xvimagesink
sync=false
I got the following output:
rtpjitterbuffer.c:428:calculate_skew: delta - skew: 0:00:01.103711536 too big, reset skew
rtpjitterbuffer.c:387:calculate_skew: backward timestamps at server, taking new base time
Got EOS from element "pipeline0".
Execution ended after 59982680309 ns.
Setting pipeline to PAUSED ...
gst_rtspsrc_send: got NOT IMPLEMENTED, disable method PAUSE
How can I fix this problem?
I found a solution: use rtspt://... instead of rtsp://... to force TCP instead of UDP.
gst-launch-0.10 rtspsrc location=rtspt://127.0.0.1:8554/test ! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
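An alternative to changing the URL scheme is rtspsrc's protocols property, which also forces TCP transport. A sketch of both variants, written with GStreamer 1.x element names (avdec_h264 in place of the 0.10-era ffdec_h264); host and path are placeholders:

```python
# Sketch: two equivalent ways to force RTSP over TCP.

def tcp_pipeline_via_scheme(host: str) -> str:
    # The rtspt:// scheme tells rtspsrc to use TCP interleaving
    return ('rtspsrc location=rtspt://{}/test '
            '! rtph264depay ! avdec_h264 ! autovideosink').format(host)

def tcp_pipeline_via_property(host: str) -> str:
    # protocols=tcp achieves the same without changing the URL scheme
    return ('rtspsrc location=rtsp://{}/test protocols=tcp '
            '! rtph264depay ! avdec_h264 ! autovideosink').format(host)
```

The property form is handy when the URL comes from configuration you would rather not rewrite.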

GStreamer stream audio and video via UDP to be able to playback on VLC

I am trying to stream audio and video via GStreamer over UDP, but playback in VLC only gives video without audio. I am currently using a sample of Big Buck Bunny and have confirmed that it does have audio. I am planning to use Snowmix to feed media to the GStreamer output in the future.
I currently stream from a file source via UDP for playback in VLC with:
gst-launch-1.0 -v uridecodebin uri=file:///home/me/files/Snowmix-0.5.1/test/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! mpegtsmux ! queue ! udpsink host=230.0.0.1 port=4012 sync=true
which lets me open a network stream in VLC on my Windows machine; it receives packets but only plays video.
What am I missing from my command?
As RSATom stated previously, the audio is missing from the pipeline.
The correct pipeline for video and audio is the following (tested with the same input file):
gst-launch-1.0 -v uridecodebin name=uridec uri=file:///home/usuario/Desktop/map/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true uridec. ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
Remember that in this case you're re-encoding all the content from the source video file, which means high CPU consumption. Another option would be to demux the content from the input file and mux it again without encoding (using h264parse and aacparse).
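That re-encoding-free variant could look roughly like the sketch below, assuming qtdemux exposes its usual video_0/audio_0 pads for this file; the path, host and port are placeholders:

```python
# Sketch: demux the MP4, parse the elementary streams, and remux into
# MPEG-TS without any decoding or encoding.

def remux_pipeline(mp4_path: str, host: str, port: int) -> str:
    return (
        'filesrc location={path} ! qtdemux name=demux '
        'demux.video_0 ! queue ! h264parse ! mux. '   # H.264 passed through untouched
        'demux.audio_0 ! queue ! aacparse ! mux. '    # AAC passed through untouched
        'mpegtsmux name=mux ! udpsink host={host} port={port} sync=true'
    ).format(path=mp4_path, host=host, port=port)
```

Since nothing is decoded, CPU use is a fraction of the x264enc pipeline's, at the cost of being tied to the file's original codecs.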

GStreamer audio error on Linux

I am using GStreamer-0.10 on Ubuntu to stream a webcam video to an RTMP server. I get video output, but there is a problem with the audio. Below is the command used for streaming:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' demux. alsasrc device="hw:0,0" ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! pulseaudiosink
Running the above command, I got this error:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 13920 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
So the audio is not audible.
Help me out to solve this problem. Thanks in advance,
Ameeth
I don't understand your pipeline. What is "demux." in the middle?
The problem you are facing is that you have not separated your elements with queues. Put a queue before your sinks and after your sources to give each branch its own thread to run in. That should get rid of the issue.
Since I don't have PulseAudio or an RTMP receiver on my system, I tested the following and it works:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900000 tune=zerolatency ! queue ! flvmux name=mux ! fakesink alsasrc ! queue ! audioresample ! audioconvert ! queue ! autoaudiosink
You can adapt it accordingly and use it. The only thing I had to do to make it work and remove the error you are facing was to add the queues.
For me (Logitech C920 on a Raspberry Pi 3 with GStreamer 1.4.4), I was able to get rid of the "Dropped samples" warning by using audioresample to set the sampling rate of the alsasrc to something that flvmux liked. From gst-inspect-1.0 flvmux, it looks like flvmux only supports 5512, 11025, 22050 and 44100 Hz for raw audio, and 5512, 8000, 11025, 16000, 22050 and 44100 Hz for MPEG-4 audio. Here's my working pipeline:
gst-launch-1.0 -v -e \
uvch264src initial-bitrate=800000 average-bitrate=800000 iframe-period=2000 device=/dev/video0 name=src auto-start=true \
src.vidsrc ! video/x-h264,width=864,height=480,framerate=30/1 ! h264parse ! mux. \
alsasrc device=hw:1 ! 'audio/x-raw, rate=32000, format=S16LE, channels=2' ! queue ! audioresample ! "audio/x-raw,rate=44100" ! queue ! voaacenc bitrate=96000 ! mux. \
flvmux name=mux ! rtmpsink location="rtmp://live-sea.twitch.tv/app/MYSTREAMKEY"
I was surprised that flvmux didn't complain about getting an audio source that was at an unsupported sampling rate. Not sure if that's expected behavior.
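Given the rates listed above, a small helper can pick the closest flvmux-supported rate for the audioresample caps filter. A sketch, treating the raw-audio rate list from gst-inspect-1.0 as given:

```python
# Sketch: choose the nearest flvmux-supported sample rate for raw audio.
FLV_RAW_RATES = [5512, 11025, 22050, 44100]

def nearest_flv_rate(rate: int) -> int:
    # pick the supported rate with the smallest distance to the capture rate
    return min(FLV_RAW_RATES, key=lambda r: abs(r - rate))

print(nearest_flv_rate(48000))  # -> 44100
```

The result then slots into the caps filter after audioresample, e.g. "audio/x-raw,rate=44100" for a 48 kHz capture device.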

Error playing multichannel audio at different instances?

I am trying to play two channels, with audio in one channel and silence in the other:
$ gst-launch \
interleave name=i ! alsasink
filesrc location=/home/test1.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
After 10 seconds I want to play silence in the first channel and some audio in the second:
$ gst-launch \
interleave name=i ! alsasink \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
filesrc location=/home/test2.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
This works on the PC side, running the pipelines in two different terminals or with one of them in the background. But when I play one pipeline on an AM335x board and try to start the other, I get:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstAlsaSink:alsasink0: Could not open audio device for playback.
Device is being used by another application.
Additional debug info:
gstalsasink.c(697): gst_alsasink_open (): /GstPipeline:pipeline0/GstAlsaSink:alsasink0:
Device 'default' is busy
Setting pipeline to NULL ...
Freeing pipeline ...
When we check gstalsasink.c, it calls snd_pcm_open in non-blocking mode:
CHECK (snd_pcm_open (&alsa->handle, alsa->device, SND_PCM_STREAM_PLAYBACK,
SND_PCM_NONBLOCK), open_error);
Then why is it blocking other applications from using the audio device?
Can anyone suggest what to do on the target side, since on the PC side alsasink works perfectly?
Could there be a small delay in closing the ALSA device on your embedded hardware? Check with fuser which process still has it open. Also consider using gnonlin to develop sequential playback of streams; this will reuse the existing audio sink.
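If sequential playback from a single process is acceptable, a sketch along these lines sidesteps the busy device by fully stopping the first pipeline before starting the second. The 10-second switch point and the gst-launch-0.10 invocation via subprocess are assumptions, not tested on the board:

```python
# Sketch: run the two pipelines one after the other from one script so that
# alsasink has released the device before it is reopened.
import subprocess
import time

PIPELINES = [
    # audio in channel 1, silence in channel 2
    "interleave name=i ! alsasink "
    "filesrc location=/home/test1.mp3 ! decodebin ! audioconvert "
    "! audio/x-raw-int,channels=1 ! i. "
    "audiotestsrc wave=silence ! audioconvert "
    "! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.",
    # silence in channel 1, audio in channel 2
    "interleave name=i ! alsasink "
    "audiotestsrc wave=silence ! audioconvert "
    "! audio/x-raw-int,channels=1 ! i. "
    "filesrc location=/home/test2.mp3 ! decodebin ! audioconvert "
    "! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.",
]

def play_sequentially(pipelines, seconds=10):
    for p in pipelines:
        proc = subprocess.Popen(["gst-launch-0.10"] + p.split())
        time.sleep(seconds)
        proc.terminate()  # SIGTERM lets gst-launch tear the pipeline down
        proc.wait()       # block until alsasink has closed the device

# play_sequentially(PIPELINES)  # uncomment on the target board
```

Waiting on the child process before launching the next pipeline is what avoids the "Device 'default' is busy" race on hardware without dmix.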
