Hi, I would like to ask two questions about gst_rtsp_server.
I have a pipeline like the following:
gchar *pipeline =
g_strdup_printf ("( udpsrc port=4444 name=src0 "
"! queue ! rtpmp2tdepay ! video/mpegts, systemstream=true, packetsize=188 ! aiurdemux name=demux "
" demux.video_0 ! queue ! h264parse name=v ! rtph264pay config-interval=1 name=pay0 pt=96 "
" demux.audio_0 ! queue ! aacparse name=a ! rtpmp4gpay name=pay1 pt=97 )");
and I have set the service port to 8552:
g_object_set (server, "service", "8552", NULL);
However, when I checked with Wireshark, I found this for the UDP and TCP captures.
So is it true that port 8552 is used for the RTSP control exchange (DESCRIBE, PLAY, TEARDOWN, etc.) while the underlying media (audio/video) is exchanged on ports 40012 and 40013?
I have noticed that GStreamer RTSP will sometimes use UDP and sometimes TCP. Is this so? Can I fix it to a particular transport protocol?
Thanks
For 2), force the transport on the media factory:
gst_rtsp_media_factory_set_protocols (factory, GST_RTSP_LOWER_TRANS_UDP);
Hope it helps anyone.
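For 1): yes, that split is standard RTSP behaviour. The service port (8552) carries only the control dialogue, while the RTP/RTCP media ports are negotiated per session in the SETUP request's Transport header. A minimal pure-Python sketch (`media_ports` is a hypothetical helper; the example header values mirror the ports seen in the capture above):

```python
# Sketch: extract the negotiated RTP/RTCP ports from an RTSP Transport
# header (RFC 2326 style). The server_port range carries the media
# (e.g. 40012 for RTP, 40013 for RTCP); the RTSP service port itself
# is only used for DESCRIBE/SETUP/PLAY/TEARDOWN.

def media_ports(transport_header):
    """Return (rtp_port, rtcp_port) from a Transport header, or None."""
    for param in transport_header.split(';'):
        if param.startswith('server_port='):
            lo, _, hi = param[len('server_port='):].partition('-')
            return int(lo), int(hi)
    return None

header = 'RTP/AVP;unicast;client_port=40012-40013;server_port=40012-40013'
print(media_ports(header))  # -> (40012, 40013)
```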
I have streamed video via VLC over RTSP and displayed it via gstreamer-0.10. However, while VLC was streaming over RTSP, I suddenly lost the stream within the first minute, before the end of the stream.
I have used following pipeline:
GST_DEBUG=2 gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test !
gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! videorate ! xvimagesink
sync=false
I have got following output:
rtpjitterbuffer.c:428:calculate_skew: delta - skew: 0:00:01.103711536 too big, reset skew
rtpjitterbuffer.c:387:calculate_skew: backward timestamps at server, taking new base time
Got EOS from element "pipeline0".
Execution ended after 59982680309 ns.
Setting pipeline to PAUSED ...
gst_rtspsrc_send: got NOT IMPLEMENTED, disable method PAUSE
How to fix this problem ?
I have found a solution: I used rtspt://... instead of rtsp://... to enforce TCP instead of UDP.
gst-launch-0.10 rtspsrc location=rtspt://127.0.0.1:8554/test ! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
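The same trick can be applied programmatically when the URL comes from configuration. A small pure-Python sketch (`force_tcp` is a hypothetical helper, just string handling):

```python
# Sketch: force TCP-interleaved RTSP by rewriting rtsp:// to rtspt://.
# rtspsrc treats the rtspt scheme as "RTSP over TCP", which avoids UDP
# packet loss that can trigger jitter-buffer resets and an early EOS.

def force_tcp(url):
    if url.startswith('rtsp://'):
        return 'rtspt://' + url[len('rtsp://'):]
    return url  # already rtspt:// or some other scheme

print(force_tcp('rtsp://127.0.0.1:8554/test'))   # -> rtspt://127.0.0.1:8554/test
print(force_tcp('rtspt://127.0.0.1:8554/test'))  # unchanged
```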
I'm connecting with Python + GStreamer to a DVR RTSP feed for image processing using opencv2.
When using a GStreamer pipeline with cv2.VideoCapture, the connection freezes (possibly waiting for a signal). The RTSP feed carries H.264 video (which I want to process) plus PCMA audio (which I want to discard).
Here is the function - it freezes on the cv2.VideoCapture line:
import logging
import time

import cv2

def simple_test2():
    rtsp_path = 'rtsp://admin:XXXX#10.0.0.50:554/cam/realmonitor?channel=1&subtype=0'
    rtsp_path2 = ('rtspsrc location="{}" ! decodebin name=dcd '
                  'dcd. ! videoconvert ! appsink max-buffers=1 drop=true '
                  'dcd. ! audioconvert ! fakesink ').format(rtsp_path)
    logging.info('connecting to {} : {}'.format(cc['name'], rtsp_path2))  # cc is defined elsewhere
    vcap = cv2.VideoCapture(rtsp_path2)  # <-- freezes here
    for i in range(2):
        logging.info('read frame #%s' % i)
        succ, frame = vcap.read()
        if succ:
            logging.info('%s got frame %d : %s open? %s'
                         % (cc['name'], i, str(frame.shape[0:2]), str(vcap.isOpened())))
        else:
            logging.info('%s fail frame %d open? %s' % (cc['name'], i, str(vcap.isOpened())))
        time.sleep(1)
    vcap.release()
Further investigation:
1) Oddly enough, running the same pipeline in gst-launch works OK!
gst-launch-1.0 -v rtspsrc location="rtsp://admin:XXXX#10.0.0.50:554/cam/realmonitor?channel=2&subtype=0" ! decodebin name=dcd dcd. ! videoconvert ! appsink max-buffers=1 drop=true dcd. ! audioconvert ! fakesink
returns a steady stream of log messages.
2) If I use a simpler pipeline in the code, e.g.:
rtspsrc location="{}" ! decodebin ! videoconvert ! appsink max-buffers=1 drop=true
I end up getting a connection error related to the fact that the audio pad cannot be linked, which is why I tried using a fakesink to discard the audio. Other approaches would be appreciated.
3) I also tried adding a ! queue ! element in various places and combinations. Nothing helps...
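One frequent cause of cv2.VideoCapture hanging on a GStreamer pipeline is the caps delivered to appsink: OpenCV's GStreamer backend generally expects raw BGR video, so pinning video/x-raw,format=BGR before the appsink (plus queues on the demuxed branches) is worth trying. A sketch under that assumption (`cv_pipeline` is a hypothetical helper, just building the pipeline string):

```python
# Sketch: build a VideoCapture pipeline string that keeps video and
# discards audio. Assumption (hedged): OpenCV's GStreamer backend wants
# the appsink branch to deliver raw BGR frames, so the caps are pinned.

def cv_pipeline(rtsp_url):
    return ('rtspsrc location="{}" ! decodebin name=dcd '
            'dcd. ! queue ! videoconvert ! video/x-raw,format=BGR '
            '! appsink max-buffers=1 drop=true '
            'dcd. ! queue ! fakesink').format(rtsp_url)

p = cv_pipeline('rtsp://example.invalid/stream')
print('format=BGR' in p and 'fakesink' in p)  # -> True
```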
I'm trying to send audio through an RTP stream using GStreamer with the lowest latency possible, from a Pepper (GStreamer 0.10) to my computer (GStreamer 0.10 or 1.0).
I can send audio with little latency (20 ms) from the computer to the Pepper; however, it doesn't work as well from the Pepper to the computer. When I try to set the buffer-time under 200 ms, I get this type of error:
WARNING: Can't record audio fast enough
Dropped 318 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
I used the answers given here so far and worked with the following pipelines:
Sender
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Receiver
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! autoaudiosink buffer-time=80000 sync=false
I don't really know how to tackle this issue, as the CPU usage is not abnormal.
To be frank, I am quite new to this, so I don't know which parameters to play with to get low latency. I hope someone can help me! (And that it is not a hardware problem too ^^)
Thanks a lot!
I don't think gst-launch-0.10 is made to work in real time (RT).
Please consider writing your own program (even using GStreamer) to perform the streaming from an RT thread. NAOqi OS has the RT patches included and supports this.
But with a network involved in your pipeline, you may not be able to guarantee it is going to be processed in time.
So maybe the simplest solution is to keep a queue before the audio processing of the sender:
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
queue ! audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Also consider that the receiver may cause a drop if it cannot process the audio in time, and that a queue might help too:
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! queue ! autoaudiosink buffer-time=80000 sync=false
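The numbers in that warning can be sanity-checked with simple arithmetic: at the 16 kHz rate used above, 318 dropped samples are roughly 20 ms of audio, i.e. one whole 20 ms capture buffer lost whenever downstream falls behind. A quick sketch:

```python
# Sketch: relate dropped samples to lost audio time at a given rate.
RATE = 16000            # samples per second (rate=16000 in the caps above)
BUFFER_TIME_US = 20000  # alsasrc buffer-time is in microseconds

dropped_ms = 318 / RATE * 1000   # time lost per "Dropped 318 samples"
buffer_ms = BUFFER_TIME_US / 1000

print(round(dropped_ms, 1), buffer_ms)  # -> 19.9 20.0
```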
I am developing an IP-streaming based media player. I am using the following pipelines:
Receiver:
gst-launch-1.0 -vvv udpsrc port=5004 ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink sync=false
Sender:
C:\gstreamer\1.0\x86_64\bin\gst-launch-1.0.exe -v filesrc location=C:\\gstreamer\\1.0\\x86_64\\bin\\hash.h264 ! h264parse ! rtph264pay ! udpsink host=153.77.205.139 port=5004 sync=true
This was the proof of concept. Now we want an application that performs the same operation, but with a small tweak: when there is no streaming (no data arriving from the sender), switch to an offline media player that plays a set of local videos, and when data appears on the UDP port, switch back to streaming.
Following are my queries:
1) Is there any way to find out that streaming has completed after the video has been played over IP?
2) Is there any way to find out that no streaming is happening?
Please help. I am ready to provide more details if you need them.
For udpsrc there is a timeout property, which posts a message on the bus if no data is available (you can try setting it to 1 second; note the property is in nanoseconds). For streaming completed, you should get EOS on the bus. Try this pipeline: gst-launch-1.0 -vvvm udpsrc port=5004 timeout=100000000 ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink sync=false
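Because the timeout is in nanoseconds, it is easy to be off by an order of magnitude; a tiny conversion sketch (`seconds_to_ns` is a hypothetical helper):

```python
# Sketch: udpsrc's timeout property is expressed in nanoseconds.
def seconds_to_ns(seconds):
    return int(seconds * 1_000_000_000)

print(seconds_to_ns(1))    # -> 1000000000 (1 second)
print(seconds_to_ns(0.1))  # -> 100000000 (0.1 s, the value in the example pipeline)
```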
I am using GStreamer to record speech and transmit it in real time (so RTP and UDP). I have the following code:
Receiver:
gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false
Sender:
gst-launch-0.10 alsasrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=localhost port=5000
This works perfectly, but it transmits mono speech. I also know that my mic input is mono, so it means I have to use the line-in port, connected to a double microphone (left and right: a double-to-single jack on one end and two microphones on the other).
Now my problem is that I can't seem to find a way to change the input source of alsasrc to line-in. Is there any way to change this?
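As far as I know, alsasrc selects which capture device to open via its device property (for example hw:0,0), while the choice of physical input (mic vs. line-in) feeding that device is an ALSA mixer setting, normally switched with alsamixer or amixer rather than from GStreamer. A pure-Python sketch (`sender_pipeline` is a hypothetical helper) that parameterises the sender with an explicit device and channel count:

```python
# Sketch: parameterise the sender pipeline with an explicit ALSA device.
# Which jack (mic vs. line-in) feeds that device is selected in the ALSA
# mixer (alsamixer / amixer), not via a GStreamer property.

def sender_pipeline(device='hw:0,0', channels=2, rate=44100):
    return ('alsasrc device={} ! audioconvert '
            '! audio/x-raw-int,channels={},depth=16,width=16,rate={} '
            '! rtpL16pay ! udpsink host=localhost port=5000'
            .format(device, channels, rate))

print(sender_pipeline(channels=2))  # stereo capture from hw:0,0
```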