How to fix 'Losing Stream Before End of Stream' in GStreamer-0.10 - rtsp

I streamed video with VLC over RTSP and displayed it with GStreamer-0.10. However, while VLC was streaming, I suddenly lost the stream within the first minute, before the end of the stream.
I used the following pipeline:
GST_DEBUG=2 gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test ! \
gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! videorate ! xvimagesink sync=false
I got the following output:
rtpjitterbuffer.c:428:calculate_skew: delta - skew: 0:00:01.103711536 too big, reset skew
rtpjitterbuffer.c:387:calculate_skew: backward timestamps at server, taking new base time
Got EOS from element "pipeline0".
Execution ended after 59982680309 ns.
Setting pipeline to PAUSED ...
gst_rtspsrc_send: got NOT IMPLEMENTED, disable method PAUSE
How can I fix this problem?

I have found a solution: use rtspt://... instead of rtsp://... to force the RTSP transport to TCP instead of UDP.
gst-launch-0.10 rtspsrc location=rtspt://127.0.0.1:8554/test ! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
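If you want to keep the plain rtsp:// URL, an equivalent approach (a hedged suggestion, not tested against this server) is to restrict rtspsrc to TCP via its protocols property:
gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test protocols=tcp ! \
gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false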

Related

Write, then read rtpopus to file with gstreamer?

Is it possible to write rtpopus to a file, then read it back with gstreamer? It seems simple but I'm getting nowhere with it and can't seem to find any information online. Here is my attempt:
gst-launch-1.0.exe audiotestsrc ! opusenc ! rtpopuspay ! filesink location=test.opus
Then, close and run:
gst-launch-1.0.exe filesrc location="test.opus" ! rtpopusdepay ! fakesink dump=true
gstreamer fails with:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
I don't think that can work as written. RTP is a packetization format meant for transport over UDP; writing the payloader output straight to a file loses the packet boundaries, so rtpopusdepay has nothing it can depayload on read-back. It works when streaming over UDP because there each packet arrives individually framed.
You'd be better off using a file container that supports Opus audio, such as Matroska via matroskamux:
gst-launch-1.0 -e audiotestsrc ! audioconvert ! opusenc ! matroskamux ! filesink location=test.mkv
# Let it play for about 5 s, then stop with Ctrl-C
# Replay:
gst-launch-1.0 filesrc location=test.mkv ! matroskademux ! opusdec ! audioconvert ! autoaudiosink
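If you really do need the RTP packetization on disk, a minimal sketch (assuming a GStreamer 1.x build that ships rtpstreampay/rtpstreamdepay, and noting that older builds may use encoding-name=X-GST-OPUS-DRAFT-SPITTKA-00 instead of OPUS) is to add RFC 4571 framing so the packet boundaries survive the round trip through the file:
gst-launch-1.0 -e audiotestsrc num-buffers=250 ! opusenc ! rtpopuspay ! rtpstreampay ! filesink location=test.rtp
gst-launch-1.0 filesrc location=test.rtp ! "application/x-rtp-stream,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" ! rtpstreamdepay ! rtpopusdepay ! fakesink dump=true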

Low latency audio streaming with gstreamer, Dropped samples when lowering the buffer-time on a pepper robot

I'm trying to send audio over an RTP stream using GStreamer with the lowest latency possible, from a Pepper (GStreamer 0.10) to my computer (GStreamer 0.10 or 1.0).
I can send audio with little latency (20 ms) from the computer to the Pepper, but it doesn't work as well in the other direction. When I try to lower the buffer-time below 200 ms, I get this type of error:
WARNING: Can't record audio fast enough
Dropped 318 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
I used the answers from here, and so far I have worked with the following pipelines:
Sender
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Receiver
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! autoaudiosink buffer-time=80000 sync=false
I don't really know how to tackle this issue, as the CPU usage is not abnormal.
To be frank, I am quite new to this, so I don't know which parameters to play with to get low latency. I hope someone can help me! (and that it is not a hardware problem ^^)
Thanks a lot!
I don't think gst-launch-0.10 is made to work in real-time (RT).
Please consider writing your own program (it can still use GStreamer) that performs the streaming from an RT thread. NAOqi OS ships with the RT patches and supports this.
But with a network involved in your pipeline, you may not be able to guarantee that everything is processed in time.
So maybe the simplest solution is to keep a queue before the audio processing on the sender:
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
queue ! audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Also consider that the receiver may cause a drop if it cannot process the audio in time, and that a queue might help too:
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! queue ! autoaudiosink buffer-time=80000 sync=false
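Another knob worth trying on the receiving side (a hedged sketch: the 50 ms value is only a starting point, and it assumes the gstrtpjitterbuffer element from the 0.10 RTP plugin is installed) is an explicit RTP jitter buffer with a small latency, so network jitter is absorbed there rather than by a large buffer-time on the sink:
gst-launch-0.10 -v udpsrc port=4000 caps='application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
gstrtpjitterbuffer latency=50 ! rtpL16depay ! queue ! autoaudiosink sync=false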

GStreamer stream audio and video via UDP to be able to playback on VLC

I am trying to stream audio and video via GStreamer over UDP, but playback in VLC only gives me video without audio. I am currently using a sample of Big Buck Bunny and have confirmed that it does have audio. I am planning to use Snowmix to feed media to the GStreamer output in the future.
I currently stream from the file source over UDP for playback in VLC with:
gst-launch-1.0 -v uridecodebin uri=file:///home/me/files/Snowmix-0.5.1/test/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! mpegtsmux ! queue ! udpsink host=230.0.0.1 port=4012 sync=true
which lets me open a network stream in VLC on my Windows machine and receive the packets, but it plays only video.
What am I missing from my command?
As RSATom stated previously, the audio is missing from the pipeline.
The correct pipeline for both video and audio is the following (tested with the same input file):
gst-launch-1.0 -v uridecodebin name=uridec uri=file:///home/usuario/Desktop/map/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true uridec. ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
Remember that in this case you are re-encoding all the content from the source video file, which means high CPU consumption. Another option would be to demux the content from the input file and mux it again without re-encoding (using h264parse and aacparse).
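A hedged sketch of that no-re-encode variant (untested here; it assumes the MP4 demuxes with qtdemux and that mpegtsmux accepts the parsed H.264 and AAC streams directly):
gst-launch-1.0 -v filesrc location=big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! qtdemux name=demux \
demux. ! queue ! h264parse ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true \
demux. ! queue ! aacparse ! mux.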

gstreamer streaming TS stream (with sound) to RTMP server stops on prerolling

I want to be able to stream a couple of chunks that use TS as the container. If I stream only audio or only video, it's fine, but when I do both simultaneously the pipeline doesn't start: it sits at prerolling for a long time.
I've tried hints from here. They make my pipeline simpler, but it still hangs at prerolling on start.
I believe there is a problem with tsdemux, but I don't really know why it happens. The logs are not very informative, so I ended up writing this post.
Here is my pipeline. It's not perfect, I know, but each branch works separately, so let's stick with it.
gst-launch-1.0 multifilesrc location="chunck%d.ts" index=0 ! tsdemux name=demux \
demux. ! queue ! h264parse ! mux. \
demux. ! queue ! faad ! audioconvert ! voaacenc ! \
flvmux name=mux ! filesink location=stream.ts
In this one I've used filesink for testing purposes. I know about rtmpsink and have tried it before.
Thx a lot.
UPDATE
I've managed to combine video and audio, but there is a problem with skipped frames when using my pipeline.
gst-launch-1.0 --gst-debug-level=0 multifilesrc location="chunk%d.ts" index=0 ! tsdemux name=demux ! \
queue ! h264parse ! flvmux name=mux ! queue ! filesink location=stream.flv \
demux. ! queue ! aacparse ! mux.audio
The next task is to get rid of that problem. Maybe it's a parser issue. We'll see.
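For reference, roughly the same updated pipeline pointed at an RTMP server could look like this (a hedged sketch, not tested against these chunks; the server URL and stream key are placeholders, and streamable=true on flvmux is assumed to be needed for live output):
gst-launch-1.0 multifilesrc location="chunk%d.ts" index=0 ! tsdemux name=demux ! \
queue ! h264parse ! flvmux name=mux streamable=true ! queue ! rtmpsink location='rtmp://your-server/live/your-stream-key' \
demux. ! queue ! aacparse ! mux.audio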

gstreamer audio error on linux

I am using GStreamer-0.10 on Ubuntu to stream a webcam video to an RTMP server. I am getting video output, but there is a problem with the audio. The command below is used for streaming:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' demux. alsasrc device="hw:0,0" ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! pulseaudiosink
Running the above command, I got this error:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 13920 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
so the audio is not audible.
Please help me out with this problem.
Thanks in advance
Ameeth
I don't understand your pipeline. What is "demux." in the middle?
The problem you are facing is that you have not separated your elements with queues. Keep a queue before your sinks and after your sources so that the parts in between each get their own thread to run in. That should get rid of the issue.
Since I don't have PulseAudio or an RTMP receiver on my system, I have tested the following and it works:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900000 tune=zerolatency ! queue ! flvmux name=mux ! fakesink alsasrc ! queue ! audioresample ! audioconvert ! queue ! autoaudiosink
You can adapt it accordingly and use it. The only thing I had to do to make it work and remove the error you are facing was to add the queues.
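Adapted back to the original targets, a hedged sketch could look like the following (untested; it assumes voaacenc is available for AAC, resamples to 44100 Hz because flvmux only lists certain AAC sample rates as noted in the next answer, keeps the question's elided RTMP URL as a placeholder, and actually links the audio branch into flvmux, which the original command never did):
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! queue ! \
x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! queue ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' \
alsasrc device="hw:0,0" ! queue ! audioresample ! audioconvert ! audio/x-raw-int,rate=44100,channels=2,width=16,depth=16 ! voaacenc ! queue ! mux.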
For me (Logitech c920 on Raspberry Pi3 w/ GStreamer 1.4.4) I was able to get rid of the "Dropped samples" warning by using audioresample to set the sampling rate of the alsasrc to something that flvmux liked. From gst-inspect-1.0 flvmux, it looks like flvmux only supports 5512, 11025, 22050, 44100 sample rates for x-raw and 5512, 8000, 11025, 16000, 22050, 44100 for mp4. Here's my working pipeline
gst-launch-1.0 -v -e \
uvch264src initial-bitrate=800000 average-bitrate=800000 iframe-period=2000 device=/dev/video0 name=src auto-start=true \
src.vidsrc ! video/x-h264,width=864,height=480,framerate=30/1 ! h264parse ! mux. \
alsasrc device=hw:1 ! 'audio/x-raw, rate=32000, format=S16LE, channels=2' ! queue ! audioresample ! "audio/x-raw,rate=44100" ! queue ! voaacenc bitrate=96000 ! mux. \
flvmux name=mux ! rtmpsink location="rtmp://live-sea.twitch.tv/app/MYSTREAMKEY"
I was surprised that flvmux didn't complain about getting an audio source that was at an unsupported sampling rate. Not sure if that's expected behavior.
