I am working with an Embedded Linux machine that receives an RTSP stream from another source. When I configure FFmpeg to restream it, the CPU usage gets very high, probably because of the limited embedded hardware.
Is there any possible way to simply restream the incoming stream, without processing it at all, using any kind of library?
You can construct a command for GStreamer. My guess (from here):
gst-launch -v rtspsrc location="rtsp://url" ! rtph264depay ! \
rtph264pay ! udpsink port=15000 sync=false
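On a system with GStreamer 1.x, the gst-launch-1.0 equivalent would be something like this (an untested sketch; the URL and port are placeholders as above):
gst-launch-1.0 -v rtspsrc location="rtsp://url" ! rtph264depay ! \
rtph264pay ! udpsink port=15000 sync=false
Since this only de-payloads and re-payloads the H264 packets without decoding them, the CPU cost should stay low.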
BLUF: I'd like to fan out an RTSP video stream using gstreamer so multiple processes can use the gstreamer process as a source, and I'm having problems doing that with tcpserversink.
I have an IOT camera that serves the video over RTSP, so I can successfully capture video with e.g.
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
I'd like to be able to capture several videos simultaneously from the stream, with arbitrary start and stop times - for example, I might have a video that runs from 0-120, another from 40-80, another from 60-100. For reasons that are not clear, when I request too many simultaneous streams, the camera starts killing existing streams. My theory is that the camera's hardware can't handle multiple connections and is running into resource starvation issues. To get around this, I'd like my recording server to have a single process that is re-hosting the RTSP stream from the camera, and my asynchronous recorder processes can attach to that.
It would seem that the following would work for the server subprocess:
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
tcpserversink port=29000
and the following for the asynchronous recorder:
gst-launch-1.0 -e tcpclientsrc port=29000 \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
But it doesn't. The specific error I'm seeing in my client process is
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data stream error.
The documentation for tcpserversink seems to indicate that you can just attach any pipeline end there and you're fine. It seems this isn't the case. What am I missing?
Try adding a ! after data: "data ! tcpserversink port=29000".
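In full, the server pipeline would then read (a sketch of that fix, with the same placeholders as the question):
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! tcpserversink port=29000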
I am running a GStreamer pipeline on a Jetson Xavier NX and streaming a 4K live stream over UDP to a server. I run the pipeline directly from the CLI in a shell script. When the connection breaks and the stream cuts out, the pipeline reports 'network is unreachable'. The network soon resets itself, and I want the pipeline to restart when it does. How can I find out whether the pipeline has stopped and restart it? The pipeline stops, but the process keeps running and does not restart on its own. I want to restart the process if the pipeline breaks.
You may try the following for the sender. Here videotestsrc runs at low resolution, the hardware rescales it into 4K in NVMM memory for H264 encoding, and the result is streamed as RTP/UDP multicast:
gst-launch-1.0 -ev videotestsrc ! video/x-raw,width=320,height=240,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12,width=3840,height=2160' ! nvv4l2h264enc insert-sps-pps=1 ! h264parse ! rtph264pay config-interval=1 ! udpsink port=5000 host=224.1.1.1
Receiver:
gst-launch-1.0 -ev udpsrc port=5000 multicast-group=224.1.1.1 ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1920,height=1080 ! fpsdisplaysink text-overlay=0 video-sink=xvimagesink
It may take a few seconds to connect and display after starting or restarting, but it seems to restart fine after the network connection drops and becomes available again. Note that this was only tested on a single AGX Xavier acting as both sender and receiver, using NetworkManager to disconnect/reconnect; other cases over real networks may be more complex.
The proper way is to write your own application instead of using gst-launch, as already suggested. The learning curve for this is pretty steep, so the alternative is to monitor the stderr output, parse the messages for the "network is unreachable" information, kill the old process, and relaunch gst-launch.
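A minimal sketch of such a watchdog loop (the inner pipeline is a placeholder; substitute your real sender command and match whatever string your pipeline actually prints):
#!/bin/sh
# Relaunch gst-launch whenever it exits or reports the network error.
while true; do
    gst-launch-1.0 -ev videotestsrc ! video/x-raw,width=320,height=240 ! autovideosink 2>&1 | \
        grep -m1 "network is unreachable"
    # grep returns on the first match (gst-launch is then killed by SIGPIPE
    # on its next write) or when gst-launch itself exits; either way the
    # loop relaunches the pipeline.
    sleep 2   # give the network a moment to come back
done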
I am currently experiencing a small problem with GStreamer; here are the details:
Configuration:
Intel i7-6700
Intel HD Graphics 530
Ubuntu 18.04 LTS
GStreamer 1.0
VAAPI plugin
I receive a UDP stream from a video source; the stream is sent in raw UYVY format. Here is my command line to decode it:
gst-launch-1.0 -v udpsrc port="1234" caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! decodebin ! queue ! videoconvert ! xvimagesink
The problem: the CPU load is far too high for this kind of task, while the GPU load is almost zero.
To overcome this problem, I want to use VAAPI graphics acceleration, as I did in a previous project with H264. Here is that command line:
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, profile-level-id=(string)640028, payload=(int)96, ssrc=(uint)2665415388, timestamp-offset=(uint)3571350145, seqnum-offset=(uint)18095, a-framerate=(string)25" ! rtph264depay ! queue ! vaapih264dec low-latency=1 ! autovideosink
The line above works perfectly and the CPU load drops to almost nothing. So I adapted this command line for the RAW stream:
gst-launch-1.0 -v udpsrc port="1234" caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, colorimetry=(string)BT709-2, payload=(int)96, ssrc=(uint)1188110121, timestamp-offset=(uint)4137478200, seqnum-offset=(uint)7257, a-framerate=(string)25" ! rtpvrawdepay ! vaapidecodebin ! videoconvert ! xvimagesink
It is the same line as the one at the beginning, but I replaced the decodebin element with vaapidecodebin, just as I had replaced avdec_h264 with vaapih264dec for my H264 stream. Unfortunately it doesn't work, and I end up with this error:
WARNING: wrong pipeline: unable to connect rtpvrawdepay0 to vaapidecodebin0
How can I solve this problem? Do you have any leads?
What exactly are you trying to accelerate here? The CPU load is probably due either to videoconvert, which runs in software to convert UYVY into a format your renderer supports (hopefully another YUV format and not RGB), or to the transfer of the uncompressed data from CPU memory to GPU memory.
Note that transferring uncompressed image data is a much higher data rate than compressed H.264 video.
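For scale: 1920x1080 UYVY is 2 bytes per pixel, so at 25 fps the raw stream above carries roughly 1920 * 1080 * 2 * 25 ≈ 104 MB/s (about 830 Mbit/s), versus only a few Mbit/s for a typical compressed H.264 stream.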
If you think videoconvert is the expensive part, you may want to try using OpenGL for conversion and display: .. ! glupload ! glcolorconvert ! glimagesink.
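Applied to the pipeline from the question, that would look something like the following (an untested sketch; the caps are shortened from the original command):
gst-launch-1.0 -v udpsrc port=1234 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:2, depth=(string)8, width=(string)1920, height=(string)1080, payload=(int)96" ! rtpvrawdepay ! queue ! glupload ! glcolorconvert ! glimagesink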
Maybe vaapipostproc can help you with color conversion if you don't want to go the OpenGL route.
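Something like .. ! rtpvrawdepay ! vaapipostproc ! vaapisink might be worth a try there (untested; vaapipostproc would do the color conversion on the GPU, and vaapisink renders directly from VA surfaces).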
I'm trying to send audio through an RTP stream using GStreamer with the lowest latency possible, and I want to do it from a Pepper (GStreamer 0.10) to my computer (GStreamer 0.10 or 1.0).
I can send audio with little latency (20 ms) from the computer to the Pepper; however, it doesn't work as well from the Pepper to the computer. When I try to set the buffer-time below 200 ms, I get this type of error:
WARNING: Can't record audio fast enough
Dropped 318 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
I used the answers from here, and so far I have been working with the following pipelines:
Sender
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Receiver
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! autoaudiosink buffer-time=80000 sync=false
I don't really know how to tackle this issue, as the CPU usage is not abnormal.
To be frank, I am quite new to this, so I don't know which parameters to play with to get low latency. I hope someone can help me! (And that it is not a hardware problem, too ^^)
Thanks a lot!
I don't think gst-launch-0.10 is made to work in real-time (RT).
Please consider writing your own program (even using GStreamer) to perform the streaming from an RT thread. NAOqi OS has the RT patches included and supports this.
But with network involved in your pipeline, you may not be able to guarantee it is going to be processed in time.
So maybe the simplest solution is to keep a queue before the audio processing on the sender:
gst-launch-0.10 -v alsasrc name=mic provide-clock=true do-timestamp=true buffer-time=20000 mic. ! \
audio/x-raw-int, format=S16LE, channels=1, width=16,depth=16,rate=16000 ! \
queue ! audioconvert ! rtpL16pay ! queue ! udpsink host=pepper.local port=4000 sync=false
Also consider that the receiver may cause a drop if it cannot process the audio in time, and that a queue might help too:
gst-launch-0.10 -v udpsrc port=4000 caps = 'application/x-rtp, media=audio, clock-rate=16000, encoding-name=L16, encoding-params=1, channels=1, payload=96' ! \
rtpL16depay ! queue ! autoaudiosink buffer-time=80000 sync=false
I am trying to stream audio and video with GStreamer over UDP, but playback in VLC only returns video without audio. I am currently using a sample of Big Buck Bunny and have confirmed that it does have audio. I am planning to use Snowmix to feed media to the GStreamer output in the future.
I currently stream from the file source over UDP for playback in VLC with:
gst-launch-1.0 -v uridecodebin uri=file:///home/me/files/Snowmix-0.5.1/test/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! mpegtsmux ! queue ! udpsink host=230.0.0.1 port=4012 sync=true
which allows me to open a network stream in VLC on my Windows machine to receive the packets, but it only plays video.
What am I missing from my command?
As RSATom stated previously, the audio is missing from the pipeline.
The correct pipeline for video and audio is the following (tested with the same input file):
gst-launch-1.0 -v uridecodebin name=uridec uri=file:///home/usuario/Desktop/map/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true uridec. ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
Remember that in this case you're re-encoding all the content from the source video file, which means high CPU consumption. Another option would be to demux the content from the input file and mux it again without re-encoding (using h264parse and aacparse), as sketched below.
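A sketch of that second option (untested; it assumes the MP4 really contains H264 video and AAC audio, as the filename suggests):
gst-launch-1.0 -v filesrc location=big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! qtdemux name=demux \
demux.video_0 ! queue ! h264parse ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true \
demux.audio_0 ! queue ! aacparse ! mux.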