BLUF: I'd like to fan out an RTSP video stream using GStreamer so that multiple processes can use the GStreamer process as a source, and I'm having problems doing that with tcpserversink.
I have an IoT camera that serves video over RTSP, so I can successfully capture video with e.g.
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
I'd like to be able to capture several videos from the stream simultaneously, with arbitrary start and stop times - for example, one video that runs from 0-120, another from 40-80, and another from 60-100. For reasons that are not clear, when I request too many simultaneous streams, the camera starts killing existing streams. My theory is that the camera's hardware can't handle multiple connections and is running into resource starvation. To get around this, I'd like my recording server to run a single process that re-hosts the RTSP stream from the camera, which my asynchronous recorder processes can then attach to.
It would seem that the following would work for the server subprocess:
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
tcpserversink port=29000
and the following for the asynchronous recorder:
gst-launch-1.0 -e tcpclientsrc port=29000 \
! rtph264depay \
! h264parse \
! mp4mux \
! filesink location=/tmp/data.mp4
But it doesn't work. The specific error I'm seeing in my client process is
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data stream error.
The documentation for tcpserversink seems to indicate that you can just attach any pipeline end there and you're fine. It seems this isn't the case. What am I missing?
Try adding a ! after data, i.e. "data ! tcpserversink port=29000". Without the !, gst-launch creates the two elements but never links them; see the corrected pipeline below.
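For reference, with the fix applied the server command would look like this (a sketch only, assuming the same camera URL and port as in the question):
gst-launch-1.0 -e rtspsrc location=rtsp://camera:554/data \
! tcpserversink port=29000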
Related
I am running a GStreamer pipeline on a Jetson Xavier NX and streaming a 4K live stream over UDP to a server. I am running a shell script which launches the pipeline directly from the CLI. When the connection breaks and the stream cuts, the pipeline says 'network is unreachable'. The network resets itself soon afterwards, and I want the pipeline to restart. How can I find out whether the pipeline has stopped, and restart it? The pipeline stops but the process keeps running, and it does not restart on its own. I want to restart the process if the pipeline breaks.
You may try the following for the sender. Here videotestsrc is used at low resolution, rescaled in hardware into 4K in NVMM memory for H264 encoding and RTP/UDP multicast streaming:
gst-launch-1.0 -ev videotestsrc ! video/x-raw,width=320,height=240,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12,width=3840,height=2160' ! nvv4l2h264enc insert-sps-pps=1 ! h264parse ! rtph264pay config-interval=1 ! udpsink port=5000 host=224.1.1.1
Receiver:
gst-launch-1.0 -ev udpsrc port=5000 multicast-group=224.1.1.1 ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,width=1920,height=1080 ! fpsdisplaysink text-overlay=0 video-sink=xvimagesink
It may take a few seconds to connect and display after starting or restarting, but it seems to restart fine after the network connection drops and becomes available again. Note that this was only tested on a single AGX Xavier acting as both sender and receiver, using NetworkManager to disconnect/reconnect; other cases over real networks may be more complex.
The proper way is to write your own application instead of using gst-launch, as already suggested. The learning curve for that is pretty steep, so the alternative is to monitor the stderr output, parse the messages to find the "network is unreachable" information, kill the old process and relaunch gst-launch. A rough sketch of that watchdog approach follows.
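An untested sketch of that watchdog loop, assuming the fatal message appears on stderr exactly as "network is unreachable" and that only one gst-launch instance runs on the machine (pkill matches by name); substitute your own pipeline:
#!/bin/sh
# Run the pipeline, wait for the fatal message, then kill and relaunch.
PIPELINE='udpsrc port=5000 multicast-group=224.1.1.1 ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink'
while true; do
    # grep -m1 exits as soon as the message is seen
    gst-launch-1.0 -ev $PIPELINE 2>&1 | grep -m1 "network is unreachable"
    pkill -f gst-launch-1.0    # stop the stalled pipeline
    sleep 2                    # give the network a moment before retrying
done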
I have streamed video via VLC player over RTSP and then displayed the video via gstreamer-0.10. However, while VLC was streaming over RTSP, I suddenly lost the stream within the first minute, before the end of the stream.
I have used the following pipeline:
GST_DEBUG=2 gst-launch-0.10 rtspsrc location=rtsp://127.0.0.1:8554/test \
! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! videorate \
! xvimagesink sync=false
I got the following output:
rtpjitterbuffer.c:428:calculate_skew: delta - skew: 0:00:01.103711536 too big, reset skew
rtpjitterbuffer.c:387:calculate_skew: backward timestamps at server, taking new base time
Got EOS from element "pipeline0".
Execution ended after 59982680309 ns.
Setting pipeline to PAUSED ...
gst_rtspsrc_send: got NOT IMPLEMENTED, disable method PAUSE
How can I fix this problem?
I have found a solution: use rtspt://... instead of rtsp://... to force TCP instead of UDP.
gst-launch-0.10 rtspsrc location=rtspt://127.0.0.1:8554/test \
! gstrtpjitterbuffer ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
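On GStreamer 1.x you can get the same effect with rtspsrc's protocols property instead of the rtspt:// scheme. A rough 1.0 equivalent of the pipeline above (element names adjusted for 1.0) would be:
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test protocols=tcp \
! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! xvimagesink sync=false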
I am working with an embedded Linux machine which can receive an RTSP stream from another source. If I configure FFmpeg and try to restream it, the CPU usage gets very high, probably due to the limited capability of the embedded hardware.
Is there any possible way to simply restream the incoming stream without processing it at all, using any type of library?
You can construct a command for GStreamer. My guess (from here):
gst-launch -v rtspsrc location="rtsp://url" ! rtph264depay ! \
rtph264pay ! udpsink port=15000 sync=false
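On the receiving end, something along these lines could consume the restreamed RTP (a sketch only; the caps string, payload type and software decoder are assumptions, not taken from the question):
gst-launch-1.0 -v udpsrc port=15000 \
caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
! rtph264depay ! h264parse ! avdec_h264 ! autovideosink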
I have been stuck on this for days now. I am trying to come up with a GStreamer pipeline that will stream h.264 video and compressed audio (AAC, mulaw, whatever, I don't really care) over a single RTP stream. The problem always seems to be with the multiplexer. I've tried the asf, avi, mpegts, Matroska and flv multiplexers, and it seems they are all oriented towards files (not network streaming) and therefore require header information. Anyway, here's my latest attempt:
gst-launch-1.0 -e --gst-debug-level=4 \
flvmux name=flashmux streamable=true ! flvdemux name=flashdemux ! decodebin name=decode \
videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! flashmux. \
audiotestsrc ! 'audio/x-raw,format=S16LE,rate=22050,channels=2,layout=interleaved' ! flashmux. \
decode. ! queue ! autovideoconvert ! fpsdisplaysink sync=false \
decode. ! queue ! audioconvert ! alsasink device="hw:1,0"
This pipeline removes RTP and simply feeds the decoder straight from the encoder. Also, this attempt uses raw audio, not encoded audio. Any help will be greatly appreciated!
To stream video+audio you should use 2 different ports.
Use the rtpbin element to manage the RTP session; a sketch follows the link below.
Example http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp/server-v4l2-H264-alsasrc-PCMA.sh
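Along the lines of that example, a minimal send-side sketch using test sources (an assumption for illustration; the linked script uses v4l2/alsasrc and also wires up RTCP):
gst-launch-1.0 rtpbin name=rtpbin \
videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! udpsink host=127.0.0.1 port=5000 \
audiotestsrc ! alawenc ! rtppcmapay ! rtpbin.send_rtp_sink_1 \
rtpbin.send_rtp_src_1 ! udpsink host=127.0.0.1 port=5002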
I am trying to play 2 channels, with audio on one channel and silence on the other.
$ gst-launch \
interleave name=i ! alsasink \
filesrc location=/home/test1.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
After 10 seconds I want to play silence on the first channel and some audio on the second.
$ gst-launch \
interleave name=i ! alsasink \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
filesrc location=/home/test2.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
This works on the PC side, playing these pipelines in two different terminals or running one of them in the background. But when I play one pipeline on an am335x board and try to start the other, I get something like this:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstAlsaSink:alsasink0: Could not open audio device for playback.
Device is being used by another application.
Additional debug info:
gstalsasink.c(697): gst_alsasink_open (): /GstPipeline:pipeline0/GstAlsaSink:alsasink0:
Device 'default' is busy
Setting pipeline to NULL ...
Freeing pipeline ...
When we check gstalsasink.c, it calls snd_pcm_open in non-blocking mode:
CHECK (snd_pcm_open (&alsa->handle, alsa->device, SND_PCM_STREAM_PLAYBACK,
SND_PCM_NONBLOCK), open_error);
Then why is it blocking other processes from using the audio device?
Can anyone suggest what to do on the target side, since on the PC side alsasink works perfectly?
Could there be a small delay in closing the ALSA device on your embedded hardware? Check with fuser which process still has it open (see the example below). Also consider using gnonlin to develop sequential playback of streams; this will reuse the existing audio sink.
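For example, assuming the playback device is card 0, device 0 (adjust the node path for your board):
# list processes that still hold the ALSA playback device open
fuser -v /dev/snd/pcmC0D0p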