I have been trying to stream live video from an RPi to a browser using GStreamer, i.e. RPi -> MediaServer -> Browser.
However, the video that is displayed has become corrupted:
Corrupted Video Output
I have isolated the problem to the GStreamer pipeline by streaming to a different port on the Pi and saving the stream to .mp4; however, this video does not play either.
Bash Script to send Stream
PEER_A={KMS_AUDIO_PORT} PEER_V={KMS_VIDEO_PORT} PEER_IP={KMS_PUBLIC_IP} \
SELF_PATH="{PATH_TO_VIDEO_FILE}" \
SELF_A=5006 SELF_ASSRC=445566 \
SELF_V=5004 SELF_VSSRC=112233 \
bash -c 'gst-launch-1.0 -e \
rtpbin name=r sdes="application/x-rtp-source-sdes,cname=(string)\"user\@example.com\"" \
rpicamsrc ! video/x-raw,width=200,height=150,framerate=25/1 ! decodebin name=d \
d. ! x264enc tune=zerolatency \
! rtph264pay ! "application/x-rtp,payload=(int)103,clock-rate=(int)90000,ssrc=(uint)$SELF_VSSRC" \
! r.send_rtp_sink_1 \
r.send_rtp_src_1 ! udpsink host=$PEER_IP port=$PEER_V bind-port=$SELF_V \
r.send_rtcp_src_1 ! udpsink host=$PEER_IP port=$((PEER_V+1)) bind-port=$((SELF_V+1)) sync=false async=false \
udpsrc port=$((SELF_V+1)) ! tee name=t \
t. ! queue ! r.recv_rtcp_sink_1 \
t. ! queue ! fakesink dump=true async=false'
Script to Receive Stream and save to mp4
gst-launch-1.0 udpsrc port=23938 caps="application/x-rtp" ! rtph264depay ! h264parse ! mp4mux ! filesink location=~/Desktop/test.mp4
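One detail worth checking on the receiving side: without the -e flag, gst-launch-1.0 never sends EOS when you interrupt it, so mp4mux cannot write the MP4 index (the moov atom) and the resulting file will not play. A variant with EOS handling enabled, assuming the stream parameters from the sending script (H264, payload 103, clock-rate 90000):
gst-launch-1.0 -e udpsrc port=23938 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)103" ! rtph264depay ! h264parse ! mp4mux ! filesink location=~/Desktop/test.mp4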
Any ideas on what is wrong with my pipeline setup would be greatly appreciated, thanks.
I am using GStreamer to stream live video / audio from a Pi3B with a picam module and a USB microphone. My end goal is to use the audio from the one USB microphone both in the live video / audio stream AND as the input to a Python script. I understand that this can be done with the ALSA dsnoop plugin and have been able to demonstrate it with this /etc/asound.conf config:
pcm.myTest {
    type dsnoop
    ipc_key 2241234
    slave {
        pcm "hw:1,0"
        channels 1
    }
}

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "hw:0,0"
    }
    capture.pcm {
        type plug
        slave.pcm "myTest"
    }
}
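As a sanity check that the dsnoop device works outside of GStreamer, a short arecord capture against it (via the plug plugin, matching the device string used further below) should produce a playable WAV:
arecord -D plug:myTest -f S16_LE -r 44100 -c 1 -d 5 /tmp/dsnoop-test.wav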
The video / audio stream works perfectly using the following GStreamer settings, but I am unable to use the microphone in other applications (note the "hw:1,0"):
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=hw:1,0 \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
The following (which uses dsnoop) causes an issue in the video stream that looks like some kind of synchronization problem: instead of a nice smooth 24 frames per second, I get one frame every ~2-3 seconds. The audio continues to work well, and I'm able to use the USB mic simultaneously in other applications.
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=plug:myTest \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
I've tried a few things that I've found in some peripherally related forums, to no avail, and I'm feeling kind of stuck. Do any of you have suggestions on getting a stream to play nicely with dsnoop so that I can avoid buying another microphone for this project?
Thank you!
For posterity, I received a great tip from the GStreamer developer forum.
Adding provide-clock=false to the alsasrc line did the trick! (By default alsasrc can provide the clock for the whole pipeline, and the dsnoop device's timing apparently threw off the rest of the pipeline.) So the GStreamer call becomes:
#!/bin/bash
gst-launch-1.0 -v rpicamsrc vflip=true hflip=false \
name=src preview=0 fullscreen=0 bitrate=10000000 \
annotation-mode=time annotation-text-size=20 \
! video/x-h264,width=960,height=540,framerate=24/1 \
! h264parse \
! rtph264pay config-interval=1 pt=96 \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5001 \
alsasrc device=plug:myTest provide-clock=false \
! audioconvert \
! audioresample \
! opusenc \
! rtpopuspay \
! queue max-size-bytes=0 max-size-buffers=0 \
! udpsink host=192.168.1.101 port=5002
One minor side effect of this approach is that the audio is out of sync with the video by about 0.5 seconds. I'm curious to know if there is a way to sync the two up a little better, or if this is just one of the inevitable tradeoffs when trying to use a dsnoop device with GStreamer?
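One knob that might claw back some of that offset (untested here; the values are just examples) is shrinking alsasrc's internal buffering, since buffer-time and latency-time are alsasrc properties in microseconds and the default buffer-time alone is 200 ms:
alsasrc device=plug:myTest provide-clock=false buffer-time=100000 latency-time=10000 \
Beyond that, dsnoop adds its own period-sized buffering, so some residual offset may be unavoidable.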
I am trying to generate MP4s from HLS streams that contain discontinuity tags. Since the videos are from the same source, the frame rate and resolution (WxH) are the same.
I tested with the following pipeline to demux and play the stream, and it works fine:
gst-launch-1.0 -v souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder \
! queue ! autovideosink decoder. ! queue ! autoaudiosink
To this I added the x264enc and avenc_aac encoders to save the stream to a file, and it keeps failing with
"gstadaptivedemux.c(2651): _src_chain (): /GstPipeline:pipeline0/GstHLSDemux:hlsdemux0"
Failing Pipeline
gst-launch-1.0 -v mp4mux name=mux faststart=true presentation-time=true ! filesink location=dipoza.mp4 \
souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder ! queue name=q1 ! \
videoconvert ! queue name=q2 ! x264enc name=encoder ! mux. decoder. \
! queue name=q3 ! audioconvert ! queue name=q4 ! avenc_aac ! mux.
I would really appreciate any help with this.
After a lot of debugging, I found the issue with my pipeline. Thanks a lot to @FlorianZwoch for suggesting I move to the voaacenc encoder.
voaacenc is not installed by default with gst-plugins-bad on macOS, so I had to use
brew reinstall gst-plugins-bad --with-libvo-aacenc
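You can confirm the element is actually available after the reinstall with:
gst-inspect-1.0 voaacenc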
The following pipeline worked well with my application.
gst-launch-1.0 --gst-debug=3 mp4mux name=mux ! \
filesink location=xxxx.mp4 souphttpsrc location=<hls url> ! decodebin name=decode ! \
videoconvert ! videorate ! video/x-raw, framerate=50/1 ! queue ! x264enc ! mux. decode. ! \
audioconvert ! voaacenc ! mux.
Also, in my HLS stream some video segments were 50 FPS and some were 59.97 FPS, so I used videorate to normalize everything to 50. This may need to change depending on your segments.
For those who want C++ code for the same, please check out my GitHub page.
On my Raspberry Pi I create an H.264 timelapse video from JPEGs in GStreamer.
sudo gst-launch-1.0 multifilesrc location=/home/pi/Timelapse/20160820/image_%04d.jpg index=0 \
caps="image/jpeg,framerate=\(fraction\)24/1" ! \
jpegdec ! \
queue ! \
omxh264enc target-bitrate=15000000 control-rate=variable ! \
video/x-h264,width=1920,height=1080,framerate=24/1,profile=high ! \
h264parse ! \
mp4mux faststart=true ! \
filesink location=/home/pi/Timelapse/20160820/1.mp4
This works pretty well.
I also want to add audio to the timelapse video. How can I use an MP3 file as background music and save the combined file to disk?
Thanks.
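A minimal sketch of one way to do this (untested; the MP3 path is a placeholder): mp4mux should accept MPEG-1 layer 3 audio directly, so the music can be parsed and muxed alongside the video without re-encoding:
sudo gst-launch-1.0 multifilesrc location=/home/pi/Timelapse/20160820/image_%04d.jpg index=0 \
caps="image/jpeg,framerate=\(fraction\)24/1" ! \
jpegdec ! queue ! \
omxh264enc target-bitrate=15000000 control-rate=variable ! \
video/x-h264,width=1920,height=1080,framerate=24/1,profile=high ! \
h264parse ! mux.video_0 \
filesrc location=/home/pi/Timelapse/music.mp3 ! mpegaudioparse ! queue ! mux.audio_0 \
mp4mux faststart=true name=mux ! filesink location=/home/pi/Timelapse/20160820/1.mp4
If the MP3 outlasts the image sequence, the output keeps playing audio past the last frame, so trim the music to length first.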
My Logitech C920 webcam provides a video stream encoded in H.264. I'm using this "capture" tool to access the data:
So I can view live video:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 \
! h264parse \
! decodebin\
! xvimagesink sync=false
...or record the stream to an MP4 file:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-0.10 -e filesrc location=/dev/fd/0 \
! h264parse \
! mp4mux \
! filesink location=/tmp/video.mp4
...but I can't for the life of me figure out how to do both at the same time. Having a live feed on screen while recording can be useful sometimes, so I'd like to make this work.
I've spent hours and hours looking for a way to grab and display simultaneously, but no luck. No amount of messing around with tees and queues is helping.
I guess it would be a bonus to get ALSA audio (hw:2,0) into this as well, but I can get around that in an ugly, hacky way. For now, I get this even though hw:2,0 is a valid input in Audacity or arecord, for example:
Recording open error on device 'hw:2,0': No such file or directory
Recording open error on device 'plughw:2,0': No such file or directory
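To confirm how ALSA numbers the cards (and whether the mic really is card 2, device 0), listing the capture devices and doing a short test recording helps narrow it down:
arecord -l
arecord -D plughw:2,0 -f S16_LE -r 44100 -c 1 -d 5 /tmp/mic-test.wav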
So to recap: would love to put those two video bits together, bonus if audio would work too. I feel like such a newbie.
Thanks in advance for any help you can provide.
Edit: non-working code:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! tee name=myvid ! h264parse ! decodebin \
! xvimagesink sync=false myvid. ! queue ! mux. alsasrc device=plughw:2,0 ! \
audio/x-raw,rate=44100,channels=1,depth=24 ! audioconvert ! queue ! mux. mp4mux \
name=mux ! filesink location=/tmp/out.mp4
...leads to this: WARNING: erroneous pipeline: could not link queue1 to mux
Edit: Tried umlaeute's suggestion, and got a nearly empty video file and one frozen frame of live video. With/without audio made no difference after I fixed two small errors in the audio-enabled code (a doubled quotation mark, and the audio not being encoded to anything MP4-compatible; adding avenc_aac after audioconvert took care of that). Error output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstMP4Mux:mux: Could not multiplex stream.
Additional debug info:
gstqtmux.c(2530): gst_qt_mux_add_buffer (): /GstPipeline:pipeline0/GstMP4Mux:mux:
DTS method failed to re-order timestamps.
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2809): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming task paused, reason error (-5)
Edit: Okay, umlaeute's corrected code works perfectly, but only if I'm using v4l2src instead of the capture tool. And for now, that means grabbing the MJPEG stream rather than the H264 one. No skin off my nose, though I guess I'd prefer a more modern workflow. So anyway, here's what actually works, outputting an MJPEG video file and a real-time "viewfinder". Not perfectly elegant, but very workable. Thanks for all your help!
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videorate ! 'image/jpeg, width=1280, height=720, framerate=24/1' ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! lamemp3enc ! queue ! mux.audio_0 \
avimux name=mux ! filesink location=/tmp/out.avi
GStreamer is often a bit dumb when it comes to automatically combining multiple different streams (e.g. using mp4mux).
In this case you should usually send a stream not only to a named element, but to a specific pad (using the elementname.padname notation; the element. notation is really just shorthand for "any" pad in the named element).
Also, it seems that you forgot the h264parse for the mp4mux (if you look at the path the video takes, it really boils down to filesrc ! queue ! mp4mux, which is probably a bit rough).
While I cannot test the pipeline, I guess something like the following should do the trick:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mp4mux ! filesink location=/tmp/out.mp4
With audio it's probably more complicated; try something like this (obviously assuming that you can read audio using the alsasrc device="plughw:2,0" element):
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24"" ! audioconvert ! queue ! mux.audio_0 \
mp4mux name=mux ! filesink location=/tmp/out.mp4
I need to read a pcap file and convert it into an AVI file, with audio and video, using GStreamer.
If I try the following command, it only works for generating a video file.
Video Only
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
!"application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! avimux ! filesink location=testh264.avi
Audio Only
And if I use the following command, it only works for generating an audio file.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! avimux ! filesink location=test1audio.avi
Video + Audio
When I combine the two commands as follows, I encounter an error message:
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data flow error.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
!"application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! queue ! mux. \
filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! queue ! avimux name=mux ! filesink location=testVideoAudio.avi
Please kindly give me some solutions or suggestions with regard to this issue.
Thank you in advance.
Eric
Instead of the 2nd "filesrc ! pcapparse", give the first pcapparse a name=demux, drop the src-port arg, and start the 2nd branch from demux.