Pipeline that demuxes HLS streams and muxes them again - audio

I am trying to generate an MP4 from an HLS stream that contains DISCONTINUITY tags. I am trying to demux the stream and remux the audio and video so that they align.
I tried the following pipeline, but it doesn't seem to work:
gst-launch-1.0 -v souphttpsrc location=<HLSURL> ! hlsdemux ! decodebin name=decoder \
decoder. ! queue ! x264enc ! mp4mux name=mux ! filesink location=muruga.mp4 \
decoder. ! queue ! mux.
Thanks a lot for your help.
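One likely issue with the pipeline above: mp4mux cannot take the raw decoded audio coming out of decodebin, so the second branch needs an audio encoder before the muxer. A hedged, untested sketch of that fix (assuming avenc_aac is available):
gst-launch-1.0 -v souphttpsrc location=<HLSURL> ! hlsdemux ! decodebin name=decoder \
decoder. ! queue ! videoconvert ! x264enc ! mp4mux name=mux ! filesink location=muruga.mp4 \
decoder. ! queue ! audioconvert ! avenc_aac ! mux.
The "Generating MP4 from HLS in GStreamer" question below shows a variant of this that worked, using voaacenc.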

Related

Mixing multiple RTP audio streams with GStreamer

I am trying to mix multiple RTP-over-UDP audio streams, created with the following command on several other computers, but after a lot of searching I could not find a proper command to mix the received audio.
I use this command to stream audio from the other computers to mine:
gst-launch-1.0 autoaudiosrc ! audioconvert ! rtpL24pay ! udpsink host=<MY_COMPUTER_IP> port=<some_port_number>
and I can receive the stream on my computer with this command:
gst-launch-1.0 -v udpsrc port=<port_number> caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)44100,encoding-name=(string)L24" ! rtpL24depay ! audioconvert ! autoaudiosink sync=false
but I want to mix the received streams together and play them as one audio stream in a single pipeline. How can I do that?
To mix two audio streams you can use GStreamer's audiomixer element. A very basic example:
Generator of two parallel RTP (over UDP) streams carrying test tones at different frequencies:
gst-launch-1.0 audiotestsrc freq=523 ! audioconvert ! rtpL24pay ! udpsink host=127.0.0.1 port=5000 \
audiotestsrc freq=659 ! audioconvert ! rtpL24pay ! udpsink host=127.0.0.1 port=5001
Receiver that mixes the two audio streams carried by those RTP (over UDP) streams:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)44100,encoding-name=(string)L24" \
! queue ! rtpL24depay ! audioconvert ! audiomixer name=mixer ! autoaudiosink \
udpsrc port=5001 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)44100,encoding-name=(string)L24" \
! queue ! rtpL24depay ! audioconvert ! mixer.
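To test audiomixer in isolation, without the RTP legs, here is a minimal sketch mixing the same two test tones locally:
gst-launch-1.0 audiotestsrc freq=523 ! audioconvert ! audiomixer name=mixer ! audioconvert ! autoaudiosink \
audiotestsrc freq=659 ! audioconvert ! mixer.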

Pipeline to convert MP3 -> sink with GStreamer

I tried to make a pipeline to convert an MP3 file to a sink, but it does not work.
What I tried :
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert ! appsink caps=audio/x-raw,format=S16LE,rate=48000 name=sink
When I afterwards write the sink's output to a .wav file, it's not recognized as WAV, and when I open it in Audacity as raw data it's just noise.
I can't use filesink because I need the sink for an unrelated purpose.
My best guess is that my pipeline is wrong; if someone has an idea, don't hesitate to ask me questions!
The pipeline was wrong, as expected.
The correct pipeline was:
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert ! capsfilter caps="audio/x-raw,format=S16LE,rate=48000,channels=2" ! appsink name=sink
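To sanity-check the decode/convert chain outside the application, the same pipeline can be pointed at wavenc, which writes a proper WAV header (a raw PCM dump has none, so players won't recognize it as .wav). A minimal sketch:
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert ! capsfilter caps="audio/x-raw,format=S16LE,rate=48000,channels=2" ! wavenc ! filesink location=check.wav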

Generating MP4 from HLS in GStreamer

I am trying to generate MP4s from HLS streams with discontinuity tags. Since the videos are from the same source, the FPS and the resolution (WxH) are the same.
I tested the following pipeline to demux and play the stream, and it works fine:
gst-launch-1.0 -v souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder \
! queue ! autovideosink decoder. ! queue ! autoaudiosink
To this I added x264enc and the avenc_aac encoder to save the output to a file, and it keeps failing with:
"gstadaptivedemux.c(2651): _src_chain (): /GstPipeline:pipeline0/GstHLSDemux:hlsdemux0"
Failing Pipeline
gst-launch-1.0 -v mp4mux name=mux faststart=true presentation-time=true ! filesink location=dipoza.mp4 \
souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder ! queue name=q1 ! \
videoconvert ! queue name=q2 ! x264enc name=encoder ! mux. decoder. \
! queue name=q3 ! audioconvert ! queue name=q4 ! avenc_aac ! mux.
I really appreciate any help with this.
After a lot of debugging, I found the issue with my pipeline. Thanks a lot to @FlorianZwoch for suggesting I move to the voaacenc encoder.
voaacenc is not installed by default with gst-plugins-bad on macOS, so I had to run:
brew reinstall gst-plugins-bad --with-libvo-aacenc
The following pipeline worked well with my application.
gst-launch-1.0 --gst-debug=3 mp4mux name=mux ! \
filesink location=xxxx.mp4 souphttpsrc location=<hls url> ! decodebin name=decode ! \
videoconvert ! videorate ! video/x-raw, framerate=50/1 ! queue ! x264enc ! mux. decode. ! \
audioconvert ! voaacenc ! mux.
Also, some video segments in my HLS stream had 50 FPS and some had 59.97 FPS, so I used videorate to normalize everything to 50. This might need to change depending on your segments.
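To see what framerates (and other caps) a stream actually carries before picking a videorate target, gst-discoverer can help, though with discontinuities it may only reflect the first segments it reads:
gst-discoverer-1.0 -v <hls url>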
For those folks who want C++ code for the same, please check out my GitHub page.

How to mux audio and video in GStreamer

I'm trying to mux a video-only .ogv file with a .mp3 file to get a .ogv file with both video and audio, in GStreamer (v0.10). I tried this pipeline:
gst-launch-0.10 filesrc location="video.ogv" ! oggdemux ! queue ! oggmux name=mux ! filesink location="test.ogv" filesrc location="audio.mp3" ! audioconvert ! vorbisenc ! queue ! mux.
When I use this command line, I get an error:
ERROR: from element /GstPipeline:pipeline0/GstAudioConvert:audioconvert0: not negotiated
Additional debugging information:
gstbasetransform.c(2541): gst_base_transform_handle_buffer (): /GstPipeline:pipeline0/GstAudioConvert:audioconvert0:
not negotiated
ERROR: the pipeline refused to preroll.
I can't see what's wrong. Any suggestions?
Thanks.
You need to add an MP3 decoder between the filesrc and audioconvert, or just use decodebin, e.g.:
gst-launch-0.10 filesrc location="video.ogv" ! oggdemux ! queue ! oggmux name=mux ! filesink location="test.ogv"
filesrc location="audio.mp3" ! decodebin ! audioconvert ! vorbisenc ! queue ! mux.
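Under GStreamer 1.0 the same idea should carry over, though the video branch may need a parser. A hedged, untested sketch, assuming the .ogv carries a Theora track:
gst-launch-1.0 filesrc location="video.ogv" ! oggdemux ! theoraparse ! queue ! oggmux name=mux ! filesink location="test.ogv" \
filesrc location="audio.mp3" ! decodebin ! audioconvert ! audioresample ! vorbisenc ! queue ! mux.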

Using Gstreamer to display audio-less video while recording audio+video

My Logitech C920 webcam provides a video stream encoded in H.264. I'm using this "capture" tool to access the data.
So I can view live video:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 \
! h264parse \
! decodebin\
! xvimagesink sync=false
...or record the stream, muxed into an MP4 file:
/usr/local/bin/capture -d /dev/video0 -c 100000 -o | \
gst-launch-0.10 -e filesrc location=/dev/fd/0 \
! h264parse \
! mp4mux \
! filesink location=/tmp/video.mp4
...but I can't for the life of me figure out how to do both at the same time. Having a live feed on screen while recording can be useful sometimes, so I'd like to make this work.
I've spent hours and hours looking for a way to grab and display simultaneously, but no luck. No amount of messing around with tees and queues is helping.
I guess it would be a bonus to get ALSA audio (hw:2,0) into this as well, but I can get around that in an ugly hacky way. For now, I get this even though hw:2,0 is a valid input in Audacity or arecord, for example:
Recording open error on device 'hw:2,0': No such file or directory
Recording open error on device 'plughw:2,0': No such file or directory
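One quick way to check which ALSA capture devices actually exist on the system (card and device numbers may differ from what you expect):
arecord -l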
So to recap: would love to put those two video bits together, bonus if audio would work too. I feel like such a newbie.
Thanks in advance for any help you can provide.
Edit: non-working code:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! tee name=myvid ! h264parse ! decodebin \
! xvimagesink sync=false myvid. ! queue ! mux. alsasrc device=plughw:2,0 ! \
audio/x-raw,rate=44100,channels=1,depth=24 ! audioconvert ! queue ! mux. mp4mux \
name=mux ! filesink location=/tmp/out.mp4
...leads to this: WARNING: erroneous pipeline: could not link queue1 to mux
Edit: I tried umlaeute's suggestion and got a nearly empty video file and one frozen frame of live video. With or without audio made no difference after I fixed two small errors in the audio-enabled code (a doubled quotation mark, and the audio not being encoded to anything MP4-compatible; adding avenc_aac after audioconvert did the trick). Error output:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstMP4Mux:mux: Could not multiplex stream.
Additional debug info:
gstqtmux.c(2530): gst_qt_mux_add_buffer (): /GstPipeline:pipeline0/GstMP4Mux:mux:
DTS method failed to re-order timestamps.
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2809): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc0:
streaming task paused, reason error (-5)
EDIT:
Okay, umlaeute's corrected code works perfectly, but only if I'm using v4l2src instead of the capture tool. And for now, that means grabbing the MJPEG stream rather than the H264 one. No skin off my nose, though I guess I'd prefer a more modern workflow. So anyway, here's what actually works, outputting an MJPEG video file and a real-time "viewfinder". Not perfectly elegant but very workable. Thanks for all your help!
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videorate ! 'image/jpeg, width=1280, height=720, framerate=24/1' ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! lamemp3enc ! queue ! mux.audio_0 \
avimux name=mux ! filesink location=/tmp/out.avi
GStreamer is often a bit dumb when it comes to automatically combining multiple different streams (e.g. using mp4mux).
In this case you should usually send a stream not only to a named element but to a specific pad (using the elementname.padname notation; the element. notation is really just shorthand for "any" pad in the named element).
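As a toy illustration of the pad-name notation, independent of the capture tool, here is a hedged sketch with test sources that requests mp4mux's video_0 and audio_0 pads explicitly (assuming x264enc and avenc_aac are installed):
gst-launch-1.0 -e mp4mux name=mux ! filesink location=/tmp/test.mp4 \
videotestsrc num-buffers=100 ! x264enc ! mux.video_0 \
audiotestsrc num-buffers=100 ! audioconvert ! avenc_aac ! mux.audio_0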
Also, it seems that you forgot the h264parse before the mp4mux (if you look at the path the video takes, it really boils down to filesrc ! queue ! mp4mux, which is probably a bit rough).
While I cannot test the pipeline, I guess something like the following should do the trick:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mp4mux ! filesink location=/tmp/out.mp4
With audio it's probably more complicated; try something like this (obviously assuming that you can read audio using the alsasrc device="plughw:2,0" element):
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24"" ! audioconvert ! queue ! mux.audio_0 \
mp4mux name=mux ! filesink location=/tmp/out.mp4
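Incorporating the two fixes the asker describes above (dropping the stray quotation mark and encoding the audio to something MP4 can carry, e.g. avenc_aac), the audio-enabled pipeline would presumably become:
/usr/local/bin/capture -d /dev/video1 -c 100000 -o | \
gst-launch-1.0 -e filesrc location=/dev/fd/0 ! h264parse ! tee name=myvid \
! queue ! decodebin ! xvimagesink sync=false \
myvid. ! queue ! mux.video_0 \
alsasrc device="plughw:2,0" ! "audio/x-raw,rate=44100,channels=1,depth=24" ! audioconvert ! avenc_aac ! queue ! mux.audio_0 \
mp4mux name=mux ! filesink location=/tmp/out.mp4
Per the edits above, even this still failed with the capture tool; the v4l2src variant was what finally worked.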
