I tried to build a pipeline that decodes an MP3 file into an appsink, but it does not work.
What I tried:
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert ! appsink caps=audio/x-raw,format=S16LE,rate=48000 name=sink
When I dump the sink's data to a .wav file afterwards, it is not recognized as a .wav, and when I open it in Audacity as raw data it's just noise.
I can't use filesink because I need the samples in an appsink for an unrelated purpose.
My best guess is that my pipeline is wrong. If someone has an idea, don't hesitate to ask me questions!
The pipeline was wrong, as expected.
The correct pipeline was:
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert ! capsfilter caps="audio/x-raw,format=S16LE,rate=48000,channels=2" ! appsink name=sink
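To sanity-check the decoded samples outside of the application, the same decode chain can write a real WAV file instead (a sketch; wavenc adds the header that raw appsink data lacks, so the result should open directly in Audacity):

```shell
# Sketch: same decode/convert chain, but wavenc writes a proper WAV header.
# File names are placeholders.
gst-launch-1.0 filesrc location=myfile.mp3 ! decodebin ! audioresample ! audioconvert \
    ! capsfilter caps="audio/x-raw,format=S16LE,rate=48000,channels=2" \
    ! wavenc ! filesink location=check.wav
```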
I'm trying to resample audio files from a shell script, and the command I use is below.
But I'm not sure the parameters I write are right: whatever I write as parameters never gives me an error, so I can't tell whether they are correct.
The command is:
gst-launch-1.0 -v filesrc location="$file" ! decodebin ! audioconvert ! audioresample ! audio/x-raw, format=S24LE,rate=176400,dithering=tpdf_hf,dithering-threshold=24,noise-shaping=high,quality=10,resample-method=blackman-nuttall,sinc-filter-interpolation=linear,sinc-filter-mode=full ! alsasink device=hw:1,0
if I change the string to:
gst-launch-1.0 -v filesrc location="$file" ! decodebin ! audioconvert ! audioresample ! audio/x-raw, format=S24LE,rate=176400,xxxx=xxxx,xxxx=xx,xxxx=xxxx,xxxxx=xxx,resample-method=xxxxx,sinc-filter-interpolation=xxxx,sinc-filter-mode=xxxxx ! alsasink device=hw:1,0
still works!
How do I know that I have written the parameter names correctly, and that they are actually applied?
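One possible way to check (a sketch, on the assumption that these names are element properties rather than caps fields): `dithering` and `noise-shaping` belong to audioconvert, while `quality`, `resample-method`, and the sinc-filter settings belong to audioresample. gst-inspect-1.0 lists the exact names and valid values, and a misspelled element property makes gst-launch-1.0 fail immediately, whereas, as observed above, bogus caps fields may pass silently:

```shell
# List the exact property names and their valid values:
gst-inspect-1.0 audioconvert
gst-inspect-1.0 audioresample

# Sketch: set them as element properties, keeping only format/rate as caps.
# A misspelled property name here aborts gst-launch-1.0 with an error.
gst-launch-1.0 -v filesrc location="$file" ! decodebin \
    ! audioconvert dithering=tpdf-hf noise-shaping=high \
    ! audioresample quality=10 resample-method=blackman-nuttall \
    ! audio/x-raw,format=S24LE,rate=176400 ! alsasink device=hw:1,0
```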
Sender pipeline
pulsesrc name=audio_cap mute=0 ! queue ! audiorate ! queue ! audioconvert ! audioresample name=aud_resample ! audio/x-raw,rate=48000 ! queue silent=true max-size-buffers=100 flush-on-eos=true ! opusenc ! queue ! appsink sync=false async=false
Rx pipeline
appsrc caps="audio/x-opus" ! audio/x-opus,channel-mapping-family=0 ! queue ! opusdec ! audioconvert ! audioresample ! audio/x-raw,format=S16LE,rate=44100,channels=2 ! audiorate ! autoaudiosink
But if I add oggmux and oggdemux, it starts playing.
Rx working pipeline
appsrc caps="audio/x-opus" ! audio/x-opus,channel-mapping-family=0 ! queue ! opusparse ! oggmux ! queue ! oggdemux ! opusdec ! audioconvert ! audioresample ! audio/x-raw,format=S16LE,rate=44100,channels=2 ! audiorate ! autoaudiosink
This is by design. An Opus decoder needs to be fed full opus packets.
Unfortunately, if you just turn your RAW packets that come out of the encoder into a stream of bytes, there is no way to get back to the original packet boundaries.
So you need a container like ogg or mpegts so that the original packet can be recreated.
See also https://www.rfc-editor.org/rfc/rfc6716 section 3
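Following that reasoning, an alternative sketch (assuming the sender side can also be changed) is to add the container before the appsink, so the receiver no longer needs the opusparse/oggmux/oggdemux detour:

```shell
# Sender sketch: mux the encoded Opus packets into Ogg before the appsink,
# so the packet boundaries survive the byte stream.
pulsesrc ! audioconvert ! audioresample ! audio/x-raw,rate=48000 \
    ! opusenc ! oggmux ! appsink sync=false async=false

# Receiver sketch: oggdemux recovers whole packets for opusdec.
appsrc ! oggdemux ! opusdec ! audioconvert ! audioresample ! autoaudiosink
```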
I faced almost the same problem. I ended up creating a plugin, but originally it didn't work either. Then I realized that I needed to push a gst_event_new_caps event onto the source pad, and the receiving pipeline started to work.
GStreamer plugin src -> opusdec sink gives "error: decoder not initialized"
I am trying to generate MP4s from HLS streams with discontinuity tags. Since the videos are from the same source, the FPS and the resolution (WxH) are the same.
I tested with the following pipeline to demux and play the stream, and it works fine:
gst-launch-1.0 -v souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder \
! queue ! autovideosink decoder. ! queue ! autoaudiosink
To this I added the x264enc and avenc_aac encoders to save it to a file, and it keeps failing with
"gstadaptivedemux.c(2651): _src_chain (): /GstPipeline:pipeline0/GstHLSDemux:hlsdemux0"
Failing Pipeline
gst-launch-1.0 -v mp4mux name=mux faststart=true presentation-time=true ! filesink location=dipoza.mp4 \
souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder ! queue name=q1 ! \
videoconvert ! queue name=q2 ! x264enc name=encoder ! mux. decoder. \
! queue name=q3 ! audioconvert ! queue name=q4 ! avenc_aac ! mux.
I'd really appreciate any help with this.
After a lot of debugging, I found the issue with my pipeline. Thanks a lot to @FlorianZwoch for suggesting that I move to the voaacenc encoder.
voaacenc is not installed by default with gst-plugins-bad on macOS, so I had to run
brew reinstall gst-plugins-bad --with-libvo-aacenc
The following pipeline worked well with my application.
gst-launch-1.0 --gst-debug=3 mp4mux name=mux ! \
filesink location=xxxx.mp4 souphttpsrc location=<hls url> ! decodebin name=decode ! \
videoconvert ! videorate ! video/x-raw, framerate=50/1 ! queue ! x264enc ! mux. decode. ! \
audioconvert ! voaacenc ! mux.
Also, in my HLS stream some video segments had 50 FPS and some had 59.97 FPS, so I used videorate to force everything to 50. This might need to change depending on your segments.
For those folks who want C++ code for the same, please check out my GitHub page.
I have a SAA7134 TV card and I want to record video with sound using GStreamer. I use this command to make sure I can hear the audio, and it works:
gst-launch-1.0 alsasrc device="hw:1,0" ! queue ! audioconvert ! alsasink
This command proves that I can watch the video (it also works fine):
gst-launch-1.0 v4l2src device=/dev/video0 ! xvimagesink
This command works fine and lets me write the sound to a file:
gst-launch-1.0 alsasrc device="hw:1,0" ! queue ! audioconvert ! wavenc ! filesink location=/home/out/testout.wav
But this command writes only the video, without any sound:
gst-launch-1.0 v4l2src device=/dev/video0 ! queue ! videoconvert ! jpegenc ! mux. alsasrc device="hw:1,0" ! queue ! audioconvert ! lamemp3enc bitrate=192 ! mux. avimux name=mux ! filesink location=/home/out/testout.avi
The same happens with
gst-launch-1.0 v4l2src device=/dev/video0 ! queue ! videoconvert ! theoraenc ! mux. alsasrc device="hw:1,0" ! queue ! audioconvert ! vorbisenc ! mux. oggmux name=mux ! filesink location=/home/out/testout.ogg
How can I solve this problem? Thank you.
P.S. I use Ubuntu 16.04.3 LTS.
It looks like I missed one important detail of the gst-launch syntax. I took a better look at the documentation and found this:
The -e option forces EOS on sources before shutting the pipeline down. This is useful when we write to files and want to shut down by killing gst-launch using CTRL+C or with the kill command
When I tested this option I finally got both the video and audio.
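Applied to the failing command above, the fix is just the extra flag (a sketch; -e makes Ctrl+C send EOS through the pipeline so avimux can finalize the file):

```shell
# Sketch: same pipeline as before, with -e so that interrupting gst-launch
# sends EOS and avimux can write the AVI headers/index before the file closes.
gst-launch-1.0 -e v4l2src device=/dev/video0 ! queue ! videoconvert ! jpegenc ! mux. \
    alsasrc device="hw:1,0" ! queue ! audioconvert ! lamemp3enc bitrate=192 ! mux. \
    avimux name=mux ! filesink location=/home/out/testout.avi
```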
I'm trying to mux a video-only .ogv file with a .mp3 file to get a .ogv file with both video and audio, using GStreamer (v0.10). I tried this pipeline:
gst-launch-0.10 filesrc location="video.ogv" ! oggdemux ! queue ! oggmux name=mux ! filesink location="test.ogv" filesrc location="audio.mp3" ! audioconvert ! vorbisenc ! queue ! mux.
When I use this command line, I get an error:
ERROR: from element /GstPipeline:pipeline0/GstAudioConvert:audioconvert0: not negotiated
Additional debugging information:
gstbasetransform.c(2541): gst_base_transform_handle_buffer (): /GstPipeline:pipeline0/GstAudioConvert:audioconvert0:
not negotiated
ERROR: the pipeline doesn't want to preroll.
I can't see what's wrong. Any suggestions?
Thanks.
You need to add an MP3 decoder between the filesrc and audioconvert, or just use decodebin, e.g.
gst-launch-0.10 filesrc location="video.ogv" ! oggdemux ! queue ! oggmux name=mux ! filesink location="test.ogv"
filesrc location="audio.mp3" ! decodebin ! audioconvert ! vorbisenc ! queue ! mux.