GStreamer pipeline to mix three audio sources?

This mixes two files:
gst-launch uridecodebin uri=file:///tmp/file1.mp3 ! adder name=m ! autoaudiosink uridecodebin uri=file:///tmp/file2.mp3 ! audioconvert ! m.
How do I mix three files?

gst-launch-1.0 uridecodebin uri=file:///tmp/file1.mp3 ! audioconvert ! adder name=m ! audioconvert ! autoaudiosink \
uridecodebin uri=file:///tmp/file2.mp3 ! audioconvert ! m. \
uridecodebin uri=file:///tmp/file3.mp3 ! audioconvert ! m.

gst-launch-0.10 adder name=mix ! alsasink filesrc location=file1.wav ! wavparse ! audioconvert ! mix. filesrc location=file2.wav ! wavparse ! audioconvert ! mix. filesrc location=file3.wav ! wavparse ! audioconvert ! mix.
Surely this isn't the best way, but it's a little start.

Replace "alsasink" with "wavenc ! filesink location=output.wav"?
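Putting that suggestion together with the pipeline above, the mixed result could be written to a WAV file instead of played back. This is an untested sketch based on the 0.10 pipeline above; the file names are placeholders:

```shell
# Mix three WAV files into output.wav instead of playing them
# (sketch: alsasink replaced by wavenc ! filesink, as suggested)
gst-launch-0.10 adder name=mix ! audioconvert ! wavenc ! filesink location=output.wav \
  filesrc location=file1.wav ! wavparse ! audioconvert ! mix. \
  filesrc location=file2.wav ! wavparse ! audioconvert ! mix. \
  filesrc location=file3.wav ! wavparse ! audioconvert ! mix.
```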


Change pitch on playback with gstreamer

I have assembled a pipeline for use with a Python program. While testing it in the console, I came across a strange result: if I play the resulting sound with a pipeline that includes the pitch element, I get only strange clicks, but if I remove the pitch part, I get clean sound.
Generator command:
gst-launch-1.0 -v filesrc location=morse.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=127.0.0.1 port=4000
Receive command:
gst-launch-1.0 audiomixer name=mixer udpsrc name=src0 uri=udp://127.0.0.1:4000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)4000, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! tee name=app ! queue ! rtpL16depay ! mixer.sink_0 udpsrc name=src1 uri=udp://127.0.0.1:5001 caps="application/x-rtp, media=(string)audio, clock-rate=(int)4000, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! queue ! rtpL16depay ! mixer.sink_1 mixer. ! tee name=t ! queue ! audioconvert ! audioresample ! audiorate ! pitch pitch=1.0 ! audiopanorama name=panorama panorama=-1.00 ! autoaudiosink name=audio_sink app. ! queue ! appsink name=asink emit-signals=True
When using GStreamer in my program, I would like to offload as many options as possible to it, since in my opinion it is much more reliable.
The question is: how do I adjust the pitch, and why does the pitch element keep the command from working?
You may need to set encoding-name=L16 in the udpsrc output caps:
Sender:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE ! rtpL16pay ! udpsink host=127.0.0.1 port=4000 -v
Receiver:
gst-launch-1.0 udpsrc port=4000 ! application/x-rtp,media=audio,encoding-name=L16,clock-rate=44100,format=S16BE ! rtpL16depay ! audioconvert ! autoaudiosink -v
[EDIT: This works fine on my side:
Created 2 live sources:
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE,rate=44100 ! rtpL16pay ! application/x-rtp,encoding-name=L16 ! udpsink host=127.0.0.1 port=4000 -v
gst-launch-1.0 audiotestsrc ! audioconvert ! audioresample ! audio/x-raw,format=S16BE,rate=44100 ! rtpL16pay ! application/x-rtp,encoding-name=L16 ! udpsink host=127.0.0.1 port=5001 -v
Then used:
gst-launch-1.0 \
audiomixer name=mixer \
udpsrc name=src0 uri=udp://127.0.0.1:4000 caps="application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! tee name=app ! queue ! rtpL16depay ! queue ! mixer.sink_0 \
udpsrc name=src1 uri=udp://127.0.0.1:5001 caps="application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, payload=(int)96" ! queue ! rtpL16depay ! queue ! mixer.sink_1 \
mixer. ! tee name=t ! queue ! audioconvert ! audioresample ! audiorate ! pitch pitch=1.0 ! audiopanorama name=panorama panorama=-1.00 ! autoaudiosink name=audio_sink \
app. ! queue ! appsink name=asink emit-signals=True
without problem.
What seems wrong is the low clock-rate in your pipeline. Try setting rate=44100 in the caps after audioresample in the senders.

Gstreamer duplicate 2channel audio

I would like to generate two audio tones using audiotestsrc, but then duplicate those two channels across 16 channels (i.e. 8 channels of one tone and 8 channels of the other).
I have a command that generates 2 tones for 2 channels:
gst-launch-1.0 interleave name=i ! audioconvert ! wavenc ! filesink location=file.wav audiotestsrc wave=0 freq=100 volume=0.4 ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x1" ! queue ! i.sink_0 audiotestsrc wave=1 freq=150 volume=0.4 ! decodebin ! audioconvert ! "audio/x-raw,channels=1,channel-mask=(bitmask)0x2" ! queue ! i.sink_1
I also have a command that generates 1 tone across 16 channels:
gst-launch-1.0 audiotestsrc wave=0 freq=100 volume=0.4 ! audio/x-raw,rate=48000,format=S16BE ! queue ! capssetter caps="audio/x-raw,channels=16,rate=48000,channel-mask=(bitmask)0xffff" ! audioconvert ! audioresample ! wavenc ! filesink location=test.wav
So my question:
Is there a way to combine these two commands?
I tried a few different options and assumed bitmasks of 0xaaaa and 0x5555 would be needed to "map" which channels get which tones. But I keep running into syntax errors or:
WARNING: erroneous pipeline: could not link capssetter0 to i
WARNING: erroneous pipeline: could not link queue0 to i
I feel like I'm close but not quite there. Any help would be greatly appreciated.
Looks like I found a solution that works, at least for an even output channel count:
gst-launch-1.0 interleave name=i audiotestsrc wave=0 freq=100 volume=0.4 ! decodebin ! audioconvert ! audio/x-raw,format=S16BE,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_0 audiotestsrc wave=2 freq=100 volume=0.4 ! decodebin ! audioconvert ! audio/x-raw,format=S16BE,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_1 i.src ! capssetter caps=audio/x-raw,format=S16BE,channels=6,channel-mask=(bitmask)0x3f ! audioconvert ! audioresample ! wavenc ! filesink location=test.wav
So the next trick would be to handle an odd output channel count, e.g. 5.
I tried mapping the output to 6 channels but using channel-mask=(bitmask)0x1f.
I also tried mapping the output to 6 channels and then using:
! audioconvert ! capssetter caps="audio/x-raw,format=S16BE,channels=5,channel-mask=(bitmask)0x1f" !
Neither of these worked.
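One untested avenue that might be worth trying: GStreamer treats channel-mask=(bitmask)0x0 as an unpositioned channel layout, which sidesteps the requirement that the channel count match the set bits of a speaker-position bitmask. A sketch for a 5-channel target, reinterpreting the interleaved stream the same way the 6-channel capssetter trick above does (whether the sample regrouping is acceptable for an odd count is an open question):

```shell
# Sketch (untested): two interleaved tones relabeled as 5 unpositioned
# channels. channel-mask=0x0 marks the layout as unpositioned, so an odd
# channel count does not need a matching speaker-position bitmask.
gst-launch-1.0 interleave name=i \
  audiotestsrc wave=0 freq=100 volume=0.4 ! audioconvert ! audio/x-raw,format=S16BE,channels=1,channel-mask=(bitmask)0x1 ! queue ! i.sink_0 \
  audiotestsrc wave=2 freq=100 volume=0.4 ! audioconvert ! audio/x-raw,format=S16BE,channels=1,channel-mask=(bitmask)0x2 ! queue ! i.sink_1 \
  i.src ! capssetter caps="audio/x-raw,format=S16BE,channels=5,channel-mask=(bitmask)0x0" \
  ! audioconvert ! audioresample ! wavenc ! filesink location=test5.wav
```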

Generating MP4 from HLS in gstreamer

I am trying to generate MP4s from HLS streams with discontinuity tags. Since the videos are from the same source, the FPS and the resolution (WxH) are the same.
I tested with the following pipeline to demux and play it, and it works fine:
gst-launch-1.0 -v souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder \
! queue ! autovideosink decoder. ! queue ! autoaudiosink
To this I added the x264enc and avenc_aac encoders to save it to a file, and it keeps failing with:
"gstadaptivedemux.c(2651): _src_chain (): /GstPipeline:pipeline0/GstHLSDemux:hlsdemux0"
Failing Pipeline
gst-launch-1.0 -v mp4mux name=mux faststart=true presentation-time=true ! filesink location=dipoza.mp4 \
souphttpsrc location=<HLS_URL> ! hlsdemux ! decodebin name=decoder ! queue name=q1 ! \
videoconvert ! queue name=q2 ! x264enc name=encoder ! mux. decoder. \
! queue name=q3 ! audioconvert ! queue name=q4 ! avenc_aac ! mux.
I really appreciate any help in this.
After a lot of debugging, I found the issue with my pipeline. Thanks a lot to @FlorianZwoch for suggesting I move to the voaacenc encoder.
voaacenc is not installed by default with gst-plugins-bad on macOS, so I had to run:
brew reinstall gst-plugins-bad --with-libvo-aacenc
The following pipeline worked well with my application.
gst-launch-1.0 --gst-debug=3 mp4mux name=mux ! \
filesink location=xxxx.mp4 souphttpsrc location=<hls url> ! decodebin name=decode ! \
videoconvert ! videorate ! video/x-raw, framerate=50/1 ! queue ! x264enc ! mux. decode. ! \
audioconvert ! voaacenc ! mux.
Also, in my HLS stream some video segments had 50 FPS and some had 59.97 FPS, so I used videorate to default to 50. This might need to change depending on your segments.
For those folks who want C++ code for the same, please check out my GitHub page.

gst-launch-1.0 can't stream audio/video through UDP and display it on a window simultaneously

I am successfully streaming a file (audio/video) through UDP on Windows and watching it on another machine with VLC (this was covered on Stackoverflow before):
gst-launch-1.0 -v filesrc location=video.mkv ! decodebin name=dec ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5000 sync=true dec. ! queue ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
You can test this in VLC: Media > Open Network Stream > Network URL > udp://@:5000
However, while the video is being streamed I would like to also display it on a window, so I could watch the stream myself (no audio needed).
To accomplish this, I started with a series of small experiments so I could change the original pipeline without any surprises. If you are reading this question you know my plan didn't work so well.
My first experiment was to display just the video on single window:
gst-launch-1.0 -v filesrc location=video.mkv ! decodebin ! autovideosink
Then, I changed it to display the same video on 2 windows, to make sure I understood how to work with multithreading:
gst-launch-1.0 -v filesrc location=video.mkv ! decodebin name=dec ! queue ! tee name=t t. ! queue ! videoconvert ! autovideosink t. ! autovideosink
Finally, the moment came to blend those two parts together and stream the video through the network while displaying it locally. The result is not what I expected, of course: only the first frame appears to be streamed and then everything freezes:
gst-launch-1.0 -v filesrc location=video.mkv ! decodebin name=dec ! tee name=t ! queue ! autovideosink t. ! queue ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5000 sync=true dec. ! queue ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
It seems that the data is not flowing through the pipeline anymore (for some reason unknown to me) and my attempt to add autovideosink broke everything.
Any tips on how to do this correctly?
The right moment to split the data is right after filesrc:
gst-launch-1.0 -v filesrc location=video.mkv ! tee name=t ! queue ! decodebin ! autovideosink t. ! queue ! decodebin name=dec ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5000 sync=true dec. ! queue ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
So the data flows directly to autovideosink before anything else happens, while the other thread joins at this very same moment, carrying the data flow through queue to the second decodebin.

How to convert pcap to avi file with video and audio by gstreamer?

I need to read a pcap file and convert it into an AVI file with audio and video using GStreamer.
If I try the following command, it only works for generating a video file.
Video Only
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
! "application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! avimux ! filesink location=testh264.avi
Audio Only
And if I use the following command, it only works for generating an audio file.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! avimux ! filesink location=test1audio.avi
Video + Audio
When I combine the two commands as follows, I encounter an error message:
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data flow error.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse src-port=44602 \
! "application/x-rtp, payload=96" ! rtph264depay ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
! ffdec_h264 ! videorate ! ffmpegcolorspace \
! queue ! mux. \
filesrc location=h264Audio.pcap ! pcapparse src-port=7892 \
! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec ! audioconvert ! audioresample ! queue ! avimux name=mux ! filesink location=testVideoAudio.avi
Please kindly give me some solutions or suggestions with regard to this issue.
Thank you in advance.
Eric
Instead of the second "filesrc ! pcapparse", give the first pcapparse a name=demux, drop the src-port argument, and start the second branch from demux.
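As I read that suggestion, the combined pipeline would look something like the sketch below. This is untested; whether a single pcapparse can actually feed both branches this way may depend on your GStreamer 0.10 version and on the pcap contents, so treat it as a starting point rather than a working command:

```shell
# Sketch of the suggested fix: one filesrc, one pcapparse named "demux"
# with no src-port filter, and both branches starting from demux.
gst-launch-0.10 -m -v filesrc location=h264Audio.pcap ! pcapparse name=demux \
  demux. ! "application/x-rtp, payload=96" ! rtph264depay \
  ! "video/x-h264, width=352, height=288, framerate=(fraction)30/1" \
  ! ffdec_h264 ! videorate ! ffmpegcolorspace ! queue ! mux. \
  demux. ! "application/x-rtp, payload=8" ! rtppcmadepay ! alawdec \
  ! audioconvert ! audioresample ! queue \
  ! avimux name=mux ! filesink location=testVideoAudio.avi
```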
