GStreamer: split audio into multiple parts by seconds

Platform is Windows 10 64-bit, using the prebuilt GStreamer 1.0 binaries downloaded from the official website.
I would like to split audio into multiple parts with GStreamer. For example, given
audio.wav/audio.mp3/audio.m4a, each 60 seconds long, I would like to split audio.* into 10-second pieces, so the results should be
audio0.wav~audio5.wav, audio0.mp3~audio5.mp3, or audio0.m4a~audio5.m4a.
The equivalent ffmpeg command is
ffmpeg -y -i audio.wav -f segment -segment_time 10 -c copy audio%d.wav
I tried GStreamer's multifilesink, for example
gst-launch-1.0 filesrc location=audio.mp3 ! multifilesink location=audio/test/test02%d.mp3 next-file=5 max-file-duration=10000000000
But this does not split the audio into 10-second pieces. What am I doing wrong? Thanks

You want to look at splitmuxsink; check its properties, especially muxer and max-size-time. (Your multifilesink attempt fails because the buffers coming straight out of filesrc carry no timing information, so max-file-duration has nothing to measure.)
E.g.:
gst-launch-1.0 audiotestsrc ! \
splitmuxsink location=out_%d.wav muxer=wavenc max-size-time=10000000000
will generate 10-second-long WAV files (max-size-time is in nanoseconds, so 10000000000 = 10 seconds).
Edit: the original command does not work; it should be
gst-launch-1.0 filesrc location=audio/audio.mp3 ! decodebin ! audioconvert ! splitmuxsink location=audio/test/out_%d.wav muxer=wavenc max-size-time=10000000000
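If you would rather get the segments as AAC in .m4a containers than as WAV, a variation along these lines should work (a sketch, not tested: voaacenc availability and audio-only splitmuxsink support depend on your GStreamer build):
gst-launch-1.0 filesrc location=audio/audio.mp3 ! decodebin ! audioconvert ! voaacenc ! splitmuxsink location=audio/test/out_%d.m4a muxer=mp4mux max-size-time=10000000000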

Related

GStreamer: Encode microphone audio with AAC into mp4

Wondering if it is possible to encode audio with AAC into an mp4 container.
I have tried using the following
gst-launch-1.0 alsasrc device="hw:0,0" ! "audio/x-raw,rate=48000,channels=2,format=S16LE" ! queue ! audioconvert ! avenc_aac ! qtmux ! filesink location=audio.mp4
The program runs without a fault, but when I inspect the file content, it gives me null content.
However, when I run it with avimux, the file content shows the encoding and details like the length of the audio:
gst-launch-1.0 alsasrc device="hw:0,0" ! "audio/x-raw,rate=48000,channels=2,format=S16LE" ! queue ! audioconvert ! avenc_aac ! avimux ! filesink location=audio.mp4
Wondering what is wrong, as I need AAC encoding (for later RTSP streaming) and need to use mp4 as the container with qtmux.
thanks
thanks
You don't really say what you are doing exactly, but most likely you are missing the -e option for gst-launch-1.0. With that, an EOS signal is propagated through the pipeline to correctly finalize the mp4 file. Other file formats are not that picky, but mp4 needs to write a proper index once all samples have been written.
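A sketch of the command with -e added (otherwise the same as in the question); stop it with Ctrl+C so EOS travels through the pipeline and qtmux can write the index:
gst-launch-1.0 -e alsasrc device="hw:0,0" ! "audio/x-raw,rate=48000,channels=2,format=S16LE" ! queue ! audioconvert ! avenc_aac ! qtmux ! filesink location=audio.mp4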

GStreamer audio problem on embedded Linux

I work on embedded Linux. I want to play video with minimal CPU usage, so after I finished compiling I tried playing video with MPlayer and GStreamer. MPlayer uses 10-20% CPU on average, and I want to match that performance with GStreamer, so I tried these commands:
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
These commands use 10-20% CPU on average, which is the number I want, but audio did not work with them. I tried adding an audio element but could not get it to work.
I also tried gst-launch-1.0 playbin uri=file:///video_path.mpeg. Audio works with this command, but CPU usage is much higher, and I'd rather not use it.
How can I get audio working with commands 1 or 2?
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
With the above two pipelines you are asking GStreamer to play just the video; as a result you aren’t getting any audio.
gst-launch filesrc location=video_path.mpeg ! mpegdemux name=demuxer demuxer. ! queue ! mpeg2dec ! autovideosink demuxer. ! queue ! mad ! audioconvert ! audioresample ! autoaudiosink
The above pipeline should play both audio and video.
Note: if you have support for hardware decoding, that would reduce CPU usage further.
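If you move to GStreamer 1.0, a rough equivalent using decodebin follows the same pattern (a sketch; decodebin will pick whatever MPEG demuxer and decoders your installation provides, so CPU usage depends on which ones get selected):
gst-launch-1.0 filesrc location=video_path.mpeg ! decodebin name=d d. ! queue ! autovideosink d. ! queue ! audioconvert ! audioresample ! autoaudiosink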

GStreamer: stream audio and video via UDP for playback in VLC

I am trying to stream audio and video with GStreamer over UDP, but playback in VLC only gives me video without audio. I am currently using a sample of Big Buck Bunny and have confirmed that it does have audio. I am planning to use Snowmix to feed media to the GStreamer output in the future.
I currently stream from a file source over UDP for playback in VLC with:
gst-launch-1.0 -v uridecodebin uri=file:///home/me/files/Snowmix-0.5.1/test/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! mpegtsmux ! queue ! udpsink host=230.0.0.1 port=4012 sync=true
which lets me open a network stream in VLC on my Windows machine; it receives packets but only plays video.
What am I missing from my command?
As RSATom stated previously, the audio is missing from the pipeline.
The correct pipeline for video and audio is the following (tested with the same input file):
gst-launch-1.0 -v uridecodebin name=uridec uri=file:///home/usuario/Desktop/map/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! queue ! videoconvert ! x264enc ! video/x-h264 ! mpegtsmux name=mux ! queue ! udpsink host=127.0.0.1 port=5014 sync=true uridec. ! audioconvert ! voaacenc ! audio/mpeg ! queue ! mux.
Remember that in this case you're re-encoding all the content from the source video file, which means high CPU consumption. Another option would be to demux the content from the input file and mux it again without re-encoding (using h264parse and aacparse).
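A sketch of that demux/remux approach (untested; the qtdemux pad names video_0 and audio_0 are the usual defaults, adjust if yours differ):
gst-launch-1.0 -v filesrc location=/home/usuario/Desktop/map/big_buck_bunny_720p_H264_AAC_25fps_3400K.MP4 ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! mpegtsmux name=mux ! udpsink host=127.0.0.1 port=5014 sync=true demux.audio_0 ! queue ! aacparse ! mux.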

gstreamer pipeline for streaming multiplexed h.264 and aac audio between two Raspberry Pis

I have been stuck on this for days now. I am trying to come up with a GStreamer pipeline that will stream h.264 video and compressed audio (aac, mulaw, whatever, I don't really care) over a single rtp stream. The problem always seems to be with the multiplexer. I've tried asf, avi, mpegts, Matroska and flv multiplexers, and it seems they are all oriented towards files (not network streaming) and therefore require header information. Anyway, here's my latest attempt:
gst-launch-1.0 -e --gst-debug-level=4 \
flvmux name=flashmux streamable=true ! flvdemux name=flashdemux ! decodebin name=decode \
videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! flashmux. \
audiotestsrc ! 'audio/x-raw,format=S16LE,rate=22050,channels=2,layout=interleaved' ! flashmux. \
decode. ! queue ! autovideoconvert ! fpsdisplaysink sync=false \
decode. ! queue ! audioconvert ! alsasink device="hw:1,0"
This pipeline removes RTP and simply feeds the encoder output straight into the decoder. Also, this attempt uses raw audio, not encoded. Any help will be greatly appreciated!
To stream video+audio you should use 2 different ports.
Use the rtpbin element to manage the RTP session.
Example http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp/server-v4l2-H264-alsasrc-PCMA.sh
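In the spirit of that script, a minimal two-port sketch without the full rtpbin/RTCP plumbing (host and ports are placeholders; omxh264enc and the video caps are kept from the question, and PCMA/alaw audio mirrors the linked example):
gst-launch-1.0 -e videotestsrc ! 'video/x-raw,width=640,height=480,framerate=15/1' ! omxh264enc ! h264parse ! rtph264pay ! udpsink host=192.168.1.10 port=5000 \
audiotestsrc ! audioconvert ! alawenc ! rtppcmapay ! udpsink host=192.168.1.10 port=5002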

Error in playing multichannel audio at different instances?

I am trying to play 2 channels, with audio playing in one channel and silence in the other.
$ gst-launch \
interleave name=i ! alsasink \
filesrc location=/home/test1.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
After 10 seconds I want to play silence in the first channel and some audio in the second.
$ gst-launch \
interleave name=i ! alsasink \
audiotestsrc wave=silence \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! i. \
filesrc location=/home/test2.mp3 \
! decodebin ! audioconvert \
! audio/x-raw-int,channels=1 ! volume volume=1.0 ! i.
This works on the PC side, playing these pipelines in two different terminals or running one of them in the background. But when I play one pipeline on the AM335x board and try to start the other one, I get something like this:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstAlsaSink:alsasink0: Could not open audio device for playback.
Device is being used by another application.
Additional debug info:
gstalsasink.c(697): gst_alsasink_open (): /GstPipeline:pipeline0/GstAlsaSink:alsasink0:
Device 'default' is busy
Setting pipeline to NULL ...
Freeing pipeline ...
When we check gstalsasink.c, it calls snd_pcm_open in non-blocking mode:
CHECK (snd_pcm_open (&alsa->handle, alsa->device, SND_PCM_STREAM_PLAYBACK,
SND_PCM_NONBLOCK), open_error);
Then why is it blocking other applications from using the audio device?
Can anyone suggest what to do on the target side, since alsasink works perfectly on the PC side?
Could there be a small delay in closing the ALSA device on your embedded hardware? Check with fuser which process still has it open. Also consider using gnonlin for developing sequential playback of streams; this will reuse the existing audio sink.
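Note that SND_PCM_NONBLOCK only makes snd_pcm_open return -EBUSY immediately instead of waiting for the device to become free; it does not make the device shareable between applications. To see which process holds it, something like this should work (the PCM device node is an assumption; adjust for your card):
fuser -v /dev/snd/pcmC0D0p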
