Detect and remove silence from an audio file using GStreamer

Is there any way to detect and remove silence from an audio file using GStreamer?
Currently, using the level element I can find the start and end times of the silence in the audio file, but how do I remove that silence and produce an output that doesn't contain it? I tried gst_element_seek, but no luck there.

There is a removesilence element in gstreamer-plugins-bad; perhaps it is what you want.
Edit: adding example
gst-launch-1.0 autoaudiosrc ! audioconvert ! removesilence remove=true ! fakesink silent=false -v
This launch line captures audio from your system's microphone and prints data to the screen whenever removesilence outputs buffers. It only passes audio through when some noise is detected.
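To process a file instead of the microphone and write the result back to disk, a pipeline along these lines should work (the filenames and the WAV output format are just placeholders for illustration; removesilence still requires gstreamer-plugins-bad to be installed):

```shell
# Decode input.mp3, strip the silent stretches, and re-encode the result as WAV.
# audioconvert ensures removesilence receives raw audio in a format it accepts.
gst-launch-1.0 filesrc location=input.mp3 ! decodebin ! audioconvert ! \
    removesilence remove=true ! wavenc ! filesink location=output.wav
```

Swap wavenc/filesink for a different encoder and muxer if you need another output format.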

Related

How to stream a mp4 video over stdin/stdout?

I need to stream a video to stdout and then read that stream from stdin again to display it. In the end there will be an application in the middle to handle the networking, but for now I want to test it directly. When I try this, the video timer runs, but the screen stays black.
vlc -I dummy video.mp4 --sout '#standard{access=file,mux=ogg,dst=-}' | vlc -
I have also tried GStreamer, but I have not managed to stream a video successfully yet.
gst-launch-1.0 filesrc location=video.mp4 ! fdsink | gst-launch-1.0 fdsrc fd=0 ! decodebin ! autovideosink
Does anyone have an idea how to do this?
Thanks in advance!
I had to struggle quite a bit with VLC options to make this work; I'm not even sure there aren't extra, useless, or wrong settings in there (like the extra verbosity ;-).
For the record, I got help from VLC's wiki.
On the receiving side:
vlc rtp://192.168.56.101
On the sending side:
vlc -vvv video.mp4 --sout '#duplicate{dst=display,dst="transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128,deinterlace}:rtp{mux=ts,dst=192.168.56.101,sdp=sap,name="TestStream"}"}'
NB:
You'll see two video outputs, which is due to the "duplicate" parameter in the streaming option (the value of --sout).
Don't forget to substitute your workstation's IP address in both commands.

Joining MP3's via commandline with background track

I'm already taking advantage of two command-line utilities: I'm using ffmpeg to convert M4A to MP3, and then I'm combining a few MP3s into one large one using mp3wrap. The resulting file is something like this:
BackgroundMusic.mp3 > Audio1.mp3 > Audio2.mp3
I need something more like
Audio1.mp3 > Audio2.mp3
|_____________________|
|
BackgroundMusic.mp3
where the background music runs continuously underneath. It would also be nice to be able to change the volume of each track.
Does anyone know of a command-line program like mp3wrap that can also mix in a background track?
I won't be able to use a GUI program such as Audacity, as all of this will be automated on the server.
Thanks!
You can do this with FFmpeg alone.
ffmpeg -i input_audio1 -i input_audio2 -i input_background_audio -filter_complex "
aevalsrc=0:d=10[s1];
[0:a]volume=volume=0.1[volume0];
[1:a]volume=volume=0.1[volume1];
[2:a]volume=volume=0.1[volume2];
[s1][volume1]concat=n=2:v=0:a=1[ac1];
[volume0][ac1]amix=inputs=2[amixed1];
[amixed1][volume2]amix=inputs=2:duration=first" output_audio
You need to use -filter_complex to chain together all the filters used for adjusting volumes, inserting silence, concatenating, and so on.
As a first step, concatenate the two audio files that should play one after the other. To do that, I first created a silent audio stream with the aevalsrc filter, with the same duration as the first audio clip (d=10 here), and then used the concat filter to join the silence and the second audio.
To adjust the levels I used the volume filter; tweak the volume values as needed. To mix the streams, use the amix filter. Specifying duration=first makes the mix stop at the duration of the first input ([amixed1], i.e. audio1 + audio2), instead of playing for the full duration of the background track.
Hope this helps!

Gstreamer, how to route output to a file instead of the framebuffer

Good afternoon,
I need to know how to use GStreamer to send frame data to a file instead of the framebuffer.
I want to be able to open the file with another program, edit some video data, and then forward it to the framebuffer.
Is there a GStreamer command for doing this?
Thanks,
You need a filesink.
You can simply add one at the end of your pipeline:
mypipeline ! filesink location=myfile
This will write the stream into a file named myfile.
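As a self-contained illustration, the following pipeline writes a short test video to an MP4 file; videotestsrc and the H.264 encoder choice are just placeholders, so substitute your actual source and codec:

```shell
# Generate 100 frames of test video, encode to H.264, mux into MP4, and write to disk.
# The -e flag sends EOS on shutdown so the muxer can finalize the file properly.
gst-launch-1.0 -e videotestsrc num-buffers=100 ! x264enc ! h264parse ! mp4mux ! filesink location=myfile.mp4
```

Note that if you dump a raw or encoded stream with a bare filesink (no muxer), some players may not be able to open the file; mux into a container when another program needs to read it.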

MPlayer — changing brightness/contrast of a video file and saving the output

I need to permanently change the brightness and contrast of a video. I tried this:
mplayer -vf eq=50:50 a.mp4 -dumpstream
mv stream.dump b.mp4
But it saves a file that looks just like the original. Any ideas?
You want to use mencoder to transcode the video and apply the video filter eq=50:50. When you use -dumpstream with mplayer, it simply dumps the stream; the video filter is only applied during playback. Take a look at the mencoder options: you'll need to choose a video codec and some options for that codec (like bitrate), and then you can apply the brightness and contrast filter.
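A minimal sketch, assuming MPEG-4 output via libavcodec and a copied audio track (the bitrate of 1200 kbit/s is just an example value; adjust the codec and options to taste):

```shell
# Re-encode a.mp4 with the brightness/contrast filter applied,
# using the lavc MPEG-4 encoder and copying the audio stream unchanged.
mencoder a.mp4 -vf eq=50:50 -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=1200 -oac copy -o b.mp4
```

Unlike -dumpstream, this actually re-encodes the frames, so the filtered picture is baked into b.mp4.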

Record screen and audio, then generate one video file in Java

I am writing a program similar to http://www.screencast-o-matic.com/. I used an applet and imported jmf.jar into my project, but it couldn't find any capture devices, so it couldn't capture audio or video.
I can capture the screen to video, but it has no sound; I can capture sound, but it has no video. I tried to use JMF to merge the two streams into one video file, but it fails with an error.
Can anybody help me solve this problem? Thanks for your help.
You can use the Xuggle-Xuggler API (http://www.javacodegeeks.com/2011/02/introduction-xuggler-video-manipulation.html), which is a wrapper around the FFmpeg command-line tool. Both are open source.
