ffmpeg equivalent for sox -t ima - audio

I am trying to use ffmpeg to combine one audio file (ADPCM) and one video file (h264) into a single mp4. The video file conversion works fine, but ffmpeg chokes on guessing the audio input format. I can't figure out how to tell ffmpeg which parameters to use to decode the raw audio file.
Currently I first run sox to convert the raw audio to wav:
sox -t ima -r 8000 audio.raw audio.wav
... and then feed audio.wav from sox to ffmpeg as an input:
ffmpeg -i video.raw -i audio.wav movie.mp4
I am trying to avoid the sox step and use audio.raw directly in ffmpeg.
Thank you

Since you have headerless audio, you should tell ffmpeg the sample format and, optionally, the sample rate and number of audio channels, e.g.:
ffmpeg -i video.raw -f s16le -ar 22050 -ac 1 -i audio.raw movie.mp4
To check supported PCM formats you may use this command:
ffmpeg -formats 2>&1 | grep -i pcm
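Note that the sox command above treats the data as IMA ADPCM rather than plain PCM, so s16le may not be the right format for it; it can be worth checking first whether your ffmpeg build lists a matching raw demuxer or decoder:
ffmpeg -decoders 2>&1 | grep -i adpcm
ffmpeg -formats 2>&1 | grep -i adpcm
If the raw data does turn out to be plain signed 16-bit PCM, a sketch matching the 8000 Hz rate from the sox command (and assuming mono) would be:
ffmpeg -i video.raw -f s16le -ar 8000 -ac 1 -i audio.raw movie.mp4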

Related

Detect silence(s) in audio channel from video stream

We need to detect the 'silence'(s) in the audio channel of a video stream. We have been able to receive a UDP video stream and extract audio from it using the command:
ffmpeg -y -i udp://127.0.0.1:23000 -ab 3000k -ar 44100 -ac 1 test.wav
The audio file was saved only to verify whether audio has been extracted correctly or not.
To detect 'silence'(s) in the audio, we are using the silencedetect filter. We referred to some examples and it seems to work for audio files:
ffmpeg -i audio/file/path -af silencedetect=noise=-50dB:d=0.25 -f null -
We are unable to detect silence(s) in the audio from a video stream. This is the command we came up with:
ffmpeg -y -i udp://127.0.0.1:23000 -ab 3000k -ar 44100 -ac 1 -af silencedetect=noise=-50dB:d=0.25 -f null -
What is it that we are doing wrong? Any help would be appreciated.
Thanks!
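One thing worth checking while debugging: silencedetect reports its detections in ffmpeg's log output (stderr), not in the output file, so a quick way to confirm whether the filter fires on the live stream is something like this (a sketch for verification, not necessarily the fix):
ffmpeg -i udp://127.0.0.1:23000 -af silencedetect=noise=-50dB:d=0.25 -f null - 2>&1 | grep silence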

How do I convert wav into an mxf file with timecode?

I'm looking for a way to convert a wav (16-bit, 48 kHz, LPCM) into an mxf file with timecode.
Since ffmpeg supports mxf, I'm trying to use it, but I don't know the right command.
ffmpeg -i ./input.wav [hh:mm:ss.ff, name1] [hh:mm:ss.ff, name2]... ./output.mxf
I'm expecting something like the above command; does anyone know how to do this?
MXF is a pain
The default MXF muxer requires video.
The -timecode option with MXF requires video.
The mxf_opatom muxer allows just audio, but only mono with a 48000 Hz sample rate, so each channel will need to be in its own MXF file.
Workaround 1: Pipe
ffmpeg -i input.wav -ar 48000 -c:a pcm_s16le -timecode 01:02:03:04 -f nut - | ffmpeg -i - -c:a pcm_s16le -f mxf_opatom output.mxf
I'm assuming your audio is mono (you didn't say what it is). If your input is multichannel, output each channel into its own file (see the sketch below).
Use 01:02:03:04 for non-drop timecode, and 01:02:03.04 or 01:02:03;04 for drop.
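For example, with a stereo source (an assumption), channelsplit can produce one mono file per channel, and each of those can then be run through the pipe above to get its own MXF:
ffmpeg -i input.wav -filter_complex "channelsplit=channel_layout=stereo[L][R]" -map "[L]" left.wav -map "[R]" right.wav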
Workaround 2: Dummy/blank video
Just ignore the video.
Non-drop timecode:
ffmpeg -f lavfi -i color=r=25 -i input.wav -timecode 01:02:03:04 -c:a copy -shortest output.mxf
Drop timecode:
ffmpeg -f lavfi -i color=r=30000/1001 -i input.wav -timecode 01:02:03.04 -c:a copy -shortest output.mxf

How to extract audio in 8khz using ffmpeg

I am using ffmpeg to extract the audio from a video. The code below downloads the audio from a video file. I'm not sure how efficient it is, but I do know that it downloads the audio at 48 kHz.
How do I use this to extract the audio from a video at 8 kHz? The file is getting too big.
ffmpeg -i video_link -vn output.wav
Use the -ar option to change the sample rate:
ffmpeg -i video_link -vn -ar 8000 output.wav
If you want to try different audio formats, check the available formats in ffmpeg using ffmpeg -formats and the available codecs using ffmpeg -codecs.
Here's an example that extracts to an mp3 file:
ffmpeg -i video_link -vn -ar 8000 -f mp3 output.mp3
Edit: as #llogan pointed out, the -f option is not needed; ffmpeg automatically muxes to mp3 based on the output file extension.
ffmpeg -i video_link -vn -ar 8000 output.mp3
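If file size is the main concern, reducing the channel count also helps; assuming mono output is acceptable:
ffmpeg -i video_link -vn -ar 8000 -ac 1 output.wav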

ffmpeg to calculate audio/visual difference between compressed and non-compressed video

I'm trying to calculate the audio + visual difference between a harshly compressed video file and one that hasn't been.
I'm using pipes because ultimately I want this to take its source from a camera stream.
I've managed to get the video results that I'm looking for, but I'm struggling with the audio.
I've added a line to invert the phase of the compressed audio, so that when the two are mixed they should almost cancel each other out, but that doesn't happen.
ffmpeg -i input.avi -f avi -c:v libxvid -qscale:v 30 -c:a wmav1 - | \
ffmpeg -i - -f avi -af "aeval='-val(0)':c=same" - | \
ffmpeg -i input.avi -i - -filter_complex "blend=all_mode=difference" -c:v libx264 -crf 18 -f avi - | \
ffplay -
I can still hear all the audio, when what I should be hearing is solely the compression artifacts. Thanks!
To preface, I'm not sure your method would identify audio compression 'artifacts'.
Your command doesn't perform any audio comparison; it only inverts a single channel. Also, the audio and video are compressed twice, and the codecs the last ffmpeg command receives are the default AVI codecs, mpeg4 and mp3.
Use
ffmpeg -i input.avi -f matroska -c:v libxvid -qscale:v 30 -c:a wmav1 - |\
ffmpeg -i input.avi -i - -filter_complex "[0][1]blend=all_mode=difference;[1]aselect=gt(n\,0),asetpts=PTS-STARTPTS[1a];[0][1a]amerge,aeval=val(0)-val(1):c=mono" -c:v rawvideo -c:a pcm_s16le -f matroska - |\
ffplay -
I assume your audio is mono. If your audio has N channels, your aeval will need N expressions, where the Mth expression is val(M-1)-val(N+M-1).
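For example, assuming a stereo source (N = 2), the merged stream carries 4 channels, and the second ffmpeg command in the pipe above would become:
ffmpeg -i input.avi -i - -filter_complex "[0][1]blend=all_mode=difference;[1]aselect=gt(n\,0),asetpts=PTS-STARTPTS[1a];[0][1a]amerge,aeval=val(0)-val(2)|val(1)-val(3):c=stereo" -c:v rawvideo -c:a pcm_s16le -f matroska -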
I also trim out the first encoded audio frame in order to mitigate the encoder delay that Paul mentioned, and it seems to work here.
There might be some delay introduced with encoded audio samples. Also your command is incorrect.

How to record audio stream using ffmpeg?

I have a problem using ffmpeg:
when I try to record video+audio from my webcam, the result contains only the video stream, without any audio at all.
I have tried different codecs, but nothing works.
Maybe someone can give me advice?
ffmpeg -f dshow -i video="Logitech HD Webcam C270" -r 25 -s 800x600 -acodec libmp3lame -vcodec mpeg4 -b 3000k -f avi D:\1.avi
By the way: VirtualDub grabs both fine.
Thanks.
Assuming that you have installed the driver and codecs, use something like:
ffmpeg -f dshow -i video="Logitech HD Webcam C270" [path]out.mp4
A short explanation is given in capture a webcam input. For using DirectShow, there are these examples.
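Note that the command above captures video only. To also record audio with dshow, name the audio device explicitly; the microphone name below is just a placeholder, so list your actual device names first:
ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -i video="Logitech HD Webcam C270":audio="Microphone (HD Webcam C270)" out.mp4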
