Redirecting ffmpeg result into another file - linux

I'm trying to get the size of an input video using ffmpeg. Below is the command I use. What I'm trying to do is first store the result in a txt file and then do some parsing to get the size of the video:
$ ffmpeg -i TheNorth.mp4
The terminal says "At least one output file must be specified"
Then I tried this:
$ ffmpeg -i TheNorth.mp4 result.txt
The terminal says "Unable to find a suitable output format for 'result.txt'"
So how could I get the result and save it to the specified file?

You can store the output ffmpeg generates with a redirection:
ffmpeg -i TheNorth.mp4 2> result.txt
You need 2> here, because ffmpeg writes its banner and stream info to stderr, not stdout.

ffprobe
If you just want to get the size of the video then you can get that, and other info, directly with ffprobe. This will avoid redirection, temporary output files, and the additional parsing.
$ ffprobe -v error -select_streams v:0 -show_entries stream=height,width -of csv=p=0:s=x input.mkv
1280x720
See FFmpeg Wiki: FFprobe Tips for more examples.
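If the dimensions are consumed by a script, the csv output above is trivial to split. A minimal Python sketch (the helper name and sample string are mine, not from ffprobe itself):

```python
def parse_dimensions(ffprobe_csv):
    """Split ffprobe's csv=p=0:s=x output, e.g. '1280x720', into ints."""
    width, height = ffprobe_csv.strip().split("x")
    return int(width), int(height)

# Example with the output shown above:
print(parse_dimensions("1280x720"))  # (1280, 720)
```

In a real script the input string would come from running the ffprobe command above with subprocess and capturing stdout.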
tee
For users who want to encode and capture the resulting console output, I recommend using tee. The problem with pure redirection is that important messages such as error messages, failures, and prompts can be missed.
You can avoid this by including tee to show the output in the console and to save it to a file:
ffmpeg -i input … output |& tee console.txt
ffmpeg outputs to stderr instead of the more typical stdout, so the & is added to the | pipe to include it. The |& form requires Bash 4+. If you're using something else, change |& to 2>&1 |, which redirects stderr to stdout before it is sent to the pipe.
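When driving ffmpeg from Python instead of a shell, the same show-and-save behaviour can be approximated by reading stderr line by line. A sketch, using a stand-in command that writes to stderr the way ffmpeg does (run_and_log and the file name are illustrative):

```python
import subprocess
import sys

def run_and_log(cmd, logfile):
    """Echo a command's stderr to the console while also saving it to a file."""
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
    with open(logfile, "w") as log:
        for line in proc.stderr:
            sys.stderr.write(line)  # show in console, like tee
            log.write(line)         # save to file
    proc.wait()

# Stand-in command that writes to stderr, as ffmpeg does:
run_and_log([sys.executable, "-c", "import sys; sys.stderr.write('demo\\n')"],
            "console.txt")
```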

A somewhat better idea is to use ffprobe:
ffprobe -show_format -print_format json TheNorth.mp4
That will output JSON-formatted info about the video, which is easier to parse than the raw console output. To redirect it to a file, an ordinary > result.txt is enough, similar to the accepted answer but without the 2, since ffprobe writes this data to stdout.
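Parsing that JSON is then straightforward. A sketch with a trimmed, illustrative sample of the format block (the field values are made up; real output contains many more keys):

```python
import json

# Illustrative excerpt of ffprobe's -print_format json output:
sample = '''{"format": {"filename": "TheNorth.mp4",
                        "duration": "3600.000000",
                        "size": "123456789"}}'''

info = json.loads(sample)
duration = float(info["format"]["duration"])   # seconds, as a string in the JSON
size_bytes = int(info["format"]["size"])       # file size in bytes
print(duration, size_bytes)  # 3600.0 123456789
```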

Related

ffmpeg doesn't accept input in script

This is a beginner's question, but I can't figure out the answer after looking into it for several days.
I want ffmpeg to extract the audio portion of a video and save it in an .ogg container. If I run the following command in a terminal it works as expected:
ffmpeg -i example.webm -vn -acodec copy example.ogg
For convenience, I want to do this in a script. However, if I pass a variable to ffmpeg it apparently just considers the first word and produces the error "No such file or directory".
I noticed that my terminal escapes spaces with a \, so I included this in my script. That doesn't solve the problem though.
Can someone please explain why ffmpeg doesn't consider the whole variable passed to it in a script, while working correctly when given the same content in the terminal?
This is my script that passes the filename with spaces escaped by \ to ffmpeg:
#!/bin/bash
titelschr=$(echo $# | sed "s/ /\\\ /g")
titelohne=$(echo $titelschr | cut -d. -f 1)
titelogg=$(echo -e ${titelohne}.ogg)
ffmpeg -i $titelschr -vn -acodec copy $titelogg
Thank you very much in advance!
You need to quote variable expansions. Try this:
#!/usr/bin/env bash
titelschr=$1
titelogg="${titelschr%.*}.ogg"
ffmpeg -i "$titelschr" -vn -acodec copy "$titelogg"
Call it with:
bash test.sh "Some video file.mp4"
This way, you don't need to escape spaces.
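If the script later moves to Python, another way to sidestep shell quoting entirely is to pass ffmpeg an argument list, where each element arrives as exactly one argument. A sketch (build_ffmpeg_cmd is a hypothetical helper mirroring the ${titelschr%.*}.ogg logic above):

```python
def build_ffmpeg_cmd(infile):
    """Build an argument list; no escaping needed, even with spaces."""
    outfile = infile.rsplit(".", 1)[0] + ".ogg"   # drop extension, add .ogg
    return ["ffmpeg", "-i", infile, "-vn", "-acodec", "copy", outfile]

cmd = build_ffmpeg_cmd("Some video file.mp4")
print(cmd)
# Pass the list to subprocess.run(cmd); each element arrives as one argument.
```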

display only lines from output that contains a specified word

I'm looking for a way to get only the lines that contain a specified word, in this case all lines that contain the word Stream from an output.
I've tried:
streams=$(ffprobe -i "movie.mp4" | grep "Stream")
but that didn't get any results.
Or do I need to output it to a file and then try to extract the lines I'm looking for?
@paulsm4 was spot on: the output goes to stderr.
streams=$(ffprobe -i "movie.mp4" |& grep "Stream")
Note the &
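The same filtering works in Python once stderr has been captured. A sketch; the sample text imitates ffprobe's banner output and is not real ffprobe output:

```python
def lines_containing(output, word):
    """Return only the lines of output that contain word."""
    return [line for line in output.splitlines() if word in line]

# Illustrative excerpt of what ffprobe prints to stderr:
sample = ("Input #0, mov,mp4, from 'movie.mp4':\n"
          "  Stream #0:0: Video: h264 ...\n"
          "  Stream #0:1: Audio: aac ...\n")
print(lines_containing(sample, "Stream"))
```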
No need for grep. Just use ffprobe directly to get whatever info you need.
Output all info
ffprobe -loglevel error -show_format -show_streams input.mp4
Video info only
ffprobe -loglevel error -show_streams -select_streams v input.mp4
Audio info only
ffprobe -loglevel error -show_streams -select_streams a input.mp4
Width x height
See Getting video dimension / resolution / width x height from ffmpeg
Duration
See How to get video duration?
Format / codec
Is there a way to use ffmpeg to determine the encoding of a file before transcoding?
Using ffprobe to check audio-only files
Info on frames
See Get video frames information with ffmpeg
More info and examples
See FFmpeg Wiki: ffprobe

remove audio from mp4 file ffmpeg

I am on a Mac using Python 3.6. I am trying to remove audio from an mp4 file using ffmpeg, but unfortunately it does not give me the "silenced" mp4 file I'm looking for. The code I use is:
ffmpeg_extract_audio("input_file.mp4", "output_file.mp4", bitrate=3000, fps=44100)
It gives me a new output file with a low-quality video image, but still with the audio. Any suggestions?
OK, thank you @sascha. I finally put all my mp4 files in the same folder and ran the following code:
for file in *.mp4; do ffmpeg -i "$file" -c copy -an "noaudio_$file"; done
If, like me, you use Sublime Text or any other text editor (already working in Python), it runs with the following:
import subprocess
command = 'for file in *.mp4; do ffmpeg -i "$file" -c copy -an "noaudio_$file"; done'
subprocess.call(command, shell=True)
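The shell=True loop above works, but filenames with quotes or other special characters are safer with explicit argument lists. A sketch of the same -c copy -an idea (build_commands is a hypothetical helper):

```python
import pathlib
import subprocess

def build_commands(folder):
    """One ffmpeg command per .mp4 in folder: copy streams, drop audio (-an)."""
    cmds = []
    for mp4 in sorted(pathlib.Path(folder).glob("*.mp4")):
        out = mp4.with_name("noaudio_" + mp4.name)
        cmds.append(["ffmpeg", "-i", str(mp4), "-c", "copy", "-an", str(out)])
    return cmds

# Run them one by one:
# for cmd in build_commands("."):
#     subprocess.run(cmd, check=True)
```

Because each filename is a single list element, spaces and quotes in names never reach a shell.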

ffmpeg modify audio length/size ( stretch or shrink)

I am developing a web app where people can record videos. I have been able to send chunks of audio and video to the server successfully, where I am trying to combine them and return a single proper file.
My problem: if the recording is one hour long, then after merging the chunks,
video length: 1:00:00, audio length: 00:59:30.
This is not an issue of the audio not getting recorded (I have checked that); the problem is that somehow, when I merge the chunks of audio, it shrinks.
I find that it is a progressive sync issue that gets worse and worse as time increases.
I have searched the net for a solution; most places suggest -async. I have tried using it, but to no avail. Is the usage below correct?
ffmpeg -i audio.wav -async 1 -i video.webm -y -strict -2 v.mp4
(v.mp4 is the final file that I provide to the users.)
I found a solution (or a temporary fix, depending on how you look at it).
It involves a combination of ffmpeg and ffprobe. I have done audio stretching (ratio < 1):
ffprobe -i a.mp3 -show_entries format=duration -v quiet -print_format json
ffprobe -i v.mp4 -show_entries format=duration -v quiet -print_format json
ffmpeg -i a.mp3 -filter:a atempo="0.9194791304347826" aSync.mp3   # audio is stretched
ffmpeg -i aSync.mp3 -i v.mp4 final.mp4
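The atempo value in that fix is just the ratio of the two durations reported by ffprobe. A sketch of the arithmetic, using the durations from the question (note that a single atempo instance only accepts a limited range, roughly 0.5 to 2.0 in older ffmpeg builds, so extreme ratios need chained filters):

```python
def atempo_ratio(audio_seconds, video_seconds):
    """Tempo factor that stretches the audio to match the video length.

    A ratio below 1.0 slows the audio down, i.e. makes it longer."""
    return audio_seconds / video_seconds

# 00:59:30 of audio against 1:00:00 of video, as in the question:
ratio = atempo_ratio(59 * 60 + 30, 60 * 60)
print(round(ratio, 6))  # 0.991667
```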

save console output of program to file - not simple 2>&1

I want to analyze multiple mp3 files and get the BPM of each file. For this I'm using soundstretch. First I convert the mp3 files using sox:
sox -t mp3 -r 44100 -c 2 file.mp3 -t wav file3.wav
After this I want to analyze the track with soundstretch:
soundstretch file.wav -bpm
This also prints the result in the console, but I'm not able to redirect the printed response to a file. I already tried things like:
soundstretch file.wav -bpm > file.mp3.bpm
soundstretch file.wav -bpm 2>&1 > file.mp3.bpm
The only result is that the messages are displayed in the console and there is an empty file.
Switch it around if you want one file:
soundstretch file.wav -bpm > file.mp3.bpm 2>&1
or use two files:
soundstretch file.wav -bpm 2> file.mp3.err > file.mp3.bpm
I think you should use soundstretch as:
soundstretch inputFile outputFile options
e.g.: soundstretch Sample.wav out.txt -bpm
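Once the output lands in a file, pulling the number out is a short parsing job. A sketch that assumes the BPM appears on a line like the sample below; check the exact wording your soundstretch version prints:

```python
import re

def parse_bpm(text):
    """Pull the first decimal number from a line mentioning BPM, else None."""
    for line in text.splitlines():
        if "bpm" in line.lower():
            match = re.search(r"(\d+(?:\.\d+)?)", line)
            if match:
                return float(match.group(1))
    return None

# Hypothetical sample of soundstretch's analysis output:
print(parse_bpm("Detected BPM rate 128.7\n"))  # 128.7
```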
