ffmpeg: is there a simple way to change the video resolution, but keep all audio and subtitles?

I would like to lower the video resolution, usually of .mkv files, but keep all audio tracks (there might be only one, there might be several) and all subtitles (there might be none, there might be several) from the original file. I would also like to keep as many encoding parameters as I can from the original video file (especially those I do not understand).
I am still new to ffmpeg: at first the idea seemed simple, but after many attempts it seems to be more complex than that. Do I have to use the -filter_complex option? It seems like overkill (or overly complex) for what I thought would be an easy conversion, but I might be wrong.
I tried to combine -vf scale=-1:720 with -c copy -map 0, which gave me an error that I now understand, but I am stuck on the next step.
Any lead on how to achieve that? Can it be done with ffmpeg only, or would I need a script?

Your try should've worked. For example,
ffmpeg -i input.mp4 -vf scale=-1:720 -map 0 -c:a copy -c:s copy output.mp4
grabs all the streams from the input, passes all video streams through the scale filter, and copies all audio and subtitle streams.
What was the error?
"keep as many encoding parameters"
This it cannot do. When you re-encode, it is up to you to pick parameters that best match those that may have been used for the input.
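For the .mkv case in the question, a sketch that keeps every stream could look like this (the file names, the libx264/CRF choice, and the -2 width are assumptions rather than anything the answer prescribes; -2 keeps the width divisible by two, which libx264 needs for yuv420p output):
ffmpeg -i input.mkv -map 0 -vf scale=-2:720 -c:v libx264 -crf 20 -c:a copy -c:s copy output.mkv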

Related

adding silent audio to channels 3-8 in ffmpeg

This has been discussed before, but my question is a bit different.
Let's say I have a video file with channels 1+2 as stereo.
Now I want to add channels 3-8 with silent audio, most likely using anullsrc.
I need to map that generator so it only affects channels 3-8.
Does anyone have a solution for that?
No need for a source filter. Just need pan.
ffmpeg -i input -af "pan=8C|c0=c0|c1=c1" -c:v copy out
The first two output channels are mapped to the two input channels. Since the other six are omitted, they will be silent.
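To check the result, an ffprobe call along these lines (the output name 'out' is just the placeholder used in the command above) should report 8 channels:
ffprobe -v error -select_streams a:0 -show_entries stream=channels,channel_layout -of default=nw=1 out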

(no accepted answer) How to merge 2 overlapping videos into one video using ffmpeg or opencv?

Merging two videos is easy and has been answered a couple of times. What I have is multiple overlapping videos. A video might overlap with the video before it: if video 1 covers the 1-5 timeline, then video 2 may overlap video 1 and cover 3 to 8. Merging them as-is would result in 1-5|3-8, when I need 1-8 only.
The videos are alphabetically sorted.
My general idea of a solution is:
grab the last frame of the video (a sketch for this step follows the question)
if it's the first video, continue
if it's not the first video (i.e. the 2nd), search for the frame saved in the previous step, frame by frame
if it reaches the last frame of the current video, there is no overlap; continue
if it finds a frame, clip the 2nd video up to and including that frame, then go to the next video
once all videos have been analyzed, merge them into one video (a trim/concat sketch follows the answer below)
I need to translate this into ffmpeg commands, or OpenCV if that's a better tool.
If there is a better way of doing this, I'm interested in that too.
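For the "grab the last frame" step above, one possible ffmpeg-only sketch (the file name 1.mp4 and the 3-second window are assumptions): seek to 3 seconds before the end and keep overwriting a single image, so the file left on disk is the final frame.
ffmpeg -sseof -3 -i 1.mp4 -update 1 -q:v 1 last_frame.jpg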
For ffmpeg you can use the script below. I tested it. But timing-wise, you have to change STARTPTS+5 to +25 for your videos. I put 5 here just to check that the merging is happening.
ffmpeg -i 2.mp4 -i 1.mp4 -filter_complex "[1]setpts=PTS-STARTPTS+5/TB[top];[0:0][top]overlay=enable='between(t\,10,15)'[out]" -shortest -map [out] -map 0:1 -pix_fmt yuv420p -c:a copy -c:v libx264 -crf 18 output1.mp4
Limitation
This approach needs the source to be long enough, which means you need a video canvas and then use this script to add each video onto the canvas.
And there is no fully autonomous way of using it in ffmpeg.
You are right. OpenCV can't deal with audio; it needs third-party library support to run concurrently. Before that, I had to use ROS to get both sound and vision to the robot system from a webcam. The sound is then processed with NLP for a natural language user interface, and vision is used separately for localization and mapping.
There is a way to work around it.
First, use OpenCV template matching or image differencing on a local window batch. The position with the smallest error gives you the correct location A at which to insert. This should be accurate down to the millisecond level. (If the error is always large, it means there is no overlap; return an exception.)
Second, based on the correct location obtained from OpenCV, make a system call to invoke the above script with A as the input parameter to do the automatic merge.
It depends on your application: if you need to do it frequently, write an OpenCV Python script to fuse the videos automatically. If it's just once a month, doing it manually with ffmpeg is good enough.
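Once the overlap offset A has been found (for example by the OpenCV matching described above), the "clip and merge" steps could be done without re-encoding. A sketch, assuming an offset of 5 seconds, the file names 1.mp4/2.mp4, and that all files share the same codecs (a requirement of the concat demuxer); note that cutting with -c copy snaps to the nearest keyframe, so drop -c copy from the first command if you need frame accuracy:
ffmpeg -ss 5 -i 2.mp4 -c copy 2_trimmed.mp4
printf "file '1.mp4'\nfile '2_trimmed.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4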

recompiling audio from a movie source

Is it possible to recompile movie files to re-encode the audio so that they all have the same volume level? We've got users submitting videos, and some seem higher in volume than others even though they all have the same volume setting on the player controls. I'd like to standardize this so all movie files have the same volume level.
I was thinking of ffmpeg, although I only have novice knowledge of this technology and haven't done my research on it yet.
Anyway, if there's anything available I'd love to know.
Thanks!
FFmpeg has a loudnorm filter that will normalize the audio to meet EBU R128 recommendations. Basic syntax is
ffmpeg -i video.mp4 -c:v copy -af loudnorm out.mp4
A faster filter is dynaudnorm but this one may alter the audio shape a bit, or so I'm told.
ffmpeg -i video.mp4 -c:v copy -af dynaudnorm=r=0.5 out.mp4
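For tighter control, loudnorm can also be run in two passes: a measurement pass that prints the input's loudness statistics as JSON, and a second pass that feeds those numbers back in. A sketch (the I/TP/LRA targets are example values, and each measured_* field must be filled in from the JSON printed by the first pass):
ffmpeg -i video.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11:print_format=json -f null -
ffmpeg -i video.mp4 -c:v copy -af loudnorm=I=-16:TP=-1.5:LRA=11:measured_I=<from pass 1>:measured_TP=<from pass 1>:measured_LRA=<from pass 1>:measured_thresh=<from pass 1>:linear=true out.mp4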

Normalize audio, then reduce the volume in ffmpeg

I have a question relating to ffmpeg. First, here is the scenario: I am working on a project where I need some audio with a presenter talking and then, potentially, some background music. I also have the requirement to normalize the audio. I would like to do this without presenting a bunch of options to the user.
For normalization I use something similar to this post:
How to normalize audio with ffmpeg.
In short, I get a volume adjustment which I then apply to ffmpeg like this:
ffmpeg -i <input> -af "volume=xxxdB" <output>
So far so good. Now let's consider the backing track: it shouldn't be the same volume as the presenter's voice, as that would be really distracting, so I want to lower it by some percentage. I can also do this with ffmpeg, like this (this example sets the volume to 50%):
ffmpeg -i <input> -af "volume=0.5" <output>
Using these two commands back to back, I can get the desired result.
My question has two parts:
Is there a way to do this in one step?
Is there any benefit to doing it in one step?
Thanks for any help!
After testing some more, I actually think the answer was pretty straightforward; I just needed to do this:
ffmpeg -i <input> -af "volume=xxxdB,volume=0.5" <output>
It took me a while to realize it; I had to try a few samples before I felt confident.
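As a concrete sketch (the +4 dB figure and the file names are made up for illustration): a measured gain of +4 dB followed by a 50% cut chains to a single linear gain of roughly 0.5 * 10^(4/20) ≈ 0.79, and ffmpeg applies the filters left to right in the order written:
ffmpeg -i backing.wav -af "volume=4dB,volume=0.5" backing_out.wav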

Mix Audio tracks with offset in SOX

From ASP.Net, I am using FFmpeg to convert flv files on a Flash Media Server to wavs that I need to mix into a single MP3 file. I originally attempted this entirely with FFmpeg but eventually gave up on the mixing step because I don't believe it is possible to combine audio-only tracks into a single result file. I would love to be wrong.
I am now using FFMPEG to access the FLV files and extract the audio track to wav so that SOX can mix them. The problem is that I must offset one of the audio tracks by a few seconds so that they are synchronized. Each file is one half of a conversation between a student and a teacher. For example teacher.wav might need to begin 3.3 seconds after student.wav. I can only figure out how to mix the files with SOX where both tracks begin at the same time.
My best attempt at this point is:
ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav
ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav
sox -m student.wav teacher.wav combined.mp3 splice 3.3
These tools (FFmpeg/SoX) were chosen based on my best research, but are not required. Any working solution would allow an ASP.Net service to input the two FMS flvs and create a combined MP3 using open-source or free tools.
EDIT:
I was able to offset the files using the delay effect in SoX.
sox -M student.wav teacher.wav combined.mp3 delay 2.8
I'm leaving the question open in case someone has a better approach than the combined FFmpeg/SoX solution.
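A SoX-only variant that keeps the result a true mix rather than a multi-channel merge (a sketch; the pad effect prepends 3.3 seconds of silence to the teacher track before the -m mix, and the MP3 output assumes a SoX build with LAME support):
sox teacher.wav teacher_padded.wav pad 3.3
sox -m student.wav teacher_padded.wav combined.mp3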
For what it's worth, this should be possible with a combination of -itsoffset and the amix filter, but a bug with -itsoffset prevents it. If it worked, the command would look something like this:
ffmpeg -i student.flv -itsoffset 3.3 -i teacher.flv -vn -filter_complex amix out.mp3
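On current FFmpeg builds the same offset mix can also be written without -itsoffset, by delaying one input inside the filtergraph. A sketch (adelay takes the delay in milliseconds, repeated per channel, and the 3300 value mirrors the 3.3 s offset above):
ffmpeg -i student.flv -i teacher.flv -vn -filter_complex "[1:a]adelay=3300|3300[t];[0:a][t]amix=inputs=2:duration=longest[a]" -map "[a]" out.mp3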
Mixing can be pretty simple: how to mix two audio channels?
Well, I suggest you use Flash.
It may sound weird, and correct me if I'm wrong, but with Flash's new multimedia abilities you can mix a couple of tracks.
I'm not sure, but I'm just trying to help you.
These 2 links can help you with your aim (especially the second link, I guess):
http://3d2f.com/programs/25-187-swf-to-mp3-converter-download.shtml
http://blog.debit.nl/2009/02/mp3-to-swf-converter-in-actionscript-3/
