Convert an animated GIF to video on a Linux server while preserving frame rate

How do I convert an animated GIF to a video (e.g. h264/mp4) programmatically on a Linux server?
I need this to process user-generated content that should be output in several defined video formats, so it's possible that users may want to process animated GIF files. I already have a set of working PHP scripts that transcode video files to specific formats (like vpx/webm and h264/mp4, scaled to specific resolutions) using avconv, but for that I need video input.
The usual approach seems to be to extract the frames of the GIF and then encode them, like
convert file.gif file%03d.png
avconv -i file%03d.png file.mp4
But this discards the timing information stored as per-frame delays in the GIF file. It's possible to pass a frame rate to avconv with -r, but
a fixed rate does not respect the pauses between frames, as they can differ (e.g. 1st frame 100 ms pause, 2nd frame 250 ms pause, 3rd frame 100 ms pause, ...)
since the input comes from users, the rate may also vary between files, as some GIFs may have 5 fps and others 30 fps
I noticed that avconv is able to process GIFs by itself and therefore may respect the correct pauses, but when I do (as similarly described in How to convert GIF to Mp4 is it possible?)
avconv -i file.gif -r 30 file.mp4
avconv only takes the first frame of the GIF, although it at least detects the file as video:
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0.0: Video: gif, pal8, 640x480, 25 tbn
(the example 'file.gif' has 15 frames, each with a 100 ms pause => 1.5 s duration, looping)
What am I missing? What's going wrong?
Are there perhaps better tools for this use case?
What are big sites like e.g. 9gag using to transcode uploaded gifs to video?

Yet Another Avconv Bug (YAAB)
ffmpeg has better GIF demuxing support (and improved GIF encoding). I recommend ditching avconv and getting ffmpeg (the real one from FFmpeg; not the old charlatan from Libav). A static build is easy, or you can of course compile.
Example
ffmpeg -i in.gif -c:v libx264 -pix_fmt yuv420p -movflags +faststart out.mp4
See the FFmpeg Wiki: H.264 Encoding Guide for more examples.
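Since the goal is to feed user uploads into an existing multi-format pipeline, here is a minimal sketch of that step in Python (the original poster's scripts are in PHP; the file names and the VP8 settings here are illustrative assumptions, not part of the answer above):
import subprocess

def transcode_gif(gif_path, base_name):
    # ffmpeg's GIF demuxer honors the per-frame delays, so no
    # frame-extraction step is needed.
    subprocess.check_call([
        'ffmpeg', '-y', '-i', gif_path,
        '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
        '-vf', 'scale=trunc(iw/2)*2:trunc(ih/2)*2',  # even dimensions for yuv420p
        '-movflags', '+faststart', base_name + '.mp4',
    ])
    subprocess.check_call([
        'ffmpeg', '-y', '-i', gif_path,
        '-c:v', 'libvpx', '-crf', '10', '-b:v', '1M',
        base_name + '.webm',
    ])

transcode_gif('file.gif', 'file')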

If for some reason you are required to use avconv and ImageMagick, you may want to try something like this:
import subprocess
from glob import glob

# gif_path, quality, STORAGE_DIR and mp4_name are assumed to be defined
# by the surrounding script.

# Per-frame delays ("ticks") in centiseconds, via ImageMagick's identify.
ticks_per_frame = subprocess.check_output(
    'identify -verbose -format %T_ {0}'.format(gif_path).split()
).decode().split('_')[:-1]
ticks_per_frame = [int(i) for i in ticks_per_frame]
num_frames = len(ticks_per_frame)
min_ticks = min(ticks_per_frame)

# Explode the GIF into full frames (-coalesce resolves partial frames).
subprocess.call('convert -coalesce {0} tmp%d.png'.format(gif_path).split())

if len(set(ticks_per_frame)) > 1:
    # Variable frame delays: duplicate the slower frames so that a constant
    # rate of (100 / min_ticks) fps reproduces the original timing.
    num_dup = 0
    num_dup_total = 0
    for frame, ticks in enumerate(ticks_per_frame):
        num_dup_total += num_dup
        frame += num_dup_total
        num_dup = 0
        if ticks > min_ticks:
            num_dup = (ticks // min_ticks) - 1
            # Shift all following frames up to make room for the duplicates.
            for i in range(num_frames + num_dup_total - 1, frame, -1):
                orig = 'tmp%d.png' % i
                new = 'tmp%d.png' % (i + num_dup)
                subprocess.call(['mv', orig, new])
            # Insert the duplicates right after the current frame.
            for i in range(1, num_dup + 1):
                curr = 'tmp%d.png' % frame
                dup = 'tmp%d.png' % (i + frame)
                subprocess.call(['cp', curr, dup])

framerate = (100 // min_ticks) if min_ticks else 10  # ticks are 1/100 s
subprocess.call(
    'avconv -r {0} -i tmp%d.png -c:v libx264 -crf {1} -pix_fmt yuv420p '
    '-vf scale=trunc(iw/2)*2:trunc(ih/2)*2 -y {2}.mp4'
    .format(framerate, quality, STORAGE_DIR + mp4_name).split()
)
subprocess.call(['rm'] + glob('tmp*.png'))
So: get the ticks in centiseconds for each frame of the GIF (via identify), convert the GIF to multiple PNGs, and then go through them, making duplicates based on the tick values. And don't you worry, the PNG files will still remain in consecutive order. Using the real FFmpeg is still the best way to go.
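As an aside, if you would rather not shell out to identify for the delays, Pillow can read them as well. A small sketch (assumes the Pillow package; not part of the original script):
from PIL import Image, ImageSequence

def gif_ticks(gif_path):
    # Pillow reports each frame's delay in milliseconds; divide by 10
    # to get the same centisecond "ticks" that identify's %T returns.
    with Image.open(gif_path) as im:
        return [frame.info.get('duration', 100) // 10
                for frame in ImageSequence.Iterator(im)]

print(gif_ticks('file.gif'))  # e.g. [10, 10, 10, ...] for 100 ms frames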

Related

How to divide my video horizontally using ffmpeg (without any other side-effects)?

I am processing a video (640x1280). I want to divide it horizontally into 2 separate videos (each 640x640), then combine them horizontally (so the video is 1280x640) in a single video. I did some research on the internet, and my issue was solved and not solved at the same time.
I made a batch file and added these commands to it:
ffmpeg -i input.mp4 -filter_complex "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom]" -map "[top]" top.mp4 -map "[bottom]" bottom.mp4
ffmpeg -i top.mp4 -i bottom.mp4 -filter_complex hstack output.mp4
Yes, my task got solved, but several other issues came out of it:
1.) My output video has NO audio in it. I have no idea why the audio is missing from the end result.
2.) My main video file (on which I am doing all this) is 258 MB in size, but the result is only 38 MB. I have no idea what is happening. Even worse, I looked closely at the videos and the results were pretty much the same (only the motion is not as smooth in the output file as in the input file).
3.) It takes too much time. (I know that computing takes some time, but maybe there is some way/sacrifice to make the process much quicker.)
Thanks in advance for helping me
Combine your two commands
ffmpeg -i input.mp4 -filter_complex "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom];[top][bottom]hstack" -preset fast -c:a copy output.mp4
If you need it to encode faster then use a faster -preset as shown in FFmpeg Wiki: H.264.
x264 is a better encoder than your phone's, so it is not surprising that the file size is smaller.
Or use your player to do it
No need to wait for encoding. Just have your player do everything upon playback. This does not output a file, but only plays the re-arranged video. Example using mpv:
mpv --lavfi-complex="[vid1]split[v0][v1];[v0]crop=iw:ih/2:0:0[c0];[v1]crop=iw:ih/2:0:oh[c1];[c0][c1]hstack[vo]" input.mp4

FFMPEG command to mix audio and video with adjustable volume

I have:
Video file of X length
Audio of Y length
I am trying to achieve an output video that has the following qualities:
The volume level of the added audio should be adjustable
The audio should loop till the end of the video
It should not break even if the input video does not have any audio
I should be able to mute the audio of the source video if needed.
All of the above, in the fastest possible way.
I'm not well versed with FFMPEG, maybe some experts could help.
Since you are using a library, I assume that you know how to run pure FFmpeg commands.
Based on your third condition, we will divide the solution into two parts:
It should not break even if the input video does not have any audio
To cover this condition, you can check whether there is an audio stream in your video file before running any FFmpeg command, with the code below:
private boolean isVideoContainAudioStream(String videoPath) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(videoPath);
    String hasAudioStream = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_HAS_AUDIO);
    retriever.release(); // free the retriever once the metadata is read
    return hasAudioStream != null && hasAudioStream.equals("yes");
}
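The check above is Android-specific. If you are on a server rather than a device, the same question can be answered with ffprobe instead; a minimal sketch (an assumption on my part, not from the original answer):
import subprocess

def has_audio_stream(video_path):
    # ffprobe prints one line per audio stream; no output means no audio.
    out = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'a',
        '-show_entries', 'stream=codec_type', '-of', 'csv=p=0', video_path,
    ])
    return bool(out.strip())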
1. Part One:
If the above function returns true, your video file contains an audio stream, so you can run the command below:
ffmpeg -i video.mp4 -filter_complex "amovie=/path/to/audio/file/audio.mp3:loop=0,asetpts=N/SR/TB,volume=2.0[audio];[0:a]volume=0.5[sa];[sa][audio]amix[fa]" -map 0:v -map [fa] -vcodec libx264 -preset ultrafast -shortest fout.mp4
In the above command, we take the audio file at a specific path with the amovie filter:
loop=0: loop the audio infinitely
asetpts=N/SR/TB: generate timestamps by counting samples
volume=2.0: multiply the audio volume by 2.0
The video's own audio stream is accessible through the [0:a] filter pad, so we take it, set the volume to half of the input's volume, and name it [sa]. If you want to mute the audio of the source video, change that part to:
[0:a]volume=0.0[sa]
After that we mix the two audio streams with the amix filter and name the result [fa]. At this point we have everything we wanted and only need to merge the audio and video streams:
-vcodec libx264: we use the x264 video encoder because it has lots of options for tuning performance and speed
-shortest: since we loop the audio infinitely, we tell ffmpeg to stop creating frames when the shortest stream ends (the video stream is certainly the shorter one)
-preset ultrafast: preset is one of the x264 options; ultrafast gives you more encoding speed at the cost of a larger output file. The veryfast value is usually a good balance of speed and size.
2. Part Two:
If the isVideoContainAudioStream function returns false (which means your input video has no audio stream), you can run the command below:
ffmpeg -i mute_video.mp4 -filter_complex "amovie=/path/to/audio/file/audio.mp3:loop=0,asetpts=N/SR/TB,volume=2.0[audio]" -map 0:v -map [audio] -vcodec libx264 -preset ultrafast -crf 18 -shortest m_fout.mp4
In the above command we use another x264 option, called CRF:
Constant Rate Factor (CRF)
Use this rate control mode if you want to keep the best quality and care less about the file size. This is the recommended rate control mode for most uses.
The range of the CRF scale is 0–51, where 0 is lossless, 23 is the default, and 51 is worst quality possible. A lower value generally leads to higher quality, and a subjectively sane range is 17–28. Consider 17 or 18 to be visually lossless or nearly so; it should look the same or nearly the same as the input but it isn't technically lossless.
The range is exponential, so increasing the CRF value +6 results in roughly half the bitrate / file size, while -6 leads to roughly twice the bitrate.
Choose the highest CRF value that still provides an acceptable quality. If the output looks good, then try a higher value. If it looks bad, choose a lower value.
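As a back-of-the-envelope illustration of that exponential rule (plain arithmetic, not an FFmpeg feature):
def relative_size(crf, reference=23):
    # 'Every +6 roughly halves the bitrate/size' expressed as a formula.
    return 2 ** ((reference - crf) / 6.0)

print(relative_size(29))  # ~0.5: about half the size of a CRF 23 encode
print(relative_size(17))  # ~2.0: about twice the size of a CRF 23 encode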
That's it. There are lots of options for the x264 encoder; you can check all of the available options at this link:
H.264 Video Encoding Guide
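Putting the two parts together, a hypothetical driver could pick the right command automatically (the ffprobe check and all paths here are illustrative assumptions):
import subprocess

def mix_or_add_audio(video, music, out):
    # Does the input video have an audio stream at all?
    probe = subprocess.check_output([
        'ffprobe', '-v', 'error', '-select_streams', 'a',
        '-show_entries', 'stream=codec_type', '-of', 'csv=p=0', video,
    ])
    if probe.strip():
        # Part One: mix the looped music with the video's own audio.
        fc = ('amovie={0}:loop=0,asetpts=N/SR/TB,volume=2.0[audio];'
              '[0:a]volume=0.5[sa];[sa][audio]amix[fa]').format(music)
        maps = ['-map', '0:v', '-map', '[fa]']
    else:
        # Part Two: the video is silent, so just add the looped music.
        fc = 'amovie={0}:loop=0,asetpts=N/SR/TB,volume=2.0[audio]'.format(music)
        maps = ['-map', '0:v', '-map', '[audio]']
    subprocess.check_call(
        ['ffmpeg', '-y', '-i', video, '-filter_complex', fc]
        + maps + ['-vcodec', 'libx264', '-preset', 'ultrafast', '-shortest', out])

mix_or_add_audio('video.mp4', 'audio.mp3', 'fout.mp4')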

Extracting Y only (of YUV420p) frames from an MP4 file using ffmpeg?

My main objective is to extract the I'th, I+1'th (next) and I-1'th (previous) frames, as Y only (of YUV 420), from an mp4 video. The procedure I am using right now is:
I extracted the list of all the I frames from the video using the command ffprobe "input.mp4" -show_frames | grep 'pict_type=I' -A 1 > frame_info.txt
Next, I used a Python script to parse this txt file for the numbers of all the I frames, and then extracted each frame using the command ffmpeg -i input.mp4 -vf select='eq(n\,{1}),setpts=N/25/TB,extractplanes=y' -vsync 0 -pix_fmt gray {1}.yuv, via a subprocess call from Python.
This works fine for small-resolution videos like 240p or 480p, but as soon as I move to 1080p videos the time to extract even a single frame increases drastically, since ffmpeg has to decode the mp4 file all the way up to the frame it seeks to.
I have a lot of 1080p files and want to decrease the time. The solution I was considering was to extract all of the Y frames (of YUV 420) from the mp4 and then select only the I frames, since I've got the list of all I frames from step 1. The command I am using for this is ffmpeg -y -i input.mp4 -vf "fps=59.94" -pix_fmt gray file_name.yuv
The problem with the above command is that it continuously appends to a single yuv file, but I want an individual Y file per frame of the mp4 video.
My restriction is to use FFmpeg only, as FFmpeg's Y values match what I want.
TL;DR: I want to extract only the Y part (of YUV 420p) from an mp4 video, for the I'th, I-1'th and I+1'th frames.
Thanks for helping out.
In step 1, instead of storing the frame numbers, store the pts_time.
Then, in step 2, run
ffmpeg -copyts -ss X -i input.mp4 -vf select='eq(t\,X),extractplanes=y' -vsync 0 -pix_fmt gray -vframes 1 {1}.yuv
where X is the pts_time.
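Glued together in Python, the two steps could look like the sketch below (the CSV field order and the output naming are assumptions on my part; extend the loop with the neighbouring pts_times for the I-1'th and I+1'th frames):
import subprocess

video = 'input.mp4'

# Step 1: list pts_time and pict_type for every frame, keep the I-frames.
out = subprocess.check_output([
    'ffprobe', '-v', 'error', '-select_streams', 'v:0',
    '-show_entries', 'frame=pts_time,pict_type',
    '-of', 'csv=p=0', video,
]).decode()
i_frame_times = [line.split(',')[0]
                 for line in out.splitlines() if line.endswith(',I')]

# Step 2: seek straight to each I-frame and dump its Y plane.
for t in i_frame_times:
    subprocess.check_call([
        'ffmpeg', '-copyts', '-ss', t, '-i', video,
        '-vf', 'select=eq(t\\,{0}),extractplanes=y'.format(t),
        '-vsync', '0', '-pix_fmt', 'gray', '-vframes', '1',
        '-y', '{0}.yuv'.format(t),
    ])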

mkv file out of sync with linear drift

I have a bunch of mkv files, with FLAC as the audio codec and FFV1 as the video one.
The files were created using an EasyCap acquisition dongle from a VCR analog source. Specifically, I used VLC's "open acquisition device" prompt and selected PAL. Then, I converted the files (audio PCM, video raw YUV) to (FLAC, FFV1) using
ffmpeg.exe -i input.avi -acodec flac -vcodec ffv1 -level 3 -threads 4 -coder 1 -context 1 -g 1 -slices 24 -slicecrc 1 output.mkv
Now, the files are progressively out of sync. It may be because, while the video (maybe) has a constant frame rate, the FLAC track has a variable one. So, is there a way to sync the video track to the audio, or something similar? Can FFmpeg do this? Thanks
EDIT
Following Mulvya's hint, I plotted the sync difference at various times; the first column shows the seconds elapsed, the second shows the difference in seconds. The plot behaves linearly, with a constant slope of 0.0078. NOTE: measurements were taken by hand, with a stopwatch.
EDIT 2
Playing around with VirtualDub, I found that changing the frame rate from the original 24.889 to 25 fps (Video -> Frame rate... -> Change frame rate to) and using the audio track converted to wav definitely works. Two problems, though: VirtualDub crashes when importing the original FFV1/FLAC mkv file, so I had to convert the video to H264 to try it out; moreover, I find it difficult to use an external encoder to save VirtualDub's output.
So, could I avoid VirtualDub and simply use ffmpeg for this? Here's the exported vdscript:
VirtualDub.audio.SetSource("E:\\4_track2.wav", "");
VirtualDub.audio.SetMode(0);
VirtualDub.audio.SetInterleave(1,500,1,0,0);
VirtualDub.audio.SetClipMode(1,1);
VirtualDub.audio.SetEditMode(1);
VirtualDub.audio.SetConversion(0,0,0,0,0);
VirtualDub.audio.SetVolume();
VirtualDub.audio.SetCompression();
VirtualDub.audio.EnableFilterGraph(0);
VirtualDub.video.SetInputFormat(0);
VirtualDub.video.SetOutputFormat(7);
VirtualDub.video.SetMode(3);
VirtualDub.video.SetSmartRendering(0);
VirtualDub.video.SetPreserveEmptyFrames(0);
VirtualDub.video.SetFrameRate2(25,1,1);
VirtualDub.video.SetIVTC(0, 0, 0, 0);
VirtualDub.video.SetCompression();
VirtualDub.video.filters.Clear();
VirtualDub.audio.filters.Clear();
The first line imports the wav-converted audio track.
Can I set up an equivalent pipeline in ffmpeg (possibly using FLAC, not wav)? SetFrameRate2 is maybe the key here.
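One hedged possibility for reproducing SetFrameRate2 in ffmpeg alone (an untested sketch: it reads the file twice, rescales only the first input's timestamps by 24.889/25 so the video plays at 25 fps, and stream-copies the untouched FLAC from the second):
import subprocess

ratio = 24.889 / 25  # ~0.99556: compress video timestamps from 24.889 to 25 fps

subprocess.check_call([
    'ffmpeg',
    '-itsscale', str(ratio), '-i', 'input.mkv',  # input 0: rescaled timestamps
    '-i', 'input.mkv',                           # input 1: untouched copy
    '-map', '0:v:0',                             # FFV1 video from the rescaled input
    '-map', '1:a:0',                             # FLAC audio from the plain one
    '-c', 'copy', 'output.mkv',                  # no re-encoding
])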

ffmpeg split avi into frames with known frame rate

I posted this as comments under this related thread. However, they seem to have gone unnoticed =(
I've used
ffmpeg -i myfile.avi -f image2 image-%05d.bmp
to split myfile.avi into frames stored as .bmp files. It seemed to work, except not quite. I recorded my video at a rate of 1000 fps, and the video turned out to be 2 min 29 sec long. If my math is correct, that should amount to a total of 149,000 frames for the entire video. However, when I ran
ffmpeg -i myfile.avi -f image2 image-%05d.bmp
I only obtained 4472 files. How can I get the original 149k frames?
I also tried to convert the frame rate of my original AVI to 1000 fps by doing
ffmpeg -i myfile.avi -r 1000 otherfile.avi
but this didn't seem to fix my concern.
ffmpeg -i myfile.avi -r 1000 -f image2 image-%07d.png
I am not sure outputting 150k bmp files will be a good idea. Perhaps png is good enough?
Part one of your math is good: 2 minutes and 29 seconds is 149 seconds, and at 1000 fps that makes 149,000 frames. However, your output filename only has 5 digits for the number, while 149000 has 6, so try "image-%06d.bmp".
Then there is the disk size: do your images fit on the disk? Bmp images are stored uncompressed; you might try jpeg pictures instead, as they compress about 10 times better.
Another idea: if ffmpeg does not find a (reasonable) frame rate, it drops to 25 or 30 frames per second, so you might need to specify it explicitly. Do so for both source and target; see the man page (man ffmpeg on unix):
To force the frame rate of the input file (valid for raw formats
only) to 1 fps and the frame rate of the output file to 24 fps:
ffmpeg -r 1 -i input.m2v -r 24 output.avi
For what it's worth: I use ffmpeg -y -i "video.mpg" -sameq "video.%04d.jpg" to split my video into pictures. The -sameq forces the jpeg to a reasonable quality, and -y skips the overwrite questions. (Note that -sameq has been removed from modern FFmpeg; -qscale:v 2 is a rough replacement.) For you:
ffmpeg -y -r 1000 -i "myfile.avi" -sameq "image.%06d.jpg"
I think there is a misconception here: the output of a high-speed video system is unlikely to have an output frame rate of 1000 fps; it is more likely something normal such as 30 (or 50/60) fps. Apart from overloading most video players at that speed, it would be counterproductive to show the sequence at the same speed as it was recorded.
Basically: 1 sec @ 1000 fps input becomes something like 33 sec @ 30 fps output.
Was the recorded scene really 2:29 min long (resulting in a video of ~82 min at normal rate), or did it take about 4.5 sec (4472 frames), which is 2:29 min at normal playback speed?
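One way to settle that question is to ask the container what it actually holds before splitting; a sketch using ffprobe (note that -count_frames decodes the whole file, so it can take a while):
import subprocess

# Compare the stream's claimed frame rate/duration and the number of
# decodable frames against the expected 149,000.
out = subprocess.check_output([
    'ffprobe', '-v', 'error', '-select_streams', 'v:0', '-count_frames',
    '-show_entries', 'stream=avg_frame_rate,duration,nb_read_frames',
    '-of', 'default=noprint_wrappers=1', 'myfile.avi',
]).decode()
print(out)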
I tried this in an Ubuntu 18.04 terminal.
ffmpeg -i input_video.avi output_frame_path_images%5d.png
where -i specifies the input file.
