How to generate video screencaps of video files via the Linux command line

Is there a command-line program for Linux (Ubuntu) that can generate a large image containing, say, 6 caps from a given video (e.g. WMV), laid out storyboard style? (I know Media Player Classic on Windows can do this.) I need this for part of a script I am writing.

I pulled the answer from this site: http://blog.prashanthellina.com/2008/03/29/creating-video-thumbnails-using-ffmpeg/
ffmpeg -itsoffset -4 -i test.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 test.jpg
Here -4 is the offset, in seconds, into the file at which the screenshot is grabbed, 320x240 is the screenshot size, and test.jpg is the output file.
Hope this helps.
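For the storyboard layout asked about in the question, ffmpeg's tile filter can also build a contact sheet in a single pass. A minimal sketch, assuming a reasonably recent ffmpeg and taking one frame per minute purely as an example:
ffmpeg -i test.avi -vf "fps=1/60,scale=320:-1,tile=3x2" -frames:v 1 storyboard.jpg
fps=1/60 selects one frame every 60 seconds, scale resizes each thumbnail to 320 pixels wide, and tile=3x2 lays six of them out in a 3-by-2 grid before the single tiled image is written.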

Use SlickSlice
./slickslice.sh -x video.avi -s 5x3 -e

I've used MPlayer to save frames as images and ImageMagick to combine them:
mplayer -nosound -sstep 15 -vo png video.mkv
montage *.png -tile 3x3 -geometry 300x+0+0 screencaps.png

vcsi can do this. It is a command-line tool written in Python. Example:
vcsi video.mkv -o output.jpg
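vcsi also accepts a grid option, so something along these lines (assuming a recent version; -g sets the thumbnail grid) should give the 3x2 layout from the question:
vcsi video.mkv -g 3x2 -o output.jpg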

Related

Using ffmpeg to split MP3 file to multiple equally sound length files

How can I use the command-line tool ffmpeg on Windows to split a sound file into multiple sound files, without changing any of the sound properties, so that each one is a fixed 30 seconds long? I got this manual example from here:
ffmpeg -i long.mp3 -acodec copy -ss 00:00:00 -t 00:00:30 half1.mp3
ffmpeg -i long.mp3 -acodec copy -ss 00:00:30 -t 00:00:30 half2.mp3
But is there a way to tell it to split the input file into equal sound files, each one 30 seconds long, with the last one being whatever length remains?
You can use the segment muxer.
ffmpeg -i long.mp3 -acodec copy -vn -f segment -segment_time 30 half%d.mp3
Add -segment_start_number 1 to start segment numbering from 1.
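Putting both together (same long.mp3 as above; note that the segment options go before the output name):
ffmpeg -i long.mp3 -vn -acodec copy -f segment -segment_time 30 -segment_start_number 1 half%d.mp3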

ffmpeg concat drops audio frames

I have an mp4 file and I want to take two sequential sections of the video out and render them as individual files, later recombining them back into the original video. For instance, with my video video.mp4, I can run
ffmpeg -i video.mp4 -ss 56 -t 4 out1.mp4
ffmpeg -i video.mp4 -ss 60 -t 4 out2.mp4
creating out1.mp4 which contains 00:00:56 to 00:01:00 of video.mp4, and out2.mp4 which contains 00:01:00 to 00:01:04. However, later I want to be able to recombine them again quickly (i.e., without reencoding), so I use the concat demuxer,
ffmpeg -f concat -safe 0 -i files.txt -c copy concat.mp4
where files.txt contains
file out1.mp4
file out2.mp4
which in theory should give me back 00:00:56 to 00:01:04 of video.mp4. However, there are always dropped audio frames where the concatenation occurs, creating a very unpleasant sound artifact, an audio blip, if you will.
I have tried using -async and -af apad when initially creating the two sections of the video, but I am still faced with the same problem and have not found a solution elsewhere. I have run into this issue in several different use cases, so hopefully this simple example will shed some light on the real problem.
I suggest you export the segments to MOV with PCM audio, then concat those, re-encoding the audio.
ffmpeg -i video.mp4 -c:a pcm_s16le -ss 56 -t 4 out1.mov
...
and then
ffmpeg -f concat -safe 0 -i files.txt -c:v copy concat.mp4
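Here files.txt would list the MOV segments instead, e.g. (assuming a second segment out2.mov created the same way as out1.mov):
file out1.mov
file out2.mov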

Create slideshow video by merging PNG and JPEG images and audio files using ffmpeg on Linux

I am creating a slideshow video from PNG, JPEG, and audio files. The issue I am running into is that I can only use either PNG or JPEG at a time, but I want to use both. How can I do this? Here is the command I am trying; please help me with this.
/usr/bin/ffmpeg -framerate 1/27 -pattern_type glob \
-i '/var/www/html/phpvideotoolkit-v2-master/examples/media/images/*.jpg' \
-i /var/www/html/phpvideotoolkit-v2-master/examples/media/1.mp3 \
-pix_fmt yuv420p -shortest -y -q 4 -strict experimental -threads 1 \
-acodec aac -ar 22050 -vcodec mpeg4 -s 320x240 /var/www/html/ffmpe/1.mp4
By doing this I am only able to create a video from the JPG images, but I want both.
I have also tried the concat command but am not getting any output. I have read about some filters, but I don't know how to use them with images, so could you please help me with this?
FFmpeg supports piping of image sequences, so if you enable extended globbing it's simple enough to get all files ending in either .jpg or .png. However, FFmpeg seems to get confused if you pass it a mix of formats, so it's better to convert (using ImageMagick in this case) all images to PNG and then pass them on.
As you can see, all the images, even those already being PNG, will get converted. This is a bit inefficient, but unless you have lots of large images it shouldn't be too bad.
-quality 01 in the IM command makes sure the images are only minimally compressed. It makes little sense to spend a lot of time and effort on compression when FFmpeg will decompress immediately afterwards.
shopt -s extglob
convert \
/var/www/html/phpvideotoolkit-v2-master/examples/media/images/*+(.jpg|.png) \
-quality 01 png:- |\
/usr/bin/ffmpeg -y -framerate 1/27 \
-f image2pipe -i - \
-i /var/www/html/phpvideotoolkit-v2-master/examples/media/1.mp3 \
-pix_fmt yuv420p \
-shortest -q 4 -strict experimental -threads 1 \
-acodec aac -ar 22050 \
-vcodec mpeg4 -s 320x240 /var/www/html/ffmpe/1.mp4

ffmpeg leaves audio gap when concatenating videos

I am trying to cut a video into 2 parts and then reassemble it with ffmpeg, but the final output has a small audio glitch right where the segments meet. I am using the following commands to split the video 1.mp4 into 2 parts:
ffmpeg -i 1.mp4 -ss 00:00:00 -t 00:00:02 -async 1 1-1.mp4
and
ffmpeg -i 1.mp4 -ss 00:00:02 -t 00:00:02 -async 1 1-2.mp4
Once I have the 2 parts I concatenate them back together with:
ffmpeg -f concat -i files.txt -c copy output.mp4
files.txt is correctly listing both files. Can anyone point me to where the problem might be?
Thanks
The glitch is likely due to the audio priming sample showing up in between.
Since you're re-encoding the segments, you can do this in one command:
ffmpeg -i 1.mp4 -filter_complex \
"[0]trim=duration=2[v1];[0]trim=2:4,setpts=PTS-STARTPTS[v2];\
[0]atrim=duration=2[a1];[0]atrim=2:4,asetpts=PTS-STARTPTS[a2];\
[v1][a1][v2][a2]concat=n=2:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" output.mp4
I had the same problem for about 3 weeks.
Just merge the mp3 files using sox:
sox in1.mp3 in2.mp3 in3.mp3 out.mp3
When I used concat with FFmpeg it produced 12.5 ms audio gaps (I saw them in Audacity); I don't know why.
Maybe for your case it would be better to extract the audio and video into two separate files with ffmpeg, merge them (video using FFmpeg and audio using sox), and then put the streams back together into one container (mp4) file.
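A minimal sketch of that workflow, with placeholder file names and WAV as the intermediate audio format so sox can read it regardless of the source codec:
# split each segment into its audio and video streams
ffmpeg -i 1-1.mp4 -vn 1-1.wav
ffmpeg -i 1-1.mp4 -an -c:v copy 1-1-video.mp4
ffmpeg -i 1-2.mp4 -vn 1-2.wav
ffmpeg -i 1-2.mp4 -an -c:v copy 1-2-video.mp4
# merge the audio with sox and the video with the concat demuxer
sox 1-1.wav 1-2.wav audio.wav
ffmpeg -f concat -safe 0 -i videos.txt -c copy video-only.mp4
# remux the merged streams into one mp4, re-encoding the audio to AAC
ffmpeg -i video-only.mp4 -i audio.wav -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 output.mp4
where videos.txt lists the two video-only segments (file 1-1-video.mp4, file 1-2-video.mp4).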

How can I add a cover image to an mp3 file using ffmpeg?

I'm trying to convert an audio file to mp3 and I want to add a cover image to the mp3 file.
I tried this:
ffmpeg.exe -i "input audio file" -i image.png out.mp3
When the conversion completes there is no cover image in the mp3.
I also tried the following, which is from the official ffmpeg documentation. The result is the same: an mp3 file without a cover image.
ffmpeg -i input.mp3 -i cover.png -c copy -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
Thank you in advance!
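For reference, the example in the official ffmpeg documentation also maps both inputs explicitly; a sketch along those lines (assuming the missing -map options are the problem, with the PNG becoming the attached picture stream):
ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1 -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3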
