FFmpeg drawtext width?

I'm using drawtext to print some text on an animated GIF.
Everything is working, but I'm unable to specify a bounding box and make the text wrap.
I'm currently using this:
ffmpeg -i image.gif -filter_complex "drawtext=textfile=text.txt:x=main_w/2 - text_w/2:y=main_h/2 - text_h/2:fontfile=Roboto-Regular.ttf:fontsize=24:fontcolor=000000" image_out.gif
Is there a way to wrap text?

You can use the FFmpeg subtitles filter for automatic word wrapping and to place the text in the middle center.
There are many subtitle formats, but this answer has examples for ASS and SRT subtitles. ASS supports more formatting options, but SRT is a simpler format and can be modified with the force_style option in the subtitles filter.
ASS subtitles
Make your subtitles in Aegisub. Click the button that looks like a pink "S" to open the Styles Manager. Click edit. Choose Alignment value 5. Save subtitles file as ASS format.
ffmpeg example:
ffmpeg -i input -filter_complex subtitles=subs.ass -c:a copy output
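If you'd rather not use Aegisub, you can also write the ASS file by hand; the Alignment=5 field in the Style line is what centers the text. A minimal sketch (the resolution, font, and colours here are assumptions, not values from the question):

```
[Script Info]
ScriptType: v4.00+
PlayResX: 640
PlayResY: 360

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Roboto,24,&H00000000,&H000000FF,&H00FFFFFF,&H00000000,0,0,0,0,100,100,0,0,1,1,0,5,10,10,10,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,Centered text that wraps automatically
```

Alignment=5 is the "numpad" middle-center position; long lines wrap within the margins.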
SRT subtitles
SRT is simpler than ASS but lacks features so you may need to use the force_style option in the subtitles filter. You can make these subtitles with a text editor. Example SRT file:
1
00:00:00,000 --> 00:00:05,000
Text with
manual line break
and with automatic word wrapping of long lines.

2
00:00:07,000 --> 00:00:10,000
Another line. Displays from 7 to 10 seconds.
ffmpeg example:
ffmpeg -i input -filter_complex "subtitles=subs.srt:force_style='Alignment=10,Fontsize=24'" -c:a copy output
For more force_style options look at the Styles section in the contents of an ASS file and refer to ASS Tags.
In this case, Alignment uses the legacy SSA line-position numbers, while the ASS example above uses the modern "numpad" numbering.

Related

How to create watermark underlined text using ffmpeg

I am trying to create a watermark with underlined text, but I cannot find a drawtext option for underlined text.
Here is my command:
ffmpeg -i ./public/uploads/videos/1577703125107.mp4 -vf "[in]drawtext=fontfile= ./public/fonts/Arial/arial.ttf: text=my watermark text: fontcolor=#000000: fontsize=20: x=23:y=68 [out]" ./public/uploads/videos/1577703128790.mp4
How can I underline text?
drawbox filter
If you want to use the drawtext filter you would have to draw the line separately with drawbox:
ffmpeg -i input.mp4 -filter_complex "drawtext=text='my watermark text':fontsize=20:fontfile=/usr/share/fonts/TTF/VeraMono.ttf:x=23:y=68,drawbox=w=205:h=2:x=23:y=85:t=fill" output.mp4
This may be easiest if you use a monospaced font.
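With a monospaced font you can even estimate the drawbox width from the text length; a rough sketch, assuming each glyph advances about 0.6 × fontsize pixels (an approximation that varies per font):

```shell
# Approximate the underline width for a monospaced font:
# width ≈ number of characters * 0.6 * fontsize
text='my watermark text'
fontsize=20
width=$(( ${#text} * fontsize * 6 / 10 ))
echo "$width"   # 17 characters * 20 px * 0.6 = 204, close to the w=205 used above
```

You could then substitute the computed value for w=205 in the drawbox command.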
subtitles filter
A possibly simpler, and better looking, alternative method is to use the subtitles filter:
ffmpeg -i input.mp4 -filter_complex "subtitles=underline.srt:force_style='Alignment=3,Fontsize=22'" output.mp4
Contents of underline.srt:
1
00:00:00,000 --> 00:00:05,000
<u>my watermark text</u>
If you don't want to use the numpad-style Alignment tag, you can use an ASS file instead of SRT and position the text precisely with the \pos tag. See ASS tags.
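For example, an ASS Dialogue line that pins underlined text at pixel position (23,68) might look like this (the style name Default and the timing are assumptions):

```
Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,{\pos(23,68)\u1}my watermark text
```

Here \u1 is the ASS underline override tag, and the \pos coordinates are in the script's PlayResX/PlayResY coordinate space.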

Using ffmpeg to overlay black line or add border to two side by side videos

I am using the following to generate a video that is side by side.
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -y final.mp4
It looks like this.
http://www.mo-de.net/d/partnerAcrobatics.mp4
I would like to place a vertical black line on top, in the middle, or add a black border to the video on the left. If I add a border to the left video, I would like to maintain the total dimensions of the original videos; that would require subtracting the border width from the left video's width. I will take either solution.
Thanks
Solved: if both videos have no audio, use this.
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]crop=639:720,pad=640:720:0:0:black[tmp0];[1:v]crop=639:720,pad=640:720:1:0:black[tmp1];[tmp0][tmp1]hstack[v]" -map "[v]" -y o.mp4
If both videos have audio use the following.
ffmpeg -i c2.mov -i c1.mov -filter_complex "[0:v]crop=1279:720,pad=1280:720:0:0:black[tmp0];[1:v]crop=1279:720,pad=1280:720:1:0:black[tmp1];[tmp0][tmp1]hstack[v];[0:a][1:a]amerge=inputs=2[a]" -map "[v]" -map "[a]" -ac 2 -y o.mp4
Both videos must have the same height.
crop=1279:720
I used crop to remove one pixel from the video width on the right. It was originally 1280 pixels.
pad=1280:720:0:0:black[tmp0]
I padded the left video by declaring a new canvas size of 1280 pixels. This keeps the video on the left, leaving one pixel of space on the right, which is colored black.
The right video I padded and shifted one pixel to the right, exposing the black border on its left.
pad=1280:720:1:0:black[tmp1]
I did this to both videos so the effect remains centered when the videos have the same dimensions.
Use
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS,crop=iw-10:ih:0:0, pad=2*(iw+10):ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -y final.mp4
Since you've already joined the videos, it seems you want to separate them with a vertical black line.
Since this line is stationary, the overlay can be a still picture, e.g. black.png (10 px wide and the same height as your video).
If you want the line to be animated or a moving picture, the overlay can be another video.
If the videos had not been joined yet, you could pad the second video on the left, or the first video on the right, before joining, e.g.:
ffmpeg -i 1.mp4 -vf "pad=width=<new width>:height=<same height>:color=black" out.mp4
The code below answers your question, with a vertical line of width 10 pixels in the middle, separating the 2 videos:
ffmpeg -i in.mp4 -i black.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" out.mp4
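If you don't already have a black.png, you can generate one with ffmpeg's color source. This sketch assumes a 720-pixel-tall video and a 10-pixel-wide line; adjust s= to match yours:

```shell
# Create a 10x720 solid-black PNG to use as the divider overlay.
ffmpeg -y -f lavfi -i color=c=black:s=10x720 -frames:v 1 black.png
```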

How can I convert gif to webm preserving alpha channel?

To start out with I have this gif I got from Google Images which has a transparent alpha channel.
Here is the original gif (open it in a new tab to see the transparency):
Here is a recording of it playing on my screen in case it doesn't display right in the browser:
Then I run the following script to convert it to webm, which is what I need for the game framework I'm using.
avconv -f gif img.gif img.webm
However it doesn't maintain the transparency. Here is with an overlay of a properly transparent webm (the water, taken from https://phaser.io/examples/v2/video/alpha-webm)
The white box shouldn't be appearing around the gem.
First convert the gif to png frames with ImageMagick:
convert img.gif img%03d.png
Then combine them into a webm with this command (I had to get outside help on this):
ffmpeg -framerate 25 -f image2 -i ./img%03d.png -c:v libvpx -pix_fmt yuva420p img.webm
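Depending on your ffmpeg build, you may also be able to convert the GIF directly, skipping the intermediate PNG frames. This is a sketch and assumes libvpx is enabled in your build; the sample GIF is generated here only so the commands are self-contained:

```shell
# Stand-in for img.gif so the example runs on its own:
ffmpeg -y -f lavfi -i testsrc=duration=1:size=64x64:rate=10 img.gif
# Convert straight to WebM with an alpha-capable pixel format:
ffmpeg -y -i img.gif -c:v libvpx -pix_fmt yuva420p img.webm
```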

Gnuplot animation vector

I am trying to make a fluid vector animation in gnuplot. To create the vector values I use Fortran. My Fortran subroutine prints vector data into a txt file called vekdata.txt and creates another file called plotvek.txt with gnuplot commands. This subroutine sits inside a do loop, so vekdata.txt gets updated on every iteration.
I was wondering how I can make an animation of this as it develops in time. Are there some simple commands for it? As it is now, it prints a huge number of pictures to my screen. Every picture is a bit different, so I know the code works.
do t=1,1000
  call vektorplot(storu,storv,n,Re,t)
end do

! inside vektorplot:
open(21,access='sequential',file='plotvek.txt',status='unknown')
write(21,*)'set term png enhanced'
write(21,*)'# plotvek.txt'
write(21,*)'set output sprintf(''frame_%09d.png'',',t,')'
!animation commands
close(21,status='keep')
call execute_command_line("gnuplot -persist plotvek.txt")
I'm posting here an alternative.
Although I usually prefer the animated GIF, as in Karl's answer, very big GIFs are difficult to render and, especially for very long movies, they tend to make applications unresponsive (browsers or slide presentations).
Basically you write to a file every frame and then create a movie.
In this link you have both GIF and movie examples. I'll recall the principles here.
For every frame you set a png terminal and output file. As fortran command, this would be something like:
write(21,*)'set term png enhanced'
write(21,*)'# plotvek.txt'
write(21,*)'set output sprintf("frame_%09d.png",',n+1,')'
[...]
Then, once the program is run, you can create a movie:
mencoder mf://frame_%09d.png -mf fps=30 -ovc lavc -o my_video.avi
Of course mencoder has tons of options to tune your movie.
Another alternative to mencoder is ffmpeg:
ffmpeg -framerate 1/5 -i frame_%09d.png -c:v libx264 -r 30 -pix_fmt yuv420p my_video.mp4
The gif terminal has an option to make a gif animation, but you have to plot it all in one call to the gnuplot script.
You could try something like this:
$ makevectors | gnuplot
where makevectors is the executable of your fortran code, only it prints everything to STDOUT, first
set term gif animation
set out 'vectors.gif'
# plus the rest of your settings
do for [i=1:100] {plot '-' using 1:2:($3*30):($4*25) with vectors}
, then 100 data sets, each terminated by a line containing just "e". Lastly, print
set out
(Ok, the output would close anyway, but just to be very orderly) and you've got a file with that gif animation.
Update: I'd recommend you move your gnuplot commands to a script file and have gnuplot call that on the command line makevectors | gnuplot script.gp. That way you don't have to recompile the program every time you want to change a line colour or something.
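A sketch of what such a script.gp could contain (the data columns and scale factors are taken from the snippet above; the frame count and delay are assumptions):

```
set term gif animation delay 5
set out 'vectors.gif'
# plus the rest of your settings
do for [i=1:100] {
    plot '-' using 1:2:($3*30):($4*25) with vectors
}
set out
```

The Fortran program then only prints the data sets to STDOUT, and plotting settings live in one editable file.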

FFmpeg: how to make video out of slides and audio

So I have several images, some PNGs and some JPGs, and I have MP3 audio. I want to make a video file; I don't care what format.
So I want either:
A video of some xyz size, meaning images are centered and cropped if they go beyond the dimensions, coupled with audio in MP3 format,
or just one image, centered and/or cropped, as a still-image video with audio.
I have tried copying and pasting things, and even modifying them after reading the documentation, but in the end I got a blank video with audio and a huge file that took forever to complete.
I have windows 7.
You have to rename the images into a sequence.
For example, if you have a.png, bds.png, asda.png...
rename them to image1.png, image2.png, image3.png, and so on
(in the order you want the images to appear in the video).
Now make sure you are in the folder where you saved the renamed images,
and use
ffmpeg -i image%d.png output.mp4 (or whichever format you want)
Now, to add audio (say input.mp3) to output.mp4,
use ffmpeg -i input.mp3 -i output.mp4 output2.mp4
This should work.
Hope this helps.
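For the single-still-image case from the question, a commonly used pattern loops one image until the audio runs out. A sketch with generated stand-in inputs (swap in your real image and MP3; the file names and the -tune stillimage choice are assumptions):

```shell
# Stand-ins so the example is self-contained (use your real files instead):
ffmpeg -y -f lavfi -i color=c=blue:s=320x240 -frames:v 1 image1.png
ffmpeg -y -f lavfi -i sine=frequency=440:duration=3 input.wav
# Loop the still image; -shortest stops the video when the audio ends.
ffmpeg -y -loop 1 -i image1.png -i input.wav -c:v libx264 -tune stillimage \
       -c:a aac -pix_fmt yuv420p -shortest video.mp4
```

The -pix_fmt yuv420p makes the output playable in most players, and -shortest keys the video length to the audio.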
