How to create watermark underlined text using ffmpeg - node.js

I am trying to create a watermark with underlined text, but I cannot find an option in the drawtext filter for underlining.
Here is my command:
ffmpeg -i ./public/uploads/videos/1577703125107.mp4 -vf "[in]drawtext=fontfile= ./public/fonts/Arial/arial.ttf: text=my watermark text: fontcolor=#000000: fontsize=20: x=23:y=68 [out]" ./public/uploads/videos/1577703128790.mp4
How can I underline text?

drawbox filter
If you want to use the drawtext filter you would have to draw the line separately with drawbox:
ffmpeg -i input.mp4 -filter_complex "drawtext=text='my watermark text':fontsize=20:fontfile=/usr/share/fonts/TTF/VeraMono.ttf:x=23:y=68,drawbox=w=205:h=2:x=23:y=85:t=fill" output.mp4
This is easiest with a monospaced font, because the drawbox width (w=205 here) has to be chosen to match the rendered text width.
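As a rough sanity check (the 0.6 advance-width ratio is an assumption; actual glyph widths depend on the font): "my watermark text" is 17 characters, and 17 × 20 × 0.6 ≈ 204 px, which lines up with the w=205 box width used above.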
subtitles filter
A possibly simpler, and better looking, alternative method is to use the subtitles filter:
ffmpeg -i input.mp4 -filter_complex "subtitles=underline.srt:force_style='Alignment=3,Fontsize=22'" output.mp4
Contents of underline.srt:
1
00:00:00,000 --> 00:00:05,000
<u>my watermark text</u>
If you don't want to use the numpad-style Alignment tag, you can use an ASS file instead of SRT with the \pos tag for more accurate placement. See ASS tags.
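For reference, here is a minimal sketch of such an ASS file; the resolution, position, and style values are assumptions, so adjust them to your video and extend the end time to cover the whole clip.
Contents of a hypothetical underline.ass:
[Script Info]
; the values in this file are illustrative assumptions
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,20,&H00000000,&H00000000,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0,0,7,10,10,10,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,{\pos(23,68)\u1}my watermark text
Example command:
ffmpeg -i input.mp4 -filter_complex "subtitles=underline.ass" output.mp4
With Alignment 7 (top-left anchor), \pos(23,68) places the line at the same coordinates as the drawtext example, and \u1 turns underlining on.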

Related

FFMPEG color key overlay. audio output not working after conversion

ffmpeg -i hollywood.webm -i chroma.mp4 -filter_complex [1:v]colorkey=0x000000:0.3:0.2[ckout];[0:v][ckout]overlay[out] -map [out] test1.mp4
If you don't know the answer, I would appreciate having the -map documentation sent to me; I didn't find it.
This is what I have been trying to do, where chroma is the overlay video (which also has audio) and hollywood is the background video (which also has audio). The output is good image-wise, but the -map section is probably not going that well.
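A sketch of one possible fix, not from the original thread: keep the video mapping and also map an audio stream explicitly, e.g. the background video's audio:
ffmpeg -i hollywood.webm -i chroma.mp4 -filter_complex "[1:v]colorkey=0x000000:0.3:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" -map 0:a test1.mp4
Here -map "[out]" selects the filtered video and -map 0:a selects the audio from the first input (hollywood.webm); use -map 1:a for the overlay's audio instead, or mix the two with the amerge/amix filters. The -map option itself is described in the ffmpeg documentation under "Stream selection" and "Advanced options".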

Using ffmpeg to overlay black line or add border to two side by side videos

I am using the following to generate a video that is side by side.
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -y final.mp4
It looks like this.
http://www.mo-de.net/d/partnerAcrobatics.mp4
I would like to either overlay a vertical black line right in the middle, or add a black border to the video on the left. If I add a border to the left video, I would like to maintain the combined dimensions of the original videos; that would require subtracting the border width from the left video's width. I will take either solution.
Thanks
Solved: If both videos have no audio, use this.
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]crop=639:720, pad=640:720:0:0:black[tmp0]; [1:v]crop=639:720, pad=640:720:1:0:black[tmp1]; [tmp0][tmp1]hstack[v] " -map [v] -y o.mp4
If both videos have audio, use the following.
ffmpeg -i c2.mov -i c1.mov -filter_complex "[0:v]crop=1279:720, pad=1280:720:0:0:black[tmp0]; [1:v]crop=1279:720, pad=1280:720:1:0:black[tmp1]; [tmp0][tmp1]hstack[v];[0:a][1:a]amerge=inputs=2[a]" -map [v] -map [a] -ac 2 -y o.mp4
Both videos must have the same height.
crop=1279:720
I used crop to remove one pixel from the video width on the right. It was originally 1280 pixels.
pad=1280:720:0:0:black[tmp0]
I padded the left movie by declaring a new canvas size of 1280 pixels. This keeps the movie at the left edge, leaving one pixel of space on the right, which is colored black.
I padded the right movie and shifted it to the right, exposing the black border on the left.
pad=1280:720:1:0:black[tmp1]
I did this to both videos so the effect remains centered when the videos have the same dimensions.
Use
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS,crop=iw-10:ih:0:0, pad=2*(iw+10):ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -y final.mp4
Since you've already joined the videos, it seems like you want to separate them with a vertical black line.
Since this line is stationary (not animated), the overlay can be a still picture, e.g. black.png (10px wide and the same height as your video).
If you want this line to be animated or a moving picture, then the overlay can be another video.
If the videos had NOT been joined, you could pad the 2nd video on the left or the 1st video on the right before joining,
e.g. ffmpeg -i 1.mp4 -vf "pad=width=<new width>:height=<same height>:color=black" out.mp4
The code below answers your question, with a vertical line of width 10 pixels in the middle, separating the 2 videos:
ffmpeg -i in.mp4 -i black.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" out.mp4
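If you still need to create black.png, one way is ffmpeg's color source; the 10x720 size below assumes a 10-pixel-wide line and 720-pixel-high videos:
ffmpeg -f lavfi -i color=c=black:s=10x720 -frames:v 1 black.png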

FFMpeg drawtext width?

I'm using drawtext to print some text on an animated GIF.
Everything is working, but I'm unable to specify a bounding box and make the text wrap.
I'm currently using this:
ffmpeg -i image.gif -filter_complex "drawtext=textfile=text.txt:x=main_w/2 - text_w/2:y=main_h/2 - text_h/2:fontfile=Roboto-Regular.ttf:fontsize=24:fontcolor=000000" image_out.gif
Is there a way to wrap text?
You can use the FFmpeg subtitles filter for automatic word wrapping and to place the text in the middle center.
There are many subtitle formats, but this answer has examples for ASS and SRT subtitles. ASS supports more formatting options, but SRT is a simpler format and can be modified with the force_style option in the subtitles filter.
ASS subtitles
Make your subtitles in Aegisub. Click the button that looks like a pink "S" to open the Styles Manager. Click edit. Choose Alignment value 5. Save subtitles file as ASS format.
ffmpeg example:
ffmpeg -i input -filter_complex "subtitles=subs.ass" -c:a copy output
SRT subtitles
SRT is simpler than ASS but lacks features so you may need to use the force_style option in the subtitles filter. You can make these subtitles with a text editor. Example SRT file:
1
00:00:00,000 --> 00:00:05,000
Text with
manual line break
and with automatic word wrapping of long lines.
2
00:00:07,000 --> 00:00:10,000
Another line. Displays during 7-10 seconds.
ffmpeg example:
ffmpeg -i input -filter_complex "subtitles=subs.srt:force_style='Alignment=10,Fontsize=24'" -c:a copy output
For more force_style options look at the Styles section in the contents of an ASS file and refer to ASS Tags.
In this case, Alignment is using the legacy line position numbers, but the ASS example above is using the modern "numpad" number system.
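As an illustrative sketch (the extra values below are assumptions, not part of the answer above), further style fields can be appended to force_style in the same way, e.g. an &HAABBGGRR colour (opaque yellow here) and bold text:
ffmpeg -i input -filter_complex "subtitles=subs.srt:force_style='Alignment=10,Fontsize=24,PrimaryColour=&H0000FFFF,Bold=1'" -c:a copy output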

How can I convert gif to webm preserving alpha channel?

To start out with, I have this gif I got from Google Images, which has a transparent alpha channel.
Here is the original gif (open it in a new tab to see the transparency):
Here is a recording of it playing on my screen in case it doesn't display right in the browser:
Then I run the following script to convert it to webm, which is what I need for the game framework I'm using.
avconv -f gif -i img.gif img.webm
However, it doesn't maintain the transparency. Here it is with an overlay of a properly transparent webm (the water, taken from https://phaser.io/examples/v2/video/alpha-webm):
The white box shouldn't be appearing around the gem.
First convert the gif to png frames:
convert img.gif img%03d.png
Then combine them into a webm with this command (I had to get outside help on this):
ffmpeg -framerate 25 -f image2 -i ./img%03d.png -c:v libvpx -pix_fmt yuva420p img.webm
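An untested one-step sketch of the same conversion, assuming your ffmpeg build's GIF decoder preserves the transparency:
ffmpeg -i img.gif -c:v libvpx -pix_fmt yuva420p img.webm
The key part in both commands is -pix_fmt yuva420p, which carries the alpha plane into the VP8/WebM output.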

FFmpeg: how to make video out of slides and audio

So I have several images, some PNGs and some JPGs, and I have MP3 audio. I want to make a video file; I don't care what format.
So I want either:
a video of some xyz size, meaning images are centered and cropped if they go beyond the dimensions, coupled with the audio in MP3 format,
or just one image, centered and/or cropped, as a still-image video with audio.
I have tried copying and pasting things, and even modifying them after reading the documentation, but in the end I got a blank video with audio and a huge file that took forever to complete.
I have Windows 7.
You have to rename the images in a sequence.
For example, if you have a.png, bds.png, asda.png...
rename them to image1.png, image2.png, image3.png and so on
(they should be in the sequence you want the images to appear in the video).
Now make sure you are in the folder where you saved the renamed images,
then use
ffmpeg -i image%d.png output.mp4 (whichever format you want)
Now, to add audio (say input.mp3) to output.mp4,
use ffmpeg -i input.mp3 -i output.mp4 output2.mp4
This should work.
Hope this helps.
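An untested sketch that combines both steps; the 5-seconds-per-slide rate and the libx264/yuv420p settings are assumptions, not part of the answer above:
ffmpeg -framerate 1/5 -i image%d.png -i input.mp3 -c:v libx264 -r 30 -pix_fmt yuv420p -shortest output.mp4
-framerate 1/5 shows each image for 5 seconds, -shortest stops the output when the slides or the audio run out, and -pix_fmt yuv420p keeps the file playable in most players.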
