How to make video effects with ffmpeg from the command line in Node.js?

I am working with ffmpeg. I have already done several pieces of work for an editor, but now I need to apply effects to video. I have the command-line commands that produce the effects, but I can't convert them to Node.js:
cellauto: ffplay -f lavfi -i cellauto=rule=110
life: ffplay -f lavfi -i life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16
So I need to convert these into filters with Node.js.
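One straightforward way is to spawn the ffmpeg binary from Node.js with the same arguments you would type in the shell. A minimal sketch, assuming ffmpeg is on the PATH; rendering the life filter to out.mp4 for 10 seconds (instead of previewing with ffplay) is an illustrative choice, not part of the original commands:

const { spawn } = require('child_process');

// Same filter graph as the "life" command above; -f lavfi makes the
// filter graph itself the input source.
const args = [
  '-f', 'lavfi',
  '-i', 'life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16',
  '-t', '10',       // assumed duration for this sketch
  '-y', 'out.mp4',  // assumed output file
];

const ffmpeg = spawn('ffmpeg', args);
ffmpeg.stderr.on('data', (chunk) => process.stderr.write(chunk)); // ffmpeg logs progress on stderr
ffmpeg.on('close', (code) => console.log(`ffmpeg exited with ${code}`));

Wrappers such as fluent-ffmpeg exist, but spawning the binary keeps the mapping from a working command line one-to-one, which makes commands like these easy to port.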

Related

No data written to stdin or stderr from ffmpeg

I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten far enough that I can create a video from bitmap images that I create in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need to get ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, otherwise ffmpeg can't understand my video; but when I have that parameter set, ffmpeg refuses to write anything to stdout.
ffmpeg is running with these parameters:
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
Edit:
Maybe I should explain a bit more.
The system I am creating is set up so that, instead of my stream server asking the video recorder for a video stream, it is the recorder that tells the server that there is a stream.
I have solved my problem on my own. (-:
I now have two solutions.
1. Change my -f rawvideo to -f data; that works for me anyway.
2. Encode my bitmaps as JPEG in code and pipe the JPEG images to stdin. This also requires changing the ffmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1, and it is by far the slowest thing I have ever done, and I can't use a 4K input.
Thanks @Mulvya for trying to help.
@eFox Thanks for editing my spelling and grammar mistakes.
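For anyone with the same setup, the original rawvideo pipeline can be wired up from Node.js roughly like this. This is a minimal sketch, not the poster's code: the 320x240 frame size (scaled down from the question's 3840x2160 to keep the example light) and the makeFrame helper are hypothetical, and -f h264 is added on the output because ffmpeg usually cannot infer a container when writing to a pipe:

const { spawn } = require('child_process');

const WIDTH = 320, HEIGHT = 240; // hypothetical; the question uses 3840x2160

// Raw RGBA frames in on stdin, encoded h264 out on stdout,
// mirroring the command from the question.
const ffmpeg = spawn('ffmpeg', [
  '-r', '15', '-f', 'rawvideo', '-s', `${WIDTH}x${HEIGHT}`, '-pixel_format', 'rgba',
  '-i', 'pipe:0',
  '-r', '15', '-vcodec', 'h264', '-f', 'h264', 'pipe:1',
]);

ffmpeg.stdout.on('data', (chunk) => {
  // Encoded video arrives here; forward it to the stream server.
});
ffmpeg.stderr.on('data', (chunk) => process.stderr.write(chunk));

// Hypothetical frame source: one solid white, fully opaque RGBA frame.
function makeFrame() {
  return Buffer.alloc(WIDTH * HEIGHT * 4, 0xff);
}

setInterval(() => ffmpeg.stdin.write(makeFrame()), 1000 / 15);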

Passing processed video from OpenCV to FFmpeg for HLS streaming (Raspberry Pi)

Hi, I have a question. I have OpenCV and ffmpeg on the Raspberry Pi, and I am trying to stream live video from it. At the moment I have the output of OpenCV saving as a .avi file, and I have a command for ffmpeg:
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
This command takes the output and creates the playlist (.m3u8) and the segments (.ts).
At present the OpenCV side is programmed in C++ (this cannot change). I have an executable built from it, and both the C++ executable and the above ffmpeg command live in a Bash script.
#!/bin/bash
while true; do
    ./OpenCV
    ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
done
This does allow me to stream the processed OpenCV video. My issue is that, because the Bash script runs in a while loop, it keeps resetting the playlist and the .ts files, so I have to constantly press play on the client connection.
Is there anyway around this?
I tried including a variable that would increment every loop, but if I replace '%03d' with it I get an error.
If you insist on running your program (OpenCV) and ffmpeg in a loop, then you can specify the initial HLS sequence number for stream.m3u8 using start_number. Something like this:
... as before ...
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number $I stream.m3u8
where $I is a variable that you have to increment each time the loop runs.
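Spelled out, the loop might look like this. A sketch only: it assumes each ffmpeg run produces exactly one segment, which is precisely the fragile assumption discussed next:

I=0
while true; do
    ./OpenCV
    ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number $I stream.m3u8
    I=$((I+1))   # assumes exactly one new segment per iteration
done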
But this approach is very fragile and will probably result in an incorrect stream, because it assumes that ffmpeg will produce only a single segment, when in reality it will probably produce several.
A much better approach is to run OpenCV and ffmpeg in parallel and make them talk to each other. That way there is no need to write to a temporary file (out.avi), run OpenCV and ffmpeg in sequence, or keep the media sequences synchronized.
I think you can hack it like this. Note that you may need to change OpenCV so that it writes constantly to out.avi and does not return after a while:
./OpenCV &
tail -n +0 -f out.avi | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
A better approach is to change your program to write to stdout or to a named pipe, and run it like so:
./OpenCV | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
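For the named-pipe variant, a sketch (the pipe name video.fifo is illustrative, and OpenCV is assumed to have been changed to write its AVI stream to stdout, as suggested above):

mkfifo video.fifo
./OpenCV > video.fifo &
ffmpeg -i video.fifo -hls_segment_filename '%03d.ts' stream.m3u8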

concatenate video files using ffmpeg - garbled images but audio okay

I am trying to concatenate video files so that the next one follows the one before it during playback. The formatting of all of the files is the same, and the files all have audio and video.
I think I am very close (hopefully!) to getting this to work, but I have one final problem. The command below takes all of the mp4 files in my folder and creates one big mp4 file, which is the right total size in MB, but the images for all videos after the first are garbled. The audio is okay (it continues just fine from video to video). Also, I don't get any error messages.
ffmpeg -f concat -i <(for f in /folder1/*.mp4; do echo "file '$f'"; done) -c copy /folder1/all.mp4
I'm not very familiar with ffmpeg yet, so I've just been trying the different suggestions I've found on the web. Can anyone suggest other things for me to try? (I've tried reading the FAQs, but I have to confess that I don't fully understand them. Also, there seem to be some posts about audio being missing after concatenation, but I haven't seen anything about images being garbled.) Thanks in advance!
I have had good luck using this approach (avconv is a fork of ffmpeg):
avconv -i 1.mp4 1.mpeg
avconv -i 2.mp4 2.mpeg
avconv -i 3.mp4 3.mpeg
cat 1.mpeg 2.mpeg 3.mpeg | avconv -f mpeg -i - -vcodec mpeg4 -strict experimental output.mp4
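For the record, the same intermediate-format trick works in ffmpeg itself using MPEG-TS files and the concat protocol. A sketch with illustrative file names, assuming H.264 video and AAC audio in the mp4s:

ffmpeg -i 1.mp4 -c copy -bsf:v h264_mp4toannexb 1.ts
ffmpeg -i 2.mp4 -c copy -bsf:v h264_mp4toannexb 2.ts
ffmpeg -i "concat:1.ts|2.ts" -c copy -bsf:a aac_adtstoasc output.mp4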

Multiple fadeIn/fadeOut effects in one audio file with ffmpeg

I have a problem adding several fade effects to one audio file. When I try to use a command like this:
ffmpeg -y -i /home/user/video/test/sound.mp3 -af "afade=t=in:ss=0:d=3,afade=t=out:st=7:d=3,afade=t=in:st=10:d=3,afade=t=out:st=17:d=3,afade=t=in:st=20:d=3,afade=t=out:st=27:d=3" /tmp/test.mp3
then my output audio file has a fade-in and fade-out applied only once; none of the subsequent effects get applied. Is there any way to apply several fade effects to the same audio file? Also, what is the difference between the ss and st parameters in this command?
The problem is that after fading out the audio you are trying to fade in the silence.
The solution is to disable the fade out filter when you want to start fading in.
You can achieve that with Timeline Editing to enable the filters for a particular amount of time.
The following example works just fine (as for ss vs. st: in afade, ss selects the start position in samples, while st specifies it in seconds):
ffmpeg -i input.mp3 -af "afade=enable='between(t,0,3)':t=in:ss=0:d=3,afade=enable='between(t,7,10)':t=out:st=7:d=3,afade=enable='between(t,10,13)':t=in:st=10:d=3,afade=enable='between(t,13,16)':t=out:st=13:d=3" -t 16 output.mp3
Works for me with ffmpeg 2.5.2.
I'm using the fade-in and fade-out audio filters, both with a duration of 3 seconds.
ffmpeg -i audio.mp3 -af 'afade=t=in:ss=0:d=3,afade=t=out:st=27:d=3' out.mp3
I'd recommend upgrading your ffmpeg, as this might be a bug. More information in the docs.
Take a look here: ffmpeg volume filters.
volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame
The complete command:
ffmpeg -i movie.wav -filter volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame modified-movie.wav

Watermarking video from the Linux command line

Does anyone know how to watermark video from the Linux command line using a simple tool?
Watermarking in ffmpeg isn't supported in the current version, and requires a custom compile.
Max.
ffmpeg -y -i 'inputFile.mpg' -vhook '/usr/lib/vhook/watermark.so -f /home/user/logo.gif'
Make note of the "-vhook" parameter; the watermark.so path may vary.
Another simple way to do this is to update ffmpeg to the newest version and use the overlay video filter:
ffmpeg -y -i video.mp4 -i watermark.png -filter_complex "overlay=(main_w-overlay_w):(main_h-overlay_h)" watermark.mp4
This also gives you more options for where to place the watermark. For example, if you wanted to place the watermark in the center of the video, you would use:
-filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2"
