How to remove a specific duration from a video using ffmpeg - Linux

I've tried many ways to solve the problem. I also tried this command before:
ffmpeg -i movie.mp4 -vf trim=3:8 cut.mp4
but in the result the audio is untouched while the video is cut. What I want is for seconds 3-8 to be removed from both the video and the audio.
Can anyone help me resolve this?

Use
ffmpeg -i movie.mp4 -vf "select='not(between(t,3,8))',setpts=N/FRAME_RATE/TB" -af "aselect='not(between(t,3,8))',asetpts=N/SR/TB" cut.mp4
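The select and aselect filters keep only the frames whose timestamp t falls outside the 3-8 second range, and setpts/asetpts regenerate the timestamps so the remaining frames play back without a gap. Note that this re-encodes the file. The same pattern extends to removing several ranges by multiplying the conditions, which acts as a logical AND in ffmpeg expressions; a sketch, with the second range invented for illustration:
ffmpeg -i movie.mp4 -vf "select='not(between(t,3,8))*not(between(t,20,25))',setpts=N/FRAME_RATE/TB" -af "aselect='not(between(t,3,8))*not(between(t,20,25))',asetpts=N/SR/TB" cut.mp4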

Related

Combine images into video (with audio in background) in ffmpeg

I have a folder with 15 images and 1 audio file:
image_1.jpg, image_2.jpg, image_3.jpg ..... and music.webm
(Also, the resolution of the images is 1440x720.)
I want to combine these images into a video with the audio in the background, and the framerate I require is 0.2 (5 seconds per frame). I searched Stack Overflow, found the nearest example, and tried it, but it failed:
ffmpeg -f image2 -i image%03d.jpg -i music.webm output.mp4
(Actually I have very little knowledge of ffmpeg, so please excuse my foolishness.)
Please help me with my issue. (Also, I didn't understand where in the command I have to enter the framerate.)
Edit: If needed I can easily tweak the filenames of the images. Feel free to tell me that too.
How did your command fail? Please paste the output.
Going by your image filenames, the pattern should be image_%d.jpg, not image%03d.jpg.
e.g.
image_%d.jpg applies to: image_1.jpg, image_2.jpg, image_3.jpg
image_%04d.jpg applies to: image_0001.jpg, image_0002.jpg ... image_9999.jpg
Also, when using this "pattern sequence", make sure:
the sequence starts from xxx001.jpg, otherwise you need to pass the -start_number parameter;
the sequence is not broken, e.g. image5.jpg, image6.jpg, (missing image7.jpg) image8.jpg.
Refer to:
https://ffmpeg.org/ffmpeg-formats.html#image2-2
https://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/image_sequence
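If the numbering starts somewhere other than 1, the image2 demuxer's -start_number option tells ffmpeg where the sequence begins; a small sketch (filenames assumed):
ffmpeg -start_number 5 -i image_%d.jpg -i music.webm output.mp4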
Try this:
ffmpeg -r 0.2 -i image_%02d.jpg -i music.webm -vcodec libx264 -crf 25 -preset veryslow -acodec copy video.mkv
So -r specifies the fps (I actually didn't try using a float value there, but give it a go).
-vcodec specifies the video codec, -crf the quality, -preset the encoding speed (slower is more efficient), and -acodec copy says the audio should be copied.
I think that should work, give it a try. You will need to rename the images to image_01.jpg, image_02.jpg, ...
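A variant often suggested for slideshows (a sketch, not tested against this exact setup) sets the input rate with -framerate 1/5 (one image every 5 seconds) and re-times the output, since some players cope badly with a 0.2 fps stream; -shortest, which stops at the end of the shorter input, is an assumption about the desired behaviour:
ffmpeg -framerate 1/5 -i image_%02d.jpg -i music.webm -c:v libx264 -r 25 -pix_fmt yuv420p -c:a copy -shortest video.mkv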
Also have a look here: How to create a video from images with FFmpeg?

No data written to stdin or stderr from ffmpeg

I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten as far as being able to create a video from bitmap images that I create in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need to get ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, otherwise ffmpeg can't understand my video; but when I have that parameter set, ffmpeg refuses to write anything to stdout.
ffmpeg is running with these parameters:
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
Edit:
Maybe I should explain a bit more.
The system I am creating is set up so that, instead of my stream server asking the video recorder for a video stream, it is the recorder that tells the server that there is a stream.
I have solved my problem on my own. (-:
I now have 2 solutions:
1. Change my -f rawvideo to -f data; that works for me anyway.
2. Encode my bitmaps as JPEG in code and pipe my JPEG images to stdin. This also requires me to change the ffmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1, and it is by far the slowest thing I have ever done; I also can't use a 4K input.
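For what it's worth: when the output goes to pipe:1, ffmpeg cannot infer a container from a file name, so the output usually needs an explicit format too. A minimal sketch of the raw-video-in, H.264-out pipeline (whether the receiving server accepts a raw H.264 elementary stream is an assumption):
ffmpeg -f rawvideo -pixel_format rgba -video_size 3840x2160 -framerate 15 -i pipe:0 -c:v libx264 -f h264 pipe:1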
Thanks @Mulvya for trying to help.
@eFox Thanks for editing my spelling and grammar mistakes.

Concatenate video files using ffmpeg - garbled images but audio okay

I am trying to concatenate video files so that the next one follows the one before it when played. The formatting of all the files is the same, and the files all have audio and video.
I think I am very close (hopefully!) to getting this to work, but I have one final problem. The command below takes all of the mp4 files in my folder and creates a big mp4 file, which is the right size in total MB, but the images for all videos after the first one are garbled. The audio is okay (it continues just fine from video to video). Also, I don't get any error messages.
ffmpeg -f concat -i <(for f in /folder1/*.mp4; do echo "file '$f'"; done) -c copy /folder1/all.mp4
I'm not very familiar with ffmpeg yet, so I've just been trying the different suggestions I've found on the web. Can anyone suggest other things for me to try? (I've tried reading the FAQs, but I have to confess that I don't fully understand them. Also, there seem to be some posts about audio being missing after concatenation, but I haven't seen anything about images being garbled.) Thanks in advance!
I have had good luck using this ... avconv is a fork of ffmpeg
avconv -i 1.mp4 1.mpeg
avconv -i 2.mp4 2.mpeg
avconv -i 3.mp4 3.mpeg
cat 1.mpeg 2.mpeg 3.mpeg | avconv -f mpeg -i - -vcodec mpeg4 -strict experimental output.mp4
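With a current ffmpeg the detour through intermediate MPEG files isn't needed. Stream-copy concatenation only works when every input has identical codec parameters, and mismatched parameters are a common cause of garbled frames; the concat filter instead re-encodes everything into one consistent stream. A sketch for three inputs (filenames assumed):
ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4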

Multiple fadeIn/fadeOut effects in one audio file with ffmpeg

I have a problem adding several fade effects to one audio file. When I try a command like this:
ffmpeg -y -i /home/user/video/test/sound.mp3 -af "afade=t=in:ss=0:d=3,afade=t=out:st=7:d=3,afade=t=in:st=10:d=3,afade=t=out:st=17:d=3,afade=t=in:st=20:d=3,afade=t=out:st=27:d=3" /tmp/test.mp3
then my output audio file has a fade-in and fade-out applied only once; all the following effects don't get applied. Is there any way to apply several fade effects to the same audio file? Also, what is the difference between the ss and st parameters in this command?
The problem is that after fading out the audio you are trying to fade in the silence.
The solution is to disable the fade out filter when you want to start fading in.
You can achieve that with Timeline Editing to enable the filters for a particular amount of time.
The following example works just fine:
ffmpeg -i input.mp3 -af "afade=enable='between(t,0,3)':t=in:ss=0:d=3,afade=enable='between(t,7,10)':t=out:st=7:d=3,afade=enable='between(t,10,13)':t=in:st=10:d=3,afade=enable='between(t,13,16)':t=out:st=13:d=3" -t 16 output.mp3
Works for me with ffmpeg 2.5.2.
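As for ss versus st: in afade, ss is short for start_sample (the fade start expressed in audio samples) and st for start_time (expressed in seconds). At a 44.1 kHz sample rate (assumed here), these two commands start the fade at the same point:
ffmpeg -i input.mp3 -af "afade=t=in:st=3:d=3" out.mp3
ffmpeg -i input.mp3 -af "afade=t=in:ss=132300:d=3" out.mp3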
I'm using the fade-in and fade-out audio filters, both with a duration of 3 seconds:
ffmpeg -i audio.mp3 -af 'afade=t=in:ss=0:d=3,afade=t=out:st=27:d=3' out.mp3
I'd recommend upgrading your ffmpeg, as this might be a bug. More information is in the docs.
Take a look here: ffmpeg volume filters
volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame
Complete command (the outer double quotes preserve the inner single quotes, which stop the filtergraph parser from splitting on the commas in the expression):
ffmpeg -i movie.wav -filter "volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame" modified-movie.wav
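The expression approach can also chain several fades in one pass. A sketch that holds full volume until 7 s, fades out over 7-10 s, and fades back in over 10-13 s (the boundaries are illustrative):
ffmpeg -i movie.wav -af "volume='if(lt(t,7),1,if(lt(t,10),max(1-(t-7)/3,0),min((t-10)/3,1)))':eval=frame" modified-movie.wav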

Watermarking video from the Linux command line

Does anyone know how to watermark video from the Linux command line using a simple tool?
Watermarking in ffmpeg isn't supported in the current version, and requires a custom compile.
Max.
ffmpeg -y -i 'inputFile.mpg' -vhook '/usr/lib/vhook/watermark.so -f /home/user/logo.gif'
Make note of the -vhook parameter; the watermark.so path may vary.
Another simple way to do this is to update ffmpeg to the newest version and use the overlay video filter:
ffmpeg -y -i video.mp4 -i watermark.png -filter_complex "overlay=(main_w-overlay_w):(main_h-overlay_h)" watermark.mp4
This also gives you more options for where to place the watermark. For example, if you wanted to place the watermark in the center of the video you would use:
-filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2"
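To keep the watermark a small distance from a corner, subtract a margin from the overlay position (a sketch; the 10-pixel offset is arbitrary):
ffmpeg -y -i video.mp4 -i watermark.png -filter_complex "overlay=main_w-overlay_w-10:main_h-overlay_h-10" watermark.mp4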
