Using ffmpeg, I was able to remove duplicate frames from a video using ffmpeg -i in.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4. However, the audio went on for longer than the video, obviously because the filter only affects the video stream. How would I remove the segments of audio that accompany the removed frames?
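If trimming the trailing audio is acceptable, one partial workaround is the -shortest output option, which ends the output when the shortest stream ends. This is only a sketch: it truncates the audio to the new, shorter video length; it does not cut the audio segments aligned with each dropped frame:

ffmpeg -i in.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -shortest out.mp4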
Related
I am recording AVI files with Camtasia. For some reason the video stream is 2.3-5 seconds shorter than the audio stream.
When I convert the video with ffmpeg from AVI to MP4 it cuts the audio to the video length.
Would duplicating the last frame until the end of the audio be a solution? If so, how can this be done using ffmpeg?
The important thing is to convert the AVI to MP4 using ffmpeg and keep the audio stream of the video complete.
Thank you.
Edit 1: This issue is somehow handled automatically by ffmpeg 2.x, but ffmpeg 4.x will cut the audio. With the same settings, the old version converts correctly.
Edit 2: tpad helped. Thank you very much @kesh. I used
-filter_complex 'tpad=stop=NUMBER_OF_FRAMES:stop_mode=clone'
I tried to get the duration using ffprobe and multiplied the number of seconds by the number of frames per second, but it was not enough. For each video I had to increase that number by 100-150 frames.
The issue is that I cannot detect the exact number of frames to tell tpad. I also tried
-filter_complex 'tpad=stop=-1:stop_mode=clone'
but it freezes while processing.
Is there any other option?
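For reference, a minimal sketch of the calculation described above, assuming hypothetical file names, that both streams start at zero, and that ffprobe reports usable stream durations (AVI duration metadata can be unreliable, which may explain the extra 100-150 frames):

V=$(ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 in.avi)   # video length in seconds
A=$(ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 in.avi)   # audio length in seconds
FPS=25   # assumed constant frame rate; read r_frame_rate with ffprobe in practice
PAD=$(awk "BEGIN { printf \"%d\", ($A - $V) * $FPS + 0.5 }")   # frames to clone, rounded
ffmpeg -i in.avi -filter_complex "tpad=stop=${PAD}:stop_mode=clone" -c:a copy out.mp4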
What I want is to be able to create a livestream from an Ubuntu 14.04 server to an RTMP server (like Twitch) and to be able to use NodeJS to control visual aspects (adding layers, text, images) and add different sources (video files, other livestreams, etc.). Like having OBS running on a server.
What I've done/researched so far:
FFmpeg
With ffmpeg I can stream video files like this:
ffmpeg -re -i video.mp4 -c:v libx264 -preset fast -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
Also, using -filter_complex, I can create something close to a layer, as this tutorial explains:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
But I found the following problems:
The streams that I create with ffmpeg only last until the video file is over; if I wanted to stream multiple video files (a dynamic playlist), the stream would be interrupted between files (see the sketch after this list);
The manipulation is very limited as far as I can tell; I can't edit filter_complex once ffmpeg is executing;
I can't display text or create animated overlays, such as sliding text.
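For reference, a minimal sketch addressing the first and third points, assuming a hypothetical playlist.txt that lists the files and an ffmpeg build with libfreetype for drawtext. The concat demuxer keeps one continuous stream across files, and drawtext renders text whose x position moves with time t:

# playlist.txt contains lines like:  file 'video1.mp4'
ffmpeg -re -f concat -safe 0 -i playlist.txt \
  -vf "drawtext=text='Hello':fontcolor=white:fontsize=36:y=40:x=w-100*t" \
  -c:v libx264 -preset fast -c:a aac -b:a 128k -ar 44100 -f flv rtmp://example.com

This still cannot be edited while ffmpeg is running, so it does not solve the second point.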
I tried to search for any CLI/NodeJS package that is able to create a continuous video stream, manipulate it, and use it as an input source for ffmpeg, which streams to the RTMP server.
Can someone give me more information about what I am trying to do?
I'm playing with github.com/fluent-ffmpeg/node-fluent-ffmpeg to see if I have a different outcome.
I need to make a video which will play on iPhone and Android, but the problem is that when I tap play on the phone it takes at least 7 seconds to start.
So maybe I need to fix something in this code to make the video play on phones (maybe another format is needed):
ffmpeg -i VIDEO -c:v libx264 -s 640x480 -strict experimental -c:a aac VIDEO.MP4
There must be something that makes the video start playing without a delay.
I tried an FLV file and it worked fine on Android, but the iPhone can't play it.
If you're referring to a progressive download scenario then you can use:
-movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the file. This operation can take a while, and will not work in various situations such as fragmented output, thus it is not enabled by default.
Source: the FFmpeg documentation for -movflags.
The moov atom is generally at the end of the file, in which case a full download is required before playback can begin. Moving it to the start with the flag above allows playback to start immediately.
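For example, a sketch applying the flag to the command from the question (same placeholder file names):

ffmpeg -i VIDEO -c:v libx264 -s 640x480 -strict experimental -c:a aac -movflags +faststart VIDEO.MP4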
I'm writing a chat application with video calls using WebRTC. I have two MediaStreams, remote and local, and I want to merge and save them as one file, so that when opening the file I see a large video frame (the remote stream) and a small video frame at the top right (the local stream). Right now I can record the two streams separately using RecordRTC. How can I merge them with nodejs? (No code, because I don't know how it's done.)
You can use FFmpeg with -filter_complex, here is a working and tested example using FFmpeg version N-62162-gec8789a:
ffmpeg -i main_video.mp4 -i in_picture.mp4 -filter_complex "[0:v:0]scale=640x480[main_video]; [1:v:0]scale=240x180[in_picture];[main_video][in_picture]overlay=390:10" output.mp4
So, this command tells FFmpeg to read from two input files, main_video.mp4 and in_picture.mp4, and then passes the filtering instructions to the -filter_complex flag...
The -filter_complex flag takes [0:v:0] (first input, first video track), scales it to 640x480 px, and labels the result [main_video]; then it takes [1:v:0] (second input, first video track), resizes it to 240x180 px, and labels it [in_picture]; finally it merges the two by overlaying the second at x=390, y=10.
Then it saves the output to output.mp4.
Is that what you want?
UPDATE: I forgot to add that all you need in Node is a module to run FFmpeg; there are plenty of those:
https://nodejsmodules.org/tags/ffmpeg
I know that there are a million ways to download a video from YouTube and then convert it to audio or do further processing on it. But recently I was surprised to see an app called YoutubeToMp3 on Mac actually showing "Skipping X MB of video" and supposedly downloading only the audio from the video, without using bandwidth to download the entire video and then convert it. I was wondering whether this is actually correct and possible at all, because I can't find any way to do it. Do you have any ideas?
EDIT:
After some tests, here is some additional information on the topic. The video I tried to get the audio from is just a sample mp4 file from the internet:
http://download.wavetlan.com/SVV/Media/HTTP/MP4/ConvertedFiles/MediaCoder/MediaCoder_test6_1m9s_XVID_VBR_306kbps_320x240_25fps_MPEG1Layer3_CBR_320kbps_Stereo_44100Hz.mp4
I tried
ffmpeg -i "input" out.mp3
ffmpeg -i "input" -vn out.mp3
ffmpeg -i "input" -vn -ac 2 -ar 44100 -ab 320k -f mp3 output.mp3
ffmpeg -i "input" -vn -acodec copy output.mp3
Unfortunately, none of these commands seems to use less bandwidth. They all download the entire video. Now that you have the video, can you confirm whether there is actually a command that downloads only the audio stream and lowers the bandwidth usage? Thanks!
After a lot of research I found out that this is not possible and developed an alternative approach:
Download the mp4 header
Parse the header and get the locations of the audio bytes
Download the audio bytes with http range requests and offsets
Assemble the audio bytes and wrap them in a simple ADTS container to produce a playable m4a file
That way only bandwidth for the audio bytes is used. If you find a better approach, please let me know.
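As an illustration of step 3, with a hypothetical URL and byte offsets (real offsets come from parsing the header in step 2), the range requests could look like this:

curl -r 0-65535 -o header.bin "http://example.com/video.mp4"                # first 64 KiB; the header to parse (note the moov atom may instead sit at the end of the file)
curl -r 1048576-2097151 -o audio_chunk.bin "http://example.com/video.mp4"   # one run of audio bytes at offsets from the parsed sample tables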
For a sample Android APP and implementation check out:
https://github.com/feribg/audiogetter/blob/master/audiogetter/src/main/java/com/github/feribg/audiogetter/tasks/download/VideoTask.java
FFmpeg is capable of accepting a URL as input. If the resource is seekable, then FFmpeg could theoretically skip all the video frames, and thus it would need to download only the data for the audio stream.
Try using
ffmpeg -i http://myvideo.avi out.mp3
and see if it takes less bandwidth.