The code was taken from the following answer: FFmpeg - Overlay one video onto another video? and then slightly modified.
ffmpeg -i stream1.mp4 -i stream2.mp4 \
-filter_complex "[1:v]setpts=PTS+5/TB[a]; \
[0:v][a]overlay=enable=gte(t\,5):shortest=1[out]; \
[0][1]amix[a]" \
-map [out] -map [a] \
-c:v libx264 -crf 18 -pix_fmt yuv420p \
-y output.mp4
stream1.mp4 is, for example, 35 seconds long and stream2.mp4 is, for example, 30 seconds long. I want the overlay (stream2.mp4) video+audio to start after 5 seconds. The overlay (stream2.mp4) video starts as expected.
But the overlay (stream2.mp4) audio starts from the beginning. How do I make the overlay (stream2.mp4) audio start after 5 seconds as well?
Add the adelay filter:
ffmpeg -y -i stream1.mp4 -i stream2.mp4 \
-filter_complex "[1:v]setpts=PTS+5/TB[v]; \
[0:v][v]overlay=enable=gte(t\,5):shortest=1,format=yuv420p[out]; \
[1:a]adelay=5s:all=1[a1];[0][a1]amix[a]" \
-map [out] -map [a] \
-c:v libx264 -crf 18 \
output.mp4
If your ffmpeg is outdated, you will have to use milliseconds and declare the delay for each channel, such as adelay=5000|5000.
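For reference, a sketch of the full command for such an older build with the fallback syntax, assuming the overlay audio is stereo (one delay value per channel):

ffmpeg -y -i stream1.mp4 -i stream2.mp4 \
-filter_complex "[1:v]setpts=PTS+5/TB[v]; \
[0:v][v]overlay=enable=gte(t\,5):shortest=1,format=yuv420p[out]; \
[1:a]adelay=5000|5000[a1];[0][a1]amix[a]" \
-map [out] -map [a] \
-c:v libx264 -crf 18 \
output.mp4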
I have created 4 PulseAudio null sinks with monitors and they work fine.
When I publish 4 outputs to RTMP the audio has noise and keeps stopping. With 2 outputs it works fine.
If I decrease the resolution from 1920x1080 to 1280x720 it works fine too.
I am using 4 different audio inputs and 1 video input:
SCREEN_WIDTH=1920
SCREEN_HEIGHT=1080
SCREEN_RESOLUTION=${SCREEN_WIDTH}x${SCREEN_HEIGHT}
COLOR_DEPTH=24
X_SERVER_NUM=2
VIDEO_BITRATE=3000
VIDEO_FRAMERATE=30
VIDEO_GOP=$((VIDEO_FRAMERATE))
AUDIO_BITRATE=160k
AUDIO_SAMPLERATE=44100
AUDIO_CHANNELS=1
# some code here
ffmpeg -y \
-hide_banner -loglevel error \
-nostdin \
-s ${SCREEN_RESOLUTION} \
-r ${VIDEO_FRAMERATE} \
-draw_mouse 0 \
-f x11grab \
-i ${DISPLAY} \
-f pulse -i MySink1.monitor \
-f pulse -i MySink2.monitor \
-f pulse -i MySink3.monitor \
-f pulse -i MySink4.monitor \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main \
-preset veryfast \
-minrate ${VIDEO_BITRATE} \
-maxrate ${VIDEO_BITRATE} \
-g ${VIDEO_GOP} \
-map 0 -f flv -map 1 ${RTMP_URL1} \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main \
-preset veryfast \
-minrate ${VIDEO_BITRATE} \
-maxrate ${VIDEO_BITRATE} \
-g ${VIDEO_GOP} \
-map 0 -f flv -map 2 ${RTMP_URL2} \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main \
-preset veryfast \
-minrate ${VIDEO_BITRATE} \
-maxrate ${VIDEO_BITRATE} \
-g ${VIDEO_GOP} \
-map 0 -f flv -map 3 ${RTMP_URL3} \
-c:v libx264 \
-pix_fmt yuv420p \
-profile:v main \
-preset veryfast \
-minrate ${VIDEO_BITRATE} \
-maxrate ${VIDEO_BITRATE} \
-g ${VIDEO_GOP} \
-map 0 -f flv -map 4 ${RTMP_URL4}
I guess I have a performance issue. How can I add **tee** or reuse one encoded video for all outputs?
Use the tee muxer. Simplified example:
ffmpeg \
-f x11grab -framerate 30 -video_size 1920x1080 -i :0.0 \
-f pulse -i MySink1.monitor \
-f pulse -i MySink2.monitor \
-f pulse -i MySink3.monitor \
-f pulse -i MySink4.monitor \
-map 0 -map 1 -map 2 -map 3 -map 4 \
-c:v libx264 -vf format=yuv420p -maxrate 3000k -bufsize 6000k -g 60 -c:a aac -flags +global_header \
-f tee "[select=\'v,a:0\':f=flv:onfail=ignore]${RTMP_URL1}|[select=\'v,a:1\':f=flv:onfail=ignore]${RTMP_URL2}|[select=\'v,a:2\':f=flv:onfail=ignore]${RTMP_URL3}|[select=\'v,a:3\':f=flv:onfail=ignore]${RTMP_URL4}"
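Note that with tee the per-format codec flags cannot be auto-detected, which is why -flags +global_header is set explicitly. The slaves are also not limited to RTMP; as a hedged variant, one more slave could record the first program to a local file (backup.mkv is a hypothetical name):

-f tee "[select=\'v,a:0\':f=flv:onfail=ignore]${RTMP_URL1}|[select=\'v,a:0\':f=matroska]backup.mkv"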
I have tried streaming with ffmpeg to different RTMP servers like this:
ffmpeg -re -i nameoffile.mp4 -vcodec libx264 -g 60 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv \
-f flv rtmp://rtmp.1.com/code \
-f flv rtmp://rtmp.2.com/code \
-f flv rtmp://rtmp.3.com/code \
-f flv rtmp://rtmp.4.com/code \
-f flv rtmp://rtmp.5.com/code
The problem is that only the first one is working, but not the others.
You have to set the output options for each output: -vcodec libx264 -g 60 -c:a aac -b:a 160k -ar 44100
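For example, with the options repeated before each output (first two outputs shown; the remaining ones follow the same pattern):

ffmpeg -re -i nameoffile.mp4 \
-vcodec libx264 -g 60 -c:a aac -b:a 160k -ar 44100 -f flv rtmp://rtmp.1.com/code \
-vcodec libx264 -g 60 -c:a aac -b:a 160k -ar 44100 -f flv rtmp://rtmp.2.com/code

Note that this encodes the video once per output; the tee muxer shown above avoids the duplicate encodes.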
I have two long video files. I want to cut a part from each file and merge both parts into one file.
With the command below, the problem is that there is no audio in the second part: the output contains the first part's audio, but the second part is silent.
What is the problem?
ffmpeg -f lavfi -i color=c=black \
-ss 157.824 -t 99.818 \
-i "file1.mp4" \
-ss 315.764 -t 50.308 \
-i "file2.mp4" \
-s 854x480 \
-aspect 1.779167 \
-r 25 \
-c:v libx264 \
-b:v 800k \
-c:a aac \
-strict experimental \
-b:a 128k \
-f mp4 \
-t 150.126 -async 1 \
-y "output.mp4"
Currently I am using two ffmpeg commands in my Terminal to:
1) create a video from a bunch of images:
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4
2) stream the video to a udp address:
ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
I am trying to combine both these instructions into one command line instruction, using &&, as suggested in the answer to a previous question of mine:
ffmpeg -r 60 -f image2 -s 1920x1080 -i rotated-pano_frame%05d_color_corrected_gradblend.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4 \
&& ffmpeg -re -i test.mp4 -c copy -f flv udp://127.0.0.1:48550
but am encountering an error which prevents streaming:
[flv @ 0x7fa2ba800000] video stream discovered after head already parsed.
Any thoughts on a different command-line syntax to join the two instructions, a different ffmpeg instruction (filters perhaps?), and why I am getting the error?
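As a hedged sketch (untested), the two steps can also be collapsed into one invocation: encode the image sequence once and let the tee muxer write test.mp4 while streaming to UDP at the same time. -re paces reading at the declared frame rate, and -flags +global_header is needed because tee cannot auto-detect per-format codec flags:

ffmpeg -re -framerate 60 -f image2 -s 1920x1080 \
-i rotated-pano_frame%05d_color_corrected_gradblend.jpg \
-c:v libx264 -crf 25 -pix_fmt yuv420p -flags +global_header \
-f tee "test.mp4|[f=flv]udp://127.0.0.1:48550"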
I am using ffmpeg 0.11.1 on a PandaBoard for streaming, running Linux linaro-developer 3.4.0-1-linaro-lt-omap #1~120625232503-Ubuntu (Ubuntu OS).
I want to live stream raw video from a video device or a cam to an RTMP server in MP4 format. The command used:
./ffmpeg \
-f alsa -async 1 -ac 2 -i hw:2,0 \
-f video4linux2 -i /dev/video0 \
-acodec aac -b:a 40k \
-r 50 -s 320x240 -vcodec libx264 -strict -2 -b:v 320K -pass 1 \
-f flv rtmp://...../mp4:demo101
With this command I am able to stream, but the fps varies between 8, 7, 6 and so on. Help me figure it out.
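Not a definitive fix, but two things stand out: -pass 1 is meant for two-pass file encoding and does not belong in a live stream, and the default libx264 preset may simply be too heavy for a PandaBoard. A sketch of a lighter variant, requesting the frame size and rate from the camera instead of converting in the encoder (assuming your build accepts these options):

./ffmpeg \
-f alsa -async 1 -ac 2 -i hw:2,0 \
-f video4linux2 -r 25 -s 320x240 -i /dev/video0 \
-acodec aac -strict -2 -b:a 40k \
-vcodec libx264 -preset ultrafast -tune zerolatency -b:v 320k \
-f flv rtmp://...../mp4:demo101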