ffmpeg mosaic keep audio of all input videos - audio

I have five videos and want to combine them into one big "strip" with all five videos next to each other.
My code so far (following this example):
ffmpeg
-i s-0-h-0.mp4 -i s-1-h-0.mp4 -i s-2-h-0.mp4 -i s-3-h-0.mp4 -i s-4-h-0.mp4
-filter_complex "
nullsrc=size=4240x478 [base];
[0:v] setpts=PTS-STARTPTS, scale=848x478 [vid1];
[1:v] setpts=PTS-STARTPTS, scale=848x478 [vid2];
[2:v] setpts=PTS-STARTPTS, scale=848x478 [vid3];
[3:v] setpts=PTS-STARTPTS, scale=848x478 [vid4];
[4:v] setpts=PTS-STARTPTS, scale=848x478 [vid5];
[base][vid1] overlay=shortest=1 [tmp1];
[tmp1][vid2] overlay=shortest=1:x=848 [tmp2];
[tmp2][vid3] overlay=shortest=1:x=1696 [tmp3];
[tmp3][vid4] overlay=shortest=1:x=2544 [tmp4];
[tmp4][vid5] overlay=shortest=1:x=3392
"
-c:v libx264 output.mkv
However, this only includes the audio from input 1.
How do I keep the audio of all five input videos?

Add [0:a][1:a][2:a][3:a][4:a]amix=inputs=5[aout] to the end of the filtergraph and map the result with -map "[aout]". Give the last overlay an output label too (e.g. [vout]) and add -map "[vout]" so both the composited video and the mixed audio end up in the output.
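As a rough sketch of the complete command (the [vout]/[aout] labels and the reuse of the question's filenames are assumptions, not from the original answer), the filtergraph with the audio mix could be assembled like this:

```python
# Sketch: the question's five-way mosaic command plus a 5-input audio mix.
# Labels [vout]/[aout] are illustrative; encoder flags mirror the question.

inputs = [f"s-{i}-h-0.mp4" for i in range(5)]

# Per-input video chains: reset timestamps and scale each tile.
video_chains = ";".join(
    f"[{i}:v] setpts=PTS-STARTPTS, scale=848x478 [vid{i + 1}]" for i in range(5)
)

# Chain of overlays placing the tiles side by side on the 4240x478 canvas.
overlays = "[base][vid1] overlay=shortest=1 [tmp1];" + ";".join(
    f"[tmp{i}][vid{i + 1}] overlay=shortest=1:x={848 * i} "
    f"[{'tmp' + str(i + 1) if i < 4 else 'vout'}]"
    for i in range(1, 5)
)

# Mix all five audio streams into one labelled output.
audio_mix = "[0:a][1:a][2:a][3:a][4:a]amix=inputs=5[aout]"

filtergraph = f"nullsrc=size=4240x478 [base];{video_chains};{overlays};{audio_mix}"

cmd = ["ffmpeg"]
for f in inputs:
    cmd += ["-i", f]
cmd += ["-filter_complex", filtergraph,
        "-map", "[vout]", "-map", "[aout]",
        "-c:v", "libx264", "output.mkv"]
```

By default amix lowers the level of each input to avoid clipping; if the result is too quiet, a volume filter after amix can compensate.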

Related

FFMpeg merge video and audio at specific time into another video

I have a standard mp4 (audio + video)
I am trying to merge a 1.4 second mini mp4 clip into this track, replacing the video for the length of the mini clip but merging the audios together at a specific time
Would anyone know how to do this using ffmpeg?
I've tried quite a few different filters, however can't seem to get what I want
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <--->
standard.mp4 A <=========================>
Example to show miniclip.mp4 (1.4 seconds long) at timestamp 5.
ffmpeg -i main.mp4 -i miniclip.mp4 -filter_complex "[0:v]drawbox=t=fill:enable='between(t,5,6.4)'[bg];[1:v]setpts=PTS+5/TB[fg];[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;[1:a]adelay=5s:all=1[a1];[0:a][a1]amix" output.mp4
drawbox covers the main video with black. It is only needed if miniclip.mp4 has a smaller width or height than main.mp4; you can omit it if miniclip.mp4's width and height are ≥ those of main.mp4. Alternatively, you could use the scale2ref filter to give miniclip.mp4 the same width and height as main.mp4.
setpts adds a 5 second offset to the miniclip.mp4 video.
overlay overlays miniclip.mp4 video over main.mp4 video.
adelay adds a 5 second delay to miniclip.mp4 audio.
amix mixes miniclip.mp4 and main.mp4 audio.
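All the timing values in that command derive from the overlay start time (5 s) and the clip length (1.4 s). A small helper (hypothetical; the function name and return format are illustrative) makes the arithmetic explicit:

```python
# Sketch: derive the drawbox/setpts/adelay parameters used above from the
# overlay start time and clip length. Helper name and format are assumptions.

def overlay_params(start_s, clip_len_s):
    """Parameters for overlaying a clip_len_s-second clip at start_s seconds."""
    end_s = start_s + clip_len_s
    return {
        "drawbox_enable": f"between(t,{start_s:g},{end_s:g})",
        "setpts": f"PTS+{start_s:g}/TB",
        "adelay": f"{start_s:g}s",  # adelay also accepts milliseconds, e.g. 5000
    }

params = overlay_params(5, 1.4)
# matches the command above: enable='between(t,5,6.4)', setpts=PTS+5/TB, adelay=5s
```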
More info
See FFmpeg Filter Documentation for info on each filter.
How to get video duration
Edited (now I understood the question):
First, get 1.4 seconds of standard.mp4 and of audio1.mp3.
-ss sets the start point for extracting the small video, which will be 1.4 seconds long (with the -t option you specify the duration, in this case 1.4 seconds). In summary: cut 1.4 seconds of video starting at minute 5.
-an means audio is not copied, because you want to add new audio from audio1.mp3.
small_only_video.mp4:
ffmpeg -ss 00:05:00 -i standard.mp4 -t 1.4 -map 0:v -c copy -an small_only_video.mp4
small_only_audio.mp3:
ffmpeg -ss 00:05:00 -i audio1.mp3 -t 1.4 -c copy small_only_audio.mp3
Now you can create small_clip_audiovideo.mp4:
ffmpeg -i small_only_video.mp4 -c:a mp3 -i small_only_audio.mp3 -c copy -map 0:v -map 1:a:0 -disposition:a:0 default -disposition:a:1 default -strict -2 -sn -dn -map_metadata -1 -map_chapters -1 -movflags faststart small_clip_audiovideo.mp4
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <------->
standard.mp4 A <=============================>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
standard.mp4 is about 10 seconds long and has audio and video.
miniclip.mp4 is about 3 seconds long and has video and audio.
Check both files (ffmpeg -i standard.mp4 and ffmpeg -i miniclip.mp4): do they use the same video and audio codecs?
If standard.mp4 and miniclip.mp4 do not use the same audio and video codecs, you will have to re-encode before continuing if you want a good result.
Cut seconds 0 to 4 of standard.mp4 into 01.part_project.mp4:
ffmpeg -ss 00:00:00 -i standard.mp4 -t 4 -c copy 01.part_project.mp4
and seconds 7 to 10 into 03.part_project.mp4:
ffmpeg -ss 00:00:07.000 -i standard.mp4 -t 3.0 -c copy 03.part_project.mp4
Rename, or create a copy of, miniclip.mp4 as 02.part_project.mp4:
cp miniclip.mp4 02.part_project.mp4
(The part from second 4 to second 7 of standard.mp4 will be used, as santadard_part2_audio.mp4, if you choose OPTION 2, which copies only the audio.)
OPTION 1: CONCATENATE the 3 video parts.
Make a folder "option1" and copy 01.part_project.mp4, 02.part_project.mp4 and 03.part_project.mp4 into it:
mkdir option1 && cp 01.part_project.mp4 02.part_project.mp4 03.part_project.mp4 ./option1 && cd ./option1
Now concatenate 01.part_project.mp4 + 02.part_project.mp4 + 03.part_project.mp4 into a single file, fin_option1.mp4:
ffmpeg -f concat -safe 0 -i <(for f in ./*.mp4; do echo "file '$PWD/$f'"; done) -c copy fin_option1.mp4
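If your shell has no process substitution (the <(...) part), you can write the concat list to a regular file first; a hypothetical Python equivalent of the bash loop:

```python
# Sketch: write the concat demuxer list to a file instead of using bash
# process substitution. The list file name "concat_list.txt" is arbitrary.
parts = ["01.part_project.mp4", "02.part_project.mp4", "03.part_project.mp4"]
concat_list = "".join(f"file '{p}'\n" for p in parts)

with open("concat_list.txt", "w") as fh:
    fh.write(concat_list)

# Then run: ffmpeg -f concat -safe 0 -i concat_list.txt -c copy fin_option1.mp4
```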
V <------->
miniclip.mp4 A <=======>
V <-----------> ↓ + ↓ <------->
standard.mp4 A <============XXXXXXXXX========>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
OPTION 2: CONCATENATE the 3 video parts as well, but first MIX
the audio of miniclip.mp4 with santadard_part2_audio.mp4.
Get the audio stream from santadard_part2_audio.mp4 and the
audio-only file from miniclip.mp4:
ffmpeg -i santadard_part2_audio.mp4 -map 0:a -c copy -vn -strict -2 mix_audio_santadad.mp4
ffmpeg -i miniclip.mp4 -map 0:a -c copy -vn -strict -2 mix_audio_miniclip.mp4
Mix the audios** into one and put the video from miniclip.mp4 back:
ffmpeg -i mix_audio_miniclip.mp4 -i mix_audio_santadad.mp4 -filter_complex amix=inputs=2:duration=longest -strict -2 audio_mixed_miniclip.mp4
Get only the video from miniclip.mp4:
ffmpeg -i miniclip.mp4 -c copy -an miniclip_video.mp4
And rebuild the miniclip, now with the mixed audio; I think this is the solution you are looking for:
ffmpeg -i miniclip_video.mp4 -i audio_mixed_miniclip.mp4 -c copy -map 0:v -map 1:a:0 -disposition:a:0 default -disposition:a:1 default -strict -2 -sn -dn -map_metadata -1 -map_chapters -1 -movflags faststart 02.part_project_OPTION2.mp4
santadard_part2_audio.mp4
+
audio_miniclip.mp4
V <------->
miniclip.mp4 A <MMMMMMMM> (audio miniclip mixed with standard.mp4)
V <-----------> ↓ + ↓ <------->
standard.mp4 A <============ ========>
|--|--|--|--|--|--|--|--|--|--|
0 1 2 3 4 5 6 7 8 9 10
Make a folder "option2" and copy 01.part_project.mp4, 02.part_project_OPTION2.mp4 and 03.part_project.mp4 into it:
mkdir option2 && cp 01.part_project.mp4 02.part_project_OPTION2.mp4 03.part_project.mp4 ./option2 && cd ./option2
ffmpeg -f concat -safe 0 -i <(for f in ./*.mp4; do echo "file '$PWD/$f'"; done) -c copy fin_option2.mp4
NOTES
** You can apply many other audio manipulations: https://trac.ffmpeg.org/wiki/AudioChannelManipulation

Using ffmpeg on Ubuntu, how can the audio and video from an audio-video USB capture device be recorded?

I have a USB audio-video capture device, something used to digitize video cassettes. I want to record both the video and audio from the device to a video file that has dimensions 720x576 and video codec H.264 and good audio quality.
I am able to record video from the device using ffmpeg and I am able to see video from the device using MPlayer. I am able also to see that audio is being delivered from the device to the computer by looking at Input tab of the Sound Preferences window or by recording the audio using Audacity, however the audio gets delivered from the device apparently only when the video is being accessed using ffmpeg or MPlayer.
I have tried to get ffmpeg to record the audio and I have tried to get MPlayer to play the audio and my efforts have not been successful.
The device is "Pinnacle Dazzle DVC 90/100/101" (as returned by v4l2-ctl --list-devices). The sound cards listing shows it as "DVC100":
$ cat /proc/asound/cards
0 [PCH ]: HDA-Intel - HDA Intel PCH
HDA Intel PCH at 0x601d118000 irq 171
1 [DVC100 ]: USB-Audio - DVC100
Pinnacle Systems GmbH DVC100 at usb-0000:00:14.0-4, high speed
29 [ThinkPadEC ]: ThinkPad EC - ThinkPad Console Audio Control
ThinkPad Console Audio Control at EC reg 0x30, fw N2LHT33W
The PulseAudio listing for the device is as follows:
$ pactl list cards short
0 alsa_card.pci-0000_00_1f.3 module-alsa-card.c
14 alsa_card.usb-Pinnacle_Systems_GmbH_DVC100-01 module-alsa-card.c
The following ffmpeg command successfully records video, but records severely distorted, broken and out-of-sync audio:
ffmpeg -y -f rawvideo -f alsa -thread_queue_size 2048 -ar 48000 -i hw:0 \
-c:a aac -video_size 720x576 -pixel_format uyvy422 -i /dev/video2 out.mp4
The following MPlayer command successfully displays the video but does not play the audio:
mplayer -tv driver=v4l2:norm=PAL:device=/dev/video2:width=720:height=576 \
-ao alsa:device=hw=1.0 -vf pp=lb tv://
Now, when the above MPlayer command is running (not the ffmpeg command) and displaying the input video in a window, Audacity can be opened and set recording audio, and it records the audio from the device clearly and in good quality. While Audacity is doing this, the input device is listed in pavucontrol as "Dazzle DVC Audio Device Analogue Stereo". Equivalently, arecord can be used also to record the audio using the following command (with output shown):
$ arecord -vv -D plughw:DVC100 -fdat out.wav
Recording WAVE 'out.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo
Plug PCM: Hardware PCM card 1 'DVC100' device 0 subdevice 0
Its setup is:
stream : CAPTURE
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 2
rate : 48000
exact rate : 48000 (48000/1)
msbits : 16
buffer_size : 24000
period_size : 6000
period_time : 125000
tstamp_mode : NONE
tstamp_type : MONOTONIC
period_step : 1
avail_min : 6000
period_event : 0
start_threshold : 1
stop_threshold : 24000
silence_threshold: 0
silence_size : 0
boundary : 6755399441055744000
appl_ptr : 0
hw_ptr : 0
Looking at the output of arecord -L, I tried a variety of audio device input names with ffmpeg and none of them seemed to work. So, for example, I tried commands like the following:
ffmpeg -y -f rawvideo -f alsa -i plughw:DVC100 \
-video_size 720x576 -pixel_format uyvy422 -i /dev/video2 out.mp4
And tried the following audio device names:
plughw:DVC100
plughw:CARD=DVC100,DEV=0
hw:CARD=DVC100,DEV=0
plughw:CARD=DVC100
sysdefault:CARD=DVC100
iec958:CARD=DVC100,DEV=0
dsnoop:CARD=DVC100,DEV=0
So, how might I get ffmpeg to record the audio successfully to the video file? Is there some alternative approach to this problem?
EDIT: The relevant output from the command pactl list sources is as follows:
Source #20
State: SUSPENDED
Name: alsa_input.usb-Pinnacle_Systems_GmbH_DVC100-01.analog-stereo
Description: Dazzle DVC100 Audio Device Analogue Stereo
Driver: module-alsa-card.c
Sample Specification: s16le 2ch 48000Hz
Channel Map: front-left,front-right
Owner Module: 45
Mute: no
Volume: front-left: 99957 / 153% / 11.00 dB, front-right: 99957 / 153% / 11.00 dB
balance 0.00
Base Volume: 35466 / 54% / -16.00 dB
Monitor of Sink: n/a
Latency: 0 usec, configured 0 usec
Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY
Properties:
alsa.resolution_bits = "16"
device.api = "alsa"
device.class = "sound"
alsa.class = "generic"
alsa.subclass = "generic-mix"
alsa.name = "USB Audio"
alsa.id = "USB Audio"
alsa.subdevice = "0"
alsa.subdevice_name = "subdevice #0"
alsa.device = "0"
alsa.card = "1"
alsa.card_name = "DVC100"
alsa.long_card_name = "Pinnacle Systems GmbH DVC100 at usb-0000:00:14.0-4, high speed"
alsa.driver_name = "snd_usb_audio"
device.bus_path = "pci-0000:00:14.0-usb-0:4:1.1"
sysfs.path = "/devices/pci0000:00/0000:00:14.0/usb1/1-4/1-4:1.1/sound/card1"
udev.id = "usb-Pinnacle_Systems_GmbH_DVC100-01"
device.bus = "usb"
device.vendor.id = "2304"
device.vendor.name = "Pinnacle Systems, Inc."
device.product.id = "021a"
device.product.name = "Dazzle DVC100 Audio Device"
device.serial = "Pinnacle_Systems_GmbH_DVC100"
device.string = "front:1"
device.buffering.buffer_size = "352800"
device.buffering.fragment_size = "176400"
device.access_mode = "mmap+timer"
device.profile.name = "analog-stereo"
device.profile.description = "Analogue Stereo"
device.description = "Dazzle DVC100 Audio Device Analogue Stereo"
alsa.mixer_name = "USB Mixer"
alsa.components = "USB2304:021a"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-usb"
Ports:
analog-input-linein: Line In (priority: 8100)
Active Port: analog-input-linein
Formats:
pcm
I tested the name from this with ffmpeg (version 4.3.1, compiled with --enable-libpulse) in the following way:
ffmpeg -y -f video4linux2 -f pulse \
-i alsa_input.usb-Pinnacle_Systems_GmbH_DVC100-01.analog-stereo \
-video_size 720x576 -pixel_format uyvy422 -i /dev/video2 out.mp4
Unfortunately this hasn't worked.
I also use a Dazzle DVC100 to capture video, and -f alsa -i hw:1 works well for me. For instance:
ffmpeg -f alsa -i hw:1 -i /dev/video2 \
-codec:v ffv1 -codec:a pcm_s16le raw.mkv
The number of the device can be found using:
cat /proc/asound/cards
Use the number from the first column after the hw: prefix. In your case it is hw:1.
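The same lookup can be done programmatically; a sketch parsing the /proc/asound/cards format shown in the question (the helper name is made up):

```python
# Sketch: find the ALSA card number for a card id (e.g. "DVC100") from a
# /proc/asound/cards listing, to build the "hw:N" device string.
import re

def alsa_card_number(cards_text, card_id):
    # Lines look like: " 1 [DVC100         ]: USB-Audio - DVC100"
    for line in cards_text.splitlines():
        m = re.match(r"\s*(\d+)\s+\[(\S+)\s*\]", line)
        if m and m.group(2) == card_id:
            return int(m.group(1))
    return None

cards = """\
 0 [PCH            ]: HDA-Intel - HDA Intel PCH
 1 [DVC100         ]: USB-Audio - DVC100
29 [ThinkPadEC     ]: ThinkPad EC - ThinkPad Console Audio Control
"""
device = f"hw:{alsa_card_number(cards, 'DVC100')}"  # → "hw:1"
```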
Keep in mind that FFmpeg fails to open the device when a PulseAudio client has it open. It happens to me when I am running pavucontrol at the same time, for example. In practice I need to wait about half a minute after closing pavucontrol before running FFmpeg successfully.
You can check the output of FFmpeg in real time using:
ffmpeg -f alsa -i hw:1 -i /dev/video2 \
-codec:v ffv1 -codec:a pcm_s16le -f matroska - | ffplay -
You can find more information on capturing video using Dazzle DVC100 in my post.

Piping pi's opencv video to ffmpeg for Youtube streaming

This is a small python3 script that reads off the picam using OpenCV:
#picamStream.py
import sys, os
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (960, 540)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=(960, 540))

# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array
    # ---------------------------------
    # .
    # Opencv image processing goes here
    # .
    # ---------------------------------
    os.write(1, image.tostring())
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)
# end
And I am trying to pipe it to ffmpeg to Youtube stream
My understanding is that I need to combine the two commands below to somehow come up with a new ffmpeg command.
Piping picam live video to ffmpeg for Youtube streaming.
raspivid -o - -t 0 -vf -hf -w 960 -h 540 -fps 25 -b 1000000 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[STREAMKEY]
Piping OPENCV raw video to ffmpeg for mp4 file.
python3 picamStream.py | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 960x540 -framerate 30 -i - foo.mp4
So far I've had no luck. Can anyone help me with this?
This is the program I use in raspberry pi.
#main.py
import subprocess
import cv2
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
command = ['ffmpeg',
'-f', 'rawvideo',
'-pix_fmt', 'bgr24',
'-s','640x480',
'-i','-',
'-ar', '44100',
'-ac', '2',
'-acodec', 'pcm_s16le',
'-f', 's16le',
'-ac', '2',
'-i','/dev/zero',
'-acodec','aac',
'-ab','128k',
'-strict','experimental',
'-vcodec','h264',
'-pix_fmt','yuv420p',
'-g', '50',
'-vb','1000k',
'-profile:v', 'baseline',
'-preset', 'ultrafast',
'-r', '30',
'-f', 'flv',
'rtmp://a.rtmp.youtube.com/live2/[STREAMKEY]']
pipe = subprocess.Popen(command, stdin=subprocess.PIPE)
while True:
_, frame = cap.read()
pipe.stdin.write(frame.tostring())
pipe.kill()
cap.release()
Youtube needs an audio source, so use -i /dev/zero.
I hope it helps you.
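An alternative to reading silence from /dev/zero is ffmpeg's built-in anullsrc audio source; a hypothetical variant of the audio-related arguments in the command above:

```python
# Sketch: replace the '-f s16le ... -i /dev/zero' silent-audio input with the
# lavfi anullsrc source. Only the audio-related arguments are shown.
silent_audio_input = ["-f", "lavfi",
                      "-i", "anullsrc=channel_layout=stereo:sample_rate=44100"]
audio_encoding = ["-acodec", "aac", "-ab", "128k"]
```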

ffmpeg not resize video with dvd format

I have a video and an audio file. I'm trying to join them and cut out a piece of the video, and it's working:
ffmpeg -ss 0:0:1.950 -i "video.avi" -ss 0:0:1.950 -i "audio.mp3" -target pal-dvd -bufsize 9175040 -muxrate 50400000 -acodec ac3 -ac 2 -ab 128k -ar 44100 -t 0:0:5.997 -y "output.mpg"
The problem is when I try to resize the video using the -vf filter, for example:
ffmpeg -ss 0:0:1.950 -i "video.avi" -ss 0:0:1.950 -i "audio.mp3" -vf scale="1024:420" -target pal-dvd -bufsize 9175040 -muxrate 50400000 -acodec ac3 -ac 2 -ab 128k -ar 44100 -t 0:0:5.997 -y "output.mpg"
It doesn't work because of the argument -target pal-dvd. If I remove this argument, the video resizes but doesn't keep the quality I want.
-target pal-dvd is equal to -c:v mpeg2video -c:a ac3 -f dvd -s 720x576 -r 25 -pix_fmt yuv420p -g 15 -b:v 6000000 -maxrate:v 9000000 -minrate:v 0 -bufsize:v 1835008 -packetsize 2048 -muxrate 10080000 -b:a 448000 -ar 48000. Your other options override these defaults, so you can simply use these options directly and remove the -s 720x576 and use your own size instead.
I'm not sure why you want to resize to 1024x420 and then use -target pal-dvd, but this option implies additional options. From ffmpeg_opt.c:
} else if (!strcmp(arg, "dvd")) {
opt_video_codec(o, "c:v", "mpeg2video");
opt_audio_codec(o, "c:a", "ac3");
parse_option(o, "f", "dvd", options);
parse_option(o, "s", norm == PAL ? "720x576" : "720x480", options);
parse_option(o, "r", frame_rates[norm], options);
parse_option(o, "pix_fmt", "yuv420p", options);
opt_default(NULL, "g", norm == PAL ? "15" : "18");
opt_default(NULL, "b:v", "6000000");
opt_default(NULL, "maxrate:v", "9000000");
opt_default(NULL, "minrate:v", "0"); // 1500000;
opt_default(NULL, "bufsize:v", "1835008"); // 224*1024*8;
opt_default(NULL, "packetsize", "2048"); // from www.mpucoder.com: DVD sectors contain 2048 bytes of data, this is also the size of one pack.
opt_default(NULL, "muxrate", "10080000"); // from mplex project: data_rate = 1260000. mux_rate = data_rate * 8
opt_default(NULL, "b:a", "448000");
parse_option(o, "ar", "48000", options);
Also, option placement matters. If you want to resize and use -target then place the filtering after -target. Note that this will probably resize twice.
Or omit -target and manually declare each option and modify them to your desired specifications.
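Following that last suggestion, here is a sketch of the expanded option list (values copied from the ffmpeg_opt.c excerpt above) with the implied -s 720x576 replaced by a custom scale filter; the function name is made up:

```python
# Sketch: the options -target pal-dvd implies (per the ffmpeg_opt.c excerpt),
# with -s 720x576 swapped for a user-chosen scale filter.
def pal_dvd_args(scale="1024:420"):
    return [
        "-c:v", "mpeg2video", "-c:a", "ac3", "-f", "dvd",
        "-vf", f"scale={scale}",  # replaces the implied -s 720x576
        "-r", "25", "-pix_fmt", "yuv420p", "-g", "15",
        "-b:v", "6000000", "-maxrate:v", "9000000", "-minrate:v", "0",
        "-bufsize:v", "1835008", "-packetsize", "2048",
        "-muxrate", "10080000", "-b:a", "448000", "-ar", "48000",
    ]

args = pal_dvd_args()
```

Note that a 1024x420 frame is not DVD-compliant, so hardware players expecting strict PAL DVD video may reject the result.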

merge png with mp4 ffmpeg missing audio

I use ffmpeg to merge an mp4 and a png. I have tried two ways.
First command:
String cmd = "-y -i " + in.mp4 + " -i " + in.png + " -filter_complex [0:v][1:v]overlay=0:0[out] -preset veryfast -map [out] -map 1:0 -map 0:0 -codec:a copy " + out.mp4;
The output file is missing audio.
Second command:
String cmd = "-y -i " + in.mp4 + " -i " + in.png + " -filter_complex [0:v][1:v]overlay=0:0[out] -preset veryfast -map [out] -map 0:a -codec:a copy " + out.mp4;
=> There is audio, but some mp4 files cannot be merged with the png file.
Log: Stream map '0:a' matches no streams.
What is my command missing here?
First, you need to use ffmpeg to check the mp4's stream info. Then you can decide whether to include 0:a in the command or not:
ffmpeg.execute(("-i " + filepath).split(" "), new ExecuteBinaryResponseHandler() {
    boolean hasAudio = false;

    @Override
    public void onProgress(String s) {
        if (s.matches("^\\s+Stream.+Audio.+")) {
            hasAudio = true;
        }
    }

    @Override
    public void onFinish() {
    }
});
You can do this with one command:
String cmd = "-y -i " + in.mp4 + " -i " + in.png +
    " -filter_complex [0:v][1:v]overlay=0:0[out] -preset veryfast" +
    " -map [out] -map 0:a? -codec:a copy " + out.mp4;
The ? tells ffmpeg to only map the stream if it exists.
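For completeness, a sketch assembling that single command as an argument list (placeholder filenames):

```python
# Sketch: the one-command variant with the optional audio map. The trailing
# "?" makes -map 0:a a no-op when the input has no audio stream.
def build_cmd(video_in, png_in, out):
    return ["ffmpeg", "-y", "-i", video_in, "-i", png_in,
            "-filter_complex", "[0:v][1:v]overlay=0:0[out]",
            "-preset", "veryfast",
            "-map", "[out]", "-map", "0:a?",
            "-codec:a", "copy", out]

cmd = build_cmd("in.mp4", "in.png", "out.mp4")
```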