I am trying to change the order of the output streams for 3 inputs (2 audio + 1 video).
This is my command:
/usr/bin/ffmpeg -async 1 \
-f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
-f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo \
-f x11grab -video_size 1920x1080 -framerate 8 -i :0.0 \
-filter_complex amix=inputs=2 \
-c:a aac -b:a 128k \
-c:v h264_nvenc -b:v 1500k -maxrate 1500k -minrate 1500k \
-override_ffserver -g 16 http://10.100.102.109:8090/feed1.ffm
This command works, but the first output stream is the audio, so my third-party app can't view the output.
This is my output:
Stream mapping:
Stream #0:0 (pcm_s16le) -> amix:input0 (graph 0)
Stream #1:0 (pcm_s16le) -> amix:input1 (graph 0)
amix (graph 0) -> Stream #0:0 (aac)
Stream #2:0 -> #0:1 (rawvideo (native) -> h264 (h264_nvenc))
Press [q] to stop, [?] for help
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000:first_pts=0.
Last message repeated 1 times
Output #0, ffm, to 'http://10.100.102.109:8090/feed1.ffm':
Metadata:
creation_time : now
encoder : Lavf57.83.100
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
encoder : Lavc57.107.100 aac
Stream #0:1: Video: h264 (h264_nvenc) (Main), bgr0, 1920x1080, q=-1--1, 1500 kb/s, 8 fps, 1000k tbn, 8 tbc
Metadata:
encoder : Lavc57.107.100 h264_nvenc
Side data:
cpb: bitrate max/min/avg: 1500000/0/1500000 buffer size: 3000000 vbv_delay: -1
**How can I reorder the output so that the video stream comes first?**
(When I run this command with one audio and one video input, the output is fine: the video is first, and the third-party app can view it.)
I've spent a lot of hours on this, please help me.
Thanks a lot.
In the absence of mapping, output streams from complex filtergraphs will be ordered before other streams. So, add a label to the filter_complex output and map in the order required.
Use
/usr/bin/ffmpeg -async 1 \
-f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
-f pulse -i alsa_input.pci-0000_00_1b.0.analog-stereo \
-f x11grab -video_size 1920x1080 -framerate 8 -i :0.0 \
-filter_complex "amix=inputs=2[a]" \
-map 2:v -map '[a]' \
-c:a aac -b:a 128k \
-c:v h264_nvenc -b:v 1500k -maxrate 1500k -minrate 1500k \
-override_ffserver -g 16 http://10.100.102.109:8090/feed1.ffm
In the terminal:
ffmpeg -f v4l2 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -s 1280x720 -r 30 -b:v 1500k -bufsize 1500k -maxrate 7000k -f flv rtmp://192.168.1.6:1935/live/test
I get:
Output #0, flv, to 'rtmp://192.168.1.6:1935/live/test':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 1280x720, q=-1--1, 1500 kb/s, 30 fps, 1k tbn, 30 tbc
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 7000000/0/1500000 buffer size: 1500000 vbv_delay: -1
frame= 115 fps=0.1 q=21.0 size= 3088kB time=00:00:17.96 bitrate=1407.9kbits/s speed=0.0141x
When I open VLC and open the network stream rtmp://192.168.1.6/live/test, it does not play; it shows no error and just keeps loading.
Thanks in advance
Try this:
rtmpdump -v -r "rtmp://192.168.1.6:1935/live/test" -o - | "vlc" -
https://rtmpdump.mplayerhq.hu/
https://en.wikipedia.org/wiki/RTMPDump
I am currently capturing video via a Blackmagic DeckLink card on macOS. My audio and video are out of sync: the audio is ahead by about a second. I suspect the video is slower on account of encoding latency. My solution is to delay the audio using the ffmpeg adelay filter. I originally added -af "adelay=1000|1000" to my command to delay the audio by 1000 ms, but this audio filter did nothing. Consequently, I tried to build a filter_complex, but this failed: my command produces so many streams that ffmpeg can't route them to the proper RTP endpoints. So what is the best way to delay the audio, and can I select which streams map to which RTP endpoints?
ffmpeg \
-format_code 23ps \
-f decklink \
-i "DeckLink HD Extreme 3" \
-filter_complex "[0:a] adelay=2s|2s [delayed]" \
-map [delayed] -map 0:v \
-r 24 \
-g 1 \
-s 1920x1080 \
-quality realtime \
-speed 8 \
-threads 8 \
-row-mt 1 \
-tile-columns 2 \
-frame-parallel 1 \
-qmin 30 \
-qmax 35 \
-b:v 2000k \
-pix_fmt yuv420p \
-c:v libvpx-vp9 \
-strict experimental \
-an -f rtp rtp://myurl.com:5004?pkt_size=1300 \
-c:a libopus \
-b:a 128k \
-vn -f rtp rtp://myurl.com:5002?pkt_size=1300
Adding a full log from running the command without any delay:
-filter_complex "[0:a] adelay=2s|2s [delayed]" \
-map [delayed] -map 0:v \
ffmpeg version N-97362-g889ad93c88 Copyright (c) 2000-2020 the FFmpeg developers
built with Apple LLVM version 9.0.0 (clang-900.0.39.2)
configuration: --prefix=/usr/local --pkg-config-flags=--static --extra-cflags='-fno-stack-check -I/Users/admin/Documents/ffmpeg_build/include -I/Users/admin/Documents/BDS/Mac/include' --extra-ldflags=-L/Users/admin/Documents/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/Users/admin/Documents/ffmpeg_build/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --enable-decklink
libavutil 56. 42.102 / 56. 42.102
libavcodec 58. 80.100 / 58. 80.100
libavformat 58. 42.100 / 58. 42.100
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 77.101 / 7. 77.101
libswscale 5. 6.101 / 5. 6.101
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
[decklink # 0x7fcfb2000000] Found Decklink mode 1920 x 1080 with rate 23.98
[decklink # 0x7fcfb2000000] Frame received (#2) - No input signal detected - Frames dropped 1
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, decklink, from 'DeckLink HD Extreme 3':
Duration: N/A, start: 0.000000, bitrate: 797002 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Stream #0:1: Video: rawvideo (UYVY / 0x59565955), uyvy422(progressive), 1920x1080, 795466 kb/s, 23.98 tbr, 1000k tbn, 1000k tbc
[decklink # 0x7fcfb2000000] Frame received (#3) - Input returned - Frames dropped 2
Stream mapping:
Stream #0:1 -> #0:0 (rawvideo (native) -> vp9 (libvpx-vp9))
Stream #0:0 -> #1:0 (pcm_s16le (native) -> opus (libopus))
Press [q] to stop, [?] for help
[libvpx-vp9 # 0x7fcfb180d200] v1.8.2
Output #0, rtp, to 'rtp://myurl.com.com:5004?pkt_size=1300':
Metadata:
encoder : Lavf58.42.100
Stream #0:0: Video: vp9 (libvpx-vp9), yuv420p, 1920x1080, q=30-35, 2000 kb/s, 24 fps, 90k tbn, 24 tbc
Metadata:
encoder : Lavc58.80.100 libvpx-vp9
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Output #1, rtp, to 'rtp://myrul.com:5002?pkt_size=1300':
Metadata:
encoder : Lavf58.42.100
Stream #1:0: Audio: opus (libopus), 48000 Hz, stereo, s16, 128 kb/s
Metadata:
encoder : Lavc58.80.100 libopus
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
t=0 0
a=tool:libavformat 58.42.100
m=video 5004 RTP/AVP 96
c=IN IP4 54.183.58.143
b=AS:2000
a=rtpmap:96 VP9/90000
m=audio 5002 RTP/AVP 97
c=IN IP4 54.183.58.143
b=AS:128
a=rtpmap:97 opus/48000/2
a=fmtp:97 sprop-stereo=1
frame= 434 fps= 24 q=0.0 size= 37063kB time=00:00:18.09 bitrate=16780.7kbits/s speed=1.01x
The -map option is positional and applies to the output URL that follows it on the command line. So the delayed audio should be mapped after the first output URL and before the second one:
ffmpeg \
-format_code 23ps \
-f decklink \
-i "DeckLink HD Extreme 3" \
-filter_complex "[0:a] adelay=2s|2s [delayed]" \
-map 0:v \
-r 24 \
-g 1 \
-s 1920x1080 \
-quality realtime \
-speed 8 \
-threads 8 \
-row-mt 1 \
-tile-columns 2 \
-frame-parallel 1 \
-qmin 30 \
-qmax 35 \
-b:v 2000k \
-pix_fmt yuv420p \
-c:v libvpx-vp9 \
-strict experimental \
-an -f rtp rtp://myurl.com:5004?pkt_size=1300 \
-map '[delayed]' \
-c:a libopus \
-b:a 128k \
-vn -f rtp rtp://myurl.com:5002?pkt_size=1300
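Schematically, each -map (and the codec options after it) applies to the next output URL on the command line, so the two-output pattern looks like this (INPUT, VIDEO_URL, and AUDIO_URL are placeholders):

```
ffmpeg -i INPUT \
  -filter_complex "[0:a] adelay=2s|2s [delayed]" \
  -map 0:v         [video options] -an -f rtp VIDEO_URL \
  -map '[delayed]' [audio options] -vn -f rtp AUDIO_URL
```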
I'm working on merging multiple audio files, and I'm using this command
(in the command below, tempTxtFile is a file listing all the audio paths):
cmd = "-f concat -safe 0 -i " + tempTxtFile + " -c copy -preset ultrafast " + filepath;
Because I'm using -c copy, this only works if the selected audio files are all MP3; if I use an MP3 and an M4A (AAC), or two M4A files, the merge fails.
So now I'm using another command, which is as follows (for 2 audio files):
cmd = "-f concat -safe 0 -i " + tempTxtFile + " -filter_complex [0:a][1:a]" + "concat=n=2:v=0:a=1[outa] -map [outa] -c:a mp3 -preset ultrafast " + filepath;
This command shows the following error when run:
Invalid file index 1 in filtergraph description [0:a][1:a]concat=n=2:v=0:a=1[outa].
This is whole log
Input #0, concat, from '/storage/emulated/0/Download/tempFile.txt':
Duration: N/A, start: 0.000000, bitrate: 117 kb/s
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 117 kb/s
Metadata:
handler_name : SoundHandler
Invalid file index 1 in filtergraph description [0:a][1:a]concat=n=2:v=0:a=1[outa].
Right now I'm stuck and don't know of a working solution.
The concat demuxer presents all the listed files as a single input (#0), so there is no input 1 for [1:a] to refer to. When codecs differ across files, feed each file as its own input and use the concat filter instead, e.g.
ffmpeg -i file1 -i file2 -i file3 -filter_complex concat=n=3:v=0:a=1 -c:a mp3 -vn out.mp3
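Since the command string is being built programmatically anyway, the per-input form can be assembled from a list of files. A minimal shell sketch (the file names here are hypothetical):

```shell
# Hypothetical audio files; substitute your real paths
files="file1.mp3 file2.m4a file3.m4a"

inputs=""
n=0
for f in $files; do
  inputs="$inputs -i $f"   # one -i per file
  n=$((n + 1))
done

# concat filter: n inputs, 0 video streams, 1 audio stream
cmd="ffmpeg$inputs -filter_complex concat=n=$n:v=0:a=1 -c:a mp3 -vn out.mp3"
echo "$cmd"
```

The same loop translates directly to the Java-style string concatenation used in the question.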
I have an MP4 file like this (same format, but longer):
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'N1.2.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2018-10-31T13:44:21.000000Z
Duration: 00:28:54.21, start: 0.000000, bitrate: 10295 kb/s
Stream #0:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 9972 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc (default)
Metadata:
creation_time : 2018-10-31T13:44:21.000000Z
handler_name : ?Mainconcept Video Media Handler
encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
Metadata:
creation_time : 2018-10-31T13:44:21.000000Z
handler_name : #Mainconcept MP4 Sound Media Handler
I also have another video file that is 3 minutes long and has no audio. What is the fastest way to encode that video so it matches my main video, and then replace the last three minutes of my original video's track with it?
In other words:
I have video A, which is 1 hour long, with the encoding shown above.
I have video B, which is 3 minutes long, has no audio, and has an arbitrary encoding.
I want video C with the same encoding and the same audio as A, but whose video track is the first 57 minutes of A + B (which is 3 minutes).
I want to do this as fast as possible, so I would like not to re-encode A.
I know how to concatenate two videos; I use this command:
ffmpeg -f concat -i files.txt -c copy res.mp4
Make the end video using the parameters of the main video:
ffmpeg -i videob.mp4 -f lavfi -i anullsrc=sample_rate=48000:channel_layout=stereo -filter_complex "[0:v]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,format=yuv420p,fps=50[v]" -map "[v]" -map 1:a -c:v libx264 -profile:v main -c:a aac -video_track_timescale 50000 -shortest videob2.mp4
Get duration of main video:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 main.mp4
Make files.txt, which is needed by the concat demuxer:
file 'main.mp4'
outpoint 3420
file 'videob2.mp4'
In this example, outpoint is the main video's duration minus the end video's duration.
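The outpoint arithmetic can be scripted. A small sketch, assuming whole-second durations already obtained from the ffprobe step above (use awk or bc if yours are fractional):

```shell
# Durations in seconds; in practice these come from ffprobe (hard-coded here)
main_dur=3600   # video A: 1 hour
end_dur=180     # video B: 3 minutes

# outpoint = main duration minus end-video duration
outpoint=$((main_dur - end_dur))

# Write the concat list the demuxer expects
printf "file 'main.mp4'\noutpoint %s\nfile 'videob2.mp4'\n" "$outpoint" > files.txt
cat files.txt
```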
Concatenate:
ffmpeg -f concat -i files.txt -i main.mp4 -map 0:v -map 1:a -c copy -shortest output.mp4
When using FFmpeg's split filter on the video track, I want to filter the audio track as well. I tested asplit, but I'm not sure where to use it in the filter chain.
When running this command:
ffmpeg -y -probesize 100M -analyzeduration 5000000 -hide_banner -i $input -i $logo \
-filter_complex "[0:a]aformat=channel_layouts=stereo,aresample=async=1000[a1];[0:v]overlay=20:20,drawtext=fontfile=$font:text='some text':fontcolor=c1ff30:fontsize=50:x=250:y=100,split=3[v1][v2][v3];[v1]setpts=PTS-STARTPTS,yadif=0:-1:0,scale=w=640:h=360:force_original_aspect_ratio=decrease:sws_dither=ed:flags=lanczos,setdar=16/9[v1];[v2]setpts=PTS-STARTPTS,yadif=0:-1:0,scale=w=1024:h=576:force_original_aspect_ratio=decrease:sws_dither=ed:flags=lanczos,setdar=16/9[v2];[v3]setpts=PTS-STARTPTS,yadif=0:-1:0,scale=w=1600:h=900:force_original_aspect_ratio=decrease:sws_dither=ed:flags=lanczos,setdar=16/9[v3]" \
-map "[v1]" -map "[a1]" -c:a libfdk_aac -ac 2 -b:a 128k -ar 48000 -c:v libx264 -crf 23 -maxrate 550k -bufsize 1100k -bsf:v h264_mp4toannexb -forced-idr 1 -sc_threshold 0 -r 25 -g 50 -keyint_min 50 -preset medium -profile:v main -level 3.1 -coder 1 -pix_fmt yuv420p -flags +loop+mv4+cgop -flags2 +local_header -movflags faststart -cmp chroma -hls_time 6 -hls_playlist_type vod /dir/1.m3u8 \
-map "[v2]" -map "[a1]" -c:a libfdk_aac -ac 2 -b:a 128k -ar 48000 -c:v libx264 -crf 23 -maxrate 1400k -bufsize 2800k -bsf:v h264_mp4toannexb -forced-idr 1 -sc_threshold 0 -r 25 -g 50 -preset medium -profile:v main -level 4 -coder 1 -pix_fmt yuv420p -flags +loop+mv4+cgop -flags2 +local_header -movflags faststart -cmp chroma -keyint_min 50 -hls_time 6 -hls_playlist_type vod /dir/2.m3u8 \
-map "[v3]" -map "[a1]" -c:a libfdk_aac -ac 2 -b:a 128k -ar 48000 -c:v libx264 -crf 23 -maxrate 3100k -bufsize 6200k -bsf:v h264_mp4toannexb -forced-idr 1 -sc_threshold 0 -r 25 -g 50 -preset medium -profile:v high -level 3.1 -coder 1 -pix_fmt yuv420p -flags +loop+mv4+cgop -flags2 +local_header -movflags faststart -cmp chroma -keyint_min 50 -hls_time 6 -hls_playlist_type vod /dir/3.m3u8
FFmpeg throws this error:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Volumes/aaa/bbb/file.mov':
Metadata:
major_brand : qt
minor_version : 512
compatible_brands: qt
encoder : Lavf58.20.100
Duration: 00:00:10.00, start: 0.000000, bitrate: 117945 kb/s
Stream #0:0(eng): Video: prores (apcn / 0x6E637061), yuv422p10le(tv, bt709, top coded first (swapped)), 1920x1080, 115636 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 10k tbn, 10k tbc (default)
Metadata:
handler_name : Telestream, LLC Telestream Media Framework - Local 99.99.999999
encoder : Apple ProRes 422
timecode : 01:25:44:05
Stream #0:1(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s (default)
Metadata:
handler_name : Telestream, LLC Telestream Media Framework - Local 99.99.999999
Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s
Metadata:
handler_name : Telestream, LLC Telestream Media Framework - Local 99.99.999999
timecode : 01:25:44:05
Input #1, png_pipe, from '/Volumes/aaa/bbb/logo.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba(pc), 1920x1080 [SAR 2835:2835 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Output with label 'a1' does not exist in any defined filter graph, or was already used elsewhere.
When I remove the audio filtering ([0:a]aformat=channel_layouts=stereo,aresample=async=1000[a1]) and map 0:a as the audio, the command runs fine.
What am I missing?
Filtergraph outputs can be used only once. You'll have to clone the audio output for multiple use.
First,
[0:a]aformat=channel_layouts=stereo,aresample=async=1000,asplit=3[a1][a2][a3]
and then map [a1], [a2], [a3] as required.
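Applied to the command above, only the audio chain and the -map options change; the video chains and encoder options are unchanged (elided here with "..."):

```
-filter_complex "[0:a]aformat=channel_layouts=stereo,aresample=async=1000,asplit=3[a1][a2][a3];[0:v]overlay=20:20,...,split=3[v1][v2][v3];..." \
-map "[v1]" -map "[a1]" ... /dir/1.m3u8 \
-map "[v2]" -map "[a2]" ... /dir/2.m3u8 \
-map "[v3]" -map "[a3]" ... /dir/3.m3u8
```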