Using FFMPEG to split a 16 channel audio input source into 4 separate 4 channel audio feeds for streaming

I hope someone can help.
I am currently trying to split a 16-channel Dante audio feed from a separate machine into 4 different audio streams that I can then transmit via RTMP to Wowza for MPEG-DASH encoding. At present I am just trying to split them into files; I will add the RTMP streaming later.
The biggest issue I am encountering at the moment is that FFmpeg returns this error:
Filter channelsplit:WR has an unconnected output
Here is my current command:
ffmpeg -f dshow -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelsplit=channel_layout=hexadecagonal[FL][FR][FC][BL][BR][BC][SL][SR][TFL][TFC][TFR][TBL][TBC][TBR][WL][WR]" -map "[FL][FR][FC][BL]" 1-4.wav -map "[BR][BC][SL][SR]" 5-8.wav -map "[TFL][TFC][TFR][TBL]" 9-12.wav -map "[TBC][TBR][WL][WR]" 13-16.wav
and here is the full FFmpeg output:
ffmpeg version git-2019-12-26-b0d0d7e Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20191125
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 37.100 / 56. 37.100
libavcodec 58. 65.100 / 58. 65.100
libavformat 58. 35.101 / 58. 35.101
libavdevice 58. 9.101 / 58. 9.101
libavfilter 7. 69.101 / 7. 69.101
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, dshow, from 'audio=Dante Via Receive (Dante Via)':
Duration: N/A, start: 103082.790000, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
File '1-4.wav' already exists. Overwrite? [y/N] y
File '5-8.wav' already exists. Overwrite? [y/N] y
File '9-12.wav' already exists. Overwrite? [y/N] y
File '13-16.wav' already exists. Overwrite? [y/N] y
Filter channelsplit:WR has an unconnected output
I'm also seeing FFmpeg guess that the input is stereo, which is incorrect, but I'm having trouble figuring out how to define the input stream as 16 channels of audio.
Any help with this would be greatly received.
Cheers
M

Try adding the -channels 16 dshow input option.
Filter output labels can't be combined in -map, so do all the mixing with filters and give each -map a single label.
channelsplit only outputs each channel as an individual stream; it does not mix multiple channels into a single stream, so channelmap is used instead:
ffmpeg -f dshow -channels 16 -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelmap=0|1|2|3[1-4];[0:a]channelmap=4|5|6|7[5-8];[0:a]channelmap=8|9|10|11[9-12];[0:a]channelmap=12|13|14|15[13-16]" -map "[1-4]" 1-4.wav -map "[5-8]" 5-8.wav -map "[9-12]" 9-12.wav -map "[13-16]" 13-16.wav
I don't have dshow so I couldn't test this.
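For the later RTMP step, the same filtergraph should carry over, with each -map pointing at a Wowza URL instead of a WAV file. A rough, untested sketch with hypothetical URLs (the rtmp://wowza.example.com/... addresses are placeholders), encoding each feed as AAC, which is the usual audio codec for RTMP; FLV's multichannel audio support is limited, so verify with Wowza that a 4-channel AAC track actually comes through:
ffmpeg -f dshow -channels 16 -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelmap=0|1|2|3[1-4];[0:a]channelmap=4|5|6|7[5-8];[0:a]channelmap=8|9|10|11[9-12];[0:a]channelmap=12|13|14|15[13-16]" -map "[1-4]" -c:a aac -f flv rtmp://wowza.example.com/live/feed1 -map "[5-8]" -c:a aac -f flv rtmp://wowza.example.com/live/feed2 -map "[9-12]" -c:a aac -f flv rtmp://wowza.example.com/live/feed3 -map "[13-16]" -c:a aac -f flv rtmp://wowza.example.com/live/feed4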

Related

Need help using ffmpeg to "concat" multiple audio files (webm, m4a) into one longer MP3

I have done some searching (for several hours) and tried to manipulate many examples to work for me, but I still keep coming up empty here.
I am using Linux Mint 19 with ffmpeg installed. I have a folder with several audio files. The majority of these are "webm" (with no video) and there are a few "m4a". I am trying to make one long mp3 file from the audio in all of these strung together from start to finish.
Let's say, for the sake of argument, that my directory has the following files:
audio file a.webm
audio file b.webm
audio file c.m4a
audio file d.webm
I found an example online where someone creates a file called "mylist.txt" with this bit of code:
# with a bash for loop
for f in ./*.*; do echo "file '$f'" >> mylist.txt; done
# or with printf
printf "file '%s'\n" ./*.* > mylist.txt
this generated a text file with the following type of content:
file './audio file a.webm'
file './audio file b.webm'
file './audio file c.m4a'
file './audio file d.webm'
First, I believe the "./" is causing a problem: I don't see it in other examples online, and I am not sure why my script generates it this way. Second, I have tried to concatenate this with ffmpeg, but I'm not sure which method is the best option. I found some documentation here:
https://trac.ffmpeg.org/wiki/Concatenate
however, this example applies specifically to video.
Can anyone lead me in the right direction?
EDIT:
I tried the solution below with "mylist.txt" as the input, and I am getting an error:
user#machine/TEMP$ ffmpeg -i mylist.txt -filter_complex "[0:a][1:a][2:a][3:a]concat=n=20:a=1:v=0[a]" -map "[a]" output.mp3
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, tty, from 'mylist.txt':
Duration: 00:00:00.40, bitrate: 47 kb/s
Stream #0:0: Video: ansi, pal8, 640x400, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream specifier ':a' in filtergraph description [0:a][1:a][2:a][3:a]concat=n=20:a=1:v=0[a] matches no streams.
The concat demuxer requires inputs that all have the same attributes; its documentation states, "All files must have the same streams (same codecs, same time base, etc.)" It is good when you are trying to avoid re-encoding, but that is not possible with inputs of various formats.
You are providing inputs with arbitrary attributes, so use the concat filter instead:
ffmpeg -i a.webm -i b.webm -i c.m4a -i d.webm -filter_complex "[0:a][1:a][2:a][3:a]concat=n=4:a=1:v=0[a]" -map "[a]" output.mp3
Note from the concat filter documentation: "The filtering system will automatically select a common sample format, sample rate, and channel layout for audio streams."
If you want to manually select the sample rate and channel layout, so you know exactly what you will get, add the aformat filter:
ffmpeg -i a.webm -i b.webm -i c.m4a -i d.webm -filter_complex "[0:a]aformat=r=44100:cl=stereo[a0];[1:a]aformat=r=44100:cl=stereo[a1];[2:a]aformat=r=44100:cl=stereo[a2];[3:a]aformat=r=44100:cl=stereo[a3];[a0][a1][a2][a3]concat=n=4:a=1:v=0[a]" -map "[a]" output.mp3
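If there are too many files to type each -i by hand, the inputs and filter labels can be generated in a shell loop. A minimal sketch, assuming bash, that the directory contains only the audio files, and that alphabetical glob order is the order you want:
# Build one "-i file" argument pair and one "[N:a]" label per audio file,
# then run a single concat filter over all of them.
inputs=()
labels=""
n=0
for f in ./*.*; do
  inputs+=(-i "$f")
  labels+="[$n:a]"
  n=$((n+1))
done
# n inputs, audio only; write the MP3 outside the folder so a re-run doesn't pick it up
ffmpeg "${inputs[@]}" -filter_complex "${labels}concat=n=$n:a=1:v=0[a]" -map "[a]" ../output.mp3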

FFmpeg ignoring output pixel format

I am using ffmpeg 4.2.2 on an Ubuntu 20.04 machine to clone the video stream of a USB webcam so that multiple applications can use the same feed. To achieve this, I simply clone to a v4l2 loopback device:
ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 /dev/video1
So far, this works reasonably well. I am able to successfully access /dev/video1 which presents the same feed as /dev/video0.
Note: to make this work, you need to ensure that the v4l2loopback kernel module is loaded:
modprobe v4l2loopback devices=1
Next, I'd like to convert the pixel format of the dummy device, since the application accessing it can only handle yuv422p or RGB, whereas my source device /dev/video0 provides yuv420p. I thought this would be a simple task that could be handled by giving ffmpeg an additional -pix_fmt argument on the output device, like so:
ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 -pix_fmt yuv422p /dev/video1
ffmpeg starts cloning the stream without any warnings or errors, but it still outputs yuv420p:
joel#joel-ubuntu:~$ ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 -pix_fmt yuv422p /dev/video1
ffmpeg version 4.2.2-1ubuntu1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-3ubuntu1)
configuration: --prefix=/usr --extra-version=1ubuntu1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[video4linux2,v4l2 # 0x55ca407b9700] Time per frame unknown
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 6726.737520, bitrate: N/A
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x480, 29.25 tbr, 1000k tbn, 1000k tbc
Output #0, video4linux2,v4l2, to '/dev/video1':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x480, q=2-31, 29.25 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 76 fps= 34 q=-1.0 Lsize=N/A time=00:00:02.52 bitrate=N/A speed=1.14x
No matter what -pix_fmt I pass, I always end up with yuv420p on the output.
I did several tests with both proper USB UVC webcams and DroidCam; the output pixel format never changes. This is not specific to yuv422p either; other requested formats are ignored as well. Why is this happening? What am I missing?
Note: I have verified that this ffmpeg build supports the yuv422p pixel format (it is listed when executing ffmpeg -pix_fmts).
You can't change pixel formats when using -c:v copy, because stream copy passes the frames through untouched. Change it to -c:v rawvideo so the frames are actually re-encoded in the requested format.
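Applied to the command from the question, that would look like this:
ffmpeg -f v4l2 -i /dev/video0 -c:v rawvideo -pix_fmt yuv422p -f v4l2 /dev/video1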

Unknown V4L2 pixel format equivalent for yuvj420p

I am trying to pipe a mp4 video located in Videos/video.mp4 to a virtual webcam device located at /dev/video0.
I tried running:
ffmpeg -re -i Videos/video.mp4 -map 0:v -f v4l2 /dev/video0
and I keep getting the following error:
[video4linux2,v4l2 # 0x5580cf270100] Unknown V4L2 pixel format equivalent for yuvj420p
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
Conversion failed!
Full log:
ffmpeg version 4.2.2-1+b1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9 (Debian 9.2.1-28)
configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Videos/video.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2020-03-23T04:24:01.000000Z
com.android.version: 8.1.0
Duration: 00:01:00.14, start: 0.000000, bitrate: 20048 kb/s
Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, smpte170m), 1920x1080, 19898 kb/s, SAR 1:1 DAR 16:9, 29.43 fps, 29.58 tbr, 90k tbn, 180k tbc (default)
Metadata:
rotate : 270
creation_time : 2020-03-23T04:24:01.000000Z
handler_name : VideoHandle
Side data:
displaymatrix: rotation of 90.00 degrees
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
creation_time : 2020-03-23T04:24:01.000000Z
handler_name : SoundHandle
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[video4linux2,v4l2 # 0x5580cf270100] Unknown V4L2 pixel format equivalent for yuvj420p
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
Conversion failed!
The desired result is that the mp4 video is seen by apps that try to view the webcam. I am running this on a desktop without a webcam or video interface, which is why I am using /dev/video0.
Add -vf format=yuv420p (or the alias -pix_fmt yuv420p).
The v4l2 output device doesn't support yuvj420p, which is the pixel format of your input. In most cases ffmpeg will automatically choose a supported pixel format, but it is unable to do so for V4L2 output, so you have to do it manually:
ffmpeg -re -i Videos/video.mp4 -map 0:v -vf format=yuv420p -f v4l2 /dev/video0
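As with the loopback setup in the previous question, /dev/video0 has to be provided by the v4l2loopback kernel module on a machine with no physical webcam:
modprobe v4l2loopback devices=1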

FFMPEG detect silence command runs correctly but does not give the silence duration

I have a .wav audio file and I need to extract the silence/pause durations in it. I'm using ffmpeg with the silencedetect filter, but I can't understand why it gives no silence durations for this file while it gives results for other files. Can anyone help me understand, from the output given below, why it is not showing any detected silences?
Input Command:
ffmpeg -i "input.wav" -af silencedetect=noise=-30dB:d=0.5 -f null -
Output:
ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.1.1 (GCC) 20190807
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'D:\Research\PhD\Carolina\AD\wav\media.io_Wakeman_Rhyne_001_01.wav':
Duration: 00:17:38.04, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
Adjust the noise and/or d values. From the silencedetect documentation:
The filter accepts the following options:
noise, n: Set noise tolerance. Can be specified in dB (in case "dB" is appended to the specified value) or amplitude ratio. Default is -60dB, or 0.001.
duration, d: Set silence duration until notification (default is 2 seconds).
"Silence" is often not 100% silent. There could be background noise. In that case you'll need to adjust the noise value until it detects what you want as silence. For example, if you use noise=-15dB, then anything equal to or quieter than -15 dB will be detected as silence.
A screenshot from Audacity (not reproduced here) shows a highlighted "silent" area. It sounds silent compared to the rest of the audio, but if you were to listen carefully you would hear a ventilation fan and other background noise. The VU meter in Audacity shows that it is actually -34 dB at its loudest, so you would have to use at least noise=-34dB.
Additionally you may need to adjust d to tell it the minimum length the silent segment needs to be before it is detected as silence.
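For the Audacity example above, the command from the question would become something like this (the threshold is a starting point to tune against your own file, not a universal value):
ffmpeg -i "input.wav" -af silencedetect=noise=-34dB:d=0.5 -f null -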

ffmpeg replace part of audio file with looped audio

I am quite new to ffmpeg, and I am trying to replace part of a first audio file with a second file. The second file can be too short, so some sort of looping is needed.
After some research I came up with the following command, and it produces the expected output as long as I only do one replacement, but I would like to do multiple replacements. Any help on what I am doing wrong? Any suggestions or remarks on this way of working are also very welcome.
(Any typos in the commands below can be ignored; I generate the command by script and simplified the names for readability.)
Works (One replacement):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=start=5,asetpts=PTS-STARTPTS[partB];[partA][replaceA][partB]concat=n=3:v=0:a=1[aout]" -map "[aout]" Out.wav
Does not work (multiple replacements):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=5:4,asetpts=PTS-STARTPTS[partB];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];[0:a]atrim=start=6,asetpts=PTS-STARTPTS[partC];[partA][replaceA][partB][replaceB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
ffmpeg version N-76860-g72eaf72 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 9.100 / 55. 9.100
libavcodec 57. 16.100 / 57. 16.100
libavformat 57. 19.100 / 57. 19.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '3897583stereo.wav':
Duration: 00:00:12.07, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 8000 Hz, 2 channels, s16, 256 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'beep-021.wav':
Metadata:
encoder : Lavf57.19.100
Duration: 00:00:00.30, bitrate: 1413 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
[wav # 057242c0] Invalid stream specifier: replaceBase.
Last message repeated 1 times
Stream specifier 'STREAM CUT matches no streams.
Thanks in advance!
I managed to find a workaround (or maybe just how it should be done) by splitting the looped stream with asplit. Remarks on this way of working are still welcome.
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[replaceBase]asplit=2 [replaceA][replaceB];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceA]atrim=0:2,asetpts=PTS-STARTPTS[replaceTrimmedA];[0:a]atrim=5:6,asetpts=PTS-STARTPTS[partB];[replaceB]atrim=0:2,asetpts=PTS-STARTPTS[replaceTrimmedB];[0:a]atrim=start=8,asetpts=PTS-STARTPTS[partC];[partA][replaceTrimmedA][partB][replaceTrimmedB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
Regards,
