Add SPEEX codec support to FFmpeg - audio

How can I add Speex support to my FFmpeg installation? I need to extract the audio from an FLV created by FMS.
I just installed it using: apt-get install ffmpeg.
ffmpeg -version
FFmpeg version SVN-rUNKNOWN, Copyright (c) 2000-2007 Fabrice Bellard, et al.
configuration: --enable-gpl --enable-pp --enable-swscaler --enable-pthreads --enable-libvorbis --enable-libtheora --enable-libogg --enable-libgsm --enable-dc1394 --disable-debug --enable-shared --prefix=/usr
libavutil version: 49.3.0
libavcodec version: 51.38.0
libavformat version: 51.10.0
built on Apr 23 2010 15:11:13, gcc: 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
ffmpeg SVN-rUNKNOWN
libavutil 3212032
libavcodec 3352064
libavformat 3344896

I managed to get it working without libspeex: as long as I don't change the default sample rate in the Flash/Flex application, I can extract the audio from it using mp3lame.
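For reference, here is a rough sketch of both routes; the file names are placeholders and the exact flags depend on your FFmpeg checkout, so treat this as illustrative rather than a verified recipe:
# extract the audio track to MP3 (works when the source sample rate is MP3-compatible)
ffmpeg -i recording.flv -vn -acodec libmp3lame recording.mp3
# or rebuild FFmpeg from source with Speex decoding enabled (needs libspeex and its headers installed)
./configure --enable-gpl --enable-shared --enable-libmp3lame --enable-libspeex --prefix=/usr
make && make install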

Related

How can I configure ffmpeg with both OpenSSL and H.264

I use this to configure ffmpeg:
./configure --enable-shared --enable-yasm --enable-openssl --enable-gpl --enable-libx264 --prefix=/mnt/newdatadrive/apps/ffmpeg/ffmpeg-master
But it returned an error:
OpenSSL <3.0.0 is incompatible with the gpl
But I need both of them; how can I resolve this?
It seems that OpenSSL 3.0 is not compatible with GPL 2.0; the issue appears to be that its Apache license is not compatible with GPL 2.0. My guess is that you should try building under GPL 3.0 or higher instead.
You can try using --enable-gpl together with the --enable-version3 build parameter, which should help.
You can try --enable-nonfree as well, though I'm not sure whether it is needed here.
You should also look at ./configure --help for more details on these options.
Try using:
./configure --enable-shared --enable-yasm --enable-openssl --enable-gpl --enable-libx264 --enable-nonfree --prefix=/mnt/newdatadrive/apps/ffmpeg/ffmpeg-master
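If you would rather avoid --enable-nonfree, a sketch of the same configure line using --enable-version3 instead, per the suggestion above (the prefix path is the asker's):
./configure --enable-shared --enable-yasm --enable-openssl --enable-gpl --enable-version3 --enable-libx264 --prefix=/mnt/newdatadrive/apps/ffmpeg/ffmpeg-master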

FFmpeg: Error when mixing two audio streams

I'm recording two streams with ffmpeg using this command:
ffmpeg -protocol_whitelist pipe,udp,rtp -fflags +genpts -f sdp -i pipe:0 \
-map 0:v:0 -c:v copy \
-filter_complex \
"[0:a:0]volume=0.5[a0]; \
[0:a:1]volume=0.5[a1]; \
[a0][a1]amerge=inputs=2,pan=stereo|c0<c0+c2|c1<c1+c3[out]" \
-map [out] -c:a libopus \
-flags +global_header out.webm
Here [0:v:0] is my video stream, and [0:a:0] and [0:a:1] are the audio streams that I want to mix together and record alongside the video stream.
Unfortunately, I sometimes get this ugly error, and whenever it happens the audio in the final video ends up silent.
LBRR frames is not implemented. Update your FFmpeg version to the
newest one from Git. If the problem still occurs, it means that your
file has a feature which has not been implemented.
Error decoding a SILK frame.
Error decoding an Opus frame.
My ffmpeg version is:
ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Where am I going wrong?
You should update your ffmpeg. Apparently you installed the default Ubuntu package like this:
apt-get install ffmpeg
which installs the version you mentioned.
You can build and install it from the git repository like this:
apt-get install libvorbis-dev
apt-get install libvpx-dev
git clone https://github.com/FFmpeg/FFmpeg ffmpeg
cd ffmpeg
./configure --extra-cflags=-I/opt/local/include --extra-ldflags=-L/opt/local/lib --enable-nonfree --enable-libvpx --enable-libvorbis
make
make install
Once it is installed, you can test it with:
./ffmpeg
Note that you can't run ffmpeg the same way as before; you have to call it by its full path, i.e. /root/ffmpeg/ffmpeg.
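For example, assuming the clone was made in /root/ffmpeg as above, a quick check that the newly built binary is the one being used:
/root/ffmpeg/ffmpeg -version
Then rerun the original mixing command with that full path instead of plain ffmpeg.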

Cannot find 'audio_example.mp3' when using deezer/spleeter

After installing Spleeter in Docker, I tried to run the example docker run -v $(pwd)/output:/output researchdeezer/spleeter separate -i audio_example.mp3 -o /output on Ubuntu, as the docs show.
But the terminal returned audio_example.mp3: No such file or directory.
lzq@lzq-Lenovo-ideapad-700-15ISK:~$ docker run -v $(pwd)/output:/output researchdeezer/spleeter separate -i audio_example.mp3 -o /output
ERROR:spleeter:An error occurs with ffprobe (see ffprobe output below)
ffprobe version 4.1.4-1~deb10u1 Copyright (c) 2007-2019 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --prefix=/usr --extra-version='1~deb10u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
audio_example.mp3: No such file or directory
Actually, there is an 'audio_example.mp3' in $(pwd). I tried copying the mp3 file to $(pwd)/output, but it still did not work.
Is it a root-permission issue, or is it that Docker cannot find the directory?
Create a directory named input and place your file there. Then run the following:
docker run \
-v $(pwd)/output:/output \
-v $(pwd)/input:/input \
researchdeezer/spleeter separate -i /input/audio_example.mp3 -o /output
The command runs inside the container, so you need to make the file available inside the container first and then pass the command a path that exists inside the container.
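Putting it together, a sketch of the full sequence on the host (the directory names follow the answer above):
mkdir -p input output
cp audio_example.mp3 input/
docker run \
-v $(pwd)/output:/output \
-v $(pwd)/input:/input \
researchdeezer/spleeter separate -i /input/audio_example.mp3 -o /output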

FFmpeg screen tear

Screen tearing issue with ffmpeg on Manjaro Linux.
ffmpeg command used:
ffmpeg -y -f x11grab -s 1366x768 -i :0.0 out.mkv
ffmpeg -version
ffmpeg version n4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.3.0 (Arch Linux 9.3.0-1)
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmfx --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
uname -a
Linux manjaro 5.4.2-1-MANJARO #1 SMP PREEMPT Thu Dec 5 09:55:57 UTC 2019 x86_64 GNU/Linux
Desktop environment is Budgie
The issue was not caused by ffmpeg but by the compositor. I replaced the compositor with picom.
FFmpeg isn't the culprit; the problem is the architecture of X.Org. X11 is such an old protocol that it no longer fits today's needs.
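As a rough sketch of that fix (the exact steps depend on the desktop environment, and the flags are illustrative; check picom --help and the x11grab documentation for your versions): stop the current compositor, start picom in the background, then capture again:
# start picom as a background daemon
picom -b
# capture again; -video_size is the long form of -s, and -framerate 30 is just an example
ffmpeg -y -f x11grab -video_size 1366x768 -framerate 30 -i :0.0 out.mkv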

enable the "vorbis_parser.h" in ffmpeg build

My ffmpeg build configuration disables everything and selectively enables only the decoders, encoders, and demuxers for the formats that I need. I want to use vorbis_parser.h to parse the extradata. I tried --enable-parser=vorbis, but that does not work: the libavcodec include folder contains no file named vorbis_parser.h. What option should I set so that I can use vorbis_parser.h?
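For context, a sketch of the kind of selective configuration being described; the specific protocol/demuxer/decoder enables are illustrative, not the asker's actual flags:
./configure --prefix="$HOME/ffmpeg_build" --disable-everything \
--enable-protocol=file --enable-demuxer=ogg \
--enable-decoder=vorbis --enable-parser=vorbis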
As far as I know, the ./configure script for ffmpeg looks like this:
./configure --prefix="$HOME/ffmpeg_build" \
--extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
--bindir="$HOME/bin" --extra-libs="-ldl" --enable-gpl --enable-libass --enable-libfdk-aac \
--enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx \
--enable-libx264 --enable-nonfree --enable-x11grab
(Source: http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide)
To me it seems like you just have to add --enable-gpl and --enable-libvorbis to your ./configure script.
Nevertheless, there may be options that need to be enabled or disabled in libvorbis's own ./configure script. I hope this helps.
Have a nice day ;)
I realized that the functions beginning with avpriv_ are FFmpeg's private functions, and those are not included in the exposed header files. So we probably cannot include vorbis_parser.h.
