ffmpeg: get all sound devices (input/output) - audio

I have downloaded the static build of ffmpeg for Windows and am trying to get all my sound devices (input/output). I googled and found a command to list audio devices, but when I run it as ffmpeg arecord -l, it shows this error:
Unrecognized option 'l'.
Error splitting the argument list: Option not found
What am I missing here?

arecord is the command-line sound recorder and player for the ALSA soundcard driver, which is only available on Linux.
On Windows you can list the dshow devices with:
ffmpeg -list_devices true -f dshow -i dummy
See the Windows section of https://trac.ffmpeg.org/wiki/Capture/Desktop
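Once the devices are listed, you can record from one by name. A minimal sketch, assuming the list showed an audio device literally called "Microphone" (substitute whatever name appears on your machine):
ffmpeg -f dshow -i audio="Microphone" out.wav
Press q to stop recording.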

Related

How do I get amixer PCM numid=3 to work on Raspberry Pi 4?

I have a Raspberry Pi 4 with speakers connected to the 3.5mm jack. I have no HDMI connected, but I have the standard 7 inch monitor connected. It runs raspbian.
Edit: I found out that this was normal behaviour due to an OS update; see my comment below.
If I run amixer cset numid=3 1 I get the error amixer: Cannot find the given element from control default.
If I run amixer contents there is no numid=3; I only get:
numid=2,iface=MIXER,name='Headphone Playback Switch'
; type=BOOLEAN,access=rw------,values=1
: values=on
numid=1,iface=MIXER,name='Headphone Playback Volume'
; type=INTEGER,access=rw---R--,values=1,min=-10239,max=400,step=0
: values=0
| dBscale-min=-102.39dB,step=0.01dB,mute=1
So the PCM playback route with numid=3 is missing, and 1+2 say Headphone instead of PCM; that is normal as far as I can tell from the interweb.
I can still play things with aplay and omxplayer (I'm not sure if it is mono or stereo).
But some other things fail, which I thought might be because of this. If I run espeak (and similarly pyttsx3 in Python), I get screens full of errors; a few of the lines are:
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.front
Cannot connect to server socket err = No such file or directory
jack server is not running or cannot be started
And that is what I actually would want to get to work.
Whether that is caused by the first error, I don't know. But when I search for rPi sound problems, the cset numid=3 "solution" seems to appear everywhere, and I can't use it...
Edit: That turns out not to be the reason; the espeak problem is still there even if I revert to the old way with options in boot.txt.
It took me three days to figure out how to do this on the Raspberry Pi. I use a shell command: when you want audio via the 3.5mm jack, just run the following. You can leave the HDMI cable connected.
sudo bash -c 'echo -e " defaults.pcm.card 1 \ndefaults.ctl.card 1" > /etc/asound.conf'
If you want to use the HDMI audio output instead, just change the number 1 to 0:
sudo bash -c 'echo -e " defaults.pcm.card 0 \ndefaults.ctl.card 0" > /etc/asound.conf'
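The card numbers are not guaranteed to be the same on every system, so it is worth checking them first. aplay -l prints every playback device together with its card number, which you can then plug into /etc/asound.conf:
aplay -l
On many Raspbian images the HDMI output is card 0 and the 3.5mm jack is card 1, matching the commands above, but verify this on your own system.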

How to record microphone to mp3 in Termux on Android?

I'm interested in trying out the Termux command line on Android to record microphone audio to mp3. I've tried running different commands but without much effect. Can anyone give a correct example command to start recording the microphone to mp3 at a default location, for example the downloads folder? (This is on Android Oreo.)
termux-microphone-record
-d Start recording w/ defaults
-f Start recording to specific file
-l Start recording w/ specified limit (in seconds, unlimited for 0)
-e Start recording w/ specified encoder (aac, amr_wb, amr_nb)
-b Start recording w/ specified bitrate (in kbps)
-r Start recording w/ specified sampling rate (in Hz)
-c Start recording w/ specified channel count (1, 2, ...)
-i Get info about current recording
-q Quits recording
from https://wiki.termux.com/wiki/Termux-microphone-record
Termux does not (yet) appear to support recording directly to mp3 format. To get an mp3, you'll need to convert your recording using ffmpeg.
The AMR Wide-Band format (amr_wb) has good quality for speech recording.
# Begin recording
termux-microphone-record -e amr_wb -f filename.amr
# Stop recording
termux-microphone-record -q
# Convert to mp3
ffmpeg -i filename.amr filename.mp3
The command below will record for 10 seconds and save filename.mp3 in your Termux home directory.
termux-microphone-record -d -f filename.mp3 -l 10
You need to (1) install the Termux:API app for Android (I did, from F-Droid), (2) install the Linux package (pkg install), and (3) grant microphone permission to Termux:API. With -l, it records and then stops automatically.
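Combining the pieces above, a small sketch that records a 10-second AMR-WB clip and then converts it to mp3 (the filenames are placeholders, and ffmpeg must be installed separately with pkg install ffmpeg):
# start a background recording that stops itself after 10 seconds
termux-microphone-record -e amr_wb -l 10 -f ~/recording.amr
# the recorder returns immediately, so wait for it to finish
sleep 11
# check that nothing is still recording, then convert
termux-microphone-record -i
ffmpeg -i ~/recording.amr ~/recording.mp3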

Piping output from aplay to arecord in centos

I am trying to automate some tests for a websocket client. This client connects to a server on command, and the server is basically a speech-to-text engine. The client supports audio streaming from a microphone, so that people can record themselves in real time and transmit the audio to the engine. I am running the client in a CentOS VM which does not have a physical sound card, so I decided to simulate one using
modprobe snd-dummy
My plan is to pipe the output of
aplay audioFile.raw
to the input of
arecord test.raw -r 8000 -t raw
so that I can use that to simulate the microphone feature. I read online that the file plugin for ALSA can pipe the results of one command to the next, so I made the following modifications to the .asoundrc file in my root directory:
# default playback/control device: the dummy sound card
pcm.!default {
    type hw
    card 0
}
# "Ted": a file-plugin PCM that plays through the slave device
# and also pipes a copy of the audio to the given command
pcm.Ted {
    type file
    slave mySlave
    file "| arecord test.raw -r 8000 -t raw"
}
pcm_slave.mySlave {
    pcm "hw:0,0"
}
ctl.!default {
    type hw
    card 0
}
When I try the following command:
aplay audioFile.raw -D Ted
It seems to run fine, but the output, test.raw, seems to contain only silence... Does anyone know what I am doing wrong? I am very new to ALSA, so if anyone can point me in the right direction it would be greatly appreciated. Thanks!
Issue fixed: instead of using snd-dummy I used snd-aloop, and the audio pipes correctly. Refer to this question:
Is it possible to arecord output from dummy card?
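For anyone landing here with the same problem, the snd-aloop setup looks roughly like this: whatever is played into one side of the loopback card can be captured from the other side (the device numbers below assume Loopback is the only extra card):
modprobe snd-aloop
# play the file into one side of the loopback card...
aplay -D hw:Loopback,0,0 audioFile.raw &
# ...and capture it from the other side
arecord -D hw:Loopback,1,0 -r 8000 -t raw test.raw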

Record video in background with mencoder

I have a USB TV stick, a Sundtek MediaTV Pro III, which has an analog input.
With the following command, recording works perfectly:
mencoder tv:// -tv driver=v4l2:width=720:height=576:outfmt=uyvy:device=/dev/video0:input=1:fps=25:adevice=/dev/dsp0:audiorate=48000:amode=1:forceaudio:immediatemode=0 -ffourcc DX50 -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:turbo:vbitrate=1200:keyint=15 -oac mp3lame -noskip -o video1.avi
The only problem is that I can hear the sound while recording.
This is kind of annoying because I want to be able to watch a movie (a file, not via the USB stick) while I am recording the analog TV stream.
How can I record without hearing the sound?
Try this:
/opt/bin/mediaclient -c external -d /dev/video0
This tells the driver not to play back the audio through the speakers.

HTTP Live Streaming : The Linux nightmare

I'm working on a music VOD app for iPhone, and thanks to Apple's guidelines, I have to use HTTP Live Streaming in order to be accepted on the App Store. But since Apple doesn't care about 98% of the servers on earth, they don't provide their so-magical HTTP Live Streaming Tools for Linux-based systems. And from this point, the nightmare starts.
My goal is simple: take an MP3, segment it, and generate a simple .m3u8 index file.
I googled "HTTP Live Streaming Linux" and thought, "Oh great! Lots of people have already done that!"
First, I visited the (so famous) post by Carson McDonald.
Result: the svn segmentate.c was old, buggy, and a nightmare to compile (nobody in this world can say precisely which version of ffmpeg they are using!).
Then I came across Carson's git repo, but too bad: there is a lot of annoying Ruby stuff, and live_segmenter.c can't take mp3 files.
Then I searched more deeply. I found this stackoverflow topic, and it's exactly what I want to do. So I followed the advice from juuni to use this script (httpsegmenter). Result: impossible to compile at first; after two days of work I finally managed to build it (ffmpeg 0.8.1 with httpsegmenter rev17). And no, this is not a good script: it does take mp3 files, but the generated .ts files and the index file can't be read by a player.
Then the author of the post, krisbulman, came up with a solution and even offered his own patched version of m3u8-segmenter (git repo). I tested it: it doesn't compile and does nothing. So I took the original version from johnf, https://github.com/johnf/m3u8-segmenter. I managed to compile it and, miracle, it works (not really).
I used this command line (ffmpeg 0.8.1):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libmp3lame -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/
This command encodes my mp3 file (it takes 4 seconds, too long) and pipes it to m3u8-segmenter, which segments it into 10-second .ts files.
I tested this stream with Apple's mediastreamvalidator on my Mac, and it said it was OK. So I played it in QuickTime, but there is about a 0.2-second gap between each pair of .ts files!!
So here is my situation: it's a nightmare. I can't get a simple mp3 stream working over the HLS protocol. Is there a simple WORKING solution to segment an mp3? Why can't I directly segment the mp3 file into multiple mp3 files, like Apple's mediafilesegmenter does?
Use libfaac instead of libmp3lame, which eliminates the 0.2-second break.
Elastic Transcoder Service: if you don't need AES encryption, just throw your MP3 in an S3 bucket and be done with it:
http://aws.amazon.com/elastictranscoder/
You can then even add CloudFront CDN support. (P.S. I fully appreciate your pain; this whole space is a nightmare.)
For live streaming only, you should try nginx with the RTMP module: https://github.com/arut/nginx-rtmp-module
Live HLS works pretty well, but with a looooong buffer.
However, it does not support on-demand HLS streaming.
A piece of the module's config, for example:
# HLS requires libavformat & should be configured as a separate
# NGINX module in addition to nginx-rtmp-module:
# ./configure ... --add-module=/path/to/nginx-rtmp-module/hls ...
# For HLS to work please create a directory in tmpfs (/tmp/app here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC/MP3. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
    live on;
    hls on;
    hls_path /tmp/app;
    hls_fragment 5s;
}
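The fragments written to /tmp/app still have to be exposed over plain HTTP, as the comments above mention. A minimal sketch of the matching location in the http{} section (the paths and MIME types here are assumptions; adjust to your layout):
# in the http{} section of nginx.conf
location /app {
    root /tmp;
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
}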
What problems were you having with httpsegmenter? It's a single C source file that only links against some libraries provided by ffmpeg (or libav). I maintain a Gentoo ebuild for it, as I use it to time-shift talk radio. If you're running Gentoo, building is as simple as this:
sudo bash -l
layman -S
layman -a salfter
echo media-video/httpsegmenter ~\* >>/etc/portage/package.accept_keywords
emerge httpsegmenter
exit
On Ubuntu, I had to make sure libavutil-dev and libavformat-dev were both installed, so the build looks something like this:
sudo apt-get install libavutil-dev libavformat-dev
git clone https://gitlab.com/salfter/httpsegmenter.git
cd httpsegmenter
make -f Makefile.txt
sudo make -f Makefile.txt install
Once it's built (and once I have an audio source URL), usage is fairly simple: curl to stream the audio, ffmpeg to transcode it from whatever it is at the source (often MP3) to AAC, and segmenter to chunk it up:
curl -m 3600 http://invalid.tld/stream | \
ffmpeg -i - -acodec libvo_aacenc -ac 1 -ab 32k -f mpegts - 2>/dev/null | \
segmenter -i - -d 20 -o ExampleStream -x ExampleStream.m3u8 2>/dev/null
This grabs one hour of streaming audio (needs to be MP3 or AAC, not Flash), transcodes it to 32 kbps mono AAC, and chunks it up for HTTP live streaming. Have it dump into a directory served up by your webserver and you're good to go.
Once the show's done, converting to a single .m4a that can be served up as a podcast is also simple:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -absf aac_adtstoasc ExampleStream.m4a 2>/dev/null
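Note that -absf is the old option name; on newer ffmpeg builds the same conversion should presumably be written with -bsf:a:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -bsf:a aac_adtstoasc ExampleStream.m4a 2>/dev/null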
I know this is an old question, but I am using this in VLC:
## To start playing the playlist out to the encoder
cvlc -vvv playlist.m3u --sout rtp:127.0.0.1 --ttl 2
## To start the encoder
cvlc rtp:// --sout='#transcode{acodec=mp3,ab=96}:duplicate{dst=std{access=livehttp{seglen=10,splitanywhere=true,delsegs=true,numsegs=15,index=/var/www/vlctest/mystream.m3u8,index-url=http://IPANDPORT/vlctest/mystream-########.ts},mux=ts,dst=/var/www/vlctest/mystream-########.ts},select=audio}'
I had problems if I didn't stream the playlist file to another copy of VLC; the first step is optional if you already have a live streaming source (you can use any source for the "encoder" portion).
You could try our media services on the Windows Azure platform: http://mingfeiy.com/how-to-generate-http-live-streaming-hls-content-using-windows-azure-media-services/
You can encode and stream your video in HLS format using our portal, with no configuration or coding required.
Your English is fine.
Your frustration is apparent.
Q: What's the real issue here? It sounds like you just need a working HLS server, correct? Because of Apple requirements, correct?
Can you use any of the ready-made implementations listed here:
http://en.wikipedia.org/wiki/HTTP_Live_Streaming
