How to record microphone to mp3 in Termux on Android?

I'm interested in trying out the Termux command line on Android to record microphone audio to mp3. I've tried running various commands but without much effect. Can anyone give a correct example command that starts recording the microphone to mp3 at a default location, for example the Downloads folder? (This is on Android Oreo.)
termux-microphone-record
-d Start recording w/ defaults
-f Start recording to specific file
-l Start recording w/ specified limit (in seconds, unlimited for 0)
-e Start recording w/ specified encoder (aac, amr_wb, amr_nb)
-b Start recording w/ specified bitrate (in kbps)
-r Start recording w/ specified sampling rate (in Hz)
-c Start recording w/ specified channel count (1, 2, ...)
-i Get info about current recording
-q Quits recording
from https://wiki.termux.com/wiki/Termux-microphone-record

Termux does not (yet) appear to support recording directly to mp3 format. To get an mp3, you'll need to convert your recording using ffmpeg.
AMR Wide-Band (the amr_wb encoder) gives good quality for speech recording.
# Begin recording
termux-microphone-record -e amr_wb -f filename.amr
# Stop recording
termux-microphone-record -q
# Convert to mp3
ffmpeg -i filename.amr filename.mp3
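Putting those steps together, here is a minimal wrapper sketch (assumptions: termux-api and ffmpeg are installed, termux-setup-storage has been run so ~/storage/downloads points at the shared Downloads folder, and 30 seconds is just an example length):
#!/data/data/com.termux/files/usr/bin/bash
# Record 30 seconds of speech and convert it to mp3 in the Downloads folder.
out=~/storage/downloads/recording      # assumes termux-setup-storage was run
termux-microphone-record -e amr_wb -f "$out.amr"
sleep 30                               # record for 30 seconds
termux-microphone-record -q            # stop recording
ffmpeg -i "$out.amr" "$out.mp3"        # convert to mp3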

The command below will record for 10 seconds and save filename.mp3 in your Termux home directory.
termux-microphone-record -d -f filename.mp3 -l 10

You need to: 1. install the Termux:API app for Android (I did this from F-Droid), 2. install the Linux package (pkg install), and 3. grant microphone permission to Termux:API.
After that it records and stops automatically with -l.
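In command form, the setup might look like this (a sketch; the package names are the current Termux ones, and the microphone permission is granted through Android's app settings rather than from the shell):
# Install the Termux:API companion app first (e.g. from F-Droid), then:
pkg install termux-api   # provides termux-microphone-record
pkg install ffmpeg       # only needed if you want to convert to mp3 afterwards
# Finally, grant the Termux:API app microphone permission in Android's settings.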

Related

Play video file and audio file simultaneously from Linux command line

I would like to play a separate video stream and audio stream simultaneously from the Linux command line, using e.g. cvlc or mpv.
More specifically, I would like to play a youtube video in high quality format, using youtube-dl along with a player.
More details:
I am using this command to playback a youtube video on my pc:
youtube-dl -i <youtube.com/url> -o - | mpv -
Let's say I have the following formats available for a youtube video:
249 webm audio only tiny 62k , opus # 50k (48000Hz), 14.14MiB
251 webm audio only tiny 158k , opus #160k (48000Hz), 35.68MiB
303 webm 1920x1080 1080p60 4429k , vp9, 60fps, video only, 536.78MiB
299 mp4 1920x1080 1080p60 6901k , avc1.64002a, 60fps, video only, 884.09MiB
22 mp4 1280x720 720p 1339k , avc1.64001F, 30fps, mp4a.40.2#192k (44100Hz) (best)
youtube-dl would automatically choose the last entry of this list, as it is a format that includes video and audio in one file.
Is there a way I can play the formats 303 and 251 on my pc?
If I would like to download them I would use:
youtube-dl -i <youtube.com/url> -f 303+bestaudio
What youtube-dl does in this case is download the video and the audio file separately and merge them into one file using ffmpeg.
But I can't figure out whether it is possible to play back both streams without first downloading them to a file.
Alright, I think I figured out a solution.
The command I use is as follows:
ffmpeg -loglevel quiet -i $(youtube-dl -g youtube.com/url -f 303) -i $(youtube-dl -g youtube.com/url -f bestaudio) -f matroska -c copy - | mpv -
The youtube-dl -g option just returns the URL of the video or audio stream.
In this case the URLs are passed to ffmpeg, which does the merging.
-f matroska tells ffmpeg to use the mkv container format
-c copy says that no re-encoding should be done
edit:
For some reason, on my system the terminal input is broken after ffmpeg exits. For now I resolve this by typing reset, until I find a better solution to this issue.
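For convenience, the whole pipeline can be wrapped in a small shell function (a sketch only; the name ytplay and the default format are my own, and the trailing reset just works around the broken-terminal issue mentioned above):
# Usage: ytplay 'https://youtube.com/url' 303
ytplay () {
  local url=$1 vfmt=${2:-bestvideo}
  ffmpeg -loglevel quiet \
    -i "$(youtube-dl -g "$url" -f "$vfmt")" \
    -i "$(youtube-dl -g "$url" -f bestaudio)" \
    -f matroska -c copy - | mpv -
  reset   # work around the terminal being left broken after ffmpeg exits
}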

Set up basic Batch or Node.JS prompts for FFMPEG?

I have some game clips from Nvidia Shadowplay that I like to casually shorten and/or turn into webms, or keep as mp4s. I use the same ffmpeg line for them; I only slightly change the input file, start time, and output file.
How could I set up something like a batch file (I was thinking maybe Node as well) where it just asks for the input file, start time, and output file?
The current ffmpeg command line I use is like this:
ffmpeg -i desktop.mp4 -ss 00:01:50 -b 900000 -vf scale=640:trunc(ow/a/2)*2 output.webm
You can prompt for user input using the following pattern:
SET /P FILENAME=Enter Filename:
ECHO USER ENTERED %FILENAME%
So with your code you'd set up your three variables and then use:
ffmpeg -i "%INFILE%" -ss %STARTTIME% -b 900000 -vf scale=640:trunc(ow/a/2)*2 "%OUTFILE%"

arecord audio recording command

I am using the following arecord command to record audio from a USB microphone. Although I set arecord to record 10 seconds of audio, the start time and end time do not reflect this. Any suggestions as to why I am facing this issue?
As you can see above, it is taking 22 seconds. The recorded audio file, however, is 10 seconds; it is the audio of the last 10 seconds out of the 22 seconds it seems to have recorded.
Any ideas why I am seeing this issue?
Try modifying the script to get a more verbose output:
arecord -v -D plughw:for -f dat test.wav -d 10
I suspect that arecord is trying to set up the file header before storing the audio. WAV files have file headers, as explained here.

Current Level of Microphone Input

How can I get the current audio input level of a microphone via a shell command under Ubuntu 12.04 LTS?
I checked out amixer to set the volume but could not find a way to get the audio input level at the time of the shell call.
Thank you in advance!
To get the level of the input signal, you have to actually record from the input device.
Use the -d 1 parameter for arecord to get a short file.
To read the level of the data in that file, use something like sox recordedfile.wav -n stat.
Based on the above answer, to get the maximum amplitude:
arecord -qd 1 volt && sox volt -n stat &> volt.d && sed '4q;d' volt.d
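If you want an ongoing readout rather than a single value, the same two tools can be looped (a rough sketch; one-second samples, stop with Ctrl-C):
# Print the maximum amplitude of the last second of input, once per second.
while true; do
  arecord -q -d 1 -f cd /tmp/level.wav
  sox /tmp/level.wav -n stat 2>&1 | grep 'Maximum amplitude'
done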

HTTP Live Streaming : The Linux nightmare

I'm working on a music VOD app for iPhone, and thanks to Apple's guidelines I have to use HTTP Live Streaming in order to be accepted on the App Store. But since Apple doesn't care about 98% of the servers on earth, they don't provide their so-magical HTTP Live Streaming Tools for Linux-based systems. And from this point, the nightmare starts.
My goal is simple: take an MP3, segment it, and generate a simple .m3u8 index file.
I googled "HTTP Live Streaming Linux" and thought "Oh great! Lots of people have already done that!"
First, I visited the (so famous) post by Carson McDonald.
Result: the svn segmentate.c was old, buggy and a nightmare to compile (nobody in this world can say precisely which version of ffmpeg they are using!).
Then I came across Carson's git repo, but too bad, there is a lot of annoying Ruby stuff and live_segmenter.c can't take mp3 files.
Then I searched more deeply. I found this stackoverflow topic, and it's exactly what I want to do. So I followed the advice from juuni to use this script (httpsegmenter). Result: impossible to compile anything; after two days of work I finally managed to compile it (ffmpeg 0.8.1 with httpsegmenter rev17). And no, this is not a good script: it does take mp3 files, but the generated .ts files and the index file can't be read by a player.
Then the author of the post, krisbulman, came up with a solution, and even gave a patched version of m3u8-segmenter of his own (git repo). I tested it: it doesn't compile and does nothing. So I took the original version from johnf (https://github.com/johnf/m3u8-segmenter). I managed to compile it and, miracle, it works (not really).
I used this command line (ffmpeg 0.8.1):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libmp3lame -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/
This command encodes my mp3 file (it takes 4 seconds, which is too long) and passes it to m3u8-segmenter, which segments it into 10-second .TS files.
I tested this stream with Apple's mediastreamvalidator on my Mac, and it said it was OK. So I played it in QuickTime, but there is about 0.2 seconds of silence between each .TS file!!
So here is my situation: it's a nightmare, I can't get a simple mp3 stream over the HLS protocol. Is there a simple WORKING solution to segment an mp3? Why can't I directly segment the mp3 file into multiple mp3 files like Apple's mediafilesegmenter does?
Use libfaac instead of libmp3lame, which eliminates the 0.2-second break.
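Applied to the command line above, that would look something like this (a sketch; it assumes your ffmpeg build includes libfaac):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libfaac -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/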
Elastic Transcoder Service - if you don't need AES encryption just throw your MP3 in an S3 bucket and be done with it:
http://aws.amazon.com/elastictranscoder/
You can then even add Cloudfront CDN support. (P.S. I fully appreciate your pain, this whole space is a nightmare).
For live streaming only, you should try Nginx with the RTMP module: https://github.com/arut/nginx-rtmp-module
Live HLS works pretty well, but with a looooong buffer.
However, it does not support on-demand HLS streaming.
A piece of the module's config, for example:
# HLS requires libavformat & should be configured as a separate
# NGINX module in addition to nginx-rtmp-module:
# ./configure ... --add-module=/path/to/nginx-rtmp-module/hls ...
# For HLS to work please create a directory in tmpfs (/tmp/app here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC/MP3. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
    live on;
    hls on;
    hls_path /tmp/app;
    hls_fragment 5s;
}
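The comments above mention an http{} section that serves the fragments; a minimal sketch of that part could look like this (the port, the /hls/ path and /tmp/app are assumptions matching the example above):
# Plain HTTP server block exposing the HLS fragments written to /tmp/app
server {
    listen 8080;
    location /hls/ {
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
        alias /tmp/app/;
    }
}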
What problems were you having with httpsegmenter? It's a single C source file that only links against some libraries provided by ffmpeg (or libav). I maintain a Gentoo ebuild for it, as I use it to time-shift talk radio. If you're running Gentoo, building is as simple as this:
sudo bash -l
layman -S
layman -a salfter
echo media-video/httpsegmenter ~\* >>/etc/portage/package.accept_keywords
emerge httpsegmenter
exit
On Ubuntu, I had to make sure libavutil-dev and libavformat-dev were both installed, so the build looks something like this:
sudo apt-get install libavutil-dev libavformat-dev
git clone https://gitlab.com/salfter/httpsegmenter.git
cd httpsegmenter
make -f Makefile.txt
sudo make -f Makefile.txt install
Once it's built (and once I have an audio source URL), usage is fairly simple: curl to stream the audio, ffmpeg to transcode it from whatever it is at the source (often MP3) to AAC, and segmenter to chunk it up:
curl -m 3600 http://invalid.tld/stream | \
ffmpeg -i - -acodec libvo_aacenc -ac 1 -ab 32k -f mpegts - 2>/dev/null | \
segmenter -i - -d 20 -o ExampleStream -x ExampleStream.m3u8 2>/dev/null
This grabs one hour of streaming audio (needs to be MP3 or AAC, not Flash), transcodes it to 32 kbps mono AAC, and chunks it up for HTTP live streaming. Have it dump into a directory served up by your webserver and you're good to go.
Once the show's done, converting to a single .m4a that can be served up as a podcast is also simple:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -absf aac_adtstoasc ExampleStream.m4a 2>/dev/null
I know this is an old question, but I am using this in VLC:
## To start playing the playlist out to the encoder
cvlc -vvv playlist.m3u --sout rtp:127.0.0.1 --ttl 2
## To start the encoder
cvlc rtp:// --sout='#transcode{acodec=mp3,ab=96}:duplicate{dst=std{access=livehttp{seglen=10,splitanywhere=true,delsegs=true,numsegs=15,index=/var/www/vlctest/mystream.m3u8,index-url=http://IPANDPORT/vlctest/mystream-########.ts},mux=ts,dst=/var/www/vlctest/mystream-########.ts},select=audio}'
I had problems if I didn't stream the playlist file to another copy of VLC; the first step is optional if you already have a live streaming source (you can use any source for the "encoder" portion).
You could try our media services on the Windows Azure platform: http://mingfeiy.com/how-to-generate-http-live-streaming-hls-content-using-windows-azure-media-services/
You can encode and stream your video in HLS format through our portal, with no configuration or coding required.
Your English is fine.
Your frustration is apparent.
Q: What's the real issue here? It sounds like you just need a working HLS server, correct? Because of Apple requirements, correct?
Can you use any of the ready-made implementations listed here:
http://en.wikipedia.org/wiki/HTTP_Live_Streaming
