Recording with arecord stops after 1h 33m under Fedora 23 - linux

I'm using this command to record audio on Fedora 23:
/usr/bin/arecord -d 11400 -D hw:1,0 -f S32_LE -c2 -r48000 -t wav | lame -b 192 - longrec.mp3 >> output.txt 2>&1 & echo $!
Basically I want an mp3 recording of 3 hours and 10 minutes (11400 seconds) from the soundcard input. Everything works fine at the start, but recording always stops after 1h33m12s. The file output.txt shows nothing of any interest:
LAME 3.99.5 64bits (http://lame.sf.net)
Using polyphase lowpass filter, transition band: 18774 Hz - 19355 Hz
Encoding <stdin> to longrec.mp3
Encoding as 48 kHz j-stereo MPEG-1 Layer III (8x) 192 kbps qval=3
Any clue what the problem is?

[SOLVED] Instead of using arecord I switched to ffmpeg:
ffmpeg -f pulse -i alsa_input.usb-Focusrite_Scarlett_Solo_USB-00.analog-stereo -t 11400 -c:a libmp3lame -b:a 192k longrec.mp3
It has the same effect as the arecord command, and it also doesn't block the sound-card resource (I can run multiple ffmpeg recording instances from the same source, while with arecord I can run only one).
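For the record, the stopping point matches arecord's WAV size limit: 48000 samples/s * 2 channels * 4 bytes (S32_LE) = 384000 bytes/s, and 2^31 bytes / 384000 bytes/s ≈ 5592 s, i.e. exactly 1h33m12s. The WAV header stores sizes in 32-bit fields, so arecord stops at 2 GiB even when writing to a pipe. If you would rather keep arecord, piping headerless raw PCM should avoid the limit. An untested sketch, dropping to S16_LE because lame's raw reader (-r) defaults to 16-bit signed input, with -s taking the rate in kHz:
arecord -d 11400 -D hw:1,0 -f S16_LE -c2 -r48000 -t raw | lame -r -s 48 -b 192 - longrec.mp3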

Related

arecord | split to wav or ogg

I have the following script on Linux:
arecord -t raw -f S16_LE -r 44100 -c 1 | split -d -b 882000 --filter='flac - -f --endian little --sign signed --channels 1 --bps 2 --sample-rate 44100 -s -o "${FILE}.flac"'
This script records audio at 44100 Hz, 1 channel; split then cuts the raw stream into 882000-byte chunks, i.e. 10 seconds of audio at the 44100 sample rate, and each chunk is encoded and saved as a FLAC file. The files are about 2 MB each. Is there any way I can do this but save in wav or ogg format?
FLAC files, being lossless, take up a lot of disk space; I want to reduce that with another format.
You can use lame instead of flac:
arecord -t raw -f S16_LE -r 44100 -c 1 | split -d -b 882000 --filter='lame -r -s 44.1 - "${FILE}.mp3"'
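Since you asked for ogg: oggenc (from vorbis-tools) can read raw PCM from stdin too. An untested sketch along the same lines; the raw-input flags (-r, -B, -C, -R) are from the oggenc man page:
arecord -t raw -f S16_LE -r 44100 -c 1 | split -d -b 882000 --filter='oggenc -r -B 16 -C 1 -R 44100 -o "${FILE}.ogg" -'
(Plain wav won't help with size: it is uncompressed, so the chunks would come out larger than the FLAC ones.)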

How can I clean up the sound of aplay received by a capture card

I am trying to set up my Linux desktop to view and listen to the device connected to my capture card. I wrote this two-line script to do that; however, the sound is off-pitch and a bit distorted. How could I clean it up?
arecord --buffer-time=1 -f cd - | aplay --buffer-time=1 -c 5 -r 48000 -f S16_LE - 2> /dev/null &
ffplay -f video4linux2 -framerate 30 -video_size 1920x1080 -input_format mjpeg /dev/video1 2> /dev/null &
I also tried doing that with ffmpeg piped to ffplay, and the sound is crystal clear; however, there is a 2-3 second delay on the video and sound. Is there a way to fix this?
ffmpeg -framerate 30 -video_size 1920x1080 -thread_queue_size 1024 -input_format mjpeg -i /dev/video1 -f pulse -i 'Analog Input - USB Video' -r 30 -threads 4 -vcodec libx264 -crf 0 -preset ultrafast -vsync 1 -async 1 -f matroska - |ffplay -
Could you try just using ffplay for your second approach?
ffplay -framerate 30 -video_size 1920x1080 \
-thread_queue_size 1024 -input_format mjpeg -i /dev/video1 \
-f pulse -i 'Analog Input - USB Video'
I could be off-base as I'm only familiar with ffmpeg and don't personally use ffplay, but they share a lot of things (e.g., backend libraries and command-line parsing), so I'm betting this would work.
Also, what do you mean by "there is 2-3 seconds delay on the video and sound"? Are they 2-3 seconds behind what you are physically seeing and hearing? Or are they out of sync by that many seconds?
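A side note on the original two-liner: arecord -f cd captures 44100 Hz, 16-bit stereo, but aplay is told -c 5 -r 48000, so the samples get reinterpreted at the wrong rate and channel count, which by itself would explain off-pitch, distorted audio. Also, --buffer-time is in microseconds, so --buffer-time=1 requests a 1 µs buffer and practically guarantees underruns. An untested sketch that lets aplay take the format from the WAV header arecord emits by default, with a 100 ms buffer:
arecord -f cd --buffer-time=100000 - | aplay --buffer-time=100000 - 2> /dev/null &
And if the 2-3 second ffmpeg|ffplay delay is buffering latency, flags like -fflags nobuffer and -probesize 32 on the ffplay side sometimes reduce it, though I haven't measured this setup.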
[addendum]
Not sure if OP is still checking this post, but there is a way to combine two inputs for ffplay: use an input filtergraph with the movie and amovie filters. The following worked on Windows (despite unacceptably large latency):
ffplay -f lavfi -i \
movie=filename=video="Logitech HD Webcam C310":format_name=dshow:format_opts=rtbufsize=702000k[out0]; \
amovie=filename=audio="Microphone (HD Webcam C310)":format_name=dshow[out1]
Note that this is for illustration purposes only, since a dshow device can already output multiple streams (and the latency is still too bad for real-time use).
The same should be possible in Linux:
ffplay -f lavfi -i \
movie=filename=/dev/video1:format_name=video4linux2:format_opts='framerate=30:video_size=1920x1080:thread_queue_size=1024:input_format=mjpeg'[out0]; \
amovie=filename='Analog Input - USB Video':format_name=pulse[out1]
(Disclaimer: Untested and it may be missing escaping)
The latency may be better on Linux (and with a higher-specced PC than mine), so it might be worth a try.

ffmpeg with Popen (python) on Windows

I am trying to use ffmpeg with Popen. The ffmpeg command I am trying works in cmd but gives me an error with Popen.
I am using the standalone ffmpeg .exe:
ffmpeg -f gdigrab -offset_x 10 -offset_y 20 -show_region 1 -i desktop -video_size 1536x864 -b:v 2M -maxrate 1M -bufsize 1M -tune fastdecode -crf 15 -preset ultrafast -pix_fmt yuv420p -r 25 <path>/video.mov -qp 1 -y -an
This gives me Invalid argument, but if I remove the last parameters so that the output file is the last thing in the string, I get a different error:
Output file #0 does not contain any stream
I tried to use -f dshow -i video="UScreenCapture" instead of gdigrab, but both give me the same errors, with and without the parameters at the end.
Both commands work on command line.
On the command line, ffmpeg -list_devices true -f dshow -i dummy returns this:
[dshow @ 000001b24fa6a300] DirectShow video devices (some may be both video and audio devices)
[dshow @ 000001b24fa6a300] "Integrated Webcam"
[dshow @ 000001b24fa6a300] Alternative name "#device_pnp_\\?\usb#vid_1bcf&pid_2b8a&mi_00#6&2c03619a&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 000001b24fa6a300] "UScreenCapture"
[dshow @ 000001b24fa6a300] Alternative name "#device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\UScreenCapture"
[dshow @ 000001b24fa6a300] DirectShow audio devices
[dshow @ 000001b24fa6a300] "Microphone (Realtek Audio)"
[dshow @ 000001b24fa6a300] Alternative name "#device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{35EBFC89-7B09-4557-8032-85AA0B688FE9}"
But with Popen I can't check it:
-list_devices true -f dshow -i dummy: Invalid argument
For the python part of the code I am using this:
p = subprocess.Popen([getPathForFile("windows/ffmpeg").replace('\\','/'), " -f gdigrab -offset_x 10 -offset_y 20 -show_region 1 -i desktop -video_size 1536x864 -b:v 2M -maxrate 1M -bufsize 1M -tune fastdecode -crf 15 -preset ultrafast -pix_fmt yuv420p -r 25 -qp 1 -y -an "+ path.replace('\\\\','/').replace('\\','/')+"video.mov"], shell=True)
getPathForFile is a custom function that returns the path. It is correct, mainly because the errors I am getting come from ffmpeg itself.
I am on Windows 10, FFmpeg 4.0, Python 3.5.
Any idea why I am getting these errors with Popen but not on the command line, and how to fix them (mainly the second error)?
Put each argument in its own string and turn off the shell.
Like this:
import subprocess
import os

# Each argument goes in its own list element; with the default shell=False,
# the list is passed to ffmpeg.exe directly and no shell quoting gets in the way.
cmd = ["-f", "gdigrab", "-offset_x", "10", "-offset_y", "20",
       "-show_region", "1", "-video_size", "1536x864", "-i", "desktop",
       "-b:v", "2M", "-maxrate", "1M", "-bufsize", "1M", "-tune", "fastdecode",
       "-preset", "ultrafast", "-pix_fmt", "yuv420p",
       "-r", "25", "-qp", "1", "-y", "-an",
       os.path.join(path, "video.mov")]  # output file must come last
p = subprocess.Popen([getPathForFile("windows/ffmpeg")] + cmd)
p.communicate()
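For the device listing you couldn't run through Popen, the same no-shell list form works; note that ffmpeg prints the device list to stderr, so capture that to read it from Python. A sketch, reusing your getPathForFile helper:
p = subprocess.Popen([getPathForFile("windows/ffmpeg"),
                      "-list_devices", "true", "-f", "dshow", "-i", "dummy"],
                     stderr=subprocess.PIPE)
print(p.communicate()[1].decode("utf-8", "replace"))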

Play an MP3 file as it's being written

I'm saving an FM station to an mp3 file using rtl_fm and sox.
rtl_fm to capture the signal and sox to transcode it to mp3.
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 | sox -traw -r8k -es -b16 -c1 -V1 - -tmp3 - | sox -tmp3 - some_file.mp3
Then I'm trying to play that file in a second terminal, as the mp3 is being written using:
play -t mp3 some_file.mp3
The problem is that it only plays up to the point the mp3 had reached when the play command was invoked.
How do I get it to play the appended mp3 over time, while it's being written?
EDIT:
Running on Raspberry Pi 3 (Raspbian Jessie), NooElec R820T SDR
There are a couple of things here. I don't think sox supports "tailing" a file, but I know mplayer does. However, in order to have better control over the pipeline, using gstreamer might be the way to go, as it has a parallel event stream built into its effects pipeline.
If you want to stick with sox, I would first get rid of the redundant second invocation of sox, e.g.:
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 |
sox -ts16 -r8k -c1 -V1 - some_file.mp3
And in order to play the stream while transcoding it, you could multiplex it with tee, e.g.:
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 |
tee >(sox -ts16 -r8k -c1 -V1 - some_file.mp3) |
play -ts16 -r8k -c1 -
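(>(...) is bash process substitution. If your shell lacks it, a named FIFO does the same job; untested sketch:)
# Same idea with a named pipe instead of bash's >(...)
mkfifo pcm.fifo
sox -ts16 -r8k -c1 -V1 pcm.fifo some_file.mp3 &
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 |
tee pcm.fifo |
play -ts16 -r8k -c1 -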
Or if you want them to be separate processes:
# Save stream to a file
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 > some_file.s16
# Encode stream
sox -ts16 -r8k -c1 -V1 some_file.s16 some_file.mp3
# Start playing the file at 10 seconds in (8000 samples/s * 2 bytes/sample)
tail -c+$((8000 * 2 * 10)) -f some_file.s16 |
play -ts16 -r8k -c1 -

How to record audio with ffmpeg on Linux?

I'd like to record audio from my microphone. My OS is Ubuntu. I've tried the following and got errors:
$ ffmpeg -f alsa -ac 2 -i hw:1,0 -itsoffset 00:00:00.5 -f video4linux2 -s 320x240 -r 25 /dev/video0 out.mpg
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav
developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
ALSA lib conf.c:3314:(snd_config_hooks_call) Cannot open shared library
libasound_module_conf_pulse.so
ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM hw:1,0
[alsa @ 0xbda7a0] cannot open audio device hw:1,0 (No such file or directory)
hw:1,0: Input/output error
Then I tried
$ ffmpeg -f oss -i /dev/dsp audio.mp3
ffmpeg version 0.8.8-4:0.8.8-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav
developers
built on Oct 22 2013 12:31:55 with gcc 4.6.3
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release.
Please use avconv instead.
[oss @ 0x1ba57a0] /dev/dsp: No such file or directory
/dev/dsp: Input/output error
I haven't been able to get ffmpeg to find my microphone. How can I tell ffmpeg to record from my microphone?
It seems the 'Deprecated' message can be ignored, according to this topic.
I realise this is a bit old. Just in case anyone else is looking:
ffmpeg -f alsa -ac 2 -i default -itsoffset 00:00:00.5 -f video4linux2 -s 320x240 -r 25 -i /dev/video0 out.mpg
This way it will record from the default device. You were also missing a -i before the video capture device, /dev/video0.
If you want to get more specific, take a look in /proc/asound.
Check the cards, devices, and pcm files and the card subdirectories. You should be able to glean enough information there to make an educated guess, e.g. hw:1,0 or hw:2,0.
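A quick cross-check: arecord -l (from alsa-utils) lists capture hardware as "card N ... device M", which maps directly to hw:N,M:
arecord -l
cat /proc/asound/cards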
The documentation may provide further clues:
http://www.alsa-project.org/main/index.php/DeviceNames
The same goes for the webcam: it may not be /dev/video0. Perhaps you have an external webcam plugged in and it's at /dev/video1. Have a look in the /dev directory and see what's available.
Solved!
ffmpeg -f pulse -ac 2 -i default -f x11grab -r 30 -s 1920x1080 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -threads 0 -y /media/t/TBVolume/desktop/output.mkv
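If you only need the microphone (no screen grab), the pulse input alone should do; "default" records from the PulseAudio default source:
ffmpeg -f pulse -i default -c:a libmp3lame -b:a 192k out.mp3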
First, list your AV devices using:
ffmpeg -list_devices true -f dshow -i dummy
Assuming your audio device is "Microphone Array", you can use:
ffmpeg -f dshow -i audio="Microphone Array" -c:a libmp3lame -b:a 128k OUTPUT.mp3
Here, 128k is the audio bitrate (not the sampling rate). You can check all options for CBR bitrates here.
