So I'm currently trying to stream my microphone input from my Raspberry Pi (Raspbian)
to some sort of network stream in order to receive it later on my phone.
In order to do this I use arecord -D plughw:1,0 -f dat -r 44100 to pipe the soundstream from my USB microphone to stdout, which works fine as far as I can see, but I needed it to be a bit louder so I can understand people standing far away from it.
So I piped it to the sox play command like this:
arecord -D plughw:1,0 -f dat -r 44100 | play -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - test.wav
(test.wav is just some random wav file; it doesn't work without it, and there is a whitespace between the - behind 44100 and test.wav because I think - is a separate parameter:
SPECIAL FILENAMES (infile, outfile):
- Pipe/redirect input/output (stdin/stdout); may need -t
-d, --default-device Use the default audio device (where available))
I figured out that by using the -v parameter I can increase the volume.
This plays the recorded stream through the speakers I connected to the Raspberry Pi 3.
Final goal: pipe the volume-increased soundstream to stdout (or some FIFO pipe file) so I can read it from stdin inside another script to send it to my phone.
However, I'm very confused by the manpage of the play command: http://sox.sourceforge.net/sox.html
I need to select the output device to pipe to stdout or something.
If you know a better way to just increase the volume of the (I think) Recording WAVE 'stdin' : Signed 16 bit Little Endian, Rate 44100 Hz, Stereo soundstream, let me know.
As far as I'm aware you can't pipe the output from play; you'll have to use the regular sox command for that.
For example:
# example sound file
sox -n -r 48k -b 16 test16.wav synth 2 sine 200 gain -9 fade t 0 0 0.1
# redundant piping
sox test16.wav -t wav - | sox -t wav - gain 8 -t wav - | play -
In the case of the command in your question, it should be sufficient to change play to sox and add -t wav to let sox know in what format you want to pipe the sound.
arecord -D plughw:1,0 -f dat -r 44100 | \
sox -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - -t wav -
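To reach the final goal from the question, the same sox pipeline can write into a FIFO or a network tool instead of the speakers; a minimal sketch, where /tmp/boosted.fifo, the receiver address 192.168.0.10, and port 3333 are hypothetical placeholders:
# variant 1: write the boosted WAV stream into a FIFO for another script to read
mkfifo /tmp/boosted.fifo
arecord -D plughw:1,0 -f dat -r 44100 | \
sox -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - -t wav - > /tmp/boosted.fifo
# variant 2: send it straight over TCP (hypothetical receiver listening on port 3333)
arecord -D plughw:1,0 -f dat -r 44100 | \
sox -t raw -b 16 -e signed -c 2 -v 7 -r 44100 - -t wav - | nc 192.168.0.10 3333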
Related
I am trying to use ffmpeg to combine 1 audio file (ADPCM) and 1 video file (h264) into a single mp4. Converting the video file works fine, but ffmpeg chokes on guessing the audio input. I can't figure out how to tell ffmpeg which params to use to decode the raw audio file.
Currently I first run sox to convert raw audio to wav:
sox -t ima -r 8000 audio.raw audio.wav
... then feed audio.wav from sox as ffmpeg input
ffmpeg -i video.raw -i audio.wav movie.mp4
I am trying to avoid the sox step and use audio.raw in ffmpeg directly.
Thank you
Since you have headerless audio, you should tell ffmpeg the sample format and (optionally) sample rate and audio channels, e.g.:
ffmpeg -i video.raw -f s16le -ar 22050 -ac 1 -i audio.raw movie.mp4
To check supported PCM formats you may use this command:
ffmpeg -formats 2>&1 | grep -i pcm
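If none of the raw PCM demuxers cover your IMA ADPCM variant, you can at least drop the temporary file by piping sox's WAV output straight into ffmpeg; a sketch built from the commands above:
# convert the raw audio to WAV on stdout and mux it without a temp file
sox -t ima -r 8000 audio.raw -t wav - | ffmpeg -i video.raw -i - movie.mp4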
I have some audio recorded from an i2s mic at 16000 Hz with arecord. It sounds like it is down an octave, so I want to change the file format to 32000 Hz. When I try to do this with sox, it edits the audio, not just the format, so it still sounds wrong.
This is the sox command I am using: sox in.wav -r 32000 out.wav. What command should I use instead?
Looks like order matters in the command: an option placed before a file applies to that file, so putting -r 32000 before in.wav tells sox to interpret the existing samples at the new rate instead of resampling to it. The correct command is:
sox -r 32000 in.wav out.wav
If you want to change the audio rate, you can do it this way with ffmpeg (note that -ar resamples the audio to 32000 Hz, which preserves the pitch, so it won't produce the octave shift the sox command above does):
ffmpeg -i input.wav -ar 32000 output.wav
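If the goal is ffmpeg's equivalent of the sox relabeling (same samples, new rate, pitch shifted up an octave), the asetrate filter is the closer match; a minimal sketch, assuming a 16000 Hz input:
# relabel the sample rate without resampling; playback speed and pitch double
ffmpeg -i input.wav -af asetrate=32000 output.wav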
I need to concat multiple mp3 files together, then adjust their volume, then play via aplay. I currently do this using the following 3 commands:
sox file1.mp3 file2.mp3 file3.mp3 out1.wav
sox -v 0.5 out1.wav out2.wav
aplay -D plughw:1,0 out2.wav
This works correctly; the only minor issue is that it creates temporary files, and I know it can be done by piping all these commands together somehow. Sort of like:
sox file1.mp3 file2.mp3 file3.mp3 | sox -v 0.5 | aplay -D plughw:1,0
But I can't seem to get the piping to work (I am not really a Linux user). Any help would be much appreciated :)
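For reference, sox can do the concatenation and the volume change in a single invocation (using the vol effect on the combined result) and write WAV to stdout, which aplay reads from stdin; a minimal sketch, assuming your sox build has MP3 support:
# concat, halve the volume, and play, all without temporary files
sox file1.mp3 file2.mp3 file3.mp3 -t wav - vol 0.5 | aplay -D plughw:1,0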
I want to produce a live audio/video stream from a local file.
I tried the following:
ffmpeg -re -thread_queue_size 4 -i source_video_file.ts -strict -2
-vcodec copy -an -f rtp rtp://localhost:10000 -acodec copy -vn -sdp_file saved_sdp_file -f rtp rtp://localhost:20000
and then:
ffplay saved_sdp_file
It seems to work fine, but it looks like Video on Demand, because I can replay this file with ffplay whenever I want.
But I need ffplay to show video/audio only while the ffmpeg streaming instance (the first command above) is running.
How do I achieve this?
Thanks!
This code works for live video streaming:
proc liveStreaming {} {
    # ffplay command to display the live stream in the background
    # (assumes logFile has been set elsewhere in the script)
    exec ffplay -f dshow -i video="Integrated Webcam" >& $logFile &
}
liveStreaming
Making use of ffmpeg with the following code also works:
proc liveStreaming {} {
    # ffmpeg command to capture the webcam and display it via SDL
    exec ffmpeg -f dshow -i video="Integrated Webcam" -f sdl2 -
}
liveStreaming
You can also make use of "sdl" if sdl2 doesn't work.
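Back on the original RTP setup: if ffplay refuses to open the SDP file, newer FFmpeg builds require the involved protocols to be whitelisted explicitly; a sketch under that assumption:
# allow ffplay to read the local SDP file and receive the RTP/UDP streams
ffplay -protocol_whitelist file,udp,rtp saved_sdp_file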
Trying to record my desktop and also audio with RHEL6.
I'm using the command below, but the quality of the video output is not good.
It is very blurry and I can barely make out text on screen.
The audio is good so no issues there.
Does anyone know how to make the video quality any better?
ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -r 25 -i :0.0 -sameq -f mpeg -ar 48000 -s wvga -y sample.avi
I believe the -sameq option means 'same quantizer', not 'same quality', and it is deprecated.
Try -q 1 instead, q being quality from 1-32 (1 being the highest).
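Applied to the command in the question, a sketch (note that the output-side -s wvga downscales the full-desktop capture, which blurs text on its own, so it is dropped here):
# desktop capture with -sameq replaced by -q 1 and no output downscale
ffmpeg -f alsa -ac 2 -i hw:0,0 \
-f x11grab -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -r 25 -i :0.0 \
-q 1 -f mpeg -ar 48000 -y sample.avi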