I'm shooting photos with
sudo fswebcam -d /dev/video0 -r 1920x1080 --no-banner /media/networkshare/public/"Temp Photo Holder SolidScape Right"/timelapse_$DATE.jpg
on a Raspberry Pi 3 B+ running a recent version of Raspbian.
The script controls a Logitech C920, which has adjustable controls such as focus, brightness, and contrast.
I think the reason the 'manual' settings are not being saved is that I call the v4l2-ctl commands, add a delay, and then use fswebcam to shoot, like so:
#!/bin/bash
DATE=$(date +"%y%m%d%H%M%S")
#sudo v4l2-ctl -d /dev/video0 -c focus_auto=false
#sudo v4l2-ctl -d /dev/video0 -c focus_absolute=35
sudo v4l2-ctl -d /dev/video0 -c brightness=128
sudo v4l2-ctl -d /dev/video0 -c contrast=128
sudo v4l2-ctl -d /dev/video0 -c saturation=128
sudo v4l2-ctl -d /dev/video0 -c gain=15
sudo v4l2-ctl -d /dev/video0 -c sharpness=128
sleep 2
sudo v4l2-ctl -c exposure_auto_priority=1
#fswebcam -d /dev/video0 -r 1920x1080 --no-banner /media/networkshare/public/RasPi/"Temp Photo Holder SS"/timelapse_$DATE.jpg
sudo fswebcam -d /dev/video0 -r 1920x1080 --no-banner /media/networkshare/public/"Temp Photo Holder SolidScape Right"/timelapse_$DATE.jpg
sleep 9
sudo fswebcam -d /dev/video0 -r 1920x1080 --no-banner /media/networkshare/public/"Temp Photo Holder SolidScape Right"/timelapse_$DATE.jpg
sleep 9
How can I take a photo with the manual settings applied? Maybe I need to pass the controls to fswebcam inline instead of calling v4l2-ctl first?
It turns out everything was working and no changes were needed: I simply wasn't changing the settings dramatically enough to notice a difference in the output.
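For what it's worth, fswebcam can also set the same controls inline with its --set option, which would replace the separate v4l2-ctl calls entirely. A minimal sketch, reusing the values from the script above:
sudo fswebcam -d /dev/video0 -r 1920x1080 \
    --set brightness=128 \
    --set contrast=128 \
    --set saturation=128 \
    --set gain=15 \
    --set sharpness=128 \
    --no-banner /media/networkshare/public/"Temp Photo Holder SolidScape Right"/timelapse_$DATE.jpg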
I am trying to set up my Linux desktop to view and listen to the device connected to my capture card. I wrote this two-line script to do that; however, the sound is out of tune and a bit distorted. How could I clean it up?
arecord --buffer-time=1 -f cd - | aplay --buffer-time=1 -c 5 -r 48000 -f S16_LE - 2> /dev/null &
ffplay -f video4linux2 -framerate 30 -video_size 1920x1080 -input_format mjpeg /dev/video1 2> /dev/null &
I also tried to do that with ffmpeg piped to ffplay, and the sound is crystal clear; however, there is a 2-3 second delay on the video and sound. Is there a way to fix this?
ffmpeg -framerate 30 -video_size 1920x1080 -thread_queue_size 1024 -input_format mjpeg -i /dev/video1 -f pulse -i 'Analog Input - USB Video' -r 30 -threads 4 -vcodec libx264 -crf 0 -preset ultrafast -vsync 1 -async 1 -f matroska - |ffplay -
Could you try just using ffplay for your second approach?
ffplay -framerate 30 -video_size 1920x1080 \
-thread_queue_size 1024 -input_format mjpeg -i /dev/video1 \
-f pulse -i 'Analog Input - USB Video'
I could be off-base, as I'm only familiar with ffmpeg and don't personally use ffplay, but they share a lot (e.g., backend libraries and command-line parsing), so I suspect this would work.
Also, what do you mean by "there is a 2-3 second delay on the video and sound"? Are the video and sound 2-3 seconds behind what you are physically seeing and hearing? Or are they out of sync with each other by that much?
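If it is buffering latency, ffplay's low-latency options may also be worth trying on your first approach. A hedged, untested sketch (the options exist, but how much they help depends on the source):
ffplay -fflags nobuffer -flags low_delay -framedrop \
    -f video4linux2 -framerate 30 -video_size 1920x1080 \
    -input_format mjpeg /dev/video1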
[addendum]
Not sure if OP is still checking this post, but there is a way to combine two inputs for ffplay: use an input filtergraph with the movie and amovie filters. The following worked on Windows (despite unacceptably large latency):
ffplay -f lavfi -i \
movie=filename=video="Logitech HD Webcam C310":format_name=dshow:format_opts=rtbufsize=702000k[out0]; \
amovie=filename=audio="Microphone (HD Webcam C310)":format_name=dshow[out1]
Note that this is for illustration purposes only, since a dshow device can output multiple streams (though the latency is still too high for real-time use).
The same should be possible in Linux:
ffplay -f lavfi -i \
movie=filename=/dev/video1:format_name=video4linux2:format_opts='framerate=30:video_size=1920x1080:thread_queue_size=1024:input_format=mjpeg'[out0]; \
amovie=filename='Analog Input - USB Video':format_name=pulse[out1]
(Disclaimer: untested, and it may be missing some escaping.)
The latency may be better on Linux (and with a higher-specced PC than mine), so it might be worth a try.
Is there any way to stream music in the terminal with youtube-dl and ffplay?
I know that ffplay can play audio piped from the shell:
$ audio_stream | ffplay -i -
You can try this:
youtube-dl -f bestaudio ytsearch:"SONG NAME" -o - 2>/dev/null | ffplay -nodisp -autoexit -i - &>/dev/null
Or:
youtube-dl -f bestaudio VIDEO_URL -o - 2>/dev/null | ffplay -nodisp -autoexit -i - &>/dev/null
...and if you modify the command a little, you can play YouTube videos from the terminal without ads.
youtube-dl -f mp4 YOUTUBE_VIDEO_URL -o - 2>/dev/null | ffplay -autoexit -i - &>/dev/null
Because YouTube is currently throttling youtube-dl, I'm now using yt-dlp instead. Same codebase, but no throttling :)
yt-dlp -f mp4 YOUTUBE_VIDEO_URL -o - 2>/dev/null | ffplay -autoexit -i - &>/dev/null
I'm saving an FM station to an MP3 file using rtl_fm and sox: rtl_fm captures the signal and sox transcodes it to MP3.
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 | sox -traw -r8k -es -b16 -c1 -V1 - -tmp3 - | sox -tmp3 - some_file.mp3
Then, while the MP3 is still being written, I'm trying to play it in a second terminal using:
play -t mp3 some_file.mp3
The problem is that it only plays up to the point the MP3 had reached when the play command was invoked.
How do I get it to play the appended mp3 over time, while it's being written?
EDIT:
Running on a Raspberry Pi 3 (Raspbian Jessie) with a NooElec R820T SDR.
There are a couple of things here. I don't think sox supports "tailing" a file, but I know mplayer does. However, in order to have better control over the pipeline, using gstreamer might be the way to go, as it has a parallel event stream built into its effects pipeline.
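For the mplayer route, a hedged, untested sketch that tails the growing MP3 into mplayer's stdin (the cache size is a guess):
tail -c +1 -f some_file.mp3 | mplayer -cache 1024 -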
If you want to stick with sox, I would first get rid of the redundant second invocation of sox, e.g.:
# -t s16 is shorthand for signed 16-bit raw (replaces -traw -es -b16)
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 |
sox -ts16 -r8k -c1 -V1 - some_file.mp3
And in order to play the stream while transcoding it, you could multiplex it with tee, e.g.:
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 |
tee >(sox -ts16 -r8k -c1 -V1 - some_file.mp3) |
play -ts16 -r8k -c1 -
Or if you want them to be separate processes:
# Save stream to a file
rtl_fm -M wbfm -f 88.1M -d 0 -s 22050k -l 310 > some_file.s16
# Encode stream
sox -ts16 -r8k -c1 -V1 some_file.s16 some_file.mp3
# Start playing the file 10 seconds in (8000 samples/s * 2 bytes/sample)
tail -c+$((8000 * 2 * 10)) -f some_file.s16 |
play -ts16 -r8k -c1 -
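For the gstreamer route mentioned above, a hedged, untested sketch that tails the raw capture into a pipeline (element and property names assume the rawaudioparse element from gst-plugins-base 1.x):
tail -c +1 -f some_file.s16 |
gst-launch-1.0 fdsrc ! \
    rawaudioparse format=pcm pcm-format=s16le sample-rate=8000 num-channels=1 ! \
    audioconvert ! audioresample ! autoaudiosink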
Is it possible to overlay the FPS with avconv's drawtext filter? I can't find information about it.
drawtext=fontfile=arial.ttf:text=
The full command I use, without drawtext:
avconv -f rawvideo -pix_fmt gray -r 8 -s 640x480 -i - -an -t 00:00:30 -f rawvideo -r 8 -y \"/dev/xillybus_write_32\"
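ffmpeg's drawtext supports text expansion, which can at least overlay the frame number and timestamp; assuming your avconv build supports the same expansion, a hedged, untested sketch:
avconv -f rawvideo -pix_fmt gray -r 8 -s 640x480 -i - \
    -vf "drawtext=fontfile=arial.ttf:text='frame %{n} pts %{pts}':x=10:y=10" \
    -pix_fmt gray -an -t 00:00:30 -f rawvideo -r 8 -y "/dev/xillybus_write_32"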
More information about what I want to do is here: http://www.studiodust.com/riffmp3.html
I want my control panel (made with Perl and Webmin) to be able to do this automatically. Right now I have to rely on system calls and a Linux binary. Is there a library for Perl or some other language that does this?
What's the best way of doing this?
I know nothing about RIFF files or their structure, uses, etc. But did you try searching CPAN? The first result looks pretty promising.
The website I referenced had the answer I needed. I didn't know they made a Linux variant.
I have the following script for the exact thing you asked about.
#!/bin/bash
echo "$1"
# Decode to WAV, normalize the volume, re-encode to MP3 with lame,
# then wrap the MP3 data back into a RIFF/WAV container over the original file
ffmpeg -y -i "$1" -f wav out.wav > /dev/null 2>&1 && \
normalize-audio -q out.wav && \
lame --silent -a -m m --cbr -b 64 -q 0 out.wav out.mp3 && \
ffmpeg -y -i out.mp3 -f wav -acodec copy "$1" > /dev/null 2>&1 && \
echo "done."
rm out.wav out.mp3
Just edit the lame parameters, or use only the final ffmpeg call, and you're set.
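Usage is the script with the target file as its single argument (the script name here is hypothetical):
./riffmp3.sh some_song.mp3
Note that it rewrites the file in place, so keep a backup of the original.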