Raspberry Pi picture capture every 5 seconds - Linux

I have multiple webcams hooked up to my Raspberry Pi, and they are compatible and work. I want the Pi to take a picture every 5 seconds (with every camera), save them to /var/www/picture.jpeg (picture2.jpeg and picture3.jpeg), and repeat, overwriting the old images. I'm not quite sure how to achieve this and need help!
Thanks!

You may use guvcview or motion. I forget which arguments you need to run them with, but just launch either with --help and you will get all the information. I've never actually tried with more than one camera, but I think it is possible; you may have to search. As for the delay, there is no problem with those two programs.
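If you would rather script it yourself, here is a minimal Python sketch using OpenCV. The device indices, output directory, and filenames are assumptions taken from the question; run it on the Pi itself.

```python
import time


def output_name(index):
    """Map camera index 0, 1, 2 to picture.jpeg, picture2.jpeg, picture3.jpeg."""
    return "picture.jpeg" if index == 0 else f"picture{index + 1}.jpeg"


def capture_loop(device_ids, out_dir="/var/www", interval=5):
    # cv2 is only needed when actually capturing, so import it here;
    # device ids and the output path are assumptions from the question.
    import cv2
    cams = [cv2.VideoCapture(d) for d in device_ids]
    while True:
        for i, cam in enumerate(cams):
            ok, frame = cam.read()
            if ok:
                # Overwrites the previous image on every pass.
                cv2.imwrite(f"{out_dir}/{output_name(i)}", frame)
        time.sleep(interval)


# capture_loop([0, 1, 2])  # run on the Pi with the cameras attached
```

Make sure the user running the script can write to /var/www; otherwise point `out_dir` somewhere writable and serve that instead.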

Related

Reading USB-midi input live in python

I am looking for a way to read the USB-MIDI input live and have triggers that run when a certain note is played. For example, it should run function x when an "E" is being played. This is Python 3 based, either on a Windows 10 machine or a Raspberry Pi.
All the information I found has been years to decades old, referencing pygame, py-midi, or pyportmidi. Is there any current library that supports this? Pygame seems to rely on polling, causing a short delay, which is a problem for this scenario.
In MIDI-OX, the Monitor displays the notes being played in real-time, though I can't do anything useful with it from there, as I need the python triggers, or events.
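One currently maintained option is mido (with the python-rtmidi backend), whose input ports accept a callback, so messages arrive event-driven rather than by polling. A sketch, with the port selection left to mido's default (an assumption; your port name will differ):

```python
import time

# Note names within an octave; index = midi_note % 12.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]


def note_name(midi_note):
    """Return the pitch class name for a MIDI note number (64 -> 'E')."""
    return NOTE_NAMES[midi_note % 12]


def on_message(msg):
    # Called by the backend's thread for every incoming MIDI message,
    # so there is no polling loop (and no polling delay) on our side.
    if msg.type == "note_on" and msg.velocity > 0 and note_name(msg.note) == "E":
        print("E played -> run function x here")


def listen():
    import mido  # requires the python-rtmidi backend to be installed
    with mido.open_input(callback=on_message) as port:
        print("Listening on", port.name)
        while True:
            time.sleep(1)  # keep the main thread alive


# listen()  # run with a MIDI device attached
```

The velocity check skips note-on messages with velocity 0, which many devices send instead of note-off.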

Intercepting Sound From Other Programs

I want to do a couple of things:
- I want to hear sound from all other programs through Max, and Max only.
- I want to edit that sound in real time and hear only the edited sound.
- I want to slow down the sound, while stacking the non-slowed incoming input onto a buffer, which I can then speed through to catch up.
Is this possible in Max? I have had a lot of difficulty getting even step 1 working. Even if I use my speakers as an input device, I am unable to monitor it, let alone edit it. I am using Max for Live, for what it's worth.
Step 1 and 2
On Mac, you can use Loopback.
You can set your system output to the Loopback driver, then set the Loopback driver as the input in Max, and the speakers as the output.
On Windows you would do the same, but with a different internal audio routing system such as Jack.
Step 3
You can do that with the buffer~ object. Of course the buffer will have a finite size, and storing hours of audio might be problematic, but minutes shouldn't be a problem on a decent computer. The buffer~ help file will show you the first steps needed to store and read audio from it.

Determine time since last input in bash

I'm building an arcade cabinet and would like to have random games launch after 15 minutes of inactivity. I'm not sure how to determine the time since the last user input. This is pretty standard screensaver behaviour under an X server; however, I can't seem to find a way to do it without X running.
Specifically, I'm running RetroPie on a Raspberry Pi 2. X is installed but not running. The front end is EmulationStation.
Any help would be appreciated, thanks.
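Without X, one common approach is to watch the kernel's input event devices directly. A Python sketch along those lines (the /dev/input glob and the read permissions are assumptions; the process typically needs to be in the `input` group or run as root):

```python
import glob
import os
import select
import time

IDLE_LIMIT = 15 * 60  # seconds of inactivity before launching a game


def should_launch(last_event, now, limit=IDLE_LIMIT):
    """True once 'limit' seconds have passed since the last input event."""
    return (now - last_event) >= limit


def watch_inputs():
    # Open every input event device non-blocking; any keypress,
    # joystick move, etc. shows up as readable data on one of them.
    fds = [os.open(p, os.O_RDONLY | os.O_NONBLOCK)
           for p in glob.glob("/dev/input/event*")]
    last_event = time.time()
    while True:
        ready, _, _ = select.select(fds, [], [], 1.0)
        for fd in ready:
            os.read(fd, 4096)        # drain; we only care THAT input happened
            last_event = time.time()
        if should_launch(last_event, time.time()):
            print("15 minutes idle -> launch a random game here")
            last_event = time.time()


# watch_inputs()  # run on the Pi, no X required
```

Replace the print with a call that asks EmulationStation (or the emulator directly) to start a random ROM.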

OpenCV VideoCapture / V4L2 latency when grabbing a new webcam image

For a computer vision project that I am working on I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem that I am facing is that the image that I take at a certain moment does not show the latest thing that the camera sees. That is, if I take an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to say.
I did this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, it would grab an image and save it with the filename "counter_value.png" (e.g. 10000.png). That way I was able to compare the current value of the counter with the current value seen by the camera. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame grabbing routine into a separate thread and updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows and indeed the delay was only about 1 frame (which might as well be that the camera did not see the counter because the screen did not update fast enough).
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know an alternative way to fix this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and when the frame is actually taken. This is because of frame buffering done at the hardware/OS level, and you cannot avoid that.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the V4L OpenCV implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix has been pulled:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the exact time a frame was taken (and therefore compensate).
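A common practical workaround, instead of timestamp compensation, is to discard the stale buffered frames before reading. A Python sketch (the camera index and the 4-5 frame buffer depth measured in the question are assumptions about your setup):

```python
import math


def frames_to_drain(latency_seconds, fps):
    """How many buffered frames to discard to skip a measured latency."""
    return max(0, math.ceil(latency_seconds * fps))


def grab_latest(cap, stale=4):
    # grab() only pulls a frame off the driver queue (cheap, no decode);
    # retrieve()/read() decodes. Discarding a few grabs skips the stale
    # buffered frames, at the cost of throwing those frames away.
    for _ in range(stale):
        cap.grab()
    ok, frame = cap.read()
    return frame if ok else None


# Usage sketch (assumes a camera on /dev/video0):
# import cv2
# cap = cv2.VideoCapture(0)
# frame = grab_latest(cap, stale=frames_to_drain(0.15, 30))
```

Newer OpenCV builds also expose a CAP_PROP_BUFFERSIZE property on some V4L2 backends, which, where supported, shrinks the queue at the source instead.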
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines, and have not seen a 4-5 frame delay. I operate them in low light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on the format used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put most of its effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for the formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.

Trouble with capturing images from webcam with Arch Linux and OpenCV (node.js)

I want to grab a single frame from my webcam with node.js and OpenCV. The first frame captured is as expected. But if I move in front of the camera and take a second one, I get an image that was obviously taken right after the first one and doesn't show my movement. I have to take 5 pictures to see the movement. Searching the net gave me a hint about a problem with the camera buffer that holds 4 images (OS dependent).
Here is an example of someone having the same issue:
http://opencvarchive.blogspot.de/2010/05/opencv-arm-linux-servo-frame-delay.html
At the moment I'm using a workaround: I capture 5 images in a loop and then save the last one to disk. That way the buffer is cleared and the genuinely current image is saved.
Does anyone know a better solution? Taking five images instead of one takes too much time for my application...
Thanks in advance! :)
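The usual fix is the threaded grabber mentioned in the V4L2 question above: read continuously in a background thread so the driver queue never fills, and hand consumers only the newest frame. The same idea in Python (node-opencv's API differs, but the pattern is identical; the cv2 capture setup is an assumption):

```python
import threading


class LatestFrame:
    """Thread-safe holder that always keeps only the newest frame."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def set(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame


def start_grabber(cap, holder):
    # Read continuously so the driver buffer never accumulates stale
    # frames; consumers just call holder.get() whenever they need one.
    def loop():
        while True:
            ok, frame = cap.read()
            if ok:
                holder.set(frame)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t


# Usage sketch (assumes an OpenCV capture object):
# import cv2
# holder = LatestFrame()
# start_grabber(cv2.VideoCapture(0), holder)
# frame = holder.get()  # most recent frame, no 5-shot loop needed
```

The trade-off is a core spinning on reads, but on a webcam frame rate that is cheap, and single-frame requests become instant.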
