How to disable Buffer in OpenCV Camera? - node.js

I have a situation where I use OpenCV to detect faces in front of the camera and run some ML on those faces. The issue is that once I finish all the processing and go to grab the next frame, I get the past, not the present: I read what's inside the buffer, not what is actually in front of the camera. I don't care which faces passed in front of the camera while processing; I care what is in front of the camera now.
I did try setting the buffer size to 1, and that helped quite a lot, but I still get at least 3 buffered reads. Setting the FPS to 1 also does not remove the problem entirely. Below is the flow that I have.
let cv = require('opencv4nodejs');

let camera = new cv.VideoCapture(camera_port);
camera.set(cv.CAP_PROP_BUFFERSIZE, 1);
camera.set(cv.CAP_PROP_FPS, 2);
camera.set(cv.CAP_PROP_POS_FRAMES, 1);

function loop()
{
    //
    // <>> Grab one frame from the camera buffer.
    //
    let rgb_mat = camera.read();

    // Convert to grayscale
    // Do face detection
    // Crop the image
    // Do some ML stuff
    // Do what needs to be done after the results are in.

    //
    // <>> Release data from memory
    //
    rgb_mat.release();

    //
    // <>> Restart the loop
    //
    loop();
}
My question is:
Is it possible to remove the buffer altogether? If so, how? If not, a why would be much appreciated.

Whether CAP_PROP_BUFFERSIZE is supported appears to be quite operating-system- and backend-specific. E.g., the 2.4 docs state it is "only supported by DC1394 [Firewire] v 2.x backend currently," and for the V4L backend, according to the code, support was added only on 9 Mar 2018.
The easiest non-brittle way to disable the buffer is to use a separate thread; for details, see my comments under Piotr Kurowski's answer. Here is Python code that uses a separate thread to implement a bufferless VideoCapture (I did not have an opencv4nodejs environment):
import cv2, queue, threading, time

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only the most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

cap = VideoCapture(0)
while True:
    frame = cap.read()
    time.sleep(.5)  # simulate long processing
    cv2.imshow("frame", frame)
    if chr(cv2.waitKey(1) & 255) == 'q':
        break
The frame reader thread is encapsulated inside the custom VideoCapture class, and communication with the main thread is via a queue.
This answer suggests using cap.grab() in a reader thread, but the docs do not guarantee that grab() clears the buffer, so this may work in some cases but not in others.
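If you prefer to stay single-threaded, a cruder variant of the grab() idea is to drain the driver's queue immediately before each read. This is only a sketch under the assumption that the backend buffers at most a handful of frames; the helper name read_latest is mine, not from any answer above:

```python
def read_latest(cap, buffer_size=4):
    """Discard up to buffer_size stale frames, then decode one fresh frame.

    cap is assumed to behave like cv2.VideoCapture: grab() dequeues a frame
    without decoding it (cheap), and retrieve() decodes the last grabbed one.
    Returns the usual (ret, frame) pair.
    """
    for _ in range(buffer_size):
        cap.grab()  # dequeue without paying the decoding cost of read()
    return cap.retrieve()
```

As noted above, OpenCV does not document how many frames a given backend buffers, so buffer_size is a guess; the threaded approach remains the robust one.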

I set the cap value to None after reading each frame, and my problem was solved this way:
import cv2
from PyQt5.QtCore import QThread

if __name__ == '__main__':
    while True:
        cap = cv2.VideoCapture(0)
        ret, frame = cap.read()
        cv2.imshow('A', frame)
        cv2.waitKey(0)
        print('sleep!')
        QThread.sleep(5)
        print('wake up!')
        cap = None

I have the same problem, but in C++. I didn't find a proper solution in OpenCV, but I found a workaround. The buffer accumulates a constant number of images, say n frames, so you can read n frames without analysing them and then read once more: that last frame will be the live image from the camera. Something like:

const int buffer_size = n;
cv::Mat frame;
for (int i = 0; i <= buffer_size; i++)
{
    // read frames into the Mat; only the last one matters
    camera.read(frame);
}
// Do something with the live image in frame

Related

how to take video from screen?

I want to take a video of the screen, but it shouldn't use a while loop for taking the pictures. I am using tkinter for my GUI.
I have tried the after method for taking a picture every time one needs to be taken, but it doesn't work appropriately. Is there any way I can do it without a while True loop?
def recording_loop(self, out):
    """take video by ImageGrab"""
    img = ImageGrab.grab()
    img_np = np.array(img)
    frame = cv2.cvtColor(img_np, cv2.COLOR_BGR2RGB)
    out.write(frame)
    self.canvas.after(41, self.recording_loop, out)

I expect recording_loop to invoke itself every 41 ms, so it takes 24 pictures per second (24 fps), but it doesn't work. Any help would be appreciated. (out is the output of cv2.VideoWriter(...).)

change frame rate in opencv 3.4.2

I want to reduce the number of frames acquired per second from a webcam. This is the code that I'm using:
#!/usr/bin/env python
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 10)
fps = int(cap.get(5))
print("fps:", fps)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    k = cv2.waitKey(1)
    if k == 27:
        break
But it doesn't take effect; I still get the default 30 fps instead of the 10 set by cap.set(cv2.CAP_PROP_FPS, 10). I want to reduce the frame rate because I have a hand detector that takes quite a lot of time to process each frame. I cannot let frames pile up in the buffer, since the detector would then see the hand in previous positions. I could run the detector on a timer or something else, but I thought changing the fps was an easier way. It didn't work, and I don't know why.
I'm using OpenCV 3.4.2 with Python 3.6.3 on Windows 8.1.
Setting a frame rate doesn't always work like you expect. It depends on two things:
1. What your camera is capable of outputting.
2. Whether the current capture backend you're using supports changing frame rates.
So, point (1). Your camera will have a list of formats it is capable of delivering to a capture device (e.g. your computer). This might be 1920x1080 @ 30 fps or 1920x1080 @ 60 fps, and each format also specifies a pixel format. The vast majority of consumer cameras do not let you change their frame rates with any more granularity than that. And most capture libraries will refuse to switch to a capture format that the camera isn't advertising.
Even machine vision cameras, which allow you much more control, typically only offer a selection of frame rates (e.g. 1, 2, 5, 10, 15, 25, 30, etc). If you want a non-supported frame rate at a hardware level, usually the only way to do it is to use hardware triggering.
And point (2). When you use cv2.VideoCapture you're really calling a platform-specific library like DirectShow or V4L2; we call this a backend. You can specify exactly which backend is in use with something like:
cv2.VideoCapture(0 + cv2.CAP_DSHOW)
There are lots of CAP_X constants defined, but only some will apply to your platform (e.g. CAP_V4L2 is Linux-only). On Windows, forcing the system to use DirectShow is a pretty good bet. However, as above, if your camera only reports that it can output 30 fps and 60 fps, requesting 10 fps will be meaningless. Worse, a lot of settings simply report True in OpenCV when they're not actually implemented. Reading parameters will give you sensible results most of the time, but if a parameter isn't implemented (exposure is a common one that isn't) you might get nonsense.
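Since set() can report success even when the backend ignores the request, the only reliable check is to read the property back and compare. A small sketch of that check (the helper name try_set is mine, not an OpenCV API):

```python
def try_set(cap, prop, value, tol=1e-3):
    """Ask a cv2.VideoCapture-like object to set a property, then verify.

    set() may return True even when the backend silently ignores the
    request, so we read the property back with get() and compare.
    Returns (took_effect, actual_value).
    """
    cap.set(prop, value)
    actual = cap.get(prop)
    return abs(actual - value) < tol, actual
```

E.g. ok, actual = try_set(cap, cv2.CAP_PROP_FPS, 10); if ok is False, fall back to throttling in software as described below.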
You're better off waiting for a period of time and then reading the last image.
Be careful with this strategy. Don't do this:
while capturing:
    res, image = cap.read()
    time.sleep(1)

You need to make sure you're continually purging the camera's frame buffer, otherwise you will start to see lag in your videos. Something like the following should work:
frame_rate = 10
prev = 0

while capturing:
    time_elapsed = time.time() - prev
    res, image = cap.read()
    if time_elapsed > 1. / frame_rate:
        prev = time.time()

        # Do something with your image here.
        process_image()
For an application like a hand detector, what works well is to have one thread capturing images and the detector running in another thread (which also controls the GUI). Your detector pulls the last image captured, runs, and displays the results (you may need to lock access to the image buffer while you're reading/writing it). That way your bottleneck is the detector, not the performance of the camera.
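That capture/detector split can be sketched with a lock-protected slot holding only the most recent frame; the class and function names below are my illustration, not code from this answer:

```python
import threading

class LatestFrame:
    """Holds only the newest frame; readers never block behind a backlog."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        with self._lock:
            self._frame = frame

    def get(self):
        with self._lock:
            return self._frame

def capture_loop(cap, holder, stop_event):
    """Capture thread: overwrite the holder as fast as frames arrive."""
    while not stop_event.is_set():
        ret, frame = cap.read()
        if ret:
            holder.put(frame)
```

The detector thread simply calls holder.get() whenever it is ready for work; any frames that arrived while it was busy have been overwritten rather than queued.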
I could not set the FPS for my camera, so I managed to limit the FPS based on time, so that only 1 frame per second makes it into the rest of my code. It is not exact, but I do not need exact, just a limiter instead of 30 fps. HTH
import time
import cv2

fpsLimit = 1  # throttle interval, in seconds
startTime = time.time()
cv = cv2.VideoCapture(0)

while True:
    ret, frame = cv.read()
    nowTime = time.time()
    if int(nowTime - startTime) > fpsLimit:
        # do other cv2 stuff....
        startTime = time.time()  # reset time
As Josh stated, whether changing the camera's fps through OpenCV works depends heavily on whether your camera supports the configuration you are trying to set.
I managed to change my camera's fps for OpenCV on Ubuntu 18.04 LTS by:
1. Installing v4l2 with sudo apt-get install v4l-utils
2. Running v4l2-ctl --list-formats-ext to display the supported video formats, including frame sizes and intervals.
Results from running v4l2-ctl --list-formats-ext
Then, in my Python script:
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))  # depends on the FourCCs your camera offers
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 5)
The property CV_CAP_PROP_FPS only seems to work on video files, as far as I can tell. If you use the following command:
fps = cap.get(cv2.CAP_PROP_FPS)
it returns zero. If you want to reduce frames per second, you can instead increase the parameter of waitKey(). For example:
k = cv2.waitKey(100)
This would work for your problem:

import cv2
import time

cap = cv2.VideoCapture(your_video)
initial_time = time.time()
to_time = time.time()

set_fps = 25  # Set your desired frame rate

# Variables used to calculate the true FPS
prev_frame_time = 0
new_frame_time = 0

while True:
    while_running = time.time()  # Keep updating the time with each frame
    new_time = while_running - initial_time  # If the time elapsed is 1/fps, read a frame
    if new_time >= 1 / set_fps:
        ret, frame = cap.read()
        if ret:
            # Calculating the true FPS
            new_frame_time = time.time()
            fps = 1 / (new_frame_time - prev_frame_time)
            prev_frame_time = new_frame_time
            print(int(fps))
            cv2.imshow('joined', frame)
            initial_time = while_running  # Update the initial time with the current time
        else:
            total_time_of_video = while_running - to_time  # Total running time of the video
            print(total_time_of_video)
            break
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

mouse click events on a saved video using opencv and python

I want to draw a rectangle in a saved video. While drawing the rectangle, the video must freeze. I managed to draw a rectangle on an image, but I don't know how to do the same on a saved video using OpenCV and Python.
I was in need of an ROI selection mechanism using OpenCV, and I finally figured out how to implement it.
The implementation can be found here (opencvdragrect). It uses OpenCV 3.1.0 and Python 2.7.
For a saved video, as long as you don't read and display another frame, the video is effectively paused.
In terms of how to add the selection to a paused video (frame), the code below might help.
import cv2
import selectinwindow

wName = "select region"
video = cv2.VideoCapture(videoPath)
frameCounter = 0

while video.isOpened():
    # Read frame
    ret, RGB = video.read()
    frameCounter += 1

    if frameCounter == 1:  # you can pause on any frame you like
        rectI = selectinwindow.dragRect
        selectinwindow.init(rectI, RGB, wName, RGB.shape[1], RGB.shape[0])
        cv2.namedWindow(wName)
        cv2.setMouseCallback(wName, selectinwindow.dragrect, rectI)
        while True:
            # display the image
            cv2.imshow(wName, rectI.image)
            key = cv2.waitKey(1) & 0xFF
            # when returnflag is set, break out of
            # this loop and resume the video
            if rectI.returnflag == True:
                break
        box = [rectI.outRect.x, rectI.outRect.y, rectI.outRect.w, rectI.outRect.h]

    # process the video
    # ...
    # ...
In the opencvdragrect library you double-click to stop the rectangle selection process and continue with the video.
Hope this helps.

Overwriting Buffer still leads to memory crash

I'm trying to convert an AVI video into a series of images, saving only the most recent eight seconds of video in the 'buffer'. I'm only saving two bits of information: the timestamp for when the image was pulled from the AVI, and the image itself.
My problem is that even though the program is written so that the buffer overwrites itself, it still crashes with a memory error about 2 minutes in, for an 8 s buffer (error: (-4) Failed to allocate ###### bytes in function, cv::OutOfMemoryError). The code for this thread is below the aside for context.
(Context: I'm trying to create an emulator for a frame grabber. It constantly/endlessly converts a saved AVI and restarts itself when it reaches the end of the video. The buffer is supposed to be analogous to whatever hardware (an HDD or SSD or whatever) the frame grabber would actually be saving to in practice. This emulator runs as its own thread, parallel to another thread that processes and sends images as they are requested. The crash occurs without invoking the second thread.)
import base64, os, cv2, re, time, math
from PIL import Image
from threading import Thread

global numbertosave, framearray, sessionlist

framearray = []
vidfile = os.path.normpath("videolocation")
savepath = os.path.normpath("whereIsaverequestedimages")

class FrameInfo:
    def __init__(self, frame, time):
        self.f = frame
        self.t = time

def VidEmulator():
    global numbertosave, framearray, hit, fps
    video = cv2.VideoCapture(vidfile)
    fps = video.get(5)

    ##### Change the integer in the expression below to the number of seconds you want to save
    numbertosave = int(fps*5)
    ######## numbertosave = framerate * (# of seconds to store in buffer)

    totalframes = video.get(7)
    atframe = 1
    count = 1
    framearray = [0] * numbertosave
    deltat = []
    converting = True

    while converting == True:
        try:
            ret, frame = video.read()
            timestamp = time.time()
            x = FrameInfo(frame, timestamp)
            framearray[count] = ((x.f, x.t))
            atframe = atframe + 1
            count = count + 1
            if count == numbertosave:
                count = 1
                VidEmulator()
        except KeyboardInterrupt:
            #### edited to remove mention of other thread, which isn't
            #### invoked when testing this one (it is on request; see
            #### comments) I honestly haven't tested this block since I
            #### usually ctrl+printbreak
            t1.join()
            print "threads killed"
            exit()
    print "hopefully you don't see this"

if __name__ == '__main__':
    t1 = Thread(target = VidEmulator)
    t1.start()
I'm guessing there's a very simple logic problem here that's preventing the buffer from actually overwriting itself, or some other variable is 'hanging', accumulating across consecutive runs and crashing the program. Otherwise, is there a better way to handle this problem that avoids this error?
Thanks!
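For what it's worth, the ring-buffer behaviour this question is aiming for can be expressed with collections.deque(maxlen=...), which discards the oldest entry automatically instead of relying on a wrapping index. A minimal sketch of that idea (my illustration, not code from the question):

```python
import time
from collections import deque

def make_buffer(fps, seconds):
    """Ring buffer sized for `seconds` of video at `fps` frames per second."""
    return deque(maxlen=int(fps * seconds))

# usage with any frame source; the tuples mirror the FrameInfo (frame, time) pairs
buf = make_buffer(fps=30, seconds=8)
for i in range(1000):             # stand-in for successive video.read() results
    buf.append((i, time.time()))  # oldest entries fall off once maxlen is reached
```

Because the deque bounds itself, there is no need for the recursive VidEmulator() restart, which is one plausible source of the runaway memory use: each un-returned call keeps its local VideoCapture alive.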

PyQtGraph: GUI unresponsive during only one process

I'm writing a GUI for a video camera that can basically run in two modes, which I call liveview and recordview, the only difference being that I'm recording in the latter and only viewing in the former.
In liveview mode the image gets updated properly. I've set up a button that triggers recordview, but during this acquisition the GUI becomes unresponsive and the image doesn't get updated. Let me show you the relevant parts of the code:
import numpy as np
from PyQt4 import QtGui, QtCore
import pyqtgraph as pg
from lantz.drivers.andor.ccd import CCD

app = QtGui.QApplication([])

def updateview():  # <-- this works OK
    global img, andor
    img.setImage(andor.most_recent_image16(andor.detector_shape),
                 autoLevels=False)

def liveview():
    """ Image live view when not recording
    """
    global andor, img, viewtimer
    andor.acquisition_mode = 'Run till abort'
    andor.start_acquisition()
    viewtimer.start(0)

def UpdateWhileRec():
    global stack, andor, img, n, ishape
    j = 0
    while j < n:
        if andor.n_images_acquired > j:
            # Data saving <-- this part (and the whole while-loop) works OK
            i, j = andor.new_images_index
            stack[i - 1:j] = andor.images16(i, j, ishape, 1, n)

            # Image updating <-- this doesn't work
            img.setImage(stack[j - 1], autoLevels=False)

    liveview()  # After recording, it goes back to liveview mode

def record(n):
    """ Record an n-frames acquisition
    """
    global andor, ishape, viewtimer, img, stack, rectimer
    andor.acquisition_mode = 'Kinetics'
    andor.set_n_kinetics(n)
    andor.start_acquisition()

    # Stop the QTimer that updates the image with incoming data from the
    # 'Run till abort' acquisition mode.
    viewtimer.stop()
    QtCore.QTimer.singleShot(1, UpdateWhileRec)

if __name__ == '__main__':
    with CCD() as andor:
        win = QtGui.QWidget()
        rec = QtGui.QPushButton('REC')
        imagewidget = pg.GraphicsLayoutWidget()
        p1 = imagewidget.addPlot()
        img = pg.ImageItem()
        p1.addItem(img)

        layout = QtGui.QGridLayout()
        win.setLayout(layout)
        layout.addWidget(rec, 2, 0)
        layout.addWidget(imagewidget, 1, 2, 3, 1)

        win.show()

        viewtimer = QtCore.QTimer()
        viewtimer.timeout.connect(updateview)

        # Record routine
        n = 100
        newimage = np.zeros(ishape)
        stack = np.zeros((n, ishape[0], ishape[1]))
        rec.pressed.connect(lambda: record(n))

        liveview()

        app.exec_()
        viewtimer.stop()
As you can see, UpdateWhileRec runs only once per acquisition, while updateview runs until viewtimer.stop() is called.
I'm new to PyQt and PyQtGraph so regardless of the particular way of solving my present issue, there's probably a better way to do everything else. If that's the case please tell me!
Thanks in advance!
Your problem stems from the fact that you need to return control to the Qt event loop for it to redraw the picture. Since you remain in the UpdateWhileRec callback while waiting for the next image to be acquired, Qt never gets a chance to draw the image. It only gets the chance once you exit the function UpdateWhileRec.
I would suggest the following change: instead of the while loop in UpdateWhileRec, use a QTimer that periodically calls the contents of your current while loop (I would probably suggest a single-shot timer). This ensures control is returned to Qt, so it can draw the image before checking for a new one.
So something like:
def UpdateWhileRec():
    # Note: j should be initialised to 0 in the record function now
    global stack, andor, img, n, j, ishape

    if andor.n_images_acquired > j:
        # Data saving <-- this part works OK
        i, j = andor.new_images_index
        stack[i - 1:j] = andor.images16(i, j, ishape, 1, n)

        # Image updating <-- this should now work
        img.setImage(stack[j - 1], autoLevels=False)

    if j < n:
        QtCore.QTimer.singleShot(0, UpdateWhileRec)
    else:
        liveview()  # After recording, it goes back to liveview mode
Note, you should probably put functions and variables in a class, and create an instance of the class (an object). That way you don't have to call global everywhere and things are more encapsulated.
Ultimately, you may want to look into whether your andor library supports registering a function to be called when a new image is available (a callback) which would save you doing this constant polling and/or acquiring the images in a thread and posting them back to the GUI thread to be drawn. But one step at a time!
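If the driver offers no callback, the standard alternative is an acquisition thread that pushes frames onto a queue while the GUI thread drains it from a timer. A library-agnostic sketch of that hand-off (the function names are mine; in the PyQt version the drain step would live in a QTimer slot):

```python
import queue
import threading

def acquire(next_frame, frame_queue, n):
    """Producer: runs in a worker thread and never touches the GUI."""
    for _ in range(n):
        frame_queue.put(next_frame())

def drain(frame_queue, handle):
    """Consumer: called periodically from the GUI thread (e.g. a QTimer slot).

    Processes everything that has arrived so far, then returns immediately
    so the event loop can repaint. Returns the number of frames handled.
    """
    processed = 0
    while True:
        try:
            frame = frame_queue.get_nowait()
        except queue.Empty:
            return processed
        handle(frame)
        processed += 1
```

Because drain() never blocks, the GUI stays responsive regardless of how fast or slow acquisition runs.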
