I'm working on a Raspberry Pi 3 Model B+.
I installed OpenCV 3.4.4 on my Raspberry Pi, and it installed fine. I'm just running some basic code to view my camera output (I have two cameras plugged in).
Here is the code.
import cv2
import time
def show_webcam(mirror=False):
    frame_rate = 30
    prev = 0
    cam = cv2.VideoCapture(0)
    cam1 = cv2.VideoCapture(1)
    ff = 0.5
    fxx = ff
    fyy = ff
    while True:
        ret_val, img = cam.read()
        img2 = cam1.read()[1]
        time_elapsed = time.time() - prev
        # print('data type of frame', type(img))
        if time_elapsed > 1/frame_rate:
            prev = time.time()
            cv2.rectangle(img, (100, 100), (500, 500), (255, 255, 0), 2)
            small_frame = cv2.resize(img, (0, 0), fx=fxx, fy=fyy)
            cv2.resize(img2, (0, 0), fx=fxx, fy=fyy)
            #print("helo")
            #if mirror:
            #    img = cv2.flip(img, 1)
            cv2.imshow('my webcam', img)
            cv2.imshow('my 2nd webcam', img2)
            #if cv2.waitKey(1) == 27:
            #    break  # esc to quit
        if cv2.waitKey(1) == 27:
            break
    cv2.destroyAllWindows()
    print(cam)

def main():
    show_webcam(mirror=True)

if __name__ == '__main__':
    main()
The video windows appear for a few frames, but after a few seconds I get this error:
select timeout
VIDIOC_DQBUF: Resource temporarily unavailable
Traceback (most recent call last):
File "camera.py", line 39, in <module>
main()
File "camera.py", line 36, in main
show_webcam(mirror=True)
File "camera.py", line 21, in show_webcam
small_frame = cv2.resize(img, (0, 0), fx=fxx, fy=fyy)
cv2.error: OpenCV(3.4.4) /home/pi/packaging/opencv-python/opencv/modules/imgproc/src/resize.cpp:3784: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
This same code works fine when I run it on my laptop. What can I do to correct this error and ensure the video is not interrupted?
I have tried cv2.waitKey(30); it doesn't work.
Why are you using a time_elapsed variable when you can just use the waitKey function and pass it the number of milliseconds you want to wait, i.e. 1000/framesPerSecond? As for your error: the frame you are trying to resize is empty, which sometimes happens. So before you do any image processing, check that the frame is not empty, and only then proceed with what you want to do.
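A rough sketch of both suggestions, reusing the variable names from the question (this is an illustration only, not tested on the Pi):

while True:
    ret_val, img = cam.read()
    ret_val2, img2 = cam1.read()
    # only work on frames that were actually grabbed
    if ret_val and img is not None:
        cv2.imshow('my webcam', cv2.resize(img, (0, 0), fx=fxx, fy=fyy))
    if ret_val2 and img2 is not None:
        cv2.imshow('my 2nd webcam', cv2.resize(img2, (0, 0), fx=fxx, fy=fyy))
    # wait roughly one frame interval instead of tracking time_elapsed
    if cv2.waitKey(int(1000 / frame_rate)) == 27:
        break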
The same code works on the laptop but not on the Pi, which means you are running into the smaller device's limited memory and/or CPU.
Try reducing the frame rate to adjust how many frames the smaller device has to work with.
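A minimal sketch of what that could look like; the specific values (15 fps, 320x240) are just assumptions to illustrate the idea, and not every driver honours these requests:

cam = cv2.VideoCapture(0)
# ask the driver for a lower frame rate and a smaller frame size
cam.set(cv2.CAP_PROP_FPS, 15)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)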
You should check that the ret_val of both cam.read() calls is True before continuing with any processing. That way, when a frame is not properly grabbed, it is dropped and the read is retried instead of throwing an error and exiting.
This does not technically resolve the error, but it does solve your problem, provided the resulting frame rate is sufficient for your application.
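For example, at the top of the loop (a sketch, using the question's names):

ret_val, img = cam.read()
ret_val2, img2 = cam1.read()
if not (ret_val and ret_val2):
    continue  # drop this iteration and retry the grab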
I'm currently working on an auto-aiming turret, and my cameras have a noticeable fisheye effect, which is totally fine. I'm using OpenCV's undistort() function to handle this, with data from a checkerboard camera calibration.
I will most likely be running the vision system on a Raspberry Pi 4. Currently, the undistort step takes 80-90% of my CPU (an i5-8600K overclocked to 5 GHz) when processing both of my cameras at 1280x720; ideally I'd keep this resolution, since it's the largest and will give the best accuracy. Also note I'm aiming for a 15 Hz update rate.
Any ideas on how to make this more lightweight? Here's the code I'm currently running as a test:
from cv2 import cv2
import numpy as np
import yaml
import time
cam1 = cv2.VideoCapture(0)
cam2 = cv2.VideoCapture(1)
cam1.set(3, 1280)
cam1.set(4, 720)
cam2.set(3, 1280)
cam2.set(4, 720)
#load calibration matrix
with open("calibration_matrix.yaml") as file:
    documents = yaml.full_load(file)
x = 0
for item, doc in documents.items():
    if x == 0:
        mtx = np.matrix(doc)
        x = 1
    else:
        dist = np.matrix(doc)

def camera(ID, asd):
    if asd == -1:
        ID.release()
    ret, frame = ID.read()
    if ret:
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
        undistortedFrame = cv2.undistort(frame, mtx, dist, None, newcameramtx)
        undistortedFrame = undistortedFrame[y:y+h, x:x+w]
        return undistortedFrame

while True:
    frame1 = camera(cam1, 0)
    frame2 = camera(cam2, 0)
    cv2.imshow('Frame 1', frame1)
    cv2.imshow('Frame 2', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        camera(cam1, -1)
        camera(cam2, -1)
        cv2.destroyAllWindows()
The comments above resolved it; here's the solution:
As @Micka said,
use initUndistortRectifyMap() (once) and remap() (for each image)
initUndistortRectifyMap() basically takes the heavy load off of the undistort function (Micka). In practice, you run initUndistortRectifyMap() once at the start of the program with the camera calibration matrix and distortion coefficients, and it returns two maps, map1 and map2.
These maps can be passed into the remap() function, which is a significantly lighter operation than undistort(). In my particular case I have a fisheye camera, and OpenCV has a fisheye module with functions optimized to undistort fisheye cameras with ease.
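As a rough sketch of that pattern (reusing mtx, dist, w, h and cam1 from the code above; an illustration of the idea, not the exact code I ended up with):

# once, at start-up: build the undistortion maps
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
map1, map2 = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w, h), cv2.CV_16SC2)

# per frame: remap() is far cheaper than undistort()
ret, frame = cam1.read()
if ret:
    undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)

For fisheye lenses, cv2.fisheye.initUndistortRectifyMap() follows the same one-time-maps-then-remap pattern.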
I am trying to capture an image every 5 seconds using OpenCV through my laptop's built-in webcam. I am using time.sleep(5) for the required pause. On every run, the first image seems to be saved correctly, but all the images after it are saved in an unsupported format (when I open them). Also, I am unable to break the loop by pressing 'q'.
Below is my code.
import numpy as np
import cv2
import time
cap = cv2.VideoCapture(0)
framerate = cap.get(5)
x=1
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    cap.release()
    # Our operations on the frame come here
    filename = 'C:/Users/shjohari/Desktop/Retail_AI/MySection/to_predict/image' + str(int(x)) + ".png"
    x = x + 1
    cv2.imwrite(filename, frame)
    time.sleep(5)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Any help is appreciated.
Just a small change solved my issue.
I included the code below inside the loop :P
cap = cv2.VideoCapture(0)
framerate = cap.get(5)
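For context, a minimal sketch of the loop with that change (re-opening the capture each iteration; the filename is shortened here, and this is just an illustration):

x = 1
while True:
    # re-open the camera every iteration, since it is released after each read
    cap = cv2.VideoCapture(0)
    framerate = cap.get(5)
    ret, frame = cap.read()
    cap.release()
    if ret:
        cv2.imwrite('image' + str(x) + '.png', frame)
        x = x + 1
    time.sleep(5)

Note that cv2.waitKey() only registers key presses while an OpenCV window has focus, so without a cv2.imshow() window the 'q' check will not fire anyway.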
Background - I have Python and the required scripts installed on my desktop.
I am developing a face recognition web app.
It works fine from the command line, but when I try to run it from localhost on WampServer, the webcam light turns on, no webcam window appears, and the page keeps loading indefinitely.
Here is the code for the data training.
#!C:\Users\Gurminders\AppData\Local\Programs\Python\Python35-32\python.exe
import cv2
import os
def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)
# Start capturing video
vid_cam = cv2.VideoCapture(0)
# Detect object in video stream using Haarcascade Frontal Face
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# For each person, one face id
face_id = input('Please Enter Casual ID --> ')
# Initialize sample face image
count = 0
assure_path_exists("dataset/")
# Start looping
while(True):
    # Capture video frame
    _, image_frame = vid_cam.read()
    # Convert frame to grayscale
    gray = cv2.cvtColor(image_frame, cv2.COLOR_BGR2GRAY)
    # Detect faces of different sizes; returns a list of face rectangles
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    # Loop over each detected face
    for (x, y, w, h) in faces:
        # Draw a rectangle around the face in the image frame
        cv2.rectangle(image_frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # Increment sample face image count
        count += 1
        # Save the captured face into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h, x:x+w])
    # Display the video frame, with a bounding rectangle on the person's face
    cv2.imshow('frame', image_frame)
    # To stop taking video, press 'q' for at least 100ms
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # If the number of images taken reaches 100, stop taking video
    elif count > 100:
        break

# Stop video
vid_cam.release()
# Close all started windows
cv2.destroyAllWindows()
It works fine on the command line but not from localhost on WampServer.
I solved this problem.
I replaced
if cv2.waitKey(100) & 0xFF == ord('q'):
with
if cv2.waitKey(5000):
Here 5000 means 5 seconds (5000 ms).
I have 2 cameras (1 - a Kayeton KYT-U200-R01 and 2 - a similar one but for IR) running on my Raspberry Pi 3. Everything worked fine, but after the resolution was changed from 640x480 to 1024x768 on both, the second camera suddenly refused to work. The error I get when switching over to the second camera is as follows:
cv2.error:
/home/pi/opencv-3.4.0/modules/core/include/opencv2/core/mat.inl.hpp:500:
error: (-215) total() == 0 || data != __null in function Mat
Even after restoring the previous resolution and rebooting the system, the second camera won't work. If someone has come across this issue before, please write your solution; thanks in advance.
Here is my Python code:
Camera initialization
def init_cameras(self):
    self.cap_1 = cv2.VideoCapture(1)
    self.cap_1.set(cv2.CAP_PROP_FPS, 1)
    # set resolution
    self.cap_1.set(3, 640)
    self.cap_1.set(4, 480)

    self.cap_2 = cv2.VideoCapture(0)
    self.cap_2.set(cv2.CAP_PROP_FPS, 1)
    # set resolution
    self.cap_2.set(3, 640)
    self.cap_2.set(4, 480)
Frame reading
def get_img_frame_from_camera_two(self):
    ret, img_frame = self.cap_2.read()
    # to save it when 'capture' button will be clicked
    self.save_last_img(img_frame)
    qFormat = QtGui.QImage.Format_RGB888
    img_frame = QtGui.QImage(img_frame, img_frame.shape[1], img_frame.shape[0], img_frame.strides[0], qFormat)
    img_frame = img_frame.rgbSwapped()
    return img_frame
Run method of a separate thread reading a frame
def run(self):
    while(True):
        while(self.event_running.isSet()):
            img_frame = self.camera_manager.get_img_frame_from_camera_two()
            frame = {}
            frame['img'] = img_frame
            if self.camera_manager.queue_frames_camera_two.qsize() < 10:
                # put frame to queue
                self.camera_manager.queue_frames_camera_two.put(frame)
                # notify listener on new frame added in
                self.camera_manager.signal_camera_two.emit()
            else:
                print(self.camera_manager.queue_frames_camera_two.qsize())
Traceback (most recent call last):
File "test.py", line 10, in <module>
tracker = cv2.Tracker_create("MIL")
AttributeError: module 'cv2.cv2' has no attribute 'Tracker_create'
I get the above error when I try to run:
import cv2
import sys
if __name__ == '__main__':
    # Set up tracker.
    # Instead of MIL, you can also use
    # BOOSTING, KCF, TLD, MEDIANFLOW or GOTURN
    tracker = cv2.Tracker_create("MIL")

    # Read video
    video = cv2.VideoCapture(0)

    # Exit if video not opened.
    if not video.isOpened():
        print("Could not open video")
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print('Cannot read video file')
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    # bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Draw bounding box
        if ok:
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (0, 0, 255))

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            break
I found an answer here: How to add "Tracker" in openCV python 2.7
But this confused me more. I'm on macOS and I'm just getting started with OpenCV, so I'm not really sure how to recompile OpenCV with the correct modules.
Thanks in advance, and sorry if I'm missing something obvious.
So it wasn't an issue with the installation; the constructor name had changed.
tracker = cv2.Tracker_create("MIL")
Should be:
tracker = cv2.TrackerMIL_create()
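If it helps, a small version-tolerant sketch (just an illustration; it assumes your build ships the tracking module, e.g. opencv-contrib-python):

import cv2

if hasattr(cv2, 'TrackerMIL_create'):
    # OpenCV 3.x with contrib, and most 4.x builds
    tracker = cv2.TrackerMIL_create()
else:
    # some newer builds keep the classic trackers under cv2.legacy
    tracker = cv2.legacy.TrackerMIL_create()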