Raspberry Pi 4, OpenCV, and a USB web camera with Python 3. I have installed opencv-python 3.x.

For face recognition I wrote the code below, but I still have the problem that the USB webcam will not open in a new window. I tried passing -1 and 1 as the camera index, and it does not work.
import cv2

cam = cv2.VideoCapture(0)
cv2.namedWindow("press space to take a photo", cv2.WINDOW_NORMAL)
cv2.resizeWindow("press space to take a photo", 500, 300)

img_counter = 0
while True:
    ret, frame = cam.read()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("press space to take a photo", frame)
    k = cv2.waitKey(1)
    if k % 256 == 27:    # ESC pressed
        break
    elif k % 256 == 32:  # SPACE pressed
        cv2.imwrite("opencv_frame_{}.png".format(img_counter), frame)
        img_counter += 1

cam.release()
cv2.destroyAllWindows()
Error message:
[ WARN:0] global /tmp/pip-wheel-qd18ncao/opencv-python/opencv/modules/videoio/src/cap_v4l.cpp (893) open VIDEOIO(V4L2:/dev/video0): cant open camera by index
failed to grab frame


OpenCV webcam reconnect

I'm using the OpenCV webcam example. It works great, but I'm wondering whether it's possible to add a camera-reconnect feature. When I unplug the camera, the code crashes. That's expected, but I would like to keep it running until I replug the camera.
I tried try/except, as you can see below. Now Python doesn't crash, and when I unplug the camera it starts printing "Disconnected" to the console. However, when I reconnect the camera, it does not resume automatically; I need to restart the program.
Thanks
import numpy as np
import cv2

cap = cv2.VideoCapture(2)
#cap = cv2.VideoCapture(1)

if cap.isOpened():
    while True:
        try:
            # Capture frame-by-frame
            ret, im = cap.read()
            # Display the resulting frame
            cv2.imshow('frame', im)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        except:
            print('Disconnected')
else:
    print("camera open failed")

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

Multi-threaded PyOpenCV display [duplicate]

I have to stitch images captured from many (9) cameras. Initially, I captured frames from 2 cameras at 15 FPS. Then I connected 4 cameras (using an externally powered USB hub to supply enough power), but I could only see one stream.
For testing, I used the following script:
import numpy as np
import cv2
import imutils

index = 0
arr = []
while True:
    cap = cv2.VideoCapture(index)
    if not cap.read()[0]:
        break
    else:
        arr.append(index)
    cap.release()
    index += 1

video_captures = [cv2.VideoCapture(idx) for idx in arr]

while True:
    # Capture frame-by-frame
    frames = []
    frames_preview = []
    for i in arr:
        # skip webcam capture
        if i == 1: continue
        ret, frame = video_captures[i].read()
        if ret:
            frames.append(frame)
            small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
            frames_preview.append(small)
    for i, frame in enumerate(frames_preview):
        cv2.imshow('Cam {}'.format(i), frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
for video_capture in video_captures:
    video_capture.release()
cv2.destroyAllWindows()
Is there a limit on the number of cameras? Does anyone know the right way to capture frames from multiple cameras?
To capture multiple streams with OpenCV, I recommend threading, which improves performance by offloading the heavy I/O to a separate thread. Accessing a webcam/IP/RTSP stream with cv2.VideoCapture().read() is a blocking operation, so the main program is stuck until the frame is read from the camera device. With multiple streams, this latency is definitely visible. To remedy this, we can spawn another thread that retrieves frames into a deque in parallel, instead of relying on a single thread to obtain frames in sequential order. Threading allows frames to be read continuously without impacting the performance of the main program. The idea of capturing a single stream using threading and OpenCV comes from a previous answer, Python OpenCV multithreading streaming from camera.
But to capture multiple streams, OpenCV alone is not enough. You can use OpenCV in combination with a GUI framework to stitch each image onto a nice display. I will use PyQt4 as the framework, qdarkstyle for the GUI CSS, and imutils for OpenCV convenience functions.
Here is a very stripped-down version of the camera GUI I currently use, without the placeholder images, credential admin login page, and camera-switching ability. I've kept the automatic camera-reconnect feature in case the internet dies or the camera connection is lost. I only have 8 cameras, but it is very simple to add another and it should not impact performance. This camera GUI currently runs at about ~60 FPS, so it is real-time. You can easily rearrange the layout using PyQt layouts, so feel free to modify the code! Remember to change the stream links!
from PyQt4 import QtCore, QtGui
import qdarkstyle
from threading import Thread
from collections import deque
from datetime import datetime
import time
import sys
import cv2
import imutils

class CameraWidget(QtGui.QWidget):
    """Independent camera feed
    Uses threading to grab IP camera frames in the background

    @param width - Width of the video frame
    @param height - Height of the video frame
    @param stream_link - IP/RTSP/Webcam link
    @param aspect_ratio - Whether to maintain frame aspect ratio or force into frame
    """
    def __init__(self, width, height, stream_link=0, aspect_ratio=False, parent=None, deque_size=1):
        super(CameraWidget, self).__init__(parent)

        # Initialize deque used to store frames read from the stream
        self.deque = deque(maxlen=deque_size)

        # Slight offset is needed since PyQt layouts have a built in padding
        # So add offset to counter the padding
        self.offset = 16
        self.screen_width = width - self.offset
        self.screen_height = height - self.offset
        self.maintain_aspect_ratio = aspect_ratio

        self.camera_stream_link = stream_link

        # Flag to check if camera is valid/working
        self.online = False
        self.capture = None
        self.video_frame = QtGui.QLabel()

        self.load_network_stream()

        # Start background frame grabbing
        self.get_frame_thread = Thread(target=self.get_frame, args=())
        self.get_frame_thread.daemon = True
        self.get_frame_thread.start()

        # Periodically set video frame to display
        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.set_frame)
        self.timer.start(.5)

        print('Started camera: {}'.format(self.camera_stream_link))

    def load_network_stream(self):
        """Verifies stream link and opens new stream if valid"""
        def load_network_stream_thread():
            if self.verify_network_stream(self.camera_stream_link):
                self.capture = cv2.VideoCapture(self.camera_stream_link)
                self.online = True
        self.load_stream_thread = Thread(target=load_network_stream_thread, args=())
        self.load_stream_thread.daemon = True
        self.load_stream_thread.start()

    def verify_network_stream(self, link):
        """Attempts to receive a frame from given link"""
        cap = cv2.VideoCapture(link)
        if not cap.isOpened():
            return False
        cap.release()
        return True

    def get_frame(self):
        """Reads frame, resizes, and converts image to pixmap"""
        while True:
            try:
                if self.capture.isOpened() and self.online:
                    # Read next frame from stream and insert into deque
                    status, frame = self.capture.read()
                    if status:
                        self.deque.append(frame)
                    else:
                        self.capture.release()
                        self.online = False
                else:
                    # Attempt to reconnect
                    print('attempting to reconnect', self.camera_stream_link)
                    self.load_network_stream()
                    self.spin(2)
                self.spin(.001)
            except AttributeError:
                pass

    def spin(self, seconds):
        """Pause for set amount of seconds, replaces time.sleep so program doesn't stall"""
        time_end = time.time() + seconds
        while time.time() < time_end:
            QtGui.QApplication.processEvents()

    def set_frame(self):
        """Sets pixmap image to video frame"""
        if not self.online:
            self.spin(1)
            return

        if self.deque and self.online:
            # Grab latest frame
            frame = self.deque[-1]

            # Keep frame aspect ratio
            if self.maintain_aspect_ratio:
                self.frame = imutils.resize(frame, width=self.screen_width)
            # Force resize
            else:
                self.frame = cv2.resize(frame, (self.screen_width, self.screen_height))

            # Add timestamp to cameras
            cv2.rectangle(self.frame, (self.screen_width-190, 0), (self.screen_width, 50), color=(0,0,0), thickness=-1)
            cv2.putText(self.frame, datetime.now().strftime('%H:%M:%S'), (self.screen_width-185, 37), cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255,255,255), lineType=cv2.LINE_AA)

            # Convert to pixmap and set to video frame
            self.img = QtGui.QImage(self.frame, self.frame.shape[1], self.frame.shape[0], QtGui.QImage.Format_RGB888).rgbSwapped()
            self.pix = QtGui.QPixmap.fromImage(self.img)
            self.video_frame.setPixmap(self.pix)

    def get_video_frame(self):
        return self.video_frame

def exit_application():
    """Exit program event handler"""
    sys.exit(1)

if __name__ == '__main__':
    # Create main application window
    app = QtGui.QApplication([])
    app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt())
    app.setStyle(QtGui.QStyleFactory.create("Cleanlooks"))
    mw = QtGui.QMainWindow()
    mw.setWindowTitle('Camera GUI')
    mw.setWindowFlags(QtCore.Qt.FramelessWindowHint)

    cw = QtGui.QWidget()
    ml = QtGui.QGridLayout()
    cw.setLayout(ml)
    mw.setCentralWidget(cw)
    mw.showMaximized()

    # Dynamically determine screen width/height
    screen_width = QtGui.QApplication.desktop().screenGeometry().width()
    screen_height = QtGui.QApplication.desktop().screenGeometry().height()

    # Create Camera Widgets
    username = 'Your camera username!'
    password = 'Your camera password!'

    # Stream links
    camera0 = 'rtsp://{}:{}@192.168.1.43:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera1 = 'rtsp://{}:{}@192.168.1.45/axis-media/media.amp'.format(username, password)
    camera2 = 'rtsp://{}:{}@192.168.1.47:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera3 = 'rtsp://{}:{}@192.168.1.40:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera4 = 'rtsp://{}:{}@192.168.1.44:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera5 = 'rtsp://{}:{}@192.168.1.42:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera6 = 'rtsp://{}:{}@192.168.1.46:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera7 = 'rtsp://{}:{}@192.168.1.41:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)

    # Create camera widgets
    print('Creating Camera Widgets...')
    zero = CameraWidget(screen_width//3, screen_height//3, camera0)
    one = CameraWidget(screen_width//3, screen_height//3, camera1)
    two = CameraWidget(screen_width//3, screen_height//3, camera2)
    three = CameraWidget(screen_width//3, screen_height//3, camera3)
    four = CameraWidget(screen_width//3, screen_height//3, camera4)
    five = CameraWidget(screen_width//3, screen_height//3, camera5)
    six = CameraWidget(screen_width//3, screen_height//3, camera6)
    seven = CameraWidget(screen_width//3, screen_height//3, camera7)

    # Add widgets to layout
    print('Adding widgets to layout...')
    ml.addWidget(zero.get_video_frame(), 0, 0, 1, 1)
    ml.addWidget(one.get_video_frame(), 0, 1, 1, 1)
    ml.addWidget(two.get_video_frame(), 0, 2, 1, 1)
    ml.addWidget(three.get_video_frame(), 1, 0, 1, 1)
    ml.addWidget(four.get_video_frame(), 1, 1, 1, 1)
    ml.addWidget(five.get_video_frame(), 1, 2, 1, 1)
    ml.addWidget(six.get_video_frame(), 2, 0, 1, 1)
    ml.addWidget(seven.get_video_frame(), 2, 1, 1, 1)

    print('Verifying camera credentials...')

    mw.show()

    QtGui.QShortcut(QtGui.QKeySequence('Ctrl+Q'), mw, exit_application)

    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
Related camera/IP/RTSP, FPS, video, threading, and multiprocessing posts
Python OpenCV streaming from camera - multithreading, timestamps
Video Streaming from IP Camera in Python Using OpenCV cv2.VideoCapture
How to capture multiple camera streams with OpenCV?
OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?
Storing RTSP stream as video file with OpenCV VideoWriter
OpenCV video saving
Python OpenCV multiprocessing cv2.VideoCapture mp4

Video runs for a few frames then gives error

I'm working on a Raspberry Pi 3 Model B+.
I installed OpenCV 3.4.4 on my Raspberry Pi and it installed fine. I'm just running a basic script to see my camera output (I have two cameras plugged in).
Here is the code.
import cv2
import time

def show_webcam(mirror=False):
    frame_rate = 30
    prev = 0
    cam = cv2.VideoCapture(0)
    cam1 = cv2.VideoCapture(1)
    ff = 0.5
    fxx = ff
    fyy = ff
    while True:
        ret_val, img = cam.read()
        img2 = cam1.read()[1]
        time_elapsed = time.time() - prev
        # print('data type of frame', type(img))
        if time_elapsed > 1/frame_rate:
            prev = time.time()
            cv2.rectangle(img, (100, 100), (500, 500), (255, 255, 0), 2)
            small_frame = cv2.resize(img, (0, 0), fx=fxx, fy=fyy)
            cv2.resize(img2, (0, 0), fx=fxx, fy=fyy)
            #print("helo")
            #if mirror:
            #    img = cv2.flip(img, 1)
            cv2.imshow('my webcam', img)
            cv2.imshow('my 2nd webcam', img2)
            #if cv2.waitKey(1) == 27:
            #    break  # esc to quit
        if cv2.waitKey(1) == 27:
            break
    cv2.destroyAllWindows()
    print(cam)

def main():
    show_webcam(mirror=True)

if __name__ == '__main__':
    main()
The videos appear for a few frames, but after a few seconds I get this error:
select timeout
VIDIOC_DQBUF: Resource temporarily unavailable
Traceback (most recent call last):
File "camera.py", line 39, in <module>
main()
File "camera.py", line 36, in main
show_webcam(mirror=True)
File "camera.py", line 21, in show_webcam
small_frame = cv2.resize(img, (0, 0), fx=fxx, fy=fyy)
cv2.error: OpenCV(3.4.4) /home/pi/packaging/opencv-python/opencv/modules/imgproc/src/resize.cpp:3784: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
This same code works fine when I run it on my laptop. What can I do to correct this error and ensure the video is not interrupted?
I have tried cv2.waitKey(30); it doesn't work.
Why are you using a time_elapsed variable when you can just use the waitKey function and pass it the number of milliseconds you want to wait, 1000/framePerSecond? As for your error: the frame you are trying to resize is empty, which sometimes happens. Before doing any image processing, check that the image is not empty, then proceed with what you want to do.
The same code works on a laptop but not on the Pi, which suggests you are hitting the smaller device's memory and/or CPU limits.
Try reducing the frame rate to however many frames the smaller device can handle.
You should check that the ret_val of both cam.read() calls is True before continuing processing. That way, when a frame is not properly grabbed, it is dropped and the read is retried instead of throwing an error and exiting.
This does not technically resolve the error, but it does solve your problem, provided the resulting frame rate is sufficient for your application.
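That ret_val check can be sketched as a small helper (my own illustration; `cam` and `cam1` stand for the cv2.VideoCapture handles from the question):

```python
def read_both(cam, cam1):
    """Grab one frame from each camera, but only accept the pair when
    both reads succeed; otherwise signal the caller to retry."""
    ret_val, img = cam.read()
    ret_val2, img2 = cam1.read()
    if ret_val and ret_val2 and img is not None and img2 is not None:
        return img, img2
    return None  # at least one frame was dropped

# In the question's loop this would replace the two read() calls:
#     pair = read_both(cam, cam1)
#     if pair is None:
#         continue  # skip this iteration instead of crashing in resize
#     img, img2 = pair
```

With the guard in place, cv2.resize never sees an empty frame, so the `!ssize.empty()` assertion cannot trigger.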

Unable to capture image every 5 seconds using OpenCV

I am trying to capture an image every 5 seconds using OpenCV through my laptop's built-in webcam. I am using time.sleep(5) for the required pause. On every run, the first image is saved correctly, but after that all the remaining images are saved in an unsupported format (when I open them). I am also unable to break the loop by pressing 'q'.
Below is my code.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)
framerate = cap.get(5)
x = 1
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cap.release()
    # Our operations on the frame come here
    filename = 'C:/Users/shjohari/Desktop/Retail_AI/MySection/to_predict/image' + str(int(x)) + ".png"
    x = x + 1
    cv2.imwrite(filename, frame)
    time.sleep(5)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Any help is appreciated.
Just a small change solved my issue.
I moved the code below inside the loop :P
cap = cv2.VideoCapture(0)
framerate = cap.get(5)

python cv2 VideoCapture not working on wamp server

Background: I have Python and the required scripts installed on my desktop.
I am developing a face-recognition web app.
It works fine from the command line, but when I try to run it from localhost on WampServer, the webcam light turns on yet no webcam window appears and the page keeps loading indefinitely.
Here is the code for data training.
#!C:\Users\Gurminders\AppData\Local\Programs\Python\Python35-32\python.exe
import cv2
import os

def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

# Start capturing video
vid_cam = cv2.VideoCapture(0)

# Detect object in video stream using Haarcascade Frontal Face
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, one face id
face_id = input('Please Enter Casual ID --> ')

# Initialize sample face image
count = 0
assure_path_exists("dataset/")

# Start looping
while True:
    # Capture video frame
    _, image_frame = vid_cam.read()

    # Convert frame to grayscale
    gray = cv2.cvtColor(image_frame, cv2.COLOR_BGR2GRAY)

    # Detect frames of different sizes, list of face rectangles
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    # Loop over each face
    for (x, y, w, h) in faces:
        # Draw a rectangle around the face
        cv2.rectangle(image_frame, (x, y), (x+w, y+h), (255, 0, 0), 2)

        # Increment sample face image
        count += 1

        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h, x:x+w])

    # Display the video frame, with bounding rectangle on the person's face
    cv2.imshow('frame', image_frame)

    # To stop taking video, press 'q' for at least 100 ms
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # If 100 images have been taken, stop taking video
    elif count > 100:
        break

# Stop video
vid_cam.release()

# Close all started windows
cv2.destroyAllWindows()
It works fine on the command line but not from localhost on WampServer.
I solved this problem.
I replaced
if cv2.waitKey(100) & 0xFF == ord('q'):
with
if cv2.waitKey(5000):
where 5000 milliseconds is 5 seconds.
