'Assertion failed error while I'm trying to read frame with opencv' - python-3.x

I'm trying to use a .pt model to detect an object via webcam. When I execute this code:
# Import libraries
import torch
import cv2
import numpy as np
import pandas
# Load the model
model = torch.hub.load('ultralytics/yolov5', 'custom',
                       path='C:/Users/maxim/OneDrive/Documentos/jupyterbooks/detector.pt')
# Open the video capture
cap = cv2.VideoCapture(1)
# Main loop
while True:
    # Read a frame
    ret, frame = cap.read()
    # Color correction
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Run the detections
    detect = model(frame)
    # Show the detections
    cv2.imshow('Detector de Figuras', np.squeeze(detect.render()))
    # Read the keyboard
    t = cv2.waitKey(5)
    if t == 27:
        break
cap.release()
cv2.destroyAllWindows()
It's giving me this error:
error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
I'm new to this, so I really don't know why it's giving me this type of error.
Best regards,
Maximiliano
I'm expecting to solve this error so I can test my model with my webcam.

Most probably your frame is empty, which means the webcam did not open properly. You can check with something like print(frame.shape) just after cap.read() and see what it prints; if the frame is not empty, it will print the webcam frame size. Make sure your webcam really is at index 1 as you specified; if you have only one webcam, it should be 0.
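As a minimal sketch of that check (assuming a single webcam at index 0; adjust the index to match your setup):
import cv2
cap = cv2.VideoCapture(0)  # try 0 if you only have one webcam
if not cap.isOpened():
    print('Could not open the webcam')
while True:
    ret, frame = cap.read()
    # If the camera delivered no frame, ret is False and frame is None,
    # and passing it to cv2.cvtColor raises the !_src.empty() assertion
    if not ret or frame is None:
        print('Empty frame - check the camera index')
        break
    print(frame.shape)  # e.g. (480, 640, 3) for a working webcam
    cv2.imshow('frame', frame)
    if cv2.waitKey(5) == 27:  # ESC to quit
        break
cap.release()
cv2.destroyAllWindows()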

Related

Background removal from webcam OPENCV PYTHON

I'm creating a script that will read the state of a supermarket shelf and tell me if there are products missing.
For example, in the image below there are some places where products are missing. I'm using the FAST method to find all the corners in the frame, but sometimes the script detects the floor corners. What I want to do is remove the floor from the frame before I find the corners.
import cv2
import numpy as np
image = cv2.imread('gondola_imagem.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
You can use a mask to remove the areas you are not interested in. For example, with the following image as a mask you can get the results below.
Mask
Result
The code is as follows:
import numpy as np
import cv2
image = cv2.imread('test.jpg')
mask = cv2.imread('mask.jpg', 0)
cv2.imshow('Original', image)
cv2.imshow('Mask', mask)
res = cv2.bitwise_and(image,image,mask = mask)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('result.jpg', image)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
Edit:
If you want to do this in real time (video from a webcam), you just need to do it for every frame you get from the camera. As long as the camera is not moving, you should be able to use the same mask for all the frames. You could make the code above a function and then call it with an image as a parameter, as per the following code (a sketch of what that function could contain is shown further below):
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # The following function will have to be created from the previous code
    CallFunctionToPreviewsCode(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
The code above was taken from the OpenCV Python-Tutorials. It is a good place for learning OpenCV with the Python programming language.
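For reference, one possible sketch of what that per-frame function could contain, reusing the mask-and-FAST code from the answer above (the function name matches the placeholder in the loop, 'mask.jpg' is the assumed mask file, and the mask is assumed to have the same size as the webcam frames; in practice you would load the mask once, outside the function):
import cv2
def CallFunctionToPreviewsCode(frame):
    # Load the static mask (assumed file name; ideally loaded once, outside the function)
    mask = cv2.imread('mask.jpg', 0)
    # Keep only the shelf area, then detect FAST keypoints on it
    res = cv2.bitwise_and(frame, frame, mask=mask)
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    fast = cv2.FastFeatureDetector_create()
    keypoints = fast.detect(gray, None)
    out = cv2.drawKeypoints(frame, keypoints, None,
                            flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow('Feature Method - FAST', out)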

OpenCV in Python gives me an AttributeError

I'm trying to get OpenCV working for a face recognition program. My webcam is working, since I tested it using cv2.VideoCapture and the window popped up. Then, when I added the next step, which is turning the image gray using cv2.cvtcolor(frame, cv2.COLOR_BGR2GRAY), I got the error message AttributeError: module 'cv2.cv2' has no attribute 'cvtcolor'. I tried uninstalling and reinstalling OpenCV using pip3 install opencv-contrib-python. I'm not sure what I'm doing wrong here; my guess is that maybe my files aren't in the right place, but I'm not sure where they would need to be.
When I use this code, my webcam window pops up and works fine.
Working code
import os
import PIL
import cv2
import numpy
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                     "haarcascade_frontalface_alt2.xml")
cap = cv2.VideoCapture(0)
while(True):
    # frame by frame
    ret, frame = cap.read()
    gray = cv2.cvtcolor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiscale(gray, scaleFactor=1.5,
                                          minNeighbors=5)
    # display frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
# when done, release cap
cap.release()
cv2.destroyAllWindows()
Traceback (most recent call last):
File "c:/Users/Nick Miller/all_for_vs/Code/facerecg.py", line 14, in
gray = cv2.cvtcolor(frame, cv2.COLOR_BGR2GRAY)
AttributeError: module 'cv2.cv2' has no attribute 'cvtcolor'
[ WARN:0] terminating async callback
Change cv2.cvtcolor(frame, cv2.COLOR_BGR2GRAY) to cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).
Also, change faces = face_cascade.detectMultiscale(gray, scaleFactor=1.5, minNeighbors=5) to faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5).
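Put together, the loop from the question with the corrected method names would look roughly like this (same cascade and capture setup as above):
import cv2
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                     "haarcascade_frontalface_alt2.xml")
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    # capital C: cvtColor, not cvtcolor
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # capital S: detectMultiScale, not detectMultiscale
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()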

Loading keras model on webcam video stream

I have a Keras model for depth perception; I want to load it using tensorflowjs and apply it frame by frame to my webcam stream. Currently I am unable to capture my webcam video stream using HTML. How do I do it?
You can use OpenCV and Python to easily capture your video and make changes to it. For example, you can use the following code:
import cv2
import sys
from time import sleep
video_capture = cv2.VideoCapture(0)
anterior = 0
while True:
    if not video_capture.isOpened():
        print('Unable to load camera.')
        sleep(5)
        pass
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
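If the goal is then to apply the Keras model to every captured frame, a rough, purely illustrative sketch could look like the following (the model file 'depth_model.h5' and the 224x224 input size are assumptions, not something given in the question):
import cv2
import numpy as np
import tensorflow as tf
# Hypothetical model file and input size - adjust to your own model
model = tf.keras.models.load_model('depth_model.h5')
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Preprocess the frame to match the model's expected input
    inp = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    pred = model.predict(np.expand_dims(inp, axis=0))
    # pred now holds the model output for this frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()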

Build Error For cv2.imshow()

I was using OpenCV to capture a pre-existing video. The video pops up in a frame but ends with the following error:
cv2.error: D:\Build\OpenCV\opencv-3.2.0\modules\highgui\src\window.cpp:312: error: (-215) size.width>0 && size.height>0 in function cv::imshow
My code is
import numpy as np
import cv2
cap = cv2.VideoCapture('lol.avi')
while(cap.isOpened()):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
PS: I do have ffmpeg on Windows and the environment path set to it.
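This looks like the same class of problem as the first question above: once the video finishes, cap.read() typically returns an empty frame, and cvtColor/imshow then fail the size assertion. A minimal sketch of the usual guard, assuming the same 'lol.avi' file:
import cv2
cap = cv2.VideoCapture('lol.avi')
while cap.isOpened():
    ret, frame = cap.read()
    # When the video ends, ret is False and frame is None
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()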

python opencv threshold red image

I am trying to threshold a BGR image after I separate the red channel, but my code always returns "Segmentation fault".
import numpy as np
import cv2
def mostrarVentana(titulo, imagen):
    print('Mostrando imagen')
    cv2.imshow(titulo, imagen)
    k = cv2.waitKey(0)
    if k == 27:  # wait for ESC key to exit
        cv2.destroyAllWindows()
img = cv2.imread('RepoImagenes/640x480/P5.jpg', 1)  # loading image in BGR
redImg = img[:, :, 2]  # extracting red channel
rbin, threshImg = cv2.threshold(redImg, 58, 255, cv2.THRESH_BINARY)  # thresholding
mostrarVentana('Binary image', threshImg)
I have read the documentation on how to use the threshold() function and I cannot figure out what's wrong. I only need to work on the red channel; how can I get this done?
I am using Python 3.4 and OpenCV 3.1.0.
First of all, OpenCV provides a simple API to split an n-channel image: cv2.split(), which returns a list of the individual channels in the image.
There is also a bug in your mostrarVentana method: you never created a cv2.namedWindow() and are calling cv2.imshow() directly, but you cannot simply call cv2.imshow() without creating a cv2.namedWindow().
Also, you must be sure that the image is properly loaded before accessing the desired channel; otherwise it can lead to weird errors. Your code with some scenario handling would look like this:
import numpy as np
import cv2
def mostrarVentana(titulo, imagen):
    print('Mostrando imagen')
    cv2.namedWindow(titulo, cv2.WINDOW_NORMAL)
    cv2.imshow(titulo, imagen)
    k = cv2.waitKey(0)
    if k == 27:  # wait for ESC key to exit
        cv2.destroyAllWindows()
img = cv2.imread('RepoImagenes/640x480/P5.jpg', 1)  # loading image in BGR
if img is not None and len(img.shape) == 3 and img.shape[2] == 3:
    print(img.shape)  # prints the image size, e.g. (480, 640, 3)
    blue_img, green_img, red_img = cv2.split(img)  # extracting the red channel
    rbin, threshImg = cv2.threshold(red_img, 58, 255, cv2.THRESH_BINARY)  # thresholding
    mostrarVentana('Binary image', threshImg)
else:
    if img is None:
        print("Sorry, the image path was not valid")
    else:
        print("Sorry, the image was not loaded in BGR, 3-channel format")
