Unable to capture image every 5 seconds using OpenCV - python-3.x

I am trying to capture an image every 5 seconds with OpenCV, using my laptop's built-in webcam. I am using time.sleep(5) for the required pause. On every run the first image is saved correctly, but all the images after it are saved in an unsupported format (that is what I see when I open them). I am also unable to break the loop by pressing 'q'.
Below is my code.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)
framerate = cap.get(5)
x = 1
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cap.release()
    # Our operations on the frame come here
    filename = 'C:/Users/shjohari/Desktop/Retail_AI/MySection/to_predict/image' + str(int(x)) + ".png"
    x = x + 1
    cv2.imwrite(filename, frame)
    time.sleep(5)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Any help is appreciated.

Just a small change solved my issue: I moved the two lines below inside the loop. :P
    cap = cv2.VideoCapture(0)
    framerate = cap.get(5)
This works because the loop calls cap.release() right after reading, so every read after the first one fails and an invalid image file gets written; reopening the camera each iteration gives read() a live capture again.
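A cleaner fix than reopening the camera every pass is to open it once and drop the cap.release() from inside the loop. The sketch below also replaces time.sleep(5) with waitKey(5000), since waitKey only picks up the 'q' key while an OpenCV window is shown, which is also why the break never fired in the original. The folder name and the frame_filename helper are made up for illustration:

```python
import os

def frame_filename(folder, index):
    # hypothetical helper: builds the path for the Nth captured frame
    return os.path.join(folder, "image%d.png" % index)

def capture_every_5s(folder="to_predict"):
    import cv2  # imported here so the helper above works without OpenCV installed
    cap = cv2.VideoCapture(0)     # open the camera once, not once per frame
    x = 1
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:               # read failed: stop instead of writing junk
            break
        cv2.imwrite(frame_filename(folder, x), frame)
        x += 1
        cv2.imshow("preview", frame)   # a window is needed for key events
        # waitKey(5000) pauses ~5 s AND polls the keyboard, unlike time.sleep
        if cv2.waitKey(5000) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Call capture_every_5s() with a webcam attached; the key point is that release() runs once, after the loop ends.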

Related

OpenCV webcam reconnect

I'm using the OpenCV webcam example. It works great, but I'm wondering whether it's possible to add a "camera reconnect" function. When I unplug the camera, the code crashes. That's expected, but I would like it to keep running until I plug the camera back in.
I tried try/except, as you can see below. Now Python doesn't crash, and when I unplug the camera it starts printing "Disconnected" to the console. However, when I reconnect the camera it does not resume automatically; I have to restart the program.
Thanks
import numpy as np
import cv2

cap = cv2.VideoCapture(2)
#cap = cv2.VideoCapture(1)
if cap.isOpened():
    while True:
        try:
            # Capture frame-by-frame
            ret, im = cap.read()
            # Display the resulting frame
            cv2.imshow('frame', im)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        except:
            print('Disconnected')
else:
    print("camera open failed")
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
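One way to get automatic recovery, building on the try/except above: when a read fails, release the dead handle and keep trying to re-open the device until it answers again. This is a sketch under the assumption that re-creating cv2.VideoCapture after a replug succeeds (it does on most backends, though driver behaviour varies); the small helper just decides when a reconnect attempt is needed:

```python
import time

def needs_reconnect(ret, frame):
    # a failed grab shows up as ret == False or a None frame
    return (not ret) or frame is None

def show_with_reconnect(device=2):
    import cv2  # deferred so needs_reconnect() is usable without OpenCV
    cap = cv2.VideoCapture(device)
    while True:
        ret, im = cap.read() if cap.isOpened() else (False, None)
        if needs_reconnect(ret, im):
            print('Disconnected, retrying...')
            cap.release()                    # drop the dead handle
            time.sleep(1)                    # give the OS time to re-enumerate
            cap = cv2.VideoCapture(device)   # try to open the device again
            continue
        cv2.imshow('frame', im)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Run show_with_reconnect() with a camera attached; unplugging then replugging should print a few retry lines and then resume display.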

Playing a movie in OpenCV

I get the following error while trying to show a movie:
cv2.imshow("Video Output", frames)
TypeError: Expected Ptr<cv::UMat> for argument 'mat'
The commented-out lines are my attempts to fix the problem, but I still get the error.
What am I doing wrong?
import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while True:
    frames = vid.read()
    # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
    # frames_arr = np.array(frames)
    cv2.imshow("Video Output", frames)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
A better implementation would:
Check that the video was opened, using vid.isOpened():
vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():
Display the frame only if it was returned successfully:
ret, frames = vid.read()
if ret:
    # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
    # frames_arr = np.array(frames)
    cv2.imshow("Video Output", frames)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
Always release the VideoCapture object and close all windows when done:
vid.release()
cv2.destroyAllWindows()
Code:
import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():
    ret, frames = vid.read()
    if ret:
        # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
        # frames_arr = np.array(frames)
        cv2.imshow("Video Output", frames)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        # no frame returned: end of the video (a `continue` here would spin forever)
        break
vid.release()
cv2.destroyAllWindows()
In your code, vid.read() returns two values. The first is a boolean which, according to the documentation:
returns a bool (True/False). If the frame is read correctly, it will be True. So you can check for the end of the video by checking this return value.
So your frames variable is actually a tuple containing the boolean and the frame itself, which is why imshow raises the TypeError: it expects an image, not a tuple. You would need to index into the second element (frames[1]) to play the video with imshow.
Always read the docs carefully!

python cv2 VideoCapture not working on wamp server

Background - I have python and required scripts installed on my desktop.
I am developing a face recognition WebApp.
It works fine from the command line, but when I run it from localhost on WampServer, the webcam light comes on yet no webcam window appears and the page keeps loading indefinitely.
Here is the code for data training
#!C:\Users\Gurminders\AppData\Local\Programs\Python\Python35-32\python.exe
import cv2
import os

def assure_path_exists(path):
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

# Start capturing video
vid_cam = cv2.VideoCapture(0)
# Detect object in video stream using Haarcascade Frontal Face
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# For each person, one face id
face_id = input('Please Enter Casual ID --> ')
# Initialize sample face image count
count = 0
assure_path_exists("dataset/")
# Start looping
while True:
    # Capture video frame
    _, image_frame = vid_cam.read()
    # Convert frame to grayscale
    gray = cv2.cvtColor(image_frame, cv2.COLOR_BGR2GRAY)
    # Detect faces of different sizes; returns a list of face rectangles
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    # Loop over each face
    for (x, y, w, h) in faces:
        # Draw a rectangle around the face
        cv2.rectangle(image_frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # Increment sample face image count
        count += 1
        # Save the captured face crop into the dataset folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h, x:x+w])
    # Display the video frame, with a bounding rectangle on the person's face
    cv2.imshow('frame', image_frame)
    # To stop taking video, press 'q' for at least 100 ms
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # If 100 images have been taken, stop taking video
    elif count > 100:
        break
# Stop video
vid_cam.release()
# Close all started windows
cv2.destroyAllWindows()
It works fine on command line but not from localhost on wampserver.
I solved this problem. I replaced
if cv2.waitKey(100) & 0xFF == ord('q'):
with
if cv2.waitKey(5000):
where 5000 is the delay in milliseconds, i.e. 5 seconds.
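It is worth knowing why that change behaves the way it does: waitKey returns -1 when no key arrives within the delay, and -1 is truthy in Python, so `if cv2.waitKey(5000):` takes the branch (and breaks) after 5 seconds whether or not anything was pressed. A small pure-Python illustration, with a hypothetical helper showing a check that keeps 'q' working:

```python
# cv2.waitKey gives back -1 on timeout; any nonzero int is truthy,
# so `if cv2.waitKey(5000):` fires on every timeout.
timeout_code = -1
print(bool(timeout_code))          # True: the branch runs with no key pressed

def pressed_q(keycode):
    # explicit test: a real key arrived AND its low byte is 'q'
    return keycode != -1 and (keycode & 0xFF) == ord('q')

print(pressed_q(-1))               # False: timeout, keep looping
print(pressed_q(ord('q')))         # True: quit requested
```

So the reported "fix" works because the loop now exits on its own after each 5-second wait, not because 'q' is being detected.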

Output 2 Functions on 1 Video

I'm a beginner in Python and want to ask whether I can draw the output of different functions onto one video. Below is my practice code.
import numpy as np
import cv2
from multiprocessing import Process

cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')

def line_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.line(img, (50, 180), (380, 180), (0, 255, 0), 5)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

def rectangle_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.rectangle(img, (180, 0), (380, 128), (0, 255, 0), 3)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    p1 = Process(target=rectangle_drawing)
    p1.start()
    p2 = Process(target=line_drawing)
    p2.start()
When I run the code, it gives me two windows playing the same video, one with the line drawn and the other with the rectangle drawn. How do I get both the rectangle and the line onto the same video while keeping the two functions separate, instead of putting both in one function?
I won't be able to give you an answer with Python code, but:
What you have is two separate processes, each capturing data from the video feed independently and drawing its element on its own copy of the data.
What you need is a single process (or thread) that is solely in charge of capturing frames from the video feed and that then provides each frame to the two drawing threads. You would likely need to look into mutexes so that the two threads don't clash with each other.
Resources
There are quite a few questions on SO and the internet that will help you achieve this:
opencv python Multi Threading Video Capture
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
http://algomuse.com/c-c/developing-a-multithreaded-real-time-video-processing-application-in-opencv
http://forum.piborg.org/node/2363
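The single-grabber idea above can be sketched without OpenCV at all: one thread owns the source and publishes the latest frame under a lock (the mutex), and any number of drawers read it. The FrameGrabber class and its stub _read_frame() are invented for illustration; with OpenCV you would replace _read_frame() with cap.read() and have each drawer copy the frame before drawing on it:

```python
import threading
import time

class FrameGrabber:
    """One thread reads the source; consumers fetch the newest frame."""
    def __init__(self):
        self._lock = threading.Lock()    # mutex guarding self._frame
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _read_frame(self):
        # stand-in for `ret, img = cap.read()`; returns a fake "frame"
        return {"t": time.time()}

    def _loop(self):
        while self._running:
            frame = self._read_frame()
            with self._lock:             # writer and readers never clash
                self._frame = frame
            time.sleep(0.005)

    def latest(self):
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()

grabber = FrameGrabber()
time.sleep(0.05)                     # let the grabber produce something
line_view = grabber.latest()         # the "line" drawer takes a frame...
rect_view = grabber.latest()         # ...and so does the "rectangle" drawer
grabber.stop()
```

That said, if the goal is simply both shapes in one window, the simplest route is one loop that calls cv2.line and cv2.rectangle on the same frame before a single imshow.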

Cannot read video output

I am using the background subtraction example. It works well, but the video output is unreadable. My video is grayscale, so that might be the cause of the problem. I couldn't find much information on how the various VideoWriter_fourcc and VideoWriter parameters work. I know that the video is 256x320, uint8.
import numpy as np
import cv2

# MOG2 background subtractor
cap = cv2.VideoCapture('videotest.avi')
fgbg = cv2.createBackgroundSubtractorMOG2()

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (256, 320))

while cap.isOpened():
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    if ret == True:
        cv2.imshow('frame', fgmask)
        out.write(fgmask)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
Using:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
works if you write the video as-is. In this case, I am writing the background-subtracted video, and the fix is:
fourcc = cv2.VideoWriter_fourcc(*'DIB ')
Note: do not forget the space after DIB. I am using Python 3.5 and OpenCV 3.1.
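An alternative worth noting: the root cause is that fgbg.apply() returns a single-channel (grayscale) mask, while the writer was opened for 3-channel color frames. cv2.VideoWriter takes an isColor=False argument for grayscale output, or the mask can be replicated into 3 channels before writing. The helper below does the replication with plain NumPy; cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR) is the OpenCV equivalent:

```python
import numpy as np

def mask_to_bgr(mask):
    # copy the single channel into three identical channels so a
    # color-mode VideoWriter will accept the frame
    return np.stack([mask, mask, mask], axis=-1)

mask = np.zeros((320, 256), dtype=np.uint8)   # the 256x320 uint8 mask
print(mask_to_bgr(mask).shape)                # (320, 256, 3)

# With OpenCV, the writer-side fix instead looks like (not run here):
# out = cv2.VideoWriter('output.avi', fourcc, 20.0, (256, 320), isColor=False)
```

Either route makes the written frames match what the writer was configured for, which is usually enough without changing the codec.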
