Output 2 Functions on 1 Video - python-3.x

I'm a beginner in Python and want to ask whether I can draw images from different functions onto one video. Below is my practice code.
import numpy as np
import cv2
from multiprocessing import Process

cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')

def line_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.line(img, (50, 180), (380, 180), (0, 255, 0), 5)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

def rectangle_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.rectangle(img, (180, 0), (380, 128), (0, 255, 0), 3)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    p1 = Process(target=rectangle_drawing)
    p1.start()
    p2 = Process(target=line_drawing)
    p2.start()
When I run the code, it gives me two windows playing the same video, one with the line drawn and the other with the rectangle drawn. How do I make both the rectangle and the line appear on the same video, keeping the functions separate instead of putting both in the same function?

I won't be able to give you an answer with Python code but...
What you have is two separate processes, each capturing data from the video feed independently and drawing its element on its own copy of the data.
What you need to do is have one process that is solely in charge of capturing data from your video feed, which then provides that data to the two workers. You would likely need to look into mutexes so that the two workers don't clash with each other.
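That said, if the goal is simply one window showing both shapes, you can avoid multiprocessing entirely: a single loop reads each frame once and passes it through both drawing functions, which stay separate. A minimal sketch, assuming the same video path as in the question:

import cv2

def line_drawing(img):
    cv2.line(img, (50, 180), (380, 180), (0, 255, 0), 5)
    return img

def rectangle_drawing(img):
    cv2.rectangle(img, (180, 0), (380, 128), (0, 255, 0), 3)
    return img

if __name__ == '__main__':
    cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')
    while cap.isOpened():
        ret, img = cap.read()
        if not ret:
            break
        img = line_drawing(img)       # the functions stay separate...
        img = rectangle_drawing(img)  # ...but annotate the same frame
        cv2.imshow('img', img)
        if cv2.waitKey(1) & 0xff == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()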
Resources
There are quite a few questions on SO and the internet that will help you achieve this:
opencv python Multi Threading Video Capture
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
http://algomuse.com/c-c/developing-a-multithreaded-real-time-video-processing-application-in-opencv
http://forum.piborg.org/node/2363

Related

Playing a movie in OpenCV

I get the following error while trying to show a movie:
cv2.imshow("Video Output", frames)
TypeError: Expected Ptr<cv::UMat> for argument 'mat'
The commented-out lines are my attempts to fix the problem, but I still get the error.
What am I doing wrong?
import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while True:
    frames = vid.read()
    # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
    # frames_arr = np.array(frames)
    cv2.imshow("Video Output", frames)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
A better implementation would be:
Check whether the video is opened, using vid.isOpened():
vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():
If the frame is returned successfully, display it:
    ret, frames = vid.read()
    if ret:
        # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
        # frames_arr = np.array(frames)
        cv2.imshow("Video Output", frames)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
Finally, make sure to close all the windows and release the VideoCapture object:
cv2.destroyAllWindows()
vid.release()
Code:
import cv2
import numpy as np

vid = cv2.VideoCapture("resources/Plaza.mp4")
while vid.isOpened():
    ret, frames = vid.read()
    if ret:
        # frames = cv2.cvtColor(frames, cv2.COLOR_RGB2BGR)
        # frames_arr = np.array(frames)
        cv2.imshow("Video Output", frames)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        # continue with the next frame
        continue
cv2.destroyAllWindows()
vid.release()
In your code, vid.read() returns two values. The first is a boolean which, according to the documentation:
returns a bool (True/False). If frame is read correctly, it will be True. So you can check end of the video by checking this return value.
So your frames variable is essentially a tuple containing the boolean and the frame itself. You would need to index into the second element (frames[1]) to play the video with imshow.
Always read the docs well!
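A one-line illustration of the difference, reusing the vid object from above:

frames = vid.read()       # a (bool, ndarray) tuple: this is what imshow choked on
ret, frames = vid.read()  # idiomatic unpacking: a bool plus the actual frame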

Easy way to make OpenCV's "undistort" run more efficiently?

I'm currently building an auto-aiming turret, and my cameras have a noticeable fisheye effect, which is totally fine. I'm using OpenCV's undistort() function to handle this, with data from a checkerboard camera calibration.
I will most likely be running the vision system on a Raspberry Pi 4, and currently my undistort call takes 80-90% of my CPU (i5-8600K OC to 5 GHz) when processing both of my cameras at 1280x720, ideally at this resolution since it's the largest available and will give the best accuracy. Also note I'm aiming for a 15 Hz update rate.
Any ideas on how to make this more lightweight? Here's the code I'm currently running as a test:
from cv2 import cv2
import numpy as np
import yaml
import time

cam1 = cv2.VideoCapture(0)
cam2 = cv2.VideoCapture(1)
cam1.set(3, 1280)  # width
cam1.set(4, 720)   # height
cam2.set(3, 1280)
cam2.set(4, 720)

# load calibration matrix and distortion coefficients
with open("calibration_matrix.yaml") as file:
    documents = yaml.full_load(file)
x = 0
for item, doc in documents.items():
    if x == 0:
        mtx = np.matrix(doc)
        x = 1
    else:
        dist = np.matrix(doc)

def camera(ID, asd):
    if asd == -1:
        ID.release()
        return None
    ret, frame = ID.read()
    if ret:
        h, w = frame.shape[:2]  # frame size (missing in the original post)
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
        undistortedFrame = cv2.undistort(frame, mtx, dist, None, newcameramtx)
        x, y, w, h = roi        # crop region (missing in the original post)
        undistortedFrame = undistortedFrame[y:y+h, x:x+w]
        return undistortedFrame

while True:
    frame1 = camera(cam1, 0)
    frame2 = camera(cam2, 0)
    cv2.imshow('Frame 1', frame1)
    cv2.imshow('Frame 2', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        camera(cam1, -1)
        camera(cam2, -1)
        cv2.destroyAllWindows()
        break
The comments above resolved it; here's the solution:
As #Micka said,
use initUndistortRectifyMap() (once) and remap() (for each image)
initUndistortRectifyMap() basically takes the heavy load off of the undistort function (Micka). In practice, you run initUndistortRectifyMap() once at the start of the program with the camera calibration matrix and distortion coefficients, and it returns two maps, map1 and map2.
These maps can be passed into the remap() function to remap your image, which is a significantly lighter operation than undistort(). In my particular case, I have a fisheye camera, and OpenCV has fisheye modules that are optimized to undistort fisheye cameras with ease.
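A minimal sketch of that approach, assuming mtx and dist are the calibration matrix and distortion coefficients loaded from the YAML file above (for a true fisheye lens, cv2.fisheye.initUndistortRectifyMap() is the analogous call):

import cv2

# mtx, dist: loaded from calibration_matrix.yaml as in the question
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
h, w = frame.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

# Build the undistortion maps once, outside the per-frame loop.
map1, map2 = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx,
                                         (w, h), cv2.CV_16SC2)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # remap() only applies the precomputed maps, which is much cheaper
    # per frame than recomputing the undistortion inside undistort().
    undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)
    cv2.imshow('Undistorted', undistorted)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()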

Thread inside a 'while True' loop

I have a project where I detect the driver's face in real time, and I'd like to combine it with an age-estimation NN (a pre-trained model developed by Tal Hassner).
The thing is, I only want to send ONE single frame to the NN while keeping the rest of my code running. I tried using threading.Thread as well as multiprocessing.pool.ThreadPool.
It seems that the NN executes successfully and predicts the driver's age; however, as soon as it's done, the 'while' loop breaks.
Attached is the relevant piece of my code:
cap = cv2.VideoCapture(0)
num_faces = {str(n): 0 for n in range(0, 10)}
CAP_AGE = True
try:
    while True:
        ret, frame = cap.read()
        if CAP_AGE:
            pool = ThreadPool(processes=1)
            async_result = pool.apply_async(predict_age, (frame,))
            age_label = async_result.get()
            CAP_AGE = False
        frame = np.flip(frame, axis=1)
        (detected_img, rects, num_faces) = adj_detect_face(frame, age_label)
        print("Found {} faces!".format(len(rects)))
        cv2.imshow('Normal', detected_img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
except:
    print('An Error Occurred, Cannot Run The Program\nConsider Restarting the Kernel')
finally:
    cap.release()
    cv2.destroyAllWindows()
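Two things stand out in this code: async_result.get() blocks until predict_age() returns, so the NN never actually runs alongside the loop, and the bare except: swallows whatever exception is raised inside the loop, which is a likely reason it appears to simply stop. A hedged sketch of a non-blocking variant, assuming predict_age and adj_detect_face are the question's functions and that adj_detect_face tolerates age_label being None until the prediction arrives:

from multiprocessing.pool import ThreadPool
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
pool = ThreadPool(processes=1)
async_result = None
age_label = None
try:
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if async_result is None:
            # Send exactly one frame to the NN; the loop keeps running.
            async_result = pool.apply_async(predict_age, (frame,))
        elif age_label is None and async_result.ready():
            age_label = async_result.get()  # returns immediately once ready
        frame = np.flip(frame, axis=1)
        detected_img, rects, num_faces = adj_detect_face(frame, age_label)
        cv2.imshow('Normal', detected_img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    # No bare except: let any real error surface instead of hiding it.
    cap.release()
    cv2.destroyAllWindows()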

Unable to capture image every 5 seconds using OpenCV

I am trying to capture an image every 5 seconds using OpenCV through my laptop's built-in webcam, with time.sleep(5) for the required pause. On every run the first image is saved correctly, but all subsequent images are saved in an unsupported format (when I open them). I am also unable to break the loop by pressing 'q'.
Below is my code.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(0)
framerate = cap.get(5)
x = 1
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    cap.release()
    # Our operations on the frame come here
    filename = 'C:/Users/shjohari/Desktop/Retail_AI/MySection/to_predict/image' + str(int(x)) + ".png"
    x = x + 1
    cv2.imwrite(filename, frame)
    time.sleep(5)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Any help is appreciated.
Just a small change solved my issue. I included the code below inside the loop :P
cap = cv2.VideoCapture(0)
framerate = cap.get(5)
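Spelled out, the fixed loop looks like the sketch below. Note also that cv2.waitKey() only registers key presses when an OpenCV window has focus, so without an imshow() window the 'q' check never fires; that is a separate issue from the saving bug.

import cv2
import time

x = 1
while True:
    # Reopen the capture each iteration (the fix above), so read()
    # returns a fresh frame instead of failing after cap.release().
    cap = cv2.VideoCapture(0)
    framerate = cap.get(5)
    ret, frame = cap.read()
    cap.release()
    if ret:
        cv2.imwrite('image' + str(x) + '.png', frame)  # path shortened for the sketch
        x += 1
    # Stop with Ctrl+C: 'q' cannot be detected without an OpenCV window.
    time.sleep(5)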

Cannot read video output

I am using the example for background subtraction. It works well, but the video output is unreadable. My video is grayscale, so that might be the reason for the problem. I couldn't find much information on how to work with the different parameters of VideoWriter_fourcc and VideoWriter. I know that the video is 256x320, uint8.
import numpy as np
import cv2

# MOG2 background subtractor
cap = cv2.VideoCapture('videotest.avi')
fgbg = cv2.createBackgroundSubtractorMOG2()

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (256, 320))

while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == True:
        fgmask = fgbg.apply(frame)  # moved inside the ret check so apply() never receives None
        cv2.imshow('frame', fgmask)
        out.write(fgmask)
        k = cv2.waitKey(30) & 0xff
        if k == 27:
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
Using:
fourcc = cv2.VideoWriter_fourcc(*'XVID')
works if you write the video as is. In this case, though, I am writing the video after background subtraction. The fix is:
fourcc = cv2.VideoWriter_fourcc(*'DIB ')
Note: do not forget the space after DIB. I am using Python 3.5 and OpenCV 3.1.
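An alternative worth noting, assuming the root cause is that the MOG2 mask is single-channel while VideoWriter defaults to expecting colour frames: open the writer with isColor=False (or convert the mask to BGR with cv2.cvtColor before writing). A sketch:

import cv2

cap = cv2.VideoCapture('videotest.avi')
fgbg = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'XVID')
# isColor=False tells the writer to expect single-channel frames,
# matching the uint8 mask produced by MOG2.
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (256, 320), isColor=False)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg.apply(frame)
    out.write(fgmask)
    cv2.imshow('frame', fgmask)
    if cv2.waitKey(30) & 0xff == 27:
        break

cap.release()
out.release()
cv2.destroyAllWindows()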
