Easy way to make OpenCV's "undistort" run more efficiently? - python-3.x

I'm currently in the works of making an auto-aiming turret, and my cameras have a noticeable fisheye effect, which is totally fine. I'm using OpenCV's undistort() function to handle this, with data from a checkerboard camera calibration.
I will most likely be running the vision system on a Raspberry Pi 4, and currently my undistort step takes 80-90% of my CPU (i5-8600k OC'd to 5 GHz) when processing both of my cameras at 1280x720. Ideally I'd keep that resolution, since it's the largest available and gives the best accuracy. Also note I'm aiming for a 15 Hz update rate.
Any ideas on how to make this more lightweight? Here's my code that I'm currently running as a test:
from cv2 import cv2
import numpy as np
import yaml
import time

cam1 = cv2.VideoCapture(0)
cam2 = cv2.VideoCapture(1)
cam1.set(3, 1280)  # CAP_PROP_FRAME_WIDTH
cam1.set(4, 720)   # CAP_PROP_FRAME_HEIGHT
cam2.set(3, 1280)
cam2.set(4, 720)

# load calibration matrix
with open("calibration_matrix.yaml") as file:
    documents = yaml.full_load(file)

x = 0
for item, doc in documents.items():
    if x == 0:
        mtx = np.matrix(doc)
        x = 1
    else:
        dist = np.matrix(doc)

def camera(ID, asd):
    if asd == -1:
        ID.release()
        return None
    ret, frame = ID.read()
    if ret:
        h, w = frame.shape[:2]
        newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
        undistortedFrame = cv2.undistort(frame, mtx, dist, None, newcameramtx)
        x, y, w, h = roi
        undistortedFrame = undistortedFrame[y:y+h, x:x+w]
        return undistortedFrame

while True:
    frame1 = camera(cam1, 0)
    frame2 = camera(cam2, 0)
    cv2.imshow('Frame 1', frame1)
    cv2.imshow('Frame 2', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        camera(cam1, -1)
        camera(cam2, -1)
        cv2.destroyAllWindows()
        break

Comments above resolved; here's the solution:
As @Micka said,
use initUndistortRectifyMap() (once) and remap() (for each image)
initUndistortRectifyMap() basically takes the heavy load off of the undistort function (Micka). In practice, you run initUndistortRectifyMap() once at the start of the program with the calibration matrix and distortion coefficients, and it returns two maps, map1 and map2.
These maps can then be passed to the remap() function, which is a significantly lighter function than undistort(), to undistort each frame. In my particular case I have a fisheye camera, and OpenCV also has a fisheye module with equivalent functions optimized for fisheye lenses.
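To make that concrete, here's a minimal hedged sketch of the map-based approach, reusing mtx and dist as loaded from the calibration YAML above and assuming 1280x720 frames:

import cv2

# mtx and dist are the camera matrix and distortion coefficients loaded
# from calibration_matrix.yaml, exactly as in the question's code.
w, h = 1280, 720
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

# The heavy work happens once: build the per-pixel lookup maps.
map1, map2 = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx,
                                         (w, h), cv2.CV_16SC2)

def undistort_fast(frame):
    # Per frame, remap() is just a table lookup, much cheaper than undistort().
    out = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    x, y, rw, rh = roi
    return out[y:y+rh, x:x+rw]

For fisheye lenses, cv2.fisheye.initUndistortRectifyMap() plays the same role with the fisheye distortion model.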

Related

Multi-threaded PyOpenCV display [duplicate]

I have to stitch images captured from many (9) cameras. Initially, I tried to capture frames from 2 cameras at 15 FPS. Then I connected 4 cameras (using an externally powered USB hub to provide enough power), but I could only see one stream.
For testing, I used the following script:
import numpy as np
import cv2
import imutils

# find all available camera indices
index = 0
arr = []
while True:
    cap = cv2.VideoCapture(index)
    if not cap.read()[0]:
        break
    else:
        arr.append(index)
    cap.release()
    index += 1

video_captures = [cv2.VideoCapture(idx) for idx in arr]

while True:
    # Capture frame-by-frame
    frames = []
    frames_preview = []
    for i in arr:
        # skip webcam capture
        if i == 1: continue
        ret, frame = video_captures[i].read()
        if ret:
            frames.append(frame)
            small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
            frames_preview.append(small)
    for i, frame in enumerate(frames_preview):
        cv2.imshow('Cam {}'.format(i), frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the captures
for video_capture in video_captures:
    video_capture.release()
cv2.destroyAllWindows()
Is there any limit on the number of cameras? Does anyone know the right way to capture frames from multiple cameras?
To capture multiple streams with OpenCV, I recommend using threading, which can improve performance by offloading the heavy I/O operations to a separate thread. Since accessing a webcam/IP/RTSP stream with cv2.VideoCapture().read() is a blocking operation, the main program is stuck until the frame is read from the camera device. With multiple streams this latency is definitely visible. To remedy this, we can spawn another thread that retrieves frames into a deque in parallel, instead of relying on a single thread to obtain frames in sequential order. Threading allows frames to be read continuously without impacting the performance of the main program. The idea of capturing a single stream using threading and OpenCV comes from a previous answer, Python OpenCV multithreading streaming from camera.
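A minimal hedged sketch of that single-stream pattern (my own condensed illustration, not the original answer's code; the GUI below applies the same idea per camera):

from collections import deque
from threading import Thread
import cv2

class VideoStream:
    """Reads frames on a background thread; callers grab the newest frame."""
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        self.frames = deque(maxlen=1)  # only the latest frame is kept
        Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # the blocking read() calls happen here, off the main thread
        while self.capture.isOpened():
            ret, frame = self.capture.read()
            if ret:
                self.frames.append(frame)

    def read(self):
        return self.frames[-1] if self.frames else None

stream = VideoStream(0)
while True:
    frame = stream.read()
    if frame is not None:
        cv2.imshow('cam', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break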
But if you want to display multiple streams, OpenCV alone is not enough; you can combine it with a GUI framework to stitch each image onto a single display. I will use PyQt4 as the framework, qdarkstyle for the GUI stylesheet, and imutils for OpenCV convenience functions.
Here is a stripped-down version of the camera GUI I currently use, without the placeholder images, credential/admin login page, and camera-switching ability. I've kept the automatic camera-reconnect feature in case the internet dies or a camera connection is lost. I only have 8 cameras here, but it is very simple to add another camera and should not impact performance. This camera GUI currently runs at about ~60 FPS, so it is real-time. You can easily rearrange the layout using PyQt layouts, so feel free to modify the code. Remember to change the stream links!
from PyQt4 import QtCore, QtGui
import qdarkstyle
from threading import Thread
from collections import deque
from datetime import datetime
import time
import sys
import cv2
import imutils
class CameraWidget(QtGui.QWidget):
    """Independent camera feed
    Uses threading to grab IP camera frames in the background

    @param width - Width of the video frame
    @param height - Height of the video frame
    @param stream_link - IP/RTSP/Webcam link
    @param aspect_ratio - Whether to maintain frame aspect ratio or force into frame
    """
    def __init__(self, width, height, stream_link=0, aspect_ratio=False, parent=None, deque_size=1):
        super(CameraWidget, self).__init__(parent)

        # Initialize deque used to store frames read from the stream
        self.deque = deque(maxlen=deque_size)

        # Slight offset is needed since PyQt layouts have a built-in padding
        # So add offset to counter the padding
        self.offset = 16
        self.screen_width = width - self.offset
        self.screen_height = height - self.offset
        self.maintain_aspect_ratio = aspect_ratio

        self.camera_stream_link = stream_link

        # Flag to check if camera is valid/working
        self.online = False
        self.capture = None
        self.video_frame = QtGui.QLabel()

        self.load_network_stream()

        # Start background frame grabbing
        self.get_frame_thread = Thread(target=self.get_frame, args=())
        self.get_frame_thread.daemon = True
        self.get_frame_thread.start()

        # Periodically set video frame to display
        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.set_frame)
        self.timer.start(.5)

        print('Started camera: {}'.format(self.camera_stream_link))

    def load_network_stream(self):
        """Verifies stream link and opens new stream if valid"""
        def load_network_stream_thread():
            if self.verify_network_stream(self.camera_stream_link):
                self.capture = cv2.VideoCapture(self.camera_stream_link)
                self.online = True
        self.load_stream_thread = Thread(target=load_network_stream_thread, args=())
        self.load_stream_thread.daemon = True
        self.load_stream_thread.start()

    def verify_network_stream(self, link):
        """Attempts to receive a frame from given link"""
        cap = cv2.VideoCapture(link)
        if not cap.isOpened():
            return False
        cap.release()
        return True

    def get_frame(self):
        """Reads frame, resizes, and converts image to pixmap"""
        while True:
            try:
                if self.capture.isOpened() and self.online:
                    # Read next frame from stream and insert into deque
                    status, frame = self.capture.read()
                    if status:
                        self.deque.append(frame)
                    else:
                        self.capture.release()
                        self.online = False
                else:
                    # Attempt to reconnect
                    print('attempting to reconnect', self.camera_stream_link)
                    self.load_network_stream()
                    self.spin(2)
                self.spin(.001)
            except AttributeError:
                pass

    def spin(self, seconds):
        """Pause for set amount of seconds, replaces time.sleep so program doesn't stall"""
        time_end = time.time() + seconds
        while time.time() < time_end:
            QtGui.QApplication.processEvents()

    def set_frame(self):
        """Sets pixmap image to video frame"""
        if not self.online:
            self.spin(1)
            return

        if self.deque and self.online:
            # Grab latest frame
            frame = self.deque[-1]

            # Keep frame aspect ratio
            if self.maintain_aspect_ratio:
                self.frame = imutils.resize(frame, width=self.screen_width)
            # Force resize
            else:
                self.frame = cv2.resize(frame, (self.screen_width, self.screen_height))

            # Add timestamp to cameras
            cv2.rectangle(self.frame, (self.screen_width-190, 0), (self.screen_width, 50), color=(0,0,0), thickness=-1)
            cv2.putText(self.frame, datetime.now().strftime('%H:%M:%S'), (self.screen_width-185, 37),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.2, (255,255,255), lineType=cv2.LINE_AA)

            # Convert to pixmap and set to video frame
            self.img = QtGui.QImage(self.frame, self.frame.shape[1], self.frame.shape[0],
                                    QtGui.QImage.Format_RGB888).rgbSwapped()
            self.pix = QtGui.QPixmap.fromImage(self.img)
            self.video_frame.setPixmap(self.pix)

    def get_video_frame(self):
        return self.video_frame

def exit_application():
    """Exit program event handler"""
    sys.exit(1)

if __name__ == '__main__':
    # Create main application window
    app = QtGui.QApplication([])
    app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt())
    app.setStyle(QtGui.QStyleFactory.create("Cleanlooks"))
    mw = QtGui.QMainWindow()
    mw.setWindowTitle('Camera GUI')
    mw.setWindowFlags(QtCore.Qt.FramelessWindowHint)

    cw = QtGui.QWidget()
    ml = QtGui.QGridLayout()
    cw.setLayout(ml)
    mw.setCentralWidget(cw)
    mw.showMaximized()

    # Dynamically determine screen width/height
    screen_width = QtGui.QApplication.desktop().screenGeometry().width()
    screen_height = QtGui.QApplication.desktop().screenGeometry().height()

    # Create Camera Widgets
    username = 'Your camera username!'
    password = 'Your camera password!'

    # Stream links
    camera0 = 'rtsp://{}:{}@192.168.1.43:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera1 = 'rtsp://{}:{}@192.168.1.45/axis-media/media.amp'.format(username, password)
    camera2 = 'rtsp://{}:{}@192.168.1.47:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera3 = 'rtsp://{}:{}@192.168.1.40:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera4 = 'rtsp://{}:{}@192.168.1.44:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera5 = 'rtsp://{}:{}@192.168.1.42:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera6 = 'rtsp://{}:{}@192.168.1.46:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)
    camera7 = 'rtsp://{}:{}@192.168.1.41:554/cam/realmonitor?channel=1&subtype=0'.format(username, password)

    # Create camera widgets
    print('Creating Camera Widgets...')
    zero = CameraWidget(screen_width//3, screen_height//3, camera0)
    one = CameraWidget(screen_width//3, screen_height//3, camera1)
    two = CameraWidget(screen_width//3, screen_height//3, camera2)
    three = CameraWidget(screen_width//3, screen_height//3, camera3)
    four = CameraWidget(screen_width//3, screen_height//3, camera4)
    five = CameraWidget(screen_width//3, screen_height//3, camera5)
    six = CameraWidget(screen_width//3, screen_height//3, camera6)
    seven = CameraWidget(screen_width//3, screen_height//3, camera7)

    # Add widgets to layout
    print('Adding widgets to layout...')
    ml.addWidget(zero.get_video_frame(), 0, 0, 1, 1)
    ml.addWidget(one.get_video_frame(), 0, 1, 1, 1)
    ml.addWidget(two.get_video_frame(), 0, 2, 1, 1)
    ml.addWidget(three.get_video_frame(), 1, 0, 1, 1)
    ml.addWidget(four.get_video_frame(), 1, 1, 1, 1)
    ml.addWidget(five.get_video_frame(), 1, 2, 1, 1)
    ml.addWidget(six.get_video_frame(), 2, 0, 1, 1)
    ml.addWidget(seven.get_video_frame(), 2, 1, 1, 1)

    print('Verifying camera credentials...')
    mw.show()

    QtGui.QShortcut(QtGui.QKeySequence('Ctrl+Q'), mw, exit_application)
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
Related camera/IP/RTSP, FPS, video, threading, and multiprocessing posts
Python OpenCV streaming from camera - multithreading, timestamps
Video Streaming from IP Camera in Python Using OpenCV cv2.VideoCapture
How to capture multiple camera streams with OpenCV?
OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?
Storing RTSP stream as video file with OpenCV VideoWriter
OpenCV video saving
Python OpenCV multiprocessing cv2.VideoCapture mp4

How to implement the code for shadow removal from mask in opencv?

I am using the MoG method in OpenCV to detect moving objects against a static background, but it also detects shadows. I want to remove the shadows from the mask. I tried thresholding on gray (since shadows are marked gray in the mask), but the threshold also removes the gray parts of the object itself. I am trying to implement the algorithm from this paper: https://pdfs.semanticscholar.org/53e0/7f60d03461def8ed4f765f2a6b7dfc4bfbd0.pdf
Can anyone tell me how to implement this in Python?
import cv2
import numpy as np

cap = cv2.VideoCapture('TownCentreXVID.avi')
fgbg = cv2.createBackgroundSubtractorMOG2()

while(1):
    _, frame = cap.read()
    mask = fgbg.apply(frame)

    kernel = np.ones((5,5), np.uint8)
    opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    closing = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    window = cv2.namedWindow('Original', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)
    window = cv2.namedWindow('Mask', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)
    window = cv2.namedWindow('Opening', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)
    #window = cv2.namedWindow('Closing', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)

    cv2.imshow('Original', frame)
    cv2.imshow('Mask', mask)  # was 'thresh', which is undefined in this snippet
    cv2.imshow('Opening', opening)
    #cv2.imshow('Closing', closing)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
cap.release()
If you are using the BackgroundSubtractorMOG2 or BackgroundSubtractorKNN background subtractor, you can simply set the shadow value to false or 0 (C++ shown here):
mog2->setShadowValue(0);
// Or
knn->setShadowValue(0);
And it will remove the shadow from the mask.
With shadow value = true
With shadow value = false
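The same setters are exposed in the Python bindings used in the question; a minimal hedged sketch of the equivalent calls:

import cv2

# Option 1: detect shadows but paint them as background (0) in the mask
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
fgbg.setShadowValue(0)

# Option 2: turn shadow detection off entirely
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

# Option 3: shadows default to gray (127) in the mask, so threshold them out
# _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)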

Reading a barcode using OpenCV QRCodeDetector

I am trying to use OpenCV on Python3 to create an image with a QR code and read that code back.
Here is some relevant code:
def make_qr_code(self, data):
    qr = qrcode.QRCode(
        version=2,
        error_correction=qrcode.constants.ERROR_CORRECT_H,
        box_size=10,
        border=4,
    )
    qr.add_data(data)
    return numpy.array(qr.make_image().get_image())

# // DEBUG
img = numpy.ones([380, 380, 3]) * 255
index = self.make_qr_code('Hello StackOverflow!')
img[:index.shape[0], :index.shape[1]][index] = [0, 0, 255]
frame = img
# // DEBUG

self.show_image_in_canvas(0, frame)

frame_mono = cv.cvtColor(numpy.uint8(frame), cv.COLOR_BGR2GRAY)
self.show_image_in_canvas(1, frame_mono)

qr_detector = cv.QRCodeDetector()
data, bbox, rectifiedImage = qr_detector.detectAndDecode(frame_mono)
if len(data) > 0:
    print("Decoded Data : {}".format(data))
    self.show_image_in_canvas(2, rectifiedImage)
else:
    print("QR Code not detected")
(the calls to show_image_in_canvas are just for showing the images in my GUI so I can see what is going on).
When inspecting the frame and frame_mono visually, it looks OK to me
However, the QR Code Detector doesn't return anything (going into the else: "QR Code not detected").
There is literally nothing else in the frame than the QR code I just generated. What do I need to configure about cv.QRCodeDetector or what additional preprocessing do I need to do on my frame to make it find the QR code?
OP here; solved the problem by having a good look at the generated QR code and comparing it to some other sources.
The problem was not in the detection, but in the generation of the QR codes.
Apparently the array that qrcode.QRCode returns has False (or maybe it was 0 and I assumed it was a boolean) in the grid squares that are part of the code, and True (or non-zero) in the squares that are not.
So when I did img[:index.shape[0], :index.shape[1]][index] = [0, 0, 255] I was actually creating a negative image of the QR code.
When I inverted the index array the QR code changed from the image on the left to the image on the right and the detection succeeded.
In addition I decided to switch to the ZBar library because it's much better at detecting these codes under less perfect circumstances (like from a webcam image).
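As a side note, a minimal hedged sketch of the ZBar route via the pyzbar package (assumes pip install pyzbar; qrcode.png is a hypothetical test image):

import cv2
from pyzbar import pyzbar

img = cv2.imread('qrcode.png')
for code in pyzbar.decode(img):
    # each result carries the symbology type and the decoded payload
    print(code.type, code.data.decode('utf-8'))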
import cv2
import sys

filename = sys.argv[1]

# Or you can take the file directly like this:
# filename = f'images/filename.jpg' where images is the folder for the files you are reading

# read the QR code image
# if the QR code is not black/white, it is better to convert it to grayscale
# zero means grayscale
img = cv2.imread(filename, 0)
img_origin = cv2.imread(filename)

# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()

# detect and decode
data, bbox, straight_qrcode = detector.detectAndDecode(img)

# if there is a QR code
if bbox is not None:
    print(f"QRCode data:\n{data}")

    # display the image with lines
    # bbox = [[[float, float], ...]], so convert floats to ints and loop over the first element
    n_lines = len(bbox[0])
    bbox1 = bbox.astype(int)  # float to int conversion
    for i in range(n_lines):
        # draw all lines
        point1 = tuple(bbox1[0, i])
        point2 = tuple(bbox1[0, (i + 1) % n_lines])
        cv2.line(img_origin, point1, point2, color=(255, 0, 0), thickness=2)

    # display the result (the original image, which the lines were drawn on)
    cv2.imshow("img", img_origin)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
else:
    print("QR code not detected")
To restate the accepted answer: the background of the QR code must be white and the foreground must be black. So if the generated QR code has a white foreground, you must invert the colors, e.g.:
from cv2 import cv2
img = cv2.imread('C:/Users/N/qrcode.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Invert colors so foreground is black
img_invert = cv2.bitwise_not(img_gray)
cv2.imshow('gray', img_gray)
cv2.imshow('inverted', img_invert)
cv2.waitKey(1)
qr_detector = cv2.QRCodeDetector()
text, _, _ = qr_detector.detectAndDecode(img_invert)
print(text)

Output 2 Functions on 1 Video

I'm a beginner in Python and want to ask whether I can draw images from different functions onto one video. Below is my practice code.
import numpy as np
import cv2
from multiprocessing import Process

cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')

def line_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.line(img, (50,180), (380,180), (0,255,0), 5)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

def rectangle_drawing():
    while cap.isOpened():
        ret, img = cap.read()
        if ret is True:
            cv2.rectangle(img, (180,0), (380,128), (0,255,0), 3)
            cv2.imshow('img', img)
            k = cv2.waitKey(1) & 0xff
            if k == 27:
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    p1 = Process(target=rectangle_drawing)
    p1.start()
    p2 = Process(target=line_drawing)
    p2.start()
When I run the code, it gives me two windows playing the same video, one with the line drawn and the other with the rectangle. How do I get both the rectangle and the line onto the same video while keeping the functions separate, instead of putting both in one function?
I won't be able to give you an answer with Python code, but...
What you have is two separate processes, each capturing data from the video feed independently and drawing its element on its own copy of the data.
What you need is one process that is solely in charge of capturing data from the video feed, which then provides those frames to the other two threads. You would likely need to look into mutexes so that the two threads don't clash with each other (a simpler single-loop alternative is sketched below).
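In Python terms, if the actual goal is just both shapes on one window, a single loop that reads each frame once and calls both drawing functions is enough; no threads or mutexes needed. A hedged sketch reusing the question's path and coordinates:

import cv2

def draw_line(img):
    cv2.line(img, (50, 180), (380, 180), (0, 255, 0), 5)

def draw_rectangle(img):
    cv2.rectangle(img, (180, 0), (380, 128), (0, 255, 0), 3)

cap = cv2.VideoCapture('C:/Users/littl/Desktop/Presentation/Crop_DownResolution.mp4')
while cap.isOpened():
    ret, img = cap.read()
    if not ret:
        break
    # one capture, both drawing functions applied to the same frame
    draw_line(img)
    draw_rectangle(img)
    cv2.imshow('img', img)
    if cv2.waitKey(1) & 0xff == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()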
Resources
There are quite a few questions on SO and the internet that will help you achieve this:
opencv python Multi Threading Video Capture
https://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/
http://algomuse.com/c-c/developing-a-multithreaded-real-time-video-processing-application-in-opencv
http://forum.piborg.org/node/2363

Text written on the frame also turns gray when video frames turned into grayscale

I am building a motion detector application. For the motion detection algorithm to work, I converted the frames to grayscale, and the application now detects motion. But when I try to put text on the frame, for example the message "MOVING", the text also turns gray and is hardly visible. How do I draw colored text on a video frame?
Below is my motion detection application code
import cv2
import numpy as np
from skimage.measure import compare_ssim
from twilio.rest import Client

#we can compare two images using Structural Similarity
#so a small change in pixel value won't prompt this method to term both images as dissimilar
#the closer the value is to 1, the more similar the two images are
def ssim(A, B):
    return compare_ssim(A, B, data_range=A.max() - A.min())

#capture a video either from a file or a live video stream
cap = cv2.VideoCapture(0)
first_frame = True
prev_frame = None
current_frame = None

#we keep a count of the frames
frame_counter = 0

while True:
    if frame_counter == 0:
        #prev_frame will always trail behind the current_frame
        prev_frame = current_frame

    #get a frame from the video
    ret, current_frame = cap.read()

    #if we reach the end of the video in case of a video file, stop reading
    if current_frame is None:
        break

    #convert the image to grayscale
    current_frame = cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY)

    if first_frame:
        #for the first time prev_frame and current_frame will be the same
        prev_frame = current_frame
        first_frame = False

    if frame_counter == 9:
        #compare two images based on SSIM
        ssim_val = ssim(current_frame, prev_frame)
        print(ssim_val)
        #if there is a major drop in the SSIM value i.e. it has detected an object
        if ssim_val < 0.8:
            # Here I want to put colored text on the screen
            cv2.putText(current_frame, "MOVING", (100, 300),
                        cv2.FONT_HERSHEY_TRIPLEX, 4, (255, 0, 0))
        frame_counter = -1

    #show the video as a series of frames
    cv2.imshow("Motion Detection", current_frame)  #(name of the window, image)
    frame_counter += 1

    key = cv2.waitKey(1) & 0xFF  #waitKey's return is masked with & 0xFF to get the char value
    if key == ord('q'):  #ASCII value of 'q'
        break

#release the resources allocated to the video file or video stream
cap.release()
#destroy all the windows
cv2.destroyAllWindows()
I searched online and found a snippet that suggests converting the grayscale frame back to BGR:
backtorgb = cv2.cvtColor(current_frame, cv2.COLOR_GRAY2RGB)
But this didn't work. I even took a copy of the current frame before it was converted to grayscale and tried to write on that color frame, but the text still comes out gray rather than colored. What should I do?
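For what it's worth, a hedged sketch of the usual fix (my own illustration, not from the original thread): keep the grayscale frame for the SSIM comparison, but convert it back to a 3-channel BGR image before drawing and displaying; color only survives on a 3-channel image. A self-contained demonstration with a stand-in frame:

import cv2
import numpy as np

gray = np.full((480, 640), 128, np.uint8)  # stand-in for the grayscale current_frame
display_frame = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # 3 channels again
cv2.putText(display_frame, "MOVING", (100, 300),
            cv2.FONT_HERSHEY_TRIPLEX, 4, (255, 0, 0))  # blue in BGR, stays blue
cv2.imshow("Motion Detection", display_frame)
cv2.waitKey(0)
cv2.destroyAllWindows()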
