Regionprops (skimage.measure) for Video in Python

I got a piece of code on the internet that labels image regions (the scikit-image 'Label image regions' example) and tried to run it over a video, but all I get is the first frame, and then, after closing the first frame's window, an error: "max() arg is an empty sequence", raised from the plt.tight_layout() line of my code. I am trying to get labels for all the frames in my video instead of the single-image case shown in the linked example. Basically, the code should display/plot all the frames with labels.
Any help will be really useful. Please find my code below:
import cv2
import numpy as np
from matplotlib import pyplot as plt
import time
import matplotlib.patches as mpatches
from skimage import data
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label, regionprops
from skimage.morphology import closing, square
from skimage.color import label2rgb

cap = cv2.VideoCapture('test3.mp4')
fig, ax = plt.subplots(figsize=(10, 6))

while(1):
    t = time.time()
    ret, frame2 = cap.read()
    image = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    thresh = threshold_otsu(image)
    bw = closing(image > thresh, square(3))
    # remove artifacts connected to image border
    cleared = clear_border(bw)
    # label image regions
    label_image = label(cleared)
    image_label_overlay = label2rgb(label_image, image=frame2)
    x = regionprops(label_image)
    area2 = [r.area for r in x]
    print(area2)
    ax.imshow(image_label_overlay)
    for region in regionprops(label_image):
        # take regions with large enough areas
        if region.area >= 100:
            # draw rectangle around segmented coins
            minr, minc, maxr, maxc = region.bbox
            rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                                      fill=False, edgecolor='red', linewidth=2)
            ax.add_patch(rect)
    ax.set_axis_off()
    plt.tight_layout()
    plt.show()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Voila!
And the solution is:
1.) Error rectification: the "max() arg is an empty sequence" error from the plt.tight_layout() line can be removed by using fig.tight_layout rather than plt.tight_layout. After I closed the first frame of the video (which was not updating; that's another problem I am still pondering!), the figure was empty, and tight_layout raised an exception when run on an empty figure.
2.) Running the 'Label image regions' code on a video is made possible if you replace the lines
rect = mpatches.Rectangle((minc, minr), maxc - minc + 50, maxr - minr + 50,
                          fill=False, edgecolor='red', linewidth=2)
ax.add_patch(rect)
ax.set_axis_off()
plt.tight_layout()
plt.show()
with
cv2.rectangle(frame2, (minc, minr), (minc + maxc - minc, minr + maxr - minr), (0, 255, 0), 2)
cv2.imshow('ObjectTrack', frame2)  # this line goes outside the if-block
Basically, display the video the same way as in a simple 'Capture Video from Camera' OpenCV Python program.
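Putting the two changes together, a minimal sketch of the adapted loop (assuming the same 'test3.mp4' input and the 100-pixel area threshold from the question; the endpoint (minc + maxc - minc, minr + maxr - minr) is written simply as (maxc, maxr)) might look like this:

import cv2
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.measure import label, regionprops
from skimage.morphology import closing, square

cap = cv2.VideoCapture('test3.mp4')

while True:
    ret, frame2 = cap.read()
    if not ret:  # stop when the video ends
        break
    gray = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    bw = closing(gray > threshold_otsu(gray), square(3))
    cleared = clear_border(bw)               # remove blobs touching the border
    label_image = label(cleared)             # label connected regions
    for region in regionprops(label_image):
        if region.area >= 100:               # keep only large regions
            minr, minc, maxr, maxc = region.bbox
            cv2.rectangle(frame2, (minc, minr), (maxc, maxr), (0, 255, 0), 2)
    cv2.imshow('ObjectTrack', frame2)        # drawn frames shown with OpenCV, not matplotlib
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()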

Related

Serial visualisation with MatPlotLib animate not updating properly

I want to visualise values from a pressure-sensing mat (32x32 pressure points) in real time as a heatmap with MatPlotLib animation.
The mat outputs 1025 bytes (1024 values + an 'end byte', which is always 255). I print these out from inside the animate function, but it only works if I comment out plt.imshow(np_ints).
With plt.imshow, the MatPlotLib window pops up and even reads the values... I see it in the heatmap when I start the program while pressing down on the sensor, but when I release it, it seems to slowly go through all the readings in the serial buffer instead of being real time. I'm not sure if it's because I'm not handling the serial port properly or something to do with how FuncAnimation works. Can someone point me in the right direction please?
import numpy as np
import serial
import matplotlib.pyplot as plt
import matplotlib.animation as animation

np.set_printoptions(threshold=1024, linewidth=1500)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

def animate(i):
    # np_ints = np.random.random((200, 200)) # FOR TESTING ONLY
    if ser.inWaiting:
        ser_bytes = bytearray(ser.read_until(b'\xFF'))  # should read 1025 bytes (1024 values + end byte)
        if len(ser_bytes) != 1025: return  # prevent error from an 'incomplete' serial reading
        ser_ints = [int(x) for x in ser_bytes]
        np_ints = np.array(ser_ints[:-1])  # drop the end byte
        np_ints = np_ints.reshape(32, 32)
        print(len(ser_ints))
        print(np.matrix(np_ints))
        plt.imshow(np_ints)  # THIS BREAKS IT

if __name__ == '__main__':
    ser = serial.Serial('/dev/tty.usbmodem14101', 11520)
    ser.flushInput()
    ani = animation.FuncAnimation(fig, animate, interval=10)
    plt.show()
The code below animates random numbers using blitting. The trick is to not use plt.imshow but to update the existing artist's data instead. plt.imshow would create another image on the current axes on every call, and the slowdown is caused by the many artists that then accumulate in the figure.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

np.set_printoptions(threshold=1024, linewidth=1500)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)

# create dummy data
h = ax.imshow(np.random.rand(32, 32))

def animate(i):
    # np_ints = np.random.random((200, 200)) # FOR TESTING ONLY
    # put here code for reading data
    np_ints = np.random.rand(32, 32)  # not ints here, but principle stays the same
    # start blitting: only the image artist is redrawn
    h.set_data(np_ints)
    return h,

if __name__ == '__main__':
    ani = animation.FuncAnimation(fig, animate, interval=10, blit=True)
    plt.show()
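For the serial case, a possible sketch (my own adaptation, not part of the original answer) that combines the question's packet reading with the set_data approach, assuming the same port settings, 1025-byte packets ending in 0xFF, and the 32x32 layout:

import numpy as np
import serial
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
h = ax.imshow(np.zeros((32, 32)), vmin=0, vmax=255)  # fixed colour scale for byte values

def animate(i):
    # in_waiting is the pyserial property (inWaiting() is the older method form)
    if ser.in_waiting:
        ser_bytes = bytearray(ser.read_until(b'\xFF'))  # 1024 values + end byte
        if len(ser_bytes) != 1025:
            return h,                                   # skip incomplete packets
        np_ints = np.array(ser_bytes[:-1]).reshape(32, 32)
        h.set_data(np_ints)                             # update the existing artist in place
    return h,

if __name__ == '__main__':
    ser = serial.Serial('/dev/tty.usbmodem14101', 11520)
    ser.flushInput()
    ani = animation.FuncAnimation(fig, animate, interval=10, blit=True)
    plt.show()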

How to loop and plot correctly on 4D Nifti MRI image

I have 4D NIfTI images with dimensions [x, y, slices, frames]: the first two are the spatial resolution, the third is the slice number, and the last one is the frame number. I tried to plot all the slices of a specific frame into one figure and update it frame by frame using for loops, instead of doing all the indexing manually as before, but my subplots are not updating with the new frame (except the last one at the bottom), as you can see in the attached photo. How can I solve this issue, please?
#==================================
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
#==================================
# load image (4D) [X,Y,Z_slice,time]
nii_img = nib.load(path)
nii_data = nii_img.get_fdata()
#===================================================
fig, ax = plt.subplots(4, 3, constrained_layout=True)
fig.canvas.set_window_title('4D Nifti Image')
fig.suptitle('4D_Nifti 10 slices 30 time Frames', fontsize=16)
#-------------------------------------------------------------------------------
mng = plt.get_current_fig_manager()
mng.full_screen_toggle()

slice_counter = 0
for i in range(30):
    for j in range(3):
        for k in range(3):
            if slice_counter < 9:
                ax[j, k].cla()
                ax[j, k].imshow(nii_data[:, :, slice_counter, i], cmap='gray', interpolation=None)
                ax[j, k].set_title("frame {}".format(i))
                ax[j, k].axis('off')
                slice_counter += 1
            else:
                #---------------------------------
                ax[3, 0].axis('off')
                ax[3, 2].axis('off')
                #---------------------------------
                ax[3, 1].cla()
                ax[3, 1].nii_data(nii_data[:, :, 9, i], cmap='gray', interpolation=None)
                ax[3, 1].set_title("frame {}".format(i))
                ax[3, 1].axis('off')
    #---------------------------------
    # Note that using time.sleep does *not* work here!
    #---------------------------------
    plt.pause(.05)
plt.close('all')
At the moment it is not quite clear to me how your output should look, because the second column in the image has more entries than the others.
Please clarify this in your question, and also update your code, which is not working due to inconsistent variable names and messed-up indentation.
In the meantime, here is a first shot in which the goal is to print all your slices along the x-axis, with each frame on the y-axis.
I adapted the code so that it prints the first four frames for the first three slices.
#==================================
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
#==================================
# load image (4D) [X,Y,Z_slice,time]
nii_img = nib.load(path)
nii_data = nii_img.get_fdata()
#===================================================
number_of_slices = 3
number_of_frames = 4

fig, ax = plt.subplots(number_of_frames, number_of_slices, constrained_layout=True)
fig.canvas.set_window_title('4D Nifti Image')
fig.suptitle('4D_Nifti 10 slices 30 time Frames', fontsize=16)
#-------------------------------------------------------------------------------
mng = plt.get_current_fig_manager()
mng.full_screen_toggle()

for slice in range(number_of_slices):
    for frame in range(number_of_frames):
        ax[frame, slice].imshow(nii_data[:, :, slice, frame], cmap='gray', interpolation=None)
        ax[frame, slice].set_title("layer {} / frame {}".format(slice, frame))
        ax[frame, slice].axis('off')
plt.show()
The sample output for a black image looks like this:
sample output
Update - 05.04.2020
Given the information from the discussion in the comments, here is the updated version:
#==================================
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
from math import ceil
#==================================
# Load image (4D) [X,Y,Z_slice,time]
nii_img = nib.load(path)
nii_data = nii_img.get_fdata()
#===================================================
number_of_slices = nii_data.shape[2]
number_of_frames = nii_data.shape[3]

# Define subplot layout
aspect_ratio = 16. / 9
number_of_colums = int(number_of_slices / aspect_ratio)
if number_of_slices % number_of_colums > 0:
    number_of_colums += 1
number_of_rows = ceil(number_of_slices / number_of_colums)

# Setup figure
fig, axs = plt.subplots(number_of_rows, number_of_colums, constrained_layout=True)
fig.canvas.set_window_title('4D Nifti Image')
fig.suptitle('4D_Nifti {} slices {} time Frames'.format(number_of_slices, number_of_frames), fontsize=16)
#-------------------------------------------------------------------------------
mng = plt.get_current_fig_manager()
mng.full_screen_toggle()

for frame in range(number_of_frames):
    for slice, ax in enumerate(axs.flat):
        # For every slice print the image, otherwise show empty space.
        if slice < number_of_slices:
            ax.imshow(nii_data[:, :, slice, frame], cmap='gray', interpolation=None)
            ax.set_title("layer {} / frame {}".format(slice, frame))
            ax.axis('off')
        else:
            ax.axis('off')
    plt.pause(0.05)
plt.close('all')
The output will look like: second sample output

Background removal from webcam OPENCV PYTHON

I'm creating a script that will read the state of a supermarket and tell me if there are products missing.
For example, in the image below there are some places where products are missing. I'm using the FAST method to find all the corners in the frame, but sometimes the script detects the floor corners. What I want to do is remove the floor from the frame before I find the corners.
import cv2
import numpy as np
image = cv2.imread('gondola_imagem.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
You can use a mask to remove the areas you are not interested in. For example, with the following image as a mask you can get the results below.
Mask
Result
The code is as follows:
import numpy as np
import cv2
image = cv2.imread('test.jpg')
mask = cv2.imread('mask.jpg', 0)
cv2.imshow('Original', image)
cv2.imshow('Mask', mask)
res = cv2.bitwise_and(image,image,mask = mask)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
fast = cv2.FastFeatureDetector_create()
# Obtain Key points, by default non max suppression is On
# to turn off set fast.setBool('nonmaxSuppression', False)
keypoints = fast.detect(gray, None)
print ("Number of keypoints Detected: ", len(keypoints))
image = cv2.drawKeypoints(image, keypoints, None,
                          flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite('result.jpg', image)
cv2.imshow('Feature Method - FAST', image)
cv2.waitKey()
cv2.destroyAllWindows()
Edit:
If you want to do this in real time (video from a webcam) you just need to do it for every frame you get from the video camera. As long as the camera is not moving, you should be able to use the same mask for all the frames. You could make the code above a function and then call it with a frame as a parameter, as in the following code (a sketch of one possible function is shown after the loop below):
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Following function will have to be created with the previous code
    CallFunctionToPreviousCode(frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
The code above was taken from the OpenCV Python tutorials. They are a good place for learning OpenCV with the Python programming language.
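For reference, a minimal sketch of such a per-frame function (the function name is illustrative; it assumes the same 'mask.jpg' from above, sized to match the webcam frames):

import cv2

mask = cv2.imread('mask.jpg', 0)                 # single-channel mask, white = area to keep
fast = cv2.FastFeatureDetector_create()

def detect_keypoints_without_floor(frame):
    res = cv2.bitwise_and(frame, frame, mask=mask)   # blank out the floor area
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    keypoints = fast.detect(gray, None)
    return cv2.drawKeypoints(frame, keypoints, None,
                             flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Feature Method - FAST', detect_keypoints_without_floor(frame))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()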

Image Cropping to get just the particular shape out of image

I have images like the one below, and I need to extract just the white strip portion from all of them.
I have tried using PIL to extract the rectangular portion by manually specifying the pixel values. Is there any automated way to get this done, where just feeding in the image gives back the rectangular portion?
Below is my snipped code:
from PIL import Image
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = Image.open('C:/Users/ShAgarwal/Documents/image_dataset/pic9.jpg')

half_the_width = img.size[0] / 2
half_the_height = img.size[1] / 2

img4 = img.crop(
    (
        half_the_width - 1632,
        half_the_height - 440,
        half_the_width + 1632,
        half_the_height + 80
    )
)
sample image
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = 'IMG_3134.JPG'

# read image
imgc = cv2.imread(image)
img = cv2.resize(imgc, None, fx=0.25, fy=0.25)  # resize since image is huge

# cropping the strip dimensions
# crop_img = img[1010:1650, 140:1099723]

blurred = cv2.blur(img, (3, 3))
canny = cv2.Canny(blurred, 50, 200)

# Marking coordinates through auto image detection using Canny's algorithm
## find the non-zero min-max coords of canny
pts = np.argwhere(canny > 0)
y1, x1 = pts.min(axis=0)
y2, x2 = pts.max(axis=0)

## crop the region
cropped = img[y1:y2, x1:x2]
cv2.imwrite("cropped.png", cropped)

# Select the bounded area around the white boundary
tagged = cv2.rectangle(img.copy(), (x1, y1), (x2, y2), (0, 255, 0), 3, cv2.LINE_AA)
r = cv2.selectROI(tagged)
imCrop = img[int(r[1]):int(r[1] + r[3]), int(r[0]):int(r[0] + r[2])]

# Bounded Area
cv2.imwrite("taggd2.png", imCrop)
cv2.waitKey()
Results from above code

displaying a map in the background with matplotlib animation

I want to show a map in the background while a vehicle is moving. I am using the matplotlib animate function. The movement looks fine, but I tried the following to load the map and it is not loading; only a black patch is visible. I tried to specify the zorder as well, but nothing works.
ani = animation.FuncAnimation(fig, animate, len(x11), interval=150,
                              blit=True, init_func=init, repeat=False)
img = cbook.get_sample_data('..\\maps.png')
image = plt.imread(img)
plt.imshow(image)
plt.show()
You can read the background image with scipy.misc.imread and use plt.imshow to render it in the background of your animation.
The example below generates a circle (we'll assume it's your car), puts "usa_map.jpg" in the background, and then moves the circle over the map.
Bonus: you can save the animation as an mp4 movie using an encoder such as ffmpeg with anim.save('the_movie.mp4', writer='ffmpeg', fps=30).
Source Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from scipy.misc import imread

img = imread("usa_map.jpg")

fig = plt.figure()
fig.set_dpi(100)
fig.set_size_inches(7, 6.5)

ax = plt.axes(xlim=(0, 20), ylim=(0, 20))
patch = plt.Circle((5, -5), 0.75, fc='y')

def init():
    patch.center = (20, 20)
    ax.add_patch(patch)
    return patch,

def animate(i):
    x = 10 + 3 * np.sin(np.radians(i))
    y = 10 + 3 * np.cos(np.radians(i))
    patch.center = (x, y)
    return patch,

anim = animation.FuncAnimation(fig, animate,
                               init_func=init,
                               frames=360,
                               interval=20,
                               blit=True)

plt.imshow(img, zorder=0, extent=[0.1, 20.0, 0.1, 20.0])
anim.save('the_movie.mp4', writer='ffmpeg', fps=30)
plt.show()
The above code will generate an animation with a circle moving around the USA map. It will also be saved as 'the_movie.mp4', which I can't upload here.
Result Image
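Note: scipy.misc.imread has been removed from recent SciPy releases, so on a current installation the map can be loaded with matplotlib's own reader instead, for example:

import matplotlib.pyplot as plt
img = plt.imread("usa_map.jpg")  # drop-in replacement for scipy.misc.imread here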
