combine overlapping labelled objects and modify label values - python-3.x

I have a Z-stack of 2D confocal microscopy images (2D slices) and I want to segment cells. The Z-stack of 2D images is effectively 3D data, and the same cells appear in multiple slices along the Z-axis. I am interested in cell shape in the XY plane, so I want to preserve the largest cell area across the Z-axis slices. I thought of combining consecutive 2D slices after converting them to labelled binary images, but I am having a few issues and need some help to proceed further.
I have two images, img_a and img_b. I first converted them to binary images using Otsu thresholding, applied some morphological operations, and then used cv2.connectedComponentsWithStats() to obtain labelled objects. After labelling the images, I combined them using cv2.bitwise_or(), but this mixes up the labels. You can see this in the attached processed image (cell highlighted by red circles): the overlapping cell ends up with multiple labels. However, I want to assign one unique label to every combined overlapping object.
In the end, when I combine the two labelled images, I want each combined overlapping object to receive a single unique label, and I want to keep the largest cell area from either image. Does anyone know how to do this?
Here is the code:
from matplotlib import pyplot as plt
from skimage import io, color, measure
from skimage.util import img_as_ubyte
from skimage.segmentation import clear_border
import cv2
import numpy as np
cells_a=img_a[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_a, thresh_a = cv2.threshold(cells_a, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
kernel = np.ones((3,3),np.uint8)
opening_a = cv2.morphologyEx(thresh_a,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_a = clear_border(opening_a)  # Remove edge-touching pixels
numlabels_a, labels_a, stats_a, centroids_a = cv2.connectedComponentsWithStats(opening_a)
img_a1 = color.label2rgb(labels_a, bg_label=0)
## now do the same with image_b
cells_b=img_b[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_b, thresh_b = cv2.threshold(cells_b, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
opening_b = cv2.morphologyEx(thresh_b,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_b = clear_border(opening_b)  # Remove edge-touching pixels
numlabels_b, labels_b, stats_b, centroids_b = cv2.connectedComponentsWithStats(opening_b)
img_b1 = color.label2rgb(labels_b, bg_label=0)
## Now combine the two images
combined = cv2.bitwise_or(labels_a, labels_b)  # combine both labelled images to try to get the maximum area per cell
combined_img = color.label2rgb(combined, bg_label=0)
plt.imshow(combined_img)
Images can be found here:

Based on the comments from Christoph Rackwitz and beaker, I started looking into 3D connected-component labelling. I found a Python library that can handle this, installed it, and gave it a try. It seems to work well: it assigns labels in each slice and keeps the label the same for the same cell across different slices. This is exactly what I wanted.
Here is the link to the library that I used to label objects in 3D.
https://pypi.org/project/connected-components-3d/
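For reference, here is a minimal sketch of how the library could be applied to the masks computed above (stacking opening_a and opening_b into a volume and using connectivity=26 are assumptions on my part, not from the original code):

import numpy as np
import cc3d  # pip install connected-components-3d

# Stack the per-slice binary masks into one 3D volume with shape (z, y, x).
# opening_a / opening_b are the cleaned OTSU masks from the code above (assumption).
stack = np.stack([opening_a, opening_b], axis=0)
stack = (stack > 0).astype(np.uint8)   # make sure the volume is strictly binary

# Label connected components across all three dimensions at once, so a cell
# that appears in several consecutive slices receives a single label.
labels_3d = cc3d.connected_components(stack, connectivity=26)

# For each labelled cell, find the slice where its XY area is largest.
for lab in range(1, int(labels_3d.max()) + 1):
    areas = (labels_3d == lab).sum(axis=(1, 2))   # XY area of this cell per slice
    best_slice = int(np.argmax(areas))            # slice holding the largest area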

Related

Generate an Image Dataset from a Single Image

I have a single image that looks like this:
I need to generate an image dataset that keeps the basic characteristics of this image but adds some noise, such as the line we see at the 1:30 position in the image.
Mainly, there's the pink part of the image (vertical lines), the blue part (central bluish hue), and the yellow/green part at the edges. I'm looking to "learn" the image in a way that lets me control these 3 things and randomly generate:
the bluish central hue's small color changes and size
the vertical pink lines' thickness and color
the yellow/green edges and their size (I could expand them at the expense of the blue in the middle, or vice versa)
CONSTRAINT: The yellowish circle (which is an image of a semiconductor wafer) cannot change in size or shape. It can move on top of the black square, though. Structures inside it can change as well, as mentioned in the 3 points above.
This might be an easy question for people with experience in computer vision but I, unfortunately, don't have a lot of experience in this domain. So, I'd love to get any ideas on making progress in this direction. Thanks.
Changing the shape of your inner structures while safely keeping all possible characteristics seems non-trivial to me. There are, however, a number of simple transformations you could use to create an augmented dataset, such as:
Mirroring: Horizontally, vertically, diagonally - will keep all of your line characteristics
Rotation: Normally you would also do some rotations, but this will obviously change the orientation of your lines which you want to preserve, so this does not apply in your case
Shearing: Might still apply and work nicely to add some robustness, as long as you don't overdo it and end up bending your features too much
Other than that you might also want to add some noise to your image, or transformed versions of it as listed above, such as Gaussian noise or salt and pepper noise.
You could also play around with the color values, e.g. by slightly shifting the saturation of different hue values in HSV space.
You can combine any of those methods; if you try all possible permutations with different amounts/types of noise, you will get quite a big dataset.
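As a rough illustration, here is a minimal sketch of a few of these augmentations with OpenCV and NumPy (the file name and the parameter values are placeholders you would tune for your image):

import cv2
import numpy as np

img = cv2.imread("input.png")            # placeholder file name
h, w = img.shape[:2]

# Mirroring: a horizontal flip keeps the vertical-line structure intact
flipped = cv2.flip(img, 1)

# Mild shearing: keep the shear factor small so features are not bent too much
shear = np.float32([[1, 0.05, 0], [0, 1, 0]])
sheared = cv2.warpAffine(img, shear, (w, h), borderMode=cv2.BORDER_REFLECT)

# Gaussian noise
noise = np.random.normal(0, 8, img.shape)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Slight saturation shift in HSV space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[..., 1] = np.clip(hsv[..., 1] * 1.1, 0, 255)
shifted = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)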
One approach is to use Keras's ImageDataGenerator.
Decide how many samples you want. Assume 5:
total_number = 5
Initialize the ImageDataGenerator class. For instance:
data_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2,
                              zoom_range=0.2, horizontal_flip=True)
Turn your image into a tensor.
img = load_img("xIzEG.png", grayscale=False)  # You can also load grayscale images.
arr = img_to_array(img)
tensor_img = arr.reshape((1, ) + arr.shape)
Create a folder to store the results, e.g. populated, then populate it:
for i, _ in enumerate(data_gen.flow(x=tensor_img,
                                    batch_size=1,
                                    save_to_dir="populated",
                                    save_prefix="generated",
                                    save_format="png")):
    if i > total_number:
        break
Now, if you look at your populated folder:
Code
from keras.preprocessing.image import load_img, img_to_array
from keras.preprocessing.image import ImageDataGenerator

# Total number of generated images
total_number = 5

data_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2,
                              zoom_range=0.2, horizontal_flip=True)

# Convert the image to a tensor
img = load_img("xIzEG.png", grayscale=False)
arr = img_to_array(img)
tensor_image = arr.reshape((1, ) + arr.shape)

for i, _ in enumerate(data_gen.flow(x=tensor_image,
                                    batch_size=1,
                                    save_to_dir="populated",
                                    save_prefix="generated",
                                    save_format="png")):
    if i > total_number:
        break

How do I extract each road in terms of the pixel coordinates from Google Map Screenshot and place them into different lists?

I'm working on a project related to road recognition from a standard Google Map view. Some navigation features will be added to the project later on.
I have already extracted all the white pixels (representing roads on the map) according to RGB criteria, and stored all the white pixel (road) coordinates (2D) in one list named "all_roads". Now I want to extract each road in terms of its pixel coordinates and place each road into its own list (one road per list), but I'm lacking ideas.
I'd like to use Dijkstra's algorithm to calculate the shortest path between two points, but I need to create "nodes" at each road intersection. That's why I'd like to store each road in its own list for further processing.
I hope someone could provide some ideas and methods. Thank you!
Note: The RGB criteria (the "if" statements in the "threshold" method) may seem unnecessary for the chosen map screenshot, but they become useful for other map screenshots with road colours other than white. (This is not the point of the question, but I hope to avoid unnecessary confusion.)
# Import numpy to enable numpy array
import numpy as np
# Import time to handle time-related task
import time
# Import mean to calculate the averages of the pixals
from statistics import mean
# Import cv2 to display the image
import cv2 as cv2
def threshold(imageArray):
    """
    Purpose: Display a given image with roads in white according to pixel RGBs.
    Argument(s): A matrix generated from a given image.
    Return: A matrix of the same size but only displaying white and black.
    """
    newAr = imageArray
    for eachRow in newAr:
        for eachPix in eachRow:
            if eachPix[0] == 253 and eachPix[1] == 242:
                eachPix[0] = 255
                eachPix[1] = 255
                eachPix[2] = 255
            else:
                pass
    return newAr
# Import the image
g1 = cv2.imread("1.png")
# fix the output image with resolution of 800 * 600
g1 = cv2.resize(g1,(800,600))
# Apply threshold method to the imported image
g2 = threshold(g1)
index = np.where(g2 == [(255,255,255)])
# x coordinate of the white pixels (roads)
print(index[1])
# y coordinate of the white pixels (roads)
print(index[0])
# Storing the 2D coordinates of white pixels (roads) in a list
all_roads = []
for i in range(len(index[0]))[0::3]:
    all_roads.append([index[1][i], index[0][i]])
#Display the modified image
cv2.imshow('g2', g2)
cv2.waitKey(0)
cv2.destroyAllWindows()
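One possible direction (a sketch added here, not part of the original question): once the white road pixels are isolated, connected-component labelling can group them so that each connected road segment's coordinates end up in their own list:

# Build a binary mask of the white (road) pixels in g2
road_mask = cv2.inRange(g2, (255, 255, 255), (255, 255, 255))

# Group the mask into connected road segments; label 0 is the background
num_labels, labels = cv2.connectedComponents(road_mask)

roads = []
for lab in range(1, num_labels):
    ys, xs = np.where(labels == lab)
    roads.append(list(zip(xs, ys)))    # one list of (x, y) coordinates per segment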

Octave GNU FFT nested mirrored?

I have written a short Octave script for grabbing and summing the individual FFTs of each row in an image. When I plot the summed FFTs, I get the usual FFT mirror from the real-valued input (which is fine), but I also get a secondary nested mirror, and I don't understand why. The nested mirror has a lower amplitude, but its peak locations have a one-to-one correspondence with those of the main spectrum. Please help me understand where the nested mirror comes from.
This is what the original image looks like:
Note that the linked image is downsampled from the original and will not display the behavior shown. I've posted the original image here: https://1drv.ms/u/s!AhAaA6XQyp6gqp1NgKNqL4QmcMw5Pw?e=P7rdRy
The image is acquired from a Fourier Transform spectrometer. The fringes are the interference pattern of the different wavelengths of light. The spectrum of the light source is derived by doing the FFT.
And finally here is the script:
#get image and convert to grayscale...
sum = abs(fftn(gray(1,:))); #get first row FFT and init sum
for i = 2:(rows(gray))
sum += abs(fftn(gray(i,:))); # add each row FFT together
end;
sum = sum/max(sum); # normalize 0-1 scale
sumHalf = sum(1:(end/2)); # move to single sided FFT
sumHalf = 2*sumHalf;
x = 1:numel(sumHalf);
sumHalf(1) = 0; # remove the oversized DC component
semilogy(x,sumHalf); #plot in log scale

Using pixel_labels, how to separate objects in an image by color, which will result in three images in python

I am using the KMeans algorithm to create clusters in an image, but I want to display the separate clusters of the image. For example, if K=3 for an image, I want to save each separated cluster portion in a different file. I want to implement this in Python.
I have applied the KMeans clustering algorithm; the clusters are showing, but all in the same plot.
Let's start with Paddington on the left, and assume you have k-means clustered him down to 3 colours on the right/second image:
Now we find the unique colours, and iterate over them. Inside the loop, we use np.where() to set all pixels of the current colour to white and all others to black:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load kmeans output image
im = cv2.imread('kmeans.png')
# Get list of unique colours
uniquecols = np.unique(im.reshape(-1,3), axis=0)
# Iterate over unique colours
for i, c in enumerate(uniquecols):
    filename = f"colour-{i}.png"
    print(f"Processing colour {c} into file {filename}")
    # Make output image white wherever it matches this colour, and black elsewhere
    result = np.where(np.all(im==c,axis=2)[...,None], 255, 0).astype(np.uint8)
    cv2.imwrite(filename, result)
Sample Output
Processing colour [48 38 35] into file colour-0.png
Processing colour [138 140 152] into file colour-1.png
Processing colour [208 154 90] into file colour-2.png
And the three images are:
Change the np.where() line as follows if you prefer the alternative output:
# Make output image white wherever it doesn't match this colour
result = np.where(np.all(im==c,axis=2)[...,None], c, 255).astype(np.uint8)
Keywords: Image, image processing, k-means clustering, colour reduction, color reduction, Python, OpenCV, color separation, unique colours, unique colors.

Detect Color of particular area of Image Nodejs OpenCV

I'm trying to write code to detect the color of a particular area of an image.
So far I have found that this can be done with OpenCV, but I still haven't found a particular tutorial to help with it.
I want to do this with JavaScript, but I can also use Python OpenCV to get the results.
Can anyone please share a useful link or explain how I can detect the color of a particular area in the image?
For example:
The box in red will show a different color. I need to figure out which color it is showing.
What I have tried:
I have tried OpenCV Canny edge detection; although I managed to separate the areas with Canny images, detecting the color of a particular Canny area is still a challenge.
Also, I tried the inRange method from OpenCV, which works perfectly:
# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask = mask)
# show the images
cv2.imshow("images", np.hstack([image, output]))
It works well and extracts the color area from the image. But is there any callback that responds if the image has a particular color, so that it can all be done automatically?
I am assuming here that you already know the location of the rect, which is going to change dynamically, and that you need to find the single most dominant color in the desired ROI. There are a lot of ways to do this: one is to take the average of all the pixels in the ROI, another is to count all the distinct pixel values in the given ROI, with some tolerance.
Method 1:
import cv2
import numpy as np
img = cv2.imread("path/to/img.jpg")
region_of_interest = (356, 88, 495, 227) # left, top, right, bottom
cropped_img = img[region_of_interest[1]:region_of_interest[3], region_of_interest[0]:region_of_interest[2]]
print(cv2.mean(cropped_img))
>>> (53.430516018839604, 41.05708814243569, 244.54991977640907, 0.0)
Method 2:
To find out the various dominant clusters in the given image you can use cv2.kmeans() as:
import cv2
import numpy as np
img = cv2.imread("path/to/img.jpg")
region_of_interest = (356, 88, 495, 227)
cropped_img = img[region_of_interest[1]:region_of_interest[3], region_of_interest[0]:region_of_interest[2]]
Z = cropped_img.reshape((-1, 3))
Z = np.float32(Z)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 4
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Sort all the colors by their frequencies and print the most dominant one:
print(center[sorted(range(K), key=lambda x: np.count_nonzero(label == [x]), reverse=True)[0]])
>>> [ 52.96525192 40.93861389 245.02325439]
@Prateek... nice to have the question narrowed down to the core. The code you provided does not address the issue at hand and remains just a question. I'll hint you towards a direction, but you have to code it yourself.
Steps that guide you towards a scripting result:
1) In your script, add two (past & current) pixellists to store values (pixeltype + occurrence).
2) Introduce a while-loop with an action true/stop statement (linked to "3") for looping, because then it becomes a dynamic process.
3) Write a GUI with a flashy warning banner.
4) Compare the pixellist with current_pixellist for a serious state change (threshold).
5) If the delta state change at "4" meets the threshold, throw the alert ("3").
When you've written the code and enjoyed the trouble of tracking the tracebacks, edit your question, update it with the code, and reshape it (I can help with that if you want). Then we can pick it up from there. Does that sound like a plan?
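A minimal sketch of the loop those steps describe, using the mean colour of the ROI as a simple stand-in for a full pixel list (the ROI, the change threshold, the image source, and the print-based alert are all placeholder assumptions):

import time
import cv2
import numpy as np

def roi_signature(frame, roi):
    # Mean BGR colour of the region of interest (left, top, right, bottom)
    left, top, right, bottom = roi
    return np.array(cv2.mean(frame[top:bottom, left:right])[:3])

roi = (356, 88, 495, 227)          # placeholder region, as in Method 1 above
change_threshold = 30.0            # placeholder per-channel change threshold
previous = None

while True:                        # step 2: dynamic monitoring loop
    frame = cv2.imread("path/to/current.jpg")   # or a camera / screenshot grab
    if frame is None:
        break
    current = roi_signature(frame, roi)
    if previous is not None:
        delta = np.abs(current - previous)       # step 4: compare past vs current
        if np.any(delta > change_threshold):     # step 5: threshold exceeded
            print("ALERT: colour in ROI changed by", delta)  # step 3: the 'banner'
    previous = current
    time.sleep(1.0)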
I am not sure why you need a callback in this situation, but maybe this is what you mean:
def test_color(image, lower, upper):
    mask = cv2.inRange(image, lower, upper)
    return np.any(mask == 255)
Explanations:
cv2.inRange() will return 255 when a pixel is in the range (lower, upper), and 0 otherwise (see the docs)
Use np.any() to check if any element in the mask is actually 255
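For example, a quick usage sketch (the BGR bounds are placeholders, and image is assumed to be the BGR image loaded earlier):

# Placeholder BGR bounds for a reddish colour; adjust to the colour you expect
lower = np.array([0, 0, 150], dtype=np.uint8)
upper = np.array([80, 80, 255], dtype=np.uint8)

if test_color(image, lower, upper):
    print("The ROI contains the expected colour")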
