Using pixel_labels, how to separate objects in an image by color, which will result in three images in python - python-3.x

I am using the K-means algorithm to create clusters in an image, but I want to display each cluster separately. For example, if K=3 for an image, I want to save each separated cluster portion in a different file. I want to implement this code using Python.
I have applied the KMeans clustering algorithm and the clusters are showing, but all in the same plot.

Let's start with Paddington on the left, and assume you have k-means clustered him down to 3 colours, shown in the right/second image:
Now we find the unique colours, and iterate over them. Inside the loop, we use np.where() to set all pixels of the current colour to white and all others to black:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load kmeans output image
im = cv2.imread('kmeans.png')
# Get list of unique colours
uniquecols = np.unique(im.reshape(-1,3), axis=0)
# Iterate over unique colours
for i, c in enumerate(uniquecols):
    filename = f"colour-{i}.png"
    print(f"Processing colour {c} into file {filename}")
    # Make output image white wherever it matches this colour, and black elsewhere;
    # cast to uint8 because np.where() yields a wider integer type than cv2.imwrite() accepts
    result = np.where(np.all(im==c, axis=2)[...,None], 255, 0).astype(np.uint8)
    cv2.imwrite(filename, result)
Sample Output
Processing colour [48 38 35] into file colour-0.png
Processing colour [138 140 152] into file colour-1.png
Processing colour [208 154 90] into file colour-2.png
And the three images are:
Change the np.where() line as follows if you prefer the alternative output:
# Make output image white wherever it doesn't match this colour
result = np.where(np.all(im==c, axis=2)[...,None], c, 255).astype(np.uint8)
Keywords: Image, image processing, k-means clustering, colour reduction, color reduction, Python, OpenCV, color separation, unique colours, unique colors.

Related

combine overlapping labelled objects and modify label values

I have a Z-stack of 2D confocal microscopy images (2D slices) and I want to segment cells. The Z-stack of 2D images is really 3D data. In different slices along the Z-axis, I see the same cells appear in multiple slices. I am interested in the cell shape in XY, so I want to preserve the largest cell area across the Z-axis slices. I thought to combine consecutive 2D slices after converting them to labelled binary images, but I am having a few issues and I need some help to proceed further.
I have two images, img_a and img_b. I first converted them to binary images using Otsu thresholding, then applied some morphological operations, and then used cv2.connectedComponentsWithStats() to obtain labelled objects. After labelling the images, I combined them using cv2.bitwise_or(), but it messes up the labels. You can see this in the attached processed image (cells highlighted by red circles): there are multiple labels for an overlapping cell. However, I want to assign one unique label to every combined overlapping object.
What I want in the end is that when I combine the two labelled images, each combined overlapping object gets one single (unique) label, and the largest cell area from the two images is kept. Does anyone know how to do it?
Here is the code:
from matplotlib import pyplot as plt
from skimage import io, color, measure
from skimage.util import img_as_ubyte
from skimage.segmentation import clear_border
import cv2
import numpy as np
# img_a and img_b are the two slice images, loaded beforehand
cells_a = img_a[:,:,1]  # get the green channel
# Threshold image to binary using Otsu
ret_a, thresh_a = cv2.threshold(cells_a, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological opening to remove small noise
kernel = np.ones((3,3), np.uint8)
opening_a = cv2.morphologyEx(thresh_a, cv2.MORPH_OPEN, kernel, iterations=2)
opening_a = clear_border(opening_a)  # remove edge-touching pixels
numlabels_a, labels_a, stats_a, centroids_a = cv2.connectedComponentsWithStats(opening_a)
img_a1 = color.label2rgb(labels_a, bg_label=0)
## Now do the same with img_b
cells_b = img_b[:,:,1]  # get the green channel
# Threshold image to binary using Otsu
ret_b, thresh_b = cv2.threshold(cells_b, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological opening to remove small noise
opening_b = cv2.morphologyEx(thresh_b, cv2.MORPH_OPEN, kernel, iterations=2)
opening_b = clear_border(opening_b)  # remove edge-touching pixels
numlabels_b, labels_b, stats_b, centroids_b = cv2.connectedComponentsWithStats(opening_b)
img_b1 = color.label2rgb(labels_b, bg_label=0)
## Now combine the two images
combined = cv2.bitwise_or(labels_a, labels_b)  # combine both labelled images to get maximum area per cell
combined_img = color.label2rgb(combined, bg_label=0)
plt.imshow(combined_img)
Images can be found here:
Based on the comments from Christoph Rackwitz and beaker, I started to look around for 3D connected-components labelling. I found a Python library that can handle such things, installed it, and gave it a try. It seems to do pretty well: it assigns labels in each slice and keeps the labels the same for the same cells across slices, which is exactly what I wanted.
Here is the link to the library that I used to label objects in 3D:
https://pypi.org/project/connected-components-3d/
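For reference, a minimal sketch of how that library can be applied to the masks from the code above (stacking just the two masks into one volume is my assumption, not from the original post):
import numpy as np
import cc3d  # pip install connected-components-3d
# Stack the binary masks from consecutive slices into one 3D volume
stack = np.stack([opening_a, opening_b])  # shape: (slices, height, width)
# Label connected components in 3D; cells that overlap between slices
# form a single component and therefore share one label
labels_3d = cc3d.connected_components(stack > 0, connectivity=26)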

Change image color in PIL module

I am trying to vary the intensity of colors to obtain a different colored image...
import PIL
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageDraw
# read image and convert to RGB
image=Image.open("readonly/msi_recruitment.gif")
image=image.convert('RGB')
# build a list of 9 images which have different brightnesses
enhancer=ImageEnhance.Brightness(image)
images=[]
for i in range(1, 10):
    images.append(enhancer.enhance(i/10))
# create a contact sheet from different brightnesses
first_image = images[0]
contact_sheet = PIL.Image.new(first_image.mode, (first_image.width*3, first_image.height*3))
x = 0
y = 0
for img in images:
    # Paste the current image into the contact sheet
    contact_sheet.paste(img, (x, y))
    # Now we update our X position. If it has reached the width of the sheet, we reset it to 0
    # and update Y as well to point to the next "line" of the contact sheet.
    if x + first_image.width == contact_sheet.width:
        x = 0
        y = y + first_image.height
    else:
        x = x + first_image.width
# resize and display the contact sheet
contact_sheet = contact_sheet.resize((int(contact_sheet.width/2), int(contact_sheet.height/2)))
display(contact_sheet)
But the above code only varies brightness. Please tell me what changes I should make to vary color intensity in this code.
I'm sorry but I am unable to upload the picture now; consider any image you find suitable and help me out. Appreciated!
Please go to this link and answer that question instead of this one, I apologise for the inconvenience:
Pixel colour intensity
Many colour operations are best done in a colourspace such as HSV, which you can get in PIL with:
HSV = rgb.convert('HSV')
You can then use split() to get 3 separate channels:
H, S, V = HSV.split()
Now you can change your colours. You seem a little woolly on what you want. If you want to change the intensity of the colours, i.e. make them less saturated and less vivid, decrease the S (Saturation) channel. If you want to change the reds to purples, i.e. change the Hues, then add something to the Hue channel. If you want to make the image brighter or darker, change the Value (V) channel.
When you have finished, merge the edited channels back together with Image.merge('HSV', (H,S,V)) and convert back to RGB with convert('RGB').
See Splitting and Merging and Processing Individual Bands on this page.
Here is an example, using this image:
Here is the basic framework to load the image, convert to HSV colourspace, split the channels, do some processing, recombine the channels and revert to RGB colourspace and save the result.
#!/usr/bin/env python3
from PIL import Image
# Load image and create HSV version
im = Image.open('colorwheel.jpg')
HSV = im.convert('HSV')
# Split into separate channels
H, S, V = HSV.split()
######################################
########## PROCESSING HERE ###########
######################################
# Recombine processed H, S and V back into a recombined image
HSVr = Image.merge('HSV', (H,S,V))
# Convert recombined HSV back to reconstituted RGB
RGBr = HSVr.convert('RGB')
# Save processed result
RGBr.save('result.png')
So, if you find the chunk labelled "PROCESSING HERE" and put code in there to divide the saturation by 2, it will make the colours less vivid:
# Desaturate the colours by halving the saturation
S = S.point(lambda p: p//2)
If, instead, we halve the brightness (V), like this:
# Halve the brightness
V = V.point(lambda p: p//2)
the result will be darker:
If, instead, we add 80 to the Hue, all the colours will rotate around the Hue circle - this is called a "Hue rotation":
# Rotate Hues around the Hue circle by 80 on a range of 0..255, i.e. roughly 1/3 of a circle or 120 degrees;
# the modulo makes the values wrap around the circle instead of clipping at 255
H = H.point(lambda p: (p+80) % 256)
which gives this:

How do I extract each road in terms of the pixel coordinates from Google Map Screenshot and place them into different lists?

I'm working on a project related to road recognition from a standard Google Map view. Some navigation features will be added to the project later on.
I already extracted all the white pixels (representing road on the map) according to the RGB criteria. Also, I stored all the white pixel (roads) coordinates (2D) in one list named "all_roads". Now I want to extract each road in terms of the pixel coordinates and place them into different lists (one road in one list), but I'm lacking ideas.
I'd like to use Dijkstra's algorithm to calculate the shortest path between two points, but I need to create "nodes" on each road intersection. That's why I'd like to store each road in the corresponding list for further processing.
I hope someone could provide some ideas and methods. Thank you!
Note: The RGB criteria (the "if" statements in the "threshold" method) seem unnecessary for the chosen map screenshot, but they become useful for other map screenshots with road colours other than white. (Not the point of the question, but I hope to avoid unnecessary confusion.)
# Import numpy to enable numpy arrays
import numpy as np
# Import time to handle time-related tasks
import time
# Import mean to calculate the averages of the pixels
from statistics import mean
# Import cv2 to display the image
import cv2 as cv2

def threshold(imageArray):
    """
    Purpose: Display a given image with roads in white according to pixel RGBs.
    Argument(s): A matrix generated from a given image.
    Return: A matrix of the same size but only displaying white and black.
    """
    newAr = imageArray
    for eachRow in newAr:
        for eachPix in eachRow:
            if eachPix[0] == 253 and eachPix[1] == 242:
                eachPix[0] = 255
                eachPix[1] = 255
                eachPix[2] = 255
            else:
                pass
    return newAr

# Import the image
g1 = cv2.imread("1.png")
# Fix the output image at a resolution of 800 * 600
g1 = cv2.resize(g1, (800, 600))
# Apply the threshold method to the imported image
g2 = threshold(g1)
index = np.where(g2 == [(255,255,255)])
# x coordinates of the white pixels (roads)
print(index[1])
# y coordinates of the white pixels (roads)
print(index[0])
# Store the 2D coordinates of white pixels (roads) in a list
all_roads = []
for i in range(len(index[0]))[0::3]:
    all_roads.append([index[1][i], index[0][i]])
# Display the modified image
cv2.imshow('g2', g2)
cv2.waitKey(0)
cv2.destroyAllWindows()
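One possible starting point for splitting all_roads into per-road lists (a hedged sketch of my own, not from the original post): OpenCV's connectedComponents() labels each connected white region, so each region's coordinates can be gathered into its own list. Note that roads meeting at an intersection stay in one component, so further splitting would be needed before creating Dijkstra nodes.
import cv2
import numpy as np
# Single-channel mask of the purely white (road) pixels, assuming g2 from the code above
mask = cv2.inRange(g2, (255, 255, 255), (255, 255, 255))
# Label each connected white region; label 0 is the background
num, labels = cv2.connectedComponents(mask)
roads = []
for lbl in range(1, num):
    ys, xs = np.where(labels == lbl)
    roads.append(list(zip(xs, ys)))  # one list of (x, y) coordinates per region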

How can I quickly change pixels in a image from a color dictionary?

I have an image and I want to change all the colours in it according to a colour map, e.g. {(10,20,212) : (60,40,112), ...}.
Currently, I am reading the image with OpenCV and then iterating over the image array, changing each pixel, but this is very slow.
Is there any way I can do it faster?
I am providing two answers to this question. This answer is more based on OpenCV and the other is more based on PIL/Pillow. Read this answer in conjunction with my other answer and potentially mix and match.
You can use Numpy's linalg.norm() to find the distance from each pixel to each palette colour, and then argmin() to choose the nearest. You can then use a LUT ("Look Up Table") to look up a new value based on the existing values in the image.
#!/usr/bin/env python3
import numpy as np
import cv2

def QuantizeToGivenPalette(im, palette):
    """Quantize image to a given palette.
    The input image is expected to be a Numpy array.
    The palette is expected to be a list of R,G,B values."""
    # Calculate the distance to each palette entry from each pixel
    distance = np.linalg.norm(im[:,:,None] - palette[None,None,:], axis=3)
    # Now choose whichever one of the palette colours is nearest for each pixel
    palettised = np.argmin(distance, axis=2).astype(np.uint8)
    return palettised

# Open input image and palettise to "inPalette" so each pixel is replaced by its palette index
# ... so all black pixels become 0, all red pixels become 1, all green pixels become 2...
im = cv2.imread("image.png", cv2.IMREAD_COLOR)
inPalette = np.array([
    [0,0,0],          # black
    [0,0,255],        # red
    [0,255,0],        # green
    [255,0,0],        # blue
    [255,255,255]])   # white
r = QuantizeToGivenPalette(im, inPalette)

# Now make a LUT (Look Up Table) with the 5 new colours
LUT = np.zeros((5,3), dtype=np.uint8)
LUT[0] = [255,255,255]   # white
LUT[1] = [255,255,0]     # cyan
LUT[2] = [255,0,255]     # magenta
LUT[3] = [0,255,255]     # yellow
LUT[4] = [0,0,0]         # black

# Look up each pixel in the LUT
result = LUT[r]

# Save result
cv2.imwrite('result.png', result)
Input Image
Output Image
Keywords: Python, PIL, Pillow, image, image processing, quantise, quantize, specific palette, given palette, specified palette, known palette, remap, re-map, colormap, map, LUT, linalg.norm.
I am providing two answers to this question. This answer is more based on PIL/Pillow and the other is more based on OpenCV. Read this answer in conjunction with my other answer and potentially mix and match.
You can do it using the palette. In case you are unfamiliar with palettised images, rather than having an RGB value at each pixel location, you have a simple 8-bit index into a palette of up to 256 colours.
So, what we can do is load your image as a PIL Image and quantise it to the set of input colours you have. Then each pixel will hold the index of its colour in your map. Then we just replace the palette with the colours you want to map to.
#!/usr/bin/env python3
import numpy as np
from PIL import Image

def QuantizeToGivenPalette(im, palette):
    """Quantize image to a given palette.
    The input image is expected to be a PIL Image.
    The palette is expected to be a list of no more than 256 R,G,B values."""
    e = len(palette)
    assert e > 0, "Palette unexpectedly short"
    assert e <= 768, "Palette unexpectedly long"
    assert e % 3 == 0, "Palette not a multiple of 3, so not RGB"

    # Make a tiny 1x1 palette image
    p = Image.new("P", (1,1))

    # Zero-pad the palette to 256 RGB colours, i.e. 768 values, and apply it to the image
    palette += (768-e)*[0]
    p.putpalette(palette)

    # Now quantize the input image to the same palette as our little image
    return im.convert("RGB").quantize(palette=p)

# Open input image and palettise to "inPalette" so each pixel is replaced by its palette index
# ... so all black pixels become 0, all red pixels become 1, all green pixels become 2...
im = Image.open('image.png').convert('RGB')
inPalette = [
    0,0,0,        # black
    255,0,0,      # red
    0,255,0,      # green
    0,0,255,      # blue
    255,255,255,  # white
]
r = QuantizeToGivenPalette(im, inPalette)

# Now simply replace the palette, leaving the indices unchanged
newPalette = [
    255,255,255,  # white
    0,255,255,    # cyan
    255,0,255,    # magenta
    255,255,0,    # yellow
    0,0,0,        # black
]
# Zero-pad the palette to 256 RGB colours, i.e. 768 values
newPalette += (768-len(newPalette))*[0]

# And finally replace the palette with the new one
r.putpalette(newPalette)

# Save result
r.save('result.png')
Input Image
Output Image
So, to do specifically what you asked with a dictionary that maps old colour values to new ones, you will want to initialise inPalette from the keys of your dictionary and newPalette from the values of your dictionary.
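For instance, a minimal sketch of flattening such a dictionary into the two palette lists (the dictionary contents here are just the example mapping from the question):
# Flatten a {old_rgb: new_rgb} dictionary into two flat palette lists;
# keys and values iterate in matching insertion order in Python 3.7+
colour_map = {(10, 20, 212): (60, 40, 112)}
inPalette  = [channel for rgb in colour_map.keys()   for channel in rgb]
newPalette = [channel for rgb in colour_map.values() for channel in rgb]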
Keywords: Python, PIL, Pillow, image, image processing, quantise, quantize, specific palette, given palette, specified palette, known palette, remap, re-map, colormap, map.
There are some hopefully useful words about palettised images here, and here.
I think you might find the built-in LUT function of OpenCV helpful, as documented here.
There is already a Python binding for the function; it takes the original matrix and a LUT as input, and returns the new matrix as output.
There isn't a tutorial for using it in Python, but there is one for using it in C++ which I imagine will be useful, found here. That tutorial lists this method as the fastest one for this sort of problem.
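As a rough illustration of the call (my own minimal sketch, not taken from the tutorial), cv2.LUT() maps every 0..255 pixel value through a 256-entry table, applied to each channel:
import cv2
import numpy as np
im = cv2.imread('image.png')
# A 256-entry lookup table; this one simply inverts every value as a demonstration
lut = 255 - np.arange(256, dtype=np.uint8)
# Each pixel value v in each channel is replaced by lut[v]
result = cv2.LUT(im, lut)
cv2.imwrite('result.png', result)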

How to convert the background of the entire image to white when both white and black backgrounds are present?

The form image contains text on different backgrounds. The image needs to be converted to a single background (here, white), and hence the headings need to be converted to black.
input image :
output image:
My approach was to detect the grid (horizontal lines and vertical lines, summed up), then crop each section of the grid into a new sub-image, then check the majority pixel colour of each and transform it accordingly. But after implementing that, the blue-background section is not detected and gets cropped like:
So I am trying to convert the entire form image to one background so that I can avoid such outcomes.
Here's a different way of doing it that will cope with the "reverse video" being black, rather than relying on some colour saturation to find it.
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image as greyscale
im = cv2.imread('form.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold and invert
_, thr = cv2.threshold(im, 127, 255, cv2.THRESH_BINARY)
inv = 255 - thr
# Perform morphological closing with a square 7x7 structuring element to remove details and thin lines
SE = np.ones((7,7), np.uint8)
closed = cv2.morphologyEx(thr, cv2.MORPH_CLOSE, SE)
# DEBUG: save closed image
cv2.imwrite('closed.png', closed)
# Find the row numbers of dark rows
meanByRow = np.mean(closed, axis=1)
rows = np.where(meanByRow < 50)
# Replace the selected rows with those from the inverted image
im[rows] = inv[rows]
# Save result
cv2.imwrite('result.png', im)
The result looks like this:
And the intermediate closed image looks like this - I artificially added a red border so you can see its extent on Stack Overflow's white background:
You can read about morphology here and an excellent description by Anthony Thyssen, here.
Here's a possible approach. Shades of blue will show up with a higher saturation than black and white if you convert to HSV colourspace, so...
convert to HSV
find the mean saturation of each row and select the rows where the mean saturation exceeds a threshold
greyscale those rows, then invert and threshold them
This approach should work if the reverse (standout) backgrounds are any colour other than black or white. It assumes you have de-skewed your images to be truly vertical/horizontal, as in your example.
That could look something like this in Python:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('form.jpg')
# Make HSV and extract S, i.e. Saturation
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
s=hsv[:,:,1]
# Save saturation just for debug
cv2.imwrite('saturation.png',s)
# Make greyscale version and inverted, thresholded greyscale version
gr = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
_,grinv = cv2.threshold(gr,127,255,cv2.THRESH_BINARY_INV)
# Find row numbers of rows with colour in them
meanSatByRow=np.mean(s,axis=1)
rows = np.where(meanSatByRow>50)
# Replace selected rows with those from the inverted, thresholded image
gr[rows]=grinv[rows]
# Save result
cv2.imwrite('result.png',gr)
The result looks like this:
The saturation image looks as follows - note that saturated colours (i.e. the blues) show up as light, everything else as black:
The greyscale, inverted image looks like this: