Convert all BGR values within a specific range to white (255,255,255) - python-3.x

I have an input PNG image in which I want to convert all pixels in the BGR range from (2,2,2) to (255,255,255) to white (255,255,255).
import cv2
import numpy as np

im = cv2.imread('3.png') # I am reading the image
lower_range = np.array([2,2,2]) # I specify the lower range
upper_range = np.array([255,255,255]) # I specify the upper range
im[np.where((im == [0,0,255]).all(axis = 2))] = [255,255,255] # converts all red pixels to white
cv2.imwrite('out.png', im)
My question is: how can I modify im[np.where((im == [0,0,255]).all(axis = 2))] = [255,255,255] so that it covers the range of colours defined by lower_range and upper_range above and converts them all to white?

There's cv2.inRange, which yields a mask you can then use to change the colour as you wish.
mask1 = cv2.inRange(im, lower_range, upper_range)
im[mask1 > 0] = [255,255,255] # the mask is 255 wherever a pixel falls inside the range
On a side note, your range of colors is pretty big (almost covers everything).
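Putting both pieces together, a minimal end-to-end sketch (using the filenames from the question):

import cv2
import numpy as np

im = cv2.imread('3.png')
lower_range = np.array([2, 2, 2])
upper_range = np.array([255, 255, 255])
mask = cv2.inRange(im, lower_range, upper_range)  # 255 where a pixel falls in the range
im[mask > 0] = [255, 255, 255]                    # paint those pixels white
cv2.imwrite('out.png', im)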

Related

How to fill cell with 50% intensity of an hexadecimal color using openpyxl

I'm using openpyxl with Python 3.10 to create an xlsx file from a database.
I extract a hexadecimal color from this database and I want to fill cells with this color, but only at 50% intensity. Currently, I have this:
import sqlite3, openpyxl

def getRouteColor(route):
    cursor.execute('''SELECT route_color
                      FROM routes
                      WHERE route_short_name = "{}"'''.format(route))
    return cursor.fetchall()[0][0]

[...]

color = getRouteColor(route) # hexadecimal
for column in sheet.columns:
    for cell in column:
        if not cell.row % 2:
            cell.fill = openpyxl.styles.PatternFill(patternType='solid', fgColor=color)
I've tried using different patternType values to lower the intensity, but that wasn't conclusive because I don't want lines or dots.
Do you have any idea how to get 50% intensity (using openpyxl, or another way, like deriving a 50%-intensity hexadecimal code from the 100%-intensity hexadecimal code)?
I have found a way to do it:
I convert my hexadecimal color to RGB and apply an alpha of 0.5 against a white background. Then, I convert it back to hexadecimal:
def color50(color): # color: hexadecimal 'XXXXXX'
    rgb = tuple(int(color[i:i+2], 16) for i in (0, 2, 4))
    rgb50 = []
    for i in rgb:
        rgb50.append(int(0.5 * i + (1 - 0.5) * 255)) # blend 50% toward white
    return '{:02X}{:02X}{:02X}'.format(rgb50[0], rgb50[1], rgb50[2])
If you have a better way to do it, I'm still interested.
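As a quick sanity check, blending pure red and pure black 50% toward white:

print(color50('FF0000')) # 'FF7F7F'
print(color50('000000')) # '7F7F7F'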

how to improve edge smoothness of an image rotated using pillow

I have this image
And I want to rotate it while keeping a smooth-looking edge.
I have tried the approach below, which adds a transparent border to the image so that the rotation's interpolation samples both the transparent padding and the opaque image intensities when it renders the edge.
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
padding = 4
new_padded_size = (w+padding, h+padding)
img = img.convert('RGBA') # converting to RGBA adds transparency to the areas that aren't opaque
img = ImageOps.pad(img, size=new_padded_size)
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
#rotated_im.show()
rotated_im.save("rotated_sunset200x100_padded_with_2px.png")
However, it doesn't seem to do interpolation on the left and right sides of the image. Inspecting im_array_rgba_padded, I see that the first and last rows of pixels have been made all black, but the left and right columns haven't got the same zero padding.
So the result ends up looking like this:
I'm wondering how I can get the padding on the left and right as well, using the pad function, so that those edges also look smooth. Or why is it that the padding is not applied to the left and right too?
You can change your code like this. Note that ImageOps.pad preserves the aspect ratio, so with a target of (w+4, h+4) it rescales the image and pads only one axis; ImageOps.expand instead adds a border of the given width on all four sides (filled here with transparent pixels so the rotated edge blends into transparency):
img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
print((w, h))
padding = 4
img = img.convert('RGBA') # converting to RGBA adds transparency to the areas that aren't opaque
# img = ImageOps.pad(img, size=(w+padding, h+padding))
img = ImageOps.expand(img, border=padding//2, fill=(0, 0, 0, 0)) # transparent border on all four sides
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
rotated_im.show()
This works on Windows 10, Python 3.9, Pillow 8.3.
For more information, see the Pillow documentation for ImageOps.
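To see why ImageOps.pad left the sides alone, here is a small check using the question's 200x100 example sizes:

from PIL import Image, ImageOps

img = Image.new('RGBA', (200, 100))
padded = ImageOps.pad(img, size=(204, 104))
print(padded.size) # (204, 104)
# pad() first resizes 200x100 to fit inside 204x104 while keeping the 2:1
# aspect ratio (giving 204x102), then pads only the two missing rows, so the
# left and right edges never receive a transparent border.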

Reduce components included by otsu threshold python opencv

I am trying to segment the blue components from a set of images. In most images, where the blue components have a large spread, the Otsu-thresholded image works well. However, for images where the blue components are minimal, the results are not OK and seem to include non-relevant sections. Example below:
Are there ways to improve the Otsu thresholding so that only the relevant parts are segmented, without necessarily making the other images suffer?
I already tried global and adaptive thresholding, but Otsu in particular captured things better; however, it also included unnecessary details.
Here's the code:
import cv2
import numpy as np

l_image = remove_background(image) # custom helper defined elsewhere
l_image = cv2.cvtColor(l_image, cv2.COLOR_BGR2GRAY)
ret1,th1 = cv2.threshold(l_image,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
mask = (th1 != 255)
sel = np.ones_like(image)
sel[mask] = image[mask]
sel = cv2.cvtColor(sel, cv2.COLOR_HSV2BGR)
#we simply set these channels to 0 to remove excess background
sel[:,:,1] = 0
sel[:,:,2] = 0
Here's the sample image.
The main issue with the logic in your code is that you are looking for something that is distinguished primarily by color, but throw away the color information first by converting the image to grayscale.
Instead, consider looking at color properties of each pixel. One easy way to do so is to look at the HCV color space. This is a similar color space to the more common HSV, with "C" for chroma instead of "S" for saturation, where S = C / V. I'm suggesting this because it's so easy to compute the "C" channel, which is the one that would have most of the contrast in this image. Note that all the complexity is in computing "H", the hue, which would ideally be used to find a specific color independently of its brightness, but that requires a double threshold on the "H" channel plus a threshold on the "S" channel. For this simple case, a single threshold on the "C" channel is sufficient to find the colored regions: we have only blue, we don't care which color it is, we just want to find colored pixels.
To compute the "C" (chroma) channel, we find the difference between the largest and the smallest of the RGB values (for each pixel independently):
rgbmax = np.amax(image, axis=2)
rgbmin = np.amin(image, axis=2)
c = rgbmax - rgbmin
As you can guess, a simple threshold of this image leads to finding the colored regions. The green background can easily be subtracted before processing, or after.
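A minimal sketch of that thresholding step, assuming image is the BGR array from cv2.imread (the filename is hypothetical):

import cv2
import numpy as np

image = cv2.imread('sample.png')  # hypothetical input file
rgbmax = np.amax(image, axis=2)
rgbmin = np.amin(image, axis=2)
c = rgbmax - rgbmin               # the "C" (chroma) channel
th, mask = cv2.threshold(c, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('chroma_mask.png', mask)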
Edit: following @Cris Luengo's comment, the green channel works better than the blue one.
You can apply Otsu's threshold on the green channel (of BGR).
Results are not perfect but much better.
img = img[:,:,1] #get the green channel
th, img = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)
Output:

How Could I Increase the Accuracy on Contour Detection

I have a program which provides the length and width values of the objects in an image. What I need is exact measurements of length and width, but my results deviate slightly and I need to reach the exact values.
I have a working program, but it needs to be improved to reach the best result.
from imutils import contours, perspective
import imutils
import numpy as np
import cv2

(cnts, _) = contours.sort_contours(cnts) # cnts comes from cv2.findContours; renamed so it doesn't shadow the imutils.contours module
for cnt in cnts:
    box = cv2.minAreaRect(cnt)
    box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
    box = np.array(box, dtype="float")
    box = perspective.order_points(box)
    cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 1)
To show the dataset I have, I am sharing my test image:
It detects the contours inside of the purple lines, but I would like to have it as the yellow lines.
What should I update in my code to reach this aim?
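For reference, once the box points have been ordered by perspective.order_points, the object's side lengths can be read off as Euclidean distances between adjacent corners. A short sketch (the scipy import is just one convenient way to compute the distance):

from scipy.spatial import distance as dist

(tl, tr, br, bl) = box
width_px = dist.euclidean(tl, tr)   # length of the top edge in pixels
height_px = dist.euclidean(tr, br)  # length of the right edge in pixels
print(width_px, height_px)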

2 bit per pixel tga's color to qRgb

I need to read TGAs with PyQt, and so far this seems to be working fine, except where a TGA has 2 bytes per pixel as opposed to 3 or 4. My code is taken from here: http://pastebin.com/b5Vz61dZ.
Specifically this section:
def getPixel(file, bytesPerPixel):
    'Given the file object f, and number of bytes per pixel, read in the next pixel and return a qRgba uint'
    pixel = []
    for i in range(bytesPerPixel):
        pixel.append(ord(file.read(1)))
    if bytesPerPixel == 4:
        pixel = [pixel[2], pixel[1], pixel[0], pixel[3]]
        color = qRgba(*pixel)
    elif bytesPerPixel == 3:
        pixel = [pixel[2], pixel[1], pixel[0]]
        color = qRgb(*pixel)
    elif bytesPerPixel == 2:
        # if greyscale
        color = QColor.fromHsv(0, pixel[0], pixel[1])
        color = color.value()
    return color
and this part:
elif bytesPerPixel == 2:
    # if greyscale
    color = QColor.fromHsv(0, pixel[0], pixel[1])
    color = color.value()
How would I input the pixel[0] and pixel[1] values to get the values in the correct format and colorspace?
Any thoughts, ideas or help, please!
pixel = [pixel[1]*2, pixel[1]*2, pixel[1]*2]
color = qRgb(*pixel)
works for me. Correct luminance and all. Though I'm not sure doubling the pixel[1] value would work for all instances.
Thank you for all the help istepura :)
http://lists.xcf.berkeley.edu/lists/gimp-developer/2000-August/013021.html
"Pixels are stored in little-endian order, in BGR555 format."
So you have to take the "leftmost" 5 bits of pixel[1] as blue; the remaining 3 bits, plus the 2 "leftmost" bits of pixel[0], are green; and the next 5 bits of pixel[0] are red.
In your case, I suppose, the code should be something like:
pixel = [(pixel[1]&0xF8)>>3, ((pixel[1]&0x7)<<2)|((pixel[0]&0xC0)>>6), (pixel[0]&0x3E)>>1]
color = qRgb(*pixel)
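For readability, the same unpacking can be wrapped in a helper. This is a sketch based on the bit layout described above; note the raw channels are 5-bit values (0-31), so they are shifted left by 3 to scale them to 8-bit before calling qRgb (the PyQt5 import is an assumption; adjust for your PyQt version):

from PyQt5.QtGui import qRgb

def bgr555_to_qrgb(lo, hi):
    # lo = pixel[0], hi = pixel[1] (little-endian byte order)
    b = (hi & 0xF8) >> 3
    g = ((hi & 0x07) << 2) | ((lo & 0xC0) >> 6)
    r = (lo & 0x3E) >> 1
    return qRgb(r << 3, g << 3, b << 3)  # scale 5-bit channels to 8-bit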
