Bitwise operation error for jpg and png images - python-3.x

I am stuck on a weird problem. It is a very simple example of a bitwise operation, yet it throws an error when processing png images.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
image = mpimg.imread('test1.png')
mask = np.zeros((image.shape[0], image.shape[1], 3), dtype=np.uint8)
result = cv2.bitwise_and(image, mask)
It shows the following error for bitwise_and:
The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
The same operation runs without error on test2.jpg.
test1.png
test2.jpg
I know the first image looks like a grayscale image, but it does have three channels!

I think .png images have four channels (an extra one for opacity). So you are trying to operate on two images that have different numbers of channels (jpg: 3 and png: 4).
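A hedged sketch of a possible fix (reusing the file names above): drop the alpha channel so both arrays have three channels. Note that mpimg.imread returns float values in [0, 1] for png files, so the array also needs to be converted to uint8 to match the mask's type:
import matplotlib.image as mpimg
import numpy as np
import cv2
image = mpimg.imread('test1.png')        # png: float32 in [0, 1], possibly 4 channels
image = (image * 255).astype(np.uint8)   # match the mask's uint8 dtype
image = image[:, :, :3]                  # keep only the RGB channels, drop alpha
mask = np.zeros((image.shape[0], image.shape[1], 3), dtype=np.uint8)
result = cv2.bitwise_and(image, mask)    # now 'array op array' with equal size and type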

combine overlapping labelled objects and modify label values

I have a Z-stack of 2D confocal microscopy images (2D slices) and I want to segment cells. The Z-stack of 2D images is really 3D data, and the same cells appear in multiple slices along the Z-axis. I am interested in the cell shape in XY, so I want to preserve the largest cell area across the Z-axis slices. My idea was to combine the consecutive 2D slices after converting them to labelled binary images, but I am having a few issues and need some help to proceed further.
I have two images img_a and img_b. I first converted them to binary images using OTSU thresholding, then applied some morphological operations, and then used cv2.connectedComponentsWithStats() to obtain labelled objects. After labelling the images, I combined them using cv2.bitwise_or(), but it messes up the labels. You can see this in the attached processed image (cells highlighted by red circles): there are multiple labels for one overlapping cell. Instead, I want to assign one unique label to every combined overlapping object.
What I want in the end is that when I combine the two labelled images, each combined overlapping object gets one single label (a unique value), and the largest cell area from both images is kept. Does anyone know how to do this?
Here is the code:
from matplotlib import pyplot as plt
from skimage import io, color, measure
from skimage.util import img_as_ubyte
from skimage.segmentation import clear_border
import cv2
import numpy as np
# img_a and img_b are assumed to be RGB images loaded earlier, e.g. with io.imread()
cells_a = img_a[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_a, thresh_a = cv2.threshold(cells_a, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
kernel = np.ones((3,3),np.uint8)
opening_a = cv2.morphologyEx(thresh_a,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_a = clear_border(opening_a) #Remove edge-touching pixels
numlabels_a, labels_a, stats_a, centroids_a = cv2.connectedComponentsWithStats(opening_a)
img_a1 = color.label2rgb(labels_a, bg_label=0)
## now do the same with img_b
cells_b=img_b[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_b, thresh_b = cv2.threshold(cells_b, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
opening_b = cv2.morphologyEx(thresh_b,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_b = clear_border(opening_b) #Remove edge-touching pixels
numlabels_b, labels_b, stats_b, centroids_b = cv2.connectedComponentsWithStats(opening_b)
img_b1 = color.label2rgb(labels_b, bg_label=0)
## Now combine the two images
combined = cv2.bitwise_or(labels_a, labels_b) ## combine both labelled images to get the maximum area per cell
combined_img = color.label2rgb(combined, bg_label=0)
plt.imshow(combined_img)
Images can be found here:
Based on the comments from Christoph Rackwitz and beaker, I started to look around for 3D connected components labelling. I found a Python library that can handle this, installed it, and gave it a try. It seems to be doing pretty well: it assigns labels in each slice and keeps the labels the same for the same cells in different slices. This is exactly what I wanted.
Here is the link to the library that I used to label objects in 3D:
https://pypi.org/project/connected-components-3d/
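For reference, here is a minimal sketch of how the library can be applied to the slices above (the stacking and the connectivity value are my own illustration, not part of the original thread):
import numpy as np
import cc3d
# stack the 2D binary slices into one 3D volume (Z, Y, X)
stack = np.stack([opening_a, opening_b], axis=0)
# label connected components in 3D, so a cell that appears in both
# slices receives one shared label instead of two conflicting ones
labels_3d = cc3d.connected_components(stack, connectivity=26)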

I can't generate a word cloud with some images

I just started with the wordcloud module in Python 3.7, and I'm using the code below to generate word clouds from a dictionary while trying out different masks. It works for some images: in two cases it works with images of 831x816 and 1000x808 pixels. Does this have to do with the size of the image? Or is it because the images are kind of blurry? Or what is it?
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from wordcloud import WordCloud
our_mask = np.array(Image.open('twitter.png'))
twitter_cloud = WordCloud(background_color = 'white', mask = our_mask)
twitter_cloud.generate_from_frequencies(frequencies)  # frequencies: dict of word -> count
twitter_cloud.to_file("twitter_cloud.jpg")
plt.imshow(twitter_cloud)
plt.axis('off')
plt.show()
How can I fix this?
I had a similar problem with a black-and-white image I used. What fixed it for me was cropping the image more closely to the black drawing, so there was no unnecessary bulk of white area around the edges.
Some images need to be adjusted before they can be used as masks. Note that only pure white values (255) are masked out; all other values are masked in. The problem is that some images are not suitable for masking because the values in the image's np.array don't quite match what the mask logic expects. To solve this, the following can be done:
1. Create the mask object (please try this with your own image, as I couldn't upload one):
import numpy as np
from PIL import Image
from wordcloud import WordCloud
mask = np.array(Image.open("filepath/picture.png"))
print(mask)
If the white parts of the array print as 255, then it is okay. But if they are 0 (or possibly some other value), we have to change them to 255.
2. If the values are different, here is the code for changing them:
2-1. Create a function for the transformation (here our wrong value is 0):
def transform_zeros(val):
    if val == 0:
        return 255
    else:
        return val
2-2. Create an np.array of the same shape:
maskable_image = np.ndarray((mask.shape[0], mask.shape[1]), np.int32)
2-3. Transformation:
for i in range(len(mask)):
    maskable_image[i] = list(map(transform_zeros, mask[i]))
3. Checking:
print(maskable_image)
Then you can use this array as your mask.
mask = maskable_image
All of this is copied and interpreted from this link, so check it if you find my attempted explanation unclear; I just provided the solution but don't understand that much about the color arrays of an image and their transformation.
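As a side note (my own addition, not from the linked source): if the mask is a 2D grayscale array, the row-by-row loop above can be replaced by a single vectorized NumPy operation:
import numpy as np
maskable_image = mask.astype(np.int32)     # copy into a writable int array
maskable_image[maskable_image == 0] = 255  # turn every 0 into 255 in one step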

Vast difference in cv2 imshow vs matplotlib imshow?

I am currently working on a program that requires me to read DICOM files and display them correctly. After extracting the pixel array from the DICOM file, I ran it through both the imshow function from matplotlib and the one from cv2. To my surprise, they yield vastly different images: one has color while the other does not, and one shows more detail than the other. I'm confused as to why this is happening. I found Difference between plt.show and cv2.imshow? and tried converting the pixels to BGR (which cv2 uses) instead of RGB, but this changes nothing. I am wondering why these two frameworks show the same pixel buffer so differently. Below is my code and an image to show the outcomes.
import cv2
import os
import pydicom
import numpy as np
import matplotlib.pyplot as plt
inputdir = 'datasets/dicom/98890234/20030505/CT/CT2/'
outdir = 'datasets/dicom/pngs/'
test_list = [f for f in os.listdir(inputdir)]
for f in test_list[:1]: # remove "[:1]" to convert all images
    ds = pydicom.dcmread(inputdir + f)
    img = np.array(ds.pixel_array, dtype=np.uint8) # get image array
    rows, cols = img.shape
    cannyImg = cv2.Canny(img, cols, rows)
    cv2.imshow('thing', cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    cv2.imshow('thingCanny', cannyImg)
    plt.imshow(ds.pixel_array)
    plt.show()
    cv2.waitKey()
Using the cmap parameter with imshow() might solve the issue. Try this:
plt.imshow(arr, cmap='gray', vmin=0, vmax=255)
Refer to the docs for more info.
Not an answer, but too long for a comment. I think the root cause of your problems is already in the initialization of the array:
img = np.array(ds.pixel_array, dtype = np.uint8)
uint8 is presumably not what you have in the DICOM file: first, because it looks like a CT image, which is usually stored with 10+ bits per pixel, and second, because the artifacts you are facing look very familiar to me. These kinds of artifacts (dense bones displayed in black, gradient effects) usually occur when >8 bit pixel data is interpreted as 8 bit.
BTW: to me, both renderings look obviously incorrect.
Sorry for not being a Python expert and only being able to tell what is wrong, not how to get it right.
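To illustrate the point (my own sketch, not the answerer's code): instead of truncating the pixel data to uint8, rescale the full value range down to 8 bits before displaying it:
import numpy as np
raw = ds.pixel_array.astype(np.float32)  # keep the full 10+ bit value range
lo, hi = raw.min(), raw.max()
img = ((raw - lo) / (hi - lo) * 255.0).astype(np.uint8)  # rescale to 0..255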

Creating a greyscale image from a matrix in Python

I'm Marius, a first-year maths student.
We have received a team assignment where we have to implement a Fourier transformation, and we chose to try to encode the transformation of an image into a JPEG image.
To simplify the problem for ourselves, we chose to do it only for greyscale pictures.
This is my code so far:
from PIL import Image
import numpy as np
import sympy as sp
#
# ALL INFORMATION, NO CALCULATIONS
img = Image.open('mario.png')
img = img.convert('L') # convert to monochrome picture
img.show() #opens the picture
pixels = list(img.getdata())
print(pixels) #to see if we got the pixel numeric values correct
grootte = list(img.size)
print(len(pixels)) #to check if the amount of pixels is correct.
kolommen, rijen = img.size
print("het aantal kolommen is",kolommen,"het aantal rijen is",rijen)
#tot hier allemaal informatie
pixelMatrix = []
while pixels != []:
    pixelMatrix.append(pixels[:kolommen])
    pixels = pixels[kolommen:]
print(pixelMatrix)
pixelMatrix = np.array(pixelMatrix)
print(pixelMatrix.shape)
Now the problem shows itself in the last 3 lines. I want to convert the matrix of values back into an Image, with the matrix 'pixelMatrix' as its values.
I've tried many things, but this seems to be the most obvious way:
im2 = Image.new('L',(kolommen,rijen))
im2.putdata(pixels)
im2.show()
When I use this, it just gives me a black image of the correct dimensions.
Any ideas on how to get back the original picture, starting from the values in my matrix pixelMatrix?
Post Scriptum: We still have to implement the transformation itself, but that would be useless unless we are sure we can convert a matrix back into a greyscale image.
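For what it's worth, a minimal sketch of one way to get the picture back (my own illustration, not part of the thread): note that by the time im2.putdata(pixels) runs, the while loop above has already emptied pixels, which would explain the black image. Building the image from pixelMatrix instead avoids that:
from PIL import Image
import numpy as np
# pixelMatrix is the (rijen, kolommen) array built above
im2 = Image.fromarray(pixelMatrix.astype(np.uint8))  # 2D uint8 -> 'L' mode image
im2.show()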

Why does cv2.addWeighted() give an error that the operation is neither 'array op array', nor 'array op scalar', nor 'scalar op array'?

This is my code for image blending, but there is something wrong with the cv2.addWeighted() function:
import cv2
import numpy as np
img1 = cv2.imread('1.png')
img2 = cv2.imread('messi.jpg')
dst= cv2.addWeighted(img1,0.5,img2,0.5,0)
cv2.imshow('dst',dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
The error is:
Traceback (most recent call last):
dst= cv2.addWeighted(img1,0.5,img2,0.5,0)
cv2.error: C:\projects\opencv-python\opencv\modules\core\src\arithm.cpp:659: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function cv::arithm_op
What is the problem? I looked up the function and I'm sure that I'm calling it correctly. I don't understand the error!
When you run this:
dst= cv2.addWeighted(img1,0.5,img2,0.5,0)
Error info:
error: (-209) The operation is neither 'array op array'
(where arrays have the same size and the same number of channels),
nor 'array op scalar', nor 'scalar op array' in function cv::arithm_op
Possible reasons:
One or more of img1/img2 is not an np.ndarray, e.g. it is None. Maybe you haven't actually read it from disk.
img1.shape does not equal img2.shape; the two images have different sizes.
You should check img1.shape and img2.shape before calling cv2.addWeighted if you are not sure whether they are the same size; a quick check is sketched below.
Or, if you want to add a small image onto the big one, you should use ROI/mask/slice operations.
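A minimal sketch of that check (file names taken from the question):
import cv2
img1 = cv2.imread('1.png')
img2 = cv2.imread('messi.jpg')
# cv2.imread returns None instead of raising an error when a path is wrong
assert img1 is not None and img2 is not None, "one of the images failed to load"
print(img1.shape, img2.shape)  # both shapes must match exactly for cv2.addWeighted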
As pointed out in one of the comments on the question, and in reason 2 of the answer above, you could also resize one of the images to match the other and then try addWeighted.
Your code would then look like the below:
import cv2
import numpy as np
img1 = cv2.imread('1.png')
img2 = cv2.imread('messi.jpg')
# Read about the resize method parameters here: https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=resize#resize
img2_resized = cv2.resize(img2, (img1.shape[1], img1.shape[0]))
dst = cv2.addWeighted(img1, 0.7, img2_resized, 0.3, 0)
cv2.imshow('dst',dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
I was getting the same error. It's because the images have different sizes, so I used an ROI (Region of Interest), i.e. I just took a portion of one image that is the same size as the other image.
Use this code:
part = img[0:168, 0:300]
Here img is whichever of the two images you want to crop, and the slice sizes inside [ ] are the dimensions of the other image.
Then both images have the same size and you can perform operations on them; a short sketch follows below.
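A short sketch of the idea (the 168x300 size is taken from the answer above; which image is the larger one is my assumption):
import cv2
img1 = cv2.imread('1.png')      # assumed to be the larger image
img2 = cv2.imread('messi.jpg')  # assumed to be 168x300
part = img1[0:168, 0:300]       # crop img1 down to img2's size
dst = cv2.addWeighted(part, 0.5, img2, 0.5, 0)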
