Convert image to ndarray and vice-versa - python-3.x

I have this image and the following Python code.
import numpy as np
from PIL import Image
image = Image.open("File.png")
image = image.convert("1")
image.show()
bw = np.asarray(image).copy()
im01 = np.where(bw, 0, 1)
new_img = Image.fromarray(im01)
new_img.show()
Since the image is black and white, I can see im01 as a 184x184 2D ndarray of 0s and 1s, representing the white and black pixels.
Why didn't Image.fromarray() work?
Is there something that I am missing?
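The usual culprit with Image.fromarray here is the array's dtype: np.where(bw, 0, 1) returns a platform-default integer array (typically int64), which PIL has no image mode for. A minimal sketch of the common fix, converting to uint8 with 0/255 values before going back to an Image:
import numpy as np
from PIL import Image

image = Image.open("File.png").convert("1")
bw = np.asarray(image).copy()

# invert, then cast to uint8 and scale to 0/255 so that
# Image.fromarray can infer a greyscale ("L") image
im01 = np.where(bw, 0, 1).astype(np.uint8) * 255
new_img = Image.fromarray(im01)
new_img.show()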

Related

Color diffusion when merging multiple images in a folder using PIL in python

I have a set of 17 images, and one of them has a highlighted pixel for my use. But when I merge these 17 images, I get the color, but it diffuses out of the pixel boundaries and I start seeing some colored pixels on the black background.
I am using the PIL library for the merging. I am attaching my code and the images for reference. Any help would be appreciated.
import os
import numpy as np
from PIL import Image

# Creating the pixel array
img_path = '/Volumes/MY_PASSPORT/JRF/cancer_genome/gopal_gen/png_files/'
image_list = []
for entry in os.listdir(img_path):
    if entry.endswith('.png'):
        # note: rstrip strips characters, not a suffix; it works here
        # only because the filename stem is purely numeric
        entry = int(entry.rstrip('.csv.png'))
        image_list.append(entry)
image_list.sort()

list_img = []
for j in range(len(image_list)):
    stuff = str(image_list[j]) + '.csv.png'
    list_img.append(stuff)

images = [Image.open(img_path + x) for x in list_img]
widths, heights = zip(*(i.size for i in images))
total_width = sum(widths)
max_height = max(heights)

# paste the images side by side into one wide canvas
new_im = Image.new('RGB', (total_width, max_height))
x_offset = 0
for im in images:
    new_im.paste(im, (x_offset, 0))
    x_offset += im.size[0]
new_im.save(img_path + 'final_result_image.jpg')
Here is the combined image:
The third column has a pixel highlighted.
Here is the zoomed in part with the problem.
The JPEG format is lossy - it is allowed to change your pixels to make the file smaller. If your image is a conventional photo of a real-life scene, this doesn't normally matter. If your data is a blocky, computer-generated image, or a set of classes from a classification process, it can go horribly wrong if you use JPEG.
So, the answer is to use PNG (or potentially TIFF) format for images that need to be lossless.
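Concretely, assuming the rest of the merging code stays the same, the fix is just the extension in the final save call:
# PNG is lossless, so the highlighted pixel keeps its exact colour
new_im.save(img_path + 'final_result_image.png')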

How do you change the color of specified pixels in an image?

I want to be able to detect a certain area of pixels based on their RGB values and change them to some other color (not black/white).
I have tried changing these values in the code, but my resulting images always show black pixels replacing the specified locations:
pixelMap[i,j]= (255,255,255)
from PIL import Image

im = Image.open('Bird.jpg')
pixelMap = im.load()
img = Image.new(im.mode, im.size)
pixelsNew = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i, j]
        if int(toup[0] > 175) and int(toup[1] < 100) and int(toup[2] < 100):
            pixelMap[i, j] = (255, 255, 255)
        else:
            pixelsNew[i, j] = pixelMap[i, j]
img.show()
You will find that iterating over images with Python loops is really slow, so you should get into the habit of using Numpy, or optimised OpenCV or skimage code, instead.
So, starting with this image:
from PIL import Image
import numpy as np
# Open image
im = Image.open('bird.jpg')
# Make into Numpy array
imnp = np.array(im)
# Make all reddish pixels white
imnp[(imnp[:,:,0]>170) & (imnp[:,:,1]<100) & (imnp[:,:,2]<100)] = [255,255,255]
# Convert back to PIL and save
Image.fromarray(imnp).save('result.jpg')
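(As in the previous question, note that saving the result as JPEG is again lossy, so the freshly whitened pixels may not survive as exactly (255, 255, 255); use PNG if the exact values matter.)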
It looks like a tiny bug: the matching pixels are written back into the source image (pixelMap), while the displayed image (img) never has them set, so those locations keep their default black.
Instead of: pixelMap[i,j] = (255,255,255)
use: pixelsNew[i,j] = (255,255,255)
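A minimal sketch of the corrected loop, under the same setup as the question's code:
for i in range(img.size[0]):
    for j in range(img.size[1]):
        toup = pixelMap[i, j]
        if toup[0] > 175 and toup[1] < 100 and toup[2] < 100:
            pixelsNew[i, j] = (255, 255, 255)  # write into the new image
        else:
            pixelsNew[i, j] = pixelMap[i, j]
img.show()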

How can I get an array of ImageDraw object?

I'm writing a generic algorithm for pictures, so I started with the Image class of the PIL library and created a numpy array of the input image. Now I want to draw some figures, and the easiest way is to use ImageDraw; but then I need arrays again for the next step, so I need to convert the ImageDraw object either to an Image object or to a numpy array.
Any suggestions on how I can do that?
I tried the numpy conversion that worked on the Image objects, and tried to find built-in conversion methods.
from PIL import Image, ImageDraw
import numpy
input_image = Image.open("i2.jpg")
width, height = input_image.size
num_weights = width * height
image_draw = ImageDraw.Draw(Image.new('RGB', (width, height), 'WHITE'))
input_image = numpy.array(input_image.getdata())
# Do some stuff with image_draw using information from input_image
#And try to convert image_draw to input_image
I want to have a numpy array or an Image object as the output.
I think you want to process an image both as a PIL Image so you can draw on it, and also as a Numpy array so you can do processing on it.
So, here is an example of how to draw on an image with PIL, then convert it to a Numpy array and do some processing on it, then convert it back to a PIL Image.
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Create a black 600x200 image
img = Image.new('RGB', (600, 200))
# Get a drawing handle
draw = ImageDraw.Draw(img)
# Draw on image
draw.rectangle(xy=[10,20,300,80], fill='red')
# Save as "result1.png"
img.save('result1.png')
# Convert PIL Image to Numpy array for processing
na = np.array(img)
# Make mask selecting red pixels then make them blue
Rmask =(na[:, :, 0:3] == [255,0,0]).all(2)
na[Rmask] = [0,0,255]
# Convert Numpy array back to PIL Image
img = Image.fromarray(na)
# Save as "result2.png"
img.save('result2.png')
The two images are "result1.png":
and "result2.png":

Image Cropping to get just the particular shape out of image

I have the images as below; I need to extract just the white strip portion from all the images.
I have tried using PIL to extract the rectangular portion by manually specifying the pixel values. Is there any automated way to get this done, where just feeding in the image gives back the rectangular portion?
Below is my snippet of code:
from PIL import Image

img = Image.open('C:/Users/ShAgarwal/Documents/image_dataset/pic9.jpg')
half_the_width = img.size[0] / 2
half_the_height = img.size[1] / 2
img4 = img.crop((
    half_the_width - 1632,
    half_the_height - 440,
    half_the_width + 1632,
    half_the_height + 80,
))
sample image
import cv2
import numpy as np

image = 'IMG_3134.JPG'

# read image
imgc = cv2.imread(image)
img = cv2.resize(imgc, None, fx=0.25, fy=0.25)  # resize since image is huge

# smooth, then mark edge coordinates automatically with Canny's algorithm
blurred = cv2.blur(img, (3, 3))
canny = cv2.Canny(blurred, 50, 200)

# find the non-zero min-max coords of canny
pts = np.argwhere(canny > 0)
y1, x1 = pts.min(axis=0)
y2, x2 = pts.max(axis=0)

# crop the region
cropped = img[y1:y2, x1:x2]
cv2.imwrite("cropped.png", cropped)

# select the bounded area around the white boundary
tagged = cv2.rectangle(img.copy(), (x1, y1), (x2, y2), (0, 255, 0), 3, cv2.LINE_AA)
r = cv2.selectROI(tagged)
imCrop = img[int(r[1]):int(r[1] + r[3]), int(r[0]):int(r[0] + r[2])]

# bounded area
cv2.imwrite("taggd2.png", imCrop)
cv2.waitKey()
Results from above code
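Note that cv2.selectROI is interactive: it opens a window in which you drag out the region, then press ENTER or SPACE to confirm, so the script pauses at that line until you respond.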

How to detect Roads from satellite images and get the outline using opencv python

Hey guys, I am trying to detect roads from satellite images.
After identifying the roads, I am getting coordinates for them, i.e. road coordinates and building coordinates.
Here is the code with which I tried to extract roads from the satellite image.
Input image
import cv2
import numpy as np

# reading the image directly from the current working directory
build_image = cv2.imread("avilla_san_marcos_gilbert,az.png")

# MeanShift filtering was also tried:
# shifted = cv2.pyrMeanShiftFiltering(build_image, 21, 51)

# greyscale conversion
build_gray = cv2.cvtColor(build_image, cv2.COLOR_BGR2GRAY)

# Otsu thresholding
thresh = cv2.threshold(build_gray, 0, 255,
                       cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cv2.imshow("OTSU Thresholded Image", thresh)
cv2.imwrite("OTSU_1 image.jpg", thresh)

# coordinates of white pixels (buildings)
build_white = np.argwhere(thresh == 255)
build_data = np.array(build_white)
np.savetxt('build_coords.csv', build_data, delimiter=",")

# coordinates of black pixels (roads)
road_white = np.argwhere(thresh == 0)
road_data = np.array(road_white)
print(road_data)

# saving the vector of road pixels in a csv file
np.savetxt('road_coords.csv', road_data, delimiter=",")
The output image is plotted from the csv of road pixel coordinates.
I have a problem with the output image obtained: it has also detected trees, which I have to eliminate by obtaining the pixel coordinates of the trees.
So from that output image I have to extract the layout of the roads alone. Please try to help me out, guys. Thanks in advance.
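No answer was posted for this one, but a common approach to the tree problem is to treat small isolated blobs in the thresholded image as noise and drop them with connected-component analysis. A minimal sketch, assuming the thresh image from the code above and a hypothetical area cutoff min_area that you would tune for your image:
# keep only large dark (road) regions; small blobs such as trees are dropped
road_mask = (thresh == 0).astype(np.uint8)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(road_mask, connectivity=8)

min_area = 500  # hypothetical threshold, tune per image
clean = np.zeros_like(road_mask)
for label in range(1, num):  # label 0 is the background
    if stats[label, cv2.CC_STAT_AREA] >= min_area:
        clean[labels == label] = 255

cv2.imwrite("roads_only.png", clean)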
