Code for image manipulation produces no result/output - python-3.x

I am working on a project where I am required to use classes and objects to manipulate an image in Python using PIL.
I have ruled out the path to the file as the problem (it is formatted correctly), so there must be something wrong in the code itself.
from PIL import Image

class image_play(object):
    def __init__(self, im_name):
        self.im_name = im_name

    def rgb_to_gray_image(self):
        im = Image.open(self.im_name)
        im = im.convert('LA')  # 'LA' = luminance + alpha, i.e. grayscale
        return im

    # editing pixels of image to white
    def loop_over_image(self):
        im = Image.open(self.im_name)
        width, height = im.size
        # nested loop over all pixels of image
        temp = []
        for i in range(width):
            for j in range(height):
                temp.append((255, 255, 255))  # append tuple of RGB values for each pixel
        image_out = Image.new(im.mode, im.size)  # create new image using PIL
        image_out.putdata(temp)  # use the temp list to fill the new image
        return image_out

pic = image_play('test.png')
picGray = pic.rgb_to_gray_image()
picWhite = pic.loop_over_image()

I simply added picGray.show() and picWhite.show() and now I have viewable output. Hmmmm...
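For completeness, a minimal sketch of the calling lines with that fix applied (the test_gray.png / test_white.png output names are just examples):
picGray.show()    # opens the grayscale image in the default viewer
picWhite.show()   # opens the all-white image
# or persist the results to disk instead of displaying them:
picGray.save('test_gray.png')
picWhite.save('test_white.png')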

Related

Changing x&y coordinates for a photoshop layer using python

I'm trying to change the position (x & y coordinates) of a text layer in Photoshop using Python.
import win32com.client
import os
psApp = win32com.client.Dispatch("Photoshop.Application")  # opens Photoshop if it's not already running
psApp.Open(r"E:\AutoEdit2023\Sample.psd")  # opens the PSD
doc = psApp.Application.ActiveDocument
#Edit Text Layer
layer_first_panel = doc.layerSets["Panel1"].layerSets["Group 4"].layerSets["Box1"].ArtLayers["Text1"]
text_of_layer = layer_first_panel.TextItem
text_of_layer.contents = "sample text"
#text_of_layer.position = [500,280] ## here i want to change it in pixels format
print(text_of_layer.position)
The print result is (2.04111687479926, 4.222114584656246), so my approach obviously doesn't work.
Edit: its position in pixels is currently (453.38 px, 279.43 px).
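One thing worth checking (not from the original post): the position values the COM API returns are expressed in the document's current ruler units (inches by default), not pixels. A sketch of one possible fix, assuming the standard Photoshop PsUnits enumeration where psPixels == 1, is to switch the ruler units to pixels before reading or writing the position:
import win32com.client

psApp = win32com.client.Dispatch("Photoshop.Application")
psApp.Preferences.RulerUnits = 1  # assumption: PsUnits.psPixels == 1

doc = psApp.Application.ActiveDocument
layer_first_panel = doc.layerSets["Panel1"].layerSets["Group 4"].layerSets["Box1"].ArtLayers["Text1"]
text_of_layer = layer_first_panel.TextItem
text_of_layer.position = [500, 280]  # should now be interpreted as pixels
print(text_of_layer.position)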

Python PIL efficiently gluing images together, while keeping image optimizations

TL;DR: When concatenating 10 MB worth of images into one large image, the resulting image takes 1 GB of memory before I save/optimize it to disk. How can I make this in-memory size smaller?
I am working on a project where I am taking a list of lists of Python PIL image objects (image tiles) and gluing them together to:
Generate a list of images that have been concatenated together into columns
Take #1 and make a full-blown image out of all the tiles
This post has been great at providing a function that accomplishes 1 & 2 by:
Figuring out the final image size
Creating a blank canvas for the images to be added to
Adding all the images, in sequence, to the canvas we just generated
However, the issue I am encountering with the code:
The size of the original objects in the list of lists is ~50 MB.
When I do the first pass over the list of lists of image objects, to generate the list of images that are columns, memory increases by 1 GB... And when I make the final image, memory increases by another 1 GB.
Since the resulting image is 105,985 x 2,560 pixels, the 1 GB is somewhat expected ((105984 * 2560) * 3 / 1024 / 1024 ≈ 776 MB, i.e. roughly 800 MB).
My hunch is that the canvases being created are non-optimized and hence take up the full amount of space (pixels * 3 bytes), while the image tile objects I am trying to paste onto the canvas are optimized for size.
Hence my question: utilizing PIL/Python 3, is there a better way to concatenate images together while keeping their original sizes/optimizations? After I process the image and re-optimize it via
.save(DiskLocation, optimize=True, quality=94)
the resulting image is ~30 MB (which is roughly the size of the original list of lists containing PIL objects).
For reference, from the post linked above, this is the function that I use to concatenate images together:
from PIL import Image

# written by teekarna
# https://stackoverflow.com/questions/30227466/combine-several-images-horizontally-with-python
def append_images(images, direction='horizontal',
                  bg_color=(255, 255, 255), aligment='center'):
    """
    Appends images in horizontal/vertical direction.

    Args:
        images: List of PIL images
        direction: direction of concatenation, 'horizontal' or 'vertical'
        bg_color: Background color (default: white)
        aligment: alignment mode if images need padding;
            'left', 'right', 'top', 'bottom', or 'center'

    Returns:
        Concatenated image as a new PIL image object.
    """
    widths, heights = zip(*(i.size for i in images))

    if direction == 'horizontal':
        new_width = sum(widths)
        new_height = max(heights)
    else:
        new_width = max(widths)
        new_height = sum(heights)

    new_im = Image.new('RGB', (new_width, new_height), color=bg_color)

    offset = 0
    for im in images:
        if direction == 'horizontal':
            y = 0
            if aligment == 'center':
                y = int((new_height - im.size[1]) / 2)
            elif aligment == 'bottom':
                y = new_height - im.size[1]
            new_im.paste(im, (offset, y))
            offset += im.size[0]
        else:
            x = 0
            if aligment == 'center':
                x = int((new_width - im.size[0]) / 2)
            elif aligment == 'right':
                x = new_width - im.size[0]
            new_im.paste(im, (x, offset))
            offset += im.size[1]

    return new_im
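For context, a minimal usage sketch of how the two steps described above can be driven with this function (assumption: tiles is the list of lists of PIL tile images, one inner list per column):
def build_full_image(tiles):
    # Step 1: glue each inner list of tiles into a single column image
    columns = [append_images(col, direction='vertical') for col in tiles]
    # Step 2: glue the columns side by side into the final image
    return append_images(columns, direction='horizontal')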
While I do not have an explanation for what was causing my runaway memory issue, I was able to tack on some code that seems to have fixed it.
For each tile that I am trying to glue together, I ran a 'resizing' script (below). Somehow, this fixed the issue I was having ¯\_(ツ)_/¯
Resize images script:
def resize_image(image, ImageQualityReduction=0.78125):
    # resize image by a percentage of its original dimensions
    width, height = image.size
    #print(width, height)
    new_width = int(round(width * ImageQualityReduction))
    new_height = int(round(height * ImageQualityReduction))
    # note: Image.ANTIALIAS is an alias for Image.LANCZOS and has been removed in newer Pillow releases
    resized_image = image.resize((new_width, new_height), Image.ANTIALIAS)
    return resized_image  # , new_width, new_height
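Applied to the gluing workflow above, the resize step then simply runs over every tile before concatenation (same assumptions as the earlier sketch):
resized_tiles = [[resize_image(t) for t in col] for col in tiles]
full_image = build_full_image(resized_tiles)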

How to read and display an image from a .rec file

I am using the im2rec.py tool to first generate .lst files and then the .rec and .idx files, as follows:
BASE_DIR = './'
IMAGES_DIR = os.path.join(BASE_DIR,'IMAGES')
DATASET_DIR = os.path.join(BASE_DIR,'Dataset')
TRAIN_RATIO = 0.8
TEST_DATA_RATIO = 0.1
Dataset_lst_file = os.path.join(DATASET_DIR,"dataset")
!python $BASE_DIR/tools/im2rec.py --list --recursive --test-ratio=$TEST_DATA_RATIO --train-ratio=$TRAIN_RATIO $Dataset_lst_file $IMAGES_DIR
!python $BASE_DIR/tools/im2rec.py --resize 224 --center-crop --num-thread 4 $Dataset_lst_file $IMAGES_DIR
I am successfully generating the .lst, .rec and .idx files. However, my question is how I can read a specific image from the .rec file and plot it - for instance, to check whether the images were recorded correctly, or just to explore my dataset.
------------Update----------
I was able to plot as follows:
# https://mxnet.apache.org/versions/1.5.0/tutorials/basic/data.html
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt

data_iter = mx.image.ImageIter(batch_size=4, data_shape=(3, 224, 224),
                               path_imgrec=Dataset_lst_file + '_train.rec',
                               path_imgidx=Dataset_lst_file + '_train.idx')
data_iter.reset()
for j in range(4):
    batch = data_iter.next()
    data = batch.data[0]
    #print(batch)
    label = batch.label[0].asnumpy()
    for i in range(4):
        ax = plt.subplot(1, 4, i + 1)
        # data is NCHW, so transpose each image to HWC for imshow
        plt.imshow(data[i].asnumpy().astype(np.uint8).transpose((1, 2, 0)))
        ax.set_title('class: ' + str(label[i]))
        plt.axis('off')
    plt.show()
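To read one specific record by index instead of iterating in batches, MXNet's recordio module can be used directly on the .idx/.rec pair generated above; a sketch (the index 0 is just an example):
import mxnet as mx
import matplotlib.pyplot as plt

record = mx.recordio.MXIndexedRecordIO(Dataset_lst_file + '_train.idx',
                                       Dataset_lst_file + '_train.rec', 'r')
item = record.read_idx(0)                    # raw packed record at index 0
header, img = mx.recordio.unpack_img(item)   # header holds the label; img is a decoded BGR numpy array
print('label:', header.label)
plt.imshow(img[:, :, ::-1])                  # BGR -> RGB for display
plt.axis('off')
plt.show()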
This tutorial includes an example of image visualization from a .rec file: https://gluon-cv.mxnet.io/build/examples_detection/finetune_detection.html
import gluoncv as gcv
from gluoncv.utils import viz
from matplotlib import pyplot as plt

dataset = gcv.data.RecordFileDetection('pikachu_train.rec')
classes = ['pikachu']  # only one foreground class here
image, label = dataset[0]
print('label:', label)
# display image and label
ax = viz.plot_bbox(image, bboxes=label[:, :4], labels=label[:, 4:5], class_names=classes)
plt.show()
For completeness with respect to the previous answer: this will display images on screen using plot_bbox, or render images to a folder.
Usage:
dumpRecordFileDetection('./data/val.rec', False, True, classes, ctx)

import os
import cv2
import numpy as np
import mxnet as mx
import gluoncv as gcv
from gluoncv.utils import viz
from matplotlib import pyplot as plt

def dumpRecordFileDetection(record_filename, display_ui, output_to_directory, classes, ctx):
    """Dump RecordFileDetection to screen or a directory"""
    if isinstance(ctx, mx.Context):
        ctx = [ctx]
    dataset = gcv.data.RecordFileDetection(record_filename)
    print('images:', len(dataset))
    image, label = dataset[0]
    bboxes = label[:, :4]
    labels = label[:, 4:5]
    print(image.shape, label.shape)
    print('labeldata:', label)
    print('bboxes:', bboxes)
    print('labels:', labels)
    image_dump_dir = os.path.join("./dump")
    if not os.path.exists(image_dump_dir):
        os.makedirs(image_dump_dir)
    for i, batch in enumerate(dataset):
        size = len(batch)
        image, label = batch
        print(image.shape, label.shape)
        bboxes = label[:, :4]
        labels = label[:, 4:5].astype(np.uint8)
        if output_to_directory:
            file_path = os.path.join("./dump", "{0}_.png".format(i))
            # Format (c x H x W)
            img = image.asnumpy().astype(np.uint8)
            for box, lbl in zip(bboxes, labels):
                cv2.rectangle(img, (box[0], box[1]), (box[2], box[3]), (0, 0, 255), 2)
                txt = "{0}".format(classes[lbl[0]])
                cv2.putText(img, txt, (box[0], box[1]), cv2.FONT_HERSHEY_PLAIN,
                            1, (0, 255, 0), 1, cv2.LINE_AA, False)
            cv2.imwrite(file_path, img)
        if display_ui:
            ax = viz.plot_bbox(image, bboxes=bboxes, labels=labels, class_names=classes)
            plt.show()

Increase width/height of image (not resize)

From https://www.pyimagesearch.com/2018/07/19/opencv-tutorial-a-guide-to-learn-opencv/
I'm able to extract the contours and write them as files.
For example, I have a photo with some scribbled text: "in there".
I've been able to extract the letters as separate files, but I want these letter files to have the same width and height. For example, the widths of "i" and "r" will differ; in that case I want to append (any black/white pixels) to the right of the "i" image so its width becomes the same as that of "r".
How do I do this in Python? I just want to increase the size of the photo (not resize it).
My code looks something like this:
# find contours (i.e., outlines) of the foreground objects in the
# thresholded image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
output = image.copy()
ROI_number = 0
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    ROI = image[y:y+h, x:x+w]
    file = 'ROI_{}.png'.format(ROI_number)
    cv2.imwrite(file, ROI)
    ROI_number += 1  # increment so each contour is written to its own file
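One way to get what the question asks for (not from the linked tutorial) is to pad every extracted letter image on the right with cv2.copyMakeBorder() until all of them match the width of the widest one; white padding is assumed here:
def pad_to_common_width(rois, pad_value=(255, 255, 255)):
    # rois: list of letter images (numpy arrays) cropped out above
    target_w = max(r.shape[1] for r in rois)
    padded = []
    for r in rois:
        extra = target_w - r.shape[1]  # number of pixel columns to add on the right
        padded.append(cv2.copyMakeBorder(r, 0, 0, 0, extra,
                                         cv2.BORDER_CONSTANT, value=pad_value))
    return padded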
Here are a couple of other ways to do that using Python/OpenCV, using cv2.copyMakeBorder() to extend the border to the right by 50 pixels. The first way simply extends the border by replication. The second extends it with the mean (average) blue background color, using a mask to get only the blue pixels.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('i.png')
# get mask of background pixels (for result2b only)
lowcolor = (232,221,163)
highcolor = (252,241,183)
mask = cv2.inRange(img, lowcolor, highcolor)
# get average color of background using mask on img (for result2b only)
mean = cv2.mean(img, mask)[0:3]
color = (mean[0],mean[1],mean[2])
# extend image to the right by 50 pixels
result = img.copy()
result2a = cv2.copyMakeBorder(result, 0,0,0,50, cv2.BORDER_REPLICATE)
result2b = cv2.copyMakeBorder(result, 0,0,0,50, cv2.BORDER_CONSTANT, value=color)
# view result
cv2.imshow("img", img)
cv2.imshow("mask", mask)
cv2.imshow("result2a", result2a)
cv2.imshow("result2b", result2b)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("i_extended2a.jpg", result2a)
cv2.imwrite("i_extended2b.jpg", result2b)
Replicated Result:
Average Background Color Result:
In Python/OpenCV/Numpy you create a new image of the size and background color you want. Then you use numpy slicing to insert the old image into the new one. For example:
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('i.png')
ht, wd, cc = img.shape
# create new image of desired size (extended by 50 pixels in width) and desired color
ww = wd+50
hh = ht
color = (242,231,173)
result = np.full((hh,ww,cc), color, dtype=np.uint8)
# copy img image into image at offsets yy=0,xx=0
yy=0
xx=0
result[yy:yy+ht, xx:xx+wd] = img
# view result
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("i_extended.jpg", result)

How can the code be modified so that multiple images can be read and stored in an array, so that they can be used for LSB steganography?

The problem is that this code only handles one image, and I need to adapt it so that multiple images (their width, height, etc.) can be stored.
I am not fluent in Python. I worked with it about 4 years ago, but by now I have forgotten most of the syntax.
def __init__(self, im):
    self.image = im
    self.height, self.width, self.nbchannels = im.shape
    self.size = self.width * self.height

    self.maskONEValues = [1, 2, 4, 8, 16, 32, 64, 128]
    # Mask used to put a one, e.g. 1 -> 00000001, 2 -> 00000010 ...; used with bitwise OR
    self.maskONE = self.maskONEValues.pop(0)  # will be used to do bitwise operations

    self.maskZEROValues = [254, 253, 251, 247, 239, 223, 191, 127]
    # Mask used to put a zero, e.g. 254 -> 11111110, 253 -> 11111101 ...; used with bitwise AND
    self.maskZERO = self.maskZEROValues.pop(0)

    self.curwidth = 0   # current width position
    self.curheight = 0  # current height position
    self.curchan = 0    # current channel position
I want to store multiple images (their width, height, etc.) from a file path (a directory that contains these images) in an array.
Try:
from PIL import Image
import os

# This variable will store the data of the images
Image_data = []

dir_path = r"C:\Users\vasudeos\Pictures"

for file in os.listdir(dir_path):
    if file.lower().endswith(".png"):
        # Creating the image file object
        img = Image.open(os.path.join(dir_path, file))

        # Getting dimensions of the image
        x, y = img.size

        # Getting channels of the image
        channel = img.mode
        img.close()

        # Adding the data of the image file to our list
        Image_data.append(tuple([channel, (x, y)]))

print(Image_data)
Just change the dir_path variable to the directory of your image files. This code stores the color channel and dimensions of each image in a separate tuple unique to that file, and adds the tuple to a list.
P.S.:
Tuple format = (channels, dimensions)
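If the images themselves (not just their metadata) are needed for the LSB class shown in the question, one option is to load each file as a NumPy array with OpenCV, since the __init__ above relies on im.shape, and keep one instance per image. A sketch (the class name SteganoImage and the .png filter are assumptions):
import os
import cv2

def load_stegano_images(dir_path):
    objects = []
    for file in sorted(os.listdir(dir_path)):
        if file.lower().endswith(".png"):
            im = cv2.imread(os.path.join(dir_path, file))  # numpy array; im.shape gives (height, width, channels)
            if im is not None:
                objects.append(SteganoImage(im))  # hypothetical name for the class whose __init__ is shown above
    return objects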
