generate an overlay against a given image in OpenCV or Scikit-image - python-3.x

Given an image, is there a way to generate a grid that can be overlaid on the original image? How do I do it in either OpenCV or scikit-image?

Like this. It should work with scikit-image and OpenCV, though you'll need to reverse the order of the colours for OpenCV as it uses BGR rather than RGB.
import numpy as np
spacing = 40
# Make black image
im = np.zeros((480,640,3), dtype=np.uint8)
# Draw grid
im[:, spacing:-1:spacing] = [0,0,255] # blue vertical lines
im[spacing:-1:spacing, :] = [255,0,0] # red horizontal lines
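To draw the grid over an actual image rather than a black canvas, you can apply the same slicing to the loaded image. Here is a minimal sketch, assuming a 3-channel RGB input read with scikit-image (the filename image.png is just a placeholder):
from skimage import io
spacing = 40
# Load the original image (assumed RGB, no alpha channel)
im = io.imread('image.png')
# Overwrite every 40th column/row with the grid colours
im[:, spacing:-1:spacing] = [0,0,255] # blue vertical lines
im[spacing:-1:spacing, :] = [255,0,0] # red horizontal lines
io.imsave('overlaid.png', im)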

Related

Why is a generated SVG image less rich than the corresponding PNG image

To set this up, I used the svgwrite library to create a sample SVG image (20 squares of side 100 at random locations on a 400x400 canvas):
import svgwrite
import random
random.seed(42)
dwg = svgwrite.Drawing('x.svg', size=(400,400))
dwg.add(dwg.rect(insert=(0,0), size=('100%', '100%'), fill='white')) # White background
for i in range(20):
    coordinates = (random.randint(0,399), random.randint(0,399))
    color = (random.randint(0,255), random.randint(0,255), random.randint(0,255))
    dwg.add(dwg.rect(coordinates, (100, 100),
                     stroke='black',
                     fill=svgwrite.rgb(*color),
                     stroke_width=1))
dwg.save()
I then wrote a sample pygame program to generate a PNG image of the same sample. (A seed has been used to generate the same sequence of squares.)
import pygame
import random
random.seed(42)
pygame.init()
display = pygame.display.set_mode((400,400))
display.fill((255,255,255)) # White background
for i in range(20):
    coordinates = (random.randint(0,399), random.randint(0,399))
    color = (random.randint(0,255), random.randint(0,255), random.randint(0,255))
    pygame.draw.rect(display, color, coordinates+(100,100), 0)
    pygame.draw.rect(display, (0,0,0), coordinates+(100,100), 1) # Black border
pygame.image.save(display, "x.png")
These are the images that I got (SVGs can't be uploaded to SO, so I have provided a screenshot; nevertheless, the programs above can be run to produce the same output).
My question is, why is the PNG (on the left) richer and sharper than the corresponding SVG image? The SVG looks blurred and bland, comparatively.
EDIT: One can notice the fine white line between the first two squares at the top-left corner. It's not very clear in the SVG.
Two things I think may have an impact:
You are viewing the SVG in an image viewer, which can distort it. Vector viewers generally rasterise the SVG into a bitmap sized to your screen and then display that bitmap; if the rasterisation softens edges, or the viewer misjudges your screen size, the image will look blurred.
The PNG is produced with pygame, but the SVG is produced with a different module (svgwrite). The two may render differently and export at a different quality than if you had exported everything with pygame.
For me, the SVG appears blurred in GIMP, for example, but not in another SVG viewer.
So I think the problem comes from your image viewer.
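One way to check is to rasterise the SVG yourself at a known resolution and compare the result with the pygame PNG. A minimal sketch, assuming the cairosvg package is installed:
import cairosvg
# Render x.svg to a 400x400 PNG; if this looks as sharp as the pygame output,
# the blur is coming from the viewer's rasterisation, not from the SVG itself
cairosvg.svg2png(url='x.svg', write_to='x_rasterised.png',
                 output_width=400, output_height=400)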

Fill text after canny detection

I have an image that contains some text on a colored background. I want to extract the text using Tesseract, but first I want to replace the colored background with a white one and make the text itself black, to increase the accuracy of the detection process.
I was trying to use Canny edge detection:
import cv2
import numpy as np
image=cv2.imread('tt.png')
cv2.imshow('input image',image)
cv2.waitKey(0)
# Convert to greyscale and find edges
gray=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
edged=cv2.Canny(gray,30,200)
# Invert so the edges are black on a white background
edged = cv2.bitwise_not(edged)
cv2.imshow('canny edges',edged)
cv2.waitKey(0)
That worked fine to replace the colored background with white, but it made the text white with black outlines (see the images below).
So is there any way to make the whole text black, or is there another approach I can use to achieve that?
before Canny detection
after Canny detection
Edit
The image may have mixed background colors, like this:
input image
You can simply do it using THRESH_BINARY_INV; here is the code:
cv::namedWindow("Original_Image", cv::WINDOW_FREERATIO);
cv::namedWindow("Result", cv::WINDOW_FREERATIO);
cv::Mat originalImg = cv::imread("BCQqn.png");
cv::Mat gray;
cv::cvtColor(originalImg, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, gray, 130, 255, cv::THRESH_BINARY_INV);
cv::imshow("Original_Image", originalImg);
cv::imshow("Result", gray);
cv::waitKey();
And it is the result:
You can play with the threshold value (130 in the above example).
Note: the code is in C++; if you are using Python, the same steps apply (see the sketch below).
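Here is a minimal Python sketch of the same steps, assuming the input file is the tt.png from the question; tune the threshold value, or let Otsu's method pick it automatically for images with mixed background colors:
import cv2
originalImg = cv2.imread('tt.png')
gray = cv2.cvtColor(originalImg, cv2.COLOR_BGR2GRAY)
# Fixed threshold, as in the C++ example above
_, result = cv2.threshold(gray, 130, 255, cv2.THRESH_BINARY_INV)
# Alternative for mixed backgrounds: let Otsu choose the threshold
# _, result = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imshow('Original_Image', originalImg)
cv2.imshow('Result', result)
cv2.waitKey()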
Good Luck!!

overlay one part of image onto another image

There are two corresponding images; the second one shows the mask area of the first one.
How can I overlay the red area in the second image onto the first image?
You can do it with OpenCV like this:
#!/usr/local/bin/python3
import numpy as np
import cv2
# Load base image and overlay
base = cv2.imread("image.jpg", cv2.IMREAD_UNCHANGED)
over = cv2.imread("overlay.jpg", cv2.IMREAD_UNCHANGED)
# Anywhere the red channel of overlay image exceeds 127, make base image red
# Remember OpenCV uses BGR ordering, not RGB
base[over[...,2]>127] = [0,0,255]
# Save result
cv2.imwrite('result.jpg',base)
If you wanted to blend a small percentage of red (say 20%) while retaining the structure of the underlying image, you could do this:
#!/usr/local/bin/python3
import numpy as np
import cv2
# Load base image and overlay
base = cv2.imread("image.jpg", cv2.IMREAD_UNCHANGED)
over = cv2.imread("overlay.jpg", cv2.IMREAD_UNCHANGED)
# Blend 80% of the base layer with 20% red
blended = cv2.addWeighted(base,0.8,(np.zeros_like(base)+[0,0,255]).astype(np.uint8),0.2,0)
# Anywhere the red channel of overlay image exceeds 127, use blended image, elsewhere use base
result = np.where((over[...,2]>127)[...,None], blended, base)
# Save result
cv2.imwrite('result.jpg',result)
By the way, you don't actually need any Python, you can just do it in Terminal with ImageMagick like this:
magick image.jpg \( overlay.jpg -fuzz 30% -transparent blue \) -composite result.png
Keywords: Python, image processing, overlay, mask.

How to programmatically (preferably using PIL in python) calculate the total number of pixels of an object with a stripped background?

I have multiple pictures, each of which has an object with its background removed. The pictures are 500x400 pixels in size.
I am looking for a way to programmatically (preferably using Python) calculate the total number of pixels belonging to the object inside the picture (i.e. the area that is not background).
I used the PIL package in Python to get the dimensions of the image object, as follows:
print(image.size)
This command successfully produced the dimensions of the entire picture (500x400 pixels) but not the dimensions of the object of interest inside the picture.
Does anyone know how to calculate the dimensions of an object inside a picture using python? An example of a picture is embedded below.
You could floodfill the background pixels with some colour not present in the image, e.g. magenta, then count the magenta pixels and subtract that number from number of pixels in image (width x height).
Here is an example:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Open the image and ensure RGB
im = Image.open('man.png').convert('RGB')
# Make all background pixels magenta
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=50)
# Save for checking
im.save('floodfilled.png')
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255,0,255]).all(2)
count = np.count_nonzero(bgMask)
# Report results
print(f"Background pixels: {count} of {im.width*im.height} total")
Sample Output
Background pixels: 148259 of 199600 total
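The object pixel count the question asks for is then just the remainder:
# Object pixels = total pixels minus background pixels
print(f"Object pixels: {im.width*im.height - count}")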
Not sure how important the enclosed areas between arms and body are to you... if you just replace all greys without using the flood-filling technique, you risk making, say, the shirt magenta and counting that as background.

How to draw a rectangle with edge roughness

I use the ImageDraw module to draw a rectangle, which is very simple:
from PIL import Image, ImageDraw
blank = Image.new("RGB", [pixelx, pixely], "black")
draw = ImageDraw.Draw(blank)
draw.rectangle([x1, y1, x2, y2], fill='white')
This gives me a rectangle with straight white edges.
But can I change the roughness of the edges of this rectangle?
I am trying to make the rectangle look more like a practical (real-world) image.
If I cannot achieve this with ImageDraw, what module can help me do that?
Thanks a lot!
It's difficult to know precisely what effect you are after.
I can tell you that Pillow has filters - https://pillow.readthedocs.io/en/5.2.x/reference/ImageFilter.html
from PIL import ImageFilter
im = im.filter(ImageFilter.BLUR)
Changing the angle of your rectangle - by selecting co-ordinates that are not a flat rectangle - would create pixelated edges.
Otherwise, I might suggest randomly changing individual pixels along the edges using https://pillow.readthedocs.io/en/5.2.x/reference/PixelAccess.html
from PIL import Image
px = im.load()
px[0, 0] = (0, 0, 0)
Although that could be slow. It depends on the size of your image and the need for speed.
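For example, here is a rough sketch of that idea (the 200x200 canvas, the rectangle coordinates, and the jitter amount are all made up for illustration): instead of touching pixels one by one, it draws the rectangle as a polygon whose outline points are displaced by a pixel or two.
from PIL import Image, ImageDraw
import random

im = Image.new("RGB", (200, 200), "black")
draw = ImageDraw.Draw(im)

x1, y1, x2, y2 = 50, 50, 150, 150
points = []
# Walk around the four edges, nudging each point perpendicular to its edge
for x in range(x1, x2):
    points.append((x, y1 + random.randint(-2, 2)))   # top edge
for y in range(y1, y2):
    points.append((x2 + random.randint(-2, 2), y))   # right edge
for x in range(x2, x1, -1):
    points.append((x, y2 + random.randint(-2, 2)))   # bottom edge
for y in range(y2, y1, -1):
    points.append((x1 + random.randint(-2, 2), y))   # left edge

draw.polygon(points, fill="white")
im.save("rough_rect.png")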
