I have an image of size 75x75 pixels and I want to select a Region of Interest (ROI) in it.
At this small size, the image does not allow an accurate selection of the interesting pixels using OpenCV.
I already have an ROI-selection function that uses the OpenCV library.
Is there a way to magnify each pixel so that OpenCV displays each image pixel using 4 pixels of the screen? Plain resizing is not an option, since I need the exact pixel positions of the ROI.
I have an image that contains a rectangular object, and I want to find the 4 corners of that rectangle so I can calculate its angle of inclination and rotate the image by that angle. Is there a way to identify the 4 corners of the rectangular object so that I can warp the image using the calculated angle?
I have tried some image-processing steps: converting to grayscale, reducing noise with a Gaussian filter, then detecting edges with an edge-detection filter, followed by thresholding and finding the contours.
The problem is that the contours found are not consistent, and the approach does not perform well across different images from my dataset. The background also varies from image to image.
Try cv.findContours() on the binarized image, with white object on black background. Then run either cv.boundingRect() or cv.minAreaRect() on the contour.
See Tutorial here: https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html
I have an image dataset, and before feeding it to a deep learning algorithm I need to crop the images to the same size. The images have black margins of varying width, as the image below demonstrates.
Any suggestions for a way to crop images with different margin sizes?
Since your border color is black (nearly perfect black) and will be the same in all the images, I would suggest applying a binary threshold that makes everything white (255) except the black region. Some other image regions may be affected too, but that's not a problem.
Now find contours in the image; the second-largest contour will be your region. Calculate the rectangular bounding box for this contour and crop the same region from the original image.
First, do a thresholding with a low-intensity threshold value (if your background is definitely completely black, you could even threshold at an intensity of 1) to determine all non-border components.
Next, use Connected-component labeling to determine all isolated foreground components. The central scan-image you are interested in should then always result in the biggest component. Crop out this biggest component to remove the border together with all possible non-black artifacts (labels, letters etc.). You should be left with only the borderless scan.
You can find all the algorithms needed in any basic image processing library. I'd personally recommend looking into OpenCV, which also includes Python bindings.
One way to do this could be as follows:
Flood-fill the image with red starting at the top-left corner, and allowing around 5% divergence from the black pixel there.
Now make everything that is not red into white - because the next step after this looks for white pixels.
Now use findContours() (which looks for white objects) and choose the largest white contour as your image and crop to that.
You could consider making things more robust by considering some of the following ideas:
You could normalise a copy of the image to the full range of black to white first in case you get any with near-black borders.
You could check that more than one, or all corner pixels are actually black in case you get images without a border.
You could also flag up issues if your cropped image appears to be less than, say, 70% of the total image area.
You could consider a morphological opening with a 9x9 square structuring element as the penultimate step to tidy things up before findContours().
Here is the solution code for this question:

import cv2
import numpy as np
import os

path = "data/benign/"
img_resized_dir = "data/pre-processed/benign/"
dirs = os.listdir(path)

def thyroid_scale():
    for item in dirs:
        if os.path.isfile(path + item):
            img = cv2.imread(path + item)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
            # OpenCV 3.x returns (image, contours, hierarchy), 4.x returns
            # (contours, hierarchy); taking the last two works for both
            contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
            # keep the largest contour by area
            areas = [cv2.contourArea(c) for c in contours]
            cnt = contours[np.argmax(areas)]
            x, y, w, h = cv2.boundingRect(cnt)
            # crop with small fixed offsets to trim annotations at the edges
            crop_img = img[y + 35:y + h - 5, x + 25:x + w - 10]
            resize_img = cv2.resize(crop_img, (300, 250), interpolation=cv2.INTER_CUBIC)
            cv2.imwrite(img_resized_dir + item, resize_img)

thyroid_scale()
I have gone through canvas and SVG in HTML5. Regarding the difference, it is said that canvas is pixel-based and SVG is vector-based, but I have not understood what is meant by this.
Thanks in advance
There are two ways to store an image on a computer:
Pixel-based (raster): the image is stored as a grid of pixels, with a colour recorded in each cell. Such images have a fixed size (one stored pixel per grid cell). If you shrink them, an algorithm has to merge pixels to render the smaller image; if you display them larger than their native size, you see individual pixels or the image becomes blurred. This is how canvas works.
Vector-based: this kind of image has no inherent size. The file stores geometric primitives (directions and scales), and when you display it, you specify a size and the computer renders the image at that size. If you zoom in on, say, a line, you never see pixels: each time you zoom, the image is re-rendered, and a line stays a line. This is how SVG works.
How can I draw bounding rectangles for more than one contour in a binary image and display their coordinates in the original image? I am using OpenCV on the C++ platform.
In a nutshell, use findContours to get your contours, then use boundingRect to generate a bounding box for each contour, then draw each bounding box to your original image using the rectangle function.
If you're looking for an example, the OpenCV documentation already has it here
I have drawn on a Canvas and I want to know how to get the colour of a pixel of that Canvas.
Create a mutable Image the same size as your Canvas. Then, any operations you perform on your Canvas's Graphics object, perform the same ones on your Image's Graphics object.
Finally, get the pixel data from the Image using getRGB(); it should be the same as the Canvas.
Unfortunately, you can't. The Graphics class, which is used to draw to a Canvas, is for painting only; it can't give you any information about the pixels.
If you are targeting a platform that supports the Nokia UI API, you can use DirectGraphics#getPixels to read pixel data. On mobile platforms with graphics-accelerator hardware, reading pixels tends to be slow, so you should use this very sparingly.