I want to segment patches of size 32x32px or more using Python's OpenCV - python-3.x

Here I have an image:
Then I generated a threshold image using the code below.
import cv2
import numpy as np

img = cv2.imread('Image_Original.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# green hue range (OpenCV uses H in [0, 179])
lower_gr = np.array([40, 0, 0])
upper_gr = np.array([90, 255, 255])
mask = cv2.inRange(hsv, lower_gr, upper_gr)  # white where green
mask = ~mask                                 # white where NOT green
res = cv2.bitwise_and(img, img, mask=~mask)  # ~mask undoes the inversion, keeping the green pixels
cv2.imshow('Masked', mask)
cv2.imshow('Result', res)
cv2.waitKey(0)
That produces the following images, (masked):
and (result):
Now what I want is to remove the black pixels (from the ORIGINAL image only) by making them zero, and to extract only patches of size 32x32px or more.

Use cv2.findContours() to find the boundaries of the white patches in your mask image.
Each boundary is returned as a list of 2D points.
Use cv2.boundingRect() to get the width/height of each patch and filter accordingly.
You could also use cv2.minAreaRect() or cv2.contourArea() to filter based on the actual area of the patch.
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.htm
Once you have determined which patches should be discarded, overwrite them with black on the colour image using cv2.fillPoly().
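A minimal sketch of that approach, assuming mask is the binary mask (white where the patches of interest are) and img is the original colour image; note that cv2.findContours returns (contours, hierarchy) in OpenCV 4.x but (image, contours, hierarchy) in 3.x:
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w < 32 or h < 32:
        # patch smaller than 32x32: overwrite it with black on the original
        cv2.fillPoly(img, [cnt], (0, 0, 0))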

Related

SVG filters feDisplacementMap same inputs, different result

I have a filtered image rendering as expected here.
feImage = imageA
img element = imageB
in = feImage
in2 = SourceGraphic
However, if I swap the images around but maintain the same input values here:
feImage = imageB
img element = imageA
in = SourceGraphic
in2 = feImage
The result is different. This doesn't make sense to me.
Per the SVG specification
The ‘color-interpolation-filters’ property only applies to the ‘in2’ source image and does not apply to the ‘in’ source image. The ‘in’ source image must remain in its current color space.
So in2 has to be converted to the linearRGB colour space (since that's the default value for color-interpolation-filters), while the in input stays in sRGB, because images are sRGB by default. Some UAs may not be doing this properly, because the two configurations certainly look the same in Firefox.

OpenCV Seamless Cloning shift position after finish the process

I am trying to use seamless cloning to blend two images together,
but I notice that after using the seamless clone function, the area in the
mask that I want to transfer is shifted upward. So my question is:
is this normal behaviour of the seamless clone function, or is it a bug
in my implementation?
Here is the source photo
Here is the destination photo
Here is the result photo
I encountered a similar situation. Moreover, as #JoshuaCWebDeveloper noted, this shift disappeared when an all-ones mask was used. Nevertheless, I found a fix. What I did is this: I cropped the valid mask (the non-zero sub-section) out using cv2.boundingRect. So my source image and mask image are reduced to a smaller size, while the center is now calculated from the boundingRect outputs (since the reference point is marked on the destination image). This way, the shift is gone. A sketch of this follows.
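This is not the poster's exact code, just a minimal sketch of the described cropping, reusing the variable names from the answer below; dstX and dstY are hypothetical coordinates for the intended paste position on the destination:
x, y, w, h = cv2.boundingRect(cv2.split(maskImage)[0])
srcCrop = srcImage[y:y+h, x:x+w]    # crop the source to the valid mask region
maskCrop = maskImage[y:y+h, x:x+w]  # crop the mask the same way
center = (dstX + w // 2, dstY + h // 2)  # reference point on the destination
result = cv2.seamlessClone(srcCrop, dstImage, maskCrop, center, cv2.NORMAL_CLONE)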
(Based on the answer posted by Fractalic Forieu) You can achieve the same result without reducing the image size.
Instead of using the image center:
center = (width // 2, height // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, center, cv2.NORMAL_CLONE)
use the center of the bounding rect:
monoMaskImage = cv2.split(maskImage)[0]  # reduce the mask to a single channel
br = cv2.boundingRect(monoMaskImage)     # bounding rect (x, y, width, height)
centerOfBR = (br[0] + br[2] // 2, br[1] + br[3] // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, centerOfBR, cv2.NORMAL_CLONE)

Get pixel data from Graphics object in Codename One

I'm trying to implement a Gaussian blur filter on a Graphics object, but I can't find a function to get pixel information or to transform a Graphics object into a byte array (with RGB data).
That isn't supported since hardware accelerated surfaces might not provide that information.
However, you can do something else: paint the current form onto a mutable image, then get the RGB of the mutable image, which you can use to create a new Image from the RGB data, e.g. something close to this:
Display d = Display.getInstance();
// mutable image the size of the screen
Image img = Image.createImage(d.getDisplayWidth(), d.getDisplayHeight());
Graphics g = img.getGraphics();
// paint the current form onto the mutable image
d.getCurrent().paintBackgrounds(g);
d.getCurrent().paintComponent(g, false);
int[] bufferArray = img.getRGB();
// blur...
Image blurredImage = Image.createImage(bufferArray, img.getWidth(), img.getHeight());

Error in Pyramid mean shift filtering of images of certain dimensions?

I'm trying to run the mean shift segmentation using pyramids, as explained in the Learning OpenCV book, on some images. Both source and destination images are 8-bit, three-channel colour images of the same width and height, as the book requires.
However, correct output is obtained only on 1600x1200 or 1024x768 images. Other images, of sizes 625x391 and 644x438, cause a runtime error:
"Sizes of input arguments do not match in function cvPyrUp()"
My code is this:
IplImage *filtered = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
cvPyrMeanShiftFiltering(img, filtered, 20, 40, 1);
The program uses the parameters as given in the sample. I've tried decreasing the values, thinking it to be an image-dimension problem, but no luck.
After resizing the image dimensions to 644x392 and 640x320, the mean shift runs properly. I've read that "pyramid segmentation requires images that are N-times divisible by 2, where N is the number of pyramid layers to be computed", but how is that applicable here?
Any suggestions, please?
Well, you don't have anything wrong, except that when you apply cvPyrMeanShiftFiltering
you should do it like this:
//A suggestion to avoid the runtime error
IplImage *filtered = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
cvCopy(img, filtered, NULL);
//Values only you should know
int level = kLevel;
int spatial_radius = kSpatial_Radius;
int color_radius = kColor_Radius;
//Here comes the thing: round both dimensions down to a multiple of 2^level
filtered->width &= -(1 << level);
filtered->height &= -(1 << level);
//Now you are free to do your thing
cvPyrMeanShiftFiltering(filtered, filtered, spatial_radius, color_radius, level);
The thing is that this kind of pyramid filter needs dimensions divisible by 2^level; the &= -(1<<level) trick clears the low bits of the width and height to enforce that. Try this and tell me later if it worked.
Hope I can help.
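For the modern Python API, a rough equivalent of the same fix might look like this (cropping the array instead of touching the header; file name and parameter values are assumptions):
import cv2
img = cv2.imread('input.jpg')  # hypothetical file name
level = 1
h, w = img.shape[:2]
img = img[:h & -(1 << level), :w & -(1 << level)]  # crop to a multiple of 2^level
filtered = cv2.pyrMeanShiftFiltering(img, sp=20, sr=40, maxLevel=level)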

How can I "best fit" an arbitrary cairo (pycairo) path?

It seems like given the information in stroke_extents() and the translate(x, y) and scale(x, y) functions, I should be able to take any arbitrary cairo (I'm using pycairo) path and "best fit" it. In other words, center it and expand it to fill the available space.
Before drawing the path, I have scaled the canvas such that the origin is the lower left corner, up is y+, right is x+, and the height and width are both 1. Given these conditions, this code seems to correctly scale the path:
# cr is the canvas; stroke_extents() returns (x1, y1, x2, y2)
extents = cr.stroke_extents()
x_size = abs(extents[0]) + abs(extents[2])  # equals x2 - x1 only if the path spans the origin
y_size = abs(extents[1]) + abs(extents[3])
cr.scale(1.0 / x_size, 1.0 / y_size)
I cannot for the life of me figure out the translating though. Is there a simpler approach? How can I "best fit" a cairo path on its canvas?
Please ask for clarification if anything is unclear in this question.
I have found a solution that I like (at least for my purposes). Just create a new surface and paint the old surface onto the new one.
I have done a similar thing to adjust an image inside a box with a "best-fit" approach. Regarding the scale, here is the code:
available_width = 800
available_height = 600
path_width = 500
path_height = 700
# formulas
width_ratio = float(available_width) / path_width
height_ratio = float(available_height) / path_height
scale = min(height_ratio, width_ratio)  # preserve aspect ratio
# result
new_path_width = path_width * scale
new_path_height = path_height * scale
print(new_path_width, new_path_height)
The image gets drawn aligned to the origin (top left in my case), so a similar thing should probably be done to translate the path; see the sketch after this answer.
Also, this best fit is intended to preserve the aspect ratio. If you want to stretch the figure, use each of the ratios separately instead of the single scale variable.
Hope I have helped.
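A hedged sketch of the missing translation step, under the question's setup (a 1x1 canvas with the origin at a corner; cr is an assumed pycairo context with the path already built):
x1, y1, x2, y2 = cr.stroke_extents()
path_w, path_h = x2 - x1, y2 - y1
s = min(1.0 / path_w, 1.0 / path_h)  # best-fit scale for the unit canvas
cr.scale(s, s)
# after scaling, translate so the path's extents are centered in the canvas;
# the offsets are expressed in the scaled coordinate system
cr.translate((1.0 / s - path_w) / 2 - x1, (1.0 / s - path_h) / 2 - y1)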
