I am trying to use seamless cloning to blend two images together, but I notice that after calling the seamlessClone function, the area in the mask that I want to transfer is shifted upward. So my question is: is this normal behaviour of the seamlessClone function, or is it a bug in my implementation?
Here is the source photo
Here is the destination photo
Here is the result photo
I encountered a similar situation. Moreover, as #JoshuaCWebDeveloper noted, this shift disappeared when an all-ones mask was used. Nevertheless, I found a fix for this. What I did is this: I cropped the valid mask (the non-zero sub-section) out using cv2.boundingRect, so my source image and mask image are reduced to a smaller size, while the center is now calculated from the boundingRect outputs (since the reference point is marked on the destination image). This way the error was solved and the shift eliminated.
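A minimal sketch of that approach (the file names are illustrative, and it assumes source, destination, and mask share the same dimensions so the patch stays at its original location):

import cv2

# Illustrative file names
src = cv2.imread('source.jpg')
dst = cv2.imread('destination.jpg')
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# Crop the source and mask down to the non-zero region of the mask
x, y, w, h = cv2.boundingRect(mask)
src_cropped = src[y:y + h, x:x + w]
mask_cropped = mask[y:y + h, x:x + w]

# The reference point is in destination coordinates; taking it from the
# bounding rect keeps the patch where it was in the source
center = (x + w // 2, y + h // 2)

result = cv2.seamlessClone(src_cropped, dst, mask_cropped, center, cv2.NORMAL_CLONE)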
(Based on the answer posted by Fractalic Forieu) You can achieve the same result without reducing the image size.
Instead of using the image center:
center = (width // 2, height // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, center, cv2.NORMAL_CLONE)
use the center of the bounding rect:
monoMaskImage = cv2.split(maskImage)[0]  # reduce the mask to a single channel
br = cv2.boundingRect(monoMaskImage)  # bounding rect (x, y, width, height)
centerOfBR = (br[0] + br[2] // 2, br[1] + br[3] // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, centerOfBR, cv2.NORMAL_CLONE)
Here I have an image:
Then I generated a threshold image using the code below.
import cv2
import numpy as np

img = cv2.imread('Image_Original.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Green hue range in HSV
lower_gr = np.array([40, 0, 0])
upper_gr = np.array([90, 255, 255])

mask = cv2.inRange(hsv, lower_gr, upper_gr)
mask = ~mask  # keep everything that is not green
res = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow('Masked', mask)
cv2.imshow('Result', res)
cv2.waitKey(0)
This produces the following images (masked):
and (result):
Now what I want is to remove the black pixels (from the original image only) by making them zero, and to extract only patches of size 32x32 px or more.
Use cv2.findContours() to find the boundaries of the white patches in your mask image.
Each boundary is returned as a list of 2D points.
Use cv2.boundingRect() to get the width/height of each patch and filter accordingly.
You could also use cv2.minAreaRect() or cv2.contourArea() to filter based on the actual area of the patch.
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Once you have determined which patches should be discarded, overwrite them with black on the colour image using cv2.fillPoly().
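Putting those steps together, here is a minimal sketch, assuming img is the original BGR image and mask is the binary mask from above (the return signature of cv2.findContours shown here is the OpenCV 4 one; older versions also return the modified image):

import cv2

# Find the outer boundaries of the white patches in the mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w < 32 or h < 32:
        # Patch smaller than 32x32: paint it black on the colour image
        cv2.fillPoly(img, [cnt], (0, 0, 0))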
I must be missing something... why isn't this working? Instead of clipping to the circle, the entire 800x800 backdrop image is displayed...
var mask;
var img;

function preload() {
    game.load.image('back', 'backdrop.jpg');
}

function create() {
    img = game.add.image(game.world.centerX, game.world.centerY, 'back').anchor.setTo(0.5);
    mask = game.add.graphics(0, 0);
    mask.beginFill(0xffffff);
    mask.drawCircle(game.world.centerX, game.world.centerY, 600);
    img.mask = mask;
}
jsfiddle here
Disclaimer: I have no formal experience in phaser.io
I was able to fix this in your fiddle by changing
img = game.add.image(game.world.centerX, game.world.centerY,'back').anchor.setTo(0.5);
to
img = game.add.image(0, 0, 'back');
JSFiddle Fork
I would assume that placing the image at the centerX, centerY position results in the mask being offset from the image. Hopefully someone with more experience than I have can explain the specifics here, but I will research further and update my answer as I figure out the why to go along with the how.
Update
Okay, so I've done some digging through the documentation. First, you want to use img = game.add.image(0, 0, 'back'); because the x and y parameters in this case dictate the upper-left origin of the image, not its center. By using game.world.centerX and game.world.centerY you are trying to throw the background image to the center of the canvas, even though the canvas is the same size as the image.
Using .anchor.setTo(0.5), from what I can gather, attempts to set the anchor point from which the image originates to the center position. However, when you remove this anchor, the mask suddenly works, even though the result is not shown correctly (because the position of the background image is incorrect).
Theory: by anchoring the image, I believe it is no longer possible to apply a mask to it. From all the experimenting I've done, having an anchor set on the background image prevents it from being masked, so the mask is simply added as a child of img and placed at its center, which is why you are seeing the white circle instead of the circle properly masking the image.
It appears I was mistaken about the fluency of the API in trying to chain that last function call (presumably .anchor.setTo(0.5) returns the anchor point rather than the image, so img never referenced the image at all)... if I break it up:
img = game.add.image(game.world.centerX, game.world.centerY, 'back');
img.anchor.setTo(0.5);
It now works!
Fiddle Here
I am using Raphael to draw some paths. Each path has an associated rectangle (the "container") with the size and position of the path's bounding box. I am using the container for dragging both shapes.
In the move callback, I update both positions so they move together.
This all works great until I serialize. I am only serializing the path, then creating the container on the fly after deserialization.
Immediately after converting to JSON and back, things look fine. I can print out the current transform of the path and it looks correct. But doing any transform on the path after this results in the path being reset and moved to 0,0.
Here is a fiddle that shows the problem.
If you move the rect, you can see both objects move together.
If you click 'Save/Load', things look fine, and the path prints the same.
If you now drag, the path gets reset to 0,0. Printing shows the transform has been reset to start from 0,0.
I am trying to find out how to make the path move as it did before serialization. Is something getting lost in the process? Or is there an internal state that needs to be updated?
Raphael.JSON serialises data stored in the elements. It does not preserve temporary data stored in the paper object, so something does indeed get lost in the process when calling R.clear(); for example, drag events bound to elements are not preserved.
However, the main issue here is with your drag function: notice how dragging the square a second time applies the transformation from the top left of the paper. I suggest using Raphael.FreeTransform (which you have already included in the Fiddle) to handle this.
I wrote both the Raphael.JSON and Raphael.FreeTransform plugins and have struggled with the same issues. I'm currently working on an application that lets you save and restore the state of the paper (similar to what you're doing) and it works fine. If you need any help, feel free to open an issue on GitHub.
You need to capture the initial transform offsets of your elements when the drag starts and use those as the basis for your drag-move transforms. Consider the following:
var start_x, start_y;

cont.drag(
    // Move callback: apply the drag delta on top of the starting offsets
    function (x, y, e) {
        p.transform('t' + (start_x + x) + ',' + (start_y + y));
        cont.transform('t' + (start_x + x) + ',' + (start_y + y));
    },
    // Start callback: capture where the elements are when the drag begins
    function (x, y) {
        var start_bbox = p.getBBox();
        start_x = start_bbox.x;
        start_y = start_bbox.y;
        console.log("Drag start at %s,%s", start_x, start_y);
    }
);
I've staged this in a fiddle located here.
Unfortunately, there is still an issue with the path: its offset is being incremented by the difference between its bounding box's y value and the y axis (a difference of 12, to be precise) each time a drag is used. I'm not sure where that's coming from exactly.
I'm trying to run the mean shift segmentation using pyramids, as explained in the Learning OpenCV book, on some images. Both source and destination images are 8-bit, three-channel color images of the same width and height, as mentioned in the book.
However, correct output is obtained only on 1600x1200 or 1024x768 images. Other images, of sizes 625x391 and 644x438, cause a runtime error:
"Sizes of input arguments do not match in function cvPyrUp()"
My code is this:
IplImage *filtered = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
cvPyrMeanShiftFiltering(img, filtered, 20, 40, 1);
The program uses the parameters as given in the sample. I've tried decreasing the values, thinking it was an image dimensions problem, but no luck.
After resizing the images to 644x392 and 640x320, the mean shift runs properly. I've read that "pyramid segmentation requires images that are N-times divisible by 2, where N is the number of pyramid layers to be computed", but how is that applicable here?
Any suggestions, please.
Well, you don't have anything wrong, except that when you apply cvPyrMeanShiftFiltering you should do it like this:
//A suggestion to avoid the runtime error
IplImage *filtered = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
cvCopy(img, filtered, NULL);

//Values only you should know
int level = kLevel;
int spatial_radius = kSpatial_Radius;
int color_radius = kColor_Radius;

//Here comes the thing: crop width and height down to multiples of 2^level
filtered->width &= -(1 << level);
filtered->height &= -(1 << level);

//Now you are free to do your thing
cvPyrMeanShiftFiltering(filtered, filtered, spatial_radius, color_radius, level);
The thing is that this kind of pyramidal filter modifies some things according to the level you use. Try this and tell me later if it worked.
Hope I can help.
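For reference, here is a rough Python sketch of the same dimension trick using the modern cv2 API (the file name is illustrative):

import cv2

img = cv2.imread('input.jpg')
level = 1

# Crop height and width down to multiples of 2**level, as above
h, w = img.shape[:2]
img = img[:h & -(1 << level), :w & -(1 << level)]

filtered = cv2.pyrMeanShiftFiltering(img, 20, 40, maxLevel=level)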
It seems like, given the information in stroke_extents() and the translate(x, y) and scale(x, y) functions, I should be able to take any arbitrary cairo path (I'm using pycairo) and "best fit" it. In other words, center it and expand it to fill the available space.
Before drawing the path, I have scaled the canvas such that the origin is the lower left corner, up is y+, right is x+, and the height and width are both 1. Given these conditions, this code seems to correctly scale the path:
# cr is the cairo context; stroke_extents() returns (x1, y1, x2, y2)
extents = cr.stroke_extents()
x_size = abs(extents[0]) + abs(extents[2])
y_size = abs(extents[1]) + abs(extents[3])
cr.scale(1.0 / x_size, 1.0 / y_size)
I cannot for the life of me figure out the translation, though. Is there a simpler approach? How can I "best fit" a cairo path on its canvas?
Please ask for clarification if anything is unclear in this question.
I have found a solution that I like (at least for my purposes): just create a new surface and paint the old surface onto the new one.
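A minimal sketch of that idea, with illustrative sizes and names (none of them from the original post):

import cairo

# Surface the path was originally drawn on (200x100, for illustration)
src = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
# ... draw the path on a cairo.Context(src) here ...

# New surface: paint the old one onto it, scaled and centred
dst = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 400)
cr = cairo.Context(dst)
cr.translate(0, 100)  # centre the 2x-scaled 200x100 content vertically
cr.scale(2.0, 2.0)    # min(400/200, 400/100) = 2 preserves the aspect ratio
cr.set_source_surface(src, 0, 0)
cr.paint()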
Regarding the scale only: I have done a similar thing to fit an image inside a box with a "best-fit" approach. Here is the code:
available_width = 800
available_height = 600
path_width = 500
figure_height = 700

# Formulas: the smaller of the two ratios preserves the aspect ratio
width_ratio = float(available_width) / path_width
height_ratio = float(available_height) / figure_height
scale = min(height_ratio, width_ratio)

# Result
new_path_width = path_width * scale
new_figure_height = figure_height * scale
print(new_path_width, new_figure_height)
The image gets drawn aligned to the origin (top left in my case), so perhaps a similar thing should be done to translate the path.
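For instance, a sketch of that translation step, reusing the variables above (the offset names are mine, and it assumes a pycairo context cr):

# Centre the scaled figure inside the available box
offset_x = (available_width - new_path_width) / 2.0
offset_y = (available_height - new_figure_height) / 2.0

# With the pycairo context, apply the translation before the scale:
# cr.translate(offset_x, offset_y)
# cr.scale(scale, scale)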
Also, this best fit is intended to preserve the aspect ratio. If you want to stretch the figure, use each of the ratios separately instead of the single 'scale' variable.
Hope I have helped.