Halcon - outer 5 pixels of an image as a region

I have an image with a random shape. What I need to do is get the outermost 5 pixels of the image as a region. How can this be done in HALCON?
What I have done so far is this:
* select all gray values (0..255) -> the whole image as one region
threshold (ImageL, FullRegion, 0, 255)
* shrink the region by 5 pixels on every side
erosion_circle (FullRegion, FullRegionErosion, 5)
* everything outside the shrunken region
complement (FullRegionErosion, Region2)
* keep only the 5-pixel border of the original region
intersection (FullRegion, Region2, Border)
It works, but I don't like it; it seems like a hack to me.

The image domain and get_image_size can be used for a less "hacky" solution:
get_image_size (Image, Width, Height)
* the domain (by default the full image) as a region
get_domain (Image, Domain)
* inner rectangle, 5 pixels in from each side (row1, column1, row2, column2)
gen_rectangle1 (Rectangle, 5, 5, Height-6, Width-6)
* the difference is the 5-pixel outer border
difference (Domain, Rectangle, RegionDifference)

Related

Node.js, using the sharp package to resize images: how do I resize based on the long side of the image?

In Lightroom, I can resize images to a pixel value for the long side. That way, if an image is five times as tall as it is wide and I ask for a 2,000px (long side) image, the result will be 2,000px tall and 400px wide, not 10,000px tall and 2,000px wide.
In other words, the aspect ratio is maintained and 2,000px acts as the limit for the longer side.
Can sharp do this and maintain the aspect ratio when I don't know in advance whether the height or the width will be greater?
Otherwise I will have to pull the EXIF data from the images so that I know in advance which side is greater and can resize based on that side (width or height). I am trying to avoid this step.
Below is my current code; it creates 300x300 images, ignoring the aspect ratio. However, the desired result is:
300x200 images, when given images whose aspect ratio is 3:2
200x300 images, when given images whose aspect ratio is 2:3.
etc.
exports.resizeResolutions = {
mapThumbnail: {x: 300, y: 300},
}
exports.fitMethods = {
inside: "inside",
}
proc = sharp(path.join(pathToLocalFSGalleries, folder, file), { fit: fitMethods.inside })
    .resize(resizeResolutions.mapThumbnail.x, resizeResolutions.mapThumbnail.y);
await proc.toFile(path.join(pathToThumbnails, folder, file));

Filter out everything of a certain color using OpenCV

I have pictures of buildings I want to classify, and I want to get rid of the sky as I think it is messing with my classifier. I know that OpenCV has a function called inRange that takes in an image and blacks out everything not within the range of the two color bounds you provide. I was wondering if there is a function that does literally the opposite, or another way I can accomplish what I want.
Thank you!
cv2.inRange creates a mask, which basically means it creates an image of the same size where the pixel values that are in the range are 255 and the values outside the range are 0.
https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#void%20inRange(InputArray%20src,%20InputArray%20lowerb,%20InputArray%20upperb,%20OutputArray%20dst)
If you want the opposite of that, you can take the output of cv2.inRange and apply a bitwise_not:
https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#bitwise-not
If you then want to use that to black out those pixels in your original image, you can do a bitwise_and:
https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#bitwise-and
So I would do something like:
mask = cv2.inRange(img, (100, 0, 0), (255, 100, 100)) # modify these BGR thresholds for your sky color
inv_mask = cv2.bitwise_not(mask) # pixels outside the range become 255
no_sky = cv2.bitwise_and(img, img, mask=inv_mask) # keep only the pixels outside the range
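Putting it together, a minimal self-contained sketch (the file names and BGR thresholds below are placeholders to adjust for your actual sky color):
import cv2

img = cv2.imread("building.jpg")   # placeholder input path

# mask of everything inside the (placeholder) sky color range, in BGR order
sky_mask = cv2.inRange(img, (100, 0, 0), (255, 100, 100))

# invert the mask and keep only the non-sky pixels; sky pixels become black
not_sky = cv2.bitwise_not(sky_mask)
no_sky = cv2.bitwise_and(img, img, mask=not_sky)

cv2.imwrite("building_no_sky.jpg", no_sky)   # placeholder output path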

OpenCV Seamless Cloning shifts position after finishing the process

I am trying to use seamless cloning to blend two images together, but I notice that after using the seamless clone function, the area of the mask that I want to transfer is shifted upward. So my question is: is this normal behaviour of the seamless clone function, or is it a bug in my implementation?
Here is the source photo
Here is the destination photo
Here is the result photo
I encountered a similar situation. Moreover, as #JoshuaCWebDeveloper noted, this shift disappeared when an all-ones mask was used. Nevertheless, I found a fix for this. What I did is this: I cropped the valid part of the mask (the non-zero sub-section) out using cv2.boundingRect, so my source image and mask image are reduced to a smaller size, while the center is now calculated from the boundingRect outputs (since the reference point is marked on the destination image). This way, the shift was resolved.
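A minimal sketch of that idea (the file names, variable names, and the NORMAL_CLONE flag are my own illustration, not from the original post):
import cv2

src_image = cv2.imread("source.jpg")        # placeholder paths
dst_image = cv2.imread("destination.jpg")
mask_image = cv2.imread("mask.jpg")

# bounding rect of the non-zero part of the mask
mono_mask = cv2.split(mask_image)[0]
x, y, w, h = cv2.boundingRect(mono_mask)

# crop the source and the mask down to that rectangle
src_crop = src_image[y:y + h, x:x + w]
mask_crop = mask_image[y:y + h, x:x + w]

# place the cropped patch back at its original location on the destination
center = (x + w // 2, y + h // 2)
result = cv2.seamlessClone(src_crop, dst_image, mask_crop, center, cv2.NORMAL_CLONE)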
(Based on the answer posted by Fractalic Forieu) You can achieve the same result without reducing the image size.
Instead of using the image center:
center = (width // 2, height // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, center, cv2.NORMAL_CLONE)
use the center of the bounding rect:
monoMaskImage = cv2.split(maskImage)[0] # reducing the mask to a monochrome
br = cv2.boundingRect(monoMaskImage) # bounding rect (x,y,width,height)
centerOfBR = (br[0] + br[2] // 2, br[1] + br[3] // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, centerOfBR, cv2.NORMAL_CLONE)

Draw string in canvas

I'm having a hard time finding out how to draw/print a String in a Canvas rotated 90°, NOT with vertical letters. After several approaches without success, I am trying to follow one that involves painting the Graphics object to an Image object. As the API is limited, it has been a difficult task.
So basically what I'm asking is whether you know how to draw a String rotated 90° in a Canvas, or, if you don't, how I can save the graphics object to an Image so that I can follow my "hint".
Thank you very much!
Guilherme
Finally, after one last search on the web, here it is:
//The text that will be displayed
String s="java";
//Create the blank image, specifying its size
Image img=Image.createImage(50,50);
//Create an instance of the image's Graphics class and draw the string to it
Graphics gr=img.getGraphics();
gr.drawString(s, 0, 0, Graphics.TOP|Graphics.LEFT);
//Display the image on the Canvas's Graphics object (g, as passed to paint()),
//specifying the rotation value. For example, 90 degrees
g.drawRegion(img, 0, 0, 50, 50, Sprite.TRANS_ROT90, 0, 0, Graphics.TOP|Graphics.LEFT);
from: http://wiki.forum.nokia.com/index.php/How_to_display_rotated_text_in_Java_ME
Thank you!

How can I "best fit" an arbitrary cairo (pycairo) path?

It seems like, given the information from stroke_extents() and the translate(x, y) and scale(x, y) functions, I should be able to take any arbitrary cairo (I'm using pycairo) path and "best fit" it. In other words, center it and expand it to fill the available space.
Before drawing the path, I have scaled the canvas such that the origin is the lower left corner, up is y+, right is x+, and the height and width are both 1. Given these conditions, this code seems to correctly scale the path:
# cr is the cairo context
extents = cr.stroke_extents()
x_size = abs(extents[0]) + abs(extents[2])
y_size = abs(extents[1]) + abs(extents[3])
cr.scale(1.0 / x_size, 1.0 / y_size)
I cannot for the life of me figure out the translating though. Is there a simpler approach? How can I "best fit" a cairo path on its canvas?
Please ask for clarification if anything is unclear in this question.
I have found a solution that I like (at least for my purposes): just create a new surface and paint the old surface onto the new one.
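In pycairo, that idea looks roughly like the following sketch (the surface sizes, scale, and offsets are placeholder values, not from the original post):
import cairo

# placeholder "old" drawing surface and a precomputed best-fit transform
old_surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 100)
fit_scale, offset_x, offset_y = 2.0, 0.0, 50.0

# paint the old surface onto a fresh one, scaled and centered
new_surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 300)
ctx = cairo.Context(new_surface)
ctx.translate(offset_x, offset_y)            # move into the centered position
ctx.scale(fit_scale, fit_scale)              # uniform scale keeps the aspect ratio
ctx.set_source_surface(old_surface, 0, 0)    # the old drawing becomes the source pattern
ctx.paint()
new_surface.write_to_png("best_fit.png")     # placeholder output path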
Regarding the scale only, I have done a similar thing to fit an image inside a box with a "best fit" approach. Here is the code for the scale part:
available_width = 800
available_height = 600
path_width = 500
figure_height = 700
# formulas
width_ratio = float(available_width)/path_width
height_ratio = float(available_height)/figure_height
scale = min(height_ratio, width_ratio)
# result
new_path_width = path_width*scale
new_figure_height = figure_height*scale
print(new_path_width, new_figure_height)
The image gets drawn aligned to the origin (top left in my case), so perhaps a similar thing should be done to translate the path.
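A possible continuation of that point (not from the original answer): assuming the figure is anchored at the origin, centering just splits the leftover space evenly on both sides.
# same numbers as the snippet above
available_width, available_height = 800, 600
new_path_width, new_figure_height = 428.57, 600.0   # the scaled 500x700 figure

# half of the leftover space on each axis gives the centering offsets
offset_x = (available_width - new_path_width) / 2.0      # about 185.7
offset_y = (available_height - new_figure_height) / 2.0  # 0.0
print(offset_x, offset_y)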
Also, this best fit is intended to preserve aspect ratio. If you want to stretch the figure, use each of the ratios instead of the 'scale' variable.
Hope I have helped
