Getting a specific contour in VTK

I would like to get a specific contour from image data.
My main goal is to remesh a polydata in grid form, so I followed the pipeline below:
1. Convert the polydata to an image using PolyDataToImageData.
2. Pass that image output to vtkImageDataGeometryFilter.
3. Use vtkImplicitPolyDataDistance to compute the distance from the original polydata.
4. Copy the distance values into the scalars of the output from step 2 (a rough code sketch of these steps follows).
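For reference, a rough Python sketch of steps 3 and 4 (it skips the PolyDataToImageData step and simply builds the image grid over the polydata bounds; the sphere source and the 50x50x50 resolution are placeholders):

import vtk

# Placeholder for the original polydata (a sphere stands in here).
source = vtk.vtkSphereSource()
source.Update()
polydata = source.GetOutput()

# Image grid spanning the polydata bounds (arbitrary 50x50x50 resolution).
bounds = polydata.GetBounds()
dims = (50, 50, 50)
image = vtk.vtkImageData()
image.SetDimensions(dims)
image.SetOrigin(bounds[0], bounds[2], bounds[4])
image.SetSpacing((bounds[1] - bounds[0]) / (dims[0] - 1),
                 (bounds[3] - bounds[2]) / (dims[1] - 1),
                 (bounds[5] - bounds[4]) / (dims[2] - 1))

# Step 3: signed distance to the original polydata (negative inside, positive outside).
distance = vtk.vtkImplicitPolyDataDistance()
distance.SetInput(polydata)

# Step 4: copy the distance values into the image's point scalars
# (slow pure-Python loop, but fine for a sketch).
scalars = vtk.vtkFloatArray()
scalars.SetName("distance")
for i in range(image.GetNumberOfPoints()):
    scalars.InsertNextValue(distance.EvaluateFunction(image.GetPoint(i)))
image.GetPointData().SetScalars(scalars)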
The result is below:
I then tried to use vtkContourFilter to get polydata with SetValue(0, 0.0), and as you can see the result is not entirely correct:
The values of the distance array are at https://pastebin.ubuntu.com/p/2mZsgdrcmX/ and they are never 0, so I think I am using SetValue incorrectly, but I am also not sure how to get that specific green contour.
Is there any way to get the contour through those green points?

I am not completely sure I understand your pipeline.
In vtkContourFilter, SetValue takes two parameters. The first one is the id of the contour (the filter can extract several contours at once; see SetNumberOfContours). The second is the isovalue of the contour.
Here you set an isovalue of 0.0, which means you want the points at distance 0 from the original data set. Looking at the first image, it seems these are the red points. If you want a contour at the green points, you may want to specify a higher scalar value.
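For example, a minimal Python sketch (assuming distance_image is the vtkImageData whose point scalars hold the signed distances from the question, and treating the 2.0 offset as an arbitrary example value):

import vtk

def extract_offset_contour(distance_image, offset):
    # Extract the isosurface where the signed-distance scalars equal `offset`.
    # offset = 0.0 reproduces the original surface; a positive offset gives a
    # "larger" contour further out (presumably the green one).
    contour = vtk.vtkContourFilter()
    contour.SetInputData(distance_image)
    contour.SetValue(0, offset)   # contour id 0, isovalue = offset
    contour.Update()
    return contour.GetOutput()

# e.g. surface = extract_offset_contour(image, 2.0)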
PS: If the goal of your pipeline is to get a "larger version" of your shape, you may also have a look at vtkWarpVector (and feed it the normals of your polydata).
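A rough sketch of that alternative (Python bindings; the scale factor 2.0 is an arbitrary example value and polydata stands for your original surface):

import vtk

# Compute point normals, make them the active vectors, then warp along them.
normals = vtk.vtkPolyDataNormals()
normals.SetInputData(polydata)
normals.Update()

mesh = normals.GetOutput()
mesh.GetPointData().SetActiveVectors("Normals")   # vtkWarpVector warps along the active vectors

warp = vtk.vtkWarpVector()
warp.SetInputData(mesh)
warp.SetScaleFactor(2.0)   # how far to push each point along its normal
warp.Update()
enlarged = warp.GetOutput()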

Related

How do I obtain the left, right, upper and lower shifts in coordinates between a cropped image and its original in Python?

I have the following original image:
and the cropped image:
I would like to obtain the left (a), right (c), upper (b) and lower (d) shifts from the original image that produce the crop:
As of now, I can only think of matching the pixel array values (row- and column-wise) and then subtracting the overlapping pixel arrays' coordinates from the original image's coordinates to get the shifts. However, this approach seems computationally expensive, and a search on 4 sides would have to be undertaken. Also, if it helps, I do not have the transformations that led to the cropped image, and I'm assuming that there are no pixel value changes between the original and cropped image in the regions of overlap.
Is there a more efficient approach for this? I'm not sure if there are existing built-in functions in OpenCV or other imaging libraries that can do this, so some insights on this would be deeply appreciated.
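Not from the thread, but one possible shortcut: since the crop is assumed to be pixel-identical to a region of the original, OpenCV's matchTemplate can locate it directly, and the four shifts follow from the match position (the file names here are made up):

import cv2

original = cv2.imread("original.png")
crop = cv2.imread("crop.png")

# Find where the crop best matches the original (TM_SQDIFF: best match = minimum).
result = cv2.matchTemplate(original, crop, cv2.TM_SQDIFF)
_, _, min_loc, _ = cv2.minMaxLoc(result)

left, upper = min_loc                      # shifts (a) and (b)
crop_h, crop_w = crop.shape[:2]
orig_h, orig_w = original.shape[:2]
right = orig_w - (left + crop_w)           # shift (c)
lower = orig_h - (upper + crop_h)          # shift (d)
print(left, upper, right, lower)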

How to calculate what percentage of a pixel is within the bounds of a shape

I have a 2d grid where pixel centers are at the intersection of two half-grid lines, as shown below.
I also have a shape that is drawn on this grid. In my case the shape is a glyph, and is described by segments. Each segment has a start point, end point and a number of off-curve points. These segments can be quadratic curves or lines. What's important is that I can know the points and functions that make up the outline of the shape.
The rule for deciding which pixels should be turned on is simple: if the center of the pixel falls within the shape outline, turn that pixel on. The following image shows an example of applying this rule.
Now the problem I'm facing has to do with anti-aliasing. What I'd like to do is calculate what percentage of the area of a given pixel falls within the outline. As an example, in the image above, I've drawn a red square around a pixel that would be about 15% inside the shape.
The purpose of this would be so that I can then turn that pixel on only by 15% and thus get some cleaner edges for the final raster image.
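In code, the current all-or-nothing rule looks roughly like this sketch (the outline points are made up, and the glyph's curves are assumed to have been flattened into short line segments):

import numpy as np
from matplotlib.path import Path

outline = Path([(1.2, 1.0), (8.5, 2.3), (7.0, 8.8), (2.0, 7.5)])   # made-up flattened outline

# Pixel centres sit at half-integer coordinates on the grid.
width, height = 10, 10
xs, ys = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
centres = np.column_stack([xs.ravel(), ys.ravel()])

on = outline.contains_points(centres).reshape(height, width)   # True where the pixel is turned on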
While I was able to find algorithms for determining if a given point falls within a polygon (ray casting), I wasn't able to find anything about this type of problem.
Can someone point me toward some algorithms to achieve this? Also let me know if I'm going about this problem in the wrong way!
This sounds like an XY problem.
You are asking for a way to calculate the percentage of pixel coverage, but based on your question, it sounds like what you actually want to do is anti-alias a polygon.
If you are working only with single-color 2D shapes (i.e. red, blue, magenta... squares, lines, curves...), a very simple solution is to create your image and blur the result afterwards.
This will automatically give you a smooth outline and is simple to implement in many languages.
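A minimal sketch of that idea (OpenCV in Python; the shape coordinates and kernel size are arbitrary example values):

import cv2
import numpy as np

# Rasterize the shape as a hard black/white mask, then blur it so edge pixels
# end up with intermediate (anti-aliased) values.
mask = np.zeros((64, 64), dtype=np.uint8)
polygon = np.array([[10, 12], [55, 20], [40, 50], [15, 45]], dtype=np.int32)
cv2.fillPoly(mask, [polygon], 255)            # hard-edged rasterization

smooth = cv2.GaussianBlur(mask, (3, 3), 0)    # edge pixels now take values between 0 and 255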

How does vtkMarchingCubes::SetValue() work?

I am using VTK to build meshes from CT images. I find myself stuck trying to understand the cryptic vtkMarchingCubes::SetValue(). As per the documentation, the first parameter is said to be the contour number and the second one the "contour value"; my question here is, what exactly is a "contour value"? Is that an intensity value?
If it is indeed an intensity value, does VTK just look for that exact value or does it look around it? Is there any way I can specify a range rather than a single number? My last question is: how do I extract multiple contours from the image using vtkMarchingCubes in one pass?
Yes, it is the image intensity, i.e. the level for the level set.
Image intensities are interpolated, so if you have a voxel with intensity 0 and a neighboring voxel with intensity 1, and set the value to 0.5, the generated surface will be half-way in-between. If you set the value to 0.9, the surface will be closer to the 1 voxel.
To extract multiple contours, you'd specify multiple values, i.e.:
mc->SetValue(0, 60);   // first contour (id 0) at intensity 60
mc->SetValue(1, 600);  // second contour (id 1) at intensity 600
I am not sure what you would want to achieve by specifying a range?

How to choose a range for filtering points by RGB color?

I have an image and I am picking colors by RGB (data sampling). I select N points from a specific region in the image which has the "same" color. By "same" I mean, that part of the image belongs to an object, (let's say a yellow object). Each picked point in the RGB case has three values [R,G,B]. For example: [120,150,225]. And the maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values but from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field such that, when I apply a color filter on the image, the pixels related to that specific object remain (are considered inliers). Is it correct to find the maximum and minimum from the sampled points and consider them as the filter range? For example, if the values of the R field range from 120 to 170, can [120, 170] be used as the range that should be kept?
In my opinion, this idea does not hold, because when choosing the max and min of a set of sampled points, some points will fall outside that range, and there will also be points on the object that don't fit in this range.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
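A sketch of that suggestion (OpenCV in Python, using its HLS space; the image name, the sample coordinates and the margins are all made-up example values):

import cv2
import numpy as np

image = cv2.imread("image.png")
hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)

# Hypothetical picked points (row, col) inside the object region.
sample_rows = np.array([120, 121, 130, 140])
sample_cols = np.array([200, 205, 210, 215])
samples = hls[sample_rows, sample_cols].astype(int)   # N x 3 array of (H, L, S) values

# Widen the min/max range a bit so near-miss pixels are still kept
# (hue wrap-around near red is ignored in this sketch).
margin = np.array([5, 30, 30])
lower = np.clip(samples.min(axis=0) - margin, 0, 255).astype(np.uint8)
upper = np.clip(samples.max(axis=0) + margin, 0, 255).astype(np.uint8)

mask = cv2.inRange(hls, lower, upper)                 # 255 where the pixel falls inside the range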
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not super reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle this separately, but it seems that you are more interested in the principles of a "filtering operation" that will segment the foreground object from the background. Hopefully.
In short, histogram back-projection works by first creating a histogram over R, G, B within the object area and then back-projecting it into the image as follows: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and write this probability into the image. In this way each pixel gets the probability that it belongs to the object. You can improve it by dividing by the probability of the background if you want to model the background too.
The result will be messy but will somewhat resemble the object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, GrabCut, morphological operations, blur, etc.
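A sketch of that back-projection with OpenCV's calcBackProject, working directly on the R, G, B channels as described (the image name and object rectangle are made up):

import cv2
import numpy as np

image = cv2.imread("image.png")
roi = image[100:200, 150:250]                         # assumed object region

# 3-D colour histogram of the object region (8 bins per channel), scaled to 0..255.
hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8],
                    [0, 256, 0, 256, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Each pixel gets a value proportional to how probable its colour is under the
# object histogram.
prob = cv2.calcBackProject([image], [0, 1, 2], hist,
                           [0, 256, 0, 256, 0, 256], 1)

# The raw result is noisy; clean it up, e.g. blur then threshold.
prob = cv2.GaussianBlur(prob, (5, 5), 0)
_, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)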

Finding a point clicked in a grid

Given this grid ( http://i.stack.imgur.com/Nz39I.jpg is a trapezium/trapezoid, not a square), how do you find the point clicked by the user? I.e. when the user clicks a point in the grid, it should return coordinates like A1 or D5.
I am trying to write pseudo code for this and I am stuck. Can anyone help me? Thanks!
EDIT: I am still stuck... Does anyone know of any way to find the height of the grid?
If it is a true perspective projection, you can run the click point through the inverse projection to find its X,Z coordinates in the 3D world. That grid has regular spacing, so you can use simple math to get A1, D5, etc.
If it's just something you drew, then you'll have to compare the Y coordinates to the positions of the horizontal lines to figure out which row. Then you'll need to check its position (left/right) relative to the angled lines to get the column - for that, you'll need either the coordinates of the end points or equations for the lines.
Yet another option is to store an identical image where each "square" is flood-filled with a different color. You then check the color of the pixel where the user clicked but in this alternate image. This method assumes that it's a fixed image and is the least flexible.
If you have the coordinates of the end points of the grid lines, try using the inside-outside test against each grid line to find the position.
Since this grid is just a 3D view of a 2D grid plane, there is a projective transform that transforms the coordinates on the grid into coordinates on the 2D plane. To find this transform, it is sufficient to mark 4 different points on the plane (say, the edges), assign them coordinates on the 2D plane and solve the resulting linear equation system.
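A sketch of that approach with OpenCV in Python (the pixel coordinates of the grid corners are made up; in practice you would read them off the image, and a 5x5 grid is assumed):

import cv2
import numpy as np

# Four marked points: outer corners of the grid in image pixels and their
# coordinates in flat grid units.
corners_img = np.float32([[120, 400], [520, 400], [460, 120], [180, 120]])
corners_grid = np.float32([[0, 0], [5, 0], [5, 5], [0, 5]])

H = cv2.getPerspectiveTransform(corners_img, corners_grid)

def grid_cell(click_x, click_y):
    # Map the clicked pixel onto the flat grid, then label the cell (e.g. 'A1', 'D5').
    pt = np.float32([[[click_x, click_y]]])
    gx, gy = cv2.perspectiveTransform(pt, H)[0, 0]
    col = chr(ord('A') + int(gx))   # column letter
    row = int(gy) + 1               # 1-based row number
    return "%s%d" % (col, row)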
