How to convert vtkPolyData to itkImage?

I have a set of VTK polygonal data files for segmented vessels.
How can I voxelize (convert) them to an ITK image with a specific size, origin, and spacing?

This is not a trivial problem, and it is not possible given only your raw contours. If you can convert your contours to a closed surface, then you can use vtkVoxelModeller to create a vtkImageData with your desired size, origin, and spacing. From there you can create an ITK image using ITK's VTK glue layer (itk::VTKImageToImageFilter). A minimal sketch of the voxelization step is below.
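The following Python sketch covers the voxelization step, assuming a closed surface stored in a legacy .vtk file; the file name and sample dimensions are placeholders:

    import vtk

    # Load the closed surface (hypothetical file name; any reader that
    # produces vtkPolyData works here).
    reader = vtk.vtkPolyDataReader()
    reader.SetFileName("vessel_surface.vtk")
    reader.Update()
    bounds = reader.GetOutput().GetBounds()

    # Rasterize the closed surface into a binary vtkImageData.
    voxelizer = vtk.vtkVoxelModeller()
    voxelizer.SetInputConnection(reader.GetOutputPort())
    voxelizer.SetSampleDimensions(128, 128, 128)  # desired image size
    voxelizer.SetModelBounds(bounds)              # determines origin and spacing
    voxelizer.SetScalarTypeToUnsignedChar()
    voxelizer.SetForegroundValue(1)
    voxelizer.SetBackgroundValue(0)
    voxelizer.Update()

    image = voxelizer.GetOutput()  # vtkImageData; wrap it as an ITK image
                                   # via the VTK/ITK glue layer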
Alternatively, you can fit a closed geometry to your contours and create voxels based on the parameterization of your geometry:
http://www.mit.edu/~adalca/files/papers/nerve_segmentation.pdf

Related

How to convert a 2D image into a 3D object file using VTK

How do I convert an image into an object file such as .obj or .ply? I need some code written with the Visualization Toolkit (VTK) and C++.
Thanks
Image data is pixel data, while .obj, .ply, and for that matter .stl are 3D geometry formats with point and cell information (for .obj, the cells are triangles).
Your question is not clear, but here are some steps:
First, you need to decide how to convert the pixels into points. vtkImageDataGeometryFilter might be of help here, although it may not be sufficient on its own, since you will also need triangle data.
Once you have vtkPolyData from the image data, you can write it to the STL, OBJ, or PLY format. You can use the following VTK classes for that:
vtkSTLWriter, vtkOBJWriter, and vtkPLYWriter. A sketch of such a pipeline is shown below.
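The question asks for C++, but since the rest of this page uses Python VTK, here is a hedged Python sketch of the same classes; the input file name is a placeholder, and vtkTriangleFilter is added because vtkSTLWriter expects triangles:

    import vtk

    # Read a 2D image into vtkImageData (hypothetical PNG input).
    reader = vtk.vtkPNGReader()
    reader.SetFileName("input.png")

    # Turn the pixels into polygonal geometry (one polygon per pixel).
    geometry = vtk.vtkImageDataGeometryFilter()
    geometry.SetInputConnection(reader.GetOutputPort())

    # STL requires triangles, so triangulate the polygons first.
    triangles = vtk.vtkTriangleFilter()
    triangles.SetInputConnection(geometry.GetOutputPort())

    writer = vtk.vtkSTLWriter()
    writer.SetFileName("output.stl")
    writer.SetInputConnection(triangles.GetOutputPort())
    writer.Write()

The same pipeline works with vtkOBJWriter or vtkPLYWriter in place of vtkSTLWriter.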

Does vtk mesh generation change coordinates?

I converted a NIfTI file to VTK meshes using the Python implementation of VTK. The main function was vtkMarchingCubes.
contour = vtk.vtkMarchingCubes()
The resulting VTK meshes have the proper shape, but their locations seem to have changed.
For example, when I load them in the same scene as the pial surface made from exactly the same NIfTI image through a different pipeline (FreeSurfer), the two surfaces do not line up.
Does VTK's conversion of NIfTI change the vertex coordinates, or somehow 'reset' them?
VTK's MarchingCubes filter should produce triangles in the same coordinate system as the volume. The catch is that the NIfTI image also carries its own coordinate system (the qform/sform transform), and VTK is probably not applying it. I'd guess there's a transform in the NIfTI header that VTK isn't using.
Try either Slicer (slicer.org) or ITK-SNAP (itksnap.org). They do a better job of maintaining coordinate systems for medical images.
Yes, VTK discards the NIfTI coordinate transform when reading. To restore it you need to:
- get the Q-form matrix using GetQFormMatrix()
- transform the mesh coordinates using vtkTransform()
A sketch of this fix is below.
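Here is a minimal Python sketch of that fix, assuming vtkNIFTIImageReader is used to load the volume; the file name and iso-value are placeholders:

    import vtk

    # Read the NIfTI volume; VTK keeps the voxel grid but does not apply
    # the header's qform transform to downstream geometry automatically.
    reader = vtk.vtkNIFTIImageReader()
    reader.SetFileName("brain.nii.gz")
    reader.Update()

    contour = vtk.vtkMarchingCubes()
    contour.SetInputConnection(reader.GetOutputPort())
    contour.SetValue(0, 0.5)  # iso-value; depends on your segmentation

    # Re-apply the NIfTI qform matrix so vertices land in scanner space.
    transform = vtk.vtkTransform()
    qform = reader.GetQFormMatrix()
    if qform is not None:
        transform.SetMatrix(qform)

    transformer = vtk.vtkTransformPolyDataFilter()
    transformer.SetTransform(transform)
    transformer.SetInputConnection(contour.GetOutputPort())
    transformer.Update()

    surface = transformer.GetOutput()  # mesh in NIfTI world coordinates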

VTK - create 3D model

I'm trying to create a 3D mask model from the 3D coordinate points that are stored in a txt file. I use the Marching Cubes algorithm. It looks like it's not able to link the individual points, and therefore holes are created in the model.
Steps (following https://lorensen.github.io/VTKExamples/site/Cxx/Modelling/MarchingCubes/):
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Put the voxelModeller output into the MC algorithm, and finally visualize the result.
Any ideas?
Thanks
The example takes a spherical mesh (i.e. a set of triangles forming a sealed 3D shape), converts it to a voxel representation (a 3D image where the voxels outside the mesh are black and those inside are not), then converts it back to a mesh using the Marching Cubes algorithm. In practice the input and output of the example are very similar meshes.
In your case, you load the points and try to create a voxel representation of them. The problem is that your set of points is not sufficient to define a volume; they are not a sealed mesh, just a list of points.
In order to replicate the example you should do the following:
1) Build a 3D mesh from your points (you gave no information about what the points represent, so I can't help you much with this task). In other words, you need to specify how these points are connected to each other to form a 3D shape (vtkPolyData). VTK can't guess how your points are connected; you have to tell it.
2) Once you have a mesh, if you need a voxel representation (vtkImageData) of it, you can use vtkVoxelModeller or vtkImplicitModeller. At this point you can use the VTK filters that take vtkImageData as input.
3) Finally, to convert the voxels back to a mesh (vtkPolyData), you can use vtkMarchingCubes (or better, vtkFlyingEdges3D, which is a very similar algorithm but much faster).
Edit:
It is not clear what the shape you want should be, but you can try vtkImageOpenClose3D, so the steps become:
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Feed the voxelModeller output into vtkImageOpenClose3D, then feed its output into the MC algorithm (switch to vtkFlyingEdges3D), and finally visualize. A sketch of this pipeline follows the example link below.
Example for vtkImageOpenClose3D:
https://www.vtk.org/Wiki/VTK/Examples/Cxx/Images/ImageOpenClose3D
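Here is a hedged Python sketch of that pipeline; the vtkPointSource cloud stands in for the points loaded from the txt file, and the sample dimensions, kernel size, and maximum distance are placeholders to tune:

    import vtk

    # Stand-in for the points loaded from the txt file: a random cloud
    # of 5000 points inside a unit sphere. Replace with your own loader.
    source = vtk.vtkPointSource()
    source.SetNumberOfPoints(5000)
    source.SetRadius(1.0)
    source.Update()
    cloud = source.GetOutput()

    # Rasterize the point cloud into a binary volume.
    voxelizer = vtk.vtkVoxelModeller()
    voxelizer.SetInputData(cloud)
    voxelizer.SetSampleDimensions(100, 100, 100)
    voxelizer.SetModelBounds(cloud.GetBounds())
    voxelizer.SetScalarTypeToUnsignedChar()
    voxelizer.SetMaximumDistance(0.05)  # how far each point "reaches"
    voxelizer.SetForegroundValue(255)
    voxelizer.SetBackgroundValue(0)

    # Morphological close to bridge the gaps between neighbouring points.
    open_close = vtk.vtkImageOpenClose3D()
    open_close.SetInputConnection(voxelizer.GetOutputPort())
    open_close.SetOpenValue(0)
    open_close.SetCloseValue(255)
    open_close.SetKernelSize(5, 5, 5)  # tune to the typical gap size

    # Back to a mesh with flying edges (a faster marching cubes).
    surface = vtk.vtkFlyingEdges3D()
    surface.SetInputConnection(open_close.GetOutputPort())
    surface.SetValue(0, 127.5)
    surface.Update()

    mesh = surface.GetOutput()  # vtkPolyData ready for a mapper/actor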

Constructing 3D image from center points and radius data

I have to construct a 3D image of spherical particles using Python array operations. The data I have are the center points and radii of the spherical particles in (x, y, z, r) format, where x, y, z, r are arrays of length 55000. When I do a 3D plot of these coordinates using mpl_toolkits.mplot3d, the structure looks like the one shown in the figure.
Can you suggest a good way to make the 3D image using numpy or scipy.ndimage image processing tools? If that is not possible, is there an alternative method? Thanks in advance.
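One straightforward numpy approach is to paint each sphere into a boolean volume, looping only over each sphere's bounding box so the 55000 spheres stay cheap. This is a sketch that assumes the centers and radii are already expressed in voxel units of the target grid:

    import numpy as np

    def spheres_to_volume(x, y, z, r, shape):
        """Rasterize spheres given by center arrays x, y, z and radii r
        into a binary volume of the given shape."""
        vol = np.zeros(shape, dtype=bool)
        for cx, cy, cz, cr in zip(x, y, z, r):
            # Only visit voxels inside this sphere's bounding box.
            x0, x1 = int(max(cx - cr, 0)), int(min(cx + cr + 1, shape[0]))
            y0, y1 = int(max(cy - cr, 0)), int(min(cy + cr + 1, shape[1]))
            z0, z1 = int(max(cz - cr, 0)), int(min(cz + cr + 1, shape[2]))
            xs, ys, zs = np.ogrid[x0:x1, y0:y1, z0:z1]
            mask = (xs - cx) ** 2 + (ys - cy) ** 2 + (zs - cz) ** 2 <= cr ** 2
            vol[x0:x1, y0:y1, z0:z1] |= mask
        return vol

    # Example: vol = spheres_to_volume(x, y, z, r, (512, 512, 512))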

Convert/export graphics from Pov-Ray in 2D vector format

By default, POV-Ray renders a bitmap file. Is there a way to convert or export the same image in a vector format like EPS, PDF, or SVG?
POV-Ray does not have any sort of vector output. In general, ray-tracers (like POV-Ray) work by tracing rays from screen pixels into the scene to work out what colour each pixel should be, so they are inherently pixel-based.
To 'ray-trace' to a vector format, you would have to calculate illumination values for each visible polygon, and then project the polygons onto the viewing angle as vectors. I don't know of any available software that can do this.
I'll also add that if you take an image and convert it using most tools to a vector format like PDF or EPS, it basically just wraps the bitmap data into an array and can still only render it pixel by pixel.
But if you render with POV-Ray at high contrast so that you can convert the result to a black-and-white image, you can then use the free software potrace to convert it to true vector graphics.
First, you can export the POV-Ray graphic to an .asc file; to do so, see the link and answer given here.
Then you can open this .asc file in MeshLab and export it in the STL or OBJ format. Finally, you can import the STL or OBJ file into Wings3D, which allows exporting to EPS and SVG.
