Sorry for the naive question.
What is the way to exchange data between the VTK and ITK packages?
Example: read a .mhd or .mha in VTK and use it in ITK registration.
Thanks,
Luis Gonçalves
Here's an example of how to convert a VTK image to an ITK image in Python:
https://itk.org/ITKExamples/src/Bridge/VtkGlue/ConvertvtkImageDataToAnitkImage/Documentation.html
It uses ITK's itk.VTKImageToImageFilter. There is also a filter to go the other direction, ImageToVTKImageFilter.
Note that you can read .mha or .mhd file directly in ITK.
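For instance, a minimal Python sketch of both points (the Mandelbrot source is just a stand-in for real VTK data, the file name is hypothetical, and the conversion requires an ITK build with the ITKVtkGlue module enabled):
import itk
import vtk

# .mha/.mhd files are natively supported by ITK's MetaImage IO, so for
# registration you can read them directly with no VTK round trip, e.g.:
# fixed = itk.imread("fixed.mha", itk.F)

# Converting an existing vtkImageData to an ITK image:
source = vtk.vtkImageMandelbrotSource()  # stand-in for a real VTK image
source.SetWholeExtent(0, 63, 0, 63, 0, 63)
source.Update()

ImageType = itk.Image[itk.F, 3]  # the Mandelbrot source produces floats
converter = itk.VTKImageToImageFilter[ImageType].New()
converter.SetInput(source.GetOutput())
converter.Update()
itk_image = converter.GetOutput()
print(itk.size(itk_image))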
I'm currently trying to perform a polar-to-Cartesian coordinate image transformation, to display a raw sonar image as a 'fan display'.
Initially I have a NumPy array image of type np.float64, which can be seen below:
After some searching, I came across the Stack Overflow post 'Inverse transform an image from Polar to Cartesian in OpenCV', which describes a very similar problem; the poster seemed to have solved it using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically its set of distortion functions.
However, when I tried to read the image in with Wand, I instead ended up with the image below, which appears smaller than the original. The weird thing is that img.size still reports the same dimensions as the original image's shape.
The code for this transformation can be seen below:
import numpy as np
from wand.image import Image
from wand.display import display

print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is quite problematic, as Wand will then produce the wrong 'fan image'. Is anybody familiar with this kind of problem in the Wand library, and if so, may I ask what the recommended solution is?
If this issue isn't resolved soon, I have a backup alternative: OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). The problem with this is that I'm not sure what mapping arrays (i.e. map_x and map_y) to use to perform the polar-to-Cartesian transformation, since a mapping that implements the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work, and instead threw errors from OpenCV as well.
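For reference, cv::remap expects an inverse mapping: for each destination pixel, the coordinates in the source image to sample from, whereas the equations above describe the forward direction. A rough, untested sketch of what building map_x and map_y might look like, assuming rows are range bins, columns are beam angles, and a hypothetical 90-degree field of view:
import numpy as np
import cv2

def polar_to_fan(polar_img, fov_deg=90.0):
    # Rows are assumed to be range bins, columns beam angles spanning
    # fov_deg degrees centred on the vertical axis.
    polar_img = polar_img.astype(np.float32)  # remap prefers float32
    n_ranges, n_beams = polar_img.shape[:2]
    half_fov = np.radians(fov_deg) / 2.0

    # Output canvas with the fan apex at the bottom centre.
    out_h = n_ranges
    out_w = int(2 * n_ranges * np.sin(half_fov)) + 1
    ys, xs = np.mgrid[0:out_h, 0:out_w].astype(np.float32)

    # For each destination pixel, compute the polar coordinates it came from.
    dx = xs - out_w / 2.0
    dy = out_h - ys                 # range increases upwards on screen
    r = np.sqrt(dx * dx + dy * dy)
    theta = np.arctan2(dx, dy)      # 0 rad points straight up

    map_x = (theta + half_fov) / (2 * half_fov) * (n_beams - 1)  # beam index
    map_y = r                                                    # range index
    return cv2.remap(polar_img, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)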
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another example image as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, when I went on to open the img_rgb object with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(wand_img)
I'm getting the following result instead.
I tried opening the file directly with wand.image.Image(), which displays the image correctly via the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested method to do so?
Please keep in mind that the NumPy-to-Wand conversion is quite crucial: the raw sonar images are stored as binary data, so NumPy is required to turn them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's NumPy handling in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand expects (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the upcoming 0.6.x releases.
If so, what would it be, and what is the suggested method to do so?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2],)
with Image.from_array(img_rgb) as img:
    display(img)
I'm trying to read a DICOM image with an ITK reader and then convert it into vtkImageData for rendering.
When I convert the ITK image with itk::ImageToVTKImageFilter and render it in a vtkRenderWindow, the origin of the volume ends up at the center of the volume. What can I do to make the render window's coordinate system match the DICOM image?
Here's my code:
// ConnectorType is the itk::ImageToVTKImageFilter bridge mentioned above.
using ConnectorType = itk::ImageToVTKImageFilter<ImageType>;

vtkSmartPointer<vtkImageData> vtkImg = ITKconnectVTK(itkImg);

vtkSmartPointer<vtkImageData> ITKconnectVTK(ImageType::Pointer inputImg)
{
    ConnectorType::Pointer connector = ConnectorType::New();
    connector->SetInput(inputImg);
    connector->Update();
    return connector->GetOutput();
}
Here is an example which does just that:
https://itk.org/Wiki/VTK/ExamplesBoneYard/Cxx/VolumeRendering/itkVtkImageConvert
And another one which does not need an image:
https://itk.org/Wiki/ITK/Examples/WishList/IO/itkVtkImageConvertDICOM
You can read the DICOM series directly in VTK, and if you need some filter, send the image to ITK and then get it back.
Reading with VTK:
https://www.vtk.org/Wiki/VTK/Examples/Cxx/IO/ReadDICOM
Converting from ITK to VTK:
https://itk.org/Wiki/ITK/Examples/IO/ImageToVTKImageFilter
Converting from VTK to ITK:
https://itk.org/Wiki/ITK/Examples/Broken/Images/VTKImageToImageFilter
You need to enable the ITKVtkGlue module in ITK for this.
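As a rough Python sketch of that round trip (the directory name is a placeholder, the signed-short pixel type is an assumption about the series, and ITK must be built with ITKVtkGlue):
import itk
import vtk

# Read the DICOM series directly with VTK.
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("/path/to/dicom_dir")  # placeholder path
reader.Update()

# VTK -> ITK (the pixel type must match what the reader produced;
# signed short is common for CT but is an assumption here).
ImageType = itk.Image[itk.SS, 3]
vtk2itk = itk.VTKImageToImageFilter[ImageType].New()
vtk2itk.SetInput(reader.GetOutput())
vtk2itk.Update()

# Apply some ITK filter, e.g. a median filter.
median = itk.MedianImageFilter[ImageType, ImageType].New()
median.SetInput(vtk2itk.GetOutput())
median.SetRadius(1)

# ITK -> VTK, ready for rendering again.
itk2vtk = itk.ImageToVTKImageFilter[ImageType].New()
itk2vtk.SetInput(median.GetOutput())
itk2vtk.Update()
vtk_img = itk2vtk.GetOutput()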
I have a segmentation in a 3D numpy.ndarray, which I would like to render in VTK. [See a similar process here: https://pyscience.wordpress.com/2014/11/16/volume-rendering-with-python-and-vtk/ by @somada141]
My current (ad-hoc) solution includes:
(1) Save the NumPy array to a NIfTI file with nib.Nifti1Image
(2) Load the NIfTI file into VTK with vtkNIFTIImageReader()
(3) Render the surface with vtkDiscreteMarchingCubes()
My question: how can I convert this 3D NumPy array directly into VTK, without the intermediate file?
You can use the numpy_support module (https://github.com/Kitware/VTK/blob/master/Wrapping/Python/vtk/util/numpy_support.py) or the newer VTK dataset_adapter (http://www.paraview.org/ParaView3/Doc/Nightly/www/py-doc/paraview.vtk.numpy_interface.dataset_adapter.html, http://kitware.com/blog/home/post/709).
For an example of the first approach, see https://pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/
Actually, while I was looking for an example I also found http://www.vtk.org/Wiki/VTK/Examples/Python/vtkWithNumpy, which I had never tried before!
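To make the numpy_support option concrete, here is a minimal sketch (the toy cube volume is a stand-in for a real segmentation; note that vtkImageData stores voxels x-fastest, while NumPy's default C order is z-fastest if you treat the axes as x, y, z):
import numpy as np
import vtk
from vtk.util import numpy_support

# Toy 3D label volume standing in for a real segmentation.
seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[16:48, 16:48, 16:48] = 1

# Treating the array axes as (x, y, z), flatten in Fortran order so the
# flat array is x-fastest, as vtkImageData expects.
vtk_array = numpy_support.numpy_to_vtk(seg.ravel(order="F"), deep=True,
                                       array_type=vtk.VTK_UNSIGNED_CHAR)

image = vtk.vtkImageData()
image.SetDimensions(seg.shape)  # (nx, ny, nz)
image.GetPointData().SetScalars(vtk_array)

# Step (3) from the question: extract the surface of label 1.
mc = vtk.vtkDiscreteMarchingCubes()
mc.SetInputData(image)
mc.GenerateValues(1, 1, 1)  # one contour, at label value 1
mc.Update()
print(mc.GetOutput().GetNumberOfPoints())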
I've managed to compile GDCM with VTK and I have a particular application I would like to use, which is the "gdcm2vtk.exe".
Now, what's the syntax for converting a stack of images into a ".vti" file? So far I have this:
gdcm2vtk Input_Directory file.vti
Now, when I run something like this:
gdcm2vtk "C:/dicom/dicom directory" output.vti
I get an error:
could not find no reader to handle file: "C:/dicom/dicom directory"
Is there anything I'm missing there?
As specified in the documentation, gdcm2vtk does not handle a directory as input.
You may want to convert your DICOM series into a single DICOM instance using gdcmimg.
As of GDCM 2.6, gdcm2vtk is able to take a directory as input. Pay attention to sorting the files according to the well-known Image Orientation (Patient) & Image Position (Patient) strategy, instead of relying on filename ordering, to reconstruct your VTK (*.vti) file:
$ gdcm2vtk --ipp-sort input_dir output.vti
So is there a quick way to convert a .dae file (COLLADA) to a .osg (OpenSceneGraph) file?
Do you have the COLLADA loader plugin and the standard command-line OSG utilities? If so,
osgconv FILE.dae FILE.osg
from a command line will do it.
If you don't have the COLLADA plugin, you can use SketchUp with Ryan Pavlik's OSG exporter: https://github.com/rpavlik/sketchupToOSG
As a side note, this makes it super simple to get anything from Google's 3D Warehouse into OSG's native formats, which means tons of great models.
From what I remember, Blender 2.49 was able to import COLLADA files and export OpenSceneGraph files.
You can give it a try: download that version of Blender plus the exporter for OSG.
http://forum.openscenegraph.org/viewtopic.php?p=40070#40070
http://download.blender.org/release/
There is also an .osg exporter for 3DS Max: http://osgmaxexp.wiki.sourceforge.net
You can import your .dae there and then use the exporter to create an .osg.