How to display a NIFTI image that is read using vtkNIFTIImageReader with VTK

I can display sliced images using vtkNIFTIImageReader with the following code: https://github.com/dgobbi/vtk-dicom/blob/master/Examples/TestNIFTIDisplay.cxx
However, when I try to render the NIFTI file as a 3D object, it always comes up empty. How can I display NIFTI objects in VTK?
vtkSmartPointer<vtkNIFTIImageReader> reader =
    vtkSmartPointer<vtkNIFTIImageReader>::New();
reader->SetFileName(filename);
reader->Update();

vtkSmartPointer<vtkPolyDataMapper> mapper =
    vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(reader->GetOutputPort());

vtkSmartPointer<vtkActor> actor =
    vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);

vtkSmartPointer<vtkRenderer> renderer =
    vtkSmartPointer<vtkRenderer>::New();
vtkSmartPointer<vtkRenderWindow> renderWindow =
    vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->AddRenderer(renderer);
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
    vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);

renderer->AddActor(actor);
renderWindow->Render();
renderWindowInteractor->Start();

You have to select the correct visualization technique for an image (structured data), not a mesh. A vtkPolyDataMapper expects polygonal data, so connecting it directly to the image reader renders nothing.
You can either run a marching cubes algorithm to obtain a mesh and feed that to the poly data mapper, or use something derived from vtkVolumeMapper.
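As an illustration, here is a minimal sketch of both options, reusing the reader, mapper, and renderer from your code. The 0 to 255 transfer-function breakpoints and the isovalue of 100 are assumptions for illustration; adapt them to the scalar range of your data:

// Option A: volume rendering with vtkSmartVolumeMapper.
vtkSmartPointer<vtkSmartVolumeMapper> volumeMapper =
    vtkSmartPointer<vtkSmartVolumeMapper>::New();
volumeMapper->SetInputConnection(reader->GetOutputPort());

// Scalar opacity: transparent at low intensities, opaque at high ones.
// The 0..255 breakpoints are assumptions; match them to your data.
vtkSmartPointer<vtkPiecewiseFunction> opacity =
    vtkSmartPointer<vtkPiecewiseFunction>::New();
opacity->AddPoint(0.0, 0.0);
opacity->AddPoint(255.0, 1.0);

// Grayscale color ramp over the same assumed range.
vtkSmartPointer<vtkColorTransferFunction> color =
    vtkSmartPointer<vtkColorTransferFunction>::New();
color->AddRGBPoint(0.0, 0.0, 0.0, 0.0);
color->AddRGBPoint(255.0, 1.0, 1.0, 1.0);

vtkSmartPointer<vtkVolumeProperty> volumeProperty =
    vtkSmartPointer<vtkVolumeProperty>::New();
volumeProperty->SetScalarOpacity(opacity);
volumeProperty->SetColor(color);

vtkSmartPointer<vtkVolume> volume =
    vtkSmartPointer<vtkVolume>::New();
volume->SetMapper(volumeMapper);
volume->SetProperty(volumeProperty);
renderer->AddVolume(volume); // instead of renderer->AddActor(actor)

// Option B: extract an isosurface and keep the vtkPolyDataMapper.
// The isovalue 100.0 is an assumption for illustration.
vtkSmartPointer<vtkMarchingCubes> isosurface =
    vtkSmartPointer<vtkMarchingCubes>::New();
isosurface->SetInputConnection(reader->GetOutputPort());
isosurface->SetValue(0, 100.0);
mapper->SetInputConnection(isosurface->GetOutputPort());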

Related

How to apply transform to graphics in OpenFL

I'm converting a JavaScript library to Haxe. In this library, there is an animated effect constructed from many shapes, so I used the OpenFL library to render the shapes.
But now I have a technical problem with transformation.
Some of the shapes have child shapes, so their transforms should be applied to the child shapes too.
For example, imagine shapeC is attached to shapeB, and shapeB and shapeD are attached to shapeA. In this case, shapeB and shapeD should be transformed by both transformA and their own transforms, and shapeC by transformA, transformB, and transformC.
To achieve this, is it a good solution to render shapes at the same level into one graphic and apply the parent's transform to that graphic? (In the above example, render shapeB and shapeD into one graphic and apply transformA to that graphic.)
I don't think it is a well-optimized solution to calculate the final transform from all parent transforms and apply it to every vertex of a shape. Please teach me the best-optimized approach for rendering.
Any suggestion will be welcome.
And if anything in this question is unclear, please pardon me and let me know.
You can use the Sprite class:
var parentShape = new Sprite ();
parentShape.graphics.beginFill (0xFF0000);
parentShape.graphics.drawRect (0, 0, 100, 100);
var childShape = new Sprite ();
childShape.graphics.beginFill (0x00FF00);
childShape.graphics.drawCircle (0, 0, 50);
childShape.x = 200;
childShape.y = 200;
parentShape.addChild (childShape);
addChild (parentShape);
Each shape will use its own canvas element, so if you create a lot of shapes, you may decide to flatten them into a single image when you are ready. This is possible using cacheAsBitmap or bitmapData.draw:
parentShape.cacheAsBitmap = true;
...or
removeChild (parentShape);
var bitmapData = new BitmapData (Math.ceil (parentShape.width), Math.ceil (parentShape.height), true, 0);
bitmapData.draw (parentShape);
var bitmap = new Bitmap (bitmapData);
addChild (bitmap);

Can fabric.js parse raster graphic to "real" svg?

I have a question/problem with fabric.js: in my code the user can upload a picture, and with filters he can convert it to a black/white image. When I export the picture with canvas.toSVG(), it exports an SVG image, but it is not a real vector graphic; it loses quality when scaled up.
function handleImage(e) {
  var reader = new FileReader();
  reader.onload = function (event) {
    var img = new Image();
    img.onload = function () {
      var imgInstance = new fabric.Image(img, {
        scaleX: 0.7,
        scaleY: 0.7
      });
      canvas.add(imgInstance);
    };
    img.src = event.target.result;
  };
  reader.readAsDataURL(e.target.files[0]);
}
$('saveBtn').onclick = function () {
  var filedata = canvas.toSVG(); // the SVG file is now in filedata
  var locfile = new Blob([filedata], { type: "image/svg+xml;charset=utf-8" });
  var locfilesrc = URL.createObjectURL(locfile);
  var dwn = document.getElementById('dwn');
  dwn.innerHTML = "<a href=" + locfilesrc + " download='mysvg.svg'>Download</a>";
};
What am I doing wrong?
There is no easy way to "parse" raster graphics to a vector image. Vector graphics include information for how to draw an image, while raster images only include the pixel data for how an image appears at a given size and resolution. That's enough for many purposes, but it means that while it's easy to go from vector to raster (just execute the instructions), it's not easy to go from raster to vector.
It is possible to "trace" the edges of a raster image to obtain vectors that can approximate the raster: in other words, a set of vector instructions that, for that particular resolution and depth, yields an image that is the same as the original raster (or something very like it). But there is no guarantee that these actually correspond in any way to the original vectors (if there are any original vectors at all). Usually there is no correspondence, in fact, unless your tracing algorithm is very specialized: for example, tracing images of a font to make a vector copy of that font. Because they don't correspond, there's no guarantee that the image will scale up the way you want it to: it'll scale, but things may enlarge in strange ways.
It is possible to implement tracing algorithms in JavaScript, by drawing the image into a <canvas> element, using getImageData() to grab the pixel information from that, and performing your operations on the pixel information. Doing this, though, is beyond the scope of this question.

Get pixel data from Graphics object in Codename One

I'm trying to implement a Gaussian blur filter on a Graphics object, but I can't find a function to get pixel information or to convert a Graphics object to a byte array (with RGB data).
That isn't supported, since hardware-accelerated surfaces might not provide that information.
However, you can do something else: paint the current form onto a mutable image, then get the RGB data of the mutable image, which you can use to create a new Image from the RGB values, e.g. something close to this:
Display d = Display.getInstance();
Image img = Image.createImage(d.getDisplayWidth(), d.getDisplayHeight());
Graphics g = img.getGraphics();
d.getCurrent().paintBackgrounds(g);
d.getCurrent().paintComponent(g, false);
int[] bufferArray = img.getRGB();
// blur...
Image blurredImage = Image.createImage(bufferArray, img.getWidth(), img.getHeight());

Extruding multiple polygons with multiple holes and texturing the combined shape

This question is related to this question. The answer shows a very nice way to extrude polygons that have holes (see the excellent live example). The main lesson of that answer was that a path in three.js (r58) cannot have more than one moveTo command, and it has to be at the start of the path; this means a path has to be split at each moveTo, so that every moveTo starts a new path.
Extruding in three.js means that 2D paths are converted to 3D shapes, with optional beveling. It is suitable for extruding text to make 3D letters and words, but it can also be used to extrude custom paths.
Now two questions arise:
how is it possible to handle polygons that have multiple hole-polygons and multiple non-hole-polygons?
how is it possible to add a texture to generated shape as a whole?
I made an example of this as SVG in http://jsbin.com/oqomuj/1/edit:
The image is produced using this path:
<path d="
M57.11,271.77 L57.11,218.33 L41.99,218.63 L105.49,165.77 L138.41,193.18 L138.41,172.2 L152.53,172.2 L152.53,204.93 L168.99,218.63 L153.21,218.63 L153.21,271.77Z
M74.14,264.13 L105.49,264.13 L105.49,232.8 L74.14,232.8Z
M115.35,250.7 L135.96,250.7 L135.96,232.61 L115.35,232.61Z
M56.11,145.77 L56.11,92.33 L40.99,92.63 L104.49,39.77 L137.41,67.18 L137.41,46.2 L151.53,46.2 L151.53,78.93 L152.53,79.76 L155.55,77.23 L159.5,74.52 L168.65,69.81 L176.46,66.93 L188.04,64.16 L200.63,62.7 L213.65,62.7 L226.05,64.09 L234.83,66.06 L245.65,69.73 L252.87,73.27 L259.12,77.34 L262.63,80.33 L265.6,83.47 L268.01,86.76 L269.83,90.17 L271.08,93.68 L271.76,99.08 L271.04,104.64 L269.75,108.2 L267.87,111.63 L265.42,114.91 L262.44,118.01 L258.95,120.92 L255.02,123.63 L245.86,128.34 L238.06,131.22 L226.48,133.99 L213.88,135.44 L200.63,135.44 L188.04,133.99 L176.46,131.22 L168.65,128.34 L159.5,123.63 L155.55,120.92 L152.21,118.12 L152.21,145.77Z
M73.14,138.13 L104.49,138.13 L104.49,106.8 L73.14,106.8Z
M114.35,124.7 L134.96,124.7 L134.96,106.61 L114.35,106.61Z
M207.26,117.33 L210.57,117.26 L216.87,116.53 L222.66,115.15 L227.8,113.18 L233.11,110 L236.34,106.99 L238.51,103.64 L239.42,100.48 L239.42,97.67 L238.51,94.51 L236.34,91.16 L233.11,88.15 L227.8,84.97 L222.66,83 L216.87,81.62 L210.57,80.89 L203.94,80.89 L197.65,81.62 L191.86,83 L186.71,84.97 L181.41,88.15 L178.18,91.16 L176.01,94.51 L175.1,97.67 L175.1,100.48 L176.01,103.64 L178.18,106.99 L181.41,110 L186.71,113.18 L191.86,115.15 L197.65,116.53 L203.94,117.26Z
"></path>
and this path converted to individual arrays of vertices:
var lower_house_material = [{x:57.11,y:271.77},{x:57.11,y:218.33},{x:41.99,y:218.63},{x:105.49,y:165.77},{x:138.42,y:193.18},{x:138.42,y:172.2},{x:152.53,y:172.2},{x:152.53,y:204.93},{x:168.99,y:218.63},{x:153.21,y:218.63},{x:153.21,y:271.77}];
var lower_house_hole_1 = [{x:74.14,y:264.13},{x:105.49,y:264.13},{x:105.49,y:232.8},{x:74.14,y:232.8}];
var lower_house_hole_2 = [{x:115.35,y:250.7},{x:135.96,y:250.7},{x:135.96,y:232.61},{x:115.35,y:232.61}];
var upper_house_material = [{x:56.11,y:145.77},{x:56.11,y:92.33},{x:40.99,y:92.63},{x:104.49,y:39.77},{x:137.42,y:67.18},{x:137.42,y:46.2},{x:151.53,y:46.2},{x:151.53,y:78.93},{x:152.53,y:79.76},{x:155.55,y:77.23},{x:159.5,y:74.52},{x:168.65,y:69.81},{x:176.46,y:66.93},{x:188.04,y:64.16},{x:200.63,y:62.7},{x:213.65,y:62.7},{x:226.05,y:64.1},{x:234.83,y:66.06},{x:245.65,y:69.73},{x:252.87,y:73.27},{x:259.12,y:77.35},{x:262.63,y:80.33},{x:265.6,y:83.47},{x:268.01,y:86.76},{x:269.84,y:90.17},{x:271.08,y:93.68},{x:271.76,y:99.08},{x:271.04,y:104.64},{x:269.75,y:108.2},{x:267.87,y:111.63},{x:265.42,y:114.91},{x:262.44,y:118.01},{x:258.96,y:120.92},{x:255.02,y:123.63},{x:245.86,y:128.34},{x:238.06,y:131.22},{x:226.48,y:133.99},{x:213.88,y:135.45},{x:200.63,y:135.45},{x:188.04,y:133.99},{x:176.46,y:131.22},{x:168.65,y:128.34},{x:159.5,y:123.63},{x:155.55,y:120.92},{x:152.21,y:118.12},{x:152.21,y:145.77}];
var upper_house_hole_1 = [{x:73.14,y:138.13},{x:104.49,y:138.13},{x:104.49,y:106.8},{x:73.14,y:106.8}];
var upper_house_hole_2 = [{x:114.35,y:124.7},{x:134.96,y:124.7},{x:134.96,y:106.61},{x:114.35,y:106.61}];
var upper_house_hole_3 = [{x:207.26,y:117.33},{x:210.57,y:117.26},{x:216.87,y:116.53},{x:222.66,y:115.15},{x:227.8,y:113.18},{x:233.11,y:110},{x:236.34,y:106.99},{x:238.51,y:103.64},{x:239.42,y:100.48},{x:239.42,y:97.67},{x:238.51,y:94.51},{x:236.34,y:91.16},{x:233.11,y:88.15},{x:227.8,y:84.97},{x:222.66,y:83},{x:216.87,y:81.62},{x:210.57,y:80.89},{x:203.94,y:80.89},{x:197.65,y:81.62},{x:191.86,y:83},{x:186.71,y:84.97},{x:181.41,y:88.15},{x:178.18,y:91.16},{x:176.01,y:94.51},{x:175.1,y:97.67},{x:175.1,y:100.48},{x:176.01,y:103.64},{x:178.18,y:106.99},{x:181.41,y:110},{x:186.71,y:113.18},{x:191.86,y:115.15},{x:197.65,y:116.53},{x:203.94,y:117.26}];
The question is: how can a structure like this be converted into a 3D object in three.js so that it can be extruded using THREE.ExtrudeGeometry( shape, extrusionSettings ) and textured as a whole afterwards?
I can examine the path data to know which hole belongs to which polygon and handle everything as separate shapes, but because I want to use one texture image across all the shapes, I think the preferred way is to handle all material-polygons as one shape and all hole-polygons as another shape, and use something like:
var shape = [lower_house_material, upper_house_material];
shape.holes = [lower_house_hole_1, lower_house_hole_2, upper_house_hole_1, upper_house_hole_2, upper_house_hole_3];
var geometry_3d = new THREE.ExtrudeGeometry( shape, extrusionSettings );
So geometry_3d should end up as one mesh to which I can apply a texture this way:
var textureFront = THREE.ImageUtils.loadTexture( 'textureFront.png' );
var textureSide = THREE.ImageUtils.loadTexture( 'textureSide.png' );
var materialFront = new THREE.MeshBasicMaterial( { map: textureFront } );
var materialSide = new THREE.MeshBasicMaterial( { map: textureSide } );
var materialArray = [ materialFront, materialSide ];
var faceMaterial = new THREE.MeshFaceMaterial(materialArray);
var final_mesh = new THREE.Mesh( geometry_3d, faceMaterial );
And one of the textures could be something like this (256x256px):
And texture applied:
And because the mesh is extruded, the above also has 3D thickness, but you get the idea of the texturing.
I know that the y-coordinates have to be flipped, but that is a trivial task and not the point of my question; still, if three.js has a ready-made function for flipping y, it would be helpful.
I have spent hours examining the three.js source code, examples, and documentation, but because the most frequent word there is "todo", it doesn't help much. And as I'm very new to three.js, I suspect this may be a trivial task for an experienced three.js user.
UPDATE: And just to make sure: the hole-polygons are always well-behaved, meaning that they lie fully inside the material-polygons, there are no duplicate vertices or self-intersections in either the material-polygons or the hole-polygons, and all material-polygons have CW winding order while holes have CCW.
UPDATE: Merging geometries did not solve texturing the whole extruded polygon set with one texture: http://jsfiddle.net/C5dga. The texture is repeated on each individual shape, so merging geometries has no real effect in this case. The solution might lie in merging the shapes before they are extruded, but I have not found a solution for that yet.
You can merge geometries as in the following snippet, resulting in just a single mesh. From your prior questions, you already know how to texture a single geometry.
var geometry1 = new THREE.ExtrudeGeometry( shape1, extrusionSettings );
var geometry2 = new THREE.ExtrudeGeometry( shape2, extrusionSettings );
geometry1.merge( geometry2 );
. . .
var mesh = new THREE.Mesh( geometry1, material );
scene.add( mesh );
Fiddle: http://jsfiddle.net/pHn2B/88/
Fiddle: http://jsfiddle.net/C5dga/13/ (with texture)
EDIT: As an alternative to creating separate geometries and using the merge utility, you can create a single geometry using the following pattern instead:
var geometry1 = new THREE.ExtrudeGeometry( [ shape1, shape2 ], extrusionSettings );
EDIT: updated to three.js r.70

vtkMarchingCubes export nifti surfaces to wavefront OBJ

I want to run vtkMarchingCubes on a NIFTI label set. Regions of voxels for which I want to produce surfaces all share the same value. I have two problems. First, I seem to be setting up the algorithm incorrectly, because the resulting vtkPolyData apparently has no vertices. Second, it is not clear to me from the vtkOBJExporter documentation how to export the vtkPolyData as a Wavefront .obj file. If anyone sees any issues with the code below, or can tell me how to export the vtkPolyData as an OBJ, I would be grateful.
//Read The Nifti Label File
string input_path = "/MyPath/labels.nii";
nifti_image *im = nifti_image_read(input_path.c_str(),true);
cout<<im->nx<<","<<im->ny<<","<<im->nz<<endl; //Confirms Read Works
// Set up vtk image data
vtkImageImport* importer = vtkImageImport::New();
importer->SetImportVoidPointer((void*)im->data);
importer->SetDataScalarTypeToFloat();
importer->SetDataExtent(0, im->nx-1, 0, im->ny-1, 0, im->nz-1);
importer->SetWholeExtent(0, im->nx-1, 0, im->ny-1, 0, im->nz-1);
vtkImageData* point_cloud = importer->GetOutput();
point_cloud->SetScalarTypeToFloat();
point_cloud->SetExtent(0, im->nx-1, 0, im->ny-1, 0, im->nz-1);
point_cloud->SetSpacing(im->dx, im->dy, im->dz);
//Apply Threshold To Cut Out Other Data
//Is this needed or will Marching Cubes properly identify the region
vtkImageThreshold* threshold = vtkImageThreshold::New();
threshold->ThresholdBetween(label_number,label_number);
threshold->SetInValue(255);
threshold->SetOutValue(0);
threshold->SetInput(point_cloud);
//Apply the Marching Cubes algorithm
vtkMarchingCubes* marching_cubes = vtkMarchingCubes::New();
marching_cubes->SetValue(0, 127.0f);
marching_cubes->SetInput(threshold->GetOutput()); //(vtkDataObject*)point_cloud);
vtkPolyData* surface = marching_cubes->GetOutput();
marching_cubes->Update();
//See That Marching Cubes Worked
cout<<"# Vertices: "<< surface->GetNumberOfVerts()<<endl;
cout<<"# Cells: "<< surface->GetNumberOfCells()<<endl;
//Export (How is this done properly?)
vtkOBJExporter* exporter = vtkOBJExporter::New();
exporter->SetInput(vtkRenderWindow *renWin); //I don't want a render window, I want at file
exporter->SetFilePrefix("/MyPath/surface");
exporter->Write();
You can use this class https://github.com/daviddoria/vtkOBJWriter to write the .obj file in the way you would expect (like every other VTK writer). Unfortunately the vtkOBJExporter also wants to write additional information that I never have.
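For reference, a minimal sketch of how that writer would slot into the code above; this assumes the class follows the standard VTK 5 writer pattern (SetInput/SetFileName/Write) used elsewhere in the question's code, so check the repository's own example before relying on it:

// Hypothetical usage sketch of the linked vtkOBJWriter class,
// assuming the standard VTK 5 writer interface.
vtkOBJWriter* obj_writer = vtkOBJWriter::New();
obj_writer->SetInput(marching_cubes->GetOutput()); // the isosurface polydata
obj_writer->SetFileName("/MyPath/surface.obj");
obj_writer->Write();
obj_writer->Delete();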
