Loading a Large 3D NumPy Array Into ParaView - vtk

What is the best file format to save a large 3D NumPy array to, so that I can easily load it into ParaView?
I have a very large 3D NumPy array filled with data points; its dimensions are (2000, 1500, 200). I have tried loading this array into ParaView with a few different methods, but I have been unsuccessful.
So far, I have used gridToVTK to convert the array into a .vtr file, but gridToVTK crashes if the array is much larger than 100 MB. I split the array into smaller, more manageable chunks, saved them as multiple .vtr files, and stitched them together in ParaView, but that approach is too slow and tedious.
I have also tried saving the NumPy array as a .raw file and loading that into ParaView, but I was unsuccessful with that method as well.
Does anybody have any suggestions for how I should save this array so that I can easily load it into ParaView?

You can use the vtk module directly: import the array with vtkImageImport, then write the resulting image data with vtkXMLImageDataWriter.
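A minimal sketch of that pipeline, assuming a float32 array; the shape and output filename here are placeholders:

import numpy as np
import vtk

# Placeholder stand-in for the real (2000, 1500, 200) array.
data = np.random.rand(200, 150, 100).astype(np.float32)

# vtkImageImport wraps a raw memory buffer. VTK expects x to vary fastest,
# so a C-ordered array of shape (nz, ny, nx) maps to the extents below.
nz, ny, nx = data.shape
buf = np.ascontiguousarray(data)

importer = vtk.vtkImageImport()
importer.CopyImportVoidPointer(buf, buf.nbytes)
importer.SetDataScalarTypeToFloat()
importer.SetNumberOfScalarComponents(1)
importer.SetDataExtent(0, nx - 1, 0, ny - 1, 0, nz - 1)
importer.SetWholeExtent(0, nx - 1, 0, ny - 1, 0, nz - 1)

writer = vtk.vtkXMLImageDataWriter()
writer.SetFileName("volume.vti")
writer.SetInputConnection(importer.GetOutputPort())
writer.Write()

The resulting .vti file is VTK's XML image-data format, which ParaView reads natively; by default the XML writers store the bulk data in binary, which matters at this array size.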

Related

Is there a way to save a 3D volume to a single DICOM file in Python

I'm processing a 3D NumPy array and would like to know whether it is possible to write/read the entire 3D volume as a single 3D .dcm file rather than as a series of 2D .dcm files. Can this be accomplished with the pydicom library? That is what I'm using. I'm new to working with DICOMs, so I'm not sure how to go about implementing this, if it is possible.
I found what looked like exactly what I wanted, but I could not access the code because the link to it is dead: https://github.com/pydicom/pydicom/issues/786
Any help will be appreciated.
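DICOM itself supports multi-frame objects, where the whole volume lives in one file with a NumberOfFrames attribute, and pydicom can write those. A rough sketch, written against pydicom 2.x; the Secondary Capture SOP class, dtype, and filename are assumptions, and a real dataset needs more mandatory attributes than shown:

import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# Placeholder volume: 64 frames of 256 x 256 unsigned 16-bit pixels.
volume = np.zeros((64, 256, 256), dtype=np.uint16)

file_meta = FileMetaDataset()
# Multi-frame Grayscale Word Secondary Capture Image Storage
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7.2"
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta
ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.NumberOfFrames = volume.shape[0]
ds.Rows, ds.Columns = volume.shape[1], volume.shape[2]
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 16
ds.HighBit = 15
ds.PixelRepresentation = 0
ds.PixelData = volume.tobytes()

ds.is_little_endian = True
ds.is_implicit_VR = False
ds.save_as("volume.dcm")

Reading it back with pydicom.dcmread then gives the whole volume via pixel_array, with shape (frames, rows, columns).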

How to convert a 2D image into a 3D object file using VTK

How do I convert an image into an object file such as .obj or .ply? I need some code written with the Visualization Toolkit (VTK) in C++.
Thanks
Image data is pixel data, whereas .obj, .ply, and for that matter .stl hold 3D geometry data with point and cell information (for .obj, a cell is a triangle).
Your question is not clear, but here are some steps to get you started.
First, you need to decide how you will convert the pixels into points. vtkImageDataGeometryFilter might be of help here, although it may not be sufficient on its own, as you will also need triangle data.
Once you have vtkPolyData from the image data, you can write it out in STL, OBJ, or PLY format using the following VTK classes:
vtkSTLWriter, vtkOBJWriter, and vtkPLYWriter.
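A sketch of that pipeline, shown in Python for brevity (the C++ API mirrors these calls one-to-one); the PNG input and STL output are assumptions:

import vtk

# Read the 2D image (swap the reader class for your file format).
reader = vtk.vtkPNGReader()
reader.SetFileName("input.png")

# Turn pixels into polydata points and polygons.
geometry = vtk.vtkImageDataGeometryFilter()
geometry.SetInputConnection(reader.GetOutputPort())

# STL expects triangles, so triangulate the polygons first.
triangulate = vtk.vtkTriangleFilter()
triangulate.SetInputConnection(geometry.GetOutputPort())

writer = vtk.vtkSTLWriter()
writer.SetFileName("output.stl")
writer.SetInputConnection(triangulate.GetOutputPort())
writer.Write()

For .obj or .ply output, replace the last stage with vtkOBJWriter or vtkPLYWriter.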

Saving a huge 2D (or 3D) NumPy array efficiently as DNN labels with bit-exact precision

I am developing a DNN algorithm and need to save thousands of 2D (possibly 4-channel 2D) NumPy arrays as labels: 2000 × 2000 arrays of integers.
I am not sure what the most efficient way is (in terms of speed and storage space) to save these arrays. I have tried:
- NumPy's save and load, which result in huge files (hundreds of MB for a single array)
- scikit-image's imsave and imread, which are much faster but lossy and change the values when reading back. In general, every image-saving package I have tried is lossy.
Please advise.
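One lossless option worth trying: np.savez_compressed round-trips integer arrays bit-exactly and compresses label maps well. A minimal sketch (the filename and value range are placeholders):

import numpy as np

# Use the smallest dtype that fits your label range; that alone can
# shrink files 4-8x compared with a default int64 array.
labels = np.random.randint(0, 256, size=(2000, 2000)).astype(np.uint8)

# Lossless, zlib-compressed container.
np.savez_compressed("labels.npz", labels=labels)

restored = np.load("labels.npz")["labels"]
assert np.array_equal(labels, restored)  # bit-exact round trip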

Reading and Batching Sequence Data in TensorFlow with TFRecords

Hi, I am currently trying to batch images of variable width with TensorFlow.
For instance, I am dealing with images of size 50×245, 50×235, 50×265, and so on.
I followed a basic pipeline that I found online: first I serialize my data and write it to my TFRecord file using tf.train.SequenceExample(). I store the different image widths (in my case 245, 235, 265, and so on) and my pixel data using this code:
example = tf.train.SequenceExample()
# First we store our image width in the 'input_length' feature
example.context.feature['input_length'].int64_list.value.append(sequence_length)
feature_input = example.feature_lists.feature_list['input']
# Then we store pixel values in the 'input' feature (our sequential data)
for pixel in image:
    feature_input.feature.add().int64_list.value.append(pixel)
# Write to the TFRecord file
writer.write(example.SerializeToString())
Then we open our TFRecord file and specify how to parse our sequential data:
# Definition of data parsing
context_features = {
    'input_length': tf.FixedLenFeature([], dtype=tf.int64)
}
sequence_features = {
    'input': tf.FixedLenSequenceFeature([50], dtype=tf.int64, allow_missing=False),
}
# Now we parse the examples
length_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=serialized_data,
    context_features=context_features,
    sequence_features=sequence_features
)
input_lengths, input_data = tf.train.batch(
    tensors=[length_parsed['input_length'], sequence_parsed['input']],
    batch_size=1,
    dynamic_pad=True
)
The problem is that the dynamic padding does not seem to work: I get tensors of shape (?, 50) when I expected tensors padded to the shape of the largest example in the batch, e.g. (265, 50). Does anybody have any idea what I am doing wrong, or what I am failing to specify for the batching process (or any of the steps above)?
I have been stuck on this for 5 days :/
I found the solution: I was appending single integers to my feature list instead of lists of integers, so the parser could not understand why I was suddenly asking for 50-component vectors.
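In other words, each entry in the 'input' feature list should hold one full 50-pixel column per time step, matching FixedLenSequenceFeature([50]). A sketch of the corrected serialization, assuming image is an integer NumPy array of shape (50, width):

example = tf.train.SequenceExample()
# Context: the sequence length is the image width (number of columns).
example.context.feature['input_length'].int64_list.value.append(image.shape[1])
feature_input = example.feature_lists.feature_list['input']
# One feature per time step, each holding a complete 50-component column.
for column in image.T:
    feature_input.feature.add().int64_list.value.extend(column.tolist())
writer.write(example.SerializeToString())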

Importing Transient Data into ParaView

I have a 3D triangulated surface. The Nodes and Conn variables store the coordinates and the connectivity of the triangles. At each vertex, a scalar quantity, S, and a vector with three components, V, are stored. These data are time-dependent. The geometry does not change over time, so I have a single surface for all timesteps.
How should I approach writing a VTK file that holds the transient data over this surface? In other words, I want to write the values of S and V at different timesteps on this 3D surface into a single VTK file, which I ultimately want to import into ParaView for visualization. vtkTemporalDataSet seems to be the solution, but I could not find an example of how to write an ASCII or binary file for this VTK class. Could vtkPolyData somehow be used to define time so that ParaView knows about the transient nature of my dataset? I would appreciate any help or comment.
The VTK file format does not support transient data. However, you can write a series of files that ParaView will interpret as a time sequence. This will work fine with poly data in the VTK file. The file series is defined as files of the same name with a number identifier in them. For example, if you have a series of files named:
MyFile_000.vtk
MyFile_001.vtk
MyFile_002.vtk
ParaView will group these files together in its file browser and when you read them together, it will treat them as a file sequence with 3 time steps.
The bad part of this representation is that you will have to replicate the Nodes and Conn in each file. If that is a problem, you will have to use a different file format that supports multiple time steps sharing the same connectivity information (such as the Exodus II file format).
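A sketch of writing such a series with the vtk module, reusing the question's names: Nodes as an (N, 3) coordinate array, Conn as an (M, 3) triangle-index array, and per-timestep arrays S[t] of shape (N,) and V[t] of shape (N, 3); these and n_steps are all assumed to exist:

import vtk
from vtk.util import numpy_support

# Build the static geometry once; only the point data changes per step.
points = vtk.vtkPoints()
for x, y, z in Nodes:
    points.InsertNextPoint(x, y, z)

cells = vtk.vtkCellArray()
for a, b, c in Conn:
    tri = vtk.vtkTriangle()
    tri.GetPointIds().SetId(0, int(a))
    tri.GetPointIds().SetId(1, int(b))
    tri.GetPointIds().SetId(2, int(c))
    cells.InsertNextCell(tri)

surface = vtk.vtkPolyData()
surface.SetPoints(points)
surface.SetPolys(cells)

# One file per timestep; ParaView groups MyFile_000.vtk, MyFile_001.vtk, ...
for t in range(n_steps):
    # AddArray replaces an existing array of the same name.
    s_arr = numpy_support.numpy_to_vtk(S[t], deep=True)
    s_arr.SetName("S")
    surface.GetPointData().AddArray(s_arr)

    v_arr = numpy_support.numpy_to_vtk(V[t], deep=True)  # (N, 3) -> 3 components
    v_arr.SetName("V")
    surface.GetPointData().AddArray(v_arr)

    writer = vtk.vtkPolyDataWriter()
    writer.SetFileName("MyFile_%03d.vtk" % t)
    writer.SetInputData(surface)
    writer.Write()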
