I have irregular data: x dimension 384, y dimension 256, and z dimension 64. The coordinates are stored in 3 separate binary files, and I have a data file containing a data value for each of these points. I want to know how I can represent such data so that it can be easily visualized in VTK.
Until now we were using AVS, which has .fld files that can read such data easily. I don't know how to do it in VTK. I would appreciate any pointers in this direction.
My best answer would be to write a small program that reads in the files, fills a vtkImageData object, and then saves it using vtkMetaImageWriter or something similar:
int x_dim = 384, y_dim = 256, z_dim = 64;
vtkSmartPointer<vtkImageData> ImageData = vtkSmartPointer<vtkImageData>::New();
ImageData->SetDimensions(x_dim, y_dim, z_dim);
ImageData->SetOrigin(0.0, 0.0, 0.0);
ImageData->SetSpacing(1.0, 1.0, 1.0);
ImageData->SetScalarTypeToDouble();
ImageData->AllocateScalars();
for (int i = 0; i < z_dim; i++) {
    for (int j = 0; j < y_dim; j++) {
        for (int k = 0; k < x_dim; k++) {
            double pix = 0.0; // read the next pixel value from your data file here
            double* pixel = static_cast<double*>(ImageData->GetScalarPointer(k, j, i));
            pixel[0] = pix;
        }
    }
}
Maybe you can write a short program to convert the files to a VTK native format. They are straightforward to work with, and there are ASCII and binary flavors. They are described in this document: www.vtk.org/VTK/img/file-formats.pdf
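For instance, here is a minimal sketch of such a conversion in Python (not code from the original answer). It assumes the three coordinate files and the data file are flat arrays of 32-bit floats, one value per grid point, stored with x varying fastest; the names x.bin, y.bin, z.bin, data.bin and field.vtk are placeholders. Adjust the dtype, ordering, and file names to your actual layout.
```
import numpy as np

nx, ny, nz = 384, 256, 64
npts = nx * ny * nz

# Assumption: each file holds npts float32 values in x-fastest order.
x = np.fromfile("x.bin", dtype=np.float32, count=npts)
y = np.fromfile("y.bin", dtype=np.float32, count=npts)
z = np.fromfile("z.bin", dtype=np.float32, count=npts)
values = np.fromfile("data.bin", dtype=np.float32, count=npts)

# Write a legacy ASCII VTK structured grid (see the file-formats PDF above).
with open("field.vtk", "w") as f:
    f.write("# vtk DataFile Version 3.0\n")
    f.write("converted from AVS field\n")
    f.write("ASCII\n")
    f.write("DATASET STRUCTURED_GRID\n")
    f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
    f.write(f"POINTS {npts} float\n")
    for xi, yi, zi in zip(x, y, z):
        f.write(f"{xi} {yi} {zi}\n")
    f.write(f"POINT_DATA {npts}\n")
    f.write("SCALARS value float 1\n")
    f.write("LOOKUP_TABLE default\n")
    for v in values:
        f.write(f"{v}\n")
```
The resulting field.vtk can then be opened directly in ParaView or read with vtkStructuredGridReader.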
You may also find this helpful: http://www.rug.nl/cit/hpcv/visualisation/VTK/avs2vtk/man.html. If you dig through the page, there are scripts there to convert AVS files to VTK formats; it may be a good starting point.
Hope this helps,
Carlos-
You can use ParaView to open all the files, merge the points, and visualize them.
Here is an example for loading files.
You can save the VTK file too, as in this example.
Here is an example for saving the points.
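As a rough illustration of the ParaView route, here is a sketch using ParaView's paraview.simple scripting module. It assumes the data has first been converted to a VTK file (for example with a script like the one above); field.vtk and merged.vtk are placeholder names.
```
from paraview.simple import LegacyVTKReader, Show, Render, SaveData

reader = LegacyVTKReader(FileNames=["field.vtk"])  # load the converted file
Show(reader)                                       # add it to the current view
Render()

SaveData("merged.vtk", proxy=reader)               # save the points back out as a VTK file
```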
I have a dataset of images with an unknown label format, which looks like this:
angry_actor_104.jpg 0 28 113 226 141 22.9362 0
Each line describes an image as follows:
image_name face_id_in_image face_box_top face_box_left face_box_right face_box_bottom face_box_confidence expression_label
My question is: How can this be converted into the yolov5 format?
I have been looking this up for a long time and hope someone can help.
Thank you very much in advance.
Since the format is unknown, you are unlikely to find existing code that completely handles the transformation, but I can share some tips to get started.
The annotation file on its own does not have enough information to be converted to YOLO format, because the conversion also requires the dimensions of the images. If all of your images have the same dimensions it is easier, but if they differ you will need additional code to extract each image's dimensions. I will explain why below.
When you are done, you will need to get the images and labels into a specific directory structure like this, with one txt file per image:
/images/actor1.jpg
/images/actor2.jpg
/labels/actor1.txt
/labels/actor2.txt
This is the format that you want to get the annotation files into:
face_id_in_image x_center_image y_center_image width height
There is a clear description of what the values mean here https://stackoverflow.com/a/66563144/5183735.
Now you need to do some math to calculate the values.
width = (face_box_right - face_box_left)/image_width
height = (face_box_bottom - face_box_top)/image_height
x_center_image = face_box_left/image_width + (width/2)
y_center_image = face_box_top/image_height + (height/2)
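Putting those formulas together, here is a minimal sketch of the conversion (not a complete solution): the names annotations.txt, images/ and labels/ are assumptions, Pillow is used to read each image's dimensions, and the field order follows the description above.
```
import os
from PIL import Image

images_dir = "images"   # assumption: folder containing the .jpg files
labels_dir = "labels"   # assumption: output folder for the .txt label files
os.makedirs(labels_dir, exist_ok=True)

with open("annotations.txt") as f:          # assumption: one annotation per line
    for line in f:
        if not line.strip():
            continue
        (image_name, face_id, top, left, right,
         bottom, confidence, expression) = line.split()
        top, left, right, bottom = map(float, (top, left, right, bottom))

        # Image dimensions are needed for the normalization described above.
        with Image.open(os.path.join(images_dir, image_name)) as img:
            image_width, image_height = img.size

        width = (right - left) / image_width
        height = (bottom - top) / image_height
        x_center = left / image_width + width / 2
        y_center = top / image_height + height / 2

        label_path = os.path.join(labels_dir, os.path.splitext(image_name)[0] + ".txt")
        with open(label_path, "a") as out:
            out.write(f"{face_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}\n")
```
Opening the label file in append mode handles the case where one image contains several faces and therefore several annotation lines.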
I have some bits of code that may help you with reading the text file and saving the text files here.
https://github.com/pylabel-project/pylabel/blob/main/pylabel/exporter.py and https://github.com/pylabel-project/pylabel/blob/main/pylabel/importer.py.
If you are able to share your exact files I may be able to identify some shortcut to transform them.
A sensor provides a stream of frames containing object coordinates, which are stored in ProtoBuf format in a gzipped file. I would like to read this file in Julia.
Using protoc, I have generated the Protobuf files for both Python and Julia, coordinate_push.py and coordinate_push.jl
My Python code is as follows:
import gzip
from google.protobuf.internal.decoder import _DecodeVarint32
from src.proto import coordinate_push  # generated by protoc

frameList = []
with gzip.open(filePath) as f:
    data = f.read()

next_pos, pos = 0, 0
while pos < len(data):
    msg = coordinate_push.CoordinatesFrame()
    next_pos, pos = _DecodeVarint32(data, pos)
    msg.ParseFromString(data[pos:pos + next_pos])
    frameList.append(msg)
    pos += next_pos
I'd like to rewrite the above in Julia, and don't know where to start. Part of the problem is that I haven't fully understood the Python script (IO is not my strong point).
I understand that I need:
to open the gzip file, presumably using using GZip; file = GZip.open(file_path, "r")
to read in the data, along the lines of using ProtoBuf; data = readproto(iob, CoordinatesFrame())
What I don't understand is:
how to define iob, and especially how to link it to file (in the Julia Protobuf manual, we had iob = PipeBuffer(), but here it's a gzip-file that we'd like to read)
how to replicate the while-loop in Julia, and in particular the mysterious _DecodeVarint32 (I'm on Windows, if it's related to that.)
whether the file coordinate_push.jl has to be in the same directory as my main file, and if not, how I can properly import it (it is currently in a proto subfolder, and in Python I'd import it using from src.proto import coordinate_push)
Insight on any of the three points would be highly appreciated.
You should open an issue on the GZip GitHub repo and ask the first part of your question there (I am not a GZip expert, unfortunately).
On the second point, I suggest looking here: https://github.com/JuliaIO/FileIO.jl/blob/master/README.md for lots of examples of FileIO loops, which seem to be exactly what you need to replicate that Python loop. For the second part of that question, your best bet for that function is to try to hunt down the definition on GitHub or in the docs somewhere.
For the third question, coordinate_push.jl does not need to be in the same folder as your "main file" (I am not sure what you mean by this, so perhaps it would help to add context on the structure of your files). To import that file, all you need to do is add include("path/to/coordinate_push.jl") at the top of the file you want to call/run the code from. It's worth noting that the path can be either the absolute path or the relative project path (in some cases).
I am trying to analyze a wav file in Python and get its RMS value. I am using audioop.rms to get the value from the wav. When I went to do this, I did not know what fragment and width stood for. I am new to audioop and hope somebody can explain this. I am also wondering if there is a better way to do this in Python.
Update: I have done some research and found out that fragment stands for the wav file. I still need to figure out what width means.
A fragment is just a chunk of data. Width is the size in bytes in which the data is organized: for example, 8-bit data has width 1, 16-bit data has width 2, and so on.
```
import alsaaudio, audioop
self.input = alsaaudio.PCM(alsaaudio.PCM_CAPTURE,alsaaudio.PCM_NONBLOCK)
self.input.setchannels(1)
self.input.setrate(8000)
self.input.setformat(alsaaudio.PCM_FORMAT_S16_LE)
self.input.setperiodsize(300)
length, data = self.input.read()
avg_i = audioop.avg(data,2)
```
In the example I am setting the ALSA capture card to use S16_LE (signed 16-bit little-endian), so I have to set the width to 2. The fragment is just the data captured by ALSA. In your case, the wav file's data is your fragment.
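For the original wav-file case, here is a minimal sketch using the standard-library wave module to obtain both the fragment and the width ("input.wav" is a placeholder file name):
```
import wave
import audioop

with wave.open("input.wav", "rb") as wav:
    width = wav.getsampwidth()                 # bytes per sample, e.g. 2 for 16-bit audio
    frames = wav.readframes(wav.getnframes())  # the "fragment": the raw sample data

print(audioop.rms(frames, width))              # RMS over the whole file
```
For a stereo file the frames interleave both channels, so the RMS is computed over the interleaved samples.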
OK, I am trying to read a .csv file into a spatial points data frame and set the projection system to WGS 84. I then want to determine the distance between each point. This is what I have come up with, but I am running into problems:
cluster<-read.csv(file = "cluster.csv", stringsAsFactors=FALSE)
coordinates(cluster)<- ~Latitude+Longitude
cluster<-CRS("+proj=longlat +datum=WGS84")
d<-dist2Line(cluster)
This returns an error that says
Error in .pointsToMatrix(p) :
points should be vectors of length 2, matrices with 2 columns, or inheriting from a SpatialPoints* object
But this isn't working and I will be honest that I don't fully comprehend importing and manipulating spatial data in R. Any help would be great. Thanks
I was able to determine the issue I was running into. With WGS 84 (+proj=longlat), the longitude comes before the latitude. This is just backwards from how all the GPS data I download is formatted (e.g. lat-long). The CRS also needs to be assigned with proj4string() rather than by overwriting the object. Hope this helps anyone else who runs into this issue!
Thus the code should have been:
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Longitude+Latitude
proj4string(cluster) <- CRS("+proj=longlat +datum=WGS84")
I am a beginner in VTK and ITK. I am trying to read a DICOM series with ITK and display it with VTK, but the pictures come out upside down. I tried reading a single image (JPG) with ITK and visualizing it with VTK, and it is the same problem. So I had the idea of treating the image in Photoshop, i.e. I applied a rotation (vertical symmetry of the work area) to the original image, then read it with ITK and displayed it with VTK, and the image appeared in the correct orientation. In fact ITK keeps the orientation of the image; the problem is with VTK, which displays the image upside down. I have searched all over the internet and have not found a solution, a method, or even an idea, and I have encountered the same problem in many forums with no answer. I am counting on your help. I cannot apply any image processing to work around this problem.
Please help! Thank you in advance.
Ideally you should re-orient your camera in VTK so that it is suited for medical image visualization. (The default camera in VTK uses the computer graphics conventions).
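For example, here is a rough sketch of that camera adjustment (shown with VTK's Python bindings; the C++ calls are the same). It assumes the slice lies in the z = 0 plane and is displayed with a vtkImageActor named imageActor: placing the camera on the -z side with an inverted view-up flips the display vertically without mirroring it left/right.
```
import vtk

# reader / imageActor pipeline assumed to be set up already (assumption)
renderer = vtk.vtkRenderer()
renderer.AddActor(imageActor)

xmin, xmax, ymin, ymax, zmin, zmax = imageActor.GetBounds()
cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

camera = renderer.GetActiveCamera()
camera.SetFocalPoint(cx, cy, 0.0)
camera.SetPosition(cx, cy, -(xmax - xmin))  # look at the image from the -z side
camera.SetViewUp(0.0, -1.0, 0.0)            # screen "up" is now the -y direction
renderer.ResetCameraClippingRange()
```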
If you want a quick hack, you can copy-paste the following code in ITK:
// FlipFilterType is assumed to be a typedef such as
// typedef itk::FlipImageFilter< ImageType > FlipFilterType;
FlipFilterType::Pointer flipperImage = FlipFilterType::New();
bool flipAxes[3] = { false, true, false }; // flip the y axis only
flipperImage->SetFlipAxes( flipAxes );
flipperImage->SetInput( image );
flipperImage->Update();
I use a quicker way to set the orientation:
imageActor->SetOrientation(180, 0, 0);
No need to add a filter.
Here's an example of how I would do it. I'm not sure what classes you are using, so I cannot be specific.
vtkSmartPointer<vtkImageData> result = vtkSmartPointer<vtkImageData>::New();
result->DeepCopy(YourImage); // deep-copy your image into result
double val;
int i = 0;
// copy the scalars back in reverse point order
for (vtkIdType f = result->GetNumberOfPoints() - 1; f > -1; f--)
{
    val = YourImage->GetPointData()->GetScalars()->GetTuple1(f);
    result->GetPointData()->GetScalars()->SetTuple1(i, val);
    i++;
}
result->Update();
// Now visualize your image