Is it possible to visualize a point cloud using PyQt?
I have been using the open3d library as my point cloud visualization tool.
It (1) generates the window GUI, (2) handles keyboard/mouse actions, and (3) visualizes the 3D model.
But I cannot add new keyboard/mouse actions to the open3d window.
So I want to use PyQt (1) to generate the window GUI and (2) to handle keyboard/mouse actions.
I am sure I can add new keyboard actions using PyQt, but I am not sure about visualizing 3D models in PyQt.
Is it possible to (3) visualize a 3D model using PyQt? (Yes/No)
I tried searching on my own, but had no luck. All I want to know is whether it is possible. I'll do the rest.
Thanks.
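For what it's worth, here is a minimal sketch suggesting that the answer is yes, using pyqtgraph's OpenGL view widget inside a PyQt window. pyqtgraph, the class names, and the R-key binding are assumptions of this sketch, not anything provided by open3d:

    # Point cloud in a PyQt window with a custom keyboard action.
    # Assumes PyQt5 and pyqtgraph are installed (pip install PyQt5 pyqtgraph).
    import numpy as np
    from PyQt5 import QtWidgets, QtCore
    import pyqtgraph.opengl as gl

    class CloudView(gl.GLViewWidget):
        def keyPressEvent(self, event):
            # custom keyboard action: press R to recolor the points
            if event.key() == QtCore.Qt.Key_R:
                self.scatter.setData(color=np.random.rand(len(self.pts), 4))
            else:
                super().keyPressEvent(event)

    app = QtWidgets.QApplication([])
    view = CloudView()
    view.pts = np.random.rand(10000, 3) * 10      # placeholder point cloud
    view.scatter = gl.GLScatterPlotItem(pos=view.pts, size=2.0)
    view.addItem(view.scatter)
    view.show()
    app.exec_()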
GOAL: I have to create a 3D model of a machine part. I have about 25 images of the same part taken from different angles.
Progress: I am able to extract the coordinates of a label that is on the machine in most of the images.
Problem: I have no idea how to proceed. I have read a bit about aero-triangulation, but I couldn't figure out how to implement it. I would really appreciate it if you could point me in the right direction.
It would be really helpful if you could provide your solutions using Python and OpenCV.
Edit: Sorry, but I cannot upload the code for this one as it is confidential. Please don't blame me; I am just an intern. I can tell you that I cropped a template of the label from one image and then used SIFT to match that template across all the images to get the coordinates of the label.
If you want to implement things yourself with OpenCV, I would recommend looking at SIFT (or SURF) features, RANSAC, and the epipolar constraint. I believe the OpenCV cookbook describes those. Warning: math involved. And I don't know how to do dense mapping in OpenCV.
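To make that concrete, here is a rough sketch of the feature-matching and epipolar-geometry part in OpenCV. The file names are placeholders, and this only estimates the two-view geometry with RANSAC; it is not a full reconstruction:

    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder names
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # SIFT keypoints and descriptors in both views
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # match descriptors, keeping the unambiguous matches (Lowe's ratio test)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC enforces the epipolar constraint while estimating the
    # fundamental matrix; the mask flags the inlier matches
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    print("inliers:", int(mask.sum()), "of", len(good))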
I know the GUI program "VisualSFM", which can automatically reconstruct a 3D model from images. It uses SfM and other command line utilities behind the scenes. Since everything is open source, you could create a Python wrapper around the actual libraries (I found https://github.com/mapillary/OpenSfM by asking Google). VisualSFM prints the commands it calls, so a hacky way could be to call the same commands from Python.
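That hacky route could look roughly like this; the command and its arguments below are placeholders, so copy the real ones from what VisualSFM prints on your machine:

    import subprocess

    # Placeholder command line; substitute what VisualSFM actually prints.
    cmd = ["VisualSFM", "sfm+pmvs", "images/", "output.nvm"]

    # run it and fail loudly if the tool returns a non-zero exit code
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)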
If it is a simple shape and you don't want to automate the process, it could be faster to model it yourself (and the result could look better). In a week and a half I managed to learn the basics of Blender and to model a guitar necklace: https://youtu.be/BCGKsh51TNA . I would now be able to do it in less than an hour. How long are you prepared to invest in finding a solution with OpenCV?
I am doing some studies on eye vascularization. My project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for any kind of input; I am pretty new to the programming world and I actually just have a bare concept of how it should work.
Thanks and Cheerio
Sam
[Concept drawing: OCTA picture]
You could probably accomplish this by using numpy to load the image and split it into sections. You could then analyze the sections using scikit-image or OpenCV (though this could be difficult to get working). To view the image, you can either save it to a file using numpy, or use matplotlib to open it in a new window.
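A minimal sketch of that idea, assuming a grayscale image saved as "retina.png" and a 2x3 grid (the file name and grid layout are assumptions; adjust them to your data):

    import numpy as np
    import matplotlib.pyplot as plt

    img = plt.imread("retina.png")        # float array with values in [0, 1]
    if img.ndim == 3:
        img = img[..., :3].mean(axis=2)   # collapse RGB to grayscale

    rows, cols = 2, 3                     # six squares: 2 rows x 3 columns
    h, w = img.shape[0] // rows, img.shape[1] // cols

    for r in range(rows):
        for c in range(cols):
            tile = img[r * h:(r + 1) * h, c * w:(c + 1) * w]
            density = (tile > 0.5).mean() # fraction of "white" pixels
            print(f"square ({r}, {c}): white-pixel density = {density:.3f}")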
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (regions/areas of interest). That's basically some geometric shape, like a rectangle, circle, polygon or similar, defined in image coordinates.
The image processing is then restricted to the pixels within that region. So you don't slice your image into pieces; you restrict your evaluation to specific areas.
Another way, like you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option, which is rather limited but sufficient for simple tasks like yours, is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library", "roi", "cropping", "sliding window", "subimage", "tiles" or "slicing" and you'll get tons of information.
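In Python an ROI is essentially just an array slice, so restricting the evaluation to a region is a one-liner. A small sketch with OpenCV (the file name and coordinates are made up for illustration):

    import cv2

    img = cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE)

    # ROI defined in image coordinates: top-left corner plus width/height
    x, y, w, h = 40, 60, 128, 128        # made-up values for illustration
    roi = img[y:y + h, x:x + w]          # a NumPy view, not a copy

    # further processing is restricted to the ROI,
    # e.g. the density of white pixels inside it
    print("white-pixel density in ROI:", (roi > 127).mean())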
I am trying to render these models in MATLAB so that I can sample them and then use their shape in my research work.
My question is: is there a way to render 3D models, as used in different 3D graphics engines, in MATLAB?
I have found something similar, but it only loads an OBJ mesh.
Update:
I have also found an answer which uses OpenGL functions in a MEX file to access the graphics of a figure and get the depth buffer. However, what I am trying to do is render a model within a MATLAB figure.
Take a look at:
http://www.openu.ac.il/home/hassner/projects/poses/
I understand that it's still not completely stable, but they're working on it.
Anyway, this lets you render to a matrix and then do whatever you like with it (imshow it, imwrite it, etc.)
I have many 3D vectors. I want to plot them in a cube so that each dimension is mapped to one side of the cube.
Now I am looking for a visualization tool or library that lets me rotate this cube in 3D and see the vectors from various angles.
Thanks
Abhishek S
Try Processing; it is partly intended for data visualization, and in addition to simple control over 3D drawing it also has the full power of the Java programming language. You can see numerous works done by other people on OpenProcessing.
However, if you are into anything serious, I would suggest using some other IDE than the default one. I use Eclipse, importing Processing as a library into my project. It requires a tiny bit of boilerplate to work, but then you're happy!
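If you would rather stay in Python, matplotlib's 3D axes also give you mouse-driven rotation out of the box. A small sketch (matplotlib is my suggestion here, not something the answer above uses):

    import numpy as np
    import matplotlib.pyplot as plt

    vecs = np.random.randn(50, 3)          # placeholder for your 3D vectors

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")  # drag with the mouse to rotate

    # draw each vector as an arrow from the origin
    ax.quiver(0, 0, 0, vecs[:, 0], vecs[:, 1], vecs[:, 2], length=1.0)

    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()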
I have a black and white 2D drawing of a silhouette (say, a chess piece) that I would like to rotate around an axis to create a 3D object.
Then I want to render that 3D object from multiple angles using some sort of raytracing software, saving each angle into a separate file.
What would be the easiest way to automatically (repeatedly) 1. get a vector path from the 2D drawing, 2. create the 3D model by rotating it, and 3. import it into the raytracer?
I haven't chosen a specific raytracer yet, but Sunflow has caught my eye.
Texturing/bump mapping would be nice but is non-essential.
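For step 1, one possibility is extracting the silhouette outline with OpenCV; a sketch under that assumption (the file name and threshold are made up, and the answers below cover steps 2 and 3):

    import cv2

    img = cv2.imread("silhouette.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # outer contours of the black-and-white shape
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)  # largest one = the piece

    # simplify the outline into a sparser vector path
    path = cv2.approxPolyDP(outline, 2.0, True)
    print("vector path with", len(path), "vertices")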
The modeling feature you're looking for is a Lathe.
Sunflow can import 3ds files and blender files.
I've never used Blender, but here's a tutorial for using the lathe to make a wine glass. You'd replace the silhouette of the wine glass with your shape:
http://www.blendermagz.com/2009/04/14/blender-3d-lathe-modeling-wine-glass/
Blender is FOSS; you can download it here:
www.blender.org/download/get-blender/ (can't post more than one link, so you'll have to type this one in yourself :-)
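Since Blender is scriptable in Python, the lathe step could also be automated. A rough sketch using the Screw modifier (Blender's lathe), assuming the silhouette has already been imported as an object named "Profile"; the object name and settings are assumptions:

    # Run with: blender --background --python lathe.py
    import math
    import bpy

    profile = bpy.data.objects["Profile"]          # assumed object name
    bpy.context.view_layer.objects.active = profile

    # the Screw modifier revolves the profile around an axis (a lathe)
    mod = profile.modifiers.new(name="Lathe", type='SCREW')
    mod.axis = 'Z'          # revolve around the vertical axis
    mod.angle = math.tau    # full 360-degree revolution (in radians)
    mod.steps = 64          # number of segments; higher means smoother

    bpy.ops.object.modifier_apply(modifier="Lathe")
    bpy.ops.wm.save_as_mainfile(filepath="lathed.blend")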
I found a pretty cool site where you can do this online, interactively:
http://www.fi.uu.nl/toepassingen/00182/toepassing_wisweb.en.html
The revolution isn't hugely detailed, but maybe you can find the code and extend it to your needs.