Is there a way to define the camera parameters myself?

I am working with the ShapeNet dataset, which contains 3D models. I want to render images from that dataset by defining the camera intrinsics and extrinsics myself (so it's as if I decide where my camera sits with respect to the object and what the focal length and optical center of the camera are). Is there a concrete way to pick sensible values for these?
PS: I can load the ShapeNet models in some 3D viewing software, so it would also help if I could extract the camera parameters the viewer is using (since at any particular moment it is showing me an image).
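In case it helps, here is a minimal sketch of the idea, assuming a plain pinhole camera model; all the numeric values (image size, focal length, camera placement) are arbitrary choices of mine, not ShapeNet conventions:

    import numpy as np

    # Intrinsics for an assumed 640x480 image: focal length in pixels
    # and the optical center placed at the image center.
    fx = fy = 500.0
    cx, cy = 320.0, 240.0
    K = np.array([[fx, 0., cx],
                  [0., fy, cy],
                  [0., 0., 1.]])

    def look_at(eye, target, up=np.array([0., 1., 0.])):
        """Extrinsics (R, t) for a camera at `eye` looking at `target`."""
        z = target - eye
        z = z / np.linalg.norm(z)            # camera forward axis
        x = np.cross(z, up)
        x = x / np.linalg.norm(x)
        y = np.cross(z, x)                   # points down, matching image coords
        R = np.stack([x, y, z])              # world-to-camera rotation
        t = -R @ eye                         # world-to-camera translation
        return R, t

    # Place the camera 2 units from the object, looking at its origin.
    R, t = look_at(eye=np.array([0., 0., -2.]), target=np.zeros(3))

    def project(points_world):
        """Project Nx3 world points to Nx2 pixel coordinates."""
        p_cam = points_world @ R.T + t       # world -> camera frame
        p_img = p_cam @ K.T                  # apply intrinsics
        return p_img[:, :2] / p_img[:, 2:3]  # perspective divide

    print(project(np.array([[0.1, 0.2, 0.0]])))  # -> [[295. 190.]]

Whatever renderer you use, picking the parameters yourself comes down to exactly this: choose an image size, a focal length, and a camera pose around the object, then pass K and (R, t) in the format the renderer expects.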

Related

Object recognition in 3D images

I have a 3D video that I have broken down into single images in 7 different planes. I am wondering what tools I can use for object detection. I read that OpenCV might not be the right tool for that; what could I use instead?
OpenCV can be used for segmentation of 3D data as long as it can be represented as a depth map (typically the Z-axis values in camera coordinates).
If you have the depth data as a cv::Mat, you can run segmentation (region growing, watershed, etc.) on it to get segmented objects.
This assumes, of course, that the edges between objects are distinguishable.
As a pre-processing step, you can also smooth the edges with some morphological operations to improve the segmentation.
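As a rough sketch of that pipeline in Python (the file name depth.png is a placeholder, and Otsu thresholding is just one simple way to split near objects from the background):

    import cv2
    import numpy as np

    # Load a depth map (any single-channel depth image will do).
    depth = cv2.imread("depth.png", cv2.IMREAD_ANYDEPTH)
    depth8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Pre-processing: morphological closing smooths ragged object edges.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    smoothed = cv2.morphologyEx(depth8, cv2.MORPH_CLOSE, kernel)

    # Rough foreground/background split on depth, then label the regions.
    _, fg = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, labels = cv2.connectedComponents(fg)
    print(f"found {n_labels - 1} candidate objects")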

VTK - create 3D model

I'm trying to create a 3D mask model from 3D coordinate points stored in a txt file, using the Marching Cubes algorithm. It looks like it's not able to connect the individual points, so holes appear in the model.
Steps (following https://lorensen.github.io/VTKExamples/site/Cxx/Modelling/MarchingCubes/):
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Put the voxelModeller output into the Marching Cubes algorithm, and finally visualize it.
Any ideas?
Thanks
The example takes a spherical mesh (i.e. a set of triangles forming a sealed 3D shape), converts it to a voxel representation (a 3D image where the voxels outside the mesh are black and those inside are not), then converts it back to a mesh using the Marching Cubes algorithm. In practice the input and output of the example are very similar meshes.
In your case, you load the points and try to create a voxel representation of them. The problem is that your set of points is not sufficient to define a volume: they are not a sealed mesh, just a list of points.
In order to replicate the example you should do the following:
1) Build a 3D mesh from your points (you gave no information about what the points represent, so I can't help much with this step). In other words, you need to specify how the points are connected to each other to form a 3D shape (vtkPolyData). VTK can't guess how your points are connected; you have to tell it.
2) Once you have a mesh, if you need a voxel representation (vtkImageData) of it, you can use vtkVoxelModeller or vtkImplicitModeller. At that point you can use VTK filters that take vtkImageData as input.
3) Finally, to convert the voxels back to a mesh (vtkPolyData), use vtkMarchingCubes (or better, vtkFlyingEdges3D, a very similar but much faster algorithm), as in the sketch below.
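A rough sketch of steps 2) and 3) using VTK's Python bindings; the sphere source stands in for step 1) and should be replaced with a sealed vtkPolyData built from your own points:

    import vtk

    # Stand-in for step 1): a sealed mesh (replace with your own vtkPolyData).
    sphere = vtk.vtkSphereSource()
    sphere.Update()
    bounds = sphere.GetOutput().GetBounds()

    # Step 2): mesh -> voxel representation (vtkImageData).
    voxel_modeller = vtk.vtkVoxelModeller()
    voxel_modeller.SetSampleDimensions(50, 50, 50)   # voxel grid resolution
    voxel_modeller.SetModelBounds(bounds)
    voxel_modeller.SetScalarTypeToFloat()
    voxel_modeller.SetMaximumDistance(0.1)
    voxel_modeller.SetInputConnection(sphere.GetOutputPort())

    # Step 3): voxels -> mesh again, via Flying Edges (faster Marching Cubes).
    surface = vtk.vtkFlyingEdges3D()
    surface.SetInputConnection(voxel_modeller.GetOutputPort())
    surface.SetValue(0, 0.5)                         # isosurface threshold
    surface.Update()                                 # output is vtkPolyData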
Edit:
It is not clear what shape you want, but you can try vtkImageOpenClose3D, so the steps become:
First, load 3D points from file as vtkPolyData.
Then, use vtkVoxelModeller
Put the voxelModeller output into vtkImageOpenClose3D, feed its output into the Marching Cubes algorithm (or rather vtkFlyingEdges3D), and finally visualize (see the sketch below).
Example for vtkImageOpenClose3D:
https://www.vtk.org/Wiki/VTK/Examples/Cxx/Images/ImageOpenClose3D
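Continuing the sketch above, the open/close pass would be slotted between the voxel modeller and the surface extraction; the kernel size and the open/close values are assumptions about your volume:

    # Morphological open/close on the voxel image to plug small holes.
    open_close = vtk.vtkImageOpenClose3D()
    open_close.SetInputConnection(voxel_modeller.GetOutputPort())
    open_close.SetOpenValue(0.0)        # value treated as background
    open_close.SetCloseValue(1.0)       # value treated as foreground
    open_close.SetKernelSize(5, 5, 5)   # larger kernel closes bigger holes

    surface.SetInputConnection(open_close.GetOutputPort())
    surface.Update()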

How to compute a 3d miniature model from a large set of 3d geometric models

I want to import a set of 3D geometries into the current scene. The imported geometry contains tons of basic components which together may represent an entire building. The product manager wants the entire building displayed as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there an algorithm that can handle this amount of data at a reasonable cost in time and memory?
// Worst case: there may be a billion triangles in the imported data.
By the way, I am considering another solution, using a kind of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to get effects like bump mapping, so that when the view position changes the texture changes too and the viewer feels as if they were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS. Any ideas?
One option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms); you can do that by repeatedly cutting the model in two with a plane. Then, in every prism, find the vertex closest to the observer. This gives you a 2D elevation map at the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
It may also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
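As a small sketch of the elevation-map idea (Python with numpy; `verts` is a placeholder for vertices exported from your modeller, with the observer looking down the -Z axis):

    import numpy as np

    def elevation_map(verts, n=128):
        """Bin vertices into an n x n grid and keep the highest Z per cell."""
        (xmin, ymin) = verts[:, :2].min(axis=0)
        (xmax, ymax) = verts[:, :2].max(axis=0)
        ix = np.clip(((verts[:, 0] - xmin) / (xmax - xmin) * n).astype(int), 0, n - 1)
        iy = np.clip(((verts[:, 1] - ymin) / (ymax - ymin) * n).astype(int), 0, n - 1)
        height = np.full((n, n), -np.inf)
        # Highest Z per cell = the vertex nearest an observer above the grid.
        np.maximum.at(height, (iy, ix), verts[:, 2])
        return height

    verts = np.random.rand(10000, 3)    # dummy point cloud for the demo
    print(elevation_map(verts).shape)   # (128, 128) bump/elevation map

Whichever route you take, the result is the same kind of data: a 2D grid of heights you can use as a bump or displacement texture on the miniature.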

Rajawali and Vuforia 3D model positioning

I have followed the RajawaliVuforia tutorial and integrated Rajawali with Vuforia CloudReco. I can get the 3D model, but it is not positioned properly at the center of the target image, and if I move the camera closer or further away, the model drifts out of the target image. Can someone let me know what the issue could be?
Vuforia passes the position (Vector3) and orientation (Quaternion) to Rajawali, which uses them to position and rotate the model. This can interfere with animations applied to the model: if you're using animations or setting the position manually, you'll get unpredictable results, because the position is set twice on each frame.
The way to fix this is to put your 3D model in a container (an empty BaseObject3D). Vuforia's position and orientation will be applied to this container and not your 3D model. This way you can animate the model without getting unpredictable results.
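To see why the container decouples the two writers, here is a language-agnostic sketch (Python with made-up names; the real class is Rajawali's BaseObject3D in Java):

    class Node:
        """A minimal scene-graph node: a position plus children."""
        def __init__(self):
            self.position = (0.0, 0.0, 0.0)
            self.children = []

        def world_position(self, parent_pos=(0.0, 0.0, 0.0)):
            # Child transforms compose with the parent's.
            return tuple(p + q for p, q in zip(parent_pos, self.position))

    container = Node()                    # Vuforia writes the tracked pose here
    model = Node()                        # animations move this node freely
    container.children.append(model)

    container.position = (1.0, 0.0, 2.0)  # per-frame pose from the tracker
    model.position = (0.0, 0.5, 0.0)      # animation offset, e.g. a bounce
    print(model.world_position(container.position))  # (1.0, 0.5, 2.0)

The tracker and the animation each own a different node, so neither overwrites the other's values.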

Easiest way to create and render 3D model by rotating a 2D silhouette

I have a black and white 2D drawing of a silhouette (say, a chess piece) that I would like to rotate around an axis to create a 3D object.
Then I want to render that 3D object from multiple angles using some sort of raytracing software, saving each angle into a separate file.
What would be the easiest way to automatically (repeatedly):
1. get a vector path from the 2D drawing,
2. create the 3D model by rotating it, and
3. import it into the raytracer?
I haven't chosen a specific raytracer yet, but Sunflow has caught my eye.
Texturing/bump mapping would be nice but is non-essential.
The modeling feature you're looking for is a Lathe.
Sunflow can import 3ds files and blender files.
I've never used Blender, but here's a tutorial for using the lathe to make a wine glass. You'd replace the silhouette of the wine glass with your shape:
http://www.blendermagz.com/2009/04/14/blender-3d-lathe-modeling-wine-glass/
Blender is FOSS; you can download it here:
www.blender.org/download/get-blender/ (can't post more than one link, so you'll have to type this one in yourself :-)
I found a pretty cool site where you can do this online, interactively:
http://www.fi.uu.nl/toepassingen/00182/toepassing_wisweb.en.html
The revolution it produces isn't very detailed, but maybe you can find the code and extend it to your needs.
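For what it's worth, the core of what a lathe does is easy to sketch (Python with numpy; the profile below is a made-up stand-in for your silhouette's vector path, with x = radius and y = height):

    import numpy as np

    profile = np.array([[0.5, 0.0], [0.4, 0.5], [0.1, 0.6], [0.3, 1.0]])

    def lathe(profile, segments=32):
        """Revolve a 2D profile around the Y axis into 3D vertices."""
        angles = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
        rings = []
        for r, y in profile:
            # One ring of vertices per profile point, swept around Y.
            rings.append(np.stack([r * np.cos(angles),
                                   np.full_like(angles, y),
                                   r * np.sin(angles)], axis=1))
        return np.concatenate(rings)  # join adjacent rings with quads to mesh

    verts = lathe(profile)
    print(verts.shape)                # (len(profile) * segments, 3)

Blender's spin/screw tools do essentially this, plus building the faces between adjacent rings.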
