Using Depth API with Sceneform - android-studio

Can the Depth API be used with Sceneform? Using modelRenderable(), the full 3D model is rendered; how can partial occlusion of the 3D model be achieved with the Depth API?
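For reference, a minimal sketch of turning on the Depth API on the ARCore session behind the Sceneform view. These are standard ARCore calls; whether the rendered model is actually occluded depends on the Sceneform version (the community-maintained fork exposes a depth-occlusion toggle), so the occlusion step itself is only outlined in comments and is an assumption here.

```java
import android.media.Image;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;

class DepthSetup {
    // Enable the ARCore Depth API on the session backing the Sceneform view.
    static void enableDepth(Session session) {
        Config config = session.getConfig();
        if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
            config.setDepthMode(Config.DepthMode.AUTOMATIC);
        }
        session.configure(config);
    }

    // Per frame, the depth image can be read back; using it to hide the occluded
    // parts of the renderable needs renderer support (fork setting or a custom shader).
    static void readDepth(Frame frame) {
        try (Image depthImage = frame.acquireDepthImage()) {
            // 16-bit depth values in millimetres, one per pixel.
        } catch (Exception e) {
            // NotYetAvailableException until ARCore has produced its first depth frame.
        }
    }
}
```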

Related

Automatically Mapping Image Texture to UV Maps For Custom Mesh?

I am trying to apply a texture to a 3D clothing model using OpenCV. I have the UV maps of the clothing model and an image of the texture I want to apply, which I prepared using grabcut(). How do I deform the texture image to match my UV mesh using OpenCV or any other automated method? I am providing the input texture image and also the UV mesh of the clothing. I am new to 3D modeling and OpenCV and am facing issues with this; please help me out.
Input Texture Image
UV mapped mesh of 3D model
Similar query at this link here

Object recognition in 3D images

I have a 3D video that I have broken down into single images in 7 different planes. I am wondering what tools I can use for object detection. I read that OpenCV might not be the right tool for that; what could I use instead?
Regards
Aleksej
OpenCV can be used for segmentation on 3D data as long as it can be represented as a depth map (normally the Z-axis information in camera coordinates).
If you have depth data as a cv::Mat, you can run segmentation (region-growing, watershed, etc.) on the depth data to get segmented objects.
This assumes, of course, that the edges between objects are distinguishable and unique.
As a pre-processing step, you can also smooth the edges with some morphological operations to improve the segmentation.
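To make that recipe concrete, here is a rough sketch using the OpenCV Java bindings (standard OpenCV calls; the kernel size, iteration counts and Otsu thresholding are placeholder choices you would tune for your own depth data):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class DepthSegmentation {
    /** Segments objects in a depth map and returns a label image (boundaries are -1). */
    public static Mat segment(Mat depth) {
        // Normalise the depth values to 8 bit so the standard tools apply
        Mat depth8 = new Mat();
        Core.normalize(depth, depth8, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);

        // Morphological opening to smooth the edges before segmentation
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
        Imgproc.morphologyEx(depth8, depth8, Imgproc.MORPH_OPEN, kernel);

        // Rough foreground/background split on depth (Otsu picks the threshold)
        Mat fg = new Mat();
        Imgproc.threshold(depth8, fg, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);

        // Sure-foreground and sure-background regions; the band in between is "unknown"
        Mat sureBg = new Mat(), sureFg = new Mat(), unknown = new Mat();
        Imgproc.dilate(fg, sureBg, kernel, new Point(-1, -1), 3);
        Imgproc.erode(fg, sureFg, kernel, new Point(-1, -1), 3);
        Core.subtract(sureBg, sureFg, unknown);

        // Seed markers: each connected sure-foreground blob gets its own label
        Mat markers = new Mat();
        Imgproc.connectedComponents(sureFg, markers);
        Core.add(markers, new Scalar(1), markers);   // keep 0 free for the unknown band
        markers.setTo(new Scalar(0), unknown);

        // Watershed floods the unknown band using the depth map as the "image"
        Mat depthBgr = new Mat();
        Imgproc.cvtColor(depth8, depthBgr, Imgproc.COLOR_GRAY2BGR);
        Imgproc.watershed(depthBgr, markers);
        return markers;
    }
}
```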

3D graphics from scratch

What is the minimum I need to build 3D graphics from scratch? For example, I only have SFML for working with 2D graphics, and I need to implement a Camera object that can move and rotate in space.
Where do I start, and how do I implement vector3d -> vector2d conversion functions and the other necessary things?
All I have for now is:
angles Phi, Xi, epsilon 1-3, and some object that I can draw on the screen with the following formula:
x = center.x + scale.x * dot(point[i], epsilon1)
y = center.y + scale.y * dot(point[i], epsilon2)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement transform matrices and vector math:
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
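To make the vector3d -> vector2d step concrete before diving into full matrices, here is a minimal projection sketch with an explicit camera basis; the class and field names are made up for illustration, and a real pipeline would use the 4x4 homogeneous transforms from the links above:

```java
// Minimal 3D -> 2D perspective projection: world point -> camera space -> perspective divide -> screen.
final class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

final class Camera {
    Vec3 position = new Vec3(0, 0, 0);
    // Camera basis vectors (right, up, forward) - rotate these to turn the camera
    Vec3 right = new Vec3(1, 0, 0), up = new Vec3(0, 1, 0), forward = new Vec3(0, 0, 1);
    double focal = 500;          // scale factor, roughly the focal length in pixels
    double cx = 400, cy = 300;   // screen centre

    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    /** Projects a world-space point to screen coordinates; returns null if behind the camera. */
    double[] project(Vec3 p) {
        // Transform into camera space: subtract the camera position, project onto the camera axes
        Vec3 d = new Vec3(p.x - position.x, p.y - position.y, p.z - position.z);
        double xc = dot(d, right), yc = dot(d, up), zc = dot(d, forward);
        if (zc <= 1e-6) return null;           // point is behind (or on) the camera plane
        // Perspective divide and shift to the screen centre (screen y grows downwards)
        return new double[] { cx + focal * xc / zc, cy - focal * yc / zc };
    }
}
```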
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is native to today's gfx cards. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering; it also serves as the canvas)
shadow, stencil, aux (for advanced rendering techniques)
They usually have the same resolution as the target rendering area. On top of this you need to implement rendering of the supported primitives, at least point, line and triangle. See:
Algorithm to fill triangle
On top of all this you can add textures, shaders and whatever else you want ... (a minimal sketch of a depth-tested triangle fill follows below).
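Here is the promised sketch of a depth-tested triangle fill (bounding-box/barycentric style rather than scanline; the colour and depth buffers are plain arrays and the vertices are already in screen space):

```java
final class Raster {
    /** Fills a triangle into colour/depth buffers; a, b, c are {x, y, z} in screen space. */
    static void fillTriangle(int[] colour, double[] depth, int w, int h,
                             double[] a, double[] b, double[] c, int rgb) {
        // depth[] must be initialised to Double.POSITIVE_INFINITY before the first triangle
        int minX = (int) Math.max(0, Math.floor(Math.min(a[0], Math.min(b[0], c[0]))));
        int maxX = (int) Math.min(w - 1, Math.ceil(Math.max(a[0], Math.max(b[0], c[0]))));
        int minY = (int) Math.max(0, Math.floor(Math.min(a[1], Math.min(b[1], c[1]))));
        int maxY = (int) Math.min(h - 1, Math.ceil(Math.max(a[1], Math.max(b[1], c[1]))));

        double area = edge(a, b, c);
        if (area == 0) return;                              // degenerate triangle

        for (int y = minY; y <= maxY; y++) {
            for (int x = minX; x <= maxX; x++) {
                double[] p = { x + 0.5, y + 0.5 };
                // Barycentric weights; dividing by the signed area makes the test winding-independent
                double w0 = edge(b, c, p) / area;
                double w1 = edge(c, a, p) / area;
                double w2 = edge(a, b, p) / area;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;   // pixel outside the triangle
                double z = w0 * a[2] + w1 * b[2] + w2 * c[2]; // interpolated depth
                int i = y * w + x;
                if (z < depth[i]) {                         // depth test: keep the nearest fragment
                    depth[i] = z;
                    colour[i] = rgb;
                }
            }
        }
    }

    private static double edge(double[] a, double[] b, double[] p) {
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]);
    }
}
```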
(back)ray tracing
This kind of rendering is very different, and current gfx HW is not built for it. It involves implementing ray/primitive intersection computations, Snell's law and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes; a small intersection sketch follows after this section. See:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
hybrid approach #1+#2
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
The difference between a 2D and a 3D ray tracer is almost none; the only difference is how to compute the perpendicular vector ...
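To give a feel for the ray/primitive intersection work this implies, a minimal ray-sphere intersection sketch (analytic solution of the quadratic; a mesh ray tracer would use ray-triangle tests instead):

```java
final class RayMath {
    /** Distance t along the ray origin + t*dir to the nearest sphere hit, or -1 for a miss. */
    static double raySphere(double[] origin, double[] dir, double[] centre, double radius) {
        // Solve |origin + t*dir - centre|^2 = radius^2, a quadratic in t (dir assumed normalised)
        double ox = origin[0] - centre[0], oy = origin[1] - centre[1], oz = origin[2] - centre[2];
        double b = 2.0 * (dir[0] * ox + dir[1] * oy + dir[2] * oz);
        double c = ox * ox + oy * oy + oz * oz - radius * radius;
        double disc = b * b - 4.0 * c;          // the 'a' coefficient is 1 for a unit direction
        if (disc < 0) return -1;                // the ray misses the sphere
        double s = Math.sqrt(disc);
        double t0 = (-b - s) / 2.0, t1 = (-b + s) / 2.0;
        if (t0 > 1e-9) return t0;               // nearest hit in front of the origin
        if (t1 > 1e-9) return t1;               // origin is inside the sphere
        return -1;                              // sphere is entirely behind the ray
    }
}
```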
There are also different rendering methods like volume rendering, hybrid methods and others, but their implementation is usually task-oriented and a generic description would most likely just mislead ... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume

Rajawali and Vuforia 3D model positioning

I have followed the RajawaliVuforia tutorial and integrated Rajawali with Vuforia CloudReco. I am able to get the 3D model, but the model is not positioned properly at the center of the target image, and if I move the camera closer or up, the model drifts out of the target image. Can someone let me know what could be the issue?
Vuforia passes the position (Vector3) and orientation (Quaternion) to Rajawali, which then uses them to position and rotate the model. This can interfere with animations applied to the model: if you're using animations or setting the position manually, you'll get unpredictable results, because the position is set twice on each frame.
The way to fix this is to put your 3D model in a container (an empty BaseObject3D). Vuforia's position and orientation will be applied to this container and not your 3D model. This way you can animate the model without getting unpredictable results.
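A hedged sketch of that container approach; the class and method names follow the older Rajawali API the answer refers to (BaseObject3D, Number3D), newer releases rename these to Object3D and Vector3, and myModel is a hypothetical stand-in for the model you already load:

```java
import rajawali.BaseObject3D;      // org.rajawali3d.Object3D in newer Rajawali releases
import rajawali.math.Number3D;     // Vector3 in newer releases
import rajawali.math.Quaternion;

// Sketch of the container approach; exact setter names may differ slightly by Rajawali version.
class ContainerSketch {
    BaseObject3D container = new BaseObject3D();  // empty container that Vuforia will drive
    BaseObject3D myModel;                          // your actual 3D model (loaded elsewhere)

    void setUpScene() {
        container.addChild(myModel);  // parent the model to the container
        // in the renderer: add the container to the scene, not the model itself
    }

    // Called with the pose Vuforia passes in; only the container is touched here.
    void onPoseUpdate(Number3D position, Quaternion orientation) {
        container.setPosition(position);
        container.setOrientation(orientation);
    }

    // Animations on the model itself no longer fight the tracker each frame.
    void animate() {
        myModel.setRotY(myModel.getRotY() + 1);
    }
}
```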

Easiest way to create and render 3D model by rotating a 2D silhouette

I have a black and white 2D drawing of a silhouette (say, a chess piece) that I would like to rotate around an axis to create a 3D object.
Then I want to render that 3D object from multiple angles using some sort of raytracing software, saving each angle into a separate file.
What would be the easiest way to automatically (and repeatably) 1. get a vector path from the 2D drawing, 2. create the 3D model by rotating it, and 3. import it into the raytracer?
I haven't chosen a specific raytracer yet, but Sunflow has caught my eye.
Texturing/bump mapping would be nice but non-essential.
The modeling feature you're looking for is a Lathe.
Sunflow can import 3ds files and Blender files.
I've never used Blender, but here's a tutorial for using the lathe to make a wine glass. You'd replace the silhouette of the wine glass with your shape:
http://www.blendermagz.com/2009/04/14/blender-3d-lathe-modeling-wine-glass/
Blender is FOSS; you can download it here:
www.blender.org/download/get-blender/ (can't post more than one link, so you'll have to type this one in yourself :-)
I found a pretty cool site where you can do this online, interactively:
http://www.fi.uu.nl/toepassingen/00182/toepassing_wisweb.en.html
It doesn't do very detailed solids of revolution, but maybe you can find the code and extend it to your needs.
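If you'd rather generate the solid of revolution yourself instead of going through Blender or the site above, the lathe itself is only a few lines; a rough sketch where the profile is a list of (radius, height) pairs sampled from the silhouette outline (face generation and export are left as a comment):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal lathe (surface of revolution): rotate a 2D profile around the Y axis.
final class Lathe {
    /** Returns vertices as {x, y, z} triples, `segments` copies of the profile around Y. */
    static List<double[]> revolve(double[][] profile, int segments) {
        List<double[]> vertices = new ArrayList<>();
        for (int s = 0; s < segments; s++) {
            double angle = 2.0 * Math.PI * s / segments;
            double cos = Math.cos(angle), sin = Math.sin(angle);
            for (double[] p : profile) {
                double r = p[0], y = p[1];         // radius from the axis, height along it
                vertices.add(new double[] { r * cos, y, r * sin });
            }
        }
        // Faces: profile point i in slice s connects to (s, i+1), (s+1, i+1), (s+1, i),
        // with s wrapping modulo `segments`; write these out as quads or triangle pairs.
        return vertices;
    }
}
```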
