3D Graphics: software for visualizing 3D vectors?

I'm trying to teach myself about 3D graphics, but I'm having trouble visualizing the 3D vectors involved.
Is there any good software that I can use to visualize 3D vectors?
For example, right now I'm learning about camera transformations, and it would be nice if I could easily plot the right/up/look/eye vectors.
I've tried Grapher.app and gnuplot, but it's very difficult to enter points into Grapher.app and gnuplot doesn't seem to be able to lock the aspect ratio.

Visual Python is a super easy library for 3D visualization.
For example, to show a sphere and arrow:
import time, math, visual

ball = visual.sphere(pos=(0, 2, 0), radius=1, color=visual.color.red)
vect = visual.arrow(pos=(2, 0, 0), axis=(2, 2, -2))
visual.scene.forward = (.1, -.3, -1)  # controls the camera view angle
The resulting window also has all of the normal mouse interactivity, such as zooming and camera (i.e. viewing-angle) rotation.
VPython is also easy to animate. For example, the following will rotate the arrow:
da = 2*math.pi/100
for timestep in range(100):
    angle = timestep*da
    vect.axis = (2 + 2*math.sin(angle), 2*math.cos(angle), -2)
    time.sleep(.1)
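Tying this back to the question, the camera's eye/right/up/look vectors can be drawn the same way (a sketch only; the eye position and target are made up):

eye = visual.vector(4, 3, 5)
look = visual.norm(visual.vector(0, 0, 0) - eye)             # unit vector from eye toward the origin
right = visual.norm(visual.cross(look, visual.vector(0, 1, 0)))
up = visual.cross(right, look)
visual.arrow(pos=eye, axis=look, color=visual.color.blue)    # look vector
visual.arrow(pos=eye, axis=right, color=visual.color.red)    # right vector
visual.arrow(pos=eye, axis=up, color=visual.color.green)     # up vector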

I don't know if this would be easier than Grapher.app or gnuplot, but you could write your own 3D graphics program that just plots the vectors.
Here's an example in OpenGL that draws the X, Y, and Z axis vectors.
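A minimal sketch in the same spirit, here using PyOpenGL + GLUT (my choice of binding, not necessarily what the linked example used):

from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

def draw_axes():
    glBegin(GL_LINES)
    glColor3f(1, 0, 0); glVertex3f(0, 0, 0); glVertex3f(1, 0, 0)  # X axis, red
    glColor3f(0, 1, 0); glVertex3f(0, 0, 0); glVertex3f(0, 1, 0)  # Y axis, green
    glColor3f(0, 0, 1); glVertex3f(0, 0, 0); glVertex3f(0, 0, 1)  # Z axis, blue
    glEnd()

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glLoadIdentity()
    gluLookAt(2, 2, 3, 0, 0, 0, 0, 1, 0)  # eye position, look-at point, up vector
    draw_axes()
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutCreateWindow(b"axes")
glEnable(GL_DEPTH_TEST)
glMatrixMode(GL_PROJECTION)
gluPerspective(45, 1, 0.1, 10)
glMatrixMode(GL_MODELVIEW)
glutDisplayFunc(display)
glutMainLoop()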
Update: Here's a Java applet specifically focused on helping you visualize the vectors in camera transformations. Note the installation instructions: you have to install Java 3D.
Description: The Perspective Camera Parameters applet aims to familiarize students with the various parameters associated with a synthetic, perspective-projection camera. Users can adjust any of the following parameters: field-of-view width, field-of-view height, near clipping plane distance, far clipping plane distance, up vector, and look vector. The viewing frustum is visualized in a window, allowing students to understand how the parameters relate to the shape of the viewing frustum.
The same site has many components, such as axes, that you can use to set up a simple applet showing just the vectors you want.
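As a rough illustration of how those parameters determine the frustum's shape (plain Python; the function and the numbers are mine): the half-sizes of the near and far rectangles follow directly from the field-of-view angles.

import math

def frustum_corners(fov_w, fov_h, near, far):
    """Return the 8 corner points (camera space) of a perspective viewing frustum."""
    corners = []
    for d in (near, far):                # near plane, then far plane
        hw = d * math.tan(fov_w / 2)     # half-width of the rectangle at distance d
        hh = d * math.tan(fov_h / 2)     # half-height at distance d
        corners += [(sx * hw, sy * hh, d) for sx in (-1, 1) for sy in (-1, 1)]
    return corners

for p in frustum_corners(math.radians(60), math.radians(45), 1.0, 10.0):
    print("(%6.2f, %6.2f, %5.2f)" % p)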

Related

3D reconstruction from a rotating camera

I have images from a rotating camera, and I'm trying this example from the MATLAB Computer Vision Toolbox (https://www.mathworks.com/matlabcentral/fileexchange/67383-stereo-triangulation).
I have the calibration and rotation matrix for each image, but I always get 3D points equal to (0,0,0).
Note that the translation is null, which makes the fourth column of each camera matrix null.
You cannot reconstruct a 3D point from a rotating camera.
I suggest you try and draw an example. The idea of triangulation is to compute the intersection of two backprojection rays. These rays pass through the camera center and the point to be reconstructed. In your drawing, you'll find that the intersection becomes more and more accurate the larger the so-called stereo baseline is (that's the translation from one camera center to the other).
Now, for a rotating camera, the camera center remains the same and therefore, the two rays are identical. An intersection is not defined.
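A numpy sketch (hypothetical K and rotations) of why this degenerates: with a zero translation, the camera center (0,0,0,1) lies in the null space of every projection matrix, so the DLT triangulation system loses its unique solution (two near-zero singular values instead of one).

import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0.,   0.,   1.]])

def rot_y(a):  # rotation about the Y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

# two projection matrices differing by rotation only (translation column = 0)
P1 = K @ np.hstack([rot_y(0.0), np.zeros((3, 1))])
P2 = K @ np.hstack([rot_y(0.3), np.zeros((3, 1))])

X = np.array([1., 2., 10., 1.])               # a world point
x1 = P1 @ X; x1 /= x1[2]                      # its pixel projections
x2 = P2 @ X; x2 /= x2[2]

def dlt_rows(P, x):                           # standard DLT triangulation rows
    return np.array([x[0] * P[2] - P[0], x[1] * P[2] - P[1]])

A = np.vstack([dlt_rows(P1, x1), dlt_rows(P2, x2)])
print(np.linalg.svd(A, compute_uv=False))     # two ~0 singular values: no unique point
print(A @ np.array([0., 0., 0., 1.]))         # the camera center also solves the system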

Live camera to shape distance calculation based on computer vision

The goal is to detect the walls live and export the distance to the wall. The setup: a closed room with four walls, and one unique, ideal shape on each wall (triangle, square, ...). A robot with a camera roams inside the walls and has computer vision. The robot should detect the shape and export the distance between the camera and the wall (or that shape).
I have implemented this with OpenCV, using shape detection (cv2.approxPolyDP) and distance calculation (perimeter calculation and edge counting, then conversion of pixel length to real distance).
It works perfectly at a 90-degree angle, but is not effective at other angles.
Is there a better way of doing it?
Thanks
for cnt in contours[1:]:
    # start from contours[1]: in practice the whole frame often shows up as a contour
    area = cv2.contourArea(cnt)  # area of the detected contour
    approx = cv2.approxPolyDP(cnt, 0.02*cv2.arcLength(cnt, True), True)
    # approximates the connected pixel contour with a polygon (the shape)
    x = approx.ravel()[0]
    y = approx.ravel()[1]
    # (x, y): placement for the detected shape's label text
    perimeter = cv2.arcLength(cnt, True)  # perimeter of the contour
At other angles you have a perspective view of the shapes.
You must use geometric transformations to neutralize the perspective effect (using a known-shape object or the angle of the camera).
Also consider that using rectified images is highly recommended; see Camera Calibration.
Edit:
Let's assume you have a square on the wall. When the camera captures an image from a non-90-degree (not straight-on) view, the square is not aligned and looks out of shape, which causes measurement error.
But you can use cv2.getPerspectiveTransform(). The function calculates the 3x3 matrix M of a perspective transform.
After that, use warped = cv2.warpPerspective(img, M, (w, h)) to apply the perspective transformation to the image. Now the square (in the warped image) looks like a 90-degree straight-on view, and your current code works well on the output (warped) image.
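A minimal sketch of that fix (the corner coordinates are made up for illustration): map the four detected corners of the skewed square onto an axis-aligned square, then run the existing measurement code on the warped image.

import numpy as np
import cv2

img = np.zeros((480, 640, 3), np.uint8)       # stand-in for a camera frame
src = np.float32([[112, 80], [310, 95],       # detected corners of the square
                  [295, 278], [98, 260]])     # order: TL, TR, BR, BL
side = 200                                    # edge length after rectification, px
dst = np.float32([[0, 0], [side, 0],
                  [side, side], [0, side]])

M = cv2.getPerspectiveTransform(src, dst)     # 3x3 perspective transform matrix
warped = cv2.warpPerspective(img, M, (side, side))
# perimeter/edge measurements on `warped` now behave like the 90-degree case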
Excuse my poor explanation; maybe these blog posts can help you:
4 Point OpenCV getPerspective Transform Example
Find distance from camera to object/marker using Python and OpenCV

3d graphics from scratch

What is the minimum setup I need to build 3D graphics from scratch? For example, I only have SFML for working with 2D graphics, and I need to implement a Camera object that can move and rotate in space.
Where do I start, and how do I implement vector3d -> vector2d conversion functions and the other necessary things?
All I have for now is:
angles Phi, Xi, epsilon 1-3, and some object that I can draw on the screen with the following formula
x/y = center.x/y + scale.x/y * dot(point[i], epsilon1/epsilon2)
(i.e. x = center.x + scale.x * dot(point[i], epsilon1), and likewise for y)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement transform matrices and vector math:
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
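As a minimal sketch of what those links cover (plain numpy; the function names and numbers are mine, just to show the vector3d -> vector2d path):

import numpy as np

def view_matrix(eye, right, up, look):
    """World -> camera space: the rows of R are the camera's basis vectors."""
    R = np.array([right, up, look], dtype=float)
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = -R @ np.asarray(eye, dtype=float)
    return M

def project(point, view, center=(400, 300), scale=300):
    """vector3d -> vector2d: camera transform, perspective divide, screen map."""
    p = view @ np.append(point, 1.0)       # world -> camera space (homogeneous)
    x, y = p[0] / p[2], p[1] / p[2]        # perspective divide by depth
    return (center[0] + scale * x,         # normalized -> pixel coordinates
            center[1] - scale * y)         # flip Y: screen Y grows downward

view = view_matrix(eye=[0, 0, -5], right=[1, 0, 0], up=[0, 1, 0], look=[0, 0, 1])
print(project([1.0, 1.0, 0.0], view))      # -> (460.0, 240.0)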
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is the native one for today's gfx cards. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering; also serves as the Canvas)
shadow, stencil, aux (for advanced rendering techniques)
They usually have the same resolution as the target rendering area. On top of this you need to implement rendering of the supported primitives, at least point, line, and triangle (a toy sketch follows below). See:
Algorithm to fill triangle
On top of all this you can add textures, shaders, and whatever else you want...
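A toy illustration of the depth-buffered triangle fill described above (numpy; the coordinates are made up, and real renderers use incremental scanline loops instead of testing every pixel in the bounding box):

import numpy as np

W, H = 80, 60
screen = np.zeros((H, W), dtype=np.uint8)   # the screen buffer ("Canvas")
depth = np.full((H, W), np.inf)             # depth buffer: no z-sorting needed

def fill_triangle(v0, v1, v2, color):
    """v* = (x, y, z) in pixel coordinates; the nearest z wins per pixel."""
    xs = (v0[0], v1[0], v2[0]); ys = (v0[1], v1[1], v2[1])
    d = (v1[1]-v2[1])*(v0[0]-v2[0]) + (v2[0]-v1[0])*(v0[1]-v2[1])
    for y in range(max(0, min(ys)), min(H, max(ys) + 1)):
        for x in range(max(0, min(xs)), min(W, max(xs) + 1)):
            # barycentric coordinates of (x, y) with respect to the triangle
            a = ((v1[1]-v2[1])*(x-v2[0]) + (v2[0]-v1[0])*(y-v2[1])) / d
            b = ((v2[1]-v0[1])*(x-v2[0]) + (v0[0]-v2[0])*(y-v2[1])) / d
            c = 1.0 - a - b
            if a < 0 or b < 0 or c < 0:
                continue                     # pixel lies outside the triangle
            z = a*v0[2] + b*v1[2] + c*v2[2]  # interpolate depth across the face
            if z < depth[y, x]:              # depth test
                depth[y, x] = z
                screen[y, x] = color

fill_triangle((10, 5, 1.0), (70, 20, 2.0), (30, 55, 0.5), 255)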
(back)ray tracing
This kind of rendering is very different, and current gfx HW is not built for it. It involves implementing ray/primitive intersection computation (a small sketch follows the links below), Snell's law, and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes. See:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js? (hybrid #1+#2 approach)
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
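For instance, the core primitive such tracers need, analytic ray/sphere intersection via the quadratic formula (a sketch; assumes a unit-length ray direction):

import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0]*ox + direction[1]*oy + direction[2]*oz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4.0*c                 # a == 1 since direction is unit length
    if disc < 0:
        return None                    # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0   # nearer of the two intersection roots
    return t if t > 0 else None

print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0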
The difference between a 2D and a 3D ray tracer is almost none; the only difference is how to compute the perpendicular vector...
There are also different rendering methods like volume rendering, hybrid methods, and others, but their implementation is usually task oriented, and a generic description would most likely just mislead... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume

skimage project an image's 3D plane to fronto-parallel view

I'm working on implementing Ankush Gupta's synthetic data generation dataset (http://www.robots.ox.ac.uk/~vgg/data/scenetext/gupta16.pdf). In his work, he used a convolutional neural network to extract a point cloud from a 2-dimensional scenery image, segmented the point cloud to isolate different planes, used RANSAC to fit a 3D plane to the point cloud segments, and then warped the pixels of each segment, given the 3D plane, to a fronto-parallel view.
I'm stuck on this last part: warping my extracted 3D plane to a fronto-parallel view. I have X, Y, and Z vectors as well as a normal vector. I'm thinking what I need to do is perform some type of perspective transform or rotation that would bring all the pixels on the plane to a zero Z-coordinate while X and Y remain the same. I could be wrong about this; it's been a long time since I've had any formal training in geometry or linear algebra.
It looks like skimage's Perspective Transform requires me to know the dimensions of the final segment coordinates in 2D space. It looks like AffineTransform requires me to know the rotation. All I have at this point is my X, Y, Z and normal vector, and the suspicion that I may know my destination plane by just setting the Z axis to all zeros. I'm not sure if my assumption is correct, but I need to be able to warp all the pixels in the segment of interest to fronto-parallel, fit a bounding box, place text inside of it, then warp the final segment back to the original perspective in 3D space.
Any help with how to think about this or implement it would be massively useful.
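One way to make the rotation idea from the question concrete (a sketch only, not a verified solution; numpy, with a made-up normal): compute the rotation that aligns the plane's unit normal with the Z axis via Rodrigues' formula, so the rotated plane becomes fronto-parallel.

import numpy as np

def align_normal_to_z(n):
    """Rotation matrix mapping unit normal n onto (0, 0, 1) (Rodrigues' formula)."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                         # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(n, z)     # sin and cos of the rotation angle
    if s < 1e-12:                              # normal already (anti)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])          # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

n = [0.2, -0.4, 0.89]                          # hypothetical plane normal
R = align_normal_to_z(n)
print(R @ (np.asarray(n) / np.linalg.norm(n))) # ~ [0, 0, 1]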

What's the principle behind picking an object on the screen?

In a general 3D graphics application, clicking in the window lets us select an object in the scene graph. I am wondering: what is the principle behind this screen picking in 3D graphics?
Usually you will want to implement mouse picking (ray picking) like this:
take the mesh(es) and their bounding volume(s)
transform the mesh(es) and their bounding volume(s) to world space as usual (using their world matrices)
take the mouse cursor coordinates (x, y)
unproject ("undo the projection of") the 2D screen-space mouse coordinates to a 3D ray in world space using the inverse view and inverse projection matrices (see the sketch after this list)
check for collision between the ray and the mesh's bounding volume (coarse, fast) and/or the mesh's triangles (precise, slow)
if intersected, mark the object as picked
repeat for all objects
if multiple objects get picked, choose the one nearest to the camera
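A sketch of the unproject step (numpy; assumes 4x4 view and projection matrices in the usual proj @ view @ point convention and an OpenGL-style NDC depth range of [-1, 1], both of which are my assumptions):

import numpy as np

def mouse_ray(x, y, width, height, view, proj):
    """Return (origin, direction) of the picking ray in world space."""
    # pixel -> normalized device coordinates in [-1, 1] (Y flipped)
    nx, ny = 2.0*x/width - 1.0, 1.0 - 2.0*y/height
    inv = np.linalg.inv(proj @ view)              # undo projection, then view
    near = inv @ np.array([nx, ny, -1.0, 1.0])    # point on the near plane
    far = inv @ np.array([nx, ny, 1.0, 1.0])      # point on the far plane
    near, far = near[:3]/near[3], far[:3]/far[3]  # perspective divide
    d = far - near
    return near, d / np.linalg.norm(d)

The returned origin and direction then feed the ray/bounding-volume and ray/triangle tests in the steps above.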
BTW, quick googling returns plenty of theoretical info on this topic, with practical implementations in different programming languages.
