In a typical 3D graphics application you can click in the window and select an object in the scene graph. I am wondering: what is the underlying principle behind this screen picking in 3D graphics?
Usually you will want to implement mouse picking (ray picking) like this:
take the mesh(es) and their bounding volume(s)
transform the mesh(es) and their bounding volume(s) to world space as usual (using their world matrix)
take the mouse cursor coordinates (x, y)
unproject ("undo projection") the 2D screen-space mouse coordinates to a 3D ray in world space using the inverse view and inverse projection matrices (see the sketch after this list)
check for collision between the ray and the mesh bounding volume (coarse, fast) and/or the mesh triangles (precise, slow)
if intersected, mark the object as picked
repeat for all objects
if multiple objects get picked, choose the one nearest to the camera
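For illustration, here is a minimal sketch of the unprojection step, assuming the GLM math library; makePickRay and its parameter names are made up for this example:

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

Ray makePickRay(float mouseX, float mouseY, float screenW, float screenH,
                const glm::mat4& view, const glm::mat4& proj)
{
    // pixel -> normalized device coordinates [-1,+1] (window y grows downward)
    float ndcX = 2.0f * mouseX / screenW - 1.0f;
    float ndcY = 1.0f - 2.0f * mouseY / screenH;

    glm::mat4 invVP = glm::inverse(proj * view); // undo projection, then view

    // unproject one point on the near plane and one on the far plane
    glm::vec4 nearP = invVP * glm::vec4(ndcX, ndcY, -1.0f, 1.0f);
    glm::vec4 farP  = invVP * glm::vec4(ndcX, ndcY, +1.0f, 1.0f);
    nearP /= nearP.w;
    farP  /= farP.w;

    return { glm::vec3(nearP), glm::normalize(glm::vec3(farP - nearP)) };
}

The resulting ray is what you feed into the coarse/precise intersection tests above.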
BTW, a quick search turns up plenty of theoretical info on this topic, along with practical implementations in different programming languages.
What is the minimum setup I need to build 3D graphics from scratch? For example, I have only SFML for working with 2D graphics, and I need to implement a Camera object that can move and rotate in space.
Where do I start, and how do I implement vector3d -> vector2d conversion functions and the other necessary things?
All I have for now is:
angles Phi and Xi, vectors epsilon1..epsilon3, and some object that I can draw on the screen with the following formulas
x = center.x + scale.x * dot(point[i], epsilon1)
y = center.y + scale.y * dot(point[i], epsilon2)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement the transform matrix and vector math (a small sketch follows these links):
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
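As a taste of what those links cover, here is a minimal sketch of the core operation: multiplying a point in homogeneous coordinates by a 4x4 matrix stored in row-major order (the types and names are illustrative):

struct Vec4 { float x, y, z, w; };

// p' = M * p for a row-major 4x4 matrix m[16]
Vec4 transform(const float m[16], Vec4 p)
{
    return { m[ 0]*p.x + m[ 1]*p.y + m[ 2]*p.z + m[ 3]*p.w,
             m[ 4]*p.x + m[ 5]*p.y + m[ 6]*p.z + m[ 7]*p.w,
             m[ 8]*p.x + m[ 9]*p.y + m[10]*p.z + m[11]*p.w,
             m[12]*p.x + m[13]*p.y + m[14]*p.z + m[15]*p.w };
}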
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is native to today's gfx cards. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering and also serves as Canvas)
shadow, stencil, aux (for advanced rendering techniques)
These usually have the same resolution as the target rendering area. On top of this you need to implement rendering of the supported primitives, at least point, line, and triangle. See:
Algorithm to fill triangle
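In case the link goes stale, here is a compact sketch of one common way to fill a triangle (bounding box plus edge functions); putPixel is a placeholder for whatever framebuffer write you use:

#include <algorithm>

void putPixel(int x, int y, unsigned color); // provided by your framebuffer

void fillTriangle(int x0, int y0, int x1, int y1, int x2, int y2, unsigned color)
{
    // signed doubled area of triangle (a,b,p); its sign tells which side p is on
    auto edge = [](int ax, int ay, int bx, int by, int px, int py)
        { return (bx - ax) * (py - ay) - (by - ay) * (px - ax); };

    int minX = std::min({x0, x1, x2}), maxX = std::max({x0, x1, x2});
    int minY = std::min({y0, y1, y2}), maxY = std::max({y0, y1, y2});
    int area = edge(x0, y0, x1, y1, x2, y2);
    if (area == 0) return; // degenerate triangle

    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
        {
            int w0 = edge(x1, y1, x2, y2, x, y);
            int w1 = edge(x2, y2, x0, y0, x, y);
            int w2 = edge(x0, y0, x1, y1, x, y);
            // inside if all edge functions agree with the triangle's winding
            bool in = (area > 0) ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                 : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (in) putPixel(x, y, color);
        }
}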
On top of all this you can add textures, shaders and whatever else you want to ...
(back)ray tracing
This kind of rendering is very different, and current gfx HW is not built for it. It involves implementing ray/primitive intersection computations (a ray/sphere sketch follows the links below), Snell's law, and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes. See:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js? (hybrid approach #1+#2)
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
The difference between a 2D and a 3D ray tracer is almost none; the only difference is how you compute the perpendicular vector ...
There are also different rendering methods like volume rendering, hybrid methods and others, but their implementation is usually task-oriented, and a generic description would most likely just mislead ... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume
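To make the intersection idea concrete, here is a sketch of the most basic test, ray vs. sphere, obtained by solving |o + t*d - c|^2 = r^2 for t (all names are illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// returns the nearest positive hit distance t, or -1.0f on a miss
// (the direction d is assumed normalized)
float raySphere(Vec3 o, Vec3 d, Vec3 center, float r)
{
    Vec3 oc = sub(o, center);
    float b = dot(oc, d);          // half of the linear coefficient
    float c = dot(oc, oc) - r * r;
    float disc = b * b - c;        // quarter of the discriminant
    if (disc < 0.0f) return -1.0f; // no real roots: the ray misses
    float s = std::sqrt(disc);
    float t = -b - s;              // try the nearer root first
    if (t <= 0.0f) t = -b + s;     // origin inside the sphere
    return (t > 0.0f) ? t : -1.0f;
}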
I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through a hidden surface removal algorithm.
Am I correct?
That being the case, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm from the University of Colorado.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques and not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data on to a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to cast a ray from the camera's origin through the point on the canvas whose colour needs to be determined, find which models it hits, pick the closest one, and determine how that point is lit in order to compute the colour there. The lighting is computed by tracing further rays from the hit point to all the light sources in the scene. Notice that this approach eliminates the need for separate hidden surface determination algorithms like back-face culling, the z-buffer, etc., since hidden surface removal is inherent in the ray casting itself (the closest hit wins).
There are packages, libraries, etc. that help you do this; however, it's also common for ray tracers to be written from scratch as a college-level project. This approach takes more time to render (not to code), but the results are generally more pleasing than those of the approach below. It is the more popular approach when you want to render non-interactive visuals like movies.
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover), and writes the results to a colour buffer. Here hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the depth of each fragment, where the closer one wins when writing to the colour buffer.
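A minimal sketch of the Z-buffer rule just described (the buffer layout and names are illustrative, not tied to any API):

#include <vector>

struct Framebuffer
{
    int w, h;
    std::vector<float>    depth; // per-pixel z, initialized to +infinity
    std::vector<unsigned> color; // per-pixel packed colour
};

void writeFragment(Framebuffer& fb, int x, int y, float z, unsigned rgba)
{
    int i = y * fb.w + x;
    if (z < fb.depth[i]) // the closer fragment wins
    {
        fb.depth[i] = z;
        fb.color[i] = rgba;
    }
}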
Rasterization is the more popular approach, with cheap hardware support for it available on most modern computers thanks to the years of research and money that have gone into it. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing, it's faster to render and thus is widely used in interactive, real-time rendering like games.
If you don't want to use those libraries, then you have to do what is commonly known as software rendering, i.e. you will end up doing what these libraries do.
What formula or algorithm can I use to draw a 3D Sphere?
That depends on which of the above you choose. If you simply rasterize a 3D sphere in 2D with an orthographic projection, all you have to do is draw a circle on the canvas.
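Since you mention WinBGIm, the orthographic case might look like this; a sketch assuming the usual BGI-style calls from WinBGIm's graphics.h:

#include <graphics.h>

int main()
{
    initwindow(640, 480);            // WinBGIm window
    setcolor(WHITE);
    setfillstyle(SOLID_FILL, LIGHTBLUE);
    fillellipse(320, 240, 100, 100); // the projected sphere is just a disc
    circle(320, 240, 100);           // outline on top
    getch();                         // wait for a key press
    closegraph();
    return 0;
}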
If you are looking for hidden line removal (drawing the edges rather than the insides of the faces), the solution is easy: "back-face culling".
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it is facing the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face.
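A minimal sketch of that test for one face (types and names are illustrative; viewDir is the direction of the projection line):

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(Vec3 a, Vec3 b)
    { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

// face (a,b,c) given in counter-clockwise order when seen from outside
bool isFrontFace(Vec3 a, Vec3 b, Vec3 c, Vec3 viewDir)
{
    Vec3 n = cross(sub(b, a), sub(c, a)); // outward face normal
    return dot(n, viewDir) < 0.0f;        // facing the observer
}
// an edge is visible iff isFrontFace is true for at least one of its two faces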
The usual discretization of the sphere is made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
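For instance, the parallels/meridians grid can be generated like this (a sketch; the names are illustrative):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> sphereGrid(float r, int parallels, int meridians)
{
    const float PI = 3.14159265f;
    std::vector<Vec3> v;
    for (int i = 0; i <= parallels; ++i)    // from pole to pole
    {
        float phi = PI * i / parallels;
        for (int j = 0; j < meridians; ++j) // around the axis
        {
            float theta = 2.0f * PI * j / meridians;
            v.push_back({ r * std::sin(phi) * std::cos(theta),
                          r * std::cos(phi),
                          r * std::sin(phi) * std::sin(theta) });
        }
    }
    return v; // connect grid neighbours to get the quads/triangles
}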
I am trying to create a 3-dimensional game that is based entirely on cubes of the exact same size. I wanted to learn how to make my own 3-dimensional game using only 2-dimensional game libraries. Currently, the way I am doing it is that I have an array storing the locations of the centers of all the cubes in the game. Then, when drawing a single cube, I figure out which 3 sides of the cube I need to draw (since you never need to draw all 6 sides of the cube). Then, knowing the 3-dimensional points of all the corners of the cube, I project those points onto a 2-dimensional space using the camera position, the camera angle, and the point I am projecting.
Now my real question is: Now that I can draw a single cube, how do I draw multiple cubes, considering that cubes need to be drawn in a certain order (i.e. the cubes that are further away need to be drawn first so that the cubes that are closer to us appear on top of the cubes far away from us)? How do I determine which cubes to draw first given the list of cube centers and their sizes, and the camera position/angle?
It's been quite a few years since I learned 3D graphics ...
All of the sites I used are gone now (as usual on the web, but there are tons of new ones). I am an OpenGL user, so I recommend using that. Many of my students liked the NeHe tutorials, so look there. Also look at 4x4 transform matrices (homogeneous coordinates).
The Z-buffer is automatic in OpenGL; just create a context with a Z-buffer (all tutorials use one) and call glEnable(GL_DEPTH_TEST);
Face culling (skipping the far side of an object) is done by glEnable(GL_CULL_FACE); and by drawing faces with a consistent polygon winding (CW or CCW)
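Put together, the relevant fixed-pipeline setup is just a few calls (the context creation itself depends on your toolkit):

glEnable(GL_DEPTH_TEST); // Z-buffer: closer fragments win
glEnable(GL_CULL_FACE);  // skip faces wound away from the camera
glFrontFace(GL_CCW);     // declare the winding your meshes use
// per frame: clear both the color and the depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);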
Here are a few related questions:
Understanding 4x4 homogenous transform matrices
3D (software) render pipeline computation
some transform matrices 3x3 and 4x4 insights
how to project points and vectors ... fixed pipeline OpenGL will do it for you
code for inverse 4x4 matrix
this is my clean OpenGL app example in Borland/Embarcadero BDS2006 Turbo C++
some projection math
And some good sites to look at (strongly recommend):
transforms
Learning Modern 3D Graphics Programming
GL_PROJECTION matrix abuse
Sorry for that list of links, but I think they are relevant, and copying their content here would be too much. Now back to your question of how to render more cubes. I see a few options (in OpenGL):
every cube has its own transform matrix
This represents its position and orientation in space. Then, just before rendering each cube, change the GL_MODELVIEW matrix and draw the cube (with the same code for each). If you have too many objects/cubes, it will consume a big chunk of memory (+16 floats per cube).
all cubes are aligned with each other
in this case you need to know just the 3D position (+3 floats per cube), so just do something like this:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();                 // store the original matrix
glTranslatef(x[i], y[i], z[i]); // position of the i-th cube
// here render the cube (glBegin(GL_QUADS); ... or use glDraw...)
glPopMatrix();                  // restore the original matrix
use of shaders
In the modern shader pipeline you can use a geometry shader to emit a cube upon receiving a point, but this is too much for an OpenGL beginner. In that case you would draw just points, and the shader would convert them to cubes on the GPU, which is much faster ...
use VBO or VAO
A VAO is a vertex array object (a list of VBOs)
A VBO is a vertex buffer object
A VBO is basically an array of parameters copied to the GPU as one chunk, rather than by individual calls like glVertex, glColor, glNormal..., which is much, much faster. This allows you to create a model of your space (all the cubes) and draw it at once with enough speed, unless you hit the GPU/CPU/memory speed limits.
A VAO similarly groups several VBOs together, so you bind just a single VAO per object instead of one VBO for each parameter, further reducing the number of API calls needed for rendering.
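A minimal sketch of the VBO path in the legacy (non-shader) pipeline; vertices and vertexCount stand for your cube data, and on Windows the glGenBuffers family needs an extension loader such as GLEW:

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float),
             vertices, GL_STATIC_DRAW);     // one chunk copied to the GPU

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, nullptr);   // 3 floats per vertex, tightly packed
glDrawArrays(GL_TRIANGLES, 0, vertexCount); // a single call draws everything
glDisableClientState(GL_VERTEX_ARRAY);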
For the problem of detecting objects near the mouse cursor to snap to (in a 3D view), we are using the picking-ray method (which basically forms a 3D region around the cursor's immediate neighborhood and then detects the objects present in that region).
I wonder if it is the only way to solve the task. Can I use, for example, the view matrix to get the 2D coordinates of the object in view space, and then search for objects in the cursor's vicinity?
I am not happy with the picking-ray method because it is relatively expensive, so the question is essentially whether any space-transformation-based method would generally be faster. I am new to 3D programming, so please give me a direction to dig into.
You can probably speed up the ray-picking process by forming a hierarchy of nested bounding boxes around the objects and checking for intersection of the rays with the bounding boxes first. This way you can spare a lot of intersection tests.
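The per-node test in such a hierarchy is typically the slab method, ray vs. axis-aligned bounding box; a sketch with illustrative names (invD holds the per-axis reciprocals of the ray direction, precomputed once per ray):

#include <algorithm>

struct AABB { float min[3], max[3]; };

bool rayHitsBox(const float o[3], const float invD[3], const AABB& box)
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i)
    {
        float t0 = (box.min[i] - o[i]) * invD[i];
        float t1 = (box.max[i] - o[i]) * invD[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax; // slab intervals overlap => the box is hit
}
// only when this returns true do you descend to the children / the triangles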
There is an alternative that exploits the available rendering engine: instead of rendering to the screen with the normal rendering attributes, you can render the same view to an off-screen buffer, using flat shading and a different color for every object. You will obtain an object map that instantly tells you the object id for any pixel.
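In classic OpenGL that object map might be produced like this (a sketch; drawObject, objectCount and the mouse/screen variables are placeholders, and lighting/texturing must be disabled so the colors stay flat):

// render pass: encode each object's id as a unique flat color
for (unsigned id = 0; id < objectCount; ++id)
{
    glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
    drawObject(id); // placeholder per-object draw call
}
// read back the single pixel under the cursor (GL's y axis starts at the bottom)
unsigned char rgb[3];
glReadPixels(mouseX, screenH - 1 - mouseY, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, rgb);
unsigned pickedId = rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);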
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
As an example, take a rounded rectangle whose corners are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color, i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much, but here's a start:
The color fill is accomplished by using a shader. Reimer's XNA Tutorials on pixel shaders is a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing it to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.