Creating a Cube-based 3-Dimensional Game - geometry

I am trying to create a 3-dimensional game that is based entirely on cubes of the exact same size. I wanted to learn how to make my own 3-dimensional game using only 2-dimensional game libraries. Currently, the way I am doing it is that I have an array storing the locations of all the centers of each cube in the game. Then, when drawing a single cube, I figure out which 3 faces of the cube I need to draw (since at most 3 of a cube's 6 faces can be visible at once). Then, knowing the 3-dimensional points of all the corners of the cube, I project those points onto a 2-dimensional space using the camera position, the camera angle, and the point I am projecting.
Now my real question is: now that I can draw a single cube, how do I draw multiple cubes, considering that cubes need to be drawn in a certain order (i.e. the cubes that are further away need to be drawn first so that the cubes closer to us appear on top of them)? How do I determine which cubes to draw first, given the list of cube centers and their sizes and the camera position/angle?

It has been quite a few years since I learned 3D graphics ...
All of the sites I used back then are gone now (as usual on the web, but there are tons of new ones). I am an OpenGL user, so I recommend using that. Many of my students liked the NeHe tutorials, so look there. Also look at 4x4 transform matrices (homogeneous coordinates).
The Z-buffer is automatic in OpenGL and solves your drawing-order problem: just create a context with a Z-buffer (all tutorials use one) and call glEnable(GL_DEPTH_TEST); the depth test then resolves occlusion per pixel, so you can draw the cubes in any order.
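If you stay with a pure 2D library instead (no hardware Z-buffer), the classic fallback is the painter's algorithm: sort the cube centers back to front by distance from the camera and draw them in that order. A minimal C++ sketch (the Vec3 type and the cam variable are my assumptions, not from your code):

#include <algorithm>
#include <vector>

struct Vec3 { float x,y,z; }; // assumed minimal vector type

// squared distance is enough for ordering and avoids the sqrt
static float dist2(const Vec3 &a,const Vec3 &b)
{
    float dx = a.x-b.x, dy = a.y-b.y, dz = a.z-b.z;
    return dx*dx + dy*dy + dz*dz;
}

// painter's algorithm: farthest cube gets drawn first
void sortBackToFront(std::vector<Vec3> &centers,const Vec3 &cam)
{
    std::sort(centers.begin(),centers.end(),
              [&](const Vec3 &a,const Vec3 &b)
              { return dist2(a,cam) > dist2(b,cam); });
}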
Face culling (skipping the faces on the far side of an object) is done by glEnable(GL_CULL_FACE); and by drawing all faces with the same polygon winding (CW or CCW)
Here are a few related questions:
Understanding 4x4 homogenous transform matrices
3D (software) render pipeline computation
some transform matrices 3x3 and 4x4 insights
how to project points and vectors ... (the fixed-function OpenGL pipeline will do it for you)
code for inverse 4x4 matrix
my clean OpenGL app example in Borland/Embarcadero BDS2006 Turbo C++
some projection math
And some good sites to look at (strongly recommended):
transforms
Learning Modern 3D Graphics Programming
GL_PROJECTION matrix abuse
Sorry for the long list of links, but I think they are all relevant, and copying their content here would be too much. Now back to your question of how to render more cubes. I see a few options (in OpenGL):
every cube has its own transform matrix
representing its position and orientation in space. Then, just before rendering each cube, change the GL_MODELVIEW matrix and draw the cube (with the same code for each). If you have too many objects/cubes, this consumes a big chunk of memory (+16 floats per cube).
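A sketch of this first option, assuming each cube stores its transform as 16 column-major floats in cube[i].m (a layout I am assuming here):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();            // save the camera/view matrix
glMultMatrixf(cube[i].m);  // apply the i-th cube's 4x4 transform (column-major)
// render the cube here
glPopMatrix();             // restore the camera/view matrix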
every cube is axis-aligned with the others
in this case you only need to know each cube's 3D position (+3 floats per cube), so just do something like this:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();               // store the original matrix
glTranslatef(x[i],y[i],z[i]); // position of the i-th cube
// render the cube here (glBegin(GL_QUADS); ... or use glDraw...)
glMatrixMode(GL_MODELVIEW);   // in case rendering changed the matrix mode
glPopMatrix();                // restore the original matrix
use of shaders
in the modern shader pipeline you can use a geometry shader to emit a whole cube upon receiving a single point, but this is too much for an OpenGL beginner. In that case you would draw just points and the shader would convert them into cubes on the GPU, which is much faster ...
use VBOs or VAOs
a VAO is a vertex array object (a list of VBOs)
a VBO is a vertex buffer object
A VBO is basically an array of parameters copied to the GPU as one chunk rather than by individual calls like glVertex, glColor, glNormal, ..., which is much, much faster. It allows you to build a model of your whole space (all the cubes) and draw it at once with plenty of speed, unless you hit the GPU/CPU/memory limits.
A VAO similarly groups several VBOs together, so you bind just a single VAO per object instead of one VBO per vertex parameter, further reducing the number of API calls needed for rendering.
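A minimal fixed-pipeline VBO sketch (assuming a std::vector<float> verts that already holds all cube triangles as x,y,z coordinates):

GLuint vbo;
glGenBuffers(1,&vbo);
glBindBuffer(GL_ARRAY_BUFFER,vbo);
// copy the whole model to the GPU once
glBufferData(GL_ARRAY_BUFFER,verts.size()*sizeof(float),verts.data(),GL_STATIC_DRAW);

// per frame: draw the whole space in a single call
glBindBuffer(GL_ARRAY_BUFFER,vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_FLOAT,0,nullptr);                // 3 floats per vertex
glDrawArrays(GL_TRIANGLES,0,GLsizei(verts.size()/3)); // vertex count
glDisableClientState(GL_VERTEX_ARRAY);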

Related

3d graphics from scratch

What is the minimum setup I need to build 3D graphics from scratch? For example, I have only SFML for working with 2D graphics, and I need to implement a Camera object that can move and rotate in space.
Where do I start, and how do I implement the vector3d -> vector2d conversion functions and the other necessary things?
All I have for now is:
the angles Phi and Xi, the vectors epsilon1..epsilon3, and some object that I can draw on the screen with the following formula:
x = center.x + scale.x * dot(point[i], epsilon1)
y = center.y + scale.y * dot(point[i], epsilon2)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement transform matrix and vector math:
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
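To give a flavor of what those links cover, here is a minimal sketch of transforming a point by a 4x4 matrix and projecting it to the screen; the row-major layout, the focal length f, and the screen center (cx,cy) are my assumptions:

struct Vec3 { float x,y,z; };

// p' = M*(p,1); M is a 4x4 matrix stored row-major in 16 floats
Vec3 transform(const float M[16],const Vec3 &p)
{
    Vec3 r;
    r.x = M[0]*p.x + M[1]*p.y + M[ 2]*p.z + M[ 3];
    r.y = M[4]*p.x + M[5]*p.y + M[ 6]*p.z + M[ 7];
    r.z = M[8]*p.x + M[9]*p.y + M[10]*p.z + M[11];
    return r;
}

// simple perspective projection: camera at origin looking along +z
bool project(const Vec3 &p,float f,float cx,float cy,float &sx,float &sy)
{
    if (p.z <= 0.0f) return false; // point is behind the camera
    sx = cx + f*p.x/p.z;           // perspective divide
    sy = cy - f*p.y/p.z;           // screen y grows downward
    return true;
}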
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is the native one for today's graphics cards. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering; it also serves as the canvas)
shadow, stencil, aux (for advanced rendering techniques)
They usually have the same resolution as the target rendering area. On top of this you need to implement rendering of the supported primitives, at least points, lines, and triangles; see:
Algorithm to fill triangle
On top of all this you can add textures, shaders, and whatever else you want ... (a minimal rasterizer sketch follows below)
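As an illustration of the triangle-fill step, here is a minimal bounding-box rasterizer using edge functions; the putPixel framebuffer write is an assumed helper, and there is no depth test or clipping here:

#include <algorithm>
#include <cmath>

void putPixel(int x,int y,unsigned color); // assumed framebuffer write

// twice the signed area of (a,b,c); its sign tells which side c lies on
static float edge(float ax,float ay,float bx,float by,float cx,float cy)
{
    return (bx-ax)*(cy-ay) - (by-ay)*(cx-ax);
}

void fillTriangle(float x0,float y0,float x1,float y1,float x2,float y2,unsigned color)
{
    int minx = (int)std::floor(std::min({x0,x1,x2}));
    int maxx = (int)std::ceil (std::max({x0,x1,x2}));
    int miny = (int)std::floor(std::min({y0,y1,y2}));
    int maxy = (int)std::ceil (std::max({y0,y1,y2}));
    float area = edge(x0,y0,x1,y1,x2,y2);
    if (area == 0.0f) return; // degenerate triangle
    for (int y = miny; y <= maxy; y++)
     for (int x = minx; x <= maxx; x++)
     {
        float px = x+0.5f, py = y+0.5f; // pixel center
        float w0 = edge(x1,y1,x2,y2,px,py);
        float w1 = edge(x2,y2,x0,y0,px,py);
        float w2 = edge(x0,y0,x1,y1,px,py);
        // pixel is inside when all edge functions match the triangle's orientation
        if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
            (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0))
            putPixel(x,y,color);
     }
}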
(back)ray tracing
this kind of rendering is very different, and current graphics hardware is not built for it. It involves implementing ray/primitive intersection computations, Snell's law, and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes; see:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js? (hybrid approach #1+#2)
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
There is almost no difference between a 2D and a 3D ray tracer; the only real difference is how the perpendicular (normal) vector is computed ...
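For the intersection part, here is a minimal ray/sphere test (solving |o + t*d - c|^2 = r^2 for t; the direction d is assumed normalized):

#include <cmath>

struct Vec3 { float x,y,z; };
static float dot(const Vec3 &a,const Vec3 &b){ return a.x*b.x + a.y*b.y + a.z*b.z; }

// smallest t >= 0 where ray o + t*d hits sphere (c,r), or -1 on miss
float raySphere(const Vec3 &o,const Vec3 &d,const Vec3 &c,float r)
{
    Vec3 oc = { o.x-c.x, o.y-c.y, o.z-c.z };
    float b = dot(oc,d);            // half the quadratic's linear term
    float q = dot(oc,oc) - r*r;
    float disc = b*b - q;           // quarter discriminant (d is unit length)
    if (disc < 0.0f) return -1.0f;  // ray misses the sphere
    float s = std::sqrt(disc);
    float t = -b - s;               // nearer root first
    if (t < 0.0f) t = -b + s;       // ray origin is inside the sphere
    return (t < 0.0f) ? -1.0f : t;
}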
There are also other rendering methods like volume rendering, hybrid methods, and more, but their implementation is usually task-oriented, and a generic description would most likely just mislead ... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume

How to compute a 3d miniature model from a large set of 3d geometric models

I want to import a set of 3D geometries into the current scene; the imported geometry contains tons of basic components which together may represent an entire building. The product manager wants the entire building to be displayed as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data at a reasonable time and memory cost?
// worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a kind of texture mapping:
1. take enough snapshots of the imported objects with the software renderer.
2. apply the images to a surface.
3. use some shader tricks to perform effects like bump mapping: when the view position changes, the texture changes accordingly and makes the viewer feel as if he were looking at a 3D scene.
My modeler and renderer are ACIS and HOOPS; any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. This is not the easiest thing to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. Then, in every prism, find the vertex closest to the observer. This gives you a 2D map of elevations at the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint of each, as sketched below.
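A sketch of that ray-based variant, assuming a hypothetical firstHit() helper that wraps whatever ray/solid intersection query the modeler exposes (it is not an actual ACIS/HOOPS call):

#include <vector>

// hypothetical: distance to the first surface hit by a ray cast at (x,y), or a large value
float firstHit(float x,float y);

// build a W x H elevation map over the bounding rectangle [x0,x1] x [y0,y1]
std::vector<float> elevationMap(int W,int H,float x0,float y0,float x1,float y1)
{
    std::vector<float> depth(W*H);
    for (int j = 0; j < H; j++)
     for (int i = 0; i < W; i++)
     {
        float x = x0 + (x1-x0)*(i+0.5f)/W; // cell center
        float y = y0 + (y1-y0)*(j+0.5f)/H;
        depth[j*W+i] = firstHit(x,y);      // keep only the first endpoint
     }
    return depth;
}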
It can also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.

Directx 11 spheres

I'm looking for an efficient way to display lots of spheres using DirectX 11. The spheres are defined by (x,y,z,r), where (x,y,z) are coordinates in space and r is the radius. I want to display only the spheres that can be seen, meaning that spheres outside the field of view and spheres too small to be seen wouldn't be drawn. However, if a group of spheres that are each smaller than one pixel together covers at least one pixel, then I want to display the group's most predominant color. Spheres have only one color and different levels of transparency. Any help would be appreciated, and incomplete answers are acceptable.
You need several things: first, an indexed unit-sphere geometry; second, a buffer to store the sphere instance properties (position, radius, and color); and third, a small buffer for the API parameters yet to come. The three combine in a single ID3D11DeviceContext::DrawIndexedInstancedIndirect call.
The remaining question is: how do you feed the instance buffer? The CPU version is easy: just apply frustum culling, sort back to front because of the transparency, apply a merge based on the screen projection, update the buffer, and use ID3D11DeviceContext::DrawIndexedInstanced.
The GPU version does the same thing with compute shaders but is harder to implement. The advantage: zero CPU/GPU synchronization, and it should support far more instances.
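A sketch of the CPU path's cull-and-sort step; the Sphere and Plane structs and the camera position are my assumptions, and the final copy into the instance buffer would be a Map/Unmap or UpdateSubresource on the D3D11 buffer:

#include <algorithm>
#include <vector>

struct Sphere { float x,y,z,r; unsigned color; };
struct Plane  { float nx,ny,nz,d; }; // unit normal pointing into the frustum

// visible unless fully behind one of the six frustum planes
static bool inFrustum(const Sphere &s,const Plane f[6])
{
    for (int i = 0; i < 6; i++)
        if (f[i].nx*s.x + f[i].ny*s.y + f[i].nz*s.z + f[i].d < -s.r)
            return false;
    return true;
}

// cull, then sort back to front for correct transparency blending
std::vector<Sphere> buildInstances(const std::vector<Sphere> &all,
                                   const Plane frustum[6],
                                   float cx,float cy,float cz)
{
    std::vector<Sphere> vis;
    for (const Sphere &s : all)
        if (inFrustum(s,frustum)) vis.push_back(s);
    std::sort(vis.begin(),vis.end(),[&](const Sphere &a,const Sphere &b)
    {
        float da = (a.x-cx)*(a.x-cx) + (a.y-cy)*(a.y-cy) + (a.z-cz)*(a.z-cz);
        float db = (b.x-cx)*(b.x-cx) + (b.y-cy)*(b.y-cy) + (b.z-cz)*(b.z-cz);
        return da > db; // farthest sphere drawn first
    });
    // copy vis into the instance buffer, then DrawIndexedInstanced(vis.size(), ...)
    return vis;
}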

Fill 2D area bound by vertices in XNA

I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill it with a color, not a file-based texture.
For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Riemer's XNA Tutorials on pixel shaders are a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is 3D, but using a camera fixed to a plane will simulate 2D.
To give different triangles different colors, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing it to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.

What does 'Polygon' mean in terms of 3D Graphics?

An old Direct3D book says:
"...you can achieve an acceptable frame rate with hardware acceleration while displaying between 2000 and 4000 polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want it to be is a triangle, because if a polygon is not a triangle it's generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). For a convex polygon, a basic triangulation (tessellation) algorithm is really simple: fan out from the first vertex, pairing it with each consecutive pair of the remaining vertices to form triangles, as in the sketch below.
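A fan-triangulation sketch for a convex polygon (it returns indices into the polygon's vertex list; concave polygons need a real algorithm such as ear clipping):

#include <vector>

// split a convex n-gon into n-2 triangles fanned out from vertex 0
std::vector<int> fanTriangulate(int n)
{
    std::vector<int> idx;
    for (int i = 1; i+1 < n; i++)
    {
        idx.push_back(0);   // fan center
        idx.push_back(i);
        idx.push_back(i+1);
    }
    return idx;             // e.g. n = 5 -> (0,1,2)(0,2,3)(0,3,4)
}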
Here, a "polygon" refers to a triangle. All . However, as you point out, there are many more variables than just the number of triangles which determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can produce far more than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2-4000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw primitives call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
    Device device,
    float length,
    int sides
)
As others have already said, "polygons" here means triangles.
The main advantage of triangles is that, since 3 points define a plane, a triangle is coplanar by definition. This means that every point within the triangle is exactly defined as a linear combination of the polygon's points. More than 3 vertices aren't necessarily coplanar, and they don't define a unique surface.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also undeformable.
