How do ASCII art 3D engines work?

I've seen some pretty cool demos involving 3D ASCII art videos. Does anyone know what algorithms are used for doing this?

You can use the same techniques as with ordinary monochrome pixel graphics. You (or some algorithm) classify a set of ASCII characters by their perceived brightness, put them in a lookup table indexed by intensity, and then draw very blocky graphics with them.
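As an illustration, here is a minimal C++ sketch of that lookup-table idea; the ramp string, the test pattern and the grid size are just assumptions for the example, not a standard.

    // Minimal sketch of the brightness-lookup idea: map a grayscale value in
    // [0,1] to a character whose "ink density" roughly matches it.
    #include <cstdio>
    #include <cstring>
    #include <cmath>

    int main() {
        // Characters ordered from darkest (least ink, on a dark terminal) to brightest.
        const char ramp[] = " .:-=+*#%@";
        const int rampLen = (int)std::strlen(ramp);

        const int W = 40, H = 20;
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // Fake a grayscale value; in a real engine this would be the
                // shaded brightness of the pixel/cell from your renderer.
                float u = x / (float)(W - 1);
                float v = y / (float)(H - 1);
                float brightness = 0.5f + 0.5f * std::sin(6.28318f * u) * std::cos(6.28318f * v);

                int idx = (int)(brightness * (rampLen - 1) + 0.5f);
                std::putchar(ramp[idx]);
            }
            std::putchar('\n');
        }
        return 0;
    }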

Related

Silhouette below 3D model

There are some 3D applications which can cast a shadow or silhouette below 3D models. They render pretty fast and smoothly. I wonder what the standard technique is for getting a 3D model's shadow/silhouette.
For example, is there any C++ library, such as libigl or CGAL, that can compute the shadow/silhouette quickly? Or is GLSL shading used instead? Any hint about the standard technology stack would be appreciated.
For rendering, it's trivial. Just project the vertices onto the plane (for the case of the XY plane, this just entails setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering, that won't matter.
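For illustration, here is a minimal sketch of that flattening step; the Vec3/Triangle types and the function name are placeholders, not from any particular library.

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    // Project every vertex onto the plane z = 0. Rendering the flattened
    // triangles in a single colour gives the silhouette on that plane;
    // overlap between triangles does not matter for display.
    std::vector<Triangle> flattenToXY(const std::vector<Triangle>& mesh) {
        std::vector<Triangle> flat = mesh;
        for (Triangle& t : flat) {
            t.a.z = 0.0f;
            t.b.z = 0.0f;
            t.c.z = 0.0f;
        }
        return flat;
    }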
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
Computing shadows is a vast and difficult topic. In the real world, light sources are extended and shadow edges are not sharp (there is penumbra). Then there are cast shadows, and even self-shadows.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: if you place an observer at the light source, the faces it sees are illuminated by that light source. Conversely, the surfaces hidden from it are in shadow.
For correct rendering, the shadowed areas should be back-projected to the scene and painted black.
By their nature, ray-tracing techniques make this process easy to implement.
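As a sketch of how that looks in a ray tracer: the usual shadow test casts a ray from the surface point toward the light and checks for any occluder in between. The intersectsScene callable below stands in for whatever ray/scene intersection routine the renderer already has; all names are illustrative.

    #include <cmath>
    #include <functional>

    struct Vec3 {
        float x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    };

    // intersectsScene(origin, dir, tMin, tMax) should return true if any geometry
    // intersects the ray origin + t*dir for t in (tMin, tMax).
    bool isInShadow(const Vec3& surfacePoint, const Vec3& lightPos,
                    const std::function<bool(const Vec3&, const Vec3&, float, float)>& intersectsScene) {
        Vec3 toLight = lightPos - surfacePoint;
        float dist = std::sqrt(toLight.x * toLight.x + toLight.y * toLight.y + toLight.z * toLight.z);
        Vec3 dir = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
        const float eps = 1e-4f;   // avoid self-intersection at the starting surface
        return intersectsScene(surfacePoint, dir, eps, dist - eps);
    }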

How does a graphics engine figure out how to place pixels to make a 3d image?

I was wondering what procedure a simple 3D program uses to draw 2D pixels so that they appear 3D. I'm really interested in this for drawing purposes: if a program can figure out how to use a flat screen to produce images with depth, then maybe I could use those techniques in my drawing.
Are there any basic 3D engines out there I can look at, without any 2D-to-3D abstractions?
Two notions may interest you:
The perspective projection, which is the mathematical transformation that takes 3D points (or vertices) and the characteristics of your camera (position, orientation, frustum, ...) and gives you the 2D projection of the point on your chosen medium (screen).
Wikipedia - 3D Projection
StackOverflow - Transform GPS-Points to Screen-Points with Perspective Projection in Android (I made a detailed answer)
The Painter's algorithm (since you seem to ask for drawing-related techniques), a rendering method which sorts all the elements of your scene by depth after their projection and draws them on your medium by decreasing depth, to ensure a realistic output ("far objects hidden behind closer ones", imitating the painter's method). This algorithm, however, has some limits: it is far from efficient in its basic implementation, and it can't easily deal with elements that intersect or circularly overlap each other. So most of the time a more efficient method is used, Z-buffering, which resolves depth conflicts on a per-pixel basis. (A small sketch combining both notions follows the links below.)
Wikipedia - Painter's algorithm
Wikipedia - Z-buffering
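A minimal sketch of those two steps, under the simplifying assumption of a camera at the origin looking down +Z with focal length f; the types and names are illustrative.

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };
    struct Triangle { Vec3 v[3]; };

    // Pinhole perspective projection of a camera-space point onto the image plane z = f.
    Vec2 project(const Vec3& p, float f) {
        return { f * p.x / p.z, f * p.y / p.z };
    }

    // Painter's algorithm: draw triangles from farthest to nearest so that closer
    // ones overwrite farther ones. Depth is the triangle centroid's z, which is
    // the usual (imperfect) approximation.
    void paintersSort(std::vector<Triangle>& tris) {
        std::sort(tris.begin(), tris.end(), [](const Triangle& a, const Triangle& b) {
            float za = (a.v[0].z + a.v[1].z + a.v[2].z) / 3.0f;
            float zb = (b.v[0].z + b.v[1].z + b.v[2].z) / 3.0f;
            return za > zb;   // farther first
        });
    }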
By combining those notions, you can actually implement your own simple 3D engine (in the other StackOverflow thread I'm pointing to, I gave a link to an article I wrote about creating such an engine easily).
If you want to look at more complex engines and notions, you can take a look at the GPU Gems 3 by Nvidia for instance, or look at articles about OpenGL.
Hope it helped, bye!

Best practice for creating 2d graphics assets

As a brief background, I have been slowly chugging away at the core framework of a game I've been wanting to make for some time now. It has gotten to the point where I want to start really fleshing it out with some graphics assets other than colored boxes. And this brings me to the heart of my question:
What is the best method for creating graphics assets that appear the same quality independent of the device they are drawn on?
My game is styled after Pokemon, so I want to capture the 16-bit feel while still remaining crisp regardless of the device resolution. Does this mean I just create a ton of duplicate sprite sheets, i.e. a 16x16, 32x32, 48x48, and 64x64 version of each asset? Or should I be making vector art and rendering it out specifically for each device? Or is there some other alternative I haven't considered?
Thanks!
If by 16-bit feel you mean a classic old-school "pixelated" style (but with crisp edges), then you can just draw your sprites at the minimal dimensions and upscale them by whatever factor you need using a pixel-art scaling algorithm, the simplest being nearest neighbour. There are of course many algorithms that produce much nicer results than NN, such as 2xSaI and the hqx family of algorithms, and RotSprite if you need rotation.
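For reference, here is a minimal sketch of the nearest-neighbour case; the flat row-major 32-bit pixel layout is an assumption for the example.

    #include <cstdint>
    #include <vector>

    // Integer-factor nearest-neighbour upscale: each destination pixel copies the
    // source pixel it falls inside, which keeps the crisp blocky look.
    std::vector<uint32_t> upscaleNearest(const std::vector<uint32_t>& src,
                                         int srcW, int srcH, int factor) {
        const int dstW = srcW * factor;
        const int dstH = srcH * factor;
        std::vector<uint32_t> dst(static_cast<size_t>(dstW) * dstH);
        for (int y = 0; y < dstH; ++y) {
            for (int x = 0; x < dstW; ++x) {
                dst[static_cast<size_t>(y) * dstW + x] =
                    src[static_cast<size_t>(y / factor) * srcW + (x / factor)];
            }
        }
        return dst;
    }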
If you want clean antialiased edges you might want to check out this Microsoft Research paper: Depixelizing Pixel Art
You can then use these algos as a loading pre-pass for your game.
Alternatively, you could shift them "earlier" into your art pipeline to help speed up the generation of multiple (resolution/transform) variants, which you could then touch up further. This choice largely depends on your available labor and your level of perfectionism. Note also that this loses some of the "purity" of the solution, since it violates DRY: updates will require changes in every variant of a sprite.
I would suggest first trying out some of these upscaling filters and seeing if you are happy with the results. If you are, you can get away with a loading pre-pass, which is by far the most desirable outcome because it reduces work and maintenance by a large factor.

3D laser scanner capturing normals?

The lab at the university I work at is in the process of purchasing a laser scanner for scanning 3D objects. From the start we've been trying to find a scanner that is able to capture real raw normals from the actual scanned surface. It seems that most scanners only capture points, and then the software interpolates to find the normal of the approximate surface.
Does anybody know if there is actually such a thing as capturing raw normals? Is there a scanner that can do this and not interpolate the normals from the point data?
Highly unlikely. Laser scanning is done using ranges. What you want would require combining two entirely different techniques. Normals could be evaluated with higher precision using well-controlled lighting etc., but that requires a very different kind of setup. Also consider the sampling problem: what good is a normal with higher resolution than your position data?
If you already know the bidirectional reflectance distribution function of the material that composes your 3D object, it is possible that you could use a gonioreflectometer to compare the measured BRDF at a point. You could then individually optimize a computed normal at that point by comparing a hypothetical BRDF against the actual measured value.
Admittedly, this would be a reasonably computationally-intensive task. However, if you are only going through this process fairly rarely, it might be feasible.
For further information, I would recommend that you speak with either Greg Ward (Larson) of Radiance fame or Peter Shirley at NVIDIA.
Here is an example article of using structured light to reconstruct normals from gradients.
Shape from 2D Edge Gradients
I didn't find the exact article I was looking for, but this seems to be based on the same principle.
You can reconstruct normals from the angle and width of the stripe after it has been deformed by the object.
You could do it with a structured light + camera setup.
The normal would come from the angle between the projected line and its position in the image. As the other posters point out, you can't do it with a point laser scanner.
Capturing raw normals is almost always done using photometric stereo. This almost always requires making some assumptions about the underlying reflectance, but even with somewhat inaccurate normals you can often do well when combining them with another source of data:
Really nice code for combining point clouds (from a laser scan for example) with surface normals: http://www.cs.princeton.edu/gfx/pubs/Nehab_2005_ECP/
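As an illustration of the basic photometric-stereo step under a Lambertian assumption with exactly three known light directions, here is a hedged sketch; the type and function names are made up for the example, and real setups use more lights with a least-squares solve.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Determinant of the 3x3 matrix whose rows are a, b, c.
    static float det3(const Vec3& a, const Vec3& b, const Vec3& c) {
        return a.x * (b.y * c.z - b.z * c.y)
             - a.y * (b.x * c.z - b.z * c.x)
             + a.z * (b.x * c.y - b.y * c.x);
    }

    // L[0..2]: known unit light directions (rows of the lighting matrix),
    // I[0..2]: intensities measured at one pixel under each light.
    // Solves L * (albedo * n) = I by Cramer's rule and normalises the result.
    Vec3 photometricStereoNormal(const Vec3 L[3], const float I[3], float* albedoOut) {
        float D = det3(L[0], L[1], L[2]);   // light directions must not be coplanar (D != 0)

        Vec3 gx0 = { I[0], L[0].y, L[0].z }, gx1 = { I[1], L[1].y, L[1].z }, gx2 = { I[2], L[2].y, L[2].z };
        Vec3 gy0 = { L[0].x, I[0], L[0].z }, gy1 = { L[1].x, I[1], L[1].z }, gy2 = { L[2].x, I[2], L[2].z };
        Vec3 gz0 = { L[0].x, L[0].y, I[0] }, gz1 = { L[1].x, L[1].y, I[1] }, gz2 = { L[2].x, L[2].y, I[2] };

        Vec3 g = { det3(gx0, gx1, gx2) / D,
                   det3(gy0, gy1, gy2) / D,
                   det3(gz0, gz1, gz2) / D };

        float albedo = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
        if (albedoOut) *albedoOut = albedo;
        if (albedo < 1e-8f) return { 0.0f, 0.0f, 1.0f };   // degenerate (black) pixel
        return { g.x / albedo, g.y / albedo, g.z / albedo };
    }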

Volumetric particles

I'm toying with the idea of volumetric particles. By 'volumetric' I don't mean an actual 3D model per particle (that's usually more expensive and harder to blend with other particles). What I mean is 2D particles that look as close as possible to volumetric ones.
Right now what we have tried is particles with an additional local-Z texture (spherical, for example), and we compute the alpha transparency from the combination of the texture's alpha value and the closeness in Z to the scene behind, which is improved by the fact that the particle does not have a single planar Z.
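If I understand the setup correctly, that fade works out to something like the following per-texel computation; all names and the fade-distance parameter are assumptions for illustration.

    #include <algorithm>

    float softParticleAlpha(float texAlpha,       // alpha sampled from the diffuse/alpha texture
                            float localZ,         // [0,1] spherical-offset sample from the local-Z texture
                            float particleRadius,
                            float billboardDepth, // view-space depth of the billboard plane
                            float sceneDepth,     // view-space depth of the opaque geometry behind
                            float fadeDistance)   // distance over which the particle fades out
    {
        // Push the texel toward the viewer according to the local-Z texture, so the
        // particle behaves like a volume rather than a flat card.
        float texelDepth = billboardDepth - localZ * particleRadius;
        // Fade out as the texel approaches (or passes behind) the scene geometry.
        float fade = std::clamp((sceneDepth - texelDepth) / fadeDistance, 0.0f, 1.0f);
        return texAlpha * fade;
    }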
I think a cool addition would be interaction with lighting (and shadows as well), but the question is what the lighting formula would look like, taking transparency into account (let's assume we are talking about smoke and dust/clouds, not additive blending). Any suggestions would be welcome.
I also thought about adding normals, so I could actually squeeze everything into two textures:
Diffuse & Alpha texture.
Normal & 256 level precision Z channel texture.
I ask this question to see what other directions can be thought of and to get your ideas regarding the proper lighting equation that might be used.
It sounds like you are asking for information on techniques for the simulation of participating media: "Participating media may absorb, emit and/or scatter light. The simplest participating medium only absorbs light. That means that light passing through the medium is attenuated depending on the density of the medium."
Here are some links to example images and to Frisvad, Christensen and Jensen's SIGGRAPH 2007 paper (including the PDF).
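For the absorption-only case quoted above, the attenuation is the Beer-Lambert transmittance. A minimal sketch, with illustrative parameter names:

    #include <cmath>

    // Light travelling a distance d through a medium with extinction coefficient
    // sigmaT and (roughly constant) density is attenuated by this factor.
    float transmittance(float sigmaT, float density, float distance) {
        return std::exp(-sigmaT * density * distance);
    }

    // e.g. attenuatedRadiance = transmittance(sigmaT, density, d) * incomingRadiance;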
A nice paper on using spherical billboards to represent volumetric effects:
http://www.iit.bme.hu/~szirmay/firesmoke_link.htm
Doesn't handle participating media, though.
See Volume Rendering and Voxel.

Resources