Can you please point me to a reference that will help me understand how scanline-based rendering engines work?
I want to implement a 2D rendering engine that can support region-based clipping, basic shape drawing and filling with anti-aliasing, and basic transformations (perspective, rotation, scaling). I need algorithms that prioritize performance over quality, because I want to implement it for embedded systems with no FPU.
I'm probably showing my age, but I still love my copy of Foley, van Dam, Feiner, and Hughes (The White Book).
Jim Blinn had a great column that's available as a book called Jim Blinn's Corner: A Trip Down the Graphics Pipeline.
Both of these are quite dated now, and aside from the principles of 3D geometry, they're not very useful for programming today's powerful pixel pushers.
OTOH, they're probably just perfect for an embedded environment with no GPU or FPU!
Here is a good series of articles by Chris Hecker that covers software rasterization:
http://chrishecker.com/Miscellaneous_Technical_Articles
And here is a site that talks about and includes code for a software rasterizer. It was written for a system that does not have an FPU (the GP2X) and includes source for a fixed point math library.
http://www.trenki.net
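To give a feel for what that kind of fixed-point layer looks like, here is a minimal Q16.16 sketch in C++ (this is my own illustration of the general idea, not Trenki's actual library; the `fx_*` names are made up):

```cpp
#include <cstdint>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits.
typedef int32_t fx;

static const fx FX_ONE = 1 << 16;

static inline fx  fx_from_int(int v) { return (fx)v << 16; }
static inline int fx_to_int(fx v)    { return (int)(v >> 16); }

// Widen to 64 bits for the intermediate product to avoid overflow.
static inline fx fx_mul(fx a, fx b) { return (fx)(((int64_t)a * b) >> 16); }
static inline fx fx_div(fx a, fx b) { return (fx)(((int64_t)a << 16) / b); }

// Rotate a 2D point by an angle whose sine and cosine are supplied in fixed
// point; on a real embedded target you would read them from a precomputed
// table rather than calling any floating-point trig function.
static inline void fx_rotate(fx x, fx y, fx s, fx c, fx* ox, fx* oy)
{
    *ox = fx_mul(x, c) - fx_mul(y, s);
    *oy = fx_mul(x, s) + fx_mul(y, c);
}
```

Scaling is just an fx_mul per coordinate, and the same pattern extends to the edge-stepping inner loops of a scanline rasterizer.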
I'm not sure about the rest, but I can help you with fast scaling and 2D rotation for ARM (written in assembly language). Check out a demo:
http://www.modaco.com/content/smartphone-software-games/291993/bbgfx-2d-graphics-library-beta/
L.B.
Related
As the title says, I'd like to program a 3D game (probably a BattleZone clone), but without the use of an API like OpenGL or DirectX. At heart, I'd just like to learn how to draw basic 3D shapes to the screen and manipulate them. I don't care if it looks like crap. I've used OpenGL to achieve similar ends before, but it didn't really teach me about these topics.
The problem is, I have no idea where to start. I downloaded the Doom source code, but it's a bit over my head. Although I've programmed a bit, graphical matters are very much out of my depth.
I'd be very grateful if anyone could offer links or code (in any language) that would help me along in my purpose.
Sounds like an exciting project. I did something similar in the late 90's. Before OpenGL and DirectX became popular, there were a ton of great books on the subject.
Fundamentally, you will have to learn how to:
Represent 3D geometry
Transform that geometry (translate and rotate)
Project that geometry onto a 2D screen.
Each of those major topics has many sub-topics. For example, complex objects can be constructed from a number of polygons; you may want to limit polygons to triangles only, or support other polygons; and you may want to load common model formats (e.g. .obj files) so that you can create models with off-the-shelf tools.
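To make the last two steps concrete, here is a bare-bones sketch (my own toy code, not taken from any of the books below) of transforming a point and projecting it with a simple perspective divide:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Step 2 (transform): rotate a point around the Y axis by `angle` radians.
Vec3 rotateY(Vec3 p, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}

// Step 3 (project): map a camera-space point onto the screen.
// `focal` controls the field of view; (cx, cy) is the screen centre in pixels.
// Assumes p.z > 0, i.e. the point is in front of the camera and not clipped.
void project(Vec3 p, float focal, float cx, float cy, float* sx, float* sy)
{
    *sx = cx + focal * p.x / p.z;
    *sy = cy - focal * p.y / p.z;   // flip Y because screen Y grows downwards
}
```

Wireframe rendering is then just projecting each vertex of a model and drawing lines between the projected points.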
The topics are way too broad for a detailed answer here. Whole books are written on the subject, including
Black Art of 3D Game Programming (Book, amazingly still available)
For a good introduction to the general topics, have a look at:
http://en.wikipedia.org/wiki/3D_projection
http://en.wikipedia.org/wiki/Orthographic_projection
http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection
Doom, which you have already looked at, used a restricted 2.5D approach (sector-based rendering driven by a BSP tree) and cannot render arbitrary 3D shapes; for example, you will not find a bridge in Doom that you can walk under.
I have the second edition of Computer Graphics: Principles and Practice in C. It uses SRGP (Simple Raster Graphics Package) and SPHIGS, which is built on top of SRGP; if you look up articles and papers on graphics research, you'll see that both of these libraries are used a lot, and they are far more direct and low-level than the APIs you mentioned. I'm having a hard time locating them, so if you find them, please post a link. Note that the third edition uses WPF, so I cannot guarantee much as to its usefulness, and I don't know if the second edition is still in print, but I have found numerous references to the book and it has its own page on Wikipedia.
Another option is the Win32 API, which again does not provide much in terms of rendering, but makes it trivial to draw dots and lines onto a window. I have written a few tutorials on it, but I didn't cover drawing pixels and lines, so they will only be useful if you have trouble with the basics of setting up a window. Note that it is not intended for real-time rendering, so it may be slow.
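For what it's worth, the bare minimum to put pixels on a Win32 window looks roughly like the sketch below, using plain GDI SetPixel, which is slow but fine for getting started (for real-time work you would render into your own buffer and blit it, e.g. with StretchDIBits):

```cpp
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        // Plot a short red diagonal line one pixel at a time.
        for (int i = 0; i < 100; ++i)
            SetPixel(dc, 50 + i, 50 + i, RGB(255, 0, 0));
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("PixelDemo");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("PixelDemo"), TEXT("Pixels"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}
```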
Finally, you can look at X11 programming, the windowing foundation of most Unix-like systems. I haven't found the libraries for Windows, but then I didn't invest much time looking; I know it is available through Cygwin and on Linux in general. I believe it would be very interesting to look at the core of 2D graphics while you're already looking under the hood of 3D graphics.
After a while of 3D modelling and enjoying ZBrush's impeccable performance and numerous features, I thought it would be great OpenGL practice for me to create something similar, just a small sculpting tool. Sure enough, I got it done, though of course I couldn't match ZBrush's performance, seeing as a brigade of well-paid professionals outmatches a hobbyist. At first I just assumed ZBrush was heavily hardware-accelerated; imagine my surprise when I found out that it's not, and furthermore that it uses neither OpenGL nor Direct3D.
This made me want to learn graphics at a lower level, but I have no clue where to start. How are graphics libraries made, and how does one access the framebuffer without using OpenGL? How much of a hassle would it be to display just a single pixel without any preexisting tools, and what magic gives ZBrush its performance?
I'd appreciate any info on any of these questions, as well as a recommendation for a book that covers any of these topics. I'm already reading Michael Abrash's Graphics Programming Black Book, but it's not really addressing these matters, or I just haven't reached that point yet.
Thank you in advance.
(Please don't post answers like "just use OpenGL" or "learn math"; this seems to be the reaction everywhere I post this question, but those replies are off topic.)
ZBrush is godly in terms of performance, but I think that's because it was made by image processing experts with assembly expertise (the sheer amount of assembly code is also probably why they were almost 20 years late in porting to 64-bit). It actually started out without any kind of 3D sculpting and was just a 2.5D "pixol" painter: you could spray pixels around on a canvas, with some depth and lighting attached to each "pixol". It didn't get sculpting until around ZB 1.5 or so. Even then it impressed people with how fast you could spray these 2.5D "pixols" around on the canvas, back when a similarly sized brush painting flat pixels in Photoshop or Corel Painter would have brought frame rates to a stutter. So they were cutting-edge in performance even before they tackled anything 3D, when they were doing nothing more than spraying pixels on a canvas; that tends to require some elite micro-optimization wizardry.
One of the things to note about ZB when you're sculpting 20-million-polygon models with it is that it doesn't even use GPU rasterization. All the rasterization is done on the CPU. As a result it doesn't benefit from a beefy video card with lots of VRAM supporting the latest GLSL/HLSL versions; all it needs is something that can plot colored pixels to a screen. This is probably one of the reasons it uses so little memory compared to, say, Mudbox, since it doesn't have to inflate memory usage with, say, VBOs (which tend to double system memory usage while also requiring the data to be stored on the GPU).
As for how you get started with this stuff, IMO a good way to get your feet wet is to write your own raytracer. I don't think ZBrush uses, say, scanline rasterization, whose cost tends to rise in direct proportion to the number of polygons, because they reduce the number of pixels being rendered at certain times, such as while you rotate the model. That suggests that whatever technique they're using for rasterization depends for its performance more on the number of pixels being rendered than on the number of primitives (vertices/triangles/lines/voxels). Raytracing fits those characteristics. Also, IMHO a raytracer is actually easier to write than a scanline rasterizer, since you don't have to deal with as many tricky cases, and elimination of overdraw comes free of charge.
Once you have software where the cost of an operation is proportional to the number of pixels being rendered rather than to the amount of geometry, you can throw a boatload of polygons at it, as they did all the way back when they demonstrated 20-million-polygon sculpting at SIGGRAPH with silky frame rates, almost 17 years ago.
However, it's very difficult to get a raytracer to update interactively in response to mesh data that is not only being sculpted interactively but sometimes also having its topology changed interactively. So chances are they are using some data structure other than the standard BVH or KD-tree popular in raytracing: a data structure well-suited to dynamic meshes that are not only deforming but also changing topology. Maybe they can voxelize and revoxelize (or "pixolize" and "repixolize") meshes on the fly really quickly and cast rays directly into the voxelized representation. That would start to make sense given how their technology originally revolved around these 2.5D "pixols" with depth.
Anyway, I'd suggest raytracing for a start, even if it only gets your feet wet and gets you nowhere near ZB's performance just yet (it's still a very good introduction to translating 3D geometry and lighting into an attractive 2D image). You can find minimal examples of raytracers on the web written in just a hundred lines of code. Most of the work in building a raytracer is typically performance, plus handling a rich diversity of shaders/materials. You don't necessarily need to bother with the latter, and ZBrush doesn't so much either (it uses those dirt-cheap matcaps for modeling). Then you'll likely have to invent some kind of data structure that is well-suited to mesh changes, and micro-tune the hell out of it, to start getting on par with ZB. That software is really on a whole different playing field.
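To give a feel for how small the core of one of those minimal raytracers is, here is a rough single-sphere sketch of my own (one ray per pixel, simple diffuse shading, no acceleration structure), writing a greyscale PGM image to stdout:

```cpp
#include <cmath>
#include <cstdio>

struct Vec { float x, y, z; };
Vec   sub(Vec a, Vec b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the nearest hit, or -1 on a miss.
float hitSphere(Vec orig, Vec dir, Vec center, float radius)
{
    Vec oc = sub(orig, center);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return -1.0f;
    return (-b - std::sqrt(disc)) / (2.0f * a);
}

int main()
{
    const int W = 256, H = 256;
    Vec center = { 0, 0, -3 };
    Vec light  = { 0.577f, 0.577f, 0.577f };              // unit light direction
    std::printf("P2\n%d %d\n255\n", W, H);
    for (int y = 0; y < H; ++y)
    {
        for (int x = 0; x < W; ++x)
        {
            // One ray per pixel through a simple pinhole camera at the origin.
            Vec dir = { (x - W / 2) / (float)W, (H / 2 - y) / (float)H, -1.0f };
            float t = hitSphere({ 0, 0, 0 }, dir, center, 1.0f);
            int shade = 0;
            if (t > 0.0f)
            {
                // Diffuse term: surface normal dotted with the light direction.
                Vec hit = { dir.x * t, dir.y * t, dir.z * t };
                Vec n   = sub(hit, center);
                float d = dot(n, light) / std::sqrt(dot(n, n));
                if (d > 0.0f) shade = (int)(d * 255.0f);
            }
            std::printf("%d ", shade);
        }
        std::printf("\n");
    }
    return 0;
}
```

Everything past this point (more primitives, shadows, an acceleration structure, and above all making it fast on changing geometry) is where the real work starts.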
I have likewise been inspired by ZB, but I haven't followed in their footsteps directly; I use the GPU rasterizer and OpenGL instead. One of the reasons I find it difficult to explore doing all of this on the CPU, as ZB does, is that you lose the benefit of so much industrial research into realtime lighting models and other techniques that game engines, NVIDIA, and AMD have come up with, all of which rely on GPU-side processing. There's 99% of the 3D industry, and then there's ZBrush in its own little corner doing things no one else is doing; you need a lot of spare time, and maybe a lot of balls, to abandon the rest of the industry and try to follow in ZB's footsteps. Still, I always wish I could find some spare time to explore a pure CPU rasterizing engine like ZB's, since they remain unmatched when your goal is to interact directly with ridiculously high-resolution meshes.
The closest I've gotten to ZB performance was sculpting 2-million-polygon meshes at over 30 FPS back in the late 90s on an Athlon T-Bird at 1.2 GHz with 256MB of RAM, and that was after 6 weeks of intense programming and repeatedly going back to the drawing board, in a very simplistic demo; it was a rare occasion when my company gave me that much R&D time to explore what ZB was doing. Even then, ZB was handling 5 times that geometry at the same frame rates, on the same hardware, and using half the memory. I couldn't even get close, though I did end up with a newfound respect and admiration for the programmers at Pixologic. I also had to insist to my company that the research was worth doing; some of the people there thought ZBrush would never become anything noteworthy and would just remain a cutesy artistic application. I thought the opposite, since I saw something revolutionary long before it acquired such an epic following.
A lot of people at the time thought ZB's ability to handle so many polygons was impractical and that you could just paint bump/normal/displacement maps and add whatever details you needed into textures. But that's ignoring the workflow side of things. When you can just work straight with epic amounts of geometry, you get to uniformly apply the same tools and workflow to select vertices, polygons, edges, brush over things, etc. It becomes the most straightforward way to create such a detailed and complex model, after which you can bake out the details into bump/normal/displacement maps for use in other engines that would vomit on 20 million polygons. Nowadays I don't think anyone still questions the practicality of ZB.
[...] but it's not really addressing these matters or I just haven't reached that point yet.
As a caveat, no one has published anything on how to achieve performance rivaling ZB's. Otherwise there would be a number of applications rivaling its performance and features when it comes to sculpting, DynaMesh, ZSpheres, and so on, and it wouldn't be so amazingly special. You definitely need your share of R&D to come up with anything close to it, but I think raytracing is a good start. After that you'll likely need to come up with some really interesting ideas for algorithms and data structures, in addition to a lot of micro-tuning.
What I can say with a fair degree of confidence is that:
They have some central data structure to accelerate rasterization that can update extremely quickly in response to changes the user makes to a mesh (including topological ones).
The cost of rasterization is more in proportion to the number of pixels rendered rather than the size of the 3D input.
There's some micro-optimization wizardry in there, including straight up assembly coding (I'm quite certain ZB uses assembly coding since they were originally requiring programmers to have both assembly and C++ knowledge back when they were hiring in the 2000s; I really wanted to work at Pixologic but lacked the prerequisite assembly skills).
Whatever they use is pretty light on memory, given how dynamic the models are. Last time I checked, they used less than 100MB per million polygons, even when loading production models with texture maps. Competing 3D software, with the exception of XSI, can take over a gigabyte for the same data. XSI's gigapoly core uses even less memory than ZB, but it is ill-suited to manipulating such data and slows down to a crawl (it was probably optimized in a way that only suits static data, such as offloading data to disk or even using some expensive forms of compression).
If you're really interested in exploring this, I'd be interested to see what you can come up with. Maybe we can exchange notes. I've devoted much of my career just being interested in figuring out what ZB is doing, or at least coming up with something of my own that can rival what it's doing. For just about everything else I've tackled over the years from raytracing to particle simulations to fluid dynamics and video processing and so forth, I've been able to at least come up with demos that rival or surpass the performance of the competition, but not ZBrush. ZBrush remains that elusive thorn in my side where I just can't figure out how they manage to be so damned efficient at what they do.
If you really want to crawl before you even begin to walk (I think raytracing is a decent enough start, but if you want to start out even more fundamentally), then a natural progression is to first just focus on image processing: filtering images, painting them with brushes, and so on, along with some support for basic vector graphics, like a miniature Photoshop/Illustrator. Then work your way up to rasterizing some basic 3D primitives, such as a wireframe of a model rendered with Wu line rasterization and some basic projection functions. Then work your way towards rasterizing filled triangles without any lighting or texturing; at that point I think you'll get closer to ZBrush by focusing on raytracing rather than on scanline rasterization with a depth buffer, though doing a little of the latter is a useful exercise anyway. Then work on rendering lit triangles, maybe starting with direct lighting and a single light source, just computing a luminance from the angle of the normal relative to the light source. Then work towards textured triangles, using barycentric coordinates to figure out which texels to render. Then work towards indirect lighting and multiple light sources. That should be plenty of homework to give you a fairly comprehensive grasp of the fundamentals of rasterization.
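As a concrete illustration of one rung on that ladder (filled triangles with barycentric interpolation), the inner loop can be as simple as this rough sketch (no clipping, no perspective correction, no depth buffer; the single interpolated value could be a luminance from your lighting step or a texture coordinate):

```cpp
#include <algorithm>
#include <cstdint>

struct Pt { float x, y; };

// Twice the signed area of triangle (a, b, c); the sign says which side of ab point c is on.
static float edge(Pt a, Pt b, Pt c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Fill a 2D triangle into an 8-bit framebuffer, interpolating one value per
// vertex (va, vb, vc) with barycentric weights.
void fillTriangle(uint8_t* fb, int width, int height,
                  Pt a, Pt b, Pt c, float va, float vb, float vc)
{
    int minx = std::max(0, (int)std::min({ a.x, b.x, c.x }));
    int maxx = std::min(width - 1, (int)std::max({ a.x, b.x, c.x }));
    int miny = std::max(0, (int)std::min({ a.y, b.y, c.y }));
    int maxy = std::min(height - 1, (int)std::max({ a.y, b.y, c.y }));

    float area = edge(a, b, c);
    if (area == 0.0f) return;                    // degenerate triangle

    for (int y = miny; y <= maxy; ++y)
        for (int x = minx; x <= maxx; ++x)
        {
            Pt p = { x + 0.5f, y + 0.5f };
            float w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
            // The pixel is inside when all three edge functions share the sign of the area.
            bool inside = (area > 0) ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                     : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (!inside) continue;
            float v = (w0 * va + w1 * vb + w2 * vc) / area;   // barycentric interpolation
            fb[y * width + x] = (uint8_t)std::max(0.0f, std::min(255.0f, v));
        }
}
```

A production rasterizer would step the edge functions incrementally instead of re-evaluating them per pixel, but the idea is the same.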
Now, once you get to raytracing, I'm actually going to recommend one of the least efficient data structures for the job in the usual sense: octrees, not BVHs or KD-trees, mainly because I believe octrees are probably closer to allowing what ZB allows. Your bottlenecks in this context aren't about rendering the most beautiful images with complex diffuse materials, indirect lighting, and subpixel samples for antialiasing; they're about handling a boatload of geometry, with simple lighting, simple shaders, and one sample per pixel, that is changing on the fly, including topologically. Octrees seem a little better suited to that case than KD-trees or BVHs as a starting point.
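A starting-point octree really can be as simple as the sketch below: fixed depth, triangles referenced by index, each triangle inserted into every leaf its bounding box touches (my own illustration of the idea, certainly nothing like whatever ZB actually does):

```cpp
#include <memory>
#include <vector>

struct AABB { float lo[3], hi[3]; };

static bool overlaps(const AABB& a, const AABB& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.hi[i] < b.lo[i] || b.hi[i] < a.lo[i]) return false;
    return true;
}

// Box of octant `i` of `b` (bit 0 selects the x half, bit 1 the y half, bit 2 the z half).
static AABB octant(const AABB& b, int i)
{
    AABB c;
    for (int axis = 0; axis < 3; ++axis)
    {
        float mid = 0.5f * (b.lo[axis] + b.hi[axis]);
        bool upper = (i >> axis) & 1;
        c.lo[axis] = upper ? mid : b.lo[axis];
        c.hi[axis] = upper ? b.hi[axis] : mid;
    }
    return c;
}

struct OctreeNode
{
    AABB box;
    std::unique_ptr<OctreeNode> child[8];   // null below the chosen depth
    std::vector<int> tris;                  // triangle indices, filled in at the leaves
};

// Insert a triangle (by its bounding box) into every leaf it touches.
// A fixed depth keeps rebuilds cheap and predictable when the mesh changes every frame.
void insert(OctreeNode& node, int tri, const AABB& triBox, int depth)
{
    if (depth == 0) { node.tris.push_back(tri); return; }
    for (int i = 0; i < 8; ++i)
    {
        AABB cb = octant(node.box, i);
        if (!overlaps(cb, triBox)) continue;
        if (!node.child[i])
            node.child[i].reset(new OctreeNode{ cb });
        insert(*node.child[i], tri, triBox, depth - 1);
    }
}
```

Ray traversal then descends only into octants the ray actually crosses, and when the sculpt changes you can rebuild or patch just the affected subtrees rather than the whole structure.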
One of the problems with ignoring the fundamentals these days is that a lot of young developers have lost that connection from, say, triangle to pixel on the screen. So if you don't want to take such rasterization and projection for granted, then your initial goal is to project 3D data into a 2D coordinate space and rasterize it.
If you want a book that starts at a low level, with framebuffers and such, try Computer Graphics: Principles and Practice, by Foley, van Dam, et al. It is an older, traditional text, but newer books tend to have a higher-level view. For a more modern text, I can also recommend 3D Computer Graphics by Alan Watt. There are plenty of other good introductory texts available -- these are just two that I am personally familiar with.
Neither of the above books is tied to OpenGL -- if I recall correctly, they include the specific math and algorithms necessary to understand and implement 3D graphics from the bottom up.
Given images from certain viewpoints, is there some software out there that can help me interpolate the views (i.e., viewpoint interpolation software)?
Thanks,
View interpolation is an ill-posed and very difficult problem which, in general, has no optimal solution in real-world scenarios.
This is for several reasons, some of which include:
Wide Baseline Matching
Disparity Estimation
Occlusion Handling (Folds and Holes)
The quality of the outcome highly depends on your video footage, especially on how close together the cameras are.
Nevertheless, research is ongoing. Dyer and Seitz, for example, achieved nice results on constrained examples:
http://homes.cs.washington.edu/~seitz/vmorph/vmorph.htm
Stich et al. from TU Braunschweig showed some amazing results with their Virtual Camera system, which is probably the thing you are looking for:
http://www.youtube.com/watch?v=uqKTbyNoaxE
And finally, for soccer enthusiasts:
http://www.youtube.com/watch?v=cUK1UobhCX0
http://www.youtube.com/watch?v=UePrOp2s31c
As a starting point, look for Image Morphing and View Morphing.
If you want, I can provide some more papers on the topic.
Good luck! :)
EDIT: Also, if you're interested, here's my own work on spatial and temporal interpolation of multi-view image sequences.
http://tobiasgurdan.de/research/
So I stumbled upon this "new" graphics engine/technology called Unlimited Detail.
This seems pretty interesting, provided it's real and not a fake.
They have some videos explaining the technology but they only scratch the surface.
What do you think about it? Is it programmatically possible?
Or is it just a scam for investors?
Update:
Since the only answer was based on voxels, I have to copy this from their site:
Unlimited Detail's method is very different from any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine.
The underlying technology is related to something called sparse voxel octrees (see, e.g., this paper), which aren't anything incredibly amazing. What the video doesn't tell you is that these are not at all suited for things that need to be animated, so they're of limited use for anything that uses procedural animation (e.g., all ragdoll physics, etc.). So they're very inflexible. You can get great detail, but you get it in a completely static world.
A rough summary of where things stand with this technology in mainstream games is here. You will also want to check out Samuli Laine's work; he's a Finnish researcher who is focusing a great deal of his attention on this subject and is unlocking some of the secrets to implementing it well.
Update: Yes, the website says it's not "voxel-based". I suspect this is merely an issue of semantics, however, in that what they're using are essentially voxels, but because it's not exactly a voxel they feel safe in being able to claim that it's not voxel-based. In any case, the magic isn't in how similar to a voxel it is -- it's how they select which voxels to actually show. This is the primary determinant of speed.
Right now, there is no incredibly fast way to show voxels (or something approximating a voxel). So either they have developed a completely new, non-peer-reviewed method for filtering voxels (or something like them), or they're lying.
You might find more detail in the following patents:
"A Computer Graphics Method For Rendering Three Dimensional Scenes"
"A Method For Efficent Streaming Of Octree Data For Access"
- Each voxel (they call it a "node") is represented as a single bit, along with information about voxels at a finer level of detail (a rough sketch of such a node layout follows the links below).
The full-text can be viewed online here:
https://www.lens.org/lens/search?q=Euclideon+Pty+Ltd&l=en
or
http://worldwide.espacenet.com/searchResults?submitted=true&query=EUCLIDEON
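Paraphrasing that description into code, a sparse-voxel-octree node that spends one bit per child octant, plus a link to where the finer-detail children live, might look roughly like this (purely my reading of the patents, not Euclideon's actual format):

```cpp
#include <cstdint>
#include <vector>

// One node of a sparse voxel octree. The 8-bit mask says which of the eight
// child octants contain anything at all; the occupied children are stored
// contiguously in one flat pool, starting at `firstChild`.
struct SvoNode
{
    uint8_t  childMask;    // bit i set => octant i exists at a finer level of detail
    uint32_t firstChild;   // index of that octant's first child node in the pool
};

// Number of occupied octants before octant `i`; used to find a child's slot in the pool.
static int childOffset(uint8_t mask, int i)
{
    int count = 0;
    for (int b = 0; b < i; ++b)
        if (mask & (1u << b)) ++count;
    return count;
}

// Walk down the tree along a path of octant indices (coarse to fine). The walk
// stops as soon as it hits empty space, which is what makes traversal cheap and
// why the claimed cost tracks the number of pixels rather than the scene size.
bool occupied(const std::vector<SvoNode>& pool, const int* path, int depth)
{
    uint32_t node = 0;                                    // index of the root node
    for (int d = 0; d < depth; ++d)
    {
        const SvoNode& n = pool[node];
        int oct = path[d];
        if (!(n.childMask & (1u << oct))) return false;   // empty octant: nothing to render
        node = n.firstChild + childOffset(n.childMask, oct);
    }
    return true;
}
```

The hard part, as noted above, isn't the data structure itself but how you decide which of these nodes to visit per screen pixel.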
Hey guys, I would like to develop a light/laser show editor and simulator, and for this of course I am going to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on, given the project I am working on. I am new to graphics programming, so I don't know much about it, but one thing I imagine I might need to look into is volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere that you can just draw a cylinder and apply a shader to it; I would like to confirm that this is the way to do it.
Given that this seems like a big project, I was thinking about starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time; I believe it is eight. Does this only apply to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional, such as head-lights or lasers, what would be a different way to render these? Is that what volumetric lighting would be?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere toward the viewer's eyepoint, or as simple as drawing a line. Try the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using NVIDIA Cg, for example) and volumetric shading, as you mentioned; post-processing glow effects may also be useful. In OpenGL's fixed-function pipeline there is a limit of eight light sources in a scene, but you can work around that limit by doing your own shading logic.
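As a very rough illustration of the 'as simple as drawing a line' end of that spectrum, in legacy fixed-function OpenGL a beam can be faked with a few additively blended line passes (a sketch only; it assumes a context and camera are already set up, wide lines may not be supported everywhere, and the XNA/HLSL route would express the same idea with a shader instead):

```cpp
#include <GL/gl.h>

// Draw a single "laser" segment as a few additively blended, progressively
// thinner and brighter line passes, which fakes a glowing core.
void drawLaser(float x0, float y0, float z0,
               float x1, float y1, float z1,
               float r, float g, float b, float width)
{
    glPushAttrib(GL_ENABLE_BIT | GL_LINE_BIT | GL_CURRENT_BIT);

    glDisable(GL_LIGHTING);                 // the beam is self-lit, ignore scene lights
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);      // additive blending: overlapping beams get brighter

    for (int pass = 0; pass < 3; ++pass)
    {
        glLineWidth(width * (3 - pass));    // wide and faint first, thin and bright last
        glColor4f(r, g, b, 0.15f + 0.3f * pass);
        glBegin(GL_LINES);
        glVertex3f(x0, y0, z0);
        glVertex3f(x1, y1, z1);
        glEnd();
    }

    glPopAttrib();
}
```

Because the blending does all the work, none of this counts against the eight-light limit; that limit only applies to the fixed-function lights themselves.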
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects, so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects with them. Once you can reproduce the type of laser lighting you want, you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my own work, but I'm not sure how well straight cylinders will fit your purpose.)
Although it's faster, I think it's best not to use vanilla hardware lighting because of its limitations. Pixel shaders can help with your task. You may also want to choose OpenGL for its portability and the clarity of its rendering methods. I worked with Direct3D for several years before switching to OpenGL; OpenGL's functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you'd like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.