Why are shaders called shaders?

I can't find the word in any dictionary, neither in regular ones nor in etymological ones. Only Wikipedia says:
The modern use of "shader" was introduced to the public by Pixar with their "RenderMan Interface Specification, Version 3.0" originally published in May 1988.
I know it's a program that runs on the GPU, but how did the Pixar guys come up with the name shader for it?

Although shader is not part of most dictionaries, the noun shading should be. dictionary.com, for example, defines it as
a slight variation or difference of color, character, etc.
This is exactly what a shader does: it allows the programmer to add variations of color to a surface.

Shading is a very important factor in human vision: it is what allows us to recognize shapes, and that is because we live on a planet full of shading, produced mainly by the Sun.
Realism needs shading, and the Sun is not the only light source. Conclusion: good coloring implies shading from one or several lights, and "shader" is the natural name for a program that does it.


gl_FragCoord - insufficient definition in ES Shading Language?

It appears to me that gl_FragCoord is not sufficiently defined in the ES shading language specification: here
What is missing, in my opinion, is a specification of where pixel centers are supposed to lie: at integer coordinates or right between them. In contrast, the regular Shading Language Specification has this nailed down for gl_FragCoord: here
Even worse, I get mixed results on different platforms: an ARM Mali T604 seems to follow the .5 convention, whereas an Adreno 330 seems to put the pixel centers at full integers (both tested on Android 4.4.2).
Can someone enlighten me on what's best practice here?
Going through the actual specification document, I found this:
1.1.4 Changes from OpenGL GLSL 3.3:
Removed:
* Layout qualifiers: index, origin_upper_left and pixel_center_integer
I don't know why these qualifiers were omitted from OpenGL ES, and I couldn't find a clear statement of which convention is the correct one (or whether it's left for implementations to decide), although I think the traditional convention is half-integer coordinates. In any case, it looks like you'll have to add some code, e.g. to round the values down, to get consistent behaviour.
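A minimal sketch of that workaround, assuming you want to collapse both conventions onto half-integer pixel centers (the same floor()-based expression can be applied to gl_FragCoord.xy inside a fragment shader):

```cpp
#include <cmath>

// Normalize a fragment coordinate to the half-integer (pixel-center)
// convention regardless of which convention the driver uses:
// both 12.0 and 12.5 map to 12.5.
float to_pixel_center(float c) {
    return std::floor(c) + 0.5f;
}
```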
By the way, the man pages are not to be trusted: they tend to omit a lot of things and contain errors. The specification is always the authority.

Programming a 3d game without the use of a graphics API

As the title says, I'd like to program a 3D game (probably a BattleZone clone), but without the use of an API like OpenGL, DirectX, and the like. At the heart of the matter, I'd just like to learn how to draw basic 3D shapes to the screen and manipulate them. I don't care if it looks like crap. I've used OpenGL to achieve similar ends before, but really didn't learn about these topics.
The problem is, I have no idea where to start. I downloaded the Doom source code, but it's a bit over my head. Although I've programmed a bit, graphical matters are very much out of my depth.
I'd be very grateful if anyone could offer links or code (in any language) that would help me along in my purpose.
Sounds like an exciting project. I did something similar in the late '90s. Before OpenGL and DirectX became popular, there were a ton of great books on the subject.
Fundamentally, you will have to learn how to:
Represent 3D geometry
Transform that geometry (translate and rotate)
Project that geometry onto a 2D screen.
Each of those major topics has many sub-topics (for example, complex objects can be constructed from a number of polygons; you may want to limit polygons to triangles only, or support other polygons; you may want to load common model formats, e.g. .obj files, so that you can create models with off-the-shelf tools).
The topics are way too broad for a detailed answer here. Whole books are written on the subject, including
Black Art of 3D Game Programming (Book, amazingly still available)
For a good introduction to the general topics, have a look at:
http://en.wikipedia.org/wiki/3D_projection
http://en.wikipedia.org/wiki/Orthographic_projection
http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection
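To make the projection step concrete, here is a minimal sketch of a perspective projection in C++. The names and the simple pinhole model are illustrative, not taken from any of the sources above; a real renderer would apply model and camera transforms first:

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// Project a camera-space point onto a screen of the given size.
// 'focal' plays the role of the distance to the projection plane.
bool project(Vec3 p, double focal, int width, int height,
             double* sx, double* sy) {
    if (p.z <= 0.0) return false;            // behind the camera: clip
    *sx = width  / 2.0 + focal * p.x / p.z;  // perspective divide
    *sy = height / 2.0 - focal * p.y / p.z;  // flip y: screen y grows downward
    return true;
}

int main() {
    double sx, sy;
    if (project({1.0, 1.0, 5.0}, 256.0, 640, 480, &sx, &sy))
        std::printf("screen = (%.1f, %.1f)\n", sx, sy);  // (371.2, 188.8)
}
```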
Doom, which you already looked at, used a special optimization called heightfield rendering and does not allow for rendering of arbitrary 3D shapes (e.g., you will not find a bridge in Doom that you can walk under).
I have the second edition of Computer Graphics: Principles and Practice in C, which uses SRGP (the Simple Raster Graphics Package) and SPHIGS, a wrapper around SRGP. If you look up articles and papers on graphics research you'll see that both of these libraries are used a lot, and they are far more direct and low-level than the APIs you mentioned. I'm having a hard time locating them, so if you find them, please post a link. Note that the third edition is in WPF, so I cannot guarantee much as to its usefulness, and I don't know whether the second edition is still in print, but I have found numerous references to the book, and it has its own page on Wikipedia.
Another solution would be the Win32 API, which again does not provide much in terms of rendering, but makes it trivial to draw dots and lines onto a window. I have written a few tutorials on it, but I didn't cover drawing pixels and lines, so they'll only be useful if you have trouble with the basics of setting up a window. Note that it is not intended for real-time rendering, so it may get slow.
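For instance, a sketch of what drawing with GDI looks like inside a WM_PAINT handler (window creation and the message loop are assumed to exist elsewhere):

```cpp
#include <windows.h>

void onPaint(HWND hwnd) {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    SetPixel(hdc, 100, 100, RGB(255, 0, 0));  // a single red dot
    MoveToEx(hdc, 0, 0, NULL);                // move the pen to the origin
    LineTo(hdc, 200, 150);                    // line from (0,0) to (200,150)
    EndPaint(hwnd, &ps);
}
```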
Finally, you can look at X11 programming, the windowing foundation of most Unix-like systems with a GUI. I haven't found the libraries for Windows, but then again I didn't invest too much time in it. I know it is available under Cygwin and for Linux in general, though, and I believe it would be very interesting to look at this layer of graphics since you're already looking under the hood of 3D graphics.

Why do we still use fixed-function blending operations in D3D11 etc.?

I was looking around trying to understand why we are still using fixed-function blending modes in newer 3D APIs (like D3D11). In D3D10, fixed-function alpha clipping was removed in favor of doing it in shaders. Why? Because shaders are a much more powerful approach in almost any situation.
So why, then, can we not calculate our own blending operations (i.e., sample the texture of the render target we are currently rendering into)? Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
The reason this would be useful is that you could, for example, make refraction shaders run much faster, because you wouldn't have to swap back and forth between two render targets for each refractive object overlay, such as a refractive windowing system for an OS or game UI.
Since this is not a discussion forum, where might be the best place to suggest an idea like this? I would love to see it in D3D12. Or is this already possible in D3D11?
So why, then, can we not calculate our own blending operations
Who says you can't? With shader_image_load_store (and its D3D11 equivalent, unordered access views), you can do pretty much anything you want with images, provided that you follow the rules. That last part is generally what trips people up. Doing a full read/modify/write in a shader, such that later fragment shader invocations don't read the wrong value, is almost impossible in the general case. You have to restrict it by requiring that each rendered object does not overlap with itself, and you have to insert a memory barrier between rendered objects (which can overlap with other rendered objects). Or you use the linked-list approach.
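As a sketch of that barrier rule on the OpenGL side (draw_object is a hypothetical draw call in which no primitive overlaps another primitive of the same draw; assumes a loader exposing GL 4.2+):

```cpp
#include <GL/glew.h>  // any loader that exposes OpenGL 4.2+

void draw_object(int index);  // hypothetical: issues one non-self-overlapping draw

void draw_with_shader_blending(int object_count) {
    for (int i = 0; i < object_count; ++i) {
        draw_object(i);
        // Make this draw's image stores visible to the next draw's loads.
        glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}
```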
But the point is this: with these mechanisms, not only have people implemented blending in shaders, but they've implemented order-independent transparency (via linked lists). Nothing is stopping you from doing what you want right now.
Well, nothing except performance of course. The fixed-function blender will always be faster because it can run in parallel with the fragment shader operations. The blending units are separate hardware from the fragment shaders, so you can be doing blending operations while simultaneously doing fragment shader ops (obviously from later fragments, not the ones being blended).
The read/modify/write mechanism in the blend hardware is designed specifically for blending, while the image_load_store is a more generic mechanism. And while generic may beat specific in the long-term of hardware evolution, for the immediate and near-future, you can expect fixed-function blending to beat image_load_store blending performance-wise every time.
You should use it only when you must. And even then, decide whether you really, really need it.
Is there some hardware design issue in the video card pipelines that make this difficult to accomplish?
Yes, this is actually the case. If one could do blending in the fragment shader, it would introduce possible feedback loops, which really complicates things. Blending is done in a separate hardwired stage for performance and parallelization reasons.

Recommended 3D Programming Aspects for Light/Laser Show Simulator?

Hey guys, I would like to develop a light/laser show editor and simulator, and for this of course I am going to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on given the project I am working on. I am new to graphics programming so I don't know much about it, but for example I imagine something that I might look into would (possibly?) be volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere to just draw a cylinder and apply a shader to it; I would like to confirm that this is the way.
Given that this seems like a big project, I was thinking about starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time, I believe eight. Does this only apply to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional, such as headlights or lasers, what would be a different way to render these? Is that what volumetric lighting would be?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere toward the viewer's eyepoint, or as simple as drawing a line. Try the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using Nvidia Cg, for example) and volumetric shading, as you mentioned; post-processing glow effects may also be useful. For OpenGL's fixed-function pipeline, I believe there is a limit of eight light sources in a scene, but you could conceivably work around this limit by doing your own shading logic.
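For reference, the fixed-function limit can be queried at run time; this small sketch assumes a current legacy OpenGL context:

```cpp
#include <GL/gl.h>
#include <cstdio>

// The spec guarantees at least 8 fixed-function lights (GL_LIGHT0..GL_LIGHT7);
// implementations may report more. Shader-based lighting has no such cap.
void report_light_limit() {
    GLint max_lights = 0;
    glGetIntegerv(GL_MAX_LIGHTS, &max_lights);
    std::printf("fixed-function lights supported: %d\n", max_lights);
}
```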
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects - so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects with them. Once you can reproduce the type of laser lighting you want, you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my work for personal purposes, but I'm not sure how well straight cylinders will fit your purpose.)
Although it's faster, I think it's best not to use vanilla hardware lighting, because of its limitations. Pixel shaders can help with your task. You may also want to choose OpenGL for its portability and its clarity in rendering methods. I worked with Direct3D for several years before switching to OpenGL; OpenGL functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you'd like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.

Polygon Triangulation with Holes

I am looking for an algorithm or library (better) to break down a polygon into triangles. I will be using these triangles in a Direct3D application. What are the best available options?
Here is what I have found so far:
Ben Discoe's notes
FIST: Fast Industrial-Strength Triangulation of Polygons
I know that CGAL provides triangulation but am not sure if it supports holes.
I would really appreciate some opinions from people with prior experience in this area.
Edit: This is a 2D polygon.
To give you some more choices of libraries out there:
Polyboolean. I never tried this one, but it looks promising: http://www.complex-a5.ru/polyboolean/index.html
General Polygon Clipper. This one works very well in practice and does triangulation as well as clipping, and it handles holes: http://www.cs.man.ac.uk/~toby/alan/software/
My personal recommendation: use the tessellation from the GLU (OpenGL Utility Library). The code is rock solid, faster than GPC, and generates fewer triangles. You don't need an initialized OpenGL handle or anything like that to use the lib.
If you don't like the idea of including OpenGL system libs in a DirectX application, there is a solution as well: just download the SGI OpenGL reference implementation code and lift the triangulator from it. It only uses the OpenGL typedef names and a handful of enums; that's it. You can extract the code and make a stand-alone lib in an hour or two.
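Either way, the calling code looks roughly like this sketch (the callback names are hypothetical stubs; each vertex is assumed to be a GLdouble[3]):

```cpp
#include <GL/glu.h>
#include <vector>

typedef void (*GLUcb)();

static void onBegin(GLenum /*type*/) {}       // e.g. GL_TRIANGLES
static void onVertex(void* /*vertexData*/) {} // the pointer passed to gluTessVertex
static void onEnd() {}

// Tessellate one outer contour with one hole; a real program would collect
// the triangles GLU emits between the begin/end callbacks.
void triangulate(std::vector<GLdouble*>& outer, std::vector<GLdouble*>& hole) {
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (GLUcb)onBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (GLUcb)onVertex);
    gluTessCallback(tess, GLU_TESS_END,    (GLUcb)onEnd);

    gluTessBeginPolygon(tess, nullptr);
    gluTessBeginContour(tess);                          // outer boundary
    for (GLdouble* v : outer) gluTessVertex(tess, v, v);
    gluTessEndContour(tess);
    gluTessBeginContour(tess);                          // the hole
    for (GLdouble* v : hole) gluTessVertex(tess, v, v);
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);                            // callbacks fire here
    gluDeleteTess(tess);
}
```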
In general, my advice would be to use something that already works and not start writing your own triangulation.
It is tempting to roll your own if you have read about the ear-clipping or sweep-line algorithms, but the fact is that computational geometry algorithms are incredibly hard to write in a way that is stable, never crashes, and always returns a meaningful result. Numerical roundoff errors will accumulate and kill you in the end.
I wrote a triangulation algorithm in C for the company I work for. Getting the core algorithm working took two days. Getting it to work with all kinds of degenerate inputs took another two years (I wasn't working full-time on it, but trust me, I spent more time on it than I should have).
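To see where the fragility comes from, here is a sketch of the orientation predicate at the heart of ear clipping; its sign decides both convexity and containment, and in plain floating point that sign is exactly what roundoff gets wrong on near-degenerate input:

```cpp
struct Pt { double x, y; };

// Twice the signed area of triangle (a, b, c); > 0 means a left (CCW) turn.
double orient2d(Pt a, Pt b, Pt c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Naive point-in-triangle test via three orientation signs. A vertex is an
// "ear" if its triangle turns the right way AND contains no other vertex.
bool inside(Pt p, Pt a, Pt b, Pt c) {
    double d1 = orient2d(a, b, p);
    double d2 = orient2d(b, c, p);
    double d3 = orient2d(c, a, p);
    return (d1 >= 0 && d2 >= 0 && d3 >= 0) ||
           (d1 <= 0 && d2 <= 0 && d3 <= 0);
}
```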
Jonathan Shewchuk's Triangle library is phenomenal; I've used it for automating triangulation in the past. You can ask it to attempt to avoid small/narrow triangles, etc., so you come up with "good" triangulations instead of just any triangulation.
CGAL has the tool you need:
Constrained Triangulations
You can simply provide the boundaries of your polygon (including the boundaries of the holes) as constraints (best is to insert all vertices first, and then specify the constraints as pairs of Vertex_handles).
You can then tag the triangles of the triangulation with any traversal algorithm: start with a triangle incident to the infinite vertex and tag it as outside, and each time you cross a constraint, switch to the opposite tag (inside if you were previously tagging triangles as outside, outside if you were tagging them as inside).
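A minimal sketch of the constraint-insertion part (the kernel and typedef choices are illustrative, not the only option):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;
typedef CDT::Point                                          Point;

int main() {
    CDT cdt;
    // Insert the polygon's vertices first, keeping their handles...
    CDT::Vertex_handle a = cdt.insert(Point(0, 0));
    CDT::Vertex_handle b = cdt.insert(Point(1, 0));
    CDT::Vertex_handle c = cdt.insert(Point(0, 1));
    // ...then declare each boundary edge as a constraint.
    cdt.insert_constraint(a, b);
    cdt.insert_constraint(b, c);
    cdt.insert_constraint(c, a);
    return cdt.number_of_faces() > 0 ? 0 : 1;
}
```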
I have found the poly2tri library to be exactly what I needed for triangulation. It produces a much cleaner mesh than other libraries I've tried (including libtess), and it does support holes as well. It's been converted to a bunch of languages. The license is New BSD, so you can use it in any project.
Poly2tri library on Google Code
Try libtess2:
https://code.google.com/p/libtess2/downloads/list
It is based on the original SGI GLU tessellator (with liberal licensing) and solves some memory management issues around lots of small mallocs.
You can add the holes relatively easily yourself. Basically, triangulate to the convex hull of the input points, as per CGAL, and then delete any triangle whose incentre lies inside any of the hole polygons (or outside any of the external boundaries). When dealing with lots of holes in a large dataset, masking techniques may be used to speed this process up significantly.
Edit: a common extension to this technique is to weed out weak triangles on the hull, where the longest edge or smallest internal angle exceeds a given value. This will form a better concave hull.
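A sketch of the incentre probe described above (the point-in-polygon test against the holes is left out; the incentre is a safe probe point because it always lies strictly inside its triangle):

```cpp
#include <cmath>

struct Pt { double x, y; };

static double len(Pt a, Pt b) { return std::hypot(b.x - a.x, b.y - a.y); }

// Incentre of triangle (a, b, c): vertices weighted by opposite side lengths.
Pt incentre(Pt a, Pt b, Pt c) {
    double la = len(b, c), lb = len(c, a), lc = len(a, b);  // opposite sides
    double s = la + lb + lc;
    return { (la * a.x + lb * b.x + lc * c.x) / s,
             (la * a.y + lb * b.y + lc * c.y) / s };
}
```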
I have implemented a 3D polygon triangulator in C# using the ear-clipping method. It is easy to use, supports holes, is numerically robust, and supports arbitrary (non-self-intersecting) convex/non-convex polygons.
This is a common problem in finite element analysis. It's called "automatic mesh generation". Google found this site with links to commercial and open source software. They usually presume some kind of CAD representation of the geometry to start.
Another option (with a very flexible license) is to port the algorithm from VTK:
vtkDelaunay2D
This algorithm works fairly well. Using it directly is possible, but requires linking against VTK, which may bring more overhead than you want (although VTK has many other nice features as well).
It supports constraints (holes/boundaries/etc), as well as triangulating a surface that isn't necessarily in the XY plane. It also supports some features I haven't seen elsewhere (see the notes on Alpha values).
