DirectX tessellation specific algorithm

I know that for the DX tessellation process, the inputs are the domain type (triangle, quad, isoline), the tessellation factors (per edge), and the partitioning type (fractional odd, fractional even, integer, pow2), while the outputs are a generated point list (like a vertex buffer) and the topology (like an index buffer).
The question is: what is the actual algorithm inside, i.e. how is the output generated from the input?
Is there any document describing this algorithm? Also, why did DirectX choose such an implementation for tessellation?
Thanks.

The DirectX 11 tessellation hardware is designed to support a range of different tessellation schemes: Bézier patches, NURBS, subdivision surfaces, displacement mapping, etc.
Samples include SimpleBezier11 and SubD11 in the DirectX SDK on GitHub, as well as SilhouetteTessellation in the AMD Radeon SDK.
See this post for a list of other resources including presentations.
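Microsoft has also published a Direct3D functional specification that pins down the fixed-function tessellator's behaviour precisely, so all vendors produce the same points and connectivity; that document is the closest thing to "the real algorithm". As a rough illustration only (not the documented D3D11 algorithm; it ignores per-edge factors and the fractional modes), integer partitioning on a quad domain behaves roughly like the sketch below: round the factor up, lay down a regular grid of (u,v) domain points, and emit two triangles per grid cell.

/* Rough illustrative sketch only, NOT the documented D3D11 tessellator:
 * integer partitioning on a quad domain with a single tessellation factor. */
#include <math.h>
#include <stdio.h>

static void tessellate_quad_integer(float factor)
{
    int segs = (int)ceilf(factor);          /* integer partitioning: round up   */
    if (segs < 1)  segs = 1;
    if (segs > 64) segs = 64;               /* D3D11 caps factors at 64         */

    /* Generated point list: (segs+1) x (segs+1) domain locations in [0,1]^2. */
    for (int j = 0; j <= segs; ++j)
        for (int i = 0; i <= segs; ++i)
            printf("point u=%.3f v=%.3f\n", (float)i / segs, (float)j / segs);

    /* Topology: two triangles per grid cell, indices into the point list. */
    for (int j = 0; j < segs; ++j)
        for (int i = 0; i < segs; ++i) {
            int a = j * (segs + 1) + i;     /* lower-left corner of the cell    */
            int b = a + 1;
            int c = a + (segs + 1);
            int d = c + 1;
            printf("tri %d %d %d\n", a, c, b);
            printf("tri %d %d %d\n", b, c, d);
        }
}

The real tessellator handles different factors per edge by generating concentric rings and stitching them together, and the fractional modes slide new points in gradually as the factor grows, which is why the pattern is specified exactly rather than left to each GPU vendor.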


fixed function vs shader based

I'm a beginner to computer graphics and am trying to get a better understanding. My professor has discussed the fixed-function pipeline and shader-based programming. How do these two compare to each other? What's the difference?
The fixed-function pipeline is exactly what the name suggests: the functionality is fixed. Someone wrote a list of the different ways you'd be permitted to transform and rasterise geometry, and that's everything available. In broad terms, you can do linear transformations and then rasterise with texturing, with a colour interpolated across a face, or with combinations and permutations of those things. But more than that, the fixed pipeline enshrines certain deficiencies.
For example, it was obvious at the time of design that there wasn't going to be enough power to compute lighting per pixel. So lighting is computed at vertices and linearly interpolated across the face.
There were some intermediate extensions related to specific effects — dot3 plus cubemaps for per-pixel lighting from a single source, for example — but the programmable pipeline lets you do whatever you want at each stage, giving you complete flexibility.
At first that allowed better lighting, then better general special effects (ripples on reflective water, imperfect glass, etc.), and more recently it has been used for things like deferred rendering that turn the traditional pipeline on its head.
All support for the fixed-functionality pipeline is implemented by programming the programmable pipeline on hardware of the last decade or so. The programmable pipeline is an advance on its predecessor, afforded by hardware improvements.
Graphics processing units started off very simply with fixed functions that allowed for quick 3D maths (much faster than doing it on the CPU), texture lookups, and some simple lighting and shading options (flat shading, Gouraud shading, Phong-style lighting, etc.).
These were very basic but allowed the CPU to offload the very repetitive tasks of 3D rendering to the GPU. Once graphics work was taken off the CPU and given to the GPU, games made a massive leap forward.
It wasn't long before the fixed functions needed to give way to small assembly programs, and soon there was demand for more than the simple shading, basic reflections, and single texture maps offered by fixed-function GPUs.
So the second breed of GPU was created. It had two distinct pipelines: one that ran vertex programs and moved vertices around in 3D space, and one that ran pixel shader programs, allowing multiple textures to be blended and more lights and shading models to be applied.
Now, in the latest form of GPU, all the pipes in the card are generic and can run any type of GPU assembler code. This increased the number of uses for the pipes: they still do vertex transformation and pixel color calculation, but they also run geometry and tessellation shaders, and even compute shaders (where the parallel processors are used for non-graphics work).
So fixed function is limited but easy, and is now a thing of the past for all but the most limited devices. Programmable shaders, using OpenGL (GLSL) or DirectX (HLSL), are the de facto standard for modern GPUs.
Essentially, the fixed-function pipeline is a hardwired implementation of a, well, fixed program through which every piece of data the GPU processes traverses, without the ability to change the details of any step. The only things you can parameterize are the occasional branches that switch between hardcoded paths in the program (like enabling or disabling lighting, or using a separate specular color) and some of the constants used (light colors and positions, texture-environment base color modulation). Each and every step follows a specific formula.
In a programmable pipeline, however, the GPU is a clean slate. It's completely up to the programmer how the various stages of the rendering process (vertex transformation, tessellation, fragment processing) are carried out, and you can use whatever formula you see fit for the task.
Fixed-function pipeline GPUs have essentially one illumination mode: a Lambertian diffuse model (with a Phong-style specular term), evaluated per vertex and interpolated with Gouraud shading. There were a few tricks to slightly alter the illumination model, for example to make it anisotropic, but you had to somehow outsmart (or outdumb, to be honest) the GPU to do it. With a programmable pipeline you simply do what you wanted to do in the first place.
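To make "a specific formula" concrete, here is a tiny sketch (plain C, not tied to any particular API; the names are my own) of the kind of per-vertex diffuse term the old pipeline bakes in. The rasteriser then just interpolates the resulting colors across the triangle, which is exactly the limitation a fragment shader removes.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Classic fixed-function style diffuse: I = material * light * max(N.L, 0),
 * evaluated once per vertex; the hardware then blends the per-vertex results
 * across the face instead of re-evaluating the formula per pixel. */
static vec3 lambert_vertex(vec3 n, vec3 l, vec3 mat_diffuse, vec3 light_diffuse)
{
    float ndotl = dot3(n, l);
    if (ndotl < 0.0f) ndotl = 0.0f;
    vec3 out = {
        mat_diffuse.x * light_diffuse.x * ndotl,
        mat_diffuse.y * light_diffuse.y * ndotl,
        mat_diffuse.z * light_diffuse.z * ndotl
    };
    return out;
}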

How heavy is hardware tessellation?

If tessellation gives a bonus over just using high-poly models, then why do modern (2012) games still use gigantic models that take a lot of hard disk space, instead of tessellating it all and adjusting the tessellation factor with distance from the camera, creating a nice level-of-detail scheme?
You can't get back detail by tessellation that was not there in the first place. It just means those models would be even bigger without it being available.
In its most basic form, tessellation is a method of breaking down polygons into finer pieces. For example, if you take a square and cut it across its diagonal, you’ve “tessellated” this square into two triangles. By itself, tessellation does little to improve realism. For example, in a game, it doesn’t really matter if a square is rendered as two triangles or two thousand triangles—tessellation only improves realism if the new triangles are put to use in depicting new information.
[Figure: when a displacement map (left) is applied to a flat surface, the resulting surface (right) expresses the height information encoded in the displacement map.]
The simplest and most popular way of putting the new triangles to use is a technique called displacement mapping. A displacement map is a texture that stores height information. When applied to a surface, it allows vertices on the surface to be shifted up or down based on the height information. For example, the graphics artist can take a slab of marble and shift the vertices to form a carving. Another popular technique is to apply displacement maps over terrain to carve out craters, canyons, and peaks.
(Source: http://www.nvidia.com/object/tessellation.html)
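For illustration, here is a minimal sketch of that displacement idea (plain C, CPU-side; on real hardware this runs in the domain shader). sample_height() is a placeholder standing in for a texture fetch from the displacement map.

typedef struct { float x, y, z; } vec3;

/* Placeholder for a displacement-map texture fetch: returns a height in
 * [0,1] for texture coordinates (u,v).  Any height field would do here. */
static float sample_height(float u, float v)
{
    return 0.5f * (u + v);
}

/* Push a tessellation-generated vertex along its surface normal by the
 * height stored in the displacement map, scaled by 'amplitude'. */
static vec3 displace(vec3 position, vec3 normal, float u, float v, float amplitude)
{
    float h = sample_height(u, v);
    vec3 out = {
        position.x + normal.x * h * amplitude,
        position.y + normal.y * h * amplitude,
        position.z + normal.z * h * amplitude
    };
    return out;
}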
I think the reason why hardly anybody uses hardware tessellation in games is that roughly 60% of all game players are console players, and as long as the consoles don't support Shader Model 5 there is no reason to make games that use hardware tessellation. Even if developers do, they may have to ship both a DX9 and a DX11 path, because it is not really backward compatible... but maybe there is another reason too!
With the new consoles coming out this year, maybe HW tessellation gets another chance ;)

Calculation of sound source position in 3d space

I have a 3D vector for the listener position and a 3D vector for a sound source. I also have a 3D vector for the orientation of the listener. I am trying to find the NED (north, east, down) position of the source relative to the listener so I can play the sounds through the right speakers... I've done some research but I can't seem to find the necessary equations...
Any idea?
Thanks!
The Ambisonics B-format codec does exactly what you're describing. However, although the specification of this codec is open, finding it is rather challenging due to its unfortunate unpopularity.
The good news is, I've written a BSD open-source project called "Ambisonix" that details all the equations required to achieve up to 3rd order Ambisonics encoding and decoding. I've also added on some features such as distance encoding and Doppler effect which are not part of the original spec.
Check it out at: http://sourceforge.net/projects/ambisonix
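For reference, first-order B-format encoding itself is just a handful of trig terms. The sketch below is my own summary (plain C, using the conventional 1/sqrt(2) weighting for W, with azimuth measured anticlockwise from straight ahead and elevation up from the horizontal); Ambisonix adds the higher orders plus the distance and Doppler extras.

#include <math.h>

/* First-order Ambisonics B-format encode of a mono sample 's' arriving from
 * direction (azimuth, elevation), both in radians.  W carries the
 * omnidirectional part; X, Y, Z carry the directional figure-of-eight parts. */
typedef struct { float w, x, y, z; } bformat;

static bformat encode_first_order(float s, float azimuth, float elevation)
{
    bformat b;
    b.w = s * (float)(1.0 / sqrt(2.0));          /* omni, conventionally scaled */
    b.x = s * cosf(azimuth) * cosf(elevation);   /* front-back  */
    b.y = s * sinf(azimuth) * cosf(elevation);   /* left-right  */
    b.z = s * sinf(elevation);                   /* up-down     */
    return b;
}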
I don't think you're going to find exactly what you're looking for. Spatial location of sound sources in a 3D field is a very complex subject and depends on many factors (listener location, loudspeaker locations, source material). The closest to what you're describing is probably Ambisonics, but this needs the listening setup to be Ambisonics too, which is not very common. If you're using something like Dolby Digital, I don't think they give out the equations, you need to license the algorithms or mix the source material with equipment which has the algorithms licensed and built in. However, systems such as Dolby are not really designed for precise sound source location in a 3D field - they're really just a spatial effect which gives the listener the feeling of a 3D sound field.
You need to subtract the listener vector from the sound vector, then rotate the result by the listener's orientation. Now you can simply check whether the new vector is positive or negative on each axis. For example, the vector [0, 10, -2] can be read as [0, +, -], which means [centre, up, back]. To use fixed N-S / E-W / U-D directions, just don't rotate the vector after the subtraction.
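A minimal sketch of that subtract-then-rotate step, assuming the listener's orientation is available as orthonormal forward and up vectors (as in OpenAL) rather than as a single orientation vector:

typedef struct { float x, y, z; } vec3;

static vec3  sub3(vec3 a, vec3 b)  { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static float dot3(vec3 a, vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  cross3(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

/* Position of the source in the listener's frame:
 * x = right, y = up, z = forward, all relative to where the listener faces.
 * The sign of each component tells you right/left, up/down, front/back.
 * Skip the projection (return d directly) for fixed world N-S/E-W/U-D axes. */
static vec3 source_relative_to_listener(vec3 source, vec3 listener,
                                        vec3 forward, vec3 up)
{
    vec3 right = cross3(forward, up);     /* listener's right-hand axis */
    vec3 d     = sub3(source, listener);  /* world-space offset         */
    vec3 local = { dot3(d, right), dot3(d, up), dot3(d, forward) };
    return local;
}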

How to use shaders in OpenGL ES with iPhone SDK

I have this obsession with doing realtime character animations based on inverse kinematics and morph targets.
I got a fair way with Animata, an open source (FLTK-based, sadly) IK-chain-style animation program. I even ported their rendering code to a variety of platforms (Java/Processing and iPhone); video of the Animata renderers: http://ats.vimeo.com/612/732/61273232_100.jpg
However, I've never been convinced that their code is particularly optimised and it seems to take a lot of simulation on the CPU to render each frame, which seems a little unnecessary to me.
I am now starting a project to make an app on the iPad that relies heavily on realtime character animation, and leafing through the iOS documentation I discovered a code snippet for a 'two bone skinning shader'
// A vertex shader that efficiently implements two-bone skinning.
attribute vec4 a_position;                   // bind-pose vertex position
attribute float a_joint1, a_joint2;          // indices of the two influencing joints
attribute float a_weight1, a_weight2;        // blend weights (should sum to 1.0)
uniform mat4 u_skinningMatrix[JOINT_COUNT];  // one matrix per joint, updated each frame
uniform mat4 u_modelViewProjectionMatrix;
void main(void)
{
    // Transform the vertex by each joint's matrix, then blend the two results.
    vec4 p0 = u_skinningMatrix[int(a_joint1)] * a_position;
    vec4 p1 = u_skinningMatrix[int(a_joint2)] * a_position;
    vec4 p = p0 * a_weight1 + p1 * a_weight2;
    gl_Position = u_modelViewProjectionMatrix * p;
}
Does anybody know how I would use such a snippet? It is presented with very little context. I think it's what I need to be doing to do the IK chain bone-based animation I want to do, but on the GPU.
I have done a lot of research and now feel like I almost understand what this is all about.
The first important lesson I learned is that OpenGL ES 1.1 is very different from OpenGL ES 2.0. In 2.0, the principle is that arrays of data are fed to the GPU and shaders take care of the details of rendering. This is distinct from 1.1, where more is done in normal application code with glPushMatrix/glPopMatrix and various inline drawing commands.
An excellent series of blog posts introducing the modern approach to OpenGL is available here: Joe's Blog: An intro to modern OpenGL.
The vertex shader I quoted above runs a transformation on each vertex position. 'attribute' members are per-vertex and 'uniform' members are shared across all vertices.
To make this code work you would feed in an array of vertex positions (the original, bind-pose positions), corresponding arrays of joint indices and weights (the other attribute variables), and this shader would reposition the input vertices according to their attached joints.
The uniform variables are the array of skinning matrices (one 4x4 matrix per joint, recalculated by your animation/IK code each frame) and the model-view-projection matrix, which transforms from model space into the clip space the GPU expects.
Relating this back to iPhone development, the best thing to do is to create an OpenGL ES template project and pay attention to the two different rendering classes. One is for the older OpenGL ES 1.1 path and the other is for OpenGL ES 2.0. Personally I'm throwing out the ES 1.1 code, given that it applies mainly to older iPhone devices; since I'm targeting the iPad it's not relevant any more, and I can get better performance with shaders on the GPU using ES 2.0.
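To make the "feed in arrays" part concrete, here is a hedged sketch in plain C of the ES 2.0 client-side setup for that shader. The attribute and uniform names and JOINT_COUNT come from the snippet; the Vertex layout, the function names, and how you fill the joint matrices are my own assumptions for illustration.

#include <stddef.h>
#include <OpenGLES/ES2/gl.h>

#define JOINT_COUNT 16  /* must match the value compiled into the shader source */

typedef struct {
    GLfloat position[4];        /* a_position           */
    GLfloat joint1, joint2;     /* a_joint1, a_joint2   */
    GLfloat weight1, weight2;   /* a_weight1, a_weight2 */
} Vertex;

static void bind_attr(GLuint prog, const char *name, GLint size, size_t offset)
{
    GLint loc = glGetAttribLocation(prog, name);
    if (loc < 0) return;  /* attribute optimised out or misnamed */
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribPointer((GLuint)loc, size, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex), (const void *)offset);
}

void draw_skinned_mesh(GLuint program, GLuint vbo, GLsizei vertex_count,
                       const GLfloat joint_matrices[], const GLfloat mvp[16])
{
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Per-vertex attributes, interleaved as in the Vertex struct above. */
    bind_attr(program, "a_position", 4, offsetof(Vertex, position));
    bind_attr(program, "a_joint1",   1, offsetof(Vertex, joint1));
    bind_attr(program, "a_joint2",   1, offsetof(Vertex, joint2));
    bind_attr(program, "a_weight1",  1, offsetof(Vertex, weight1));
    bind_attr(program, "a_weight2",  1, offsetof(Vertex, weight2));

    /* Uniforms: one matrix per joint (recomputed on the CPU each frame by
     * your IK/animation code) plus the combined model-view-projection. */
    glUniformMatrix4fv(glGetUniformLocation(program, "u_skinningMatrix"),
                       JOINT_COUNT, GL_FALSE, joint_matrices);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_modelViewProjectionMatrix"),
                       1, GL_FALSE, mvp);

    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}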

Non-Affine image transformations in .NET

Are there any classes, methods in the .NET library, or any algorithms in general, to perform non-affine transformations? (i.e. transformations that involve more than just rotation, scale, translation and shear)
e.g.: [example image] (source: last100.com)
Is there another term for non-affine transformations?
I am not aware of anything integrated in .NET letting you do non-affine transforms.
I guess you are trying to do some sort of 3D-style texture mapping? If that's the case you need a homogeneous (perspective) transform, which is not available in .NET. I'm also not aware of any integrated way to do pixel-displacement transforms in .NET.
However, the currently voted solution might be good for what you are trying to do, just be aware that it won't do perspective correction out of the box.
For instance:
The picture on the left was generated using the single quad distort library provided by Neil N. The picture on the right was generated using a single quad (two triangles actually) in DirectX.
This may not have any impact on what you are trying to do, but it is something to keep in mind if you want to do 3D stuff: it will look very weird without perspective-correct mapping.
All of the example images you posted can be done with a quadrilateral distortion, though I can't say for certain that a quad distort will cover ALL non-affine transforms.
Here's a link to a not-so-good implementation of it in C#... it works, but it is slow. Poke around Wikipedia for the many different optimizations available for these kinds of calculations:
http://www.vcskicks.com/image-distortion.html
-Neil
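For a sense of what such a quadrilateral distortion boils down to, here is my own minimal sketch in C (not the vcskicks code, and without the perspective correction discussed above): map each point (u, v) of the unit square to a point inside the destination quad by bilinearly interpolating the quad's four corners, then copy the source pixel there.

typedef struct { double x, y; } point2;

static point2 lerp2(point2 a, point2 b, double t)
{
    point2 r = { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    return r;
}

/* Destination corners given clockwise from top-left: tl, tr, br, bl.
 * (u, v) in [0,1]^2 are the normalised coordinates of a source pixel. */
static point2 quad_point(double u, double v,
                         point2 tl, point2 tr, point2 br, point2 bl)
{
    point2 top    = lerp2(tl, tr, u);   /* interpolate along the top edge    */
    point2 bottom = lerp2(bl, br, u);   /* interpolate along the bottom edge */
    return lerp2(top, bottom, v);       /* then blend between the two edges  */
}

Iterating over every source pixel and writing it to quad_point(u, v, ...) gives the basic distortion; proper perspective mapping needs a projective (homography) transform instead of this bilinear one.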
You can do this in WPF using the Viewport3D control and a non-affine transform matrix. Rendering this to a bitmap again may be interesting... which I "fixed" by including an invisible <Image> control with the same image as on my textured plane... (Also, I've had to work around the max texture size issues by splitting up the plane and cropping images...)
http://www.charlespetzold.com/blog/2007/08/060605.html
In my case I wanted the reverse of this (transforming so that arbitrary points on the warped image become the corners of my rectangular window), which just means using the inverse of that matrix.
