How to use min-precision in d3d9

I want to know how to use min-precision in d3d9. The DX11 spec describes a min-precision enumeration that was added for d3d9. However, when I write a pixel shader with the min16float keyword and compile it with fxc.exe, it fails and reports "ps_3_0 does not support min-precision". So how can I use min-precision in d3d9?

TL;DR: For pixel shader 3.0, use half instead of min16float.
The DirectX 11 minimum shader precision feature is supported for Direct3D Hardware Feature Level 9.x devices. On feature level 9.x, the VS and PS can have different precisions, which is why D3D11_FEATURE_DATA_SHADER_MIN_PRECISION_SUPPORT reports separate PixelShaderMinPrecision and AllOtherShaderStagesMinPrecision fields.
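A minimal sketch of querying those fields at runtime (assuming you already have an ID3D11Device; the helper name is made up, and the Windows 8 SDK headers are needed for this API):

#include <d3d11.h>

// Sketch: check whether 16-bit min-precision is reported for the pixel shader
// stage and for all other stages.
bool Supports16BitMinPrecision(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_SHADER_MIN_PRECISION_SUPPORT data = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_SHADER_MIN_PRECISION_SUPPORT,
                                           &data, sizeof(data))))
        return false;
    // Each field is a bitmask of D3D11_SHADER_MIN_PRECISION_SUPPORT flags (10-bit / 16-bit).
    return (data.PixelShaderMinPrecision & D3D11_SHADER_MIN_PRECISION_16_BIT) != 0
        && (data.AllOtherShaderStagesMinPrecision & D3D11_SHADER_MIN_PRECISION_16_BIT) != 0;
}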
Direct3D 9 shader model 2.0 only required 24 bits of precision for pixel shaders, and supported 16 bits when using the half type or the _pp (partial precision) modifier. The DirectX 11 minimum-precision support reported for the 9.x feature level profiles makes use of this.
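As a concrete illustration of the TL;DR (a sketch only; the shader and entry point are made up), the same reduced-precision intent written with half compiles for the ps_3_0 target, whereas min16float is only accepted by Shader Model 4/5 targets:

#include <cstring>
#include <d3dcompiler.h>   // link with d3dcompiler.lib

// Illustrative pixel shader using 'half' instead of min16float.
static const char* kPixelShader =
    "half4 main(half2 uv : TEXCOORD0) : COLOR0\n"
    "{\n"
    "    return half4(uv, 0.0, 1.0);\n"
    "}\n";

HRESULT CompileForPs30(ID3DBlob** outCode)
{
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompile(kPixelShader, strlen(kPixelShader), nullptr, nullptr, nullptr,
                            "main", "ps_3_0", 0, 0, outCode, &errors);
    if (errors) errors->Release();
    return hr;   // swap 'half' for 'min16float' and ps_3_0 compilation fails
}

The equivalent fxc.exe command line would be something like fxc /T ps_3_0 shader.hlsl.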
See Microsoft Docs.

Related

Does Direct3D Feature Level guarantee non-power-of-2 support for volume textures?

In Direct3D9 there were capability flags like:
D3DPTEXTURECAPS_NONPOW2CONDITIONAL: ... conditionally supports the use of 2D textures with dimensions that are not powers of two ...
D3DPTEXTURECAPS_POW2: ... all textures must have widths and heights specified as powers of two. This requirement does not apply to ... volume textures ...
D3DPTEXTURECAPS_VOLUMEMAP_POW2: Device requires that volume texture maps have dimensions specified as powers of two.
In Direct3D10 there are feature levels instead.
Feature level 10_0 and above have:
Nonpowers-of-2 unconditionally⁴
⁴ At feature levels 10_0, 10_1 and 11_0, the display device unconditionally supports the use of 2-D textures with dimensions that are not powers of two.
But 3-D textures are not mentioned.
Are there any guarantees about support for non-power-of-2 volume textures in D3D10+?
Direct3D 10 and later define all resources to have no power-of-2 restrictions on their dimensions, and no reduced filtering functionality for non-power-of-2 sizes.
While they are not super easy to understand, you can look at the engineering specs for Direct3D 11 on GitHub.
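As a quick sketch (Direct3D 11 API; the sizes, format and function name are illustrative), a non-power-of-two volume texture is created like any other resource, with no conditional cap to check first:

#include <d3d11.h>

// Sketch: create a 100 x 50 x 30 volume texture; none of the dimensions is a power of two.
HRESULT CreateNonPow2Volume(ID3D11Device* device, ID3D11Texture3D** outTexture)
{
    D3D11_TEXTURE3D_DESC desc = {};
    desc.Width     = 100;
    desc.Height    = 50;
    desc.Depth     = 30;
    desc.MipLevels = 1;
    desc.Format    = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    return device->CreateTexture3D(&desc, nullptr, outTexture);
}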

DirectX Tessellation specific algorithm

I know that for the DX tessellation process the input is the domain type (Triangle, Quad, Isoline), the tessellation factors (per edge), and the partitioning type (Fractional Odd, Fractional Even, Integer, Pow2), while the output is a generated point list (like a vertex buffer) and a topology (like an index buffer).
The question is: what is the actual algorithm inside, i.e. how is the output generated from the input?
Is there any document that describes this algorithm? Also, why did DX choose such an implementation for tessellation?
Thanks.
The DirectX 11 tessellation hardware is designed to support a range of different tessellation schemes: Bézier, NURBS, subdivision surfaces, displacement, etc.
Samples include SimpleBezier11 and SubD11 from the DirectX SDK samples on GitHub, as well as SilhouetteTessellation in the AMD Radeon SDK.
See this post for a list of other resources including presentations.
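To connect this to the inputs listed in the question: the domain, partitioning and tessellation factors are all declared on the hull shader, and the fixed-function tessellator consumes them to emit the point list and topology. A minimal, hypothetical hs_5_0 hull shader sketch (all names are illustrative, shown here as an HLSL source string):

static const char* kHullShaderHLSL =
    "struct CP { float3 pos : POSITION; };\n"
    "struct PatchConstants {\n"
    "    float edges[3] : SV_TessFactor;        // per-edge factor (tri domain)\n"
    "    float inside   : SV_InsideTessFactor;  // interior factor\n"
    "};\n"
    "PatchConstants ConstantsHS(InputPatch<CP, 3> patch) {\n"
    "    PatchConstants o;\n"
    "    o.edges[0] = o.edges[1] = o.edges[2] = 4.0;\n"
    "    o.inside = 4.0;\n"
    "    return o;\n"
    "}\n"
    "[domain(\"tri\")]                   // tri / quad / isoline\n"
    "[partitioning(\"fractional_odd\")]  // integer / fractional_even / fractional_odd / pow2\n"
    "[outputtopology(\"triangle_cw\")]\n"
    "[outputcontrolpoints(3)]\n"
    "[patchconstantfunc(\"ConstantsHS\")]\n"
    "CP main(InputPatch<CP, 3> patch, uint i : SV_OutputControlPointID) {\n"
    "    return patch[i];\n"
    "}\n";

The fixed-function tessellator's exact point-generation rules for these factors are spelled out in the Direct3D 11 engineering specs on GitHub mentioned in the previous answer.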

How to know which triangle contributes to the color of a pixel?

I'm totally new to graphics and DX; I've encountered a problem and no one around me knows graphics either. Sorry if the question seems too naive.
I use DirectX 11 to render a mesh, and I want to get a buffer for each pixel. This buffer should store a linked-list (or some other structure) of all triangles that contribute color to this pixel.
Which shader, or which part of DX, should I work with? Or simply, where can I get the triangle information in the pixel shader?
You can write a triangle ID from the pixel shader, but with the hardware z-buffer you can only capture one triangle per pixel.
With multisampled textures you can capture a few more triangles per pixel, which should be enough in practical situations.
If your triangles are extremely small and many of them are visible within one pixel, then you should consider an A-buffer with your own hidden-surface-removal algorithm.
If you need it only for debug purposes, you can use any of the graphics debuggers:
Visual Studio Graphics Debugger (integrated since Visual Studio 2012)
For AMD GPUs: GPUPerfStudio
For NVidia GPUs: Nsight
Good old PIX from DX SDK.
If you need it at runtime (BTW, why? =) )
Use the system-generated values SV_VertexID and SV_PrimitiveID to work out exactly which primitive, or even which vertex, contributed to the pixel color. It is tricky, but possible (see the sketch below).
Another way is to put some kind of custom triangle ID into the vertex declaration, but be aware of culling.
You can output the final data from the pixel shader into a buffer or render target, then read it back on the CPU.
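A hypothetical sketch of the SV_PrimitiveID route (the entry point and target format are illustrative): render this into a DXGI_FORMAT_R32_UINT target, then copy the target to a staging texture and Map() it on the CPU to see which triangle won each pixel.

static const char* kPrimitiveIdPS =
    "uint main(float4 pos : SV_Position, uint primID : SV_PrimitiveID) : SV_Target\n"
    "{\n"
    "    return primID;   // one triangle ID per pixel, resolved by the depth test\n"
    "}\n";

Note that with the depth test enabled this still only captures the front-most triangle per pixel, as mentioned above.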
All of these are pretty advanced topics in DirectX. I'm not sure whether a coder who is "totally new to graphics and DX" can solve them.

OpenGL Color Index for iPhone's OpenGL ES 1.1?

Is it possible to use color palettes in OpenGL ES 1.1?
I'm currently developing a game which has player sprites, and the player sprites need to be recolored to different teams' colors. For example, changing the shirts' colors but not the face colors, which rules out simple hue rotation.
Is this possible, or will this have to be implemented manually (modifying the texture data directly)?
Keep in mind that anything other than non-mipmapped GL_NEAREST will blend between palette indices. I ended up expanding paletted textures in my decompression method before uploading them as BGRA32. (GLES 2.0)
It's not a hardware feature of the MBX but a quick check of gl.h for ES 1.x from the iPhone SDK reveals that GL_PALETTE4_RGB8_OES, GL_PALETTE8_RGBA8_OES and a bunch of others are available as one of the constants to pass to glCompressedTexImage2D, as per the man page here. So you can pass textures with palettes to that, but I'll bet anything that the driver will just turn them into RGB textures on the CPU and then upload them to the GPU. I don't believe Apple support those types of compressed texture for any reason other than that they're part of the ES 1.x spec.
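For completeness, a minimal sketch of that upload path (the function name is made up; the data layout is the 256-entry RGBA8 palette followed by the 8-bit indices, single mip level):

#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

// Sketch: upload an 8-bit paletted texture on OpenGL ES 1.1.
// 'data' = 256 RGBA8 palette entries (1024 bytes) followed by width*height index bytes.
static void UploadPalettedTexture(const void* data, GLsizei width, GLsizei height)
{
    GLsizei imageSize = 256 * 4 + width * height;
    glCompressedTexImage2D(GL_TEXTURE_2D, 0 /* base level only */, GL_PALETTE8_RGBA8_OES,
                           width, height, 0, imageSize, data);
}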
On ES 2.x you're free to do whatever you want. You could easily upload the palette as one texture (with, say, the pixel at (x, 0) being the colour for palette index x) and the paletted texture as another. You'll then utilise two texture units to do the job that one probably could do when plotting fragments, so use your own judgment as to whether you can afford that.
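A sketch of what that ES 2.0 fragment shader could look like (the uniform and varying names are made up; the index texture must be sampled with GL_NEAREST, as noted in the comment above about filtering):

static const char* kPaletteLookupFS =
    "precision mediump float;\n"
    "uniform sampler2D u_indexTex;    // 8-bit indices, sampled with GL_NEAREST\n"
    "uniform sampler2D u_paletteTex;  // 256x1 RGBA palette strip\n"
    "varying vec2 v_texCoord;\n"
    "void main()\n"
    "{\n"
    "    float index = texture2D(u_indexTex, v_texCoord).r;         // 0..1\n"
    "    gl_FragColor = texture2D(u_paletteTex, vec2(index, 0.5));  // palette fetch\n"
    "}\n";

For exact lookups you may want to remap the index to texel centers, e.g. (index * 255.0 + 0.5) / 256.0, but the idea is the same.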

How to use shaders in OpenGL ES with iPhone SDK

I have this obsession with doing realtime character animations based on inverse kinematics and morph targets.
I got a fair way with Animata, an open-source (FLTK-based, sadly) IK-chain-style animation program. I even ported their rendering code to a variety of platforms (Java/Processing and iPhone); video of the Animata renderers: http://ats.vimeo.com/612/732/61273232_100.jpg
However, I've never been convinced that their code is particularly optimised and it seems to take a lot of simulation on the CPU to render each frame, which seems a little unnecessary to me.
I am now starting a project to make an app on the iPad that relies heavily on realtime character animation, and leafing through the iOS documentation I discovered a code snippet for a 'two bone skinning shader':
// A vertex shader that efficiently implements two bone skinning.
attribute vec4 a_position;
attribute float a_joint1, a_joint2;
attribute float a_weight1, a_weight2;
uniform mat4 u_skinningMatrix[JOINT_COUNT];
uniform mat4 u_modelViewProjectionMatrix;
void main(void)
{
    vec4 p0 = u_skinningMatrix[int(a_joint1)] * a_position;
    vec4 p1 = u_skinningMatrix[int(a_joint2)] * a_position;
    vec4 p = p0 * a_weight1 + p1 * a_weight2;
    gl_Position = u_modelViewProjectionMatrix * p;
}
Does anybody know how I would use such a snippet? It is presented with very little context. I think it's what I need to be doing to do the IK chain bone-based animation I want to do, but on the GPU.
I have done a lot of research and now feel like I almost understand what this is all about.
The first important lesson I learned is that OpenGL ES 1.1 is very different from OpenGL ES 2.0. In ES 2.0, the principle is that arrays of data are fed to the GPU and shaders take care of the rendering details. This is distinct from ES 1.1, where more is done in normal application code with pushmatrix/popmatrix and various inline drawing commands.
An excellent series of blog posts introducing the latest approaches to OpenGL available here: Joe's Blog: An intro to modern OpenGL
The vertex shader I describe above runs a transformation on a set of vertex positions. 'attribute' members are per-vertex and 'uniform' members are common across all vertices.
To make this code work you would feed in an array of vertex positions (the original, unskinned positions, I guess), plus corresponding arrays of joint indices and weights (the other attribute variables), and this shader would reposition the input vertices according to their attached joints.
The uniform variables are the array of per-joint skinning matrices and the model-view-projection matrix, which I now understand transforms the skinned model-space positions into clip space.
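To make that concrete, here is a hedged sketch (plain C-style OpenGL ES 2.0 calls; the vertex layout, function name and parameters are all made up) of how the attributes and uniforms from the snippet could be fed each frame:

#include <OpenGLES/ES2/gl.h>

// Hypothetical vertex layout matching the attributes in the shader above.
struct SkinnedVertex { float position[4]; float joint1, joint2; float weight1, weight2; };

// Sketch: bind the attributes and uniforms and draw (error checking omitted).
void DrawSkinnedMesh(GLuint prog, const SkinnedVertex* verts,
                     const unsigned short* indices, int indexCount,
                     const float* jointMatrices, int jointCount, const float* mvp)
{
    glUseProgram(prog);

    GLint pos = glGetAttribLocation(prog, "a_position");
    GLint j1  = glGetAttribLocation(prog, "a_joint1");
    GLint j2  = glGetAttribLocation(prog, "a_joint2");
    GLint w1  = glGetAttribLocation(prog, "a_weight1");
    GLint w2  = glGetAttribLocation(prog, "a_weight2");

    glVertexAttribPointer(pos, 4, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex), &verts[0].position);
    glVertexAttribPointer(j1,  1, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex), &verts[0].joint1);
    glVertexAttribPointer(j2,  1, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex), &verts[0].joint2);
    glVertexAttribPointer(w1,  1, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex), &verts[0].weight1);
    glVertexAttribPointer(w2,  1, GL_FLOAT, GL_FALSE, sizeof(SkinnedVertex), &verts[0].weight2);
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(j1);
    glEnableVertexAttribArray(j2);
    glEnableVertexAttribArray(w1);
    glEnableVertexAttribArray(w2);

    // One 4x4 matrix per joint (column-major floats), plus the combined MVP matrix.
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_skinningMatrix"),
                       jointCount, GL_FALSE, jointMatrices);
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_modelViewProjectionMatrix"),
                       1, GL_FALSE, mvp);

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
}

The joint matrices would still be recomputed on the CPU each frame from the IK solve, but the per-vertex blending then happens on the GPU.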
Relating this back to iPhone development, the best thing to do is to create an OpenGL ES template project and pay attention to the two different rendering classes. One is for the more linear and outdated OpenGL ES 1.1 and the other is for OpenGL ES 2.0. Personally I'm throwing out the ES 1.1 code, given that it applies mainly to older iPhone devices, and since I'm targeting the iPad it's not relevant any more. I can get better performance with shaders on the GPU using ES 2.0.
