I'm trying to implement the marching cubes algorithm in my geometry shader, so I placed my data grid into a Texture3D. Now I want to look up the data in the geometry shader, but this throws the error "cannot map expression to gs_4_0 instruction set".
This is the line of code where it throws the error:
cubeVale[0] = dataFieldTex.Sample( samPoint, float3(k, j, i)).a;
I hope someone can help me out here.
Thank you.
Sample() only works in pixel shaders, since it automatically computes the mipmap lod to use by taking derivatives of the texture coordinates, and derivatives are only available in pixel shaders.
MSDN has a list of texture object methods and the shader profiles they work in. For the gs_4_0 profile your choices are Load(), SampleLevel() or SampleGrad(). You probably want SampleLevel(), especially if your Texture3D only has one mip level.
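A minimal sketch of the SampleLevel() variant, usable from gs_4_0 (the resource and sampler names are taken from the question; the register assignments are assumed):

Texture3D dataFieldTex : register(t0);   // same names as in the question; registers assumed
SamplerState samPoint  : register(s0);

float SampleDensity(float3 coord)
{
    // Explicit mip level 0, so no derivatives are needed and gs_4_0 accepts it.
    // Use Load() instead if you want raw, unfiltered texel access.
    return dataFieldTex.SampleLevel(samPoint, coord, 0).a;
}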
I am very new to shaders and to programming in Direct3D 11 (C++) and HLSL. However, I have been given a task to:
Implement cube mapping of a static environment onto a complex model (not a cube). Cube mapping allows an object to reflect the scene around it.
There aren't many resources online; can anyone please tell me the steps to follow to achieve correct cube mapping? I'm more concerned about the calculations to do on the HLSL side.
For a very basic environment mapping, all you need to do is:
Compute the position and surface normal of the current pixel (in the pixel shader) in world space
Compute the (normalised) view direction (world space pixel position - world space camera position)
Compute the reflection vector from the view direction and surface normal (there is a built-in HLSL function, reflect(), to do that if you don't want to do the math yourself)
Sample the cube map with that reflection vector, and return that color (a sketch of these steps follows the list).
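A minimal pixel-shader sketch of those four steps; all resource, buffer, and semantic names here are placeholders, not from your project:

TextureCube envMap      : register(t0);
SamplerState linearSamp : register(s0);

cbuffer PerFrame : register(b0)
{
    float3 cameraPosWS;                 // camera position in world space
};

struct PSInput
{
    float4 posH   : SV_POSITION;
    float3 posWS  : POSITIONWS;         // world-space position, passed from the vertex shader
    float3 normWS : NORMALWS;           // world-space normal, passed from the vertex shader
};

float4 PS(PSInput input) : SV_TARGET
{
    float3 n = normalize(input.normWS);
    float3 v = normalize(input.posWS - cameraPosWS);  // view direction (step 2)
    float3 r = reflect(v, n);                         // built-in reflection (step 3)
    return envMap.Sample(linearSamp, r);              // cube map lookup (step 4)
}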
This then works like a mirror: the reflection vector is the direction in which your line of sight would be reflected if the surface of your mesh were a perfect mirror, and you then ask the cube map what color lies in that direction (i.e. what reflection you're seeing). How simple or complex your mesh shape is doesn't matter here, because you're always only looking at one pixel of that (rasterized) mesh at a time, using that pixel's surface normal as a guide.
More advanced environment mapping techniques will then blur the reflection based on surface roughness (usually by sampling different mip levels of your cube map), merge the color with the other light/color computations for that pixel, add indirect environment lighting (which requires sampling a different, specially pre-computed cube map directly with the surface normal), etc. That's where all the papers and such come into play, but the very basic concept of environment mapping is just a few lines of code and is very straightforward.
Image morphing is mostly a graphic-design effect that adapts one picture into another using points chosen by the artist, who has to match the eyes and other key zones of one portrait with those of the other; some algorithm then adapts the entire picture to change from one into the other.
I would like to do something a bit similar with a shader, which can load any two images, automatically choose zones of the most similar colors in corresponding regions of the pictures, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about giving a complete description of the process; it would be great if you could share some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution it is something like this:
for (y = 0; y < img1.height; y++)
    for (x = 0; x < img1.width; x++)
        img.pixel[x][y] = (1.0 - t) * img1.pixel[x][y] + t * img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast to avoid integer rounding problems, or use the scale t=<0,256> and correct the result with a right shift by 8 bits (or by /256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first.
All this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0 and 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> regardless of resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main-program side you just draw a single quad covering the final image output...
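Since the rest of this page is Direct3D-centric, here is an equivalent pixel-shader sketch in HLSL; the texture, sampler, and constant names are placeholders:

Texture2D img1 : register(t0);
Texture2D img2 : register(t1);
SamplerState linearSamp : register(s0);

cbuffer MorphParams : register(b0)
{
    float t;   // 0 = show img1, 1 = show img2
};

float4 PS(float4 posH : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    // Normalized texture coordinates make differing resolutions a non-issue;
    // the sampler does the bilinear lookup for us.
    float4 a = img1.Sample(linearSamp, uv);
    float4 b = img2.Sample(linearSamp, uv);
    return lerp(a, b, t);
}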
morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example, something like this: Morph a cube to coil. This is suited for vector graphics. You just need to have point correspondence, and then the interpolation is similar to #1:
for (i = 0; i < points; i++)
{
    p[i].x = (1.0 - t) * p1[i].x + t * p2[i].x;
    p[i].y = (1.0 - t) * p1[i].y + t * p2[i].y;
}
where p1[i], p2[i] are the i-th points from each input geometry set and p[i] is the corresponding point of the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bézier curves) so the morph looks nicer. For example, see:
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you would need a geometry shader (or maybe even a tessellation shader): pass both polygons as a single primitive, and let the geometry shader emit the interpolated polygon for the rest of the pipeline.
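When the correspondence is a simple one-to-one mapping of points, a per-vertex lerp in the vertex shader is often enough; the following is a minimal HLSL sketch of that simpler variant (not the geometry-shader route above), assuming both point sets are supplied as two position streams of the same vertex and with all names invented for illustration:

cbuffer MorphParams : register(b0)
{
    float4x4 worldViewProj;
    float    t;               // morph parameter in [0,1]
};

struct VSInput
{
    float3 pos1 : POSITION0;  // point from the first shape
    float3 pos2 : POSITION1;  // matching point from the second shape
};

float4 VS(VSInput input) : SV_POSITION
{
    float3 p = lerp(input.pos1, input.pos2, t);   // same math as the loop above
    return mul(float4(p, 1.0), worldViewProj);
}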
morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then you handle each pixel as a particle and create its path from its position in img1 to its position in img2, parametrized by t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both... because there is only a very slim chance you will get exact color matches and matching pixel counts (i.e. identical histograms), which is improbable.
hybrid morphing
It is any combination of #1, #2 and #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...
I have a texture and I need to know its dimensions within a pixel shader. This seems like a job for GetDimensions. Here's the code:
Texture2D t: register(t4);
...
float w;
float h;
t.GetDimensions(w, h);
However, this results in an error:
X4532: cannot map expression to pixel shader instruction set
This error doesn't seem to be documented anywhere. Am I using the function incorrectly? Is there a different technique that I should use?
I'm working in shader model 4.0 level 9_1, via DirectX.
This error usually occurs when a function is not available in the shader stage or profile you are compiling for; GetDimensions() is not supported by the 4_0_level_9_x profiles.
Is there a different technique that I should use?
Use shader constants for the texture width and height. It also saves instructions in the shader, which may be better performance-wise.
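A minimal sketch of that approach, with the size supplied through a constant buffer (the cbuffer name and layout are placeholders; the texture register matches the question):

Texture2D t : register(t4);            // same register as in the question
SamplerState samp : register(s0);

cbuffer TextureInfo : register(b0)
{
    float2 texSize;                    // width, height, set from the C++ side
};

float4 PS(float2 uv : TEXCOORD0) : SV_TARGET
{
    float2 texelSize = 1.0 / texSize;                    // one-texel step in UV space
    return t.Sample(samp, uv + float2(texelSize.x, 0));  // e.g. sample the right-hand neighbour
}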
I am a beginner in graphics programming. I came across a case where a shader resource view is created from a texture and then this resource view is set as a VS resource. To summarize:
CreateTexture2D( D3D10_TEXTURE2D_DESC{ 640, 512, .... ID3D10Texture2D_0c2c0f30 )
CreateShaderResourceView( ID3D10Texture2D_0c2c0f30, ..., ID3D10ShaderResourceView_01742c80 )
VSSetShaderResources( 0, 1, [0x01742c80])
When and in what cases do we use textures in vertex shaders? Can anyone help?
Thanks.
That completely depends on the effect you are trying to achieve.
If you want to color your vertices individually you would usually use a vertex color component. But nothing is stopping you from sampling the color from a texture. (Except that it is probably slower.)
Also, don't let the name fool you. Textures can be used for a lot more than just coloring. They are basically precomputed functions. For example, you could use a Texture1D to submit a wave function to animate clothing or swaying grass/foliage. And since it is a texture, you can use a different wave for every object you draw, without switching shaders.
The Direct3D developers just want to provide you with a maximum of flexibility. And that includes using texture resources in all shader stages.
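As a concrete illustration of the grass/clothing idea, here is a vertex-shader sketch that reads a Texture1D wave and displaces the vertex with it; all names and the displacement formula are made up for illustration, and SampleLevel() is used because implicit-LOD Sample() is pixel-shader only:

Texture1D waveTex : register(t0);
SamplerState samp : register(s0);

cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj;
    float    time;
};

float4 VS(float3 pos : POSITION, float bendWeight : TEXCOORD0) : SV_POSITION
{
    // Explicit LOD 0: vertex shaders cannot compute derivatives for mip selection.
    float wave = waveTex.SampleLevel(samp, frac(time), 0).r;
    pos.x += wave * bendWeight;        // bend the vertex sideways by the wave value
    return mul(float4(pos, 1.0), worldViewProj);
}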
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide me the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Riemer's XNA tutorials on pixel shaders are a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing the buffer to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.
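For illustration, a minimal XNA-style effect (HLSL .fx) with a single color parameter might look like this; the parameter and technique names are placeholders:

float4x4 WorldViewProjection;
float4   FillColor;                     // set from C# via effect.Parameters["FillColor"]

float4 VS(float4 pos : POSITION0) : POSITION0
{
    return mul(pos, WorldViewProjection);
}

float4 PS() : COLOR0
{
    return FillColor;                   // every pixel of the current vertex buffer gets this color
}

technique Fill
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}

Draw each band's vertex buffer with a different FillColor value, or switch to a color-per-vertex vertex format and a shader that passes the vertex color through.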