So a vertex shader is executed for each vertex and a fragment shader for each fragment (right?).
How many times is a geometry shader executed?
It's executed once for each primitive (triangle, line or point) after the vertex shader has transformed the constituent vertices.
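For example, a minimal pass-through geometry shader (an HLSL sketch; the struct and names are my own) makes the per-primitive granularity visible: the whole triangle arrives as an array in a single invocation.

struct VSOut
{
    float4 pos : SV_Position;
};

// Invoked once per input triangle; all three transformed vertices
// arrive together as an array.
[maxvertexcount(3)]
void GS(triangle VSOut input[3], inout TriangleStream<VSOut> stream)
{
    for (int i = 0; i < 3; ++i)
        stream.Append(input[i]); // emit the primitive unchanged
    stream.RestartStrip();
}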
In most of the programs I have seen that use vertex position data in the Pixel Shader, there is a tendency to process it as a float4 vector. This restriction does not appear to be present in the other shaders. In the program I am currently writing, for instance, float2s are passed into the VS and float3s into the GS with no problem. But when I try to pass this data into the PS, it rejects all forms except float4. Are other vector types not allowed into the PS? If so, why?
In a pixel shader, SV_Position is a system-generated value which must be a float4. When you use the SV_Position semantic in a vertex shader, it's basically just an alias for the old POSITION semantic and comes from the Input Assembler in whatever format the Input Layout specifies. The signature binding between a vertex shader and a geometry shader has to agree, but the value can be in whatever format you like.
In other words, it has a special meaning for a pixel shader because it's the pixel position as computed by the rasterizer stage.
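To illustrate, here is a sketch of a pixel shader input signature (the names are my own): only SV_Position is pinned to float4, while the other interpolants can be whatever width you declared upstream.

struct PSIn
{
    float4 pos : SV_Position; // filled in by the rasterizer; must be float4
    float2 uv  : TEXCOORD0;   // ordinary interpolant; float1-4 all work
};

float4 PS(PSIn input) : SV_Target
{
    // input.pos.xy holds the pixel's screen-space position,
    // input.pos.z its depth.
    return float4(input.uv, 0.0, 1.0);
}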
I have to design GLSL code that handles some unusual calculations. The scene contains 2048 vertices with predefined positions, and each vertex carries 50 vec3s as attributes to be used in the vertex shader. Since the maximum number of attribute locations per vertex is typically limited to 16, how can I supply this much data per vertex?
Maybe a texture lookup in the vertex shader is one way to solve this problem, but I am not sure whether it is the optimal approach.
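A minimal GLSL sketch of that texture-lookup idea, assuming the 50 vec3s are packed into a 50 x 2048 RGB32F texture where texel (i, v) holds the i-th extra vec3 of vertex v (uExtraData, uMvp, and the final sum are made-up placeholders):

#version 330 core

layout(location = 0) in vec3 position;

uniform sampler2D uExtraData; // 50 x 2048, one row of texels per vertex
uniform mat4 uMvp;

void main()
{
    vec3 sum = vec3(0.0);
    for (int i = 0; i < 50; ++i)
    {
        // texelFetch does an unfiltered lookup at integer texel
        // coordinates; gl_VertexID picks this vertex's row.
        sum += texelFetch(uExtraData, ivec2(i, gl_VertexID), 0).rgb;
    }
    gl_Position = uMvp * vec4(position + sum, 1.0);
}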
In the graphics pipeline, after the vertex shader come primitive assembly -> clipping to the view frustum -> normalized device coordinates -> viewport transformation.
Now, in the vertex shader we multiply the object coordinates by the modelview and projection matrices. "The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of sides 2 x 2 x 2, centered at the origin and aligned with the three coordinate axes). Typically, this will be either an orthographic projection or a perspective projection. This transform includes multiplication by the projection transformation matrix followed by a normalization of each vertex, calculated by dividing each vertex by its own w coordinate."
Now, if this is done in the vertex shader, why is it listed after the vertex shader stage in the pipeline? Shouldn't it just be part of the vertex shader? If not, what is the output of the projection matrix multiplied by the vertex coordinates?
I'm not sure I understand your question, but after you multiply your points by the modelview and projection matrices in the vertex shader, your points will be in clip coordinates. This is done because the graphics hardware can then determine which objects are visible and which are not. This is called clipping, and it is a separate step after the vertex shader. After this, it performs the perspective division (dividing the xyz coordinates by the homogeneous coordinate w; this is hard-wired inside the GPU) to get normalized device coordinates in [-1, 1].
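A minimal HLSL vertex shader sketch of this (buffer and matrix names are assumptions): the shader's responsibility ends at clip coordinates; clipping and the divide by w are fixed-function steps that run afterwards.

cbuffer Transforms : register(b0)
{
    float4x4 ModelView;
    float4x4 Projection;
};

float4 VS(float3 objPos : POSITION) : SV_Position
{
    float4 viewPos = mul(ModelView, float4(objPos, 1.0));
    // The returned value is in clip coordinates; there is no
    // division by w here - the hardware does that after clipping.
    return mul(Projection, viewPos);
}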
I'm creating a shader with SharpDX (DirectX 11 in C#) that takes a segment (2 points) from the output of a Vertex Shader and passes it to a Geometry Shader, which converts this line into a rectangle (4 points) and assigns each of the four corners a texture coordinate.
After that I want a Fragment Shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, along the line that passes through the middle of the rectangle).
The problem is that I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. I have the interpolated texture coordinates, but I don't know how to use them to get the fragment I want, because the coordinate systems of (a) the texture and (b) my fragment's position in screen space are not the same.
Thanks a lot for any help.
I don't think it's possible to extract the position of the corresponding fragment at the spine of the rectangle. But every fragment already has an interpolated position (all you need to do is pass it along to the fragment shader, and it will be interpolated for each fragment) as well as interpolated texture coordinates. Why can't you use those? Why do you need the exact fragment coordinates?
Also, you can generate some additional data in the geometry shader to do what you want.
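For instance (a rough HLSL sketch, not your actual code; HalfWidth and every name here are assumptions), the geometry shader can write the clip-space spine point into an extra interpolant, so each fragment directly receives the spine position it corresponds to:

struct GSIn  { float4 pos : SV_Position; };

struct GSOut
{
    float4 pos      : SV_Position;
    float2 uv       : TEXCOORD0;
    float4 spinePos : TEXCOORD1; // interpolates along the spine
};

static const float HalfWidth = 0.05;

[maxvertexcount(4)]
void GS(line GSIn input[2], inout TriangleStream<GSOut> stream)
{
    // Offset sideways in clip space; a real version would account
    // for perspective (the w components) when computing the normal.
    float2 dir    = normalize(input[1].pos.xy - input[0].pos.xy);
    float4 offset = float4(-dir.y, dir.x, 0, 0) * HalfWidth;

    GSOut v;
    v.spinePos = input[0].pos; // both corners share the spine endpoint
    v.pos = input[0].pos + offset; v.uv = float2(0, 0); stream.Append(v);
    v.pos = input[0].pos - offset; v.uv = float2(0, 1); stream.Append(v);

    v.spinePos = input[1].pos;
    v.pos = input[1].pos + offset; v.uv = float2(1, 0); stream.Append(v);
    v.pos = input[1].pos - offset; v.uv = float2(1, 1); stream.Append(v);

    stream.RestartStrip();
}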
I have a texture and I need to know its dimensions within a pixel shader. This seems like a job for GetDimensions. Here's the code:
Texture2D t: register(t4);
...
float w;
float h;
t.GetDimensions(w, h);
However, this results in an error:
X4532: cannot map expression to pixel shader instruction set
This error doesn't seem to be documented anywhere. Am I using the function incorrectly? Is there a different technique that I should use?
I'm working in shader model 4.0 level 9_1, via DirectX.
This error usually occurs when a function is not available in the calling shader stage or shader target. In this case, the 4_0_level_9_1 profile compiles down to the Direct3D 9 era instruction set, which has no instruction that GetDimensions can map to.
Is there a different technique that I should use?
Use shader constants for texture width and height. It saves instructions in the shader, which may also be better performance-wise.
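A sketch of that approach (the buffer layout and names are my own; the application would refill the buffer whenever the bound texture changes):

cbuffer TextureInfo : register(b0)
{
    float2 TextureSize; // width, height in texels
    float2 TexelSize;   // 1.0 / TextureSize, precomputed on the CPU
};

Texture2D t : register(t4);
SamplerState s : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Example: sample the texel immediately to the right without
    // ever calling GetDimensions.
    return t.Sample(s, uv + float2(TexelSize.x, 0.0));
}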