How to specify 50 vec3 for each vertex in vertex shader? - attributes

I have to write GLSL code to handle some unusual calculations. The scene contains 2048 vertices whose positions are predefined, and there are 50 vec3 values per vertex that need to be available in the vertex shader as attributes. Since the maximum number of attribute locations per vertex is typically limited to 16, how can I supply that much data per vertex?
Maybe a texture lookup in the vertex shader is one way to solve this problem, but I am not sure whether it is the optimal approach.
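The texture-lookup idea mentioned above does work: the extra data is read as a resource instead of being streamed as attributes, so the attribute-location limit no longer applies. A minimal sketch in desktop GLSL, assuming the 50 vec3s per vertex are packed into a 50 × 2048 RGB floating-point texture (the sampler name and layout are made up for illustration):

```glsl
#version 330 core

layout(location = 0) in vec3 inPosition;   // regular per-vertex attribute

// Hypothetical 50 x 2048 RGB32F texture: texel (i, v) holds the i-th vec3
// belonging to vertex v.
uniform sampler2D uPerVertexData;

uniform mat4 uModelViewProjection;

void main()
{
    vec3 extra[50];
    for (int i = 0; i < 50; ++i) {
        // texelFetch reads an exact texel by integer coordinates, bypassing
        // filtering; gl_VertexID selects the row for the current vertex.
        extra[i] = texelFetch(uPerVertexData, ivec2(i, gl_VertexID), 0).rgb;
    }

    // ... the unusual per-vertex calculation using extra[] goes here ...

    gl_Position = uModelViewProjection * vec4(inPosition, 1.0);
}
```

A texture buffer object (samplerBuffer) or a shader storage buffer would serve the same purpose and avoids the 2D layout; which one is fastest depends on the hardware.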

Related

What do production game systems pack into the attribute fields in WebGL?

It looks like you can pass multiple things into the vertex buffer in addition to the position, such as color. What is a list of the attributes used by production game systems in complex environments? What is a good example? Some things that come to mind:
velocity/torque
mass/density
temp/energy
emission/absorption
Is there a common set of things?
There is no set of common things except positions, texture coordinates, and normals. Maybe also vertex colors, binormals, and tangents. Otherwise everything else is game specific.
Most games don't use shaders for physics, so velocity, torque, mass, density, temp, energy, emission, and absorption are not common inputs to a shader.
Though per-vertex attributes are very game specific, I am listing a few below by category; a sketch of a typical input layout follows the list.
Geometric data
Position
Normals
Texture coordinates (multiple, based on number of textures)
Tangent, bitangent (for normal map calculations)
Joint weight (joint id, weight)
Joint transform matrix (transformation matrix for joints)
Level of detail (tessellation)
Material data
Vertex color
reflection value
refraction value
Various light info (Emissive, ambient and other methods)
Physics data
mass / density
force
velocity
Particle data
index
age
lifetime
size
velocity
angular velocity
Please feel free to keep updating this space.
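To make the geometric and skinning entries above concrete, here is a minimal sketch of what such a vertex input declaration might look like in WebGL 2 / GLSL ES 3.00. The attribute names, location assignments, and the choice of a skinned, normal-mapped mesh are purely illustrative, not a standard layout:

```glsl
#version 300 es
precision highp float;

// Illustrative layout for a skinned, normal-mapped mesh (names are hypothetical).
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord0;      // more sets for lightmaps, detail maps, ...
layout(location = 3) in vec4 aTangent;        // xyz tangent, w = bitangent handedness
layout(location = 4) in vec4 aColor;
layout(location = 5) in uvec4 aJointIndices;  // up to four influencing joints
layout(location = 6) in vec4 aJointWeights;

uniform mat4 uModelViewProjection;

void main()
{
    // Real code would blend joint transforms using aJointIndices/aJointWeights first.
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
```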

Inputting Position Into the Pixel Shader

In most of the programs I have seen that make use of vertex position data in the Pixel Shader, there is a tendency to process it as a float4 vector. This restriction does not appear to be present in the other shaders. In the program I am currently writing, for instance, float2s are fed into the VS and float3s into the GS with no problem. But when I try to input this data into the PS, it rejects all forms except float4. Are other vector types not allowed into the PS? If so, why?
In a pixel shader, SV_Position is a system-generated value which must be a float4. When you use the SV_Position semantic in a vertex shader, it's basically just an alias for the old POSITION semantic and comes from the Input Assembler in whatever format the Input Layout specifies. The signatures of the vertex and geometry shader have to agree with each other, but they can use whatever format they like.
In other words, it has a special meaning for a pixel shader because it's the pixel position as computed by the rasterizer stage.
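A short HLSL sketch of that arrangement (the struct and constant-buffer names are made up): the vertex input position can be declared in whatever format the input layout provides, but the SV_Position the pixel shader receives from the rasterizer is always a float4.

```hlsl
cbuffer PerObject : register(b0)
{
    float4x4 WorldViewProj;
};

struct VSInput
{
    float3 Position : POSITION;    // format dictated by the input layout (could be float2)
};

struct PSInput
{
    float4 Position : SV_Position; // system value: must be float4 in the pixel shader
};

PSInput VSMain(VSInput input)
{
    PSInput output;
    output.Position = mul(float4(input.Position, 1.0f), WorldViewProj);
    return output;
}

float4 PSMain(PSInput input) : SV_Target
{
    // Here input.Position.xy holds the pixel coordinates produced by the rasterizer.
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
```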

What happens in the rasterizer stage?

I want to use Direct3D 11 to blend several images from multiple views into one texture, so I do multiple projections in the Vertex Shader and Geometry Shader stages; one projection's result is stored in SV_Position, the others in POSITION0, POSITION1, and so on. These positions are then used to sample the images.
Then at the Pixel Shader stage, the value in SV_Position is typically something like (307.5, 87.5), because it's in screen space. Since the render target is 500x500, the UV for sampling is (0.615, 0.175), which is correct. But the value in POSITION0 is something like (0.1312, 0.370); it's vertically flipped and offset, so I have to do (0.5 + x, 0.5 - y), and even then the projection is distorted and only roughly matches.
What does the rasterizer stage do to SV_Position?
The rasterizer stage takes the clip-space coordinates in SV_Position and divides them by w to obtain normalized device coordinates. In this space, X and Y values between -1.0 and +1.0 cover the whole output target, with Y going "up". That way you do not have to care about the exact output resolution in the shaders.
So as you realized, before a pixel is written to the target another transformation is performed. One that inverts the Y axis, scales X and Y and moves the origin to the top left corner.
In Direct3D11 the parameters for this transformation can be controlled through the ID3D11DeviceContext::RSSetViewports method.
If you need pixel coordinates in the pixel shader, you have to do the transformation yourself. To access the output resolution in the shader, bind it as a shader constant, for example.
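As a rough illustration of both points, the pixel shader below (resource names and register assignments are assumptions) recovers texture coordinates from SV_Position by dividing by a resolution supplied as a shader constant, and from a secondary clip-space position by doing the divide by w and the Y flip by hand:

```hlsl
cbuffer ScreenInfo : register(b0)
{
    float2 RenderTargetSize;   // e.g. (500, 500), set by the application
    float2 Padding;
};

Texture2D    SourceImage : register(t0);
SamplerState LinearClamp : register(s0);

float4 PSMain(float4 svPos  : SV_Position,
              float4 posAlt : POSITION0) : SV_Target
{
    // SV_Position already arrives in pixel coordinates (top-left origin).
    float2 uvFromSvPos = svPos.xy / RenderTargetSize;

    // POSITION0 is still an interpolated clip-space value: divide by its own w,
    // then map NDC [-1, 1] to UV [0, 1], flipping Y because NDC Y points up.
    float2 ndc = posAlt.xy / posAlt.w;
    float2 uvFromPos0 = float2(0.5f + 0.5f * ndc.x, 0.5f - 0.5f * ndc.y);

    // Both coordinate sets are used here only for demonstration; a real
    // multi-view blend would sample a different image with each of them.
    return 0.5f * (SourceImage.Sample(LinearClamp, uvFromSvPos) +
                   SourceImage.Sample(LinearClamp, uvFromPos0));
}
```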

Regarding graphics pipeline

In the graphics pipeline, after the vertex shader come primitive assembly -> clipping to the view frustum -> normalized device coordinates -> viewport transformation.
Now, in the vertex shader we multiply the object coordinates by the modelview and projection matrices. "The Projection Matrix transforms the vertices in view coordinates into the canonical view volume (a cube of side length 2, i.e. 2 × 2 × 2, centered at the origin and aligned with the three coordinate axes). Typically, this will be either an orthographic projection or a perspective projection. This transform includes multiplication by the projection transformation matrix followed by a normalization of each vertex, calculated by dividing each vertex by its own w coordinate."
Now, if this is done in the vertex shader only, why does it come after the vertex shader in the pipeline? Shouldn't it just be part of the vertex shader? If not, what is the output of the projection matrix multiplied by the vertex coordinates?
I'm not sure I understand your question, but after you multiply your points by the modelview and projection matrices in the vertex shader, your points are in clip coordinates. This is done because the graphics hardware can now determine which objects are visible and which are not. This is called clipping, and it is a separate step after the vertex shader. After this, it performs the perspective division (dividing the xyz coordinates by the homogeneous coordinate w, which is hard-wired in the GPU) to get normalized device coordinates in [-1, 1].
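To put the answer in code: the vertex shader only has to output clip coordinates; clipping and the divide by w happen in fixed-function stages after it runs. A minimal GLSL sketch (the uniform names are illustrative):

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;   // object-space position

uniform mat4 uModelView;
uniform mat4 uProjection;

void main()
{
    // gl_Position is left in clip space: w is generally not 1 under a
    // perspective projection. The GPU then clips against -w <= x, y, z <= w,
    // divides by w to reach NDC in [-1, 1], and applies the viewport
    // transform. None of that is written in the shader itself.
    gl_Position = uProjection * uModelView * vec4(aPosition, 1.0);
}
```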

How can I find the interpolated position between 4 vertices in a fragment shader?

I'm creating a shader with SharpDX (DirectX11 in C#) that takes a segment (2 points) from the output of a Vertex Shader and then passes them to a Geometry Shader, which converts this line into a rectangle (4 points) and assigns the four corners a texture coordinate.
After that I want a Fragment Shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine of the rectangle" (that is, along the line that passes through the middle of the rectangle).
The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. This happens because I have the texture coordinates interpolated, but I don't know how to use them to get the fragment I want, because the coordinate system of a) the texture and b) the position of my fragment in screen space are not the same.
Thanks a lot for any help.
I don't think it's possible to extract the position of the corresponding fragment at the spine of the rectangle directly. But for each fragment you already have an interpolated position (all you need to do is pass it on to the fragment shader, and it will be interpolated for each fragment) and interpolated texture coordinates. Why can't you use those? Why do you need the exact fragment coordinates?
Also, you can output some additional data from the geometry shader to do what you want.
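Following that second suggestion, one option is to pass the spine point itself along with each corner when the geometry shader expands the segment, so the pixel shader receives an interpolated spine position directly. A rough HLSL sketch; the semantics, the fixed half-width, and the UV handling are placeholders, not the asker's actual code:

```hlsl
struct GSInput
{
    float4 Position : SV_Position;   // segment endpoint in clip space
    float2 TexCoord : TEXCOORD0;
};

struct GSOutput
{
    float4 Position : SV_Position;   // corner of the expanded rectangle
    float2 TexCoord : TEXCOORD0;
    float4 SpinePos : TEXCOORD1;     // clip-space point on the spine for this corner
};

static const float HalfWidth = 0.05f;   // assumed half-width of the rectangle in NDC

[maxvertexcount(4)]
void GSMain(line GSInput input[2], inout TriangleStream<GSOutput> stream)
{
    // Segment direction in NDC, then its perpendicular for the sideways offset.
    float2 p0   = input[0].Position.xy / input[0].Position.w;
    float2 p1   = input[1].Position.xy / input[1].Position.w;
    float2 perp = normalize(float2(-(p1.y - p0.y), p1.x - p0.x)) * HalfWidth;

    for (int i = 0; i < 2; ++i)
    {
        GSOutput v;
        v.SpinePos = input[i].Position;   // the spine point this corner was expanded from
        v.TexCoord = input[i].TexCoord;   // a real shader would give the corners distinct Vs

        // Offset in clip space, scaled by w so the offset is constant after the divide.
        v.Position = input[i].Position + float4(perp * input[i].Position.w, 0.0f, 0.0f);
        stream.Append(v);

        v.Position = input[i].Position - float4(perp * input[i].Position.w, 0.0f, 0.0f);
        stream.Append(v);
    }
}
```

In the pixel shader the interpolated SpinePos can then be divided by its own w and compared against the fragment's depth, which sidesteps having to convert between the texture's coordinate system and screen space.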

Resources