How to access passed-in fragment shader data in Metal?

Just as you use the qualifier [[ buffer(n) ]] to access information passed to a vertex shader in Metal, how can I access data passed in using setFragmentBuffer or setFragmentBytes? buffer isn't a valid qualifier for the fragment shader, and apparently texture and color both have other usage scenarios. I just want to pass in my own custom data, such as a uniform color or the current system time, for an entire primitive drawn.

Yes, buffer is a valid qualifier for a fragment shader. What makes you think it isn't?
You do it exactly the same way for a fragment shader as you do for a vertex shader.
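As a minimal sketch (the struct name Uniforms and buffer index 0 are illustrative assumptions, not from the question):

#include <metal_stdlib>
using namespace metal;

struct Uniforms {
    float4 color;   // uniform color for the whole primitive
    float  time;    // current system time, set by the app each frame
};

fragment float4 my_fragment(constant Uniforms &uniforms [[ buffer(0) ]])
{
    // The buffer contents are the same for every fragment of this draw call.
    return uniforms.color;
}

On the CPU side the matching call would be renderEncoder.setFragmentBytes(&uniforms, length: MemoryLayout<Uniforms>.stride, index: 0), where index 0 corresponds to [[ buffer(0) ]].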

Related

How to write values to depth buffer in godot fragment shader?

How do you specify the depth value in the fragment shader if, for example, you would like to render a texture of a sphere that also affects the depth buffer in the camera's z-direction?
In OpenGL you can use gl_FragDepth. Is there a similar builtin variable in godot?
Edit:
After posting the question I found that there is a variable DEPTH that seems to have been merged. I have not had time to try it yet. If you have any experience using it successfully, I would accept that answer.
Yes, you can write to DEPTH from the fragment function of the shader of a spatial material.
Godot will, of course, also draw depth by default. You can control that with the render modes depth_draw_*, see Depth Draw Mode.
And if you want to read depth, you can use DEPTH_TEXTURE. The article Screen Reading Shaders has an example.
Refer to Spatial Shader for the list of available variables and options in spatial shaders.
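A minimal sketch of a spatial shader that writes DEPTH (the albedo and depth value here are purely illustrative):

shader_type spatial;

void fragment() {
    ALBEDO = vec3(1.0, 0.0, 0.0);
    // Writing FRAGCOORD.z unchanged reproduces the default depth;
    // replace it with your own computed value, e.g. the depth of a
    // point on your sphere.
    DEPTH = FRAGCOORD.z;
}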

wgpu Compute Write Direct to Surface Texture View

I am relatively new to using GPU APIs, even newer to wgpu, and wanted to mess around with compute shaders drawing to a surface.
However, it seems that this is not allowed directly?
At run time, attempting to create a binding to the texture view from the surface produces an error stating that the STORAGE_BINDING usage bit is necessary; however, that bit is not allowed in the surface configuration. I have also attempted to have the shader accept the texture as a regular texture rather than a storage texture, but that came with its own error of the binding being invalid.
Is there a good way to write directly to the surface texture, or is it necessary to create a separate storage texture? Does the render pipeline under the hood not write directly to the surface's texture view?
If a separate texture is needed (which I am guessing it is), is there a best method to follow?
A compute shader cannot write to the surface texture directly; that is the responsibility of the fragment shader.
Since the swapchain uses double or multi-buffering, the surface texture changes from frame to frame. Also, the usage of the surface texture is RENDER_ATTACHMENT, which means it can only be used for a RenderPass's color_attachments.
A compute shader can only output to a storage buffer or a storage texture; either of these can then be bound to a fragment shader for use.
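A minimal WGSL sketch of that two-step approach; the binding indices, the rgba8unorm format, and the 800x600 size are illustrative assumptions. Pass 1 (compute) writes into an intermediate texture created with STORAGE_BINDING | TEXTURE_BINDING usage:

@group(0) @binding(0)
var output_tex: texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn cs_main(@builtin(global_invocation_id) id: vec3<u32>) {
    let color = vec4<f32>(f32(id.x) / 800.0, f32(id.y) / 600.0, 0.5, 1.0);
    textureStore(output_tex, vec2<i32>(id.xy), color);
}

Pass 2 (render) samples that texture in a fullscreen triangle and writes to the surface:

struct VsOut {
    @builtin(position) pos: vec4<f32>,
    @location(0) uv: vec2<f32>,
};

@vertex
fn vs_main(@builtin(vertex_index) vi: u32) -> VsOut {
    // Fullscreen triangle, no vertex buffer needed.
    let uv = vec2<f32>(f32((vi << 1u) & 2u), f32(vi & 2u));
    var o: VsOut;
    o.pos = vec4<f32>(uv * 2.0 - 1.0, 0.0, 1.0);
    o.uv = vec2<f32>(uv.x, 1.0 - uv.y);
    return o;
}

@group(0) @binding(0) var src_tex: texture_2d<f32>;
@group(0) @binding(1) var src_sampler: sampler;

@fragment
fn fs_main(v: VsOut) -> @location(0) vec4<f32> {
    return textureSample(src_tex, src_sampler, v.uv);
}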

Inputting Position Into the Pixel Shader

In most of the programs I have seen that make use of vertex position data in the Pixel Shader, there is a tendency to process it as a float4 vector. This restriction does not appear to be present in the other shaders. In the program I am currently writing, for instance, float2s are input into the VS and float3s into the GS with no problem. But when I try to input this data into the PS, it rejects all forms except float4. Are other vector types not allowed into the PS? If so, why?
In a pixel shader, SV_Position is a system-generated value which must be a float4. When you use the SV_Position semantic in a vertex shader, it's basically just an alias for the old POSITION semantic and comes from the Input Assembler in whatever format the Input Layout specifies. The binding between a vertex and geometry shader has to agree, but can be whatever format you choose.
In other words, it has a special meaning for a pixel shader because it's the pixel position as computed by the rasterizer stage.
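A minimal HLSL sketch (struct and semantic names are illustrative): SV_Position must be float4 on the pixel-shader side, while custom data can use any format under a user semantic.

struct VSInput
{
    float2 pos : POSITION;    // float2 is fine at the Input Assembler
};

struct PSInput
{
    float4 pos : SV_Position; // rasterizer output, must be float4 here
    float2 uv  : TEXCOORD0;   // user data, any format you like
};

PSInput VSMain(VSInput input)
{
    PSInput output;
    output.pos = float4(input.pos, 0.0f, 1.0f);
    output.uv  = input.pos * 0.5f + 0.5f;
    return output;
}

float4 PSMain(PSInput input) : SV_Target
{
    return float4(input.uv, 0.0f, 1.0f);
}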

How can I find the interpolated position between 4 vertices in a fragment shader?

I'm creating a shader with SharpDX (DirectX11 in C#) that takes a segment (2 points) from the output of a Vertex Shader and then passes it to a Geometry Shader, which converts this line into a rectangle (4 points) and assigns the four corners a texture coordinate.
After that I want a Fragment Shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, the line that passes through the middle of the rectangle).
The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. I have the interpolated texture coordinates, but I don't know how to use them to get the fragment I want, because the coordinate systems of (a) the texture and (b) the position of my fragment in screen space are not the same.
Thanks a lot for any help.
I don't think it's possible to extract the position of the corresponding fragment at the spine of the rectangle. But every fragment already has an interpolated position (all you need to get it is to pass it to the fragment shader, where it will be interpolated for each fragment) as well as interpolated texture coordinates. Why can't you use those? Why do you need the exact fragment coordinates?
Also, you can generate additional data in the geometry shader to do what you want.
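For example, a minimal sketch of that second suggestion: the geometry shader stores, in each corner vertex, the clip-space position of the spine point it was extruded from, so after interpolation every fragment can compare itself against the spine. The names, the half-width of 0.05, and the simplified 2D clip-space extrusion are all illustrative assumptions:

struct GSOutput
{
    float4 pos      : SV_Position;
    float2 uv       : TEXCOORD0;
    float4 spinePos : TEXCOORD1;  // extra data generated in the GS
};

[maxvertexcount(4)]
void GSMain(line float4 input[2] : SV_Position,
            inout TriangleStream<GSOutput> stream)
{
    // Extrude the segment sideways into a quad (2D, no perspective
    // handling, half-width chosen arbitrarily).
    float2 dir    = normalize(input[1].xy - input[0].xy);
    float4 offset = float4(-dir.y, dir.x, 0.0f, 0.0f) * 0.05f;

    GSOutput v;
    v.spinePos = input[0];  // each corner remembers its spine point
    v.pos = input[0] + offset; v.uv = float2(0, 0); stream.Append(v);
    v.pos = input[0] - offset; v.uv = float2(0, 1); stream.Append(v);

    v.spinePos = input[1];
    v.pos = input[1] + offset; v.uv = float2(1, 0); stream.Append(v);
    v.pos = input[1] - offset; v.uv = float2(1, 1); stream.Append(v);
}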

HLSL: Getting texture dimensions in a pixel shader

I have a texture and I need to know its dimensions within a pixel shader. This seems like a job for GetDimensions. Here's the code:
Texture2D t: register(t4);
...
float w;
float h;
t.GetDimensions(w, h);
However, this results in an error:
X4532: cannot map expression to pixel shader instruction set
This error doesn't seem to be documented anywhere. Am I using the function incorrectly? Is there a different technique that I should use?
I'm working in shader model 4.0 level 9_1, via DirectX.
This error usually occurs if a function is not available in the calling shader stage.
Is there a different technique that I should use?
Use shader constants for the texture width and height instead. That also saves instructions in the shader, which may be better for performance.
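A minimal sketch of that approach; the cbuffer name, register slots, and field names are illustrative:

cbuffer TextureInfo : register(b0)
{
    float2 texSize;    // width, height, filled in from the CPU side
    float2 texelSize;  // 1.0 / texSize, precomputed to avoid a divide
};

Texture2D t : register(t4);
SamplerState s : register(s0);

float4 PSMain(float2 uv : TEXCOORD0) : SV_Target
{
    // Example use: sample the texel one step to the right.
    return t.Sample(s, uv + float2(texelSize.x, 0.0f));
}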