wgpu Compute Write Direct to Surface Texture View - rust

I am relatively new to using gpu apis, even newer to wgpu, and wanted to mess around with compute shaders drawing to a surface.
However, it seems that this is not allowed directly?
At runtime, when I attempt to create a binding to the texture view obtained from the surface, I get an error stating that the STORAGE_BINDING usage bit is required; however, that usage is not allowed in the surface configuration. I have also tried having the shader accept the texture as a regular (sampled) texture rather than a storage texture, but that produced its own error about the binding being invalid.
Is there a good way to write directly to the surface texture, or is it necessary to create a separate storage texture? Does the render pipeline under the hood not write directly to the surface's texture view?
If a separate texture is required (which I am guessing it is), what is the best approach to follow?

A compute shader cannot write to the surface texture directly; putting pixels on the surface is the job of the render pipeline's fragment stage.
Because the swapchain uses double or multiple buffering, the surface texture changes from frame to frame. In addition, the surface texture's usage is RENDER_ATTACHMENT, which means it can only be used as one of a render pass's color_attachments.
A compute shader can only write its output to a storage buffer or a storage texture; either of those can then be bound to a fragment shader to get the result onto the surface.
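A common pattern, then, is to have the compute shader fill an intermediate storage texture and run a tiny fullscreen render pass that samples it into the surface. Below is a rough sketch of that frame loop against a recent wgpu version (exact field names shift slightly between releases); device, queue, surface, config, the two pipelines and their bind groups are assumed to already exist and are placeholders for your own setup.

// Intermediate texture the compute shader is allowed to write to.
let storage_tex = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("compute target"),
    size: wgpu::Extent3d {
        width: config.width,
        height: config.height,
        depth_or_array_layers: 1,
    },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8Unorm, // a format that supports STORAGE_BINDING
    usage: wgpu::TextureUsages::STORAGE_BINDING   // written by the compute shader
        | wgpu::TextureUsages::TEXTURE_BINDING,   // sampled by the blit fragment shader
    view_formats: &[],
});

// Each frame:
let frame = surface.get_current_texture().expect("acquire frame");
let frame_view = frame.texture.create_view(&wgpu::TextureViewDescriptor::default());
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
{
    // Compute pass: the shader writes into `storage_tex` through its storage-texture binding.
    let mut cpass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
    cpass.set_pipeline(&compute_pipeline);
    cpass.set_bind_group(0, &compute_bind_group, &[]);
    cpass.dispatch_workgroups((config.width + 7) / 8, (config.height + 7) / 8, 1);
}
{
    // Render pass: a fullscreen triangle samples `storage_tex` and writes to the surface view.
    let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        label: Some("blit to surface"),
        color_attachments: &[Some(wgpu::RenderPassColorAttachment {
            view: &frame_view,
            resolve_target: None,
            ops: wgpu::Operations {
                load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                store: wgpu::StoreOp::Store,
            },
        })],
        ..Default::default()
    });
    rpass.set_pipeline(&render_pipeline);
    rpass.set_bind_group(0, &blit_bind_group, &[]);
    rpass.draw(0..3, 0..1);
}
queue.submit(Some(encoder.finish()));
frame.present();

As an aside, some backends do report STORAGE_BINDING among surface.get_capabilities(&adapter).usages; when it is present you can add that usage to the SurfaceConfiguration and bind the frame's view directly from the compute shader, but this is not portable, so the intermediate-texture blit above is the safer default.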

Related

How to write values to depth buffer in godot fragment shader?

How do you specify the depth value in the fragment shader if, for example, you would like to render a texture of a sphere that also affects the depth buffer along the camera's z-direction?
In OpenGL you can use gl_FragDepth. Is there a similar builtin variable in godot?
Edit:
After posting the question I found that there is a variable DEPTH that appears to have been merged. I have not had time to try it yet. If you have experience using it successfully, I would accept that as an answer.
Yes, you can write to DEPTH from the fragment function of a spatial material's shader.
Godot will, of course, also draw depth by default. You can control that with the render modes depth_draw_*, see Depth Draw Mode.
And if you want to read depth, you can use DEPTH_TEXTURE. The article Screen Reading Shaders has an example.
Refer to Spatial Shader for the list of available variables and options in spatial shaders.

SDL2 / Surface / Texture / Render

I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL had the window represented by its surface, all pictures were surfaces, and all image operations and blits were surface-to-surface. In SDL2 we have surfaces and textures. If I understand correctly, surfaces live in RAM and textures live in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2, because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations:
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that also happen at the surface level?
Should I blit a surface to a texture only when I want to render it to the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time and all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as a source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, although mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to copy between textures if it is not done very often and you want to render the result accelerated later.
When using textures you always render to an SDL_Renderer, and it is always better to pre-load surfaces into textures.
As explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.
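To make the surface → texture → renderer flow concrete, here is a minimal sketch written with the Rust sdl2 bindings purely for illustration (the C API has the same shape: SDL_CreateTextureFromSurface, SDL_RenderCopy, SDL_RenderPresent); the window size, rectangle and colors are arbitrary.

use sdl2::pixels::{Color, PixelFormatEnum};
use sdl2::rect::Rect;
use sdl2::surface::Surface;

fn main() -> Result<(), String> {
    let sdl = sdl2::init()?;
    let video = sdl.video()?;
    let window = video
        .window("surface vs texture", 640, 480)
        .build()
        .map_err(|e| e.to_string())?;
    let mut canvas = window.into_canvas().build().map_err(|e| e.to_string())?;
    let texture_creator = canvas.texture_creator();

    // 1. Do pixel-level work on a Surface (CPU memory).
    let mut surface = Surface::new(640, 480, PixelFormatEnum::RGBA8888)?;
    surface.fill_rect(Rect::new(100, 100, 200, 150), Color::RGB(200, 40, 40))?;

    // 2. Upload the finished surface to a Texture (GPU memory) once.
    let texture = texture_creator
        .create_texture_from_surface(&surface)
        .map_err(|e| e.to_string())?;

    // 3. Render with the accelerated path: copy the texture to the renderer.
    canvas.set_draw_color(Color::RGB(0, 0, 0));
    canvas.clear();
    canvas.copy(&texture, None, None)?; // None, None = whole texture to whole target
    canvas.present();

    std::thread::sleep(std::time::Duration::from_secs(2));
    Ok(())
}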

Solicitation for suggestions on writing a purely hobbyist rendering engine

I've created something of a simplistic renderer on my own using OpenGL ES 2.0. Essentially, it's just a class for rendering quads according to a given sprite texture. To elaborate, it's really just a single object that accepts objects representing quads. Each quad object maintains a world transform and an object transform matrix, furnishes methods for transforming them over a given number of frames, and also specifies texture offsets into the sprite. The quad class also maintains a list of transform operations to execute on its matrices. The renderer class then reads all of these properties from each quad and sets up a VBO to draw all the quads in the render list.
For example:
Quad* q1 = new Quad();
Quad* q2 = new Quad();
q1->translate(vector3( .1,  .3, 0), 30); // Move the quad to the right and up for 30 frames.
q2->translate(vector3(-.1, -.3, 0), 30); // Move the quad down and to the left for 30 frames.
Renderer renderer;
renderer.addQuads({q1, q2});
It's more complex than this, but you get the simple idea.
From an implementation perspective, on each frame it transforms the base vertices of each object according to its instructions, loads them all into a VBO (including alpha information), and passes them to a shader program to draw all the quads at once.
This obviously isn't what I would call a rendering engine, but it performs a similar task, just for rendering 2D quads instead of 3D geometry. I'm just curious whether I'm on the right track for developing a makeshift rendering engine. I agree that in most cases it's great to use an established rendering engine to get started in understanding them, but from my point of view, I like to have some understanding of how things are implemented, as opposed to learning something prebuilt and then learning how it works.
The problem with this approach is that adding new geometry, textures or animations requires writing code. It should be possible to create content for a game engine using established tools, like 3DS, Maya or Blender, which are completely interactive. This requires reading and parsing some standard file format like Collada. I don't want to squash your desire to learn by implementing code yourself, but you really should take a look at the PowerVR SDK, which provides a lot of the important parts for building game engines. The source code is provided and it's free.

DirectX: Vertex Shader using textures

I am a beginner in Graphics Programming. I came across a case where a "ResourceView" is created out of texture and then this resource view is set as VS Resource. To summarize:
CreateTexture2D( D3D10_TEXTURE2D_DESC{ 640, 512, .... ID3D10Texture2D_0c2c0f30 )
CreateShaderResourceView( ID3D10Texture2D_0c2c0f30, ..., ID3D10ShaderResourceView_01742c80 )
VSSetShaderResources( 0, 1, [0x01742c80] )
When and in what cases do we use textures in vertex shaders? Can anyone help?
Thanks.
That completely depends on the effect you are trying to achieve.
If you want to color your vertices individually you would usually use a vertex color component. But nothing is stopping you from sampling the color from a texture. (Except that it is probably slower.)
Also, don't let the name fool you. Textures can be used for a lot more than just coloring; they are basically precomputed functions. For example, you could use a Texture1D to supply a wave function for animating clothing or swaying grass/foliage. And since it is a texture, you can use a different wave for every object you draw without switching shaders.
The Direct3D developers just want to provide you with a maximum of flexibility. And that includes using texture resources in all shader stages.
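Not DirectX, but since the headline question in this thread is about wgpu: there, the only thing that makes a texture usable from the vertex stage is including that stage in the binding's visibility. A small sketch, assuming device is an existing wgpu::Device and the "wave table" naming is purely illustrative:

// Bind group layout whose texture binding is visible to the vertex stage,
// e.g. a 1D wave table read to displace vertices.
let layout = device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
    label: Some("vertex-stage displacement texture"),
    entries: &[wgpu::BindGroupLayoutEntry {
        binding: 0,
        visibility: wgpu::ShaderStages::VERTEX, // read from the vertex shader
        ty: wgpu::BindingType::Texture {
            sample_type: wgpu::TextureSampleType::Float { filterable: false },
            view_dimension: wgpu::TextureViewDimension::D1,
            multisampled: false,
        },
        count: None,
    }],
});

In the vertex shader itself you would read it with textureLoad or textureSampleLevel, since implicit-derivative sampling is only available in the fragment stage.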

DirectX alpha blending (deferred rendering)

I'm having a major issue which has been bugging me for a while now.
My problem is my game uses a deferred rendering engine which makes it very difficult to do alpha blending.
The only way I can think of to solve this is to render the scene (including the depth map, normal map and diffuse map) without any objects that have alpha.
Then, for each polygon whose texture has an alpha component, disable the z-buffer and render it out including normals, depth and colour, and wherever alpha is 0 output nothing to the depth, normal and colour buffers. Perform the lighting calculations and other deferred effects on these two separate sets of textures, then combine the colour buffers, using the depth map to check which pixel is visible.
This idea would be extremely costly (not to mention it has some severe shortcomings), so it should obviously be reserved for as few cases as possible, which makes rendering forest areas out of the question. However, if there is no better solution, I have one question.
When doing alpha blending with DirectX, is there a shader/device state I can set that lets me avoid writing to the depth/normal/colour buffer when I want to? The issue is that the pixel shader has to output to all of its specified render targets, so if it is set to output to the 3 render targets it must do so, which will overwrite the previous colour value for that texel in the texture.
If there is no blend state that allows me to do this, it would mean I would have to copy the normal, texture and depth maps to keep the scene, then render to a new texture, depth and normal map, and finally combine the two sets based on the alpha and depth values.
I guess all I really want to know is whether there is a simple, sure-fire and possibly cheap way to render alphas in a deferred renderer?
The usual approach to drawing transparent geometry in a deferred renderer is to render it in a separate pass using ordinary forward rendering rather than the deferred path. That pass typically keeps the depth test against the opaque scene's depth buffer but disables depth writes, and draws the transparent geometry sorted back to front with standard alpha blending.
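To make those states concrete, here is a sketch of the two relevant pieces of pipeline state for the transparent pass, expressed with wgpu (the API from this thread's headline question); in Direct3D the equivalents are a blend state using SrcAlpha / InvSrcAlpha and a depth-stencil state with depth writes disabled. surface_format stands in for whatever your swapchain/backbuffer format is.

// Color target of the transparent forward pass: classic "over" alpha blending.
let color_target = wgpu::ColorTargetState {
    format: surface_format,
    blend: Some(wgpu::BlendState::ALPHA_BLENDING), // src_alpha / one_minus_src_alpha
    write_mask: wgpu::ColorWrites::ALL,
};

// Depth state: keep testing against the depth written by the opaque G-buffer pass,
// but do not write depth, so transparent surfaces don't occlude each other in the buffer.
let depth_state = wgpu::DepthStencilState {
    format: wgpu::TextureFormat::Depth32Float,
    depth_write_enabled: false,
    depth_compare: wgpu::CompareFunction::Less,
    stencil: wgpu::StencilState::default(),
    bias: wgpu::DepthBiasState::default(),
};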
