I am a beginner in graphics programming. I came across a case where a "ResourceView" is created from a texture and then this resource view is set as a VS resource. To summarize:
CreateTexture2D( D3D10_TEXTURE2D_DESC{ 640, 512, .... **ID3D10Texture2D_0c2c0f30** )
CreateShaderResourceView( **ID3D10Texture2D_0c2c0f30**, ..., **ID3D10ShaderResourceView_01742c80** )
VSSetShaderResources( 0, 1, [**0x01742c80**])
When and in what cases do we use textures in vertex shaders? Can anyone help?
Thanks.
That completely depends on the effect you are trying to achieve.
If you want to color your vertices individually you would usually use a vertex color component. But nothing is stopping you from sampling the color from a texture. (Except that it is probably slower.)
Also, don't let the name fool you. Textures can be used for a lot more than just coloring. They are basically precomputed functions. For example, you could use a Texture1D to submit a wave function to animate clothing or swaying grass/foliage. And since it is a texture, you can use a different wave for every object you draw, without switching shaders.
The Direct3D developers just want to provide you with a maximum of flexibility. And that includes using texture resources in all shader stages.
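For instance, here is a minimal C++/Direct3D 10 sketch of that wave-texture idea (assuming an existing ID3D10Device* device and a CPU-side wave table; the function name is made up), binding a Texture1D to the vertex shader stage just like the trace in the question:

#include <d3d10.h>

void BindWaveTexture(ID3D10Device* device, const float* waveTable, UINT samples)
{
    D3D10_TEXTURE1D_DESC desc = {};
    desc.Width     = samples;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format    = DXGI_FORMAT_R32_FLOAT;        // one float per wave sample
    desc.Usage     = D3D10_USAGE_IMMUTABLE;
    desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

    D3D10_SUBRESOURCE_DATA init = {};
    init.pSysMem = waveTable;

    ID3D10Texture1D* tex = nullptr;
    ID3D10ShaderResourceView* srv = nullptr;
    device->CreateTexture1D(&desc, &init, &tex);
    device->CreateShaderResourceView(tex, nullptr, &srv);

    // Bind it to slot 0 of the *vertex* shader stage; the VS can now read the
    // wave function per vertex (e.g. to offset grass or cloth vertices).
    device->VSSetShaderResources(0, 1, &srv);

    tex->Release(); // the SRV keeps its own reference; real code would also manage 'srv'
}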
Related
This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D tends to minimize draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bézier path, Skia will try to use a tessellation shader if possible (which will need a new draw call if the previous element you were rendering was a rectangle, since you change pipeline state).
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer; it switches to a new draw call only if you change brush type (if you change from one solid color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or when you switch from shapes to text (since it then needs to send texture atlases).
Note that when tessellating a Bézier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on complex self-intersecting paths).
In the case of a rounded rectangle it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as to anti-alias on a non-MSAA surface (this is done at the mesh level, with some small triangles carrying alpha). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
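To make the "just triangles" point concrete, here is a rough C++ sketch, purely illustrative and not D2D's actual code, of how one quarter-circle corner of a rounded rectangle could be tessellated into a small fan on the CPU:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Emit a triangle fan approximating one rounded corner.
// 'center' is the corner circle's center, 'radius' its radius,
// 'startAngle' the angle (radians) where the quarter arc begins.
std::vector<Vec2> TessellateCorner(Vec2 center, float radius,
                                   float startAngle, int segments)
{
    std::vector<Vec2> tris;
    const float step = (3.14159265f / 2.0f) / segments; // quarter circle
    for (int i = 0; i < segments; ++i) {
        float a0 = startAngle + i * step;
        float a1 = a0 + step;
        // One triangle: the corner center plus two points on the arc.
        tris.push_back(center);
        tris.push_back({ center.x + radius * std::cos(a0),
                         center.y + radius * std::sin(a0) });
        tris.push_back({ center.x + radius * std::cos(a1),
                         center.y + radius * std::sin(a1) });
    }
    return tris;
}

The full rounded rectangle would be four such fans plus a few rectangles (also triangles) for the straight parts, all appended into one vertex buffer.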
Since D2D prefers to use triangle strips instead of triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching these, so if you call several draw-text functions in a row, it will try to merge them into a single call as well.
I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL represented the window by its surface, all pictures were surfaces, and all image operations and blits were surface to surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2 because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations:
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that be done at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time and all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as a source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which is done entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to do the copy on textures if it is not done very often and you want to render the result accelerated later.
When working with textures you will always render to an SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.
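As a rough C++ sketch of that "one surface, then one texture" flow (assuming an existing SDL_Renderer* named renderer and a window of size w x h; SDL_CreateRGBSurfaceWithFormat needs SDL 2.0.5+):

#include <SDL.h>

void presentCanvas(SDL_Renderer* renderer, int w, int h)
{
    // Do all pixel manipulation / surface-to-surface blits on a single surface.
    SDL_Surface* canvas = SDL_CreateRGBSurfaceWithFormat(0, w, h, 32, SDL_PIXELFORMAT_RGBA32);
    SDL_FillRect(canvas, nullptr, SDL_MapRGBA(canvas->format, 200, 30, 30, 255));

    // Upload the finished surface to a texture and draw it with the renderer.
    SDL_Texture* frame = SDL_CreateTextureFromSurface(renderer, canvas);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, frame, nullptr, nullptr);
    SDL_RenderPresent(renderer);

    // Recreate (or SDL_UpdateTexture) the texture whenever the surface changes.
    SDL_DestroyTexture(frame);
    SDL_FreeSurface(canvas);
}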
I've created something of a simplistic renderer on my own using OpenGL ES 2.0. Essentially, it's just a class for rendering quads according to a given sprite texture. To elaborate, it's really just a single object that accepts objects that represent quads. Each quad object maintains a world transform and object transform matrix and furnishes methods for transforming them over a given number of frames, and also specifies texture offsets into the sprite. This quad class also maintains a list of transform operations to execute on its matrices. The renderer class then reads all of these properties from the quad and sets up a VBO to draw all quads in the render list.
For example:
Quad* q1 = new Quad();
Quad* q2 = new Quad();
q1->translate(vector3( .1, .3, 0), 30); // Move the quad to the right and up for 30 frames.
q2->translate(vector3(-.1, -.3, 0), 30); // Move the quad down and to the left for 30 frames.
Renderer renderer;
renderer.addQuads({q1, q2});
It's more complex than this, but you get the simple idea.
From the implementation perspective, on each frame it transforms the base vertices of each object according to its queued instructions, loads them all into a VBO along with info such as alpha values, and passes that to a shader program to draw all quads at once.
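In rough C++ (the helper names here are made up, but this is the shape of it), the per-frame path looks something like:

#include <GLES2/gl2.h>
#include <vector>

struct Vertex { float x, y, z, u, v, alpha; };   // interleaved quad vertex

void drawFrame(GLuint vbo, const std::vector<Quad*>& quads)
{
    std::vector<Vertex> batch;
    for (Quad* q : quads) {
        q->stepAnimation();                      // advance its queued transforms by one frame
        q->appendTransformedVertices(batch);     // base vertices * object * world transform
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, batch.size() * sizeof(Vertex),
                 batch.data(), GL_DYNAMIC_DRAW); // re-upload the whole batch each frame
    // ... attribute pointers, glUseProgram, sprite texture bind ...
    glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(batch.size()));
}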
This obviously isn't what I would call a rendering engine, but performs a similar task, just for rendering 2D quads instead of 3D geometry. I'm just curious as to whether I'm on the right track for developing a makeshift rendering engine. I agree that in most cases it's great to use an established rendering engine to get started in understanding them, but from my point of view, I like to have something of an understanding of how things are implemented, as opposed to learning something prebuilt and then learning how it works.
The problem with this approach is that adding new geometry, textures or animations requires writing code. It should be possible to create content for a game engine using established tools like 3ds Max, Maya or Blender, which are completely interactive. This requires reading and parsing some standard file format like COLLADA. I don't want to squash your desire to learn by implementing code yourself, but you really should take a look at the PowerVR SDK, which provides a lot of the important parts for building game engines. The source code is provided and it's free.
Is it possible to use color palettes in OpenGL ES 1.1?
I'm currently developing a game which has player sprites, and the player sprites need to be able to be changed to different teams' colors. For example, changing the shirts' colors but not the face colors, which rules out simple hue rotation.
Is this possible, or will this have to be implemented manually (modifying the texture data directly)?
Keep in mind that anything other than non-mipmapped GL_NEAREST will blend between palette indices. I ended up expanding paletted textures in my decompression method before uploading them as BGRA32. (GLES 2.0)
It's not a hardware feature of the MBX but a quick check of gl.h for ES 1.x from the iPhone SDK reveals that GL_PALETTE4_RGB8_OES, GL_PALETTE8_RGBA8_OES and a bunch of others are available as one of the constants to pass to glCompressedTexImage2D, as per the man page here. So you can pass textures with palettes to that, but I'll bet anything that the driver will just turn them into RGB textures on the CPU and then upload them to the GPU. I don't believe Apple support those types of compressed texture for any reason other than that they're part of the ES 1.x spec.
On ES 2.x you're free to do whatever you want. You could easily upload the palette as one texture (with, say, the pixel at (x, 0) being the colour for palette index x) and the paletted texture as another. You'll then utilise two texture units to do the job that one probably could do when plotting fragments, so use your own judgment as to whether you can afford that.
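As a rough C++ sketch of that two-texture setup (uniform and function names are hypothetical, and the lookup fragment shader is omitted), the upload side could look like:

#include <GLES2/gl2.h>

// 'indices' is w*h bytes of palette indices; 'palette' is 256 RGBA entries.
void uploadPalettedTexture(GLuint program, int w, int h,
                           const unsigned char* indices,
                           const unsigned char* palette)
{
    GLuint tex[2];
    glGenTextures(2, tex);

    // Unit 0: one byte per pixel, holding the palette index.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, indices);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // never blend indices
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // required for NPOT in ES 2.0
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Unit 1: the 256x1 palette; swap team colors by uploading a different palette.
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[1]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, palette);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // The fragment shader reads the index from unit 0 and uses it as the x
    // coordinate into the palette on unit 1.
    glUseProgram(program);
    glUniform1i(glGetUniformLocation(program, "u_indexTex"), 0);
    glUniform1i(glGetUniformLocation(program, "u_paletteTex"), 1);
}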
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
For example, take a rounded rectangle whose corners are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide me the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but have failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much but here's a start:
The color fill is accomplished by using a shader. Reimer's XNA Tutorials on pixel shaders is a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing that buffer to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.