I am using wgpu and I can't find anywhere how to render at a given resolution. I have tried setting the Surface width and height, but that didn't seem to do anything. I also couldn't find any relevant methods on the render or surface structs I'm using.
If you want to render at a lower resolution than the Surface you're displaying to, then you have to:
create a texture of the size you want,
render to that (in exactly the same way you'd render to the surface), and
in a separate render pass, render that texture to the surface by putting it on a triangle that covers the entire screen.
A fair bit of setup, but the second render pass is also a useful opportunity to do things like tone mapping and other screen-space effects.
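Here's a minimal sketch of that setup in Rust, assuming a recent wgpu version (the 640x360 size and the Bgra8UnormSrgb format are just example values). The key detail is that the intermediate texture needs both RENDER_ATTACHMENT usage (so pass 1 can draw into it) and TEXTURE_BINDING usage (so pass 2 can sample it):

```rust
fn create_low_res_target(device: &wgpu::Device) -> wgpu::TextureView {
    let texture = device.create_texture(&wgpu::TextureDescriptor {
        label: Some("low-res render target"),
        size: wgpu::Extent3d {
            width: 640,  // example render resolution,
            height: 360, // independent of the surface size
            depth_or_array_layers: 1,
        },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: wgpu::TextureFormat::Bgra8UnormSrgb,
        // Render into it in pass 1, sample it in pass 2.
        usage: wgpu::TextureUsages::RENDER_ATTACHMENT
            | wgpu::TextureUsages::TEXTURE_BINDING,
        view_formats: &[],
    });
    texture.create_view(&wgpu::TextureViewDescriptor::default())
}

// WGSL vertex shader for the second pass: one oversized triangle that
// covers the whole surface; draw it with draw(0..3, 0..1) and no vertex buffer.
const FULLSCREEN_TRIANGLE_VS: &str = r#"
@vertex
fn vs_main(@builtin(vertex_index) i: u32) -> @builtin(position) vec4<f32> {
    let x = f32((i << 1u) & 2u) * 2.0 - 1.0;
    let y = f32(i & 2u) * 2.0 - 1.0;
    return vec4<f32>(x, y, 0.0, 1.0);
}
"#;
```

The fragment shader of the second pass just samples the low-res view through a bind group; the sampler's filter mode decides whether the upscale looks smooth or pixelated.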
Just as the question title says, I'm a bit confused by this stuff, especially the viewport and the render area. AFAIK, the viewport is used in the VS stage, while the render area is used in the FS stage. If the viewport is smaller than the render area, what will happen?
Thanks.
The viewport specifies how the normalized device coordinates are transformed into the pixel coordinates of the framebuffer.
The scissor rectangle is the area you can render to; it is similar to the viewport in that respect, but changing the scissor rectangle doesn't affect the coordinate transformation.
The renderArea is the area of the framebuffer that will be changed by the render pass. This lets the implementation know that not the entire framebuffer will be changed and gives it the opportunity to optimize, for example by not including some tiles on a tile-based architecture. It is the application's responsibility to ensure that no rendering happens outside that area, for example by making sure the scissor rects are always fully contained within the renderArea.
Framebuffer size and attachment size are related in that the attachments must be at least as large as the framebuffer.
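To make the relationships concrete, here is a sketch in Rust using the ash Vulkan bindings (the 1280x720 extent is only an example): a consistent viewport, scissor and renderArea where the scissor stays fully inside the render area:

```rust
use ash::vk;

// Returns a viewport, scissor and renderArea that all cover the same
// region, keeping the scissor fully inside the render area.
fn full_frame_rects(extent: vk::Extent2D) -> (vk::Viewport, vk::Rect2D, vk::Rect2D) {
    // Maps normalized device coordinates onto framebuffer pixel coordinates.
    let viewport = vk::Viewport {
        x: 0.0,
        y: 0.0,
        width: extent.width as f32,
        height: extent.height as f32,
        min_depth: 0.0,
        max_depth: 1.0,
    };
    // Clips fragments without changing the coordinate transform.
    let scissor = vk::Rect2D {
        offset: vk::Offset2D { x: 0, y: 0 },
        extent,
    };
    // Passed via VkRenderPassBeginInfo::renderArea: a promise about
    // which pixels the render pass may touch.
    let render_area = scissor;
    (viewport, scissor, render_area)
}
```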
if the viewport is smaller than the render area, what will happen?
Nothing special: the render commands will render within the viewport. The other way around (a render area smaller than the viewport) will result in undefined values in the framebuffer.
I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL represented the window by its surface, all pictures were surfaces, and all image operations and blits were surface-to-surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2, because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations?
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that happen at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time; all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface-based rendering did. While you can use SDL_Surfaces as either source or destination, SDL_Textures are meant to be used as sources for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, though mixing is possible.
So, for your questions:
Textures do not provide direct access to pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to copy between textures if it doesn't happen very often and you want to render the result with acceleration later.
When talking about textures, you will always render to the SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (i.e. you want direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you need intensive pixel manipulation or have to deal with legacy code.
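To illustrate that split with the Rust sdl2 bindings (a sketch; the 64x64 size and the red fill are arbitrary): pixel work stays on a surface, the surface is uploaded once to a texture, and the texture is drawn through the renderer:

```rust
use sdl2::pixels::{Color, PixelFormatEnum};
use sdl2::rect::Rect;
use sdl2::surface::Surface;

// `canvas` is an existing sdl2::render::WindowCanvas created elsewhere.
fn upload_and_draw(canvas: &mut sdl2::render::WindowCanvas) -> Result<(), String> {
    // 1. Pixel work happens on a CPU-side surface.
    let mut surface = Surface::new(64, 64, PixelFormatEnum::RGBA8888)?;
    surface.fill_rect(Rect::new(0, 0, 64, 64), Color::RGB(255, 0, 0))?;

    // 2. Upload the finished surface to a GPU texture once.
    let texture_creator = canvas.texture_creator();
    let texture = texture_creator
        .create_texture_from_surface(&surface)
        .map_err(|e| e.to_string())?;

    // 3. Draw the texture through the renderer (every frame if needed).
    canvas.copy(&texture, None, None)?;
    canvas.present();
    Ok(())
}
```

The point is that the expensive CPU-to-GPU upload happens once, not every frame.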
I succeeded in retrieving, at runtime, the exact tile my player is on while walking around the tiled map. I'd now like to add some alpha marks on the terrain when passing over it, and to do that I need to modify the color of some of the tile's pixels.
I really don't know how to do that right now... any hints?
Thanks.
You probably want to draw decorations on top of the tiles rather than modifying the tiles themselves. The tile images are shared across all cells that use a tile, so if you modify the tile itself you will see the change everywhere it is used. Furthermore, modifying the texture is a relatively expensive operation that you should try to avoid.
To draw on top of the tiles, you might draw additional sprites or use a custom shader.
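The sprite route can be as simple as recording which tiles were visited and drawing a translucent marker over each one after the map layer. A framework-neutral sketch in Rust (everything here is hypothetical):

```rust
// One translucent mark per visited tile.
struct Mark {
    tile_x: u32,
    tile_y: u32,
    alpha: f32, // 0.0 = invisible, 1.0 = opaque
}

// draw_sprite stands in for your engine's sprite/batch call taking
// (screen_x, screen_y, alpha). Calling this after the map layer has been
// drawn blends each mark over its tile without touching the shared images.
fn draw_marks<F: FnMut(f32, f32, f32)>(marks: &[Mark], tile_size: f32, mut draw_sprite: F) {
    for m in marks {
        draw_sprite(
            m.tile_x as f32 * tile_size,
            m.tile_y as f32 * tile_size,
            m.alpha,
        );
    }
}
```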
I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at 0,0, which is fine.
However, I want to expand this to draw, essentially, a viewport. The player will be drawn in the middle of the screen, and thus I want to display, say, 10 tiles in each direction (rather than the entire map).
I'm having trouble coming up with the architecture for this one. I'm in the mindset that things should draw themselves, i.e. I say player1.Draw() and it draws itself. This would have worked before, when it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling the two together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA.
Edit to add: Can you do graphics manipulation like the HTML rendering approach, where you tell things to draw, they return a graphic of themselves, and the parent places the graphic in the correct location? I'm thinking: if I had two viewports side by side for splitscreen, how would I stop them from drawing outside the edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in the given 2D area, then you draw them (see the sketch after this list).
With such a design, the world can be very large.
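A minimal Rust sketch of that design (all types are hypothetical); the linear scan is fine for small worlds and easy to replace with a spatial index later:

```rust
// Sprite stands in for your engine's image handle.
struct Sprite;

struct ObjectInstance {
    sprite: Sprite,
    x: f32, // world coordinates
    y: f32,
}

struct World {
    objects: Vec<ObjectInstance>,
}

impl World {
    // Naive visibility query over a 2D area; swap in a quadtree or grid
    // once object counts grow (see the partitioning note below).
    fn visible_in(&self, left: f32, top: f32, w: f32, h: f32)
        -> impl Iterator<Item = &ObjectInstance>
    {
        self.objects.iter().filter(move |o| {
            o.x >= left && o.x < left + w && o.y >= top && o.y < top + h
        })
    }
}
```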
I'm in the mindset that things should draw themselves, i.e. I say player1.Draw() and it draws itself.
Visible things should draw themselves. Objects outside of the viewport are not visible.
how would I stop them from drawing outside the edges?
Not sure about XNA, but OpenGL has the scissor test and glViewport, and Direct3D 9 has a SetViewport method that lets you use only part of the screen/window for rendering. There are also clip planes and the stencil buffer (using the stencil buffer for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world, you shouldn't examine every object, because that will be slow. You should be able to find the visible objects without testing every one of them. For that you'll need some kind of space partitioning - quadtrees (since we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands of) objects, as long as you don't see them all at once.
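Even a uniform grid (a simpler cousin of the quadtree) already avoids testing every object; a hypothetical sketch in Rust:

```rust
use std::collections::HashMap;

// Uniform-grid spatial index: objects are bucketed by cell, so a query
// only visits the cells overlapping the query rectangle.
struct GridIndex {
    cell: f32,                              // cell size in world units
    cells: HashMap<(i32, i32), Vec<usize>>, // cell -> object ids
}

impl GridIndex {
    fn key(&self, x: f32, y: f32) -> (i32, i32) {
        ((x / self.cell).floor() as i32, (y / self.cell).floor() as i32)
    }

    fn insert(&mut self, id: usize, x: f32, y: f32) {
        self.cells.entry(self.key(x, y)).or_default().push(id);
    }

    fn query(&self, left: f32, top: f32, w: f32, h: f32) -> Vec<usize> {
        let (x0, y0) = self.key(left, top);
        let (x1, y1) = self.key(left + w, top + h);
        let mut out = Vec::new();
        for cx in x0..=x1 {
            for cy in y0..=y1 {
                if let Some(ids) = self.cells.get(&(cx, cy)) {
                    out.extend_from_slice(ids);
                }
            }
        }
        out
    }
}
```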
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, your FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly
drawable objects get a "graphics context" object which offers a drawing API. The graphics context knows the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates (for every drawing operation). The graphics context also does the actual drawing to the screen (or to a background buffer, if you need double buffering).
when you have many objects outside the visible bounds of your viewport, then, as a performance optimization, your drawing loop can do an up-front bounds check on your objects and test whether they are completely outside the visible area. If so, there is no need to let them draw themselves (see the sketch below).
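A small Rust sketch of that graphics-context idea (the names are made up): the world-to-screen transform and the culling check live in one place, and drawables only ever think in world coordinates:

```rust
struct GraphicsContext {
    cam_x: f32,  // viewport's top-left corner, in world coordinates
    cam_y: f32,
    view_w: f32, // viewport size, in world units
    view_h: f32,
}

impl GraphicsContext {
    // Every drawing operation goes through this transform.
    fn to_screen(&self, wx: f32, wy: f32) -> (f32, f32) {
        (wx - self.cam_x, wy - self.cam_y)
    }

    // The up-front bounds check from the last point: skip objects whose
    // rectangle lies completely outside the viewport.
    fn is_visible(&self, wx: f32, wy: f32, w: f32, h: f32) -> bool {
        wx + w >= self.cam_x && wx <= self.cam_x + self.view_w
            && wy + h >= self.cam_y && wy <= self.cam_y + self.view_h
    }
}
```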
I'm having a major issue which has been bugging me for a while now.
My problem is that my game uses a deferred rendering engine, which makes it very difficult to do alpha blending.
The only way I can think of to solve this is to render the scene (including the depth map, normal map and diffuse map) without any objects that have alpha.
Then, for each polygon that has a texture with an alpha component, disable the z-buffer and render it out, including normals, depth and colour, outputting nothing to the depth, normal and colour buffers wherever alpha is 0. Perform the lighting calculations and other deferred effects on these two separate sets of buffers, then combine the colour buffers, using the depth maps to check which pixel is visible.
This idea would be extremely costly (not to mention it has some severe shortcomings), so it should obviously be reserved for as few cases as possible, which makes rendering forest areas out of the question. However, if there is no better solution, I have one question.
When doing alpha blending with DirectX, is there a shader/device state I can set that lets me avoid writing to the depth/normal/colour buffers when I want to? The issue is that the pixel shader has to output to all of the render targets specified, so if it's set to output to three render targets it must do so, which will overwrite the previous colour value for that texel.
If there is no blend state that allows me to do this, it would mean I'd have to copy the normal, colour and depth maps to preserve the scene, render to a new set of colour, depth and normal maps, and then combine the two sets based on the alpha and depth values.
I guess all I really want to know is whether there is a simple, sure-fire and reasonably cheap way to render alphas in a deferred renderer.
The usual approach to drawing transparent geometry in a deferred renderer is simply to draw it in a separate pass, using ordinary forward rendering rather than deferred rendering.
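In outline, the hybrid frame looks something like this Rust sketch (all types and function names here are hypothetical stand-ins for whatever your engine provides):

```rust
struct Gfx;
struct DrawList;

impl Gfx {
    fn geometry_pass(&mut self, _opaque: &DrawList) {
        // Opaque geometry fills the G-buffer: depth, normals, albedo.
    }
    fn lighting_pass(&mut self) {
        // Deferred shading resolves the G-buffer into the colour buffer.
    }
    fn forward_pass(&mut self, _transparent: &DrawList) {
        // Transparent geometry, sorted back-to-front, forward-lit and
        // alpha-blended on top of the lit result. Depth *test* stays on
        // (reusing the opaque depth buffer) but depth *write* is off.
    }
}

fn render_frame(gfx: &mut Gfx, opaque: &DrawList, transparent: &DrawList) {
    gfx.geometry_pass(opaque);
    gfx.lighting_pass();
    gfx.forward_pass(transparent);
}
```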