I'm still a graphics programming novice and I bet the following problem is just a matter of wrong configuration.
I am creating a game using WebGL for graphics and Box2DWeb for physics. Unfortunately the drawing shows gaps between the physical bodies (left is my actual rendering, right is a rendering using Box2DWeb's debug drawing in another canvas):
Both Box2D and WebGL use the same coordinate system and the same sizes for the boxes; there is no conversion. The red boxes are actually textured, though this doesn't make a difference. The red boxes are dynamic bodies, the green boxes are static bodies.
Obviously I can't just resize the graphics or the physics. If I made the graphics bigger, the green boxes would overlap; if I made the physics smaller, there would be gaps in the physics.
Here is another example:
Also, sometimes there is no gap at all, as in the following (I just moved the physics bodies a little to the right):
The black boxes are just drawn with a solid color (no textures). Looking at the previous image, I guess it has to do with converting the floating-point world coordinates to screen pixel coordinates, but I have no idea what the option for fixing this would be.
Thanks a lot for the help
[Update]
It is an orthographic projection matrix, which I am initializing in the following way:
mat4.ortho(-this.vpWidth * this.zoom, this.vpWidth * this.zoom, -this.vpHeight * this.zoom, this.vpHeight * this.zoom, 0.1, 100.0, this.pMatrix);
vpWidth and vpHeight are the canvas dimensions (640 × 480). The projection matrix is passed to the vertex shader and multiplied with the model-view matrix and the vertex position. I played around with the zoom factor; the more I zoom in, the bigger the gaps get.
[Update 2]
Okay, I investigated this a little more. bad zeppelin had a good hint: Box2D keeps gaps between bodies to avoid tunneling. Still, this is not the complete explanation. I looked at the debug-draw code; it is not resizing anything. I made a little test, zooming in both in WebGL and in the debug draw, with the following result:
At 10× zoom both show the same gap, but at "normal" zoom WebGL draws bigger gaps than Canvas 2D. What could be the explanation? My guess is anti-aliasing, which is enabled for Canvas 2D but not for WebGL (I am using Firefox; I guess I'll run a Chrome test later today to see what happens).
If you check the Box2D manual, chapter 4.2 says that the engine keeps polygons slightly separated to avoid tunneling. Checking the Box2D debug-drawing code to see how it translates from Box2D coordinates to draw coordinates might be a good way to see how you could do the same in your app.
With the matrix you provided, you'll be creating a viewport that has a "virtual size" of twice your canvas dimensions. If you are trying for a pixel-for-pixel match, try this (with a zoom of 1.0):
mat4.ortho(-(this.vpWidth/2) * this.zoom, (this.vpWidth/2) * this.zoom, -(this.vpHeight/2) * this.zoom, (this.vpHeight/2) * this.zoom, 0.1, 100.0, this.pMatrix);
That way your 640×480 canvas will have extents from [-320,-240] to [320,240], which gives you 640×480 units total. Note that this will probably not eliminate the gaps entirely, since, as bad zeppelin noted, Box2D puts them there intentionally, but it should make them less visible.
Another option to reduce the visible gaps is to draw your geometry scaled up just a bit from the physical representation, so that it displays with an extra pixel or two around the edges. The worst that may happen is that the geometry might appear to overlap just a bit, but it's up to you to determine if that's a more objectionable artifact than the gaps.
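For illustration, the scale factor for that second option can be derived from how many screen pixels one world unit covers. This is only a sketch (shown in C for concreteness; padded_scale, halfExtent, padPixels and pixelsPerUnit are made-up names, not from the question's code):

#include <stdio.h>

/* Scale factor that grows a box of half-extent `halfExtent` (world units)
   by `padPixels` screen pixels on each side, given the current zoom as
   `pixelsPerUnit`. Apply it to the model matrix before drawing. */
static float padded_scale(float halfExtent, float padPixels, float pixelsPerUnit)
{
    return (halfExtent + padPixels / pixelsPerUnit) / halfExtent;
}

int main(void)
{
    /* e.g. a box with a 0.5-unit half-extent at 32 px per unit, padded by 1 px */
    printf("%f\n", padded_scale(0.5f, 1.0f, 32.0f)); /* prints 1.062500 */
    return 0;
}

Because only the rendering is scaled, the physics bodies keep their original sizes and the simulation is unchanged.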
This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples that could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D, there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically, D2D has a policy of minimizing draw calls by trying to fit any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible, which needs a new draw call if the previous element you were rendering was a rectangle, since you change pipeline state.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer, and it switches draw calls only if you change brush type (if you change from one solid-color brush to another it can keep the same shaders, so it doesn't switch), when the buffer is full, or if you switch from shapes to text (since it then needs to send texture atlases).
Please note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on complex self-intersecting paths).
In the case of a rounded rectangle, it does the same: it just tessellates it into triangles.
This allows it to minimize draw calls to a good extent, as well as allowing anti-aliasing on a non-MSAA surface (this is done at the mesh level, with small alpha-blended triangles along the edges). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single draw call as well.
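To make the "rounded rectangles are just triangles" point concrete, here is a minimal CPU tessellator sketch in C. It only illustrates the general technique (build the outline, then fill it as a triangle fan from the center); it is not Direct2D's actual algorithm, and all names are made up for the example:

#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { float x, y; } Vec2;

/* Write the outline of a rounded rectangle (center cx,cy, half sizes hw,hh,
   corner radius r) into `out`, with `seg` segments per quarter-circle corner.
   Returns the number of points written: 4 * (seg + 1). */
static size_t rounded_rect_outline(Vec2 *out, float cx, float cy,
                                   float hw, float hh, float r, int seg)
{
    const Vec2 centers[4] = {            /* corner circle centers, CCW */
        { cx + hw - r, cy + hh - r },    /* top-right    */
        { cx - hw + r, cy + hh - r },    /* top-left     */
        { cx - hw + r, cy - hh + r },    /* bottom-left  */
        { cx + hw - r, cy - hh + r },    /* bottom-right */
    };
    size_t n = 0;
    for (int c = 0; c < 4; ++c) {
        float start = 0.5f * (float)M_PI * (float)c;   /* 0, 90, 180, 270 deg */
        for (int i = 0; i <= seg; ++i) {
            float a = start + 0.5f * (float)M_PI * (float)i / (float)seg;
            out[n].x = centers[c].x + r * cosf(a);
            out[n].y = centers[c].y + r * sinf(a);
            n++;
        }
    }
    return n;
}

A triangle fan from the center point over this outline covers the whole shape, because a rounded rectangle is convex; a real renderer like D2D additionally runs such output through the self-intersection handling and mesh-level anti-aliasing described above.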
I'm currently learning OpenGL and am trying to make a simple GUI. So far I know very little about shaders and haven't used any.
One of the tricks I use to accelerate text rendering is to render the text quads into a transparent framebuffer object before rendering that to the screen. The speedup is significant, but I noticed the text is poorly drawn at the edges. I then noticed that if I cleared the framebuffer to a different transparent color, the text would blend with that color. In this example I rendered to a transparent green texture:
I use the following parameters for blending:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE)
with glBlendEquation left at its default (GL_FUNC_ADD).
My understanding from the documentation is that each pixel goes through an equation of the form source_rgb * src_factor + dest_rgb * dst_factor.
What I would typically want is that, when a pixel is transparent, its RGB is ignored on both sides of the blend, so if I could I would compute the RGB with an equation like this:
source_rgb * source_alpha / total_alpha + dest_rgb * dest_alpha / total_alpha
where total_alpha is the sum of the alphas. This doesn't seem to be supported.
Is there something that can help me with minimal headache? I'm open to suggestions, from small fixes, to rewriting everything, to using a library that already does this.
The full source code is available here if you are interested. Please let me know if you need relevant extracts.
EDIT: I did try removing the GL_ONE, GL_ONE from my alpha blending and simply using (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for all of RGBA, but the results weren't great either; I get similar artifacts.
Solved the problem using premultiplication as suggested.
First of all, total_alpha isn't the sum of the alphas but rather the following:
total_alpha = 1 - (1 - source_alpha) * (1 - dest_alpha)
which is the same as source_alpha + dest_alpha * (1 - source_alpha).
As you noted correctly, OpenGL doesn't support that final division by total_alpha. But it doesn't need to. All you have to do is switch to thinking and working in terms of premultiplied alpha. With that, the simple
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
does the right thing.
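For concreteness, here is a minimal sketch of the switch, assuming straight-alpha RGBA8 pixel data that you premultiply on the CPU before uploading (the same step could live in a shader or the asset pipeline instead; the function name is made up):

#include <stdint.h>
#include <stddef.h>

/* Convert straight-alpha RGBA8 pixels to premultiplied alpha in place. */
static void premultiply_rgba8(uint8_t *px, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i, px += 4) {
        unsigned a = px[3];
        px[0] = (uint8_t)(px[0] * a / 255);  /* R */
        px[1] = (uint8_t)(px[1] * a / 255);  /* G */
        px[2] = (uint8_t)(px[2] * a / 255);  /* B */
    }
}

With the textures premultiplied, use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) both when rendering text into the FBO and when compositing the FBO to the screen. Since the RGB of any fully transparent pixel is now zero, the clear color of the FBO can no longer bleed into the glyph edges.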
This question is actually about Unity3D, but it could apply to any engine, so I'm going to make it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the frustum of the camera, because I'm inside the dome. But the vertices are somehow being clipped, and the end result looks like this:
Now my question is:
What setting (in any engine) can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic
I do not know how to do this in Unity, but in DirectX and in OpenGL you switch off the z-buffer (both testing and writing) and draw the skybox first.
Then you switch on the zbuffer and draw the rest of the scene.
My guess is that Unity can do all this for you.
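For reference, in plain OpenGL the sequence looks roughly like this (a sketch only; drawSkydome() and drawScene() stand in for your own draw code):

#include <GL/gl.h>

void drawSkydome(void);  /* hypothetical: renders the dome */
void drawScene(void);    /* hypothetical: renders everything else */

void renderFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Sky first, with the depth buffer out of the picture: no test,
       no writes. Since nothing depth-sorts against the dome, it can be
       scaled down to fit inside the far plane with no visible change. */
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
    drawSkydome();

    /* Now the normal scene with depth handling back on; everything
       drawn here covers the sky, because the sky wrote no depth. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawScene();
}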
I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
There was no way for me to change the shaders' z-writing, which would be a great solution along the lines @Erno suggests, because the shaders used are third-party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is still filled with values between very low and 100,000. This decreases the accuracy of the depth buffer and gives problems with z-fighting and post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with far = 100,000 and near = 100. Camera 2 then clears the depth buffer and draws the rest of the scene with far = 1000 and near = 0.3. The depth buffer no longer contains huge values, which solves the accuracy problems.
Problem: The cameras have to be linked by some polling system, because there are no change events on the camera class (e.g. when the FoV changes). I like having only one camera, but that doesn't seem to be easily possible.
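Outside Unity, option 2 boils down to clearing the depth buffer between two passes that use different projections. A rough OpenGL sketch (setProjection(), drawSkydome() and drawScene() are hypothetical helpers standing in for camera and draw code):

#include <GL/gl.h>

void setProjection(float fovDeg, float nearPlane, float farPlane);
void drawSkydome(void);
void drawScene(void);

void renderFrameTwoPass(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Pass 1: the sky, with a far-away projection. */
    setProjection(60.0f, 100.0f, 100000.0f);
    drawSkydome();

    /* Throw away the sky's depth values between the passes. */
    glClear(GL_DEPTH_BUFFER_BIT);

    /* Pass 2: the scene, with the tight near/far range that keeps
       the depth buffer precise. */
    setProjection(60.0f, 0.3f, 1000.0f);
    drawScene();
}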
I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000 × 500, for testing) bitmap which is randomly generated.
Each frame the main surface is cleared and the terrain bitmap is blitted onto it. The current resolution is 1200 × 700, so during testing about 1200 × 500 terrain pixels were visible at most points.
Now the problem: the FPS is already dropping! I thought one simple bitmap shouldn't have any noticeable effect, but I am already down to ~24 FPS with this!
Why is blitting & drawing a bitmap of that size so slow?
Am I taking the wrong approach to destructible terrain?
How have games like Worms done this? The FPS seems really high although there are definitely a lot of pixels being drawn.
Whenever you initialize a surface, do it the following way:
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>   /* provides IMG_Load() */

SDL_Surface* mySurface;
SDL_Surface* tempSurface;

/* IMG_Load() comes from the SDL_image library and handles JPG/PNG/etc.;
   plain SDL only ships SDL_LoadBMP() for .bmp files. */
tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever");
mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
The SDL_DisplayFormat() function converts the pixel format of your surface to the format the video surface uses. If you don't do this the way described above, SDL has to do the conversion every time the surface is blitted.
And always remember: blit only the parts that are actually visible to the player.
That's my first guess as to why you are having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
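As a sketch of that last point (camera_x and camera_y are hypothetical scroll offsets, not from the question): blit only the window of the terrain that is currently on screen, and let SDL clip the rectangle at the surface edges.

#include <SDL/SDL.h>

/* Blit only the on-screen 1200x700 window of the big terrain surface
   instead of the whole 5000x500 bitmap. */
void draw_terrain(SDL_Surface *screen, SDL_Surface *terrain,
                  int camera_x, int camera_y)
{
    SDL_Rect src = { (Sint16)camera_x, (Sint16)camera_y, 1200, 700 };
    SDL_BlitSurface(terrain, &src, screen, NULL);  /* NULL = blit at (0,0) */
}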
If you redraw the whole screen every frame you will always get a bad FPS. You should redraw only the parts of the screen that have changed. You can also try SDL_HWSURFACE to get hardware surfaces, but it won't work on every graphics card.
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites.
Try SFML. It provides a hardware-accelerated 2D environment.
Use SDL 1.3. Get a source snapshot; it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.
I'm using Löve2D for writing a small game. Löve2D is an open-source game engine for Lua. The problem I'm encountering is that an antialias filter is automatically applied to sprites when you draw them at non-integer positions.
love.graphics.draw( sprite, x, y )
So when x or y is not a whole number (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite: a sprite that is 31 × 30 pixels, for example, will appear blurred again, because its pixels are painted at non-integer positions.
Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by the effect. The workaround I'm using so far is to force the coordinates to be whole numbers by littering the code with calls to math.floor(), and to force all sprites to have even sizes by adding a row or column of transparent pixels in the paint program where needed.
Is there some command to deactivate the antialiasing I can call at program startup?
If you turn off anti-aliasing you will just get aliasing, hence the name! Why are you drawing at non-integral positions, and what do you want done with those fractional parts? (Round them to the nearest value? Truncate them? What if they're negative?)
Personally, I would leave the low-level graphics alone and alter your code to use accessors for x and y that perform the rounding or truncation you require. This guarantees your pixel art ends up drawn on integer boundaries while keeping the anti-aliasing available for anything that might need it later.
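The accessor idea is language-independent; as a small sketch in C (snap_to_pixel is a made-up name), the drawing code would read positions only through a snapping helper:

#include <math.h>

/* Snap a coordinate to the nearest whole pixel before drawing.
   floorf(v + 0.5f) rounds to nearest and also behaves sensibly for
   negative values (e.g. -2.7f -> -3.0f), unlike a plain cast, which
   truncates toward zero. */
static float snap_to_pixel(float v)
{
    return floorf(v + 0.5f);
}

/* usage: draw(sprite, snap_to_pixel(x), snap_to_pixel(y)); */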
Another cheap workaround is to use math.floor() to round your coordinates.
In case anyone is interested, I've been asking in other places and found out that what I'm asking for has already been requested as a feature: http://love2d.org/forum/tracker.php?p=2&t=7
So the current version of Löve that I'm using (0.5.0) still doesn't allow disabling the antialias filter, but the feature is already in the SVN version of the engine.
You can turn off anti-aliasing by adding love.graphics.setDefaultFilter("nearest", "nearest", 1) to love.load().