Why is there flickering of the old image if I change the viewport for the canvas before calling present? - rust

I wrote a small demo in Rust/SDL which does a fade-in and fade-out of an image, plus some occasional random specks. It was super smooth, running at up to 250 fps.
I then decided to call canvas.set_viewport on each frame with random dimensions. Basically, the same streaming texture (which is filled with a new tone or noise on each frame) is drawn at a random location with a random size.
I found there is (unexplainable) flickering of the already-rendered rectangles.
I tried to screen-capture it, but the recording shows no flickering. I also filmed the screen with a normal camera (60 fps) and there was no flickering either. With my phone's high-speed mode I did get the flickering recorded, but it looks very different from what I see.
What is it?
The code: https://github.com/amarao/sdl_random/tree/c4757190712f0a996c2aba88b105462942d4ca27/src
Non-flickering screencapture: https://www.youtube.com/watch?v=Zud9Hjwltxk
Flickering video (hi-speed): https://youtu.be/rVZki9COuZ0
The second question: if this is some kind of 'undefined behaviour' from my GPU (Nvidia), why is that? Is changing the viewport on the fly supported?
Edit: I replaced the set_viewport call with a rect parameter for canvas.copy:
canvas.copy(
    &texture,
    None,
    sdl2::rect::Rect::new(new_x as i32, new_y as i32, new_width, new_height),
).unwrap();
but the result is exactly the same.

This is due to double buffering (see Wikipedia). The render system uses two buffers in tandem: one is being presented while the other is being written to. You can verify that this is enabled in SDL2 by checking video_system.gl_attr().double_buffer().
You are drawing iteratively onto those buffers without ever clearing or fully redrawing them, so one buffer accumulates everything drawn on even frames and the other everything drawn on odd frames. The flickering is caused by swapping between two buffers whose contents have diverged wildly.
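The usual fix is to clear the target at the start of every frame before drawing. A minimal sketch with the sdl2 crate (the texture and rect names follow the snippet above; the rest of the render loop is assumed):
// Clear the buffer we are about to draw into, then draw, then present.
canvas.set_draw_color(sdl2::pixels::Color::RGB(0, 0, 0));
canvas.clear();
canvas.copy(
    &texture,
    None,
    sdl2::rect::Rect::new(new_x as i32, new_y as i32, new_width, new_height),
).unwrap();
canvas.present(); // swap buffers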

Related

Direct3D 9 Backbuffer sampling

I'm locking the backbuffer in Direct3D 9 and copying an image to it. I noticed on one computer that when the image is stretched to the screen, it becomes blurry. On another computer I tested, it's completely unfiltered (pixelated). Is there a way to specify how the backbuffer is sampled to the screen, or is it controlled by something else?
I've tried
Device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
However it had no effect; I think it only affects textures.
SetSamplerState does not affect how the backbuffer is drawn to the screen. AFAIK most drivers will use point sampling, which means pixels will be lost or doubled, resulting in bad quality. BTW, what was the GPU/driver on the machine where it looked fine (you can't/shouldn't depend on this behavior everywhere)?
The right way to do this is to copy the image to a texture and render a screen-aligned quad, so that you can use hardware sampling to smooth the result for you.
If for whatever reason you cannot use a texture + rendering pass, you can use IDirect3DDevice9::StretchRect to filter the image when copying to the backbuffer. To actually load the image from system memory, you'll have to use another surface, either locking and copying it or using D3DXLoadSurfaceFromMemory.

Disable culling on an object

This question is actually about Unity3D, but it could also apply more generally, so I'm going to keep it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the frustum of the camera, because I'm inside the dome. But the vertices are somehow culled by some shader, and the end result looks like this:
Now my question is:
what setting for any engine can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic
I do not know how to do this in Unity, but in DirectX and in OpenGL you switch off the z-buffer (both depth testing and depth writes) and draw the skybox first.
Then you switch the z-buffer back on and draw the rest of the scene.
My guess is that Unity can do all of this for you.
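To make the order of operations concrete, here is a rough sketch of the same idea in raw OpenGL via the Rust gl crate (draw_skydome and draw_scene are hypothetical stand-ins for your own draw calls, and a loaded GL context is assumed):
// Hypothetical stand-ins for your own draw calls.
fn draw_skydome() { /* issue the dome's draw call here */ }
fn draw_scene() { /* issue the rest of the scene here */ }

fn render_frame() {
    unsafe {
        gl::Disable(gl::DEPTH_TEST); // no depth testing while the dome is drawn
        gl::DepthMask(gl::FALSE);    // no depth writes either
    }
    draw_skydome();
    unsafe {
        gl::Enable(gl::DEPTH_TEST);  // restore depth testing for the real scene
        gl::DepthMask(gl::TRUE);
    }
    draw_scene();
}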
I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
Changing the shader's z-writing, which is a great solution from #Erno, was not possible for me, because the shaders used are third-party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is still filled with values ranging from very small up to 100,000. This decreases the accuracy of the depth buffer and causes problems with z-fighting and with post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with far = 100000 and near = 100. Camera 2 then clears the depth buffer and draws the rest of the scene with far = 1000 and near = 0.3. The depth buffer no longer contains huge values, which solves the accuracy problems.
Problem: The cameras have to be linked by some polling system, because there are no change events on the camera class (e.g. when the FoV changes). I like having only one camera, but that doesn't seem to be easily possible.

SDL: FPS problems with simple bitmap

I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000*500, for testing) bitmap which is randomly generated.
Each frame the main surface is cleared and the terrain bitmap is blitted onto it. The current resolution is 1200 * 700, so while testing, at most about 1200 * 500 pixels of the terrain were visible.
Now the problem is: the FPS is already dropping! I thought one simple bitmap shouldn't have any noticeable effect, but I'm already down to ~24 FPS with this!
Why is blitting & drawing a bitmap of that size so slow?
Am I taking the wrong approach to destructible terrain?
How have games like Worms done this? Their FPS seems really high, even though there are definitely a lot of pixels being drawn.
Whenever you initialize a surface, do it the following way:
SDL_Surface* mySurface;
SDL_Surface* tempSurface;
tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever");
/* IMG_Load() comes from SDL_image; use SDL_LoadBMP() if you only need .bmp files. */
mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
The SDL_DisplayFormat() function converts the pixel format of your surface to the format the video surface uses. If you don't do it the way described above, SDL does this conversion every time the surface is blitted.
And always remember: only blit the parts that are actually visible to the player.
That's my first guess as to why you are having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
If you redraw the whole screen every frame you will always get a bad FPS. You should redraw only the parts of the screen that have changed. You can also try SDL_HWSURFACE to use hardware surfaces, but it won't work on every graphics card.
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites.
Try SFML. It provides a hardware accelerated 2d environment.
Use SDL 1.3. Get a source snapshot; it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.

Turning off antialiasing in Löve2D

I'm using Löve2D for writing a small game. Löve2D is an open-source game engine for Lua. The problem I'm encountering is that an antialiasing filter is automatically applied to your sprites when you draw them at non-integer positions.
love.graphics.draw( sprite, x, y )
So when x or y is not a whole number (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite. For example, a sprite that is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions.
Since I am using pixel art, I want to avoid this entirely, otherwise the effect destroys the art. The workaround I am using so far is to force the coordinates to be whole numbers by littering the code with calls to math.floor(), and to force all sprites to have even sizes by adding a row or column of transparent pixels in the paint program where needed.
Is there some command I can call at program startup to deactivate the antialiasing?
If you turn off anti-aliasing you will just get aliasing, hence the name! Why are you drawing at non-integral positions, and what do you want it to do about those fractional parts? (Round them to the nearest value? Truncate them? What about if they're negative?)
Personally I would leave the low-level graphics alone and alter your code to use accessors for x and y that perform the rounding or truncation you require. This guarantees your pixel art ends up drawn on integer boundaries while keeping the anti-aliasing available for when you might need it later.
Another cheap workaround may be to use math.floor() to round your coordinates.
In case anyone is interested, I've been asking in other places and found out that what I am asking for has already been requested as a feature: http://love2d.org/forum/tracker.php?p=2&t=7
So the current version of Löve that I'm using (0.5.0) still doesn't allow disabling the antialias filter, but the feature is already in the SVN version of the engine.
You can turn off anti-aliasing by adding love.graphics.setDefaultFilter("nearest", "nearest", 1) to love.load()

Creating nice animations with low framerate

OK, this might sound like a stupid question, but I want to know whether there are any recommendations on how to animate objects as smoothly and quickly as possible when you know you will have a low framerate.
My animation moves approximately ten 2D rectangles (each containing a texture) about 500 pixels in both x and y, and also scales them down to maybe 30% from about 1000x1000 px. I want the animation to complete in around 200 ms. I estimate the framerate to be maybe 20-30 fps, so the whole animation is only about 4-6 frames.
I have tried different timings and movement velocities, but they all look like crap. At high speed you barely see the animation, and at slow speed it looks smooth but takes way too much time.
Has any research been done on how to do a quick animation that still looks smooth? I was thinking that maybe you could have acceleration that starts slow and then speeds up at the end, or maybe the other way around? My own experiments all look both jumpy and slow :P
There has to be some limit in pixels per frame that we humans think looks good. Where can I find guidelines like this?
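To make the easing idea concrete, here is a rough sketch (not my actual code) of an ease-in/ease-out curve applied to the 500-pixel move over 5 frames:
// Smoothstep easing: slow at both ends, fast in the middle.
fn smoothstep(t: f32) -> f32 {
    let t = t.clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}

fn main() {
    let frames = 5; // roughly 200 ms at 25 fps
    for f in 0..=frames {
        let t = f as f32 / frames as f32;
        let x = 500.0 * smoothstep(t); // position along the 500 px move
        println!("frame {f}: x = {x:.1}");
    }
}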
Why do I want to know this?
I've made a window-switching app that does some cool animations, but the problem is that when I'm not running any graphics-intensive application, my graphics card drops into some low-power mode. This causes my application, which doesn't run for more than 3 seconds at a time, to perform very poorly, because the graphics card never has time to speed up.
(You can probably try this yourself if you have a laptop and Vista: press Win+Tab and you will see that the animation is a bit choppy; then start a movie and press Win+Tab again, and this time the animation is much smoother.)
You should be able to get reasonable-looking animation at around 15 fps if the movements are small. Realise that there is a limit to how much high-bandwidth graphics information (lots of movement and shape/color change) you can fit into a low-bandwidth medium (low fps), but techniques like motion blur will help.
Also, look into double or triple buffering, ideally synced to the monitor's vertical refresh; this all helps reduce the flicker and tearing that can distract from the animation.
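For example, with the Rust sdl2 crate (an assumption; it may not be what you're using), vsync can be requested when the canvas is built. A minimal sketch, assuming `window` is an sdl2::video::Window created earlier:
// Request a hardware-accelerated, vsynced canvas.
let mut canvas = window
    .into_canvas()
    .accelerated()    // hardware renderer
    .present_vsync()  // canvas.present() now waits for the vertical refresh
    .build()
    .unwrap();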
If your animations are purely two-dimensional (for example, rigid shifts of window content), then you can improve their smoothness by pixel-locking them to the video frame. A motion of exactly N pixels per frame looks smooth even at very low framerates, whereas if you have some left-over fraction of a pixel, you get aliases from the pixel sampling which can be noticeable.
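A rough sketch of what pixel-locking means in practice (the numbers are just taken from the question, roughly 500 px over 5 frames):
// Advance the animation by an exact whole number of pixels each frame so
// no sub-pixel sampling artefacts can appear.
fn main() {
    let total_px = 500;
    let frames = 5;
    let step = total_px / frames; // 100 px per frame, exactly

    let mut x = 0;
    for frame in 0..frames {
        x += step; // rigid, whole-pixel shift each frame
        println!("frame {frame}: draw at x = {x}");
    }
}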
Motion blur is in theory the way to make motions look smooth but proper motion blur is expensive, so if you're already having trouble with the framerate then motion blur is probably only going to make things worse. But there may be some way of reducing the cost, for example if the motion is in a constant direction and speed then you could render a single blurred image and use that. Or maybe overdraw partially-transparent copies of the moving image several times to get a "trail".
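For instance, with the Rust sdl2 crate (again an assumption about your setup), a cheap trail could look roughly like this, where canvas, texture and the animation state x, y, w, h, dx are assumed to exist in your render loop:
// Overdraw a few progressively fainter copies of the sprite behind its
// current position to fake a motion trail.
texture.set_blend_mode(sdl2::render::BlendMode::Blend);
for i in (0..4i32).rev() {
    texture.set_alpha_mod((255 / (i + 1)) as u8); // older copies are fainter
    let trail_x = x - i * dx;                     // step back along the motion
    canvas.copy(&texture, None, sdl2::rect::Rect::new(trail_x, y, w, h)).unwrap();
}
texture.set_alpha_mod(255); // reset so later draws are opaque again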
