OK, this might sound like a stupid question, but I want to know if there are any recommendations on how to animate objects as smoothly and quickly as possible when you know you will have a low framerate.
My animation moves approximately ten 2D rectangles (each containing a texture) about 500 pixels in both x and y, and also scales them down to maybe 30% from about 1000*1000 px. I want the animation to complete in around 200 ms. I estimate the framerate to be maybe 20-30 fps.
I have tried different timings and movement velocities, but they all look like crap. At high speed you barely see the animation, and at slow speed it looks smooth but takes way too much time.
Has there been any research done on how to make a quick animation that still looks like it's running smoothly? I was thinking that you could maybe have acceleration that goes slow in the beginning and then fast at the end, or maybe the other way around? My own experiments all look both jumpy and slow :P
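To make the slow-then-fast idea concrete, something like a cubic ease-in curve is what I mean (a minimal sketch; 200.0f is just my target duration in ms):

/* Cubic ease-in: slow start, fast finish. t is normalised to [0,1]. */
float ease_in_cubic(float t) { return t * t * t; }

/* per frame: pos = start + (end - start) * ease_in_cubic(elapsed_ms / 200.0f); */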
There has to be some limit in pixels/frame that we humans think looks good. Where can I find guidelines like this?
Why do I want to know this?
I've made a window-switching app that does some cool animations, but the problem is that when I'm not running any graphics-intensive application my graphics card drops into a low-power mode. This causes my application, which doesn't run for more than 3 seconds at a time, to perform very poorly, because the graphics card never has time to spin back up.
(You can probably try this yourself if you have a laptop and Vista: press Win+Tab and you will see that the animation is a bit choppy; then start a movie and press Win+Tab again, and this time the animation is much smoother.)
You should be able to get reasonable-looking animation at around 15 fps, if the movements are small. Realise that there is a limit on fitting high-bandwidth graphics information (lots of movement and shape/color change) into a low-bandwidth medium (low fps), but techniques like motion blur will help.
Also, look into double or triple buffering, ideally synced to the monitor's vertical refresh, which will help reduce the flicker and tearing that can distract from the animation.
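For example, with SDL 1.2 you would request a double-buffered hardware surface at setup and present each frame with SDL_Flip, which on cooperative drivers waits for the vertical refresh (a sketch, not specific to the poster's platform):

/* Request a hardware, double-buffered display surface. */
SDL_Surface *screen = SDL_SetVideoMode(800, 600, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

/* ... draw the frame into 'screen' ... */

SDL_Flip(screen); /* swap buffers; often synced to the vertical refresh */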
If your animations are purely two-dimensional (for example, rigid shifts of window content), then you can improve their smoothness by pixel-locking them to the video frame. A motion of exactly N pixels per frame looks smooth even at very low framerates, whereas if you have some left-over fraction of a pixel, you get aliasing from the pixel sampling, which can be noticeable.
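A minimal sketch of that pixel-locking idea (names illustrative):

#include <math.h>

/* Round the ideal per-frame velocity to a whole number of pixels once,
   so the object moves by exactly the same integer step every frame. */
int step = (int)floorf(ideal_px_per_frame + 0.5f);
/* each frame: */
x += step; /* exactly N pixels per frame, no sub-pixel drift */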
Motion blur is in theory the way to make motion look smooth, but proper motion blur is expensive, so if you're already having trouble with the framerate, motion blur is probably only going to make things worse. There may be ways of reducing the cost, though: if the motion has a constant direction and speed, for example, you could render a single blurred image and use that. Or you could overdraw partially transparent copies of the moving image several times to get a "trail".
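One cheap approximation of that trail, sketched with SDL 1.2's per-surface alpha (this assumes 'sprite' has no per-pixel alpha channel, which per-surface alpha would otherwise ignore):

/* Overdraw faded copies at earlier points along the motion vector. */
for (int i = 3; i >= 0; i--) {
    SDL_Rect dst = { 0, 0, 0, 0 };
    dst.x = (Sint16)(x - i * dx); /* step back along the motion */
    dst.y = (Sint16)(y - i * dy);
    SDL_SetAlpha(sprite, SDL_SRCALPHA, (Uint8)(255 - i * 60)); /* older = fainter */
    SDL_BlitSurface(sprite, NULL, screen, &dst);
}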
I am involved in a side project that has a loop of LEDs, around 1.5 m in diameter, with a rotor on the bottom which spins the loop. A Raspberry Pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in a project that takes a microphone input and turns it into a column of pixels which is rendered on the loop in real time. The goal of this is to see if we can have it react to music in real time. So far I've come up with this idea:
Using an FFT to quickly turn the input sound into a spectrum that maps certain pixels to certain colors based on the amplitude at each frequency, so the equator of the globe would respond to the strength of the lower-frequency sound, progressing upwards towards the poles, which would respond to high-frequency sound.
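To make that concrete, here is a rough sketch of the analysis step I have in mind, assuming FFTW is available on the Pi (the window size and the bin-to-row grouping are just illustrative):

#include <math.h>
#include <fftw3.h>

#define N    1024  /* samples per analysis window */
#define ROWS 32    /* LED rows from equator to pole */

/* Fill row_levels[0..ROWS-1] with per-band magnitudes; row 0 = lowest band. */
void analyse(const double *samples, double *row_levels)
{
    double in[N];
    fftw_complex out[N / 2 + 1];
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    for (int i = 0; i < N; i++)
        in[i] = samples[i];
    fftw_execute(plan);

    int bins_per_row = (N / 2) / ROWS; /* group bins into equal bands */
    for (int r = 0; r < ROWS; r++) {
        double sum = 0.0;
        for (int b = r * bins_per_row; b < (r + 1) * bins_per_row; b++)
            sum += sqrt(out[b][0] * out[b][0] + out[b][1] * out[b][1]);
        row_levels[r] = sum / bins_per_row;
    }
    fftw_destroy_plan(plan);
}

(In a real loop you would create the plan once, not per window, but this shows the shape of it.)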
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it wouldn't seem to the observer to be responding to the specific song he/she is also hearing.
Without detecting the beat or some overall characteristic of the music that people recognise, it might be difficult for observers to understand that the output is correlated to the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but possibly also helpful, because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the Raspberry Pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and what-not. Part of my problem is that, although I know the Raspberry Pi is the newest model, I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of ~0.25 s or so would be acceptable. I feel like I'm in over my head, so I thought I'd ask you guys for input.
Thanks!
I'm getting "USRP Overflow detected" messages; I've read this is no big deal, but it's really annoying. I'm plotting a 40 MHz BW at 20 MSPS. This is an N210 and I'm connected through a switch.
It seems to plot fine but the scale on the Y-axis is constantly changing. Can I fix this?
Finally, the X-axis is from 0 to 500e-3. This makes no sense to me given my settings. Can someone please help me understand this?
In response to the question, "It seems to plot fine but the scale on the Y-axis is constantly changing. Can I fix this?": you can bring up the plot menu using the small down arrow on the plot view. From there, select Settings..., and under the Plot section there are fields for the plot min and max, which default to AUTO.
USRP Overflow detected; I've read this is no big deal ...
It really is a big deal: it means your PC was not fast enough to process the samples that came from the USRP, so some samples had to be dropped. This is the worst thing that can happen to your signal.
You will need to make your signal processing faster (for example, instead of processing everything live, first store the samples to an SSD and process them offline later; or buy a significantly faster PC, if you think that would help with your specific application), or reduce the sampling rate.
I'm plotting a 40 MHz BW at 20 MSPS
Nyquist says you're not. You can't observe 40 MHz of bandwidth with 20 MS/s; it's mathematically impossible. With complex (I/Q) sampling the usable bandwidth is at most the sample rate (20 MHz here), and with real sampling it's only half of that.
It seems to plot fine but the scale on the Y-axis is constantly changing. Can I fix this?
I don't know REDHAWK's graphical sinks, but this sounds like autoscaling, so yes, you can probably disable that feature.
Finally, the X-axis is from 0 to 500e-3. This makes no sense to me given my settings. Can someone please help me understand this?
You don't tell us what you're plotting. Time-domain values, given some trigger, converting complex samples to their magnitude? Or is it some kind of power spectrum?
In the latter case, this is most probably normalized frequency for a real signal; you have to read it as "frequency in units of the sampling rate". An X-axis running from 0 to 500e-3 therefore spans 0 to 0.5 × 20 MS/s = 10 MHz, i.e. up to the Nyquist edge.
As a brief background, I have been slowly chugging away at the core framework of a game I've been wanting to make for some time now. It has gotten to the point where I want to start really fleshing it out with some graphics assets other than colored boxes. And this brings me to the heart of my question:
What is the best method for creating graphics assets that appear to be the same quality independent of the device they are drawn on?
My game is styled after Pokemon, so I want to capture the 16-bit feel while remaining crisp regardless of the device resolution. Does this mean I just create a ton of duplicate sprite sheets, i.e. a 16x16, 32x32, 48x48, and 64x64 version of each asset? Or should I be making vector art and rendering it out specifically for each device? Or is there some other alternative I haven't considered?
Thanks!
If by 16-bit feel you mean a classic old-school "pixelated" style (but with crisp edges), then you can just draw the sprites at the minimal dimension and upscale by whatever factor you need using a pixel-art scaling algorithm, the simplest being nearest neighbour. There are of course many algorithms that produce much nicer results than NN, like 2xSaI and the hqx family, and RotSprite if you need rotation.
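Nearest neighbour in particular is only a few lines; a minimal sketch for 32-bit pixels and an integer scale factor:

/* Upscale src (w*h pixels) into dst (w*f by h*f pixels) by factor f,
   duplicating each source pixel into an f-by-f block. */
void upscale_nn(const unsigned int *src, unsigned int *dst,
                int w, int h, int f)
{
    for (int y = 0; y < h * f; y++)
        for (int x = 0; x < w * f; x++)
            dst[y * (w * f) + x] = src[(y / f) * w + (x / f)];
}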
If you want clean antialiased edges, you might want to check out this Microsoft Research paper: Depixelizing Pixel Art
You can then use these algorithms as a loading pre-pass for your game.
Alternatively, you could shift them "earlier" in your art pipeline to help speed up the generation of multiple (resolution/transform) variants, which you could then touch up further. This choice largely depends on your labor resources and perfectionism. Note also that this loses the "purity" of the solution, since it violates DRY: updates will require changes in all variants of a sprite.
I would suggest first trying out some of these upscaling filters to see if you are happy with the results. If you are, you can get away with a loading pre-pass, which is by far the most desirable outcome because it reduces work and maintenance by a large factor.
I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000*500, for testing) bitmap which is randomly generated.
Each frame the main surface is cleared and the terrain bitmap is blitted onto it. The current resolution is 1200*700, so while testing, 1200*500 pixels of terrain were visible at most scroll positions.
Now the problem is: the FPS is already dropping! I thought one simple bitmap wouldn't have any noticeable effect, but I'm already down to ~24 FPS with this!
Why is blitting & drawing a bitmap of that size so slow?
Am I taking the wrong approach to destructible terrain?
How have games like Worms done this? Their FPS seems really high, although there are definitely a lot of pixels being drawn.
Whenever you initialize a surface, do it the following way:
#include "SDL.h"
#include "SDL_image.h"

SDL_Surface* mySurface;
SDL_Surface* tempSurface;
tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever");
/* IMG_Load() comes from the SDL_image library. */
mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
The SDL_DisplayFormat() function converts the pixel format of your surface to the format the video surface uses. If you don't do it the way I described above, SDL does this conversion each time the surface is blitted.
And always remember: only blit the parts that are actually visible to the player.
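For example, a sketch that blits only the camera-visible window of a big terrain surface (names illustrative):

SDL_Rect src = { camera_x, camera_y, 1200, 700 }; /* visible part of the terrain */
SDL_Rect dst = { 0, 0, 0, 0 };                    /* top-left corner of the screen */
SDL_BlitSurface(terrain, &src, screen, &dst);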
That's my first guess as to why you are having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
If you redraw the whole screen each frame you will always get bad FPS. Redraw only the parts of the screen which have changed. You can also try SDL_HWSURFACE to get hardware surfaces, but it won't work on every graphics card.
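A sketch of the partial-redraw idea with the SDL 1.2 API (the rectangles are illustrative; with a single-buffered screen you push just the regions you touched):

/* After redrawing only the regions that changed this frame: */
SDL_Rect dirty[2] = { { 10, 10, 64, 64 }, { 300, 200, 128, 32 } };
SDL_UpdateRects(screen, 2, dirty);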
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites (see the sketch after this list).
Try SFML. It provides a hardware-accelerated 2D environment.
Use SDL 1.3. Get a source snapshot; it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.
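For the OpenGL option, a minimal textured-quad sketch with the fixed-function pipeline (texture creation and upload omitted; x, y, w, h are the sprite's screen rectangle):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, sprite_texture); /* assumed already uploaded */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();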
Pardon me if my lingo is not correct, as I'm new to game programming. I've been looking at some open source projects and noticed that some sprites are split up into several files, all of which are grouped together to make a 2D object look like it's animating. That's straightforward. Then I'll see a different approach, with the 2D object all in one PNG file or something similar, with all the frames next to each other.
Is there an advantage of using one approach to another? Should sprites be in separate files? Why are they sometimes all on one sheet?
The former approach is typically more straightforward and easy to program, so you see a lot of it in open source projects.
The second approach is more efficient on modern graphics hardware, because it allows you to draw many different sprites from one large texture by specifying different u,v coordinates to select each individual sprite from the composite sheet. Because u,v coordinates can be streamed along with vertex data to a shader, this lets you draw a large group of sprites much more efficiently than if you had to switch textures (which means changing shader state) for each poly. That means you can draw more sprites per millisecond and thus get more on screen.
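For instance, if the sheet is laid out as a uniform grid, the u,v rectangle of sprite number i is just arithmetic (a sketch; the grid layout is an assumption):

typedef struct { float u0, v0, u1, v1; } UVRect;

/* u,v rectangle of sprite i in a cols-by-rows grid atlas. */
UVRect sprite_uv(int i, int cols, int rows)
{
    float cw = 1.0f / cols, ch = 1.0f / rows;
    UVRect r;
    r.u0 = (i % cols) * cw;
    r.v0 = (i / cols) * ch;
    r.u1 = r.u0 + cw;
    r.v1 = r.v0 + ch;
    return r;
}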
Every time you switch the currently bound texture you incur a penalty (sometimes a very big one, if the system runs out of memory and starts paging textures in and out). So the more things you can draw with one texture, the better. Taken to the extreme, if you never switched texture bindings, you'd incur zero penalty.
On the other hand, video cards limit the maximum size of a texture, so you can only group smaller textures into a big one up to a point. The older the card, the smaller the textures you can use. If you want your game to work on a large variety of cards, you have to limit your textures to a more conservative size (or ship different sets of textures for different cards).
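You can query that limit at runtime rather than guessing; in OpenGL, for example:

GLint max_size = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
/* max_size is the largest texture width/height the driver supports,
   e.g. 2048 on older hardware. */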
Another problem is that sometimes the stuff in your virtual world just doesn't lend itself to being grouped like this. While you can have one big texture with every little decoration for your UI (window frames, buttons, etc.), you're going to have a harder time using a single texture for different enemies, because they might not even appear on screen at the same time, or you might be unable to draw them one after the other because of the back-to-front drawing order required for transparency.
Not so long ago, one reason to use packed sprites instead of separate ones was that graphics hardware was limited to power-of-two textures (256, 512, 1024, ...). You would waste a good amount of memory by not packing the sprites, as you would have to enlarge everything to power-of-two dimensions before you could upload it. Packing multiple sprites into a single texture worked around that.
Another reason is that it's much quicker to load one big image file from the disk than hundreds of small ones. This is still the case, as file access comes with quite a large per-file overhead, so the fewer files you have, the faster things become. Especially with small sprites, you can easily turn a hundred files into a single one, so the saving can be quite noticeable.
There are, however, also reasons against having everything in one texture. For one, OpenGL is no longer limited to power-of-two textures, so any size will work. More importantly, packing everything into one texture has negative side effects. When you have lots of scaling in a game, for example, you have to be careful about the borders of your sprites, as colors will bleed into neighboring sprites, giving you ugly artifacts. You can avoid that to a certain degree by adding extra space around your sprites, but it's not a perfect solution. Having everything in one texture also limits what you can do with the image. For certain effects, such as a waterfall, you might want to animate by simply offsetting the UV coordinates of the texture; you can't do that so easily when everything is packed into a single texture (see the sketch below).
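A sketch of that UV-offset trick for a waterfall: it relies on the repeat wrap mode, which only works when the waterfall owns the whole texture rather than a region of an atlas.

glBindTexture(GL_TEXTURE_2D, waterfall_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

float v = time_seconds * scroll_speed; /* the offset grows over time */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, v);        glVertex2f(x,     y);
    glTexCoord2f(1.0f, v);        glVertex2f(x + w, y);
    glTexCoord2f(1.0f, v + 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, v + 1.0f); glVertex2f(x,     y + h);
glEnd();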