Best approach for game animation? - graphics

I have a course exercise in OpenGL: write a game with simple animation of a few objects.
While discussing our design options with my partner, we realized we have two major choices for how the animation will work. Either:
Set a timer for a constant interval, say 30 ms; when the timer fires, calculate where the objects should be and draw the frame. Or:
Don't use a timer: run a normal loop continuously, and in each iteration check how much time has passed, calculate where the objects should be over that interval, and draw the frame.
What should generally be the preferred approach? Does anyone have concrete experience with either approach?

Render and compute as fast as you can to get the maximum frame rate (as capped by the vertical sync).
Don't use a timer; timers aren't reliable below 50-100 ms on Windows. Check how much time has passed instead. (Usually you need both the delta t and an absolute value, depending on whether your animation is physics-based or keyframe-based.)
Also, if you want to be stable, put an upper/lower bound on your time step: go into slow motion if a frame takes a few seconds to render (disc access by another process?), and skip an update if you get two of them within, say, 10 ms.
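A minimal sketch of such a loop, assuming hypothetical update()/render() hooks (the 10 ms and 100 ms bounds are just the example values from above):

#include <algorithm>
#include <chrono>

void update(double dtSeconds) { /* advance the simulation (stub) */ }
void render()                 { /* draw the frame (stub) */ }

void runLoop(bool& running) {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    while (running) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        if (dt < 0.010) {   // lower bound: skip this update,
            render();       // keeping 'previous' so elapsed time accumulates
            continue;
        }
        previous = now;
        // upper bound: clamp a long stall so the game goes into
        // slow motion instead of making one huge, unstable jump
        update(std::min(dt, 0.100));
        render();
    }
}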
Update
(Since this is a rather popular answer)
I usually prefer having a fixed time-step, as it makes everything more stable. Most physics engines are pretty robust against varying time, but other things, like particle systems or various simpler animations or even game logic, are easier to tune when having everything run in a fixed time step.
Update2
(Since I got 10 upvotes ;)
For further stability over long periods of running (>4 hours), you probably want to make sure you're not using floats/doubles to compute large time differences, since you lose precision doing so and your game's animations/physics will suffer. Use fixed point (or 64-bit microsecond-based) integers instead.
For the hairy details, I recommend reading A matter of precision by Tom Forsyth.
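One possible shape for the integer-time idea (a sketch, not taken from the article): keep absolute time as 64-bit microseconds and only let the small frame-to-frame difference touch floating point.

#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

// Absolute time as 64-bit integer microseconds: no precision loss as
// the value grows over hours of play, unlike a float/double timestamp.
int64_t nowMicros(const Clock::time_point& start) {
    return std::chrono::duration_cast<std::chrono::microseconds>(
        Clock::now() - start).count();
}

// Only the small delta is ever converted to floating point:
//   float dtSeconds = float(t1 - t0) * 1e-6f;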

Read this page about game loops.
In short, set a timer:
Update the state of the game at a fixed frequency (something like every 25 ms = 1 s / 40 fps). That includes the properties of the game objects, the input, the physics, the AI, etc. I call that the Model and the Controller. The need for a fixed update rate comes from the problems that may appear on too-slow or too-fast hardware (read the article). Some physics engines also prefer to update at a fixed frequency.
Update the frame (the graphics) of the game as fast as possible. That would be the View. That way you'll provide a smooth game. You can also enable vsync, so the display will wait for the graphics card (usually 60 fps).
So on each iteration of the loop, check whether you should update the model/controller. If it's late, update until it's up to date. Then update the frame once and continue your loop.
The tricky part is that, because of the different update rates, on fast hardware the view will update several times before the model and controller do. Therefore you should interpolate the positions of your game objects based on "where they would be if the game state had been updated". It's really not that difficult.
You may have to maintain two different data structures: one for the model and one for the view. For instance, you could have a scene graph for your model and a BSP tree for your view.
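A sketch of the loop described above, assuming hypothetical updateModel()/renderView() hooks (the 25 ms step is the example value from above; renderView takes the interpolation factor):

#include <chrono>

void updateModel()            { /* advance state by exactly one fixed step (stub) */ }
void renderView(double alpha) { /* draw, blending previous and current
                                   state by alpha in [0, 1) (stub) */ }

void gameLoop(bool& running) {
    using clock = std::chrono::steady_clock;
    constexpr double STEP = 0.025;  // 25 ms = 40 updates per second
    auto previous = clock::now();
    double lag = 0.0;
    while (running) {
        auto now = clock::now();
        lag += std::chrono::duration<double>(now - previous).count();
        previous = now;
        // If rendering fell behind, update the model/controller
        // repeatedly until they are up to date.
        while (lag >= STEP) {
            updateModel();
            lag -= STEP;
        }
        // Render as fast as possible, interpolating object positions
        // by how far we are into the next, not-yet-computed step.
        renderView(lag / STEP);
    }
}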

The second would be my preferred approach, because timers are often not as accurate as you might think, and they carry all the latency and overhead of the event-handling system. Accounting for the elapsed time will give your animations a much more consistent look and keep them robust if/when your frame rate dips.
Having said that, if your animation is based on a physics simulation (e.g. rigid body or ragdoll animation), then having a fixed update interval for your physics can greatly simplify the implementation.

Option 2 is by far preferred. It will scale nicely across differently performing hardware.
The book "Game Programming Gems 1" had a chapter that covers exactly what you need:
Frame Rate Independent Linear Interpolation

Use the second method. I did a game for my senior project, and from experience there is no guarantee that your logic will be done processing when the timer wants to fire.

I would be tempted to use the loop, since it will render as fast as possible (i.e. immediately after your physics computations are done). This will probably be more robust if you run into any slow-down in computation, which would otherwise cause timer firings to queue up. However, in case of such a slow-down you may have to cap the time step computed between updates, since your physics engine may go unstable with too large a jump in time.

I'd suggest setting the system up to work on a "delta" that's passed in from outside.
When I did this, I based everything inside the animation format on real-time values. The delta I passed in was 1/30 of a second, but it could be anything. With this system you can get either your first or your second option, depending on whether you pass in a fixed delta or the amount of time that has passed since the last frame.
As for which is better, it depends on your game and your requirements. Ideally all of your systems would be based around the same delta so that your physics match your animations. If your game drops frames at all and if all of your systems work with a variable delta, I'd suggest the variable delta is the better of the two solutions for end user experience.
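A sketch of that pattern (the names are made up): the system only ever sees a delta, so the caller decides which of the two options it gets.

// Hypothetical animation system driven purely by an externally
// supplied delta, in seconds.
struct AnimationSystem {
    double localTime = 0.0;
    void advance(double delta) {
        localTime += delta;
        // ...evaluate keyframes/physics at localTime...
    }
};

// Option 1 (fixed step):     anim.advance(1.0 / 30.0);
// Option 2 (measured step):  anim.advance(secondsSinceLastFrame);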

Related

Level of Detail in 3D graphics - What are the pros and cons?

I understand the concept of LOD, but I'm trying to find out its downsides, and I can find no reference to them by Googling around. The only pro I keep coming across is that it improves performance by omitting detail when an object is far away and displaying better graphics when the object is near.
Seriously, is that the only pro, with zero cons? Please advise. Thanks.
There are several kinds of LOD based on camera distance. Geometric, animation, texture, and shading variations are the most common (there are also LOD changes that can occur based on image size and, for gaming, hardware capabilities and/or frame rate considerations).
At far distances, models can change tessellation or be replaced by simpler models. Animated details (say, fingers) may simplify or disappear. Textures may move to simpler textures, bump maps vanish, spec/diffuse maps combine, etc. And shaders may also swap out to reduce the number of texture inputs or calculations (though this is less common and may be less profitable, since when objects are far away they already fill fewer pixels -- but it's important for screen-filling entities like, say, a mountain).
The upsides are that your game/app has to render less data, and in some cases the down-rezzed LOD model may actually look better when far away than the more complex model (usually because the more detailed model will exhibit aliasing when far away, while the simpler one can be tuned for that distance). This frees up resources for the nearer models that you probably care about, and lets you render larger scenes overall -- you might only be able to render three spaceships at a time at full res, but hundreds if you use LODs.
The downsides are pretty obvious: you need to support asset swapping, which means both the real-time selection and switching of different assets and their management (at times having both models in your memory pipeline, one to discard and one to load); and those models don't come out of thin air -- someone needs to create them. Finally, and this is really tricky for PC apps, less so for more stable platforms like console gaming: how do you MEASURE the rendering benefit? What's the best point to flip from version A of a model to B, from B to C, and so on? Often LODs are made from pretty hand-wavy specifications by an engineer, or even a producer or art director, based on hunches. Good measurement is important.
LOD has a variety of frameworks. What you are describing fits a distance-based framework.
One possible con is that you will get inaccuracies from choosing an arbitrary point within the object for every distance calculation. This can cause popping effects at times, since the measured distance can change with the object's orientation.
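To make the distance-based selection concrete, here is a small sketch with hysteresis added to damp exactly that popping (the threshold values and the margin are made-up numbers to be tuned per asset):

#include <cstddef>
#include <vector>

struct LodSet {
    std::vector<float> switchDistance;  // ascending, one per LOD drop
};

std::size_t pickLod(const LodSet& lods, float distance,
                    std::size_t currentLod) {
    std::size_t lod = 0;
    while (lod < lods.switchDistance.size() &&
           distance > lods.switchDistance[lod])
        ++lod;
    // Hysteresis: only return to a finer LOD once the camera is well
    // inside the threshold, so oscillating around a boundary (or the
    // reference point shifting with orientation) doesn't cause popping.
    const float margin = 0.9f;
    if (lod < currentLod && distance > lods.switchDistance[lod] * margin)
        lod = currentLod;
    return lod;
}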

Given measurements from an event series as input, how do I generate an infinite input series with the same profile?

I'm currently working with a system that makes scheduling decisions based on a series of requests and the state of the system.
I would like to take the stream of real inputs, mock out some of the components, and run simulations against the rest. The idea is to use it for planning with respect to system capacity (i.e. when to scale certain components), tracking down certain failure modes, and analyzing the effects of changes to the codebase (i.e. simulations with version A compared to simulations with version B).
I can do everything related to this, except generate a suitable input stream. Replaying the exact input from production hasn't been very helpful because it's hard to get a long enough data stream to tease out some of the behavior that I'm trying to find. In other words, if production falls over at 300 days of input, I don't have enough data to find out until after it fell over. Repeating the same input set has been considered; but after a few initial tries, the developers all agree that the simulation seems to "need more random".
About this particular system:
The input is a series of irregularly spaced events (i.e. a stochastic process with discrete time and continuous state space).
Properties are not independent of each other.
Even the more independent of the properties are composites of other properties that will always be, by nature, invisible to me (leading to a multi-modal distribution).
Request interval is not independent of other properties (i.e. lots of requests for small amounts of resources come through in a batch, large requests don't).
There are feedback loops in it.
It's provably chaotic.
So:
Given a stream of input events with a certain distribution of various properties (including interval), how do I generate an infinite stream of events with the same distribution across a number of non-independent properties?
Having looked around, I think I need to do a Markov-Chain Monte-Carlo Simulation. My problem is figuring out how to build the Markov-Chain from the existing input data.
Maybe it is possible to model the input with a copula. There are tools that help you do so, e.g. see this paper. Apart from this, I would suggest moving the question to http://stats.stackexchange.com, as this is a statistical problem and will likely draw more attention over there.

How to predict when next event occurs based on previous events? [closed]

Basically, I have a reasonably large list (a year's worth of data) of times that a single discrete event occurred (for my current project, a list of times that someone printed something). Based on this list, I would like to construct a statistical model of some sort that will predict the most likely time for the next event (the next print job) given all of the previous event times.
I've already read this, but the responses don't exactly help out with what I have in mind for my project. I did some additional research and found that a Hidden Markov Model would likely allow me to do so accurately, but I can't find a link on how to generate a Hidden Markov Model using just a list of times. I also found that using a Kalman filter on the list may be useful but basically, I'd like to get some more information about it from someone who's actually used them and knows their limitations and requirements before just trying something and hoping it works.
Thanks a bunch!
EDIT: So by Amit's suggestion in the comments, I also posted this to the Statistics StackExchange, CrossValidated. If you do know what I should do, please post either here or there
I'll admit it, I'm not a statistics kind of guy. But I've run into these kinds of problems before. Really what we're talking about here is that you have some observed, discrete events and you want to figure out how likely it is that you'll see them occur at any given point in time. The issue you've got is that you want to take discrete data and make continuous data out of it.
The term that comes to mind is density estimation. Specifically, kernel density estimation. You can get some of the effects of kernel density estimation by simple binning (e.g. count the number of events in a time interval, such as every quarter hour or hour). Kernel density estimation just has some nicer statistical properties than simple binning. (The produced data is often 'smoother'.)
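As an illustration, a minimal Gaussian kernel density estimate over event timestamps (a sketch; the bandwidth is the knob that trades smoothness for detail):

#include <cmath>
#include <vector>

// Estimated event density (events per second) at time t, from a list
// of event timestamps in seconds. Larger bandwidth = smoother curve.
double densityAt(const std::vector<double>& eventTimes,
                 double t, double bandwidth) {
    if (eventTimes.empty()) return 0.0;
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (double e : eventTimes) {
        double z = (t - e) / bandwidth;
        sum += std::exp(-0.5 * z * z);
    }
    return sum / (std::sqrt(2.0 * pi) * bandwidth * eventTimes.size());
}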
That only takes care of one of your problems, though. The next problem is still the far more interesting one -- how do you take a timeline of data (in this case, only printer data) and produce a prediction from it? First things first -- the way you've set up the problem may not be what you're looking for. While the miracle idea of having a limited source of data and predicting its next step sounds attractive, it's far more practical to integrate more data sources to create an actual prediction. (E.g. maybe the printers get hit hard just after there's a lot of phone activity -- something that can be very hard to predict in some companies.) The Netflix Challenge is a rather potent example of this point.
Of course, the problem with more data sources is the extra legwork to set up the systems that collect the data.
Honestly, I'd consider this a domain-specific problem and take two approaches: Find time-independent patterns, and find time-dependent patterns.
An example time-dependent pattern would be that every weekday at 4:30, Suzy prints out her end-of-day report. This happens at specific times on specific days (every day, every weekday, every weekend day, every Tuesday, every 1st of the month, etc.), which makes it extremely simple to detect with predetermined intervals -- just create a curve of the estimated probability density function that's one week long, go back in time, and average the curves (possibly a weighted average via a windowing function for better predictions).
If you want to get more sophisticated, find a way to automate the detection of such intervals. (Likely the data wouldn't be so overwhelming that you could just brute force this.)
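A concrete (if simplistic) version of the week-long curve idea: fold all timestamps into one week of quarter-hour bins and average over the number of weeks observed. (The epoch alignment, bin width, and names are arbitrary choices for this sketch; weekCount is assumed to be at least 1.)

#include <array>
#include <vector>

// eventTimes: seconds since an epoch that falls on a week boundary.
std::array<double, 7 * 24 * 4> weeklyProfile(
        const std::vector<long long>& eventTimes, int weekCount) {
    constexpr long long kWeek = 7LL * 24 * 3600;
    constexpr long long kBin = 15 * 60;  // quarter-hour bins
    std::array<double, 7 * 24 * 4> profile{};
    for (long long t : eventTimes)
        profile[(t % kWeek) / kBin] += 1.0 / weekCount;
    return profile;  // average events per quarter-hour slot of the week
}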
An example time-independent pattern is that every time Mike in accounting prints out an invoice list sheet, he goes over to Johnathan, who prints out a rather large batch of complete invoice reports a few hours later. This kind of thing is harder to detect because it's more free-form. I recommend looking at various intervals of time (e.g. 30 seconds, 40 seconds, 50 seconds, 1 minute, 1.2 minutes, 1.5 minutes, 1.7 minutes, 2 minutes, 3 minutes, ..., 1 hour, 2 hours, 3 hours, ...) and resampling them in a nice way (e.g. Lanczos resampling) to create a vector. Then use a vector-quantization-style algorithm to categorize the "interesting" patterns. You'll need to think carefully about how you'll deal with the certainty of the categories, though -- if a resulting category has very little data in it, it probably isn't reliable. (Some vector quantization algorithms are better at this than others.)
Then, to create a prediction as to the likelihood of printing something in the future, look up the most recent activity intervals (30 seconds, 40 seconds, 50 seconds, 1 minute, and all the other intervals) via vector quantization and weight the outcomes based on their certainty to create a weighted average of predictions.
You'll want to find a good way to measure certainty of the time-dependent and time-independent outputs to create a final estimate.
This sort of thing is typical of predictive data compression schemes. I recommend you take a look at PAQ since it's got a lot of the concepts I've gone over here and can provide some very interesting insight. The source code is even available along with excellent documentation on the algorithms used.
You may want to take an entirely different approach from vector quantization, discretize the data, and use something more like a PPM scheme. It can be much simpler to implement and still effective.
I don't know what the time frame or scope of this project is, but this sort of thing can always be taken to the Nth degree. If it's got a deadline, I'd emphasize getting something working first and then making it work well. Something suboptimal is better than nothing.
This kind of project is cool. This kind of project can get you a job if you wrap it up right. I'd recommend you take your time, do it right, and post it up as functional, open-source, useful software. I highly recommend open source, since you'll want to build a community that can contribute data-source providers for more environments than you have access to, the will to support, or the time to support.
Best of luck!
I really don't see how a Markov model would be useful here. Markov models are typically employed when the event you're predicting is dependent on previous events. The canonical example, of course, is text, where a good Markov model can do a surprisingly good job of guessing what the next character or word will be.
But is there a pattern to when a user might print the next thing? That is, do you see a regular pattern of time between jobs? If so, then a Markov model will work. If not, then the Markov model will be a random guess.
As for how to model it, think of the different time periods between jobs as letters in an alphabet. In fact, you could assign each time period a letter, something like:
A - 1 to 2 minutes
B - 2 to 5 minutes
C - 5 to 10 minutes
etc.
Then go through the data and assign a letter to each time period between print jobs. When you're done, you have a text representation of your data that you can run through any of the Markov examples that do text prediction.
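A sketch of that approach end to end (roughly the buckets from the table above, a first-order transition count, and a most-likely-next prediction; all names here are made up):

#include <cstddef>
#include <map>
#include <vector>

char gapToLetter(double minutes) {   // buckets roughly as in the table above
    if (minutes < 2)  return 'A';
    if (minutes < 5)  return 'B';
    if (minutes < 10) return 'C';
    return 'D';                      // everything longer, for this sketch
}

// First-order Markov model: count letter-to-letter transitions, then
// predict the most frequent successor of the most recent letter.
char predictNextGap(const std::vector<double>& gapsMinutes) {
    if (gapsMinutes.size() < 2) return '?';
    std::map<char, std::map<char, int>> counts;
    for (std::size_t i = 0; i + 1 < gapsMinutes.size(); ++i)
        ++counts[gapToLetter(gapsMinutes[i])]
                [gapToLetter(gapsMinutes[i + 1])];
    char best = '?';
    int bestCount = 0;
    for (const auto& [letter, n] : counts[gapToLetter(gapsMinutes.back())])
        if (n > bestCount) { best = letter; bestCount = n; }
    return best;  // '?' if the last letter was never seen mid-sequence
}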
If you have an actual model that you think might be relevant for the problem domain, you should apply it. For example, it is likely that there are patterns related to day of week, time of day, and possibly date (holidays would presumably show lower usage).
Most raw statistical modelling techniques based on examining (say) time between adjacent events would have difficulty capturing these underlying influences.
I would build a statistical model for each of those known events (day of week, etc), and use that to predict future occurrences.
I think a predictive neural network would be a good approach for this task.
http://en.wikipedia.org/wiki/Predictive_analytics#Neural_networks
This method is also used for prediction in, e.g., weather forecasting, the stock market, and sunspot activity.
There's a tutorial here if you want to know more about how it works.
http://www.obitko.com/tutorials/neural-network-prediction/
Think of a Markov chain as a graph with vertices connected to each other by a weight or distance. Moving around this graph would eat up the sum of the weights or distances you travel. Here is an example with text generation: http://phpir.com/text-generation.
A Kalman filter is used to track a state vector, generally with continuous (or at least discretized continuous) dynamics. This is sort of the polar opposite of sporadic, discrete events, so unless you have an underlying model that includes this kind of state vector (and is either linear or almost linear), you probably don't want a Kalman filter.
It sounds like you don't have an underlying model, and are fishing around for one: you've got a nail, and are going through the toolbox trying out files, screwdrivers, and tape measures 8^)
My best advice: first, use what you know about the problem to build the model; then figure out how to solve the problem, based on the model.

Typical rendering strategy for many and varied complex objects in DirectX?

I am learning DirectX. It provides a huge amount of freedom in how to do things, but presumably different strategies perform differently, and it provides little guidance as to what well-performing usage patterns might be.
When using DirectX, is it typical to have to swap in a bunch of new data multiple times on each render?
The most obvious, and probably really inefficient, way to use it would be like this.
Strategy 1
On every single render
Load everything for model 0 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything for model 1 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
I am guessing you can make this more efficient, partly by giving the biggest things to load dedicated slots, e.g. if the texture for model 0 is really complicated, don't reload it on each step; just load it into slot 1 and leave it there. Of course, since I'm not sure how many registers of each type are guaranteed to exist in DX11, this is complicated (can anyone point to documentation on that?)
Strategy 2
Choose some texture slots for loading and others for perpetual storage of your most complex textures.
Once only
Load most complicated models, shaders and textures into slots dedicated for perpetual storage
On every single render
Load everything not already present for model 0 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything not already present for model 1 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
Strategy 3
I have no idea, but the above are probably all wrong because I am really new at this.
What are the standard strategies for efficient rendering in DirectX (specifically DX11) to make it as efficient as possible?
DirectX manages the resources for you and tries to keep them in video memory as long as it can to optimize performance, but can only do so up to the limit of video memory in the card. There is also overhead in every state change even if the resource is still in video memory.
A general strategy for optimizing this is to minimize the number of state changes during the rendering pass. Commonly this means drawing all polygons that use the same texture in a batch, and all objects using the same vertex buffers in a batch. So generally you would try to draw as many primitives as you can before changing state to draw more primitives.
This often will make the rendering code a little more complicated and harder to maintain, so you will want to do some profiling to determine how much optimization you are willing to do.
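One common shape for this, sketched below: queue draw items with a sort key that packs the expensive state (shader, texture), sort, and only change state when the key changes. The key layout and names are illustrative, not a D3D API; the commented-out calls are where the real pipeline work would go.

#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem {
    uint64_t sortKey;   // e.g. (shaderId << 32) | textureId
    // ...vertex buffer, constant data, etc.
};

void drawAll(std::vector<DrawItem>& items) {
    // Cluster items that share the same expensive state.
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.sortKey < b.sortKey;
              });
    uint64_t boundKey = ~0ull;  // nothing bound yet
    for (const DrawItem& item : items) {
        if (item.sortKey != boundKey) {
            // Only here do you touch the pipeline:
            // VSSetShader / PSSetShader / PSSetShaderResources / ...
            boundKey = item.sortKey;
        }
        // context->Draw(...) for this item
    }
}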
Generally you will get bigger performance increases through more general algorithm changes beyond the scope of this question. Some examples would be reducing polygon counts for distant objects and occlusion queries. A popular, and true, phrase is "the fastest polygons are the ones you don't draw". Here are a couple of quick links:
http://msdn.microsoft.com/en-us/library/bb147263%28v=vs.85%29.aspx
http://www.gamasutra.com/view/feature/3243/optimizing_direct3d_applications_.php
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter06.html
Other answers address the question better per se, but by far the most relevant thing I found since asking was this discussion on gamedev.net, in which some big-title games are profiled for state changes and draw calls.
What comes out of it is that big-name games don't appear to worry too much about this; i.e., the significant time it takes to write code fussing over state changes probably isn't worth it compared to the time lost getting your application finished.

Do I need to worry about the overall audio gain in my program?

Is there level limiting somewhere in the digital audio chain?
I'm working on a tower defence game with OpenAL, and there will be many towers firing at the same time, all using the same sound effect (at least for now). My concern is that triggering too many sounds at the same time could lead to speakers being blown, or at the very least a headache for the user.
It seems to me that there should be a level limiter either in software, or at the sound card hardware level to prevent fools like me from doing this.
Can anyone confirm this, and if so, tell me where this limiter exists? Thanks!
as it is, you'd be lucky if the signal were simply clipped in software before it hit the DAC. you can easily implement this yourself. when i say 'clipped', i mean that amplitudes exceeding the maximum are set to the maximum, rather than allowed to overflow, wrap, or produce other, even more unpleasant results. clipping at this stage often sounds terrible, but the alternatives i mentioned sound worse.
there's actually a big consideration to account for in this regard: are you rendering in float or int? if int, what is your headroom? with int, you could clip or overflow at practically any stage. with floating point, that will only happen through a serious design flaw. of course, you'll often eventually have to convert to int when interfacing with the DAC/hardware. the DAC will limit the output because it handles signals within very specific limits. at worst, this will be the equivalent of (sampled) white noise at 0 dB FS (which can be an awful experience for the user). so... the DAC serves as a limiter, although this stage only makes it significantly less probable that a signal will cause hearing or equipment damage.
at any rate, you can easily avoid this, and i recommend you do it yourself, since you're directly in control of the number of sounds and their amplitude. at worst, samples with peaks at 0 dB FS will all converge at the same sample, and you'll need to multiply the signal (the sum of shots) by the reciprocal of the number of shots:
output[i] = numShots > 1 ? allThoseShots[i]*(1.0/numShots) : allThoseShots[i];
that's not ideal in many cases (because there will be an exaggerated ducking sound), so you should actually introduce a limiter in addition to the overall reduction for the number of simultaneous shots. then you can back off the shots' signal by a smaller factor, since their peaks will not likely converge at the same point in time. a simple limiter with ~10 ms of lookahead should prevent you from doing something awful. it would also be a good idea to detect heavy limiting in debug mode; that catches upstream design issues.
in any case, you should definitely consider appropriate gain compensation your responsibility -- you never want to clip the output DAC. in fact, you want to leave some headroom (ref: intersample peaks).
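a minimal sketch of the mixing side of this (the names and the 0.9 headroom factor are made up; a real lookahead limiter is gentler, this is just the "never hand the DAC garbage" floor):

#include <algorithm>
#include <cstddef>
#include <vector>

// mix several simultaneous voices into one float buffer, scale by the
// voice count as described above, leave a little headroom, and
// hard-clip as a last-resort safety net.
void mixVoices(const std::vector<const float*>& voices,
               float* out, std::size_t frames) {
    const float headroom = 0.9f;  // room for intersample peaks
    const float gain = voices.empty()
        ? 0.0f : headroom / static_cast<float>(voices.size());
    for (std::size_t i = 0; i < frames; ++i) {
        float sum = 0.0f;
        for (const float* v : voices) sum += v[i];
        // clip: clamp instead of letting the int conversion overflow/wrap
        out[i] = std::min(1.0f, std::max(-1.0f, sum * gain));
    }
}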
