Well, there isn't much example code for this. I'm writing a game loop and I'm running into a snag. At first I attempted to create a schedule for the loop, as this was listed under best practices as the way it should be done. My frame rate is 60 fps, and the loop was also set to run at 60 fps. I quickly noticed, however, that the interval time is little more than a suggestion, and the actual rate at which the function updates is highly erratic, ranging between 15 and 65 fps. This led to jumps in objects updating, even when their update distance was regulated as a function of time. Once updating dropped below 20 fps, the individual updates became visibly apparent and looked quite ugly.
I then tried to create my own thread so that I could more closely regulate the update rate. With this method I was able to regulate my update rate almost precisely, and all movement and animations were smooth. The issue with this method is that cocos2d clearly doesn't support multithreading well, as I periodically see screen tearing. Also, if my update rate doesn't precisely match my draw rate I see jumps. I believe this is due to the draw method firing in the middle of my update loop, a common problem with multithreading.
The two ways I can think of to solve this problem are to find some way to closely regulate the firing rate of the scheduler, or to find a way to lock the drawing code until I can finish my update. I've been looking for quite some time for examples of how to do either of these, with no success. If anyone out there has a clearer idea of how to handle this kind of situation, I would be very appreciative of some advice. I'm just too used to C++ and having things happen at the rate I tell them to. If you need me to post any additional information, just ask! Thanks in advance for any assistance.
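To clarify the second idea, here is roughly what I mean by locking the drawing code, sketched in Java just for readability (the real project is cocos2d, and the class and method names here are only illustrative): update and draw take the same lock, so draw can never see a half-finished update.

import java.util.concurrent.locks.ReentrantLock;

class SharedWorld {
    private final ReentrantLock worldLock = new ReentrantLock();

    void update(double dt) {
        worldLock.lock();
        try {
            // ... move entities, run collisions ...
        } finally {
            worldLock.unlock();
        }
    }

    void draw() {
        worldLock.lock();
        try {
            // ... read entity positions and issue draw calls ...
        } finally {
            worldLock.unlock();
        }
    }
}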
Your problem is in your scheduled function: it simply can't run as fast as you want. Try running the HelloWorld scene from cocos2d and you will notice that the fps stays stable near 60 (59.8 - 60.2).
Let's say I have two separate recordings of the same concert (each created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamps. However, when these recordings are played together or quickly toggled between, it becomes clear that the creation timestamps must be off, because there is a perceptible delay.
Since the timestamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but I recognize this may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If yes, a general outline of how this would work and key words would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle and some key words would be appreciated.
There are two separate issues you need to deal with. Issue 1 is the alignment of the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment; even if they did, they may be located at different distances from the speakers, and it takes time for sound to travel. Aligning the start times by hand is pretty trivial, since the human brain is good at comparing the similarities of sounds. Programmatically it's a different story. You might try something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
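To make the cross-correlation suggestion concrete, here is a brute-force sketch in Java (my own illustration, not a library routine). It assumes both recordings are already decoded to mono float arrays at the same sample rate; for real recording lengths you would want an FFT-based correlation instead.

class OffsetEstimator {
    // Try every lag in [-maxLagSamples, maxLagSamples] and keep the one where
    // the two signals overlap most strongly.
    static int estimateLagSamples(float[] a, float[] b, int maxLagSamples) {
        int bestLag = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int lag = -maxLagSamples; lag <= maxLagSamples; lag++) {
            double score = 0.0;
            for (int i = 0; i < a.length; i++) {
                int j = i + lag;
                if (j >= 0 && j < b.length) {
                    score += (double) a[i] * b[j];   // similarity of a and b at this offset
                }
            }
            if (score > bestScore) {
                bestScore = score;
                bestLag = lag;
            }
        }
        return bestLag;   // at the best lag, a[i] lines up with b[i + bestLag]
    }
}

The lag with the highest score is your best estimate of the offset between the two start times.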
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to be running at exactly the same rate. So even if you synchronize the start times, eventually the two are going to drift apart. The time it takes to drift noticeably is a function of the difference between the two clock frequencies; if they are relatively close you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio recording apps that allow you to time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching" or, again, have a look at dsp.stackexchange.com.
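As a rough sketch of how you might estimate "by how much" (this is just my outline of the idea, not a standard recipe): measure the alignment offset over a short window near the start of the overlap and again near the end, for example with the correlation sketch above. The change in offset tells you how much one recording's clock runs fast or slow relative to the other.

class DriftEstimator {
    static double estimateStretchFactor(double offsetAtStartSec,
                                        double offsetAtEndSec,
                                        double overlapDurationSec) {
        // Drift accumulated over the overlap, in seconds.
        double drift = offsetAtEndSec - offsetAtStartSec;
        // Factor by which recording B's timeline must be scaled to track recording A.
        return 1.0 + drift / overlapDurationSec;
    }
}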
I realize neither of these is a direct answer - rather, they are suggestions.
Take a look at this document; it describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.
I'm having this problem with LWJGL. I have a simple game and everything works fine. My main loop calculates when it should render and update the game, and it stays at a constant 59-60 fps. The problem comes from OpenGL, I guess. After a random amount of time my whole game starts to run at very low fps. My game loop still calculates 60 fps and updates, but what I see on screen doesn't match it. I'm guessing I'm overloading OpenGL. I'm clearing the color buffer bit and the depth buffer (though I don't use depth). Is there anything more I need to clear?
It's kind of tough to say what may be wrong with your program without actually looking at the code. Clearing the screen is one thing, but it really shouldn't have that big an impact, so unfortunately I can't tell you more without additional information.
Possibly it is a problem with slow hardware? This seems like a trivial "I have a slow graphics card" or "I have a lot of things open in the background" kind of problem. Also note that on most laptops, if you shake the machine the hard drive will lock up for a few seconds, causing stuttering.
As Andrew said, you can't really pinpoint this sort of problem without code.
I am currently helping someone with a reaction time experiment in which reaction times on the keyboard are measured. For this experiment it might be important to know how much error could be introduced by the delay between the key press and its processing in the software.
Here are some factors that I have already found out using Google:
The USB bus is polled at 125 Hz at minimum and 1000 Hz at maximum (depending on settings, see this link), which by itself adds a worst-case latency of one polling period: 1/125 s = 8 ms at the low end, or 1 ms at 1000 Hz.
There may be additional keyboard buffers in Windows that delay the keypresses further, but I do not know the logic behind those.
Unfortunately it is not possible to control the low-level logic of the experiment. The experiment is written in E-Prime, a software package that is often used for this kind of experiment. However, the company that offers E-Prime also sells additional hardware that they advertise for precise reaction timing, so they seem to be aware of this effect (but do not say how large it is).
Unfortunately it is necessary to use a standard keyboard, so I need to find ways to reduce the latency.
Any latency from key presses can be attributed to the debounce routine (I usually use 30 ms to be safe) and not to the processing algorithms themselves (unless you are only evaluating the first press).
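For illustration, a typical software debounce looks something like the sketch below (the 30 ms window is the figure mentioned above; the class is hypothetical). A key change is only reported once the raw input has held its new value for the full window, which is exactly where that latency comes from.

class Debouncer {
    private static final long SETTLE_MS = 30;   // assumed settle window, as above
    private boolean reported;                    // last state reported to the application
    private boolean pending;                     // raw state we are waiting to confirm
    private long pendingSince = -1;

    boolean sample(boolean raw, long nowMillis) {
        if (raw == reported) {
            pendingSince = -1;                   // nothing to confirm
        } else if (raw != pending || pendingSince < 0) {
            pending = raw;                       // raw state changed: start the timer
            pendingSince = nowMillis;
        } else if (nowMillis - pendingSince >= SETTLE_MS) {
            reported = raw;                      // stable for 30 ms: report it (adds ~30 ms latency)
        }
        return reported;
    }
}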
If you are running an experiment where millisecond timing is important you may want to use http://www.blackboxtoolkit.com/ to find sources of error.
Your needs also depend on the nature of your study. I've run RT experiments in E-Prime with a keyboard. Since any error should be consistent on average across participants, for some designs it is not a big problem. If you need to sync the data up with something else, though (like eye tracking or EEG), or want to draw conclusions about RT where the specific magnitude is important, then E-Prime's serial response box (or another brand, though I have had compatibility issues in the past between other brands of boxes and E-Prime) is a must.
First I want to apologize for my approximate English, as I'm French. I'm currently making a real-time game in Java, using LWJGL.
I have some questions regarding game loops:
I'm running the rendering routine in a thread. Is it a good idea? Usually, the rendering routine is fairly slow and should not slow down the world update (tick) routine, which is way more important. So I guess using a thread here seems like a good idea (minus the complications from using a thread).
In the world update routine, I'm updating a list of entities with the current time. Each entity can then compute its own deltaTime, corresponding to the last time it was updated (see the sketch after these questions). This differs from the usual update loop, which updates every entity in the list with the same deltaTime. This seemed appropriate because of the threaded rendering. Is it a good idea, or should I use the usual shared-deltaTime method instead? If so, is the threaded rendering still needed, and do I have to add a maximum deltaTime?
In general, is it a good idea to have a maximum deltaTime?
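Here is a minimal sketch of what I mean by the per-entity deltaTime, with the maximum deltaTime from the last question; all names are just illustrative.

import java.util.List;

class World {
    static class Entity {
        long lastUpdateNanos = System.nanoTime();
        double x, vx;                           // 1D position/velocity for brevity

        void update(long nowNanos, double maxDeltaSeconds) {
            double dt = (nowNanos - lastUpdateNanos) / 1_000_000_000.0;
            dt = Math.min(dt, maxDeltaSeconds); // cap the step (the "maximum deltaTime")
            x += vx * dt;                       // time-scaled movement
            lastUpdateNanos = nowNanos;
        }
    }

    // World tick: every entity computes its own dt from the shared "now".
    void tick(List<Entity> entities) {
        long now = System.nanoTime();
        for (Entity e : entities) {
            e.update(now, 0.1);                 // never integrate more than 100 ms at once
        }
    }
}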
Thanks for your time!
Is it a good idea? Separate threads are fairly advanced stuff; I see no reason to do multithreading to begin with. None of the mobile games I have worked on so far has needed multiple threads, even though they are 'real-time'. Hardcore PC and console games are where multithreading really starts to come into play. Here is a link to a recent talk on the subject if you are interested: http://archive.assembly.org/2011/seminars/adventures-in-multithreaded-gameplay-coding.
It sounds like this could cause some strange behaviour if the physics is not handled in one go (I'm not sure about this). For example, colliding an object that has already been updated to a new position with an object that is still at an earlier time, and then correcting that situation, may become problematic. Fast-moving collisions may need to be subdivided, which may be why you have the separate update thread, but why not calculate them all as happening at the same time?
'Variable timestep' and 'fixed timestep' are the available options. Most games at the moment seem to choose a fixed timestep of 30 fps. The rendering has to be kept within that budget so that no catching up is needed.
One problem with a variable timestep is that you are forced to pass deltaTime to all time-dependent areas. A fixed timestep is handy because you can assume you are running at, say, 30 fps and use that value everywhere. It is the preferred method at the moment, as far as I know.
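To illustrate the difference, a toy sketch (not from any particular engine):

class TimestepExamples {
    static class Player { double x; double speedPerSecond = 3.0; }

    // Variable timestep: dt has to be threaded through every time-dependent system.
    static void updateVariable(Player p, double dt) {
        p.x += p.speedPerSecond * dt;
    }

    // Fixed timestep (say 30 updates per second): the step is a known constant,
    // so movement, animation and physics values can be tuned against it directly.
    static final double FIXED_DT = 1.0 / 30.0;
    static void updateFixed(Player p) {
        p.x += p.speedPerSecond * FIXED_DT;
    }
}

With the fixed step you can tune gameplay values directly against FIXED_DT; with the variable step every such value has to be scaled by dt wherever it is used.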
Though this question is a few years old…
AFAIK,
Rendering is usually done on a separate processor, the GPU, so it is already effectively on a separate thread. However, drawing commands must be processed by the graphics driver (which runs on the CPU) before being dispatched to the GPU, and it is this processing that can be offloaded by multi-threading. In that case, though, you are responsible for managing the synchronization between the logic and rendering threads.
Generally speaking, games are all about interactions between objects, and it's very hard to divide the state graph into fully separated parts. As a result, the whole game state usually becomes a single graph, and this graph cannot be updated while it is being rendered. In that case, you get no benefit from being multi-threaded.
If you can keep separate, immutable data for rendering, then you may gain some benefit from rendering in a separate thread. Otherwise, I don't recommend it.
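For example, a minimal sketch of that idea in Java (class and method names are illustrative): the logic thread publishes an immutable snapshot at the end of each tick, and the render thread only ever reads the most recently published one, so neither side needs a lock.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class RenderSnapshotExample {
    // Immutable per-frame data the renderer needs (just positions, for brevity).
    static final class Snapshot {
        final List<float[]> spritePositions;
        Snapshot(List<float[]> positions) {
            this.spritePositions = Collections.unmodifiableList(new ArrayList<>(positions));
        }
    }

    private final AtomicReference<Snapshot> latest =
            new AtomicReference<>(new Snapshot(Collections.emptyList()));

    // Logic thread: call once per tick with the freshly computed positions.
    void publish(List<float[]> positions) {
        latest.set(new Snapshot(positions));
    }

    // Render thread: always draws from a consistent, never-mutated snapshot.
    Snapshot latestSnapshot() {
        return latest.get();
    }
}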
In addition, you should consider GC if you truly want a real-time game. GC-related performance issues are usually the biggest obstacle to making real-time stuff.
"Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast.
Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.)
What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60 fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine?
Thank you, and sorry if the question was confusing.
There is no reason why physics should depend on the framerate; this is clearly bad design.
I once tried to understand why people do this. I did a code review for a game written by another team in the company, and I didn't see it at first, but they used the hardcoded value 17 all over their code. When I ran the game in debug mode with the FPS shown, I saw it: the FPS was exactly 17! I looked over the code again and then it was clear: the programmers assumed that the game would always run at a constant 17 FPS. If the FPS was greater than 17, they slept to bring it back down to exactly 17. Of course, they did nothing if the FPS was smaller than 17, so the game just went crazy (for example, when playing at 2 FPS and driving a car in the game, the game alerted me: "Too Fast! Too Fast!").
So I wrote an email asking why they had hardcoded this value and used it in their physics engine, and they replied that this way they keep the engine simpler. I replied again: OK, but if we run the game on a device that is incapable of 17 FPS, your game engine runs very oddly, not as expected. They said they would fix the issue by the next code review.
After 3 or 4 weeks I got a new version of the source code. I was really curious to find out what they had done with the FPS constant, so the first thing I did was search the code for 17. There were only a couple of matches, but one of them was not something I wanted to see:
final static int FPS = 17;
So they removed all the hardcoded 17 values from the code and used the FPS constant instead. Their motivation: now if they need to put the game on a device that can only do 10 FPS, all they need to do is set the FPS constant to 10 and the game will run smoothly.
In conclusion, sorry for writing such a long message, but I wanted to emphasize that the only reason anyone would do such a thing is bad design.
Here's a good explanation of why your timestep should be kept constant: http://gafferongames.com/game-physics/fix-your-timestep/
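The core pattern from that article, sketched here in Java (integrate() and render() are placeholder methods): accumulate real elapsed time and consume it in fixed-size simulation steps, so the physics always sees the same dt no matter how fast the machine renders.

class FixedTimestepLoop {
    static final double DT = 1.0 / 60.0;      // fixed simulation step (seconds)
    volatile boolean running = true;

    void integrate(double dt) { /* advance physics by exactly dt */ }
    void render(double alpha) { /* draw, optionally blending by alpha */ }

    void run() {
        double accumulator = 0.0;
        long previous = System.nanoTime();
        while (running) {
            long now = System.nanoTime();
            accumulator += (now - previous) / 1_000_000_000.0;
            previous = now;

            while (accumulator >= DT) {       // catch up in whole fixed steps
                integrate(DT);                // physics always sees the same dt
                accumulator -= DT;
            }
            render(accumulator / DT);         // leftover fraction, for interpolation
        }
    }
}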
Additionally, depending on the physics engine, the system may become unstable when the timestep changes. This is because some of the data that is cached between frames is timestep-dependent. For example, the starting guess for an iterative solver (which is how constraints are solved) may be far off from the answer. I know this is true for Havok (the physics engine used by many commercial games), but I'm not sure which engine SMB uses.
There was also an article in Game Developer Magazine a few months ago illustrating how a jump with the same initial velocity reached different maximum heights at different frame rates (i.e. different timesteps). There was a supporting anecdote from a game (Tony Hawk?) where a certain jump could be made in the NTSC version of the game but not the PAL version (since the framerates are different). Sorry I can't find the issue at the moment, but I can try to dig it up later if you want.
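You can reproduce the effect in a few lines; here is a toy Java example (my own, not from the article) that integrates the same jump with explicit Euler at a 1/60 s and a 1/50 s timestep and prints the peak heights, which come out different.

public class JumpHeightDemo {
    // Integrate a jump with explicit Euler until the character lands again and
    // return the highest point reached.
    static double peakHeight(double dt) {
        double y = 0.0;            // height (m)
        double v = 5.0;            // initial jump velocity (m/s)
        final double g = 9.81;     // gravity (m/s^2)
        double peak = 0.0;
        while (v > 0.0 || y > 0.0) {
            y += v * dt;           // position uses the *old* velocity (explicit Euler)
            v -= g * dt;
            peak = Math.max(peak, y);
        }
        return peak;
    }

    public static void main(String[] args) {
        // Analytic peak is v^2 / (2g), about 1.274 m; both discrete results
        // overshoot it, and by different amounts depending on the timestep.
        System.out.printf("dt = 1/60 s: peak = %.4f m%n", peakHeight(1.0 / 60.0));
        System.out.printf("dt = 1/50 s: peak = %.4f m%n", peakHeight(1.0 / 50.0));
    }
}

The difference per jump is small, but it is enough for a jump to clear an obstacle at one frame rate and miss it at another.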
They probably needed to get the game done quickly and decided that the current implementation would cover a sufficient user base.
Now, it's not really that hard to retrofit framerate independence if you think about it during development, but I suppose they could have gone down some deep holes.
I think it's unnecessary, and I've seen it before (some early hardware-accelerated 3D games did the same thing, where the game went faster if you looked at the sky and slower if you looked at the ground).
It just sucks. Bug the developers about it and hope that they patch it, if they can.