I'm struggling to implement a good chaser. I have a hockey player who needs to chase a puck, and I can predict both the next player and puck positions.

I first tried steering behaviors, but I failed to find a good predictor for situations when the puck is close. Imagine, for example, that the puck heads almost towards the player at high speed. The player only makes small turns while the puck is still somewhat far away, but when the puck comes closer and just misses him, in the last two or three ticks the player needs to turn through much bigger angles to keep facing the puck. With a limit on the turning angle, the puck escapes and the player can't do anything about it. If he started turning earlier it would be fine, but when I predict more steps ahead, the player tends to start turning towards a puck position far behind him.

Then I tried A* search. It works great while the puck is ahead and the puck's speed is lower than the player's. However, when the puck is faster, it becomes an escaping target: every time A* expands a new state, it finds that in earlier states the puck was closer to the player (the puck escapes!), so it prefers those earlier states and effectively degenerates into BFS.
So I guess there's a well-known solution to this, but I've failed to google anything on it, so maybe the community can help me. Thanks in advance!
UPDATE: so basically I reinvented the wheel, I guess. What I'm doing now is iterating through the predicted puck positions. When I hit the first position that the player can reach in the same number of ticks, I declare victory. This is VERY resource-expensive, but I couldn't come up with anything better.
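For what it's worth, here is a minimal sketch of that intercept search, assuming a simple kinematic model in which the player covers a fixed distance per tick; Vec2, findInterceptPoint and playerSpeedPerTick are invented names, and real code would also have to respect the turn-rate limit rather than use straight-line distance:

```java
import java.util.List;

/** Minimal 2D vector; a stand-in for whatever math type the engine actually uses. */
final class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 sub(Vec2 o) { return new Vec2(x - o.x, y - o.y); }
    double length() { return Math.sqrt(x * x + y * y); }
}

final class InterceptChaser {
    /**
     * Walk the predicted puck positions tick by tick and return the first one the
     * player could reach in no more ticks than the puck needs to get there.
     * Turn-rate limits are ignored here; a stricter check would simulate the
     * player's constrained motion instead of using straight-line distance.
     */
    static Vec2 findInterceptPoint(Vec2 playerPos, double playerSpeedPerTick,
                                   List<Vec2> predictedPuckPositions) {
        for (int tick = 0; tick < predictedPuckPositions.size(); tick++) {
            Vec2 puckAt = predictedPuckPositions.get(tick);
            double ticksNeeded = puckAt.sub(playerPos).length() / playerSpeedPerTick;
            if (ticksNeeded <= tick) {
                return puckAt; // first reachable interception point
            }
        }
        // No interception within the prediction horizon: chase the last prediction.
        return predictedPuckPositions.get(predictedPuckPositions.size() - 1);
    }
}
```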
The part about the steering behavior is pretty hard to understand at the moment, but regarding the A* problem: I think the issue is that since your agent (the player) is operating in a dynamic environment, you have to recompute the heuristic at every expansion step, because the h values for the states in the frontier become obsolete as the puck moves. Am I anywhere close to understanding your problem?
Out of curiosity, what kind of heuristic are you using?
What sort of algorithms do I use for simulating sound? Like, if the player approaches the source of a sound, it should get louder, but if the player goes farther away, it should get softer. That's the big thing I can't seem to figure out.
I don't require any code; mostly I just want the equations, assuming there are any.
What you are talking about is important, much like the Doppler effect. In general, you need to do more than just recalculate an object's distance to the sound source when its position changes. It is much better to take the following into account:
the movement of the sound source
the movement of the active object
potential obstacles (for instance a wall)
"approaching" and "departing" as special cases of the Doppler effect
the change in distance over a short time period
It should not be a goal to make this perfectly accurate, because in that case you would have to calculate far too many things. Your aim should be to make this "good enough", and the definition of "good enough" should be set by you through testing. Naturally, you need a lot of formulas.
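As a rough illustration only: the two formulas most implementations start with are inverse-distance attenuation and the classic Doppler shift. Every name in the Java sketch below is made up for the example, and a real engine would layer obstacle occlusion and smoothing on top of this.

```java
/** Toy point-source audio cues; illustrative only, not taken from any real audio API. */
final class AudioCues {
    static final double SPEED_OF_SOUND = 343.0; // metres per second in air

    /** Inverse-distance attenuation, clamped so the gain never exceeds 1. */
    static double distanceGain(double distance, double referenceDistance) {
        return Math.min(1.0, referenceDistance / Math.max(distance, referenceDistance));
    }

    /**
     * Classic Doppler factor. Both arguments are velocity components along the
     * line between listener and source (positive = approaching). Multiply the
     * emitted frequency (or playback rate) by the returned factor.
     */
    static double dopplerFactor(double listenerSpeedTowardSource, double sourceSpeedTowardListener) {
        return (SPEED_OF_SOUND + listenerSpeedTowardSource)
             / (SPEED_OF_SOUND - sourceSpeedTowardListener);
    }

    public static void main(String[] args) {
        // A car approaching the listener at 30 m/s from 50 m away, with a 10 m reference distance.
        System.out.println("gain  = " + distanceGain(50.0, 10.0));  // ~0.2
        System.out.println("pitch = " + dopplerFactor(0.0, 30.0));  // > 1, pitched up while approaching
    }
}
```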
I'm really experiencing pain in my wrists, and I am looking for a new keyboard to minimize this problem. I tried to search for reviews of both the Kinesis Advantage keyboard and the ergoMagic Vertical keyboard, but I couldn't find any actual comparisons between the two.
The Kinesis Advantage has a strongly contoured 3D design, but I still lack information about how it compares to a truly vertical keyboard. As I see it now, the Kinesis Advantage looks really comfortable, but the hand position it puts you in is still quite horizontal, which seems like it would be uncomfortable for me. On the other hand, vertical keyboards like the ergoMagic Comfort don't really look that nice either.
Anyone have experience with both?
I have a Kinesis Freestyle with the Ascent accessory. I use it completely vertical, at 90 degrees to the table; as far as I am aware this arrangement is unique. I had a hurting wrist. Now I don't. That says it all. It's not empty propaganda: stand up and just relax, then check the position of your hands. See, that's their natural position. A split keyboard (like the Freestyle) already helps keep your hands closer to their natural position, but a vertical keyboard keeps them where they should be. Superb. Because the Ascent is adjustable, you can get used to it in about 7-10 days by going up one step a day.
As I have traveled to various Drupal events with this monstrosity, other Drupalers have bought the keyboard as well, and as far as I am aware, they are also happy.
The problem you're experiencing is a pinching of the radial or ulnar nerves, which travel from the hand through the grooves in your elbow, up your arm, over the shoulder, and up your neck.
The pinch can occur in your wrist, elbow, shoulder, or neck.
If the pinch is occurring in the wrist or elbow, a compression wrap/guard on the wrist will most likely solve the problem. This is a wrap for the wrist made out of a taut, stretchy material; you should feel moderate pressure when you put it on, but loosen it if it restricts blood flow. Get something that has support on the pad of the wrist for use while typing. Mine has a small bean bag; I find the bean bag more comfortable than the gel types. This will correct posture on any keyboard, and you can take it with you anywhere. You can also wear it at any time, even when you're not at a keyboard, and you'll get increased healing benefits.
Do not use a rigid wrap while typing (this kind has a metal bar that forces your wrist into a natural position). You can buy a rigid wrap for compression while sleeping, but be careful to not hit your partner while asleep.
Second thing to try is pairing the wrist wrap with an elbow wrap. Same thing, compression type.
If having both still doesn't solve the problem, try looking for a knot along the inner arm muscle. These knots can pinch the ulnar nerve. If you find any, massage them out. The process will hurt, and there will be a momentary increase in pain after the nerve pinch subsides.
If there are no knots in your arm, the problem is in your shoulder or neck. First try adjusting your seating position. Ensure that your knees and elbows are at 90-degree angles, your feet are flat, and, most importantly, your shoulders are in a natural position, not raised as in an "I don't know" shrug. Keep your back straight, with good lower back (lumbar) support. Your spine is S-shaped (front to back), and you need to support this with your posture. The viewing angle of the monitor should be level with your head or below, not above.
If perfecting your posture doesn't work, you'll need to visit a doctor, because the pinch is in your shoulder or neck. You'll need help working out a shoulder pinch, and if it's a neck pinch you may need surgery. Consider this a last resort: as early as five years out, surgery patients usually see no difference compared with people who avoided surgery. So these painful episodes in your life are either temporary, or they recur even after surgery.
A recap:
If the pinch is in the wrist or elbow, $10 can solve your problem, and it's $10 you can always use; no need to buy multiple keyboards and waste $100s. If this doesn't work, look for a knot in the arm and massage it out.
If the pinch is in your shoulder or neck, ensure you have good seating posture, then visit a doctor. Try both a chiropractor and a nerve specialist; they take different approaches, but always treat surgery as the worst possible option.
Experience:
I'm not a doctor, but I've had multiple problems with my ulnar nerve, and I've talked with doctors quite a bit. I also have a torn disc, so I've worked out problems with my leg as well. I'm experienced in using multiple remedial therapies, and I have a good feel for what works and what's just blood-sucking vampires peddling bogus theories. The threat to your health is real: the outside of my left hand is, from what I can tell, permanently numb.
Update: After half a year of practicing all the above, the feeling in my hand returned.
I have three of the Kinesis Advantage keyboards. I've had severe tendonitis in my wrists in the past, and these keyboards saved my career. The comfort level and, even more importantly, the ergonomics are unsurpassed. The keys are laid out in straight columns rather than staggered, and the typing advantage and comfort of this seemingly small modification really makes itself felt: I am now able to type for hours on end without any problems, which I could not do on regular keyboards. The keys are low-force mechanical switches that don't bottom out, so you won't experience any of the jarring you get with membrane-type keyboards. The customer support for these keyboards is excellent. Don't be put off by the cost; it is nothing compared to saving your career. RSI-related trauma is very real and dangerous. Please don't neglect it, as it only gets worse over time if ignored.
On another important note, I suggest good physical therapy and posture (bad posture is responsible for many computer-related wrist issues). I had therapy from Suparna Damany in Allentown, PA (a world-class therapist and author on computer-related trauma). These measures, combined with a great keyboard, will heal your wrist.
All the best,
I haven't used the ergoMagic, but I've been a happy Kinesis Advantage user for more than 7 months now. Though I've thankfully not been afflicted with RSI/CTS or related problems, I'm noticing a substantial increase in typing comfort, especially after many-hour programming sessions. The gently curved profile of the Advantage and the proper spacing of its keyboard "wells" allow the hands to sit in a more relaxed, natural position. Unlike on traditional keyboards with a single block of keys, the wrists on an Advantage are kept mostly straight, keeping the ulnar nerve free from long-term pressure.

One additional modification I've been using (on all my keyboards, including the laptop) was to remap CapsLock to Ctrl. I'm a heavy Emacs user, and this step was a natural choice.

The Advantage is rather expensive (though still much cheaper than, say, a Maltron), but I would say it's worth every dollar and more. I bought mine on Massdrop for a little over $200, which was a real bargain, and I've been very happy with my typing ever since. Plus, you get the additional perk that people passing by your desk will go "What the...?" :). Anyway, this is the best keyboard I've used so far, so I can honestly recommend it.
"Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast.
Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.)
What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60 fps, it breaks spectacularly. Why should the rendering step influence the physics calculations in any way? (Most games nowadays would either slow the game down or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine?
Thank you, and sorry if the question was confusing.
There is no reason why physics should depend on the framerate; this is simply bad design.
I once tried to understand why people do this. I was doing a code review for a game written by another team in the company. I didn't see it at first, but they used the hardcoded value 17 all over their code. When I ran the game in debug mode with the FPS displayed, I saw it: the FPS was exactly 17! I looked over the code again and then it was clear: the programmers had assumed the game would always run at a constant 17 FPS. If the FPS was greater than 17, they slept to bring it back down to exactly 17. Of course, they did nothing if the FPS was lower than 17, and then the game just went crazy (for example, when it ran at 2 FPS and I was driving a car in the game, the game kept warning me: "Too Fast! Too Fast!").
So I wrote an email asking why they had hardcoded this value and used it in their physics engine, and they replied that this way they kept the engine simpler. I replied again: OK, but if we run the game on a device that is incapable of 17 FPS, your game engine behaves very strangely and not as expected. They said they would fix the issue by the next code review.
After 3 or 4 weeks I got a new version of the source code. I was really curious to find out what they had done with the FPS constant, so the first thing I did was search the code for 17. There were only a couple of matches, but one of them was not something I wanted to see:
final static int FPS = 17;
So they had removed all the hardcoded 17s from the code and used the FPS constant instead. Their motivation: now, if they need to put the game on a device that can only do 10 FPS, all they have to do is set that FPS constant to 10 and the game will run smoothly.
In conclusion, sorry for writing such a long message, but I wanted to emphasize that the only reason anyone would do such a thing is bad design.
Here's a good explanation on why your timestep should be kept constant: http://gafferongames.com/game-physics/fix-your-timestep/
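For reference, a minimal sketch of the fixed-timestep loop that article advocates, assuming a 60 Hz physics rate; update() and render() are placeholders for your own engine calls:

```java
/** Bare-bones fixed-timestep loop in the spirit of "Fix Your Timestep". */
final class FixedStepLoop {
    static final double DT = 1.0 / 60.0; // physics always steps at 60 Hz, whatever the render rate

    public static void main(String[] args) {
        double accumulator = 0.0;
        long previous = System.nanoTime();
        while (true) {
            long now = System.nanoTime();
            accumulator += (now - previous) / 1_000_000_000.0;
            previous = now;

            // Run as many fixed physics steps as the elapsed real time demands...
            while (accumulator >= DT) {
                update(DT);
                accumulator -= DT;
            }
            // ...then render once, however fast or slow the machine happens to be.
            render(accumulator / DT); // the leftover fraction can be used to interpolate
        }
    }

    static void update(double dt) { /* advance physics by exactly dt */ }
    static void render(double alpha) { /* draw, optionally blending states by alpha */ }
}
```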
Additionally, depending on the physics engine, the simulation may become unstable when the timestep changes. This is because some of the data cached between frames is timestep-dependent. For example, the starting guess for an iterative solver (which is how constraints are solved) may be far off from the answer. I know this is true for Havok (the physics engine used by many commercial games), but I'm not sure which engine SMB uses.
There was also an article in Game Developer Magazine a few months ago illustrating how a jump with the same initial velocity reached different maximum heights at different frame rates. There was a supporting anecdote from a game (Tony Hawk?) where a certain jump could be made in the NTSC version of the game but not the PAL version (since the framerates are different). Sorry I can't find the issue at the moment, but I can try to dig it up later if you want.
They probably needed to get the game done quickly and decided that the current implementation would cover a sufficient user base.
Now, it's not really that hard to retrofit framerate independence if you think about it during development, but I suppose they could have gone down some steep holes.
I think it's unnecessary, and I've seen it before (some early 3D-hardware game did the same thing: it ran faster if you looked at the sky and slower if you looked at the ground).
It just sucks. Bug the developers about it and hope that they patch it, if they can.
In a game such as Warcraft 3 or Age of Empires, the ways that an AI opponent can move about the map seem almost limitless. The maps are huge and the position of other players is constantly changing.
How does the AI path-finding in games like these work? Standard graph-search methods (such as DFS, BFS or A*) seem impossible in such a setup.
Take the following with a grain of salt, since I don't have first-hand experience with pathfinding.
That being said, there are likely to be different approaches, but I think standard graph-search methods, notably (variants of) A*, are perfectly reasonable for strategy games. Most strategy games I know seem to be based on a tile system, where the map is composed of little squares, which map easily onto a graph. One example would be StarCraft II (Screenshot), which I'll keep using as an example in the remainder of this answer, because I'm most familiar with it.
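As a point of reference, here is a bare-bones sketch of plain A* on such a tile grid (4-connected movement, Manhattan heuristic); the walkable array and the {row, col} coordinate convention are assumptions made for the example:

```java
import java.util.*;

/** Sketch of plain A* on a 4-connected square tile grid; walkable[][] stands in for the game map. */
final class GridAStar {
    static List<int[]> findPath(boolean[][] walkable, int[] start, int[] goal) {
        int rows = walkable.length, cols = walkable[0].length;
        double[][] g = new double[rows][cols];
        for (double[] row : g) Arrays.fill(row, Double.POSITIVE_INFINITY);
        int[][][] cameFrom = new int[rows][cols][];
        // Frontier entries are {f, row, col}; Manhattan distance is an admissible heuristic here.
        PriorityQueue<double[]> open = new PriorityQueue<>(Comparator.<double[]>comparingDouble(e -> e[0]));
        g[start[0]][start[1]] = 0;
        open.add(new double[]{heuristic(start, goal), start[0], start[1]});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            double[] e = open.poll();
            int r = (int) e[1], c = (int) e[2];
            if (r == goal[0] && c == goal[1]) return reconstruct(cameFrom, new int[]{r, c});
            for (int[] m : moves) {
                int nr = r + m[0], nc = c + m[1];
                if (nr < 0 || nc < 0 || nr >= rows || nc >= cols || !walkable[nr][nc]) continue;
                double tentative = g[r][c] + 1; // uniform step cost of 1 per tile
                if (tentative < g[nr][nc]) {    // found a cheaper way to reach this tile
                    g[nr][nc] = tentative;
                    cameFrom[nr][nc] = new int[]{r, c};
                    open.add(new double[]{tentative + heuristic(new int[]{nr, nc}, goal), nr, nc});
                }
            }
        }
        return Collections.emptyList(); // goal unreachable
    }

    static double heuristic(int[] a, int[] b) { return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]); }

    static List<int[]> reconstruct(int[][][] cameFrom, int[] node) {
        LinkedList<int[]> path = new LinkedList<>();
        for (int[] n = node; n != null; n = cameFrom[n[0]][n[1]]) path.addFirst(n);
        return path;
    }
}
```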
While A* can be used for real-time strategy games, there are a few drawbacks that have to be overcome by tweaks to the core algorithm:
A* is too slow
Since an RTS is by definition "real time", waiting for the computation to finish will frustrate the player, because the units will lag. This can be remedied in several ways. One is to use multi-tiered A*, which computes a rough course before taking smaller obstacles into account. Another obvious optimization is to group units heading to the same destination into a platoon and only calculate one path for all of them.
Instead of the naive approach of making every single tile a node in the graph, one could also build a navigation mesh, which has fewer nodes and could be searched faster – this requires tweaking the search algorithm a little, but it would still be A* at the core.
A* is static
A* works on a static graph, so what do you do when the landscape changes? I don't know how this is done in actual games, but I imagine the pathing is redone repeatedly to cope with new or removed obstacles. Maybe they are using an incremental version of A* (PDF).
To see a demonstration of StarCraft II coping with this, go to 7:50 in this video.
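The most naive way to cope, sketched below, is simply to re-run the search whenever the map changes (or every few ticks). Repather and its mapVersion counter are invented names, and the GridAStar call refers to the earlier sketch:

```java
import java.util.Collections;
import java.util.List;

/** Naive re-planning wrapper: recompute the path whenever the terrain has been edited. */
final class Repather {
    private List<int[]> currentPath = Collections.emptyList();
    private int plannedForVersion = -1; // bump mapVersion whenever a tile's walkability changes

    List<int[]> pathFor(boolean[][] walkable, int mapVersion, int[] from, int[] to) {
        if (mapVersion != plannedForVersion || currentPath.isEmpty()) {
            currentPath = GridAStar.findPath(walkable, from, to); // the A* sketch from earlier
            plannedForVersion = mapVersion;
        }
        return currentPath;
    }
}
```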
A* has perfect information
A part of many RTS games is unexplored terrain. Since you can't see the terrain, your units shouldn't know where to walk either, but often they do anyway. One approach is to penalize walking through unexplored terrain, so units are more reluctant to take advantage of their omniscience, another is to take the omniscience away and just assume unexplored terrain is walkable. This can result in the units stumbling into dead ends, sometimes ones that are obvious to the player, until they finally explore a path to the target.
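The penalty idea can be as simple as a per-tile cost multiplier; in the A* sketch above it would replace the uniform step cost of 1. The weight below is just a tuning knob, not a value from any actual game:

```java
/** Illustrative step cost for the "penalize unexplored terrain" idea. */
final class TileCosts {
    static double stepCost(boolean explored) {
        double base = 1.0;
        double unexploredPenalty = 3.0; // > 1: units prefer known ground, but the unknown is not forbidden
        return explored ? base : base * unexploredPenalty;
    }
}
```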
Fog of War is another aspect of this. For example, in StarCraft 2 there are destructible obstacles on the map. It has been shown that you can order a unit to move to the enemy base, and it will start down a different path if the obstacle has already been destroyed by your opponent, thus giving you information you should not actually have.
To summarize: You can use standard algorithms, but you may have to use them cleverly. And as a last bonus: I have found Amit’s Game Programming Information interesting with regard to pathing. It also has links to further discussion of the problem.
This is a bit of a simple example, but it shows that you can create the illusion of AI / in-depth pathfinding from a non-complex set of rules: Pac-Man Pathfinding
Essentially, it is possible for the AI to know only local (nearby) information and make decisions based on that knowledge.
A* is a common pathfinding algorithm. This is a popular game development topic - you should be able to find numerous books and websites that contain information.
Check out visibility graphs. I believe that is what they use for path finding.
Despite all the advances in 3D graphic engines, it strikes me as odd that the same level of attention hasn't been given to audio. Modern games do real-time rendering of 3D scenes, yet we still get more-or-less pre-canned audio accompanying those scenes.
Imagine - if you will - a 3D engine that models not just the physical appearance of items, but also their audio properties. And from these models it can dynamically generate audio based on the materials that come into contact, their velocity, distance from your virtual ears, etcetera. Now, when you're crouching behind the sandbags with bullets flying over your head, each one will yield a unique and realistic sound.
The obvious application of such a technology would be gaming, but I'm sure there are many other possibilities.
Is such a technology being actively developed? Does anyone know of any projects that attempt to achieve this?
Thanks,
Kent
I once did some research toward improving OpenAL, and the problem with simulating 3D audio is that so many of the cues that your mind uses — the slightly different attenuation at various angles, the frequency difference between sounds in front of you and those behind you — are quite specific to your own head and are not quite the same for anyone else!
If you want, say, a pair of headphones to really make it sound like a creature is in the leaves ahead and in front of the character in a game, then you actually have to take that player into a studio, measure how their own particular ears and head change the amplitude and phase of the sound at different distances (amplitude and phase are different, and are both quite important to the way your brain processes sound direction), and then teach the game to attenuate and phase-shift the sounds for that particular player.
There do exist "standard heads" that have been mocked up with plastic and used to get generic frequency-response curves for the various directions around the head, but an average or standard will never sound quite right to most players.
Thus the current technology is basically to sell the player five cheap speakers, have them place them around their desk, and then the sounds — while not particularly well reproduced — actually do sound like they're coming from behind or beside the player because, well, they are coming from the speaker behind the player. :-)
But some games do bother to be careful to compute how sound would be muffled and attenuated through walls and doors (which can get difficult to simulate, because the ear receives the same sound at a few milliseconds different delay through various materials and reflective surfaces in the environment, all of which would have to be included if things were to sound realistic). They tend to keep their libraries under wraps, however, so public reference implementations like OpenAL tend to be pretty primitive.
Edit: here is a link to an online data set that I found at the time, that could be used as a starting point for creating a more realistic OpenAL sound field, from MIT:
http://sound.media.mit.edu/resources/KEMAR.html
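As a rough illustration of how such measured impulse responses could be used, here is a naive convolution sketch; the array names are invented, and a real-time engine would use FFT-based or partitioned convolution instead of this direct form:

```java
/** Apply a head-related impulse response pair (e.g. from the KEMAR data) to a mono signal. */
final class HrirSketch {
    /** Returns {leftChannel, rightChannel}; hrirLeft/hrirRight should be the measured
     *  responses for the direction closest to the source. */
    static float[][] spatialize(float[] mono, float[] hrirLeft, float[] hrirRight) {
        return new float[][]{ convolve(mono, hrirLeft), convolve(mono, hrirRight) };
    }

    // Direct-form convolution: fine for illustration, far too slow for real-time use.
    static float[] convolve(float[] signal, float[] impulse) {
        float[] out = new float[signal.length + impulse.length - 1];
        for (int i = 0; i < signal.length; i++)
            for (int j = 0; j < impulse.length; j++)
                out[i + j] += signal[i] * impulse[j];
        return out;
    }
}
```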
Enjoy! :-)
Aureal did this back in 1998. I still have one of their cards, although I'd need Windows 98 to run it.
Imagine ray-tracing, but with audio. A game using the Aureal API would provide geometric environment information (e.g. a 3D map) and the audio card would ray-trace sound. It was exactly like hearing real things in the world around you. You could focus your eyes on the sound sources and attend to given sources in a noisy environment.
As I understand it, Creative destroyed Aureal by means of legal expenses in a series of patent infringement claims (which were all rejected).
In the public domain world, OpenAL exists - an audio version of OpenGL. I think development stopped a long time ago. They had a very simple 3D audio approach, no geometry - no better than EAX in software.
EAX 4.0 (and I think there is a later version?) has finally - after a decade - incorporated, I think, some of the geometric ray-tracing approach that Aureal used (Creative bought up their IP after they folded).
The Source (Half-Life 2) engine on the SoundBlaster X-Fi already does this.
It really is something to hear. You can definitely hear the difference between an echo against concrete vs wood vs glass, etc...
A little-known side area is VoIP. While games themselves have actively developed audio, you are likely to spend time talking to others while you are gaming as well.
Mumble ( http://mumble.sourceforge.net/ ) is software that uses plugins to determine who is in-game with you. It then positions their audio in a 360-degree space around you, so a player on your left sounds like they are on your left, and someone behind you sounds like they are behind you. This makes for a creepily realistic addition, and while trying it out it led to funny games of "Marco Polo".
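For a flavor of how little it takes to get a basic version of this effect, here is a toy constant-power panning sketch; it only handles left/right placement, whereas Mumble's full positional audio also distinguishes front from back. All names are made up for the example:

```java
import java.util.Arrays;

/** Toy constant-power stereo panning for a positional voice source. */
final class PositionalVoice {
    /** Returns {leftGain, rightGain} from the horizontal angle to the speaker
     *  (0 = straight ahead, +90 degrees = hard right). */
    static double[] panGains(double angleDegrees) {
        double pan = Math.sin(Math.toRadians(angleDegrees));      // -1 .. +1
        double theta = (pan + 1.0) * Math.PI / 4.0;               // 0 .. pi/2
        return new double[]{ Math.cos(theta), Math.sin(theta) };  // equal power when centered
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(panGains(0)));   // roughly {0.707, 0.707}
        System.out.println(Arrays.toString(panGains(90)));  // roughly {0.0, 1.0}
    }
}
```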
Audio took a massive step backwards in Vista, where hardware was no longer allowed to accelerate it. This killed EAX as it existed in the XP days. Software wrappers are gradually being built now.
Very interesting field indeed. So interesting that I'm going to do my master's thesis on this subject, in particular its use in first-person shooters.
My literature research so far has made it clear that this particular field has little theoretical background. Not a lot of research has been done in this field, and most theory is based on film audio theory.
As for practical applications, I haven't found any so far. Of course, there are plenty of titles and packages which support real-time audio-effect processing and apply it depending on the general surroundings of the listener, e.g. the listener enters a hall, so an echo/reverb effect is applied to the sound samples. This is rather crude. An analogy for visuals would be to subtract 20% of the RGB value of the entire image when someone turns off (or shoots ;) ) one of five light bulbs in the room. It's a start, but not very realistic at all.
The best work I found was a 2007 PhD thesis by Mark Nicholas Grimshaw, University of Waikato, called The Acoustic Ecology of the First-Person Shooter.
This huge paper proposes a theoretical setup for such an engine, as well as formulating a wealth of taxonomies and terms for analysing game audio. He also argues that the importance of audio for first-person shooters is greatly overlooked, as audio is a powerful force for immersion in the game world.
Just think about it. Imagine playing a game on a monitor with no sound but picture-perfect graphics. Next, imagine hearing realistic game sounds all around you while closing your eyes. The latter will give you a much greater sense of 'being there'.
So why haven't game developers dived into this wholeheartedly already? I think the answer is clear: it's much harder to sell. Improved visuals are easy to sell: you just show a picture or movie and it's easy to see how much prettier it is. It's even easily quantifiable (e.g. more pixels = better picture). For sound it's not so easy. Realism in sound is much more subconscious, and therefore harder to market.
The effects the real world has on sounds are perceived subconsciously. Most people never even notice most of them; some of these effects cannot even be heard consciously. Still, they all play a part in the perceived realism of the sound. There is an easy experiment you can do yourself which illustrates this. Next time you're walking on the sidewalk, listen carefully to the background sounds of the environment: wind blowing through leaves, all the cars on distant roads, and so on. Then listen to how this sound changes when you walk nearer to or further from a wall, or when you walk under an overhanging balcony, or when you pass an open door. Do it, listen carefully, and you'll notice a big difference in sound, probably much bigger than you ever remembered.
In a game world, these kinds of changes aren't reflected. And even though you don't (yet) consciously miss them, you do subconsciously, and this has a negative effect on your level of immersion.
So, how good does audio have to be in comparison to the image? More practically: which physical effects in the real world contribute the most to perceived realism? Does this perceived realism depend on the sound and/or the situation? These are the questions I wish to answer with my research. After that, my idea is to design a practical framework for an audio engine which could variably apply some effects to some or all game audio, depending (dynamically) on the amount of available computing power. Yup, I'm setting the bar pretty high :)
I'll be starting per September 2009. If anyone's interested, I'm thinking about setting up a blog to share my progress and findings.
Janne Louw
(BSc Computer Sciences Universiteit Leiden, The Netherlands)