What sort of algorithms do I use for simulating sound? Like, if the player approaches the source of a sound, it should get louder, but if the player goes farther away, it should get softer. That's the big thing I can't seem to figure out.
I don't require any code, mostly I just want the equations, assuming there is one.
What you are describing is distance attenuation, and it is related to effects like the Doppler shift. In general, you need to do more than just recalculate the distance between the listener and the sound source whenever a position changes. It is much better to take the following into account:
the movement of the sound source
the movement of the active object
potential obstacles (for instance a wall)
"approaching" and "departing" as special cases of the Doppler effect
the change in distance over short time periods
Do not aim for perfect physical accuracy; that would require far too much computation. Your aim should be to make this "good enough", and the definition of "good enough" should come from your own testing. Naturally, this still involves a number of formulas.
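For the louder/softer part specifically, a common starting point is a clamped inverse-distance model (the same family of curves that audio APIs such as OpenAL expose). A minimal sketch, with illustrative parameter names of my own choosing:

```python
# Sketch of inverse-distance attenuation with clamping. The parameter
# names are illustrative, not taken from any particular engine.

def attenuated_gain(distance, ref_distance=1.0, max_distance=100.0, rolloff=1.0):
    """Gain in [0, 1]: 1.0 at ref_distance or closer, falling off as the
    listener moves away, and held constant beyond max_distance."""
    d = max(ref_distance, min(distance, max_distance))
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

# Closer than the reference distance: full volume.
print(attenuated_gain(0.5))    # 1.0
# Ten units away: noticeably quieter.
print(attenuated_gain(10.0))   # 0.1
```

Multiply the source's base volume by this gain each frame; the `rolloff` factor controls how aggressively the sound fades with distance.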
I am tasked with something seemingly trivial: finding out
how "noisy" a given recording is.
The recording was made with a voice recorder, an
OLYMPUS VN-733 PC, which was fairly cheap (I am not
advertising it; I mention it only because I am in no way
aiming to do anything "professional" here, I simply need to
solve a seemingly simple problem).
To preface this, I have already made several recordings
at different outdoor locations, in particular parks and
spots near roads. The idea is to capture the noise that
exists at each specific location and then compare that
noise, on average, across the locations.
In other words:
I must find out how noisy location A is compared to
locations B and C.
I made 1-minute recordings at each location so that at
the least the time span of the recordings can be compared
across locations (and I used the very same voice
recorder at all positions, at the same height, etc.).
A sample file can be found at:
http://shevegen.square7.ch/test.mp3
(This may eventually be moved later on; it just serves as
an example of how these recordings sound right now. I am
unhappy about the initial noisy clipping sound; ideally
I would capture only the background noise of the cars etc.,
but for now this must suffice.)
Now my specific question is: how can I find out how "noisy"
or "loud" this recording is?
The primary goal is to compare it to the other .mp3
files, which would suffice for my purpose just fine.
But ideally it would be nice to calculate, on average,
how "loud" each individual .mp3 is and then compare
it to the others (there are several recordings
per geolocation, so I could even merge them
together).
There are some similar questions, but I was unable to find
one that answers this in an objective manner, or perhaps
I did not understand the problem at hand. I already have
all the audio datasets, but I have no idea how to find out
how "loud" any one of them is individually. There are some
smartphone apps that claim to do this automatically, but
since I do not have a smartphone, that is a dead end for me.
Any general advice will be much appreciated.
"Noise" is a difficult notion to define, so I will focus on loudness instead.
You could compute the energy of each file. For that, you need access to the samples of the audio signal (generally via a built-in function of your programming language). Then you can compute the RMS (root mean square) energy of the signal.
That would be the most basic processing.
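As a minimal sketch of that idea: once you have decoded the .mp3 to raw samples (e.g. with ffmpeg or an audio library; the decoding step is assumed here), the RMS computation itself is only a few lines. The function names are mine, not from any particular library:

```python
import math

def rms(samples):
    """Root-mean-square level of a sequence of samples scaled to [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_db(samples, eps=1e-12):
    """The same level expressed in dB relative to full scale (0 dB = max)."""
    return 20 * math.log10(max(rms(samples), eps))

# Sanity check on a synthetic signal: the RMS of a full-scale sine
# is 1/sqrt(2), i.e. about 0.707.
N = 1000
sine = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
print(round(rms(sine), 3))   # 0.707
```

Comparing the resulting values (or their dB equivalents) across locations is fair as long as the recorder and its gain settings stay the same, which is exactly the setup you describe.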
I'm struggling to implement a good chaser. I have a hockey player who needs to chase a puck, and I can predict both the next player and puck positions.
I first tried steering behaviors, but failed to find a good predictor for situations where the puck is close. Imagine, for example, that the puck heads almost towards the player at high speed. The player makes only small turns while the puck is some distance away, but when the puck comes closer and just barely misses him, the player needs to turn through much larger angles over the last two or three ticks to keep facing the puck. With a limit on the turning angle, the puck escapes and the player can't do anything. If he had started turning earlier, it would be fine, but when I predict more steps ahead, the player tends to start turning towards a puck position far behind him.
Then I tried A* search. It works great while the puck is ahead and the puck's speed is lower than the player's. But when the puck is faster, it becomes an escaping target: every time A* expands a new state, it looks back and finds that in previous states the puck was closer to the player (the puck is escaping!), so it prefers those earlier states and degenerates into BFS.
So I guess there's a well-known solution to this, but I have failed to find anything about it by searching, so maybe the community can help me. Thanks in advance!
UPDATE: so basically I reinvented the wheel, I guess. What I'm doing now is iterating through the predicted puck positions; when I hit the first position that the player can reach in the same number of ticks, I declare victory. This is VERY resource-expensive, but I couldn't come up with anything better.
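For what it's worth, that "reinvented wheel" is a standard interception approach: scan the predicted puck trajectory and take the first position the player can reach no later than the puck does. A simplified sketch, assuming straight-line travel at constant speed (all names are illustrative, and a real version would fold in the turning limit):

```python
import math

def first_interception_tick(player_pos, player_speed, puck_positions):
    """Return the index of the first predicted puck position the player
    can reach at least as fast as the puck gets there, or None.
    puck_positions[t] is the puck's predicted location at tick t; player
    movement is simplified to straight-line travel at constant speed."""
    px, py = player_pos
    for t, (x, y) in enumerate(puck_positions):
        ticks_needed = math.hypot(x - px, y - py) / player_speed
        if ticks_needed <= t:
            return t
    return None

# Puck moving right at 2 units/tick; player at (10, 0) with speed 3
# can first meet it at tick 2, when the puck is at (4, 0).
puck = [(2.0 * t, 0.0) for t in range(20)]
print(first_interception_tick((10.0, 0.0), 3.0, puck))   # 2
```

This is linear in the number of predicted ticks per query rather than a full graph search, which may already be cheaper than the A* variant you describe.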
The part about the steering behavior is pretty hard to understand at the moment, but regarding the A* problem: I think the issue is that your agent (the player) is operating in a dynamic environment, so you have to recompute the heuristic at every expansion step. The h values for the states in the frontier become obsolete as the puck moves. Am I close to understanding your problem?
Out of curiosity, what kind of heuristic are you using?
I want to make something remotely similar to DinahMoe's "plink". In plink you click your mouse to play notes whose pitch is proportional to your mouse height. I can see that the height is divided into multiple "stripes", so instead of a "sliding" sound when you move the mouse you get a scale; what I can't figure out is why it always sounds good.
No matter how hard you try, you can't manage to make it sound bad. I don't have a lot of musical knowledge, so could someone explain how this works and how you would go about implementing it?
It seems that it only uses notes on a pentatonic scale similar to playing up and down the black keys of a piano. That's something I often used to do when I was a kid, because it does usually sound good!
As to why it sounds good, there's no definitive answer (and of course to some people it may not sound good!) but music that is harmonically pleasing to most people will tend to have lots of occurrences of simple frequency ratios between notes that make up the piece, especially when those notes are playing at the same time. This happens to occur a lot when you choose even fairly random selections of notes from this particular pentatonic scale. (For related reasons, you could see this scale as made up of important notes in the minor scale - a bit like a blues scale in some ways).
Unfortunately there may not be much more mileage in that specific idea, because there is a limited number of simple ratios you can use; anything else made with the same pentatonic scale could end up sounding similar to 'plink'. However, if you take the general idea of providing a set of musical options, all of which sound OK, and then let the user essentially just select among them, there are lots of routes you could go down. For example, you could have a similar 'game' where one 'player' selects the root note of a chord from the major scale, and another picks which note of the chord to play in the melody.
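To make the "stripes" idea concrete, here is a sketch of quantizing mouse height to a pentatonic note. The exact scale and base pitch plink uses are guesses on my part; the semitone offsets below are the standard minor pentatonic pattern:

```python
# Semitone offsets of the minor pentatonic scale within one octave.
# Whether plink uses exactly this scale is an assumption.
MINOR_PENTATONIC = [0, 3, 5, 7, 10]

def note_frequency(mouse_y, screen_height, base_freq=220.0, octaves=2):
    """Map a y coordinate (0 = top of screen) to a pentatonic-scale
    frequency: the bottom stripe plays base_freq, the top stripe the
    highest note of the range."""
    steps = len(MINOR_PENTATONIC) * octaves
    stripe = min(int((1 - mouse_y / screen_height) * steps), steps - 1)
    octave, degree = divmod(stripe, len(MINOR_PENTATONIC))
    semitones = 12 * octave + MINOR_PENTATONIC[degree]
    return base_freq * 2 ** (semitones / 12)

# Bottom of a 600-pixel screen: the base note.
print(round(note_frequency(599, 600), 1))   # 220.0
```

Because every stripe lands on a scale degree, any sequence of clicks stays inside the scale, which is what makes it hard to produce a "wrong" note.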
I recently saw something that set me wondering how to create a realistic-looking (2D) lava lamp-like animation, for a screen-saver or game.
It would of course be possible to model the lava lamp's physics using partial differential equations, and to translate that into code. However, this is likely to be both quite difficult (because of several factors, not least of which is the inherent irregularity of the geometry of the "blobs" of wax and the high number of variables) and probably computationally far too expensive to calculate in real time.
Analytical solutions, if any could be found, would be similarly useless because you would want to have some degree of randomness (or stochasticity) in the animation.
So, the question is, can anyone think of an approach that would allow you to animate a realistic looking lava lamp, in real time (at say 10-30 FPS), on a typical desktop/laptop computer, without modelling the physics in any great detail? In other words, is there a way to "cheat"?
One way to cheat might be to use a probabilistic cellular automaton with a well-chosen transition table to simulate the motion of the blobs. Some popular screensavers (in particular ParticleFire) use this approach to elegantly simulate complex motion in 2D space by breaking the objects down into individual pixels and then defining how each pixel transitions based on the states of its neighbors. You can get some quite complex emergent behavior from simple cellular automata - look at Conway's Game of Life, for example, or this simulation of a forest fire.
LavaLite is open source. You can get code with the xscreensaver-gl package in most Linux distros. It uses metaballs.
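To illustrate what metaballs do: each blob contributes a smooth falloff field, the fields are summed, and every pixel where the sum exceeds a threshold is drawn as "wax", which is why nearby blobs appear to merge. A minimal sketch of the field function (the `r²/d²` falloff is one common choice, not necessarily the one LavaLite uses):

```python
def metaball_field(x, y, balls):
    """Scalar field value at (x, y): sum of r^2 / d^2 contributions from
    each ball (cx, cy, r). Points where the field exceeds a threshold
    (say 1.0) are 'inside' the lava; two nearby blobs smoothly merge."""
    total = 0.0
    for cx, cy, r in balls:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += r * r / d2 if d2 > 0 else float("inf")
    return total

balls = [(0.0, 0.0, 1.0), (3.0, 0.0, 1.0)]
# On the first ball's own boundary its contribution alone is exactly 1,
# and the second ball pushes the total above the threshold.
print(metaball_field(1.0, 0.0, balls) > 1.0)   # True
```

Animating the ball centers with simple buoyancy-like vertical drift, rather than real fluid dynamics, is enough to get the characteristic lava-lamp look in real time.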
In a game such as Warcraft 3 or Age of Empires, the ways that an AI opponent can move about the map seem almost limitless. The maps are huge and the position of other players is constantly changing.
How does the AI path-finding in games like these work? Standard graph-search methods (such as DFS, BFS or A*) seem impossible in such a setup.
Take the following with a grain of salt, since I don't have first-person experience with pathfinding.
That being said, there are likely to be different approaches, but I think standard graph-search methods, notably (variants of) A*, are perfectly reasonable for strategy games. Most strategy games I know seem to be based on a tile system, where the map is made up of little squares, which map easily onto a graph. One example would be StarCraft II (Screenshot), which I'll keep using as an example in the remainder of this answer, because I'm most familiar with it.
While A* can be used for real-time strategy games, there are a few drawbacks that have to be overcome by tweaks to the core algorithm:
A* is too slow
Since an RTS is by definition "real time", waiting for the computation to finish will frustrate the player, because the units will lag. This can be remedied in several ways. One is to use multi-tiered A*, which computes a rough course before taking smaller obstacles into account. Another obvious optimization is to group units heading to the same destination into a platoon and calculate only one path for all of them.
Instead of the naive approach of making every single tile a node in the graph, one could also build a navigation mesh, which has fewer nodes and could be searched faster – this requires tweaking the search algorithm a little, but it would still be A* at the core.
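For reference, the core tile-grid A* that all of these tweaks build on fits in a few dozen lines. A minimal sketch (4-connected grid, Manhattan-distance heuristic):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected tile grid (0 = walkable, 1 = blocked),
    with Manhattan distance as the heuristic. Returns the path as a list
    of (row, col) tiles, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry for an already-improved node
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))   # routes around the wall of 1s
```

The tweaks above (tiers, platoons, nav meshes) change the graph this runs on or how often it runs, not the algorithm itself.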
A* is static
A* works on a static graph, so what to do when the landscape changes? I don't know how this is done in actual games, but I imagine the pathing is done repeatedly to cope with new obstacles or removed obstacles. Maybe they are using an incremental version of A* (PDF).
To see a demonstration of StarCraft II coping with this, go to 7:50 in this video.
A* has perfect information
A part of many RTS games is unexplored terrain. Since you can't see the terrain, your units shouldn't know where to walk either, but often they do anyway. One approach is to penalize walking through unexplored terrain, so units are more reluctant to take advantage of their omniscience, another is to take the omniscience away and just assume unexplored terrain is walkable. This can result in the units stumbling into dead ends, sometimes ones that are obvious to the player, until they finally explore a path to the target.
Fog of War is another aspect of this. For example, in StarCraft 2 there are destructible obstacles on the map. It has been shown that you can order a unit to move to the enemy base, and it will start down a different path if the obstacle has already been destroyed by your opponent, thus giving you information you should not actually have.
To summarize: You can use standard algorithms, but you may have to use them cleverly. And as a last bonus: I have found Amit’s Game Programming Information interesting with regard to pathing. It also has links to further discussion of the problem.
This is a bit of a simple example, but it shows that you can create the illusion of AI / in-depth pathfinding from a non-complex set of rules: Pac-Man Pathfinding
Essentially, it is possible for the AI to know only local (nearby) information and make decisions based on that knowledge.
A* is a common pathfinding algorithm. This is a popular game development topic - you should be able to find numerous books and websites that contain information.
Check out visibility graphs. I believe that is what they use for path finding.