I am developing a small-to-mid-sized RTS game with SpriteKit.
I wonder if my multithreaded approach is OK.
Generally speaking, I have user-controlled units and enemy units.
Both kinds of units have a basic AI; the enemies obviously have some more AI.
Let's say a unit is standing still and an enemy unit approaches to within its attack range: I want the unit to automagically attack the enemy.
And of course the same for the enemy.
I chose not to add the logic and distance measurements to the update method, which would be expensive.
Instead I decided to make two threads/queues/pools (whatever), each with its own update method: thread 1 for enemies and thread 2 for units.
Question: is this an OK/fine/bad/acceptable approach?
Do I get any benefit from it, or the contrary?
Just my point of view
You say
I chose not to add the logic and distance measurements to the update method, which would be expensive.
So I assume you write the expensive computations involving distance measurements inside a closure that you execute on a separate thread.
An example
Time t0
A unit is near the enemy. You start an asynchronous thread with complex code to determine whether the unit should attack the enemy.
Time t1
Then (on the main thread) the user moves the unit far away from the enemy.
Time t2
Finally the thread started at t0 completes. The decision is to attack the enemy, because at time t0 the unit was close to the enemy. But now the unit is no longer near the enemy, so you see the wrong behaviour on screen.
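To make that timeline concrete, here is a minimal GCD sketch of the race (the Actor type and the names aiQueue and evaluateAttack are purely illustrative; the re-check of the live distance at t2 is just one possible guard, not something your code necessarily has):

import Foundation
import CoreGraphics

// Illustrative types only; the real game classes would differ.
final class Actor {
    var position: CGPoint
    let attackRange: CGFloat
    init(position: CGPoint, attackRange: CGFloat) {
        self.position = position
        self.attackRange = attackRange
    }
    func attack(_ other: Actor) { /* play animation, deal damage, ... */ }
}

func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
    let dx = a.x - b.x, dy = a.y - b.y
    return (dx * dx + dy * dy).squareRoot()
}

let aiQueue = DispatchQueue(label: "ai.decisions")

func evaluateAttack(unit: Actor, enemy: Actor) {
    // t0: snapshot of the positions, taken on the main thread
    let unitPos = unit.position
    let enemyPos = enemy.position

    aiQueue.async {
        // The expensive distance / AI work happens off the main thread.
        let shouldAttack = distance(unitPos, enemyPos) <= unit.attackRange

        DispatchQueue.main.async {
            // t2: the user may have moved the unit at t1, so shouldAttack can be stale.
            // Re-checking the live distance here is one way to avoid acting on old data.
            if shouldAttack, distance(unit.position, enemy.position) <= unit.attackRange {
                unit.attack(enemy)
            }
        }
    }
}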
Wrap up
If this is acceptable behaviour for your game, then I don't see any further problems.
Another simple solution
If your logic about whether a unit/enemy should attack an enemy/unit is based exclusively on the distance between the two objects, you could use the physics engine provided by SpriteKit to check for collisions.
Perimeter
You can simply create a circular physics body without mass centered on each unit/enemy. Let's call this the Perimeter.
You also set the bit mask values so that you only receive a notification when a unit perimeter and an enemy perimeter collide, and no notification when two unit perimeters or two enemy perimeters collide.
Now the physics engine will notify you each time a unit is close enough to an enemy. No need for multithreading, and the code is very easy.
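For instance, a minimal Swift sketch of that setup (the mask values and the function name are made up, and it assumes the perimeter is the node's only physics body; otherwise attach it to an invisible child node):

import SpriteKit

// Illustrative category masks; the real project would define its own.
enum Perimeter {
    static let unit: UInt32  = 0x1 << 0
    static let enemy: UInt32 = 0x1 << 1
}

func addPerimeter(to node: SKNode, radius: CGFloat, isEnemy: Bool) {
    let body = SKPhysicsBody(circleOfRadius: radius)
    body.affectedByGravity = false
    body.collisionBitMask = 0                       // the perimeter never pushes anything around
    body.categoryBitMask = isEnemy ? Perimeter.enemy : Perimeter.unit
    // Only unit-vs-enemy contacts generate notifications, not unit-vs-unit or enemy-vs-enemy.
    body.contactTestBitMask = isEnemy ? Perimeter.unit : Perimeter.enemy
    node.physicsBody = body
}

// In the scene (with physicsWorld.contactDelegate = self):
// func didBegin(_ contact: SKPhysicsContact) {
//     // A unit and an enemy are now inside each other's perimeter: trigger the attack logic here.
// }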
I am currently in the process of implementing a strength powerup into my game. The powerup is within its own scene and will emit a signal into the enemy script if the player collides with it. Without the powerup, the player has to individually attack the enemy 4 times (i.e. one hit reduces one life, the enemy has 4 lives). I wish for the strength powerup to increase the damage from 1, to 2 - so the player only has to attack the enemy 2 times.
The below function handles the collision between the player's sword and the enemy's hurtbox.
func _on_Hurtbox_area_entered(area): # called when there is a collision between players sword and enemy
    if area.is_in_group("Sword"):
        damage(1)
        flash()
This function will then call the damage function.
func damage(damage_dealt):
    var damage_n = damage_dealt # stores the parameter as a new local variable
    lives -= damage_n # removes enemy life
    if lives < 1: # no lives
        dead()
The signal currently links to a function called powerup_damage.
Essentially, I am unsure how to call the damage function from the _on_Hurtbox_area_entered function with a damage_dealt value of 2, only when the powerup has been collected.
Thanks.
Answering the question as posted.
The powerup is within its own scene and will emit a signal into the enemy script if the player collides with it.
This is very odd to me. You are telling me that the power-up does not make the player stronger, but instead it makes the enemies weaker. OK.
You are going to need another signal for when the power-up ends. So you can have a variable that you set to true when the player gets the power-up, and to false when it ends. Then you can check whether the variable is true and use that to decide the value of the damage.
I'm giving this solution to answer the question as posted. Yet, I would like to encourage another approach.
Another approach
The other approach I want to encourage has this goal: Don't have the enemy calling damage on itself.
Instead, you would have the sword call damage on the enemy. The enemy would then not need any information about what power-ups the player has; instead, all the power-up code would be on the player (and sword) side. This also means that:
Every time you add a new enemy kind, you don't need to make sure it has the power-up logic.
Every time you add a new power-up, you don't need to make sure every enemy kind has the logic for it.
Currently the sword already has an area. For the change I'm suggesting, you could have that area detect when the hurtbox enters, instead of having the hurtbox detect when the sword area enters.
Presumably you would not need any power-up signal, since this approach does not require notifying the enemies that the player picked up a power-up.
By the way, the damage function does not need to take the actual damage; in fact, I would argue it should not.
Instead, have it take an attack value, or even more parameters, and let the code on the enemy compute how much damage that actually is. That allows you to make some enemies more resistant than others, without the code on the sword depending on the enemy exposing variables for the computation.
And yes, you could pass what power-ups the player has, and let the enemy use that information to compute the actual damage (that allows you to have enemies that ignore the power-up).
Furthermore, if the damage parameters get too complex, you can always make a resource class and pass an object of that class. You can even reuse the object for identical attacks.
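This is not Godot code, but here is a small engine-agnostic Swift sketch of the idea, with made-up names, just to show the shape: the sword passes an attack description and the enemy decides what the actual damage is.

// All names here are illustrative; the point is the structure, not the exact API.
struct Attack {
    var baseDamage: Int
    var isEmpowered: Bool        // e.g. the strength power-up is active on the player
}

final class Enemy {
    private var lives = 4
    var resistance = 0           // lets some enemy kinds be tougher than others

    // Called by the sword; the enemy computes how much damage the attack really does.
    func receive(_ attack: Attack) {
        let dealt = max(0, (attack.isEmpowered ? attack.baseDamage * 2 : attack.baseDamage) - resistance)
        lives -= dealt
        if lives < 1 { die() }
    }

    private func die() { /* remove the enemy from the scene */ }
}

// Sword side, on hit:
// enemy.receive(Attack(baseDamage: 1, isEmpowered: player.hasStrengthPowerup))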
My algorithm processes DEMs. A DEM (Digital Elevation Model) is a representation of ground topography where the elevation is known at grid nodes.
My problem can be summarized as follows:
Q is a queue containing nodes to visit.
At the start, the boundary of the grid is pushed into Q.
while Q is not empty, do
    remove Node N from the top of Q
    if N was never visited then do
        consider the 8 neighbors of N
        among them select the unvisited ones
        among them select those with a higher elevation than N's
        push these at Q's tail
        mark N as visited
    done
done
As described, the algorithm will mark as 'visited' every node that can be reached from the border by a continuously ascending path. It is worth noticing that the order of processing the nodes in the queue is unimportant. Note also that some points may require a tortuous ascending path to be reached from the border. Think for example of a cone with a furrow spiraling around it. The ridge of the furrow is then a unique and tortuous path capable of reaching the top of the cone without ever descending into the furrow.
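For reference, this is a compact single-threaded Swift sketch of the algorithm as described above (the types and names are mine, and it uses a plain array as the queue):

// Names and types here are illustrative, and this is only the sequential version.
struct Grid {
    let width: Int, height: Int
    let elevation: [Double]                 // row-major, width * height values
    func index(_ x: Int, _ y: Int) -> Int { y * width + x }
}

func markReachableByAscendingPaths(in grid: Grid) -> [Bool] {
    var visited = [Bool](repeating: false, count: grid.width * grid.height)
    var queue: [(x: Int, y: Int)] = []

    // At start, the boundary of the grid is pushed into the queue.
    for x in 0..<grid.width { queue.append((x, 0)); queue.append((x, grid.height - 1)) }
    for y in 0..<grid.height { queue.append((0, y)); queue.append((grid.width - 1, y)) }

    var head = 0
    while head < queue.count {
        let (x, y) = queue[head]; head += 1
        let i = grid.index(x, y)
        if visited[i] { continue }
        visited[i] = true

        // Push the unvisited 8-neighbours with a strictly higher elevation than N's.
        for dy in -1...1 {
            for dx in -1...1 where dx != 0 || dy != 0 {
                let nx = x + dx, ny = y + dy
                guard nx >= 0, nx < grid.width, ny >= 0, ny < grid.height else { continue }
                let ni = grid.index(nx, ny)
                if !visited[ni] && grid.elevation[ni] > grid.elevation[i] {
                    queue.append((nx, ny))
                }
            }
        }
    }
    return visited
}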
Anyway, I want to multithread this algorithm. I am still at the first step of wondering which organization of data and threads will cause the least pain when debugging the beast once it is written.
My first thought is to divide the grid into tiles and split the queue into as many local queues as there are tiles in the grid. The tiles are piled in a work-list. A few threads parse the work-list and grab any tile where something can be done at the moment.
Working on a specific tile first requires that the tile's queue is not empty. I may also need to be able to lock the neighboring tiles when the walker's tile has to visit a node at the edge of the tile.
I am thinking that when a walker cannot lock a neighboring tile it needs, it can skip to the next node in the local queue, or the thread itself can release the tile back to the work-list and look for another tile to work on.
My experience with multithreaded programming is good enough to understand that this lovely description is very likely to turn into a nightmare when I debug it. However, I am not experienced enough to evaluate the various ways of programming the algorithm and make a good decision, keeping in mind that I will not be given a month to debug a dish of spaghetti.
Thanks for reading :)
I am using Player (Player/Stage) on the iRobot Create. The interface for getting odometry data from the robot is fairly simple: call playerc_client_read, and then if you've properly subscribed a playerc_position2d proxy, you should be able to access the proxy's members px, py, pa for distance traveled in x and y (in meters); and rotation (in radians).
I have no issue with doing this in a single threaded application -- all the odometry data is perfectly where I need it to be.
However, when I try to move the robot controller to its own thread (with pthreads), I run into some issues. The issue is that only px seems to be updated. py and pa always remain 0.
Here's the gist of the robot thread
// declare everything (including the playerc_client_t* object and playerc_position2d_t* object)
// connect to server (in pull mode or push mode, it doesn't seem to matter)
// subscribe to position2d proxy
while (!should_quit) {
    playerc_client_read(client);
    double xPosition = position2d->px;
    double yPosition = position2d->py;
    double radians = position2d->pa;
    // do some stuff
    usleep(10 * 1000); // sleep 10 milliseconds
}
// cleanup and unsubscribe
and sure enough, only xPosition is ever set while yPosition and radians remain 0 no matter how the robot moves.
I couldn't find anything else online, is this a known bug? Has anybody else had this issue? Can someone provide insight as to why this may be happening? Thank you.
Full disclosure: I'm a graduate student and this is for a class project.
The issue here is not necessarily with threading.
What we found is that the Create's internal odometry is very inconsistent, especially when a netbook is sitting atop it.
To get any semblance of an accurate reading, one has to set the angular velocity high enough (higher than 0.11 rad/s in our case).
This site helped explain a few things -- namely that the Creates use motor power to determine odometry, and not wheel counters or any kind of analog.
To get accurate odometry for dead reckoning tasks, one either needs to build their own accurate estimator, or use some external sensors that give better information about positional changes.
Our specific problem was caused by thresholding in the multithreaded case that set the angular velocity too low to register a change, whereas the sequential code did not have such thresholding.
I'm moving a character (an ellipsoid) around in my physics engine. The movement must be constrained by the static geometry, but should slide along edges so the character doesn't get stuck.
My current approach is to move it a little and then push it back out of the geometry. It seems to work, but I think that's mostly luck. I fear there must be some corner cases where this method goes haywire, for example a sharp corner where two walls keep pushing the character into each other.
How would a "state of the art" game engine solve this?
Consider using a 3rd party physics library such as Chipmunk-physics or Box2D. When it comes to game physics, anything beyond the most basic stuff can be quite complex, and there's no need to reinvent the wheel.
Usually the problem you mention is solved by determining the amount of overlap, the contact points, and the surface normals (e.g. by using the separating-axis theorem). Then impulses are calculated and applied, which change the object velocities, so that in the next iteration the objects are moved apart in a physically realistic way.
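As an illustration, a much-simplified 2D Swift sketch of one resolution step, assuming the collision test already produced a contact normal and penetration depth (the names are made up):

import CoreGraphics

struct Contact {
    var normal: CGVector      // unit vector pointing out of the static geometry
    var penetration: CGFloat  // how far the character overlaps it
}

func resolve(position: inout CGPoint, velocity: inout CGVector, contact: Contact) {
    // 1. Push the character out of the geometry along the contact normal.
    position.x += contact.normal.dx * contact.penetration
    position.y += contact.normal.dy * contact.penetration

    // 2. Remove only the velocity component pointing into the surface,
    //    so the remaining velocity slides along the wall instead of sticking.
    let intoSurface = velocity.dx * contact.normal.dx + velocity.dy * contact.normal.dy
    if intoSurface < 0 {
        velocity.dx -= contact.normal.dx * intoSurface
        velocity.dy -= contact.normal.dy * intoSurface
    }
}

Removing only the component along the normal is what produces the sliding behaviour you want at edges; a full engine does this per contact and with proper impulse/mass handling.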
I have not developed a state-of-the-art game engine, but I once wrote a racing game where collisions were handled simply by reversing the simulation time and calculating where the edge was crossed. Then the car was allowed to bounce back into the game field. The penalty was that the controls were disabled until the car stopped.
So my suggestion is that you run your physics engine to calculate exactly where the edge is hit (it might need some non-linear equation solving approach), then you change your velocity vector to either bounce off or follow the edge.
To protect against corner cases, one could always keep a history of the last valid position in the game and the corresponding state of the physics engine. If the game gets stuck, the simulation can be restarted from that point, but with a different condition (say, by adding some randomization to the internal parameters).
I have a course exercise in OpenGL: write a game with simple animation of a few objects.
While discussing our design options with my partner, we realized we have two major choices for how the animation is going to work. Either:
Set a timer for a constant interval, say 30 ms; when the timer fires, calculate where the objects should be and draw the frame. Or:
Don't use a timer: just a normal loop that runs all the time, and in each iteration check how much time has passed, calculate where the objects should be according to that interval, and draw the frame.
What should generally be the preferred approach? Does anyone have concrete experience with either approach?
Render and compute as fast as you can to get the maximum frame rate (as capped by the vertical sync)
Don't use a timer; they're not reliable below 50-100 ms on Windows. Check how much time has passed instead. (Usually you need both the delta t and an absolute value, depending on whether your animation is physics-based or keyframe-based.)
Also, if you want to be stable, use an upper/lower bound on your time-step: go into slow motion if a frame takes a few seconds to render (disk access by another process?), and skip an update if you get two of them within, say, 10 ms.
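A small Swift sketch of that bounded delta (the numbers are just examples, and the names are made up):

import Foundation

var lastTime = ProcessInfo.processInfo.systemUptime

func boundedFrameDelta() -> Double? {
    let now = ProcessInfo.processInfo.systemUptime
    let dt = now - lastTime
    if dt < 0.010 { return nil }   // two updates within ~10 ms: skip this one
    lastTime = now
    return min(dt, 0.25)           // cap long stalls: slow motion instead of one huge physics step
}

// Caller: if let dt = boundedFrameDelta() { update(dt: dt) }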
Update
(Since this is a rather popular answer)
I usually prefer having a fixed time-step, as it makes everything more stable. Most physics engines are pretty robust against varying time, but other things, like particle systems or various simpler animations or even game logic, are easier to tune when having everything run in a fixed time step.
Update2
(Since I got 10 upvotes ;)
For further stability over long periods of running (>4 hours), you probably want to make sure you're not using floats/doubles to compute large time differences, since you lose precision doing so and your game's animations/physics will suffer. Use fixed point (or 64-bit microsecond-based) integers instead.
For the hairy details, I recommend reading A matter of precision by Tom Forsyth.
Read this page about game loops.
In short, set a timer:
Update the state of the game at a fixed frequency (something like every 25 ms = 1 s / 40 fps). That includes the properties of the game objects, the input, the physics, the AI, etc. I call that the Model and the Controller. The need for a fixed update rate comes from the problems that may appear on hardware that is too slow or too fast (read the article). Some physics engines also prefer to update at a fixed frequency.
Update the frame (the graphics) of the game as fast as possible. That would be the View. That way you'll provide a smooth game. You can also enable vsync so the display waits for the graphics card (usually that's 60 fps).
So on each iteration of the loop, you check whether you should update the model/controller. If it is late, update until they are up to date. Then update the frame once and continue your loop.
The tricky part is that, because of the different update rates, on fast hardware the view will update several times between model/controller updates. Therefore you should interpolate the positions of your game objects based on "where they would be if the game state had been updated". It's really not that difficult.
You may have to maintain two different data structures: one for the model and one for the view. For instance, you could have a scene graph for your model and a BSP tree for your view.
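A condensed Swift sketch of that loop, as I understand it (all the names here are illustrative stand-ins for your own model/view code):

import Foundation

let updateStep = 0.025                      // 25 ms => 40 updates per second
var previousTime = ProcessInfo.processInfo.systemUptime
var accumulator = 0.0

func updateGameState(dt: Double) { /* input, physics, AI, ... */ }
func render(interpolation: Double) { /* draw, blending the last two model states */ }

func runOneFrame() {
    let now = ProcessInfo.processInfo.systemUptime
    accumulator += now - previousTime
    previousTime = now

    // Catch up the model/controller at the fixed rate, even if rendering fell behind.
    while accumulator >= updateStep {
        updateGameState(dt: updateStep)
        accumulator -= updateStep
    }

    // Render once per loop iteration; alpha says how far we are into the next update.
    let alpha = accumulator / updateStep
    render(interpolation: alpha)
}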
The second would be my preferred approach, because timers are often not as accurate as you probably think, and they carry all the latency and overhead of the event-handling system. Accounting for the time interval will give your animations a much more consistent look and be robust if/when your frame rate dips.
Having said that, if your animation is based on a physics simulation (e.g. rigid-body or ragdoll animation), then having a fixed update interval for your physics can greatly simplify the implementation.
Option 2 is by far preferred. It will scale nicely across differently performing hardware.
The book "Game Programming Gems 1" had a chapter that covers exactly what you need:
Frame Rate Independent Linear Interpolation
Use the second method. I did a game for my senior project, and from experience, there is no guarantee that your logic will be done processing when the timer wants to fire.
I would be tempted to use the loop, since it will render as fast as possible (i.e. immediately after your physics computations are done). This will probably be more robust if you run into any slow-down in computation, which would cause timer firings to start queueing up. However, in case of such a slow-down you may have to put a cap on the time step computed between updates, since your physics engine may go unstable with too large a jump in time.
I'd suggest setting the system up to work on a "delta" that's passed in from outside.
When I did this, inside the animation format I based everything on real-time values. The delta I passed in was 1/30 of a second, but it could be anything. With this system you can get either your first or your second option, depending on whether you pass in a fixed delta or the amount of time that has passed since the last frame.
As for which is better, it depends on your game and your requirements. Ideally all of your systems would be based around the same delta so that your physics match your animations. If your game drops frames at all and if all of your systems work with a variable delta, I'd suggest the variable delta is the better of the two solutions for end user experience.
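A tiny Swift sketch of that "delta from outside" shape (names are illustrative):

// The animation system never measures time itself; the caller supplies the delta.
protocol Animated {
    mutating func advance(by delta: Double)   // delta in seconds
}

// Option 1 (fixed step):    animation.advance(by: 1.0 / 30.0)
// Option 2 (variable step): animation.advance(by: measuredFrameTime)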