For context, I am making a multiplayer game using Node.js, and my database is Postgres.
Players play one at a time, and I save everything in the database.
When the first player takes a turn, they cannot play again until the other player has played too.
What I am doing now is keeping a boolean on each player in the database called "ableToPlay", which is true and then turns false when it is not that player's turn.
The issue is that when a user spams the "play" button, and my database is on a remote server, it takes time for the flag to update from true to false. The user ends up playing multiple times, which then crashes the app.
I am using an AWS microservices architecture, so the server must be stateless.
Is there any way I can save the game progress so that it is accessible to all my microservices?
How do you check the turn? Is it something like:
select turn from db
if turn == X then
    // allow the turn
    do all the logic
    update the turn to Y
endif
If so, "do all the logic" may be called several times, because several concurrent requests will all read turn=X.
This is a very common problem in programming; there are several approaches you could take.
Two key observations to address:
the same player should not take a turn twice in a row
while one player is making a turn, the other player must wait
The easiest way is to use a transaction in the DB while the turn is happening. For example, when player X is making the turn:
start transaction
update turn=X where turn=Y (Y is the other player)
if the update succeeded (exactly one record was updated)
    do all the logic
commit the transaction
In that approach, the UPDATE will wait for the previous one to finish, the WHERE clause makes sure the same player cannot take two or more turns in a row, and transaction isolation avoids running the turn logic concurrently.
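A minimal sketch of that pattern in Node.js with node-postgres, assuming a hypothetical games table with (game_id, turn) columns, where turn records the player who made (or is currently making) the most recent move; the table and function names are illustrative, not from the original post:

// Minimal sketch, assuming a hypothetical "games" table with columns
// (game_id, turn), where turn records the player who made (or is making)
// the most recent move.
const { Pool } = require('pg');
const pool = new Pool();

async function playTurn(gameId, playerId, otherPlayerId, doGameLogic) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // Atomically claim the turn: succeeds only if the other player
    // holds the most recent turn. Concurrent duplicates see rowCount 0.
    const res = await client.query(
      'UPDATE games SET turn = $1 WHERE game_id = $2 AND turn = $3',
      [playerId, gameId, otherPlayerId]
    );
    if (res.rowCount === 1) {
      await doGameLogic(client); // run the move inside the same transaction
      await client.query('COMMIT');
      return true;
    }
    await client.query('ROLLBACK'); // not this player's turn, or a duplicate click
    return false;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}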
If you don't want to use a transaction, you could build a state machine with states:
waitingForTurnX
makingTurnX
waitingForTurnY
makingTurnY
This would be a nice model to code, and the transitions can be handled without explicit transactions:
update state=makingTurnX where state=waitingForTurnX
This approach also eliminates the race condition, because in the vast majority of databases an update to a single record is atomic.
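A minimal sketch of that single-statement transition in Node.js with node-postgres, again with an assumed games table and state column (illustrative names, not from the original post):

// Minimal sketch: atomic state-machine transition on a single row.
// Assumes a hypothetical "games" table with columns (game_id, state).
async function claimTurnX(pool, gameId) {
  const res = await pool.query(
    "UPDATE games SET state = 'makingTurnX' WHERE game_id = $1 AND state = 'waitingForTurnX'",
    [gameId]
  );
  // Only one concurrent request can win this update; the rest see rowCount === 0.
  return res.rowCount === 1;
}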
For example, let's say I have some game in which I have 500 independent objects and 10 players.
An independent object is one that moves in a specific direction on every update, regardless of what the players do (players do not need to come into contact with these objects).
Now, if a player shoots, say, a bullet, it is easier, because the bullet belongs to a specific player, so in-game lag is easier to avoid. Let's look at something simpler, though: a player trying to update their position. The typical thing I would do on the client and server side is this:
client side: update the player's coordinates + send a message to the server as socket X
server side: receive the message from socket X, update the player's coordinates on the server side + send a message with that player's coordinates to all other sockets
When you do the communication like this, everyone receives the player's new coordinates and there is little to no lag. (It is also sufficient for objects like bullets, because they are created by a player's firing event.)
How do you handle 500+ independent objects that move in random directions at random speeds all across the map, and update them for all players efficiently? (Be aware that their direction and speed can change upon contact with a player.) What I've tried so far:
1) Put all of the movement + collision logic on the server side and notify all clients with a setTimeout loop & io.emit -
Result: causes massive lag even with only 500+ objects and 4 connected players. All of the players receive the server's response far too slowly.
2) Put all of the movement + collision logic on the client side and notify the server about every object's position -
Result: to be honest, I didn't encounter much lag, but I'm not sure this is the right idea, because every time an object moves, each client sends a message to the server to update that same object (the server is notified N times [N = number of connected clients] about the same object). Handling this entirely on the client side is also a bad idea, because when a player switches tabs [goes inactive], no more JavaScript is executed in that player's browser and the whole scheme breaks.
I've also noticed that games like agar.io, slither.io, diep.io, etc. don't really have hundreds of objects moving in various directions. In agar.io and slither.io you mainly have static objects (food) and players; in diep.io there are dynamic objects, but none of them move at very high speed. How do people achieve this? Is there a smart way to do it with minimal lag?
Thanks in advance
Convert your user interactions to enumerated actions and forward those. Player A presses the left arrow, which the client interprets as "MOVE_LEFT" with possible additional attributes (how much, angle, whatever) as well as a timestamp indicating when the action took place from Player A's perspective.
The server receives this, validates it as a possible action, and forwards it to all the clients.
Each client then interprets the action itself and updates its own simulation with respect to Player A's action.
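A small sketch of that message shape on the server with socket.io; the event names, fields, and isValidAction() are illustrative assumptions, not from the original answer:

// Sketch: relay compact, enumerated actions instead of full game state.
// Event names, fields, and isValidAction() are illustrative assumptions.
const { Server } = require('socket.io');
const io = new Server(3000);

function isValidAction(action) {
  // Game-specific validation would go here (assumed for this sketch).
  return action && typeof action.type === 'string' && typeof action.ts === 'number';
}

io.on('connection', (socket) => {
  // A client would emit something like:
  //   socket.emit('player_action', { type: 'MOVE_LEFT', amount: 1, ts: Date.now() });
  socket.on('player_action', (action) => {
    if (!isValidAction(action)) return;
    // Forward the action to every other client, which applies it to its own simulation.
    socket.broadcast.emit('player_action', { playerId: socket.id, ...action });
  });
});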
Don't send the entire game state to every client every tick; that's too bloated. The other side of this is being able to handle late or missing actions. One way of doing that is rollback, where you keep multiple sets of state and keep the simulation going until a misprediction (a late or missing packet) is discovered. You then revert to the last known-correct state and replay all the messages since then to bring the state back up to date. This is the idea behind GGPO.
I also suggest reading every networking article on Gaffer on Games, especially "What Every Programmer Needs To Know About Game Networking". They're very good articles.
I'm developing a multiplayer turn-based game (e.g. chess) that should support a lot of players (that's the idea). My question is about a service I'm developing: the pairing system, responsible for pairing 2 players so they can start a room and begin playing.
So, this is the pairing service:
matchPlayers() {
    if (this.players.length >= 2) {
        // Take the two longest-waiting players from the pool (FIFO).
        let player1 = this.players.shift();
        let player2 = this.players.shift();
        if (player1 !== undefined && player2 !== undefined) {
            player1.getSocket().emit('opponent_found');
            player2.getSocket().emit('opponent_found');
            // Persist the match, then wire both players into the new room.
            return this.createMatchInDataBaseApiRequest(player1, player2)
                .then(function (data) {
                    let room = new RoomClass(data.room_id, player1, player2);
                    player1.setRoom(room);
                    player2.setRoom(room);
                    return room;
                });
        }
    }
    return false;
}
At the entry point of the server, I push each new socket connection into an array, "PlayersPool"; this array holds the players waiting to be matched up.
Right now my approach is to pair users as they become available (FIFO, first in first out).
The problems (and questions) I see with this pairing system are:
It depends on new users: the code runs each time a new user connects. The flow is: a user connects, gets added to the pool, and we check whether there are users waiting to be paired. If yes, a room is created and they can play; if not, they stay in the waiting pool until a new user connects, the code runs again, and so on...
What would happen if, in some weird case (not sure whether this can happen), 2 players get added to the waiting pool at the exact same time? Would this service find the pool empty and not create a room? To solve this, should I maybe have another service always running and checking the pool? What would be the best approach? Can this even happen, and in which scenario?
Thanks for the help.
I'm guessing this particular code snippet is on the server? If so, assuming there is only one server, then there is no "race condition": node.js is single-threaded, as IceMetalPunk mentioned, so if you're running this function every time you add a player to this.players, you should be fine.
There are other reasons to be examining the player pool periodically, though: players you've added to the pool may have gotten disconnected (due to timeout or closing the browser), so you should remove them; you also might want to handle situations where players have been waiting a long time - after X seconds, should you be updating the player on progress, calculating an estimated wait time for them, perhaps spawning an AI player for them to interact with while they wait, etc.
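A rough sketch of such a periodic sweep; the pool object, getSocket(), and joinedAt shown here are assumptions for illustration:

// Rough sketch: periodically prune and inspect the waiting pool.
// pool, getSocket(), and joinedAt are illustrative assumptions.
setInterval(() => {
  // Drop players whose sockets disconnected while waiting.
  pool.players = pool.players.filter((p) => p.getSocket().connected);

  // Optionally tell long-waiting players something useful.
  const now = Date.now();
  for (const p of pool.players) {
    if (now - p.joinedAt > 30000) {
      p.getSocket().emit('still_waiting', { waitedMs: now - p.joinedAt });
    }
  }

  // Try to pair whoever is left.
  pool.matchPlayers();
}, 5000);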
You can run into a "race condition"; it's explained in this package, which provides a locking mechanism:
https://www.npmjs.com/package/async-lock
That package is useful only if you run Node.js in a single process, meaning you are not running multiple servers or a Node cluster with multiple processes.
Otherwise, you will have to implement a distributed locking mechanism, which is one of the most complex things in distributed computing; but today you can use the npm package for the Redlock algorithm, set up 3 Redis servers, and go.
That is too much overhead for a game that doesn't have players yet.
Node.js is not single-threaded; here is the explanation from one of its creators:
Morning Keynote- Everything You Need to Know About Node.js Event Loop - Bert Belder, IBM
https://www.youtube.com/watch?v=PNa9OMajw9w
Conclusion: keep it simple, run it in a single Node process, and use the "async-lock" package.
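A minimal sketch of guarding the pairing code with async-lock; the lock key and surrounding names are illustrative:

// Minimal sketch: serialize all waiting-pool mutations within one Node process.
const AsyncLock = require('async-lock');
const lock = new AsyncLock();

function onPlayerConnected(player) {
  // 'players-pool' is just an illustrative lock key; every piece of code that
  // touches the pool should acquire the same key so it runs one at a time.
  return lock.acquire('players-pool', async () => {
    playersPool.push(player);
    return matchPlayers(playersPool);
  });
}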
If your server grows to become an MMO, you will need to read about distributed computing:
How to do distributed locking:
https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
A book on data-intensive apps:
http://dataintensive.net/
I am trying to learn more about CQRS and Event Sourcing (Event Store).
My understanding is that a message queue/bus is not normally used in this scenario: a message bus can facilitate communication between microservices, but it is not typically used specifically for CQRS. However, the way I see it at the moment, a message bus would be very useful for guaranteeing that the read model eventually gets back in sync (hence eventual consistency), e.g. when the server hosting the read model database is brought back online.
I understand that eventual consistency is often acceptable with CQRS. My question is: how does the read side know it is out of sync with the write side? For example, let's say there are 2,000,000 events created in Event Store on a typical day and 1,999,050 are also written to the read store. The remaining 950 events are not written because of a software bug somewhere, or because the server hosting the read model is offline for a few seconds, etc. How does eventual consistency work here? How does the application know to replay the 950 events that are missing at the end of the day, or the x events that were missed because of the downtime ten minutes ago?
I have read questions here over the last week or so that talk about messages being replayed from Event Store, e.g. this one: CQRS - Event replay for read side; however, none of them explain how this is done. Do I need to set up a scheduled task that runs once per day and replays all events created since the task last succeeded? Is there a more elegant approach?
I've used two approaches in my projects, depending on the requirements:
Synchronous, in-process Readmodels. After the events are persisted, in the same request lifetime and in the same process, the Readmodels are fed those events. In case of a Readmodel failure (a bug or a catchable error/exception), the error is logged, that Readmodel is skipped, the next Readmodel is fed the events, and so on. Then come the Sagas, which may generate commands that generate more events, and the cycle repeats.
I use this approach when the impact of a Readmodel failure is acceptable to the business, when the readiness of a Readmodel's data matters more than the risk of failure. For example, they wanted the data immediately available in the UI.
The error log should be easily accessible in some admin panel, so someone can look at it in case a client reports an inconsistency between the write/command side and the read/query side.
This also works if your Readmodels are coupled to each other, i.e. one Readmodel needs data from another canonical Readmodel. Although this seems bad, it isn't necessarily; it always depends. There are cases where you trade updater code/logic duplication for resilience.
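A rough sketch of that synchronous, in-process flow; the handle() method and logger are illustrative assumptions, not the author's actual code:

// Rough sketch: feed persisted events to every Readmodel in-process,
// logging and skipping any Readmodel that throws.
// readmodel.handle() and logger are illustrative assumptions.
async function updateReadmodels(readmodels, events, logger) {
  for (const readmodel of readmodels) {
    try {
      for (const event of events) {
        await readmodel.handle(event);
      }
    } catch (err) {
      // A failing Readmodel must not block the others.
      logger.error('Readmodel ' + readmodel.name + ' failed', err);
    }
  }
}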
Asynchronous, in-another-process Readmodel updater. This is used when I want total separation of a Readmodel from the other Readmodels, so that one Readmodel's failure cannot bring the whole read side down, or when a Readmodel needs another language, different from the monolith. Basically, this is a microservice. When something bad happens inside a Readmodel, it is necessary that some authoritative, higher-level component is notified, e.g. an Admin is notified by email or SMS.
The Readmodel should also have a status panel with all kinds of metrics about the events it has processed, whether there are gaps, and whether there are errors or warnings; it should also have a command panel where an Admin can rebuild it at any time, preferably without system downtime.
In any approach, the Readmodels should be easily rebuildable.
How would you choose between a pull approach and a push approach? Would you use a message queue with a push (of events)?
I prefer the pull-based approach because:
it does not use another stateful component like a message queue, which is one more thing to manage, that consumes resources and that can (and so eventually will) fail
every Readmodel consumes the events at the rate it wants
every Readmodel can easily change, at any moment, which event types it consumes
every Readmodel can easily be rebuilt at any time by requesting all the events from the beginning
the order of events is exactly the same as in the source of truth, because you pull from the source of truth
There are cases when I would choose a message queue:
you need the events to be available even if the Event Store is not
you need competing/parallel consumers
you don't want to track what messages you consume; as they are consumed they are removed automatically from the queue
This talk from Greg Young may help.
How does the application know to replay the 950 events that are missing at the end of the day or the x events that were missed because of the downtime ten minutes ago?
So there are two different approaches here.
One is perhaps simpler than you expect - each time you need to rebuild a read model, just start from event 0 in the stream.
Yeah, the scale on that will eventually suck, so you won't want that to be your first strategy. But notice that it does work.
For updates with not-so-embarrassing scaling properties, the usual idea is that the read model tracks metadata about the stream position used to construct the previous version of the model. The query from the read model then becomes "What has happened since event #1,999,050?"
In the case of Event Store, the call might look something like:
EventStore.ReadStreamEventsForwardAsync(stream, 1999050, 100, false)
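As a rough sketch of the same checkpoint idea in Node.js-style pseudocode; readEventsForward, applyToReadModel, and checkpointStore are hypothetical helpers, not part of any particular Event Store client:

// Rough sketch of checkpoint-based catch-up; all helpers are hypothetical.
async function catchUpReadModel(stream, checkpointStore, batchSize = 100) {
  let position = await checkpointStore.load(stream); // e.g. 1999050
  while (true) {
    const events = await readEventsForward(stream, position, batchSize); // hypothetical read
    if (events.length === 0) break; // caught up with the write side
    for (const event of events) {
      await applyToReadModel(event); // hypothetical projection update
      position = event.streamPosition;
    }
    // Persist progress, ideally atomically with the read model update.
    await checkpointStore.save(stream, position);
  }
}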
The application doesn't know it has failed to process some events when the cause is a bug.
First of all, I don't understand why you assume that the number of events written on the write side must equal the number of events processed by the read side. Some projections may subscribe to the same event, and some events may have no subscribers on the read side.
In case of a bug in a projection or in the infrastructure that leaves a certain projection invalid, you might need to rebuild that projection. In most cases this is a manual intervention that resets the projection's checkpoint to 0 (the beginning of time), so the projection picks up all events from the event store from scratch and reprocesses them.
The event store should have a global sequence number across all events starting, say, at 1.
Each projection has a position tracking where it is along the sequence number. The projections are like logical queues.
You can clear a projection's data and reset the position back to 0 and it should be rebuilt.
In your case, the projection fails for some reason (like the server going offline) at position 1,999,050, but when the server starts up again it will continue from that point.
Currently I'm working on a chess app with Node.js & socket.io.
Right now the running games' information is stored in an array like this:
games[token] = {
    'creator': socket,
    'players': [],
    'interval': null,
    'timeout': timeout,
    'FEN': '',
    'PGN': ''
};
The question is: is it better to save game info to the DB when a game is created and update the field values move by move, or to save each game only after it finishes?
Which is the better approach?
If you wait until the end of a game to save state, then you run the risk of losing it if something like a server crash occurs. Think of an unhandled exception, or something outside of your control like a container restart, or worse.
Persist every bit of data that you want to be recoverable as soon as possible. I could imagine an RPG in which it isn't super important to always be able to recover the player's exact position on a map. It seems you'd always want to be able to recover the state of your chess games.
If you want a crash-proof implementation, the cheapest way is to write every move to a journal log. When the game ends, you save the final state and discard the journal. On every game start, load the state and then check whether there is anything in the journal; if yes, just play back the events.
The journal can live in a database, on disk, or in some lightweight store like Redis.
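A minimal sketch of that journal idea using Redis lists; the key names and the applyMove/loadState/saveState helpers are illustrative assumptions:

// Minimal sketch of a per-game move journal in Redis.
// Key names and applyMove/loadState/saveState are illustrative assumptions.
const Redis = require('ioredis');
const redis = new Redis();

async function recordMove(gameId, move) {
  // Append the move to the game's journal as soon as it is made.
  await redis.rpush('journal:' + gameId, JSON.stringify(move));
}

async function finishGame(gameId, finalState) {
  await saveState(gameId, finalState);  // persist the final state
  await redis.del('journal:' + gameId); // the journal is no longer needed
}

async function recoverGame(gameId) {
  let state = await loadState(gameId);  // last saved state (may be the initial one)
  const entries = await redis.lrange('journal:' + gameId, 0, -1);
  for (const entry of entries) {
    state = applyMove(state, JSON.parse(entry)); // replay journaled moves
  }
  return state;
}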
I'm creating a board game where 2 players can play and others can be spectators (viewers).
So, when a spectator joins, they get the current state of the game, and from then on they only receive each move a player makes (to save data, obviously).
My question is: when the spectator first gets the state of the game from the server, how can I make sure it is actually in sync? I don't really know when they will receive that state; it might arrive a fraction of a second before something changes, and then the delta they get for each move won't make sense.
Should I use some kind of interval? What would you suggest to make sure everything stays in sync?
Assuming that your state is the result of, and only of, user actions, you could store your state in a table-like format with an auto-incrementing integer ID.
In the move event, you pass the new ID and the previous ID. If the receiver's highest-seen ID is less than the previous ID, you know to ask the server for the missing actions.
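A small sketch of that gap check on the spectator side; the event names, fields, and applyMove() are illustrative assumptions:

// Small sketch of spectator-side gap detection.
// Event names ('move', 'request_actions'), fields, and applyMove() are assumptions.
let lastSeenId = initialState.lastActionId; // arrives with the initial snapshot

socket.on('move', ({ id, prevId, move }) => {
  if (lastSeenId < prevId) {
    // One or more actions between lastSeenId and prevId were missed.
    socket.emit('request_actions', { from: lastSeenId + 1, to: prevId });
    return; // apply this move only after the missing actions arrive
  }
  applyMove(move); // hypothetical game-specific update
  lastSeenId = id;
});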