I'm writing a game server and clients. The server generates a world and serves that up to various player clients. It is not massively multiplayer.
The client (at least for now) runs a fairly demanding software renderer, as well as doing the standard client-side simulation (which is then subject to approval by the authoritative server), so any processing scalability available on this end would be good.
The server will also be processor intensive, and given the nature of the project, is likely to only become more so as the project progresses. The world is procedural and some of that generation happens during play; world options are set before a game begins, based on system capabilities at that time (ideally with minimal outside processes running). So scaling with the number of hardware threads is an absolute necessity for the server.
Again, as this is not an MMO, players will all have access to both client and server, so that they can set up their own servers. Many players will have only one machine on which to both host a server and run the client in order to play with their friends. Based on this fact, which would serve me better?
A combined client and server process
Separate client and server processes
...And why?
PS. Even if some of the above complexity only comes to pass later, I need to get the architecture (in this regard) right as early as possible.
I'd say the most important aspect to help you with this decision is: are there any computation-intensive tasks that are shared between client and server? If you can do some computation once for both client and server instead of twice, then it makes sense to have one process (but this may be quite hard to implement). Otherwise I say separate them.
I see a few advantages with separate processes:
when the client crashes (more likely with complicated renderers and/or buggy graphics drivers), the server stays up
accidentally (or rage-quitting and) exiting the game won't kill the game for the other players
(if needed) it's easier to bind the server to a few cores and the client to the others
you probably need to write a dedicated server anyway (so it can be hosted on remote servers, possibly without graphics), and porting it to other OSes might be easier (see the launch sketch after this list) ...
it makes the client code less complicated overall
(I may later add something to the list if I come up with it)
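To make the separate-processes option concrete, a minimal Node.js sketch of a client launcher starting the dedicated server as its own process could look like this (the question doesn't name a language, and server.js, the port, and connectToServer are all hypothetical):

```js
// Hypothetical launcher: spawn the dedicated server as a separate process so a
// renderer crash or a player quitting never takes the hosted game down with it.
const { spawn } = require("child_process");

const server = spawn("node", ["server.js", "--port", "4000"], {
  detached: true,   // let the server outlive the client process
  stdio: "ignore",  // don't tie the server's output to the client's terminal
});
server.unref();     // the client can exit without killing the server

// The client then connects to its own machine exactly as it would to a remote host:
// connectToServer("127.0.0.1", 4000);
```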
Separation of concerns is always an important factor, so it's better to keep them as separate processes. It's also better for performance. If I only want to use the client functionality, there is no reason to also keep the server-related code in the same application. I will only launch it if I need it.
I will be hosting a WebRTC application. All the server needs to do is pass around messages like room numbers, ICE candidates, disconnects, etc., all the messages needed for signalling. I am using socket.io and node.js.
The server pretty much just passes text around. There are no sign-ups and no database; it's all in memory. It keeps track of a list of users that are online (really just how many are online) and a list of rooms that are taken.
So a few lists of numbers, and it passes text between users so that they can connect via WebRTC.
Now, obviously, when (if) I get a huge amount of traffic coming in, the lists might get kind of big, maybe 10k-20k five-digit numbers in each list (there are only a couple of big lists).
And then there's all that passing around, like the disconnects and connects. I need a server that can do this stuff fast, preferably a free one. I mean, it's only text, so it shouldn't be that big of a deal, right? But my app is structured around connecting a person to the next person who connects. So, if a whole bunch of people connect within the same second, then I need a hosting server quick enough to handle that down to the millisecond... Will this even be a problem?
What exactly should I be looking for in a server if I'm just keeping number lists in memory (no databases) and passing text around?
First of all, this doesn't have anything to do with WebRTC itself. What you basically want is a chat server: a server that sends data from one client to the other.
Second, the type of server is irrelevant to the amount of RAM required to run it. What matters is how many clients you will have concurrently. (To some extent; game servers will obviously consume more RAM even without clients.)
Third, more RAM does NOT mean faster handling. That is, if you don't fully use the available RAM, adding more won't do you any good. Obviously, once you exceed the available RAM, things start to slow down a lot. Read more about it here.
Now, with those out of the way, let's see what you need. You can make a very rough estimation by connecting a few clients to a server and see how much RAM it uses. Check if the amount of RAM goes up if these clients start calling each other and by how much it goes up. You now have a minimum and a maximum amount of RAM for x clients. I would do this test with about 10 clients.
Now that you can make an estimation, calculate how much the minimum and maximum RAM is for your expected userbase. It will become more and more a preference thing from here on out, but I would at least double that amount and then round up to the nearest amount of RAM that "makes sense" (14.7GB becomes 16GB, 28.32GB becomes 32GB etc...)
I will add, from my own experience with WebRTC and about 1000-1500 concurrent users, that 8GB is easily enough. But it really comes down to the number of users you expect.
On a side note, I very much recommend Node.js for the server. It's super easy to use; any programmer who knows JavaScript (so basically any programmer) can create a chat server in Node.js in a day or two. Take a look at this open-source WebRTC server in Node.js.
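To give an idea of how small such a signalling relay is, here is a minimal sketch with socket.io (the port, room handling, and event names are illustrative, not taken from the question):

```js
// Minimal socket.io signalling relay: it never inspects the WebRTC payloads,
// it only forwards them to the other peers in the same room.
const { Server } = require("socket.io");
const io = new Server(3000);               // port is an arbitrary example

io.on("connection", (socket) => {
  socket.on("join", (room) => {
    socket.join(room);
    socket.to(room).emit("peer-joined", socket.id);
  });

  // SDP offers/answers and ICE candidates are just opaque text to the server.
  socket.on("signal", ({ room, data }) => {
    socket.to(room).emit("signal", { from: socket.id, data });
  });

  socket.on("disconnecting", () => {
    for (const room of socket.rooms) {
      socket.to(room).emit("peer-left", socket.id);
    }
  });
});
```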
I am in the planning phase of a mobile multiplayer real-time game, and I am considering what architecture to implement in order to keep track of state within a game lobby.
I have considered making the game peer-to-peer, where all devices within the same game lobby (4 players max) emit their position to the other players as the game progresses. I have also considered having the players connect to a server that keeps track of the game state and sends that state to every player.
If I do go the server route and implement it using Node, what are some considerations I need to look at?
I predict that scalability will be a major issue, since the server has to keep track of the state of every game lobby, each nominally running at 60 fps. If I have 100 active lobbies, I can foresee the issues that could arise. Will this be the case, and should I look into a different networking architecture? Or could I get significant capacity before the server reaches its maximum?
Depends highly on the kind of game.
The peer-to-peer solution is probably tricky, and cheating would be pretty easy unless you validate everything on every client; since you want to run it on mobile clients, that would slow the game down a lot.
It makes more sense to validate the data on a central server and just send out the validated data.
In general, the FPS doesn't matter to the server at all.
Every client just emits user input to the server, not every frame
(button pressed, button released).
The server uses all this information, calculates what needs to happen, and sends regular updates to the clients.
Then the client renders everything something like 200ms in the past (really depending on the kind of game) and "predicts" what the server will send in the next update to make it look smooth.
Just read this article, it explains it better than I ever could: https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
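As a rough illustration of that server side (not taken from the article), a fixed-rate lobby loop in Node could look something like this; the tick rate, the input handling, and the state shape are all just assumptions for the sketch:

```js
// Authoritative lobby loop ticking at a fixed rate, far below the client's 60 fps.
const TICK_RATE = 20;                              // server ticks per second (assumption)
const lobby = { players: new Map(), inputs: [] };  // one lobby, kept minimal for the sketch

// Called by the socket layer whenever a client emits an input event
// (button pressed / button released), never once per rendered frame.
function onClientInput(playerId, input) {
  lobby.inputs.push({ playerId, input });
}

setInterval(() => {
  // Validate and apply the queued inputs, then advance the simulation one step.
  for (const { playerId, input } of lobby.inputs) {
    const p = lobby.players.get(playerId) || { x: 0 };
    if (input === "right") p.x += 1;               // stand-in for real movement rules
    if (input === "left") p.x -= 1;
    lobby.players.set(playerId, p);
  }
  lobby.inputs.length = 0;

  // Broadcast a snapshot; clients interpolate between snapshots and predict
  // locally to hide the delay until the next one arrives.
  const snapshot = Object.fromEntries(lobby.players);
  // io.to(lobbyId).emit("snapshot", snapshot);    // socket wiring omitted here
}, 1000 / TICK_RATE);
```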
But yeah, it will definitely take a lot more power than a normal website.
However, the individual lobbies don't need to communicate with each other, so you can very easily run, say, 3 servers in parallel.
You can expect Node to perform pretty well in comparison to most of the alternatives, because it handles concurrent connections very well. Just make sure you don't run CPU-heavy things, because that will pretty much kill Node (if you just receive status updates, use basic math to validate them, and send them out again, it's a perfect choice).
Hope that helps
The biggest issue you will run into is that Node is not built to handle applications that are as latency-sensitive as real-time simulations.
HTTP is a bloated protocol, and it's text-based. It will not work for the type of games most people would consider multiplayer. It may work for games in which high latency is not an issue, such as Facebook games.
So I'm working on a node.js game server application at the moment, and I've hit a bit of a wall here. My issue is that I'm using socket.io to accept inbound connections from game clients. These clients might be connected to one of several Zones or areas of the game world.
The basic architecture is as follows. The master process forks a child process for each Zone of the game, which runs the Zone Manager process: a process dedicated to maintaining the Zone data (3D models, positions of players/entities, etc.). The master process then forks multiple "Communication Threads" for each Zone Manager it creates. These threads create an instance of socket.io and listen on the port for that Zone (multiple threads listening on a single port). These threads will handle the majority of the game logic in their own process, as well as communicate with the database backing the game server. The only issue is that in some circumstances they may need to communicate with the Zone Manager to receive information about the Zone, players, etc.
As an example: A player wants to buy/sell/trade with a non-player character (NPC) in the Zone. The Zone Communication thread needs to ask the Zone Manager thread if the player is close enough to the NPC to make the trade before it allows the trade to take place.
The issue I'm running into here is that I was planning to make use of the node.js cluster functionality and use the send() and on() methods of the processes to handle passing messages back and forth. That would be fine except for one caveat I've run into with it: since all child processes spun off with cluster.fork() can only communicate with the "master" process, the node.js root process becomes a bottleneck for all communication. I ran some benchmarks on my system using a script that literally just bounced a message back and forth using the cluster's Inter-process Communication (IPC) and kept track of how many relays per second were being carried out. It seems that eventually node caps out at about 20k per second in terms of how many IPC messages it can relay. This number was consistent on both a Phenom II 1.8GHz quad-core laptop and an FX-8350 4.0GHz 8-core desktop.
Now that sounds pretty decently high, except that it basically means that regardless of how many Zones or Communication Threads there are, all IPC still bottlenecks through a single process that acts as a "relay" for the entire application. This means that although each individual thread seems able to relay > 20k IPC messages per second, the application as a whole will never relay more than that, even on some insane 32-core system, since all the communication goes through a single thread.
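The ping-pong test boils down to something like this (a simplified sketch of the idea, not the exact script I used):

```js
// Master bounces a message to one worker and back, counting round trips per second.
const cluster = require("cluster");

if (cluster.isMaster) {
  const worker = cluster.fork();
  let roundTrips = 0;

  worker.on("message", () => {
    roundTrips++;
    worker.send("ping");            // immediately bounce the next message
  });
  worker.on("online", () => worker.send("ping"));

  setInterval(() => {
    console.log(`${roundTrips} IPC round trips in the last second`);
    roundTrips = 0;
  }, 1000);
} else {
  process.on("message", () => process.send("pong"));
}
```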
So that's the problem I'm having. Now the dilemma: I've read a lot about the various other options out there, and about 20 different questions on here about this topic, and I've seen a couple of things popping up regularly:
Redis:
I'm actually running Redis on my server at the moment and using it as the socket.io datastore so that socket.io in multiple threads can share connection data so that a user can connect to any of N number of socket.io threads for their Zone so the server can sort of automatically load balance the incoming connections.
My concern with this is that it runs through the network stack. Hardly ideal for communication between multiple processes on the same server. I feel like the latency would be a major issue in the long run.
0MQ (zeromq/zmq):
I've never used this one for anything before, but I've been hearing a bit about it lately. Based on the reading I've done, I've found a lot of examples of people using it with TCP sockets, but there's not a lot of buzz about people using it for IPC. I was hoping perhaps someone here had worked with 0MQ for IPC before (possibly even in node.js?) and could shine some light on this option for me.
dnode:
Again I've never used this, but from what I've seen it looks like it's another option that is designed to work over TCP which means the network stack gets in the way again.
node-udpcomm:
Someone linked this in another question on here (which I can't seem to find again, unfortunately). I've never even heard of it, and it looks like a very small solution that opens up and listens on UDP connections. Although this would probably still be faster than the TCP options, we still have the network stack in the way, right? I'm definitely about a mile outside of my "programmer zone" here and well into the realm of networking/computer architecture stuff that I don't know much about.
Anyway, the bottom line is that I'm completely stuck and have no idea what the best option would be for IPC in this scenario. I'm assuming at the moment that 0MQ is the best of the options I've listed above, since it's the only one that seems to offer an "IPC" option as a communication protocol, which I presume means it's using a UNIX domain socket or something else that doesn't go through the network stack, but I can't confirm that.
I guess I'm just hoping some people here might know enough to point me in the right direction or tell me I'm already headed there. The project I'm working on is a multiplayer game server designed to work "out of the box" with a multiplayer game client, both of them powering their 3D graphics/calculations with Three.js. The client and server will be made open source to everyone once I get them working to my satisfaction, and I want to make sure the architecture is as scalable as possible, so people don't build a game on this, go to scale up, and end up hitting a wall.
Anyway thanks for your time if you actually read all this :)
I think that 0MQ would be a very good choice, but I admit that I don't know the others :D
With 0MQ, the transport you decide to use is transparent; the library calls are the same. It's just a matter of choosing a particular endpoint (and thus transport) in the calls to zmq_bind and zmq_connect at the beginning. There are basically four paths you may decide to take:
"inproc://<id>" - in-process communication endpoint between threads via memory
"ipc://<filepath>" - system-dependent inter-process communication endpoint
"tcp://<ip-address>" - clear
"pgm://..." or "epgm://..." - an endpoint for Pragmatic Reliable Multicast
To put it simply, the higher up the list you are, the faster it is and the fewer latency and reliability problems you have to face. So you should try to stay as high as possible. Since your components are processes, you should naturally go with the IPC transport. If you later need to change something, you can just change the endpoint definition and you are fine.
Now, what is in fact more important than the transport you choose is the socket type, or rather the pattern you decide to use. Your situation is a typical request-response kind of communication, so you can do either:
Manager: REP socket, Threads: REQ socket; or
Manager: ROUTER, Threads: REQ or DEALER
The Threads would then connect their sockets to the single Manager socket assigned to them, and that's all; they can start sending their requests and waiting for responses. All they have to decide on is the path they use as the endpoint.
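If it helps, a minimal sketch of that Manager/Thread REQ-REP pair over the IPC transport could look like the following in Node, using the zeromq package; the endpoint path, the message format, and the canTrade check are just assumptions, and the async API shown here may differ between binding versions:

```js
// Zone Manager side: REP socket bound to an IPC endpoint, answering proximity checks.
const zmq = require("zeromq");

async function runManager() {
  const rep = new zmq.Reply();
  await rep.bind("ipc:///tmp/zone-manager.sock");   // hypothetical endpoint path
  for await (const [msg] of rep) {
    const request = JSON.parse(msg.toString());
    // A real manager would look up player/NPC positions in its Zone data here.
    const ok = request.type === "canTrade";         // stand-in for a distance check
    await rep.send(JSON.stringify({ ok }));
  }
}

// Communication Thread side: REQ socket connected to the same endpoint.
async function askManager() {
  const req = new zmq.Request();
  req.connect("ipc:///tmp/zone-manager.sock");
  await req.send(JSON.stringify({ type: "canTrade", playerId: 1, npcId: 7 }));
  const [reply] = await req.receive();
  return JSON.parse(reply.toString());              // e.g. { ok: true }
}

runManager();
askManager().then(console.log);
```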
Describing in detail what all those socket types and patterns mean is definitely out of the scope of this post, but you can and should read more about it in the ZeroMQ Guide. There you can learn not only about all the socket types, but also about many different ways to connect your components and let them talk to each other. The one I mentioned is just a very simple one. Once you understand it, you can build arbitrary hierarchies. It's like LEGO ;-)
Hope it helped a bit, cheers!
If I was programming an HTTP server, why should I consider handling every HTTP connection in its own thread?
I've read plenty of arguments that event-driven HTTP servers are faster and more scalable than thread-driven ones. (For example, see Ars Technica on Nginx). And yet Apache, the world's most popular server, is thread-driven. Why? What are the advantages?
It's simple to code against.
Basically, without language support (such as what C# and VB will be getting in their next versions) and really good libraries, writing asynchronous code is hard. Not impossible, and no doubt there will be comments from those who are able to do it standing on their heads, but it's harder than the synchronous version. We're much better at thinking about code which just executes top to bottom than code which has to be reentrant, etc.
Depending on your platform, threads can be quite cheap these days, so with appropriate pooling, the thread-per-request model works pretty well for servers which don't need to handle that many requests at a time. It sucks for long polling, of course, or for servers which mostly need to delegate requests to other services which may take "a while" to come back (even if that's only a tenth of a second). The latter class of server should be able to handle huge request rates, as they're only doing minimal processing of their own; but if they hog a thread for each request simply to block while waiting for another service to come back, that can be very wasteful, particularly of memory.
I'm mostly looking for setup advice and pointers on how to go about this. I'll explain in as much detail as I can and also note possible approaches that may be plausible.
The aim of this is to create a real-time browser game. The best method I have found for my needs would be to use "long polling" with AJAX, which basically sets up a request with the server that will "hang there" until the server has something to send, then re-establishes the connection upon receipt to wait for more data. For my purposes this will handle a chat system as well as character movement, i.e. if a player enters the same area, the clients there will receive a response to inform them and thus update the browser client to show this.
The above is relatively easy to implement, and I have already made a test case for it; however, I want to improve on it. On the server side it runs a loop for X amount of time before it auto-times-out and sends back an empty string, so another connection can be made; this is to prevent infinite loops from using up resources in cases where they shouldn't. Instead of querying the database on each loop cycle for messages that need sending to the client (which I believe would be expensive), I use flat files: if a file has a modified timestamp newer than the last message sent to the client, then there is something new to send. However, I believe this would also be expensive (though not as much as a MySQL database?) when done a couple of times per second.
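The long-poll loop I'm describing has roughly this shape; it's sketched in Node rather than PHP purely for illustration, and the file name, poll interval, and timeout are made-up values:

```js
// Long-poll endpoint: hold the request open, watch a flat file's mtime, and
// reply as soon as it changes or the timeout expires (the client then reconnects).
const http = require("http");
const fs = require("fs");

const MESSAGE_FILE = "./messages.txt";  // hypothetical flat file written by the game
const TIMEOUT_MS = 30000;               // give up and return an empty body after 30 s
const POLL_MS = 250;                    // check the timestamp a few times per second

http.createServer((req, res) => {
  const url = new URL(req.url, "http://localhost");
  const since = Number(url.searchParams.get("since") || 0);  // client's last-seen mtime
  const started = Date.now();

  const timer = setInterval(() => {
    const mtime = fs.statSync(MESSAGE_FILE).mtimeMs;
    if (mtime > since) {
      clearInterval(timer);
      res.end(fs.readFileSync(MESSAGE_FILE, "utf8"));  // something new to send
    } else if (Date.now() - started > TIMEOUT_MS) {
      clearInterval(timer);
      res.end("");                                     // timed out; client re-polls
    }
  }, POLL_MS);
}).listen(8080);
```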
My thought process on this was to have a C++ program (for speed) constantly running, and use that for very fast in-memory lookups for new messages and so forth. This would also give me the added bonus of being able to have bots within the game that the server can control, for a more real-time feel/approach. However, I have no clue if this is even possible, and my searches on Google have been fruitless.
The approach I would most love to take is to continue to use PHP for the rendering and control of the page etc., and have the AJAX requests go to the C++ application (which will always be running) that can handle all the real-time aspects.
CGI defeats the purpose of the above approach, as it creates a new instance of the application on each request, which is both slow and exactly what I do not want; I have PHP for that and don't want to swap one perfectly working language for another that would supposedly be better suited. PHP, however (to my knowledge), can't keep things in memory (RAM) between requests and so forth.
Another approach that I have thought about is to use PHP sockets to connect to the C++ application, though I have no idea how feasible this may be. The C++ application would basically only need to control the bots (AI) and the chat system messages. I have absolutely no idea how to go about handling bots via PHP.
I hope this fully explains what my intentions and goals are, so if anyone has any pointers or advice then please reply and help me out; it would be very much appreciated. If you need any extra information (in case I didn't cover something, or didn't cover it well), I'll be happy to try to explain it better.
How fast do the reactions need to be? For anything approaching real-time action games, AJAX/Comet is going to be much too slow. The overhead is also really depressing.
The way forward for that kind of thing will probably be WebSocket, with a custom server on the backend. But I don't think that means you need to resort to C[++] for this; the bottleneck is most likely going to be the network and not server processor power.
I'm using a Python SocketServer with a trivial message replication system — all the game logic in my case is on the client-side, with some complicated JavaScript maintaining a consistent game world in the face of lag — but even for a more complex server-side I think a scripting language will probably be just fine.
WebSocket isn't ready yet; there are no mainstream browser implementations. In the meantime I'm using a Flash Socket backup that emulates the WebSocket interface. Flash Sockets have their own problems in that they fail to negotiate proxies, but they are fast and hopefully the need for them will diminish as WebSocket arrives properly.
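If you go the WebSocket route, the server-side relay really can be tiny. Here is a minimal sketch in Node using the ws package (the port and the broadcast-to-everyone-else behaviour are assumptions, and this is not the Python setup I described above):

```js
// Minimal WebSocket relay: every message a client sends is forwarded verbatim
// to all the other connected clients (the same "message replication" idea).
const { WebSocketServer, WebSocket } = require("ws");

const wss = new WebSocketServer({ port: 8081 });  // port is an arbitrary example

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```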
Reading your post sets alarm bells ringing.
How familiar are you with multi-threaded code? With C++? If the answer is "not very", then I fear you might be biting off quite a large chunk. Why not take advantage of some existing (tried and tested) Comet server implementations rather than this bare-bones approach? Whatever application you have in mind, it should be quite separate from the comms implementation.
As someone who has implemented such a server, I can tell you that it will take many design iterations and a helluva long time to get right. Testing such a product realistically is also a very tricky process.