I have a node cluster where the master responds to http requests.
The server also listens for websocket connections (via socket.io). A client connects to the server via that websocket. The client then chooses between various games (each game is handled by its own node process).
The questions I have are the following:
Should I open a new connection for each node process? How do I tell the client that it should connect to a specific node process X? (Because the server might handle incoming connection requests on its own.)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
What are the drawbacks if I just use one connection (in the master process) and pass the user's messages to the respective node processes and the processes' messages back to the user? (I feel it costs a lot of CPU to copy rather big objects when sending messages between the processes.)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
You can send a plain TCP socket to another node process as described in the node.js documentation for subprocess.send(). The basic idea is this:
const child = require('child_process').fork('child.js');
// `socket` is a plain net.Socket, e.g. from a net.Server 'connection' event
child.send('socket', socket);
Then, in child.js, you would have this:
process.on('message', (m, socket) => {
  if (m === 'socket') {
    // you have a socket here
  }
});
The 'socket' message identifier can be any message name you choose - it is not special. When you call child.send() and the data you pass is recognized as a socket, node.js uses platform-specific interprocess communication to share that socket with the other process.
But, I believe this only works for plain sockets that do not yet have any local state established other than the TCP state. I have not tried it with an established webSocket connection myself, but I assume it does not work, because once a webSocket has higher-level state associated with it beyond just the TCP socket (such as encryption keys), the OS will not automatically transfer that state to the new process.
Should I open a new connection for each node process? How do I tell the client that it should connect to a specific node process X? (Because the server might handle incoming connection requests on its own.)
This is probably the simplest way to get a socket.io connection to the new process. Make sure the new process listens on a unique port number and supports CORS; then, over the socket.io connection you already have between the master process and the client, send a message telling the client where to reconnect (which port number). The client listens for that message and makes a connection to that new destination.
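A minimal sketch of that handoff, assuming socket.io v3+ on both ends; the event names, the portForGame() lookup and the host name are made up for illustration:

// master process
const { Server } = require("socket.io");
const ioServer = new Server(3000, { cors: { origin: "*" } });

ioServer.on("connection", (socket) => {
  socket.on("choose-game", (game) => {
    // hypothetical lookup: which worker process / port handles this game
    socket.emit("reconnect-to", { port: portForGame(game) });
  });
});

// browser client (io comes from the socket.io client script)
const master = io("http://example.com:3000");
master.emit("choose-game", "chess");
master.on("reconnect-to", ({ port }) => {
  const gameSocket = io("http://example.com:" + port); // that worker must allow this origin (CORS)
  gameSocket.on("connect", () => master.disconnect());  // optionally drop the master link
});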
What are the drawbacks if I just use one connection (in the master process) and pass the user's messages to the respective node processes and the processes' messages back to the user? (I feel it costs a lot of CPU to copy rather big objects when sending messages between the processes.)
The drawbacks are as you surmise: your master process has to spend CPU cycles acting as a middleman, forwarding packets in both directions. Whether this extra work is significant depends entirely on your context and has to be determined by measurement.
Here's some more info I discovered. It appears that if an incoming socket.io connection that arrives at the master is immediately shipped off to a cluster child before the connection establishes its initial socket.io state, then this concept can work for socket.io connections too.
Here's an article on sending a connection to another server, with implementation code. The transfer is done immediately at connection time, so it should work for an incoming socket.io connection that is destined for a specific cluster child. The idea is that there is sticky assignment to a specific cluster process, and all incoming connections of any kind that reach the master are immediately transferred to the cluster child before they establish any state.
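A minimal sketch of that pattern, with a plain net server in the master and forked workers; the file names, the two-worker pool and the address-based selection rule are all placeholders:

// master.js
const net = require("net");
const { fork } = require("child_process");
const workers = [fork("worker.js"), fork("worker.js")];

// pauseOnConnect keeps data from flowing before the worker owns the socket
net.createServer({ pauseOnConnect: true }, (connection) => {
  // sticky assignment: derive a worker index from the client address
  const key = connection.remoteAddress || "";
  let hash = 0;
  for (const ch of key) hash += ch.charCodeAt(0);
  workers[hash % workers.length].send("sticky:connection", connection);
}).listen(8000);

// worker.js
const http = require("http");
const server = http.createServer(); // attach socket.io to this server here
server.listen(0, "localhost");      // private port; clients never hit it directly

process.on("message", (m, connection) => {
  if (m === "sticky:connection") {
    server.emit("connection", connection); // hand the raw socket to the http server
    connection.resume();                   // it was paused by pauseOnConnect
  }
});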
Related
I have a node server accepting websocket connections from the clients. Each client can broadcast a message to all of the other clients.
UPDATE: I am using https://github.com/websockets/ws as my library of choice.
At the moment, the server has an array with all of the connections. Each connection has a tabId. When one of the clients emits a message, I go through all of the connections and check: if a connection's tabId doesn't match the sender's, I send the message to that client.
For load reasons, I am facing the problem of having to run more than one server. So there will be, say, two servers, each with a number of clients.
How do I make sure that a message gets broadcast to all of the websocket clients, and not only the ones connected to the same server?
One possible solution I thought of is to store the connections in a database, where each record has the tabId and the serverId. However, even a simple broadcast gets tricky: messages to "local" sockets are easy to deliver (the socket is local and available), whereas messages to "remote" sockets would require inter-server communication.
Is there a good pattern to solve this? Surely, this is something that people face every day.
You could use a message queue like RabbitMQ.
When a client logs in to your server, create a consumer that listens on a queue dedicated to that particular client. When clients send messages, use a publisher to publish them to the recipient's queue.
This way it doesn't matter which node a client is on, and you don't need to know, or care whether clients jump from one node to another.
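A minimal sketch of that idea using amqplib (the RabbitMQ client) together with ws; it uses a fanout exchange for the broadcast case, and passing the tabId as a query parameter is an assumption made up for this example:

const amqp = require("amqplib");
const WebSocket = require("ws");

async function start() {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertExchange("broadcast", "fanout", { durable: false });

  const wss = new WebSocket.Server({ port: 8080 });

  wss.on("connection", async (socket, req) => {
    // assumption: the client connects with ws://host:8080/?tabId=...
    const tabId = new URL(req.url, "http://placeholder").searchParams.get("tabId");

    // one auto-deleting queue per client, bound to the shared exchange
    const { queue } = await channel.assertQueue("client." + tabId, { autoDelete: true });
    await channel.bindQueue(queue, "broadcast", "");

    const { consumerTag } = await channel.consume(queue, (msg) => {
      if (!msg) return;
      const { from, payload } = JSON.parse(msg.content.toString());
      if (from !== tabId) socket.send(payload); // skip the sender's own tab
      channel.ack(msg);
    });

    // anything this client sends is published to every bound queue,
    // no matter which server process the other clients are connected to
    socket.on("message", (data) => {
      channel.publish("broadcast", "", Buffer.from(JSON.stringify({ from: tabId, payload: data.toString() })));
    });

    socket.on("close", () => channel.cancel(consumerTag));
  });
}

start().catch(console.error);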
On the server side for websockets there is already a ping/pong implementation, where the server sends a ping and the client replies with a pong, to let the server know whether a client is still connected. But there isn't anything implemented in reverse to let the client know whether the server is still connected to it.
There are two ways to go about this that I have read about:

1. Every client sends a message to the server every x seconds; whenever an error is thrown when sending, that means the server is down, so reconnect.
2. The server sends a message to every client every x seconds; the client receives this message and updates a variable, and a timer on the client checks every x seconds whether that variable has changed. If it hasn't changed in a while, the client assumes it has lost the server and re-establishes the connection.
Either method lets the client figure out whether the server is still online. With the first, the clients send traffic to the server, whereas with the second the server sends traffic out to the clients. Both seem easy enough to implement, but I'm not sure which is the better way in terms of efficiency and cost.
Server upload speeds are higher than client upload speeds, but server CPUs are an expensive resource while client CPUs are relatively cheap. Offloading logic onto the client is the more cost-effective approach.
Having said that, servers must implement this specific logic (actually, all ping/timeout logic), otherwise they might be left with "half-open" sockets that drain resources but aren't connected to any client.
Remember that sockets (file descriptors) are a limited resource. Not only do they use memory even when no traffic is present, but they prevent new clients from connecting when the resource is maxed out.
Hence, servers must clear out dead sockets, either using timeouts or by implementing ping.
P.S.
I'm not a node.js expert, but this type of logic should be implemented using the WebSocket protocol's ping rather than by your application. You should probably look into your node.js server / websocket framework and check how to enable pinging.
You should set pings to accommodate your specific environment. I.e., if you host on Heroku, then Heroku enforces a timeout of ~55 seconds, and your pings should be sent before that timeout occurs.
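With the ws library (assumed here), a minimal server-side heartbeat sketch looks roughly like this (essentially the pattern its README documents); the 30-second interval is an arbitrary choice and should just stay below any platform timeout such as Heroku's ~55 seconds:

const WebSocket = require("ws");
const wss = new WebSocket.Server({ port: 8080 });

wss.on("connection", (socket) => {
  socket.isAlive = true;
  socket.on("pong", () => { socket.isAlive = true; }); // the client answered our last ping
});

// every 30 seconds, terminate sockets that never answered the previous ping
const interval = setInterval(() => {
  wss.clients.forEach((socket) => {
    if (!socket.isAlive) return socket.terminate();
    socket.isAlive = false;
    socket.ping();
  });
}, 30000);

wss.on("close", () => clearInterval(interval));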
I know that a server normally opens one port and listens on it.
Today I learned that there is a function called select on Unix-like systems. With select we can listen on multiple sockets.
I just can't imagine a case where we need to use select. If we have two sockets, it means we are listening on two ports, right? So I have a question:
What kind of server would open more than one port but receive and process the same type of requests?
Using select helps with handling reads and writes on multiple sockets, and these don't have to be multiple server sockets. The most typical use is multiplexing a large number of client sockets.
You have a server with one listening socket. Each time you accept a connection, you add the new client socket to the multiplexing pool. select then returns any time any of those sockets has data available to read. The big win is that you're doing all this with one thread.
You also get a socket for each connection that you've accepted on the listening (server) socket.
Selecting among these (client) sockets and the server socket (readable => a new connection to accept) allows you to write apps such as chat servers efficiently.
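Node.js hides that call inside its event loop (libuv sits on top of epoll/kqueue/select), but the resulting shape is the same single-threaded multiplexing: one process watching the listening socket plus every accepted client socket. A minimal chat-style sketch of that shape:

const net = require("net");
const clients = new Set();

net.createServer((socket) => {      // the listening socket became readable => a connection was accepted
  clients.add(socket);
  socket.on("data", (chunk) => {    // a client socket became readable => forward to everyone else
    for (const other of clients) {
      if (other !== socket) other.write(chunk);
    }
  });
  socket.on("close", () => clients.delete(socket));
  socket.on("error", () => clients.delete(socket));
}).listen(4000);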
Ummm... remember the difference between ports and sockets.
A "port" is like a telephone-number. But a single phone-number could be handling any number of "calls!"
A "socket," then, represents a single telephone-call: a currently active connection between this server and a particular client. Each connection, by definition, "takes place over a particular port," but any number of connections might exist at the same time.
(The "accept" operation corresponds to: picking up the phone.)
So, then, what select() buys you is the ability to monitor any number of sockets at one time. It examines all the sockets, waits (if necessary) for something to happen on any one of them, and then tells you which ones are ready. Now, the design of your server becomes "a simple loop": no matter how many sockets you're listening to, and no matter how many of them have data waiting, you handle them one at a time.
It's basically the case that "every server out there will use a select() loop at its heart, unless there's an exceptionally wonderful reason not to."
Take a look here:
One traditional way to write network servers is to have the main
server block on accept(), waiting for a connection. Once a connection
comes in, the server fork()s, the child process handles the connection
and the main server is able to service new incoming requests.
With select(), instead of having a process for each request, there is
usually only one process that "multi-plexes" all requests, servicing
each request as much as it can.
So one main advantage of using select() is that your server will only
require a single process to handle all requests. Thus, your server
will not need shared memory or synchronization primitives for
different 'tasks' to communicate.
One major disadvantage of using select(), is that your server cannot
act like there's only one client, like with a fork()'ing solution. For
example, with a fork()'ing solution, after the server fork()s, the
child process works with the client as if there was only one client in
the universe -- the child does not have to worry about new incoming
connections or the existence of other sockets. With select(), the
programming isn't as transparent.
http://www.lowtek.com/sockets/select.html
I'm running a Node.JS application involving heavy child process I/O. Due to the way Node.JS handles file descriptors (among other reasons), I want to fork a new V8 instance for every connection to the server. (Yes, I'm aware that this is a potentially expensive operation, but that's not the point of this question.)
I am using nssocket for my server, but this question should apply to other types of Node.JS servers (express, Socket.IO, etc) as well.
Right now I have:
var server = require("nssocket").createServer(function (socket) {
  // Do stuff with the new connection
}).listen(8000);
The intuitive thing to do is this:
// master.js
var child_process = require("child_process");

var server = require("nssocket").createServer(function (socket) {
  // Fork a new process to handle the connection
  child_process.fork("worker.js");
}).listen(8000);

// worker.js
// Do stuff with the new connection
However, then the child process won't have access to the socket variable.
I've read about the new cluster API in Node, but it doesn't look like it's designed for the case when you want every connection to spawn a new worker.
Any ideas?
The cluster API is probably closest to what you want. In theory you can call cluster.fork() at any time within the master process. Note that once the socket connection is established, there is afaik no way to hand it over to another process.
To forward the communication to the worker, you could use message passing (i.e. worker.send) or you could open another port in the worker process and direct the client there.
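A minimal sketch of the message-passing option, using a plain net server rather than nssocket; the file name worker.js and the message shapes are placeholders:

// master.js
var net = require("net");
var fork = require("child_process").fork;

net.createServer(function (socket) {
  var worker = fork("worker.js"); // one fresh process per connection
  socket.on("data", function (chunk) {
    worker.send({ type: "data", payload: chunk.toString() });
  });
  worker.on("message", function (msg) {
    if (msg.type === "reply") socket.write(msg.payload);
  });
  socket.on("close", function () { worker.kill(); });
}).listen(8000);

// worker.js
process.on("message", function (msg) {
  if (msg.type === "data") {
    // do the heavy per-connection work here, then answer the master
    process.send({ type: "reply", payload: msg.payload.toUpperCase() });
  }
});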
I should stress that running significantly more worker processes than CPU cores is probably not a good idea. Have you considered pooling the workers or using a work queue like Beanstalkd?
You can use the cluster module to fork workers, then use the IPC (inter-process communication) channel plus a messaging queue to pass objects between the master process and the workers. A good option would be ZMQ.
This is really basic but I am blanking right now.
I have a daemon process and would like to have multiple clients be able to talk to it. I would like a client to be able to start up and then using a shared library, essentially 'register' with the daemon process. The daemon process would spawn a thread off for this new client and provide a communication pipe between the client and new thread.
I am thinking of a Unix datagram socket as a 'registration channel' for all clients to use initially, and then switching over to a client-specific channel, but I cannot figure out how to create unique names for the new datagram sockets without setting them up a priori.
The server and clients are on the same machine. I prefer datagram sockets so I don't have to deal with breaking a stream up into messages.
I will be sending (very) high-rate, small messages back and forth.
You can entirely avoid the problem of naming the client sockets, if you wish. Each client can create a connected pair of sockets using socketpair(). The client then sends one of the socket descriptors to the server over your well-known "registration channel". The server and client then have a private, connected, unnamed pair of sockets for their communication.
The socket descriptor is sent to the server using sendmsg(), filling in the message's control data (an SCM_RIGHTS ancillary message).
These two answers have some relevant info/links:
How would I use a socket to have several processes communicate with a central process?
Sending file descriptor over UNIX domain socket, and select()
Basically I think you need to compromise and have a two-stage process, with a SOCK_STREAM socket as stage 1 and a SOCK_DGRAM socket as stage 2.
So it will be like this:

server:

1. create a SOCK_STREAM socket "my.daemon.handshake"
2. accept a client
3. send a randomly generated string XXX to the client and close the accepted socket
4. create a SOCK_DGRAM socket "my.daemon.XXX" and start processing it
5. repeat from (2)

client:

1. connect to the socket "my.daemon.handshake"
2. read to EOF -- get the value XXX
3. start communicating with the server on socket "my.daemon.XXX"
4. profit!!!!