Just a quick bit of background: I want to open two sockets per thread of the application. The main thread has the accept() call to accept a TCP connection. There are three other threads, and each of them also has an accept(). The problem is that in a multithreaded environment the client sometimes tries to connect before the server's accept call in a child thread, which results in a "connection refused" error. The client doesn't know when the server is ready to accept the connection.
I do not want the main thread's socket to send any control information to the client, such as "You can now connect to the server". To avoid this, I have two approaches in mind:
1. Set a maximum retry count (attempts) on the client side, so the client tries to connect several times before exiting with a connection refused error.
2. A separate thread on the server side whose only function is to accept connections, acting as a common accept point for all the threads' connections except for the main thread's.
I would really appreciate knowing if there is any other approach. Thanks.
Connection refused is not because you're calling accept late; it's because you're calling listen late. Make sure you call listen before any connect calls (you can check with strace). This probably requires that you listen before you spawn any children.
After you call listen on a socket, incoming connections will queue until you call accept. At some point the not-yet-accepted connections can get dropped, but this shouldn't happen with only 2 or 3 sockets.
If this is Unix, you can just use pipe2 or socketpair to create a pair of connected pipes/Unix domain sockets with a lot less code. Of course, you need to do this before spawning the child thread and pass one end to the child.
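For the Unix case, here is a minimal sketch of that idea (not a drop-in for your program; names are made up and error handling is trimmed): the connected pair is created before the worker thread exists, so there is no listen/accept/connect race for the client to lose.

/* Create a connected socket pair before spawning the worker thread,
 * so no listen/accept/connect race exists. Error handling is trimmed. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int fd = *(int *)arg;                 /* the worker's end of the pair */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("worker got: %s\n", buf);
    }
    close(fd);
    return NULL;
}

int main(void)
{
    int sv[2];
    pthread_t tid;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    pthread_create(&tid, NULL, worker, &sv[1]);   /* hand one end to the thread */

    const char *msg = "hello from main";
    write(sv[0], msg, strlen(msg));               /* main keeps the other end */
    close(sv[0]);

    pthread_join(tid, NULL);
    return 0;
}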
Related
I know that a server normally opens one port and listens on it.
Today I learned that Unix-like systems have a function called select. With select we can listen on multiple sockets.
I just can't imagine a case where we need to use select. If we have two sockets, it means that we are listening on two ports, right? So I have a question:
What kind of server would open more than one port but receive and process the same type of requests?
Using select helps with handling reads and writes on multiple sockets. It doesn't have to be multiple server sockets. The most typical use is for multiplexing a large number of client sockets.
You have a server with one listening socket. Each time you accept a connection, you add the new client socket to the multiplexing pool. select then returns any time any of those sockets has data available to read. The big win is that you're doing all this with one thread.
You also get a socket for each connection that you've accepted on the listening (server) socket.
Selecting among these (client) sockets and the server socket (readable => new connection) allows you to write apps such as chat servers efficiently.
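To make that concrete, here is a rough sketch of the pattern, assuming listen_fd is a socket that has already been bound and put into the listening state; error handling is trimmed and the per-client work is just an echo standing in for real chat logic.

/* Single-threaded select() loop: the listening socket and every accepted
 * client socket go into one fd_set; readability on the listening socket
 * means "new connection". */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listen_fd)                        /* assumed: already listening */
{
    fd_set master, readable;
    int maxfd = listen_fd;

    FD_ZERO(&master);
    FD_SET(listen_fd, &master);

    for (;;) {
        readable = master;                       /* select() modifies its argument */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            break;

        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;

            if (fd == listen_fd) {               /* readable listener => new client */
                int client = accept(listen_fd, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > maxfd)
                        maxfd = client;
                }
            } else {                             /* data from an existing client */
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {                    /* client closed or error */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    /* echo back; a chat server would broadcast instead */
                    send(fd, buf, (size_t)n, 0);
                }
            }
        }
    }
}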
Ummm... remember the difference between ports and sockets.
A "port" is like a telephone-number. But a single phone-number could be handling any number of "calls!"
A "socket," then, represents a single telephone-call: a currently active connection between this server and a particular client. Each connection, by definition, "takes place over a particular port," but any number of connections might exist at the same time.
(The "accept" operation corresponds to: picking up the phone.)
So, then, what select() buys you is the ability to monitor any number of sockets at one time. It examines all the sockets, waits (if necessary) for something to happen on any one of them, and returns one message to you. Now, the design of your server becomes "a simple loop." No matter how many sockets you're listening to, and no matter how many of them have messages waiting, select() will return messages to you one at a time.
It's basically the case that "every server out there will use a select() loop at its heart, unless there's an exceptionally wonderful reason not to."
Take a look here:
One traditional way to write network servers is to have the main server block on accept(), waiting for a connection. Once a connection comes in, the server fork()s, the child process handles the connection and the main server is able to service new incoming requests.
With select(), instead of having a process for each request, there is usually only one process that "multi-plexes" all requests, servicing each request as much as it can.
So one main advantage of using select() is that your server will only require a single process to handle all requests. Thus, your server will not need shared memory or synchronization primitives for different 'tasks' to communicate.
One major disadvantage of using select() is that your server cannot act like there's only one client, like with a fork()'ing solution. For example, with a fork()'ing solution, after the server fork()s, the child process works with the client as if there was only one client in the universe -- the child does not have to worry about new incoming connections or the existence of other sockets. With select(), the programming isn't as transparent.
http://www.lowtek.com/sockets/select.html
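For contrast with the select() loop shown earlier, a bare-bones sketch of the traditional fork()-per-connection server that the article describes might look like this (listen_fd and serve_forking are illustrative names; error handling and SIGCHLD reaping are omitted):

/* Traditional fork()-per-connection server: the child deals with exactly
 * one client and never touches the listening socket again. */
#include <sys/socket.h>
#include <unistd.h>

void serve_forking(int listen_fd)        /* assumed: already bound + listening */
{
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;

        if (fork() == 0) {               /* child: this client is its whole world */
            close(listen_fd);
            char buf[512];
            ssize_t n;
            while ((n = recv(client, buf, sizeof(buf), 0)) > 0)
                send(client, buf, (size_t)n, 0);   /* simple echo as placeholder */
            close(client);
            _exit(0);
        }
        close(client);                   /* parent: drop its copy, accept again */
    }
}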
I have a node cluster where the master responds to http requests.
The server also listens for websocket connections (via socket.io). A client connects to the server via the said websocket. Now the client chooses between various games (with each node process handling a game).
The questions I have are the following:
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
You can send a plain TCP socket to another node process as described in the node.js doc here. The basic idea is this:
const child = require('child_process').fork('child.js');
child.send('socket', socket);
Then, in child.js, you would have this:
process.on('message', (m, socket) => {
    if (m === 'socket') {
        // you have a socket here
    }
});
The 'socket' message identifier can be any message name you choose - it is not special. node.js has code so that when you use child.send() and the data you are sending is recognized as a socket, it uses platform-specific interprocess communication to share that socket with the other process.
But, I believe this only works for plain sockets that do not yet have any local state established other than the TCP state. I have not tried it with an established webSocket connection myself, but I assume it does not work for that, because once a webSocket has higher-level state associated with it beyond just the TCP socket (such as encryption keys), the OS will not automatically transfer that state to the new process.
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own)
This is probably the simplest means of getting a socket.io connection to the new process. If you make sure that your new process is listening on a unique port number and that it supports CORS, then you can just take the socket.io connection you already have between the master process and the client and send a message to the client on it that tells the client where to reconnect to (what port number). The client can then contain code to listen for that message and make a connection to that new destination.
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
The drawbacks are as you surmise. Your master process just has to spend CPU energy being the middle man forwarding packets both ways. Whether this extra work is significant to you depends entirely upon the context and has to be determined by measurement.
Here's some more info I discovered. It appears that if an incoming socket.io connection arriving on the master is immediately shipped off to a cluster child before the connection establishes its initial socket.io state, then this concept could work for socket.io connections too.
Here's an article on sending a connection to another server with implementation code. This appears to be done immediately at connection time so it should work for an incoming socket.io connection that is destined for a specific cluster. The idea here is that there's sticky assignment to a specific cluster process and all incoming connections of any kind that reach the master are immediately transferred over to the cluster child before they establish any state.
I have a server where I handle multiple clients. Each client that connects to it is serviced in its own thread. Now, if any errors occur on the server side, I want to exit that thread by calling pthread_exit and terminate the client that was being serviced by that thread. However, when I try to do so, my client hangs. This also causes other clients in different threads to hang. I called pthread_exit in a random spot to test it...
Most likely the problem is that you are not calling close(newsockfd) before you call pthread_exit(). If so, then your server-thread goes away, but the socket that it was using to communicate with the client remains open, even though the server is no longer doing anything with it. Then the client's outgoing TCP buffer fills up, and the client waits indefinitely for the server to recv() more data from the socket, which never happens.
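A sketch of what that fix could look like, assuming a per-client handler thread that owns newsockfd (the function name and error flag here are illustrative, not taken from the original code):

/* Per-client handler thread: close the client socket before the thread
 * exits, so the client sees EOF instead of hanging on a half-open socket. */
#include <pthread.h>
#include <unistd.h>

void *client_handler(void *arg)
{
    int newsockfd = *(int *)arg;
    int error = 0;

    /* ... service the client, set error on failure ... */

    if (error) {
        close(newsockfd);        /* client's recv() now returns 0 (EOF) instead of blocking */
        pthread_exit(NULL);      /* only this thread terminates */
    }

    close(newsockfd);            /* normal path: still close before returning */
    return NULL;
}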
I encounter this problem while trying to do TCP tunnelling between two threads.
Thread 1:
    listen at Port
    accept
    then add the sock after accept to epoll_ctl
    while (1)
        epoll_wait
        read whatever from Port to remote (tunnelling)
Thread 2:
    connect to Port
    if connected
        communicate...
What I actually observe is: while Thread 2 is blocked on connect, Thread 1 has no chance to run epoll_wait and send the connect info to the remote. Thus neither thread can make progress.
One possible solution is to use parent-child processes instead of multithreading. But before I switch to that, could it still be done with multithreading? I think what is needed here is some kind of interrupt mechanism rather than just polling. Right?
Thank you for the insight.
You can add the server-side socket descriptor into epoll_ctl as well. But I'm curious: if thread 2 is blocked on connect, what information do you need to send to the server? Thanks for your hint.
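Along the lines of that comment, one possible sketch is to register the listening socket itself with epoll, so the tunnelling thread never blocks in accept() and keeps servicing already-accepted sockets in the same loop; the function and variable names here are made up for illustration and error handling is trimmed:

/* Put the listening socket into the epoll set: a readable listening fd
 * means "new connection", everything else is tunnel traffic. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void tunnel_loop(int listen_fd)              /* assumed: already listening */
{
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[16];
        int n = epoll_wait(ep, events, 16, -1);

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;

            if (fd == listen_fd) {           /* new connection: accept via epoll */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(ep, EPOLL_CTL_ADD, conn, &cev);
            } else {                         /* data on an accepted socket */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    /* forward buf to the remote end of the tunnel here */
                }
            }
        }
    }
}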
I'm trying to understand how the events in a BSD socket interface translate to the state of a TCP connection. In particular, I'm trying to understand at what stage in the connection process accept() returns on the server side:
client sends SYN
server sends SYN+ACK
client sends ACK
In which one of these steps does accept() return?
accept returns when the connection is complete. The connection is complete after the client sends his ACK.
accept gives you a socket on which you can communicate. Of course you know, you can't communicate until the connection is established. And the connection can't be established before the handshake.
It wouldn't make sense to return before the client sends his ACK. It is entirely possible he won't say anything after the initial SYN.
The TCP/IP stack code in the kernel normally[1] completes the three-way handshake entirely without intervention from any user space code. The three steps you list all happen before accept() returns. Indeed, they may happen before accept() is even called!
When you tell the stack to listen() for connections on a particular TCP port, you pass a backlog parameter, which tells the kernel how many connections it can silently accept on behalf of your program at once. It is this queue that is used when the kernel automatically accepts new connection requests, and it is there that they are held until your program gets around to accept()ing them. When there are one or more connections in the listen backlog queue when you call accept(), all that happens is that the oldest is removed from the queue and bound to a new socket.[2]
In other words, if your program calls listen(sd, 5), then goes into an infinite do-nothing loop so that it never calls accept(), five concurrent client connection requests will succeed, from the clients' point of view. A sixth connection request will get stalled on the first SYN packet until either the program owning the TCP port calls accept() or one of the other clients drops its connection.
[1] Firewall and other stack modifications can change this behavior, of course. I am speaking here only of default BSD sockets stack behavior.
[2] If there are no connections waiting in the backlog when you call accept(), it blocks by default, unless the listener socket was set to non-blocking, in which case it returns -1 and errno is EWOULDBLOCK.
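A small self-contained sketch of both points: listen() with a backlog lets the kernel finish handshakes before accept() is ever called, and a non-blocking listener makes accept() fail with EWOULDBLOCK/EAGAIN when the backlog queue is empty. The port number is arbitrary and error handling is trimmed.

/* The kernel completes handshakes for queued connections during sleep();
 * accept() then just dequeues one, or fails with EWOULDBLOCK if none exist. */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);

    bind(sd, (struct sockaddr *)&addr, sizeof(addr));
    listen(sd, 5);                         /* kernel may now finish up to 5 handshakes */

    fcntl(sd, F_SETFL, O_NONBLOCK);        /* make accept() non-blocking */

    sleep(10);                             /* clients connecting now still succeed */

    int client = accept(sd, NULL, NULL);   /* pulls the oldest queued connection */
    if (client < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
        printf("backlog queue was empty\n");
    else if (client >= 0)
        printf("got a connection that completed before accept()\n");

    close(sd);
    return 0;
}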