I have a server that handles multiple clients, and each client that connects is serviced in its own thread. Now, if any error occurs on the server side, I want to exit that thread by calling pthread_exit and terminate the client that was being serviced by it. However, when I try to do so, my client hangs, and this also causes clients in other threads to hang. I called pthread_exit in a random spot to test it...
Most likely the problem is that you are not calling close(newsockfd) before you call pthread_exit(). If so, then your server-thread goes away, but the socket that it was using to communicate with the client remains open, even though the server is no longer doing anything with it. Then the client's outgoing TCP buffer fills up, and the client waits indefinitely for the server to recv() more data from the socket, which never happens.
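As an illustration, here is a minimal sketch of a per-client thread function that closes the socket before exiting the thread; the names (client_fd, handle_client) are placeholders rather than anything from the original question:

```c
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

/* Placeholder for the real per-client work; returns -1 on error. */
static int handle_client(int fd)
{
    char buf[128];
    return read(fd, buf, sizeof(buf)) < 0 ? -1 : 0;
}

static void *client_thread(void *arg)
{
    int client_fd = *(int *)arg;
    free(arg);                  /* assumes the fd was malloc'd by the accepting code */

    if (handle_client(client_fd) < 0) {
        close(client_fd);       /* close first, so the client sees EOF instead of hanging */
        pthread_exit(NULL);     /* now it is safe to leave the thread */
    }

    close(client_fd);           /* normal path: the socket still has to be closed */
    return NULL;
}
```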
I have a question about how the server.listen() method keeps a Node process running. Is there some kind of setInterval inside it?
I have read the answer in the post "How does `server.listen()` keep the node program running", but I still didn't understand it.
Could anyone explain it to me? Thanks.
Internally, node.js (through libuv) keeps a counter of the number of open resources that are supposed to keep the process running. It's not only timers that count here: any open TCP socket or listening server counts too, as do other asynchronous operations such as an in-progress file I/O operation. You can see calls in the node.js source to uv_ref() and uv_unref(); that's how code internal to node.js marks resources that should keep the process running, or releases them when it's done with them.
Whenever the event loop is empty, meaning there is no pending event to run, node.js checks this counter in libuv, and if it's zero it exits the process. If it's not zero, then something is still open that is supposed to keep the process running.
So, let's suppose you have an idle server with a listening socket and an otherwise empty event loop. The libuv counter is non-zero, so node.js does not exit the process. Now some client tries to connect to your server. At the lowest level, the TCP interface of the OS notifies some native code in node.js that a client has just connected to your server. That native code packages this up into a node.js event and adds it to the node.js event queue, which causes libuv to wake up and process that event. It pulls the event from the queue and calls the JS callback associated with it, causing some JS code in node.js to run. That code ends up emitting an event on the server object (which is an EventEmitter); the JS code monitoring that server receives it and can then start processing the incoming request.
So, at the lowest level, there is native code built into the TCP support in node.js that uses the OS-level TCP interface to get told by the OS that an incoming connection to your server has just been received. That gets translated into an event in the node.js event queue which causes the interpreter to run the Javascript callback associated with that event.
When that callback is done, node.js will again check the counter to see if the process should exit. Assuming the server is still running and has not had .unref() called on it (which removes it from the counter), node.js will see that there are still things running and that the process should not exit.
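As a small illustration (the port number is arbitrary), a plain net server is enough to keep the process alive, and calling server.unref() takes it back out of that count:

```js
const net = require('net');

// The listening server is what keeps this process running even though
// the event loop is otherwise empty.
const server = net.createServer((socket) => socket.end('hello\n'));

server.listen(3000, () => {
  console.log('listening on port 3000');

  // Uncommenting the next line removes the server from libuv's
  // "keep the process alive" count, so the process would exit as soon
  // as the event loop has nothing left to do.
  // server.unref();
});
```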
It's kept running through the event loop.
Every time the event loop looks for pending operations, the server.listen() operation comes forward.
I need to perform a cleanup action when the Node process exits.
I know I can use process.on('exit'), process.on('uncaughtException' / 'unhandledRejection'), and process.on('<signal>') to listen for such events.
However, I would rather not bind to any of these, for the following reasons:
If I bind to process.on('exit'), I should not be doing any async operations, which is not reliable when you need to write out a message over a socket or a pipe (afaik).
If I bind to process.on('some exceptions'), I have to print the error like a normal node process. I don't want to reinvent the wheel.
If I bind to process.on('SIGINT'), I have to exit the process manually. What if someone else is listening too, or what if I can't know the exit status of the node process?
Any idea what I should do?
In my particular use case I will spawn a server, and when Node exits, I need to send it a signal to tell it to quit. And I would like this to be as transparent as possible for the module-consumer.
Thanks!
Just let your client process hang up.
In your use case, you mention that you have a socket going from your nodejs module to your server, and you want to kill the server when the module exits because the server has nothing else to do. No matter which way you spin it, there is no foolproof way for a client to tell a server that it's done with it by sending a message (since the client can die unexpectedly), but when the client dies, any connections it has opened will close (it might take a few minutes in the case of an unhandled exception, but they will eventually close).
As long as you keep the socket open, the server will be able to tell that the connection is still alive. No traffic need actually pass between the two programs, but the server can tell all the same. And when the socket is closed (because the other end has exited), the corresponding socket on the server will emit its 'end' event. At that point, the server can clean up resources, and its now-empty event loop will automatically shut it down.
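A minimal sketch of that pattern (the port is arbitrary): the server holds nothing besides the listening socket and the client connection, cleans up on 'end', and then lets the now-empty event loop end the process:

```js
const net = require('net');

const server = net.createServer((socket) => {
  // When the client process exits, its end of the connection closes and
  // this socket emits 'end' (or 'error' if the close was abrupt).
  socket.on('end', () => {
    console.log('client went away, cleaning up');
    server.close();               // stop listening; nothing is left to keep us alive
  });
  socket.on('error', () => socket.destroy());
});

server.listen(3000);
```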
I'm working with a Node library that doesn't explicitly close sockets after it's done with them. Instead it tries to clean up by deleting its references to the sockets and letting them be garbage collected.
Googling is failing me: I don't think it is possible for the GC to clean up unclosed sockets. That is, I think any socket descriptors will still be in use from the OS's perspective.
Additionally, assuming that I, as the library consumer, have access to the socket objects, what is the best way for me to close them? I have played around with end(), close(), and destroy() with limited success. Sometimes they seem to block in perpetuity (end/destroy), and other times it seems like the callback is never called (close).
It could be that your socket sent a FIN packet and is hanging on the connection while waiting for the other end to send its own FIN back. If the socket on the other side is not closed properly, yours won't receive any packet and will hang forever.
Actually, end() sends a FIN packet but does not shut down the socket.
A possible solution is to call end(), wait a while by means of setTimeout, and then explicitly destroy() the socket. This won't affect your socket if the other end has closed the connection correctly; otherwise it will force the shutdown, and all resources should be released.
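For example, a small sketch of that approach (the 5-second grace period is an arbitrary choice):

```js
// Try a graceful close first, then force the socket down if the other
// end never finishes the shutdown handshake.
function closeSocket(socket) {
  socket.end();                          // send our FIN

  const timer = setTimeout(() => {
    socket.destroy();                    // peer never answered: force release
  }, 5000);

  // If the close completes normally, cancel the fallback.
  socket.once('close', () => clearTimeout(timer));
}
```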
Just some quick background: I want to open two sockets per thread of the application. The main thread has an accept() call to accept a TCP connection, and there are three other threads that also call accept(). The problem is that, in this multithreaded environment, the client sometimes tries to connect before the server's accept call in a child thread has happened, which results in a "connection refused" error. The client doesn't know when the server is ready to accept connections.
I do not want the main thread's socket to send any control information to the client like "You can now connect to the server". To avoid this, I have two approaches in mind:
1. Set a maximum number of connection attempts on the client side before exiting with a connection-refused error.
2. Use a separate thread whose only job is to accept connections on the server side, acting as a common accept point for all thread connections except the main thread's.
I would really appreciate knowing if there is any other approach. Thanks.
Connection refused is not because you're calling accept late, it's because you're calling listen late. Make sure you call listen before any connect calls (you can check with strace). This probably requires that you listen before you spawn any children.
After you call listen on a socket, incoming connections will queue until you call accept. At some point not-yet-accepted connections can get dropped, but this shouldn't occur with only 2 or 3 sockets.
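To illustrate that ordering, here is a rough sketch (port 5000, the backlog, and the worker count are arbitrary choices, and error checking is omitted): the main thread binds and listens before any worker threads exist, so a client that connects before a worker reaches accept() simply waits in the backlog:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;
    /* Connections that arrived earlier are already queued; accept() just
     * hands the oldest one to this thread. */
    int client_fd = accept(listen_fd, NULL, NULL);
    if (client_fd >= 0)
        close(client_fd);
    return NULL;
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));

    listen(listen_fd, 8);   /* listening before any thread is spawned */

    pthread_t tids[3];
    for (int i = 0; i < 3; i++)
        pthread_create(&tids[i], NULL, worker, &listen_fd);
    for (int i = 0; i < 3; i++)
        pthread_join(tids[i], NULL);

    close(listen_fd);
    return 0;
}
```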
If this is Unix, you can just use pipe2 or socketpair to create a pair of connected pipes/Unix-domain sockets with a lot less code. Of course, you need to do this before spawning the child thread and pass one end to the child.
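A rough sketch of the socketpair() alternative (all names here are placeholders): the connected pair is created before the thread exists, so there is no listen/connect race at all:

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void *child(void *arg)
{
    int fd = *(int *)arg;
    char buf[16];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* read from our end of the pair */
    if (n > 0) {
        buf[n] = '\0';
        printf("child got: %s\n", buf);
    }
    close(fd);
    return NULL;
}

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    pthread_t tid;
    pthread_create(&tid, NULL, child, &sv[1]);    /* hand one end to the child */

    write(sv[0], "hello", 5);                     /* parent talks over the other end */
    close(sv[0]);

    pthread_join(tid, NULL);
    return 0;
}
```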
I'm trying to understand how the events in a BSD socket interface translate to the state of a TCP connection. In particular, I'm trying to understand at what stage in the connection process accept() returns on the server side:
1. client sends SYN
2. server sends SYN+ACK
3. client sends ACK
In which one of these steps does accept() return?
accept() returns when the connection is complete, and the connection is complete after the client sends its ACK.
accept() gives you a socket on which you can communicate. Of course, you can't communicate until the connection is established, and the connection can't be established before the handshake.
It wouldn't make sense for it to return before the client sends its ACK; it is entirely possible the client won't say anything after the initial SYN.
The TCP/IP stack code in the kernel normally[1] completes the three-way handshake entirely without intervention from any user space code. The three steps you list all happen before accept() returns. Indeed, they may happen before accept() is even called!
When you tell the stack to listen() for connections on a particular TCP port, you pass a backlog parameter, which tells the kernel how many connections it can silently accept on behalf of your program at once. It is this queue that is used when the kernel automatically accepts new connection requests, and it is there that they are held until your program gets around to accept()ing them. When there are one or more connections in the listen backlog queue when you call accept(), all that happens is that the oldest is removed from the queue and bound to a new socket.[2]
In other words, if your program calls listen(sd, 5), then goes into an infinite do-nothing loop so that it never calls accept(), five concurrent client connection requests will succeed, from the clients' point of view. A sixth connection request will get stalled on the first SYN packet until either the program owning the TCP port calls accept() or one of the other clients drops its connection.
[1] Firewall and other stack modifications can change this behavior, of course. I am speaking here only of default BSD sockets stack behavior.
[2] If there are no connections waiting in the backlog when you call accept(), it blocks by default, unless the listener socket was set to non-blocking, in which case it returns -1 and errno is EWOULDBLOCK.
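To illustrate (the port, backlog, and sleep are arbitrary), even a program that waits a long time before calling accept() will still have its first few clients connect successfully, because the kernel completes their handshakes and parks them in the backlog:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sd, (struct sockaddr *)&addr, sizeof(addr));

    listen(sd, 5);   /* kernel may complete up to ~5 handshakes on our behalf */

    sleep(30);       /* clients connecting during this sleep still succeed */

    /* accept() just pops the oldest already-established connection,
     * or blocks until one arrives. */
    int client = accept(sd, NULL, NULL);
    if (client >= 0)
        close(client);
    close(sd);
    return 0;
}
```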