I'm establishing a TCP client socket connection to an XMPP server and need a reliable way to detect interruptions in the connection (e.g. server crashes, restarts, etc.). I have listeners attached to the end, error, and close events, but they do not fire reliably when I cut my internet connection during an active connection. How can my client detect when the connection has been broken? I would prefer not to resort to pinging/timeouts.
I'm in no way an expert on TCP or socket programming, but I'm pretty sure that there exists no "reliable way to detect interruptions in the connection". See e.g. this Unix.com thread.
In Node, your options seem to be socket.setTimeout (paired with socket.on('timeout', callback)) and/or socket.setKeepAlive.
Edit: Here is a guide on TCP keepalive.
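For example, a minimal Node sketch combining both (the host, port, and intervals are placeholders, not recommendations):

```js
// Minimal sketch: detect a dead peer with an idle timeout plus TCP keepalive.
const net = require('net');

const socket = net.connect({ host: 'xmpp.example.com', port: 5222 });

// Ask the kernel to start probing after 60s of silence (exact behavior is
// platform-dependent and tunable at the OS level).
socket.setKeepAlive(true, 60 * 1000);

// Treat 30s of total inactivity as a suspected dead connection.
socket.setTimeout(30 * 1000);
socket.on('timeout', () => {
  console.warn('No traffic for 30s; assuming the connection is broken');
  socket.destroy(); // fires 'close'
});

socket.on('error', (err) => console.error('Socket error:', err.message));
socket.on('close', (hadError) => {
  console.log('Connection closed' + (hadError ? ' after an error' : ''));
  // Reconnect logic would go here.
});
```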
Related
I'm having difficulty finding a robust socket library for doing local tcp socket connections in node.js.
I'm a big fan of using libraries like SockJS or Socket.io for client/server socket connections, but I know those use WebSockets, which are different from regular sockets.
I'm wondering whether I could use a WebSocket library for local connections with performance similar to regular sockets, or whether that would introduce a lot of undesired networking overhead.
Basically I want to achieve these three things with sockets, and I don't think the native networking module can do them out of the box:
Monitor the health of each socket in its pool (alive or dead).
Attach an id to each socket so you know where data is coming from.
Reassemble complete messages from the chunks sent through the sockets.
WebSockets provide a TCP-like connection, but one that actually runs on top of an established HTTP(S) connection (which itself runs within a TCP connection). This means:
There is additional overhead: all data gets put into special frames, and the HTTP connection establishment happens on top of the normal TCP connection establishment.
They are not compatible with normal sockets, i.e. you need a WebSocket-aware peer on the other side of the connection.
Apart from that, they add no additional reliability or features to the underlying TCP connection. For example, your three requirements are already possible with normal sockets.
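To illustrate, here is a hedged sketch of those three requirements using only the built-in net module (the newline-delimited framing and the port are assumptions; adapt them to your protocol):

```js
const net = require('net');

let nextId = 0;
const pool = new Map(); // id -> { socket, buffer }

const server = net.createServer((socket) => {
  const id = nextId++;                    // (2) attach an id to each socket
  const entry = { socket, buffer: '' };
  pool.set(id, entry);

  socket.setKeepAlive(true, 30 * 1000);   // (1) let TCP probe idle peers

  socket.on('data', (chunk) => {          // (3) rebuild messages from chunks
    entry.buffer += chunk.toString('utf8');
    let nl;
    while ((nl = entry.buffer.indexOf('\n')) !== -1) {
      const message = entry.buffer.slice(0, nl);
      entry.buffer = entry.buffer.slice(nl + 1);
      console.log(`message from socket ${id}:`, message);
    }
  });

  socket.on('error', () => { /* 'close' fires next */ });
  socket.on('close', () => pool.delete(id)); // (1) dead sockets leave the pool
});

server.listen(9000);
```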
I am facing the following situation:
I have several devices (embedded devices running Arch Linux) and I would like to have administration access to each device at any time. The problem is that the devices are behind a NAT, so establishing a connection from a server to a device is not possible. How could I achieve this?
I thought I could write a simple service running on the device that opens a connection to a server at startup. This TCP connection remains open and can be used by the server to administer the device. But is it a good idea to keep TCP connections open for a long time? If I have a lot of devices, for example 1000, will I have a problem on the server side with 1000 open TCP connections?
Is there maybe another way?
Thanks a lot!
But is it a good idea to keep TCP connections open for a long time?
It's not necessarily a bad idea, although in practice the connections will fail from time to time (e.g. due to network reconfiguration, temporary network outages, etc.), so your clients should contain logic to reconnect automatically when this happens. Also note that TCP will usually not detect when a completely idle TCP connection no longer has connectivity, so to avoid "zombie connections" that aren't actually connected, you may want to either enable SO_KEEPALIVE, or have your clients and/or server send the (very occasional) bit of dummy data on the socket, just to goose the TCP stack into checking whether connectivity still exists.
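A rough sketch of that device-side logic in Node (the server address, port, and retry delay are assumptions):

```js
// Hold one TCP connection open, rely on keepalive to detect zombie links,
// and reconnect automatically when the link drops.
const net = require('net');

function connect() {
  const socket = net.connect({ host: 'admin.example.com', port: 4000 });

  socket.setKeepAlive(true, 60 * 1000); // kernel-level probes for dead links

  socket.on('connect', () => console.log('connected to the admin server'));
  socket.on('data', (command) => {
    // Handle administration commands from the server here.
  });

  socket.on('error', () => { /* 'close' fires next */ });
  socket.on('close', () => {
    console.log('connection lost; retrying in 10s');
    setTimeout(connect, 10 * 1000); // the automatic-reconnect logic
  });
}

connect();
```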
If i have a lot of devices, for example 1000, will i have a problem on the server side with 1000 open TCP connections?
Scaling is definitely an issue you'll need to think about. For example, select() is typically implemented to only handle up to a fixed number of connections (often 1024), or if your server is using the thread-per-connection model, you'd find that a process with 1000+ threads is not very efficient. Check out the c10k problem article for lots of interesting details about various approaches and how well they scale up (or don't).
Is there maybe another way?
If you don't need immediate access to the clients, you could always have them check in periodically instead (e.g. once every 5 minutes); or you could have them occasionally send a UDP packet to the server instead of keeping a TCP connection open all the time, just to let the server know they are present, and have the server indicate to them somehow (e.g. by updating a well-known web page that the clients poll from time to time) when it wants one of them to open a full TCP connection. Or just use multiple servers to share the load.
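As an illustration of the UDP check-in idea, a minimal Node sketch (the port, address, and message format are all assumptions; both halves are shown in one place for brevity):

```js
const dgram = require('dgram');

// Device side: announce presence every 5 minutes.
const client = dgram.createSocket('udp4');
setInterval(() => {
  const msg = Buffer.from(JSON.stringify({ deviceId: 'device-42' }));
  client.send(msg, 41234, 'admin.example.com');
}, 5 * 60 * 1000);

// Server side: remember when each device last checked in.
const lastSeen = new Map();
const server = dgram.createSocket('udp4');
server.on('message', (msg, rinfo) => {
  const { deviceId } = JSON.parse(msg.toString());
  lastSeen.set(deviceId, { at: Date.now(), address: rinfo.address });
});
server.bind(41234);
```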
The only limit I know of is imposed by state tracking in the iptables code. Check the value of net.ipv4.netfilter.ip_conntrack_max on both sides if you're using this to make sure you have enough headroom for other activities.
If you set the socket option SO_KEEPALIVE before the connect() call, the kernel will send TCP keepalives to make sure the far end is still there. This will mean that connections won't linger forever in the event of a reboot.
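In Node terms, the rough equivalent is net.Socket#setKeepAlive (the address and the 60s initial delay below are placeholders):

```js
const net = require('net');

const socket = net.connect(4000, 'admin.example.com');
socket.setKeepAlive(true, 60 * 1000); // applied once the socket is connected
```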
I have a server socket which listens for clients. This server runs in an infinite loop.
After each connected client is processed, the connected socket is closed. Should I use setsockopt on the file descriptor of the connected socket for reusability? Since the server socket's file descriptor is never closed, I want that socket to exist all the time.
Also, I am assuming that a listening server socket blocks until a new client establishes a connection, and therefore is not using up memory. Isn't that right? Please help.
Thanks,
If you are thinking about SO_REUSEADDR, it doesn't let you reuse the same socket for a new connection. Also, I don't think this is going to buy you much: creating a new fd/socket isn't much of a task, and you will hit other bottlenecks before this one.
But you can optimize by not closing the connection (at the server as well as the client) so that the same client can communicate over that connection for subsequent requests. This will reduce your connection setup time.
Yes, by default the listening socket is blocking, so the accept call will block. Also, this won't use much memory. You can make it non-blocking and use poll or select to detect new incoming connections.
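For comparison, in Node (used elsewhere in this thread) the event loop already does the poll/select-style multiplexing for you; a minimal sketch of the keep-the-connection-open optimization, where the port and trivial echo protocol are placeholders:

```js
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    // Handle one request, reply, and keep the socket open for the next one.
    socket.write(`echo: ${chunk}`);
  });
  socket.on('error', () => socket.destroy());
});

server.listen(9000);
```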
You don't appear to know what SO_REUSEADDR is for. It doesn't have the magical properties you are attributing to it. The socket will exist until you close it. SO_REUSEADDR isn't required for any socket descriptor in most circumstances. If you're not getting bind errors you don't need it at all.
Is it possible in Node.js to "drop" a connection in such a way that
The client never receives a response (200, 404 or otherwise)
The client is never notified that the connection is terminated (never receives connection reset or end of stream)
The server's resources are released (the server should not attempt to maintain the connection in any way)
I am specifically asking about Node.js HTTP servers (which are really just complex TCP servers) on Solaris, but if there are cases on other OSes (Windows, Linux) or programming languages (C/C++, Java) that permit this, I am interested.
Why do I want this?
To annoy or slow down (possibly single-threaded) robots such as phpMyAdmin Probe.
I know this is not really something that matters, but these types of questions can better help me learn the boundaries of my programs.
I am aware that the client host is likely to retransmit the packets of the connection, since I never send a reset.
These are not possible in a generic TCP stack (vs. your own custom TCP stack). The reasons are:
Closing a socket sends a RST (or at least a FIN; if unread data is pending on the socket, as is typical when you're ignoring a request, the close does trigger a RST)
Even if you avoid sending a RST, the client continues to think the connection is open while the server has closed the connection. If the client sends any packet on this connection, the server is going to send a RST.
You may want to explore firewalling these robots, blocking or rate-limiting their IP addresses with something like iptables (Linux) or the equivalent on Solaris.
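If you stay at the application layer, one alternative is a tarpit rather than a true silent drop (which TCP won't give you): hold the robot's connection open and trickle bytes out so slowly that a single-threaded scanner stalls. A hedged Node sketch, where the URL check is a placeholder heuristic:

```js
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url.startsWith('/phpmyadmin')) {
    // Never complete the response; emit one meaningless byte every 10s.
    const socket = res.socket;
    const timer = setInterval(() => socket.write('X'), 10 * 1000);
    socket.on('close', () => clearInterval(timer));
    return; // never call res.end(); no status line is ever sent
  }
  res.end('hello\n');
});

server.setTimeout(0); // keep Node's own idle timeout from ending the tarpit
server.listen(8080);
```

Note the trade-off: the connection stays open and consumes a file descriptor on your side, so this does not satisfy the "server's resources are released" requirement.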
Closing a connection should NOT send a RST: there is an orderly FIN/ACK tear-down process, and a RST only appears in abnormal cases.
I am programming on Linux.
I have a question regarding a socket server and client.
I have made some sample code in which the server continually accepts connections and a client connects to the server.
If someone removes the network cable, the client is disconnected (the client socket is unplugged from the PC), but on the server side the connection still looks alive; the server is never notified that the client disconnected, because the network was unplugged.
How can I know that the client got disconnected?
Thanks,
Neel
You need to either configure keepalive on the socket or send an application-level heartbeat message; otherwise the listening end will wait indefinitely for packets that never arrive. If you control the protocol, the application-level heartbeat may be easier. As a bonus, either solution will also help keep the connection alive across NAT gateways in the network.
See this answer: Is TCP Keepalive the only mechanism to determine a broken link?
Also see this Linux documentation: http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/#programming
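A sketch of such an application-level heartbeat in Node (the PING/PONG message format and the intervals are assumptions for illustration):

```js
// The server pings periodically and drops connections that miss 3 pongs.
const net = require('net');

const server = net.createServer((socket) => {
  let missed = 0;

  const timer = setInterval(() => {
    if (missed >= 3) {
      console.log('Client missed 3 heartbeats; declaring it dead');
      clearInterval(timer);
      socket.destroy();
      return;
    }
    missed++;
    socket.write('PING\n');
  }, 10 * 1000);

  socket.on('data', (chunk) => {
    if (chunk.toString().includes('PONG')) missed = 0; // peer is alive
  });

  socket.on('close', () => clearInterval(timer));
  socket.on('error', () => { /* 'close' fires next */ });
});

server.listen(4000);
```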
You'll get SIGPIPE on write for local sockets, and EOF on read for every socket type.
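Translated to Node, where SIGPIPE is ignored and surfaces as an EPIPE write error, and EOF on read surfaces as the 'end' event (note that neither fires when a cable is silently unplugged, which is why the keepalive/heartbeat suggestions above still apply):

```js
const net = require('net');

const socket = net.connect(4000, 'localhost');

socket.on('end', () => console.log('peer closed cleanly (EOF on read)'));
socket.on('error', (err) => {
  if (err.code === 'EPIPE') console.log('write to a closed socket (SIGPIPE analogue)');
});
```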