I came across this line in a Java client library:
socket.setReuseAddress(true);
I thought this is used to improve performance,
since the SO_REUSEADDR option allows a socket to bind to a port
that is still in TIME_WAIT, even if that port belonged to another socket.
But I also found that this option is mostly used on the server side,
to let a restarted server bind immediately instead of waiting for its old TIME_WAIT sockets to expire.
My question is: is this option useful on the client side,
as in this client library? Can it do harm to other sockets, like some kind of attack?
Thanks a lot!
-Dimi
It depends on what you mean by "client". You also mention "client library", which has nothing to do with it.
This is often misunderstood. SO_REUSEADDR lets you reuse an address held by a socket in TIME_WAIT, and TIME_WAIT only happens on one side of the TCP connection: the one that initiates the termination sequence, i.e. sends the first FIN packet, i.e. calls shutdown(SHUT_WR) first or calls close first (the latter is unclear and may depend on other things such as connection state or platform, which is one reason you should not call close before first calling shutdown(SHUT_WR)). This article is very informative, as are the two referenced at its end. It makes clear that TIME_WAIT may occur on the listening (server) side as well as the client side, and recommends having clients always initiate termination (the "active close") so that the server, where this would be more of a problem, doesn't accumulate sockets in TIME_WAIT.
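For concreteness, here is a minimal sketch in C of the two ideas above: the shutdown-before-close sequence on the side doing the active close, and SO_REUSEADDR on a listening socket so a restarted server can rebind while old connections are still in TIME_WAIT. The helper names are made up and error handling is omitted.

    /* Sketch only: orderly shutdown before close, and SO_REUSEADDR on a listener. */
    #include <sys/socket.h>
    #include <unistd.h>

    void orderly_close(int fd)
    {
        char buf[4096];

        shutdown(fd, SHUT_WR);            /* send our FIN first (active close) */
        while (read(fd, buf, sizeof buf) > 0)
            ;                             /* drain until the peer's FIN arrives (read() == 0) */
        close(fd);                        /* this side now enters TIME_WAIT */
    }

    int reusable_listener(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;

        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        /* ... bind() and listen() as usual; the rebind now succeeds even if
           old connections to the same port are still in TIME_WAIT */
        return fd;
    }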
Related
I have an app where a single client talks to a single server. Normally, the client does a single connect, and then calls send repeatedly, and there's no problem.
However, I need to do a version where the client sets up a connection for each individual send (a bit like HTTP with and without keep-alive). In this version, the client calls socket, connect, send once, and then close.
The problem with this is that I very quickly run out of ephemeral client ports, and the connect fails. To get around this I call setsockopt with SO_REUSEADDR, and then bind to port 0, before calling connect (see here, for example).
This works, except that the TCP connection is no longer reliable. I get occasional incorrect data, presumably because there's still data around when the TCP connection is closed.
Is there any way to make this reliable (and fast)? shutdown before close doesn't help. Maybe I can get select to tell me if the socket is ready for output, but that seems like overkill.
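(The pattern described above looks roughly like this in C; the helper name is made up, and error handling is omitted.)

    /* Sketch: SO_REUSEADDR + bind to port 0 before connect. */
    #include <netinet/in.h>
    #include <sys/socket.h>

    int connect_once(const struct sockaddr_in *server)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in local = { 0 };

        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        local.sin_family = AF_INET;
        local.sin_port = 0;               /* let the kernel pick an ephemeral port */
        bind(fd, (struct sockaddr *)&local, sizeof local);

        connect(fd, (const struct sockaddr *)server, sizeof *server);
        return fd;
    }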
Do you have to use TCP? If so, you will probably have to maintain an open connection and route your messages over that one connection.
There is SCTP, which may be a good fit for your use case - a reliable datagram protocol:
Like TCP, SCTP provides reliable, connection oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
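If SCTP is an option, a one-to-one SCTP socket can be used almost exactly like a TCP socket while preserving message boundaries. A minimal sketch, assuming a Linux system with SCTP support in the kernel; the address and port are placeholders and error handling is omitted:

    /* Sketch: a one-to-one SCTP socket used like TCP. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
        struct sockaddr_in srv = { 0 };

        srv.sin_family = AF_INET;
        srv.sin_port = htons(5000);
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == 0) {
            const char msg[] = "hello";
            send(fd, msg, sizeof msg, 0); /* arrives as one message, boundaries preserved */
        }
        close(fd);
        return 0;
    }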
How do I find out from a socket client program that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. However, in my case I cannot pick a reliable timeout value to work around this, since the recv would then time out even when the server is up but the response genuinely takes longer than the timeout I have set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands it doesn't have, you know didn't get through and can resubmit. Once you get a reply with the result of a command, you can tell the other side that it can now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay". The other side responds, "yes". You can use long polling where the other side delays responding for a second or so while the command is in process. This allows it to return the results immediately rather than having to wait a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
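To make the heartbeat idea concrete, here is a rough sketch over a plain TCP socket (not ZeroMQ-specific); the intervals, the "ping" message and the helper name are made up for illustration:

    /* Sketch: send a ping when the connection is idle, and treat prolonged
       silence from the peer as a dead link. Intervals are arbitrary examples. */
    #include <poll.h>
    #include <time.h>
    #include <unistd.h>

    int heartbeat_loop(int fd)            /* fd is an already connected socket */
    {
        time_t last_seen = time(NULL);
        char buf[512];

        for (;;) {
            struct pollfd p = { .fd = fd, .events = POLLIN };

            if (poll(&p, 1, 5000) > 0 && (p.revents & POLLIN)) {
                if (read(fd, buf, sizeof buf) <= 0)
                    return -1;            /* peer closed or error */
                last_seen = time(NULL);
            } else {
                write(fd, "ping\n", 5);   /* idle for 5 s: nudge the other side */
            }
            if (time(NULL) - last_seen > 15)
                return -1;                /* nothing heard for 15 s: assume dead */
        }
    }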
If the remote host goes down without sending you a TCP FIN packet, then you have no chance to detect that. You can test that behaviour by firewalling a port after a connection has been established on that port. Your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keepalives, which are meant to close a TCP connection after a given timeout. If you can't specify any timeout for your application, then this won't help you reliably either. A last resort might be to use features of the application protocol (can you name it?); if that protocol has no provisions for connection handling, you may have to invent something of your own on top of it.
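For completeness, this is roughly how TCP keepalives are enabled per socket on Linux; the interval values below are examples, not recommendations, and the helper name is made up:

    /* Sketch: enable TCP keepalives so a silently-dead peer is eventually detected. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int fd)
    {
        int on = 1, idle = 60, interval = 10, count = 5;

        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);           /* start probing after 60 s idle */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);  /* probe every 10 s */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);          /* give up after 5 failed probes */
    }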
I created a game using node.js and socket.io. All works well, but from time to time the game's socket server stops responding to any connections. When I go to Process information -> Files and connections (in webmin), I see there are many connections with CLOSE_WAIT and FIN_WAIT2 statuses. I think the problem is in these connections, because the game fails when there are about 1,000 connections. The server OS is Ubuntu Linux 12.04.
How can I kill these connections or increase maximum allowed connections?
To add to Jim's answer, I think there is a problem in your client's handling of socket closes. It seems your client is not closing its sockets properly (for both server-initiated and client-initiated closes), and that is the reason your server has so many connections stuck in wait states.
You don't need to kill connections or increase the number allowed. You need to fix a defect in the application on one side of the connection, specifically, the side which does not initiate the close.
See Figure 13 of RFC 793. Your programs are at step 3 of the close sequence. The side which you see in FIN-WAIT-2 is behaving correctly. It has initiated the close and the TCP stack has sent a FIN packet on the network. The side in CLOSE-WAIT has the defect. The TCP stack on that side has received and acknowledged the FIN packet, but the application has failed to notice. How the application is expected to detect that the remote side has closed the connection will depend on your platform. Unfortunately, I am old, and don't know node.js or socket.io.
What happens in C is that the socket appears readable, but a read() returns zero bytes (end of stream). When the application sees this, it is expected to call close(). You will find something equivalent in the docs for node.js or socket.io.
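In C the handling looks roughly like this (a sketch only; the handler name is made up):

    /* Sketch: when the peer has sent its FIN, the socket polls readable and
       read() returns 0; closing our end clears the CLOSE_WAIT state. */
    #include <unistd.h>

    void handle_readable(int fd)
    {
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof buf);

        if (n > 0) {
            /* process n bytes of data */
        } else {
            close(fd);   /* n == 0: peer closed; n < 0: error. Either way, close,
                            instead of leaving the connection in CLOSE_WAIT. */
        }
    }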
When you find it, consider answering your own question here and accepting the answer.
Linux has the SO_REUSEADDR option for setting socket parameters. It allows immediate reuse of the same port. Someone who knows your toolset can tell you how to set socket options. You may already know how. I do not know this toolset.
From an older Java docset:
http://docs.oracle.com/javase/1.5.0/docs/guide/net/socketOpt.html
Context: Linux (Ubuntu), C, ZeroMQ
I have a server which listens on ipc:// SUB ZeroMQ socket (which physically is a Unix domain socket).
I have a client which should connect to the socket, publish its message and disconnect.
The problem: If server is killed (or otherwise dies unnaturally), socket file stays in place. If client attempts to connect to this stale socket, it blocks in zmq_term().
I need to prevent client from blocking if server is not there, but guarantee delivery if server is alive but busy.
Assume that I can not track server lifetime by some external magic (e.g. by checking a PID file).
Any hints?
A non-portable solution seems to be to read /proc/net/unix and search it for the socket name.
Without seeing your code, all of this is guesswork... that said...
If you have a PUB/SUB pair, the PUB side will hang around to make sure that its message gets through. Perhaps you're not using the right type of zmq pair; it sounds more like you want a REP/REQ pair instead.
This way, once you connect from the client (the REQ side), you can do a zmq_poll to determine whether the socket is available for writing. If it is, go ahead with your write; otherwise shut down the client and handle the error condition (if it is an error in your system).
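A sketch of that REQ-side check using libzmq's C API; the endpoint, timeout and helper name are placeholders, and ZMQ_LINGER is set to 0 so zmq_term() won't block on undelivered messages:

    /* Sketch: poll the REQ socket for writability before sending. */
    #include <string.h>
    #include <zmq.h>

    int try_publish(const char *endpoint, const char *msg)
    {
        void *ctx = zmq_ctx_new();
        void *req = zmq_socket(ctx, ZMQ_REQ);
        int linger = 0;
        int rc = -1;

        zmq_setsockopt(req, ZMQ_LINGER, &linger, sizeof linger); /* don't block in term */
        zmq_connect(req, endpoint);

        zmq_pollitem_t item = { req, 0, ZMQ_POLLOUT, 0 };
        if (zmq_poll(&item, 1, 1000) > 0 && (item.revents & ZMQ_POLLOUT))
            rc = zmq_send(req, msg, strlen(msg), 0);              /* a peer is there: write */

        zmq_close(req);
        zmq_ctx_term(ctx);
        return rc;
    }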
Maybe you can try to connect to the socket first with your own native socket. If the connection succeeds, there is a good chance your publisher will work fine.
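A sketch of that probe for an ipc:// endpoint, which is just a Unix domain socket on disk; the helper name is made up and error handling is minimal:

    /* Sketch: try a plain AF_UNIX connect to the socket file before using ZeroMQ. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int socket_file_alive(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { 0 };
        int ok;

        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);

        ok = (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0);
        close(fd);
        return ok;   /* 0: nobody is listening on the (possibly stale) socket file */
    }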
There is another solution. Don't use ipc:// sockets. Instead use something like tcp://127.0.0.101:10001. On most UNIXes that will be almost as fast as IPC because the OS recognizes that it is a local connection and shortcuts the full IP stack processing.
I should state that I'm not asking about specific implementation details (yet), but just a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:
A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket with which to communicate with the server. The server then waits for data from the client, and when data arrives an event is raised which spawns a thread to read the data from a stream into a buffer.
My questions are:
How off is my understanding?
Does each client socket require its own thread to listen for data on?
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
Any clarifications and additional explanation would be greatly appreciated.
EDIT:
Regarding the question about what data is typically shared and points of contention, I realize this is more of an implementation detail than it is a question regarding general process of accepting connections and sending/receiving data. I had looked at a couple implementations (SuperSocket and Kayak) and noticed some synchronization for things like session cache and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.
One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.
A socket server works more or less like this:
A listening socket is set up to accept connections, and added to a socket set
The socket set is checked for events
If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
If a connected socket has events, the relevant IO functions are called
The socket set is checked for events again
This all happens in one thread; you can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.
while running
select on socketset
for each socket with events
if socket is listener
accept new connected socket
add new socket to socketset
else if socket is connection
if event is readable
read data
process data
else if event is writable
write queued data
else if event is closed connection
remove socket from socketset
end
end
done
done
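As a rough illustration, the same loop in C with select(); the port is arbitrary, the write path is omitted, and error handling is trimmed:

    /* Condensed C rendering of the pseudocode above, using select(). */
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        fd_set master;
        int maxfd;

        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(12345);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 16);

        FD_ZERO(&master);
        FD_SET(listener, &master);
        maxfd = listener;

        for (;;) {                                   /* "while running" */
            fd_set readable = master;
            select(maxfd + 1, &readable, NULL, NULL, NULL);

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;
                if (fd == listener) {                /* listener: accept, add to set */
                    int conn = accept(listener, NULL, NULL);
                    FD_SET(conn, &master);
                    if (conn > maxfd)
                        maxfd = conn;
                } else {                             /* connection: read or close */
                    char buf[4096];
                    ssize_t n = read(fd, buf, sizeof buf);
                    if (n <= 0) {
                        close(fd);                   /* closed or error: drop it */
                        FD_CLR(fd, &master);
                    } else {
                        /* process n bytes of data */
                    }
                }
            }
        }
    }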
The IP stack takes care of all the details of which packets go to which "socket" in which order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
EDIT: In response to updated question.
I don't know either of the libraries you mention, but regarding the concepts:
A session cache typically keeps data associated with a client, and can reuse this data for multiple connections. This makes sense when your application logic requires state information, but it's a layer higher than the actual networking end. In the above sample, the session cache would be used by the "process data" part.
Buffer pools are also an easy and often effective optimization for a high-traffic server. The concept is very easy to implement: instead of allocating/deallocating space for the data you read/write, you fetch a preallocated buffer from a pool, use it, then return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation mechanisms. This is not directly related to networking; you could just as well use buffer pools for, e.g., something that reads chunks of files and processes them.
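A toy sketch of such a buffer pool (sizes are arbitrary, and there is no locking, so this version is single-threaded only):

    /* Sketch: a trivial fixed-size buffer pool. No locking; single-threaded only. */
    #include <stdlib.h>

    #define POOL_SIZE 64
    #define BUF_SIZE  4096

    static char *free_bufs[POOL_SIZE];
    static int   free_count;

    char *buf_get(void)
    {
        if (free_count > 0)
            return free_bufs[--free_count];   /* reuse a previously returned buffer */
        return malloc(BUF_SIZE);              /* pool empty: fall back to the allocator */
    }

    void buf_put(char *buf)
    {
        if (free_count < POOL_SIZE)
            free_bufs[free_count++] = buf;    /* keep it around for the next buf_get() */
        else
            free(buf);                        /* pool already full: really release it */
    }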
How off is my understanding?
Pretty far.
Does each client socket require its own thread to listen for data on?
No.
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
TCP/IP is a number of protocol layers. There's no single "kernel" to it; it's pieces, each with a separate API to the other pieces.
The IP address is handled in one place.
The port # is handled in another place.
The IP addresses are matched up with MAC addresses to identify a particular host. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
What threaded environment?
Data sharing? What?
Contention? The physical channel is the number one point of contention. (Ethernet, for example, depends on collision detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications and is a point of contention.