Multiple FIN packets received by server - linux

I am running a port-forwarding proxy on a Linux box. All the connections from the browser are rerouted to a different port through the proxy.
Whenever the proxy receives 0 bytes from recv(), I close the connection to the outer world (the one opened through the proxy) using shutdown(). Once that connection is closed, I close the connection to the browser. The arrangement looks as follows:
              Connection Out                          Local Connection
Outer World <--------------> Forward Proxy (Local Box) <--------------> Client (Local Box)
However, I receive multiple data packets of length 0 on the "local connection" for the same socket before it is closed. This happens while the proxy is trying to close the connection with the outer world.
My understanding is that the TIME_WAIT duration is 2*MSL, which comes out fairly high (hundreds of seconds). However, I see multiple 0-byte packets within a fraction of a second. Am I doing something wrong, or is my understanding wrong?
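For context, the close-handling logic is roughly the following. This is a simplified sketch rather than the literal proxy code; client_fd and remote_fd are illustrative names and error handling is omitted:

    /* Simplified sketch of the close sequence described above (not the real
     * proxy code). client_fd is the "local connection" to the browser,
     * remote_fd is the "connection out" to the outer world; both names are
     * illustrative, and error handling is trimmed for brevity. */
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void relay_until_client_closes(int client_fd, int remote_fd)
    {
        char buf[4096];

        for (;;) {
            ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
            if (n > 0) {
                send(remote_fd, buf, (size_t)n, 0);  /* forward to the outer world */
                continue;
            }
            if (n == 0) {
                /* The peer sent FIN. Note that recv() keeps returning 0 on
                 * every further call on this socket, so reading the same fd
                 * again after the first 0 produces more "0-byte packets". */
                shutdown(remote_fd, SHUT_WR);        /* propagate the half-close outward */
            }
            break;                                    /* 0 = orderly close, <0 = error */
        }

        /* ... drain whatever remains on remote_fd, then close both ends ... */
        close(remote_fd);
        close(client_fd);
    }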
Thanks

Related

What could cause one side of a TCP connection to be in the FIN_WAIT2 state while the other side is fully closed (i.e. not in CLOSE_WAIT)?

I recently encountered a TCP-related issue and hope someone can shed some light on it.
I have an application1 in container1/pod1 that is connected to a server (client_ip:12345 <-> server_ip:443). After running for a while, the server decided to close this connection, so it sent a FIN to the client and the client sent an ACK back to the server (I saw these two packets in the pcap). This should leave the client in CLOSE_WAIT and the server in FIN_WAIT2.
In this situation, the client should call close() and send a FIN back to the server. But I've found that the application lacks close() in its code, so in theory the client should be stuck in CLOSE_WAIT and the server should stay in FIN_WAIT2 until the FIN_WAIT2 timeout. Port 12345 on the client side shouldn't be reusable by any other new connection.
However, it seems that somehow the client_ip:12345 <-> server_ip:443 socket on the client side was no longer in CLOSE_WAIT (it became fully closed and available), so when another application2 in container2 came up, it randomly picked the same port 12345 (the kernel assigns the ephemeral source port from its range) to connect to the server's port 443. Because the server side was still in FIN_WAIT2, the connection couldn't be established, and the service was interrupted until the FIN_WAIT2 state timed out (300 seconds).
I understand I should fix the application code by adding close(). However, I'm curious what could make the CLOSE_WAIT state disappear/reset on the client side and allow another application to pick the same port 12345 to connect to the server.
I found an F5 bug that mentions a similar situation: "Client side connection has been fully closed. This may occur if a client SSL profile is in use and an 'Encrypted Alert' has been received."
https://cdn.f5.com/product/bugtracker/ID812693.html
I'm wondering if there are any other possibilities that could cause FIN_WAIT2 on one side and a fully closed socket on the other side (not in CLOSE_WAIT)?
For example, could the process using this socket have been killed? But AFAIK, after killing that process, the socket file descriptor would be closed and TCP would still send a FIN.
I hope someone could shed some light on it!
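For what it's worth, the CLOSE_WAIT/FIN_WAIT2 pairing described above is easy to reproduce and observe with ss. Below is a small, self-contained sketch on loopback (port 50443 is arbitrary): the parent plays the server and closes first, while the forked child deliberately never calls close(), mimicking the missing close() in application1.

    /* Reproduces the situation described above on loopback (port 50443 is
     * arbitrary). The parent plays the server and closes first; the child
     * plays the client and deliberately never calls close(). While both
     * processes sleep, "ss -tan | grep 50443" shows the child's socket in
     * CLOSE-WAIT and the parent's orphaned socket in FIN-WAIT-2. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(50443);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 1);

        if (fork() == 0) {                   /* child = client with the missing close() */
            int cfd = socket(AF_INET, SOCK_STREAM, 0);
            connect(cfd, (struct sockaddr *)&addr, sizeof(addr));
            char b;
            read(cfd, &b, 1);                /* returns 0 when the server's FIN arrives */
            puts("client: got FIN, NOT calling close(); socket now sits in CLOSE_WAIT");
            sleep(120);
            return 0;
        }

        int afd = accept(lfd, NULL, NULL);
        close(afd);                          /* server closes first -> FIN_WAIT1 -> FIN_WAIT2 */
        puts("server: closed; inspect with 'ss -tan | grep 50443'");
        sleep(120);
        return 0;
    }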

TCP server ignores incoming SYN

I have a TCP server running. A client connects to the server and sends packets periodically. On the TCP server, this incoming connection becomes CONNECTED, and the server socket keeps listening for other connections.
Say this client suddenly gets powered off, so no FIN is sent to the server. When it powers up again, it still uses the same port to connect, but the server doesn't reply to the SYN request. It just ignores the incoming request, since a connection with this port already exists.
How can I make the server close the old connection and accept the new one?
My TCP server runs on Ubuntu 14.04; it's a Java program using ServerSocket.
That's not correct: a server can accept multiple connections, and it will accept a new connection from a rebooted client as long as it's connecting from a different port (and that's usually the case). If your program is not accepting it, that's because you haven't called accept() a second time. This probably means that your application is only handling one blocking operation at a time (for example, it might be stuck in a read() operation on the connected socket). The solution is to read from the connected sockets and accept new connections simultaneously. This can be done using an I/O multiplexer, like select(), or multiple threads.
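As a rough illustration only (your server is Java, where the same structure maps to a java.nio Selector or simply to one thread per accepted socket), this is the general shape of a select()-based loop that keeps accepting while it reads:

    /* Sketch only: accept new clients while still reading from the already
     * connected ones, using select(). Error handling omitted. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        int clients[FD_SETSIZE];
        int nclients = 0;

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(listen_fd, &rfds);
            int maxfd = listen_fd;
            for (int i = 0; i < nclients; i++) {
                FD_SET(clients[i], &rfds);
                if (clients[i] > maxfd) maxfd = clients[i];
            }

            select(maxfd + 1, &rfds, NULL, NULL, NULL);

            if (FD_ISSET(listen_fd, &rfds) && nclients < FD_SETSIZE)
                clients[nclients++] = accept(listen_fd, NULL, NULL);  /* e.g. the rebooted client */

            for (int i = 0; i < nclients; i++) {
                if (!FD_ISSET(clients[i], &rfds)) continue;
                char buf[4096];
                ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
                if (n <= 0) {                     /* peer closed or errored: drop it */
                    close(clients[i]);
                    clients[i--] = clients[--nclients];
                }
                /* else: handle n bytes from this client */
            }
        }
    }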

Understanding Client Server Connections [duplicate]

This question already has answers here: Does the port change when a server accepts a TCP connection?
I understand the basics of how ports work. However, what I don't get is how multiple clients can simultaneously connect to, say, port 80. I know each client has a unique (for their machine) port. Does the server reply back from an available port to the client, and simply state the reply came from 80? How does this work?
First off, a "port" is just a number. All a "connection to a port" really represents is a packet which has that number specified in its "destination port" header field.
Now, there are two answers to your question, one for stateful protocols and one for stateless protocols.
For a stateless protocol (i.e. UDP), there is no problem, because "connections" don't exist: multiple people can send packets to the same port, and their packets will arrive in whatever sequence. Nobody is ever in the "connected" state.
For a stateful protocol (like TCP), a connection is identified by a 4-tuple consisting of source and destination ports and source and destination IP addresses. So, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two behind NAT or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port (which is generally a random high-numbered port).
Simply, if I connect to the same web server twice from my client, the two connections will have different source ports from my perspective and destination ports from the web server's. So there is no ambiguity, even though both connections have the same source and destination IP addresses.
Ports are a way to multiplex IP addresses so that different applications can listen on the same IP address/protocol pair. Unless an application defines its own higher-level protocol, there is no way to multiplex a port. If two connections using the same protocol simultaneously have identical source and destination IPs and identical source and destination ports, they must be the same connection.
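To see the source-port distinction concretely, here is a small illustrative sketch (not from the original answer) that opens two connections to the same server port and prints the local address of each with getsockname(); the two local ports will differ. "example.com":80 is just a placeholder destination, and error handling is omitted.

    /* Sketch: open two TCP connections from the same client to the same
     * server port and print the local (source) address of each. The two
     * local ports will differ, which is what keeps the connections distinct. */
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int connect_once(void)
    {
        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        getaddrinfo("example.com", "80", &hints, &res);   /* placeholder destination */

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        connect(fd, res->ai_addr, res->ai_addrlen);
        freeaddrinfo(res);

        struct sockaddr_in local;
        socklen_t len = sizeof(local);
        getsockname(fd, (struct sockaddr *)&local, &len);
        printf("local side: %s:%d\n", inet_ntoa(local.sin_addr), ntohs(local.sin_port));
        return fd;
    }

    int main(void)
    {
        int a = connect_once();   /* e.g. prints 192.168.1.10:51824 */
        int b = connect_once();   /* e.g. prints 192.168.1.10:51825 -- same destination, different source port */
        close(a);
        close(b);
        return 0;
    }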
Important:
I'm sorry to say that the response from "Borealid" is imprecise and somewhat incorrect. Firstly, statefulness or statelessness has no bearing on the answer to this question, and most importantly, the definition of the tuple identifying a socket is incomplete.
First, remember these two rules:
Primary key of a socket: A socket is identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL} not by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT} - Protocol is an important part of a socket's definition.
OS Process & Socket mapping: A process can be associated with (can open/can listen to) multiple sockets which might be obvious to many readers.
Example 1: Two clients connecting to the same server port means socket1 {SRC-A, 100, DEST-X, 80, TCP} and socket2 {SRC-B, 100, DEST-X, 80, TCP}. This means host A connects to server X's port 80 and another host B also connects to the same server X on the same port 80. Now, how the server handles these two sockets depends on whether the server is single-threaded or multi-threaded (I'll explain this later). What is important is that one server can listen to multiple sockets simultaneously.
To answer the original question of the post:
Irrespective of stateful or stateless protocols, two clients can connect to the same server port because for each client we can assign a different socket (as the client IPs will differ). The same client can also have two sockets connecting to the same server port, since such sockets differ by SRC-PORT. In all fairness, "Borealid" essentially gave the same correct answer, but the reference to stateless/stateful was somewhat unnecessary and confusing.
To answer the second part of the question, on how a server knows which socket to answer: first understand that for a single server process listening on one port, there can be more than one socket (possibly from the same client or from different clients). As long as the server knows which request is associated with which socket, it can always respond to the appropriate client over that same socket. Thus the server never needs to open another port on its own node beyond the original one the client initially connected to. If a server allocated a different port after a socket is bound, then in my opinion it would be wasting resources, and it would need the client to connect again to the newly assigned port.
A bit more for completeness:
Example 2: It's a very interesting question: "Can two different processes on a server listen to the same port?" If you do not consider protocol as one of the parameters defining a socket, then the answer is no, because in such a case a single client trying to connect to a server port would have no mechanism to indicate which of the two listening processes it intends to connect to. This is the same theme asserted by rule (2). However, that is the WRONG answer, because 'protocol' is also part of the socket definition. Thus two processes on the same node can listen to the same port only if they are using different protocols. For example, two unrelated clients (say one using TCP and another using UDP) can connect and communicate with the same server node on the same port, and in that case they are served by two different server processes.
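A quick way to convince yourself of the protocol rule: in the sketch below (port 50080 is arbitrary), a TCP bind and a UDP bind to the same address and port both succeed, whereas a second TCP bind to that port would fail with EADDRINUSE.

    /* Sketch: protocol is part of the tuple, so a TCP socket and a UDP socket
     * can both bind the same address and port (50080 is arbitrary). A second
     * SOCK_STREAM bind to the same port would fail with EADDRINUSE. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(50080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        int tcp = socket(AF_INET, SOCK_STREAM, 0);   /* {*, 50080, TCP} */
        int udp = socket(AF_INET, SOCK_DGRAM,  0);   /* {*, 50080, UDP} */

        printf("TCP bind: %s\n", bind(tcp, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? "ok" : "failed");
        printf("UDP bind: %s\n", bind(udp, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? "ok" : "failed");
        return 0;
    }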
Server Types - single & multiple:
When a server process is listening on a port, multiple sockets can simultaneously connect to and communicate with that same server process. If the server uses a single process (or thread) to serve all the sockets, it is called a single-process/single-threaded server; if it uses one sub-process (or thread) per socket, it is called a multi-process/multi-threaded server. Note that irrespective of the server's type, a server can (and should) always respond over the same socket the request arrived on; there is no need to allocate another server port.
A Note on Parent/Child Process (in response to query/comment of 'Ioan Alexandru Cucu')
Wherever I mentioned any concept involving two processes, say A and B, consider that they are not related by a parent-child relationship. OSes (especially UNIX) by design allow a child process to inherit all file descriptors (FDs) from its parent. Thus all the sockets (which on UNIX-like OSes are also FDs) that process A is listening to can also be listened to by further processes A1, A2, ..., as long as they are related to A by a parent-child relationship. But an independent process B (i.e. one having no parent-child relation to A) cannot listen to the same socket. In addition, note that this rule disallowing two independent processes from listening to the same socket is enforced by the OS (or its network libraries), and it is obeyed by most OSes. However, one could create an OS that violates this restriction.
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web server on port 80. Let's assume that your computer has the public IP address of 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but it's actually not what happens. If it were correct, we could only serve one user per foreign IP address: once a remote computer connected, it would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-numbered (ephemeral) source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which means "listening on all interfaces". An easy mistake to make is to listen on address 127.0.0.1, which will only accept connections from the current computer. So this is not a connection; it just means that a process asked to bind() to that IP and port, and that process is responsible for handling all connections to that port. This hints at the limitation that there can only be one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
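In socket-API terms, the only difference between the two cases is the address handed to bind(). A minimal sketch (same arbitrary port 500 as above; remember that ports below 1024 need root):

    /* Sketch: the only difference between "0.0.0.0:500" and "127.0.0.1:500"
     * in the netstat output is the address passed to bind(). Port 500 matches
     * the demo above; ports below 1024 require root. Error handling omitted. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_listener(int loopback_only)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(500);
        addr.sin_addr.s_addr = htonl(loopback_only ? INADDR_LOOPBACK : INADDR_ANY);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));  /* shows up as the "Local Address" */
        listen(fd, 16);
        return fd;
    }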
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/dcore/tree/master/apps/quicknet) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the second client used another random source port for its connection. So there is never any confusion between the two clients, even though both connections involve the same pair of IP addresses.
Normally, for every connecting client the server forks a child process that communicates with the client (TCP). The parent server hands off to the child process an established socket that communicates back to the client.
When you send the data to a socket from your child server, the TCP stack in the OS creates a packet going back to the client and sets the "from port" to 80.
Multiple clients can connect to the same port (say 80) on the server because on the server side, after creating a socket and binding it (setting the local IP and port), listen is called on the socket, which tells the OS to accept incoming connections.
When a client tries to connect to the server on port 80, the accept call is invoked on the listening socket. This creates a new socket for the client trying to connect, and similarly new sockets will be created for subsequent clients using the same port 80.
(bind, listen, and accept above are system calls.)
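Putting the two answers above together, the classic shape is a fork-per-connection server: bind and listen once, and every accept() hands back a brand-new socket for exactly one client while the parent keeps listening on the original port. This is only a sketch (port 8080 is arbitrary, error handling omitted):

    /* Sketch of the fork-per-connection pattern described above: socket ->
     * bind -> listen once, then each accept() returns a new socket for one
     * client while the parent keeps listening on the same port. Replies go
     * out on the accepted socket, still "from" port 8080 (arbitrary). */
    #include <netinet/in.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGCHLD, SIG_IGN);                 /* don't accumulate zombie children */

        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(lfd, 64);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);    /* brand-new socket for this client */
            if (fork() == 0) {                    /* child serves exactly one client */
                close(lfd);
                char buf[4096];
                ssize_t n;
                while ((n = recv(cfd, buf, sizeof(buf), 0)) > 0)
                    send(cfd, buf, (size_t)n, 0); /* echo back on the same socket */
                close(cfd);
                _exit(0);
            }
            close(cfd);                           /* parent: the child owns this socket now */
        }
    }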
Ref
http://www.scs.stanford.edu/07wi-cs244b/refs/net2.pdf

Linux application doesn't ACK the FIN packet and retransmission occurs

I have a server running on Linux (kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server, a TCP retransmission occurs.
Here is the traffic from Wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection is successful. At some point the client decides to close the connection (FIN, ACK), but the server doesn't ACK the FIN.
Then the client starts a new connection; that connection is ACKed and is successful. In the meantime the Windows kernel keeps retransmitting the FIN, ACK packet, and finally decides to send a reset.
At the moment the second connection is established, I don't receive the data that the client is sending (the 16-byte packet) on the server side; I receive those bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
So does anyone know why this situation is happening?
Your data bytes are not sent on that 52687 connection but rather the following 52690 connection. I'd guess that the server app is accepting only one connection at a time (the kernel will accept them in advance and just hold the data) and thus not seeing data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACK'd. It should. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
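If that guess is right and the server really is handling one connection at a time, the usual fix is to put the listening socket and every accepted socket into the same poll() set, so data on the second connection is seen immediately. A rough sketch (fixed-size fd array, error handling omitted):

    /* Rough sketch: keep the listening socket in the same poll() set as the
     * accepted sockets, so data arriving on a second connection (e.g. 52690)
     * is reported even while an earlier connection (52687) is still around. */
    #include <poll.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    void poll_loop(int listen_fd)
    {
        struct pollfd fds[64];
        int nfds = 0;
        fds[nfds].fd = listen_fd;
        fds[nfds++].events = POLLIN;

        for (;;) {
            poll(fds, nfds, -1);

            if ((fds[0].revents & POLLIN) && nfds < 64) {   /* new connection is ready */
                fds[nfds].fd = accept(listen_fd, NULL, NULL);
                fds[nfds++].events = POLLIN;
            }

            for (int i = 1; i < nfds; i++) {
                if (!(fds[i].revents & POLLIN)) continue;
                char buf[4096];
                ssize_t n = recv(fds[i].fd, buf, sizeof(buf), 0);
                if (n <= 0) {                               /* FIN or error: drop this socket */
                    close(fds[i].fd);
                    fds[i--] = fds[--nfds];
                }
                /* else: handle the received bytes here */
            }
        }
    }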

difference between socket failing with EOF and connection reset

For testing a networking application, I have written an asio port "proxy": it listens on a socket for the application client's activity and forwards all incoming data to another socket that the application server listens on, and vice versa.
Now, when either the application or the server disconnects for various reasons, the "proxy" usually gets an EOF, but sometimes it receives a "connection reset".
Hence, the question: when does a socket fail with a "connection reset" error?
A TCP connection is "reset" when the local end attempts to send data to the remote end and the remote end answers with a packet that has the RST flag set (instead of ACK). This almost always happens because the remote end doesn't know about any TCP connection that matches the remote and local addresses and the remote and local port numbers. Possible reasons include:
The remote end has been rebooted
A state-tracking firewall somewhere in the path has been rebooted/changed/added/removed
A load balancer has incorrectly directed the TCP connection to a different node than the one it was supposed to go to.
The remote IP address has changed hands (the new owner doesn't know anything about TCP connections belonging to the old owner).
The remote end considers that the TCP connection has been closed already (but somehow the local end doesn't agree).
Note that if the remote end answers the initial (SYN) packet in a TCP connection with a RST packet, it is considered "Connection refused" instead of "Connection reset by peer".
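At the recv() level the two outcomes are easy to tell apart; as far as I know, asio surfaces them as error::eof and error::connection_reset respectively. A minimal sketch in plain socket terms:

    /* Sketch of how the two outcomes look at the socket layer: an orderly
     * close (FIN) makes recv() return 0, i.e. EOF, while an abortive close
     * (RST) makes the next recv() fail with errno == ECONNRESET. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void check_read_result(int fd)
    {
        char buf[4096];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);

        if (n > 0) {
            /* ordinary data */
        } else if (n == 0) {
            puts("peer closed cleanly (FIN) -> EOF");
        } else if (errno == ECONNRESET) {
            puts("peer aborted (RST) -> connection reset by peer");
        } else {
            perror("recv");
        }
    }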
