Linux application doesn't ACK the FIN packet and retransmission occurs

I have a server running on Linux (kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server, a TCP retransmission occurs:
Here is the traffic from wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection succeeds. At some point the client decides to close the connection (FIN, ACK), but the server never ACKs the FIN.
The client then starts a new connection; that connection is ACKed and succeeds. In the meantime, the Windows kernel keeps retransmitting the FIN, ACK packet and finally gives up and sends a reset.
At the moment the second connection is established, I don't receive the data the client is sending (the 16-byte packet) on the server side; I receive those bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
Does anyone know why this is happening?

Your data bytes are not sent on the 52687 connection but rather on the following 52690 connection. My guess is that the server app is accepting only one connection at a time (the kernel accepts them in advance and simply queues the data), and thus doesn't see data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACKed; it should be. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
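The "accept every pending connection" fix can be sketched with an event loop. This is a hypothetical Python sketch (loopback only, arbitrary payload) of the structure the answer describes; Python's selectors module wraps the same poll()/epoll mechanism the question's server uses, and the same pattern applies in C:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # any free port, loopback only
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

# Two clients connect back to back, like the 52687/52690 trace
c1 = socket.create_connection(srv.getsockname())
c2 = socket.create_connection(srv.getsockname())
c2.sendall(b"16 bytes payload")
c1.close()                        # first client sends its FIN

received = b""
for _ in range(50):
    if received:
        break
    for key, _events in sel.select(timeout=1):
        if key.fileobj is srv:
            conn, _addr = srv.accept()   # accept every pending connection
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)
            if data:
                received += data
            else:                        # peer's FIN: clean EOF
                sel.unregister(key.fileobj)
                key.fileobj.close()

print(received.decode())
```

Because both sockets are registered with the selector, data on the second connection is seen immediately, instead of only after the first connection dies.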

Related

TCP — delivery of packets sent before client closes connection

Let's say a client opens a TCP connection to a server.
Say the client sends 100 packets.
10 of them reached the server and were picked up by the application.
50 of them reached the server but have not yet been picked up by the application.
40 are still sitting in the client's socket buffer because the server's receive window is full.
Let's say the client now closes the connection.
Questions:
Does the application get the 50 packets before it is told that the connection is closed?
Does the client kernel send the remaining 40 packets before it sends the FIN packet?
Now to complicate matters: if there is a lot of packet loss, what happens to the remaining 40 packets and the FIN? Does the connection still get closed?
Does the application get the 50 packets before it is told that the connection is closed?
It does.
Does the client kernel send the remaining 40 packets before it sends the FIN packet?
It does.
If there is a lot of packet loss, what happens to the remaining 40 packets and the FIN? Does the connection still get closed?
The kernel will keep trying to send the outstanding data. The fact that you closed the socket doesn't change that, unless you altered socket options (such as SO_LINGER) to change this behaviour.
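Both behaviours are easy to observe on a loopback pair. A minimal Python sketch (loopback only, tiny payloads standing in for the 50/40 packets): a default close() delivers queued data before the FIN, while SO_LINGER with a zero timeout is the socket option that changes this, aborting the connection with an RST instead:

```python
import socket
import struct

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Default close(): queued data is delivered first, then the FIN
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"hello")
cli.close()
first = conn.recv(16)    # the data arrives before the close is seen
eof = conn.recv(16)      # then a clean EOF (the FIN)
print(first, eof)

# SO_LINGER with a zero timeout changes this: close() aborts with an RST
cli2 = socket.create_connection(srv.getsockname())
conn2, _ = srv.accept()
cli2.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                struct.pack("ii", 1, 0))   # l_onoff=1, l_linger=0
cli2.sendall(b"doomed")
cli2.close()
try:
    while conn2.recv(16):   # may return the data or fail immediately
        pass
except ConnectionResetError:
    pass                     # the RST discarded whatever was pending
```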

TCP server ignores incoming SYN

I have a TCP server running. A client connects to the server and sends packets periodically. On the server, this incoming connection shows as CONNECTED, and the server socket still listens for other connections.
Say this client suddenly gets powered off, with no FIN sent to the server. When it powers up again, it still uses the same port to connect, but the server doesn't reply to the SYN request. It just ignores the incoming request, since a connection with this port already exists.
How can I make the server close the old connection and accept the new one?
My TCP server runs on Ubuntu 14.04; it's a Java program using ServerSocket.
That's not correct: a server can accept multiple connections, and it will accept a new connection from a rebooted client as long as the client connects from a different port (which is usually the case). If your program is not accepting it, it's because you haven't called accept() a second time. This probably means your application is only handling one blocking operation at a time (for example, it might be stuck in a read() on the connected socket). The solution is to read from the connected sockets and accept new connections simultaneously. This can be done using an I/O multiplexer, like select(), or multiple threads.

How to know Socket client got disconnected?

I am coding on Linux.
I have a question regarding a socket server and client.
I have made a sample program in which the server continues to accept connections and the client connects to the server.
If someone removes the network cable, the client is effectively disconnected (the client socket is gone from the PC), but on the server side the connection still appears alive, because the server is never notified that the client disconnected while the network was unplugged.
How can I know that the client got disconnected?
Thanks,
Neel
You need to either enable keepalive on the socket or send an application-level heartbeat message; otherwise the listening end will wait indefinitely for packets to arrive. If you control the protocol, the application-level heartbeat may be easier. As a bonus, either solution will help keep the connection alive across NAT gateways in the network.
See this answer: Is TCP Keepalive the only mechanism to determine a broken link?
Also see this Linux documentation: http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/#programming
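Enabling keepalive is a few setsockopt() calls. A Python sketch, assuming Linux (the interval values below are arbitrary examples, and the TCP_KEEP* constants are Linux-specific, hence the guard; the corresponding C calls use the same option names):

```python
import socket

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: probe after 60s idle, every 10s, drop after 5 failures
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

Without the per-socket tuning, the kernel-wide defaults apply (on Linux, the first probe is sent only after two hours of idle time), which is why an application-level heartbeat is often preferred.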
SIGPIPE (on write) for local sockets, and EOF on read for every socket type.

Socket send from client not failing when RST sent after FIN, ACK from server

Here's the scenario:
A TCP server running on Solaris, a TCP client running on Linux. The client connects and starts sending data. The client stops sending data, and after N inactive seconds the server sends a FIN, ACK (presumably from a shutdown call on the send side). The client starts sending data again. The server freaks out and starts sending a bunch of RST packets with no other flags set. The first packet is lost and they handshake again. send() never returns an error, and that one packet is silently lost.
Any ideas why the RST is not being propagated to the client?
The send error and re-connect are being propagated. My bad; staring at logs too long, I guess. THANKS!

Tcp connections hang on CLOSE_WAIT status

The client closes the socket first. When there is not much data from the server, the TCP connection shutdown looks fine:
FIN -->
<-- ACK
<-- FIN, ACK
ACK -->
When the server is busy sending data:
FIN -->
<-- ACK,PSH
RST -->
And the server connection goes to the CLOSE_WAIT state and hangs there for a long time.
What's the problem here: is it client-related or server-related? This happens on RedHat 5 for local sockets.
This article talks about why the RST is sent, but I do not know why the server connection is stuck in CLOSE_WAIT and does not send a FIN out.
[EDIT] I left out the most important information: this happens on qemu's slirp network emulation. It seems to be a slirp bug in handling connection close.
This means that there is unread data left in the stream that the client hasn't finished reading.
You can force it off by using the SO_LINGER option; there is relevant documentation for it on both Linux (socket(7)) and Win32 (setsockopt).
It's the server side that is remaining open, so it's on the server side that you can try disabling SO_LINGER.
It may mean that the server hasn't closed the socket. You can easily tell by using lsof to list the file descriptors open in that process, which will include TCP sockets. The fix is to have the process always close the socket when it's finished (even in error cases, etc.).
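A stuck CLOSE_WAIT is easy to reproduce and observe in code: the state persists exactly as long as the server keeps its end of the socket open. A Python sketch, assuming Linux (the assumption here is that the first byte of the TCP_INFO result is the kernel's connection state, where 8 means CLOSE_WAIT, matching Linux's struct tcp_info layout):

```python
import socket
import time

# TCP_INFO may not be exported on all platforms; 11 is its Linux value
TCP_INFO = getattr(socket, "TCP_INFO", 11)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

cli.close()        # the client's FIN is ACKed by the server's kernel...
time.sleep(0.2)    # ...give it a moment to arrive

# First byte of TCP_INFO is the connection state; 8 == CLOSE_WAIT on Linux
state = conn.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)[0]
print(state)       # stays CLOSE_WAIT until the server closes its side

conn.close()       # the fix: the server's close() sends its own FIN
```

The socket remains in CLOSE_WAIT indefinitely between cli.close() and conn.close(), which is what lsof (or ss) would show for a server that forgets to close finished connections.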
This is a known defect in qemu.
