TCP connection - client sending FIN in middle of communication - Azure

I have an Azure VM that acts as a TCP server (a Java program listening on a port). The TCP client (ncat) sits in the corporate network, with a corporate firewall in between. The client initiates the connection by sending a message, and after that the server keeps sending a one-line message to the client every 3 seconds. The client receives 3 or 4 messages, but after that it stops receiving anything. There is no exception on the server side. When the client and server are both in the corporate network, I don't experience this issue. The Wireshark capture is below. I don't understand why the client is sending a FIN (highlighted in yellow) in the middle of the exchange. Any suggestions, please?

The issue was with the tcp-halfclose-timer value on the firewall between Azure and the corporate network. It was set to 10 seconds, so once the client sent its FIN, the firewall closed the half-closed connection after 10 seconds and the server could no longer send messages to the client. The value was changed to 1 hour, and after that the issue was resolved.
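For what it's worth, the half-close itself came from the client side: the FIN in the capture means the client shut down its sending direction (ncat can do this when its stdin is exhausted, depending on its options), leaving a read-only half-closed connection for the firewall to time out. A minimal sketch of a client that keeps both directions open, so no FIN is sent until it exits, might look like this; the host and port are placeholders, not values from the original setup:

    import socket

    SERVER_HOST = "server.example.com"   # placeholder, not the real Azure VM
    SERVER_PORT = 9000                   # placeholder port

    with socket.create_connection((SERVER_HOST, SERVER_PORT)) as s:
        s.sendall(b"hello\n")            # initial message that starts the stream
        for line in s.makefile("r"):     # keep reading; the write side stays open
            print("received:", line.rstrip())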

Related

rpyc, how to handle disconnect and reconnect

using w10/64, python 3.6, rpyc
I have a server receiving serial data and want the data to be published to any client asking for a connection.
In the server I add every client into a connection list and when detecting changes in the data publish it to all clients.
Clients send a "startListening" request to the server including ip and port. The server then opens its own connection to the client to update it with the new data.
I have an "on_disconnect" method in my servers commands class and it gets triggered when a client stops.
When the client restarts and sends a "startListening" again I get an EOFError on the server showing the clients ip/port.
How can I properly detect and close the client connection to allow for a reconnect?
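One way to handle this is a hedged sketch along the following lines, assuming rpyc 4.x (where on_connect/on_disconnect receive the connection object) and swapping the dial-back-to-ip/port scheme for a callback the client passes in: key the subscriber list by the rpyc connection and drop the entry in on_disconnect, so a restarted client can simply call startListening again without the server holding on to the dead entry. All names here are illustrative, not from the original code:

    import rpyc
    from rpyc.utils.server import ThreadedServer

    class PublisherService(rpyc.Service):
        subscribers = {}                          # conn -> callback netref (shared across instances)

        def on_connect(self, conn):
            self._conn = conn                     # remember which connection this instance serves

        def on_disconnect(self, conn):
            # Drop the stale entry so a dead connection is never written to again.
            self.subscribers.pop(conn, None)

        def exposed_startListening(self, callback):
            # The client passes a callable; no second connection back to the client is needed.
            self.subscribers[self._conn] = callback

        @classmethod
        def publish(cls, data):
            for conn, cb in list(cls.subscribers.items()):
                try:
                    cb(data)                      # push new serial data to the client
                except EOFError:
                    cls.subscribers.pop(conn, None)   # client vanished mid-publish

    if __name__ == "__main__":
        ThreadedServer(PublisherService, port=18861).start()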

Azure VM outbound TCP connection breaks after several minutes

I have a virtual machine running Windows Server 2012 R2 in the Azure cloud. This machine has its private and public IP addresses statically assigned. On that machine I'm running a client application (a Jenkins agent, to be specific). The client opens a TCP connection to its server (the Jenkins master), which runs outside of Azure (behind some public IP address). The TCP connection is established fine.
In order to keep this connection alive, both the client and the server "ping" each other every 4-5 minutes. This "pinging" is done by exchanging several TCP packets over that open TCP connection.
After some random time interval, the client can't reach the server anymore and the server can't reach the client anymore. Connection timeout exceptions are therefore thrown on both the client and server ends.
To analyze the issue, I was tracking this communication with Wireshark, which is running on Windows Server in Azure cloud (where the client application is running). While the communication works well, Wireshark shows TCP traffic is exchanged between:
- client's private IP address / local port
- server's public IP address / port
This seems perfectly logical because Azure machine (client) is not aware of its public IP address and publicly visible port (after NAT is applied).
When the issue starts occurring, I see that both the client and the server send TCP retransmissions, which means that neither of them received a TCP ACK for some previously sent TCP PSH packet. Strangest of all, the client machine was receiving these retransmissions from the server, but those packets are not addressed to the client's private IP/local port. Wireshark shows them as being sent to the client's public IP and publicly visible port! Obviously the client application doesn't receive these packets, because the machine's NIC/driver discards them (which is also expected).
QUESTION: Does anyone have any idea why the TCP responses sent to the Azure machine's (client's) public IP address and publicly visible port sometimes reach the machine without NAT translation being applied to them?
After 3 days of monitoring, the issue has not re-occurred! So I'm resolving this question with the conclusion: more frequent client/server pinging (i.e. keeping the connection alive) definitely works around this Azure problem.
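The workaround above is an application-level heartbeat. A minimal sketch of the idea (the real client is a Jenkins agent, so this is only an illustration; the 60-second interval and the PING payload are assumptions, and the peer has to tolerate or echo them) could look like this:

    import socket, threading, time

    def keep_alive(sock, interval=60):
        # Exchange a few bytes well inside the NAT idle timeout so the
        # public-IP:port mapping never goes stale.
        def loop():
            while True:
                time.sleep(interval)
                try:
                    sock.sendall(b"PING\n")
                except OSError:
                    break                 # connection is gone, stop pinging
        threading.Thread(target=loop, daemon=True).start()

    # usage (placeholder endpoint):
    # s = socket.create_connection(("jenkins-master.example.com", 8080))
    # keep_alive(s, interval=60)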

Linux application doesn't ACK the FIN packet and retransmission occurs

I have a server running on Linux (kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server, a TCP retransmission occurs.
Here is the traffic from Wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection is successful. At some point the client decides to close the connection (FIN, ACK), but the server doesn't ACK the FIN.
Then the client starts a new connection; that connection is ACKed and is successful. In the meantime the Windows kernel keeps retransmitting the FIN, ACK packet, and finally gives up and sends a reset.
At the moment the second connection is established I don't receive the data that the client is sending (the packet with a 16-byte payload) on the server side; I receive those bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
So does anyone know why this situation is happening?
Your data bytes are not sent on that 52687 connection but rather on the following 52690 connection. I'd guess that the server app is accepting only one connection at a time (the kernel will accept them in advance and just hold the data) and thus doesn't see data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACK'd; it should be. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
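A hedged sketch of the structure the answer suggests: accept every incoming connection immediately and poll them all together, so the payload on the second connection is readable as soon as it arrives rather than only after the first connection dies. The port is a placeholder, and selectors is used here instead of a raw poll() loop purely for brevity:

    import selectors, socket

    sel = selectors.DefaultSelector()

    def on_accept(listener):
        conn, _ = listener.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, on_data)

    def on_data(conn):
        data = conn.recv(4096)
        if data:
            print("got", len(data), "bytes")
        else:                                  # orderly FIN from the peer
            sel.unregister(conn)
            conn.close()

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 5000))                  # placeholder port
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, on_accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)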

Multiple FIN packets received by server

I am running a port-forwarding proxy on a Linux box. All connections from the browser are rerouted to a different port by the proxy.
Whenever the proxy receives (recv()) 0 bytes, I close the connection with the outer world (opened through the proxy) using shutdown. When that connection is closed, I close the connection with the browser. The arrangement looks as follows:
Outer World <--Connection Out--> Forward Proxy (Local Box) <--Local Connection--> Client (Local Box)
However, I receive multiple data packets of length 0 on the "Local Connection" for the same socket before it is closed. This happens while the proxy is trying to close the connection with the outer world.
My understanding is that TIME_WAIT is 2*MSL, which is pretty long (hundreds of seconds). However, I see multiple 0-byte reads within a fraction of a second. Am I doing something wrong, or is my understanding wrong?
Thanks
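For context, a recv() of 0 bytes is end-of-file (the peer's FIN), not a packet, and it is unrelated to TIME_WAIT: if the socket stays in the poll set after EOF, it keeps reporting readable and recv() keeps returning 0, which looks like multiple 0-byte "packets". A hedged sketch of handling the EOF exactly once (function and variable names are illustrative, not from the original proxy):

    import socket

    def pump(local_sock, out_sock):
        # Forward one chunk from the browser side to the outer world.
        data = local_sock.recv(4096)
        if not data:                                 # EOF: the peer sent FIN
            out_sock.shutdown(socket.SHUT_WR)        # propagate the close outward
            local_sock.close()
            return False                             # caller must stop polling this socket
        out_sock.sendall(data)
        return True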

How to know a socket client got disconnected?

I am coding on Linux.
I have a question regarding a socket server and client.
I have written sample code in which the server keeps accepting connections and a client connects to the server.
If someone removes the network cable, the client is disconnected (the client socket is cut off from the PC), but on the server side the connection still appears alive, because the server is not notified that the client is disconnected when the network is unplugged.
How can I know that the client got disconnected?
Thanks,
Neel
You need to either configure keepalive on the socket or send an application level heartbeat message, otherwise the listening end will wait indefinitely for packets to arrive. If you control the protocol, the application level heartbeat may be easier. As a plus side, either solution will help keep the connection alive across NAT gateways in the network.
See this answer: Is TCP Keepalive the only mechanism to determine a broken link?
Also see this Linux documentation: http://tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/#programming
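A hedged sketch of the first suggestion, using the Linux-specific keepalive options (the idle/interval/count values are just example numbers): with these settings the kernel probes an idle peer after 30 seconds and gives up after roughly 45 seconds, so a pulled cable surfaces as an error on the next read or write.

    import socket

    def enable_keepalive(sock, idle=30, interval=5, count=3):
        # Turn on TCP keepalive and tune it (Linux-specific socket options).
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # idle seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)       # failed probes before the socket errors out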
SIGPIPE for local sockets, and EOF on read for every socket type.
