We have the following setup, and we have noticed that sometimes the client sends an RST packet that terminates the initial TCP handshake, and the application gets a timeout.
[10.5.8.30]------[Linux FW]-------[10.5.16.20]
Wireshark:
You can see the RST packet in Wireshark. I thought it was the FW sending the RST, but in the capture the packet is coming from 10.5.8.30, so what could be wrong here? Why does the connection get reset randomly? If I try again, it works.
The fact that the source IP for the RST packet is 10.5.8.30 doesn't mean that it really came from 10.5.8.30.
There are firewalls and various other intermediary devices that forge such packets. Try capturing on both ends to check whether 10.5.8.30 did, in fact, send the RST. It doesn't make sense for a client to send a TCP SYN and then an RST.
Related
As a way to learn how raw sockets work, I programmed a dummy firewall that drops packets based on the TCP destination port. It works, but the problem is that the client retries for quite some time until the timeout is finally reached.
I was wondering if the client retries for so long because it does not receive any answer. In that case, would it help if the firewall replied with a TCP RST to the client's TCP SYN messages? If not, is there any way to force the client to stop retrying (not by reducing the timeout in Linux, but rather by sending a specific answer to its packets that will make the client stop)?
You can think of your firewall as the same case as if the port were closed on the host OS. What would the host OS's TCP/IP stack do?
RFC 793 (original TCP RFC) has the following to say about this case:
If the connection does not exist (CLOSED) then a reset is sent
in response to any incoming segment except another reset. In
particular, SYNs addressed to a non-existent connection are rejected
by this means.
You should read the TCP RFCs and make sure your TCP RST packet conforms to the requirements for this case. A malformed RST will be ignored by the client.
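To make those rules concrete, here is a minimal sketch (a hypothetical helper, not part of any real stack) of what RFC 793 requires of a RST answering a segment addressed to a non-existent (CLOSED) connection: if the incoming segment carries an ACK, the RST takes its SEQ from that ACK and has no ACK bit; otherwise the RST has SEQ 0 and its ACK field covers everything the segment consumed (payload bytes plus one each for SYN and FIN):

```python
def rst_for_closed_port(seg_seq, seg_ack, seg_len, has_ack,
                        is_syn=False, is_fin=False):
    """Fields of the RST answering a segment sent to a CLOSED connection,
    per RFC 793. SYN and FIN each consume one sequence number."""
    consumed = seg_len + int(is_syn) + int(is_fin)
    if has_ack:
        # <SEQ=SEG.ACK><CTL=RST>: SEQ comes from the incoming ACK, no ACK bit.
        return {"seq": seg_ack, "ack": 0, "ack_flag": False}
    # <SEQ=0><ACK=SEG.SEQ+SEG.LEN><CTL=RST,ACK>
    return {"seq": 0, "ack": (seg_seq + consumed) % 2**32, "ack_flag": True}

# A bare SYN with ISN 1000 must be answered with SEQ=0, ACK=1001, ACK bit set.
reply = rst_for_closed_port(seg_seq=1000, seg_ack=0, seg_len=0,
                            has_ack=False, is_syn=True)
```

Getting the ACK bit and the `SEG.SEQ + SEG.LEN` arithmetic wrong is the usual reason a hand-crafted RST is silently ignored by the client.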
RFC 1122 also indicates that an ICMP Destination Unreachable message with code 2, 3, or 4 should cause the connection to be aborted. It's important to note the codes, because codes 0, 1, and 5 are listed as a MUST NOT for aborting the connection:
Destination Unreachable -- codes 2-4
These are hard error conditions, so TCP SHOULD abort
the connection.
Your firewall is behaving correctly. It is a cardinal principle of information security not to disclose any information to an attacker. Sending an RST would disclose that the host exists.
There were firewalls that did that 15-20 years ago, but they were frowned upon. Nowadays they behave like yours: they just drop the packet and send nothing in reply.
It is normal for the client to retry a few times before giving up if there is no response, but contrary to what you have been told in comments, a client will give up immediately with 'connection refused' if it receives an RST. It only retries if there is no response at all.
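The difference is easy to observe from userspace. Here is a small sketch (hypothetical helper; assumes a local port where nothing is listening and no firewall interfering on loopback) showing that an RST surfaces immediately as "connection refused", whereas a silent drop only fails after the full retry period:

```python
import socket
import time

def try_connect(host, port, timeout=5.0):
    """Attempt a TCP connect; classify the outcome and time it.
    An RST comes back as 'refused' almost instantly; a silent drop
    only produces 'timeout' after the stack's retries are exhausted."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    start = time.monotonic()
    try:
        s.connect((host, port))
        return "connected", time.monotonic() - start
    except ConnectionRefusedError:        # peer answered our SYN with an RST
        return "refused", time.monotonic() - start
    except socket.timeout:                # nothing answered at all
        return "timeout", time.monotonic() - start
    finally:
        s.close()
```

Connecting to a closed loopback port returns `("refused", ...)` in well under a second, because the local stack answers the SYN with an RST; against a firewall that drops packets, the same call would block for the full timeout.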
I am trying to solve the issue below.
I have an iptables rule in my OUTPUT chain which says that if a packet matches certain criteria, it should be queued and sent to userspace using NFQUEUE.
The userspace program receives the packet, checks whether it is a TCP packet, and if so modifies its content.
After modifying it, it sends the packet out. Up to this point everything works: I am able to recalculate and verify the checksum, update the length of the packet, confirm all of it in Wireshark, and see that the packet reaches the destination. The packet I am modifying is an HTTP GET packet.
The initial TCP handshake happens, and after it I send out the modified HTTP GET packet and get a response back from the server. But after this, the client for some reason generates a TCP RST packet and sends it to the destination, and I am not sure why. While googling, I found reports that this might be due to a sequence number mismatch, but in my case, since I am modifying the first packet after the TCP handshake, its sequence number is the same as that of my last ACK packet belonging to the handshake.
I suspect that some part of the kernel is caching the length of the HTTP GET request packet, and once I modify the packet and update the length, the cached value is not updated, which is why the client sends the TCP RST packet.
Can someone help me out with the above scenario?
The problem is that changing the length of a TCP packet which is part of an active flow messes up the sequence number accounting, which causes whichever side of the connection notices the mismatch to reset the connection. See the details in RFC 793, section 3.4.
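Some back-of-the-envelope arithmetic (with hypothetical numbers) shows why a length change breaks the accounting: the server ACKs the client's ISN + 1 (for the SYN) + bytes received, so growing the payload in flight makes the server acknowledge bytes the client's stack never sent:

```python
def server_ack_after(client_isn, bytes_received):
    """ACK value the server returns once it has seen the client's SYN
    (which consumes one sequence number) plus bytes_received payload bytes."""
    return (client_isn + 1 + bytes_received) % 2**32

client_isn = 5000
stack_sent = 100    # payload length the client's TCP stack accounted for
on_the_wire = 130   # payload length after the NFQUEUE rewrite grew the packet

expected_by_client = server_ack_after(client_isn, stack_sent)
actually_received = server_ack_after(client_isn, on_the_wire)
# The server's ACK now covers 30 bytes the client's stack never sent,
# which is exactly the kind of mismatch that gets the flow torn down.
```

The client's stack has `SND.NXT = 5101`, yet the server acknowledges up to 5131; an ACK for data never sent is never acceptable to the sender's state machine.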
We have a linux system where data is being streamed from the server side of a TCP connection to a client side. [edit: both sides are using the sockets API]
At a point in time while this is happening our local TCP pcaps show a RST being sent from the client to the server, and client side logs show that reads are returning 0 bytes.
Is it possible for the RST to be sent unsolicited from the stack and then have subsequent client reads return 0 bytes?
Code is third party proprietary, so I can't share samples or snoops. I am asking this question in an effort to understand if TCP stack sending an unsolicited RST is a possible explanation for the above detailed behavior, and if so what would have to take place to trigger this.
This could also be a forged RST. Forged RSTs can be sent by a third party which wants to terminate your connection; this has been done at industrial scale in the past. Read more here:
http://en.wikipedia.org/wiki/TCP_reset_attack#Forging_TCP_resets
To rule this out, you need to sniff the traffic at the client computer and see if it actually sends the RST, or if the RST is only received by the server side.
It is possible for the application to terminate the connection with an RST instead of a FIN, by fiddling with the SO_LINGER option. A well-written application should not do this.
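For reference, here is a sketch of that SO_LINGER trick in Python (Linux behavior; the helper name is mine). Setting `l_onoff=1, l_linger=0` turns `close()` into an abortive close that emits an RST instead of a FIN:

```python
import socket
import struct

def close_with_rst(sock):
    """Abortive close: with l_onoff=1 and l_linger=0, close() discards any
    unsent data and sends an RST instead of the normal FIN (Linux behavior)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()

# Demonstration on loopback: the peer's recv() fails with ECONNRESET
# instead of seeing a clean end-of-stream (recv returning b"").
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
close_with_rst(conn)
try:
    cli.recv(1024)
    peer_saw_reset = False
except ConnectionResetError:
    peer_saw_reset = True
cli.close()
srv.close()
```

This is why a capture taken at the client machine is the only way to tell an application-triggered RST apart from a forged one injected in transit.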
I have a server running on Linux (kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server, a TCP retransmission occurs.
Here is the traffic from wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection is successful. At some point the client decides to close the connection (FIN, ACK), but the server doesn't ACK the FIN.
Then the client starts a new connection; that connection is ACKed and is successful. In the meantime, the Windows kernel continues to retransmit the FIN, ACK packet, and finally decides to do a reset.
At the moment the second connection is established, I don't receive the data that the client is sending (the packet with a 16-byte payload) on the server side; I receive these bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
So does anyone know why this situation is happening?
Your data bytes are not sent on that 52687 connection but rather the following 52690 connection. I'd guess that the server app is accepting only one connection at a time (the kernel will accept them in advance and just hold the data) and thus not seeing data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACK'd. It should. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
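The kernel-side pre-accepting is easy to demonstrate. A small sketch (loopback, plain Python sockets) where the client's data is already handshaken and buffered before the server ever calls accept():

```python
import socket

# The kernel completes the three-way handshake for connections in the
# listen backlog and buffers any data the client sends, even before the
# application ever calls accept().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(5)                           # backlog the kernel pre-accepts into

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hello before accept")     # handshake + data happen kernel-side

conn, _ = srv.accept()                  # the app only now sees the connection
data = conn.recv(64)                    # ...and reads the already-buffered bytes
cli.close()
conn.close()
srv.close()
```

So a single-threaded server that is still busy with the first (dying) connection would see the second connection's bytes only once it finally gets around to accepting it, matching the capture described above.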
Here's the scenario:
TCP server running on Solaris, TCP client running on Linux. The client connects and starts sending data. The client stops sending data, and after N inactive seconds the server sends a FIN, ACK (presumably from a shutdown call on the send side). The client starts sending data again. The server freaks out and starts sending a bunch of RST packets with no other flags set. The first packet is lost and they handshake again. The send never returns an error, and the one packet is silently lost.
Any ideas why the RST is not being propagated to the client?
The send error and re-connect is being propagated. My bad. Staring at logs too long, I guess. Thanks!