Does the TCP stack ever send an unsolicited RST on an existing connection? - Linux

We have a Linux system where data is being streamed from the server side of a TCP connection to a client side. [edit: both sides are using the sockets API]
At some point while this is happening, our local TCP pcaps show a RST being sent from the client to the server, and client-side logs show that reads are returning 0 bytes.
Is it possible for the RST to be sent unsolicited from the stack and then have subsequent client reads return 0 bytes?
The code is third-party proprietary, so I can't share samples or snoops. I am asking this question in an effort to understand whether the TCP stack sending an unsolicited RST is a possible explanation for the behavior detailed above, and if so, what would have to take place to trigger it.
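
For reference, a minimal sketch (assuming both sides use the BSD sockets API, as the edit above states) of how the two teardown styles normally surface to a reading client:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: how the two teardown styles normally surface to a reader.
 * A read() of 0 means the peer sent a FIN (orderly close); a RST
 * normally shows up as -1 with errno set to ECONNRESET. */
void report_teardown(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n == 0)
        puts("peer closed cleanly (FIN)");
    else if (n < 0 && errno == ECONNRESET)
        puts("connection reset (RST)");
}

A RST normally surfaces as ECONNRESET rather than a 0-byte read, which is part of what makes the combination described above surprising.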

This could also be a forged RST. Forged RSTs can be sent by a third party that wants to terminate your connection. This has been done in the past at an industrial scale. Read more here:
http://en.wikipedia.org/wiki/TCP_reset_attack#Forging_TCP_resets
To rule this out, you need to sniff the traffic at the client computer and see if it actually sends the RST, or if the RST is only received by the server side.

It is possible for the application to terminate the connection with an RST instead of a FIN, by fiddling with the SO_LINGER option. A well-written application should not do this.
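
For illustration, a minimal sketch of that SO_LINGER trick, assuming fd is an already-connected TCP socket:

#include <sys/socket.h>
#include <unistd.h>

/* Sketch: make close() send a RST instead of a FIN.
 * Assumes fd is an already-connected TCP socket. */
void abortive_close(int fd)
{
    struct linger lg;
    lg.l_onoff  = 1;   /* turn lingering on...                         */
    lg.l_linger = 0;   /* ...with a zero timeout                       */
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(fd);         /* unsent data is discarded and a RST is emitted */
}

Setting l_onoff to 1 with l_linger at 0 makes close() perform exactly the abortive close described above, so a client library doing this (deliberately or by default) would explain a RST in the capture without any network-level interference.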

Related

TCP strange RST packet terminating connection

We have the following setup, and we have noticed that sometimes the client sends a RST packet to terminate the initial TCP handshake, and the application gets a timeout.
[10.5.8.30]------[Linux FW]-------[10.5.16.20]
Wireshark: [screenshot of the capture]
You can see the RST packet in the Wireshark capture. I thought it was the FW sending the RST, but in the capture the packet is coming from 10.5.8.30, so what could be wrong here? Why is the connection getting reset at random? If I try again, it works the next time.
The fact that the source IP for the RST packet is 10.5.8.30 doesn't mean that it really came from 10.5.8.30.
There are firewalls and various other intermediary devices that forge such packets. Try capturing on both ends to check whether 10.5.8.30 did, in fact, send the RST. It doesn't make sense for a client to send a TCP SYN and then a RST.

When I use TCP to send data how do I know if data has arrived or not?

When I use TCP to send data, the write() function just ensures the data has been copied to the TCP send buffer, but if TCP doesn't send the data successfully, how do I know? Is there a signal, or what?
Short answer: you don't. Conventionally, the remote TCP peer sends a response acknowledging your data. This is the first step toward building an application-level protocol atop TCP, your transport-level protocol.
Longer Answer: This problem is the prime motivation for higher level protocols such as HTTP, STOMP, IMAP, etc.
but if TCP doesn't send data successfully, how do I know?
The write() system call can return -1 and set errno to indicate an error, however you cannot know how much data has been received by the remote peer, and how much was not. That question is best answered by the remote peer.
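
As a sketch of such an application-level acknowledgment (the one-byte 'A' reply is an invented convention for illustration, not part of any standard):

#include <errno.h>
#include <unistd.h>

/* Sketch: write a whole buffer, then block for a one-byte
 * application-level acknowledgment from the peer. */
int send_and_wait_ack(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);   /* may accept less than len */
        if (n < 0) {
            if (errno == EINTR)
                continue;
            return -1;                   /* local error; peer state unknown */
        }
        p   += n;
        len -= n;
    }
    char ack;
    return (read(fd, &ack, 1) == 1 && ack == 'A') ? 0 : -1;
}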

Bypass TCP three way handshaking?

Is it possible to make a system call or write a kernel module to craft a TCP connection right into the ESTABLISHED state without going through the three-way handshake, assuming the correct SYN seq and ack numbers are provided dynamically?
You may like to have a look at TCP Fast Open, which modern Linux kernels implement:
TCP Fast Open (TFO) is an extension to speed up the opening of successive Transmission Control Protocol (TCP) connections between two endpoints. It works by using a TFO cookie (a TCP option) in the initial SYN packet to authenticate a previously connected client. If successful, it may start sending data to the client before the final ACK packet of the three-way handshake is received, skipping a round trip and lowering the latency in the start of transmission of data.
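
As a client-side sketch on Linux (the server must also opt in via the TCP_FASTOPEN socket option before listening; the address argument here is whatever the caller resolved):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: carry the first bytes of data in the SYN via TCP Fast Open
 * (Linux 3.7+). Requires a TFO-enabled kernel and a server that set
 * the TCP_FASTOPEN listen option. */
int tfo_send(const struct sockaddr_in *srv, const char *data, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    /* MSG_FASTOPEN performs the connect itself and puts the data in
     * the SYN; on first contact (no cookie yet) the kernel falls back
     * to a normal three-way handshake transparently. */
    if (sendto(fd, data, len, MSG_FASTOPEN,
               (const struct sockaddr *)srv, sizeof(*srv)) < 0) {
        close(fd);
        return -1;
    }
    return fd;   /* caller reads the response and closes */
}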

TCP Servers: Drop Connection, instead of resetting or responding?

Is it possible in Node.JS to "drop" a connection in such a way that
The client never receives a response (200, 404 or otherwise)
The client is never notified that the connection is terminated (never receives connection reset or end of stream)
The server's resources are released (the server should not attempt to maintain the connection in any way)
I am specifically asking about Node.JS HTTP servers (which are really just complex TCP servers) on Solaris, but if there are cases on other OSes (Windows, Linux) or programming languages (C/C++, Java) that permit this, I am interested.
Why do I want this?
To annoy or slow down (possibly single-threaded) robots such as phpMyAdmin Probe.
I know this is not really something that matters, but these types of questions can better help me learn the boundaries of my programs.
I am aware that the client host is likely to re-transmit the packets of the connection since I never send a reset.
These are not possible in a generic TCP stack (vs. your own custom TCP stack). The reasons are:
Closing a socket sends a RST
Even if you avoid sending a RST, the client continues to think the connection is open while the server has closed the connection. If the client sends any packet on this connection, the server is going to send a RST.
You may want to explore firewalling these robots and blocking / rate limiting their IP addresses with something like iptables (Linux) or the equivalent on Solaris.
Closing a connection should NOT send a RST; there is an orderly FIN/ACK teardown process.

Tcp connections hang on CLOSE_WAIT status

The client closes the socket first. When there is not much data from the server, the TCP connection shutdown proceeds normally:
FIN -->
<-- ACK
<-- FIN, ACK
ACK -->
When the server is busy sending data:
FIN -->
<-- ACK,PSH
RST -->
And the server connection goes into the CLOSE_WAIT state and hangs there for a long time.
What's the problem here? Is it client-related or server-related? This happens on RedHat 5 for local sockets.
This article talks about why the RST is sent, but I do not know why the server connection is stuck in CLOSE_WAIT and does not send a FIN out.
[EDIT] I left out the most important information: this happens on qemu's slirp network emulation. It seems to be a slirp bug in handling connection close.
This means that there is unread data left in the stream that the client hasn't finished reading.
You can force it off by using the SO_LINGER option; the option is documented for Linux in socket(7), and Win32 has a matching setsockopt option.
It's the server side that is remaining open, so it's on the server side that you can try disabling SO_LINGER.
It may mean that the server hasn't closed the socket. You can easily tell this by using "lsof" to list the file descriptors open by that process, which will include TCP sockets. The fix is to have the process always close the socket when it's finished (even in error cases, etc.).
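
A minimal sketch of that fix, closing the descriptor as soon as read() reports EOF so the socket doesn't sit in CLOSE_WAIT:

#include <unistd.h>

/* Sketch: drain a socket and close it as soon as EOF or an error
 * is seen. CLOSE_WAIT means the peer's FIN has arrived but this
 * process has not yet called close(); closing here clears it. */
void drain_and_close(int fd)
{
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        /* ... process buf[0..n) here ... */
    }
    /* n == 0: peer sent FIN; n < 0: error. Close either way. */
    close(fd);
}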
This is a known defect in qemu.
