Read the content of the send-q of a TCP socket in Linux

I have a TCP client sending data to a server continuously. After successfully connecting to the server, the client keeps sending data at intervals of a few seconds.
When the link between the client and server gets disconnected after some data has been sent, TCP retransmits the data according to the value of tcp_retries2. I configured this value to 8, so that I get a write error after about 100 seconds.
But there will still be some unacknowledged packets in the send-q.
Is there a way to read the contents of these unacknowledged packets in the send-q from my program before closing the socket, or should I remember the sent data and resend it after connecting again? Is there any other way to implement this?

You can get the size of sendq with an ioctl:
SIOCOUTQ
Returns the amount of unsent data in the socket send queue.
The socket must not be in LISTEN state, otherwise an error
(EINVAL) is returned. SIOCOUTQ is defined in
<linux/sockios.h>. Alternatively, you can use the synonymous
TIOCOUTQ, defined in <sys/ioctl.h>.
Note that the send queue draining only tells you that the kernel on the remote system acknowledged the data; it does not guarantee that the application running on that host has actually read it. And since most failures occur in the network between the communicating parties, this metric cannot be used as definite proof of successful transmission.
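A minimal sketch of that ioctl, assuming fd is an already-connected TCP socket (error handling abbreviated):

/* Query how many bytes are still queued (unsent or unacknowledged)
 * in this socket's send queue. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/sockios.h>   /* SIOCOUTQ */

int pending_send_bytes(int fd)
{
    int outq = 0;
    if (ioctl(fd, SIOCOUTQ, &outq) == -1) {
        perror("ioctl(SIOCOUTQ)");
        return -1;
    }
    return outq;   /* bytes not yet acknowledged by the peer's kernel */
}

This gives you the size only; there is no corresponding interface to read the queued bytes back out of the kernel.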

Once the application has handed its data to TCP, it is TCP's responsibility to keep track of the acknowledgement of the packets. If ACKs are not forthcoming, it tries its best to get the packets delivered, based on the RTO algorithm. Until an ACK is received, the data is kept in the send queue. I do not think the application has any way to inspect the current contents of the send queue.
//should i remember the send data and resend it after connecting again//
How would you do this? The previous connection's state is gone, isn't it? Unless the client and the server applications maintain some shared understanding of what was sent and received before the disconnect, you have to start fresh with the new connection.

No, there isn't.
If you need to know that the peer application has received the data, you need the peer application to acknowledge it back to your application via your application protocol, and you need to treat any unacknowledged data as something your application must re-send. This also brings in the question of transactional idempotence, so that you can resend with impunity.
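As a hypothetical illustration of that bookkeeping (the record layout and function names here are invented for this sketch, not a standard API): hold each record until the peer echoes its sequence number back, and replay the leftovers after a reconnect.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>

struct record {
    uint64_t       seq;          /* application-level sequence number */
    size_t         len;          /* assumed <= sizeof data in this sketch */
    unsigned char  data[512];
    struct record *next;
};

static struct record *unacked;   /* records sent but not yet acked */

/* Remember a record until the peer acknowledges it. */
void remember(uint64_t seq, const void *buf, size_t len)
{
    struct record *r = calloc(1, sizeof *r);
    r->seq = seq;
    r->len = len;
    memcpy(r->data, buf, len);
    r->next = unacked;
    unacked = r;
}

/* The peer echoed `seq` back: that record is safe, drop it. */
void on_app_ack(uint64_t seq)
{
    for (struct record **p = &unacked; *p; p = &(*p)->next) {
        if ((*p)->seq == seq) {
            struct record *dead = *p;
            *p = dead->next;
            free(dead);
            return;
        }
    }
}

/* After reconnecting, resend everything still unacknowledged. Because
 * records carry sequence numbers, the peer can detect and discard
 * duplicates, which is what makes the resend idempotent. */
void resend_unacked(int fd)
{
    for (struct record *r = unacked; r; r = r->next)
        send(fd, r->data, r->len, 0);   /* check the return value in real code */
}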

It takes two to tango. You can close your end of the connection, but it then waits for the other end to be torn down too; think of the 3-way handshake in reverse.
How long do you wait between closing the connection and re-opening it? You must wait at least the TIME_WAIT interval before trying to reconnect using the same connection info (the same address and port pairs).

Related

TCP buffer doesn't receive more data

I have a program which hands TCP data to the kernel, and another application is listening on the port. From my program, I keep sending data using the send() API after polling for buffer space with the poll() API. There are multiple packets to be sent one by one. The TCP send buffer fills up after some point and the poll times out. The data is stuck in the buffer, and the listening application does not receive any packets, although the connection was established successfully.
I have tried to increase the buffer size too, but no luck.
Can someone please help me troubleshoot this issue?
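One thing worth checking with this symptom: if the listening application never actually calls recv(), TCP flow control fills its receive window and then the sender's buffer, so poll() will keep timing out no matter how large the buffers are made. For reference, a sketch of the poll-then-send loop the question describes, assuming a connected non-blocking socket (the 5-second timeout is an assumption); POLLOUT only promises *some* free space, so the loop has to handle partial writes:

#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int n = poll(&pfd, 1, 5000);   /* hypothetical 5 s timeout */
        if (n <= 0)
            return -1;                 /* timeout or error: buffer stayed full */
        ssize_t sent = send(fd, buf + off, len - off, 0);
        if (sent < 0)
            return -1;
        off += (size_t)sent;
    }
    return (ssize_t)off;
}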

TCP sockets connection becomes unreliable with SO_REUSEADDR

I have an app where a single client talks to a single server. Normally, the client does a single connect, and then calls send repeatedly, and there's no problem.
However, I need to do a version where the client sets up a connection for each individual send (a bit like HTTP with and without keep-alive). In this version, the client calls socket, connect, send once, and then close.
The problem with this is that I very quickly run out of ephemeral client ports, and the connect fails. To get around this I call setsockopt with SO_REUSEADDR, and then bind to port 0, before calling connect (see here, for example).
This works, except that the TCP connection is no longer reliable. I get occasional incorrect data, presumably because there's still data around when the TCP connection is closed.
Is there any way to make this reliable (and fast)? shutdown before close doesn't help. Maybe I can get select to tell me if the socket is ready for output, but that seems like overkill.
Do you have to use TCP? If so, you will probably have to maintain an open connection and route your messages over that one connection.
There is SCTP, which may be a good fit for your use case - a reliable datagram protocol:
Like TCP, SCTP provides reliable, connection-oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
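On Linux, opening a one-to-one SCTP socket is nearly a one-line change from TCP, sketched below (this needs kernel SCTP support, e.g. the sctp module; the address and port are placeholders):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int connect_sctp(const char *ip, uint16_t port)
{
    /* SOCK_STREAM + IPPROTO_SCTP gives one-to-one style SCTP */
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    if (fd == -1) {
        perror("socket(IPPROTO_SCTP)");  /* EPROTONOSUPPORT if SCTP is absent */
        return -1;
    }
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) == -1) {
        perror("connect");
        return -1;
    }
    return fd;   /* each send() is delivered as one message to the peer */
}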

How to detect network failure in sockets Node.js

I am trying to write an internal transport system.
Data should be transferred from client to server using net sockets.
It is working fine except for the handling of network issues.
If I place a firewall between the client and server, neither side sees any error, so data keeps filling the kernel buffer on the client side.
And if I restart the app at that moment, I lose all the data in the buffer.
Question:
Do we have any way to detect network issues?
Do we have any way to get data back from kernel buffers?
Node.js exposes the low-level socket API to you fairly directly. I'm assuming that you are using a TCP socket to send and receive data.
One way to ensure that there is an active connection between the client and server is to send heartbeat signals back and forth. If you fail to receive a heartbeat from the server while sending data, you can assume that the connection failed.
As for the second part of your question: there is no easy way to get data back from kernel buffers. If losing the data would be a problem, I would make sure to write it to disk first.
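The heartbeat idea, sketched here at the plain-socket level in C (the PING/PONG tokens and the 5-second deadline are arbitrary choices for this sketch; the same logic translates directly to Node.js timers and socket events):

#include <poll.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Returns 1 if the peer answered the ping in time, 0 if the link is
 * presumed dead. `fd` is a connected TCP socket carrying no other
 * traffic; interleaving heartbeats with data needs framing on top. */
int heartbeat_ok(int fd)
{
    char buf[8];
    if (send(fd, "PING", 4, 0) != 4)       /* may still "succeed" into the buffer */
        return 0;
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    if (poll(&pfd, 1, 5000) <= 0)          /* no reply within 5 s */
        return 0;
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    return n >= 4 && memcmp(buf, "PONG", 4) == 0;
}

Note that with a silently dropping firewall, the send() itself keeps succeeding (it only fills the local buffer); it is the missing reply that reveals the failure, which is exactly the scenario in the question.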

Where goes those messages not yet received in Node.js?

For example, we have a basic Node.js server <-> client communication.
A basic Node.js server sends a message every 500 ms to one or every connected client over their respective sockets; the client responds correctly to the heartbeat and receives all the messages in time. But imagine the client hits a temporary connection lag (without the socket closing), CPU overload, etc., and cannot process anything for 2 seconds or more.
In this situation, where do all the messages go that are not yet received by the client?
Are they stored in Node? In some buffer or similar?
And vice versa? The client sends a message to the server every 500 ms (the server only listens, without responding), but the server has a temporary connection issue or CPU overload for 2 or 3 seconds...
Thanks in advance! Any information or clarification will be welcome.
Javier
Yes, they are stored in buffers, primarily in buffers provided by the OS kernel. The same thing happens on the receiving end for connections incoming to a Node server.
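To make "stored in kernel buffers" concrete, here is a sketch (in C, at the socket level underneath Node) of inspecting those buffers: SO_SNDBUF/SO_RCVBUF report capacity, and the SIOCINQ ioctl reports the bytes currently waiting to be read, the receive-side counterpart of the SIOCOUTQ ioctl quoted earlier.

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>   /* SIOCINQ */

void print_buffer_state(int fd)
{
    int sndbuf = 0, rcvbuf = 0, inq = 0;
    socklen_t len = sizeof(int);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);   /* send capacity */
    len = sizeof(int);
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);   /* recv capacity */
    ioctl(fd, SIOCINQ, &inq);                               /* bytes unread */
    printf("send buffer: %d, recv buffer: %d, unread: %d bytes\n",
           sndbuf, rcvbuf, inq);
}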

What is the initial re-transmission timeout for TCP?

I am interested in Linux. I have a client and a server. Let's say the client is trying to connect to the server (which is down). What is the initial re-transmission timeout value? Does it back off exponentially?
Now consider the other half of the handshake: the server receives the SYN packet and sends a SYN-ACK, but the client dies. What is the re-transmission policy for this scenario?
What is the initial re-transmission timeout value?
It is both platform-dependent and configuration-dependent. On modern Linux the initial RTO is 1 second, per RFC 6298 (older kernels used the 3 seconds recommended by earlier RFCs), and the number of retransmitted SYNs is capped by net.ipv4.tcp_syn_retries.
Does it exponentially back off?
All retransmissions in TCP do that.
What is the re-transmission policy for this scenario?
The same: the server retransmits the SYN-ACK with exponential backoff, and gives up after net.ipv4.tcp_synack_retries attempts.
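On Linux, the knobs that cap these retransmissions live under /proc/sys/net/ipv4/; a sketch of reading them (the paths are standard, the parsing is illustrative):

#include <stdio.h>

/* Read a single integer sysctl value from /proc; returns -1 on failure. */
static int read_sysctl(const char *path)
{
    FILE *f = fopen(path, "r");
    int v = -1;
    if (f) {
        if (fscanf(f, "%d", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    /* Each retry doubles the previous timeout (exponential backoff). */
    printf("tcp_syn_retries    = %d\n",
           read_sysctl("/proc/sys/net/ipv4/tcp_syn_retries"));
    printf("tcp_synack_retries = %d\n",
           read_sysctl("/proc/sys/net/ipv4/tcp_synack_retries"));
    return 0;
}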
