TCP buffer doesn't receive more data - Linux

I have a program that sends TCP data, and another application listens on the port. From my program, I keep sending data with the send() API after polling for space with the poll() API. There are multiple packets to be sent one by one. After some point the TCP send buffer fills up and the poll times out. The data is stuck in the buffer and the listening application does not receive any packets, although the connection was established successfully.
I have tried increasing the buffer size too, but with no luck.
Can someone please help me troubleshoot this issue?
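
For reference, the pattern described in the question looks roughly like this. This is only a minimal sketch of a poll-then-send loop; the descriptor name, buffer, and 5-second timeout are illustrative assumptions, not code from the question:

```c
#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send one buffer, waiting for the send buffer to have space first.
 * Returns 0 on success, -1 on poll timeout or error. */
static int send_packet(int sockfd, const char *buf, size_t len)
{
    struct pollfd pfd = { .fd = sockfd, .events = POLLOUT };

    int ready = poll(&pfd, 1, 5000);   /* 5 s timeout, illustrative */
    if (ready <= 0) {
        fprintf(stderr, "poll: %s\n",
                ready == 0 ? "timed out" : strerror(errno));
        return -1;                     /* send buffer never drained */
    }

    ssize_t sent = send(sockfd, buf, len, 0);
    if (sent < 0) {
        perror("send");
        return -1;
    }
    return 0;                          /* note: partial sends not handled here */
}
```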

Related

TCP send() is blocking

I am running an application that uses a blocking TCP socket. send() is blocked, but netstat shows both the Send-Q and Recv-Q as 0.
Can someone suggest why send() would be blocked?
The two reasons I can think of would be:
The receiving program keeps the socket open but does not read the data. In this case, once the receiver's socket buffer becomes full, the sender cannot send any more, its own socket send buffer fills up, and send() blocks.
The network connection between the sender and the receiver is completely blocked after you initially connect. This would typically result in the sender's socket failing after a timeout, and it would usually not be repeatable.
Neither case exactly agrees with your netstat results, but from experience I'd say TCP send() does not block unless the socket send buffer is full.
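
One way to observe this condition explicitly rather than blocking is to do a non-blocking send: when the local send buffer is full, send() fails with EAGAIN/EWOULDBLOCK instead of stalling. A minimal sketch (the function name is illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Attempt a single non-blocking send. A return of -1 with errno set to
 * EAGAIN/EWOULDBLOCK means the local send buffer is full -- the same
 * condition that makes a blocking send() stall. */
static ssize_t try_send(int sockfd, const void *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, MSG_DONTWAIT);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        fprintf(stderr, "send buffer full: send() would have blocked\n");
    return n;
}
```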

How to detect network failure in sockets Node.js

I am trying to write an internal transport system.
Data should be transferred from the client to the server using net sockets.
It works fine except for the handling of network issues.
If I place a firewall between the client and the server, neither side will see any error, so data will continue to fill the kernel buffer on the client side.
And if I restart the app at that moment, I will lose all the data in that buffer.
Question:
Do we have any way to detect network issues?
Do we have any way to get data back from kernel buffers?
Node.js exposes the low-level socket API to you fairly directly. I'm assuming that you are using a TCP socket to send and receive data.
One way to ensure that there is an active connection between the client and server is to send heartbeat signals back and forth. If you fail to receive a heartbeat from the server while sending data, you can assume that the connection failed.
As for the second part of your question: There is no easy way to get data back from kernel buffers. If losing the data will be a problem, I would make sure to write it to disk.
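
The question is about Node.js, but the underlying TCP behaviour is the same in any language. Here is a rough sketch of the heartbeat idea using the plain socket API; the one-byte heartbeat, 5-second interval, and 15-second deadline are all illustrative assumptions:

```c
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

/* Returns -1 once no data has been seen from the peer for DEADLINE
 * seconds (or on send/recv failure); the connection is then presumed dead. */
static int heartbeat_loop(int sockfd)
{
    const int INTERVAL = 5, DEADLINE = 15;     /* seconds, illustrative */
    time_t last_seen = time(NULL);

    for (;;) {
        char ping = 'H';
        if (send(sockfd, &ping, 1, 0) < 0)     /* send our heartbeat */
            return -1;

        struct pollfd pfd = { .fd = sockfd, .events = POLLIN };
        if (poll(&pfd, 1, INTERVAL * 1000) > 0) {
            char pong;
            if (recv(sockfd, &pong, 1, 0) <= 0) /* error or orderly close */
                return -1;
            last_seen = time(NULL);
        }

        if (time(NULL) - last_seen > DEADLINE)  /* silent too long: assume failure */
            return -1;
    }
}
```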

Read the contents of the send-q of a TCP socket in Linux

I have a TCP client sending data to a server continuously. After the client successfully connects to the server, it sends data continuously, at intervals of a few seconds.
When the link between the client and server gets disconnected after some data has been sent, TCP retransmits the data according to the value of tcp_retries2. I configured this value to be 8, so that I get a write error after about 100 seconds.
But there will still be some unacknowledged packets in the send-q.
Is there a way to read the contents of these unacknowledged packets in the send-q from my program before closing the socket, or should I remember the sent data and resend it after connecting again? Is there any other way to implement this?
You can get the size of sendq with an ioctl:
SIOCOUTQ
Returns the amount of unsent data in the socket send queue. The socket must not be in LISTEN state, otherwise an error (EINVAL) is returned. SIOCOUTQ is defined in <linux/sockios.h>. Alternatively, you can use the synonymous TIOCOUTQ, defined in <sys/ioctl.h>.
Note that data leaving the send-q only tells you that the kernel of the remote system accepted it; it does not guarantee that the application running on that host handled it. Most failures occur in the network between the communicating parties, so this is a useful signal, but it cannot be taken as definitive proof of successful delivery.
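
For example, just before closing the socket you could check how much data is still queued. A minimal sketch (the helper name is illustrative); note that it reports only the byte count, not the contents:

```c
#include <linux/sockios.h>   /* SIOCOUTQ */
#include <stdio.h>
#include <sys/ioctl.h>

/* Returns the number of bytes still sitting in the socket's send queue
 * (unsent, or sent but not yet acknowledged), or -1 on error. */
static int pending_bytes(int sockfd)
{
    int pending = 0;
    if (ioctl(sockfd, SIOCOUTQ, &pending) < 0) {
        perror("ioctl(SIOCOUTQ)");
        return -1;
    }
    return pending;
}
```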
Once the application has handed its data to TCP, it is TCP's responsibility to keep track of the acknowledgement of the packets. If ACKs are not forthcoming, it tries its best to get the packets delivered based on the RTO algorithm. Until an ACK is received, the data is kept in the send queue (the Send-Q). I do not think the application has any way to read back the contents of the Send-Q.
// should I remember the sent data and resend it after connecting again //
How would you do this? The previous connection's state is gone, isn't it? Unless the client and the server applications maintain some shared understanding of what was received and sent while the link was down, you have to start fresh with a new connection.
No there isn't.
If you need to know that the peer application has received the data, you need to have the peer application acknowledge it back to your application via your application protocol, and treat any unacknowledged data as needing re-sending from your application somehow. This also brings in the question of transactional idempotence, so that you can resend with impunity.
It takes two to tango: you can close your end of the connection, but it then waits for the other end of the connection to drop its side too. Think of the 3-way handshake in reverse.
How long do you wait between closing the connection and re-opening it? You must wait at least the TIME_WAIT interval before trying to reconnect using the same connection info.

Reusing a port number in UDP

In ASIO, is it possible to create another socket that has the same source port as another socket?
My UDP server application calls receive_from using port 3000. It passes the packet off to a worker thread, which will send the response (currently using a dynamic source port).
The socket in the other thread is created like this:
udp::socket sock2(io_service, udp::endpoint(udp::v4(), 0));
And responds to the original request using the sender_endpoint saved with the original packet.
What I'd like to be able to do is respond to the client using the same source port that the server is listening on, but I can't see how that can be done. If I try it, I get an exception saying the address is in use. Is it possible to do what I'm asking? The reason I want this is that if I use dynamic ports, the clients need to add special firewall rules in Windows to allow the reply packets to be read. I've found that if the source port in the reply is the same, the Windows firewall lets it pass back in.
The exception tells you how it is: you can't create two live sockets with the same source port. I don't know ASIO, but you should be able to create the socket before spinning off the thread, keeping a reference to the socket and the thread for later use, and once the data-sending thread is idle, joining back to it and sending anything else.
EDIT: with a little bit of effort, you can also make a socket for which you don't have to wait until the entire data from one thread has been sent: have a worker thread owning the socket listen on a queue for chunks of data (ideally exactly the size of the payload you intend to send) and send arbitrary chunks of payload to this queue, from multiple threads.
You should be able to use the SO_REUSEADDR socket option to bind multiple sockets to the same address. Having said that, you don't want to do this, because it is not specified which socket will receive incoming data on that port (you would have to check all sockets for incoming data).
The better option is just to use the same socket to send replies - this can safely be done from multiple threads without any additional synchronisation (as you are using UDP).
Send the reply on the same socket (the one you received the client's request on) instead of creating a new one,
but make sure you don't send on the same socket from both threads simultaneously.
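
Both answers amount to the same thing: reply from the socket that is already bound to port 3000. A minimal sketch with the plain BSD socket API (ASIO's receive_from/send_to on a single udp::socket follow the same pattern; the names and sizes here are illustrative):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

/* One bound UDP socket handles both the request and the reply, so the
 * reply's source port is the port the server listens on (e.g. 3000). */
static void serve_one(int sockfd)
{
    char req[1500], reply[] = "ok";
    struct sockaddr_in client;
    socklen_t clen = sizeof(client);

    ssize_t n = recvfrom(sockfd, req, sizeof(req), 0,
                         (struct sockaddr *)&client, &clen);
    if (n < 0)
        return;

    /* Hand req off to a worker thread if desired; the worker can call
     * sendto() on the same sockfd, no extra socket needed. */
    sendto(sockfd, reply, sizeof(reply) - 1, 0,
           (struct sockaddr *)&client, clen);
}
```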

How buffering works with sockets on Linux

How does buffering work with sockets on Linux?
I.e., if the server does not read from the socket and the client keeps sending data.
What will happen? How big is the socket's buffer? And will the client know, so that it can stop sending?
For a UDP socket, the client will never know: the server side will just start dropping packets once the receive buffer is full.
TCP, on the other hand, implements flow control. The server's kernel will gradually shrink the window, so the client will be able to send less and less data. At some point the window goes down to zero. At that point the client fills up its own send buffer, and send(2) either blocks (blocking socket) or fails with EAGAIN (non-blocking socket).
TCP sockets use buffering in the protocol stack. The stack itself implements flow control so that if the server's buffer is full, it will stop the client stack from sending more data. Your code will see this as a blocked call to send(). The buffer size can vary widely from a few kB to several MB.
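
You can query the buffer sizes the kernel actually gave a socket with getsockopt(); a minimal sketch (the helper name is illustrative):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Print the kernel's current send and receive buffer sizes for a socket. */
static void print_buf_sizes(int sockfd)
{
    int sndbuf = 0, rcvbuf = 0;
    socklen_t len = sizeof(int);

    getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    len = sizeof(int);
    getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);

    printf("SO_SNDBUF = %d bytes, SO_RCVBUF = %d bytes\n", sndbuf, rcvbuf);
}
```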
I'm assuming that you're using send() and recv() for client and server communication.
So, send() will return the number of bytes that have actually been sent. This isn't necessarily equal to the number of bytes you wanted to send, so it's up to you to notice this and send the rest.
recv() returns the number of bytes read into the buffer; if recv() returns 0, the other side has closed the connection.
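
A typical way to handle the partial-send case is a small loop that keeps calling send() until the whole buffer has been handed to the kernel (a minimal sketch):

```c
#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling send() until every byte has been handed to the kernel.
 * Returns 0 on success, -1 on error (caller can inspect errno). */
static int send_all(int sockfd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(sockfd, buf + off, len - off, 0);
        if (n < 0)
            return -1;
        off += (size_t)n;
    }
    return 0;
}
```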
