I have a Linux-based wireless network application with hard latency requirements: a data packet that arrives late is no better than no data at all, and old data sitting in the socket only delays newer data.
I'm looking for a way, on the transmit side, to detect that the socket is piling up with old data and, in that case, to flush/discard it and send fresh data.
I know the receiver can read all of the old data and discard it. However, that still means playing catch-up and doesn't really speed things up the way a sender-side option would.
Thanks for any help.
Try turning off the Nagle algorithm on the sending side. This algorithm tries to collect smaller packets into larger ones to reduce overhead, so turning it off can mean each packet is sent immediately. Here's a code snippet to turn it off:
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable the Nagle algorithm so small writes are sent immediately. */
void noNagle(int socket)
{
    int on = 1;

    if (setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
        bye("Can not disable Nagle algorithm\n");   /* bye() is the caller's error handler */
}
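If you also want to detect on the sending side that data is piling up before it has left the machine, a minimal sketch of one way to do that (my own suggestion, not part of the answer above, assuming Linux and the SIOCOUTQ ioctl documented in tcp(7) and udp(7)) would be:

#include <linux/sockios.h>   /* SIOCOUTQ */
#include <sys/ioctl.h>

/* Returns the number of bytes still queued in the socket's send buffer,
 * or -1 on error. The application can decide to skip or replace stale
 * data when this value grows beyond some threshold. */
int unsent_bytes(int fd)
{
    int outq = 0;

    if (ioctl(fd, SIOCOUTQ, &outq) < 0)
        return -1;
    return outq;
}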
I am writing an application that sends ICMP packets in parallel and receives the replies. To help with the parallelism and synchronization, I have designed it with multiple writers (and sockets) and a single reader.
Let's say I have 256 writers and one reader, which means I have created 257 raw sockets. From what I have learned, because raw sockets work below the transport level, the kernel copies every response from the recipients to all raw sockets. Even though I am able to filter or discard them, I don't want the 256 writer sockets to receive all this data from the kernel and waste resources on it (imagine even more writers). I also don't know whether a large number of raw sockets is a burden on the kernel; I couldn't find any information about that, so I could use help in that direction too.
I want to prevent the writer raw sockets from receiving any data at all, even though letting their buffers fill up and having the kernel drop packets is an option.
What didn't help me:
close vs shutdown socket? (my research shows shutdown doesn't work with connectionless sockets)
create SOCK_RAW socket just for sending data without any recvfrom() (decreasing the receive buffer size to 0 doesn't seem to have the desired effect, and the Unix documentation mentions the minimum is 256 bytes; the goal is to prevent the kernel from ever considering the writer sockets for received data - see the sketch below)
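One approach sometimes suggested for write-only raw sockets (my own sketch, not something from the question, so treat it as an assumption) is to attach a classic BPF program that rejects every packet, so the kernel never queues received data on those sockets at all:

#include <linux/filter.h>
#include <sys/socket.h>

/* Attach a one-instruction BPF filter that accepts 0 bytes of every
 * packet, i.e. drops everything before it reaches the socket queue. */
static int drop_all_rx(int fd)
{
    struct sock_filter code[] = {
        { BPF_RET | BPF_K, 0, 0, 0 },
    };
    struct sock_fprog prog = {
        .len    = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    return setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog));
}

Calling this on each writer socket right after creating it should keep its receive queue permanently empty while leaving the single reader socket untouched.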
I have a network client which is stuck in recvfrom a server not under my control which, after 24+ hours, is probably never going to respond. The program has processed a great deal of data, so I don't want to kill it; I want it to abandon the current connection and proceed. (It will do so correctly if recvfrom returns EOF or -1.) I have already tried several different programs that purport to be able to disconnect stale TCP channels by forging RSTs (tcpkill, cutter, killcx); none had any effect, the program remained stuck in recvfrom. I have also tried taking the network interface down; again, no effect.
It seems to me that there really should be a way to force a disconnect at the socket-API level without forging network packets. I do not mind horrible hacks, up to and including poking kernel data structures by hand; this is a disaster-recovery situation. Any suggestions?
(For clarity, the TCP channel at issue here is in ESTABLISHED state according to lsof.)
I do not mind horrible hacks
That's all you have to say. I am guessing the tools you tried didn't work because they sniff traffic to get an acceptable ACK number to kill the connection. Without traffic flowing they have no way to get hold of it.
Here are things you can try:
Probe all the sequence numbers
Where those tools failed, you can still do it yourself. Write a simple Python script with scapy that, for each candidate sequence number, sends an RST segment with the correct 4-tuple (ports and addresses). There are at most 4 billion sequence numbers to try (actually far fewer, assuming a decent window - you can find out the window for free using ss -i).
Make a kernel module to get hold of the socket
Write a kernel module that walks the list of TCP sockets: look for sk_nulls_for_each(sk, node, &tcp_hashinfo.ehash[i].chain)
Identify your victim sk
At this point you have direct access to your socket, so:
You can call tcp_reset or tcp_disconnect on it. You won't be able to call tcp_reset directly (since it doesn't have EXPORT_SYMBOL), but you should be able to mimic it: most of the functions it calls are exported
Or you can get the expected ACK number from tcp_sk(sk) and directly forge an RST packet with scapy
Here is a function I use to print established sockets - I scrounged bits and pieces from the kernel to put it together some time ago:
#include <net/inet_hashtables.h>

#define NIPQUAD(addr) \
    ((unsigned char *)&addr)[0], \
    ((unsigned char *)&addr)[1], \
    ((unsigned char *)&addr)[2], \
    ((unsigned char *)&addr)[3]
#define NIPQUAD_FMT "%u.%u.%u.%u"

extern struct inet_hashinfo tcp_hashinfo;

/* Decides whether a bucket has any sockets in it. */
static inline bool empty_bucket(int i)
{
    return hlist_nulls_empty(&tcp_hashinfo.ehash[i].chain);
}

void print_tcp_socks(void)
{
    int i = 0;
    struct inet_sock *inet;

    /* Walk hash array and lock each if not empty. */
    printk("Established ---\n");
    for (i = 0; i <= tcp_hashinfo.ehash_mask; i++) {
        struct sock *sk;
        struct hlist_nulls_node *node;
        spinlock_t *lock = inet_ehash_lockp(&tcp_hashinfo, i);

        /* Lockless fast path for the common case of empty buckets */
        if (empty_bucket(i))
            continue;

        spin_lock_bh(lock);
        sk_nulls_for_each(sk, node, &tcp_hashinfo.ehash[i].chain) {
            if (sk->sk_family != PF_INET)
                continue;
            inet = inet_sk(sk);
            printk(NIPQUAD_FMT ":%hu ---> " NIPQUAD_FMT ":%hu\n",
                   NIPQUAD(inet->inet_saddr), ntohs(inet->inet_sport),
                   NIPQUAD(inet->inet_daddr), ntohs(inet->inet_dport));
        }
        spin_unlock_bh(lock);
    }
}
You should be able to drop this into a simple "Hello World" module, and after insmod-ing it you will see the sockets in dmesg (much like ss or netstat).
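Once you have identified the victim sk inside that loop, a sketch of how you might read the sequence numbers needed to forge the RST (an assumption on my part; the struct tcp_sock field names can differ between kernel versions) is:

#include <net/tcp.h>

/* Hypothetical helper: dump the victim socket's sequence numbers so an
 * RST with a valid ACK number can be forged from user space (e.g. with scapy). */
static void dump_seq(struct sock *sk)
{
    struct tcp_sock *tp = tcp_sk(sk);

    printk("snd_nxt=%u rcv_nxt=%u\n", tp->snd_nxt, tp->rcv_nxt);
}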
I understand that what you want is to automate the process for testing. But if you just want to check the correct handling of the recvfrom error, you could attach with GDB and close the fd with a close() call.
Here you can see an example.
Another option is to use scapy to craft proper RST packets (which is not in your list). This is how I tested connection RSTs in a bridged system (IMHO it is the best option); you could also implement a graceful shutdown.
Here is an example of the scapy script.
Suppose the send (or write) buffer is about to become full, say there are only 500 bytes of space left. If I have a NONBLOCKING fd and do
n = send(fd, buf, 1000, 0);
I will get n < 0 and errno will be EWOULDBLOCK or EAGAIN. My questions are:
1. Does the send copy 500 bytes into the send buffer, or 0 bytes?
2. If 500 bytes are copied into the buffer and the fd is a UDP socket, is the datagram split into 2 parts?
3. I need to use the fd to send many datagrams. If the send buffer is full at that moment (i.e., I get EWOULDBLOCK or EAGAIN), I have to keep a pending list of datagrams (a FIFO queue), and every time I want to send a datagram I first have to check whether the pending list is empty; if it is not, the queued datagrams must be sent first. This design seems a bit troublesome to me - it effectively extends the kernel send buffer (BTW, is it in the kernel?) with a user-space pending list. Is there a better solution?
Thanks!
The below only applies to UDP (and only on Linux, though others are similar), but that seems to be what you're asking about.
Setting non-blocking mode on a UDP socket is completely irrelevant (for sending) as a send will never block -- it immediately sends the packet, without any buffering.
It IS possible (if the machine is very busy) for there to be a buffer-space problem (running out of transient packet buffers for packet processing), but in that case the call will return ENOBUFS, regardless of whether the fd is blocking or non-blocking. This should be extremely rare.
There's a potential problem if you're generating packets faster than the network can take them (fairly easy to do on a fast machine and a 10Mbit ethernet port), in which case the kernel will start dropping the outgoing packets. Unfortunately there's no easy way to detect when this happens (you can check the interface for TX dropped packets, but that won't tell you which packets were dropped).
It's also possible to have a problem if you use the UDP_CORK socket option, which buffers data written to the socket instead of sending a packet, and only sends a single packet when the CORK option is unset. In this case, if the buffer grows too big you'll get EMSGSIZE (and again, the NONBLOCKING setting is irrelevant).
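For illustration, a minimal sketch of the UDP_CORK pattern just described (my own example, assuming Linux, a connected UDP socket, and UDP_CORK as declared in <netinet/udp.h>):

#include <netinet/in.h>
#include <netinet/udp.h>   /* UDP_CORK */
#include <sys/socket.h>

void send_two_chunks_as_one(int fd, const char *a, size_t alen,
                            const char *b, size_t blen)
{
    int on = 1, off = 0;

    /* While corked, writes accumulate in the kernel instead of going out. */
    setsockopt(fd, IPPROTO_UDP, UDP_CORK, &on, sizeof(on));
    send(fd, a, alen, 0);
    send(fd, b, blen, 0);
    /* Uncorking flushes everything as one datagram (EMSGSIZE if it grew too big). */
    setsockopt(fd, IPPROTO_UDP, UDP_CORK, &off, sizeof(off));
}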
If you are talking about UDP, you are completely off point here - for UDP the value of the SO_SNDBUF socket option puts a limit on the size of a datagram you can send. In other words, there's no real per-socket send buffer (though data is still queued in the kernel to be sent out by the appropriate network controller). You would get EMSGSIZE if you try to send(2) more in one shot.
For TCP though, you would only get EWOULDBLOCK when there's no space in the send buffer at all, i.e. no data has been copied from user space to the kernel. Otherwise send(2)'s return value tells you exactly how many bytes have been copied.
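As a rough illustration of the TCP case (my own sketch, not part of the answer above): a non-blocking send either copies some prefix of the data into the kernel or nothing at all, so the caller can tell exactly how much remains to be queued in user space:

#include <errno.h>
#include <sys/socket.h>

/* Returns the number of bytes the kernel accepted (possibly fewer than len),
 * 0 if the send buffer was completely full, or -1 on a real error. */
ssize_t send_some(int fd, const char *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, 0);

    if (n >= 0)
        return n;                             /* full or partial copy */
    if (errno == EWOULDBLOCK || errno == EAGAIN)
        return 0;                             /* nothing copied; queue it all */
    return -1;
}

Whatever send_some() did not accept goes onto the application's pending FIFO and is retried when poll()/select() reports the socket writable again.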
I am currently running an old system on Tru64 which involves lots of UDP sockets using the sendto() function. The sockets are used in our code to send messages to/from various processes and then eventually on to a thick client app that is connected remotely. Occasionally the socket to the thick client gets stuck, which can cause some of these messages to build up. My questions are: how can I determine the current buffer size, and how do I determine the maximum message buffer size? The code below gives a snippet of how I set up the port and use the sendto function.
/* Need to adjust the maximum size we can send on this socket, */
/* as it needs to be able to cope with the biggest messages we send. */
/* Allow double for when the system is under load. */
int lenlen, len ;

lenlen = sizeof(len) ;
len = 2 * 32000;

msg_socket = socket( AF_UNIX, SOCK_DGRAM, 0);
result = setsockopt(msg_socket, SOL_SOCKET, SO_SNDBUF, (char *)&len, lenlen) ;

result = sendto( msg_socket,
                 (char *)message,
                 (int)message_len,
                 flags,
                 dest_addr,
                 addrlen);
Note. We have ported this application to Linux and the problem does not seem to appear there.
Any help would be greatly appreciated.
Regards
UDP send buffer size is different from TCP - it just limits the size of the datagram. Quoting Stevens UNP Vol. 1:
...
A UDP socket has a send buffer size (which we can change with SO_SNDBUF socket option, Section 7.5), but this is simply an upper limit on the maximum-sized UDP datagram that can be written to the socket. If an application writes a datagram larger than the socket send buffer size, EMSGSIZE is returned. Since UDP is unreliable, it does not need to keep a copy of the application's data and does not need an actual send buffer. (The application data is normally copied into a kernel buffer of some form as it passes down the protocol stack, but this copy is discarded by the datalink layer after the data is transmitted.)
UDP simply prepends an 8-byte header and passes the datagram to IP. IPv4 or IPv6 prepends its header, determines the outgoing interface by performing the routing function, and then either adds the datagram to the datalink output queue (if it fits within the MTU) or fragments the datagram and adds each fragment to the datalink output queue. If a UDP application sends large datagrams (say 2,000-byte datagrams), there's a much higher probability of fragmentation than with TCP, because TCP breaks the application data into MSS-sized chunks, something that has no counterpart in UDP.
The successful return from write to a UDP socket tells us that either the datagram or all fragments of the datagram have been added to the datalink output queue. If there is no room on the queue for the datagram or one of its fragments, ENOBUFS is often returned to the application.
Unfortunately, some implementations do not return this error, giving the application no indication that the datagram was discarded without even being transmitted.
The last footnote needs attention - but it looks like Tru64 has this error code listed in the manual page.
The proper way of doing it, though, is to queue your outstanding messages in the application itself and to carefully check return values and errno after each system call. This still does not guarantee delivery (since UDP receivers might drop packets without any notice to the senders). Check the UDP packet discard counters with netstat -s on both/all sides and see if they are growing. There is really no way around this besides switching to TCP or implementing your own timeout/ack and re-transmission logic.
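A minimal sketch of that kind of per-call checking (my own illustration, using the error codes discussed above):

#include <errno.h>
#include <sys/socket.h>

/* Returns 0 if the datagram was handed to the stack, 1 if the caller should
 * queue it and retry later, -1 on an error that retrying as-is cannot fix. */
int try_sendto(int fd, const void *msg, size_t len,
               const struct sockaddr *dst, socklen_t dlen)
{
    if (sendto(fd, msg, len, 0, dst, dlen) >= 0)
        return 0;
    if (errno == ENOBUFS || errno == EWOULDBLOCK || errno == EAGAIN)
        return 1;    /* transient: keep the message in an application queue */
    if (errno == EMSGSIZE)
        return -1;   /* datagram larger than the SO_SNDBUF limit */
    return -1;
}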
You should probably be using some sort of congestion control to avoid overloading the network. By far the easiest way to do this is to use TCP instead of UDP.
It fails less often on Linux because there UDP sockets wait for space in the local network interface queue (unless you set them non-blocking). However, with any operating system, if the overfull queue is not in the local system, the packet will be dropped silently.
I have an application that receives relatively sparse traffic over TCP with no application-level responses. I believe the TCP stack is sending delayed ACKs (based on glancing at a network packet capture). What is the recommended way to disable delayed-ACK in the network stack for a single socket? I've looked at TCP_QUICKACK, but it seems that the stack will change it under my feet anyways.
This is running on a Linux 2.6 kernel, and I am not worried about portability.
You could setsockopt(sockfd, IPPROTO_TCP, TCP_QUICKACK, (int[]){1}, sizeof(int)) after every recv you perform. It appears that TCP_QUICKACK is only reset when there is data being sent or received; if you're not sending any data, then it will only get reset when you receive data, in which case you can simply set it again.
You can check this in the 14th field of /proc/net/tcp; if it is not 1, ACKs should be sent immediately... if I'm reading the TCP code correctly. (I'm not an expert at this either.)
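A minimal sketch of re-arming the option after every read, as described above (assuming Linux, where TCP_QUICKACK is declared in <netinet/tcp.h>):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Wrap recv() and re-enable TCP_QUICKACK afterwards, since the kernel
 * may clear the flag once data has been received. */
ssize_t recv_quickack(int fd, void *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    int one = 1;

    (void)setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    return n;
}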
I believe that using the setsockopt() function you can set TCP_NODELAY, which will disable the Nagle algorithm.
Edit: Found a link: http://www.ibm.com/developerworks/linux/library/l-hisock.html
Edit 2: Tom is correct. Nagle does not affect delayed ACKs.
The accepted answer has a bug; it should be
flags = 0;
flglen = sizeof(flags);
setsockopt(sfd, SOL_TCP, TCP_QUICKACK, &flags, flglen);
(SOL_TCP, not IPPROTO_TCP)
Posting here in case someone needs it.
As @damolp pointed out, it is not a bug; keeping it here for the record.