Why does the TCP_NODELAY option have no effect on Linux 3.2.40?

Why can't the following code turn off the Nagle algorithm?
int on = 1;
int ret = 0;
ret = setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
My program runs on Linux kernel 3.2.40 (possibly modified).
Thanks!
The following picture is a packet capture from Wireshark; please focus on the red rectangle:
the strings BP01_2_S_1, BP01_3_S_1 and BP01_4_S_1 (sorry, I can't include the angle brackets)
should be contained in 3 separate TCP packets (I call `send` 3 times), but they all arrive in the same packet, which is not what I want.
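One way to double-check that the option is actually being applied (a sketch, not from my real program) is to test the setsockopt() return value and read the option back with getsockopt():
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Returns 0 if TCP_NODELAY was set and reads back as enabled, -1 otherwise. */
static int enable_nodelay(int sockfd)
{
    int on = 1;
    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0) {
        fprintf(stderr, "setsockopt(TCP_NODELAY): %s\n", strerror(errno));
        return -1;
    }

    int flag = 0;
    socklen_t len = sizeof(flag);
    if (getsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, &len) < 0 || flag == 0) {
        fprintf(stderr, "TCP_NODELAY did not take effect\n");
        return -1;
    }
    return 0;
}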

Related

Mac OS X: recvmsg returns EMSGSIZE when sending fd's via Unix domain datagram socket

I have a piece of code that uses Unix domain sockets and sendmsg/recvmsg to send fd's between two processes. This code needs to run on both Linux and Mac (it is compiled separately for both platforms). I'm using SOCK_DGRAM (datagram) sockets.
I send one fd at a time in my code. On Mac, after sending a couple of fd's successfully this way, recvmsg() fails with EMSGSIZE. According to the manpage for recvmsg, this can only happen if msg->msg_iovlen <= 0 or >= a constant (2048 on Mac). In my code I've pegged msg_iovlen to 1 always; I verified this on the sender and the receiver, and also by reading the message header right after recvmsg() faults. This same code works fine on Linux.
Another possibility, from looking at the XNU kernel source, is that the receiver could have run out of fd's, but I've only sent 4 or 5 fd's before the error happens so there should be plenty of fd's left.
If I don't send fd's and only send data, this error does not occur.
Here's what the code that's packing the control message looks like:
// *obj is the fd, objSize is sizeof(*obj)
// cmsg was allocated earlier as a 512 byte buffer
cmsgLength = CMSG_LEN(objSize);
cmsgSpace = CMSG_SPACE(objSize);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = cmsgLength;
memcpy(CMSG_DATA(cmsg), obj, objSize);
msg->msg_control = cmsg;
msg->msg_controllen = cmsgSpace;
And here's the receiver:
msg = (struct msghdr *)pipe->msg;
iov = msg->msg_iov;
iov->iov_base = buf;
iov->iov_len = size;
// msg->msg_control was set earlier
msg->msg_controllen = 512;
return recvmsg(sockFd, msg, 0);
Any clues?
Thanks in advance
Are you actually using the cmsg stuff that you are receiving? I notice that you set msg_controllen to 512. What have you set msg_flags to?
Would you be able to try the same thing with the following addition? Note that the memset clears msg_iov and msg_control, so they have to be re-attached before calling recvmsg().
msg = (struct msghdr *)pipe->msg;
iov = msg->msg_iov;                      /* save the iovec pointer first */
memset(msg, 0, sizeof(struct msghdr));   /* added this: clears stale msg_flags and lengths */
msg->msg_iov = iov;                      /* re-attach what the memset cleared */
msg->msg_iovlen = 1;
iov->iov_base = buf;
iov->iov_len = size;
// re-set msg->msg_control here as well, since the memset cleared it
msg->msg_controllen = 512;
return recvmsg(sockFd, msg, 0);
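For comparison, a minimal, self-contained way to pack a single fd with the CMSG macros might look like the sketch below; the union keeps the control buffer correctly aligned. The variable and function names here are illustrative, not taken from your code:
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one file descriptor over a connected Unix domain socket.
 * Returns the result of sendmsg(). */
static ssize_t send_one_fd(int sock, int fd_to_send)
{
    char dummy = 'x';                       /* at least one byte of payload */
    struct iovec iov = { &dummy, 1 };

    union {                                 /* correctly aligned control buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0);
}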

TCP stream relay [closed]

I have two programs communicating over TCP/IP, and this works fine:
program A opens a TCP socket.
program B connects to this socket.
program B gives data to program A.
Everything works very well.
But when I put a relay between A and B
and just pass the byte stream along, something goes wrong.
program C opens two TCP sockets (socket1, socket2).
program B connects to one of them (socket1).
program A connects to the other (socket2).
program C relays the TCP stream from B to A with the following code
(this runs on Linux):
char buf[BUFSIZE];
while (1) {
    // recv a packet segment
    if (my_recv(socket1, buf, BUFSIZE) <= 0) {
        return 0;
    }
    if (send(socket2, buf, BUFSIZE, MSG_NOSIGNAL) != BUFSIZE) {
        return 0;
    }
}
my_recv is a wrapper around recv that guarantees it receives the requested size.
int my_recv(int sd, char *p, unsigned int len){
    // recv a packet segment
    unsigned int ssize = 0;
    int d;
    while (ssize < len) {
        if ((d = recv(sd, p + ssize, len - ssize, 0)) <= 0) {
            return -1;
        }
        ssize += d;
    }
    return ssize;
}
This works fine at first, but after a few seconds everything is messed up.
I have debugged programs A and B and there is nothing wrong with them:
they send and recv in the correct order, but it seems that the relay passes on bad data...
Some advice would be appreciated.
Thank you in advance.
You are assuming each read fills the buffer. You have to use the count returned by the recv() call as the length argument to the send() call.
while ((count = recv(fd, buffer, sizeof buffer, 0)) > 0)
    send(fd, buffer, count, 0);
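Put back into the relay, the loop should forward only the bytes that were actually received, and also handle short sends on the outgoing socket. A sketch (send_all is a helper introduced here, not part of the original code):
// Forward exactly what recv() returned on socket1 to socket2.
// send_all() loops because send() may also accept fewer bytes than asked for.
static int send_all(int sd, const char *p, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sd, p + sent, len - sent, MSG_NOSIGNAL);
        if (n <= 0)
            return -1;
        sent += (size_t)n;
    }
    return 0;
}

char buf[BUFSIZE];
ssize_t count;
while ((count = recv(socket1, buf, sizeof(buf), 0)) > 0) {
    if (send_all(socket2, buf, (size_t)count) < 0)
        break;  // error or peer closed on the outgoing side
}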

Crafting an ICMP packet inside a Linux kernel Module

I'm trying to experiment with the ICMP protocol and have created a kernel module for Linux that analyses ICMP packets (it processes a packet only if the ICMP code field is a magic number). Now, to test this module, I have to craft an ICMP packet and send it to the host where the analysing module is running. It would be nice if I could implement the crafting in the kernel itself (as a module). I am looking for something like a packet crafter in the kernel; I googled and found a lot of articles explaining the lifetime of a packet, rather than tutorials on creating one. User-space packet crafters would be my last resort, and only highly flexible ones where I'll be able to set the ICMP code, etc. And I'm not wary of kernel panics :-) Any packet-crafting ideas are welcome.
Sir, I strongly advise you against using a kernel module to build ICMP packets.
You can use user-space raw sockets to craft ICMP packets, and even build the IP header itself byte by byte.
So you can get about as flexible as it gets using that.
Please take a look at this:
ip = (struct iphdr*) packet;
icmp = (struct icmphdr*) (packet + sizeof(struct iphdr));
/*
* here the ip packet is set up except checksum
*/
ip->ihl = 5;
ip->version = 4;
ip->tos = 0;
ip->tot_len = sizeof(struct iphdr) + sizeof(struct icmphdr);
ip->id = htons(random());
ip->ttl = 255;
ip->protocol = IPPROTO_ICMP;
ip->saddr = inet_addr(src_addr);
ip->daddr = inet_addr(dst_addr);
if ((sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP)) == -1)
{
    perror("socket");
    exit(EXIT_FAILURE);
}
/*
* IP_HDRINCL must be set on the socket so that
* the kernel does not attempt to automatically add
* a default ip header to the packet
*/
setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, &optval, sizeof(int));
/*
* here the icmp packet is created
* also the ip checksum is generated
*/
icmp->type = ICMP_ECHO;
icmp->code = 0;
icmp->un.echo.id = 0;
icmp->un.echo.sequence = 0;
icmp->checksum = 0;
icmp->checksum = in_cksum((unsigned short *)icmp, sizeof(struct icmphdr));
ip->check = in_cksum((unsigned short *)ip, sizeof(struct iphdr));
If this part of the code looks flexible enough, then read about raw sockets :D Maybe they're the easiest and safest answer to your need.
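The snippet above only builds the headers; to actually put the packet on the wire you still need a sendto() call, roughly like this (packet, dst_addr and sockfd are the same names as above; the sockaddr setup is my addition, and it needs <string.h> and <arpa/inet.h>):
struct sockaddr_in dst;
memset(&dst, 0, sizeof(dst));
dst.sin_family = AF_INET;
dst.sin_addr.s_addr = inet_addr(dst_addr);

/* With IP_HDRINCL set, the kernel transmits the IP header we built instead of generating its own. */
size_t pkt_len = sizeof(struct iphdr) + sizeof(struct icmphdr);
if (sendto(sockfd, packet, pkt_len, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
{
    perror("sendto");
    exit(EXIT_FAILURE);
}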
Please check the following links for further info
http://courses.cs.vt.edu/~cs4254/fall04/slides/raw_6.pdf
http://www.cs.binghamton.edu/~steflik/cs455/rawip.txt
http://cboard.cprogramming.com/networking-device-communication/107801-linux-raw-socket-programming.html a very nice topic, pretty useful imo
You can try libcrafter for packet crafting in user space. It is very easy to use! The library is able to craft or decode packets of most common network protocols, send them on the wire, capture them, and match requests with replies.
For example, the next code craft and send an ICMP packet:
string MyIP = GetMyIP("eth0");
/* Create an IP header */
IP ip_header;
/* Set the Source and Destination IP address */
ip_header.SetSourceIP(MyIP);
ip_header.SetDestinationIP("1.2.3.4");
/* Create an ICMP header */
ICMP icmp_header;
icmp_header.SetType(ICMP::EchoRequest);
icmp_header.SetIdentifier(RNG16());
/* Create a packet... */
Packet packet = ip_header / icmp_header;
packet.Send();
Why do you want to craft an ICMP packet in kernel space? Just for fun? :-p
The Linux kernel includes a packet generator tool, pktgen, for testing the network with pre-configured packets. The source code for this module resides in net/core/pktgen.c.

Linux termios VTIME not working?

We've been bashing our heads against this one all morning. We've got some serial lines set up between an embedded Linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) reads per message instead of one read per actual message sent.
Here is the code that opens the serial port. InterCharTime is set to 4.
void COMClass::openPort()
{
    struct termios tio;
    this->fd = -1;
    int tempFD;
    tempFD = open(port, O_RDWR | O_NOCTTY);
    if (tempFD < 0)
    {
        cerr << "the port is not opened: " << port << "\n";
        portOpen = 0;
        return;
    }
    tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
    tio.c_oflag = 0;
    tio.c_iflag = IGNPAR;
    newtio.c_cc[VTIME] = InterCharTime;
    newtio.c_cc[VMIN] = readBufferSize;
    newtio.c_lflag = 0;
    tcflush(tempFD, TCIFLUSH);
    tcsetattr(tempFD, TCSANOW, &tio);
    this->fd = tempFD;
    portOpen = true;
}
The other end is configured similarly for communication, and has one small section of particular interest:
while (1)
{
    sprintf(out, "\r\nHello world %lu", ++ulCount);
    puts(out);
    WritePort((BYTE *)out, strlen(out) + 1);
    sleep(2);
} //while
Now, when I run a read thread on the receiving machine, "hello world" is usually broken up over a couple of messages. Here is some sample output:
1: Hello
2: world 1
3: Hello
4: world 2
5: Hello
6: world 3
where a number followed by a colon is one message received. Can you see any error we are making?
Thank you.
Edit:
For clarity, please see section 3.2 of the Linux Serial Programming HOWTO. To my understanding, with a VTIME of a couple of seconds (meaning VTIME is set anywhere between 10 and 50, by trial and error) and a VMIN of 1, there should be no reason for the message to be broken up over two separate messages.
I don't see why you are surprised.
You are asking for at least one byte. If your read() is asking for more, which seems probable since you are surprised you aren't getting the whole string in a single read, it can get whatever data is available up to the read() size. But all the data isn't available in a single read so your string is chopped up between reads.
In this scenario the timer doesn't really matter. The timer won't be set until at least one byte is available. But you have set the minimum at 1. So it just returns whatever number of bytes ( >= 1) are available up to read() size bytes.
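If the goal is one read per logical message, one approach is to accumulate bytes yourself and split on the terminator: since the sender writes strlen(out)+1 bytes, every message ends in a '\0'. A rough sketch (handle_message and the buffer size are mine, purely illustrative):
#include <string.h>
#include <unistd.h>

// Accumulate serial data and hand out one NUL-terminated message at a time.
// Assumes each message is shorter than the buffer.
char acc[1024];
size_t used = 0;

for (;;) {
    ssize_t n = read(fd, acc + used, sizeof(acc) - used);
    if (n <= 0)
        break;                          // error or EOF
    used += (size_t)n;

    char *end;
    while ((end = (char *)memchr(acc, '\0', used)) != NULL) {
        handle_message(acc);            // one complete "\r\nHello world N" string
        size_t msg_len = (size_t)(end - acc) + 1;
        memmove(acc, acc + msg_len, used - msg_len);
        used -= msg_len;
    }
}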
If you are still experiencing this problem (realizing the question is old), and your code is accurate: you are setting VTIME and VMIN in the newtio struct, and the rest of the parameters in the tio struct, which is the one actually passed to tcsetattr.
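In other words, the VTIME and VMIN values never reach the driver, because tcsetattr() is called with &tio. A corrected sketch of that part of openPort(), using a single struct throughout (the memset is my addition, so no field is left as stack garbage):
struct termios tio;
memset(&tio, 0, sizeof(tio));           // my addition: start from a known state

tio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD;
tio.c_oflag = 0;
tio.c_iflag = IGNPAR;
tio.c_lflag = 0;
tio.c_cc[VTIME] = InterCharTime;        // inter-character timer, tenths of a second
tio.c_cc[VMIN]  = readBufferSize;       // minimum bytes before read() may return

tcflush(tempFD, TCIFLUSH);
tcsetattr(tempFD, TCSANOW, &tio);       // the VTIME/VMIN above are now actually applied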

Transferring an Image using TCP Sockets in Linux

I am trying to transfer an image using TCP sockets on Linux. I have used this code many times to transfer small amounts of data, but as soon as I tried to transfer the image it only transferred the first third. Is it possible that there is a maximum buffer size for TCP sockets in Linux? If so, how can I increase it? Is there a function that does this programmatically?
I would guess that the problem is on the receiving side when you read from the socket. TCP is a stream-based protocol with no notion of packets or message boundaries.
This means that when you do a read you may get fewer bytes than you request. If your image is 128k, for example, you may only get 24k on your first read, requiring you to read again to get the rest of the data. The fact that it's an image is irrelevant. Data is data.
For example:
int read_image(int sock, int size, unsigned char *buf) {
    int bytes_read = 0, len = 0;
    while (bytes_read < size &&
           (len = recv(sock, buf + bytes_read, size - bytes_read, 0)) > 0) {
        bytes_read += len;
    }
    if (len == 0 || len < 0) doerror();
    return bytes_read;
}
TCP sends the data in pieces, so you're not guaranteed to get it all at once with a single read (although it is guaranteed to stay in the order you send it). You basically have to read multiple times until you get all the data. The receiver also doesn't know how much data was sent. Normally, you send a fixed-size "length" field first (always 8 bytes, for example) so the receiver knows how much data there is, then keep reading and building a buffer until you get that many bytes.
So the sender would look something like this (pseudocode)
int imageLength;
char *imageData;
// set imageLength and imageData
send(&imageLength, sizeof(int));
send(imageData, imageLength);
And the receiver would look like this (pseudocode)
int imageLength;
char *imageData;
guaranteed_read(&imageLength, sizeof(int));
imageData = new char[imageLength];
guaranteed_read(imageData, imageLength);
void guaranteed_read(char *destBuf, int length)
{
    int totalRead = 0, numRead;
    while (totalRead < length)
    {
        int remaining = length - totalRead;
        numRead = read(&destBuf[totalRead], remaining);
        if (numRead > 0)
        {
            totalRead += numRead;
        }
        else
        {
            // error reading from socket
        }
    }
}
Obviously I left off the actual socket descriptor and you need to add a lot of error checking to all of that. It wasn't meant to be complete, more to show the idea.
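Note that send() can come up short in exactly the same way, so the sending side usually wants the mirror-image loop (again a sketch, with real error handling left out):
// Mirror image of guaranteed_read: keep calling send() until every byte
// has gone out, since send() may accept fewer bytes than requested.
int guaranteed_write(int sock, const char *srcBuf, int length)
{
    int totalSent = 0;
    while (totalSent < length) {
        int numSent = send(sock, srcBuf + totalSent, length - totalSent, 0);
        if (numSent <= 0)
            return -1;                  // error or connection closed
        totalSent += numSent;
    }
    return totalSent;
}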
The maximum size of a single IP packet is 65,535 bytes, which is extremely close to the number you are hitting. I doubt that is a coincidence.
