Delay in receiving SocketCAN messages - Linux

I am implementing a Linux application that receives CAN messages and calculates their period (using SocketCAN on a Raspberry Pi 4). The problem is that sometimes (about 0.5% of the time) SocketCAN delivers messages late. When I send messages every 10 ms at 500 kbit/s from my laptop (using a Vector tool), the Raspberry Pi normally measures a reasonable period (9 ms ~ 11 ms), but sometimes a message arrives after 15 ms ~ 16 ms (and the next one after 4 ms ~ 5 ms). The same thing happens even when I send only a single message, so bus load cannot be the reason. How can I resolve this issue?
Here is my source code:
wiringPiSetupSys();

/* open a raw CAN socket */
if ((s = socket(PF_CAN, SOCK_RAW, CAN_RAW)) < 0)
{
    perror("Socket");
    return 1;
}

/* look up the interface index of can0 and bind to it */
strcpy(ifr.ifr_name, "can0");
ioctl(s, SIOCGIFINDEX, &ifr);

memset(&addr, 0, sizeof(addr));
addr.can_family = AF_CAN;
addr.can_ifindex = ifr.ifr_ifindex;

if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
{
    perror("Bind");
    return 1;
}

/* receive frames and measure the period with a userspace timer */
while (1)
{
    nbytes = read(s, &frame, sizeof(struct can_frame));
    period = micros() - last_timer;
    last_timer = micros();
}

I think that to get the correct frame reception time, you need the frame's timestamp, not the system clock value.
You can get the exact timestamp with an ioctl call after reading the message from the socket:
struct timeval tv;
ioctl(s, SIOCGSTAMP, &tv);
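For example, here is a minimal sketch (assuming the socket s and the struct can_frame frame from the question) that derives the period from the kernel's receive timestamps instead of a userspace timer:
struct timeval tv, last_tv = {0, 0};
while (1)
{
    if (read(s, &frame, sizeof(frame)) < 0)
        break;
    /* ask the kernel when this frame was actually received */
    ioctl(s, SIOCGSTAMP, &tv);
    if (last_tv.tv_sec != 0)
    {
        long period_us = (tv.tv_sec - last_tv.tv_sec) * 1000000L
                       + (tv.tv_usec - last_tv.tv_usec);
        printf("period: %ld us\n", period_us);
    }
    last_tv = tv;
}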

Your CAN messages are received into a SocketCAN buffer, and they are not processed immediately, because Linux is a multitasking operating system and SocketCAN has to wait for its time slice to process the buffer and distribute the messages to all listening CAN applications. While you cannot avoid this delay (it depends on the current system load and the number of processes), you can ask SocketCAN to deliver receive timestamps (as #fantasista has answered) so you can determine the arrival time of each CAN message.
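A minimal sketch of that approach (assuming the raw CAN socket s from the question; SO_TIMESTAMP asks the kernel to attach the receive time to each message as ancillary data):
/* enable kernel receive timestamps on the socket */
const int on = 1;
setsockopt(s, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

struct can_frame frame;
union {
    struct cmsghdr hdr;                               /* forces correct alignment */
    char buf[CMSG_SPACE(sizeof(struct timeval))];
} ctrl;
struct iovec iov = { .iov_base = &frame, .iov_len = sizeof(frame) };
struct msghdr msg = {
    .msg_iov = &iov, .msg_iovlen = 1,
    .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
};

if (recvmsg(s, &msg, 0) > 0) {
    struct cmsghdr *c;
    for (c = CMSG_FIRSTHDR(&msg); c != NULL; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP) {
            struct timeval tv;
            memcpy(&tv, CMSG_DATA(c), sizeof(tv));
            /* tv is the kernel receive time of this frame */
        }
    }
}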

Related

Write to I2C I/O device

I am trying to talk to a Bosch Sensortec BNO055 sensor, using the shuttle board. VDD and VDDIO are connected to 3.3 V; pins 17 and 18 are SDA and SCL. These are connected to an embedded Linux board. Another sensor is on the same bus, and I can see its values on the scope.
I have the following code:
BNO055_RETURN_FUNCTION_TYPE Bno055I2cBusWrite(u8 dev_addr, u8 reg_addr, u8* reg_data, u8 wr_len){
    // According to https://www.kernel.org/doc/Documentation/i2c/dev-interface
    int file = 0;
    char filename[20];
    snprintf(filename, 19, "/dev/i2c-%d", ADAPTER_NR);
    if((file = open(filename, O_RDWR)) < 0){ /*error*/ }
    if(ioctl(file, I2C_SLAVE, dev_addr) < 0){ /*error*/ }
    // first byte on the wire is the register address, then the payload
    char buf[1 + wr_len];
    buf[0] = reg_addr;
    memcpy(&buf[1], reg_data, wr_len);
    if(write(file, buf, 1 + wr_len) != 1 + wr_len){
        printf("Error BusWrite-write: %s.\n", strerror(errno));
        exit(-1);
    }
}
The first two if-statements pass fine.
The write-operation fails. On the oscilloscope I see the correct device-address (which is then not acknowledged).
What I've done:
Added the device-address to buf (not covered in this code example).
Read and understood page 90-92 from the datasheet https://ae-bst.resource.bosch.com/media/products/dokumente/bno055/BST_BNO055_DS000_12~1.pdf
Soldered 1k8 ohm resistors to get steeper edges on clock and data
Made sure that the device address and the read/write bit are set correctly
My sensor just does not acknowledge when its device address appears on the line.
What exactly is done by ioctl(file, I2C_SLAVE, dev_addr)? Does it send out the device address on the I2C bus?
Does the Linux kernel send the device address by itself? I expect so.
To sum up:
Can someone point me in the right direction to let the sensor react?
Well... it just seemed that a wire to the scope interfered too much. And to answer my own question: the device address is sent by the driver when writing or reading.
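As a quick sanity check before debugging the write path, here is a minimal sketch (assuming the i2c-dev interface, the BNO055 default address 0x28, and that its CHIP_ID register 0x00 reads back 0xA0 per the datasheet) that verifies the sensor responds at all:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int bno055_check_id(void)
{
    int file = open("/dev/i2c-1", O_RDWR);  /* adapter number is a guess */
    if (file < 0)
        return -1;
    if (ioctl(file, I2C_SLAVE, 0x28) < 0) { /* 0x28: BNO055 default address */
        close(file);
        return -1;
    }
    unsigned char reg = 0x00;               /* CHIP_ID register */
    unsigned char id = 0;
    if (write(file, &reg, 1) != 1 || read(file, &id, 1) != 1) {
        close(file);
        return -1;
    }
    close(file);
    printf("chip id: 0x%02X (expected 0xA0)\n", id);
    return (id == 0xA0) ? 0 : -1;
}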

Mac OS X: recvmsg returns EMSGSIZE when sending fd's via Unix domain datagram socket

I have a piece of code that uses Unix domain sockets and sendmsg/recvmsg to send fd's between two processes. This code needs to run on both Linux and Mac (it is complied separately for both platforms). I'm using SOCK_DGRAM (datagram) sockets.
I send one fd at a time in my code. On Mac, after sending a couple of fd's successfully this way, recvmsg() fails with EMSGSIZE. According to the manpage for recvmsg, this can only happen if msg->msg_iovlen is <= 0 or >= a constant, which is 2048 on Mac. In my code I've pegged msg_iovlen to 1 always; I verified this on the sender and the receiver, and also by reading the message header right after recvmsg() faults. The same code works fine on Linux.
Another possibility, from looking at the XNU kernel source, is that the receiver could have run out of fd's, but I've only sent 4 or 5 fd's before the error happens so there should be plenty of fd's left.
If I don't send fd's and only send data, this error does not occur.
Here's what the code that's packing the control message looks like:
// *obj is the fd, objSize is sizeof(*obj)
// cmsg was allocated earlier as a 512 byte buffer
cmsgLength = CMSG_LEN(objSize);
cmsgSpace = CMSG_SPACE(objSize);
cmsg->cmsg_level = SOL_SOCKET;
cmsg->cmsg_type = SCM_RIGHTS;
cmsg->cmsg_len = cmsgLength;
memcpy(CMSG_DATA(cmsg), obj, objSize);
msg->msg_control = cmsg;
msg->msg_controllen = cmsgSpace;
And here's the receiver:
msg = (struct msghdr *)pipe->msg;
iov = msg->msg_iov;
iov->iov_base = buf;
iov->iov_len = size;
// msg->msg_control was set earlier
msg->msg_controllen = 512;
return recvmsg(sockFd, msg, 0);
Any clues?
Thanks in advance
Are you actually using the cmsg stuff that you are receiving? I notice that you set msg_controllen to 512. What have you set msg_flags to?
Would you be able to try the same thing with the following addition?
msg = (struct msghdr *)pipe->msg;
iov = msg->msg_iov;              /* save the pointers that the memset below would wipe */
void *control = msg->msg_control;
memset(msg, 0, sizeof(*msg));    /* added this */
msg->msg_iov = iov;
msg->msg_iovlen = 1;
iov->iov_base = buf;
iov->iov_len = size;
msg->msg_control = control;      /* msg->msg_control was set earlier */
msg->msg_controllen = 512;
return recvmsg(sockFd, msg, 0);
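For reference, here is a minimal sketch of the usual fd-passing pattern on the sender side (illustrative names; the union guarantees the alignment the CMSG_* macros expect, and the fully zeroed msghdr avoids stray flags or lengths):
#include <string.h>
#include <sys/socket.h>

/* send one file descriptor over a connected Unix domain socket */
int send_fd(int sock, int fd)
{
    char dummy = 'x';
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0);
}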

Most efficient way to use libpcap on Linux

I have an application which runs on Linux (2.6.38.8), using libpcap (>1.0) to capture packets streamed at it over Ethernet. My application uses close to 100% CPU and I am unsure whether I am using libpcap as efficiently as possible.
I am battling to find any correlation between the pcap tunables and performance.
Here is my simplified code (error checking etc. omitted):
// init libpcap
pcap_t *p = pcap_create("eth0", my_errbuf);
pcap_set_snaplen(p, 65535);
pcap_set_promisc(p, 0);
pcap_set_timeout(p, 1000);
pcap_set_buffer_size(p, 16<<20); // 16MB
pcap_activate(p);
// filter
struct bpf_program filter;
pcap_compile(p, &filter, "ether dst 00:11:22:33:44:55", 0, 0);
pcap_setfilter(p, &filter);
// do work
while (1) {
    int ret = pcap_dispatch(p, -1, my_callback, (unsigned char *) my_args);
    if (ret <= 0) {
        if (ret == -1) {
            printf("pcap_dispatch error: %s\n", pcap_geterr(p));
        } else if (ret == -2) {
            printf("pcap_dispatch broken loop\n");
        } else if (ret == 0) {
            printf("pcap_dispatch zero packets read\n");
        } else {
            printf("pcap_dispatch returned unexpectedly\n");
        }
    } else if (ret > 1) {
        printf("processed %d packets\n", ret);
    }
}
The result when using a timeout of 1000 milliseconds and a buffer size of 2 MB, 4 MB or 16 MB is the same at high data rates (~200 1 kB packets/sec): pcap_dispatch consistently returns 2. According to the pcap_dispatch man page, I would expect pcap_dispatch to return either when the buffer is full or when the timeout expires. But with a return value of 2, neither condition should be met, as only 2 kB of data has been read and only 2/200 of a second has passed.
If I slow down the data rate (~100 1 kB packets/sec), pcap_dispatch returns between 2 and 7, so halving the data rate affects how many packets are processed per pcap_dispatch call. (I think the more packets per call the better, as it means less context switching between the OS and userspace - is this true?)
The timeout value does not seem to make a difference either.
In all cases, my CPU usage is close to 100%.
I am starting to wonder if I should be trying the PF_RING version of libpcap, but from what I've read on SO and libpcap mailing lists, libpcap > 1.0 does the zero copy stuff anyway, so maybe no point.
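One way to check whether the kernel buffer is actually being overrun would be pcap_stats(), which (as I understand it) reports how many packets the kernel saw versus dropped. A sketch, using the pcap_t *p from above:
struct pcap_stat st;
if (pcap_stats(p, &st) == 0)
    printf("received: %u, dropped by kernel: %u, dropped by driver: %u\n",
           st.ps_recv, st.ps_drop, st.ps_ifdrop);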
Any ideas, pointers greatly appreciated!
G

Linux serial port buffer not empty when opening device

I have a system where I am seeing unexpected behavior from the serial ports. I've previously seen this occasionally with USB-to-serial adapters, but now I'm seeing it on native serial ports as well, and with much greater frequency.
The system is set up to run automated tests and will first perform some tasks that cause the serial device to output a large amount of data while I do not have the ports open. The device will also reset itself. Only the TX/RX lines are connected; there is no flow control.
After these tasks complete, the testware opens the serial ports and immediately fails because it gets unexpected responses. While reproducing this, I found that if I open the serial port in a terminal program, several kilobytes of old data (apparently sent while the port was closed) are immediately flushed out. Once I close this program, I can run the tests as expected.
What could cause this to happen? How does Linux handle buffering the serial port when the device is closed? If I opened a device, made it send output, and then closed it without reading from it, would this cause the same problem?
The Linux terminal driver buffers input even when the device is not open. This can be a useful feature, especially if the speed/parity/etc. are set appropriately.
To replicate the behavior of lesser operating systems, read and discard all pending input from the port as soon as it is open:
...
int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY | O_SYNC);
if (fd < 0)
    exit(1);

set_blocking(fd, 0);  // disable blocking: reads return immediately when no input is ready

char buf[10000];
int n;
do {
    n = read(fd, buf, sizeof buf);  // drain anything already buffered
} while (n > 0);

set_blocking(fd, 1);  // re-enable read blocking (if desired)
...  // now there is no pending input

void set_blocking(int fd, int should_block)
{
    struct termios tty;
    memset(&tty, 0, sizeof tty);
    if (tcgetattr(fd, &tty) != 0)
    {
        fprintf(stderr, "error %d getting term settings in set_blocking\n", errno);
        return;
    }

    tty.c_cc[VMIN]  = should_block ? 1 : 0;
    tty.c_cc[VTIME] = should_block ? 5 : 0;  // 0.5 second read timeout

    if (tcsetattr(fd, TCSANOW, &tty) != 0)
        fprintf(stderr, "error %d setting term %sblocking\n", errno, should_block ? "" : "non");
}
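If the goal is only to throw the stale bytes away rather than read them, a simpler alternative (a sketch, assuming the fd from above) is tcflush(), which discards data received but not yet read:
#include <termios.h>

/* discard anything the kernel buffered before (or while) the port was opened */
tcflush(fd, TCIFLUSH);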

Transferring an Image using TCP Sockets in Linux

I am trying to transfer an image using TCP sockets on Linux. I have used the code many times to transfer small amounts of data, but as soon as I tried to transfer the image, only the first third arrived. Is it possible that there is a maximum buffer size for TCP sockets in Linux? If so, how can I increase it? Is there a function that does this programmatically?
I would guess that the problem is on the receiving side, when you read from the socket. TCP is a stream-based protocol with no notion of packets or message boundaries.
This means that when you do a read, you may get fewer bytes than you requested. If your image is 128k, for example, you may only get 24k on your first read, requiring you to read again to get the rest of the data. The fact that it's an image is irrelevant. Data is data.
For example:
int read_image(int sock, int size, unsigned char *buf) {
    int bytes_read = 0, len = 0;
    // keep reading until the whole image has arrived or the connection fails
    while (bytes_read < size &&
           ((len = recv(sock, buf + bytes_read, size - bytes_read, 0)) > 0)) {
        bytes_read += len;
    }
    if (len <= 0) doerror();  // 0 = peer closed the connection, <0 = error
    return bytes_read;
}
TCP sends the data in pieces, so you're not guaranteed to get it all at once with a single read (although it is guaranteed to arrive in the order you sent it). You basically have to read multiple times until you have all the data. The receiver also doesn't know how much data you sent. Normally, you send a fixed-size "length" field first (always 8 bytes, for example) so the receiver knows how much data to expect. Then you keep reading and filling a buffer until you have that many bytes.
So the sender would look something like this (pseudocode)
int imageLength;
char *imageData;
// set imageLength and imageData
send(&imageLength, sizeof(int));
send(imageData, imageLength);
And the receiver would look like this (pseudocode)
int imageLength;
char *imageData;
guaranteed_read(&imageLength, sizeof(int));
imageData = new char[imageLength];
guaranteed_read(imageData, imageLength);
void guaranteed_read(char* destBuf, int length)
{
    int totalRead = 0, numRead;
    while (totalRead < length)
    {
        int remaining = length - totalRead;
        numRead = read(&destBuf[totalRead], remaining);
        if (numRead > 0)
        {
            totalRead += numRead;
        }
        else
        {
            // error reading from socket
        }
    }
}
Obviously I left off the actual socket descriptor and you need to add a lot of error checking to all of that. It wasn't meant to be complete, more to show the idea.
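One more detail worth noting: if the sender and receiver machines can differ in endianness, the length prefix should cross the wire in network byte order. A sketch (hypothetical names, same protocol as above):
#include <arpa/inet.h>
#include <stdint.h>

/* sender: convert the length to network byte order before sending */
uint32_t wireLength = htonl((uint32_t)imageLength);
send(sock, &wireLength, sizeof(wireLength), 0);

/* receiver: convert back to host byte order after reading */
uint32_t wireLength;
guaranteed_read((char *)&wireLength, sizeof(wireLength));
int imageLength = (int)ntohl(wireLength);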
The maximum size of a single IP packet is 65,535 bytes, which is extremely close to the number you are hitting. I doubt that is a coincidence.
