Most efficient way to use libpcap on Linux

I have an application which runs on Linux (2.6.38.8), using libpcap (>1.0) to capture packets streamed at it over Ethernet. My application uses close to 100% CPU and I am unsure whether I am using libpcap as efficiently as possible.
I am battling to find any correlation between the pcap tunables and performance.
Here is my simplified code (error checking etc. omitted):
// init libpcap
pcap_t *p = pcap_create("eth0", my_errbuf);
pcap_set_snaplen(p, 65535);
pcap_set_promisc(p, 0);
pcap_set_timeout(p, 1000);
pcap_set_buffer_size(p, 16<<20); // 16MB
pcap_activate(p);

// filter
struct bpf_program filter;
pcap_compile(p, &filter, "ether dst 00:11:22:33:44:55", 0, 0);
pcap_setfilter(p, &filter);

// do work
while (1) {
    int ret = pcap_dispatch(p, -1, my_callback, (unsigned char *) my_args);
    if (ret <= 0) {
        if (ret == -1) {
            printf("pcap_dispatch error: %s\n", pcap_geterr(p));
        } else if (ret == -2) {
            printf("pcap_dispatch broken loop\n");
        } else if (ret == 0) {
            printf("pcap_dispatch zero packets read\n");
        } else {
            printf("pcap_dispatch returned unexpectedly\n");
        }
    } else if (ret > 1) {
        printf("processed %d packets\n", ret);
    }
}
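(For reference, my_callback has the standard pcap_handler signature that pcap_dispatch expects. A minimal self-contained sketch is below; the body is purely illustrative, my real handler casts the user pointer back to my_args and does the per-packet processing.)

#include <pcap/pcap.h>
#include <stdio.h>

/* Sketch of a callback with the pcap_handler signature used by pcap_dispatch().
 * The printf body is only illustrative; the real handler does the per-packet work. */
static void my_callback(u_char *user, const struct pcap_pkthdr *hdr,
                        const u_char *bytes)
{
    (void) user;   /* would be cast back to the type of my_args */
    (void) bytes;  /* hdr->caplen bytes of packet data start here */
    printf("packet: caplen=%u len=%u\n", hdr->caplen, hdr->len);
}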
The result when using a timeout of 1000 milliseconds and buffer sizes of 2M, 4M and 16M is the same at high data rates (~200 1kB packets/sec): pcap_dispatch consistently returns 2. According to the pcap_dispatch man page, I would expect pcap_dispatch to return either when the buffer is full or when the timeout expires. But with a return value of 2, neither of these conditions should be met, as only 2kB of data has been read and only 2/200 of a second has passed.
If I slow down the data rate (~100 1kB packets/sec), pcap_dispatch returns between 2 and 7, so halving the data rate affects how many packets are processed per pcap_dispatch. (I think the more packets the better, as this means less context switching between the OS and userspace - is this true?)
The timeout value does not seem to make a difference either.
In all cases, my CPU usage is close to 100%.
I am starting to wonder if I should be trying the PF_RING version of libpcap, but from what I've read on SO and libpcap mailing lists, libpcap > 1.0 does the zero copy stuff anyway, so maybe no point.
Any ideas, pointers greatly appreciated!
G

Related

What is the default behavior of perf record?

It's clear to me that perf always records one or more events, and the sampling can be counter-based or time-based. But when the -e and -F switches are not given, what is the default behavior of perf record? The manpage for perf-record doesn't tell you what it does in this case.
The default event is cycles, as can be seen by running perf script after perf record. There, you can also see that the default sampling behavior is time-based, since the number of cycles is not constant. The default frequency is 4000 Hz, which can be seen in the source code and checked by comparing the file size or number of samples to a recording where -F 4000 was specified.
The perf wiki says that the rate is 1000 Hz, but this is not true anymore for kernels newer than 3.4.
Default event selection in perf record is done in the user-space perf tool, which is usually distributed as part of the Linux kernel sources. With make perf-src-tar-gz from the kernel source dir we can make a tar.gz for a quick rebuild, or download such a tar from https://mirrors.edge.kernel.org/pub/linux/kernel/tools/perf. There are also several online "LXR" cross-reference viewers for the Linux kernel source which can be used just like grep to learn about perf internals.
The function that selects the default event list (evlist) for perf record is __perf_evlist__add_default in tools/perf/util/evlist.c:
int __perf_evlist__add_default(struct evlist *evlist, bool precise)
{
    struct evsel *evsel = perf_evsel__new_cycles(precise);

    evlist__add(evlist, evsel);
    return 0;
}
It is called from the perf record implementation when zero events were parsed from the options (tools/perf/builtin-record.c: int cmd_record()):
rec->evlist->core.nr_entries == 0 &&
__perf_evlist__add_default(rec->evlist, !record.opts.no_samples)
perf_evsel__new_cycles asks for the hardware cycles event (PERF_TYPE_HARDWARE + PERF_COUNT_HW_CPU_CYCLES) with optional kernel sampling and maximum precision (check the modifiers in man perf-list; precise sampling works around EIP skid using PEBS or IBS):
struct evsel *perf_evsel__new_cycles(bool precise)
{
    struct perf_event_attr attr = {
        .type           = PERF_TYPE_HARDWARE,
        .config         = PERF_COUNT_HW_CPU_CYCLES,
        .exclude_kernel = !perf_event_can_profile_kernel(),
    };
    struct evsel *evsel;

    /*
     * Now let the usual logic to set up the perf_event_attr defaults
     * to kick in when we return and before perf_evsel__open() is called.
     */
    evsel = evsel__new(&attr);
    evsel->precise_max = true;

    /* use asprintf() because free(evsel) assumes name is allocated */
    if (asprintf(&evsel->name, "cycles%s%s%.*s",
                 (attr.precise_ip || attr.exclude_kernel) ? ":" : "",
                 attr.exclude_kernel ? "u" : "",
                 attr.precise_ip ? attr.precise_ip + 1 : 0, "ppp") < 0)
        return evsel;
}
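For comparison, here is what asking the kernel directly for that default looks like: a minimal standalone sketch using the raw perf_event_open syscall with the same event (hardware cycles) and the 4000 Hz frequency default mentioned above. This is an illustration, not the perf tool's actual code; perf additionally sets sample_type, precise_ip and the mmap ring buffer, all omitted here.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;           /* as in perf_evsel__new_cycles   */
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.freq = 1;                            /* frequency-based sampling ...   */
    attr.sample_freq = 4000;                  /* ... at the 4000 Hz default     */
    attr.exclude_kernel = 1;                  /* like !perf_event_can_profile_kernel() */
    attr.disabled = 1;                        /* created disabled, enabled later */

    /* perf_event_open has no glibc wrapper; pid=0: this process, cpu=-1: any CPU */
    long fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");            /* e.g. no PMU in a VM */
        return 1;
    }
    printf("cycles sampling event opened, fd = %ld\n", fd);
    close((int) fd);
    return 0;
}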
In case of a failed perf_event_open (no access to hardware cycles sampling, for example in a virtualized environment without a virtualized PMU), there is a fallback to software cpu-clock sampling in tools/perf/builtin-record.c: int record__open(), which calls perf_evsel__fallback() of tools/perf/util/evsel.c:
bool perf_evsel__fallback(struct evsel *evsel, int err,
                          char *msg, size_t msgsize)
{
    if ((err == ENOENT || err == ENXIO || err == ENODEV) &&
        evsel->core.attr.type == PERF_TYPE_HARDWARE &&
        evsel->core.attr.config == PERF_COUNT_HW_CPU_CYCLES) {
        /*
         * If it's cycles then fall back to hrtimer based
         * cpu-clock-tick sw counter, which is always available even if
         * no PMU support.
         */
        scnprintf(msg, msgsize, "%s",
                  "The cycles event is not supported, trying to fall back to cpu-clock-ticks");

        evsel->core.attr.type = PERF_TYPE_SOFTWARE;
        evsel->core.attr.config = PERF_COUNT_SW_CPU_CLOCK;
        return true;
    } ...
}

Delay in receiving SocketCAN messages

I am implementing a Linux application which receives CAN messages and calculates their period (using SocketCAN on a Raspberry Pi 4). The problem is that sometimes (about 0.5% of the time) SocketCAN receives messages with a delay. When I send 10ms messages at a baud rate of 500 kbit/s from my laptop (using a Vector tool), I normally get a reasonable period (9ms ~ 11ms) on the Raspberry Pi. But sometimes a message comes after 15ms ~ 16ms (and then the next message comes after 4ms ~ 5ms). Even if I send only one message, the same phenomenon occurs, so bus load cannot be the reason. How can I resolve this issue?
Here is my source code:
wiringPiSetupSys();

if ((s = socket(PF_CAN, SOCK_RAW, CAN_RAW)) < 0)
{
    perror("Socket");
    return 1;
}

strcpy(ifr.ifr_name, "can0");
ioctl(s, SIOCGIFINDEX, &ifr);

memset(&addr, 0, sizeof(addr));
addr.can_family = AF_CAN;
addr.can_ifindex = ifr.ifr_ifindex;

if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
{
    perror("Bind");
    return 1;
}

while (1)
{
    nbytes = read(s, &frame, sizeof(struct can_frame));
    period = micros() - last_timer;
    last_timer = micros();
}
I think that for the correct frame reception time you need to get the frame timestamp, not the current system time. You can get the exact timestamp with an ioctl call after reading the message from the socket:
struct timeval tv;
ioctl(s, SIOCGSTAMP, &tv);
Your CAN messages are received into the SocketCAN buffer, and they are not processed immediately because Linux is a multitasking operating system; SocketCAN is just waiting for its time slice to process the buffer and distribute the messages to all CAN applications. While you cannot avoid this delay (which depends on the current system load and the number of processes), you can ask SocketCAN to deliver timestamps (as @fantasista has answered) so you can determine the arrival time of each CAN message.
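Putting the two answers together, a sketch of the receive loop measuring the period from the kernel receive timestamp instead of micros(); s is the bound CAN_RAW socket from the question, and error handling is kept minimal:

#include <linux/can.h>
#include <linux/sockios.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <unistd.h>

void receive_loop(int s)   /* s: bound CAN_RAW socket from the question */
{
    struct can_frame frame;
    struct timeval tv, last = { 0, 0 };

    while (1)
    {
        if (read(s, &frame, sizeof(frame)) <= 0)
            break;
        ioctl(s, SIOCGSTAMP, &tv);   /* time the frame was received by the kernel */
        if (last.tv_sec != 0 || last.tv_usec != 0)
        {
            long period_us = (tv.tv_sec - last.tv_sec) * 1000000L
                             + (tv.tv_usec - last.tv_usec);
            printf("period = %ld us\n", period_us);
        }
        last = tv;
    }
}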

ttyACM0 only reads 64 bytes

I'm a bit of a newbie, but I have a legacy app that reads 64 bytes of AES-encrypted data from a device using ttyACM0. I now need to read 128 bytes. It sounded simple: increase the sizes of the buffers, etc. But no matter what I try, I can still only read 64 bytes. After that it just hangs. I verified the communication on Windows with a terminal and the cdc-acm driver. The device does not use flow control. I can't upload the code because it's proprietary, but below are some snippets:
The initialization:
CACS_RefID::Initialise()
{
    int iRet = 1;
    struct termios dev_settings;

    if((m_fdRefdev = open("/dev/ttyACM0", O_RDWR)) < 0)
    {
        g_dbg->debug("CACS_RefID::Failed to open device\n");
        return 0;
    }
    g_dbg->debug("CACS_RefID::Initialse completed\n");

    // Configure the port
    tcgetattr(m_fdRefdev, &dev_settings);
    cfmakeraw(&dev_settings);
    //*tcflush
    //tcflush(m_fdRefdev, TCIOFLUSH);
    tcsetattr(m_fdRefdev, TCSANOW, &dev_settings);

    return iRet;
}
The implementation:
int CACS_RefID::Readport_Refid(int ilen, char* buf)
{
    int ierr=0, iret = 0, ictr=0;
    fd_set fdrefid;
    struct timeval porttime_refrd;

    FD_ZERO(&fdrefid);
    FD_SET(m_fdRefdev, &fdrefid);
    porttime_refrd.tv_sec = 1;
    porttime_refrd.tv_usec = 0; //10 Seconds wait time for read port

    do
    {
        iret = select(m_fdRefdev + 1, &fdrefid, NULL, NULL, &porttime_refrd);
        switch(iret)
        {
        case READ_TIMEOUT:
            g_dbg->debug("Refid portread: Select timeout:readlen=%d \n", ilen);
            ierr = -1;
            break;
        case READ_ERROR:
            g_dbg->debug("Refid portread: Select error:readlen=%d \n", ilen);
            ierr = -1;
            break;
        default:
            iret = read(m_fdRefdev, buf, ilen);
            g_dbg->debug("Refid portread: Read len(%d):%d\n", ilen, iret);
            break;
        }
    } while((ierr == 0) && (iret < ilen));

    //Flush terminal content at Input and Output after every read completion
    // tcflush(m_fdRefdev, TCIOFLUSH);
    return ierr;
}
If I call the initialization every time before running the implementation, I get 128 bytes, but the data is corrupt after the first 64 bytes. Even before working on it, I get a lot of READ_ERRORs. It looks like the original author expected the device to block with select(), but it doesn't.
Is there some type of limitation on the ttyACM0 buffer size in the system? Does the baud rate matter with the ttyACM driver? Does read() stop reading after all currently available bytes are read (thinking the first 64 are available, then empty, then more data)?
Poring through man pages, but I'm stymied. ANY help would be greatly appreciated.
Here's my latest:
int CACS_RefID::Get_GasTest_Result(int ilen)
{
    int ierr=0, iret = 0, ictr=0, iread=0;
    fd_set fdrefid;
    struct timeval porttime_refrd;

    porttime_refrd.tv_sec = 5;
    porttime_refrd.tv_usec = 0; //10 Seconds wait time for read port

    if (Get_GasTest_FirstPass == 0)
    {
        g_dbg->debug("GasTest_Result_firstPass\n");
        memset(strresult, 0, sizeof(strresult)); //SLY clear out result buffer
        iread = 0;
        Get_GasTest_FirstPass = 1;
    }

    do
    {
        iread = strlen(strresult);
        FD_ZERO(&fdrefid);
        FD_SET(m_fdRefdev, &fdrefid);

        iret = select(m_fdRefdev + 1, &fdrefid, NULL, NULL, &porttime_refrd);
        switch(iret)
        {
        case READ_TIMEOUT: //0
            g_dbg->debug("Get_GasTest_Result: Select timeout\n");
            ierr = -1;
            break;
        case READ_ERROR: //-1
            g_dbg->debug("Get_GasTest_Result: Select error=%d %s \n", errno, strerror(errno));
            ierr = -1;
            break;
        }

        iret = read(m_fdRefdev, (&strresult[0] + iread), (ilen - iread));
        g_dbg->debug("Get_GasTest_Result: ilen=%d,iret=%d,iread=%d \n", ilen, iret, iread);
    } while((ierr == 0) && (iread < ilen));

    return ierr;
}
Note: I am now reading data regardless of select errors and STILL only getting 64 bytes. I've contacted my device manufacturer; there must be something odd going on.
Here is one possible problem with your code; this may not be the one that is causing you to only get 64 bytes but it could explain what you are seeing. Assume that you invoke the function Readport_Refid() with a buffer of 128 bytes. In other words, your invocation was something like:
char buffer[128];
Readport_Refid(128, buffer);
Assume for whatever reason that the first call to select() gets you a return value of 1 (since one bit is set). Your code is only setting one bit so you go off and you read()
iret = read(m_fdRefdev, buf, ilen);
g_dbg->debug("Refid portread: Read len(%d):%d\n",ilen,iret);
break;
iret returns 64 (which means 64 bytes are read) and your program prints a nice message and since ierr is still 0 and iret (64) is less than ilen (128) you go round again and call select().
Assume that you get more data and select() returns 1 again. Then you will go read again on the same buffer with the same ilen and overwrite the first 64 bytes that were read.
At the very least, you should do the following. I have only shown below the changed lines. First add an iread variable and make sure you use it to preserve data that you've already read. Then use iread to determine whether you've read enough or not.
int CACS_RefID::Readport_Refid(int ilen, char* buf)
{
    int ierr=0, iret = 0, ictr=0, iread = 0;

    [...]

        default:
            iret = read(m_fdRefdev, buf + iread, ilen - iread);
            if (iret > 0)
                iread += iret;
            g_dbg->debug("Refid portread: Read len(%d):%d\n", ilen, iret);
            break;
        }
    } while((ierr == 0) && (iread < ilen));

    [...]
**** EDITED 2013-08-19 ****
I want to reiterate a comment made by @wildplasser:
"You should really also be setting FD_SET on each trip around the loop." Great catch.
With respect to your new code, does it work or do you still have a problem?
**** EDITED again 2013-08-19 ****
Getting EINTR is nothing to be worried about. You should just plan on resetting FD_SET and trying again.
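A sketch of that loop with the set rebuilt on every pass and EINTR simply retried (names loosely follow the question's code; this is not the full member function, just the read loop):

#include <errno.h>
#include <sys/select.h>
#include <unistd.h>

/* fd is the open tty descriptor (m_fdRefdev in the question). Returns the
 * number of bytes read, or -1 on timeout or error. */
static int read_exact(int fd, char *buf, int ilen)
{
    int iread = 0;

    while (iread < ilen)
    {
        fd_set fdrefid;
        struct timeval tv = { 1, 0 };        /* timeout must be re-armed too      */

        FD_ZERO(&fdrefid);                   /* reset the set on EVERY pass       */
        FD_SET(fd, &fdrefid);

        int iret = select(fd + 1, &fdrefid, NULL, NULL, &tv);
        if (iret < 0)
        {
            if (errno == EINTR)              /* interrupted: just go around again */
                continue;
            return -1;                       /* real error                        */
        }
        if (iret == 0)
            return -1;                       /* timeout                           */

        int n = read(fd, buf + iread, ilen - iread);
        if (n <= 0)
            return -1;
        iread += n;                          /* keep the bytes already read       */
    }
    return iread;
}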
I can't say I know why, but the fix was to call the initialization code at the beginning of the implementation even though it had already been called previously. If I call it again, I can read 128 bytes. If I don't, I can only read up to 64 bytes.

Use SATA HDD as Block Device

I'm totally new to the Linux Kernel, so I probably mix things up. But any advice will help me ;)
I have a SATA HDD connected via a PCIe SATA card, and I am trying to use read and write on it like on a block device. I also want the data to survive a power blackout on the HDD - not cached. And in the end I have to analyse how much time I lose in each Linux stack layer. But one step at a time.
At the moment I try to open the device with O_DIRECT. But I don't really understand where I can find the device. It shows up as /dev/sdd and I created one partition, /dev/sdd1.
open and read on the partition /dev/sdd1 work; write fails with O_DIRECT (but I'm sure I have the right block size).
open, read and write called on /dev/sdd fail completely.
Is there maybe another file in /dev/ which represents my device on the block layer?
What are my mistakes and wrong assumptions?
This is my current test code:
int main() {
    int w, r, s;
    char buffer[512] = "test string with 512 byte";

    printf("test\n");

    // OPEN
    int fd = open("/dev/sdd", O_DIRECT | O_RDWR | O_SYNC);
    printf("fd = %d\n", fd);

    // WRITE
    printf("try to write %d byte : %s\n", sizeof(buffer), buffer);
    w = write(fd, buffer, sizeof(buffer));
    if(w == -1) printf("write failed\n");
    else printf("write ok\n");

    // RESET BUFFER
    memset(buffer, 0, sizeof(buffer));

    // SEEK
    s = lseek(fd, 0, SEEK_SET);
    if(s == -1) printf("seek failed\n");
    else printf("seek ok\n");

    // READ
    r = read(fd, buffer, sizeof(buffer));
    if(r == -1) printf("read failed\n");
    else printf("read ok\n");

    // PRINT BUFFER
    printf("buffer = %s\n", buffer);

    return 0;
}
Edit:
I work with the 3.2 kernel on a Power architecture, if this is important.
Thank you very much for your time,
Fabian
Depending on your disk's block size (it could be 512 bytes or 4K), you can only read/write multiples of that size.
Also: when using the O_DIRECT flag, you need to make sure the buffer is properly aligned to block boundaries. You can't ensure that with an ordinary char array; use memalign (or posix_memalign) to allocate aligned memory instead.
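To make that concrete, here is a hedged variant of the question's test program with the buffer allocated by posix_memalign and the transfer size rounded up to one full block; the 4096-byte block size is an assumption, so check the device's real logical block size first:

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ 4096           /* assumed; must be a multiple of the device's logical block size */

int main(void)
{
    void *buf;

    /* O_DIRECT needs the user buffer aligned to the block size; a plain
     * char array on the stack gives no such guarantee. */
    if (posix_memalign(&buf, BLKSZ, BLKSZ) != 0)
        return 1;
    memset(buf, 0, BLKSZ);
    strcpy((char *) buf, "test string in one full 4096-byte block");

    int fd = open("/dev/sdd", O_DIRECT | O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* the length passed to write/read must also be a whole number of blocks */
    if (write(fd, buf, BLKSZ) != BLKSZ) perror("write");
    if (lseek(fd, 0, SEEK_SET) == -1)   perror("lseek");
    if (read(fd, buf, BLKSZ) != BLKSZ)  perror("read");

    printf("buffer = %s\n", (char *) buf);
    close(fd);
    free(buf);
    return 0;
}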

Transferring an Image using TCP Sockets in Linux

I am trying to transfer an image using TCP sockets under Linux. I have used this code many times to transfer small amounts of data, but as soon as I tried to transfer the image it only transferred the first third. Is it possible that there is a maximum buffer size for TCP sockets in Linux? If so, how can I increase it? Is there a function that does this programmatically?
I would guess that the problem is on the receiving side when you read from the socket. TCP is a stream-based protocol with no notion of packets or message boundaries.
This means when you do a read you may get fewer bytes than you requested. If your image is 128k, for example, you may only get 24k on your first read, requiring you to read again to get the rest of the data. The fact that it's an image is irrelevant. Data is data.
For example:
int read_image(int sock, int size, unsigned char *buf) {
    int bytes_read = 0, len = 0;
    while (bytes_read < size &&
           ((len = recv(sock, buf + bytes_read, size - bytes_read, 0)) > 0)) {
        bytes_read += len;
    }
    if (len == 0 || len < 0) doerror();
    return bytes_read;
}
TCP sends the data in pieces, so you're not guaranteed to get it all at once with a single read (although it is guaranteed to arrive in the order you sent it). You basically have to read multiple times until you get all the data. The receiver also doesn't know how much data you sent. Normally, you send a fixed-size "length" field first (always 8 bytes, for example) so the receiver knows how much data there is. Then you keep reading and building up a buffer until you have that many bytes.
So the sender would look something like this (pseudocode):
int imageLength;
char *imageData;
// set imageLength and imageData
send(&imageLength, sizeof(int));
send(imageData, imageLength);
And the receiver would look like this (pseudocode):
int imageLength;
char *imageData;

guaranteed_read(&imageLength, sizeof(int));
imageData = new char[imageLength];
guaranteed_read(imageData, imageLength);

void guaranteed_read(char* destBuf, int length)
{
    int totalRead = 0, numRead;
    while(totalRead < length)
    {
        int remaining = length - totalRead;
        numRead = read(&destBuf[totalRead], remaining);
        if(numRead > 0)
        {
            totalRead += numRead;
        }
        else
        {
            // error reading from socket
        }
    }
}
Obviously I left off the actual socket descriptor and you need to add a lot of error checking to all of that. It wasn't meant to be complete, more to show the idea.
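The same applies on the sending side, since send() may also transmit fewer bytes than asked for; a small C helper along the same lines (a sketch, with error handling reduced to the essentials):

#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling send() until 'length' bytes have gone out.
 * Returns 0 on success, -1 on error (or if the peer closed the connection). */
static int send_all(int sock, const char *buf, size_t length)
{
    size_t total = 0;

    while (total < length)
    {
        ssize_t n = send(sock, buf + total, length - total, 0);
        if (n <= 0)
            return -1;
        total += (size_t) n;
    }
    return 0;
}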
The maximum size for a single IP packet is 65535 bytes, which is extremely close to the number you are hitting. I doubt that is a coincidence.
