How to set/get socket RTT in linux socket programming? - linux

I need to get or set the RTT on a socket created with socket(AF_INET, SOCK_RAW, IPPROTO_TCP).
What do I need to do next to control the RTT in socket programming? In other words, where do I find this RTT parameter?

On Linux you can get the RTT of a connected TCP socket by calling getsockopt() with TCP_INFO:
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
/* ... */
struct tcp_info info;
socklen_t tcp_info_length = sizeof info;
int ret = getsockopt(sock, SOL_TCP, TCP_INFO, &info, &tcp_info_length);
if (ret == 0)
    printf("rtt: %u microseconds\n", info.tcpi_rtt);

To measure the Round Trip Time (RTT), write a simple client-server application where one node:
Reads the current time with clock_gettime()
Sends a message to the other node using write() on the (already opened) socket
Waits for the message to come back using read()
Reads the current time again using clock_gettime()
The RTT is the difference between the two times.
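A minimal sketch of the measuring side, assuming sock is an already-connected socket whose peer echoes back whatever it receives (the echo peer itself is not shown):
/* Measure RTT against an echoing peer; returns microseconds. */
#include <time.h>
#include <unistd.h>

double measure_rtt(int sock)
{
    struct timespec t0, t1;
    char msg = 'x', echo;

    clock_gettime(CLOCK_MONOTONIC, &t0);   /* timestamp before sending */
    write(sock, &msg, 1);                  /* send one byte */
    read(sock, &echo, 1);                  /* wait for it to come back */
    clock_gettime(CLOCK_MONOTONIC, &t1);   /* timestamp after the reply */

    return (t1.tv_sec - t0.tv_sec) * 1e6 +
           (t1.tv_nsec - t0.tv_nsec) / 1e3;   /* difference in microseconds */
}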

Related

How to test tx_timeout operation of a network kernel module?

I have some doubts about how to test the tx_timeout operation of a network kernel module.
For example, let's take the snull example from chapter 14 of the Linux Device Drivers book.
void snull_tx_timeout(struct net_device *dev)
{
    struct snull_priv *priv = (struct snull_priv *) dev->priv;

    PDEBUG("Transmit timeout at %ld, latency %ld\n", jiffies,
           jiffies - dev->trans_start);
    priv->status = SNULL_TX_INTR;
    snull_interrupt(0, dev, NULL);
    priv->stats.tx_errors++;
    netif_wake_queue(dev);
    return;
}
And its initialization:
#ifdef HAVE_TX_TIMEOUT
dev->tx_timeout = snull_tx_timeout;
dev->watchdog_timeo = timeout;
#endif
How can I force a timeout to test the implementation of snull_tx_timeout() ?
I would be glad for any suggestion.
Thanks!
This email from David Miller answers this question. I tested it with another network device driver and it worked very well.
The way to test tx_timeout is simple: stop passing the packets that are queued in the driver's buffer (or queue) on to the hardware. The packets accumulate until the buffer or queue fills; once the next packet can no longer be stored and sent, a timeout is raised according to the watchdog_timeo time.
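As a concrete, hypothetical (untested) sketch against the 2.6-era netdev API that snull uses: replace the driver's hard_start_xmit with a version that claims the hardware is busy and never wakes the queue, so the watchdog calls snull_tx_timeout() after watchdog_timeo:
/* Pretend the hardware is stuck so the watchdog fires. */
static int snull_tx_stuck(struct sk_buff *skb, struct net_device *dev)
{
    dev->trans_start = jiffies;  /* mark when this "transmission" started */
    netif_stop_queue(dev);       /* tell the stack the hardware is busy... */
    dev_kfree_skb(skb);          /* ...and drop the packet without sending */
    return 0;                    /* queue never wakes; timeout follows */
}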

bluez with simultaneous classic and low energy devices

Is it possible with bluez under Linux to connect to multiple classic and low energy devices at the same time? The bluez site isn't very helpful at providing information like this.
Yes, I've managed to connect to 7 low energy devices at the same time. The maximum varies depending on the hardware you're using. You can also connect to multiple classic devices as well.
Here's some pseudo/snippet of C I used for connecting via L2CAP:
#include <sys/types.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/l2cap.h>
char *bdaddr;                           /* target device address string */
int cid = 0;
int psm = 0;
int bdaddr_type = BDADDR_LE_PUBLIC;     /* LE public address */
int err;
struct sockaddr_l2 addr;

int sock_fd = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);

memset(&addr, 0, sizeof(addr));
addr.l2_family = AF_BLUETOOTH;
str2ba(bdaddr, &addr.l2_bdaddr);        /* parse "XX:XX:..." into a bdaddr_t */
if (cid)
    addr.l2_cid = htobs(cid);           /* connect on a fixed channel... */
else
    addr.l2_psm = htobs(psm);           /* ...or on a PSM */
addr.l2_bdaddr_type = bdaddr_type;
err = connect(sock_fd, (struct sockaddr *) &addr, sizeof(addr));
My code is a mix of C and Python so I tried to restructure it so it's just the C parts. Everything was taken from reading the Bluez source code, specifically the gatttool.
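Once connect() succeeds, the socket can be read and written like any other SOCK_SEQPACKET socket; a hypothetical follow-up (not part of the original snippet; needs <unistd.h>):
/* Read one incoming PDU and, for example, echo it back. */
unsigned char buf[64];
ssize_t n = read(sock_fd, buf, sizeof(buf));   /* blocks until a packet arrives */
if (n > 0)
    write(sock_fd, buf, n);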
UPDATE:
There's a bug in the Linux kernel's bluez code, in versions 3.4 and earlier, when dealing with L2CAP sockets. Essentially, if you have more than one connection, it mixes them up so that all your data arrives on the last connection made. So on kernels 3.4 and earlier, the code I gave only works if you make a single L2CAP connection.

Receiving multiple datagrams in a single system call

On Linux, unless I'm mistaken, an application can use the socket call family to send or receive only one packet at a time on datagram transports.
I'd like to know whether Linux provides a means for an application to send and receive multiple packets in a single call on datagram transports.
Use recvmmsg to receive multiple datagram packets (for example, UDP):
int recvmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen,
unsigned int flags, struct timespec *timeout);
DESCRIPTION
The recvmmsg() system call is an extension of recvmsg(2) that allows
the caller to receive multiple messages from a socket using a single
system call. ...
http://man7.org/linux/man-pages/man2/recvmmsg.2.html
Use sendmmsg to send...
int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen,
unsigned int flags);
DESCRIPTION
The sendmmsg() system call is an extension of sendmsg(2) that allows
the caller to transmit multiple messages on a socket using a single
system call.
http://man7.org/linux/man-pages/man2/sendmmsg.2.html
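A minimal sketch of batch reception with recvmmsg(), assuming a UDP socket bound to an arbitrary port (the port 9999 here is hypothetical):
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

#define VLEN 8        /* receive up to 8 datagrams per call */
#define BUFSIZE 1500

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);                 /* hypothetical port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, (struct sockaddr *) &addr, sizeof(addr));

    char bufs[VLEN][BUFSIZE];
    struct iovec iovecs[VLEN];
    struct mmsghdr msgs[VLEN];
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < VLEN; i++) {
        iovecs[i].iov_base = bufs[i];            /* one buffer per message */
        iovecs[i].iov_len = BUFSIZE;
        msgs[i].msg_hdr.msg_iov = &iovecs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    int n = recvmmsg(sock, msgs, VLEN, 0, NULL); /* one syscall, n datagrams */
    for (int i = 0; i < n; i++)
        printf("datagram %d: %u bytes\n", i, msgs[i].msg_len);
    return 0;
}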
Before recvmmsg() and sendmmsg() were added, there was no such call on Linux. However, depending on what you need, there are alternatives:
Both sendmsg() and recvmsg() can do "scatter/gather": send or receive a single packet from multiple buffers (see the sketch after this list).
"pktgen" ( http://www.linuxfoundation.org/collaborate/workgroups/networking/pktgen ) is a kernel module that can transmit millions of packets with just a handful of file writes, for testing purposes.

High CPU usage - simple packet receiver on Linux

I'm writing a simple application under Linux that gathers all packets from the network. I'm using blocking reception by calling recvfrom(). When I generate a big network load with hping3 (~100k raw frames per second, 130 bytes each), the "top" tool shows high CPU usage for my process: about 37-38%. That is a big value for me. When I decrease the number of packets, the usage is lower; for example, top shows 3% for 4k frames per second.
I've checked DC++ while it downloads at ~10MB/s, and its process uses 5% of CPU, not 38%. Is there any programmatic way in C to reduce CPU usage and still receive a lot of frames?
My CPU:
Intel i5-2400 CPU @ 3.10GHz
My system:
Ubuntu 11.04 kernel 3.6.6 with PREEMPT-RT patch
And here is my code:
#include <stdlib.h>
#include <stdio.h>
#include <sys/mman.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <linux/if_arp.h>
#include <arpa/inet.h>

/* Socket descriptor. */
int mainSocket;
/* Buffer for frame. */
unsigned char* buffer;

int main(int argc, char* argv[])
{
    /** Create socket. **/
    mainSocket = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (mainSocket == -1) {
        printf("Error: cannot create socket!\n");
        return 1;
    }

    /** Create buffer for frame **/
    buffer = malloc(ETH_FRAME_LEN);

    printf("Listening...");
    while (1) {
        // Length of received packet
        int length = recvfrom(mainSocket, buffer, ETH_FRAME_LEN, 0, NULL, NULL);
        if (length > 0)
        {
            // ... do something ...
        }
    }
}
I don't know if this will help, but looking on Google I see that:
Raw socket, Packet socket and Zero copy networking in Linux, as well as http://lxr.linux.no/linux+v2.6.36/Documentation/networking/packet_mmap.txt, talk about using PACKET_MMAP and mmap() to improve the performance of raw sockets (see the sketch after this list).
The Overview of Packet Reception suggests setting your process's affinity to match the CPU to which you bind the NIC using RPS.
Does DC++ do a promiscuous receive? I wouldn't have guessed so. So instead of comparing your performance to DC++, perhaps you should compare your performance to the performance of a utility like libpcap.
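A rough, hypothetical sketch of the PACKET_RX_RING setup described in packet_mmap.txt; the ring geometry is arbitrary and error handling is omitted:
#include <stdio.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    /* Arbitrary ring geometry: 64 one-page blocks, 2 frames per block. */
    struct tpacket_req req;
    req.tp_block_size = 4096;
    req.tp_block_nr = 64;
    req.tp_frame_size = 2048;
    req.tp_frame_nr = 128;
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    /* Map the ring once; the kernel copies frames straight into it. */
    unsigned char *ring = mmap(NULL, (size_t) req.tp_block_size * req.tp_block_nr,
                               PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    for (unsigned i = 0; ; i = (i + 1) % req.tp_frame_nr) {
        struct tpacket_hdr *hdr =
            (struct tpacket_hdr *) (ring + i * req.tp_frame_size);
        while (!(hdr->tp_status & TP_STATUS_USER)) {
            /* Sleep in poll() until the kernel hands us this frame. */
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
        }
        printf("frame of %u bytes\n", hdr->tp_len);  /* data is at hdr + tp_mac */
        hdr->tp_status = TP_STATUS_KERNEL;           /* hand the slot back */
    }
}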
It may be because DC++ receives its data as a stream that the kernel's TCP/IP stack (and possibly the NIC's offload engine) has already processed, so your processor is not doing any TCP/IP work for it. In your case, you are pulling raw frames directly off the NIC, so every frame is processed by your CPU instead, and since you fetch data in an infinite loop, you are doing a lot of processing; hence the CPU usage spike.

What is the Linux version of GetTickCount? [duplicate]

I'm looking for an equivalent to GetTickCount() on Linux.
Presently I am using Python's time.time(), which presumably calls through to gettimeofday(). My concern is that the time returned (seconds since the Unix epoch) may change erratically if the clock is adjusted, such as by NTP. A simple process or system wall time that only increases monotonically at a constant rate would suffice.
Does any such time function in C or Python exist?
You can use CLOCK_MONOTONIC, e.g. in C:
#include <time.h>

struct timespec ts;
if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
    /* error */
}
See this question for a Python way - How do I get monotonic time durations in python?
This seems to work:
#include <stdint.h>
#include <time.h>

uint32_t getTick() {
    struct timespec ts;
    unsigned theTick = 0U;
    /* CLOCK_MONOTONIC rather than CLOCK_REALTIME, so the tick count is
       not affected when the system clock is changed. */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    theTick = ts.tv_nsec / 1000000;   /* nanoseconds -> milliseconds */
    theTick += ts.tv_sec * 1000;      /* seconds -> milliseconds */
    return theTick;
}
Yes; getTick() is the backbone of my applications, which consist of one state machine for each 'task'. They can multitask without using threads or inter-process communication, and can implement non-blocking delays.
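A minimal sketch of such a non-blocking delay, built on getTick() above (task_t and the 500 ms value are illustrative):
#include <stdint.h>

typedef struct { uint32_t deadline; int state; } task_t;

void task_step(task_t *t)
{
    switch (t->state) {
    case 0:
        t->deadline = getTick() + 500;  /* arm a 500 ms delay */
        t->state = 1;
        break;
    case 1:
        if ((int32_t)(getTick() - t->deadline) >= 0)  /* wrap-safe compare */
            t->state = 2;               /* delay elapsed; move on */
        break;                          /* otherwise return immediately */
    }
}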
You should use clock_gettime(CLOCK_MONOTONIC, &tp);. This call is not affected by adjustments to the system time, just as GetTickCount() on Windows is not.
Yes, the kernel has high-resolution timers, but the interface is different. I would recommend looking at the sources of any project that wraps this in a portable manner.
From C/C++ I usually #ifdef this and use gettimeofday() on Linux, which gives me microsecond resolution. I often add the microseconds as a fraction to the seconds since the epoch that I also receive, giving me a double.
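A small sketch of that gettimeofday() approach (the helper name is mine):
#include <sys/time.h>

/* Seconds since the epoch plus microseconds as a fraction, as a double. */
double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);               /* microsecond resolution */
    return tv.tv_sec + tv.tv_usec / 1e6;   /* fractional seconds */
}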
