CentOS: use SNMP to show interface usage - linux

I have an SNMP monitoring box and want to monitor interface utilisation on a clustered database server. I'm trying to work out the correct OID to monitor - I just need SNMP to return the total interface throughput at a given time.
The SNMP box is already configured and will graph it correctly. All the howtos I can find talk about setting up Cacti or MRTG, which is all well and good, but what I need seems simpler, yet I can't seem to find what I'm looking for. The SNMP box is already set up with the correct community name etc., so this should be a really easy one in theory.
Any help very gratefully received
Thanks

When you say "interface utilisation", I assume you mean Ethernet interface utilization. If that assumption is correct, there are a couple OIDs to investigate:
1.3.6.1.2.1.2.2.1.10 - ifInOctets returns the total number of octets received on the interface, including framing characters.
1.3.6.1.2.1.2.2.1.16 - ifOutOctets returns the total number of octets transmitted out of the interface, including framing characters.
1.3.6.1.2.1.31.1.1.1.6 - ifHCInOctets returns the total number of octets received on the interface, including framing characters (this is the 64-bit version of ifInOctets).
1.3.6.1.2.1.31.1.1.1.10 - ifHCOutOctets returns the total number of octets transmitted out of the interface, including framing characters (this is the 64-bit version of ifOutOctets).
Each OID is part of a table and will have an associated index that links it to an interface description (e.g., eth0 or br1).
These OIDs provide a count of octets received and transmitted so they require a little massaging to get into the utilization rates you desire. In the past when I've monitored these OIDs I've queried for two values a few seconds apart and then calculated the rate.
(QueryResult2 - QueryResult1) / (SecondsElapsed)
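Counter wrap is worth handling too: ifInOctets and ifOutOctets are 32-bit counters that roll over at 2^32 (with the 64-bit ifHC variants this is rarely a concern). A minimal sketch of the rate calculation in C, assuming two 32-bit samples taken a known number of seconds apart (the values in main are made up for illustration):

#include <stdio.h>
#include <stdint.h>

/* Octets-per-second rate from two 32-bit SNMP counter samples taken
 * dt seconds apart; tolerates a single counter wrap between samples. */
double octet_rate(uint32_t sample1, uint32_t sample2, double dt)
{
    uint64_t delta = (sample2 >= sample1)
        ? (uint64_t)sample2 - sample1
        : ((uint64_t)1 << 32) - sample1 + sample2; /* counter wrapped */
    return delta / dt;
}

int main(void)
{
    /* e.g. two ifInOctets queries taken 10 seconds apart */
    printf("%.1f octets/s\n", octet_rate(4294967000u, 1000u, 10.0));
    return 0;
}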
I would guess that Cacti (which I assume you're using, since you tagged your question with it) has some way to calculate rates from SNMP values; however, I've never used it, so I am not positive.
One other important note is that the default snmpd.conf included with CentOS may not have these OIDs enabled. If you run snmpwalk on 1.3.6.1.2.1.2 and 1.3.6.1.2.1.31 and receive empty results, edit /etc/snmp/snmpd.conf to configure the SNMP daemon to respond to those OIDs. I can't remember the exact syntax, but I think adding a line like
view all included .1
will enable all available OIDs on the server.

http://namhuy.net/908/how-to-install-iftop-bandwidth-monitoring-tool-in-rhel-centos-fedora.html
Requirements:
libpcap: provides a user-level interface for capturing network packets and collecting statistics.
libncurses: an API library that enables programmers to build text-based interfaces in a terminal.
gcc: the GNU Compiler Collection (GCC), a compiler system produced by the GNU Project supporting various programming languages.
Install libpcap, ncurses, and gcc via yum
yum -y install libpcap libpcap-devel ncurses ncurses-devel gcc
Download and Install iftop
wget http://www.ex-parrot.com/pdw/iftop/download/iftop-0.17.tar.gz
tar -zxvf iftop-0.17.tar.gz
cd iftop-0.17
./configure
make
make install

Related

c - using nl80211 without libnl or libnl-genl?

I'm hoping to just use the header in the kernel, linux/nl80211.h to get the channel my network device is on. I'm on a very restricted system where building has to happen with a minimum number of extra packages. It feels strange that SIOCGIWFREQ would be so easy to get, but I'd need a library to just get a frequency via nl80211.
Are there any examples of how to use the nl80211 interface directly in Linux? I'm just hoping to get NL80211_FREQUENCY_ATTR_FREQ
After a lot of struggling, I found out! It's actually easier to use netlink without libnl, as long as you're not doing anything complicated.
I wrote up an example here that prints all your wireless devices, what networks and channels they're connected to: https://github.com/cnlohr/netlink_without_libnl/blob/master/without_libnl.c
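For reference, here is a minimal sketch of the first step: resolving the nl80211 generic-netlink family id over a raw NETLINK_GENERIC socket without libnl. Querying NL80211_FREQUENCY_ATTR_FREQ then follows the same build-request/parse-attributes pattern. Error handling is trimmed and the attribute walk is simplified:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/genetlink.h>

#define NLA_DATA(na) ((void *)((char *)(na) + NLA_HDRLEN))

/* Ask the generic-netlink controller for the "nl80211" family id. */
static int resolve_nl80211(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
    if (fd < 0)
        return -1;

    struct {
        struct nlmsghdr n;
        struct genlmsghdr g;
        char buf[64];
    } req;
    memset(&req, 0, sizeof(req));
    req.n.nlmsg_type = GENL_ID_CTRL;
    req.n.nlmsg_flags = NLM_F_REQUEST;
    req.n.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN);
    req.g.cmd = CTRL_CMD_GETFAMILY;
    req.g.version = 1;

    /* Append attribute: CTRL_ATTR_FAMILY_NAME = "nl80211" */
    const char *name = "nl80211";
    struct nlattr *na = (struct nlattr *)((char *)&req + NLMSG_ALIGN(req.n.nlmsg_len));
    na->nla_type = CTRL_ATTR_FAMILY_NAME;
    na->nla_len = NLA_HDRLEN + strlen(name) + 1;
    memcpy(NLA_DATA(na), name, strlen(name) + 1);
    req.n.nlmsg_len = NLMSG_ALIGN(req.n.nlmsg_len) + NLA_ALIGN(na->nla_len);

    struct sockaddr_nl sa;
    memset(&sa, 0, sizeof(sa));
    sa.nl_family = AF_NETLINK; /* nl_pid = 0 targets the kernel */
    sendto(fd, &req, req.n.nlmsg_len, 0, (struct sockaddr *)&sa, sizeof(sa));

    union { struct nlmsghdr nh; char buf[4096]; } resp;
    ssize_t len = recv(fd, resp.buf, sizeof(resp.buf), 0);
    close(fd);
    if (len < (ssize_t)NLMSG_LENGTH(GENL_HDRLEN) || resp.nh.nlmsg_type == NLMSG_ERROR)
        return -1;

    /* Walk the reply's attributes looking for CTRL_ATTR_FAMILY_ID. */
    struct nlattr *attr = (struct nlattr *)((char *)NLMSG_DATA(&resp.nh) + GENL_HDRLEN);
    int remaining = resp.nh.nlmsg_len - NLMSG_LENGTH(GENL_HDRLEN);
    while (remaining >= (int)NLA_HDRLEN && attr->nla_len >= NLA_HDRLEN) {
        if (attr->nla_type == CTRL_ATTR_FAMILY_ID)
            return *(uint16_t *)NLA_DATA(attr);
        remaining -= NLA_ALIGN(attr->nla_len);
        attr = (struct nlattr *)((char *)attr + NLA_ALIGN(attr->nla_len));
    }
    return -1;
}

int main(void)
{
    printf("nl80211 family id: %d\n", resolve_nl80211());
    return 0;
}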

ICMP timestamps added to ping echo requests in Linux. How are they represented as bytes? Or how do you convert UNIX Epoch time to that timestamp format?

I encountered this issue while trying to write a ping program in C myself, and I used Wireshark to dig further into the problem: in Linux, ping, which sends echo requests to a destination IP, also adds a timestamp field of 8 bytes (a TOD timestamp) after the ICMP header. Ping on Windows doesn't add that timestamp, but rather, I think, makes the time calculations locally. Now my question is: how do you convert the time from Unix Epoch format (the number of seconds since 1970, which you get with the time function in C) to that 8-byte TOD format? I got to this question because, after quite a bit of research, my ping.c program sends the ICMP echo request to the destination; in a test with two hosts I noticed that it arrives, but no echo reply comes back, while the native Linux ping works properly. I can only imagine two possible causes:
I didn't fill in the fields of the ICMP and IP headers correctly. To be honest, I doubt this possibility myself, because Wireshark shows the message arrives at the destination and is recognized as an echo request message, yet it doesn't trigger any echo reply. However, if it is this, the only thing I can think of is that timestamp, which I don't know how to convert to TOD form so that it occupies at most 8 bytes.
There might be a firewall at the destination, or some other system-dependent factor.
https://www.ibm.com/docs/en/zos/2.2.0?topic=problems-using-ping-command
The Ping command does not use the ICMP/ICMPv6 header sequence number field (icmp_seq or icmp6_seq) to correlate requests with ICMP/ICMPv6 Echo Replies. Instead, it uses the ICMP/ICMPv6 header identifier field (icmp_id or icmp6_id) plus an 8-byte TOD time stamp field to correlate requests with replies. The TOD time stamp is the first 8-bytes of data after the ICMP/ICMPv6 header.
Finally, to repeat the initial question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header / beginning of the data field?
A useful explanation, though I don't think a sufficient one, can be found here:
https://towardsdatascience.com/3-tips-to-handle-timestamps-in-c-ad5b36892294
I should probably mention I'm working on Ubuntu 20.04 Focal Fossa.
I found a related post here. The book "Principles of Operation" is mentioned in the comments. I skimmed through it, but it seems to be generally lower level than C, so if anyone knows another place or way to answer the question, that would be better.
Background
Including the UNIX timestamp of the time of transmission in the first data bytes of the ICMP Echo message is a trick/optimization the original ping by Mike Muuss used to avoid keeping track of it locally. It exploits the following guarantee made by RFC 792's Echo or Echo Reply Message description:
The data received in the echo message must be returned in the echo reply message.
Many (if not all) BSD ping implementations are based on Mike Muuss' original implementation and therefore kept this behavior. On Linux systems, ping is typically provided by iputils, GNU inetutils, or Busybox. All exhibit the same behavior. fping is a notable exception, which stores a mapping from target host and sequence number to timestamp locally.
Implementations typically store the timestamp in the sender's native precision and byte order, as opposed to a well-defined precision in conventional network byte order (big endian) as normally used for data transmitted over the network, since it is intended to be interpreted only by the original sender; others should just treat it as an opaque stream of bytes.
Because this is so common, however, the Wireshark ICMP dissector (as of v3.6.2) tries to be clever and heuristically decode it nonetheless, if the first 8 data bytes look like a struct timeval timestamp with 32-bit precision for seconds and microseconds in either byte order. Note that if the sender was actually using big-endian 64-bit precision this will fail, and if it was using little-endian 64-bit precision it will truncate the microseconds before the Epochalypse and fail after that.
Obtaining and serializing epoch time
To answer your actual question:
How do you convert the UNIX Epoch time to the TOD timestamp form which Linux ping adds at the end of the ICMP header / beginning of the data field?
Most implementations use the obsolescent gettimeofday(2) instead of the newer clock_gettime(2). The following snippet is taken from iputils:
struct icmphdr *icp /* = pointer to ICMP packet */;
struct timeval tmp_tv;
gettimeofday(&tmp_tv, NULL);
memcpy(icp + 1, &tmp_tv, sizeof(tmp_tv));
memcpy from a temporary variable, instead of passing icp + 1 directly as the target to gettimeofday, is used to avoid potential improper-alignment, effective-type, and strict-aliasing violation issues.
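In other words, there is no real conversion: the raw struct timeval bytes are copied in as-is and are only ever read back by the original sender. A hypothetical receive-side counterpart (assuming payload points at the echoed ICMP payload of the reply, and that sender and receiver are the same host, so native precision and byte order match):

#include <stdio.h>
#include <string.h>
#include <sys/time.h>

void print_rtt(const unsigned char *payload)
{
    struct timeval sent, now, rtt;
    memcpy(&sent, payload, sizeof(sent)); /* first bytes of the echoed data */
    gettimeofday(&now, NULL);
    timersub(&now, &sent, &rtt);          /* BSD macro from <sys/time.h> */
    printf("time=%ld.%03ld ms\n",
           (long)(rtt.tv_sec * 1000 + rtt.tv_usec / 1000),
           (long)(rtt.tv_usec % 1000));
}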
I appreciate the clear answers. I actually managed to solve the problem and make the ping function work, and I have to say the problem was certainly not the timestamp, because, yes indeed, I was talking about the "Echo or Echo Reply Message". One way of implementing ping is by using the ICMP Echo and Echo Reply messages. The fact is, when I posted this question I was obviously stuck, probably because I wasn't clearly differentiating the main aspects of the problem.
Thus, I started to examine the packet sent by the native ping on my Ubuntu 20.04 Focal Fossa (with Wireshark), hoping to get a better grasp of how to fill in the headers of the packet sent by my program (the IP and ICMP headers). This question simply arose from the fact that I noticed that in this version of Ubuntu, ping adds a timestamp (indeed of 32 bits) after the end of the ICMP header, basically in the data/payload section. As a matter of fact, I also used Wireshark on Windows 10 and saw that there is no timestamp added after the header. So yes, it might be about different versions of the program being used.
The main point I want to emphasize is that my final version of ping has nothing to do with any timestamps. So yes, they are not a crucial aspect of making ping work.

Building a custom small-sized TCPDUMP executable on the order of 100 to 300 KB

I need to build a small-sized tcpdump for the embedded project I am working on. Since the memory of my embedded device is limited, I need to strip all the unwanted functionality from tcpdump while building it. My target is to make the tcpdump executable less than 300 KB. After running strip on the binary and disabling package options in configure, I have reached 750 KB. To go further, I want to remove all the protocol-decoding capability of tcpdump; I want tcpdump to have no more than hex-dump capability. I have the below initial list of unwanted protocol decoders to remove.
print-802_11.c
print-802_15_4.c
print-ah.c
print-ahcp.c
print-aodv.c
print-aoe.c
print-ap1394.c
print-atalk.c
print-atm.c
print-babel.c
print-bootp.c
print-bt.c
print-calm-fast.c
print-carp.c
print-cdp.c
print-cfm.c
print-chdlc.c
print-cip.c
print-cnfp.c
print-dccp.c
print-decnet.c
print-dtp.c
print-dvmrp.c
print-eap.c
print-egp.c
print-eigrp.c
print-enc.c
print-esp.c
print-fddi.c
print-forces.c
print-ipx.c
print-isakmp.c
print-isoclns.c
print-juniper.c
print-krb.c
print-lane.c
print-m3ua.c
print-sip.c
print-sl.c
print-sll.c
print-sunatm.c
print-zephyr.c
print-usb.c
print-vjc.c
print-vqp.c
print-timed.c
print-tipc.c
print-token.c
I started to remove these from Makefile.in and to remove the function calls manually in the source code, but then I realized this approach is not scalable.
Is there a better way to do this? Perhaps some configure options?
I am new to this, so please explain.
Is there a better way to do this? Perhaps some configure options?
No, there are no such configure options. You'll have to do it the non-scalable way.
"I want to remove all the protocol decoding capability of tcpdump. I
want the tcpdump to have no more that hex dump capability. [...] Is
there a better way to do this ?"
I think there is, but with a very different approach.
If all you want from tcpdump is:
the capability of specifying an interface,
put this interface in promiscuous mode or not, or in monitor mode if it's a Wi-Fi interface,
apply a capture filter,
and then spit the output in a file or as hex to stdout,
...you'd be better off writing your own from scratch, using libpcap (which is what tcpdump uses, BTW).
This should be no more than 100-400 lines of C code depending on the options you want, you'll have a very, very small executable, and no more dependencies than tcpdump, which requires libpcap anyway. All the complexity is in the dissection; once you remove all that, what you have is basically... a pcap loop.
It's not that hard, and it looks to me like far less work than your approach - and also more interesting work.
There's a tutorial to start with (30-60 minutes read):
http://www.tcpdump.org/pcap.html
...at the end of this tutorial, you'll already have the core of your program.
And you can find loads of info (and ask questions) in the related SO tags:
https://stackoverflow.com/tags/libpcap/info
https://stackoverflow.com/tags/pcap/info
...and have about 70 well-written man pages documenting the full pcap API (you'll end up using maybe 10-20 of these).
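To give a feel for the size involved, here is a minimal sketch of such a tool: open an interface, apply a capture filter, and hex-dump each packet. The interface name and filter expression are placeholders; error handling is trimmed:

#include <pcap.h>
#include <stdio.h>

/* Print a timestamp, the captured length, then the bytes as hex. */
static void dump(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user;
    printf("%ld.%06ld len=%u\n",
           (long)h->ts.tv_sec, (long)h->ts.tv_usec, h->caplen);
    for (bpf_u_int32 i = 0; i < h->caplen; i++)
        printf("%02x%s", bytes[i], (i + 1) % 16 ? " " : "\n");
    printf("\n");
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1 /* promiscuous */, 1000, errbuf);
    if (!p) {
        fprintf(stderr, "%s\n", errbuf);
        return 1;
    }
    struct bpf_program fp;
    if (pcap_compile(p, &fp, "tcp port 80", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(p, &fp);
    pcap_loop(p, -1, dump, NULL); /* capture until interrupted */
    pcap_close(p);
    return 0;
}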

Finding out the number of dropped packets in raw sockets

I am developing a program that sniffs network packets using a raw socket (AF_PACKET, SOCK_RAW) and processes them in some way.
I am not sure whether my program runs fast enough to capture all packets arriving on the socket. I am worried that the receive buffer for this socket occasionally fills up (due to traffic bursts) and some packets are dropped.
How do I know if packets were dropped due to lack of space in the socket's receive buffer?
I have tried running ss -f link -nlp.
This outputs the number of bytes currently stored in the receive buffer for that socket, but I cannot tell whether any packets were dropped.
I am using Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-52-generic x86_64).
Thanks.
I was having a similar problem. I knew that tcpdump was able to generate statistics about packet drops, so I tried to figure out how it did that. Looking at the code of tcpdump, I noticed that it does not generate those statistics by itself; it uses the libpcap library to get them. libpcap, in turn, gets those statistics via the PACKET_STATISTICS socket option from the if_packet.h header (at least I think so, but I'm no C expert).
Therefore, I saw only two solutions to the problem:
Interact somehow with the Linux header files from my Python script to get the packet statistics, which seemed a bit complicated.
Use the Python wrapper for libpcap, pypcap, to get that information.
Since I had no clue how to do the first, I implemented the second option. Here is an example of how to get packet statistics using pypcap and how to parse the packet data using dpkt:
import pcap
import dpkt
import socket

pc = pcap.pcap(name="eth0", timeout_ms=10000, immediate=True)

def packet_handler(ts, pkt):
    # print packet statistics: (packets received, packets dropped,
    # packets dropped by the interface)
    print pc.stats()
    # example packet parsing using dpkt
    eth = dpkt.ethernet.Ethernet(pkt)
    if eth.type != dpkt.ethernet.ETH_TYPE_IP:
        return
    ip = eth.data
    layer4 = ip.data
    ipsrc = socket.inet_ntoa(ip.src)
    ipdst = socket.inet_ntoa(ip.dst)

pc.loop(0, packet_handler)
The tpacket_stats structure is defined in the linux/if_packet.h header file.
Create a variable of that structure type and pass it to getsockopt with the SOL_PACKET level and the PACKET_STATISTICS option; this gives you the received and dropped packet counts.
-- sometimes drops are simply due to the receive buffer size
-- so if you want to decrease the drop count, try increasing the buffer size with setsockopt (SO_RCVBUF)
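A minimal sketch of those two calls in C, assuming fd is an already-open PF_PACKET socket (note that the kernel resets the counters each time they are read):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

/* Print receive/drop counters for a PF_PACKET socket. */
static void print_packet_stats(int fd)
{
    struct tpacket_stats st;
    socklen_t len = sizeof(st);
    if (getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, &st, &len) == 0)
        printf("received: %u  dropped: %u\n", st.tp_packets, st.tp_drops);
}

/* If drops show up, enlarging the socket receive buffer may help. */
static void grow_rcvbuf(int fd, int bytes)
{
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
}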
First off, switch your operating system.
You need a reliable, network oriented operating system. Not some pink fluffy "ease of use" with "security" functionality enabled. NetBSD or Gentoo/ArchLinux (the bare installations, not the GUI kitted ones).
Start a simultaneous tcpdump on a network tap and capture the traffic you're supposed to receive alongside your program, then compare the results.
There's no efficient way to check if you've received all the packets you intended to on the receiving end since the packets might be dropped on a lower level than you anticipate.
Also, this is a question for Unix @ StackExchange; there's no programming here that I can see - at least there's no code.
The only certain way to verify packet drops is to have a much beefier sender (perhaps a farm of machines sending packets) to a single client, and to record every packet sent to your receiver. Have the statistical data analyzed and compared against your senders, and see how much you dropped.
The cheaper way is to buy a network tap, or, even more ad hoc, to enable port mirroring in your switch if possible. This enables you to dump as much traffic as possible onto a second machine.
This will give you a more accurate result, because your application machine will be busy as it is, taking care of incoming traffic and processing it.
Furthermore, this is why network taps are effective: they split the communication into two channels, the receiving and sending directions of your traffic, if you will. This enables you to capture traffic on two separate machines (also using tcpdump, but instead of a mirrored port, you get more accurate traffic mirroring).
So either use port mirroring, or buy a dedicated hardware network tap.

tcpdump: capture outgoing packets on virtual interfaces that have a link type unknown to libpcap?

In the system I am testing right now, a couple of virtual L2 devices are chained together to add our own L2.5 headers between the Ethernet headers and the IP headers. Now when I use
tcpdump -xx -i vir_device_1
it actually shows the SLL header with the IP header. How do I capture the full packet that is actually going out of vir_device_1, i.e. after the ndo_start_xmit() device call?
How do I capture the full packet that is actually going out of vir_device_1, i.e. after the ndo_start_xmit() device call?
Either by writing your own code to directly use a PF_PACKET/SOCK_RAW socket (you say "SLL header", so this is presumably Linux), or by:
making sure you've assigned a special ARPHRD_ value for your virtual interface;
using one of the DLT_USERn values for your special set of headers, or asking tcpdump-workers@lists.tcpdump.org for an official DLT_ value to be assigned for them;
modifying libpcap to map that ARPHRD_ value to the DLT_ value you're using;
modifying tcpdump to handle that DLT_ value;
if necessary, modifying other programs that would capture on that interface or read capture files as written by tcpdump on that interface to handle that value as well.
Note that the DLT_USERn values are specifically reserved for private use, and no official versions of libpcap, tcpdump, or Wireshark will ever assign them for their own use (i.e., if you use a DLT_USERn value, don't bother contributing patches to assign that value to your type of headers, as they won't be accepted; other people may already be using it for their own special headers, and that must continue to be supported), so you'll have to maintain the modified versions of libpcap, tcpdump, etc. yourself if you use one of those values rather than getting an official value assigned.
Thanks Guy Harris for providing very helpful answers to my original question!
I am adding this as an answer/note to a follow up question I asked in the comments.
Basically, my question was: what is the state of the packet received by PF_PACKET/SOCK_RAW?
For a software device (no queue), dev_queue_xmit() will call dev_hard_start_xmit(skb, dev) to start transmitting the skb buffer. This function calls dev_queue_xmit_nit() before it calls dev->ops->ndo_start_xmit(skb, dev), which means the packet PF_PACKET sees is in the state before any changes made in ndo_start_xmit().
