The scope of this work is to query the high-resolution timers of two machines at the "same time" and measure the clock inaccuracy between the two systems. This is done by having a third machine send an SNMP GET for a custom OID, where the SNMP agent is configured to invoke a Perl script that returns the high-resolution timer. All works fine, in that snmpget returns the expected result. However, it appears that regardless of how frequently I run snmpget, the agent only performs a fresh call to the script at roughly 5-second intervals. I am running NET-SNMP version 5.4.3.
After some research I've found that it is typical of NET-SNMP to cache results, and that this is done per MIB subtree. There is a MIB table (nsCacheTable) holding the respective caching intervals, which can be inspected by running snmpwalk against 1.3.6.1.4.1.8072.1.5.3. Apparently the values can be changed to 0 to remove the caching, although some of them are read-only. I've managed to set a few of them to 0 using snmpset, but most of them return a "Bad object type" error.
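For reference, the cache table mentioned above can be inspected with something like the following (the host and community string are placeholders):
snmpwalk -v 2c -c <community> <agent-host> 1.3.6.1.4.1.8072.1.5.3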
I know only very basic SNMP, so I followed an online guide and mapped the custom OID below to the Perl script with this line in snmpd.conf:
extend .1.3.6.1.4.1.27654.3 return_date /usr/bin/perl [directory]/[perl script name].pl
Then the actual OID containing the output (the epoch time) is:
iso.3.6.1.4.1.27654.3.3.1.1.11.114.101.116.117.114.110.95.100.97.116.101
(The trailing part of the OID is just the extension name "return_date" encoded as its length, 11, followed by the ASCII value of each character.)
Does anyone have any ideas on how I can disable caching for this OID?
Thanks in advance.
---EDIT---
According to this blog post, instead of disabling the caching one can use pass_persist scripts, which look more complex to implement at first glance. The Perl script I have been calling is below:
#!/usr/bin/perl
# THIS SCRIPT RETURNS THE EPOCH TIME OF DAY IN MICROSECONDS
use Time::HiRes qw(gettimeofday);
($s, $usec) = gettimeofday();
# Zero-pad the microseconds so values below 100000 are not truncated
$newtime = sprintf("%d%06d", $s, $usec);
print($newtime);
Can anyone help me convert this script to pass_persist, and show what the corresponding snmpd.conf entry should look like?
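Not a definitive answer, but here is a rough sketch of what a pass_persist handler could look like. It is written in Python rather than Perl purely to illustrate the stdin/stdout protocol (PING/PONG, get, getnext, set); the .1 leaf suffix, the script name, and returning the value as a string are my own assumptions. The extend line in snmpd.conf would be replaced with something along these lines (hires_time.py is a hypothetical name):
pass_persist .1.3.6.1.4.1.27654.3 /usr/bin/python3 [directory]/hires_time.py
#!/usr/bin/env python3
# Minimal pass_persist handler sketch: answers GET/GETNEXT for a single OID
# with the current epoch time in microseconds, returned as a string.
import sys
import time

BASE_OID = ".1.3.6.1.4.1.27654.3"   # base OID registered in snmpd.conf
TIME_OID = BASE_OID + ".1"          # leaf served by this script (assumed layout)

def reply_time(oid):
    # snmpd expects three lines in response: OID, type, value
    print(oid)
    print("string")
    print(int(time.time() * 1000000))
    sys.stdout.flush()

for line in sys.stdin:
    cmd = line.strip()
    if cmd == "PING":
        print("PONG")
        sys.stdout.flush()
    elif cmd == "get":
        oid = sys.stdin.readline().strip()
        if oid == TIME_OID:
            reply_time(TIME_OID)
        else:
            print("NONE")
            sys.stdout.flush()
    elif cmd == "getnext":
        oid = sys.stdin.readline().strip()
        if oid.startswith(BASE_OID) and oid != TIME_OID:
            reply_time(TIME_OID)
        else:
            print("NONE")
            sys.stdout.flush()
    elif cmd == "set":
        sys.stdin.readline()            # OID being set (ignored)
        sys.stdin.readline()            # type and value (ignored)
        print("not-writable")
        sys.stdout.flush()
Because the script stays running and answers each request over the pipe as it arrives, this is the approach the blog post suggests as a way around the extend caching.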
I'd like to use 'perf' to measure the real execution time of a function. The 'perf script' command gives the timestamp at which the function is called.
Xorg 1523 [001] 25712.423702: probe:sock_write_iter: (ffffffff95cd8b80)
The timestamp field's format is X.Y. How should I interpret this value? Is it X.Y seconds?
X.Y is the timestamp in units of seconds.microseconds.
How this value is formatted for display can be seen here. You can also pass the --ns switch to perf script to display the timestamps in seconds.nanoseconds format.
To understand this value, you need to understand how perf computes timestamps. Each event can be associated with a different clock function for computing its timestamps. By default, perf uses the sched_clock function to compute timestamps for an event, more details here.
event->clock = &local_clock;
But you can use the -k switch with the perf record command to associate an event with one of several clockids.
-k, --clockid
Sets the clock id to use for the various time fields in the
perf_event_type records. See clock_gettime(). In particular
CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW are supported, some
events might also allow CLOCK_BOOTTIME, CLOCK_REALTIME and
CLOCK_TAI.
Adding the -k switch to the perf record command selects the corresponding clock function, depending on which clockid you use, as can be seen here.
The sched_clock() function returns the number of nanoseconds since the system started. A particular architecture may or may not provide its own implementation of sched_clock(); if it does not, the system jiffy counter is used as sched_clock().
Note that all of the above code snippets are from Linux kernel 5.6.7.
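As a side note (not part of the original answer): once you know the field is seconds.microseconds, measuring the execution time of a function comes down to placing an entry probe and a return probe and subtracting the two timestamps in the perf script output. A rough sketch of doing that in Python, with purely illustrative probe names, might look like this:
#!/usr/bin/env python3
# Usage sketch (hypothetical): perf script | python3 elapsed.py
# Pairs up entry/return probe events and prints the elapsed time between them.
import re
import sys

ENTRY = "probe:sock_write_iter:"            # illustrative entry probe name
RETURN = "probe:sock_write_iter__return:"   # illustrative return probe name

# matches e.g. " 25712.423702: probe:sock_write_iter:"
ts_re = re.compile(r"\s(\d+\.\d+):\s+(\S+)")

start = None
for line in sys.stdin:
    m = ts_re.search(line)
    if not m:
        continue
    ts, event = float(m.group(1)), m.group(2)
    if event == ENTRY:
        start = ts
    elif event == RETURN and start is not None:
        print("elapsed: %.3f us" % ((ts - start) * 1e6))
        start = None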
X.Y is the raw form of the timestamp. perf keeps this time value in nanoseconds, and it needs to be converted to a human-readable format. You can use a website such as http://www.timestamp.fr/ to convert it to a date, or use bash:
date -d @25712
I am writing a Linux program that controls internet traffic; in other words, it tracks how many bytes I have used over some period of time. I use Pcap4J for Java (a Java wrapper around libpcap) and I have a question about it. What happens if my program hasn't finished processing a packet when a new one arrives?
1. Does it slow down the download (upload) rate for the whole OS?
2. Does it skip the new one, so that my program will never know it passed by?
In other words, say I've downloaded 1 GB of data on my computer. How many of those bytes does my program see: 100%, or can some of them pass my program by and still reach their destination?
And let me know if it is a bad idea to write a traffic-control app using this library!
Your application loses packets. In your words, they pass by.
However, if your idea is to have a metric of how many packets went in and out of your system in a given time, there are definitely better ways to achieve it.
On Linux you can just use a script that does something like this:
DEVICE=eth0
# Interface byte counters live under /sys/class/net/<device>/statistics/
RX0=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
TX0=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
while : ; do
    sleep 5
    RX1=$(cat /sys/class/net/$DEVICE/statistics/rx_bytes)
    TX1=$(cat /sys/class/net/$DEVICE/statistics/tx_bytes)
    echo "RX bytes: $(($RX1-$RX0))"
    echo "TX bytes: $(($TX1-$TX0))"
    RX0=$RX1
    TX0=$TX1
done
You can adjust the timing, or make the device a parameter; I think you'll get the idea.
I am working on a Python 3.6 based script to monitor DNS resolution time, using dnspython. I need to understand the best approach to getting the resolution time. I am currently using the following:
Method 1
import dns.resolver
import time

domain = "example.com"   # example values; the original snippet leaves these undefined
qtype = "A"

dns_start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
dns_end = time.perf_counter()
print("DNS record is:", answers.rrset)
print("DNS time =", (dns_end - dns_start) * 1000, "ms")
Method 2
import dns.resolver

domain = "example.com"   # example values; the original snippet leaves these undefined
qtype = "A"

answers = dns.resolver.query(domain, qtype)
print("DNS record is:", answers.rrset)
print("DNS time =", answers.response.time * 1000, "ms")
Thanks
"It depends" as the famous saying says.
You need to define what is "best" for you in "best approach". The two methods do things correctly, just different things.
First one will count both network time, remote server processing, and then local generation and parsing of DNS messages, including time spent by Python in the dnspython library.
The second method takes only into account the network and remote processing time: if you search for response_time in https://github.com/rthalley/dnspython/blob/master/dns/query.py you will see it is computed as a difference between the time just before sending the message (hence after it is been build, so the generation time will not be included) and just after it has been received (hence before it is parsed, so parsing time will not be included).
Second method can basically test the remote server performances, irrespective of your local program (including Python itself and the dnspython library) performances (but the network time needed between you and remote server will be counted in).
First method shows the perceived time, as it includes all time needed in the code to do something, that is build the DNS packet and parse the result, actions without which the calling application could do nothing with the DNS content.
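For what it's worth, a small sketch that combines both measurements in a single run makes the difference easy to see (the domain and record type are just example values):
import time
import dns.resolver

domain, qtype = "example.com", "A"   # example inputs

wall_start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
wall_end = time.perf_counter()

total_ms = (wall_end - wall_start) * 1000    # method 1: includes local build/parse work
server_ms = answers.response.time * 1000     # method 2: network + remote processing only

print("total (perceived) time:", total_ms, "ms")
print("network + server time:", server_ms, "ms")
print("approximate local overhead:", total_ms - server_ms, "ms")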
I am trying to run a MiniZinc model with the OSICBC solver via bash, with the following command-line arguments (subject to a time limit of 30000 ms, i.e. 30 s):
minizinc --solver osicbc model.mzn data.dzn --time-limit 30000 --output-time
But for just this run, the whole process, from executing the command to getting the output, takes about a minute, and the output shows "Time Elapsed: 36.21s" at the end.
Is this the right approach to imposing a time limit on running this model, given that the total time taken includes everything from when the command is invoked to when the output appears in my terminal?
The --time-limit command line flag was introduced in MiniZinc 2.2.0 to allow the user to restrict the combined time that the compiler and the solver take. It also introduced --solver-time-limit to just limit the solver time.
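For example, if the intention is to give just the solver 30 seconds and leave compilation unrestricted, something along these lines should work (same model and data files as above):
minizinc --solver osicbc model.mzn data.dzn --solver-time-limit 30000 --output-time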
Note that minizinc will allow the solver some extra time to output its final solution.
If you find that these flags do not limit the solver to the specified time, and it is not stopped within about a second of the given limit, then this would suggest a bug, and I would invite you to file a bug report: https://github.com/MiniZinc/libminizinc/issues
On a Linux box, I need to display the average CPU utilisation per hour for the last week. Is that information logged somewhere? Or do I need to write a script that wakes up every 15 minutes to copy /proc/loadavg to a logfile?
EDIT: I'm not allowed to use any tools other than those that come with Linux.
You might want to check out sar (man page); it fits your use case nicely.
System Activity Reporter (SAR) - capture important system performance metrics at
periodic intervals.
Example from an IBM developerWorks article:
Add an entry to your root crontab
# Collect measurements at 10-minute intervals
0,10,20,30,40,50 * * * * /usr/lib/sa/sa1
# Create daily reports and purge old files
0 0 * * * /usr/lib/sa/sa2 -A
Then you can simply query this information using a sar command (display all of today's info):
root ~ # sar -A
Or just for a certain day's log file:
root ~ # sar -f /var/log/sa/sa16
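Since the question is specifically about CPU utilisation, the -u report is the relevant one; for instance, for that same day's file:
root ~ # sar -u -f /var/log/sa/sa16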
You can usually find it in the sysstat package for your Linux distro.
As far as I know it's not stored anywhere... It's a trivial thing to write, anyway. Just add something like
cat /proc/loadavg >> /var/log/loads
to your crontab.
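If you would rather not rely on cron, a small long-running script along these lines could do the 15-minute sampling the question mentions (the log path and interval are just example values):
#!/usr/bin/env python3
# Append a timestamped copy of /proc/loadavg to a log file every 15 minutes.
import time

LOG = "/var/log/loads"   # example path, matching the crontab suggestion above
INTERVAL = 15 * 60       # 15 minutes, as proposed in the question

while True:
    with open("/proc/loadavg") as f:
        sample = f.read().strip()
    with open(LOG, "a") as log:
        log.write("%d %s\n" % (int(time.time()), sample))
    time.sleep(INTERVAL)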
Note that there are monitoring tools (like Munin) which can do this kind of thing for you, and generate pretty graphs of it to boot... they might be overkill for your situation though.
I would recommend looking at Multi Router Traffic Grapher (MRTG).
Using snmpd to read the load average, it will automatically calculate averages over whatever time interval and span you choose, along with nice charts for analysis.
Someone has already posted a CPU usage example.