I'd like to use 'perf' to measure the real execution time of a function. The 'perf script' command gives a timestamp when the function is called.
Xorg 1523 [001] 25712.423702: probe:sock_write_iter: (ffffffff95cd8b80)
The timestamp field's format is X.Y. How should I interpret this value? Is it X.Y seconds?
X.Y is the timestamp in units of seconds.microseconds.
How this value is formatted for display can be seen here. You can also pass the --ns switch to perf script to display the timestamps in seconds.nanoseconds format.
To understand this value, you need to understand how the perf subsystem calculates timestamps. Each event can be associated with a different clock function for computing its timestamps. By default, perf uses the kernel's local_clock function (which is based on sched_clock) to compute timestamps for an event; more details here.
event->clock = &local_clock;
But you can use the -k switch with the perf record command to associate an event with one of several clockids.
-k, --clockid
Sets the clock id to use for the various time fields in the
perf_event_type records. See clock_gettime(). In particular
CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW are supported, some
events might also allow CLOCK_BOOTTIME, CLOCK_REALTIME and
CLOCK_TAI.
Adding the -k switch to the perf record command selects a different clock function depending on which clockid you pass, as can be seen here.
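For example, something like this (an illustrative command; the traced event is arbitrary) records timestamps taken from CLOCK_MONOTONIC:
$ perf record -k CLOCK_MONOTONIC -e sched:sched_switch -a sleep 1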
The sched_clock function returns the number of nanoseconds since the system was started. A particular architecture may or may not provide its own implementation of sched_clock(); if none is provided, the system jiffy counter is used as sched_clock().
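For reference, the generic jiffies-based fallback in kernel/sched/clock.c looks like this:
unsigned long long __weak sched_clock(void)
{
        return (unsigned long long)(jiffies - INITIAL_JIFFIES)
                                        * (NSEC_PER_SEC / HZ);
}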
Note that all of the above code snippets are from Linux kernel 5.6.7.
X.Y is the raw format of the timestamp. perf's time value, which is kept in nanoseconds, needs to be converted to a human-readable format. Use this website to convert it to a date, http://www.timestamp.fr/ , or use bash:
date -d @25712
(Note that this treats the value as seconds since the Unix epoch, whereas perf timestamps count from boot, so the result is only meaningful relative to boot time.)
I've been trying to analyze the output of perf sched record, but I don't understand what frame of reference to use to interpret the "20624.983302 secs". It isn't Unix time for sure, so what is it? How would I go about converting it into Unix time?
*A0 20624.983302 secs A0 => migration/0:12
*. 20624.983311 secs . => swapper:0
*B0 20624.983318 secs B0 => IPC I/O Child:33924
*. 20624.983355 secs
*C0 20624.983485 secs C0 => WRScene~lder#15:39974
*. 20624.983581 secs
*D0 20624.983972 secs D0 => IPC I/O Parent:33780
These timestamps are captured using the kernel scheduler clock, which counts in nanoseconds since boot. The exact details depend on the compile-time parameters chosen to build a particular Linux distribution and the target architecture.
In general, the timestamp of a sample is captured around the same time when it's recorded. Timestamps on the same core are guaranteed to be monotonically increasing as long as the core remains in an active state. The samples you've shown were all captured on the same core and the core remained active from the first sample to the last sample. So the timestamps are guaranteed to be monotonic in this case irrespective of the platform and distribution. When profiling on multiple cores, there is no guarantee that the clocks on all cores are in sync.
All perf tools use the same clock to capture timestamps, but they may differ in the way timestamps are printed and it may happen that two tools print timestamps from the same sample file differently. This depends on the kernel version.
It's possible to specify a clock source when calling perf_event_open() by setting use_clockid to 1 and setting clockid to one of the clock sources defined in linux/time.h, such as CLOCK_MONOTONIC. perf record provides the -k or --clockid option to specify the clock source for capturing timestamps.
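As a minimal sketch (an illustrative example, not taken from perf's source), selecting a clock via perf_event_open() looks roughly like this; perf record -k does the equivalent internally:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <time.h>

/* glibc provides no wrapper for perf_event_open(), so call it directly. */
static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_SOFTWARE;
    attr.config = PERF_COUNT_SW_CPU_CLOCK;
    attr.sample_type = PERF_SAMPLE_TIME; /* include a timestamp in each sample */
    attr.use_clockid = 1;                /* take timestamps from attr.clockid... */
    attr.clockid = CLOCK_MONOTONIC;      /* ...instead of the default kernel clock */

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }
    close(fd);
    return 0;
}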
Modern distributions on x86 typically use TSC as the source for the scheduler clock (check /sys/devices/system/clocksource/clocksource0/current_clocksource). So if you're on an x86 processor, most probably the TSC of the profiled core was used to capture the current value of TSC cycles, which internally gets converted into nanoseconds. When a timestamp is printed, it may get converted to a different unit. In this case, timestamps are printed in the format "seconds.microseconds". A summary of the behavior of TSC on Intel processors can be found at: Can constant non-invariant tsc change frequency across cpu states?.
I'm using perf to do some optimization work. I use perf script to show my perf records and get results like this:
MyServer 13631 [015] 2611179.755027: probe_app:stat_timer: (52bbe0)
......
I wrote a script to analyze the results and need to convert the timestamp to an epoch timestamp (accurate at least to the second).
To do this, I first get the host startup epoch timestamp:
$ date -d "`uptime -s`" +%s
1562557105
and verified it by adding the uptime:
$ cat /proc/uptime
2612552.50 36615651.34
So the start epoch timestamp is correct.
But I found that the perf timestamp 2611179.755027 is not the elapsed seconds since system startup as the documentation says; there is an error of about 100+ seconds. How can I get an accurate timestamp while running "perf record"? I tried "-T" but it doesn't seem to work.
The default clock used by perf (or the underlying perf_event_open) is not actually documented - so I would not necessarily rely on it to be any specific clock. Fortunately, perf allows you to choose a specific clock with the -k option. In general CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW are supported. Other clocks may be available depending on the events and kernel version.
If you choose a specific clock, you can then use clock_gettime to get matching timestamps. I'm not aware of a simple way to do this from the command line, but a C program will do.
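For example, a minimal C sketch (assuming the recording was made with perf record -k CLOCK_MONOTONIC): read CLOCK_MONOTONIC and CLOCK_REALTIME back to back, and the difference gives the offset for converting perf timestamps to epoch time.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec mono, real;

    /* Read both clocks as close together as possible. */
    clock_gettime(CLOCK_MONOTONIC, &mono);
    clock_gettime(CLOCK_REALTIME, &real);

    /* epoch time of a sample ~= perf timestamp + (real - mono) */
    double offset = (real.tv_sec - mono.tv_sec)
                  + (real.tv_nsec - mono.tv_nsec) / 1e9;
    printf("CLOCK_MONOTONIC now: %lld.%09ld\n",
           (long long)mono.tv_sec, mono.tv_nsec);
    printf("offset to epoch: %.9f seconds\n", offset);
    return 0;
}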
There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time the resulting functions represent.
My current understanding is that trigger_htstamp represents the time when stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system and it is incremented by a value that is equal to the period size I use. Hence on my system it is a simple frame counter. If I understand ALSA correctly audio_htstamp can also work in different more accurate way depending on the system capabilities.
driver_htstamp, I guess from the name, is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
It seems to hold with a little test program I wrote, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
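To make the idea concrete, here is a toy cross-timestamp sketch (illustrative only; CLOCK_REALTIME stands in for the second clock being compared):

#include <stdio.h>
#include <time.h>

static long long ts_ns(struct timespec t)
{
    return t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
    struct timespec a1, b, a2;

    /* Bracket the read of clock B between two reads of clock A. */
    clock_gettime(CLOCK_MONOTONIC, &a1);
    clock_gettime(CLOCK_REALTIME, &b); /* stand-in for the other clock */
    clock_gettime(CLOCK_MONOTONIC, &a2);

    /* Pair B with the midpoint of the two A readings; half the bracket
       width bounds the measurement error of this one sample. */
    long long a_mid = (ts_ns(a1) + ts_ns(a2)) / 2;
    printf("A=%lld ns, B=%lld ns, error bound=%lld ns\n",
           a_mid, ts_ns(b), (ts_ns(a2) - ts_ns(a1)) / 2);
    return 0;
}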
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer updating is done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this counter reading forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment in the code says. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In this case, the timestamps you get might not match the available-frames and delay-frames counts calculated by hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In any case, the code is designed in both cases to capture htstamp and audio_htstamp as closely together as possible, and for htstamp - trigger_htstamp to represent the amount of system time that passed during the period measured by audio_htstamp of the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be used with the USB Audio driver, as I think it and HDAudio are the only ones that do anything special with these interfaces right now.
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9
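If you want to inspect these timestamps yourself, here is a minimal alsa-lib sketch (it assumes an already-opened, running snd_pcm_t *pcm and linking with -lasound); reading all four values from a single snd_pcm_status() call ensures they describe the same hw_ptr update:

#include <alsa/asoundlib.h>
#include <stdio.h>

static void dump_htstamps(snd_pcm_t *pcm)
{
    snd_pcm_status_t *status;
    snd_htimestamp_t trigger, audio, driver, ht; /* snd_htimestamp_t is a struct timespec */

    snd_pcm_status_alloca(&status); /* stack-allocated, nothing to free */
    if (snd_pcm_status(pcm, status) < 0)
        return;

    snd_pcm_status_get_trigger_htstamp(status, &trigger);
    snd_pcm_status_get_audio_htstamp(status, &audio);
    snd_pcm_status_get_htstamp(status, &ht);
    snd_pcm_status_get_driver_htstamp(status, &driver);

    printf("trigger: %lld.%09ld\n", (long long)trigger.tv_sec, trigger.tv_nsec);
    printf("audio:   %lld.%09ld\n", (long long)audio.tv_sec, audio.tv_nsec);
    printf("htstamp: %lld.%09ld\n", (long long)ht.tv_sec, ht.tv_nsec);
    printf("driver:  %lld.%09ld\n", (long long)driver.tv_sec, driver.tv_nsec);
}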
The scope of this work is to query two machines' high-resolution timers at the "same time" and measure the clock inaccuracy between both systems. This is done by having a third machine send an SNMP GET for a custom OID, where the SNMP agent is configured to invoke a Perl script that returns the high-resolution timer. All works fine, in that the snmpget manages to return the expected result.
However, it appears that regardless of the frequency of the snmpget queries, the SNMP agent only performs a fresh query to the script at ~5 second intervals. I am running NET-SNMP version 5.4.3. After some research I've seen that it is typical of NET-SNMP to cache results, and that this is done on a per-MIB-tree basis. There is a MIB table (nsCacheTable) with the respective intervals, visible by running snmpwalk on 1.3.6.1.4.1.8072.1.5.3. Apparently the values can be changed to 0 to remove caching, though some of them are read-only; I've managed to set a few of them to 0 using snmpset, but most return a Bad object type error.
I know very basic SNMP so I followed a guide online and mapped the below custom OID to the perl script with this line in the snmpd.conf.
extend .1.3.6.1.4.1.27654.3 return_date /usr/bin/perl [directory]/[perl script name].pl
Then the actual OID containing the output (time in epoch) is:
iso.3.6.1.4.1.27654.3.3.1.1.11.114.101.116.117.114.110.95.100.97.116.101
(The suffix 11.114.101...101 is the length-prefixed ASCII encoding of the extend token "return_date".)
Anyone has any ideas how I can disable the caching for this OID?
Thanks in advance.
---EDIT---
According to this blog post, instead of disabling the caching one can use pass-persist scripts, which look more complex to implement at first glance. The Perl script I call is below:
#!/usr/bin/perl
# THIS SCRIPT RETURNS THE EPOCH TIME OF DAY IN MICROSECONDS
use strict;
use warnings;
use Time::HiRes qw(gettimeofday);

my ($s, $usec) = gettimeofday();
# Zero-pad microseconds to six digits; plain concatenation ($s.$usec)
# silently drops leading zeros and corrupts the value.
my $newtime = sprintf("%d%06d", $s, $usec);
print $newtime;
Can anyone help me convert this script to pass-persist and show what the snmpd.conf should look like?
I still want to check my bootloader + Linux startup code for an embedded device. Therefore I want to capture the time of every line printed to the serial port.
I know there are programs like PuTTY (which I can dearly recommend), getty, cutecom, picocom, screen, etc. But none of these adds timestamps to the incoming messages on the host screen (I'm not really talking about the date, more like how many ms have passed since the first output). It actually doesn't sound like a big deal.
I found one script that does what I want, called grabserial, but it doesn't work properly, since it's too slow to process the whole output. I discussed this problem in a different forum (if you want to know: grabserial problem, but it's not part of this topic). So I can't use that script.
So again: can you tell me a terminal program for Linux that adds timestamps to every line received from a serial port?
Thank you
[Edit:] I've found a pretty rough workaround with cereal, which needs some setup, since it locks the port every time you use it. In the end, it adds the actual date and time, not the startup time and the time difference between each step, so as you can see I'm still looking for an adequate solution.
This might come 3 years too late, but minicom (https://en.wikipedia.org/wiki/Minicom) supports timestamps for every line printed on the terminal. In Ubuntu it's directly available in the default repos.
You might want to look in to using strace to monitor the serial port. See How can I monitor data on a serial port in Linux?
If you are willing to build the binary yourself, you can try a forked picocom (https://github.com/codepox/picocom). It is based on picocom 1.7, which is a little old.
I have forked and enhanced this picocom and made it able to show either a delta-time or a wall-clock timestamp. You can find it here (https://github.com/tdwong/picocom-with-timestamp). You still have to build the binary yourself.
Here is how I use it. Note that N is the command to enable/toggle timestamps.
$ picocom -b 115200 /dev/ttyUSB0
...
<ctrl-a> N # enable delta-time timestamp
<ctrl-a> N # toggle wall-clock timestamp
<ctrl-a> N # disable timestamp
tio found at https://tio.github.io provides various timestamp options:
-t, --timestamp
Enable line timestamp.
--timestamp-format <format>
Set timestamp format to any of the following timestamp formats:
24hour 24-hour format ("hh:mm:ss.sss")
24hour-start 24-hour format relative to start time
24hour-delta 24-hour format relative to previous timestamp
iso8601 ISO8601 format ("YYYY-MM-DDThh:mm:ss.sss")
Default format is 24hour
In your case, to show how much time has passed since start, it would be something like this:
tio -t --timestamp-format 24hour-start /dev/ttyUSB0
Settings can also be enabled in tio configuration file ~/.tioconfig
I believe that ExtraPuTTY is the solution you are looking for.
However, it wasn't clear to me whether you wanted something to run ON Linux or just to be able to monitor it (SSH to Linux). If you didn't want a Windows solution, then I apologize.