I am using Intel Advisor 2018 (build 523188) on Linux CentOS 7.4 to profile a collection of benchmarks (I want to plot them all in a single Roofline plot), and I am using the command-line tool advixe-cl to collect the survey, trip-count, and FLOP information for each benchmark.
However, I cannot find a way to report the measured performance in FLOPs (per loop, per function, or even for the whole program) using the command-line interface. The documentation I am looking at is here: https://software.intel.com/en-us/advisor-help-lin-command-line-interface-reference, but it appears to be incomplete; e.g., the options -flops-and-masks and -no-trip-counts are not mentioned anywhere.
Do you know if there is any way to report the measured FLOPs via the command-line interface? Or do you know where I can find complete documentation for advixe-cl?
You first have to "collect" (i.e., profile) the FLOPS data, and then report it via a user interface.
To collect the data, follow https://software.intel.com/en-us/articles/intel-advisor-roofline (the better and newer article) or https://software.intel.com/en-us/intel-advisor-2017-user-guide-linux-running-roofline-analysis (older syntax).
To report/investigate the FLOP values/data, the recommended way is to launch the Intel Advisor GUI: $advixe-gui <project-dir>
Alternatively, you can report/explore the FLOP values using command-line reports ($advixe-cl -report survey <project-dir>); a minimal sketch of the full sequence is below.
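A rough sketch of the usual collect-then-report sequence, assuming the Advisor 2018 flag spellings (check advixe-cl -help collect and advixe-cl -help report for your version; later releases rename -flops-and-masks to -flop):

advixe-cl -collect survey -project-dir ./my_proj -- ./my_benchmark
advixe-cl -collect tripcounts -flops-and-masks -project-dir ./my_proj -- ./my_benchmark
advixe-cl -report survey -project-dir ./my_proj -format text -report-output ./survey_report.txt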
See the advixe-cl documentation at:
https://software.intel.com/en-us/intel-advisor-2017-user-guide-linux-using-intel-advisor-command-line-interface or https://software.intel.com/en-us/intel-advisor-2017-user-guide-linux-report
Using PyVISA on a computer running Ubuntu 20.04.5 LTS, I would like to interact with a VNA (E8361A, Agilent Technologies) in the following way:
1- Connect to the VNA through a port.
2- Send a signal to the VNA to start S-parameter measurements within a specified frequency range at a specified number of points.
3- Send a signal to the VNA to stop the S-parameter measurement, fetch the S-parameter data, transfer the data to the PC, and save the data.
Q1: Which VNA port do you recommend to use (GPIB, Ethernet or USB), and why?
Q2: Depending on the VNA port, what hardware is required to connect the PC to the VNA?
Q3: Is there a way to adjust the power level of the VNA stimulus signal? If yes, how do I query the maximum and minimum power levels? Can the power level be adjusted continuously, or are only discrete power levels available? (Basically, how do I control the VNA's internal amplifiers/attenuators?)
Q4: Can you please share sample Python code that uses PyVISA to save S-parameter data in the fashion described in steps 1-3?
Q5: Does the trigger port have anything to do with sending signals to the VNA to start and stop measurements?
Since I am not working on Ubuntu, nor do I have the machine you have, I can't say much. I am also rather new to PyVISA, so my knowledge there is limited, but since no one else has answered:
Q1. I use a USB port since it is the only one I have, but the faster the better, I would assume. There seem to be more GPIB examples on the internet, so maybe that is an easier starting point.
Q2. As with other setups (like mine), I would assume only a cable.
Q3. Yes, see the [write][1] function. It may look something like this:
# Connect the PNA to the computer using pyvisa's ResourceManager first; that part is well documented.
PNA.write(f':SOUR{channel_num}:POW:LEV:IMM:AMPL {source_level},"{source_name}"')
For the maximum and minimum values see the documentation of your hardware.
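On querying the limits: many SCPI instruments accept MIN/MAX in place of a numeric value, so something like the following may work (untested; same assumed channel and command as above):

p_max = float(PNA.query(f':SOUR{channel_num}:POW:LEV:IMM:AMPL? MAX'))
p_min = float(PNA.query(f':SOUR{channel_num}:POW:LEV:IMM:AMPL? MIN'))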
Q4. I don't know how to do your 'send a signal' step directly, but you could just while-loop and send-and-receive in each iteration until you want it to stop. Here is an example from Keysight:
https://edadocs.software.keysight.com/kkbopen/a-python-programming-example-for-the-pna-family-vnas-sweep-time-various-point-counts-no-error-correction-577935568.html
Q5. I don't know.
EDIT: On Q4: I found this [link][2], which I believe would result in
PNA.write('SENS:SWE:MODE CONT') for continuous sweeping and
PNA.write('SENS:SWE:MODE SING') for single sweeping.
In this case, change from CONT to SING when appropriate and it should stop sweeping.
[1]: https://pyvisa.readthedocs.io/en/latest/_modules/pyvisa/resources/messagebased.html#MessageBasedResource.write
[2]: https://rfmw.em.keysight.com/wireless/helpfiles/89600b/webhelp/Subsystems/gui/content/mnu_control_sweep.htm
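Putting the pieces above together, here is a rough end-to-end sketch for steps 1-3. It is untested; the resource address, channel number, and SCPI commands are assumptions based on Keysight PNA-family documentation, so check them against the E8361A programming guide:

import pyvisa

rm = pyvisa.ResourceManager()
# Replace with your instrument's address; rm.list_resources() shows what is connected.
vna = rm.open_resource('GPIB0::16::INSTR')

vna.write('SENS1:FREQ:STAR 1e9')           # start frequency (assumed 1 GHz)
vna.write('SENS1:FREQ:STOP 6e9')           # stop frequency
vna.write('SENS1:SWE:POIN 201')            # number of points
vna.write('SOUR1:POW:LEV:IMM:AMPL -10')    # stimulus power in dBm

vna.write('SENS1:SWE:MODE SING')           # run one sweep, then hold
vna.query('*OPC?')                         # block until the sweep completes

vna.write('FORM:DATA ASCII')               # transfer the data as ASCII
data = vna.query('CALC1:DATA? SDATA')      # complex S-parameter data of the active trace
with open('sparams.csv', 'w') as f:
    f.write(data)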
There are multiple systems involved in my application in order to get the proper response to the final system.
The image above shows that a request first goes from system 1 to system 2, and so on up to system n; the final response is then visible in system 1. I want to know how to get the net response time of one request from system 2 to system 3, from system 1 to system 2, and so on. I am a beginner in performance testing. Please let me know how we can achieve this.
Thank you so much!
APM tool integration at systems 1 through n (assuming your systems are supported by your APM tool), or log analysis of messages with timestamps indicating the start and completion of an event. As long as you have a unique correlating key in the logs, you can reconstruct timing records for the various event types, which are key to understanding performance in your n-tier model; a rough sketch of the log approach follows.
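For the log-analysis route, the idea looks roughly like this in Python (a hypothetical sketch; the log format, field order, and correlation key are made up for illustration):

from datetime import datetime

# Hypothetical log lines: timestamp, correlation id, system, event
logs = [
    "2023-01-01T10:00:00.000 req-42 system1 SENT",
    "2023-01-01T10:00:00.120 req-42 system2 RECEIVED",
    "2023-01-01T10:00:00.480 req-42 system3 RECEIVED",
]

events_by_key = {}
for line in logs:
    ts, key, system, _event = line.split()
    events_by_key.setdefault(key, []).append((system, datetime.fromisoformat(ts)))

# Net time for each hop = difference between consecutive timestamps sharing a key
for key, events in events_by_key.items():
    for (sys_a, t1), (sys_b, t2) in zip(events, events[1:]):
        print(f"{key}: {sys_a} -> {sys_b}: {(t2 - t1).total_seconds() * 1000:.0f} ms")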
I'm trying to set up my LabVIEW VI and my USB-6001 I/O box to read multiple independent voltages at once, while also outputting a single constant voltage.
I've successfully gotten my USB box to output the voltage I want while reading back a single voltage, but so far I've been unable to read back more than one voltage (and if I do, the two voltages seem to be co-dependent on one another in some way).
Here's a screenshot of my VI:
Everything to the right of the screenshot window should be unimportant to the question.
If anyone is curious, this is to drive multiple LVDTs and read back their respective voltages.
Thank you all for your help!
Look at your DAQ's manual, especially the pages I noted below.
http://www.ni.com/pdf/manuals/374259a.pdf
Page 11
All the AI channels are multiplexed, and the low-side reference can be switched (RSE vs. differential), so the two channels you're sampling require both of those to switch. It might be a settling issue, where the ADC takes a sample before the input value is stable.
To verify this, try using the same low side (differential or RSE) on both channels. Also try slowing down your sample rate (though your 1 kHz should already be slow enough...). One way to test this outside your VI is sketched below.
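Not LabVIEW, but if you want to sanity-check the settling theory outside your VI, here is a rough sketch using NI's Python nidaqmx package (the device and channel names are assumptions; substitute yours):

import nidaqmx
from nidaqmx.constants import TerminalConfiguration

with nidaqmx.Task() as task:
    # Same reference mode on both channels, so the mux only switches the high side
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", terminal_config=TerminalConfiguration.RSE)
    task.ai_channels.add_ai_voltage_chan("Dev1/ai1", terminal_config=TerminalConfiguration.RSE)
    task.timing.cfg_samp_clk_timing(rate=1000, samps_per_chan=100)
    data = task.read(number_of_samples_per_channel=100)  # [[ai0 samples], [ai1 samples]]
    print(data[0][:5], data[1][:5])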
Page 14
Check this to make sure you have everything connected and grounded correctly.
Page 18
Check this for more details about switching between 2 sources quickly.
Perhaps you could try it using the DAQmx Express VIs:
http://www.ni.com/tutorial/2744/en/
I am trying to measure certain hardware events on a (Intel Xeon) machine with multiple (physical) processors. Specifically, I wish to know how many requests are issued for reading 'offcore' data.
I found the OFFCORE_REQUESTS hardware event in Intel's documentation, which gives the event selector 0xB0 and, for data demands, the additional mask 0x01.
Would it then be correct to tell perf to record the event 0xB1 (i.e. 0xB0 | 0x01) and to call it as:
perf record -e r0B1 ./mytestapp someargs
Or is this incorrect?
Because perf report shows no output for events entered like this.
The perf documentation is rather sparse in this area, apart from a tutorial entry that says neither which event it uses (though that one works for me) nor how it is encoded...
Any help is greatly appreciated.
Ok, so I guess I figured it out.
For the Intel machine I use, the format is as follows:
<umask><eventselector>, where both are hexadecimal values. The leading zeros of the umask can be dropped, but not those of the event selector.
So for the event 0xB0 with the mask 0x01 I can call:
perf record -e r1B0 ./mytestapp someargs
I could not manage to find the exact parsing of it in the perf kernel code (any kernel hacker here?), but I found these sources:
A description of the use of perf with raw events in c't magazine 13/03 (subscription required), which describes some raw events along with their descriptions from the Intel Architecture Software Developer's Manual (Vol. 3B)
A patch on the kernel mailing list discussing the proper way to document it, which notes that the pattern above "was x86 specific and incomplete at that"
(Updated) The man page of newer versions shows an example on Intel machines: man perf-list
Update:
As pointed out in the comments (thank you!), the libpfm translator can be used to obtain the proper event descriptor. The website linked in the comments (Bojan Nikolic: "How to monitor the full range of CPU performance events"), discovered by user 'osgx', explains it in further detail.
It seems you can use as well:
perf record -e cpu/event=0xB1,umask=0x1/u ./mytestapp someargs
I don't know where this syntax is documented.
You can probably use the other arguments (edge, inv, cmask) as well.
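For example, an untested guess at the combined syntax (with cmask=1, the counter should increment on each cycle in which at least one such event occurred):

perf stat -e cpu/event=0xB0,umask=0x01,cmask=1/u ./mytestapp someargs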
There are several libraries that can be helpful for working with raw PMU events.
perf's own wiki https://perf.wiki.kernel.org/index.php/Tutorial#Events recommends the perf list --help man page for info about raw event encoding. Modern perf versions will also list raw events as part of the perf list output ("... if linked against the libpfm4 library, provides some short description of the events."), and perf list --details will additionally print the raw ids and masks of events.
Bojan Nikolic has a blog article, "How to monitor the full range of CPU performance events", about using the libpfm4 (perfmon2) library to encode raw events for perf with the help of the showevtinfo and check_events tools, which ship with the library.
There is also ocperf, a Python wrapper for perf that accepts Intel's event names. It is written by Andi Kleen (Intel Open Source Technology Center) as part of the pmu-tools set of utilities (LWN post from 2013; event lists published by Intel at https://download.01.org/perfmon/). There is a demo of ocperf (2011) at http://halobates.de/modern-pmus-yokohama.pdf:
ocperf
• Perf wrapper to support Intel-specific events
• Allows symbolic events and some additional events
ocperf record -a -e offcore_response.any_data.remote_dram_0 sleep 10
The PAPI library also has a tool to explore raw events with some descriptions: papi_native_avail.
I have an SNMP monitoring box and want to monitor interface utilisation on a clustered database server. I'm trying to work out the correct OID to monitor - I just need SNMP to return the total interface throughput at a given time.
The SNMP box is already configured and will correctly graph it. All the howtos I can find talk about setting up Cacti or MRTG, which is all well and good, but what I need seems simpler, yet I can't seem to find what I'm looking for. The SNMP box is already configured with the correct community name etc., so this should be a really easy one in theory.
Any help very gratefully received
Thanks
When you say "interface utilisation", I assume you mean Ethernet interface utilization. If that assumption is correct, there are a couple OIDs to investigate:
1.3.6.1.2.1.2.2.1.10 - ifInOctets returns the total number of octets received on the interface, including framing characters.
1.3.6.1.2.1.2.2.1.16 - ifOutOctets returns the total number of octets transmitted out of the interface, including framing characters.
1.3.6.1.2.1.31.1.1.1.6 - ifHCInOctets returns the total number of octets received on the interface, including framing characters (this is the 64-bit version of ifInOctets).
1.3.6.1.2.1.31.1.1.1.10 - ifHCOutOctets returns the total number of octets transmitted out of the interface, including framing characters (this is the 64-bit version of ifOutOctets).
Each OID is part of a table and will have an associated index that links it to an interface description (e.g., eth0 or br1).
These OIDs provide a count of octets received and transmitted, so they require a little massaging to get the utilization rates you desire. In the past, when I've monitored these OIDs, I've queried for two values a few seconds apart and then calculated the rate:
(QueryResult2 - QueryResult1) / (SecondsElapsed)
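In Python, the two-sample approach might look like this (a rough sketch; the host db-server, community public, and interface index 2 are placeholders, and counter wrap-around is ignored):

import subprocess, time

OID = '1.3.6.1.2.1.31.1.1.1.6.2'  # ifHCInOctets for ifIndex 2 (placeholder)

def octets():
    # -Ovq makes snmpget print just the value
    out = subprocess.check_output(
        ['snmpget', '-v2c', '-c', 'public', '-Ovq', 'db-server', OID],
        text=True)
    return int(out.split()[-1])

c1 = octets()
time.sleep(5)
c2 = octets()
print('inbound bytes/sec:', (c2 - c1) / 5)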
I would guess that Cacti (which I assume you're using, since you tagged your question with it) has some way to calculate rates from SNMP values; however, I've never used it, so I am not positive.
One other important note is that the default snmpd.conf included with CentOS may not have these OIDs enabled. If you run snmpwalk on 1.3.6.1.2.1.2 and 1.3.6.1.2.1.31 and receive empty results, edit /etc/snmp/snmpd.conf to configure the SNMP daemon to respond to those OIDs. I can't remember the exact syntax, but I think adding a line like
view all included .1
will enable all available OIDs on the server.
http://namhuy.net/908/how-to-install-iftop-bandwidth-monitoring-tool-in-rhel-centos-fedora.html
Requirements:
libpcap: provides user-level network packet capture and statistics.
libncurses: an API library that enables programmers to write text-based interfaces in a terminal.
gcc: the GNU Compiler Collection (GCC), a compiler system produced by the GNU Project supporting various programming languages.
Install libpcap, ncurses, and gcc via yum
yum -y install libpcap libpcap-devel ncurses ncurses-devel gcc
Download, build, and install iftop
wget http://www.ex-parrot.com/pdw/iftop/download/iftop-0.17.tar.gz
tar -xzf iftop-0.17.tar.gz
cd iftop-0.17
./configure
make
make install
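Once installed, point iftop at the interface you want to watch, e.g. (the interface name is an assumption; use yours):
iftop -i eth0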