cpupower utility on Linux - how to get a list of available frequencies

I am trying to get a list of available frequencies for my cpu using the cpupower tool.
I am executing the following command:
cpupower -c 0,1,4,5 frequency-info
This gives me a lot of information, but I need a list of the available frequencies to which I can set these CPUs.
On older versions of Fedora, I used to do this
$ cat /sys/devices/system/cpu/cpu3/cpufreq/scaling_available_frequencies
2201000 2200000 2100000 2000000 1800000 1700000 1600000 1500000 1300000 1200000 1100000
but on Fedora 20, cpufreq is obsolete. I googled and found that cpupower provides the same functionality as cpufreq.
How do you use it to get a list of available frequencies?

cpupower frequency-info
should give you that information. When the cpufreq driver exports a frequency table (e.g. acpi-cpufreq), the list appears on the "available frequency steps" line of the output; drivers such as intel_pstate do not expose a fixed list of steps.
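If your cpufreq driver still exposes a frequency table in sysfs (acpi-cpufreq does, intel_pstate does not), the list can also be read programmatically. A minimal C sketch, assuming that file is present on your system:

#include <stdio.h>

int main(void)
{
    /* This path exists only for drivers that export a frequency table. */
    const char *path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies";
    FILE *f = fopen(path, "r");
    char line[512];

    if (!f) {
        perror(path);
        return 1;
    }
    if (fgets(line, sizeof(line), f))
        printf("available frequencies (kHz): %s", line);
    fclose(f);
    return 0;
}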

Related

Measure LLC/L3 Cache Miss Rate on AMD Zen2 CPU

I have a question related to this one.
I want to (programmatically) measure L3 hits (accesses) and misses on an AMD EPYC 7742 CPU (Zen2). I run Linux kernel 5.4.0-66-generic on Ubuntu Server 20.04.2 LTS. According to the question linked above, the events rFF04 (L3LookupState) and r0106 (L3CombClstrState) should represent the L3 accesses and misses, respectively. Furthermore, kernel 5.4 should support these events.
However, when measuring it with perf, I run into issues. Similar to the question linked above, if I run numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, I only measure 0 values. If I try to use numactl -C 0 -m 0 perf stat -e instructions,cycles,amd_l3/r8001/,amd_l3/r0106/, perf complains about "unknown terms". If I use the perf event names, i.e. numactl -C 0 -m 0 perf stat -e instructions,cycles,l3_request_g1.caching_l3_cache_accesses, l3_comb_clstr_state.request_miss perf outputs <not supported> for these events.
Furthermore, I actually want to measure this using perf's C API. Currently, I set up a perf_event_attr with type PERF_TYPE_RAW and config set to, e.g., 0x8001. How do I get the amd_l3 PMU stuff into my perf_event_attr object? Otherwise, it would be equivalent to numactl -C 0 -m 0 perf stat -e instructions,cycles,r0106,rFF04 ./benchmark, which measures undefined values.
Thank you so much for your help.
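For the C-API part, one approach (a sketch, under the assumption that the amd_l3 PMU is what needs to be programmed directly): dynamic PMUs such as amd_l3 export their own type id in /sys/bus/event_source/devices/amd_l3/type, and that value goes into perf_event_attr.type in place of PERF_TYPE_RAW, with the raw event code staying in config. Roughly:

/* Sketch: use the amd_l3 PMU's dynamic type id from sysfs instead of
 * PERF_TYPE_RAW. The 0xFF04 event code is the one from the question. */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
    unsigned int l3_type;
    FILE *f = fopen("/sys/bus/event_source/devices/amd_l3/type", "r");
    if (!f || fscanf(f, "%u", &l3_type) != 1) {
        fprintf(stderr, "amd_l3 PMU not found\n");
        return 1;
    }
    fclose(f);

    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = l3_type;   /* dynamic PMU id, not PERF_TYPE_RAW */
    attr.config = 0xFF04;  /* raw event code taken from the question */
    attr.disabled = 1;

    /* Uncore-style PMUs are typically opened per CPU, not per task. */
    int fd = perf_event_open(&attr, -1, 0, -1, 0);
    if (fd < 0) { perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    /* ... run the benchmark here ... */
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    long long count = 0;
    read(fd, &count, sizeof(count));
    printf("counter value: %lld\n", count);
    close(fd);
    return 0;
}

Whether 0xFF04 is the right encoding for this PMU should be checked against the files in /sys/bus/event_source/devices/amd_l3/format/; the sketch only shows the plumbing.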

Can I increase Linux entropy by using rng-daemon without a hardware generator?

I want to continuously replenish /proc/sys/random/entropy_avail when it drops.
I first checked rngd (https://wiki.archlinux.org/index.php/Rng-tools).
It says /dev/random is very slow since it only collects entropy from device drivers and other (slow) sources, and I think that is why we use rngd.
It also says rngd mainly uses hardware random number generators (TRNGs), present in modern hardware like recent AMD/Intel processors, VIA Nano or even Raspberry Pi.
However, when I start rngd it says
[root@localhost init.d]# rngd
can't open entropy source(tpm or intel/amd rng)
Maybe RNG device modules are not loaded
But I don't have Intel RDRAND, as confirmed by cat /proc/cpuinfo | grep rdrand:
[root@localhost init.d]# cat /proc/cpuinfo | grep rdrand | wc -l
0
Are there any other entropy sources that I can use?
Alternatively, is it possible to write a script to increase /proc/sys/random/entropy_avail?
Try this:
sudo apt-get install haveged
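For the scripting part of the question: writing data to /dev/random mixes it into the pool but does not increase entropy_avail. Daemons like rngd and haveged credit entropy through the RNDADDENTROPY ioctl on /dev/random, which requires root. A sketch of that mechanism only, with placeholder data that must not be treated as real entropy:

/* Sketch of the RNDADDENTROPY mechanism used by rngd/haveged.
 * WARNING: the buffer below is NOT real randomness; in practice it must
 * come from a genuine entropy source. Requires root (CAP_SYS_ADMIN). */
#include <fcntl.h>
#include <linux/random.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define CHUNK 32  /* bytes credited per ioctl call */

int main(void)
{
    struct rand_pool_info *info = malloc(sizeof(*info) + CHUNK);
    int fd = open("/dev/random", O_WRONLY);

    if (!info || fd < 0) {
        perror("setup");
        return 1;
    }

    info->entropy_count = CHUNK * 8;   /* bits of entropy being credited */
    info->buf_size      = CHUNK;
    memset(info->buf, 0xA5, CHUNK);    /* placeholder, NOT random data */

    if (ioctl(fd, RNDADDENTROPY, info) < 0) {  /* bumps entropy_avail */
        perror("RNDADDENTROPY");
        return 1;
    }

    close(fd);
    free(info);
    return 0;
}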

Why does perf fail to collect any samples?

sudo perf top shows "Events: 0 cycles".
sudo perf record -ag sleep 10 shows
[ perf record: Woken up 1 time to write data ]
[ perf record: Captured and wrote 0.154 MB perf.data (~6725 samples) ]
However, sudo perf report shows "The perf.data file has no samples!". I also checked the recorded perf.data and confirmed there are no samples in it.
The system is "3.2.0-86-virtual #123-Ubuntu SMP Sun Jun 14 18:25:12 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux".
perf version 3.2.69
Inputs are appreciated.
There may be no real samples on an idle virtualized system (your Linux kernel version has a "-virtual" suffix), or there may be no access to the hardware counters (-e cycles) that are used by default.
Try to profile some real application, like:
echo '2^2345678%2'| sudo perf record /usr/bin/bc
Also check software counters like -e cpu-clock:
echo '2^2345678%2'| sudo perf record -e cpu-clock /usr/bin/bc
You may also try perf stat (or perf stat -d) with the same example to find out which basic counters are really incremented on your system:
echo '2^2345678%2'| sudo perf stat /usr/bin/bc
About "(~6725 samples)" output - perf record doesn't not count samples in its output, it just estimates their count but this estimation is always wrong. There is some fixed part of any perf.data file without any sample, it may use tens of kb in system-wide mode; and estimation incorrectly counts this part as containing some events of mean length.

How do you get the set of available CPUs in a Linux kernel module?

I would like to start one kernel thread per CPU with kthread_create()/kthread_bind(). However, I can't for the life of me figure out how to query the number of available CPUs. I did find the CPU_SET man page but that didn't help either.
Any thoughts?
You can use num_online_cpus() to get the number of available cpus. This may be different from things like nr_cpu_ids if the system was booted using a maxcpus setting that is not the same as the number of cpus in the system.
See the following files, cpuinfo.c and proc.c; they may help you. At line 143 you can see two functions for traversing CPUs, cpumask_first and cpumask_next. I think, by trial and error, you can find a solution.
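Putting the pieces from these answers together, a rough sketch of one bound kthread per online CPU (the worker body and names are illustrative, and a real module must keep the task_struct pointers and kthread_stop() them on exit):

/* Illustrative sketch: one bound kthread per online CPU. */
#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

static int my_worker(void *data)           /* hypothetical worker */
{
    while (!kthread_should_stop()) {
        /* per-CPU work goes here */
        msleep_interruptible(1000);        /* placeholder: sleep 1 s */
    }
    return 0;
}

static int __init my_init(void)
{
    int cpu;

    pr_info("online CPUs: %u\n", num_online_cpus());

    for_each_online_cpu(cpu) {
        struct task_struct *t =
            kthread_create(my_worker, NULL, "my_worker/%d", cpu);

        if (IS_ERR(t))
            continue;              /* real code should handle the error */
        kthread_bind(t, cpu);      /* pin the thread to this CPU */
        wake_up_process(t);
    }
    return 0;
}
module_init(my_init);

MODULE_LICENSE("GPL");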
You can use x86info. It is not installed by default (sudo apt-get install x86info on Ubuntu).
x86info | grep Found
Found 2 CPUs
another way is:
grep processor /proc/cpuinfo | wc -l
2
Is that what you are looking for?
If you're using a system that is Fedora Linux / RHEL / CentOS v6+ / Debian Linux v6+ you can use lscpu:
michael@test:~$ lscpu
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 37
Stepping: 5
CPU MHz: 1199.000
BogoMIPS: 5319.88
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
Particularly you might be interested in the -p option which gives you parseable output:
michael@test:~$ lscpu -p
# The following is the parsable format, which can be fed to other
# programs. Each different item in every column has an unique ID
# starting from zero.
# CPU,Core,Socket,Node,,L1d,L1i,L2,L3
0,0,0,,,0,0,0,0
1,0,0,,,0,0,0,0
2,1,0,,,1,1,1,0
3,1,0,,,1,1,1,0
$ nproc --all
4
--all prints the number of installed processors
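Since the question also mentions the CPU_SET man page: from user space the counts come from sysconf(), and the set of CPUs the process may actually run on comes from sched_getaffinity(). A small sketch (user-space code, not a kernel module):

/* Userspace sketch: CPU counts via sysconf() and the usable set via
 * sched_getaffinity() and the CPU_* macros from the CPU_SET man page. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;

    printf("configured CPUs: %ld\n", sysconf(_SC_NPROCESSORS_CONF));
    printf("online CPUs:     %ld\n", sysconf(_SC_NPROCESSORS_ONLN));

    /* CPUs this process is allowed to run on (may be a subset). */
    if (sched_getaffinity(0, sizeof(set), &set) == 0) {
        printf("usable CPUs:     %d\n", CPU_COUNT(&set));
        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &set))
                printf("  cpu%d\n", cpu);
    }
    return 0;
}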

What is the significance of the numbers in the name of the flush processes for newer linux kernels?

I am running kernel 2.6.33.7.
Previously, I was running v2.6.18.x. On 2.6.18, the flush processes were named pdflush.
After upgrading to 2.6.33.7, the flush processes have names of the form "flush-<n>:<m>".
For example, currently I see flush process "flush-8:32" popping up in top.
In doing a google search to try to determine an answer to this question, I saw examples of "flush-8:38", "flush-8:64" and "flush-253:0" just to name a few.
I understand what the flush process itself does; my question is: what is the significance of the numbers at the end of the process name? What do they represent?
Thanks
They are device numbers (major:minor), used to identify block devices. A kernel thread may be spawned to handle a particular device.
(On one of my systems, block devices are currently numbered as shown below. They may change from boot to boot or hotplug to hotplug.)
$ grep ^ /sys/class/block/*/dev
/sys/class/block/dm-0/dev:254:0
/sys/class/block/dm-1/dev:254:1
/sys/class/block/dm-2/dev:254:2
/sys/class/block/dm-3/dev:254:3
/sys/class/block/dm-4/dev:254:4
/sys/class/block/dm-5/dev:254:5
/sys/class/block/dm-6/dev:254:6
/sys/class/block/dm-7/dev:254:7
/sys/class/block/dm-8/dev:254:8
/sys/class/block/dm-9/dev:254:9
/sys/class/block/loop0/dev:7:0
/sys/class/block/loop1/dev:7:1
/sys/class/block/loop2/dev:7:2
/sys/class/block/loop3/dev:7:3
/sys/class/block/loop4/dev:7:4
/sys/class/block/loop5/dev:7:5
/sys/class/block/loop6/dev:7:6
/sys/class/block/loop7/dev:7:7
/sys/class/block/md0/dev:9:0
/sys/class/block/md1/dev:9:1
/sys/class/block/sda/dev:8:0
/sys/class/block/sda1/dev:8:1
/sys/class/block/sda2/dev:8:2
/sys/class/block/sdb/dev:8:16
/sys/class/block/sdb1/dev:8:17
/sys/class/block/sdb2/dev:8:18
/sys/class/block/sdc/dev:8:32
/sys/class/block/sdc1/dev:8:33
/sys/class/block/sdc2/dev:8:34
/sys/class/block/sdd/dev:8:48
/sys/class/block/sdd1/dev:8:49
/sys/class/block/sdd2/dev:8:50
/sys/class/block/sde/dev:8:64
/sys/class/block/sdf/dev:8:80
/sys/class/block/sdg/dev:8:96
/sys/class/block/sdh/dev:8:112
/sys/class/block/sdi/dev:8:128
/sys/class/block/sr0/dev:11:0
/sys/class/block/sr1/dev:11:1
/sys/class/block/sr2/dev:11:2
You should also be able to figure this out by searching for those numbers in /proc/self/mountinfo, e.g.:
$ grep 8:32 /proc/self/mountinfo
25 22 8:32 / /var rw,relatime - ext4 /dev/mapper/sysvg-var rw,barrier=1,data=ordered
This has the side benefit of working with nfs as well:
$ grep 0:73 /proc/self/mountinfo
108 42 0:73 /foo /mnt/foo rw,relatime - nfs host.domain.com:/volume/path rw, ...
Note, the data I included here is fabricated, but the mechanism works just fine.
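To go the other way, from a device node to the major:minor pair that appears in the flush thread's name, stat(2) works too; a small sketch (the /dev/sdc argument below is just an example):

/* Print the major:minor numbers of a block device node,
 * e.g. ./a.out /dev/sdc, which on the listing above would print 8:32. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor() */

int main(int argc, char **argv)
{
    struct stat st;

    if (argc != 2 || stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%u:%u\n", major(st.st_rdev), minor(st.st_rdev));
    return 0;
}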
