Why is cpu-cycles much lower than the current CPU frequency? - linux

My CPU's max frequency is 2.8 GHz and the frequency governor is set to performance, but perf reports cpu-cycles at only 0.105 GHz. Why?
The cpu-cycles event is 0x3c; is it CPU_CLK_UNHALTED.THREAD_P or CPU_CLK_THREAD_UNHALTED.REF_XCLK?
Can I read the PMC registers directly with perf?
The usage of CPU 8 currently reaches 90% according to mpstat:
CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
8 0.00 0.00 0.98 0.00 0.00 0.00 0.00 89.22 0.00 9.80
8 0.00 0.00 0.99 0.00 0.00 0.00 0.00 88.12 0.00 10.89
The CPU is an Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz:
processor : 8
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
stepping : 4
microcode : 0x428
cpu MHz : 2800.000
cache size : 25600 KB
I want to get some insight into CPU 8 with perf:
perf stat -C 8
Performance counter stats for 'CPU(s) 8':
8828.237941 task-clock (msec) # 1.000 CPUs utilized
11,550 context-switches # 0.001 M/sec
0 cpu-migrations # 0.000 K/sec
0 page-faults # 0.000 K/sec
926,167,840 cycles # 0.105 GHz
4,012,135,689 stalled-cycles-frontend # 433.20% frontend cycles idle
473,099,833 instructions # 0.51 insn per cycle
# 8.48 stalled cycles per insn
98,346,040 branches # 11.140 M/sec
1,254,592 branch-misses # 1.28% of all branches
8.828177754 seconds time elapsed
The cpu-cycles rate is only 0.105 GHz, which is really strange.
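The 0.105 GHz figure perf prints is simply the cycle count divided by the wall time of the measurement; a quick check of that arithmetic (assuming, as the output indicates, that task-clock is in milliseconds):

```shell
# Effective frequency as perf computes it: cycles / elapsed task-clock.
# 926,167,840 cycles over 8828.237941 ms; cycles per nanosecond = GHz.
awk 'BEGIN {
    cycles = 926167840
    msec   = 8828.237941
    printf "%.3f GHz\n", cycles / (msec * 1e6)
}'
```

That is only about 4% of the nominal 2.8 GHz, so for roughly 96% of the interval this counter was not advancing on CPU 8.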
I tried to understand what cpu-cycles means:
cat /sys/bus/event_source/devices/cpu/events/cpu-cycles
event=0x3c
I looked it up in the "Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3", Section 19.6, page 40.
I also checked the CPU frequency setting; the CPU should be running at its maximum frequency:
cat scaling_governor
performance
==============================================
I tried these commands:
taskset -c 8 stress --cpu 1
perf stat -C 8 sleep 10
Performance counter stats for 'CPU(s) 8':
10000.633899 task-clock (msec) # 1.000 CPUs utilized
1,823 context-switches # 0.182 K/sec
0 cpu-migrations # 0.000 K/sec
8 page-faults # 0.001 K/sec
29,792,267,638 cycles # 2.979 GHz
5,866,181,553 stalled-cycles-frontend # 19.69% frontend cycles idle
54,171,961,339 instructions # 1.82 insn per cycle
# 0.11 stalled cycles per insn
16,356,002,578 branches # 1635.497 M/sec
33,041,249 branch-misses # 0.20% of all branches
10.000592203 seconds time elapsed
Some details about my environment:
I run an application, call it 'A', in a virtual machine 'V' on a host 'H'.
The virtual machine is created by qemu-kvm.
The application receives packets from the network and processes them.

The cpu-cycles counter can be frozen because the CPU enters the C1 or C2 idle state: the event only counts cycles while the core is unhalted.

Related

How to keep the default events when using `perf stat` with custom events

When the perf stat command is used, many default events are measured. For example, when I run perf stat ls, I obtain the following output:
Performance counter stats for 'ls':
0,55 msec task-clock # 0,598 CPUs utilized
0 context-switches # 0,000 /sec
0 cpu-migrations # 0,000 /sec
99 page-faults # 179,071 K/sec
2 324 694 cycles # 4,205 GHz
1 851 372 instructions # 0,80 insn per cycle
357 918 branches # 647,403 M/sec
12 897 branch-misses # 3,60% of all branches
0,000923884 seconds time elapsed
0,000993000 seconds user
0,000000000 seconds sys
Now, let's suppose I also want to measure the cache-references and cache-misses events.
If I run perf stat -e cache-references,cache-misses, the output is:
Performance counter stats for 'ls':
101 148 cache-references
34 261 cache-misses # 33,872 % of all cache refs
0,000973384 seconds time elapsed
0,001014000 seconds user
0,000000000 seconds sys
Is there a way to add events with the -e flag while also keeping the default events that are shown when -e is not used, without having to list all of them explicitly in the command?

Linux perf record not generating any samples

I am trying to profile my userspace program on an Arria 10 FPGA board (with 2 ARM Cortex-A9 CPUs) which has PMU support. I am running Wind River Linux version 9.x. I built my kernel with almost all of the CONFIG_ options people suggested on the internet. Also, my program is compiled with -fno-omit-frame-pointer and -g.
What I see is that 'perf record' doesn't generate any samples at all. The 'perf stat true' output looks valid, though (not sure what to make of it). Does anyone have suggestions/ideas why I am not seeing any samples being generated?
~: perf record --call-graph dwarf -- my_app
^C
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.003 MB perf.data ]
~: perf report -g graph --no-children
Error:
The perf.data file has no samples!
To display the perf.data header info, please use --header/--header-only options.
~: perf stat true
Performance counter stats for 'true':
1.095300 task-clock (msec) # 0.526 CPUs utilized
0 context-switches # 0.000 K/sec
0 cpu-migrations # 0.000 K/sec
22 page-faults # 0.020 M/sec
1088056 cycles # 0.993 GHz
312708 instructions # 0.29 insn per cycle
29159 branches # 26.622 M/sec
16386 branch-misses # 56.20% of all branches
0.002082030 seconds time elapsed
I don't use a VM in this setup. Arria 10 is an Intel FPGA with 2 ARM CPUs that support a PMU.
Edit:
1. I realize now that the ARM CPU does have HW PMU support (contrary to what I mentioned earlier). Even with HW PMU support, I am not able to get 'perf record' to work.
This is an old question, but for people who find this via search:
perf record -e cpu-clock <command>
works for me. The problem seems to be that the default event (cycles) is not available.

Total CPU usage - multicore system

I am using Xen, and with xentop I get the total CPU usage as a percentage:
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
VM1 -----r 25724 299.4 3025244 12.0 20975616 83.4 12 1 14970253 27308358 1 3 146585 92257 10835706 9976308 0
As you can see above, the CPU usage is 299%, but how can I get the total CPU usage of a VM?
Top doesn't show me the total usage.
We usually see 100% CPU per core, so I guess there are at least 3 cores/CPUs.
Try this to count cores:
grep processor /proc/cpuinfo | wc -l
299% is the total cpu usage.
sar and mpstat are often used to display the CPU usage of a system. Check that the sysstat package is installed, and display total CPU usage with:
$ mpstat 1 1
Linux 2.6.32-5-amd64 (debian) 05/01/2016 _x86_64_ (8 CPU)
07:48:51 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
07:48:52 PM all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
Average: all 0.12 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.38
If you agree that CPU utilisation is (100 - %IDLE):
$ mpstat 1 1 | awk '/^Average/ {print 100-$NF,"%"}'
0.52 %

Perf does not support some performance events

I want to measure stalled cycles for my application using perf.
When I try: perf stat -B dd if=/dev/zero of=/dev/null count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB) copied, 0.218456 s, 2.3 GB/s
Performance counter stats for 'dd if=/dev/zero of=/dev/null count=1000000':
218.420011 task-clock # 0.995 CPUs utilized
25 context-switches # 0.000 M/sec
1 CPU-migrations # 0.000 M/sec
255 page-faults # 0.001 M/sec
821,183,099 cycles # 3.760 GHz
<not supported> stalled-cycles-frontend
<not supported> stalled-cycles-backend
1,526,427,190 instructions # 1.86 insns per cycle
292,281,624 branches # 1338.163 M/sec
1,013,837 branch-misses # 0.35% of all branches
0.219551862 seconds time elapsed
As you can see, I'm getting <not supported> for the stalled-cycles-* events. I couldn't find a solution or explanation for this online.
My kernel version is 3.2.0-59, perf version is 3.2.54, and my CPU is an i7-3770.

GNU parallel load balancing

I am trying to find a way to execute CPU-intensive parallel jobs over a cluster. My objective is to schedule one job per core, so that every job hopefully gets 100% CPU utilization once scheduled. This is what I have come up with so far:
FILE build_sshlogin.sh
#!/bin/bash
serverprefix="compute-0-"
lastserver=15
function worker {
server="$serverprefix$1";
free=$(ssh $server /bin/bash << 'EOF'
cores=$(grep "cpu MHz" /proc/cpuinfo | wc -l)
stat=$(head -n 1 /proc/stat)
work1=$(echo $stat | awk '{print $2+$3+$4;}')
total1=$(echo $stat | awk '{print $2+$3+$4+$5+$6+$7+$8;}')
sleep 2;
stat=$(head -n 1 /proc/stat)
work2=$(echo $stat | awk '{print $2+$3+$4;}')
total2=$(echo $stat | awk '{print $2+$3+$4+$5+$6+$7+$8;}')
util=$(echo " ( $work2 - $work1 ) / ($total2 - $total1) " | bc -l );
echo " $cores * (1 - $util) " | bc -l | xargs printf "%1.0f"
EOF
)
if [ $free -gt 0 ]
then
echo $free/$server
fi
}
export serverprefix
export -f worker
seq 0 $lastserver | parallel -k worker {}
This script is used by GNU parallel as follows:
parallel --sshloginfile <(./build_sshlogin.sh) --workdir $PWD command args {1} ::: $(seq $runs)
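The free-core estimate inside the heredoc can be checked in isolation. Here is the same arithmetic run on two captured /proc/stat "cpu" lines instead of live samples (the numbers are made up for illustration):

```shell
# Two snapshots of the aggregate "cpu" line from /proc/stat, taken 2s apart
# (fields after "cpu": user nice system idle iowait irq softirq).
stat1="cpu 100 0 50 850 0 0 0"
stat2="cpu 180 0 70 950 0 0 0"
cores=4

free=$(awk -v s1="$stat1" -v s2="$stat2" -v cores="$cores" 'BEGIN {
    split(s1, a); split(s2, b)
    work  = (b[2] + b[3] + b[4]) - (a[2] + a[3] + a[4])  # user+nice+system delta
    total = 0
    for (i = 2; i <= 8; i++) total += b[i] - a[i]        # delta of all 7 fields
    util = work / total                                   # fraction of time busy
    printf "%1.0f", cores * (1 - util)                    # estimated idle cores
}')
echo "$free"    # 2 with these numbers: util = 100/200 = 0.5
```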
The problem with this technique is that if someone starts another CPU-intensive job on a server in the cluster, without checking the CPU usage, then the script will end up scheduling jobs onto a core that is already in use. In addition, if the CPU usage has changed by the time the first job finishes, then the newly freed cores will not be included for scheduling by GNU parallel for the remaining jobs.
So my question is the following: Is there a way to make GNU parallel re-calculate the free cores/server before it schedules each job? Any other suggestions for solving the problem are welcome.
NOTE: In my cluster all cores have the same frequency. If someone can generalize to account for different frequencies, that's also welcome.
Look at --load, which is meant for exactly this situation.
Unfortunately it does not look at CPU utilization but at the load average. But if your cluster nodes do not have heavy disk I/O, then CPU utilization will be very close to the load average.
Since the load average changes slowly, you probably also need to use the new --delay option to give the load average time to rise.
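Putting the two options together, the invocation from the question might become something like this (a sketch only; `command args`, `$runs`, and `build_sshlogin.sh` are the question's own placeholders, and the thresholds are illustrative):

```shell
# Start a job on a node only while its load average is below 100%
# (one runnable process per core), and pause 5 seconds between job
# starts so the load average has time to react to earlier jobs.
parallel --load 100% --delay 5 \
         --sshloginfile <(./build_sshlogin.sh) --workdir "$PWD" \
         command args {1} ::: $(seq "$runs")
```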
Try mpstat
mpstat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db) 07/09/2011
10:25:32 PM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
10:25:32 PM all 5.68 0.00 0.49 2.03 0.01 0.02 0.00 91.77 146.55
That was an overall snapshot; for a per-core breakdown:
$ mpstat -P ALL
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db) 07/09/2011 _x86_64_ (4 CPU)
10:28:04 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle
10:28:04 PM all 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 99.99
10:28:04 PM 0 0.01 0.00 0.01 0.01 0.00 0.00 0.00 0.00 99.98
10:28:04 PM 1 0.00 0.00 0.01 0.00 0.00 0.00 0.00 0.00 99.98
10:28:04 PM 2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
10:28:04 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
There are lots of options; these two give the actual %idle per CPU. Check the manpage.
