isolcpus - binding not working - linux

Sorry, this is a duplicate question from ServerFault. I am posting this here since I didn't get any response there.
Question:
I am using isolcpus to isolate cores. I would like to bind specific threads to cores, but it is not working. The threads are moved to different cores after I bind them.
Cores 13, 14, and 15 are isolated:
$ cat /proc/cmdline
ro root=/dev/mapper/vg0-root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg0/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=137M@0M rd_NO_DM KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg0/root rhgb quiet audit=0 intel_idle.max_cstate=0 console=tty0 console=ttyS1,115200 printk.time=1 processor.max_cstate=1 idle=poll biosdevname=0 isolcpus=13-15
top -H -p $(pgrep -u prusr12 Ser) -d 1 shows this: 5017 and 5018 should have been bound to 14 and 15, and 5014 and 5016 should have been on 13.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
5017 prusr12 20 0 1312m 1.1g 1.1g R 99.9 0.9 9:53.93 5 Server-3.10.
5018 prusr12 20 0 1312m 1.1g 1.1g R 99.9 0.9 10:08.88 7 Server-3.10.
5014 prusr12 20 0 1312m 1.1g 1.1g S 0.0 0.9 0:00.40 2 Server-3.10.
5016 prusr12 20 0 1312m 1.1g 1.1g S 0.0 0.9 0:01.04 4 Server-3.10.
The command line is this:
sg devuser "taskset -c 13 /releases/3.10.0/bin/Server-3.10.0 -n X -e DEV -p DEFAULT > /logs/ServerDevPR_DEFAULT.out 2>&1 &"
There are 4 threads in the process. I want the main thread to start on 13, hence taskset -c 13. Then two threads are spawned and bound to 14 and 15. I see that the threads were bound to 14 and 15, but then they were moved to other cores. pthread_setaffinity_np() is being used to bind the threads to cores.
Log after I bind the threads to 14 and 15:
CpuSet returned by pthread_getaffinity_np() contained:CPU 14
CpuSet returned by pthread_getaffinity_np() contained:CPU 15
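For reference, the binding pattern described above looks roughly like this in C (a sketch, not the asker's actual Server code; core 14 is just an example):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int bind_self_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* pthread_setaffinity_np returns 0 on success, an errno value otherwise */
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return -1;
    /* read the mask back, as the log above does with pthread_getaffinity_np */
    CPU_ZERO(&set);
    if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return -1;
    return CPU_ISSET(core, &set) ? 0 : -1;
}

int main(void)
{
    if (bind_self_to_core(14) == 0)
        puts("bound to CPU 14");
    return 0;
}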
System details:
$ uname -a
Linux host123 2.6.32-573.12.1.el6.x86_64 #1 SMP Mon Nov 23 12:55:32 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Stepping: 2
CPU MHz: 3199.847
BogoMIPS: 6399.06
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0-7
NUMA node1 CPU(s): 8-15
What could be going wrong? Thanks for your time.

Related

What is using so much memory on an idle linux server? Comparing output of "htop" and "ps aux"

I am trying to understand and compare the output I see from htop (sorted by mem%) and "ps aux --sort=-%mem | grep query.jar" and determine why 24.2G out of 32.3G is in use on an idle server.
The ps command shows a single parent process (not a child process, I assume):
ps aux --sort=-%mem | grep query.jar
1000 67970 0.4 4.4 6721304 1452512 ? Sl 2020 163:55 java -Djava.security.egd=file:/dev/./urandom -Xmx700m -Xss256k -jar ./query.jar
Whereas htop shows PID 67970 as well as many other PIDs for query.jar below. I am trying to grasp what this means for memory usage. I also wonder if this has anything to do with open file handles.
I ran this command on the server to list open file descriptors: ls -la /proc/$$/fd
which produces this output (although I am not sure if this is showing me any issues):
total 0
lrwx------. 1 ziggy ziggy 64 Jan 2 09:14 0 -> /dev/pts/1
lrwx------. 1 ziggy ziggy 64 Jan 2 09:14 1 -> /dev/pts/1
lrwx------. 1 ziggy ziggy 64 Jan 2 09:14 2 -> /dev/pts/1
lrwx------. 1 ziggy ziggy 64 Jan 2 11:39 255 -> /dev/pts/1
lr-x------. 1 ziggy ziggy 64 Jan 2 09:14 3 -> /var/lib/sss/mc/passwd
Obviously, the mem% output in htop, if totaled, exceeds 100%, so I am guessing that despite the different PIDs, the repeated mem% values of 9.6 and 4.4 are not necessarily distinct allocations. Any clarification is appreciated here. I am trying to determine the best method to accurately report what is using 24.2 GB of memory on this server.
The complete output of the ps aux command is here, which shows me all the different PIDs using memory. Again, I am confused by how this output differs from htop.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 40268 0.2 9.5 3432116 3143516 ? Sl 2020 73:33 /usr/local/bin/node --max-http-header-size=65000 index.js
1000 67970 0.4 4.4 6721304 1452516 ? Sl 2020 164:05 java -Djava.security.egd=file:/dev/./urandom -Xmx700m -Xss256k -jar ./query.jar
root 86212 2.6 3.0 15208548 989928 ? Ssl 2020 194:18 dgraph alpha --my=dgraph-public:9080 --lru_mb 2048 --zero dgraph-public:5080
1000 68027 0.2 2.9 6295452 956516 ? Sl 2020 71:43 java -Djava.security.egd=file:/dev/./urandom -Xmx512m -Xss256k -jar ./build.jar
1000 88233 0.3 2.9 6415084 956096 ? Sl 2020 129:25 java -Djava.security.egd=file:/dev/./urandom -Xmx500m -Xss256k -jar ./management.jar
1000 66554 0.4 2.4 6369108 803632 ? SLl 2020 159:23 ./TranslationService thrift sisense-zookeeper.sisense:2181 S1
polkitd 27852 1.2 2.3 2111292 768376 ? Ssl 2020 417:24 mongod --config /data/configdb/mongod.conf --bind_ip_all
root 52493 3.3 2.3 8361444 768188 ? Ssl 2020 1107:53 /bin/prometheus --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries --storage.tsdb.retention.size=7G
B --config.file=/etc/prometheus/config_out/prometheus.env.yaml --storage.tsdb.path=/prometheus --storage.tsdb.retention.time=30d --web.enable-lifecycle --storage.tsdb.no-lockfile --web.external-url=http://sisense-prom-oper
ator-prom-prometheus.monitoring:9090 --web.route-prefix=/
1000 54574 0.0 1.9 901996 628900 ? Sl 2020 13:47 /usr/local/bin/node dist/index.js
root 78245 0.9 1.9 11755696 622940 ? Ssl 2020 325:03 /fluent-bit/bin/fluent-bit -c /fluent-bit/etc/fluent-bit.conf
root 5838 4.4 1.4 781420 484736 ? Ssl 2020 1488:26 kube-apiserver --advertise-address=10.1.17.71 --allow-privileged=true --anonymous-auth=True --apiserver-count=1 --authorization-mode=Node,RBAC --bind-addre
ss=0.0.0.0 --client-ca-file=/etc/kubernetes/ssl/ca.crt --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=False --enable-bootstrap-token-auth=true --endpoint-reconciler-type=lease --etcd-cafile=/etc/ssl
/etcd/ssl/ca.pem --etcd-certfile=/etc/ssl/etcd/ssl/node-dev-analytics-2.pem --etcd-keyfile=/etc/ssl/etcd/ssl/node-dev-analytics-2-key.pem --etcd-servers=https://10.1.17.71:2379 --insecure-port=0 --kubelet-client-certificat
e=/etc/kubernetes/ssl/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP --profiling=
False --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key --request-timeout=1m0s --requestheader-allowed-names=front-proxy-client --request
header-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --runtime-config
= --secure-port=6443 --service-account-key-file=/etc/kubernetes/ssl/sa.pub --service-cluster-ip-range=10.233.0.0/18 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/etc/kubernetes/ssl/apiserve
r.crt --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
1000 91921 0.1 1.2 7474852 415516 ? Sl 2020 41:04 java -Xmx4G -server -Dfile.encoding=UTF-8 -Djvmp -DEC2EC -cp /opt/sisense/jvmConnectors/jvmcontainer_1_1_0.jar com.sisense.container.launcher.ContainerLaunc
herApp /opt/sisense/jvmConnectors/connectors/ec2ec/com.sisense.connectors.Ec2ec.jar sisense-zookeeper.sisense:2181 connectors.sisense
1000 21035 0.3 0.8 2291908 290568 ? Ssl 2020 111:23 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /zookeeper-3.4.12/bin/../build/classes:/zookeeper-
3.4.12/bin/../build/lib/*.jar:/zookeeper-3.4.12/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper-3.4.12/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper-3.4.12/bin/../lib/netty-3.10.6.Final.jar:/zookeeper-3.4.12/bin/../lib/log4j-1
.2.17.jar:/zookeeper-3.4.12/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.12/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper-3.4.12/bin/../zookeeper-3.4.12.jar:/zookeeper-3.4.12/bin/../src/java/lib/*.jar:/conf: -XX:MaxRA
MFraction=2 -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMa
in /conf/zoo.cfg
1000 91955 0.1 0.8 7323208 269844 ? Sl 2020 40:40 java -Xmx4G -server -Dfile.encoding=UTF-8 -Djvmp -DGenericJDBC -cp /opt/sisense/jvmConnectors/jvmcontainer_1_1_0.jar com.sisense.container.launcher.Containe
rLauncherApp /opt/sisense/jvmConnectors/connectors/genericjdbc/com.sisense.connectors.GenericJDBC.jar sisense-zookeeper.sisense:2181 connectors.sisense
1000 92076 0.1 0.8 8302704 262772 ? Sl 2020 52:11 java -Xmx4G -server -Dfile.encoding=UTF-8 -Djvmp -Dsql -cp /opt/sisense/jvmConnectors/jvmcontainer_1_1_0.jar com.sisense.container.launcher.ContainerLaunche
rApp /opt/sisense/jvmConnectors/connectors/mssql/com.sisense.connectors.MsSql.jar sisense-zookeeper.sisense:2181 connectors.sisense
1000 91800 0.1 0.7 9667560 259928 ? Sl 2020 39:38 java -Xms128M -jar connectorService.jar jvmcontainer_1_1_0.jar /opt/sisense/jvmConnectors/connectors sisense-zookeeper.sisense:2181 connectors.sisense
1000 91937 0.1 0.7 7326312 253708 ? Sl 2020 40:14 java -Xmx4G -server -Dfile.encoding=UTF-8 -Djvmp -DExcel -cp /opt/sisense/jvmConnectors/jvmcontainer_1_1_0.jar com.sisense.container.launcher.ContainerLaunc
herApp /opt/sisense/jvmConnectors/connectors/excel/com.sisense.connectors.ExcelConnector.jar sisense-zookeeper.sisense:2181 connectors.sisense
1000 92085 0.1 0.7 7323660 244160 ? Sl 2020 39:53 java -Xmx4G -server -Dfile.encoding=UTF-8 -Djvmp -DSalesforceJDBC -cp /opt/sisense/jvmConnectors/jvmcontainer_1_1_0.jar com.sisense.container.launcher.Conta
inerLauncherApp /opt/sisense/jvmConnectors/connectors/salesforce/com.sisense.connectors.Salesforce.jar sisense-zookeeper.sisense:2181 connectors.sisense
1000 16326 0.1 0.7 3327260 243804 ? Sl 2020 12:21 /opt/sisense/monetdb/bin/mserver5 --zk_system_name=S1 --zk_address=sisense-zookeeper.sisense:2181 --external_server=ec-devcube-qry-10669921-96e0-4.sisense -
-instance_id=qry-10669921-96e0-4 --dbname=aDevCube --farmstate=Querying --dbfarm=/tmp/aDevCube_2020.12.28.16.46.23.280/dbfarm --set mapi_port 50000 --set gdk_nr_threads 4
100 64158 20.4 0.7 1381624 232548 ? Sl 2020 6786:08 /usr/local/lib/erlang/erts-11.0.3/bin/beam.smp -W w -K true -A 128 -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000
-stbt db -zdbbl 128000 -B i -- -root /usr/local/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa -noshell -noinput -s rabbit boot -boot start_sasl -lager crash_log false -lager handlers []
root 1324 11.3 0.7 5105748 231100 ? Ssl 2020 3773:15 /usr/bin/dockerd --iptables=false --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --dns 10.233.0.3 --dns 10.1.22.68 --dns 10.1.22.6
9 --dns-search default.svc.cluster.local --dns-search svc.cluster.local --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2
Adding more details:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:          31993       23150        2602        1677        6240        6772
Swap:             0           0           0
$ top -b -n1 -o "%MEM"|head -n 20
top - 13:46:18 up 23 days, 3:26, 3 users, load average: 2.26, 1.95, 2.10
Tasks: 2201 total, 1 running, 2199 sleeping, 1 stopped, 0 zombie
%Cpu(s): 4.4 us, 10.3 sy, 0.0 ni, 85.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32761536 total, 2639584 free, 23730688 used, 6391264 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 6910444 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
40268 1000 20 0 3439284 3.0g 8228 S 0.0 9.6 73:39.94 node
67970 1000 20 0 6721304 1.4g 7216 S 0.0 4.4 164:24.16 java
86212 root 20 0 14.5g 996184 13576 S 0.0 3.0 197:36.83 dgraph
68027 1000 20 0 6295452 956516 7256 S 0.0 2.9 71:52.15 java
88233 1000 20 0 6415084 956096 9556 S 0.0 2.9 129:40.80 java
66554 1000 20 0 6385500 803636 8184 S 0.0 2.5 159:42.44 TranslationServ
27852 polkitd 20 0 2111292 766860 11368 S 0.0 2.3 418:26.86 mongod
52493 root 20 0 8399864 724576 15980 S 0.0 2.2 1110:34 prometheus
54574 1000 20 0 905324 631708 7656 S 0.0 1.9 13:48.66 node
78245 root 20 0 11.2g 623028 1800 S 0.0 1.9 325:43.74 fluent-bit
5838 root 20 0 781420 477016 22944 S 7.7 1.5 1492:08 kube-apiserver
91921 1000 20 0 7474852 415516 3652 S 0.0 1.3 41:10.25 java
21035 1000 20 0 2291908 290484 3012 S 0.0 0.9 111:38.03 java
The primary difference between htop and ps aux is that htop shows each individual thread belonging to a process rather than the process only - this is similar to ps auxm. Using the htop interactive command H, you can hide threads to get to a list that more closely corresponds to ps aux.
In terms of memory usage, those additional entries representing individual threads do not affect the actual memory usage total because threads share the address space of the associated process.
RSS (resident set size) in general is problematic because it does not adequately represent shared pages (due to shared memory or copy-on-write) for your purpose - the sum can be higher than expected in those cases. You can use smem -t to get a better picture with the PSS (proportional set size) column. Based on the facts you provided, that is not your issue, though.
In your case, it might make sense to dig deeper via smem -tw to get a memory usage breakdown that includes (non-cache) kernel resources. /proc/meminfo provides further details.
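If smem is not available, here is a minimal sketch of reading the PSS of a single process directly from /proc/<pid>/smaps_rollup (this file assumes kernel 4.14 or later; on older kernels you would sum the Pss: lines of /proc/<pid>/smaps instead):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long pss_kb(pid_t pid)
{
    char path[64], line[256];
    long kb = -1;
    snprintf(path, sizeof(path), "/proc/%d/smaps_rollup", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    /* the rollup contains a line like "Pss:      1234 kB" */
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            break;
    fclose(f);
    return kb;
}

int main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();
    printf("PSS of pid %d: %ld kB\n", (int)pid, pss_kb(pid));
    return 0;
}

Summing this value across processes avoids the RSS double-count, because PSS divides each shared page among the processes sharing it.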

Run program on boot with initramfs

I'm running uClinux on a SmartFusion2 as part of a university team building a small cube satellite. However, I'm not very experienced with the Linux kernel, and this issue has had me stumped for a few days. I'm trying to get the SmartFusion to run a program on bootup. Currently, the only .uImage that does this is the test 'hello' file. I'm trying to recreate the process for another program, but am running into some difficulties.
In my hello directory I have the following files: hello.busybox, hello.kernel.M2S, help.txt, hello.uImage, Makefile, hello.initramfs, hello (directory)
In the hello subdirectory (projects/hello/hello):
hello (executable), hello.c, hello.gdb, hello.h, hello.o, Makefile
To try to get the uImage to boot and run a different program, I made a copy of my projects/hello/hello directory and renamed it 'goodbye', with a few minor changes in the .h and .c files for testing purposes. Now I'm trying to get the executable 'hello' in projects/hello/goodbye to run on boot.
My initramfs file originally looked like this:
# This is a very simple, default initramfs
dir /dev 0755 0 0
nod /dev/console 0600 0 0 c 5 1
nod /dev/tty 0666 0 0 c 5 0
nod /dev/null 0600 0 0 c 1 3
nod /dev/mem 0600 0 0 c 1 1
nod /dev/kmem 0600 0 0 c 1 2
nod /dev/zero 0600 0 0 c 1 5
nod /dev/random 0600 0 0 c 1 8
nod /dev/urandom 0600 0 0 c 1 9
dir /dev/pts 0755 0 0
nod /dev/ptmx 0666 0 0 c 5 2
nod /dev/ttyS0 0666 0 0 c 4 64
nod /dev/ttyS1 0666 0 0 c 4 65
nod /dev/ttyS2 0666 0 0 c 4 66
nod /dev/ttyS3 0666 0 0 c 4 67
nod /dev/ttyS4 0666 0 0 c 4 68
nod /dev/ttyS5 0666 0 0 c 4 69
dir /bin 755 0 0
dir /proc 755 0 0
file /bin/hello ${INSTALL_ROOT}/projects/${SAMPLE}/hello/hello 755 0 0
slink /bin/init hello 777 0 0
I changed the last two lines of the initramfs to read as follows:
file /bin/hello ${INSTALL_ROOT}/projects/${SAMPLE}/hello/goodbye 755 0 0
slink /bin/init hello 777 0 0
But when I try to boot the SmartFusion2 after remaking the uImage, I get this, with the error at the bottom:
Starting kernel ...
Linux version 2.6.33-arm1 (ecenstudent@EE10308) (gcc version 4.4.1 (Sourcery G++ Lite 2010q1-189) ) #38 Thu May 25 09:09:08 MDT 2017
CPU: ARMv7-M Processor [412fc231] revision 1 (ARMv7M)
CPU: NO data cache, 8K instruction cache
Machine: Microsemi M2S
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16256
Kernel command line: m2s_platform=m2s-fg484-som console=ttyS0,115200 panic=10 ip=10.2.118.102:10.2.118.101:192.168.0.1::m2s-fg484-som:eth0:off ethaddr=3C:FB:96:05:00:53
PID hash table entries: 256 (order: -2, 1024 bytes)
Dentry cache hash table entries: 8192 (order: 3, 32768 bytes)
Inode-cache hash table entries: 4096 (order: 2, 16384 bytes)
Memory: 64MB = 64MB total
Memory: 64408k/64408k available, 1128k reserved, 0K highmem
Virtual kernel memory layout:
vector : 0x00000000 - 0x00001000 ( 4 kB)
fixmap : 0xfff00000 - 0xfffe0000 ( 896 kB)
vmalloc : 0x00000000 - 0xffffffff (4095 MB)
lowmem : 0xa0000000 - 0xa4000000 ( 64 MB)
modules : 0xa0000000 - 0x01000000 (1552 MB)
.init : 0xa0008000 - 0xa0012000 ( 40 kB)
.text : 0xa0074bc0 - 0xa0083000 ( 58 kB)
.data : 0xa0084000 - 0xa008cce0 ( 36 kB)
Hierarchical RCU implementation.
NR_IRQS:83
Calibrating delay loop... 132.30 BogoMIPS (lpj=661504)
Mount-cache hash table entries: 512
Switching to clocksource mss_timer2
Serial: 8250/16550 driver, 2 ports, IRQ sharing disabled
serial8250.0: ttyS0 at MMIO 0x40000000 (irq = 10) is a 16550A
console [ttyS0] enabled
serial8250.1: ttyS1 at MMIO 0x40010000 (irq = 11) is a 16550A
Freeing init memory: 40K
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Backtrace: no frame pointer
Rebooting in 10 seconds..
Can somebody help explain why this is happening and what I need to do to my initramfs to make it run the proper program on boot? Thanks!!
As it turns out, I was confused about how those two lines worked: in the file directive, the first argument is the destination inside the initramfs and the second is the source path on the build host. My edit pointed at the goodbye directory itself rather than the hello executable inside it, so no valid /bin/hello (the target of the /bin/init symlink) made it into the image, hence the "No init found" panic. When I finally figured that out, the lines looked like this:
file /bin/hello ${INSTALL_ROOT}/projects/${SAMPLE}/goodbye/hello 755 0 0
slink /bin/init hello 777 0 0
Then it worked as desired, and I was able to apply the same change to other uImages.

how to enable linux perf tool's branch sampling

I use the Linux perf tool to collect branch info from programs; the command and result are as follows:
$ sudo perf record -b /bin/ls
Error:
No hardware sampling interrupt available.
No APIC? If so then you can boot the kernel with the "lapic" boot parameter to force-enable it.
The content of /proc/cpuinfo is below:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5405 @ 2.00GHz
stepping : 10
microcode : 0xa07
cpu MHz : 1994.921
cache size : 6144 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 **apic** sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
pbe syscall nx lm constant_tsc arch_perfmon pebs **bts** rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx tm2 ssse3 cx16 xtpr pdcm
dca sse4_1 xsave lahf_lm dtherm tpr_shadow vnmi flexpriority
bugs :
bogomips : 3989.84
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
apic and bts in the flags entry are highlighted (the "**" markers are mine) and I don't know what else is important for this case. The other 7 processors are the same as processor 0.
I added the boot parameter "lapic" by modifying /boot/grub/grub.cfg:
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-0ed8a872-4eb7-4339-a0bb-6c0033da582e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 ced80bc6-08a9-4909-9717-97658cf0c4fd
else
search --no-floppy --fs-uuid --set=root ced80bc6-08a9-4909-9717-97658cf0c4fd
fi
linux /vmlinuz-4.2.0-42-generic root=/dev/mapper/fedora_hustyong-root ro **lapic** quiet splash $vt_handoff
initrd /initrd.img-4.2.0-42-generic
}
I just added lapic to the linux line.
But it had no effect after rebooting.
My questions:
1) What does the error info mean?
2) Does perf's branch sampling use Intel Branch Trace Store (BTS) or Last Branch Record (LBR)?
3) How can I check for LBR support?
4) What is the difference in LBR and BTS support between 32-bit and 64-bit x86?
My OS is Ubuntu 14.04 64bit:
$ uname -a
Linux user-S5000VSA 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
The perf install instructions:
$ sudo apt-get install linux-tools-common
$ sudo apt-get install linux-tools-4.2.0-27-generic linux-cloud-tools-4.2.0-27-generic
Update:
The content of /proc/interrupts:
CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
0: 127780 52084 127784 126729 127706 128431 127785 126822 IO-APIC 2-edge timer
1: 52 42 3 2 62 49 5 2 IO-APIC 1-edge i8042
8: 0 0 0 0 0 0 0 1 IO-APIC 8-edge rtc0
9: 0 0 0 0 0 0 0 0 IO-APIC 9-fasteoi acpi
12: 1428 1307 52 47 1424 1324 53 58 IO-APIC 12-edge i8042
14: 0 0 0 0 0 0 0 0 IO-APIC 14-edge ata_piix
15: 0 0 0 0 0 0 0 0 IO-APIC 15-edge ata_piix
17: 47 276 1004 49 52 295 993 50 IO-APIC 17-fasteoi radeon
20: 31062 4201 7533 29935 31080 4297 7540 29824 IO-APIC 20-fasteoi ata_piix
22: 0 0 0 0 0 0 0 0 IO-APIC 22-fasteoi uhci_hcd:usb3, uhci_hcd:usb5
23: 0 0 0 0 0 0 0 0 IO-APIC 23-fasteoi ehci_hcd:usb1, uhci_hcd:usb2, uhci_hcd:usb4
25: 2 755654 3 3 1 1 3 6 PCI-MSI 2621440-edge eth0
27: 0 0 0 0 0 0 0 1 PCI-MSI 131072-edge ioat-msi
NMI: 6756 678 6894 5867 861 2168 4994 3700 Non-maskable interrupts
LOC: 343554 578094 1736638 773135 219952 777567 1459249 689292 Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
PMI: 6756 678 6894 5867 861 2168 4994 3700 Performance monitoring interrupts
IWI: 6756 678 6894 5867 861 2168 4994 3700 IRQ work interrupts
RTR: 0 0 0 0 0 0 0 0 APIC ICR read retries
RES: 82594 294601 142535 259797 77845 316210 84927 261455 Rescheduling interrupts
CAL: 4749 9296 7358 31330 7560 8564 5751 20364 Function call interrupts
TLB: 5933 2044 12867 11215 6563 4682 8669 8272 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
DFR: 0 0 0 0 0 0 0 0 Deferred Error APIC interrupts
MCE: 0 0 0 0 0 0 0 0 Machine check exceptions
MCP: 292 292 292 292 292 292 292 292 Machine check polls
HYP: 0 0 0 0 0 0 0 0 Hypervisor callback interrupts
ERR: 0
MIS: 0
PIN: 0 0 0 0 0 0 0 0 Posted-interrupt notification event
PIW: 0 0 0 0 0 0 0 0 Posted-interrupt wakeup event
I installed Ubuntu 16.10 64-bit on my PC and ran perf record -b successfully. I think something may be wrong in the kernel or in the linux-tools-4.2.0-27-generic / linux-cloud-tools-4.2.0-27-generic packages.
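For anyone who wants to probe this from code rather than through perf: branch sampling is requested via the perf_event_open(2) syscall with PERF_SAMPLE_BRANCH_STACK set, and the kernel rejects the call on hardware or kernels without LBR/BTS support. A minimal probe sketch (assumes <linux/perf_event.h> is installed; exact errno values vary by kernel version):

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
    attr.sample_period = 100000;
    attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
    attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY; /* any taken branch */
    attr.exclude_kernel = 1;

    /* glibc has no wrapper for this syscall, so call it directly */
    int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open with PERF_SAMPLE_BRANCH_STACK");
        return 1;
    }
    puts("branch sampling appears to be supported on this machine");
    close(fd);
    return 0;
}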

How to check what is causing performance issue

I have inherited a server with some performance issues. It is running Node.js, nginx, and a basic MEAN stack. (The DB is on another server, though.)
Whenever I copy a file (a log file around 150 MB in size) or open a file of that size in vim, the output of "iostat -x 1" looks like below:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 8137.62 0.00 49.50 0.00 29924.75 604.48 17.32 123.54 16.50 81.68
avg-cpu: %user %nice %system %iowait %steal %idle
1.59 0.00 24.34 0.00 0.00 74.07
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 39.39 0.00 36606.06 929.23 2.42 351.64 1.87 7.37
avg-cpu: %user %nice %system %iowait %steal %idle
2.78 0.00 24.44 0.00 0.00 72.78
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
scd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
The main reason I am bringing this up is that sometimes a simple RESTful API served by my Node.js app responds slowly (from 10ms to 500ms), and I am not sure what to look at or check to find the cause.
The same codebase copied to another server runs smoothly without issues; the problem mentioned above is the only lead I can find that something might be wrong with this server, but I am not sure what it is.
The code is like below:
In one file statistic/index.js:
var tracker = function (data) {
piwik.tracker(data);
};
exports.tracker = tracker;
In another file statistic/piwik.js:
exports.tracker = function (data) {
var params = {};
/** Assign params with data - just static string **/
/** API_URL is another machine in same LAN **/
needle.post(API_URL, params, function (err, response, body) {
if (err || (response.statusCode !== 200 && response.statusCode !== 204)) {
util.error(err);
}
});
};
In the file that calls the above, route/getuser.js:
exports.getUser = function (req, res) {
async.auto({
get_user: function (cb) {
/** Read user data from DB **/
cb();
},
record_statistic: ['get_user', function (cb) {
statistic.tracker({ /** Pass static string data **/});
cb();
}]
}, function (err) {
if (err) {
res.json(err);
} else {
res.json();
}
});
};
The reason I am mentioning the above code is that when I comment out statistic.tracker({ /** Pass static string data **/});, the function responds within 50ms, but if I include it, most of the time it responds in 100ms - 500ms. I have even put a timestamp check around the "needle" HTTP post, and it responds within 10 - 20ms.
When I copy a file (cp -p x.txt y.txt), especially one larger than 100MB, it also slows down my Node.js. But even when I am doing nothing on the server besides having nginx and Node.js listening for requests, the code above responds slowly (if I don't comment out that line).
I suspect I/O, but where else should I check, and what should I look out for?
Below is some info about the server:
[ec2-user@tlp-backend logs]$ uname -a
Linux tlp-backend 2.6.32-71.el6.x86_64 #1 SMP Fri May 20 03:51:51 BST 2011 x86_64 x86_64 x86_64 GNU/Linux
[ec2-user@tlp-backend logs]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: NECVMWar Model: VMware IDE CDR10 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: VMware, Model: VMware Virtual S Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
[ec2-user@tlp-backend logs]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30502
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[ec2-user@tlp-backend logs]$ more /proc/meminfo
MemTotal: 3918960 kB
MemFree: 392260 kB
Buffers: 296116 kB
Cached: 1205652 kB
SwapCached: 364 kB
Active: 1725084 kB
Inactive: 1155564 kB
Active(anon): 949492 kB
Inactive(anon): 430528 kB
Active(file): 775592 kB
Inactive(file): 725036 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 4095992 kB
SwapFree: 4092872 kB
Dirty: 28 kB
Writeback: 0 kB
AnonPages: 1378528 kB
Mapped: 29860 kB
Shmem: 1140 kB
Slab: 588628 kB
SReclaimable: 461108 kB
SUnreclaim: 127520 kB
KernelStack: 2296 kB
PageTables: 16940 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 6055472 kB
Committed_AS: 1829320 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 288456 kB
VmallocChunk: 34359446140 kB
HardwareCorrupted: 0 kB
AnonHugePages: 507904 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 10240 kB
DirectMap2M: 4184064 kB
[ec2-user@tlp-backend logs]$ more /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 2
cpu MHz : 2400.000
cache size : 12288 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch
_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_
spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lah
f_lm ida arat
bogomips : 4800.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 2
cpu MHz : 2400.000
cache size : 12288 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch
_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_
spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lah
f_lm ida arat
bogomips : 4800.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 2
cpu MHz : 2400.000
cache size : 12288 KB
physical id : 1
siblings : 2
core id : 0
cpu cores : 2
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch
_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_
spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lah
f_lm ida arat
bogomips : 4800.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
stepping : 2
cpu MHz : 2400.000
cache size : 12288 KB
physical id : 1
siblings : 2
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch
_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_
spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lah
f_lm ida arat
bogomips : 4800.00
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
top - 18:02:58 up 26 days, 18:49, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 176 total, 1 running, 175 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3918960k total, 3522368k used, 396592k free, 295340k buffers
Swap: 4095992k total, 3120k used, 4092872k free, 1201660k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 19340 1248 1040 S 0.0 0.0 0:03.45 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 0:02.55 migration/0
4 root 20 0 0 0 0 S 0.0 0.0 0:01.57 ksoftirqd/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
6 root RT 0 0 0 0 S 0.0 0.0 0:18.87 migration/1
7 root 20 0 0 0 0 S 0.0 0.0 0:01.34 ksoftirqd/1
8 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
9 root RT 0 0 0 0 S 0.0 0.0 0:01.90 migration/2
10 root 20 0 0 0 0 S 0.0 0.0 0:01.30 ksoftirqd/2
11 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
Apparently, the main problem is coming from the code itself. When you mix async.auto with the needle package, you need to explicitly set { connection: 'keep-alive' } in the HTTP headers.
See more info here: https://github.com/tomas/needle/issues/148

How can I see which CPU core a thread is running in?

In Linux, supposing a thread's pid is [pid], from the directory /proc/[pid] we can get a lot of useful information. For example, the proc files /proc/[pid]/status, /proc/[pid]/stat and /proc/[pid]/schedstat are all useful. But how can I get the number of the CPU core that a thread is running on? If a thread is in the sleep state, how can I know which core it will run on after it is scheduled again?
BTW, is there a way to dump the process(thread) list of running and sleeping tasks for each CPU core?
The "top" command may help towards this, it does not have CPU-grouped list of threads but rather you can see the list of threads (probably for a single process) and which CPU cores the threads are running on by
top -H -p {PROC_ID}
then pressing f to go into field selection, j to enable the CPU core column, and Enter to display.
The answer below is no longer accurate as of 2014
Tasks don't sleep in any particular core. And the scheduler won't know ahead of time which core it will run a thread on because that will depend on future usage of those cores.
To get the information you want, look in /proc/<pid>/task/<tid>/stat. The third field will be an 'R' if the thread is running. The sixth-from-last field will be the core the thread is currently running on, or the core it last ran on (or was migrated to) if it's not currently running.
31466 (bc) S 31348 31466 31348 34819 31466 4202496 2557 0 0 0 5006 16 0 0 20 0 1 0 10196934 121827328 1091 18446744073709551615 4194304 4271839 140737264235072 140737264232056 217976807456 0 0 0 137912326 18446744071581662243 0 0 17 3 0 0 0 0 0
Not currently running. Last ran on core 3.
31466 (bc) R 31348 31466 31348 34819 31466 4202496 2557 0 0 0 3818 12 0 0 20 0 1 0 10196934 121827328 1091 18446744073709551615 4194304 4271839 140737264235072 140737264231824 4235516 0 0 0 2 0 0 0 17 2 0 0 0 0 0
Currently running on core 2.
To see what the rest of the fields mean, have a look at the Linux kernel source -- specifically the do_task_stat function in fs/proc/array.c, or Documentation/filesystems/proc.txt.
Note that all of this information may be obsolete by the time you get it. It was true at some point between when you made the open call on the file in proc and when that call returned.
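A minimal C sketch of that parsing, skipping past the ')' so a comm containing spaces cannot shift the field count (field numbering per proc(5); verify against your own kernel's documentation):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int last_cpu(const char *pid, const char *tid)
{
    char path[128], buf[2048];
    snprintf(path, sizeof(path), "/proc/%s/task/%s/stat", pid, tid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, sizeof(buf) - 1, f);
    fclose(f);
    buf[n] = '\0';
    char *p = strrchr(buf, ')');   /* comm may contain spaces; skip past it */
    if (!p)
        return -1;
    int field = 2;                 /* the ')' closes field 2 (comm) */
    for (char *tok = strtok(p + 1, " "); tok; tok = strtok(NULL, " "))
        if (++field == 39)         /* "processor" is field 39 per proc(5) */
            return atoi(tok);
    return -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s pid tid\n", argv[0]);
        return 1;
    }
    printf("thread %s last ran on CPU %d\n", argv[2], last_cpu(argv[1], argv[2]));
    return 0;
}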
You can also use ps, something like this:
ps -mo pid,tid,%cpu,psr -p `pgrep BINARY-NAME`
Threads are not bound to one particular core (unless you pin them). Therefore, to watch the core switching continuously, you can use (a modified version of Dmitry's answer):
watch -tdn0.5 ps -mo pid,tid,%cpu,psr -p `pgrep BINARY-NAME`
For example:
watch -tdn0.5 ps -mo pid,tid,%cpu,psr -p `pgrep firefox`
This can be done with the top command. The default top output does not show these details; to view them, press the f key while in the top interface, then press j followed by Enter. The output will then show, for each process, which processor it is running on. A sample output is shown below.
top - 04:24:03 up 96 days, 13:41, 1 user, load average: 0.11, 0.14, 0.15
Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie
Cpu(s): 7.1%us, 0.2%sy, 0.0%ni, 88.4%id, 0.1%wa, 0.0%hi, 0.0%si, 4.2%st
Mem: 1011048k total, 950984k used, 60064k free, 9320k buffers
Swap: 524284k total, 113160k used, 411124k free, 96420k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND
12426 nginx 20 0 345m 47m 29m S 77.6 4.8 40:24.92 7 php-fpm
6685 mysql 20 0 3633m 34m 2932 S 4.3 3.5 63:12.91 4 mysqld
19014 root 20 0 15084 1188 856 R 1.3 0.1 0:01.20 4 top
9 root 20 0 0 0 0 S 1.0 0.0 129:42.53 1 rcu_sched
6349 memcache 20 0 355m 12m 224 S 0.3 1.2 9:34.82 6 memcached
1 root 20 0 19404 212 36 S 0.0 0.0 0:20.64 3 init
2 root 20 0 0 0 0 S 0.0 0.0 0:30.02 4 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:12.45 0 ksoftirqd/0
The P column in the output shows the number of the processor core where the process is currently executing. Monitoring this for a few minutes will show that a PID switches processor cores over time. You can also verify that a PID for which you have set affinity runs only on that particular core.
top f navigation screen (a live system example):
Fields Management for window 1:Def, whose current sort field is forest view
Navigate with Up/Dn, Right selects for move then <Enter> or Left commits,
'd' or <Space> toggles display, 's' sets sort. Use 'q' or <Esc> to end!
* PID = Process Id
* USER = Effective User Name
* PR = Priority
* NI = Nice Value
* VIRT = Virtual Image (KiB)
* RES = Resident Size (KiB)
* SHR = Shared Memory (KiB)
* S = Process Status
* %CPU = CPU Usage
* %MEM = Memory Usage (RES)
* TIME+ = CPU Time, hundredths
* COMMAND = Command Name/Line
PPID = Parent Process pid
UID = Effective User Id
RUID = Real User Id
RUSER = Real User Name
SUID = Saved User Id
SUSER = Saved User Name
GID = Group Id
GROUP = Group Name
PGRP = Process Group Id
TTY = Controlling Tty
TPGID = Tty Process Grp Id
SID = Session Id
nTH = Number of Threads
* P = Last Used Cpu (SMP)
TIME = CPU Time
SWAP = Swapped Size (KiB)
CODE = Code Size (KiB)
DATA = Data+Stack (KiB)
nMaj = Major Page Faults
nMin = Minor Page Faults
nDRT = Dirty Pages Count
WCHAN = Sleeping in Function
Flags = Task Flags <sched.h>
CGROUPS = Control Groups
SUPGIDS = Supp Groups IDs
SUPGRPS = Supp Groups Names
TGID = Thread Group Id
ENVIRON = Environment vars
vMj = Major Faults delta
vMn = Minor Faults delta
USED = Res+Swap Size (KiB)
nsIPC = IPC namespace Inode
nsMNT = MNT namespace Inode
nsNET = NET namespace Inode
nsPID = PID namespace Inode
nsUSER = USER namespace Inode
nsUTS = UTS namespace Inode
The accepted answer is not accurate. Here are ways to find out which CPU is running the thread (or was the last one to run it) at the moment of inquiry:
Directly read /proc/<pid>/task/<tid>/stat. Before doing so, make sure the format hasn't changed with the latest kernel. Documentation is not always up to date, but at least you can try https://www.kernel.org/doc/Documentation/filesystems/proc.txt. As of this writing, it is the 14th value from the end.
Use ps. Either give it the -F switch, or use output modifiers and add the field code PSR.
Use top with Last Used Cpu column (hitting f gets you to column selection)
Use htop with PROCESSOR column (hitting F2 gets you to setup screen)
To see the threads of a process:
ps -T -p PID
To see the thread run info:
ps -mo pid,tid,%cpu,psr -p PID
Example :
/tmp # ps -T -p 3725
PID SPID TTY TIME CMD
3725 3725 ? 00:00:00 Apps
3725 3732 ? 00:00:10 t9xz1d920
3725 3738 ? 00:00:00 XTimer
3725 3739 ? 00:00:05 Japps
3725 4017 ? 00:00:00 QTask
3725 4024 ? 00:00:00 Kapps
3725 4025 ? 00:00:17 PTimer
3725 4026 ? 00:01:17 PTask
3725 4027 ? 00:00:00 RTask
3725 4028 ? 00:00:00 Recv
3725 4029 ? 00:00:00 QTimer
3725 4033 ? 00:00:01 STask
3725 4034 ? 00:00:02 XTask
3725 4035 ? 00:00:01 QTimer
3725 4036 ? 00:00:00 RTimer
3725 4145 ? 00:00:00 t9xz1d920
3725 4147 ? 00:00:02 t9xz1d920
3725 4148 ? 00:00:00 t9xz1d920
3725 4149 ? 00:00:00 t9xz1d920
3725 4150 ? 00:00:00 t9xz1d920
3725 4865 ? 00:00:02 STimer
/tmp #
/tmp #
/tmp # ps -mo pid,tid,%cpu,psr -p 3725
PID TID %CPU PSR
3725 - 1.1 -
- 3725 0.0 2
- 3732 0.1 0
- 3738 0.0 0
- 3739 0.0 0
- 4017 0.0 6
- 4024 0.0 3
- 4025 0.1 0
- 4026 0.7 0
- 4027 0.0 3
- 4028 0.0 7
- 4029 0.0 0
- 4033 0.0 4
- 4034 0.0 1
- 4035 0.0 0
- 4036 0.0 2
- 4145 0.0 2
- 4147 0.0 0
- 4148 0.0 5
- 4149 0.0 2
- 4150 0.0 7
- 4865 0.0 0
/tmp #
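As a complement to the inspection tools above, a thread can also ask from the inside: glibc's sched_getcpu(3) returns the CPU the calling thread is currently on. A minimal sketch (note the result is only a snapshot, since the scheduler may migrate the thread immediately afterwards):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    int cpu = sched_getcpu();
    if (cpu < 0) {
        perror("sched_getcpu");
        return 1;
    }
    printf("this thread is running on CPU %d\n", cpu);
    return 0;
}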
