cstime error in /proc/pid/stat file - linux

The stime or cstime value in the /proc/pid/stat file is sometimes so huge that it doesn't make any sense, but only some processes show this wrong value, and only occasionally. For example:
# ps -eo pid,ppid,stime,etime,time,%cpu,%mem,command |grep nsc
4815 1 Jan08 1-01:20:02 213503-23:34:33 20226149 0.1 /usr/sbin/nscd
#
# cat /proc/4815/stat
4815 (nscd) S 1 4815 4815 0 -1 4202560 2904 0 0 0 21 1844674407359 0 0 20 0 9 0 4021 241668096 326 18446744073709551615 139782748139520 139782748261700 140737353849984 140737353844496 139782734487251 0 0 3674112 16390 18446744073709551615 0 0 17 1 0 0 0 0 0
You can see that the stime of process 4815 (nscd) is 1844674407359 clock ticks, equivalent to 213503-23:34:33 of CPU time, yet the process has only been running for 1-01:20:02.
Another process with a wrong cstime is the following:
A bash forks an sh, which in turn forks a sleep.
8155 (bash) S 3124 8155 8155 0 -1 4202752 1277 6738 0 0 3 0 4 1844674407368 20 0 1 0 1738175 13258752 451 18446744073709551615 4194304 4757932 140736528897536 140736528896544 47722675403157 0 65536 4100 65538 18446744071562341410 0 0 17 5 0 0 0 0 0
8184 (sh) S 8155 8155 8155 0 -1 4202496 475 0 0 0 0 0 0 0 20 0 1 0 1738185 11698176 357 18446744073709551615 4194304 4757932 140733266239472 140733266238480 47964680542613 0 65536 4100 65538 18446744071562341410 0 0 17 6 0 0 0 0 0
8185 (sleep) S 8184 8155 8155 0 -1 4202496 261 0 0 0 0 0 0 0 20 0 1 0 1738186 8577024 177 18446744073709551615 4194304 4212204 140734101195248 140734101194776 48002231427168 0 0 4096 0 0 0 0 17 7 0 0 0 0 0
So you can see that the cstime of the bash process is 1844674407368, which is far larger than the total CPU time of its children.
My server has one Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, which has 4 cores and 8 threads. The operating system is SUSE Linux Enterprise Server 11 SP1 x86_64, as shown below:
# lsb_release -a
LSB Version: core-2.0-noarch:core-3.2-noarch:core-4.0-noarch:core-2.0-x86_64:core-3.2-x86_64:core-4.0-x86_64:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch
Distributor ID: SUSE LINUX
Description: SUSE Linux Enterprise Server 11 (x86_64)
Release: 11
Codename: n/a
#
# uname -a
Linux node2 2.6.32.12-0.7-xen #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux
So is this a kernel problem? Can anybody help fix it?

I suspect that you might simply be seeing a kernel bug. Update to the latest offered update kernel for SLES (which is something like 2.6.32.42 or so) and see if it still occurs. By the way, it's stime, not cstime, that is unusually high in the nscd case; in fact, looking closely you will notice the value looks like a string truncation of 18446744073709551615 (2^64-1), give or take a few clock ticks.
pid_nr: 4815
tcomm: (nscd)
state: S
ppid: 1
pgid: 4815
sid: 4815
tty_nr: 0
tty_pgrp: -1
task_flags: 4202560 / 0x402040
min_flt: 2904
cmin_flt: 0
max_flt: 0
cmax_flt: 0
utime: 21 clocks (= 21 clocks) (= 0.210000 s)
stime: 1844674407359 clocks (= 1844674407359 clocks) (= 18446744073.590000 s)
cutime: 0 clocks (= 0 clocks) (= 0.000000 s)
cstime: 0 clocks (= 0 clocks) (= 0.000000 s)
priority: 20
nice: 0
num_threads: 9
always-zero: 0
start_time: 4021
vsize: 241668096
get_mm_rss: 326
rsslim: 18446744073709551615 / 0xffffffffffffffff
mm_start_code: 139782748139520 / 0x7f21b50c7000
mm_end_code: 139782748261700 / 0x7f21b50e4d44
mm_start_stack: 140737353849984 / 0x7ffff7fb9c80
esp: 140737353844496 / 0x7ffff7fb8710
eip: 139782734487251 / 0x7f21b43c1ed3
obsolete-pending-signals: 0 / 0x0
obsolete-blocked-signals: 0 / 0x0
obsolete-sigign: 3674112 / 0x381000
obsolete-sigcatch: 16390 / 0x4006
wchan: 18446744073709551615 / 0xffffffffffffffff
always-zero: 0
always-zero: 0
task_exit_signal: 17
task_cpu: 1
task_rt_priority: 0
task_policy: 0
delayacct_blkio_ticks: 0
gtime: 0 clocks (= 0 clocks) (= 0.000000 s)
cgtime: 0 clocks (= 0 clocks) (= 0.000000 s)
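For reference, here is a minimal sketch of how the CPU-time fields can be pulled out of /proc/<pid>/stat and converted from clock ticks to seconds (field offsets per proc(5)), flagging values that look like a truncated 2^64-1. It is only an illustration, not the tool that produced the dump above:

#!/usr/bin/env python3
# Minimal sketch: decode the CPU-time fields of /proc/<pid>/stat and
# convert them from clock ticks to seconds. Field offsets per proc(5).
import os
import sys

CLK_TCK = os.sysconf("SC_CLK_TCK")  # usually 100 on x86 Linux

def cpu_times(pid):
    with open("/proc/%d/stat" % pid) as f:
        data = f.read()
    # comm (field 2) is parenthesised and may contain spaces, so split on
    # the last ')' before parsing the remaining space-separated fields.
    rest = data.rsplit(")", 1)[1].split()
    # utime/stime/cutime/cstime are fields 14-17 of the full line,
    # which is rest[11:15] once pid and comm have been stripped.
    names = ("utime", "stime", "cutime", "cstime")
    return dict(zip(names, (int(v) for v in rest[11:15])))

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
    for name, ticks in cpu_times(pid).items():
        note = ""
        # A huge value whose leading digits match 2^64-1 suggests the
        # counter was corrupted rather than genuinely accumulated.
        if ticks > 10**12 and str(2**64 - 1).startswith(str(ticks)[:10]):
            note = "  <-- looks like a truncation of 2^64-1"
        print("%-7s %15d ticks = %14.2f s%s" % (name, ticks, ticks / CLK_TCK, note))

Run against PID 4815 on the affected system, it would report stime as 18446744073.59 s, which is exactly the 213503-23:34:33 that ps shows.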

Related

How to interpret the value of the time column in /proc/self/mountstats - does it indicate a performance issue?

I have a bladefs volume, and I just checked /proc/self/mountstats, where I see per-operation statistics:
...
opts: rw,vers=3,rsize=131072,wsize=131072,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.2.100,mountvers=3,mountport=903,mountproto=tcp,local_lock=all
age: 18129
caps: caps=0x3fc7,wtmult=512,dtsize=32768,bsize=0,namlen=255
sec: flavor=1,pseudoflavor=1
events: 18840 116049 23 5808 22138 21048 146984 13896 287 2181 0 7560 31380 0 9565 5106 0 6471 0 0 13896 0 0 0 0 0 0
bytes: 339548407 48622919 0 0 311167118 48622919 76846 13896
RPC iostats version: 1.0 p/v: 100003/3 (nfs)
xprt: tcp 875 1 7 0 0 85765 85764 1 206637 0 37 1776 35298
per-op statistics
NULL: 0 0 0 0 0 0 0 0
GETATTR: 18840 18840 0 2336164 2110080 92 8027 8817
SETATTR: 0 0 0 0 0 0 0 0
LOOKUP: 21391 21392 0 3877744 4562876 118 103403 105518
ACCESS: 20183 20188 0 2584304 2421960 72 10122 10850
READLINK: 0 0 0 0 0 0 0 0
READ: 3425 3425 0 465848 311606600 340 97323 97924
WRITE: 2422 2422 0 48975488 387520 763 200645 201522
CREATE: 2616 2616 0 447392 701088 21 870 1088
MKDIR: 858 858 0 188760 229944 8 573 705
SYMLINK: 0 0 0 0 0 0 0 0
MKNOD: 0 0 0 0 0 0 0 0
REMOVE: 47 47 0 6440 6768 0 8 76
RMDIR: 23 23 0 4876 3312 0 3 5
RENAME: 23 23 0 7176 5980 0 5 6
LINK: 0 0 0 0 0 0 0 0
READDIR: 160 160 0 23040 4987464 0 16139 16142
READDIRPLUS: 15703 15703 0 2324044 8493604 43 1041634 1041907
FSSTAT: 1 1 0 124 168 0 0 0
FSINFO: 2 2 0 248 328 0 0 0
PATHCONF: 1 1 0 124 140 0 0 0
COMMIT: 68 68 0 9248 10336 2 272 275...
about my bladefs. I am interested in the READ operation statistics. As far as I know, the last column (97924) means:
execute: How long ops of this type take to execute (from
rpc_init_task to rpc_exit_task) (microsecond)
How should I interpret this? Is it the average time of each read operation, regardless of the block size? I have a very strong suspicion that I have problems with NFS: am I right? The value of 0.1 sec looks bad to me, but I am not sure how exactly to interpret this time: an average, some sum...?
After reading the kernel source: the statistics are printed by rpc_clnt_show_stats() in net/sunrpc/stats.c, and the 8th column of the per-op statistics seems to be printed by _print_rpc_iostats, which prints the om_execute member of struct rpc_iostats. (The newest kernels have 9 columns, with errors in the last column.)
That member appears to be referenced/actually changed only in rpc_count_iostats_metrics, with:
execute = ktime_sub(now, task->tk_start);
op_metrics->om_execute = ktime_add(op_metrics->om_execute, execute);
Assuming ktime_add does what it says, the value of om_execute only ever increases. So the 8th column of mountstats is the cumulative sum of the execute times of all operations of that type, not an average; to get an average per operation, divide it by the operation count in the first column (for the READ line above, 97924 / 3425 ≈ 28.6 per operation).
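If it helps, here is a minimal sketch of that division, assuming the per-op line layout shown above (operation count first, cumulative execute time eighth); the result is in whatever unit the kernel uses for om_execute:

#!/usr/bin/env python3
# Minimal sketch: compute the average "execute" time per operation from the
# per-op statistics block of /proc/self/mountstats. Column 1 is the op
# count, column 8 the cumulative execute time, so average = execute / ops.
import re

def per_op_averages(path="/proc/self/mountstats"):
    averages = {}
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*([A-Z_]+): ((?:\d+ ?)+)", line)
            if not m:
                continue
            fields = [int(x) for x in m.group(2).split()]
            if len(fields) < 8 or fields[0] == 0:
                continue
            ops, execute = fields[0], fields[7]
            # If several NFS mounts are present, later mounts overwrite
            # earlier ones here; filter by device if that matters.
            averages[m.group(1)] = execute / ops
    return averages

if __name__ == "__main__":
    for op, avg in sorted(per_op_averages().items()):
        print("%-12s %10.1f per op" % (op, avg))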

100% Kernel CPU kills connections to the server [closed]

My server runs CentOS 6.9 (64 GB RAM) and nginx. The problem is that every 10 minutes there are random 100% kernel CPU spikes in htop, generated by "events/10" and "ksoftirqd/10". I don't know how to find out which exact process is generating this problem.
This is my /proc/interrupts:
$ cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 CPU8 CPU9 CPU10 CPU11 CPU12 CPU13 CPU14 CPU15
0: 77897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-edge timer
1: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-edge i8042
8: 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-edge rtc0
9: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi
12: 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IO-APIC-edge i8042
56: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv
63: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd
64: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd
65: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd
66: 1426061273 0 0 0 0 0 0 0 0 0 0 1914508 0 0 0 0 PCI-MSI-edge ahci
67: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 PCI-MSI-edge ahci
68: 3084636512 0 0 0 0 0 0 0 0 0 10149560 0 0 0 0 0 PCI-MSI-edge eth0
NMI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Non-maskable interrupts
LOC: 1972128636 528409367 3519065090 2616991376 2762882221 3577269786 2407615998 2889069038 1939478243 2270996522 1940319131 2244314760 2033706214 2339089941 2303043400 2629954396 Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Spurious interrupts
PMI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Performance monitoring interrupts
IWI: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 IRQ work interrupts
RES: 1349612979 1979915818 1044674069 463586597 1410841781 641984863 3396971132 3062175502 2189512469 2034852778 1686264346 1571882114 1410891335 1330892006 1273321645 1195645068 Rescheduling interrupts
CAL: 1771384 1771300 1775694 1780259 1778017 1782331 1761855 1755630 1758801 1759472 1770034 1773352 1775468 1779579 1778401 1778652 Function call interrupts
TLB: 1295395722 623515438 528231713 457109575 438669843 412327240 413878597 392015004 399091958 373918339 391267007 362582716 383312220 348908971 376811042 337426419 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Machine check exceptions
MCP: 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 77730 Machine check polls
ERR: 0
MIS: 0
This is my /proc/cpuinfo:
processor : 0
vendor_id : AuthenticAMD
cpu family : 23
model : 1
model name : AMD Ryzen 7 PRO 1700X Eight-Core Processor
stepping : 1
cpu MHz : 2200.000
cache size : 512 KB
physical id : 0
siblings : 16
core id : 0
cpu cores : 8
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb perfctr_l2 arat xsaveopt npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold fsgsbase bmi1 avx2 smep bmi2 rdseed adx
bogomips : 6786.47
TLB size : 2560 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate eff_freq_ro [13] [14]
I hope you can help me; these spikes make the server really unstable (even with nginx and most of the other software turned off).
I also tried installing irqbalance, but it just moved the CPU that was going to 100% from the first to the eleventh.
I also made the host switch my drives to another machine of the same architecture, but that didn't work either.
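Since the counters in /proc/interrupts (and in /proc/softirqs) are cumulative, one way to narrow this down is to sample them twice, a few seconds apart, and see which sources grow during a spike. Here is a minimal sketch of that idea; the CPU column index (10, because the spikes show up in ksoftirqd/10) and the sampling interval are only assumptions:

#!/usr/bin/env python3
# Minimal sketch: sample /proc/interrupts twice and print the interrupt
# sources whose counters grew the most on one CPU column during the
# interval. The same approach works for /proc/softirqs.
import time

def snapshot(path="/proc/interrupts"):
    with open(path) as f:
        ncpu = len(f.readline().split())        # header: CPU0 CPU1 ...
        counts = {}
        for line in f:
            parts = line.split()
            if not parts:
                continue
            label = parts[0].rstrip(":")
            values = []
            for tok in parts[1:1 + ncpu]:
                if not tok.isdigit():
                    break
                values.append(int(tok))
            desc = " ".join(parts[1 + len(values):])
            counts[label] = (values, desc)
    return counts

if __name__ == "__main__":
    cpu = 10                                     # column where the spikes appear
    before = snapshot()
    time.sleep(5)                                # sample across a suspected spike
    after = snapshot()
    deltas = []
    for label, (vals, desc) in after.items():
        old_vals, _ = before.get(label, ([], ""))
        if len(vals) > cpu and len(old_vals) > cpu:
            deltas.append((vals[cpu] - old_vals[cpu], label, desc))
    for delta, label, desc in sorted(deltas, reverse=True)[:10]:
        print("%12d  %-6s %s" % (delta, label, desc))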

Is it possible to change which core timer interrupts happen on?

On my Debian 8 system, when I run the command watch -n0.1 --no-title cat /proc/interrupts, I get the output below.
CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 [0/1808]
0: 46 0 0 10215 0 0 0 0 IO-APIC-edge timer
1: 1 0 0 2 0 0 0 0 IO-APIC-edge i8042
8: 0 0 0 1 0 0 0 0 IO-APIC-edge rtc0
9: 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi
12: 0 0 0 4 0 0 0 0 IO-APIC-edge i8042
18: 0 0 0 0 8 0 0 0 IO-APIC-fasteoi i801_smbus
19: 7337 0 0 0 0 0 0 0 IO-APIC-fasteoi ata_piix, ata_piix
21: 0 66 0 0 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb1
23: 0 0 35 0 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb2
40: 208677 0 0 0 0 0 0 0 HPET_MSI-edge hpet2
41: 0 4501 0 0 0 0 0 0 HPET_MSI-edge hpet3
42: 0 0 2883 0 0 0 0 0 HPET_MSI-edge hpet4
43: 0 0 0 1224 0 0 0 0 HPET_MSI-edge hpet5
44: 0 0 0 0 1029 0 0 0 HPET_MSI-edge hpet6
45: 0 0 0 0 0 0 0 0 PCI-MSI-edge aerdrv, PCIe PME
46: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME
47: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME
48: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME
49: 0 0 0 0 0 8570 0 0 PCI-MSI-edge eth0-rx-0
50: 0 0 0 0 0 0 1684 0 PCI-MSI-edge eth0-tx-0
51: 0 0 0 0 0 0 0 2 PCI-MSI-edge eth0
NMI: 8 2 2 2 1 2 1 49 Non-maskable interrupts
LOC: 36 31 29 26 21 7611 886 1390 Local timer interrupts
SPU: 0 0 0 0 0 0 0 0 Spurious interrupts
PMI: 8 2 2 2 1 2 1 49 Performance monitoring interrupts
IWI: 0 0 0 1 1 0 1 0 IRQ work interrupts
RTR: 7 0 0 0 0 0 0 0 APIC ICR read retries
RES: 473 1027 1530 739 1532 3567 1529 1811 Rescheduling interrupts
CAL: 846 1012 1122 1047 984 1008 1064 1145 Function call interrupts
TLB: 2 7 5 3 12 15 10 6 TLB shootdowns
TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 0 0 0 0 Machine check exceptions
MCP: 4 4 4 4 4 4 4 4 Machine check polls
THR: 0 0 0 0 0 0 0 0 Hypervisor callback interrupts
ERR: 0
MIS: 0
Observe that the timer interrupt is firing mostly on CPU3.
Is it possible to move the timer interrupt to CPU0?
The name of the concept is IRQ SMP affinity.
It's possible to set the smp_affinity of an IRQ by setting the affinity mask in /proc/irq/<IRQ_NUMBER>/smp_affinity or the affinity list in /proc/irq/<IRQ_NUMBER>/smp_affinity_list.
The affinity mask is a bit field where each bit represents a core; the IRQ is allowed to be served on the cores whose bits are set.
The command
echo 1 > /proc/irq/0/smp_affinity
executed as root should pin the IRQ0 to CPU0.
The conditional "should" is mandatory, as setting the affinity for an IRQ is subject to a set of prerequisites; the list includes: an interrupt controller that supports a redirection table (like the IO-APIC), an affinity mask that contains at least one active CPU, an IRQ affinity that is not managed by the kernel, and the feature being enabled.
In my virtualised Debian 8 system I was unable to set the affinity of the IRQ0, failing with an EIO error.
I was also unable to track down the exact reason.
If you are willing to dive into the Linux source code, you can start from write_irq_affinity in kernel/irq/proc.c.
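To make the bitmask arithmetic concrete, here is a minimal sketch (the helper names are hypothetical, not part of any kernel tooling) that builds the mask for a set of CPUs and pins an IRQ via smp_affinity_list; it needs root and can fail with EIO under the same prerequisites described above:

#!/usr/bin/env python3
# Minimal sketch: build the affinity bitmask for a set of CPUs and pin an
# IRQ to them. Needs root; may fail with EIO if the prerequisites above
# are not met.
import sys

def cpus_to_mask(cpus):
    # The mask is a bit field, bit N == CPU N; e.g. CPUs {0, 2} -> 0x5.
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

def pin_irq(irq, cpus):
    # smp_affinity_list takes a plain CPU list, which avoids the
    # comma-separated 32-bit-word format that smp_affinity uses on
    # machines with more than 32 CPUs.
    with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
        f.write(",".join(str(c) for c in cpus))

if __name__ == "__main__":
    irq, cpus = int(sys.argv[1]), [int(c) for c in sys.argv[2:]]
    print("mask for CPUs %s is %#x" % (cpus, cpus_to_mask(cpus)))
    pin_irq(irq, cpus)   # e.g. arguments "0 0" try to pin IRQ0 to CPU0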
Use isolcpus. It may not reduce your timer interrupts to 0, but on our servers they are greatly reduced.
If you use isolcpus, then the kernel will not affine interrupts to your isolated CPUs as it might otherwise do. For example, we have systems with dual 12-core CPUs. We noticed NVMe interrupts on our CPU1 (the second CPU), even with the CPUs isolated via tuned and its cpu-partitioning scheme. The NVMe drives on our Dell systems are connected to the PCIe lanes of CPU1, hence the interrupts on those cores.
As per my ticket with Red Hat (and Margaret Bloom, who wrote an excellent answer here), if you don't want interrupts affined to your CPUs, you need to use isolcpus on the kernel boot line. And lo and behold, I tried it and our interrupts went to 0 for the NVMe drives on all isolated CPU cores.
I have not attempted to isolate ALL cores on CPU1; I don't know if they'll simply be affined to CPU0 or what.
And, in short summary: any interrupt in /proc/interrupts with "MSI" in the name is managed by the kernel.

Linux Top command giving output of both the cpu cores using command line [closed]

I have a system with 2 cores running Linux. I want to log the CPU usage of the individual cores at regular intervals of, say, 15 minutes.
I can use top and a regex to get the info, but that gives me the overall CPU figures. When I manually press "1", the usage of each core is shown separately.
My question is: how can I display both cores' CPU usage without manually pressing "1" after invoking the top command?
My research so far:
- I can use the -b option to run in batch mode and output to a file, but the next question is how I can send keystrokes to top in batch mode. Is there a script that top reads when it runs in batch mode?
The Linux top command obtains its information from /proc/stat, which is (somewhat) dependent upon the kernel version. Perhaps you could write a program which reads from that. Here is a sample from a 2.6.32 system with 20 cores:
cpu 46832272 794980 8521784 1312627944 853989 247 34947 0 0
cpu0 6404288 173468 806918 60455445 377313 1 1799 0 0
cpu1 2980140 137898 937163 64278592 68373 0 118 0 0
cpu2 5099227 86676 841568 62395343 27685 0 64 0 0
cpu3 11255325 20062 767603 56427120 9388 0 85 0 0
cpu4 2618170 1002 501629 65394095 4369 0 62 0 0
cpu5 635453 867 154898 67725523 2981 212 58 0 0
cpu6 343657 32 66510 68113208 2769 0 64 0 0
cpu7 327935 688 38431 68158263 1703 0 55 0 0
cpu8 118687 78 27436 68382190 1992 0 33 0 0
cpu9 329990 49 42224 68138515 1643 0 49 0 0
cpu10 3462177 160918 814788 63701724 202763 3 5444 0 0
cpu11 3006524 112533 484490 64877526 37455 0 6840 0 0
cpu12 2696919 61285 695966 65004324 17277 0 133 0 0
cpu13 3453005 34509 957663 64035215 10938 0 101 0 0
cpu14 2068954 2039 679830 65764151 6418 0 50 0 0
cpu15 628390 159 367213 67531841 2593 0 41 0 0
cpu16 331139 77 76690 68120995 2971 0 51 0 0
cpu17 616895 2482 182239 67595814 70070 29 19797 0 0
cpu18 343472 51 38712 68148369 2481 0 46 0 0
cpu19 111916 96 39803 68379681 2797 0 47 0 0
intr 1991637171 173 2 0 0 2 0 0 0 1 0 0 0 4 0 0 0 0 1 56 1416833 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1644 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2285 0 0 0 0 0 0 0 3211641 4799987 3235 31624105 11000098 0 ...
ctxt 3201588026
btime 1460672984
processes 2430161
procs_running 2
procs_blocked 0
softirq 1391193131 0 626556634 166050 33864038 3892307 0 11210298 67287467 2880340 645335997
According to the man page (man 5 proc then search for /proc/stat), the lines for cpu entries are:
The amount of time, measured in units of USER_HZ (1/100ths of a second on most architectures, use sysconf(_SC_CLK_TCK) to obtain the right value), that the system spent in user mode, user mode with low priority (nice), system mode, and the idle task, respectively. The last value should be USER_HZ times the second entry in the uptime pseudo-file.
iowait - time waiting for I/O to complete; irq - time servicing interrupts; softirq - time servicing softirqs.
steal - stolen time, which is the time spent in other operating systems when running in a virtualized environment
guest - time spent running a virtual CPU for guest operating systems under the control of the Linux kernel.
guest_nice - time spent running a niced guest (a virtual CPU for guest operating systems under the control of the Linux kernel).
I looked at a 4.4.6 kernel system too; there the cpu entries also have the tenth item (guest_nice).
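If you would rather compute the figures yourself than drive top in batch mode, here is a minimal sketch along those lines: it samples the cpuN lines of /proc/stat twice and derives each core's utilization over the interval, using the field meanings quoted above:

#!/usr/bin/env python3
# Minimal sketch: sample the per-core cpuN lines of /proc/stat twice and
# print each core's utilization over the interval. All fields are in
# USER_HZ ticks; usage = (non-idle delta) / (total delta).
import time

def read_cpu_times():
    times = {}
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0].startswith("cpu") and fields[0] != "cpu":
                times[fields[0]] = [int(x) for x in fields[1:]]
    return times

def per_core_usage(interval=1.0):
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    usage = {}
    for cpu, now in after.items():
        deltas = [n - p for n, p in zip(now, before[cpu])]
        total = sum(deltas)
        # idle (4th field) plus iowait (5th field, when present) counts as idle
        idle = deltas[3] + (deltas[4] if len(deltas) > 4 else 0)
        usage[cpu] = 100.0 * (total - idle) / total if total else 0.0
    return usage

if __name__ == "__main__":
    for cpu, pct in sorted(per_core_usage().items(), key=lambda kv: int(kv[0][3:])):
        print("%-6s %5.1f%%" % (cpu, pct))

Run it from cron every 15 minutes (with whatever sampling interval you like) and append the output to a log file.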

Wondering about Virtuoso's database commit durability - switch settings?

I am running/testing the openlinksw.com Virtuoso database server and am noticing something odd: there appears to be no transaction logging.
Likely there is a switch/parameter that needs to be set to enable logging of the individual commits/transactions, but I have not found it in the documentation.
I have a script that looks like this:
set autocommit manual;
ttlp(file_to_string_output ('./events.nq/00000000'), 'bruce', 'bob', 512);
commit work;
ttlp(file_to_string_output ('./events.nq/00000001'), 'bruce', 'bob', 512);
commit work;
<repeat ttlp/commit pair many times>
Each of the 19,978 input files contains 50 quads.
I ran:
bin/isql < script.sql
and while it was running, I ran 'vmstat 1'. The script takes about 4
minutes to finish, which gives a rate of about 83 commits per second.
However, vmstat's 'bo' (blocks out) column only occasionally showed
disk I/O. Most of the time, 'bo' was zero, with occasional bursts of
activity. I would expect that, for each commit to be durable, there
would have to be at least a small bit of i/o per commit for
transaction logging. Am I doing something wrong? I'm using the
default database parameters.
Example vmstat 1 output:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 113900 34730612 527836 11647812 0 0 0 0 4024 2928 6 1 94 0 0
1 0 113900 34729992 527840 11647876 0 0 0 36 4035 2727 6 0 93 1 0
2 0 113900 34729392 527840 11648440 0 0 0 0 3799 2612 6 1 94 0 0
1 0 113900 34728896 527840 11649004 0 0 0 0 3814 2693 6 0 94 0 0
1 0 113900 34724100 527840 11649556 0 0 0 0 3775 2653 6 1 94 0 0
1 0 113900 34723008 527840 11650128 0 0 0 8 3696 2838 6 0 94 0 0
1 0 113900 34722512 527844 11650708 0 0 0 16 3594 2996 6 0 93 0 0
1 0 113900 34721892 527844 11651868 0 0 0 0 4073 3066 6 0 94 0 0
1 0 113900 34721272 527844 11652488 0 0 0 0 4175 3077 6 1 94 0 0
1 0 113900 34721024 527844 11652568 0 0 0 5912 3744 2929 6 1 94 0 0
1 0 113900 34719540 527844 11653696 0 0 0 60 3786 3143 6 1 93 0 0
1 0 113900 34719044 527844 11653772 0 0 0 32 3809 2911 6 1 94 0 0
1 0 113900 34718052 527844 11654396 0 0 0 0 3963 2842 6 1 94 0 0
1 0 113900 34717060 527844 11654988 0 0 0 0 3956 2904 6 1 94 0 0
1 0 113900 34714748 527844 11656140 0 0 0 0 3920 2928 6 1 94 0 0
1 0 113900 34714144 527844 11656212 0 0 0 4 4059 2984 6 1 93 1 0
1 0 113900 34713656 527848 11657360 0 0 0 16 3945 2908 6 1 94 0 0
1 0 113900 34712540 527848 11657972 0 0 0 0 3978 2984 6 1 93 0 0
1 0 113900 34712044 527848 11658052 0 0 0 0 3758 2889 6 1 94 0 0
1 0 113900 34711088 527848 11658640 0 0 0 0 3643 2712 6 1 94 0 0
1 0 113900 34710468 527848 11659224 0 0 0 0 3763 2812 6 1 94 0 0
Running on
Version: 7.1
64-bit linux
