I am trying to compare the poll times reported by strace -c with those reported by strace -T:
sudo timeout 1 strace -T -epoll -fp $(pgrep myapp)
<...>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.003386>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.000051>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.000010>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 0 (Timeout) <0.010149>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.006912>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.000047>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 1 ([{fd=139, revents=POLLIN|POLLPRI}]) <0.000404>
[pid 31771] poll([{fd=139, events=POLLIN|POLLPRI}], 1, 10) = 0 (Timeout) <0.010107>
<...>
sudo timeout 1 strace -c -fp $(pgrep myapp)
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
41.11 0.483387 1265 382 80 futex
34.60 0.406887 14532 28 16 restart_syscall
8.13 0.095651 3543 27 clock_nanosleep
6.38 0.075064 12511 6 epoll_pwait
4.92 0.057812 2065 28 nanosleep
4.57 0.053776 3585 15 epoll_wait
0.09 0.001075 0 2245 clock_gettime
0.07 0.000843 7 113 ioctl
0.07 0.000806 6 146 poll
0.02 0.000203 1 337 getpid
0.01 0.000163 1 223 gettid
0.01 0.000092 1 114 sched_yield
0.00 0.000050 5 11 epoll_ctl
0.00 0.000042 3 13 read
0.00 0.000041 1 63 write
0.00 0.000022 1 34 27 openat
0.00 0.000010 1 13 timerfd_settime
0.00 0.000004 1 8 close
0.00 0.000004 1 3 3 access
0.00 0.000003 3 1 1 connect
0.00 0.000002 1 2 lseek
0.00 0.000001 1 2 mmap
0.00 0.000001 1 2 munmap
0.00 0.000000 0 2 fstat
0.00 0.000000 0 1 socket
0.00 0.000000 0 1 getsockopt
0.00 0.000000 0 1 1 unlink
------ ----------- ----------- --------- --------- ----------------
100.00 1.175939 3821 128 total
Assuming that all poll calls are issued by one thread, how would I compare the times from -T with the times from -c? Do I have to add the times for epoll_pwait, epoll_wait, epoll_ctl, and poll to get the overall poll time from -c, and would that sum be equivalent to all the <...> times from -T added up?
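For reference, the individual <...> timings from -T can be totalled with a short script like the following (a sketch; it assumes the trace was saved to a hypothetical file poll.txt, e.g. via 2> poll.txt, since strace writes to stderr):
import re
total = 0.0
with open('poll.txt') as f:
    for line in f:
        # strace -T appends the call duration as <seconds> at the end of each line
        m = re.search(r'<(\d+\.\d+)>\s*$', line)
        if m:
            total += float(m.group(1))
print('total wall time across traced calls: %.6f s' % total)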
Regards
I have a program that performs some AI inference task.
With the time(1) command, I see that it spends a fair amount of time in the kernel (i.e. the system time reported by time(1)).
Is there a way to get a more detailed breakdown of this time? For example, how much time is spent on syscalls, context switches, I/O, and so on.
I believe the best approach would be to profile the source code; for example, if you are running a Python program, you could use cProfile.
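For example, a minimal cProfile sketch (my_inference is a hypothetical stand-in for the real workload); note that cProfile shows where time goes inside the Python code, but it does not separate user time from kernel time:
import cProfile
import pstats
def my_inference():
    # hypothetical placeholder for the actual inference work
    sum(i * i for i in range(1000000))
cProfile.run('my_inference()', 'profile.out')  # profile and save the stats to a file
pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time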
If you don't have access to the source code, or you are only interested in syscalls, you can use strace(1).
To print a summary of the time spent on each syscall, use the -c flag:
$ strace -c -f ls
some random files
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
31.91 0.000150 6 27 mmap
17.02 0.000080 4 18 mprotect
15.74 0.000074 7 10 open
8.09 0.000038 3 11 fstat
7.66 0.000036 5 8 read
6.60 0.000031 2 13 close
2.77 0.000013 13 1 write
2.77 0.000013 13 1 openat
2.55 0.000012 6 2 getdents
2.34 0.000011 6 2 1 access
1.70 0.000008 4 2 munmap
0.43 0.000002 1 2 ioctl
0.43 0.000002 2 1 arch_prctl
0.00 0.000000 0 1 stat
0.00 0.000000 0 3 brk
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 2 statfs
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.000470 111 1 total
If you want to know precisely how much time is spent on each individual syscall, you can use the -T flag (the time is shown on the right):
$ strace -T -f ls
execve("/usr/bin/ls", ["ls"], [/* 49 vars */]) = 0 <0.000183>
brk(NULL) = 0x2066000 <0.000016>
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7fbe67d000 <0.000012>
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) <0.000011>
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 <0.000012>
fstat(3, {st_mode=S_IFREG|0644, st_size=121028, ...}) = 0 <0.000011>
mmap(NULL, 121028, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f7fbe65f000 <0.000010>
close(3) = 0 <0.000006>
open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3 <0.000014>
[...]
From the man page:
-f Trace child processes as they are created by currently traced processes
as a result of the fork(2), vfork(2) and clone(2) system calls.
-c Count time, calls, and errors for each system call and report a summary
on program exit.
-T Show the time spent in system calls. This records the time difference
between the beginning and the end of each system call.
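Depending on your strace version, there may also be a -C flag, which behaves like -c but additionally prints the regular per-call output while the program runs, giving you both views in one invocation:
$ strace -C -T -f ls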
I'm trying to create a function that will iterate through a date_id column in a dataframe, create a group for each 6 sequential values, and sum the values in the 'value' column, then also return the max value from each group of six along with the result.
date_id item_id value
0 1828 32 1180727.00
1 1828 43 944937.00
2 1828 40 806681.00
3 1828 42 721810.02
4 1828 36 567950.00
5 1828 45 545306.38
6 1828 26 480506.00
7 1828 53 375788.00
8 1828 37 236000.00
9 1828 38 234780.00
10 1828 21 208998.47
11 1828 41 135000.00
12 1797 39 63420.00
13 1828 28 24410.00
14 1462 52 0.00
15 1493 16 0.00
16 1493 17 0.00
17 1493 18 0.00
18 1493 15 0.00
19 1462 53 0.00
20 1462 47 0.00
21 1462 51 0.00
22 1462 50 0.00
23 1462 49 0.00
24 1462 45 0.00
The desired output for each item_id would be
date_id item_id value
0 max value from each date group 36 sum of all values in each date grouping
I have tried using a lambda
df_rslt = df.groupby('date_id')['value'].apply(lambda grp: grp.nlargest(6).sum())
but then quickly realized this will only return one result.
I then tried something like this in a for loop, but it got nowhere
import numpy as np
grp_data = (df.groupby(['date_id', 'item_id'])
              .aggregate({'value': np.sum}))
df_rslt = (grp_data.groupby('date_id')
                   .apply(lambda x: x.nlargest(6, 'value'))
                   .reset_index(level=0, drop=True))
From this
iterate through a date_id column in a dataframe and create a group for each 6 sequential values
I believe you need to identify a block first, then groupby on those blocks along with the date:
blocks = df.groupby('date_id').cumcount()//6
df.groupby(['date_id', blocks], sort=False)['value'].agg(['sum','max'])
Output:
sum max
date_id
1828 0 4767411.40 1180727.0
1 1671072.47 480506.0
1797 0 63420.00 63420.0
1828 2 24410.00 24410.0
1462 0 0.00 0.0
1493 0 0.00 0.0
1462 1 0.00 0.0
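If you need the block as an ordinary column to match your desired output layout, you can name the unnamed index level and reset the index (a sketch, reusing the blocks series from above):
result = df.groupby(['date_id', blocks], sort=False)['value'].agg(['sum', 'max'])
result.index.names = ['date_id', 'block']  # the blocks level is unnamed by default
print(result.reset_index())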
I have got results for strace -c on RHEL 7 and RHEL 6 for this command:
strace -c /bin/sleep 20
and I don't understand why the seconds column for nanosleep is equal to 0. I had expected it to be 20.
0.00 0.000000 0 1 nanosleep
Here is a full strace report:
$ strace -c /bin/sleep 20
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00 0.000019 1 15 12 open
0.00 0.000000 0 1 read
0.00 0.000000 0 5 close
0.00 0.000000 0 8 6 stat
0.00 0.000000 0 3 fstat
0.00 0.000000 0 9 mmap
0.00 0.000000 0 3 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 nanosleep
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000019 52 19 total
And there is a call to nanosleep in the detailed report:
nanosleep({20, 0}, NULL) = 0
So seconds must be 20, not 0. What do you think?
From the manual page of strace(1):
-c On Linux, this attempts to show system time (CPU time spent running in the kernel)
I think that:
when a process calls nanosleep(), it asks the kernel to be suspended for a period of time. The kernel sets up a few things (like some flag, a timer, a timestamp...), suspends the calling process, and goes to do something else.
strace(1) reports the time spent by the kernel to do this, not the time the process stays suspended.
Maybe this -c strace option can be thought of as "-cost": how much time does this syscall cost?
To understand this question, I ran strace on strace -c /bin/sleep 20. This is how it looked:
$ strace -T -o syscalls.txt -v strace -c /bin/sleep 20
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
31.54 0.000429 29 15 12 open
13.68 0.000186 23 8 mmap
13.60 0.000185 46 4 mprotect
11.62 0.000158 20 8 6 stat
6.91 0.000094 19 5 close
5.96 0.000081 81 1 munmap
4.63 0.000063 16 4 brk
3.38 0.000046 46 1 arch_prctl
3.16 0.000043 43 1 nanosleep
2.21 0.000030 30 1 read
1.47 0.000020 20 1 1 access
1.32 0.000018 6 3 fstat
0.51 0.000007 7 1 execve
------ ----------- ----------- --------- --------- ----------------
100.00 0.001360 53 19 total
Below are some lines from syscalls.txt related to the nanosleep syscall:
ptrace(PTRACE_SYSCALL, 6498, 0, SIG_0) = 0 <0.000028>
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 <0.000017>
wait4(-1, [{WIFSTOPPED(s) && WSTOPSIG(s) == 133}], __WALL, {ru_utime={0, 0}, ru_stime={0, 3706}, ru_maxrss=616, ru_ixrss=0, ru_idrss=0, ru_isrss=0, ru_minflt=205, ru_majflt=0, ru_nswap=0, ru_inblock=0, ru_oublock=0, ru_msgsnd=0, ru_msgrcv=0, ru_nsignals=0, ru_nvcsw=108, ru_nivcsw=1}) = 6498 <20.000423>
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=6498, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
So nanosleep itself lasts for 20 seconds, as shown at the end of the line: <20.000423>. However, wait4 returns this rusage for it:
{ru_utime={0, 0}, ru_stime={0, 3706}}
So according to this report, the kernel spent only about 3.7 milliseconds of CPU time on the traced process (ru_stime is {seconds, microseconds}). The seconds column therefore likely means the CPU time (user time + system time + some unclear overhead) spent by the OS to handle a syscall. It is not meant to be the wall time of a syscall.
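As a side note, newer versions of strace also have a -w option, which makes the -c summary report wall-clock time (the difference between syscall entry and exit) instead of system time; with it, the nanosleep row should account for roughly the full 20 seconds:
$ strace -c -w /bin/sleep 20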
Running Debian Jessie Linux. While learning nmap, I was using nc (netcat) in listen mode so that nmap would find an open port. All done on a single host.
I got the nc syntax wrong, so the command was 'nc -l 3306' where it should have been 'nc -l -p 3306'. nmap then did not report 3306 open, but it DID show SOME port open, say 40000.
So I did 'lsof | grep nc' and indeed nc had 40000, or some similar high port, open. Bizarre.
I then straced nc. With the bad syntax 'nc -l 3306', the relevant system call sequence is socket, listen, accept. Note: no bind call. The correct 'nc -l -p 3306' command produces the expected socket, bind, listen, accept.
So, my point is, you can open a socket for accepting connections without specifying a port! Obviously you don't know WHICH port you will get, but you DO get one. Now, if user space (i.e. nc) isn't supplying ANY bind info, that implies to me that kernel space is picking the port? I first expected the port to come from undefined stack memory in the nc code, but since no bind call is made, that memory's values would never be transferred to the kernel at all.
Bizarre!
Not weird at all, that's how netcat (nc) works (option handling varies between versions); please refer to this document.
netcat -l -p port [-options] [hostname] [port]
If you want to specify a specific port, you must use "-p".
executing this:
netcat -l 3306
is just the same as executing:
netcat -l
which selects a random port to listen on.
Now, you are right, there's no bind call, but other system calls are missing as well:
Calling with bad syntax
mortiz@florida:~/Documents/projects$ strace -c netcat -l 4000
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
15.09 0.000040 8 5 read
10.94 0.000029 7 4 openat
10.19 0.000027 4 6 rt_sigaction
9.81 0.000026 26 1 stat
9.81 0.000026 8 3 brk
7.55 0.000020 5 4 close
7.55 0.000020 20 1 socket
6.04 0.000016 4 4 fstat
6.04 0.000016 16 1 alarm
5.28 0.000014 7 2 getpid
5.28 0.000014 7 2 setsockopt
4.53 0.000012 12 1 listen
1.89 0.000005 5 1 rt_sigprocmask
0.00 0.000000 0 7 mmap
0.00 0.000000 0 4 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000265 50 1 total
Calling with the right syntax
mortiz@florida:~/Documents/projects$ strace -c netcat -l -p 4000
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
33.81 0.000334 8 41 32 openat
21.76 0.000215 6 33 28 stat
8.10 0.000080 5 14 mmap
5.97 0.000059 5 11 close
5.77 0.000057 4 14 read
4.76 0.000047 15 3 munmap
4.05 0.000040 20 2 2 connect
3.95 0.000039 4 9 fstat
3.34 0.000033 5 6 rt_sigaction
2.83 0.000028 9 3 socket
2.73 0.000027 4 6 mprotect
1.21 0.000012 2 6 lseek
0.61 0.000006 3 2 getpid
0.51 0.000005 5 1 listen
0.30 0.000003 3 1 rt_sigprocmask
0.30 0.000003 3 1 alarm
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 bind
0.00 0.000000 0 2 setsockopt
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000988 162 63 total
Calling with right syntax (without specific port)
mortiz@florida:~/Documents/projects$ strace -c netcat -l
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
17.62 0.000040 6 6 rt_sigaction
17.18 0.000039 7 5 read
13.66 0.000031 7 4 openat
12.33 0.000028 28 1 listen
9.69 0.000022 22 1 socket
7.93 0.000018 4 4 close
7.49 0.000017 8 2 setsockopt
3.96 0.000009 2 4 fstat
3.52 0.000008 8 1 rt_sigprocmask
3.52 0.000008 8 1 alarm
3.08 0.000007 3 2 getpid
0.00 0.000000 0 1 stat
0.00 0.000000 0 7 mmap
0.00 0.000000 0 4 mprotect
0.00 0.000000 0 1 munmap
0.00 0.000000 0 3 brk
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 arch_prctl
------ ----------- ----------- --------- --------- ----------------
100.00 0.000227 50 1 total
So we can see that changing the parameters changes the system calls that are made (which makes sense).
The fact that the port gets assigned without bind() could be related to the different netcat implementations, as @UrielKatz mentioned in the comments.
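On Linux, calling listen() on an unbound TCP socket makes the kernel bind it implicitly to an ephemeral port, which you can confirm with a few lines of Python (a minimal sketch):
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.listen(1)             # note: no bind() beforehand
print(s.getsockname())  # prints ('0.0.0.0', <kernel-chosen ephemeral port>)
s.close()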
We have a very strange problem with our web project.
We use:
2 Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
12 GB memory
We get about 20 hits per second. 4-5 requests per second are heavy search requests.
We use nginx + php-fpm (5.3.22)
The MySQL server is installed on another machine.
Most of the time we have a load average below 10 and CPU usage around 50%.
Sometimes CPU usage jumps to about 95%, and after that the load average grows to 50 and more!
You can see the Load Average and CPU Usage graphs here (my reputation is too low to post images):
Load Average
CPU Usage
We have to reload php-fpm (/etc/init.d/php-fpm reload) to normalize the situation.
This can happen 4-5 times per day.
I tried to use strace to examine this situation.
Sorry for the long logs! This is the output of the command strace -cp PID,
where PID is a random php-fpm process id (we start 100 php-fpm processes).
These two results were captured at a moment of high CPU usage.
Process 17272 attached - interrupt to quit
Process 17272 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
65.56 0.008817 267 33 munmap
13.38 0.001799 900 2 clone
9.66 0.001299 2 589 read
7.43 0.000999 125 8 mremap
2.84 0.000382 1 559 96 access
0.59 0.000080 40 2 waitpid
0.29 0.000039 0 627 gettimeofday
0.16 0.000022 0 346 write
0.04 0.000006 0 56 getcwd
0.04 0.000005 0 348 poll
0.00 0.000000 0 55 open
0.00 0.000000 0 69 close
0.00 0.000000 0 17 chdir
0.00 0.000000 0 189 time
0.00 0.000000 0 28 lseek
0.00 0.000000 0 2 pipe
0.00 0.000000 0 17 times
0.00 0.000000 0 8 brk
0.00 0.000000 0 8 getrusage
0.00 0.000000 0 18 setitimer
0.00 0.000000 0 8 flock
0.00 0.000000 0 1 nanosleep
0.00 0.000000 0 11 rt_sigaction
0.00 0.000000 0 13 rt_sigprocmask
0.00 0.000000 0 6 pread64
0.00 0.000000 0 7 pwrite64
0.00 0.000000 0 33 mmap2
0.00 0.000000 0 18 4 stat64
0.00 0.000000 0 34 lstat64
0.00 0.000000 0 92 fstat64
0.00 0.000000 0 63 fcntl64
0.00 0.000000 0 53 clock_gettime
0.00 0.000000 0 1 socket
0.00 0.000000 0 1 1 connect
0.00 0.000000 0 9 accept
0.00 0.000000 0 1 send
0.00 0.000000 0 21 recv
0.00 0.000000 0 9 1 shutdown
0.00 0.000000 0 1 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.013448 3363 102 total
[root@hp-php ~]# strace -cp 30767
Process 30767 attached - interrupt to quit
Process 30767 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
52.88 0.016926 220 77 munmap
29.06 0.009301 2 4343 read
8.73 0.002794 466 6 clone
3.59 0.001149 0 5598 time
3.18 0.001017 0 3745 write
1.12 0.000358 0 7316 gettimeofday
0.64 0.000205 1 164 fcntl64
0.39 0.000124 21 6 waitpid
0.22 0.000070 0 1496 326 access
0.13 0.000041 0 3769 poll
0.03 0.000009 0 151 close
0.02 0.000008 0 114 clock_gettime
0.02 0.000007 0 110 getcwd
0.00 0.000000 0 112 open
0.00 0.000000 0 38 chdir
0.00 0.000000 0 47 lseek
0.00 0.000000 0 6 pipe
0.00 0.000000 0 38 times
0.00 0.000000 0 135 brk
0.00 0.000000 0 3 ioctl
0.00 0.000000 0 14 getrusage
0.00 0.000000 0 38 setitimer
0.00 0.000000 0 19 flock
0.00 0.000000 0 40 mlock
0.00 0.000000 0 40 munlock
0.00 0.000000 0 6 nanosleep
0.00 0.000000 0 27 rt_sigaction
0.00 0.000000 0 31 rt_sigprocmask
0.00 0.000000 0 13 pread64
0.00 0.000000 0 18 pwrite64
0.00 0.000000 0 78 mmap2
0.00 0.000000 0 111 10 stat64
0.00 0.000000 0 49 lstat64
0.00 0.000000 0 182 fstat64
0.00 0.000000 0 8 socket
0.00 0.000000 0 8 5 connect
0.00 0.000000 0 19 accept
0.00 0.000000 0 7 send
0.00 0.000000 0 66 recv
0.00 0.000000 0 3 recvfrom
0.00 0.000000 0 20 1 shutdown
0.00 0.000000 0 5 setsockopt
0.00 0.000000 0 4 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.032009 28080 342 total
Yes, our scripts read a lot of data. This is normal.
But why does munmap take so long?! And when we have the problem, munmap is ALWAYS at the top!
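To see how long individual munmap calls take, rather than the -c aggregate, a single process can also be watched with something like:
strace -T -e trace=munmap -p PID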
For comparison, here are the results of tracing random php-fpm processes in the normal situation:
Process 28606 attached - interrupt to quit
Process 28606 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
45.72 0.001816 1 2601 read
32.88 0.001306 435 3 clone
9.19 0.000365 0 2175 write
6.95 0.000276 0 7521 time
2.24 0.000089 0 4158 gettimeofday
2.01 0.000080 1 114 brk
0.28 0.000011 0 2166 poll
0.20 0.000008 0 833 155 access
0.20 0.000008 0 53 recv
0.18 0.000007 2 3 waitpid
0.15 0.000006 0 18 munlock
0.00 0.000000 0 69 open
0.00 0.000000 0 96 close
0.00 0.000000 0 29 chdir
0.00 0.000000 0 36 lseek
0.00 0.000000 0 3 pipe
0.00 0.000000 0 29 times
0.00 0.000000 0 10 getrusage
0.00 0.000000 0 5 munmap
0.00 0.000000 0 1 ftruncate
0.00 0.000000 0 29 setitimer
0.00 0.000000 0 1 sigreturn
0.00 0.000000 0 11 flock
0.00 0.000000 0 18 mlock
0.00 0.000000 0 5 nanosleep
0.00 0.000000 0 19 rt_sigaction
0.00 0.000000 0 24 rt_sigprocmask
0.00 0.000000 0 6 pread64
0.00 0.000000 0 12 pwrite64
0.00 0.000000 0 69 getcwd
0.00 0.000000 0 5 mmap2
0.00 0.000000 0 35 7 stat64
0.00 0.000000 0 41 lstat64
0.00 0.000000 0 96 fstat64
0.00 0.000000 0 108 fcntl64
0.00 0.000000 0 87 clock_gettime
0.00 0.000000 0 5 socket
0.00 0.000000 0 4 4 connect
0.00 0.000000 0 16 2 accept
0.00 0.000000 0 8 send
0.00 0.000000 0 15 shutdown
0.00 0.000000 0 4 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.003972 20541 168 total
Process 29168 attached - interrupt to quit
Process 29168 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
54.81 0.002366 1 1717 read
26.41 0.001140 1 1696 poll
8.29 0.000358 0 1662 write
7.37 0.000318 2 131 121 stat64
1.53 0.000066 0 3249 gettimeofday
1.18 0.000051 0 746 525 access
0.23 0.000010 0 27 fcntl64
0.19 0.000008 0 62 brk
0.00 0.000000 0 1 restart_syscall
0.00 0.000000 0 7 open
0.00 0.000000 0 16 close
0.00 0.000000 0 3 chdir
0.00 0.000000 0 1039 time
0.00 0.000000 0 1 lseek
0.00 0.000000 0 3 times
0.00 0.000000 0 3 ioctl
0.00 0.000000 0 1 getrusage
0.00 0.000000 0 4 munmap
0.00 0.000000 0 3 setitimer
0.00 0.000000 0 1 sigreturn
0.00 0.000000 0 1 flock
0.00 0.000000 0 1 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 2 pwrite64
0.00 0.000000 0 3 getcwd
0.00 0.000000 0 4 mmap2
0.00 0.000000 0 7 fstat64
0.00 0.000000 0 9 clock_gettime
0.00 0.000000 0 6 socket
0.00 0.000000 0 5 1 connect
0.00 0.000000 0 3 2 accept
0.00 0.000000 0 5 send
0.00 0.000000 0 64 recv
0.00 0.000000 0 3 recvfrom
0.00 0.000000 0 2 shutdown
0.00 0.000000 0 1 getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.004317 10489 649 total
And you can see that munmap is not at the top.
Now we have no ideas left for how to solve this problem :(
We have examined the following potential causes, and the answer to each is "no":
additional user activity
long script execution (several seconds)
swap usage
Can you help us?