Running top and free -m commands in parallel - linux

I am trying to write a simple script that continuously monitors the memory usage of a certain process. I am using 'top' with grep to capture the memory usage of that process, while also running 'free -m', and writing both outputs to text files.
top | grep --line-buffered -i process_name >> /home/output_txt1 &
while (1)
free -m >> /home/output_txt2
sleep 2
end &
However, when I run the commands, I get
Suspended (tty output) top | grep --line-buffered -i process_name >> /home/output_txt1 &
What am I doing wrong, and how can I implement what I want? Note that I also tried using 'watch' before switching to the while loop, and that didn't work either.

As mentioned in my comment, from man top:
-b : Batch mode operation
Starts top in 'Batch mode', which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the '-n' command-line option or until killed.
and also:
-n : Number of iterations limit as: -n number
Specifies the maximum number of iterations, or frames, top should produce before ending.
I don't understand exactly what you want to do, but check whether the script below fulfills your requirements:
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
OUT_TOP="${DIR}/out_top"
OUT_FREE="${DIR}/out_free"

echo "$(date) Reset" > "${OUT_TOP}"
echo "$(date) Reset" > "${OUT_FREE}"

while true
do
    date >> "${OUT_TOP}"
    date >> "${OUT_FREE}"
    top -b -n 1 | grep --line-buffered -i apache >> "${OUT_TOP}"
    free -m >> "${OUT_FREE}"
    sleep 2
done
The infinite loop comes from the 'while true ... done' block.
When I tested it, it gave a result close to what you may be looking for:
[so46072643] ./topandfree &
[1] 27313
[so46072643] sleep 8
[so46072643] kill 27313
[so46072643] for f in `ls ./out*`; do printf "\n%s\n<<--\n" $f; cat $f; echo "-->>"; done
./out_free
<<--
Wed Sep 6 12:54:54 CEST 2017 Reset
Wed Sep 6 12:54:54 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1177 0 263 1400
-/+ buffers/cache: 5144 2840
Swap: 8031 117 7914
Wed Sep 6 12:54:56 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1176 0 263 1400
-/+ buffers/cache: 5145 2840
Swap: 8031 117 7914
Wed Sep 6 12:54:59 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1176 0 263 1400
-/+ buffers/cache: 5145 2840
Swap: 8031 117 7914
Wed Sep 6 12:55:01 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1176 0 263 1400
-/+ buffers/cache: 5144 2840
Swap: 8031 117 7914
Wed Sep 6 12:55:04 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1177 0 263 1400
-/+ buffers/cache: 5144 2840
Swap: 8031 117 7914
Wed Sep 6 12:55:06 CEST 2017
total used free shared buffers cached
Mem: 7985 6808 1177 0 263 1400
-/+ buffers/cache: 5144 2841
Swap: 8031 117 7914
[1]+ Terminated ./topandfree
-->>
./out_top
<<--
Wed Sep 6 12:54:54 CEST 2017 Reset
Wed Sep 6 12:54:54 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
Wed Sep 6 12:54:56 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
Wed Sep 6 12:54:59 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
Wed Sep 6 12:55:01 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
Wed Sep 6 12:55:04 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
Wed Sep 6 12:55:06 CEST 2017
4747 apache 20 0 171m 2068 436 S 0.0 0.0 0:00.00 httpd
4748 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4750 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4751 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4752 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4753 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4754 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
4755 apache 20 0 171m 2072 440 S 0.0 0.0 0:00.00 httpd
-->>
[so46072643]
Hope it helps.

Related

How can I get the whole thread name when I use `ps -T -p [pid]`

ps -T -p [pid] and top -H -p [pid] can only display the first 15 characters, like http-nio-8080-e, but I would like to get the whole thread name, like http-nio-8080-exec-9. What should I do? Thank you!
for example:
[root@localhost ~]# ps -T -p 2251
PID SPID TTY TIME CMD
2251 2251 ? 00:00:00 java
2251 2808 ? 00:00:03 java
2251 2821 ? 00:00:00 VM Thread
2251 2822 ? 00:00:00 Reference Handl
2251 2823 ? 00:00:00 Finalizer
2251 2824 ? 00:00:00 Signal Dispatch
2251 2825 ? 00:00:02 C2 CompilerThre
2251 2832 ? 00:00:02 C1 CompilerThre
2251 2835 ? 00:00:00 Sweeper thread
2251 2851 ? 00:00:00 Service Thread
2251 2866 ? 00:00:00 VM Periodic Tas
2251 2867 ? 00:00:00 Common-Cleaner
2251 6518 ? 00:00:00 Catalina-utilit
2251 6520 ? 00:00:00 Catalina-utilit
2251 6531 ? 00:00:00 container-0
2251 7370 ? 00:00:00 NioBlockingSele
2251 7374 ? 00:00:00 http-nio-8080-e
2251 7375 ? 00:00:00 http-nio-8080-e
2251 7376 ? 00:00:00 http-nio-8080-e
2251 7377 ? 00:00:00 http-nio-8080-e
2251 7378 ? 00:00:00 http-nio-8080-e
2251 7379 ? 00:00:00 http-nio-8080-e
2251 7380 ? 00:00:00 http-nio-8080-e
2251 7381 ? 00:00:00 http-nio-8080-e
2251 7382 ? 00:00:00 http-nio-8080-e
2251 7383 ? 00:00:00 http-nio-8080-e
2251 7384 ? 00:00:00 http-nio-8080-C
2251 7395 ? 00:00:00 http-nio-8080-A
and
[root@localhost ~]# top -H -p 2251
top - 12:00:38 up 15 min, 1 user, load average: 0.06, 0.11, 0.19
Threads: 28 total, 0 running, 28 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 3880412 total, 2276824 free, 903836 used, 699752 buff/cache
KiB Swap: 1581052 total, 1581052 free, 0 used. 2745864 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2251 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.07 java
2808 root 20 0 3043676 152880 14232 S 0.0 3.9 0:03.00 java
2821 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.27 VM Thread
2822 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 Reference Handl
2823 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 Finalizer
2824 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 Signal Dispatch
2825 root 20 0 3043676 152880 14232 S 0.0 3.9 0:02.76 C2 CompilerThre
2832 root 20 0 3043676 152880 14232 S 0.0 3.9 0:02.07 C1 CompilerThre
2835 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.01 Sweeper thread
2851 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 Service Thread
2866 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.74 VM Periodic Tas
2867 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 Common-Cleaner
6518 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.02 Catalina-utilit
6520 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.12 Catalina-utilit
6531 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 container-0
7370 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.05 NioBlockingSele
7374 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.14 http-nio-8080-e
7375 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7376 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.03 http-nio-8080-e
7377 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7378 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7379 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7380 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7381 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7382 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7383 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-e
7384 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.06 http-nio-8080-C
7395 root 20 0 3043676 152880 14232 S 0.0 3.9 0:00.00 http-nio-8080-A
It can only display the first 15 characters, like http-nio-8080-e; I would like the whole thread name, like http-nio-8080-exec-9, as it appears in a thread dump:
"http-nio-8080-exec-9" #25 daemon prio=5 os_prio=0 cpu=0.13ms elapsed=1013.48s tid=0x00007fc0708d9000 nid=0x1f waiting on condition [0x00007fc0506b1000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@11.0.2/Native Method)
- parking to wait for <0x00000000c5c22c20> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(java.base@11.0.2/LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11.0.2/AbstractQueuedSynchronizer.java:2081)
at java.util.concurrent.LinkedBlockingQueue.take(java.base@11.0.2/LinkedBlockingQueue.java:433)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:107)
at org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:33)
at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.2/ThreadPoolExecutor.java:1054)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.2/ThreadPoolExecutor.java:1114)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.2/ThreadPoolExecutor.java:628)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(java.base@11.0.2/Thread.java:834)
The thread name length is restricted to 16 characters (including the terminating null byte \0). If the length, including the \0, exceeds 16 bytes, the string is silently truncated before being stored.
See pthread_setname_np(3) and proc(5): the stored name lives in /proc/[pid]/task/[tid]/comm.
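For illustration, the stored (truncated) name of every thread can be read straight from /proc; a minimal sketch, assuming a Linux system (the PID argument stands in for your Java process; since longer names were truncated when they were set, recovering the full Java thread name needs a jstack dump instead):

```shell
#!/bin/sh
# Print TID and the kernel-stored thread name (comm, at most 15 chars)
# for every thread of a given process.
pid=${1:-$$}   # defaults to the current shell for demonstration
for task in /proc/"$pid"/task/*; do
    printf '%s\t%s\n' "${task##*/}" "$(cat "$task"/comm)"
done
```

For the untruncated names of Java threads, take a thread dump with jstack <pid> and match on the tid/nid fields, as in the stack trace above.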
Related:
How to get the full executable name of a running process in Linux

top command displaying extra 10 line when run in a for loop

I run the command
for cpu in `cat /proc/cpuinfo | grep processor | wc -l`; do
top -b -c -n$cpu | egrep -v 'Mem|Swap|{top -b -c}' | grep load -A10 | grep -v grep
done
however the shell prints an extra 10 lines, and I would like the last 10 lines after each invocation removed.
Here is the complete output; I would like the lines after '--' removed from each paragraph:
]# for cpu in `cat /proc/cpuinfo |grep processor |wc -l`;do top -b -c -n$cpu |egrep -v 'Mem|Swap|{top -b -c}' |grep load -A10 |grep -v grep; done
top - 07:34:27 up 17 days, 9:04, 1 user, load average: 0.00, 0.02, 0.68
Tasks: 268 total, 1 running, 267 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4193 root 20 0 28164 1652 1136 R 2.0 0.0 0:00.01 top -b -c -n8
14303 appuser 20 0 12.6g 3.2g 56m S 2.0 10.2 180:45.23 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx6144m -XX:PermSize=256m -XX:MaxPermSize=512m -Djava.ne
1 root 20 0 30068 1664 1444 S 0.0 0.0 0:01.54 /sbin/init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 [kthreadd]
3 root RT 0 0 0 0 S 0.0 0.0 0:00.39 [migration/0]
4 root 20 0 0 0 0 S 0.0 0.0 0:02.43 [ksoftirqd/0]
--
8629 daemon 20 0 56504 8412 5756 S 0.0 0.0 0:05.57 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8630 daemon 20 0 56476 7724 5148 S 0.0 0.0 0:00.04 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8631 daemon 20 0 54404 7436 4840 S 0.0 0.0 0:00.22 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
13089 root 18 -2 10764 392 304 S 0.0 0.0 0:00.00 /sbin/udevd -d
13090 root 18 -2 10764 416 288 S 0.0 0.0 0:00.00 /sbin/udevd -d
13203 root 20 0 254m 8120 4944 S 0.0 0.0 12:02.50 /usr/sbin/vmtoolsd
13227 root 20 0 60060 9m 7280 S 0.0 0.0 0:00.06 /usr/lib/vmware-vgauth/VGAuthService -s
14259 root 20 0 0 0 0 S 0.0 0.0 0:13.53 [flush-253:6]
14262 appuser 20 0 103m 1456 1196 S 0.0 0.0 0:00.00 /bin/sh ./standalone-mdm-hub-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -u 230.0.0.4 -Djboss.server.base.dir=../mdm-hub-
--
top - 07:34:30 up 17 days, 9:04, 1 user, load average: 0.00, 0.02, 0.68
Tasks: 268 total, 1 running, 267 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3581 root 20 0 2055m 51m 17m S 1.0 0.2 237:05.21 /usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.dat
4193 root 20 0 28172 1760 1236 R 0.7 0.0 0:00.03 top -b -c -n8
14303 appuser 20 0 12.6g 3.2g 56m S 0.7 10.2 180:45.25 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx6144m -XX:PermSize=256m -XX:MaxPermSize=512m -Djava.ne
42 root 20 0 0 0 0 S 0.3 0.0 3:05.13 [events/7]
13203 root 20 0 254m 8120 4944 S 0.3 0.0 12:02.51 /usr/sbin/vmtoolsd
14553 appuser 20 0 22.5g 18g 54m S 0.3 57.3 1467:43 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx16000m -XX:PermSize=512m -XX:MaxPermSize=1048m -Djava.
--
8629 daemon 20 0 56504 8412 5756 S 0.0 0.0 0:05.57 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8630 daemon 20 0 56476 7724 5148 S 0.0 0.0 0:00.04 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8631 daemon 20 0 54404 7436 4840 S 0.0 0.0 0:00.22 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
13089 root 18 -2 10764 392 304 S 0.0 0.0 0:00.00 /sbin/udevd -d
13090 root 18 -2 10764 416 288 S 0.0 0.0 0:00.00 /sbin/udevd -d
13227 root 20 0 60060 9m 7280 S 0.0 0.0 0:00.06 /usr/lib/vmware-vgauth/VGAuthService -s
14259 root 20 0 0 0 0 S 0.0 0.0 0:13.53 [flush-253:6]
14262 appuser 20 0 103m 1456 1196 S 0.0 0.0 0:00.00 /bin/sh ./standalone-mdm-hub-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -u 230.0.0.4 -Djboss.server.base.dir=../mdm-hub-
14512 appuser 20 0 103m 1452 1196 S 0.0 0.0 0:00.01 /bin/sh ./standalone-mdm-process-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -Djboss.server.base.dir=../mdm-process-serve
Because top -c shows full command lines, grep also matches your own pipeline's "grep load" entry inside top's output; the final grep -v grep removes that matching line, but not the ten context lines that -A10 already printed after it. Always take a moment to break down what you are doing and look at the intermediates. Try:
top -b -c -n1 | egrep -v 'Mem|Swap|{top -b -c}' | grep load -A10
The quick fix would be to add -m1 so that grep only shows the first match.
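To see the difference -m1 makes, here is a small stand-alone demonstration with GNU grep (the printf input stands in for top's output):

```shell
# Every matching line normally starts a new -A context group; with -m1,
# grep stops reading after the first match and its trailing context.
printf 'load average: 0.1\nline1\nline2\nload again\nextra\n' \
    | grep -m1 -A2 'load'
# -> load average: 0.1
#    line1
#    line2
```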

Git processes living on a linux server

So I have access to my shared hosting server via SSH. The hosting company warned me about consuming too many resources. When I log in to the server and run top to view resource usage on the Linux shared hosting server, I get the following log:
top - 09:31:20 up 4 days, 19:59, 5 users, load average: 8.30, 8.84, 9.30
Tasks: 39 total, 1 running, 38 sleeping, 0 stopped, 0 zombie
Cpu(s): 26.8%us, 4.0%sy, 1.1%ni, 64.9%id, 2.9%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 65709912k total, 63435736k used, 2274176k free, 6961904k buffers
top - 09:38:42 up 4 days, 20:07, 5 users, load average: 7.06, 8.13, 8.81
Tasks: 40 total, 1 running, 39 sleeping, 0 stopped, 0 zombie
Cpu(s): 16.8%us, 3.4%sy, 0.2%ni, 77.7%id, 1.7%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 65709912k total, 64754844k used, 955068k free, 7025016k buffers
Swap: 4190204k total, 1044576k used, 3145628k free, 43596764k cached
And this is the table with processes:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
81448 my_username 20 0 105m 1920 1504 S 0.0 0.0 0:00.02 bash
107642 my_username 20 0 11708 1800 1388 S 0.0 0.0 0:00.00 bash
232879 my_username 20 0 105m 1200 1196 S 0.0 0.0 0:00.00 bash
232892 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
232895 my_username 20 0 105m 1236 1232 S 0.0 0.0 0:00.00 git-pull
232911 my_username 20 0 11808 616 612 S 0.0 0.0 0:00.00 git
232912 my_username 20 0 76316 1980 1972 S 0.0 0.0 0:00.00 git-remote-http
389588 my_username 20 0 15048 1216 972 R 0.0 0.0 0:00.33 top
391829 my_username 20 0 105m 1912 1504 S 0.0 0.0 0:00.01 bash
693783 my_username 20 0 105m 1216 1212 S 0.0 0.0 0:00.00 bash
693792 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
693793 my_username 20 0 105m 1280 1248 S 0.0 0.0 0:00.00 git-pull
693809 my_username 20 0 11808 732 612 S 0.0 0.0 0:00.00 git
693810 my_username 20 0 76316 2832 1972 S 0.0 0.0 0:00.00 git-remote-http
724630 my_username 20 0 105m 1216 1212 S 0.0 0.0 0:00.00 bash
724639 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
724642 my_username 20 0 105m 1252 1248 S 0.0 0.0 0:00.00 git-pull
724695 my_username 20 0 11808 616 612 S 0.0 0.0 0:00.00 git
724696 my_username 20 0 76316 1976 1972 S 0.0 0.0 0:00.00 git-remote-http
880773 my_username 20 0 105m 1216 1212 S 0.0 0.0 0:00.00 bash
880782 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
880783 my_username 20 0 105m 1252 1248 S 0.0 0.0 0:00.00 git-pull
880799 my_username 20 0 11808 616 612 S 0.0 0.0 0:00.00 git
880800 my_username 20 0 76316 1976 1972 S 0.0 0.0 0:00.00 git-remote-http
894182 my_username 20 0 105m 1216 1212 S 0.0 0.0 0:00.00 bash
894191 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
894193 my_username 20 0 105m 1252 1248 S 0.0 0.0 0:00.00 git-pull
894209 my_username 20 0 11808 620 612 S 0.0 0.0 0:00.00 git
894210 my_username 20 0 76316 1976 1972 S 0.0 0.0 0:00.00 git-remote-http
994122 my_username 20 0 105m 1216 1212 S 0.0 0.0 0:00.00 bash
994131 my_username 20 0 11804 508 504 S 0.0 0.0 0:00.00 git
994132 my_username 20 0 105m 1252 1248 S 0.0 0.0 0:00.00 git-pull
994148 my_username 20 0 11808 616 612 S 0.0 0.0 0:00.00 git
994149 my_username 20 0 76316 1976 1972 S 0.0 0.0 0:00.00 git-remote-http
1025934 my_username 20 0 339m 21m 17m S 0.0 0.0 0:00.75 lsphp
1044176 my_username 20 0 105m 1200 1196 S 0.0 0.0 0:00.00 bash
So as you can see, most of the processes causing this problem are the git commands that I am running on the server. Why are they not killed after the git operation completes? How should I deal with this?
BTW, I have a limit of 50 processes, and their number is constantly around 40.
Thanks!
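As a first diagnostic (a hypothetical cleanup sketch; adjust the pattern to your situation), you could check how long each git process has been alive before killing the confirmed-stale ones:

```shell
# List PID, elapsed time and command for every git-related process
# owned by the current user.
ps -u "$(id -un)" -o pid=,etime=,comm= | awk '$3 ~ /^git/'

# Once a process is confirmed stuck (e.g. a git-remote-http that has
# been waiting for hours), terminate it explicitly:
# pkill -u "$(id -un)" -f git-remote-http
```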

Memory usage up 105% on mediatemple

Three hours ago the server memory usage shot up from around 60% to 105%. I am using a dedicated MediaTemple server with 512 MB RAM. Should I be worried? Why would something like this happen?
Any help would be greatly appreciated.
Tasks: 38 total, 2 running, 36 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 946344k total, 550344k used, 396000k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 0k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 15 0 10364 740 620 S 0.0 0.1 0:38.54 init
3212 root 18 0 96620 4068 3200 R 0.0 0.4 0:00.21 sshd
3214 root 15 0 12080 1728 1316 S 0.0 0.2 0:00.05 bash
3267 apache 15 0 412m 43m 4396 S 0.0 4.7 0:03.88 httpd
3290 apache 15 0 412m 43m 4340 S 0.0 4.7 0:02.98 httpd
3348 root 15 0 114m 52m 2112 S 0.0 5.6 0:48.94 spamd
3349 popuser 15 0 114m 50m 972 S 0.0 5.5 0:00.06 spamd
3455 sw-cp-se 18 0 60116 3216 1408 S 0.0 0.3 0:00.12 sw-cp-serverd
3525 admin 18 0 81572 4604 2912 S 0.0 0.5 0:01.74 in.proftpd
3585 apache 18 0 379m 15m 3356 S 0.0 1.7 0:00.01 httpd
3589 root 15 0 12624 1224 956 R 0.0 0.1 0:00.00 top
7397 root 15 0 21660 944 712 S 0.0 0.1 0:00.58 xinetd
9500 named 16 0 301m 5284 1968 S 0.0 0.6 0:00.43 named
9575 root 15 -4 12632 680 356 S 0.0 0.1 0:00.00 udevd
9788 root 25 0 13184 608 472 S 0.0 0.1 0:00.00 couriertcpd
9790 root 25 0 3672 380 312 S 0.0 0.0 0:00.00 courierlogger
9798 root 25 0 13184 608 472 S 0.0 0.1 0:00.00 couriertcpd
First, identify the process that is taking that much CPU with the same top command. If the process is a multi-threaded program, use the following top command:
top -H -p "pid of that process"
It will help you find whichever thread is taking a lot of CPU, for further diagnosis.
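If top is awkward to script, a similar per-thread view can be had from ps (a sketch assuming procps-ng ps; the PID is a placeholder):

```shell
# List the threads of a process sorted by CPU usage, highest first.
pid=$$   # replace with the PID of the busy process
ps -L -p "$pid" -o tid,pcpu,comm --sort=-pcpu | head
```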

Can I measure memory taken by mod_perl?

Problem: my mod_perl leaks and I cannot control it.
I run mod_perl script under Ubuntu (production code).
Usually there are 8-10 script instances running concurrently.
According to the Unix "top" utility, each instance takes 55M of memory.
55M is a lot, but I was told here that most of this memory is shared.
The memory is leaking.
There are 512M on the server.
There is a significant decrease of free memory in 24 hours after reboot.
Test: free memory on the system at the moment 10 scripts are running:
-after reboot: 270M
-in 24 hours since reboot: 50M
In 24 hours memory taken by each script is roughly the same - 55M (according to "top" utility).
I don't understand where the memory is leaking.
And I don't know how to find the leaks.
I do share memory: I preload all the modules required by the script in startup.pl.
One more test.
A very simple mod_perl script ("Hello world!") takes 52M (according to "top").
According to "Practical mod_perl" I can use GTop utility to measure the real memory taken by mod_perl.
I have made a very simple script that measures the memory with GTop.
It shows 54M of real memory taken by a very simple perl script!
54 megabytes for "Hello world"?!
proc-mem-size: 59,707392
proc-mem-share: 52,59264
diff: 54,448128
There must be something wrong in the way I measure mod_perl memory.
Help please!
This problem has been driving me mad for several days.
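One way to cross-check top and GTop is to read the resident/shared split straight from the kernel; a rough sketch, assuming Linux /proc (note that "shared" in statm means resident file-backed pages, which is why a mod_perl "Hello world" can look huge while its private cost is small):

```shell
# Estimate the private (non-shared) memory of a process
# from /proc/<pid>/statm, which reports counts in pages.
pid=$$   # substitute an apache2/mod_perl child PID
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
read -r size resident shared _ < /proc/"$pid"/statm
echo "resident: $(( resident * page_kb )) kB"
echo "shared:   $(( shared * page_kb )) kB"
echo "private:  $(( (resident - shared) * page_kb )) kB"
```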
These are the snapshots of "top" output after reboot and in 24 hours after reboot.
The processes are sorted by Memory.
---- RIGHT AFTER REBOOT ----
top - 10:25:24 up 55 min, 2 users, load average: 0.10, 0.07, 0.07
Tasks: 59 total, 3 running, 56 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 97.3%id, 0.7%wa, 0.0%hi, 0.0%si, 2.0%st
Mem: 524456k total, 269300k used, 255156k free, 12024k buffers
Swap: 0k total, 0k used, 0k free, 71276k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2307 www-data 15 0 58500 27m 5144 S 0.0 5.3 0:02.02 apache2
2301 www-data 15 0 58492 27m 4992 S 0.0 5.3 0:02.09 apache2
2302 www-data 15 0 57936 26m 4960 R 0.0 5.2 0:01.74 apache2
2895 www-data 15 0 57812 26m 5048 S 0.0 5.2 0:00.98 apache2
2903 www-data 15 0 56944 26m 4792 S 0.0 5.1 0:01.12 apache2
2886 www-data 15 0 56860 26m 4784 S 0.0 5.1 0:01.20 apache2
2896 www-data 15 0 56520 26m 4804 S 0.0 5.1 0:00.85 apache2
2911 www-data 15 0 56404 25m 4768 S 0.0 5.1 0:00.87 apache2
2901 www-data 15 0 56520 25m 4744 S 0.0 5.1 0:00.84 apache2
2893 www-data 15 0 56608 25m 4740 S 0.0 5.1 0:00.73 apache2
2277 root 15 0 51504 22m 6332 S 0.0 4.5 0:01.02 apache2
2056 mysql 18 0 98628 21m 5164 S 0.0 4.2 0:00.64 mysqld
3162 root 15 0 6356 3660 1276 S 0.0 0.7 0:00.00 vi
2622 root 15 0 8584 2980 2392 R 0.0 0.6 0:00.07 sshd
3083 root 15 0 8448 2968 2392 S 0.0 0.6 0:00.06 sshd
3164 par 15 0 5964 2828 1868 S 0.0 0.5 0:00.05 proftpd
1 root 18 0 3060 1900 576 S 0.0 0.4 0:00.00 init
2690 root 17 0 4272 1844 1416 S 0.0 0.4 0:00.00 bash
3151 root 15 0 4272 1844 1416 S 0.0 0.4 0:00.00 bash
2177 root 15 0 8772 1640 520 S 0.0 0.3 0:00.00 sendmail-mta
2220 proftpd 15 0 5276 1448 628 S 0.0 0.3 0:00.00 proftpd
2701 root 15 0 2420 1120 876 R 0.0 0.2 0:00.09 top
1966 root 18 0 5396 1084 692 S 0.0 0.2 0:00.00 sshd
---- ROUGHLY IN 24 HOURS AFTER REBOOT
top - 17:45:38 up 23:39, 1 user, load average: 0.02, 0.09, 0.11
Tasks: 55 total, 2 running, 53 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 524456k total, 457660k used, 66796k free, 127780k buffers
Swap: 0k total, 0k used, 0k free, 114620k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16248 www-data 15 0 63712 35m 6668 S 0.0 6.8 0:23.79 apache2
19417 www-data 15 0 60396 31m 6472 S 0.0 6.2 0:10.95 apache2
19419 www-data 15 0 60276 31m 6376 S 0.0 6.1 0:11.71 apache2
19321 www-data 15 0 60480 29m 4888 S 0.0 5.8 0:11.51 apache2
21241 www-data 15 0 58632 29m 6260 S 0.0 5.8 0:05.18 apache2
22063 www-data 15 0 57400 28m 6396 S 0.0 5.6 0:02.05 apache2
21240 www-data 15 0 58520 27m 4856 S 0.0 5.5 0:04.60 apache2
21236 www-data 15 0 58244 27m 4868 S 0.0 5.4 0:05.24 apache2
22499 www-data 15 0 56736 26m 4776 S 0.0 5.1 0:00.70 apache2
2055 mysql 15 0 100m 25m 5656 S 0.0 5.0 0:20.95 mysqld
2277 root 18 0 51500 22m 6332 S 0.0 4.5 0:01.07 apache2
22686 www-data 15 0 53004 21m 4092 S 0.0 4.3 0:00.21 apache2
22689 root 15 0 8584 2980 2392 R 0.0 0.6 0:00.06 sshd
2176 root 15 0 8768 1928 736 S 0.0 0.4 0:00.00 sendmail-mta
1 root 18 0 3064 1900 576 S 0.0 0.4 0:00.02 init
22757 root 15 0 4268 1844 1416 S 0.0 0.4 0:00.00 bash
2220 proftpd 18 0 5276 1448 628 S 0.0 0.3 0:00.00 proftpd
22768 root 15 0 2424 1100 876 R 0.0 0.2 0:00.00 top
1965 root 15 0 5400 1088 692 S 0.0 0.2 0:00.00 sshd
2258 root 18 0 3416 1036 820 S 0.0 0.2 0:00.01 cron
1928 klog 25 0 2248 1008 420 S 0.0 0.2 0:00.04 klogd
1946 messageb 19 0 2648 804 596 S 0.0 0.2 0:01.63 dbus-daemon
1908 syslog 18 0 2016 716 556 S 0.0 0.1 0:00.17 syslogd
It doesn't actually look like the number of apache/mod_perl processes in existence or the memory they use has changed much between the two reports you post. I note you did not post the header for the second report. It would be interesting to see the "cached" figure after 24 hours. I am going to go out on a limb and guess that this is where your memory is going - Linux is using it for caching file I/O. You can think of the file I/O cache as essentially free memory, since Linux will make that memory available if processes need it.
You can also check that this is what's going on by performing
sync; echo 3 > /proc/sys/vm/drop_caches
as root to cause the memory in use by the caches to be released, and confirming that this causes the amount of free memory reported to revert to initial values.
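You can also read the relevant figures directly instead of dropping the caches; a minimal check against /proc/meminfo (Buffers and Cached are the reclaimable part):

```shell
# Show total, free and reclaimable cache memory, in kB.
awk '/^(MemTotal|MemFree|Buffers|Cached):/ { printf "%-10s %8d kB\n", $1, $2 }' /proc/meminfo
```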
