How to remove kdevtmpfsi cryptominer malware - Linux

I used Alibaba Cloud ECS to set up a server. This is the third time in the past 2 months that it has been attacked by a mining virus, so I am looking for a solution here. Below are my attempts at some of the public answers on the Internet, but none of them succeeded in the end.
top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
552060 root 20 0 2873424 2.3g 2712 S 129.4 3.7 51:33.70 kdevtmpfsi
551850 root 20 0 3070036 2.3g 2712 S 123.5 3.7 47:00.41 kdevtmpfsi
552074 root 20 0 3070032 2.3g 2712 S 123.5 3.7 49:39.04 kdevtmpfsi
23883 1000 20 0 6785676 408104 26328 S 5.9 0.6 2:09.43 java
564739 root 20 0 227268 4788 3868 R 5.9 0.0 0:00.02 top
1 root 20 0 170004 12132 9124 S 0.0 0.0 0:03.19 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_highpri
8 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_tasks_rude_
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_tasks_trace
11 root 20 0 0 0 0 S 0.0 0.0 0:00.25 ksoftirqd/0
12 root 20 0 0 0 0 I 0.0 0.0 0:21.31 rcu_sched
13 root rt 0 0 0 0 S 0.0 0.0 0:00.01 migration/0
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1
16 root rt 0 0 0 0 S 0.0 0.0 0:00.58 migration/1
17 root 20 0 0 0 0 S 0.0 0.0 0:00.78 ksoftirqd/1
19 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/1:0H-events_highpri
kill -9 PID does not work (kdevtmpfsi restarts within 1 minute)
There is no kdevtmpfsi file in the /tmp path
systemctl status PID does not work either
nothing in the crontab
used find / -iname 'kdevtmpfsi*' -exec rm -fv {} \;
Terminal commands tried:
[root@Stock-DMP tmp]# ps -ef | grep kdevtmpfsi
root 551850 35245 99 15:02 ? 00:49:38 /tmp/kdevtmpfsi
root 552060 35687 99 15:02 ? 00:54:11 /tmp/kdevtmpfsi
root 552074 35462 99 15:02 ? 00:52:16 /tmp/kdevtmpfsi
root 565438 543813 0 15:41 pts/0 00:00:00 grep --color=auto kdevtmpfsi
[root@Stock-DMP tmp]# pwd
/tmp
[root@Stock-DMP tmp]# ll
total 12
-rw------- 1 root root 0 Jan 5 12:12 AliyunAssistClientSingleLock.lock
-rw-r--r-- 1 root root 3 Jan 5 13:00 CmsGoAgent.pid
drwx------ 3 root root 4096 Jan 5 13:00 systemd-private-cef6b94dbb0f4abbb2fb81aed53c1bdf-chronyd.service-iwnjti
drwx------ 3 root root 4096 Jan 5 13:00 systemd-private-cef6b94dbb0f4abbb2fb81aed53c1bdf-systemd-resolved.service-KyX7Wf
[root@Stock-DMP tmp]# systemctl status 551850
Failed to get unit for PID 551850: PID 551850 does not belong to any loaded unit.
[root@Stock-DMP tmp]# systemctl status 552060
Failed to get unit for PID 552060: PID 552060 does not belong to any loaded unit.
[root@Stock-DMP tmp]# systemctl status 552074
Failed to get unit for PID 552074: PID 552074 does not belong to any loaded unit.
[root@Stock-DMP tmp]# systemctl status 555438
Failed to get unit for PID 555438: PID 555438 does not belong to any loaded unit.
[root@Stock-DMP tmp]# ls -l /proc/551850/exe
lrwxrwxrwx 1 root root 0 Jan 6 15:02 /proc/551850/exe -> '/tmp/kdevtmpfsi (deleted)'
[root@Stock-DMP tmp]# ls -l /proc/552060/exe
lrwxrwxrwx 1 root root 0 Jan 6 15:02 /proc/552060/exe -> '/tmp/kdevtmpfsi (deleted)'
[root@Stock-DMP tmp]# ls -l /proc/552074/exe
lrwxrwxrwx 1 root root 0 Jan 6 15:02 /proc/552074/exe -> '/tmp/kdevtmpfsi (deleted)'
[root@Stock-DMP tmp]# ls -l /proc/555438/exe
ls: cannot access '/proc/555438/exe': No such file or directory
[root@Stock-DMP tmp]# crontab -l
no crontab for root
[root@Stock-DMP tmp]# find / -iname kdevtmpfsi* -exec rm -fv {} \;
removed '/var/lib/docker/overlay2/003f8255259b3a7551887255badebc03e3051bf7ccbf39cdabb669be17454cc9/merged/tmp/kdevtmpfsi'
removed '/var/lib/docker/overlay2/ebb11958a3df7d4dc3019a6b7f5d9f6d6e0bad8e6c8330b3cb2d994000b0d70e/merged/tmp/kdevtmpfsi'
removed '/var/lib/docker/overlay2/7782d102817437c1dc0e502b5f2ceb47f485ca9c69961b90f3d1f828074be59d/merged/tmp/kdevtmpfsi'
find: ‘/proc/571578’: No such file or directory
find: ‘/proc/571579’: No such file or directory
[root@Stock-DMP tmp]# find / -iname kinsing* -exec rm -fv {} \;
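For what it's worth, two hedged follow-up checks based on the output above: the process that keeps respawning the miner is likely its parent, and the overlay2 paths that find hit suggest the miner is running inside Docker containers. A sketch (assumes the Docker CLI is available; the hash prefixes come from the find output above):
# show the parents (PPIDs 35245/35687/35462 in the ps output) that likely respawn the miner
ps -o pid,ppid,args -p 551850,552060,552074
ls -l /proc/35245/exe /proc/35687/exe /proc/35462/exe
# map the overlay2 hashes from the find output back to running containers
docker ps -q | while read -r id; do
  printf '%s %s\n' "$id" "$(docker inspect --format '{{.GraphDriver.Data.MergedDir}}' "$id")"
done | grep -E '003f8255|ebb11958|7782d102'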
I want to know how kdevtmpfsi got into my server.
How can I delete kdevtmpfsi completely?
What defense methods can I apply afterwards? (I develop over a home network, so it is difficult to close all ports in the security group or restrict access to a designated IP.)
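On the defense point, one hedged stopgap when your home IP changes is to allow your provider's address range instead of a single IP. The port and CIDR below are placeholders (kdevtmpfsi/kinsing is widely reported to enter through exposed services such as Redis or the Docker API; substitute whichever service you actually expose and your ISP's real range):
# order matters: the ACCEPT must come before the DROP
iptables -A INPUT -p tcp --dport 6379 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 6379 -j DROP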

Related

top command displaying 10 extra lines when run in a for loop

I run the command
for cpu in `cat /proc/cpuinfo | grep processor | wc -l`; do
top -b -c -n$cpu | egrep -v 'Mem|Swap|{top -b -c}' | grep load -A10 | grep -v grep
done
However, the shell prints 10 extra lines, and I would like the last 10 lines after each invocation removed.
Here is the complete output; I would like the lines after '--' removed from each paragraph:
]# for cpu in `cat /proc/cpuinfo |grep processor |wc -l`;do top -b -c -n$cpu |egrep -v 'Mem|Swap|{top -b -c}' |grep load -A10 |grep -v grep; done
top - 07:34:27 up 17 days, 9:04, 1 user, load average: 0.00, 0.02, 0.68
Tasks: 268 total, 1 running, 267 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.9%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4193 root 20 0 28164 1652 1136 R 2.0 0.0 0:00.01 top -b -c -n8
14303 appuser 20 0 12.6g 3.2g 56m S 2.0 10.2 180:45.23 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx6144m -XX:PermSize=256m -XX:MaxPermSize=512m -Djava.ne
1 root 20 0 30068 1664 1444 S 0.0 0.0 0:01.54 /sbin/init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 [kthreadd]
3 root RT 0 0 0 0 S 0.0 0.0 0:00.39 [migration/0]
4 root 20 0 0 0 0 S 0.0 0.0 0:02.43 [ksoftirqd/0]
--
8629 daemon 20 0 56504 8412 5756 S 0.0 0.0 0:05.57 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8630 daemon 20 0 56476 7724 5148 S 0.0 0.0 0:00.04 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8631 daemon 20 0 54404 7436 4840 S 0.0 0.0 0:00.22 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
13089 root 18 -2 10764 392 304 S 0.0 0.0 0:00.00 /sbin/udevd -d
13090 root 18 -2 10764 416 288 S 0.0 0.0 0:00.00 /sbin/udevd -d
13203 root 20 0 254m 8120 4944 S 0.0 0.0 12:02.50 /usr/sbin/vmtoolsd
13227 root 20 0 60060 9m 7280 S 0.0 0.0 0:00.06 /usr/lib/vmware-vgauth/VGAuthService -s
14259 root 20 0 0 0 0 S 0.0 0.0 0:13.53 [flush-253:6]
14262 appuser 20 0 103m 1456 1196 S 0.0 0.0 0:00.00 /bin/sh ./standalone-mdm-hub-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -u 230.0.0.4 -Djboss.server.base.dir=../mdm-hub-
--
top - 07:34:30 up 17 days, 9:04, 1 user, load average: 0.00, 0.02, 0.68
Tasks: 268 total, 1 running, 267 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3581 root 20 0 2055m 51m 17m S 1.0 0.2 237:05.21 /usr/share/metricbeat/bin/metricbeat -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.dat
4193 root 20 0 28172 1760 1236 R 0.7 0.0 0:00.03 top -b -c -n8
14303 appuser 20 0 12.6g 3.2g 56m S 0.7 10.2 180:45.25 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx6144m -XX:PermSize=256m -XX:MaxPermSize=512m -Djava.ne
42 root 20 0 0 0 0 S 0.3 0.0 3:05.13 [events/7]
13203 root 20 0 254m 8120 4944 S 0.3 0.0 12:02.51 /usr/sbin/vmtoolsd
14553 appuser 20 0 22.5g 18g 54m S 0.3 57.3 1467:43 /apps/jdk1.8.0_151/bin/java -D[Standalone] -XX:+UseCompressedOops -server -Xms2048m -Xmx16000m -XX:PermSize=512m -XX:MaxPermSize=1048m -Djava.
--
8629 daemon 20 0 56504 8412 5756 S 0.0 0.0 0:05.57 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8630 daemon 20 0 56476 7724 5148 S 0.0 0.0 0:00.04 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
8631 daemon 20 0 54404 7436 4840 S 0.0 0.0 0:00.22 /opt/quest/sbin/.vasd -p /var/opt/quest/vas/vasd/.vasd.pid
13089 root 18 -2 10764 392 304 S 0.0 0.0 0:00.00 /sbin/udevd -d
13090 root 18 -2 10764 416 288 S 0.0 0.0 0:00.00 /sbin/udevd -d
13227 root 20 0 60060 9m 7280 S 0.0 0.0 0:00.06 /usr/lib/vmware-vgauth/VGAuthService -s
14259 root 20 0 0 0 0 S 0.0 0.0 0:13.53 [flush-253:6]
14262 appuser 20 0 103m 1456 1196 S 0.0 0.0 0:00.00 /bin/sh ./standalone-mdm-hub-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -u 230.0.0.4 -Djboss.server.base.dir=../mdm-hub-
14512 appuser 20 0 103m 1452 1196 S 0.0 0.0 0:00.01 /bin/sh ./standalone-mdm-process-server.sh -c standalone-full.xml -b 0.0.0.0 -bmanagement 0.0.0.0 -Djboss.server.base.dir=../mdm-process-serve
Because it finds "grep load": with -c, top shows full command lines, so the pipeline's own grep load -A10 process also matches "load", and the final grep -v grep removes only that matching line, not the 10 context lines printed after it. Always take a moment to break down what you are doing, and look at the intermediates. Try:
top -b -c -n1 |egrep -v 'Mem|Swap|{top -b -c}' |grep load -A10
The quick fix would be to add -m1 to only show the first match.
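A hedged rewrite of the original loop with that fix applied (same structure; grep -c replaces the cat | grep | wc pipeline, and -m1 keeps only the first load-average block plus its 10 following lines):
for cpu in $(grep -c processor /proc/cpuinfo); do
  top -b -c -n"$cpu" | grep -m1 -A10 'load average'
done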

incrond processes with shell script only exit if script exit code is 1?

Configuration
I have incrond 0.5.12 on CentOS 7.6 configured as follows in /etc/incron.d/example:
/var/tmp/dir IN_CREATE sh /root/incron_script.sh $@/$#
My /root/incron_script.sh simply contains the following: echo "$@" >> /tmp/incrond_log.log
What this means is that, when I create a file in /var/tmp/dir, the file's full path is appended to /tmp/incrond_log.log. That's it.
Problem definition
The problem is basically that, if incrond is configured to call a shell script, processes are being created and are not being stopped unless that shell script exits with something other than 0.
What I'm looking at is the output of systemctl status incrond (or ps aux | grep ..., same thing).
So below, for example, there are 2 created processes.
[root@server ~]# systemctl status incrond
● incrond.service - Inotify System Scheduler
Loaded: loaded (/usr/lib/systemd/system/incrond.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-12-11 13:39:55 +03; 11min ago
Process: 16746 ExecStart=/usr/sbin/incrond (code=exited, status=0/SUCCESS)
Main PID: 16747 (incrond)
Tasks: 498
Memory: 5.9M
CGroup: /system.slice/incrond.service
├─13687 /usr/sbin/incrond
├─13747 /usr/sbin/incrond
Testing
We create 5 files, check if their names were appended to the log (incrond is working) and check how many processes are being spawned.
mkdir -p /var/tmp/dir
rm -f /var/tmp/dir/*
echo -n > /tmp/incrond_log.log
systemctl restart incrond
for i in $(seq 1 5);
do
touch /var/tmp/dir/a$i.txt
sleep 0.5
tail -n1 /tmp/incrond_log.log
systemctl status incrond | grep /usr/sbin/incrond | wc -l
done
Expected result
I would expect incrond to fork a process for every file created at this directory but to exit immediately after since there's not much to do really.
If the log shows that the file path is in the log file, this means that the incrond process should have stopped since it did its job.
By default, there were 2 processes in systemctl status incrond, so the expected result of the command is:
/var/tmp/dir/a1.txt
2
/var/tmp/dir/a2.txt
2
/var/tmp/dir/a3.txt
2
/var/tmp/dir/a4.txt
2
/var/tmp/dir/a5.txt
2
Actual result
The actual result is:
/var/tmp/dir/a1.txt
3
/var/tmp/dir/a2.txt
4
/var/tmp/dir/a3.txt
5
/var/tmp/dir/a4.txt
6
/var/tmp/dir/a5.txt
7
Diagnosis
The problem is manifesting as zombie processes:
root 1540 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 1551 0.0 0.0 12784 672 ? S 19:49 0:00 /usr/sbin/incrond
root 1553 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 1566 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 1576 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 2339 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 2348 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 2351 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 2355 0.0 0.0 12784 224 ? S 19:49 0:00 /usr/sbin/incrond
root 5471 0.0 0.0 0 0 ? Z 19:17 0:00 [incrond] <defunct>
root 5480 0.0 0.0 0 0 ? Z 19:17 0:00 [incrond] <defunct>
root 5483 0.0 0.0 0 0 ? Z 19:17 0:00 [incrond] <defunct>
root 5561 0.0 0.0 0 0 ? Z 19:17 0:00 [incrond] <defunct>
root 8012 0.0 0.0 0 0 ? Z 19:12 0:00 [incrond] <defunct>
root 8023 0.0 0.0 0 0 ? Z 19:12 0:00 [incrond] <defunct>
root 8025 0.0 0.0 0 0 ? Z 19:12 0:00 [incrond] <defunct>
root 8148 0.0 0.0 0 0 ? Z 19:12 0:00 [incrond] <defunct>
This is as far as I can inspect. I don't know how to look into this further.
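One hedged way to look a little further is to count the defunct incrond children directly:
# print STAT and command without headers, keep zombie incrond lines, count them
ps -eo stat=,args= | awk '$1 ~ /^Z/ && /incrond/' | wc -l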
The fix
If, instead of a regular exit, I exit 1, processes exit properly. So my /root/incron_script.sh becomes: echo "$@" >> /tmp/incrond_log.log && exit 1.
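Spelled out, the workaround script is just this (a minimal sketch, same paths as above):
#!/bin/sh
# /root/incron_script.sh: log the path incrond passes in, then exit
# non-zero so this incrond 0.5.12 child actually gets reaped (workaround)
echo "$@" >> /tmp/incrond_log.log
exit 1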
My status now looks like:
[root@server ~]# systemctl status incrond
● incrond.service - Inotify System Scheduler
Loaded: loaded (/usr/lib/systemd/system/incrond.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2018-12-11 14:09:04 +03; 16s ago
Process: 7882 ExecStart=/usr/sbin/incrond (code=exited, status=0/SUCCESS)
Main PID: 7888 (incrond)
Tasks: 6
Memory: 220.0K
CGroup: /system.slice/incrond.service
└─7888 /usr/sbin/incrond
Dec 11 14:09:09 server.example.com incrond[7888]: PATH (/var/tmp/dir) FILE (a1.txt) EVENT (IN_CREATE)
Dec 11 14:09:09 server.example.com incrond[7888]: (system::example) CMD (sh /root/incron_script.sh /var/tmp/dir/a1.txt )
Dec 11 14:09:10 server.example.com incrond[7888]: PATH (/var/tmp/dir) FILE (a2.txt) EVENT (IN_CREATE)
Dec 11 14:09:10 server.example.com incrond[7888]: (system::example) CMD (sh /root/incron_script.sh /var/tmp/dir/a2.txt )
Dec 11 14:09:10 server.example.com incrond[7888]: PATH (/var/tmp/dir) FILE (a3.txt) EVENT (IN_CREATE)
Dec 11 14:09:10 server.example.com incrond[7888]: (system::example) CMD (sh /root/incron_script.sh /var/tmp/dir/a3.txt )
Dec 11 14:09:11 server.example.com incrond[7888]: PATH (/var/tmp/dir) FILE (a4.txt) EVENT (IN_CREATE)
Dec 11 14:09:11 server.example.com incrond[7888]: (system::example) CMD (sh /root/incron_script.sh /var/tmp/dir/a4.txt )
Dec 11 14:09:11 server.example.com incrond[7888]: PATH (/var/tmp/dir) FILE (a5.txt) EVENT (IN_CREATE)
Dec 11 14:09:11 server.example.com incrond[7888]: (system::example) CMD (sh /root/incron_script.sh /var/tmp/dir/a5.txt )
Question
So is this the expected behavior then? Why does exit 0 keep the process alive while exit 1 doesn't? Where is this documented? Any suggestions on how I can debug this further?
Updates
2018-12-12: added diagnosis (zombie processes)
This seems to be part of a larger issue with incron 0.5.12 (incron/issues/52, incron/issues/53)

yum update throws Could not create lock at /var/run/yum.pid on CentOS 6.5

I have deployed a fresh CentOS 6.5 instance on my VMServer with development tools, X11, and a few other packages installed. On the first day, everything seemed to work fine. Later I could not use yum to update or install any packages; it throws the following error:
[root@localDev ~]# yum update
Loaded plugins: fastestmirror, refresh-packagekit, security
Cannot open logfile /var/log/yum.log
Could not create lock at /var/run/yum.pid: [Errno 30] Read-only file system: '/var/run/yum.pid'
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 20 M RSS (315 MB VSZ)
Started: Wed Jul 20 22:01:54 2016 - 00:03 ago
State : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 20 M RSS (315 MB VSZ)
Started: Wed Jul 20 22:01:54 2016 - 00:05 ago
State : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 20 M RSS (315 MB VSZ)
Started: Wed Jul 20 22:01:54 2016 - 00:07 ago
State : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 20 M RSS (315 MB VSZ)
Started: Wed Jul 20 22:01:54 2016 - 00:09 ago
State : Running, pid: 10750
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 20 M RSS (315 MB VSZ)
Started: Wed Jul 20 22:01:54 2016 - 00:11 ago
State : Running, pid: 10750
^C
Exiting on user cancel.
[root@localDev ~]#
No process with the mentioned PID 10750 is even running, according to the output of the ps command:
[root@localDev ~]# ps -eaf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jul19 ? 00:00:00 /sbin/init
root 2 0 0 Jul19 ? 00:00:00 [kthreadd]
root 3 2 0 Jul19 ? 00:00:00 [migration/0]
root 4 2 0 Jul19 ? 00:00:00 [ksoftirqd/0]
root 5 2 0 Jul19 ? 00:00:00 [migration/0]
root 6 2 0 Jul19 ? 00:00:00 [watchdog/0]
root 7 2 0 Jul19 ? 00:00:00 [migration/1]
root 8 2 0 Jul19 ? 00:00:00 [migration/1]
root 9 2 0 Jul19 ? 00:00:00 [ksoftirqd/1]
root 10 2 0 Jul19 ? 00:00:00 [watchdog/1]
root 11 2 0 Jul19 ? 00:00:02 [events/0]
root 12 2 0 Jul19 ? 00:01:03 [events/1]
root 13 2 0 Jul19 ? 00:00:00 [cgroup]
root 14 2 0 Jul19 ? 00:00:00 [khelper]
root 15 2 0 Jul19 ? 00:00:00 [netns]
root 16 2 0 Jul19 ? 00:00:00 [async/mgr]
root 17 2 0 Jul19 ? 00:00:00 [pm]
root 18 2 0 Jul19 ? 00:00:00 [sync_supers]
root 19 2 0 Jul19 ? 00:00:00 [bdi-default]
root 20 2 0 Jul19 ? 00:00:00 [kintegrityd/0]
root 21 2 0 Jul19 ? 00:00:00 [kintegrityd/1]
root 22 2 0 Jul19 ? 00:00:00 [kblockd/0]
root 23 2 0 Jul19 ? 00:00:00 [kblockd/1]
root 24 2 0 Jul19 ? 00:00:00 [kacpid]
root 25 2 0 Jul19 ? 00:00:00 [kacpi_notify]
root 26 2 0 Jul19 ? 00:00:00 [kacpi_hotplug]
root 27 2 0 Jul19 ? 00:00:00 [ata_aux]
root 28 2 0 Jul19 ? 00:00:00 [ata_sff/0]
root 29 2 0 Jul19 ? 00:00:00 [ata_sff/1]
root 30 2 0 Jul19 ? 00:00:00 [ksuspend_usbd]
root 31 2 0 Jul19 ? 00:00:00 [khubd]
root 32 2 0 Jul19 ? 00:00:00 [kseriod]
root 33 2 0 Jul19 ? 00:00:00 [md/0]
root 34 2 0 Jul19 ? 00:00:00 [md/1]
root 35 2 0 Jul19 ? 00:00:00 [md_misc/0]
root 36 2 0 Jul19 ? 00:00:00 [md_misc/1]
root 37 2 0 Jul19 ? 00:00:00 [linkwatch]
root 38 2 0 Jul19 ? 00:00:00 [khungtaskd]
root 39 2 0 Jul19 ? 00:00:00 [kswapd0]
root 40 2 0 Jul19 ? 00:00:00 [ksmd]
root 41 2 0 Jul19 ? 00:00:00 [khugepaged]
root 42 2 0 Jul19 ? 00:00:00 [aio/0]
root 43 2 0 Jul19 ? 00:00:00 [aio/1]
root 44 2 0 Jul19 ? 00:00:00 [crypto/0]
root 45 2 0 Jul19 ? 00:00:00 [crypto/1]
root 50 2 0 Jul19 ? 00:00:00 [kthrotld/0]
root 51 2 0 Jul19 ? 00:00:00 [kthrotld/1]
root 52 2 0 Jul19 ? 00:00:00 [pciehpd]
root 54 2 0 Jul19 ? 00:00:00 [kpsmoused]
root 55 2 0 Jul19 ? 00:00:00 [usbhid_resumer]
root 85 2 0 Jul19 ? 00:00:00 [kstriped]
root 162 2 0 Jul19 ? 00:00:00 [scsi_eh_0]
root 163 2 0 Jul19 ? 00:00:00 [scsi_eh_1]
root 169 2 0 Jul19 ? 00:00:02 [mpt_poll_0]
root 170 2 0 Jul19 ? 00:00:00 [mpt/0]
root 187 2 0 Jul19 ? 00:00:37 [scsi_eh_2]
root 291 2 0 Jul19 ? 00:00:00 [jbd2/sda2-8]
root 292 2 0 Jul19 ? 00:00:00 [ext4-dio-unwrit]
root 381 1 0 Jul19 ? 00:00:00 /sbin/udevd -d
root 564 2 0 Jul19 ? 00:00:02 [vmmemctl]
root 713 2 0 Jul19 ? 00:00:00 [jbd2/sda1-8]
root 714 2 0 Jul19 ? 00:00:00 [ext4-dio-unwrit]
root 715 2 0 Jul19 ? 00:00:00 [jbd2/sda3-8]
root 716 2 0 Jul19 ? 00:00:00 [ext4-dio-unwrit]
root 717 2 0 Jul19 ? 00:00:00 [jbd2/sda6-8]
root 718 2 0 Jul19 ? 00:00:00 [ext4-dio-unwrit]
root 761 2 0 Jul19 ? 00:00:00 [kauditd]
root 995 1 0 Jul19 ? 00:00:00 auditd
root 1020 1 0 Jul19 ? 00:00:01 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 1050 1 0 Jul19 ? 00:00:14 irqbalance --pid=/var/run/irqbalance.pid
rpc 1064 1 0 Jul19 ? 00:00:00 rpcbind
rpcuser 1082 1 0 Jul19 ? 00:00:00 rpc.statd
dbus 1192 1 0 Jul19 ? 00:00:00 dbus-daemon --system
root 1208 1 0 Jul19 ? 00:00:00 cupsd -C /etc/cups/cupsd.conf
root 1233 1 0 Jul19 ? 00:00:00 /usr/sbin/acpid
68 1242 1 0 Jul19 ? 00:00:00 hald
root 1243 1242 0 Jul19 ? 00:00:00 hald-runner
root 1282 1243 0 Jul19 ? 00:00:00 hald-addon-input: Listening on /dev/input/event0 /dev/input/event2
68 1290 1243 0 Jul19 ? 00:00:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root 1310 1 0 Jul19 ? 00:00:00 automount --pid-file /var/run/autofs.pid
root 1343 1 0 Jul19 ? 00:00:00 /usr/sbin/sshd
postgres 1377 1 0 Jul19 ? 00:00:00 /usr/pgsql-9.2/bin/postmaster -p 5432 -D /var/lib/pgsql/9.2/data
postgres 1379 1377 0 Jul19 ? 00:00:00 postgres: logger process
postgres 1381 1377 0 Jul19 ? 00:00:00 postgres: checkpointer process
postgres 1382 1377 0 Jul19 ? 00:00:00 postgres: writer process
postgres 1383 1377 0 Jul19 ? 00:00:01 postgres: wal writer process
postgres 1384 1377 0 Jul19 ? 00:01:13 postgres: autovacuum launcher process
postgres 1385 1377 0 Jul19 ? 00:00:01 postgres: stats collector process
root 1463 1 0 Jul19 ? 00:00:00 /usr/libexec/postfix/master
postfix 1472 1463 0 Jul19 ? 00:00:00 qmgr -l -t fifo -u
root 1487 1 0 Jul19 ? 00:00:00 /usr/sbin/abrtd
root 1506 1 0 Jul19 ? 00:00:00 /usr/sbin/atd
root 1545 1 0 Jul19 ? 00:00:00 /usr/sbin/certmonger -S -p /var/run/certmonger.pid
root 1558 1 0 Jul19 tty1 00:00:00 /sbin/mingetty /dev/tty1
root 1560 1 0 Jul19 tty2 00:00:00 /sbin/mingetty /dev/tty2
root 1562 1 0 Jul19 tty3 00:00:00 /sbin/mingetty /dev/tty3
root 1564 1 0 Jul19 tty4 00:00:00 /sbin/mingetty /dev/tty4
root 1566 1 0 Jul19 tty5 00:00:00 /sbin/mingetty /dev/tty5
root 1568 1 0 Jul19 tty6 00:00:00 /sbin/mingetty /dev/tty6
root 1569 381 0 Jul19 ? 00:00:00 /sbin/udevd -d
root 1570 381 0 Jul19 ? 00:00:00 /sbin/udevd -d
root 10436 1343 0 19:28 ? 00:00:00 sshd: root@pts/0
root 10438 1343 0 19:28 ? 00:00:00 sshd: root@notty
root 10440 10438 0 19:28 ? 00:00:00 /usr/libexec/openssh/sftp-server
root 10449 10436 0 19:28 pts/0 00:00:00 -bash
postfix 10670 1463 0 21:15 ? 00:00:00 pickup -l -t fifo -u
root 10756 2 0 22:04 ? 00:00:00 [flush-8:0]
root 10765 10449 0 22:09 pts/0 00:00:00 ps -eaf
[root@localDev ~]#
After some googling, I found that the root partition is mounted read-only (ro) in this setup. I attempted to remount the root partition "/" as read-write using the command mount -o remount,rw /, which results in another error message:
[root@localDev ~]# mount -o remount,rw /
mount: cannot remount block device /dev/sda2 read-write, is write-protected
Following is the output of the command cat /proc/mounts:
[root@localDev ~]# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=1952148k,nr_inodes=488037,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda2 / ext4 ro,relatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sda3 /home ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sda6 /tmp ext4 rw,relatime,barrier=1,data=ordered 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/etc/auto.misc /misc autofs rw,relatime,fd=7,pgrp=1310,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs rw,relatime,fd=13,pgrp=1310,timeout=300,minproto=5,maxproto=5,indirect 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
What's wrong with this setup? With my little debugging knowledge, I tried to modify the mount configuration during boot, which also failed. Kindly suggest a fix to resolve this issue.
Thanks in advance...
"Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum"
CentOS 6.5 is an old version (released in December 2013); the current update level is 6.8.
You can either wait for the "search for updates" process to finish with its roughly 1000 updates, or kill the process.
Try executing the commands below.
[root@xyz ~]# ps -ef | grep yum
root 4511 4383 24 15:21 ? 00:00:39 /usr/bin/python /usr/share/PackageKit/helpers/yum/yumBackend.py get-updates none
root 4558 4524 0 15:24 pts/1 00:00:00 grep yum
[root@xyz ~]# kill 4511
Now execute yum update.
This happens because another application is holding the yum lock; to resolve it, we should kill the currently running process.
To find the currently running process IDs:
$ ps aux | grep yum
root 2640 1 0 Nov09 ? 00:00:00 /usr/bin/python -tt /usr/sbin/yum-updatesd
root 13974 6577 0 10:27 pts/1 00:00:00 grep yum
root 17552 2640 0 09:16 ? 00:00:00 /usr/bin/python -tt /usr/libexec/yum-updatesd-helper --check --dbus
To kill the process:
$ kill process_id
Kill all the running processes:
kill 2640
kill 17552
Please check again whether any other yum process is running; if so, kill that one as well.
Now update:
$ yum update -y
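A hedged one-shot version of the steps above (find anything with "yum" in its command line, kill it, retry):
pgrep -f yum                 # lists yumBackend.py, yum-updatesd, etc.
pgrep -f yum | xargs -r kill
yum update -y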
Option 1: kill the process
kill -9 process_id
Option 2: kill all yum processes
killall -9 yum
Option 3: remove the yum.pid lock file
rm -f /var/run/yum.pid
yum -y update
The easy way to fix this issue:
rm -f /var/run/yum.pid
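Note, though, that in the question above the root filesystem is mounted read-only, so rm -f /var/run/yum.pid will fail with the same Errno 30 until that is fixed. A hedged first step is to ask the kernel why the device was write-protected:
dmesg | grep -iE 'read-only|ext4|i/o error' | tail
mount -o remount,rw /    # retry once the underlying cause is cleared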

FATAL ERROR: Evacuation Allocation failed - process out of memory

Whatever I run on my Ubuntu server, I always get this error. Does anyone know why?
FATAL ERROR: Evacuation Allocation failed - process out of memory
$ node app.js
FATAL ERROR: Evacuation Allocation failed - process out of memory
Aborted (core dumped)
$ npm install
FATAL ERROR: Evacuation Allocation failed - process out of memory
Aborted (core dumped)
$ grunt -grunfile Gruntfile-online.js
FATAL ERROR: Malloced operator new Allocation failed - process out of memory
Aborted (core dumped)
EDIT1
$ free
total used free shared buffers cached
Mem: 4194304 2177148 2017156 0 0 936864
-/+ buffers/cache: 1240284 2954020
Swap: 3145728 4 3145724
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/ploop36057p1 150G 7.6G 137G 6% /
none 2.0G 4.0K 2.0G 1% /dev
none 410M 64K 410M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
EDIT2
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
511 mongodb 20 0 879m 30m 7896 S 0.3 0.8 28:37.01 mongod
689 youtrack 20 0 2034m 671m 6632 S 0.3 16.4 57:36.62 java
28610 my 20 0 17288 1380 1080 R 0.3 0.0 0:00.03 top
1 root 20 0 24148 1804 1060 S 0.0 0.0 0:05.11 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd/107656
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper/107656
4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/0
5 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
6 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/2
7 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/3
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/4
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/5
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/6
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/7
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/8
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/9
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
15 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
16 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
17 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
18 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
20 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
21 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
22 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
23 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/1
24 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/2
25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/2
26 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/2
27 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/107656/2
28 root 20 0 0 0 0 S 0.0 0.0 0:00.00 nfsiod/107656
If you do not have swap activated, your process will fail when there is not enough memory available; it sounds like that is the issue.
https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04
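For reference, a minimal sketch of what the linked guide sets up (a 1G swap file; adjust the size to taste):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab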

httpd not listed in process list although it's running without any issue

How can I check the status of a process that is running but not listed in the ps or top command output?
I have started the httpd (Apache) service and it is working perfectly; I am able to see the web page. But neither top nor ps displays the httpd process.
What is the issue? I am logged in as the root user.
Is there any command to check the process status if the process ID is not listed?
[root@ip-xx-xxx-xx-xxx /]# service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
top - 19:54:08 up 10 days, 5:04, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 70 total, 1 running, 69 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 617044k total, 307312k used, 309732k free, 30660k buffers
Swap: 0k total, 0k used, 0k free, 218968k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 2892 1360 1164 S 0.0 0.2 0:00.34 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:01.82 ksoftirqd/0
4 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
6 root 20 0 0 0 0 S 0.0 0.0 0:26.84 events/0
7 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuset
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 netns
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
17 root 20 0 0 0 0 S 0.0 0.0 0:00.00 xenwatch
18 root 20 0 0 0 0 S 0.0 0.0 0:00.00 xenbus
64 root 20 0 0 0 0 S 0.0 0.0 0:02.37 sync_supers
66 root 20 0 0 0 0 S 0.0 0.0 0:02.58 bdi-default
67 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kintegrityd/0
69 root 20 0 0 0 0 S 0.0 0.0 0:00.03 kblockd/0
76 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
184 root 20 0 0 0 0 S 0.0 0.0 0:00.16 khungtaskd
185 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kswapd0
186 root 25 5 0 0 0 S 0.0 0.0 0:00.00 ksmd
238 root 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/0
241 root 20 0 0 0 0 S 0.0 0.0 0:00.00 crypto/0
252 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khvcd
332 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kstriped
519 root 20 0 0 0 0 S 0.0 0.0 0:00.37 jbd2/xvda1-8
520 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ext4-dio-unwrit
548 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khubd
599 root 16 -4 2504 644 352 S 0.0 0.1 0:00.02 udevd
824 root 20 0 0 0 0 S 0.0 0.0 0:00.09 kauditd
858 root 18 -2 2500 640 352 S 0.0 0.1 0:00.00 udevd
859 root 18 -2 2500 636 348 S 0.0 0.1 0:00.00 udevd
983 root 20 0 2840 760 488 S 0.0 0.1 0:00.01 dhclient
1020 root 16 -4 10896 580 428 S 0.0 0.1 0:00.85 auditd
1035 root 20 0 29628 1436 964 S 0.0 0.2 0:01.37 rsyslogd
1056 dbus 20 0 2980 884 700 S 0.0 0.1 0:00.58 dbus-daemon
1151 root 20 0 8192 888 468 S 0.0 0.1 0:00.43 sshd
1171 ntp 20 0 5072 1368 1036 S 0.0 0.2 0:01.34 ntpd
I prefer to use:
pgrep -l httpd
Example:
[root@mywebserver ~]$ pgrep -l httpd
3661 httpd
3665 httpd
3673 httpd
3678 httpd
3683 httpd
3688 httpd
3694 httpd
3701 httpd
: .. more....
Counting those lines also helps me to see if the server is getting overloaded.
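If you only need the number, a hedged shortcut is pgrep's count flag:
pgrep -c httpd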
$ ps -efw | grep -i httpd
Also:
$ top
then press 'u' (display a single user's processes) and type the username, in this case apache, followed by Enter.
