remove xmr.crypto-pool.fr process ubuntu - security

There has been an anonymous process running on my Ubuntu server, which is using 100% of the memory.
User: Tomcat
Process: /tmp/autox -B -a cryptonight -o stratum+tcp://xmr.crypto-pool.fr:443 -u 47TS1NQvebb3Feq91MqKdSGCUq18dTEdmfTTrRSGFFC2fK85NRdABwUasUA8EUaiuLiGa6wYtv5aoR8BmjYsDmTx9DQbfRX -p x
I keep killing the process and deleting the file I found in the /tmp folder, but it recreates the file under a different name and starts the process again.
I dropped INPUT and OUTPUT for xmr.crypto-pool.fr in iptables.
It has become a real nuisance on the server.
Please help!

It seems that your server has been hacked or infected with miner malware (malware which uses your system to mine cryptocurrency; Monero, in this case).
Look for any suspicious processes or cron jobs; a few starting points are sketched after the link below.
https://security.stackexchange.com/questions/129448/how-can-i-kill-minerd-malware-on-an-aws-ec2-instance
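A few hedged starting points for hunting the process and its cron entry; the user name is taken from the question above, and the paths are common drop locations rather than anything specific to this malware:
ps auxf                          # full process tree; look for odd binaries in /tmp
ls -la /tmp /var/tmp /dev/shm    # common drop locations for miner payloads
crontab -u tomcat -l             # the miner ran as the Tomcat user here
ls -la /etc/cron.d /etc/cron.hourly /var/spool/cron/crontabs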

I just realized today that my server was infected as well. The process for me wasn't /tmp/autox but an instance of ./http.conf (note the ./ at the beginning). When looking at the cron jobs I found this one:
* * * * * /tmp/.-/update >/dev/null 2>&1
I then deleted the line from my crontab and went to /tmp/.-/, where the malware is indeed sitting.
As a temporary fix I added an exit statement at the top of each script found in that folder. I don't want to delete it just yet, as I want to investigate a bit more.
I also deleted all the keys that had been added to .ssh/authorized_keys.
Now when killing the process it doesn't start again.
I'll update my answer if I find anything else.
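For reference, the cleanup steps above might look like this as commands; treat it as a sketch (the sed line assumes the dropped files are shell scripts with a shebang on line 1):
crontab -e                                             # delete: * * * * * /tmp/.-/update >/dev/null 2>&1
for f in /tmp/.-/*; do sed -i '1a exit 0' "$f"; done   # make each script exit before doing anything
vi ~/.ssh/authorized_keys                              # remove any keys you did not add yourself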

Related

kthreaddk in postgres uses high cpu

I use Postgres in a Node.js project, but my CPU is at 100% on my Ubuntu server.
I used this command:
killall -9 kthreaddk
I stopped my project and stopped the postgresql service. After killing kthreaddk the CPU drops to 0%, but after 30 seconds kthreaddk runs again and the CPU is at 100% again.
What is kthreaddk and how do I stop it for good?
I have tried many of the approaches here on Stack Overflow, but I can't solve it.
kthreaddk is started by a cron job. Once it runs, it usually places copies of its code in different directories and keeps rewriting the crontab.
To get rid of it, follow these steps:
1. Identify which user's crontab is running it:
$ cd /var/spool/cron/crontabs
# Preview each file here, e.g.
$ cat www-data
* * * * * /run/c8aaf4bea
The /run/c8aaf4bea entry looks weird, but do not remove it yet...
2. Block that specific user (e.g. www-data) from updating the crontab. Edit the cron.deny file:
$ sudo vim /etc/cron.deny
and add the user name:
www-data
Now the kthreaddk process is not able to edit the crontab anymore.
3. Kill all the kthreaddk processes (pkill matches the name as a substring):
$ sudo pkill -9 threaddk
4. Remove the suspect line from the crontab:
$ sudo vim /var/spool/cron/crontabs/www-data
* * * * * /run/c8aaf4bea <- remove this line
5. Remove the user from the cron.deny file.
It's a miner. It uses the crontab to restart (start) itself.
crontab -u postgres -l
I had a similar problem. Just remove the job from the crontab and restart the server immediately, as sketched below.
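A minimal sketch of that cleanup, assuming the miner's entry lives in the postgres user's crontab:
crontab -u postgres -l    # list entries and spot the suspicious path
crontab -u postgres -e    # delete the miner's line
crontab -u postgres -r    # or wipe the whole crontab if every entry is malicious
sudo reboot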
I had a miner on my VPS; CPU usage was always 100%. At first I thought I had a memory leak in my Java app or Tomcat. I could kill the process, but it would start another one within a few seconds. In my case it was running under a user account I didn't use. I killed all of that user's processes with pkill -u username and then quickly deleted the user with sudo deluser --remove-home username before the miner could start new processes. After this the VPS worked fine. Maybe it will help someone.

How to log the live output of a running process

I want to run a game server on my Ubuntu machine. I want to run it in the background and write the live output of that process to a log file. I tried using nohup and running the game server with "&" at the end, but I couldn't make it work the way I wanted.
Then I started reading about named pipes and actually gave it a go. I made a simple script that in theory should work. But, of course, I am missing something.
First, I made a pipe using the mkfifo command.
mkfifo pipe
Then I created a small script:
#!/bin/bash
./mta-server64 > pipe &
pid=$!
echo $pid   # so I know the PID of the process
cat < pipe > log.txt &
(Note: I wrote this code from memory.)
The code works only when there is an error and the process stops. It actually records the game console error. But when the game server is running I get no output in the log file.
I want to read the output (stdout and stderr, if I am not mistaken) of a process running in the background and record it in a log file.
I also thought about using screen, as it logs everything to a file, but I would prefer not to use it if there is a better solution.
EDIT:
First of all, thank you for your interest in helping me. I also have to apologize for giving only scarce details about what I intend to do with this small project, and for my limited understanding of stdout and stderr.
Let's go to the first base.
I want to run a game server named Multi Theft Auto (https://multitheftauto.com/). This is GTA San Andreas but multiplayer.
I can easily run this game server on my Ubuntu server by calling the executable ./mta-server64. After calling it, the game server console appears:
[|] MTA: San Andreas :: 0/32 players :: 196 resources :: 125 fps (25)
MTA:BLUE Server for MTA:SA
==================================================================
= Multi Theft Auto: San Andreas v1.5.6 [64 bit]
==================================================================
= Server name : Default MTA Server
= Server IP address: auto
= Server port : 22884
=
= Log file : /root/mta/mods/deathmatch/logs/server.log
= Maximum players : 32
= HTTP port : 22564
= Voice Chat : Disabled
= Bandwidth saving : Medium
==================================================================
[09:49:07] Resource 'mapmanager' requests some acl rights. Use the command 'aclrequest list mapmanager'
[09:49:07] Resources: 196 loaded, 0 failed
[09:49:07] Starting resources...
[09:49:07] Server minclientversion is now 1.5.6-9.16588.0
[09:49:07] INFO: MAPMANAGER: Some important ACL permissions are missing. To ensure the correct functioning of Mapmanager, please write: aclrequest allow mapmanager all
[09:49:07] Gamemode 'play' started.
[09:49:07] Authorized serial account protection is enabled for the ACL group(s): `Admin` See http://mtasa.com/authserial
[09:49:07] WARNING: <owner_email_address> not set
[09:49:07] Server started and is ready to accept connections!
[09:49:07] To stop the server, type 'shutdown' or press Ctrl-C
[09:49:07] Type 'help' for a list of commands.
[09:49:07] Querying MTA master server... success! (Auto detected IP:xxx.xxx.xxx.xxx)
I am using the following script to run the process in the background and (try to) get the live output from it:
#!/bin/bash
newport=$(shuf -i 22003-22900 -n 1)
newip=$(shuf -i 22003-22900 -n 1)   # despite the name, this is also a port
rm -rf ~/server/*
cp -r /home/user*/ftp/server/mtaserver/serverfiles/* ~/server
sed -i "s/<httpport>[0-9][0-9][0-9][0-9][0-9]<\/httpport>/<httpport>$newport<\/httpport>/g" ~/server/mods/deathmatch/mtaserver.conf
sed -i "s/<serverport>[0-9][0-9][0-9][0-9][0-9]<\/serverport>/<serverport>$newip<\/serverport>/g" ~/server/mods/deathmatch/mtaserver.conf
~/server/mta-server64 2>&1 | tee -a outfile &
mta_pid=$!   # note: this is the PID of tee (the last command in the pipeline), not the server
echo $mta_pid
sleep 6
kill $mta_pid   # kill takes a PID; pkill expects a name pattern
(Note: Because of some technical problems, I had to add the first few lines of the script, which automatically replace the game files with new ones and replace the existing ports with random ones.)
This script starts the server and tries to log the output of the process. The process is automatically killed after a few seconds, so there is only one instance of the game server at any given time.
THE ISSUE:
This script only logs the output if there is an error. I still cannot get the live output of the process while it is running. Maybe this is an issue with the game server, but I truly believe there should be a way to make it work the way I intend.
I believe you want to use the tee command to split the pipe output into a log file.
I suggest you read this article and these answers 1 2.
Usually this is enough: nohup somecommand > somecommand.log 2>&1 &. Then use tail -F somecommand.log to follow the logs.
After 2 days I finally figured out a way to make it work (the way I intended it to work, without taking into consideration any major security/performance risks).
Reading the comments made me realize I was attacking the wrong point. The stdout of the game server is buffered, thus making it impossible to log it to a file using the methods I tried when I posted my question (at least this is what I came to understand).
I did some research on how to run the application without having the stdout buffered: https://serverfault.com/questions/294218/is-there-a-way-to-redirect-output-to-a-file-without-buffering-on-unix-linux
My code now:
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &
After creating the named pipe, it executes the game server with its stdout going into that pipe, and then appends the output to the log file.
The stdbuf -o0 command disables the stdout buffering (as noted in the link above).
This works for me and I cannot guarantee it will work for anybody else. I am still not aware if disabling the buffering is a safe approach to my issue but for now it is what I need.
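Putting the pieces together, a self-contained sketch of this approach (the paths and file names follow the snippets above):
#!/bin/bash
[ -p pipe ] || mkfifo pipe                    # create the named pipe once
stdbuf -o0 ~/server/mta-server64 >> pipe &    # run unbuffered, stdout into the pipe
server_pid=$!
cat < pipe | tee -a outfile &                 # drain the pipe into the log file
echo "game server running as PID $server_pid"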

Ubuntu bash script spiking CPU usage and not dropping, when run via crontab

I'm pretty new to bash scripting, but I have constructed something that works. It copies new/changed files from a folder on my web server to another directory. This directory is then compressed and the compressed folder is uploaded to my Dropbox account.
This works perfectly when I run it manually with:
sudo run-parts /path/to/bash/scripts
I wanted to automate this, so I edited my crontab file using sudo crontab -e to include the following:
0 2 * * * sudo run-parts /path/to/bash/scripts
This works, but with one issue. It spikes my CPU usage to 60%, and the usage doesn't drop until I open htop and kill the final process (the script that does the uploading). When it runs the next day, CPU usage spikes to 100% and stays there, because the previous day's run is still going. This issue doesn't occur when I run the scripts manually.
Thoughts?

Shell script only starting applications when used through ssh

What can cause .sh scripts to work fine through an SSH shell, but not when executed through either PHP or crontab?
I have a VPS where I run game servers, but in order to make it maintainable, I am planning to automate many of the tedious processes (like setting up or deleting a server) and make important features (like starting and stopping servers) easily accessible to the people who actually need them.
Now, when I made the shell scripts and tested them, they worked absolutely fine. startserver started the server, restartserver restarted it, etc. But when run from PHP, or, as I later figured out, crontab, starting servers magically does not work. Stopping them, checking whether they are running, updating, and all other features worked as intended, but starting a server just did not do anything. It just returned 0 while printing nothing.
For example, here is a script which works in either case: (statusserver.sh)
/sbin/start-stop-daemon -v -t --start --exec ~mta/servers/$1/files/mta-server -- -d
And here is one which does not work in any case: (startserver.sh)
/sbin/start-stop-daemon -v --start --exec ~mta/servers/$1/files/mta-server -- -d
The only difference is that statusserver.sh has "-t", which only tells you whether running the same command without -t would actually succeed. And executing statusserver.sh like so:
sudo -u mta ~mta/sh/statusserver.sh test
Indeed does work, printing something along the lines of "Would start ~mta/servers/test/files/mta-server -d". But doing this:
sudo -u mta ~mta/sh/startserver.sh $2
Does absolutely nothing. It does not print anything, and it actually returns 0. (which is supposed to mean the operation was successful)
Now for the fun part: When the server is already running, startserver.sh will do what it is supposed to do: Say that the server is already running, and returning an error code. (Because start-stop-daemon is kind enough to do that for me) But it flat out refuses to launch anything.
Replacing start-stop-daemon with something like:
sudo -u mta ~mta/servers/test/files/mta-server -d
Does exactly the same thing: It will just refuse to run, while still returning 0.
Oh by the way, it's not a sudo problem. Of that I am quite sure, since the following works fine too
sudo -u web1 sudo -u mta ~mta/scripts/startserver.sh test
So back to my question: What can cause Linux, Shell, Bash or whatever to flat out refuse to start an application when run through either PHP or crontab, while happily accepting it when launched through SSH? Is there any setting I need to switch? Any package that can be blocking up what I want to do? Any other thing I am just missing?
Look into using sudo.
Set up /etc/sudoers (using visudo) for the user that Apache runs as (usually the 'nobody' or 'apache' user). Grant sudo access to the commands you want to run, with the NOPASSWD option.
In your PHP script, use exec() to execute the commands that start/stop daemons, prefixing the commands with sudo.
Here is an article about sudo:
http://www.cyberciti.biz/tips/allow-a-normal-user-to-run-commands-as-root.html
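A sketch of what such a sudoers entry might look like (edit with visudo; the web user name and the script paths are assumptions based on the question):
# Let the web user run the server scripts as the mta user without a password
www-data ALL=(mta) NOPASSWD: /home/mta/sh/startserver.sh, /home/mta/sh/stopserver.sh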
As I think Justin was touching on, but didn't say specifically, the problem is likely that the Apache user account (which is deliberately quite limited) can't see into the user's home directory because of the permissions; generally only the user and root can see into a home directory. You can do a few things: use sudo to run the script in the home directory, move the script out of the user's home directory, or change the permissions on the scripts/home directories so Apache can run them where they are.

How to make sure an application keeps running on Linux

I'm trying to ensure a script remains running on a development server. It collates stats and provides a web service, so it's supposed to persist, yet a few times a day it dies off for unknown reasons. When we notice, we just launch it again, but it's a pain in the rear, and some users don't have permission (or the know-how) to launch it.
The programmer in me wants to spend a few hours getting to the bottom of the problem but the busy person in me thinks there must be an easy way to detect if an app is not running, and launch it again.
I know I could cron-script ps through grep:
ps -A | grep appname
But again, that's another hour of my life wasted on doing something that must already exist... Is there not a pre-made app that I can pass an executable (optionally with arguments) and that will keep a process running indefinitely?
In case it makes any difference, it's Ubuntu.
I have used a simple script with cron to make sure that the program is running. If it is not, then it will start it up. This may not be the perfect solution you are looking for, but it is simple and works rather well.
#!/bin/bash
# make-run.sh
# Make sure a process is always running.
export DISPLAY=:0   # needed if you are running a simple GUI app
process=YourProcessName
makerun="/usr/bin/program"
if ps ax | grep -v grep | grep "$process" > /dev/null
then
    exit
else
    $makerun &
fi
exit
Then add a cron job every minute, or every 5 minutes.
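The crontab entry might look like this (add via crontab -e; the script path is a placeholder):
*/5 * * * * /path/to/make-run.sh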
Monit is perfect for this :)
You can write simple config files which tell monit to watch, e.g., a TCP port, a PID file, etc.
monit will run a command you specify when the process it is monitoring is unavailable, using too much memory, or pegging the CPU for too long, etc. It will also send an email alert telling you what happened and whether it could do anything about it.
We use it to keep a load of our websites running while giving us early warning when something's going wrong.
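A minimal monit config sketch, assuming a PID-file-based service (the names, paths, port, and address are all placeholders):
check process myapp with pidfile /var/run/myapp.pid
    start program = "/etc/init.d/myapp start"
    stop program = "/etc/init.d/myapp stop"
    if failed port 8080 protocol http then restart
    alert admin@example.com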
-- Your faithful employee, Monit
Notice: Upstart is in maintenance mode and was abandoned by Ubuntu, which now uses systemd. Check the systemd manual for details on how to write a service definition.
Since you're using Ubuntu, you may be interested in Upstart, which has replaced the traditional sysV init. One key feature is that it can restart a service if it dies unexpectedly. Fedora has moved to Upstart, and Debian has it in experimental, so it may be worth looking into.
This may be overkill for this situation though, as a cron script will take 2 minutes to implement.
#!/bin/bash
if [[ ! $(pidof -s yourapp) ]]; then
    invoke-rc.d yourapp start
fi
If you are using a systemd-based distro such as Fedora and recent Ubuntu releases, you can use systemd's "Restart" capability for services. It can be setup as a system service or as a user service if it needs to be managed by, and run as, a particular user, which is more likely the case in OP's particular situation.
The Restart option takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always.
To run it as a user, simply place a file like the following into ~/.config/systemd/user/something.service:
[Unit]
Description=Something
[Service]
ExecStart=/path/to/something
Restart=on-failure
[Install]
WantedBy=graphical.target
then:
systemctl --user daemon-reload
systemctl --user [status|start|stop|restart] something
No root privilege / modification of system files needed, no cron jobs needed, nothing to install, flexible as hell (see all the related service options in the documentation).
See also https://wiki.archlinux.org/index.php/Systemd/User for more information about using the per-user systemd instance.
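If the service should keep running when the user is not logged in, lingering helps (a usage note; 'something' is the unit name from above):
systemctl --user enable something.service   # start the unit at login
sudo loginctl enable-linger $USER           # keep the user's systemd instance running at boot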
From cron I have used killall -0 programname || /etc/init.d/programname start. kill -0 will return an error if the process doesn't exist. If it does exist, it delivers a null signal to the process (which the kernel will ignore and not bother passing on).
This idiom is simple to remember (IMHO). Generally I use this while I'm still trying to discover why the service itself is failing. IMHO a program shouldn't just disappear unexpectedly :)
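As a crontab entry, that idiom might look like this (the program name and init script are placeholders):
* * * * * killall -0 programname || /etc/init.d/programname start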
Put your app in a loop, so that when it exits, it runs again: while true; do run-my-app; done. A fuller sketch follows.
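In bash, that respawn loop might look like this (the app path is a placeholder):
#!/bin/bash
while true; do
    /usr/bin/myapp
    sleep 1   # avoid a tight spin if the app crashes immediately
done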
I couldn't get Chris Wendt's solution to work for some reason, and it was hard to debug. This one is pretty much the same but easier to debug, and it excludes bash from the pattern matching. To debug, just run: bash /root/makerun-mysql.sh. In the following example with mysql-server, just replace the values of the process and makerun variables for your process.
Create a BASH-script like this (nano /root/makerun-mysql.sh):
#!/bin/bash
process="mysql"
makerun="/etc/init.d/mysql restart"
if ps ax | grep -v grep | grep -v bash | grep --quiet "$process"
then
    printf "Process '%s' is running.\n" "$process"
    exit
else
    printf "Starting process '%s' with command '%s'.\n" "$process" "$makerun"
    $makerun
fi
exit
Make sure it's executable by setting proper file permissions (e.g. chmod 700 /root/makerun-mysql.sh).
Then add this to your crontab (crontab -e):
# Keep processes running every 5 minutes
*/5 * * * * bash /root/makerun-mysql.sh
The supervise tool from daemontools would be my preference - but then everything Dan J Bernstein writes is my preference :)
http://cr.yp.to/daemontools/supervise.html
You have to create a particular directory structure for your application startup script, but it's very simple to use.
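A sketch of that structure (names and paths are placeholders; supervise expects an executable ./run script in the service directory):
mkdir -p /service/myapp
printf '#!/bin/sh\nexec /usr/bin/myapp\n' > /service/myapp/run   # the run script execs your app
chmod +x /service/myapp/run
supervise /service/myapp &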
First of all, how do you start this app? Does it fork itself to the background? Is it started with nohup ... &, etc.? If it's the latter, check why it died in nohup.out; if it's the former, build logging.
As for your main question: you could cron it, or run another process in the background (not the best choice) and use pidof in a bash script, easy enough:
if ! pidof -s app > /dev/null; then   # pidof exits non-zero when no process is found
    nohup app &
fi
You could make it a service launched from inittab (although some Linuxes have moved on to something newer in /etc/event.d). These built-in systems make sure your service keeps running without you writing your own scripts or installing anything new.
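An inittab respawn entry might look like this (a sketch; the id and path are placeholders, and init q reloads the file):
myap:2345:respawn:/usr/bin/myapp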
It's a job for a DMD (daemon monitoring daemon). There are a few around, but I usually just write a script that checks whether the daemon is running and starts it if not, and put it in cron to run every minute.
Check out 'nanny', referenced in Chapter 9 (p. 197 or thereabouts) of The Unix-Haters Handbook (one of several sources for the book in PDF).
A nice, simple way to do this is as follows:
Write your server to die if it can't listen on the port it expects
Set a cronjob to try to launch your server every minute
If it isn't running it'll start, and if it is running it won't. In any case, your server will always be up.
I think a better solution is to test the functionality too. For example, if you have to monitor Apache, it is not enough to test whether "apache" processes exist on the system.
If you want to test whether Apache is actually OK, try to download a simple web page and test whether your unique marker is in the output.
If not, kill Apache with -9 and then restart it. And send a mail to root (which is a forwarded mail address to the people responsible for the company/server/project).
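A sketch of such a functional check (the URL, marker string, and service name are placeholders):
#!/bin/bash
# Fetch a page and look for a known marker; restart and notify on failure.
if ! curl -fsS http://localhost/health | grep -q 'OK-marker'; then
    pkill -9 apache2
    service apache2 start
    echo "apache2 restarted on $(hostname)" | mail -s 'apache2 restart' root
fi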
It's even simpler:
#!/bin/bash
export DISPLAY=:0
process=processname
makerun="/usr/bin/processname"
if ! pgrep "$process" > /dev/null
then
    $makerun &
fi
You have to remember though to make sure processname is unique.
One can install a minutely monitoring cron job like this:
crontab -l > crontab
echo '* * * * * export DISPLAY=":0.0" && for app in "eiskaltdcpp-qt" "transmission-gtk" "nicotine"; do ps aux | grep -v grep | grep -q "$app" || "$app" & done' >> crontab
crontab crontab
The disadvantage is that the app names you enter have to appear in the ps aux | grep "appname" output and, at the same time, be launchable under that same name: "appname" &
You can also use the pm2 process manager.
If Node.js and npm are available, install pm2 globally:
sudo npm install pm2 -g
Then you can keep a process running.
For an arbitrary service:
sudo pm2 start [service_name]
For a Node app:
pm2 start index.js
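To have pm2 restore its apps after a reboot, its startup and save subcommands can be used (a usage note):
pm2 startup   # generates and installs a boot script for pm2 itself
pm2 save      # saves the current process list for restoration at boot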
