Ubuntu bash script spiking CPU usage and not dropping when run via crontab

I'm pretty new to bash scripting, but I've put together something that works. It copies new/changed files from a folder on my web server to another directory, which is then compressed, and the compressed archive is uploaded to my Dropbox account.
This works perfectly when I run it manually with:
sudo run-parts /path/to/bash/scripts
I wanted to automate this, so I edited my crontab file using sudo crontab -e to include the following:
0 2 * * * sudo run-parts /path/to/bash/scripts
This works, but with one issue: it spikes my CPU usage to 60% and it doesn't drop until I open htop and kill the final process (the script that does the uploading). When it runs the next day, CPU usage spikes to 100% and stays there, because the previous day's run is still going. This issue doesn't occur when I run the scripts manually.
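For context, the pipeline looks roughly like this; this is only a sketch, and the paths and the Dropbox uploader command are placeholders rather than the actual contents of /path/to/bash/scripts:
#!/bin/bash
SRC=/var/www/mysite                  # folder on the web server (placeholder)
STAGE=/home/backup/staging           # directory that gets compressed (placeholder)

rsync -a "$SRC"/ "$STAGE"/                            # copy new/changed files
tar -czf /tmp/site-backup.tar.gz -C "$STAGE" .        # compress the staging directory
/opt/dropbox_uploader.sh upload /tmp/site-backup.tar.gz /backups/   # hypothetical uploader script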
Thoughts?

Related

Is there a way to make crontab run a GNU Screen session?

I have a Discord bot running on a Raspberry Pi that I need to restart every day, and I'm trying to do this through crontab, but with no luck.
It manages to kill the active screen processes but never starts a new instance, at least not one I can see with "screen -ls", and I can tell it isn't creating a hidden one either because the bot itself never comes online.
Here is the script:
#!/bin/bash
sudo pkill screen                                              # kill any existing screen sessions
sleep 2
screen -dmS discordBot                                         # start a new detached session named discordBot
sleep 2
screen -S "discordBot" -X stuff "node discordBot/NEWNEWNEWN\n" # type the start command into that session
Here is also the crontab:
0 0 * * * /bin/bash /home/admin/discordBot/script.sh
Is it possible to have crontab run a screen session? And if so how?
Previously I tried putting the screen command straight into cron, but now I have it in a bash script instead.
If I run the script in the terminal it works perfectly; it's only under cron that it fails. Also, replacing "screen" with the full path "/usr/bin/screen" does not change anything.
So the best way of doing it, without investigating the underlying problem, would be to create a systemd service and put a restart command into cron.
 
/etc/systemd/system/mydiscordbot.service:
[Unit]
Description=A very simple discord bot
[Service]
Type=simple
ExecStart=node /path/to/my/discordBot.js
[Install]
WantedBy=multi-user.target
Now you can start your bot with systemctl start mydiscordbot and view its logs with journalctl --follow -u mydiscordbot.
Now you only need to put
45 2 * * * systemctl restart mydiscordbot
into root's crontab and you should be good to go.
You should probably also write the logs to a logfile, maybe /var/log/mydiscordbot.log, but whether and how you do that is up to you.
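One way to do the logging, as a sketch (this relies on systemd's append: output targets, which need systemd 240 or newer), is to add two lines to the [Service] section of the unit:
[Service]
# ...keep the existing Type= and ExecStart= lines...
StandardOutput=append:/var/log/mydiscordbot.log
StandardError=append:/var/log/mydiscordbot.log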
OLD/WRONG ANSWER:
cron runs with a minimal environment, and depending on your OS, /usr/bin/ may not be in the $PATH variable. screen is most likely located at /usr/bin/screen, so cron can't run it because it can't find the screen binary. Try replacing screen in your script with /usr/bin/screen.
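As a sketch of that idea (the PATH value below is just a typical default, adjust it for your system; the crontab line is the one from the question):
# either define PATH at the top of the crontab...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /bin/bash /home/admin/discordBot/script.sh

# ...or call the binary by its absolute path inside the script
/usr/bin/screen -dmS discordBot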
But the other question here is: why does your Discord bot need to be restarted every day? I run multiple bots with 100k+ users, and they don't need to be restarted at all. Maybe you should open a new question with the error and some code snippets.
I don't know what OS your RPi is running, but I had a similar issue earlier today trying to get VMs on a server to launch a terminal and run a Python script with crontab. I found a very easy way to restart it automatically on crashes and to keep something running in the background: don't rely on crontab or an init service. If your RPi has an X server running, or anything else that can launch things on session start, there is a simple shell solution. Make a bash script like
while :; do
    /my/script/to/keep/running/here -foo -bar -baz >> /var/log/myscriptlog 2>&1
done
and then you would start it from .xprofile or some similar startup hook, like
/path/to/launcher.sh &
to have it run in the background. It will restart automatically every time it closes if it was started from something like .xprofile, .xinitrc, or anything else run at startup.
Then maybe make a cronjob to restart the Raspberry Pi (or whatever system is running the script), but this loop will restart the service whenever it closes anyway. You could also point a cronjob at the script, but I don't think it would be able to launch the GUI.
One other thing you can try, to launch a GUI terminal from a cronjob (it didn't work for my situation on these custom Linux VMs, and from what I've read it is considered a security risk to do this from a cronjob), is to specify the display:
* * * * * DISPLAY=:0 /your/gui/stuff/here
but you would need to make sure the user has the right permissions, and as far as I have read it's considered an unsafe hack to even do this.
For my issue, I had to launch a terminal that stayed open, change to the directory of a Python script, and run the script there, so that the local files in that directory would be available to the Python script. Here is a rough example of the "launcher.sh" I called from the startup method this strange Linux distro used:
#!/bin/sh
while :; do
    /usr/bin/urxvt -e /bin/bash -c "cd /path/to/project && python loader.py"
done
Also check this source for process management: https://mywiki.wooledge.org/ProcessManagement

kthreaddk in postgres uses high cpu

I used Postgres in a Node.js project, but my CPU is at 100% on my Ubuntu server.
I used this command:
killall -9 kthreaddk
I stopped my project and stopped the postgresql service; after killing kthreaddk the CPU is at 0%, but after 30 seconds kthreaddk runs again and the CPU goes back to 100%.
What is kthreaddk and how do I stop it for good?
I have tried many of the approaches posted here on Stack Overflow, but I can't solve it.
kthreaddk is started by a cron job. After it runs, it usually places its code in different directories and keeps updating the crontab all the time.
To get rid of it follow these steps:
Identify which user's crontab is running it.
$ cd /var/spool/cron/crontabs
# Preview each file here, e.g.
$ cat www-data
* * * * * /run/c8aaf4bea
The /run/c8aaf4bea looks weird, but do not remove it yet...
Block the specific user from updating the crontab (e.g. www-data). Edit the cron.deny file
$ sudo vim /etc/cron.deny
and add the user name
www-data
Now the kthreaddk process is not able to edit the crontab anymore.
Kill all the kthreaddk processes
$ sudo pkill -9 threaddk
Remove the suspected line from the crontab
$ sudo vim /var/spool/cron/crontabs/www-data
* * * * * /run/c8aaf4bea <- remove this line
Remove the user from the cron.deny file
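To double-check that the entry is really gone, a small sketch (run as root) that prints every user's crontab so any leftover /run/... line stands out:
# list each user's crontab; most users will simply have none
for u in $(cut -d: -f1 /etc/passwd); do
    echo "== $u =="
    crontab -u "$u" -l 2>/dev/null
done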
It's a miner. It uses the crontab to restart (start) itself.
crontab -u postgres -l
I had a similar problem. Just remove the job from the crontab and restart the server immediately.
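Roughly like this, as a sketch (note that crontab -r wipes all of that user's jobs, so list them first):
sudo crontab -u postgres -l      # confirm the injected entry is there
sudo crontab -u postgres -r      # remove that user's crontab entirely
sudo reboot                      # then restart the server straight away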
I had a miner on my VPS. My CPU usage was always at 100%. At first I thought I had a memory leak in my Java app or Tomcat. I could kill the process, but it would start another one within a few seconds. In my case it was running under a user account I didn't use. I killed all of that user's processes with pkill -u username and then quickly deleted the user with sudo deluser --remove-home username before the miner could start its processes again. After this the VPS worked fine. Maybe it will help someone.
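In other words, roughly (compromiseduser is a placeholder for the unused account the miner was running under):
sudo pkill -u compromiseduser                  # kill every process owned by that account
sudo deluser --remove-home compromiseduser     # then delete the account and its home directory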

Setting up a cronjob on Google Compute Engine

I am new to setting up cronjobs and I'm trying to do it on a virtual machine in Google Compute Engine. After a bit of research, I found this Stack Overflow question: Running Python script at Regular intervals using Cron in Virtual Machine (Google Cloud Platform).
As per the answer, I managed to enter the crontab -e edit mode and set up a test cronjob like 10 8 * * * /usr/bin/python /scripts/kite-data-pull/dataPull.py. I also checked the system time, which was in UTC, and entered the time according to that.
The step I'm supposed to take, as per the answer, is to run sudo systemctl restart cron, which is throwing an error for me:
sudo systemctl restart cron
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
Any suggestions on what I can do to set up this cronjob correctly?
Edit your cron jobs with crontab -e and insert a line:
* * * * * echo test123 > /your_homedir_path/file.log
That will write test123 into the file.log file every minute.
Then run tail -f on that file and wait a couple of minutes. You should see test123 lines appearing in the file (and on screen).
If that works, try running your Python file, but first make your .py file executable with "chmod +x script.py".
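Putting those pieces together, a sketch based on the entry from the question, with output captured so failures become visible (the log path is arbitrary):
chmod +x /scripts/kite-data-pull/dataPull.py
# crontab -e entry: same job as above, now logging stdout and stderr to a file
10 8 * * * /usr/bin/python /scripts/kite-data-pull/dataPull.py >> /tmp/dataPull.log 2>&1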
Here you can find my reply to a similar question.

remove xmr.crypto-pool.fr process ubuntu

There has been an anonymous process running on my Ubuntu server which is utilizing 100% of memory.
User: Tomcat
Process: /tmp/autox -B -a cryptonight -o stratum+tcp://xmr.crypto-pool.fr:443 -u 47TS1NQvebb3Feq91MqKdSGCUq18dTEdmfTTrRSGFFC2fK85NRdABwUasUA8EUaiuLiGa6wYtv5aoR8BmjYsDmTx9DQbfRX -p x
I keep killing the process and deleting the file I found in the /tmp folder, but it keeps recreating the file under a different name and starting the process again.
I dropped INPUT & OUTPUT for xmr.crypto-pool.fr in iptables.
It has become really irritating on the server.
Please help!
It seems that your server is hacked or infected with miner malware (malware which uses your system to mine cryptocurrency [Monero, in this case]).
Look for any suspicious processes or cron jobs.
https://security.stackexchange.com/questions/129448/how-can-i-kill-minerd-malware-on-an-aws-ec2-instance
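A quick sketch of where to look (the username comes from the question; adjust it for your system):
top -b -n 1 | head -n 20                          # spot the CPU-hogging process and its owner
sudo ls -la /etc/cron* /var/spool/cron/crontabs   # system-wide and per-user cron locations
sudo crontab -u tomcat -l                         # cron jobs of the user running the miner
ls -la /tmp                                       # the dropper lived in /tmp in this case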
I just realized today that my server was infected as well. The process for me wasn't /tmp/autox but an instance of ./http.conf (note the ./ at the beginning). When looking at the crons I found this one:
* * * * * /tmp/.-/update >/dev/null 2>&1
I then deleted the line from my crons and went to /tmp/.-/ where the malware was indeed sitting.
As a temporary fix I added an exit statement at the top of each script found in that folder. I don't want to delete it just yet, as I want to investigate a bit more.
I also deleted all the keys that had been added to .ssh/authorized_keys.
Now when killing the process it doesn't start again.
I'll update my answer if I find anything else.

Synology - Cron job

I'm trying to get cron jobs or the task scheduler working, but I cannot figure out why my script is not being picked up.
I'm trying to simply archive a folder with:
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/
Each time, the script starts working but cannot finish the job. With either the task scheduler or crontab, a file Monday.tgz is created in the folder /volume1/NetBackup/Backups/, but this file is only 1024 bytes.
Synology Cron is really fussy.
Here are my own personal notes for Synology DS413j, DSM 5.2:
Hand edit /etc/crontab as root, crontab -e isn't available
Ensure you use tabs not spaces to separate the columns
Your crontab changes may not survive a reboot if there are syntax problems
The who column in crontab may not be reliable. Use root in the who column and /bin/su -c '<command>' <username> to run as another user
Remember that it uses ash not bash, so check for bashisms; e.g. use >> /path/to/logfile 2>&1 rather than &>> /path/to/logfile
It doesn't support 'MAILTO='
You need to restart crond (synoservicectl --reload crond) for the new crontab to take effect; see the combined example below
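Putting several of those notes together, a sketch of what an /etc/crontab line could look like (the schedule, username and log path are placeholders; the fields must be separated by real TAB characters):
# minute hour mday month wday who command  -- TAB-separated
0	2	*	*	1	root	/bin/su -c 'tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ >> /volume1/NetBackup/Backups/backup.log 2>&1' myuser
# reload cron afterwards so the change is picked up
synoservicectl --reload crond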
You may try adding some diagnostics to it. For instance:
Add MAILTO at the top of the crontab file (via crontab -e) to receive cron errors by email:
MAILTO=username@domain.com
Redirect output of your tar command to the file:
your command > ~/log.txt 2>&1
Check cron log and look for anomalies. For instance (it may depend on your configuration):
/var/log/cron.log
You may also try searching through /var/log/messages at the time of your cron job.
Is volume1 a resource on a remote host? If so, it is worth checking that part of the system.
I agree about the really nagging nature of crontab on Synology Linux OSs.
I would certainly suggest creating the desired job as a .sh shell script and calling it via a cron task added through the GUI, as suggested here.
As of today (March 2017), this is the best method I have found, since working with crontab via the CLI is a real pain.
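As a sketch (the log path is just an example, and DSM's default shell is ash, so keep the script POSIX), the wrapper scheduled from the Task Scheduler GUI could be as small as:
#!/bin/sh
# wrapper to register in DSM's Task Scheduler; same tar command as in the question,
# with output captured to a logfile so a truncated archive leaves a trace
tar -cvf /volume1/NetBackup/Backups/Monday.tgz /volume1/NetBackup/Backups/ns3268116.ovh.net/ >> /volume1/NetBackup/Backups/backup.log 2>&1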
