All of my cron jobs seem to be running multiple times per day on my Synology NAS, and I am not sure what is going on. I have already found this solution on SO, but I am not sure how to fix the problem. I want crond to log to a specific file, so I ran crond -L /var/log/cron.log
In my cron logs the jobs appear to run only once, but judging by my script outputs they are running multiple times (twice).
Here is the output of ps | grep cron
20726 root 3816 S /usr/sbin/crond
21937 root 3816 S crond -L /var/log/cron.log
28497 root 3768 S grep cron
Is having both /usr/sbin/crond and my custom crond command the reason this is happening? How can I run only one instance of crond while still having it log to a specific file?
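I assume the fix is something like the following (an untested sketch on my part, assuming BusyBox crond as shipped on Synology, and that DSM's service manager won't just respawn its own instance):
killall crond                          # stop every running crond instance
/usr/sbin/crond -L /var/log/cron.log   # start a single instance that logs to my file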
I used Postgres in a Node.js project, but my CPU is at 100% on my Ubuntu server.
I used this command
killall -9 kthreaddk
I stopped my project and stopped the postgresql service. After killing kthreaddk the CPU is at 0%, but after 30 seconds kthreaddk runs again and the CPU goes back to 100%.
What is kthreaddk and how do I stop it forever?
I have tried many of the approaches here on Stack Overflow, but I can't solve it.
kthreaddk is started by a cron job. After it runs, it usually places its code in different directories and keeps updating the crontab all the time.
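You can confirm the crontab is being rewritten by watching the directory (a hedged aside; this assumes the inotify-tools package is installed, which the steps below don't otherwise require):
sudo inotifywait -m /var/spool/cron/crontabs   # prints an event each time the miner rewrites a crontab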
To get rid of it, follow these steps:
Identify which user's crontab is running it.
$ cd /var/spool/cron/crontabs
# Preview each file here, e.g.
$ cat www-data
* * * * * /run/c8aaf4bea
The /run/c8aaf4bea looks weird, but do not remove it yet...
Block the specific user (e.g. www-data) from updating the crontab by editing the cron.deny file
$ sudo vim /etc/cron.deny
and add the user name
www-data
Now the kthreaddk process is not able to edit the crontab anymore.
Kill all the kthreaddk processes (pkill matches the name as a substring)
$ sudo pkill -9 threaddk
Remove the suspected line from the crontab
$ sudo vim /var/spool/cron/crontabs/www-data
* * * * * /run/c8aaf4bea <- remove this line
Remove the user from the cron.deny file.
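The same cleanup can be scripted end to end (a hedged sketch; it assumes www-data is the affected user and that the bogus entry points into /run, as in the example above):
echo "www-data" | sudo tee -a /etc/cron.deny                 # block crontab updates
sudo pkill -9 threaddk                                       # kill the miner (substring match)
sudo sed -i '/\/run\//d' /var/spool/cron/crontabs/www-data   # drop the bogus entry
sudo sed -i '/^www-data$/d' /etc/cron.deny                   # re-allow crontab once clean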
It's a miner. It uses crontab to restart (start) itself.
crontab -u postgres -l
I had a similar problem. Just remove the job from the crontab and restart the server immediately.
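For example (a hedged sketch; this assumes the malicious entry is in the postgres user's crontab, as in the listing above, and that wiping that user's whole crontab is acceptable):
sudo crontab -u postgres -r   # remove the affected user's crontab
sudo reboot                   # restart before the miner respawns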
I had a miner on my VPS, and my CPU usage was always 100%. At first I thought I had a memory leak in my Java app or Tomcat. I could kill the process, but it would start another one within a few seconds. In my case it was running under a user account I didn't use. I killed all of that user's processes with pkill -u username and then quickly deleted the user with sudo deluser --remove-home username before the miner started its processes again. After this the VPS worked fine. Maybe it will help someone.
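In sequence (a hedged sketch; username stands in for the unused account the miner ran under):
sudo pkill -u username                # kill everything running as the compromised account
sudo deluser --remove-home username   # delete the account and its home directory before the miner respawns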
I know this has been asked countless times, but I am looking for a solution that uses crond's native log function. I do not want to pipe the output of each cron job and prepend the timestamp.
I am launching crond like this:
crond -L /var/log/cron.log -f
The logs look like this:
crond: crond (busybox 1.30.1) started, log level 8
crond: USER root pid 16 cmd echo "hello"
crond: USER root pid 18 cmd echo "hello"
crond: USER root pid 19 cmd echo "hello"
I'd like to add a timestamp before each line. I do not want to add an stdout command to each individual cron job to prepend the date.
Maybe I could watch the file and prefix each new line, or something? How do I get access to crond's output stream and modify it?
I believe the answer is that it's not possible to modify the crond output file.
The implementation details of cron do not make it easy to control the log file for individual jobs. Also, crond runs as root, which makes it hard for user jobs to change the file. Trying to change the file while crond is running will likely cause problems.
Consider instead the following option:
Write a process that will tail -f the log file, and create a new log file, with each line prefixed by the timestamp.
Run the process at boot time.
tail -f /var/log/cron.log | while read x ; do echo "$(date) $x" ; done >> /var/log/cron-ts.log
Or configure whatever format you need.
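For example, to switch to an ISO-8601 timestamp (this assumes GNU date; with BusyBox date an explicit format string such as "$(date '+%Y-%m-%d %H:%M:%S')" does the same job):
tail -f /var/log/cron.log | while read x ; do echo "$(date -Iseconds) $x" ; done >> /var/log/cron-ts.log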
I have a Docker container in which I have my Python tools installed, including my Luigi pipeline interface. I would like to run a shell script which kicks off my Luigi pipeline on a weekly basis using cron.
I have tried everything to get cron to work within a Docker container, but I cannot, for the life of me, get the jobs in my crontab -e file to run.
In my file I have:
0 0 * * Sun /data/myscript.sh
followed by a new line. Cron is running in the background - ps aux | grep cron shows /usr/sbin/cron is running. Furthermore, in my /var/log/syslog file, I have:
/USR/SBIN/CRON[2037]: (root) CMD (/data/myscript.sh)
I've also tried using 0 0 * * Sun . /root/.bashrc ; sh /data/myscript.sh
However, my script does not run (when I run my script manually using bash myscript.sh, I get the expected results).
Suggestions?
Scheduled tasks won't run inside a normal container since there is no scheduler running. The only active task will be the one you have chosen to run via the CMD keyword or the entrypoint.
In order to execute scheduled tasks, it's more prudent to use the host's scheduler together with docker exec:
docker exec <container> <command>
docker exec <container> /data/myscript.sh
So you would end up with a cron entry on your host, something like:
(Crontab Style)
0 * * * * root docker exec mycontainer /data/myscript.sh
If you have a cluster, you would have to query the cluster first to locate the container, or even have a script do it for you.
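Such a wrapper might look like this (a hedged sketch; the container name and script path are placeholders carried over from the example above):
#!/bin/sh
# Find the container by name, then run the script inside it if it's up
CID=$(docker ps -q -f name=mycontainer)
if [ -n "$CID" ]; then
    docker exec "$CID" /data/myscript.sh
fi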
A container is meant to run only one main process. You either need to run crond as the main process of the container, or ensure that crond runs alongside your main process. This somewhat breaks the contract/point of containers, but sometimes it's easier to set it up this way. Instructions below:
My Dockerfile has the following ENTRYPOINT:
ENTRYPOINT ["/startup.sh"]
And then within startup.sh I do a couple of things to spin up the container, but most importantly before executing the last command, I have this:
crond
exec start_my_service
crond starts the daemon that executes the cron jobs, and start_my_service then becomes the primary process for my container.
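For completeness, the surrounding Dockerfile might look something like this (a hedged sketch; my-crontab and the /etc/cron.d path are illustrative assumptions on my part, not part of the original setup):
# Install the container's cron entries and the startup script
COPY my-crontab /etc/cron.d/my-crontab
RUN chmod 0644 /etc/cron.d/my-crontab
COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
ENTRYPOINT ["/startup.sh"]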
I wrote a Python script that works with a message queue, and the script was launched by crontab. I removed it from the crontab, but the root user of my Linux system keeps launching it every 9 minutes.
I've rebooted the system and restarted cron, but this script keeps getting executed.
Any idea how to keep it from happening?
If a cron job has started a process, that process does not stop even if you delete the file in which you specified the cron entry.
This link should help:
https://askubuntu.com/questions/313033/how-can-i-see-stop-current-running-crontab-tasks
You can also kill your cron-launched process by looking up its PID with ps -e | grep cron-name and then running kill -9 <PID>.
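For example (my_script.py and the PID are placeholders for your own process):
ps -ef | grep my_script.py   # note the PID in the second column
kill -9 12345                # substitute the PID from the output above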
I have cron jobs in cPanel that are scheduled to run every night. Yesterday I noticed that these cron jobs haven't run for two days. I checked the cron log in /var/log/cron, and it shows errors.
Errors:
Nov 6 11:25:01 web2 crond[17439]: (laptoplc) ERROR (failed to change user)
Nov 6 11:25:01 web2 crond[17447]: (projecto) ERROR (failed to change user)
Nov 6 11:25:01 web2 crond[17446]: (CRON) ERROR (setreuid failed): Resource temporarily unavailable
Nov 6 11:25:01 web2 crond[17446]: (laptoppa) ERROR (failed to change user)
What could be the problem?
There could be several things that caused this. Here are some ways to debug your cron jobs:
Run it manually from the shell:
php yourcron.php
Add logging in your cron script, e.g. by adding error_log('check if running');, to see if it is indeed running.
As suggested above, it could be a permission issue too. Add execute permission to your cron script:
chmod 755 yourcron.php
Check whether any zombie processes exist for these users using the command below.
ps -eLF | grep -i username
Try killing those processes and check whether the cron jobs run after that.
sudo ps -eLF | grep username | awk '{print $2}' | xargs sudo kill -9
Don't kill any important running processes!
I had a similar problem today. The crontab in /var/spool/cron/userXXX had a script for /home/userYYY (another user), and so this error occurred. I removed the line that referenced userYYY and the problem was resolved.
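To spot mismatches like this quickly, something along these lines can help (a hedged sketch; it simply lists every crontab line that references a home directory so you can check the user pairing by eye):
sudo grep -r '/home/' /var/spool/cron/   # each match shows which user's crontab touches which home directory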