Cron won't execute any of my commands on Ubuntu 21.10 impish - linux

I'm trying to use a cron job to start a stopped Docker container every minute, but it doesn't seem to work.
What I've done is run crontab -e and add the line
*/1 * * * * docker start sender >> /home/cronlog.log 2>&1
I've added my user to the docker group as explained here (in fact, I can run docker from the terminal without sudo).
I have also tried putting the command in a script, as below:
*/1 * * * * /home/start_container.sh >> /home/cronlog.log 2>&1
with the script containing
#!/bin/sh
docker start sender
but still, nothing happens. The cron process is running, though: ps -ef | grep cron gives
root 881 1 0 08:42 ? 00:00:00 /usr/sbin/cron -f -P
nicola 10905 10178 0 11:31 pts/0 00:00:00 grep --color=auto cron
Am I missing something? (Obviously, the commands work if launched manually from the terminal)

Try using the full path to docker instead.
Type the following command to get the path(s) of docker:
$ which -a docker
/usr/bin/docker
/bin/docker
then use either of those paths in the crontab entry:
*/1 * * * * /bin/docker start sender >> /home/cronlog.log 2>&1
or
*/1 * * * * /usr/bin/docker start sender >> /home/cronlog.log 2>&1

It turned out that, for some reason, cron doesn't like /home/ (at least in this specific instance).
I fixed it by using another path, such as
*/1 * * * * docker start sender >> /tmp/cronlog.log 2>&1
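The likely cause (an assumption, but easy to check): on Ubuntu, /home is owned by root, so a crontab running as a regular user can't create a log file directly inside it. You can reproduce the failure from a terminal:
$ touch /home/cronlog.log
touch: cannot touch '/home/cronlog.log': Permission denied
Any directory your user can write to (such as /tmp, as above, or your own home directory) works.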

Related

Docker cron scheduled job not running

I am trying to use a docker container based on an Alpine image to run a scheduled cron job, following this tutorial, but after printing the statement in my startup script, the container just exits, without running my other script.
My docker-compose service is configured as follows:
cron:
  image: alpine:3.11
  command: /usr/local/startup.sh && crond -f -l 8
  volumes:
    - ./cron_tasks_folder/1min:/etc/periodic/1min/:ro
    - ./cron_tasks_folder/15min:/etc/periodic/15min/:ro
    - ./cron_tasks_folder/hourly:/etc/periodic/hourly/:ro
    - ./scripts/startup.sh:/usr/local/startup.sh:ro
So it runs an initial script called startup.sh and then starts the cron daemon. The startup.sh script contains the following:
#!/bin/sh
echo "Starting startup.sh.."
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
sleep 300
I dropped a sleep command in there just so I could launch an interactive shell on the container and make sure everything inside looks good. The script adds a schedule for an extra 1min folder. I have added a test script to that folder, and I can verify it's there:
/etc/periodic/1min # ls -a
. .. testScript
The script is executable:
/etc/periodic/1min # ls -l testScript
-rwxr-xr-x 1 root root 31 Jul 30 01:51 testScript
And testScript is just an echo statement to make sure it's working first:
echo "The donkey is in charge"
And looking at the root file in /etc/crontabs, I see the following (I've re-run the container several times, and each run appends another 1min entry, which is unnecessary, but I think that's not the problem here):
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/15 * * * * run-parts /etc/periodic/15min
0 * * * * run-parts /etc/periodic/hourly
0 2 * * * run-parts /etc/periodic/daily
0 3 * * 6 run-parts /etc/periodic/weekly
0 5 1 * * run-parts /etc/periodic/monthly
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
* * * * * run-parts /etc/periodic/1min
The echo statement in testScript is never printed to my terminal, and the container exits with exit code 0 shortly after starting. I want to print this statement every minute... what am I missing?
In the docker compose file you have
command: /usr/local/startup.sh && crond -f -l 8
The intention is to run this as a shell command, but it's not at all clear from the question that that's what will happen; it depends on your ENTRYPOINT. Since the ENTRYPOINT is defined with [] brackets, no additional shell will be provided; the command value will be passed as arguments to the ENTRYPOINT.
Assuming it does become a shell command, && in the shell runs the left-hand side and, only if that succeeds, then runs the right-hand side. So startup.sh needs to complete before crond is executed. startup.sh ends with
sleep 300
so crond is invoked only after those 300 seconds.
In either case, crond is either not invoked at all, or not invoked until the sleep finishes. The comments show that an error starting crond was discovered.
Using an entrypoint script like this is standard practice for configuring the environment or providing runtime parameters before invoking the main executable. To do it right, make sure to exec the main executable, so that it receives the signals that would otherwise go to the shell running the entrypoint script.
So at the end of startup.sh:
exec crond -f -l 8
will replace the shell running startup.sh with crond, so that crond receives all signals (at this point the shell is gone). It's subtle but important!
In general, keep the invocation of the application as simple as possible. Case in point: your execution process was split across the entrypoint, the command, and a startup script, with no clear interface between them. You wouldn't have gotten hung up on the invocation if you had put crond directly into the Dockerfile and left it at that. Sometimes arguments must be provided at runtime, but environment variables - which have names, not just positions - are often preferable; they keep invocations simple and debugging straightforward. When that doesn't work, a shell-script entrypoint is a great solution - just make sure to exec your final process!
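As a sketch, the fixed startup.sh could look like this (same contents as in the question, minus the sleep, plus the exec hand-off):
#!/bin/sh
echo "Starting startup.sh.."
# register the 1min schedule once at startup
echo "* * * * * run-parts /etc/periodic/1min" >> /etc/crontabs/root
crontab -l
# exec replaces this shell with crond, so crond receives
# signals (e.g. SIGTERM from docker stop) directly
exec crond -f -l 8
The command: in docker-compose.yml can then shrink to just /usr/local/startup.sh.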

Cronjob stopped executing

I've used crontab before, but I cannot get any command to run anymore.
I am editing directly via crontab -e and testing with simple commands like
* * * * * echo "hello there" >> /Users/myUsername/Desktop/test.txt
Running this command ps -ef | grep cron | grep -v grep gives me this output:
0 270 1 0 6Sep20 ?? 0:00.61 /usr/sbin/cron
Today is 22Sep20. Did the crontab stop running?
My shell is zsh on macOS.
On macOS, run crontab -l to list all installed cron entries,
or crontab -l -u [user] for another user.
Your * * * * * syntax means it runs every minute, and that looks fine to me. Check the syntax here.
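If the entry is installed but the file never appears, consider permissions rather than syntax (an assumption, since the question doesn't say which macOS version): newer macOS releases restrict cron's access to protected folders such as Desktop. A quick way to isolate that is a test entry writing to an unprotected location:
* * * * * echo "hello there" >> /tmp/test.txt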

bash script doesn't work through crontab

I am running a bash script that transfers files to my AWS bucket. If I run the bash script from my terminal, it works fine (via ./myBash.sh).
However, when I put it in my crontab, it doesn't work. This is my bash script:
#!/bin/bash
s3cmd put /home/anonymous/commLogs.txt s3://myBucket/
echo transfer completed
echo now listing files in the s3 bucket
s3cmd ls s3://myBucket/
echo check
And this is my crontab-
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
46 13 * * * /bin/bash myBash.sh
And here is a list of things I have already tried:
1) tried running the crontab with a node app to test whether crontab was working (the answer was yes)
2) tried running the crontab without the SHELL and PATH
3) tried running the bash script from cron using sudo (46 13 * * * sudo myBash.sh)
4) tried running the bash script without the /bin/bash
5) searched many sites on the net for an answer, without satisfactory results
Can anyone help me with what the problem may be? (I am running Ubuntu 14.04)
After a long time getting the same error, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
For anyone coming to this post:
I was having the same problem, and the reason was that crontab was running under the root user while s3cmd was configured under the ubuntu user.
So you need to copy .s3cfg to root:
cp -i /home/ubuntu/.s3cfg /root/.s3cfg
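Alternatively (a sketch, assuming an s3cmd version that supports the --config flag), point s3cmd at the existing config explicitly inside the script, so it works regardless of which user cron runs it as:
#!/bin/bash
# use the ubuntu user's s3cmd configuration explicitly
s3cmd --config=/home/ubuntu/.s3cfg put /home/anonymous/commLogs.txt s3://myBucket/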

AWS Linux crontab job isn't executing script

First of all, I've tried multiple solutions; I know there are several posts with similar problems, but none of them worked for me.
I have a Clojure application which is started using:
lein run -m tsdb-delete.core
The plan is to execute this every day at midnight. I want to avoid using Clojure-based cron libraries, and this is a very lightweight application.
I created the following script (start.sh):
/usr/bin/lein run -m tsdb-delete.core
which calls this script at run time (delete.sh):
#!/bin/bash
echo "Deleting:" $1
OUTPUT="$(sudo /opt/opentsdb/build/tsdb scan --delete 30d-ago 7d-ago sum $1)"
echo "${OUTPUT}"
If I call './start.sh' manually it all works as expected and I see console output.
start.sh is located at /home/ec2-user/tsdb-delete/start.sh and delete.sh is located at /home/ec2-user/tsdb-delete/delete.sh
I have added the following to my crontab using crontab -e
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
SHELL=/bin/bash
*/5 * * * * /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
* * * * * env > /tmp/env.output
* * * * * env > /tmp/env.output is used for debugging purposes; the contents of env.output are as follows:
SHELL=/bin/bash
USER=ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/home/ec2-user
LANG=en_GB.UTF-8
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
_=/bin/env
and if I run env in the terminal myself I get the following:
HOSTNAME=ip-xx-xx-xx-xx
LESS_TERMCAP_md=
LESS_TERMCAP_me=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=xxxxxxxxx
LESS_TERMCAP_ue=
SSH_TTY=/dev/pts/0
USER=ec2-user
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LESS_TERMCAP_us=
MAIL=/var/spool/mail/ec2-user
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/ec2-user/bin
PWD=/var/tmp
LANG=en_GB.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/ec2-user
LOGNAME=ec2-user
SSH_CONNECTION= xxxxxxx
LESSOPEN=||/usr/bin/lesspipe.sh %s
LESS_TERMCAP_se=
_=/bin/env
OLDPWD=/home/ec2-user
The key environment attributes seem to match, and in /var/log/cron I see the following:
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23591]: (ec2-user) CMD (/home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out)
Oct 28 11:45:01 ip-xx-xx-xx-xx CROND[23592]: (ec2-user) CMD (env > /tmp/env.output)
In /var/spool/mail/ I don't see any error messages, and the file /var/tmp/tsdb-delete.out is never created.
Any ideas?
sudo requires a tty, which doesn't exist while running a cron job. (1,2)
Here's a better solution; place this in /etc/cron.d/tsdb-delete:
*/5 * * * * root /home/ec2-user/tsdb-delete/start.sh > /var/tmp/tsdb-delete.out
This requires having the execute bit set on start.sh. Also note that /etc/cron.d entries take a sixth field, which is the user to run as.
It's also bad form to use something like sudo in a cron job, and a user crontab (crontab -e) is generally not very friendly for configuration management. The above fixes both problems. Still, I'd recommend moving the script to a safer location (since it runs as root), and since it's root you can easily send output to /var/log/ (and append it):
*/5 * * * * root /opt/tsdb-delete/start.sh >> /var/log/tsdb-delete.out
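Putting it together, /etc/cron.d/tsdb-delete could look like this (a sketch; SHELL and PATH mirror the user crontab shown in the question, and 2>&1 captures stderr as well):
SHELL=/bin/bash
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin
# sixth field is the user to run as
*/5 * * * * root /opt/tsdb-delete/start.sh >> /var/log/tsdb-delete.out 2>&1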

How to reboot via cron on a scheduled basis. Ubuntu 14.04

I have a very simple script that works from the command line.
#!/bin/bash
reboot
When I put a call to execute the script into the root user's crontab (via crontab -e) using the following format, it does not run. It does run the first two commands; just that last one is giving me grief. I have no MTA installed, as I do not need it.
*/10 * * * * service jwtpay restart
0 3 * * * bash /root/backup/mongo.backup.s3.sh kickass /root/backup >/dev/null 2>&1
0 */3 * * * bash /root/reboot.sh >/dev/null 2>&1
What am I missing?
Maybe the script is not executable... Since you're using root's crontab, why call the binary via a script rather than directly? Use the full path to the binary; it may vary on your system. Find out where it is with which reboot.
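On a typical Ubuntu system that gives, for example:
$ which reboot
/sbin/reboot
so the crontab entry becomes: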
0 */3 * * * /sbin/reboot
Don't forget to restart the cron daemon after changing the crontab.
