Laravel schedule: cron job special characters not allowed - cron

I'm trying to add a cron job on shared hosting like this:
/usr/bin/php /home/USER/public_html/LARAVEL_PROJECT/artisan schedule:run >> /dev/null 2>&1
Support confirmed the path and command are correct, but they don't allow special characters in the command, so I have to remove >> /dev/null 2>&1, and removing it still doesn't make it work. Is there any workaround?

You can create your own shell script, put the entire command (with the redirection) inside it, and ask support to add that shell file to the cron instead.
For example, create the script as /home/USER/script.sh with the following contents:
#!/bin/bash
/usr/bin/php /home/USER/public_html/LARAVEL_PROJECT/artisan schedule:run >> /dev/null 2>&1
Then you need to make it executable (via SSH, for example):
chmod +x /home/USER/script.sh
or
chmod 750 /home/USER/script.sh
And then ask support to run this script instead of your line.
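Before handing it over, you can sanity-check the wrapper yourself over SSH (assuming you have shell access):
/home/USER/script.sh
echo $?   # should print 0 if the scheduler ran without errors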

Related

Root Crontab say command not found in bash script

Currently I am writing a little script which should add a cron job to the root crontab. But it seems that my root crontab stopped working. When I try to run the crontab commands in my bash script, I get "command not found". It worked for some time and then stopped working yesterday. Now when I enter "sudo crontab -l" I no longer get "no crontab for root". I am not sure what I did wrong. Here is my code:
#!/bin/bash
sudo crontab -l > rootcron 2> /dev/null
sudo echo "test" >> rootcron
sudo crontab rootcron
sudo rm rootcron
You didn't specify when the command is to be run. Typically you would see something like:
*/5 * * * * touch /tmp/test-cron
So basically you probably have an invalid cron file. What are the contents of the file now?
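For reference, if the goal was to append a real scheduled entry rather than the bare word "test", a minimal corrected sketch of the script could look like this (reusing the example entry above):
#!/bin/bash
# Dump the current root crontab (if any), append a scheduled entry, and load it back
sudo crontab -l 2>/dev/null > rootcron
echo '*/5 * * * * touch /tmp/test-cron' >> rootcron
sudo crontab rootcron
rm rootcron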

Concatenate file output text with Crontab

I have successfully followed this question, Using CRON jobs to visit url?, to set up the following cron task:
*/30 * * * * wget -O - https://example.com/operation/lazy-actions?lt=SOME_ACCESS_TOKEN_HERE >/dev/null 2>&1
The above Cron task works fine and it visits the URL periodically every 30 minutes.
However, the access token is stored in a text file at /home/myaccount/www/site/aToken.txt; this aToken file is a very simple one-line text file that just contains the token string.
I have tried to read its contents with cat and pass them to the crontab command like the following:
*/30 * * * * wget -O - https://example.com/operation/lazy-actions?lt=|cat /home/myaccount/www/site/aToken.txt| >/dev/null 2>&1
However, the above attempt failed to run the cron job.
I edit cron jobs using crontab -e with nano on Ubuntu 16.04.
This is a quick solution that will do exactly what you want without the complicated one-liner:
Create this file in your myaccount home directory. You may also put it into your bin directory if you so desire; just remember where you put it so you can call it from your cron. Also make sure the user has permission to read/write in the directory the .sh file is in.
wget.sh
#!/bin/bash
# simple cd -- change to the directory holding the token file
cd /home/myaccount/www/site/
# grab the token into the variable aToken
aToken=$(cat aToken.txt)
# simple cd -- move to wherever you want the wget results saved
cd /wherever/you/want/the/wget/results/saved
# Notice the $ -- this is how we let the shell know that aToken is a variable: $aToken
#wget -O - "https://example.com/operation/lazy-actions?lt=$aToken"
wget -q -nv -O /tmp/wget.txt "https://example.com/operation/lazy-actions?lt=$aToken" >/dev/null 2>/dev/null
# You can write logs etc. afterwards here, e.g.:
echo "Job was successful" >> /dir/to/logs/success.log
Then simply call this file with your CRON like you are doing already.
*/30 * * * * sh /home/myaccount/www/site/wget.sh >/dev/null 2>&1
Building on this question, Concatenate in bash the output of two commands without newline character, I arrived at the following simple solution:
wget -O - https://example.com/operation/lazy-actions?lt="$(cat /home/myaccount/www/site/aToken.txt)" >/dev/null 2>&1
It reads the contents of the text file and substitutes them into the command line.

Cron not executing the shell script + Linux [duplicate]

I have a script that checks whether the PPTP VPN is running, and if not, it reconnects the PPTP VPN. When I run the script manually it executes fine, but as a cron job it doesn't run.
* * * * * /bin/bash /var/scripts/vpn-check.sh
Here is the script:
#!/bin/sh
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$$'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
Finally I found a solution ... instead of entering the cron job with
crontab -e
I needed to edit the crontab file directly:
nano /etc/crontab
adding, for example, something like
*/5 * * * * root /bin/bash /var/scripts/vpn-check.sh
and it's fine now!
Thank you all for your help ... hope my solution will help other people as well.
After a long time getting errors, I just did this:
SHELL=/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
* * * * * /bin/bash /home/joaovitordeon/Documentos/test.sh
Where test.sh contains:
#!/bin/bash
/usr/bin/python3 /home/joaovitordeon/Documentos/test.py;
In my case, the issue was that the script wasn't marked as executable. To make sure it is, run the following command:
chmod +x your_script.sh
If you're positive the script runs outside of cron, execute
printf "SHELL=$SHELL\nPATH=$PATH\n* * * * * /bin/bash /var/scripts/vpn-check.sh\n"
Do crontab -e for whichever crontab you're using and replace its contents with the output of the above command. This should mirror most of your environment in case there is some missing path issue or something else. Also check the logs for any errors it's getting.
Though it definitely looks like the script has an error, or you messed something up when copying it here:
sed: -e expression #1, char 44: unterminated `s' command
./bad.sh: 5: ./bad.sh: [[: not found
Simple alternate script
#!/bin/bash
if [[ $(ping -c3 192.168.17.27) == *"0 received"* ]]; then
/usr/sbin/pppd call home
fi
Your script can be corrected and simplified like this:
#!/bin/sh
log=/tmp/vpn-check.log
{ date; ping -c3 192.168.17.27; } > $log
if grep -q '0 received' $log; then
/usr/sbin/pppd call home >>$log 2>&1
fi
Through our discussion in the comments we confirmed that the script itself works, but pppd doesn't when running from cron. This is because something must be different between an interactive shell, like your terminal window, and cron. This kind of problem is very common, by the way.
The first thing to do is try to remember what configuration is necessary for pppd. I don't use it, so I don't know. Maybe you need to set some environment variables? In that case you most probably set something in a startup file like .bashrc, which is usually not read by a non-interactive shell, and that would explain why pppd doesn't work.
The second thing is to check the logs of pppd. If you cannot find the logs easily, look into its man page and its configuration files, and try to find the logs, or how to make it log. Based on the logs, you should be able to find what is missing when running in cron and resolve your problem.
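A quick way to see exactly what differs between your terminal and cron is to dump both environments and diff them (a sketch; the file paths are arbitrary). Add a temporary cron entry:
* * * * * env > /tmp/cron-env.txt
then, in your interactive shell:
env > /tmp/shell-env.txt
diff /tmp/shell-env.txt /tmp/cron-env.txt
Remove the temporary entry once you have the diff.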
I was having a similar problem that was resolved when sh was put before the command in the crontab.
This did not work:
@reboot ~myhome/mycommand >/tmp/logfile 2>&1
This did:
@reboot sh ~myhome/mycommand >/tmp/logfile 2>&1
In my case:
crontab -e
then adding the line:
* * * * * ( cd /directory/of/script/ && /bin/sh /directory/of/script/scriptItself.sh )
In fact, if I added "root" as the user, it treated "root" as a command, and it didn't work.
As a complement to the other answers: didn't you forget the username in your crontab entry?
Try this :
* * * * * root /bin/bash /var/scripts/vpn-check.sh
EDIT
Here is a patch of your code
#!/bin/bash
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult=`echo "$result" | /bin/sed 's/^\(.................................\).*$/\1/'`
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
In my case, it could be solved by using this:
* * * * * root ( cd /directory/of/script/ && /directory/of/script/scriptItself.sh )
I used some ./folder/ relative references in the script, which didn't work.
The problem statement: the script executes fine when run manually in the shell, but when run through cron it gives a "java: command not found" error.
Please try the two options below; they should fix the issue.
Ensure the script is executable. If it's not, execute:
chmod a+x your_script_name.sh
The cron job doesn't run as the same user with which you execute the script manually, so it doesn't have access to the same $PATH variable as your user, which means it can't locate the Java executable to run the commands in the script. We should first fetch the value of the PATH variable as below and then set it (export) in the script.
echo $PATH can be used to fetch the value of the PATH variable,
and your script can be modified as below (see the second line starting with export):
#!/bin/sh
export PATH=<provide the value of echo $PATH>
/bin/ping -c3 192.168.17.27 > /tmp/pingreport
result=`grep "0 received" /tmp/pingreport`
truncresult="`echo "$result" | sed 's/^\(.................................\).*$/\1/'`"
if [[ $truncresult == "3 packets transmitted, 0 received" ]]; then
/usr/sbin/pppd call home
fi
First of all, check if cron service is running. You know the first question of the IT helpdesk: "Is the PC plugged in?".
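For example (the service is called cron on Debian/Ubuntu and crond on Red Hat-based systems):
systemctl status cron
# or, on older init systems:
service cron status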
For me, this was happening because the cron job was executing from the /root directory but my shell script (a script to pull the latest code from GitHub and run the tests) was in a different directory. So I had to edit my script to cd into my scripts folder. My debug steps were:
1. Verified that my script ran independently of the cron job.
2. Checked /var/log/cron to see if the cron jobs were running. Verified that the job was running at the intended time.
3. Added an echo command to the script to log the start and end times to a file. Verified that those were getting logged, but not the actual commands.
4. Logged the result of pwd to the same file and saw that the commands were being executed from /root.
5. Tried adding a cd to my script directory as the first line in the script. When the cron job kicked off this time, the script executed just like in step 1.
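That first line was essentially the following (a minimal sketch; the directory and the commands after the cd are placeholders for my actual script):
#!/bin/bash
# run from the scripts directory instead of cron's default of /root
cd /home/myuser/scripts || exit 1
git pull && ./run_tests.sh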
It was the timezone in my case. I had scheduled the cron with my local time, but the server has a different timezone, so it did not run at all. Make sure your server is set to the time you expect by checking with the date command.
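For example (timedatectl is available on systemd-based servers; plain date works anywhere):
date          # shows the server's current time and timezone abbreviation
timedatectl   # shows the configured timezone on systemd systems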
First run the command env > env.tmp, then run cat env.tmp.
Copy the full PATH=.................. line and paste it into crontab -e, on a line before your cron jobs.
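The top of the crontab then looks something like this (the PATH value shown is only an example; paste the one from your own env.tmp):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /bin/bash /var/scripts/vpn-check.sh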
Try this: set the cron path like the above, i.e. the full path to your script:
/home/your site folder name/public_html/gistfile1.sh

sh file not running on cron ubuntu

I am trying to run a shell script via crontab on Ubuntu. I have tried Googling and other links but nothing has helped so far.
This is my crontab:
*/2 * * * * sudo bash /data/html/mysite/site_cleanup.sh
This is the content of my sh file:
#!/bin/sh
# How many days retention do we want ?
DAYS=0
# getting present day
now=$(date +"%m_%d_%Y")
# Where is the base directory
BASEDIR=/data/html/mysite
#where is the backup directory
BKPDIR=/data/html/backup
# Where is the log file
LOGFILE=$BKPDIR/log/mysite.log
# add to tar
tar -cvzf $now.tar.gz $BASEDIR
mv $now.tar.gz $BKPDIR
# REMOVE OLD FILES
echo `date` Purge Started >> $LOGFILE
find $BASEDIR -mtime +$DAYS | xargs rm
echo `date` Purge Completed >> $LOGFILE
The same script runs from a terminal and gives the desired result.
Generic troubleshooting for noninteractive shell scripts
Put set -x; exec 2>/path/to/logfile at the top of your script to log all subsequent commands to a file as they're run. If this doesn't work, you'll know that your script isn't being run at all; if it does, you'll know where it fails and how.
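For the script from this question, the top would look like this (the trace file path is just an example):
#!/bin/sh
set -x                            # print each command as it is executed
exec 2>/tmp/site_cleanup.trace    # send the trace (stderr) to a log file
# ... the rest of site_cleanup.sh continues unchanged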
If this is a personal crontab
If you're running crontab -e as a user (without sudo), then the crontab being modified is one for commands run with that user's permissions. Check that file permissions allow that user to modify the content in question (which, if these files are in a cgi-bin directory, may require being run by the same user as the web server).
If your intent is to have commands run as root, rather than as your own user, be sure you use sudo when editing the crontab to edit the system crontab instead (but please take care as to your script's correctness in this case -- carelessness such as missing quotes or lack of appropriate precautions in xargs usage can cause a script to delete the wrong files if malicious filenames are created):
sudo crontab -e ## to edit the system (root) crontab
...or, if you're cleaning up files owned by the apache user (for example; check which account is correct for your own operating system and web server):
sudo -u apache crontab -e ## to edit the apache user's crontab
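Regarding the xargs caution above: a safer form of the cleanup line in the script uses null-delimited names so unusual filenames cannot be misinterpreted, and restricts deletion to regular files (a sketch using GNU find/xargs):
find "$BASEDIR" -mtime +"$DAYS" -type f -print0 | xargs -0 rm -f --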
Troubleshooting for a system crontab
Do not attempt to put a sudo command within the commands run by cron; with sudo's default configuration, it requires a TTY (a keyboard and screen) to be attached to a session in order to run. Thus, your crontab line should not contain sudo, but instead should look like the following:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh
Your issue is likely coming from the sudo call in your user-level cron. Unless you have configured sudoers to allow that script to run without a password, it will hang up every time.
So you can look up how to run a script with no password prompt via a sudoers rule, remove the sudo call if you aren't doing something in your script that actually requires superuser permissions, or, as a last-ditch and extremely bad idea, call your script from root's cron by doing sudo crontab -e (or sudo env EDITOR=nano crontab -e if you prefer nano as your editor).
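If the script genuinely needs root, the passwordless rule belongs in sudoers rather than any shell profile; a minimal sketch, edited with visudo and with the username as a placeholder:
# created via: sudo visudo -f /etc/sudoers.d/site-cleanup
youruser ALL=(root) NOPASSWD: /data/html/mysite/site_cleanup.sh
Even then, dropping sudo from the cron line entirely, as described above, is usually the simpler fix.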
Try adding this line to the crontab of the root user, without the sudo,
like this:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh

rdiff-backup bash script and cron trouble

I have this very simple bash script:
#!/opt/bin/bash
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/www/vhosts /volume1/backups/vhosts 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics myhost::/etc /volume1/backups/etc 2>&1 >> /var/log/rdiff-backup.log;
/opt/bin/rdiff-backup --force --print-statistics /volume1/homes /volume1/backups/homes 2>&1 >> /var/log/rdiff-backup.log;
cat /var/log/rdiff-backup.log | /opt/bin/nail -s "rdiff-backup log" me@email.com;
If I run the script from the command line, in this way:
nohup /path/to/my/scipt.sh &
it works fine, appending each rdiff-backup statistics report to rdiff-backup.log and sending that file to my email address, as expected. But if I put the script in the crontab, the script performs only one rdiff-backup job and then sends the statistics via email. I cannot understand why the script doesn't behave the same way...
Any idea?
This is my cron job entry:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh
Via crontab only the last job is executed correctly, I think because it is the only local backup. When I execute the script from the command line I use the root user, and the public key of the root user is in /root/.ssh/authorized_keys on the remote machine. The owner of the crontab file is the root user too; I created it through "crontab -e" using the root account.
First of all, you need to make sure the script used in cron doesn't output anything, otherwise:
cron will assume there is an error
you will not see the error, if any
A solution for this is to use:
30 19 * * * /opt/bin/bash /volume1/backups/backup.sh >> /var/log/rdiff-backup-cron.log 2>&1
Second of all, it appears you are losing environment variables when executing via cron; try adding the env settings to your script:
#!/opt/bin/bash
. /root/.profile
/opt/bin/rdiff-backup --force --print-statistics myhost::/var/bkp /volume1/backups/sql 2>&1 > /var/log/rdiff-backup.log
If /root/.profile doesn't exist, try adding . /root/.bashrc or . /etc/profile instead.
I hope this helps.
