I have a shell script, executed by my crontab, that runs a couple of rsync commands. When I run it from the command line it writes the log file as expected, but when it is run by cron I don't get the log file.
Here is what I'm running.
rsync --log-file=noon_file_backup.log -urv -e ssh /var/www/html/ root@192.168.10.3:/var/devbackups/noon
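If the cause turns out to be the relative --log-file path (cron typically starts the script from the user's home directory, so a relative log path is written there rather than where you expect), a minimal sketch with an absolute path would look like this; the /var/log location is only an example:
rsync --log-file=/var/log/noon_file_backup.log -urv -e ssh /var/www/html/ root@192.168.10.3:/var/devbackups/noon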
I'm trying to run a cron job in Azure Cloud Shell, but it is not working.
This is my simple cron job:
* * * * * /home/meet/clouddrive/temp.sh
where
cat /home/meet/clouddrive/temp.sh
#!/bin/bash
echo "meet" >> /home/meet/clouddrive/test.txt
pwd
/home/meet/clouddrive
meet [ ~/clouddrive ]$ ls
temp.sh
The script is located at /home/meet/clouddrive/temp.sh, and the cron entry already uses that absolute path. Note, however, that the pwd output above comes from your interactive shell; cron runs the script from a different working directory (usually your home directory) with a minimal environment, so any relative paths inside the script may not resolve as expected. To fix this, either use absolute paths in the script or change the working directory at the start of the script.
Also make sure that the script has the appropriate permissions to be executed. You can check the permissions with the ls -l command and change them with the chmod command. For example, to make the script executable by its owner, you can run chmod u+x /home/meet/clouddrive/temp.sh.
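As a minimal sketch, using the paths from the question (the cd line is just one way to pin the working directory, since cron starts the script from a different directory):
#!/bin/bash
# pin the working directory so relative paths behave the same under cron
cd /home/meet/clouddrive || exit 1
echo "meet" >> /home/meet/clouddrive/test.txt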
Can you try running the script manually from the command line to see if it works fine?
I tried to reproduce this in my environment and got the results below.
I ran a cron job in Azure Cloud Shell like below.
When I first ran crontab -l, no jobs were listed.
Then I created one with crontab -e and added:
*/1 * * * * echo "this is a test" >> /home/imran123/testfile.txt
Then I created the file with vi testfile.txt and added the line "This is test".
Then I gave it execute permission like below:
chmod +x test.sh
Then when I ran the commands below, everything worked successfully:
crontab -l
cat testfile.txt
I have created a crontab file using the following command.
crontab -e
Then, in the crontab file:
@reboot sleep 120 && /home/xavier/run.sh
The run.sh file has:
#crontab -e
SHELL=/bin/bash
PATH=/bin:/sbin/:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
./deepstream-app -c ../../../../samples/configs/deepstream-app/stluke_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_record.txt
The deepstream-app program runs after two minutes, but then it stops. I can see this in System Monitor.
I'm not sure whether the way I'm passing arguments to deepstream-app is correct.
In a terminal, the following two commands are used to run the C++ executable.
cd /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
./deepstream-app -c ../../../../samples/configs/deepstream-app/stluke_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_record.txt
So I run exactly the same commands in the shell file.
What could be wrong?
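One thing worth checking (just a suggestion, and the log path below is only an example) is whether deepstream-app prints an error when it is launched by cron, since cron provides a much smaller environment than your terminal. Capturing the output makes that visible:
@reboot sleep 120 && /home/xavier/run.sh >> /tmp/run.log 2>&1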
I have a shell script called import.sh. This script will be used only once and will run for at least 2 days.
I am able to schedule a cron job like below.
02 10 25 7 * while IFS=',' read a;do /home/$USER/import.sh $a;done < /home/$USER/input/xaa
import.sh is the shell script.
xaa is the file that contains arguments.
Now I want to run this script right away.
I have tried ./import.sh xaa and sh -x import.sh xaa, but if I run them in a terminal I have to leave the terminal open for as long as the script runs, which might be more than 2 days.
How can I schedule the job to run now and have it terminate as soon as it finishes?
On the Linux command line, prefixing a command with nohup prevents it from being aborted if you log out or close the terminal.
So you can do something like below.
nohup ./import.sh xaa
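If you also want your prompt back right away, you can combine nohup with backgrounding and redirect the output somewhere convenient (a minimal sketch; the log file name is just an example):
nohup ./import.sh xaa > import.log 2>&1 &
Without a redirect, nohup appends the command's output to nohup.out in the current directory.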
I have an /etc/init.d script that tries to call a shell script in the /home/myuser directory: one shell script for startup, another for shutdown.
The start script gets called just fine (from rc3.d/S99cslink and rc5.d/S99cslink), but when I try to call the stop script (from rc0.d/K01cslink) I get the message that /home/myuser/bin/stop_service.sh cannot be found.
/etc/rc.d/init.d/cslink: line 42: /home/myuser/bin/stop_service.sh: No such file or directory
I verified that, at the point in time when I am trying to run /home/myuser/bin/stop_service.sh, /home/myuser is unavailable; an ls -l /home/myuser >/tmp/mylog.log 2>&1 from inside the init.d script shows the error
ls: cannot access /home/myuser: No such file or directory
Both the start) script and the stop) script in init.d are run with runuser -l myuser, so I don't think it's a permissions problem.
Why would /home/myuser be unavailable, and can I run my script at a different point in time when /home/myuser is still available?
All the answers I see through searching are saying that I should check for Windows-style carriage returns in my stop_service.sh script, but I have checked and that's not the issue here.
I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
Now, when I run this script from the command line in RHEL, it works perfectly fine.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it's somehow running those three commands simultaneously. Because of that, things are getting done out of order (my local database finishes dumping and is transferred before the #1 zip job is actually complete).
Has anyone run across such a strange scenario? As the simplest fix, is there a way to force a script to run synchronously? Maybe add some kind of command to wait for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first, so while the SSH command has been issued, it has not completed before the second line triggers and the SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly using backgrounding (&), everything should run one by one, each command waiting until the previous one finishes.
Perhaps you are actually seeing overlapping executions of earlier cron runs? If so, you can prevent multiple simultaneous executions by calling your script with flock.
e.g. change a midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
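Here the -n flag tells flock to give up immediately instead of waiting if a previous run still holds /tmp/backup.lock, so at most one copy of backup.sh runs at a time.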
If you want to run commands in sequential order you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order. If we take three commands separated by semicolons, the second command runs only after the first command completes, and the third runs only after the second completes. One thing to note is that the second command does not depend on the first command's exit status; it runs regardless of whether the first command succeeded or failed.
Execute the ls, pwd, and whoami commands on one line, sequentially one after the other:
ls;pwd;whoami
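To illustrate the exit-status point above with a quick example, the command after the semicolon still runs even when the one before it fails:
false ; echo "this still runs even though false failed"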
Please correct me if I am not understanding your question correctly.