wget in script not working when called from cron - cron

I've read a load of similar cases and can't for the life of me figure this one out...
I'm running a wget command inside a .sh script which is called from cron on reboot as follows:
@reboot /home/user/reboot_script.sh
The .sh script starts with
#!/bin/bash
And I have done chmod +x reboot_script.sh
The line that fails is:
Either
mac=$(</home/user/mac.txt)
Which may not be providing the content to the variable in the wget
OR
/usr/bin/wget "http://my.domain.com/$mac/line.txt" -O /home/user/line.txt
If I run the script from the command line, it works absolutely fine, but when it runs from cron on reboot, the script runs and line.txt is saved as an empty file (0 bytes).
I've looked at file permissions, absolute paths, everything I can think of, but I've been staring at this for hours now.
Any help would be appreciated. Thanks.

@reboot is too early in the boot process, IMHO. You should create a systemd unit that waits for the network.
As a workaround, you can add a
sleep 30
or better:
until ping -c1 domain.com &>/dev/null; do
sleep 5
done
before your wget
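For reference, here is a minimal sketch of how that wait loop might sit in the reboot script, reusing the paths and URL from the question (it assumes my.domain.com answers ping; any reliably reachable host would do):
#!/bin/bash
# Wait until the network is actually up before fetching anything
until ping -c1 my.domain.com &>/dev/null; do
    sleep 5
done
mac=$(</home/user/mac.txt)
/usr/bin/wget "http://my.domain.com/$mac/line.txt" -O /home/user/line.txt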

Related

Script runs fully when run manually but misses commands when run from another script

I have two scripts that I'm testing to automate starting services on my server; however, they behave weirdly.
The first script is
#!/bin/sh
screen -dmS Test_Screen
sleep 1
sudo sh cd.sh
echo "finished"
This runs perfectly; however, the script it calls does not, and is as follows:
#!/bin/sh
screen -S Test_Screen -X stuff "cd /home/Test"
sleep 1
screen -S Test_Screen -X eval "stuff \015"
sleep 1
echo "Complete"
The second script runs perfectly if I run it from the command line and will cd into the directory within the screen. However, if it runs from the first script, it will not cd into the correct directory within the screen, but it will still print "Complete".
I'm using CentOS 6.7 and the latest version of GNU screen.
Any ideas?
This seems to be a problem with session nesting.
In your first script you create a session named Test_Screen.
In your second script the -S parameter tells screen to create a session of the same name. This might cause screen to exit and not cd into the correct directory.
You could move the cd command in front of the sudo sh cd.sh call and remove those screen calls from the second script, leaving only:
stuff \015
echo "Complete"
Using the correct screen flags should also work:
#!/bin/sh
screen -dr Test_Screen -X stuff "cd /home/Test"
sleep 1
screen -dr Test_Screen -X eval "stuff \015"
sleep 1
echo "Complete"
For a more modern alternative to screen, have a look at tmux.
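For completeness, a rough tmux equivalent of the two screen calls might look like this (an untested sketch; the session name is kept from the question):
#!/bin/sh
# Create a detached tmux session and send it the cd command followed by Enter
tmux new-session -d -s Test_Screen
tmux send-keys -t Test_Screen "cd /home/Test" Enter
echo "Complete"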
OK, so this turned out really weird. After posting, I tried a couple of things on a CentOS 6.7 Hyper-V test environment and got the exact same issue. However, later in the day we ended up changing service provider and upgrading to CentOS 7 in the process. I am not sure why, but since the upgrade the script runs perfectly, and I was able to merge the two scripts into one to make it more efficient. If anyone knows why the upgrade fixed it, feel free to let me know.

Script doesn't run when run through crontab -e

So I have a script for waking up my computer using RTC... The script works fine when run manually, but when I try to run it through crontab -e it doesn't work. I am not very familiar with cron, so maybe I am doing something wrong.
At the moment I use the command:
@reboot /nikos/script/auto.sh
just to try and see if it is working... I tried some other ways (using the full path and a few others) but nothing worked.
The code of the script:
#!/bin/bash
sh -c "echo 0 > /sys/class/rtc/rtc0/wakealarm"
sh -c "echo `date '+%s' -d '+ 420 minutes'` > /sys/class/rtc/rtc0/wakealarm"
Any help is appreciated.
EDIT:
In order to see if it worked I run:
cat /proc/driver/rtc
and I see that the RTC alarm is not enabled.
I see one thing right off: you need to provide absolute paths to /bin/sh, /bin/echo, and /bin/date.
Putting absolute paths in scripts executed from cron solves most problems. If your scripts run fine on the command line but not in cron, that is usually the culprit:
/bin/sh -c "/bin/echo 0 > /sys/class/rtc/rtc0/wakealarm"
/bin/sh -c "/bin/echo `/bin/date '+%s' -d '+ 420 minutes'` > /sys/class/rtc/rtc0/wakealarm"
Answers to another post on Stack Exchange also mention cron having issues with @reboot scripts for many reasons. Check here: https://unix.stackexchange.com/questions/109804/crontabs-reboot-only-works-for-root
It may be worth your while to run it by adding it to a bootup script such as /etc/rc.d/rc.local instead of using cron. You will still have best results providing absolute paths to commands to make sure they are accessible.
Also: test whether @reboot is working properly on your system. Not all versions of cron execute it properly. Add this entry to test: @reboot /bin/echo "@reboot works" > /tmp/reboot_test
If your target directory is not yet available by the time your reboot script starts, that will also cause a problem.
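Pulling those points together, a sketch of the wakealarm script with absolute paths plus a breadcrumb file for debugging silent @reboot failures might look like this (the /tmp/rtc_debug.txt name is purely illustrative):
#!/bin/bash
# Clear any existing alarm, then set a new one 420 minutes from now, using absolute paths only
/bin/sh -c "/bin/echo 0 > /sys/class/rtc/rtc0/wakealarm"
/bin/sh -c "/bin/echo `/bin/date '+%s' -d '+ 420 minutes'` > /sys/class/rtc/rtc0/wakealarm"
# Leave a trace so a failed @reboot run is easy to diagnose
/bin/cat /proc/driver/rtc > /tmp/rtc_debug.txt 2>&1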

Simple bash script runs asynchronously when run as a cron job

I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
Now, when I run this script from the command line in RHEL, it works perfectly fine.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it's somehow running the above 3 commands simultaneously. Because of that, things are getting done out of order (my local database has finished dumping and been transferred before the #1 zip job is actually complete).
Has anyone run across such a strange scenario? As the most simple fix, is there a way to force a script to run synchronously? Maybe add some kind of command to wait for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first line, so while the SSH command has been issued, it has not yet completed before my second line triggers and the SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly using backgrounding (&), everything should run one by one, each command waiting until the prior one finishes.
Perhaps you are actually seeing overlapping prior executions by cron? If so, you can prevent multiple simultaneous runs by calling your script with flock,
e.g. midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
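If it is more convenient to keep the locking inside the script itself rather than in the crontab entry, a common variant is the sketch below (the lock file path matches the one above):
#!/bin/bash
# Refuse to start if a previous run still holds the lock on /tmp/backup.lock
exec 9>/tmp/backup.lock
flock -n 9 || { echo "previous backup still running, exiting"; exit 1; }
# ... rest of backup.sh ...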
If you want to run commands in sequential order you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order. If we take three commands separated by semicolons, the second command will run after the first command completes, and the third command will run only after the second command completes. One point to note is that the second command runs regardless of the first command's exit status.
Execute the ls, pwd, and whoami commands in one line, sequentially, one after the other:
ls;pwd;whoami
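To make the exit-status point concrete, compare ; with &&, which does stop on failure:
false; echo "runs anyway"        # with ; the second command runs even though false failed
false && echo "never printed"    # with && the second command is skipped after a failure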
Please correct me if I am not understanding your question correctly.

How can I trigger a delayed system shutdown from a shell script?

On my private network I have a backup server, which runs a bacula backup every night. To save energy I use a cron job to wake the server, but I haven't found out how to properly shut it down after the backup is done.
By the means of the bacula-director configuration I can call a script during the processing of the last backup job (i.e. the backup of the file catalog). I tried to use this script:
#!/bin/bash
# shutdown server in 10 minutes
#
# ps, 17.11.2013
bash -c "nohup /sbin/shutdown -h 10" &
exit 0
The script shuts down the server, but apparently it does not return until the shutdown itself, and as a consequence the last backup job hangs until the shutdown. How can I make the script fire the shutdown and return immediately?
Update: After an extensive search I came up with an (albeit pretty ugly) solution:
The script run by bacula looks like this:
#!/bin/bash
at -f /root/scripts/shutdown_now.sh now + 10 minutes
And the second script (shutdown_now.sh) looks like this:
#!/bin/bash
shutdown -h now
Actually, I found no obvious way to pass the required parameters of shutdown within the syntax of the at command. Maybe someone can give me some advice here.
Depending on your backup server’s OS, the implementation of shutdown might behave differently. I have tested the following two solutions on Ubuntu 12.04 and they both worked for me:
As the root user I have created a shell script with the following content and called it in a bash shell:
shutdown -h 10 &
exit 0
The exit code of the script in the shell was correct (tested with echo $?). The shutdown was still in progress (tested with shutdown -c).
This bash function call in a second shell script worked equally well:
my_shutdown() {
shutdown -h 10
}
my_shutdown &
exit 0
No need to create a second BASH script to run the shutdown command. Just replace the following line in your backup script:
bash -c "nohup /sbin/shutdown -h 10" &
with this:
echo "/sbin/poweroff" | /usr/bin/at now + 10 min >/dev/null 2>&1
Feel free to adjust the time interval to suit your preference.
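If you go the at route, the queued job can also be inspected or cancelled from the shell (the job number below is just an example; use the one atq reports):
atq        # list pending at jobs
atrm 3     # cancel job number 3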
If you can become root (either by logging in as root or via sudo -i), this works (tested on Ubuntu 14.04):
# shutdown -h 20:00 &    # halts the machine at 8 pm
No shell script needed. I can then log out and log back in, and the process is still there. Interestingly, if I try this with sudo on the command line, then when I log out the process does go away!
BTW, just to note that I also use this command to do occasional reboots after everyone has gone home.
# shutdown -r 20:00 &    # reboots the machine at 8 pm

Run script with rc.local: script works, but not at boot

I have a node.js script which needs to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I saw exactly what happened, and manager.js now works great. Searching SO, I found I had to place this in my /etc/rc.local. Also, I learned to point the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should run as a daemon, so the last character is an &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, if I perform a reboot, no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js does not work. What is happening?
In this example of an rc.local script I use I/O redirection at the very first line of execution to send stdout and stderr to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
On some Linux distributions (CentOS and RHEL, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken and /etc/rc.local is a separate file, then changes to /etc/rc.local won't be seen at bootup; the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at bootup.)
Sounds like on dimadima's system they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local.
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back, or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
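A quick way to check for, and restore, the link is sketched below (back up the stray copy first):
ls -l /etc/rc.local                      # should show: /etc/rc.local -> /etc/rc.d/rc.local
mv /etc/rc.local /etc/rc.local.bak       # keep the stray file, just in case
ln -s /etc/rc.d/rc.local /etc/rc.local   # recreate the symbolic link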
I ended up with upstart, which works fine.
In Ubuntu I noticed there are 2 files. The real one is /etc/init.d/rc.local; it seems the other /etc/rc.local is bogus?
Once I modified the correct one (/etc/init.d/rc.local) it did execute just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon you should close stdin by adding 0<&- before the &.
I had the same problem (on CentOS 7) and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, then usually you don't have a chance to touch the real hardware with your hands, so you don't see the configuration interface when booting for the first time, and of course cannot configure it. As a result, the firstboot service will always be in the way of rc.local. The solution is to disable firstboot by doing:
sudo chkconfig firstboot off
If you are not sure why your rc.local does not run, you can always check the /etc/rc.d/rc file, because that file will always run and call the other subsystems (e.g. rc.local).
I got my script to work by editing /etc/rc.local and then issuing the following three commands:
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
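On Debian/Ubuntu, update-rc.d may warn about missing LSB information; if so, adding a minimal LSB header block at the top of the script usually quiets it. A sketch (the Short-Description text is just a placeholder):
#!/bin/sh
### BEGIN INIT INFO
# Provides:          filename
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start my script at boot
### END INIT INFO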
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from experience that the most reliable way to run your script at system boot time is to use the @reboot directive in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node) it will work.
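For example, the rc.local line from the question with absolute paths filled in might look like this (/bin/su and /usr/bin/node are typical locations; verify them with which su and which node):
/bin/su www-data -c '/usr/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'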
It is my understanding that if you place your script in a certain run level, you should use ln -s to link the script into the level you want it to run in.
First make the script executable:
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local:
sh /path/of/the/file.sh
before the exit 0 line in rc.local.
Next, make rc.local executable:
sudo chmod 755 /etc/rc.local
Then start the rc.local service:
sudo /etc/init.d/rc.local start
This will initiate rc.local.
Now reboot the system.
Done.
I found that because I was using a network-oriented command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of my script. I don't know why, but it seems that when the script runs, the network interfaces aren't properly configured yet, and this just allows some time for DHCP to finish. I don't fully understand it, but I suppose you could give it a try.
I had exactly the same issue: the script was running fine locally, but on reboot/power-on it was not.
I resolved the issue by changing the file path. Basically, you need to give the complete path in the script. When running locally the file can be accessed, but when running at reboot a relative path will not be understood.
1. I do not recommend using root to run apps such as a node app. You can do it, but you may hit more exceptions.
2. rc.local normally runs as the root user, so if your script should run as another user such as www, you should make sure the PATH and the rest of the environment are OK.
3. I find an easy way to run a service as a particular user is:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs on startup. If you reboot and want the script to execute, it needs to go into the rc.0 file starting with the K99 prefix.
