Unable to run shell script in crontab - linux

I am unable to make a script execute successfully from crontab.
When the script is executed manually it works fine, but when it is run from crontab it gives errors. The manual invocation that works:
cd /home/admin/git/Repo
./lunchpad2.sh
The script is added to crontab as follows:
sudo crontab -e
30 13 * * * /home/admin/git/Repo/lunchpad2.sh > /home/admin/git/Repo/outcome.err
lunchpad2.sh has 744 permissions set.
The script itself:
#!/bin/bash -p
PATH=$PATH:/home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
docker-compose up -d --build
echo "--> Completed!"
The errors that are generated:
/home/admin/git/Repo/lunchpad2.sh: line 7: docker-compose: command not found
mv: cannot stat ‘dc_conf_standby.py’: No such file or directory
mv: cannot stat ‘dc_conf.py’: No such file or directory
mv: cannot stat ‘dc_conf_aboutready.py’: No such file or directory
/home/admin/git/Repo/lunchpad2.sh: line 15: docker-compose: command not found

I see two issues here:
You need to either cd in the script or in the cron job: cron runs the command from your home directory, not from the script's directory. You can echo "$PWD" from the script to confirm.
You need to give the full path to the docker-compose executable, because cron's minimal PATH usually does not include it (run which docker-compose to find the path).
#!/bin/bash -p
cd /home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
/usr/bin/docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
/usr/bin/docker-compose up -d --build
echo "--> Completed!"
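To see for yourself which working directory and PATH cron actually gives your script, you can log them temporarily (a quick diagnostic sketch; the log location /tmp/cron_env.log is my choice, not from the answer):

```shell
#!/bin/bash
# Diagnostic sketch: dump the working directory and PATH that cron provides,
# so you can compare them with your interactive shell.
LOG=/tmp/cron_env.log   # hypothetical log location
{
  echo "PWD=$PWD"
  echo "PATH=$PATH"
} > "$LOG"
```

Run it from a temporary crontab entry, wait a minute, and inspect the log; cron's PATH is typically just /usr/bin:/bin, which is why docker-compose was not found.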

Related

How to run script multiple times and after every execution of command to wait until the device is ready to execute again?

I have this bash script:
#!/bin/bash
rm /etc/stress.txt
cat /dev/smd10 | tee /etc/stress.txt &
for ((i=0; i< 1000; i++))
do
echo -e "\nRun number: $i\n"
# wait until the module restarts and is ready for the next restart
dmesg | grep ERROR
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
echo -e "\nADB device booted successfully\n"
done
I want to restart the module 1000 times using this script.
The module is like an Android device with Linux inside it, but I work from Windows.
AT+CFUN=1,1 is the reset command.
After I push the script, I need a command that waits for the module to come back up after every restart, so the script can run 1000 times. Then I pull the .txt file and save all the output.
Which command should I use?
I have tried commands like wait, sleep, watch, adb wait-for-device, ps aux | grep ... and nothing works.
Can someone help me with this?
I found the solution. This is how my script actually looks:
#!/bin/bash
cat /dev/smd10 &
TEST=$(cat /etc/output.txt)
RESTART_TIMES=1000
if [[ $TEST != $RESTART_TIMES ]]
then
echo $((TEST+1)) > /etc/output.txt
dmesg
echo -e 'AT+CFUN=1,1\r\n' > /dev/smd10
fi
These are the steps that you need to do:
adb push /path/to/your/script /etc/init.d
cd /etc
echo 0 > output.txt - create the output file with the counter initialized to 0
cd init.d
ls - you should see rc5.d
cd ../rc5.d - go inside
ln -s ../init.d/yourscript.sh S99yourscript.sh
ls - you should see S99yourscript.sh
cd ../init.d - return to the init.d directory
chmod +x yourscript.sh - make your script executable
./yourscript.sh
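The symlink mechanism used in the steps above can be exercised in a sandbox before touching the device (a sketch using a temp directory in place of the device's /etc; the script and link names are taken from the answer):

```shell
# Sandboxed sketch of the init-link steps (a temp dir stands in for /etc).
ETC=$(mktemp -d)
mkdir -p "$ETC/init.d" "$ETC/rc5.d"
printf '#!/bin/sh\necho running\n' > "$ETC/init.d/yourscript.sh"
chmod +x "$ETC/init.d/yourscript.sh"
echo 0 > "$ETC/output.txt"                                   # counter file, initialized to 0
ln -s ../init.d/yourscript.sh "$ETC/rc5.d/S99yourscript.sh"  # S99 = start late in runlevel 5
"$ETC/rc5.d/S99yourscript.sh"                                # runs the script via the symlink
```

The relative link target (../init.d/...) is what makes the S99 link resolve correctly from inside rc5.d, exactly as on the device.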

Sourcing files in shell script vs sourcing on command line

My shell script is not behaving exactly the same as when I type the commands manually into a console. I am attempting to find and source some setup files in a shell script as follows:
#!/bin/bash
TURTLE_SHELL=bash
# source setup.sh from same directory as this file
_TURTLE_SETUP_DIR=$(builtin cd "`dirname "${BASH_SOURCE[0]}"`" > /dev/null && pwd)
. "$_TURTLE_SETUP_DIR/turtle_setup.sh"
This bash file calls a .sh file:
#!/usr/bin/env sh
_TURTLE_ROS_SETUP_DIR=$_TURTLE_SETUP_DIR/../devel
if [ -z "$TURTLE_SHELL" ]; then
TURTLE_SHELL=sh
fi
if [ -d "$PX4_FIRMWARE_DIR/integrationtests" ]; then
if [ -f "$PX4_FIRMWARE_DIR/integrationtests/setup_gazebo_ros.bash" ]; then
. "$PX4_FIRMWARE_DIR/integrationtests/setup_gazebo_ros.bash" "$PX4_FIRMWARE_DIR"
fi
fi
if [ "$TURTLE_SHELL" = "bash" ]; then
if [ -f "$_TURTLE_ROS_SETUP_DIR/setup.bash" ]; then
source "$_TURTLE_ROS_SETUP_DIR/setup.bash"
fi
else
if [ "$TURTLE_SHELL" = "sh" ]; then
if [ -f "$_TURTLE_ROS_SETUP_DIR/setup.sh" ]; then
. "$_TURTLE_ROS_SETUP_DIR/setup.sh"
fi
fi
fi
The line in question is:
. "$PX4_FIRMWARE_DIR/integrationtests/setup_gazebo_ros.bash" "$PX4_FIRMWARE_DIR"
I have made sure that this code is actually running and that my environment variables are correct. If I run this command on the command line, everything works well. However, the same is not true when the file is sourced via the shell script. Why is this? Is the environment of a shell script somehow different from that of the command line? Also, how can I fix this problem?
Edit:
I am sourcing either the .bash or the .sh script, depending upon which shell I am using.
Edit 2:
I am sourcing this script. Thus everything is run in my default bash terminal, within the same terminal and not in one spawned from a child process. Why is the script not sourcing setup_gazebo_ros.bash within the current shell?
It's the same reason why you source the env script rather than run it: when you run a script it runs in a new shell, and the variables are not transferred back to the parent shell.
To illustrate
$ cat << ! > foo.sh
> export foo='FOO'
> !
$ chmod +x foo.sh
$ ./foo.sh
$ echo $foo
$ source ./foo.sh
$ echo $foo
FOO

Script is trying to move a directory instead of a file - need assistance

The script appears to be correct. However, after FTP'ing all the files in the directory, it gives me an error saying that it is trying to move a directory into a subdirectory of itself.
Any ideas on why this is occurring?
mysql -u ????? -p????? -h ????? db < $SCRIPT_FOLDER/script.sql > script.xls
echo "###############################################################################"
echo "FTP the files"
#for FILE in `ls $SOURCE_FOLDER/`
for FILE in $SOURCE_FOLDER/*.xls
do
echo "# Uploading $SOURCE_FOLDER/$FILE" >> /tmp/CasesReport.copy.out
sshpass -p ???? sftp -oBatchMode=no -b - user#ftp << END
cd /source/directory/
put $SOURCE_FOLDER/$FILE
bye
END
echo "Moving $FILE to $SOURCE_FOLDER/history/"
mv $SOURCE_FOLDER/$FILE $SOURCE_FOLDER/history/$FILE
$FILE already contains $SOURCE_FOLDER, so your put command is doubling the path.
Example
$ cd /tmp
$ touch foo.txt bar.txt
$ cd
$ SOURCE_FOLDER=/tmp
$ for FILE in $SOURCE_FOLDER/*.txt; do echo "put $SOURCE_FOLDER/$FILE"; done
put /tmp//tmp/bar.txt
put /tmp//tmp/foo.txt
Inside the for loop, just use "$FILE".
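A corrected version of the loop might look like the sketch below (the sftp details are omitted and the directory /tmp/reports is made up for the demo; the history move uses basename so the destination path isn't doubled either):

```shell
#!/bin/bash
# Sketch of the corrected loop: $FILE already holds the full path,
# so use it directly and strip the directory only where needed.
SOURCE_FOLDER=/tmp/reports             # assumed location for the demo
mkdir -p "$SOURCE_FOLDER/history"
touch "$SOURCE_FOLDER/a.xls" "$SOURCE_FOLDER/b.xls"
for FILE in "$SOURCE_FOLDER"/*.xls; do
  echo "put $FILE"                     # what the sftp batch would contain
  mv "$FILE" "$SOURCE_FOLDER/history/$(basename "$FILE")"
done
```

Quoting "$FILE" also keeps the loop safe for filenames containing spaces.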

script to copy, install and execute on multiple hosts

I am trying to copy few files into multiple hosts, install/configure those on each with running specific commands depending on OS type. The IP addresses for each host are read from host.txt file.
It appears that when I run the script, the commands do not execute on the remote hosts. Can someone help identify the issues with this script? Sorry for the basic question, I am quite new to shell scripting.
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
echo "####Installing hqagent####"
while read host; do
scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
then
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
cd /etc/init.d
chmod 755 hqagent.sh
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
else
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
rm -rf /opt/hqagent.sh
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1
cd /etc/init.d
ln -s /opt/hqagent/agent-current/bin/hq-agent.sh hqagent.sh
cd /etc/init.d/rc3.d/
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
cd ../rc5.d
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
fi
done < hosts.txt
error:
tar (child): agent-x86-64-linux-5.8.1.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
mv: cannot stat `/opt/agent.properties.nondmz': No such file or directory
mkdir: cannot create directory `/opt/hqagent/': File exists
ln: creating symbolic link `/opt/hqagent/agent-current': File exists
useradd: user 'hqagent' already exists
groupadd: group 'hqagent' already exists
chown: cannot access `/opt/agent-5.8.1/': No such file or directory
chmod: cannot access `hqagent.sh': No such file or directory
error reading information on service hqagent.sh: No such file or directory
-bash: line 1: 10.145.34.6: command not found
-bash: line 2: 10.145.6.10: command not found
./hq-install.sh: line 29: /opt/agent-5.8.1/bin/hq-agent.sh: No such file or directory
It appears that the problem is that you run this script on the "master" server, but somehow expect the branches of your if-statement to be run on the remote hosts. You need to factor those branches out into their own files, copy them to the remote hosts along with the other files, and in your if-statement, each branch should just be a ssh command to the remote host, triggering the script you copied over.
So your master script would look something like:
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
# Scripts containing the stuff you want done on the remote hosts
centos_setup=centos_setup.sh
other_setup=other_setup.sh
echo "####Installing hqagent####"
while read host; do
echo " ++ Copying files to $host"
scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
echo -n " ++ Running remote part on $host "
if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
then
echo "(centos)"
scp $centos_setup root@$host:/opt
ssh -n root@$host "/opt/$centos_setup"
else
echo "(generic)"
scp $other_setup root@$host:/opt
ssh -n root@$host "/opt/$other_setup"
fi
done < hosts.txt
The contents of the two auxiliary scripts would be the current contents of the if-branches in your original.

TERM environment variable not set

I have a file.sh with the following content; when run it shows: TERM environment variable not set.
smbmount //172.16.44.9/APPS/Interfas/HERRAM/sc5 /mnt/siscont5 -o iocharset=utf8,username=backup,password=backup2011,r
if [ -f /mnt/siscont5/HER.TXT ]; then
echo "No puedo actualizar ahora"
umount /mnt/siscont5
else
if [ ! -f /home/emni/siscont5/S5.TXT ]; then
echo "Puedo actualizar... "
touch /home/emni/siscont5/HER.TXT
touch /mnt/siscont5/SC5.TXT
mv -f /home/emni/siscont5/CCORPOSD.DBF /mnt/siscont5
mv -f /home/emni/siscont5/CCTRASD.DBF /mnt/siscont5
rm /mnt/siscont5/SC5.TXT
rm /home/emni/siscont5/HER.TXT
echo "La actualizacion ha sido realizada..."
else
echo "No puedo actualizar ahora: Interfaz exportando..."
fi
fi
umount /mnt/siscont5
echo "/mnt/siscont5 desmontada..."
You can check whether it's really not set by running the command set | grep TERM.
If it is not set, you can set it like this:
export TERM=xterm
Using a terminal command, e.g. clear, in a script called from cron (which has no terminal) will trigger this error message. In your particular script, the smbmount command expects a terminal, in which case the work-arounds above are appropriate.
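A common guard (my suggestion, not from the answers above) is to test for a terminal before running tty-dependent commands, so the same script works both interactively and from cron:

```shell
#!/bin/bash
# Only run terminal-only commands when stdout is actually a terminal.
if [ -t 1 ]; then
  clear   # safe: we have a terminal
fi
# Non-tty work continues either way, e.g. under cron:
echo "script continues"
```

`[ -t 1 ]` is true only when file descriptor 1 (stdout) is attached to a terminal, which is never the case under cron.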
You've answered the question with this statement:
Cron calls this .sh every 2 minutes
Cron does not run in a terminal, so why would you expect one to be set?
The most common reason for getting this error message is that the script attempts to source the user's .profile, which does not check that it's running in a terminal before doing something tty-related. Workarounds include using a shebang line like:
#!/bin/bash -p
which causes the sourcing of system-level profile scripts that (one hopes) do not attempt to do anything too silly and have guards around code that depends on being run from a terminal.
If this is the entirety of the script, then the TERM error is coming from something other than the plain content of the script.
You can replace :
export TERM=xterm
with :
export TERM=linux
It works even on a bare (virgin) system.
SOLVED: On Debian 10, by adding export TERM=xterm to the script installed in crontab as root but executed as www-data.
$ crontab -e
*/15 * * * * /bin/su - www-data -s /bin/bash -c '/usr/local/bin/todos.sh'
The file /usr/local/bin/todos.sh:
#!/bin/bash -p
export TERM=xterm && cd /var/www/dokuwiki/data/pages && clear && grep -r -h '|(TO-DO)' > /var/www/todos.txt && chmod 664 /var/www/todos.txt && chown www-data:www-data /var/www/todos.txt
If you are using the Docker PowerShell image, set the TERM environment variable with the -e flag:
docker run -i -e "TERM=xterm" mcr.microsoft.com/powershell
