Online bash script that calls another script ignores functions and commands - linux

I wrote a bash script which downloads another script. To run the first script, I use
curl -s get.domain.com | bash
I use the following script to do the downloading:
#!/bin/bash
setup_dir=$(dirname "$(readlink -f "$0")")
setupbuild=${setup_dir}/setupbuild
fileslocation=files.domain.com
wget -r -np -nH -A .sh ${fileslocation} -P ${setupbuild}
find ${setupbuild} -name "*.sh" -exec chmod +x {} +
exec ${setupbuild}/menu.sh
At the end of the script, I want to run the downloaded script. This is where things go wrong, and I don't understand why.
The menu of the called script appears to work. However, when a choice is made, the echo is ignored and the exit doesn't do anything either.
If I start the script from the prompt, everything works as expected.
I have tried putting an if in various places, but that didn't help either. It seems like certain things in the called script are being ignored.
menu.sh:
#!/bin/bash

main_menu() {
    clear
    echo "1) Option 1"
    echo "2) Option 2"
    echo "3) Other option"
    echo "4) Quit"
    read -p "Enter your choice: " main_menu_choice
    case $main_menu_choice in
        1)
            option1
            main_menu
            ;;
        2)
            option2
            main_menu
            ;;
        3)
            other_option
            main_menu
            ;;
        4)
            echo "Exit"
            exit 1
            ;;
        *)
            echo "Invalid option. Please try again."
            sleep 2
            main_menu
            ;;
    esac
}

option1() {
    echo "You chose option1."
    sleep 2
}

option2() {
    echo "You chose option2."
    sleep 2
}

other_option() {
    echo "You chose other option."
    sleep 2
}

main_menu

The script is reading its standard input from the pipe, so read can't get responses to its prompts from the terminal.
When it executes menu.sh, you can redirect input back to the terminal:
exec ${setupbuild}/menu.sh < /dev/tty
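A quick way to see the effect in isolation (a minimal sketch; any piped command will do): even when stdin comes from a pipe, read can still talk to the user if it is pointed at /dev/tty.
# stdin is the pipe here, so a plain `read` would consume "piped data";
# redirecting from /dev/tty makes `read` prompt on the terminal instead.
echo "piped data" | bash -c 'read -p "Enter your choice: " c < /dev/tty; echo "You typed: $c"'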

Related

if statement not working in cron

I have the following code (record.sh):
cd $(dirname $0)
dt=$(date '+%d/%m/%Y %H:%M:%S')
echo $dt
read action < /home/nfs/sauger/web/pi/action.txt
echo $action
if [[ $action == *"start"* ]]
then
    echo "start recording"
    ./gone.sh
    exit 1
elif [[ $action == *"stop"* ]]
then
    echo "stop recording"
    ./gone.sh
    exit 1
else
    : # More stuff done here
fi
When I run this script manually the output is the following:
19/01/2016 19:07:11
start
start recording
If the same script is run via a (root) cronjob, the output is the following:
19/01/2016 19:07:01
start
As you can see, the file "action.txt" has been read without a problem ("start" is logged both times), so this should not be an issue of permissions or wrong paths. But when run as a cronjob, the if statement never fires: no "start recording" appears.
So my question is: why does the if statement work when I call the script manually, but not when it is run via cron?
Your script is written for bash; these errors are almost certainly indicative of it being run with /bin/sh instead.
Either add an appropriate shebang and ensure that it's being called in a way that honors it (/path/to/script rather than sh /path/to/script), or fix it to be compatible. For instance:
case $action in
    *start*)
        echo "start recording"
        ./gone.sh
        exit 1
        ;;
    *stop*)
        echo "stop recording"
        ./gone.sh
        exit 1
        ;;
esac
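If you would rather keep the bash-only [[ test, another way to ensure bash runs the script is to name the interpreter explicitly in the crontab entry. A sketch (the script path is assumed from the question; the schedule and log redirection are only illustrative):
* * * * * /bin/bash /home/nfs/sauger/web/pi/record.sh >> /tmp/record.log 2>&1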

Bash script output not going to stdout

I have a build process, kicked off by Make, that executes a lot of child scripts.
A couple of these child scripts require root privileges, so instead of running everything as root, or everything as sudo, I'm trying to only execute the scripts that need to be as root, as root.
I'm accomplishing this like so:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}
Arg $1 is the user to run the script as; arg $2 is the script.
Arg $1 is either root (obtained with $(whoami), since everything runs under sudo) or the current user's account (obtained with $(logname)).
The entire build is kicked off as:
sudo make all
Sample from the Makefile:
LOG="runtime.log"
ROTATE_LOG:=$(shell bash ./scripts/utils/rotate_log.sh)

system:
	/bin/bash -c "time ./scripts/system.sh 2>&1 | tee ${LOG}"
My problem is... none of the child scripts are printing output to stdout. I believe it to be some sort of issue with an almost recursive call of su root... but I'm unsure. From my understanding, these scripts should already be outputting to stdout, so perhaps I'm mistaken about where the output is going?
To be clear, I'm seeing no output either in the logfile or on the terminal (stdout).
Updating for clarity:
Previously, I just ran all the scripts either with sudo or just as the logged-in user... which, with my makefile above, would print to the terminal (stdout) and the logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... there's just no display "that it's working" and no logs.
UPDATE
Here are some snippets:
system.sh snippet:
execute_script() {
    echo "Executing as user $3: $2"
    RETURN=$(execute_as_user $3 ${SYSTEM_SCRIPTS}/$2)
    if [ ${RETURN} -ne ${OK} ]
    then
        error $1 $2 ${RETURN}
    fi
}
build_package() {
    local RETURN=0
    case "$1" in
        system)
            declare -a scripts=(\
                "rootfs.sh" \
                "base_files.sh" \
                "busybox.sh" \
                "iana-etc.sh" \
                "kernel.sh" \
                "firmware.sh" \
                "bootscripts.sh" \
                "network.sh" \
                "dropbear.sh" \
                "wireless_tools.sh" \
                "e2fsprogs.sh" \
                "shared_libs.sh"
            )
            for SCRIPT_NAME in "${scripts[@]}"; do
                execute_script $1 ${SCRIPT_NAME} $(logname)
                echo ""
                echo -n "${SCRIPT_NAME}"
                show_status ${OK}
                echo ""
            done
            # finalize base system
            echo ""
            echo "Finalizing base system"
            execute_script $1 "finalize.sh" $(whoami)
            echo ""
            echo -n "finalize.sh"
            show_status ${OK}
            echo ""
            # package into tarball
            echo ""
            echo "Packing base system"
            execute_script $1 "archive.sh" $(whoami)
            echo ""
            echo -n "archive.sh"
            show_status ${OK}
            echo ""
            echo ""
            echo -n "Build System: "
            show_status ${OK}
            ;;
        *)
            echo "$1 is not supported!"
            exit 1
    esac
}
Sample child script executed by system.sh:
cd ${CLFS_SOURCES}/
tar -xvjf ${PKG_NAME}-${PKG_VERSION}.tar.bz2
cd ${CLFS_SOURCES}/${PKG_NAME}-${PKG_VERSION}/
make distclean
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi
ARCH="${CLFS_ARCH}" make defconfig
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi
# fixup some bugs with musl-libc
sed -i 's/\(CONFIG_\)\(.*\)\(INETD\)\(.*\)=y/# \1\2\3\4 is not set/g' .config
sed -i 's/\(CONFIG_IFPLUGD\)=y/# \1 is not set/' .config
etc...
Here's the entire system.sh script:
https://github.com/SnakeDoc/LiLi/blob/master/scripts/system.sh
(I know the project is messy... it's a learn-as-you-go style project)
Just a guess, but you're probably not calling your function, or not calling it properly:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}
execute_as_user "$@"
I also noticed that you're not passing any arguments to the script at all. Is that intentional?
./scripts/system.sh ???
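Also worth checking while debugging, since the question asks where the output might be going: RETURN=$(execute_as_user ...) in the execute_script snippet above is a command substitution, and command substitution captures the child's stdout into the variable instead of letting it reach the terminal or the tee pipeline. A minimal sketch of the effect (the function here is purely illustrative):
#!/bin/bash
# A stand-in for a child build script: prints progress to stdout.
child() { echo "building..."; }

child                 # "building..." appears on the terminal
out=$(child)          # "building..." is captured into $out; nothing is printed
echo "captured: $out"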

Why can't this bash script get the pid of the background process with $!

I have a script like this:
su lingcat -c PHPRC\=\/home\/lingcat\/etc\/php5\
PHP_FCGI_CHILDREN\=4\ \/usr\/bin\/php\-loop\.pl\ \/usr\/bin\/php5\-cgi\ \-b\
127\.0\.0\.1\:9006\ \>\>\/home\/lingcat\/logs\/php\.log\ 2\>\&1\ \<\/dev\/null\ \&\
echo\ \$\!\ \>\/var\/php\-nginx\/135488849520817\.php\.pid
This works, but there are too many backslashes in the script and they make the code unreadable. So I wrote a new shell script:
#!/bin/sh
case "$1" in
    'start')
        su biergaizi -c "PHPRC=/home/biergaizi/etc/php5 PHP_FCGI_CHILDREN=2
        /usr/bin/php-loop.pl /usr/bin/php-cgi -b /var/run/virtualhost/php5-fpm-biergaizi.test.sock >>/home/biergaizi/logs/php.log 2>&1 </dev/null &
        echo $! > /var/php-nginx/biergaizi.test.php.pid"
        RETVAL=$?
        ;;
    'stop')
        su biergaizi -c "kill `cat /var/php-nginx/biergaizi.test.php.pid` ; sleep 1"
        RETVAL=$?
        ;;
    'restart')
        $0 stop ; $0 start
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 { start | stop }"
        RETVAL=1
        ;;
esac
exit
But /var/php-nginx/biergaizi.test.php.pid is empty.
What's wrong?
The .pid file is empty because $! gets substituted by the shell executing your script, instead of by the shell executing the commands you pass through su. And since there is no recently started background command in your script, it substitutes an empty string. So the shell started by su simply executes echo > /var/php-nginx/biergaizi.test.php.pid.
To prevent that, quote the command you pass to su with single quotes instead of double quotes. It is better to do that for the "stop" command as well. Like this:
su biergaizi -c 'PHPRC=/home/biergaizi/etc/php5 PHP_FCGI_CHILDREN=2
/usr/bin/php-loop.pl /usr/bin/php-cgi -b /var/run/virtualhost/php5-fpm-biergaizi.test.sock >>/home/biergaizi/logs/php.log 2>&1 </dev/null &
echo $! > /var/php-nginx/biergaizi.test.php.pid'
And this:
su biergaizi -c 'kill `cat /var/php-nginx/biergaizi.test.php.pid` ; sleep 1'
See http://www.gnu.org/software/bash/manual/html_node/Quoting.html for details.
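A quick way to see the difference (a minimal sketch, using bash -c in place of su, which quotes the same way):
# Double quotes: the outer shell expands $! before bash -c ever runs.
# With no prior background job in the outer shell, nothing follows "outer:".
bash -c "echo outer: $!"
# Single quotes: $! survives untouched and is expanded by the inner shell,
# so it refers to the inner shell's own background job.
bash -c 'sleep 1 & echo inner: $!'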
Alternatively, try this:
Escape the $ in $! before passing it to su -c, so the outer shell leaves it alone.
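That is, keep the double quotes but protect the expansion; a sketch of the same start branch:
su biergaizi -c "PHPRC=/home/biergaizi/etc/php5 PHP_FCGI_CHILDREN=2
/usr/bin/php-loop.pl /usr/bin/php-cgi -b /var/run/virtualhost/php5-fpm-biergaizi.test.sock >>/home/biergaizi/logs/php.log 2>&1 </dev/null &
echo \$! > /var/php-nginx/biergaizi.test.php.pid"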

Bash script function call error

I am writing my first Bash script and am running into a syntax issue with a function call.
Specifically, I want to invoke my script like so:
sh myscript.sh -d=<abc>
Where <abc> is the name of a specific directory inside a fixed parent directory (~/app/dropzone). If the child <abc> directory doesn't exist, I want the script to create it before going to that directory. If the user doesn't invoke the script with a -d argument at all, I want the script to exit with a simple usage message. Here's my best attempt at the script so far:
#!/bin/bash

dropzone="~/app/dropzone"

# If the directory the script user specified exists, overwrite dropzone value with full path
# to directory. If the directory doesn't exist, first create it. If user failed to specify
# -d=<someDirName>, exit the script with a usage statement.
validate_args() {
    args=$(getopt d: "$*")
    set -- $args
    dir=$2
    if [ "$dir" ]
    then
        if [ ! -d "${dropzone}/targets/$dir" ]
        then
            mkdir ${dropzone}/targets/$dir
        fi
        dropzone=${dropzone}/targets/$dir
    else
        usage
    fi
}

usage() {
    echo "Usage: $0" >&2
    exit 1
}
# Validate script arguments.
validate_args $1
# Go to the dropzone directory.
cd dropzone
echo "Arrived at dropzone $dropzone."
# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
When I try running this I get the following error:
myUser#myMachine:~/app/scripts$ sh myscript.sh -dyoyo
mkdir: cannot create directory `/home/myUser/app/dropzone/targets/yoyo': No such file or directory
myscript.sh: 33: cd: can't cd to dropzone
Arrived at dropzone /home/myUser/app/dropzone/targets/yoyo.
Where am I going wrong, and is my general approach even correct? Thanks in advance!
Move the function definitions to the top of the script (below the hash-bang); bash is objecting to the call to validate_args, which is undefined at that point. The definition of usage should likewise precede the definition of validate_args.
There should also be spacing in the if tests: "[ " and " ]".
if [ -d "$dropzone/targets/$1" ]
The getopt test for option d should be:
if [ "$(getopt d "$1")" ]
Here is a version of validate_args that works for me.
I also had to change the dropzone, as on my shell ~ wouldn't expand in the mkdir command:
dropzone="/home/suspectus/app/dropzone"
validate_args() {
    args=$(getopt d: "$*")
    set -- $args
    dir=$2
    if [ "$dir" ]
    then
        if [ ! -d "${dropzone}/targets/$dir" ]
        then
            mkdir ${dropzone}/targets/$dir
        fi
        dropzone=${dropzone}/targets/$dir
    else
        usage
    fi
}
To pass in all the args, use $* as the parameter:
validate_args $*
And finally, call the script like this so that getopt parses correctly:
myscript.sh -d dir_name
When invoked, a function is indistinguishable from a command, so you don't use parentheses:
validate_args($1) # Wrong
validate_args $1 # Right
Additionally, as suspectus points out in his answer, functions must be defined before they are invoked. You can see this with the script:
usage

usage()
{
    echo "Usage: $0" >&2
    exit 1
}
which will report "usage: command not found", assuming you don't have a command or function called usage available. Place the invocation after the function definition and it will work fine.
Your chosen interface is not the standard Unix calling convention for commands. You'd normally use:
dropzone -d subdir
rather than
dropzone -d=subdir
However, we can handle your chosen interface (though not with getopts, the shell built-in, maybe not with GNU getopt either, and certainly not with getopt the way you tried to use it). Here's workable code supporting -d=subdir:
#!/bin/bash

dropzone="$HOME/app/dropzone/targets"

validate_args()
{
    case "$1" in
        (-d=*) dropzone="$dropzone/${1#-d=}"; mkdir -p $dropzone;;
        (*)    usage;;
    esac
}

usage()
{
    echo "Usage: $0 -d=dropzone" >&2
    exit 1
}

# Validate script arguments.
validate_args $1

# Go to the dropzone directory.
cd $dropzone || exit 1
echo "Arrived at dropzone $dropzone."

# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
Note the cautious approach with the cd $dropzone || exit 1; if the cd fails, you definitely do not want to continue in the wrong directory.
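With this version, the script is invoked with the -d=subdir form, e.g.:
./myscript.sh -d=yoyo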
Using the getopts shell built-in:
#!/bin/bash

dropzone="$HOME/app/dropzone/targets"

usage()
{
    echo "Usage: $0 -d dropzone" >&2
    exit 1
}

while getopts d: opt
do
    case "$opt" in
        (d) dropzone="$dropzone/$OPTARG"; mkdir -p $dropzone;;
        (*) usage;;
    esac
done
shift $(($OPTIND - 1))

# Go to the dropzone directory.
cd $dropzone || exit 1
echo "Arrived at dropzone $dropzone."

# The script will now do other stuff, now that we're in the "dropzone".
# ...etc.
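This variant is invoked with the conventional option syntax, e.g.:
./myscript.sh -d yoyo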

wget with errorlevel bash output

I want to create a bash file (.sh) which does the following:
I call the script like ./download.sh www.blabla.com/bla.jpg
the script then has to echo whether the file was downloaded or not...
How can I do this? I know I can use the exit code (errorlevel), but I'm new to Linux, so...
Thanks in advance!
Typically, applications on Linux set an exit status, which the shell makes available in the special parameter $?. You can examine this return code and see whether wget reported an error.
#!/bin/bash
wget $1 2>/dev/null
export RC=$?
if [ "$RC" = "0" ]; then
    echo $1 OK
else
    echo $1 FAILED
fi
You could name this script download.sh. Change the permissions to 755 with chmod 755 download.sh, then call it with the URL of the file you wish to download: ./download.sh www.google.com
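As a side note, the exit status can be tested directly by if, without the intermediate variable; a minimal sketch of the same logic:
#!/bin/bash
# wget exits with status 0 on success and non-zero on failure;
# `if` tests that status directly.
if wget "$1" 2>/dev/null; then
    echo "$1 OK"
else
    echo "$1 FAILED"
fi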
You could try something like:
#!/bin/sh
[ -n "$1" ] || {
    echo "Usage: $0 [url to file to get]" >&2
    exit 1
}
wget "$1"
[ $? -ne 0 ] && {
    echo "Could not download $1" | mail -s "Uh Oh" you@yourdomain.com
    echo "Aww snap ..." >&2
    exit 1
}
# If we're here, it downloaded successfully, and will exit with a normal status
# If we're here, it downloaded successfully, and will exit with a normal status
When making a script that will (likely) be called by other scripts, it is important to do the following:
- Ensure argument sanity
- Send e-mail, write to a log, or do something else so someone knows what went wrong
The >&2 simply redirects the error messages to stderr, which allows a calling script to do something like this:
foo-downloader >/dev/null 2>/some/log/file.txt
Since it is a short wrapper, there is no reason to forsake a bit of sanity :)
This also allows you to selectively direct the output of wget to /dev/null; you might actually want to see it when testing, especially if you get an e-mail saying it failed :)
wget executes in a non-interactive way. This means that wget works in the background and you can't catch the return code with $?.
One solution is to handle the "--server-response" option, searching for the HTTP 200 status code.
Example:
wget --server-response -q -o wgetOut http://www.someurl.com
sleep 5
_wgetHttpCode=`cat wgetOut | gawk '/HTTP/{ print $2 }'`
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] `cat wgetOut`"
fi
Note: wget needs some time to finish its work; for that reason I put "sleep 5". This is not the best way to do it, but it worked OK for testing the solution.
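If the URL redirects, the wgetOut log will contain more than one HTTP status line and the gawk above will print several codes; keeping only the last one is safer. A sketch against the same wgetOut log file:
# Remember the status of each "HTTP/..." response line; print only the last one seen.
_wgetHttpCode=$(gawk '/HTTP\//{ code = $2 } END { print code }' wgetOut)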
