Pass a full bash script line to another bash function to execute - linux

In the bash code below, the variable ECHO_ALL is a global set to either 'yes' or 'no' during option parsing.
--- begin of ~/scripts/util/util-optout.sh ---
########################################
# #param  $@
# #return the return value from $@
# #brief  A wrapper function to allow
#         for OPTional OUTput of any
#         command w/wo args
########################################
optout()
{
    if [ ${ECHO_ALL} = 'no' ]; then
        "$@" 1>/dev/null 2>&1
        return $?
    else
        "$@"
        return $?
    fi
}
--- end of file ---
In another bash file I source the above util-optout.sh and use the optout() function to allow for conditional output: it conditionally redirects any command's output to /dev/null so scripts can be made silent.
For example, in some other build script I have:
source ~/scripts/util/util-optout.sh
optout pushd ${ZLIB_DIR}
optout rm -vf config.cache
optout CC=${BUILD_TOOL_CC} ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
# ^^^^^^^^^^^^^^^^^^^
# ^ this breaks my optout() command
# my optout() fails when there are prefixed bash env vars set like CC=${...} before ./configure
optout popd
optout make -C ${ZLIB_DIR} ${ZLIB_COMPILER_OPT} all
optout make -C ${ZLIB_DIR} install
For simple commands followed by any kind of parameters, like 'pushd' or 'rm', optout() works great.
Even the 'optout make -C' lines work fine.
But it gives me an error for commands that have env-var prefixes, like 'optout CC=${...} ./configure ...':
utils/util-optout.sh: line 33: CC=gcc: command not found
Is there a way to make my optout() function work for ANY possible valid bash script line?
I know it has something to do with the use of "$@" or "$*" in my optout() function, but I have studied the bash man pages in detail and I can't make it work for all possible cases.
So far the only way past this limitation is the following 3-line style, which is annoying:
export CC=${BUILD_TOOL_CC}
optout ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
unset CC
Any ideas on how to reduce it all back down to a single 'optout ...' line?

optout is a command like any other, and so must be preceded by any local modifications to the environment. The command that optout runs will inherit that environment.
CC=${BUILD_TOOL_CC} optout ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
By the way, this is just one of the problems you are likely to encounter with your optout function. You cannot run arbitrary command lines in that fashion, only a simple command followed by zero or more arguments (and I would expect there are some exceptions to even that restricted set, as well).
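If you want to keep the assignment on the same line after optout, one workaround (a sketch, not from the original answer) is to route the command through env, which is an ordinary command that places the assignments in the environment of the command it then runs:
# env is a regular command, so optout can run it like any other;
# it puts CC into the environment of ./configure and then execs it.
optout env CC=${BUILD_TOOL_CC} ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
Note that this still only covers simple commands: pipelines, redirections, and compound commands would have to be passed as a single quoted string and run through eval, with all the usual quoting caveats.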

Related

Using history expansion in a bash alias or function

I am trying to make a simple thing to make my teammates' lives easier. They are constantly pasting quotes into the command line that are typographically formatted, which breaks the command, i.e.: “test” vs. "test"
It's proved surprisingly annoying to do with:
function damn() { !!:gs/“/" }
or:
alias damn='!!:gs/“/"'
Neither seems to work and keeps giving me either the error
-bash: !!:gs/“/" : No such file or directory
or just:
>
I must be missing something obvious here.
History expansion with ! does not work in functions or aliases. According to the bash manual:
History expansion is performed immediately after a complete line is read, before the shell breaks it into words.
You can use the builtin fc command:
[STEP 100] # echo $BASH_VERSION
4.4.19(1)-release
[STEP 101] # alias damn='fc -s “=\" ”=\" '
[STEP 102] # echo “test”
“test”
[STEP 103] # damn
echo "test"
test
[STEP 104] #
For quick reference, the following is the output of help fc.
fc: fc [-e ename] [-lnr] [first] [last] or fc -s [OLD=NEW] [command]
Display or execute commands from the history list.
fc is used to list or edit and re-execute commands from the history list.
FIRST and LAST can be numbers specifying the range, or FIRST can be a
string, which means the most recent command beginning with that
string.
Options:
-e ENAME select which editor to use. Default is FCEDIT, then EDITOR,
then vi
-l list lines instead of editing
-n omit line numbers when listing
-r reverse the order of the lines (newest listed first)
With the `fc -s [OLD=NEW ...] [command]' format, COMMAND is
re-executed after the substitution OLD=NEW is performed.
A useful alias to use with this is r='fc -s', so that typing `r cc'
runs the last command beginning with `cc' and typing `r' re-executes
the last command.
Exit Status:
Returns success or status of executed command; non-zero if an error occurs.
Here is a slightly more general solution using a bash function to wrap the fc call, if you want to do something to the string beyond substitution.
function damn() {
    # Capture the previous command.
    local cmd=$(fc -ln -1)
    # Do whatever you want with cmd here
    cmd=$(echo "$cmd" | sed 's/[“”]/"/g')
    # Re-run the command
    eval "$cmd"
}
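Assuming an interactive shell with history enabled and the fc behavior described above, a session would look something like this (hypothetical transcript):
$ echo “test”
“test”
$ damn
test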

How do I rerun a bash script skipping over lines which have previously run successfully?

I have a bash script which acts as a wrapper for an analysis pipeline. If the script errors out, I want to be able to resume it from the point at which the error occurred simply by re-running the original command.
I have set two different traps: one removes the last file being generated on a non-zero exit from my script; the other removes all the temporary files on exit signal 0 and essentially cleans up the file system at the end of the run.
I turned on noclobber in the bash environment, which lets my script skip over lines where files have already been written, but only if I do not set the non-zero exit trap. As soon as I set that trap, the script exits at the first line where noclobber finds a file it will not overwrite.
Is there a way for me to skip over lines of code that have run successfully before, rather than having to re-run my code from the start? I know I could use conditional statements for each line, but I thought there might be a neater way of doing this.
set -o noclobber

# Function to clean up temporary folders when the script exits at the end
rmfile() { rm -r "$1"; }

# Function to remove the file currently being generated
# Executed if the script errors out
rmlast() {
    if [ ! -z "$CURRENTFILE" ]
    then
        rm -r "$1"
        exit 1
    fi
}
# Trap to remove the currently generated file
trap 'rmlast "$CURRENTFILE"' ERR SIGINT
# Make temporary directory if it has not been created in a previous run
TEMPDIR=$(find . -name "tmp*")
if [ -z "$TEMPDIR" ]
then
    TEMPDIR=$(mktemp -d /test/tmpXXX)
fi

# Set CURRENTFILE variable before each generation step
CURRENTFILE="${TEMPDIR}/Variants.vcf"
complexanalysis_tool input_file > $CURRENTFILE

CURRENTFILE="${TEMPDIR}/Filtered.vcf"
complexanalysis_tool2 input_file2 > $CURRENTFILE

CURRENTFILE="${TEMPDIR}/Filtered_2.vcf"
complexanalysis_tool3 input_file3 > $CURRENTFILE
# Move files to final destination folder
mv -nv $TEMPDIR/*.vcf /test/newdest/
# Trap to remove temporary folders when script finishes running
trap 'rmfile "$TEMPDIR"' 0
Update:
I have been offered answers suggesting the use of the make utility. I want to make use of its built-in checking of whether a dependency has been fulfilled.
In my hands the makefile suggested by VK Kashyap does not seem to skip execution of previously accomplished tasks. For example, I ran the script above and interrupted it with Ctrl-C while it was generating filtered.vcf. When I rerun the script it starts from the beginning again, i.e. at variants.vcf. Am I missing something needed to get the makefile to see the targets as fulfilled?
Answer to update:
OK, this is a rookie mistake, but since I am not familiar with writing makefiles I will post this explanation of my error. The reason my makefile was not resuming from the exit point was that I had given the targets different names from the output files being generated. So, as VK Kashyap quite correctly answered, if you name the targets, e.g.
variants.vcf
filtered.vcf
filtered2.vcf
the same as the output files being generated then the script will skip previously accomplished tasks.
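In other words (a sketch reusing the names from the answer below), the rule that produces variants.vcf must have variants.vcf itself as its target name:
variants.vcf:
	complexanalysis_tool input_file > variants.vcf
rather than, say, a target called step1 with the same recipe; otherwise make has no file to check and always re-runs the rule.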
The make utility might be the answer for the thing you want to achieve.
It has built-in dependency checking (the thing you are trying to achieve with tmp files):
# run the all target when all of the files are available
# (recipe lines in a makefile must be indented with a TAB)
all: variants.vcf filtered.vcf filtered2.vcf
	mv -nv $(TEMPDIR)/*.vcf /test/newdest/

variants.vcf:
	complexanalysis_tool input_file > variants.vcf

filtered.vcf:
	complexanalysis_tool2 input_file2 > filtered.vcf

filtered2.vcf:
	complexanalysis_tool3 input_file3 > filtered2.vcf
You may use a bash script to invoke this makefile like so:
#!/bin/bash
export TEMPDIR=xyz
make -C $TEMPDIR all
make will check by itself which tasks are already accomplished and skip them; it will continue from where the previous run stopped with the error.
You can find more details about exact makefile syntax online.
There is no built-in way to do that.
However, you could brew something like it by keeping track of the last successful line and building your own goto statement, as described here and in "Is there a "goto" statement in bash?" (just replace the 'labels' with actual line numbers).
However, the question is whether this is really a smart idea.
A better way is to only run the commands that are actually needed, skipping those whose output already exists.
This could be done either with explicit conditionals in your bash script:
produce_if_missing() {
    # check if the file named by the first argument exists;
    # if not, run the rest of the arguments and pipe the output into it
    local curfile=$1
    shift
    if [ ! -e "${curfile}" ]; then
        "$@" > "${curfile}"
    fi
}
produce_if_missing Variants.vcf complexanalysis_tool input_file
produce_if_missing Filtered.vcf complexanalysis_tool2 input_file2
or using tools that are made for such things (see VK Kashyap's answer using make, though I prefer using the automatic variables in the make rules to minimize typos; $^ expands to the prerequisites and $@ to the target):
Variants.vcf: input_file
	complexanalysis_tool $^ > $@

Filtered.vcf: input_file
	complexanalysis_tool2 $^ > $@

Clean the '-x option' inside a script

I've been using "set -x" inside bash scripts to help me debug some functions, and it has been working very well for me:
-x After expanding each simple command, for command, case command,
select command, or arithmetic for command, display the expanded
value of PS4, followed by the command and its expanded arguments or
associated word list.
However, I'd like to be able to clear it before I leave the function.
E.g.:
#!/bin/bash
function somefunction()
{
    set -x
    # some code I'm debugging
    # clear the set -x
    set ????
}
somefunction
somefunction
Quoting the manual:
Using + rather than - causes these flags to be turned off.
So it's set +x that you are looking for.
Consider a function like
foo () {
    set -x
    # do something
    set +x
}
The problem is that if the -x option was already set before foo was called, it will be turned off by foo.
If you want to restore the old value, you'll have to test whether it was enabled already using $-.
foo () {
    [[ $- != *x* ]]; x_set=$?   # 1 if -x was already set, 0 otherwise
    set -x
    # do something
    (( x_set )) || set +x       # Turn -x back off only if it was off before
}
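On bash 4.4 and newer there is also local -, which makes shell options set inside the function local to it, so the caller's settings are restored automatically on return (a minimal sketch):
foo () {
    local -    # option changes below are undone when foo returns (bash >= 4.4)
    set -x
    # do something
}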
For some more info, refer to the Bash Beginners Guide, which clearly gives you the answer:
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/Bash-Beginners-Guide.html
set -x # activate debugging from here
w
set +x # stop debugging from here

Checking cmd line argument in bash script bypasses the source statement

I have a bash script "build.sh" like this:
# load Xilinx environment settings
source $XILINX/../settings32.sh
cp -r "../../../EDK/platform" "hw_platform"
if [ $# -ne 0 ]; then
    cp $1/system.xml hw_platform/system.xml
fi
echo "Done"
Normally I run it as "./build.sh", and it executes the "source" statement to set the environment variables correctly. Sometimes I need the script to copy a file from an alternative place, so I run it as "./build.sh ~/alternative_path/"; the script checks whether there is a command line argument by testing $# against 0.
When I do that, the "source" statement at the beginning of the script somehow gets skipped, and the build fails. I have put two "echo" statements before and after the "source", and I see both get executed.
Currently I work around this issue with "source $XILINX/../settings32.sh; build.sh". But please advise: what have I done wrong in the script? Thanks.
Try storing the values of your positional parameters in an array variable first, then reset them to zero arguments. "$XILINX/../settings32.sh" may be acting differently when it detects arguments.
# Store arguments.
ARGS=("$@")

# Reset to 0 arguments.
set --

# load Xilinx environment settings
source "$XILINX/../settings32.sh"

cp -r "../../../EDK/platform" "hw_platform"

if [[ ${#ARGS[@]} -ne 0 ]]; then
    cp "${ARGS[0]}/system.xml" hw_platform/system.xml
fi

echo "Done"
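If anything later in the script still needs the original arguments as positional parameters, they can be restored from the saved array after the source line (a small addition to the sketch above):
# Restore the original positional parameters.
set -- "${ARGS[@]}"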

Bash config file or command line parameters

If I am writing a bash script and I choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking: can I do both on the same command?
The watered down code:
#!/bin/bash
source builder.conf

function xmitBuildFile {
    for IP in "${SERVER_LIST[@]}"
    do
        echo $1@$IP
    done
}

xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname@192.168.2.119 and myname@10.20.205.67, but when I do an $ echo $#, I am getting 0, even when I passed in 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
Yes, you can. Furthermore, it depends on the architecture of your script: you can overwrite parameters with values from the config, and vice versa.
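As a minimal sketch of that idea (the variable name BUILD_USER here is made up for illustration), the config file supplies a default and a command line argument overrides it:
#!/bin/bash
source builder.conf              # may set BUILD_USER=somedefault
BUILD_USER=${1:-$BUILD_USER}     # the first argument, if given, wins
echo "building as ${BUILD_USER}"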
By the way, shflags may be pretty useful for writing such a script.

Resources