Getting the calling script path from a bash script - linux

In one of my bash scripts I use a variable that contains the path of the script. This variable is set like:
current_folder=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
As this script is one of several that need it, I want to move this code into a global script that they all import. But then, logically, the variable is filled with the path of the global script:
/MyProgram/common/globals.sh
current_folder=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
/MyProgram/modules/myscript.sh
. /MyProgram/common/globals.sh
echo "$current_folder" # Returns /MyProgram/common
Is there a way of doing this without creating a function and passing the original path as a parameter?
Having to add code to every script just to use the path seems counter-productive.

As glenn jackman pointed out, simply changing the BASH_SOURCE index to 1 solves the problem.
The GNU manual states the definition for BASH_SOURCE as:
An array variable whose members are the source filenames where the
corresponding shell function names in the FUNCNAME array variable are
defined. The shell function ${FUNCNAME[$i]} is defined in the file
${BASH_SOURCE[$i]} and called from ${BASH_SOURCE[$i+1]}
By its definition, the immediately preceding script in the call stack is at index 1, so the code finally becomes:
caller_path=$(cd "$(dirname "${BASH_SOURCE[1]}")" && pwd)
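For example, under this approach the two files from the question might look like the following sketch; caller_path is set inside globals.sh but reflects the script that sourced it:
/MyProgram/common/globals.sh
# Index 1 of BASH_SOURCE is the script that sourced this file.
caller_path=$(cd "$(dirname "${BASH_SOURCE[1]}")" && pwd)
/MyProgram/modules/myscript.sh
. /MyProgram/common/globals.sh
echo "$caller_path" # Now returns /MyProgram/modules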

Related

How to get the complete calling command of a BASH script from inside the script (not just the arguments)

I have a BASH script that has a long set of arguments and two ways of calling it:
my_script --option1 value --option2 value ... etc
or
my_script val1 val2 val3 ..... valn
This script in turn compiles and runs a large FORTRAN code suite that eventually produces a netcdf file as output. I already have all the metadata in the netcdf output global attributes, but it would be really nice to also include the full run command one used to create that experiment. Thus another user who receives the netcdf file could simply reenter the run command to rerun the experiment, without having to piece together all the options.
So that is a long way of saying, in my BASH script, how do I get the last command entered from the parent shell and put it in a variable? i.e. the script is asking "how was I called?"
I could try to piece it together from the option list, but the very long option list and two interface methods would make this long and arduous, and I am sure there is a simple way.
I found this helpful page:
BASH: echoing the last command run
but this only seems to work to get the last command executed within the script itself. The asker also refers to use of history, but the answers seem to imply that the history will only contain the command after the programme has completed.
Many thanks if any of you have any idea.
You can try the following:
myInvocation="$(printf %q "$BASH_SOURCE")$((($#)) && printf ' %q' "$#")"
$BASH_SOURCE refers to the running script (as invoked), and $# is the array of arguments; (($#)) && ensures that the following printf command is only executed if at least 1 argument was passed; printf %q is explained below.
While this won't always be a verbatim copy of your command line, it'll be equivalent - the string you get is reusable as a shell command.
chepner points out in a comment that this approach will only capture what the original arguments were ultimately expanded to:
For instance, if the original command was my_script $USER "$(date +%s)", $myInvocation will not reflect these arguments as-is, but will rather contain what the shell expanded them to; e.g., my_script jdoe 1460644812
chepner also points out that getting the actual raw command line as received by the parent process is (next to) impossible. Do tell me if you know of a way.
However, if you're prepared to ask users to do extra work when invoking your script or you can get them to invoke your script through an alias you define - which is obviously tricky - there is a solution; see bottom.
Note that use of printf %q is crucial to preserving the boundaries between arguments - if your original arguments had embedded spaces, something like $0 $* would result in a different command.
printf %q also protects against other shell metacharacters (e.g., |) embedded in arguments.
printf %q quotes the given argument for reuse as a single argument in a shell command, applying the necessary quoting; e.g.:
$ printf %q 'a |b'
a\ \|b
a\ \|b is equivalent to single-quoted string 'a |b' from the shell's perspective, but this example shows how the resulting representation is not necessarily the same as the input representation.
Incidentally, ksh and zsh also support printf %q, and ksh actually outputs 'a |b' in this case.
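Putting the pieces together, a minimal sketch of a script using this approach (the echo is just for illustration; in practice you would write the string into the netcdf global attributes):
#!/bin/bash
# Capture an equivalent, reusable command line at startup.
myInvocation="$(printf %q "$BASH_SOURCE")$( (($#)) && printf ' %q' "$@" )"
echo "This run was started with: $myInvocation"
# ... compile and run the FORTRAN suite here, then store $myInvocation
# alongside the other metadata in the output file.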
If you're prepared to modify how your script is invoked, you can pass $BASH_COMMAND as an extra argument: $BASH_COMMAND contains the raw[1]
command line of the currently executing command.
For simplicity of processing inside the script, pass it as the first argument (note that the double quotes are required to preserve the value as a single argument):
my_script "$BASH_COMMAND" --option1 value --option2
Inside your script:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
myInvocation=$1 # Save the command line in a variable...
shift # ... and remove it from "$@".
# Now process "$@", as you normally would.
Unfortunately, there are only two options when it comes to ensuring that your script is invoked this way, and they're both suboptimal:
The end user has to invoke the script this way - which is obviously tricky and fragile (you could, however, check in your script whether the first argument contains the script name and error out if it doesn't).
Alternatively, provide an alias that wraps the passing of $BASH_COMMAND as follows:
alias my_script='/path/to/my_script "$BASH_COMMAND"'
The tricky part is that this alias must be defined in all end users' shell initialization files to ensure that it's available.
Also, inside your script, you'd have to do extra work to re-transform the alias-expanded version of the command line into its aliased form:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
# Here we also re-transform the alias-expanded command line to
# its original aliased form, by replacing everything up to and including
# "$BASH_COMMMAND" with the alias name.
myInvocation=$(sed 's/^.* "\$BASH_COMMAND"/my_script/' <<<"$1")
shift # Remove the first argument from "$@".
# Now process "$@", as you normally would.
Sadly, wrapping the invocation in a script or function is not an option, because $BASH_COMMAND only ever reports the command line of the currently executing command, which in the case of a script or function wrapper would be the line inside that wrapper.
[1] The only things that get expanded are aliases, so if you invoked your script via an alias, you'll still see the underlying script in $BASH_COMMAND, but that's generally desirable, given that aliases are user-specific.
All other arguments and even input/output redirections, including process substitutions <(...), are reflected as-is.
"$0" contains the script's name, "$#" contains the parameters.
Do you mean something like echo $0 $*?

Can I find out who called a zsh script?

Assume a script master.sh, which is called as
./foo/bar/master.sh
and contains the lines
#!/bin/zsh
. ./x/y/slave.sh
Is it possible to find out from within slave.sh that the script doing the sourcing is ./foo/bar/master.sh?
I cannot use $0 here, because this would return ./x/y/slave.sh.
I'm using zsh 5.0.6
One way to achieve this is for the child script to take the name of the caller as an optional argument, which is then accessible as $1.
For example:
#!/bin/zsh
# master/leader
. ./x/y/slave.sh $0 # or hardcoded path
#!/bin/zsh
# slave/worker
echo "Here is my master $1"
(You can also use another custom protocol, such as an environment variable set by the master.)
(This solution also works in bash and other shells.)
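A minimal sketch of that environment-variable variant (CALLER is just an illustrative name):
#!/bin/zsh
# master/leader
export CALLER=$0
. ./x/y/slave.sh
#!/bin/zsh
# slave/worker
echo "Here is my master $CALLER"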
The information can be obtained in current zsh (thanks to Bart Schaefer, who pointed me to the functrace variable in the zsh/parameter module):
#!/bin/zsh
# slave/worker
zmodload zsh/parameter
echo "Here is my master ${functrace[$#functrace]%:*}"
The '%:*' is necessary, because the entries in the functrace array also contain the line number of the call.
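For example, with the layout from the question, master.sh needs no changes beyond its original sourcing line, and the output should look something like this (the exact path formatting may vary):
#!/bin/zsh
# ./foo/bar/master.sh
. ./x/y/slave.sh
# Running ./foo/bar/master.sh then prints:
# Here is my master ./foo/bar/master.sh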

Set environment variable that contains two interrelated paths

I am trying to create a custom environment variable that uses python to execute a py file.
Here is an example of what I have
export VAR=${VAR}:"/usr/bin/python2.7 /home/user/file"
When I use the variable I get the output:
bash: :/usr/bin/python2.7: No such file or directory
If I echo the variable I get the output:
/usr/bin/python2.7 /home/user/file
EDIT:
Trying "$VAR" gives me the output
bash: :/usr/bin/python2.7 /home/user/file: No such file or directory
If I run /usr/bin/python2.7 /home/user/file directly, it works.
I think an alias is more appropriate for this kind of thing (you may want to pick a more suitable name for the alias):
alias var="/usr/bin/python2.7 /home/user/file"
If you want to stick with your version, you have to tell your shell to evaluate the content of VAR.
For this you just have to invoke
eval ${VAR}
By the way, why do you append the string "/usr/bin/python2.7 /home/user/file" to VAR instead of overwriting the content of VAR?
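For example, combining both suggestions (overwrite instead of appending, then let eval re-parse the string); judging by the error messages, the appended form had picked up a leading colon from the empty initial ${VAR}, which is why bash tried to run :/usr/bin/python2.7:
export VAR="/usr/bin/python2.7 /home/user/file" # plain assignment, no leading ':'
eval "$VAR" # the shell re-parses the string and runs the interpreter on the file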

Bash config file or command line parameters

If I am writing a bash script and I choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking: can I do both on the same command?
The watered down code:
#!/bin/bash
source builder.conf
function xmitBuildFile {
    for IP in "${SERVER_LIST[@]}"
    do
        echo $1@$IP
    done
}
xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname@192.168.2.119 and myname@10.20.205.67, but when I do an echo $#, I am getting 0, even when I passed in 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
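For example, a minimal sketch of the script from the question that uses both the sourced config and a command-line argument; the only change needed is forwarding "$@" into the function:
#!/bin/bash
source builder.conf # defines SERVER_LIST
function xmitBuildFile {
    for IP in "${SERVER_LIST[@]}"
    do
        echo "$1@$IP"
    done
}
xmitBuildFile "$@" # pass the script's arguments through to the function
Called as ./builder.sh myname, this prints myname@192.168.2.119 and myname@10.20.205.67.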
Yes, you can. Furthermore, it depends on your script's architecture: you can overwrite parameters with values from the config and vice versa.
By the way, shflags may be pretty useful in writing such a script.

Why doesn't the chmod command recognize the content of DIR as a directory?

I'm trying to write a makefile to do the following when executed:
CMVC_VIEW = ../../..
TB_DIR = $(CMVC_VIEW)/tarball_images
SMAC_TOOLS = $(TB_DIR)/smac_tools
SMAC_BIN = $(SMAC_TOOLS)/bin
DIR_LIST = $(TB_DIR) \
           $(SMAC_TOOLS) \
           $(SMAC_BIN)
install:
	rm -f *.o
	for DIR in $(DIR_LIST); do \
	    echo $${DIR}; \
	    chmod 2775 $${DIR}; \
	done
When the makefile is run, however, I get an error saying chmod: missing operand after 2775. I don't understand why this is happening, given that $${DIR} should contain the path of the directory whose access permissions need to be changed.
This seems to work when I replace $${DIR} with a static directory path.
For the purposes of this makefile, suppose that the DIR_LIST macro is assigned a list of directories separated by whitespace.
You're getting confused between make variable references and shell variable references. Remember, make will interpret any string like $(FOO) as a reference to a make variable, look up the variable with that name, and replace the reference with its value. Your shell-based for loop creates a shell variable called DIR, but since you had just $(DIR), make was trying (and failing) to find a make variable called DIR.
Your solution works because the double $ prevents make from doing its own variable reference resolution, so the literal ${DIR} gets passed to the shell, which then does its own variable expansion! That works, of course, because the for loop created the DIR shell variable.
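A minimal sketch showing the difference inside a recipe (demo is just an illustrative target; recipe lines must start with a tab):
demo:
	@for DIR in one two; do \
	    echo "the shell's DIR is '$${DIR}', but make's DIR expands to '$(DIR)'"; \
	done
Since no make variable named DIR exists, $(DIR) expands to an empty string before the shell ever runs, while $${DIR} reaches the shell as ${DIR} and picks up the loop variable.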
