Redefine echo for several scripts by calling a script that redefines echo - linux

I have scripts A, B, C, etc. that I execute independently. I want to redefine how echo works for all scripts I am using.
I made the following script, preamble.sh, that I call from each script A, B, C:
#!/bin/bash
# Change the color of our output
# TODO: How to redefine echo for the parent caller?
PURPLE='\033[1;35m'
NC='\033[0m' # No Color
function echo() { builtin echo -e '\033[1;35m'$0'\033[0m'; }
unameOut="$(uname -s)"
case "${unameOut}" in
Linux*) machine=Linux ;;
Darwin*) machine=Mac ;;
CYGWIN*) machine=Cygwin ;;
MINGW*) machine=MinGw ;;
*) machine="UNKNOWN:${unameOut}" ;;
esac
echo "[Running on ${machine}...]"
When I do bash preamble.sh from another script, all I get is
preamble.sh
in purple (the color that I want echo to use in each script A, B, C, etc.).
I guess what ends up happening is that echo is redefined correctly within preamble.sh, but this is not how I expected it to work. When I call bash preamble.sh, preamble.sh gets executed, but instead of telling me what machine it runs on, it just prints preamble.sh in purple, because that must be the argument $0.
I realize I might be doing something that is not possible to do directly.
How do I achieve what I am trying to achieve?

The arguments to a function are $1, $2, ...
You want:
function echo() { builtin echo -e '\033[1;35m'$1'\033[0m'; }
not:
function echo() { builtin echo -e '\033[1;35m'$0'\033[0m'; }
Whether inside a function or not, $0 remains the name of the script itself.
Edit: For your other question, you would need to run the script within the current shell for the changes to persist. You can do this using either
source preamble.sh
or
. preamble.sh
This is necessary since, by default, the script runs in a new shell, and any variables, functions, etc. you define there will not be visible to the caller.
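For completeness, here is a minimal sketch (file names as in the question) of how one of the calling scripts would pull in the preamble once the $1 fix is in place; sourcing it means the redefined echo and the colour variables stay visible in the caller:
#!/bin/bash
# scriptA.sh -- minimal sketch: run preamble.sh in the current shell so its
# redefined echo (and PURPLE/NC variables) remain visible here.
. ./preamble.sh      # same as: source ./preamble.sh

echo "[script A starting...]"   # printed in purple by the redefined echo
One caveat: the override only forwards $1, so a call like echo foo bar would drop everything after the first word; forwarding "$*" instead keeps all the words.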

Related

BASH getopts Multiple Scripts with Same Options

I have a series of BASH scripts.
I am using getopts to parse arguments from the cmd line (although open to alternatives).
There is a series of common options to these scripts; call this option set A,
i.e. queue, ncores, etc.
Each script then has a series of extra options, i.e. sets B1, B2, B3.
What I want is for:
script 1 to be able to take options A+B1
script 2 to be able to take options A+B2
script 3 to be able to take options A+B3
But I want to be able to store the code for options A in a central location (library/function) without having to write it out in each script.
What I want is a way to insert generic code in getopts. Or alternatively a way to run getopts twice.
In fact I've done this by having getopts as a function which is sourced.
But the problem is I can't get the unrecognised-option handling to work then.
I guess one way would be to remove the arguments for options A from the string before passing it to a getopts for B1, B2, B3, etc.?
Thanks Roger
That's a very nice question, to answer which we need to have a good understanding of how getopts works.
The key point here is that getopts is designed to iterate over the supplied arguments in a single loop. Thus, the solution to the question is to split the loop between different files rather than running the command twice:
#!/usr/bin/env bash
# File_1
getopts_common() {
builtin getopts ":ab:${1}" ${2} "${@:3}" || return 1
case ${!2} in
'a')
echo 'a triggered'
continue
;;
'b')
echo "b argument supplied -- ${OPTARG}"
continue
;;
':')
echo "MISSING ARGUMENT for option -- ${OPTARG}" >&2
exit 1
;;
esac
}
#!/usr/bin/env bash
# File_2
# source "File_1"
while getopts_common 'xy:' OPTKEY "${@}"; do
case ${OPTKEY} in
'x')
echo 'x triggered'
;;
'y')
echo "y argument supplied -- ${OPTARG}"
;;
'?')
echo "INVALID OPTION -- ${OPTARG}" >&2
exit 1
;;
':')
echo "MISSING ARGUMENT for option -- ${OPTARG}" >&2
exit 1
;;
*)
echo "UNIMPLEMENTED OPTION -- ${OPTKEY}" >&2
exit 1
;;
esac
done
Implementation notes
We start with File_2 since that's where the execution of the script starts:
Instead of invoking getopts directly, we call it via its proxy, getopts_common, which is responsible for processing all common options.
The getopts_common function is invoked with:
A string that defines which options to expect, and where to expect their arguments. This string only covers options defined in File_2.
The name of the shell-variable to use for option reporting.
A list of the command line arguments. (This simplifies accessing them from inside getopts_common function.)
Moving on to the sourced file (File_1), we need to bear in mind that the getopts_common function runs inside the while loop defined in File_2:
getopts returns false if there is nothing left to parse; the || return 1 bit ensures that the getopts_common function does the same.
The execution needs to move on to the next iteration of the loop when a valid option is processed. Hence, each valid option match ends with continue.
Silent error reporting (enabled when OPTSPEC starts with :) allows us to distinguish between INVALID OPTION and MISSING ARGUMENT. The latter error is specific to the common options defined in File_1, so it needs to be trapped there.
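For illustration (not part of the original answer), assuming File_1 is saved next to File_2 and the source line is uncommented, a run that mixes common and script-specific options should behave like this:
bash File_2 -a -b foo -x -y bar
# Expected output -- the first two options are handled by the case in File_1,
# the last two by the case in File_2:
#   a triggered
#   b argument supplied -- foo
#   x triggered
#   y argument supplied -- bar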
For more in-depth information on getopts, see Bash Hackers Wiki: Getopts tutorial

Why should eval be avoided in Bash, and what should I use instead?

Time and time again, I see Bash answers on Stack Overflow using eval and the answers get bashed, pun intended, for the use of such an "evil" construct. Why is eval so evil?
If eval can't be used safely, what should I use instead?
There's more to this problem than meets the eye. We'll start with the obvious: eval has the potential to execute "dirty" data. Dirty data is any data that has not been rewritten as safe-for-use-in-situation-XYZ; in our case, it's any string that has not been formatted so as to be safe for evaluation.
Sanitizing data appears easy at first glance. Assuming we're throwing around a list of options, bash already provides a great way to sanitize individual elements, and another way to sanitize the entire array as a single string:
function println
{
# Send each element as a separate argument, starting with the second element.
# Arguments to printf:
# 1 -> "$1\n"
# 2 -> "$2"
# 3 -> "$3"
# 4 -> "$4"
# etc.
printf "$1\n" "${#:2}"
}
function error
{
# Send the first element as one argument, and the rest of the elements as a combined argument.
# Arguments to println:
# 1 -> '\e[31mError (%d): %s\e[m'
# 2 -> "$1"
# 3 -> "${*:2}"
println '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
exit "$1"
}
# This...
error 1234 Something went wrong.
# And this...
error 1234 'Something went wrong.'
# Result in the same output (as long as $IFS has not been modified).
Now say we want to add an option to redirect output as an argument to println. We could, of course, just redirect the output of println on each call, but for the sake of example, we're not going to do that. We'll need to use eval, since variables can't be used to redirect output.
function println
{
eval printf "$2\n" "${@:3}" $1
}
function error
{
println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
exit $1
}
error 1234 Something went wrong.
Looks good, right? The problem is that eval parses the command line twice (in any shell). On the first pass, one layer of quoting is removed; with quotes removed, some variable content gets executed.
We can fix this by letting the variable expansion take place within the eval. All we have to do is single-quote everything, leaving the double-quotes where they are. One exception: we have to expand the redirection prior to eval, so that has to stay outside of the quotes:
function println
{
eval 'printf "$2\n" "${@:3}"' $1
}
function error
{
println '&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
exit $1
}
error 1234 Something went wrong.
This should work. It's also safe as long as $1 in println is never dirty.
Now hold on just a moment: I use that same unquoted syntax that we used originally with sudo all of the time! Why does it work there, and not here? Why did we have to single-quote everything? sudo is a bit more modern: it knows to enclose in quotes each argument that it receives, though that is an over-simplification. eval simply concatenates everything.
Unfortunately, there is no drop-in replacement for eval that treats arguments like sudo does, as eval is a shell built-in; this is important, as it takes on the environment and scope of the surrounding code when it executes, rather than creating a new stack and scope like a function does.
eval Alternatives
Specific use cases often have viable alternatives to eval. Here's a handy list. command represents what you would normally send to eval; substitute in whatever you please.
No-op
A simple colon is a no-op in bash:
:
Create a sub-shell
( command ) # Standard notation
Execute output of a command
Never rely on an external command. You should always be in control of the return value. Put these on their own lines:
$(command) # Preferred
`command` # Old: should be avoided, and often considered deprecated
# Nesting:
$(command1 "$(command2)")
`command "\`command\`"` # Careful: \ only escapes $ and \ with old style, and
# special case \` results in nesting.
Redirection based on variable
In calling code, map &3 (or anything higher than &2) to your target:
exec 3<&0 # Redirect from stdin
exec 3>&1 # Redirect to stdout
exec 3>&2 # Redirect to stderr
exec 3> /dev/null # Don't save output anywhere
exec 3> file.txt # Redirect to file
exec 3> "$var" # Redirect to file stored in $var--only works for files!
exec 3<&0 4>&1 # Input and output!
If it were a one-time call, you wouldn't have to redirect the entire shell:
func arg1 arg2 3>&2
Within the function being called, redirect to &3:
command <&3 # Redirect stdin
command >&3 # Redirect stdout
command 2>&3 # Redirect stderr
command &>&3 # Redirect stdout and stderr
command 2>&1 >&3 # idem, but for older bash versions
command >&3 2>&1 # Redirect stdout to &3, and stderr to stdout: order matters
command <&3 >&4 # Input and output!
Variable indirection
Scenario:
VAR='1 2 3'
REF=VAR
Bad:
eval "echo \"\$$REF\""
Why? If REF contains a double quote, this will break and open the code to exploits. It's possible to sanitize REF, but it's a waste of time when you have this:
echo "${!REF}"
That's right, bash has variable indirection built-in as of version 2. It gets a bit trickier than eval if you want to do something more complex:
# Add to scenario:
VAR_2='4 5 6'
# We could use:
local ref="${REF}_2"
echo "${!ref}"
# Versus the bash < 2 method, which might be simpler to those accustomed to eval:
eval "echo \"\$${REF}_2\""
Regardless, the new method is more intuitive, though it might not seem that way to experienced programmers who are used to eval.
Associative arrays
Associative arrays are implemented intrinsically in bash 4. One caveat: they must be created using declare.
declare -A VAR # Local
declare -gA VAR # Global
# Use spaces between parentheses and contents; I've heard reports of subtle bugs
# on some versions when they are omitted having to do with spaces in keys.
declare -A VAR=( ['']='a' [0]='1' ['duck']='quack' )
VAR+=( ['alpha']='beta' [2]=3 ) # Combine arrays
VAR['cow']='moo' # Set a single element
unset VAR['cow'] # Unset a single element
unset VAR # Unset an entire array
unset VAR[@] # Unset an entire array
unset VAR[*] # Unset each element with a key corresponding to a file in the
# current directory; if * doesn't expand, unset the entire array
local KEYS=( "${!VAR[@]}" ) # Get all of the keys in VAR
In older versions of bash, you can use variable indirection:
VAR=( ) # This will store our keys.
# Store a value with a simple key.
# You will need to declare it in a global scope to make it global prior to bash 4.
# In bash 4, use the -g option.
declare "VAR_$key"="$value"
VAR+=( "$key" )
# Or, if your version is lacking +=
VAR=( "${VAR[@]}" "$key" )
# Recover a simple value.
local var_key="VAR_$key" # The name of the variable that holds the value
local var_value="${!var_key}" # The actual value--requires bash 2
# For < bash 2, eval is required for this method. Safe as long as $key is not dirty.
local var_value="`eval echo -n \"\$$var_key\"`"
# If you don't need to enumerate the indices quickly, and you're on bash 2+, this
# can be cut down to one line per operation:
declare "VAR_$key"="$value" # Store
echo "`var_key="VAR_$key" echo -n "${!var_key}"`" # Retrieve
# If you're using more complex values, you'll need to hash your keys:
function mkkey
{
local key="`mkpasswd -5R0 "$1" 00000000`"
echo -n "${key##*$}"
}
local var_key="VAR_`mkkey "$key"`"
# ...
How to make eval safe
eval can be used safely, but all of its arguments need to be quoted first. Here's how:
This function will do it for you:
function token_quote {
local quoted=()
for token; do
quoted+=( "$(printf '%q' "$token")" )
done
printf '%s\n' "${quoted[*]}"
}
Example usage:
Given some untrusted user input:
% input="Trying to hack you; date"
Construct a command to eval:
% cmd=(echo "User gave:" "$input")
Eval it, with seemingly correct quoting:
% eval "$(echo "${cmd[#]}")"
User gave: Trying to hack you
Thu Sep 27 20:41:31 +07 2018
Note you were hacked. date was executed rather than being printed literally.
Instead with token_quote():
% eval "$(token_quote "${cmd[#]}")"
User gave: Trying to hack you; date
%
eval isn't evil - it's just misunderstood :)
I’ll split this answer in two parts, which, I think, cover a large proportion of the cases where people tend to be tempted by eval:
Running weirdly built commands
Fiddling with dynamically named variables
Running weirdly built commands
Many, many times, simple indexed arrays are enough, provided that you take on good habits regarding double quotes to protect expansions while defining the array.
# One nasty argument which must remain a single argument and not be split:
f='foo bar'
# The command in an indexed array (use `declare -a` if you really want to be explicit):
cmd=(
touch
"$f"
# Yet another nasty argument, this time hardcoded:
'plop yo'
)
# Let Bash expand the array and run it as a command:
"${cmd[#]}"
This will create foo bar and plop yo (two files, not four).
Note that sometimes it can produce more readable scripts to put just the arguments (or a bunch of options) in the array (at least you know at first glance what you’re running):
touch "${args[#]}"
touch "${opts[#]}" file1 file2
As a bonus, arrays let you, easily:
Add comments about a specific argument:
cmd=(
# Important because blah blah:
-v
)
Group arguments for readability by leaving blank lines within the array definition.
Comment out specific arguments for debugging purposes.
Append arguments to your command, sometimes dynamically according to specific conditions or in loops:
cmd=(myprog)
for f in foo bar
do
cmd+=(-i "$f")
done
if [[ $1 = yo ]]
then
cmd+=(plop)
fi
to_be_added=(one two 't h r e e')
cmd+=("${to_be_added[@]}")
Define commands in configuration files while allowing for configuration-defined whitespace-containing arguments:
readonly ENCODER=(ffmpeg -blah --blah 'yo plop')
# Deprecated:
#readonly ENCODER=(avconv -bloh --bloh 'ya plap')
# […]
"${ENCODER[#]}" foo bar
Log a robustly runnable command, that perfectly represents what is being run, using printf’s %q:
function please_log_that {
printf 'Running:'
# From `help printf`:
# “The format is re-used as necessary to consume all of the arguments.”
# From `man printf` for %q:
# “printed in a format that can be reused as shell input,
# escaping non-printable characters with the proposed POSIX $'' syntax.”
printf ' %q' "$@"
echo
}
arg='foo bar'
cmd=(prog "$arg" 'plop yo' $'arg\nnewline\tand tab')
please_log_that "${cmd[@]}"
# ⇒ “Running: prog foo\ bar plop\ yo $'arg\nnewline\tand tab'”
# You can literally copy and paste that ↑ to a terminal and get the same execution.
Enjoy better syntax highlighting than with eval strings, since you don’t need to nest quotes or use $-s that “will not be evaluated right away but will be at some point”.
To me, the main advantage of this approach (and conversely disadvantage of eval) is that you can follow the same logic as usual regarding quotation, expansion, etc. No need to rack your brain trying to put quotes in quotes in quotes “in advance” while trying to figure out which command will interpret which pair of quotes at which moment. And of course many of the things mentioned above are harder or downright impossible to achieve with eval.
With these, I never had to rely on eval in the past six years or so, and readability and robustness (in particular regarding arguments that contain whitespace) were arguably increased. You don’t even need to know whether IFS has been tampered with! Of course, there are still edge cases where eval might actually be needed (I suppose, for example, if the user has to be able to provide a full-fledged piece of script via an interactive prompt or whatever), but hopefully that’s not something you’ll come across on a daily basis.
Fiddling with dynamically named variables
declare -n (or its within-functions local -n counterpart), as well as ${!foo}, do the trick most of the time.
$ help declare | grep -- -n
-n make NAME a reference to the variable named by its value
Well, it’s not exceptionally clear without an example:
declare -A global_associative_array=(
[foo]=bar
[plop]=yo
)
# $1 Name of global array to fiddle with.
fiddle_with_array() {
# Check this if you want to make sure you’ll avoid
# circular references, but it’s only if you really
# want this to be robust.
# You can also give an ugly name like “__ref” to your
# local variable as a cheaper way to make collisions less likely.
if [[ $1 != ref ]]
then
local -n ref=$1
fi
printf 'foo → %s\nplop → %s\n' "${ref[foo]}" "${ref[plop]}"
}
# Call the function with the array NAME as argument,
# not trying to get its content right away here or anything.
fiddle_with_array global_associative_array
# This will print:
# foo → bar
# plop → yo
(I love this trick ↑ as it makes me feel like I’m passing objects to my functions, like in an object-oriented language. The possibilities are mind-boggling.)
As for ${!…} (which gets the value of the variable named by another variable):
foo=bar
plop=yo
for var_name in foo plop
do
printf '%s = %q\n' "$var_name" "${!var_name}"
done
# This will print:
# foo = bar
# plop = yo

How can I run a function from a script in command line?

I have a script that has some functions.
Can I run one of the functions directly from the command line?
Something like this?
myScript.sh func()
Well, while the other answers are right - you can certainly do something else: if you have access to the bash script, you can modify it and simply place at the end the special parameter "$@" - which will expand to the arguments of the command line you specify, and since it's "alone", the shell will try to call them verbatim; here you could specify the function name as the first argument. Example:
$ cat test.sh
testA() {
echo "TEST A $1";
}
testB() {
echo "TEST B $2";
}
"$#"
$ bash test.sh
$ bash test.sh testA
TEST A
$ bash test.sh testA arg1 arg2
TEST A arg1
$ bash test.sh testB arg1 arg2
TEST B arg2
For polish, you can first verify that the command exists and is a function:
# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
# call arguments verbatim
"$#"
else
# Show a helpful error
echo "'$1' is not a known function name" >&2
exit 1
fi
If the script only defines the functions and does nothing else, you can first execute the script within the context of the current shell using the source or . command and then simply call the function. See help source for more information.
The following command first registers the function in the context, then calls it:
. ./myScript.sh && function_name
Briefly, no.
You can import all of the functions in the script into your environment with source (help source for details), which will then allow you to call them. This also has the effect of executing the script, so take care.
There is no way to call a function from a shell script as if it were a shared library.
Using case
#!/bin/bash
fun1 () {
echo "run function1"
[[ "$#" ]] && echo "options: $#"
}
fun2 () {
echo "run function2"
[[ "$#" ]] && echo "options: $#"
}
case $1 in
fun1) "$@"; exit;;
fun2) "$@"; exit;;
esac
fun1
fun2
This script will run functions fun1 and fun2, but if you start it with the option
fun1 or fun2, it'll only run the given function with its args (if provided) and exit.
Usage
$ ./test
run function1
run function2
$ ./test fun2 a b c
run function2
options: a b c
I have a situation where I need a function from a bash script which must not be executed beforehand (e.g. by source), and the problem with "$@" is that myScript.sh is then run twice, it seems... So I've come up with the idea to get the function out with sed:
sed -n "/^func ()/,/^}/p" myScript.sh
And to execute it at the time I need it, I put it in a file and use source:
sed -n "/^func ()/,/^}/p" myScript.sh > func.sh; source func.sh; rm func.sh
Edit: WARNING - seems this doesn't work in all cases, but works well on many public scripts.
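If your bash supports process substitution, the temporary file can be skipped; this is just a variation on the same idea, with the same caveats as the warning above:
# Extract func's definition and source it directly, without creating func.sh:
source <(sed -n "/^func ()/,/^}/p" myScript.sh)
func    # the function is now defined in the current shell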
If you have a bash script called "control" and inside it you have a function called "build":
function build() {
...
}
Then you can call it like this (from the directory where it is):
./control build
If it's inside another folder, that would make it:
another_folder/control build
If your file is called "control.sh", that would accordingly make the function callable like this:
./control.sh build
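For that calling style to work, control (or control.sh) has to dispatch its first argument to the function somewhere near the bottom; here is a minimal sketch that combines it with the declare -f check from the earlier answer (the build body is a placeholder):
#!/bin/bash
# control -- sketch of a script whose first argument names the function to run
function build() {
    echo "building..."    # placeholder body
}
if declare -f "$1" > /dev/null; then
    "$@"                  # call the named function with the remaining arguments
else
    echo "usage: $0 build" >&2
    exit 1
fi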
Solved post but I'd like to mention my preferred solution. Namely, define a generic one-liner script eval_func.sh:
#!/bin/bash
source "$1" && shift && "$@"
Then call any function within any script via:
./eval_func.sh <any script> <any function> <any args>...
An issue I ran into with the accepted solution is that when sourcing my function-containing script within another script, the arguments of the latter would be evaluated by the former, causing an error.
The other answers here are nice, and much appreciated, but often I don't want to source the script in the session (which reads and executes the file in your current shell) or modify it directly.
I find it more convenient to write a one or two line 'bootstrap' file and run that. Makes testing the main script easier, doesn't have side effects on your shell session, and as a bonus you can load things that simulate other environments for testing. Example...
# breakfast.sh
make_donuts() {
echo 'donuts!'
}
make_bagels() {
echo 'bagels!'
}
# bootstrap.sh
source 'breakfast.sh'
make_donuts
Now just run ./bootstrap.sh. The same idea works with your Python, Ruby, or whatever scripts.
Why useful? Let's say you complicated your life for some reason, and your script may find itself in different environments with different states present. For example, either your terminal session, or a cloud provider's cool new thing. You also want to test cloud things in terminal, using simple methods. No worries, your bootstrap can load elementary state for you.
# breakfast.sh
# Now it has to do slightly different things
# depending on where the script lives!
make_donuts() {
if [[ $AWS_ENV_VAR ]]
then
echo '/donuts'
elif [[ $AZURE_ENV_VAR ]]
then
echo '\donuts'
else
echo '/keto_diet'
fi
}
If you let your bootstrap thing take an argument, you can load different state for your function to chew, still with one line in the shell session:
# bootstrap.sh
source 'breakfast.sh'
case $1 in
AWS)
AWS_ENV_VAR="arn::mumbo:jumbo:12345"
;;
AZURE)
AZURE_ENV_VAR="cloud::woo:_impress"
;;
esac
make_donuts # You could use $2 here to name the function you wanna, but careful if evaluating directly.
In terminal session you're just entering:
./bootstrap.sh AWS
Result:
# /donuts
You can call a function from a command line argument as below:
function irfan() {
echo "Irfan khan"
date
hostname
}
function config() {
ifconfig
echo "hey"
}
$1
Once you have defined the functions, put $1 at the end to accept as an argument which function you want to call.
Let's say the above code is saved in fun.sh. Now you can call the functions as ./fun.sh irfan and ./fun.sh config on the command line.

Unable to run BASH script in current environment multiple times

I have a bash script that I use to move to source and bin directories from wherever I currently am (I call this script 'teleport'). Since it basically is just a glorified 'cd' command, I have to run it in the current shell (i.e. . ./teleport.sh). I've set up an alias in my .bashrc file so that 'teleport' expands to '. teleport.sh'.
The first time I run it, it works fine. But then, if I run it again after it has run once, it doesn't do anything. It works again if I close my terminal and then open a new one, but only the first time. My intuition is that there is something internally going on with BASH that I'm not familiar with, so I thought I would run it through the gurus here to see if I can get an answer.
The script is:
numargs=$#
function printUsage
{
echo -e "Usage: $0 [-o | -s] <PROJECT>\n"
echo -e "\tMagically teleports you into the main source directory of a project.\n"
echo -e "\t PROJECT: The current project you wish to teleport into."
echo -e "\t -o: Teleport into the objdir.\n"
echo -e "\t -s: Teleport into the source dir.\n"
}
if [ $numargs -lt 2 ]
then
printUsage
fi
function teleportToObj
{
OBJDIR=${HOME}/Source/${PROJECT}/obj
cd ${OBJDIR}
}
function teleportToSrc
{
cd ${HOME}/Source/${PROJECT}/src
}
while getopts "o:s:" opt
do
case $opt in
o)
PROJECT=$OPTARG
teleportToObj
;;
s)
PROJECT=$OPTARG
teleportToSrc
;;
esac
done
My usage of it is something like:
sjohnson@corellia:~$ cd /usr/local/src
sjohnson@corellia:/usr/local/src$ . ./teleport -s some-proj
sjohnson@corellia:~/Source/some-proj/src$ teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/src$
<... START NEW TERMINAL ...>
sjohnson@corellia:~$ . ./teleport -o some-proj
sjohnson@corellia:~/Source/some-proj/obj$
The problem is that getopts necessarily keeps a little bit of state so that it can be called in a loop, and you're not clearing that state. Each time it's called, it processes one more argument, and it increments the shell's OPTIND variable so it'll know which argument to process the next time it's called. When it's done with all the arguments, it returns 1 (false) every time it's invoked, which makes the while exit.
The first time you source your script, it works as expected. The second (and third, fourth...) time, getopts does nothing but return false.
Add one line to reset the state before you start looping:
unset OPTIND # clear state so getopts will start over
while getopts "o:s:" opt
do
# ...
done
(I assume there's a typo in your transcript, since it shows you invoking the script -- not sourcing it -- on the second try, but that's not the real problem here.)
The problem is that the first time you call it, you are sourcing the script (that's what '. ./teleport' does), which runs the script in the current shell, thus preserving the cd. The second time you call it, it isn't sourced, so you create a subshell, cd to the appropriate directory, and then exit the subshell, putting you right back where you called the script from!
The way to make this work is simply to make teleportToSrc and teleportToObj aliases or functions in the current shell (i.e. defined outside a script).
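For example, a sketch of the same behaviour written as a function directly in ~/.bashrc (directory layout taken from the script above), so the cd always happens in the interactive shell and works on every call:
# In ~/.bashrc
teleport() {
    local opt=$1 project=$2
    case $opt in
        -o) cd "${HOME}/Source/${project}/obj" ;;
        -s) cd "${HOME}/Source/${project}/src" ;;
        *)  echo "Usage: teleport [-o | -s] <PROJECT>" >&2; return 1 ;;
    esac
}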

Accessing variable from ARGV

I'm writing a cPanel postwwwact script; if you're not familiar with it, it's run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases, etc.). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:
#!/bin/sh
for var
do
touch /root/testvars/$var
done
and the value I want is in there; I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script.
EDIT: Ideally I would like to be able to refer to the variable by something other than $1 or $2, etc., as this would create issues if an argument is added or removed
...for example, in the PHP code here:
function argv2array ($argv) {
$opts = array();
$argv0 = array_shift($argv);
while(count($argv)) {
$key = array_shift($argv);
$value = array_shift($argv);
$opts[$key] = $value;
}
return $opts;
}
// allows you to do the following:
$opts = argv2array($argv);
echo $opts['user'];
Any ideas?
The parameters are passed to your script as a hash:
/scripts/$hookname user $user password $password
You can use associative arrays in Bash 4, or in earlier versions of Bash you can use built up variable names.
#!/bin/bash
# Bash >= 4
declare -A argv
for ((i=1;i<=${#@};i+=2))
do
argv[${@:i:1}]="${@:$((i+1)):1}"
done
echo ${argv['user']}
Or
#!/bin/bash
# Bash < 4
for ((i=1;i<=${#@};i+=2))
do
declare ARGV${@:i:1}="${@:$((i+1)):1}"
done
echo ${!ARGV*} # outputs all variable names that begin with ARGV
echo $ARGVuser
Running either:
$ ./argvtest user dennis password secret
dennis
Note: you can also use shift to step through the arguments, but it's destructive and the methods above leave $@ ($1, $2, etc.) in place.
#!/bin/bash
# Bash < 4
# using shift (can use in Bash 4, also)
for ((i=1;i<=${#@}+2;i++))
do
declare ARGV$1="$2"
# Bash 4: argv[$1]="$2"
shift 2
done
echo ${!ARGV*}
echo $ARGVuser
If it's passed as a command-line parameter to the script, it's available as $1 if it's the first parameter, $2 for the second, and so on.
Why not start off your script with something like
ARG_USER=$1
ARG_FOO=$2
ARG_BAR=$3
And then later in your script refer to $ARG_USER, $ARG_FOO and $ARG_BAR instead of $1, $2, and $3. That way, if you decide to change the order of arguments, or insert a new argument somewhere other than at the end, there is only one place in your code that you need to update the association between argument order and argument meaning.
You could even do more complex processing of $* to set your $ARG_WHATEVER variables, if it's not always going to be the case that all of them are specified in the same order every time.
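For instance, since cPanel passes the arguments as key/value pairs (as noted in the answer above), one hedged sketch is to walk the pairs and keep only the keys you care about; note that, unlike the loops above, this consumes the positional parameters via shift (the key names are examples only):
#!/bin/sh
# Sketch: turn "user $user password $password"-style pairs into ARG_ variables.
while [ $# -ge 2 ]; do
    case $1 in
        user)     ARG_USER=$2 ;;
        password) ARG_PASSWORD=$2 ;;
    esac
    shift 2
done
echo "user: $ARG_USER"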
You can do the following:
#!/bin/bash
for var in "$@"; do
<do whatever you want with $var>
done
And then, invoke the script as:
$ /path/to/script param1 arg2 item3 item4 etc
