Accessing variable from ARGV - linux

I'm writing a cPanel postwwwact script; if you're not familiar with it, it's run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases, etc.). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:
#!/bin/sh
for var
do
    touch "/root/testvars/$var"
done
and the value I want is in there; I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script.
EDIT: Ideally I would like to be able to refer to the variable by something other than $1 or $2, etc., as this would create issues if an argument is added or removed
...for example, in the PHP code here:
function argv2array ($argv) {
    $opts = array();
    $argv0 = array_shift($argv);
    while (count($argv)) {
        $key = array_shift($argv);
        $value = array_shift($argv);
        $opts[$key] = $value;
    }
    return $opts;
}
// allows you to do the following:
$opts = argv2array($argv);
echo $opts['user'];
Any ideas?

The parameters are passed to your script as a flat list of key/value pairs:
/scripts/$hookname user $user password $password
You can use associative arrays in Bash 4, or in earlier versions of Bash you can use built-up variable names.
#!/bin/bash
# Bash >= 4
declare -A argv
for ((i=1; i<=$#; i+=2))
do
    argv[${@:i:1}]="${@:$((i+1)):1}"
done
echo "${argv[user]}"
Or
#!/bin/bash
# Bash < 4
for ((i=1; i<=$#; i+=2))
do
    declare ARGV${@:i:1}="${@:$((i+1)):1}"
done
echo ${!ARGV*} # outputs all variable names that begin with ARGV
echo $ARGVuser
Running either:
$ ./argvtest user dennis password secret
dennis
Note: you can also use shift to step through the arguments, but it's destructive, and the methods above leave $@ ($1, $2, etc.) in place.
#!/bin/bash
# Bash < 4
# using shift (can use in Bash 4, also)
for ((i=1; i<=$#+2; i++))
do
    declare ARGV$1="$2"
    # Bash 4: argv[$1]="$2"
    shift 2
done
echo ${!ARGV*}
echo $ARGVuser

If it's passed as a command-line parameter to the script, it's available as $1 if it's the first parameter, $2 for the second, and so on.
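For example (a minimal sketch; the script name and arguments are made up for illustration):
#!/bin/bash
# Hypothetical example.sh, invoked as: ./example.sh alice secret
echo "first argument:  $1"   # alice
echo "second argument: $2"   # secret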

Why not start off your script with something like
ARG_USER=$1
ARG_FOO=$2
ARG_BAR=$3
And then later in your script refer to $ARG_USER, $ARG_FOO and $ARG_BAR instead of $1, $2, and $3. That way, if you decide to change the order of arguments, or insert a new argument somewhere other than at the end, there is only one place in your code that you need to update the association between argument order and argument meaning.
You could even do more complex processing of $* to set your $ARG_WHATEVER variables, if it's not always going to be the case that all of them are specified in the same order every time; see the sketch below.
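Since cPanel passes the arguments as key/value pairs, such processing could walk the list two at a time. A hedged sketch (the variable names and recognized keys are illustrative):
#!/bin/bash
# Walk key/value argument pairs (e.g. user dennis password secret)
# and map the keys we care about onto named variables.
while [ "$#" -ge 2 ]
do
    case "$1" in
        user)     ARG_USER=$2 ;;
        password) ARG_PASSWORD=$2 ;;
        *)        ;; # ignore keys we don't need
    esac
    shift 2
done
echo "$ARG_USER"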

You can do the following:
#!/bin/bash
for var in "$@"; do
    <do whatever you want with $var>
done
And then, invoke the script as:
$ /path/to/script param1 arg2 item3 item4 etc

Related

How to create an associative array with a list of items as values in bash

I would like to create an associative array like this in bash:
myarr = {
    'key1' : ["command_name", "command name with arguments"],
    'key2' : ["command_name", "command name with arguments"],
}
The reason I want to do the above is so that I can pass a key to the script and then do something like this:
Use the key to index into the associative array
Use some tool to check whether the application given by command_name is open
If the window is not open, launch the application given by command name with arguments
Such a task is trivial in a popular programming language, but it doesn't seem to be as trivial in bash.
EDIT
I'd like to be able to create something like this:
declare -A array=(
    [c]=("code" "code")
    [e]=("dolphin" "XDG_CURRENT_DESKTOP=KDE dolphin")
    [n]=("nvim" "kitty nvim")
)
Edit: taking the comments into account, the now-useless arrays have been replaced with scalar strings.
As we want to set bash variables in the command's context, we cannot simply execute the stored command with "$cmd"; that would not work for variable assignments. The following uses eval, which is extremely risky, especially if you do not fully control the inputs. A better but more complicated solution would be to use other variables for the execution environment, declare functions to limit the scope of variables and/or restore them afterwards, and use eval only as a last resort, and only after sanitizing its parameters (printf '%q')... Anyway, you have been warned.
Storing bash commands and their arguments in variables is not recommended. But if you really need this, it would be better to store the command names and the full commands in two different variables. They could be associative arrays or, if your bash is recent enough and supports namerefs, scalar variables named from your keys (if they are valid bash variable names).
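For the associative-array option, a minimal sketch might look like this (not_running is a placeholder for whatever check you use, as in the nameref example below; the eval caveats above still apply):
# Two associative arrays keyed the same way: one for bare command names,
# one for full command lines.
declare -A cmd=(  [c]="code" [e]="dolphin" [n]="nvim" )
declare -A lcmd=( [c]="code" [e]="XDG_CURRENT_DESKTOP=KDE dolphin" [n]="kitty nvim" )
k="e"
if not_running "${cmd[$k]}"; then
    eval "${lcmd[$k]}"   # same eval risks as described above
fi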
Example where the key is stored in bash variable k, and the command is the second of your own example, plus some dummy arguments:
k="e"
# add a new command with key "$k"
declare -n cmd="${k}_cmd" lcmd="${k}_lcmd"
cmd="dolphin"
lcmd="XDG_CURRENT_DESKTOP=KDE dolphin arg1 arg2 'arg 3'"
...
# launch command with key "$k"
declare -n cmd="${k}_cmd" lcmd="${k}_lcmd"
if not_running "$cmd"; then
    eval "$lcmd"
fi
Demo with key foo, command printf and arguments '%s\n' 'a b' 'c d':
$ k="foo"
$ declare -n cmd="${k}_cmd" lcmd="${k}_lcmd"
$ cmd="printf"
$ lcmd="printf '%s\n' 'a b' 'c d'"
$ eval "$lcmd"
a b
c d

Using "read" to set variables

In bash from the CLI I can do:
$ ERR_TYPE=$"OVERLOAD"
$ echo $ERR_TYPE
OVERLOAD
$ read ${ERR_TYPE}_ERROR
1234
$ echo $OVERLOAD_ERROR
1234
This works great to set my variable name dynamically; in a script it doesn't work. I tried:
#!/usr/bin/env bash
ERR_TYPE=("${ERR_TYPE[@]}" "OVERLOAD" "PANIC" "FATAL")
for i in "${ERR_TYPE[@]}"
do
    sh -c $(echo ${i}_ERROR=$"1234")
done
echo $OVERLOAD_ERROR # output is blank
# I also tried these:
# ${i}_ERROR=$(echo ${i}_ERROR=$"1234") # command not found
# read ${i}_ERROR=$(echo ${i}_ERROR=$"1234") # it never terminates
How would I set a variable as I do from the CLI, but in a script? Thanks
When you use dynamic variables names instead of associative arrays, you really need to question your approach.
err_type=("OVERLOAD" "PANIC" "FATAL")
declare -A error
for type in "${err_type[@]}"; do
    error[$type]=1234
done
Nevertheless, in bash you'd use declare:
declare "${i}_error=1234"
Your approach fails because you spawn a new shell, passing the command OVERLOAD_ERROR=1234, and then the shell exits. Your current shell is not affected at all.
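Applied to the original loop, a minimal corrected sketch:
#!/bin/bash
# declare runs in the current shell, so the dynamically named
# variables survive the loop.
ERR_TYPE=("OVERLOAD" "PANIC" "FATAL")
for i in "${ERR_TYPE[@]}"
do
    declare "${i}_ERROR=1234"
done
echo "$OVERLOAD_ERROR"   # prints 1234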
Get out of the habit of using ALLCAPSVARNAMES. One day you'll write PATH=... and then wonder why your script is broken.
If the variable will hold a number, you can use let.
#!/bin/bash
ERR_TYPE=("OVERLOAD" "PANIC" "FATAL")
j=0
for i in "${ERR_TYPE[@]}"
do
    let ${i}_ERROR=1000+j++
done
echo $OVERLOAD_ERROR
echo $PANIC_ERROR
echo $FATAL_ERROR
This outputs:
1000
1001
1002
I'd use eval.
I think this would be considered bad practice, though (it has something to do with the fact that eval is "evil" because it can execute arbitrary input):
eval "${i}_ERROR=1234"

In Bash, how do I stream the end of a pipeline into a variable?

I know that any variable set at the end of a pipeline is lost (except with the lastpipe shell option in Bash 4; unfortunately this needs to be a portable solution). However, I am sure there must be a way with file descriptors or something to stream the output of the end of a pipeline into a variable, even if via some convoluted route! :)
Ideally I want a command/function that takes one argument, the name of a variable that will eventually result in containing the output of the rest of the pipeline.
I have got a function that will find its next free file descriptor in a portable fashion:
getFd ()
{
    # we'll start with 3 since 0..2 are mapped to standard in, out, and error respectively
    local myFD='3'
    # we'll get the upper bound from bash's ulimit
    local FD_MAX=$( ulimit -n )
    local FD_LOC

    if [ -e /proc/$$/fd ]
    then
        FD_LOC="/proc/$$/fd"
    elif [ -e /dev/fd ]
    then
        FD_LOC="/dev/fd"
    else
        return 1
    fi

    while [ -e "${FD_LOC}/${myFD}" ] && [ "${myFD}" -le "${FD_MAX}" ]
    do
        ((++myFD))
    done

    eval FD="${myFD}"
}
I am thinking I might need to do something like previously creating a pool of open file descriptors that can be pulled in via some alias jiggery-pokery or something, but I am hoping that I am missing some much simpler way, as I am sure there must be a better way.
I was also thinking that if I added printf %s "${myFD}" at the end, I could do something like alias '{FD}'="$( getFd )" to implement the Bash 4 feature of automatically finding the next available file descriptor, used in the form {FD} <filename (note the need to have a space). If this can be made to work, it would be great to bring this feature to Bash 3.0, for example. Also, I would probably have to use shopt -s expand_aliases.
Any ideas would be greatly appreciated!
P.S. I am trying to avoid having to force the MyCommand MyVariable < <( command1 | command2 ; ) ; type syntax, and am striving, if it is possible, to end with the: $ command1 | command2 | MyCommand MyVariable ; type of use.
Your question is a bit confusing, but I guess you want to set a user-defined variable name to the 1st available file descriptor.
If that's the case, your function should simply echo the fd number:
getFD() {
    ...
    echo $myFD
}
And then, let's say foo contains the variable name to store the fd into; let's assume it contains the string bar:
eval $foo=`getFD`
After that variable bar will contain the first available fd.
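Putting the two together, a sketch (the descriptor probing is abbreviated from the question's getFd and assumes /proc is available):
getFD() {
    # Probe /proc/$$/fd for the first unused descriptor, starting at 3.
    local myFD=3
    while [ -e "/proc/$$/fd/$myFD" ]
    do
        ((++myFD))
    done
    echo "$myFD"
}
foo=bar
eval "$foo=$(getFD)"
echo "$bar"   # e.g. 3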

Variable next to ./name.sh in bash

Simple question, I know squat about bash scripts.
I've got a script test.sh and it sends a mail with some parameters of our DB while we run some stuff. I want to add the options 1, 2, 3 next to the ./test.sh so that the mail contains the current step of the process.
Example:
./test.sh 1 #>> Sends the mail with "Pre-application" in its subject.
PS: I know where to change the subject of the mail, but don't know how to read the variable from beside the .sh and then choose the words.
Your first command-line argument is simply stored in the $1 variable within the script, so you can use $1 without any explicit assignment in test.sh to read the number given on the command line. Find an example here. Note that when you use the value, you should double-quote it in your script: "$1"
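A hedged sketch of what that could look like inside test.sh (the subject strings are made up for illustration):
#!/bin/bash
# Map the step number given as ./test.sh 1|2|3 to a mail subject.
case "$1" in
    1) subject="Pre-application" ;;
    2) subject="Mid-application" ;;
    3) subject="Post-application" ;;
    *) subject="Unknown step" ;;
esac
echo "Sending mail with subject: $subject"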
What you should be using is arguments.
Assuming you have a function set up like so:
function_name () {
    echo "Parameter #1 is $1"
}
You can pass in an argument from the command line like so
sh example.sh example
Basically you can pass in any number of arguments and access each one like so..
$1 ('first argument')... $2 ('second argument')... $n ('nth argument')
You can go through this Documentation on Advanced bash-scripting guide to know more
http://www.tldp.org/LDP/abs/html/internalvariables.html#ARGLIST

Why should eval be avoided in Bash, and what should I use instead?

Time and time again, I see Bash answers on Stack Overflow using eval and the answers get bashed, pun intended, for the use of such an "evil" construct. Why is eval so evil?
If eval can't be used safely, what should I use instead?
There's more to this problem than meets the eye. We'll start with the obvious: eval has the potential to execute "dirty" data. Dirty data is any data that has not been rewritten as safe-for-use-in-situation-XYZ; in our case, it's any string that has not been formatted so as to be safe for evaluation.
Sanitizing data appears easy at first glance. Assuming we're throwing around a list of options, bash already provides a great way to sanitize individual elements, and another way to sanitize the entire array as a single string:
function println
{
    # Send each element as a separate argument, starting with the second element.
    # Arguments to printf:
    #   1 -> "$1\n"
    #   2 -> "$2"
    #   3 -> "$3"
    #   4 -> "$4"
    #   etc.
    printf "$1\n" "${@:2}"
}
function error
{
    # Send the first element as one argument, and the rest of the elements as a combined argument.
    # Arguments to println:
    #   1 -> '\e[31mError (%d): %s\e[m'
    #   2 -> "$1"
    #   3 -> "${*:2}"
    println '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit "$1"
}
# This...
error 1234 Something went wrong.
# And this...
error 1234 'Something went wrong.'
# Result in the same output (as long as $IFS has not been modified).
Now say we want to add an option to redirect output as an argument to println. We could, of course, just redirect the output of println on each call, but for the sake of example, we're not going to do that. We'll need to use eval, since variables can't be used to redirect output.
function println
{
    eval printf "$2\n" "${@:3}" $1
}
function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}
error 1234 Something went wrong.
Looks good, right? Problem is, eval parses the command line twice (in any shell). On the first pass of parsing, one layer of quoting is removed. With quotes removed, some variable content gets executed.
We can fix this by letting the variable expansion take place within the eval. All we have to do is single-quote everything, leaving the double-quotes where they are. One exception: we have to expand the redirection prior to eval, so that has to stay outside of the quotes:
function println
{
    eval 'printf "$2\n" "${@:3}"' $1
}
function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}
error 1234 Something went wrong.
This should work. It's also safe as long as $1 in println is never dirty.
Now hold on just a moment: I use that same unquoted syntax with sudo all of the time! Why does it work there, and not here? Why did we have to single-quote everything? sudo is a bit more modern: it knows to enclose each argument that it receives in quotes, though that is an over-simplification. eval simply concatenates everything.
Unfortunately, there is no drop-in replacement for eval that treats arguments like sudo does, as eval is a shell built-in; this is important, as it takes on the environment and scope of the surrounding code when it executes, rather than creating a new stack and scope like a function does.
eval Alternatives
Specific use cases often have viable alternatives to eval. Here's a handy list. command represents what you would normally send to eval; substitute in whatever you please.
No-op
A simple colon is a no-op in bash:
:
Create a sub-shell
( command ) # Standard notation
Execute output of a command
Never rely on an external command. You should always be in control of the return value. Put these on their own lines:
$(command) # Preferred
`command` # Old: should be avoided, and often considered deprecated
# Nesting:
$(command1 "$(command2)")
`command "\`command\`"` # Careful: \ only escapes $ and \ with old style, and
# special case \` results in nesting.
Redirection based on variable
In calling code, map &3 (or anything higher than &2) to your target:
exec 3<&0 # Redirect from stdin
exec 3>&1 # Redirect to stdout
exec 3>&2 # Redirect to stderr
exec 3> /dev/null # Don't save output anywhere
exec 3> file.txt # Redirect to file
exec 3> "$var" # Redirect to file stored in $var--only works for files!
exec 3<&0 4>&1 # Input and output!
If it were a one-time call, you wouldn't have to redirect the entire shell:
func arg1 arg2 3>&2
Within the function being called, redirect to &3:
command <&3 # Redirect stdin
command >&3 # Redirect stdout
command 2>&3 # Redirect stderr
command >&3 2>&1 # Redirect stdout and stderr to &3: stdout goes first, then
                 # stderr joins the new stdout; order matters
command 2>&1 >&3 # Careful: this sends stderr to the *original* stdout, and
                 # only stdout ends up in &3
command <&3 >&4 # Input and output!
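Putting this together, the earlier println/error pair could drop eval entirely. One possible sketch, with the caller mapping fd 3 to the desired target:
function println
{
    # Writes to whatever the caller mapped to &3.
    printf "$1\n" "${@:2}" >&3
}
function error
{
    println '\e[31mError (%d): %s\e[m' "$1" "${*:2}" 3>&2
    exit "$1"
}
error 1234 Something went wrong.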
Variable indirection
Scenario:
VAR='1 2 3'
REF=VAR
Bad:
eval "echo \"\$$REF\""
Why? If REF contains a double quote, this will break and open the code to exploits. It's possible to sanitize REF, but it's a waste of time when you have this:
echo "${!REF}"
That's right, bash has variable indirection built-in as of version 2. It gets a bit trickier than eval if you want to do something more complex:
# Add to scenario:
VAR_2='4 5 6'
# We could use:
local ref="${REF}_2"
echo "${!ref}"
# Versus the bash < 2 method, which might be simpler to those accustomed to eval:
eval "echo \"\$${REF}_2\""
Regardless, the new method is more intuitive, though it might not seem that way to experienced programmers who are used to eval.
Associative arrays
Associative arrays are implemented intrinsically in bash 4. One caveat: they must be created using declare.
declare -A VAR   # Local
declare -gA VAR  # Global
# Use spaces between parentheses and contents; I've heard reports of subtle bugs
# on some versions when they are omitted having to do with spaces in keys.
declare -A VAR=( ['']='a' [0]='1' ['duck']='quack' )
VAR+=( ['alpha']='beta' [2]=3 )  # Combine arrays
VAR['cow']='moo'  # Set a single element
unset VAR['cow']  # Unset a single element
unset VAR         # Unset an entire array
unset VAR[@]      # Unset an entire array
unset VAR[*]      # Unset each element with a key corresponding to a file in the
                  # current directory; if * doesn't expand, unset the entire array
local KEYS=( "${!VAR[@]}" )  # Get all of the keys in VAR
In older versions of bash, you can use variable indirection:
VAR=( )  # This will store our keys.
# Store a value with a simple key.
# You will need to declare it in a global scope to make it global prior to bash 4.
# In bash 4, use the -g option.
declare "VAR_$key"="$value"
VAR+=( "$key" )
# Or, if your version is lacking +=
VAR=( "${VAR[@]}" "$key" )
# Recover a simple value.
local var_key="VAR_$key"       # The name of the variable that holds the value
local var_value="${!var_key}"  # The actual value--requires bash 2
# For < bash 2, eval is required for this method. Safe as long as $key is not dirty.
local var_value="`eval echo -n \"\$$var_key\"`"
# If you don't need to enumerate the indices quickly, and you're on bash 2+, this
# can be cut down to one line per operation:
declare "VAR_$key"="$value" # Store
echo "`var_key="VAR_$key" echo -n "${!var_key}"`" # Retrieve
# If you're using more complex values, you'll need to hash your keys:
function mkkey
{
    local key="`mkpasswd -5R0 "$1" 00000000`"
    echo -n "${key##*$}"
}
local var_key="VAR_`mkkey "$key"`"
# ...
How to make eval safe
eval can be safely used - but all of its arguments need to be quoted first. Here's how:
This function will do it for you:
function token_quote {
    local quoted=()
    for token; do
        quoted+=( "$(printf '%q' "$token")" )
    done
    printf '%s\n' "${quoted[*]}"
}
Example usage:
Given some untrusted user input:
% input="Trying to hack you; date"
Construct a command to eval:
% cmd=(echo "User gave:" "$input")
Eval it, with seemingly correct quoting:
% eval "$(echo "${cmd[@]}")"
User gave: Trying to hack you
Thu Sep 27 20:41:31 +07 2018
Note you were hacked. date was executed rather than being printed literally.
Instead with token_quote():
% eval "$(token_quote "${cmd[@]}")"
User gave: Trying to hack you; date
%
eval isn't evil - it's just misunderstood :)
I’ll split this answer in two parts, which, I think, cover a large proportion of the cases where people tend to be tempted by eval:
Running weirdly built commands
Fiddling with dynamically named variables
Running weirdly built commands
Many, many times, simple indexed arrays are enough, provided that you take on good habits regarding double quotes to protect expansions while defining the array.
# One nasty argument which must remain a single argument and not be split:
f='foo bar'
# The command in an indexed array (use `declare -a` if you really want to be explicit):
cmd=(
    touch
    "$f"
    # Yet another nasty argument, this time hardcoded:
    'plop yo'
)
# Let Bash expand the array and run it as a command:
"${cmd[#]}"
This will create foo bar and plop yo (two files, not four).
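For contrast, a sketch of the string-based failure mode: the unquoted expansion both splits on whitespace and keeps the quote characters literally.
cmd="touch $f 'plop yo'"
$cmd   # Creates four files: foo, bar, 'plop and yo' (with literal quote characters)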
Note that sometimes it can produce more readable scripts to put just the arguments (or a bunch of options) in the array (at least you know at first glance what you’re running):
touch "${args[#]}"
touch "${opts[#]}" file1 file2
As a bonus, arrays let you, easily:
Add comments about a specific argument:
cmd=(
    # Important because blah blah:
    -v
)
Group arguments for readability by leaving blank lines within the array definition.
Comment out specific arguments for debugging purposes.
Append arguments to your command, sometimes dynamically according to specific conditions or in loops:
cmd=(myprog)
for f in foo bar
do
    cmd+=(-i "$f")
done
if [[ $1 = yo ]]
then
    cmd+=(plop)
fi
to_be_added=(one two 't h r e e')
cmd+=("${to_be_added[#]}")
Define commands in configuration files while allowing for configuration-defined whitespace-containing arguments:
readonly ENCODER=(ffmpeg -blah --blah 'yo plop')
# Deprecated:
#readonly ENCODER=(avconv -bloh --bloh 'ya plap')
# […]
"${ENCODER[#]}" foo bar
Log a robustly runnable command, that perfectly represents what is being run, using printf’s %q:
function please_log_that {
    printf 'Running:'
    # From `help printf`:
    # “The format is re-used as necessary to consume all of the arguments.”
    # From `man printf` for %q:
    # “printed in a format that can be reused as shell input,
    #  escaping non-printable characters with the proposed POSIX $'' syntax.”
    printf ' %q' "$@"
    echo
}
arg='foo bar'
cmd=(prog "$arg" 'plop yo' $'arg\nnewline\tand tab')
please_log_that "${cmd[#]}"
# ⇒ “Running: prog foo\ bar plop\ yo $'arg\nnewline\tand tab'”
# You can literally copy and paste that ↑ to a terminal and get the same execution.
Enjoy better syntax highlighting than with eval strings, since you don’t need to nest quotes or use $-s that “will not be evaluated right away but will be at some point”.
To me, the main advantage of this approach (and conversely disadvantage of eval) is that you can follow the same logic as usual regarding quotation, expansion, etc. No need to rack your brain trying to put quotes in quotes in quotes “in advance” while trying to figure out which command will interpret which pair of quotes at which moment. And of course many of the things mentioned above are harder or downright impossible to achieve with eval.
With these, I never had to rely on eval in the past six years or so, and readability and robustness (in particular regarding arguments that contain whitespace) were arguably increased. You don't even need to know whether IFS has been tampered with! Of course, there are still edge cases where eval might actually be needed (I suppose, for example, if the user has to be able to provide a full-fledged piece of script via an interactive prompt or whatever), but hopefully that's not something you'll come across on a daily basis.
Fiddling with dynamically named variables
declare -n (or its within-functions local -n counterpart), as well as ${!foo}, do the trick most of the time.
$ help declare | grep -- -n
-n make NAME a reference to the variable named by its value
Well, it’s not exceptionally clear without an example:
declare -A global_associative_array=(
    [foo]=bar
    [plop]=yo
)
# $1 Name of global array to fiddle with.
fiddle_with_array() {
    # Check this if you want to make sure you’ll avoid
    # circular references, but it’s only if you really
    # want this to be robust.
    # You can also give an ugly name like “__ref” to your
    # local variable as a cheaper way to make collisions less likely.
    if [[ $1 != ref ]]
    then
        local -n ref=$1
    fi
    printf 'foo → %s\nplop → %s\n' "${ref[foo]}" "${ref[plop]}"
}
# Call the function with the array NAME as argument,
# not trying to get its content right away here or anything.
fiddle_with_array global_associative_array
# This will print:
# foo → bar
# plop → yo
(I love this trick ↑ as it makes me feel like I’m passing objects to my functions, like in an object-oriented language. The possibilities are mind-boggling.)
As for ${!…} (which gets the value of the variable named by another variable):
foo=bar
plop=yo
for var_name in foo plop
do
    printf '%s = %q\n' "$var_name" "${!var_name}"
done
# This will print:
# foo = bar
# plop = yo
