use an env variable as part of the name of a different env variable in bash [duplicate] - linux

I am confused about a bash script.
I have the following code:
function grep_search() {
magic_way_to_define_magic_variable_$1=`ls | tail -1`
echo $magic_variable_$1
}
I want to be able to create a variable name containing the first argument of the command and bearing the value of e.g. the last line of ls.
So to illustrate what I want:
$ ls | tail -1
stack-overflow.txt
$ grep_search() open_box
stack-overflow.txt
So, how should I define/declare $magic_way_to_define_magic_variable_$1 and how should I call it within the script?
I have tried eval, ${...}, \$${...}, but I am still confused.

I've been looking for a better way of doing this recently. An associative array sounded like overkill for me. Look what I found:
suffix=bzz
declare prefix_$suffix=mystr
...and then...
varname=prefix_$suffix
echo ${!varname}
From the docs:
The ‘$’ character introduces parameter expansion, command substitution, or arithmetic expansion. ...
The basic form of parameter expansion is ${parameter}. The value of parameter is substituted. ...
If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of indirection. Bash uses the value formed by expanding the rest of parameter as the new parameter; this is then expanded and that value is used in the rest of the expansion, rather than the expansion of the original parameter. This is known as indirect expansion. The value is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. ...

Use an associative array, with command names as keys.
# Requires bash 4, though
declare -A magic_variable=()
function grep_search() {
magic_variable[$1]=$( ls | tail -1 )
echo ${magic_variable[$1]}
}
If you can't use associative arrays (e.g., you must support bash 3), you can use declare to create dynamic variable names:
declare "magic_variable_$1=$(ls | tail -1)"
and use indirect parameter expansion to access the value.
var="magic_variable_$1"
echo "${!var}"
See BashFAQ: Indirection - Evaluating indirect/reference variables.
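Putting those two pieces together, a minimal sketch of the asker's grep_search for bash 3 (the trailing call is just for illustration):
function grep_search() {
# declare creates the dynamically named variable
# (note: inside a function, plain declare makes it local to that function)
declare "magic_variable_$1=$(ls | tail -1)"
# read it back through indirect parameter expansion
local var="magic_variable_$1"
echo "${!var}"
}
grep_search open_box # prints the last entry of "ls", e.g. stack-overflow.txt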

Beyond associative arrays, there are several ways of achieving dynamic variables in Bash. Note that all these techniques present risks, which are discussed at the end of this answer.
In the following examples I will assume that i=37 and that you want to alias the variable named var_37 whose initial value is lolilol.
Method 1. Using a “pointer” variable
You can simply store the name of the variable in an indirection variable, not unlike a C pointer. Bash then has a syntax for reading the aliased variable: ${!name} expands to the value of the variable whose name is the value of the variable name. You can think of it as a two-stage expansion: ${!name} expands to $var_37, which expands to lolilol.
name="var_$i"
echo "$name" # outputs “var_37”
echo "${!name}" # outputs “lolilol”
echo "${!name%lol}" # outputs “loli”
# etc.
Unfortunately, there is no counterpart syntax for modifying the aliased variable. Instead, you can achieve assignment with one of the following tricks.
1a. Assigning with eval
eval is evil, but is also the simplest and most portable way of achieving our goal. You have to carefully escape the right-hand side of the assignment, as it will be evaluated twice. An easy and systematic way of doing this is to evaluate the right-hand side beforehand (or to use printf %q).
You also have to check manually that the left-hand side is a valid variable name, or a name with an index (what if it were evil_code # ?). By contrast, all the other methods below enforce this automatically.
# check that name is a valid variable name:
# note: this code does not support variable_name[index]
shopt -s globasciiranges
[[ "$name" == [a-zA-Z_]*([a-zA-Z_0-9]) ]] || exit
value='babibab'
eval "$name"='$value' # carefully escape the right-hand side!
echo "$var_37" # outputs “babibab”
Downsides:
does not check the validity of the variable name.
eval is evil.
eval is evil.
eval is evil.
1b. Assigning with read
The read builtin lets you assign values to a variable of which you give the name, a fact which can be exploited in conjunction with here-strings:
IFS= read -r -d '' "$name" <<< 'babibab'
echo "$var_37" # outputs “babibab\n”
The IFS part and the option -r make sure that the value is assigned as-is, while the option -d '' allows assigning multi-line values. Because of this last option, the command returns with a non-zero exit code.
Note that, since we are using a here-string, a newline character is appended to the value.
Downsides:
somewhat obscure;
returns with a non-zero exit code;
appends a newline to the value.
1c. Assigning with printf
Since Bash 3.1 (released 2005), the printf builtin can also assign its result to a variable whose name is given. By contrast with the previous solutions, it just works, no extra effort is needed to escape things, to prevent splitting and so on.
printf -v "$name" '%s' 'babibab'
echo "$var_37" # outputs “babibab”
Downsides:
Less portable (but, well).
Method 2. Using a “reference” variable
Since Bash 4.3 (released 2014), the declare builtin has an option -n for creating a variable which is a “name reference” to another variable, much like C++ references. Just as in Method 1, the reference stores the name of the aliased variable, but each time the reference is accessed (either for reading or assigning), Bash automatically resolves the indirection.
In addition, Bash has a special and very confusing syntax for getting the value of the reference itself; judge for yourself: ${!ref}.
declare -n ref="var_$i"
echo "${!ref}" # outputs “var_37”
echo "$ref" # outputs “lolilol”
ref='babibab'
echo "$var_37" # outputs “babibab”
This does not avoid the pitfalls explained below, but at least it makes the syntax straightforward.
Downsides:
Not portable.
Risks
All these aliasing techniques present several risks. The first one is executing arbitrary code each time you resolve the indirection (either for reading or for assigning). Indeed, instead of a scalar variable name, like var_37, you may as well alias an array subscript, like arr[42]. But Bash evaluates the contents of the square brackets each time it is needed, so aliasing arr[$(do_evil)] will have unexpected effects… As a consequence, only use these techniques when you control the provenance of the alias.
function guillemots {
declare -n var="$1"
var="«${var}»"
}
arr=( aaa bbb ccc )
guillemots 'arr[1]' # modifies the second cell of the array, as expected
guillemots 'arr[$(date>>date.out)1]' # writes twice into date.out
# (once when expanding var, once when assigning to it)
The second risk is creating a cyclic alias. As Bash variables are identified by their name and not by their scope, you may inadvertently create an alias to itself (while thinking it would alias a variable from an enclosing scope). This may happen in particular when using common variable names (like var). As a consequence, only use these techniques when you control the name of the aliased variable.
function guillemots {
# var is intended to be local to the function,
# aliasing a variable which comes from outside
declare -n var="$1"
var="«${var}»"
}
var='lolilol'
guillemots var # Bash warnings: “var: circular name reference”
echo "$var" # outputs anything!
Source:
BashFaq/006: How can I use variable variables (indirect variables, pointers, references) or associative arrays?
BashFAQ/048: eval command and security issues

The example below returns the value of $name_of_var:
var=name_of_var
echo $(eval echo "\$$var")

Use declare
There is no need to use prefixes as in other answers, nor arrays. Just use declare, double quotes, and parameter expansion.
I often use the following trick to parse argument lists containing one to n arguments formatted as key=value otherkey=othervalue etc=etc, like:
# brace expansion just to exemplify
for variable in {one=foo,two=bar,ninja=tip}
do
declare "${variable%=*}=${variable#*=}"
done
echo $one $two $ninja
# foo bar tip
But you can expand the argv list the same way:
for v in "$@"; do declare "${v%=*}=${v#*=}"; done
Extra tips
# parse argv's leading key=value parameters
for v in "$#"; do
case "$v" in ?*=?*) declare "${v%=*}=${v#*=}";; *) break;; esac
done
# consume argv's leading key=value parameters
while test $# -gt 0; do
case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
shift
done
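A quick way to try the "consume" loop above in an interactive shell (set -- just simulates a command line here):
# simulate a command line, then run the "consume" loop
set -- env=staging region=eu-west-1 run
while test $# -gt 0; do
case "$1" in ?*=?*) declare "${1%=*}=${1#*=}";; *) break;; esac
shift
done
echo "$env $region -- remaining: $*" # staging eu-west-1 -- remaining: run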

Combining two highly rated answers here into a complete example that is hopefully useful and self-explanatory:
#!/bin/bash
intro="You know what,"
pet1="cat"
pet2="chicken"
pet3="cow"
pet4="dog"
pet5="pig"
# Setting and reading dynamic variables
for i in {1..5}; do
pet="pet$i"
declare "sentence$i=$intro I have a pet ${!pet} at home"
done
# Just reading dynamic variables
for i in {1..5}; do
sentence="sentence$i"
echo "${!sentence}"
done
echo
echo "Again, but reading regular variables:"
echo $sentence1
echo $sentence2
echo $sentence3
echo $sentence4
echo $sentence5
Output:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home
Again, but reading regular variables:
You know what, I have a pet cat at home
You know what, I have a pet chicken at home
You know what, I have a pet cow at home
You know what, I have a pet dog at home
You know what, I have a pet pig at home

This will work too
my_country_code="green"
x="country"
eval z='$'my_"$x"_code
echo $z ## o/p: green
In your case
eval final_val='$'magic_way_to_define_magic_variable_"$1"
echo $final_val

This should work:
function grep_search() {
declare magic_variable_$1="$(ls | tail -1)"
echo "$(tmpvar=magic_variable_$1 && echo ${!tmpvar})"
}
grep_search var # calling grep_search with argument "var"

An extra method that doesn't depend on which shell or bash version you have is to use envsubst. For example:
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
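Note that envsubst only substitutes variables that are present in the environment, so the dynamic variable has to be exported; a small sketch (the names are placeholders):
export magic_variable_foo="bar"
dynamic_part=foo
newvar=$(echo '$magic_variable_'"${dynamic_part}" | envsubst)
echo "$newvar" # bar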

For zsh (the default shell on newer macOS versions), you should use
real_var="holaaaa"
aux_var="real_var"
echo ${(P)aux_var}
holaaaa
instead of the "!" used in bash.

As per BashFAQ/006, you can use read with here string syntax for assigning indirect variables:
function grep_search() {
read "$1" <<<$(ls | tail -1);
}
Usage:
$ grep_search open_box
$ echo $open_box
stack-overflow.txt

Even though it's an old question, I still had a hard time fetching dynamic variable names while avoiding the eval (evil) command.
I solved it with declare -n, which creates a reference to a dynamic value. This is especially useful in CI/CD processes, where the required secret names of the CI/CD service are not known until runtime. Here's how:
# Bash v4.3+
# -----------------------------------------------------------
# Secrets in CI/CD service, injected as environment variables
# AWS_ACCESS_KEY_ID_DEV, AWS_SECRET_ACCESS_KEY_DEV
# AWS_ACCESS_KEY_ID_STG, AWS_SECRET_ACCESS_KEY_STG
# -----------------------------------------------------------
# Environment variables injected by CI/CD service
# BRANCH_NAME="DEV"
# -----------------------------------------------------------
declare -n _AWS_ACCESS_KEY_ID_REF=AWS_ACCESS_KEY_ID_${BRANCH_NAME}
declare -n _AWS_SECRET_ACCESS_KEY_REF=AWS_SECRET_ACCESS_KEY_${BRANCH_NAME}
export AWS_ACCESS_KEY_ID=${_AWS_ACCESS_KEY_ID_REF}
export AWS_SECRET_ACCESS_KEY=${_AWS_SECRET_ACCESS_KEY_REF}
echo $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY
aws s3 ls

Wow, most of the syntax is horrible! Here is one solution with some simpler syntax if you need to indirectly reference arrays:
#!/bin/bash
foo_1=(fff ddd) ;
foo_2=(ggg ccc) ;
for i in 1 2 ;
do
eval mine=( \${foo_$i[@]} ) ;
echo ${mine[@]}" " ;
done ;
For simpler use cases I recommend the syntax described in the Advanced Bash-Scripting Guide.

KISS approach:
a=1
c="bam"
let "$c$a"=4
echo $bam1
results in 4

I want to be able to create a variable name containing the first argument of the command
script.sh file:
#!/usr/bin/env bash
function grep_search() {
eval $1=$(ls | tail -1)
}
Test:
$ source script.sh
$ grep_search open_box
$ echo $open_box
script.sh
As per help eval:
Execute arguments as a shell command.
You may also use Bash ${!var} indirect expansion, as already mentioned; however, it doesn't support retrieving array indices.
For further reading or examples, check BashFAQ/006 about Indirection.
We are not aware of any trick that can duplicate that functionality in POSIX or Bourne shells without eval, which can be difficult to do securely. So, consider this a use at your own risk hack.
However, you should reconsider using indirection, as per the following notes.
Normally, in bash scripting, you won't need indirect references at all. Generally, people look at this for a solution when they don't understand or know about Bash Arrays or haven't fully considered other Bash features such as functions.
Putting variable names or any other bash syntax inside parameters is frequently done incorrectly and in inappropriate situations to solve problems that have better solutions. It violates the separation between code and data, and as such puts you on a slippery slope toward bugs and security issues. Indirection can make your code less transparent and harder to follow.

For indexed arrays, you can reference them like so:
foo=(a b c)
bar=(d e f)
for arr_var in 'foo' 'bar'; do
declare -a 'arr=("${'"$arr_var"'[@]}")'
# do something with $arr
echo "\$$arr_var contains:"
for char in "${arr[@]}"; do
echo "$char"
done
done
Associative arrays can be referenced similarly but need the -A switch on declare instead of -a.
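For associative arrays, the plain "${...[@]}" expansion in the snippet above would lose the keys, so a sketch using a nameref (bash 4.3+) may be simpler than the -A variant of that trick (the array name here is made up):
declare -A settings=([host]=example.com [port]=8080)
map_var=settings
declare -n map="$map_var" # nameref to the associative array named in $map_var
for key in "${!map[@]}"; do
echo "$key=${map[$key]}"
done
unset -n map # drop the nameref itself, not the referenced array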

POSIX compliant answer
For this solution you'll need to have r/w permissions to the /tmp folder.
We create a temporary file holding our variables and leverage the -a flag of the set built-in:
$ man set
...
-a Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.
Therefore, if we create a file holding our dynamic variables, we can use set to bring them to life inside our script.
The implementation
#!/bin/sh
# Give the temp file a unique name so you don't mess with any other files in there
ENV_FILE="/tmp/$(date +%s)"
MY_KEY=foo
MY_VALUE=bar
echo "$MY_KEY=$MY_VALUE" >> "$ENV_FILE"
# Now that our env file is created and populated, we can use "set"
set -a; . "$ENV_FILE"; set +a
rm "$ENV_FILE"
echo "$foo"
# Output is "bar" (without quotes)
Explaining the steps above:
# Enables the -a behavior
set -a
# Sources the env file
. "$ENV_FILE"
# Disables the -a behavior
set +a
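If mktemp is available (most systems have it, even though POSIX does not require it), it is a safer way to get a unique file name than a timestamp:
#!/bin/sh
ENV_FILE=$(mktemp) || exit 1
echo "foo=bar" >> "$ENV_FILE"
set -a; . "$ENV_FILE"; set +a
rm -f "$ENV_FILE"
echo "$foo" # bar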

While I think declare -n is still the best way to do it, there is another way nobody has mentioned, which is very useful in CI/CD:
function dynamic(){
export a_$1="bla"
}
dynamic 2
echo $a_2
This function does not support spaces, so dynamic "2 3" will return an error.
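If the suffix might contain spaces or other characters that are not valid in a variable name, a small guard (just a sketch) rejects them up front:
function dynamic(){
case "$1" in
""|*[!A-Za-z0-9_]*) echo "invalid suffix: $1" >&2; return 1;;
esac
export "a_$1=bla"
}
dynamic 2 # OK, sets a_2
dynamic "2 3" # rejected before export can fail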

For the varname=$prefix_suffix format, just use:
varname=${prefix}_suffix

Related

Increment a variable name in ksh [duplicate]

Seems that the recommended way of doing indirect variable setting in bash is to use eval:
var=x; val=foo
eval $var=$val
echo $x # --> foo
The problem is the usual one with eval:
var=x; val=1$'\n'pwd
eval $var=$val # bad output here
(and since it is recommended in many places, I wonder just how many scripts are vulnerable because of this...)
In any case, the obvious solution of using (escaped) quotes doesn't really work:
var=x; val=1\"$'\n'pwd\"
eval $var=\"$val\" # fail with the above
The thing is that bash has indirect variable reference baked in (with ${!foo}), but I don't see any such way to do indirect assignment -- is there any sane way to do this?
For the record, I did find a solution, but this is not something that I'd consider "sane"...:
eval "$var='"${val//\'/\'\"\'\"\'}"'"
A slightly better way, avoiding the possible security implications of using eval, is
declare "$var=$val"
Note that declare is a synonym for typeset in bash. The typeset command is more widely supported (ksh and zsh also use it):
typeset "$var=$val"
In modern versions of bash, one should use a nameref.
declare -n ref="$var" # ref becomes a nameref to the variable whose name is in $var (here: x)
ref=$val
It's safer than eval, but still not perfect.
Bash has an extension to printf that saves its result into a variable:
printf -v "${VARNAME}" '%s' "${VALUE}"
This prevents all possible escaping issues.
If you use an invalid identifier for $VARNAME, the command will fail and return status code 2:
$ printf -v ';;;' '%s' foobar; echo $?
bash: printf: `;;;': not a valid identifier
2
eval "$var=\$val"
The argument to eval should always be a single string enclosed in either single or double quotes. All code that deviates from this pattern has some unintended behavior in edge cases, such as file names with special characters.
When the argument to eval is expanded by the shell, the $var is replaced with the variable name, and the \$ is replaced with a simple dollar. The string that is evaluated therefore becomes:
varname=$val
This is exactly what you want.
Generally, all expressions of the form $varname should be enclosed in double quotes, to prevent accidental expansion of filename patterns like *.c.
There are only two places where the quotes may be omitted since they are defined to not expand pathnames and split fields: variable assignments and case. POSIX 2018 says:
Each variable assignment shall be expanded for tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal prior to assigning the value.
This list of expansions is missing the pathname expansion and the field splitting. Sure, that's hard to see from reading this sentence alone, but that's the official definition.
Since this is a variable assignment, the quotes are not needed here. They don't hurt, though, so you could also write the original code as:
eval "$var=\"the value is \$val\""
Note that the second dollar is escaped using a backslash, to prevent it from being expanded in the first run. What happens is:
eval "$var=\"the value is \$val\""
The argument to the command eval is sent through parameter expansion and unescaping, resulting in:
varname="the value is $val"
This string is then evaluated as a variable assignment, which (assuming val=value) assigns the following value to the variable varname:
the value is value
The main point is that the recommended way to do this is:
eval "$var=\$val"
with the RHS expanded indirectly too. Since eval runs in the same
environment, it still has $val bound, so deferring its expansion works, and by then
it is just a plain variable. Since the $val variable has a known name,
there are no quoting issues, and it could even have been written as:
eval $var=\$val
But since it's better to always add quotes, the former is better, or
even this:
eval "$var=\"\$val\""
A better alternative in bash that was mentioned for the whole thing that
avoids eval completely (and is not as subtle as declare etc):
printf -v "$var" "%s" "$val"
Though this is not a direct answer what I originally asked...
Newer versions of bash support something called "parameter transformation", documented in a section of the same name in bash(1).
"${value#Q}" expands to a shell-quoted version of "${value}" that you can re-use as input.
Which means the following is a safe solution:
eval="${varname}=${value#Q}"
Just for completeness I also want to suggest the possible use of the bash built-in read. I've also made corrections regarding -d '' based on socowi's comments.
But much care needs to be exercised when using read to ensure the input is sanitized (-d '' reads until null termination and printf "...\0" terminates the value with a null), and that read itself is executed in the main shell where the variable is needed and not in a sub-shell (hence the < <( ... ) syntax).
var=x; val=foo0shouldnotterminateearly
read -d '' -r "$var" < <(printf "$val\0")
echo $x # --> foo0shouldnotterminateearly
echo ${!var} # --> foo0shouldnotterminateearly
I tested this with \n, \t, \r, spaces, 0, etc.; it worked as expected on my version of bash.
The -r option prevents backslash escapes from being interpreted, so if your value contains the two characters "\" and "n" rather than an actual newline, x will contain those two characters as well.
This method may not be as aesthetically pleasing as the eval or printf solution, and would be more useful if the value is coming in from a file or another input file descriptor.
read -d '' -r "$var" < <( cat "$file" )
And here are some alternative suggestions for the < <() syntax
read -d '' -r "$var" <<< "$val"$'\0'
read -d '' -r "$var" < <(printf "$val") #Apparently I didn't even need the \0, the printf process ending was enough to trigger the read to finish.
read -d '' -r "$var" <<< $(printf "$val")
read -d '' -r "$var" <<< "$val"
read -d '' -r "$var" < <(printf "$val")
Yet another way to accomplish this, without eval, is to use "read":
INDIRECT=foo
read -d '' -r "${INDIRECT}" <<<"$(( 2 * 2 ))"
echo "${foo}" # outputs "4"

Strange behavior with parameter expansion in program arguments

I'm trying to conditionally pass an argument to a bash script only if it has been set in the calling script and I've noticed some odd behavior.
I'm using parameter expansion to facilitate this, outputting an option only if the corresponding variable is set. The aim is to pass an argument from a 'parent' script to a 'child' script.
Consider the following example:
The calling script:
#!/bin/bash
# 1.sh
ONE="TEST_ONE"
TWO="TEST_TWO"
./2.sh \
--one "${ONE}" \
"${TWO:+"--two ${TWO}"}" \
--other
and the called script:
#!/bin/bash
# 2.sh
while [[ $# -gt 0 ]]; do
key="${1}"
case $key in
-o|--one)
ONE="${2}"
echo "ONE: ${ONE}"
shift
shift
;;
-t|--two)
TWO="${2}"
echo "TWO: ${TWO}"
shift
shift
;;
-f|--other)
OTHER=1
echo "OTHER: ${OTHER}"
shift
;;
*)
echo "UNRECOGNISED: ${1}"
shift
;;
esac
done
output:
ONE: TEST_ONE
UNRECOGNISED: --two TEST_TWO
OTHER: 1
Observe the behavior of the option '--two', which will be unrecognised. It looks like it is being expanded correctly, but is not recognised as being two distinct strings.
Can anyone explain why this is happening? I've seen it written in one source that it will not work with positional parameter arguments, but I'm still not understanding why this behaves as it does.
It is because, when you pass the expansion from 1.sh quoted as "${TWO:+"--two ${TWO}"}", the whole of --two TEST_TWO is passed as one single argument, so the number of arguments seen by 2.sh is 4 instead of 5.
That said, using ${TWO:+--two ${TWO}} unquoted would solve the immediate problem, but it would word-split the content of $TWO if it contains spaces. You need to use arrays.
As a much more recommended and fail-proof approach, use arrays in 1.sh, as below:
argsList=(--one "${ONE}" ${TWO:+--two "${TWO}"} --other)
and pass it along as
./2.sh "${argsList[#]}"
or, if you are familiar with how quoting rules work (how and when to quote to prevent word-splitting from happening), use it directly on the command line as below. This would ensure that the contents of the variables ONE and TWO are preserved even if they contain spaces.
./2.sh \
--one "${ONE}" \
${TWO:+--two "${TWO}"} \
--other
A few recommended guidelines:
Always use lower-case names for user-defined variables, so they are not confused with the environment variables maintained by the shell itself.
Use getopts for more robust argument-flag parsing.
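As a sketch of that getopts suggestion (getopts only understands short options, so --one/--two/--other become -o/-t/-f here):
#!/bin/bash
# 2.sh rewritten with getopts
while getopts ":o:t:f" opt; do
case "$opt" in
o) one="$OPTARG"; echo "ONE: $one" ;;
t) two="$OPTARG"; echo "TWO: $two" ;;
f) other=1; echo "OTHER: $other" ;;
\?) echo "UNRECOGNISED: -$OPTARG" ;;
:) echo "missing argument for -$OPTARG" ;;
esac
done
shift $((OPTIND - 1))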

The 'eval' command in Bash and its typical uses

After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and which would be its typical uses.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using brackets fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefix and suffixes, but you can't have arbitrary bash syntax and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Always put double quotes around variable and command substitutions, unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (The backslash is needed, otherwise it tries to evaluate ${$n}, which means a variable named {$n}, which is not allowed)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
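Two well-known real-world instances of the same pattern are ssh-agent and dircolors, both of which print shell code that is meant to be eval'd:
eval "$(ssh-agent -s)"     # applies the exports for SSH_AUTH_SOCK and SSH_AGENT_PID printed by ssh-agent
eval "$(dircolors -b)"     # applies the LS_COLORS assignment printed by dircolors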
The eval statement tells the shell to take eval's arguments as a command and run that command through the command line. It is useful in a situation like the following:
If, in your script, you store a command in a variable and later on you want to use that command, then you should use eval:
a="ls | more"
$a
Output:
ls: cannot access '|': No such file or directory
ls: cannot access 'more': No such file or directory
The command above didn't work: because of word splitting, ls was run with the arguments | and more, and it complained that no files with those names exist. To run the stored command, use eval:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
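As an aside, for a simple command with no pipe, a bash array avoids eval entirely, and for the piped command above a function is the usual eval-free alternative; a minimal sketch (the names cmd and pager_ls are just illustrative):
cmd=(ls -l /tmp)            # store the command and its arguments as separate array elements
"${cmd[@]}"                 # runs: ls -l /tmp
pager_ls() { ls | more; }   # a function handles the pipe without eval
pager_ls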
Update: Some people say one should -never- use eval. I disagree. I think the risk arises when untrusted input can be passed to eval. However, there are many common situations where that is not a risk, and therefore it is worth knowing how to use eval in any case. This stackoverflow answer explains the risks of eval and the alternatives to eval. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
The bash eval statement allows you to execute lines of code calculated or acquired by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language and the interpreter's parsing order can be nuanced, which makes eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code within the scope of your script, the script will fail at the eval statement as it tries to execute garbage.
When testing, you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
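For instance, a quick sketch of that echo trick, reusing the $n example from earlier:
n=1
set -- one
echo echo \${$n}   # preview of what eval would run: prints "echo ${1}"
eval echo \${$n}   # the real thing: prints "one"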
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval statement has no purpose and can be eliminated. eval is pointless in the first line because there is no dynamic aspect to the code: it already parses into its final form, so it behaves identically to a normal line of bash code in the script. The second eval is pointless too, because, although there is a parsing step converting $a to its literal string value, there is no indirection (e.g. no referencing through the string value of a variable name), so it behaves identically to the same line without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
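For comparison, the same assignment can be done without eval using printf -v, which writes to the variable whose name is given instead of re-parsing a whole command line; a minimal sketch:
key="mykey"
val="myval"
printf -v "$key" '%s' "$val"   # assigns "myval" to the variable named "mykey"
echo "$mykey"                  # myval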
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on bash's parsing order and its peculiarities. The eval line would roughly get parsed in the following order (the statements below are pseudocode, not real code; they only attempt to show how the line is broken down internally to arrive at the final result).
eval eval \$$keyA=\$$valA # substitution of $keyA and $valA by interpreter
eval eval \$keyB=\$valB # convert '$' + name-strings to real vars by eval
eval $keyB=$valB # substitution of $keyB and $valB by interpreter
eval that=amazing # execute string literal 'that=amazing' by eval
If the assumed parsing order doesn't sufficiently explain what eval is doing, the next example walks through the parsing in more detail to help clarify what is going on.
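For reference, the same double indirection can also be done without eval, by combining indirect expansion with declare; a small sketch under the same variable setup:
keyA="keyB"; valA="valB"; keyB="that"; valB="amazing"
declare "${!keyA}=${!valA}"   # ${!keyA} -> "that", ${!valA} -> "amazing"
echo "$that"                  # amazing
Here declare treats the expanded text as a single name=value assignment rather than re-parsing it as shell code.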
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
# loop over the values of the name-holding variables, i.e. the strings "a", "b", "c"
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do
    eval varval=\$$varname
    if [ -z "$varval" ]; then
        read -p "$varname? " "$varname"
    fi
done
In the first iteration:
varname="a"
That is, varname holds the string value of myvarname_a, already resolved by the for-loop. Bash then expands $varname and turns \$ into a literal $, so eval sees literally this at runtime:
varval=$a
The following pseudocode attempts to illustrate how bash interprets the real line of code above to arrive at the final value executed by eval (these lines are descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # This substitution resolved in eval statement
2. .................. "$myvarname_a" # $myvarname_a previously resolved by for-loop
3. .................. "a" # ... to this value
4. eval "varval=$a" # This requires one more parsing step
5. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
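The same check can also be written with indirect expansion instead of eval; a sketch using the same variables as above:
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do
    varval=${!varname}             # indirect expansion replaces the eval
    if [ -z "$varval" ]; then
        read -p "$varname? " "$varname"
    fi
done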
I originally, intentionally, never learned how to use eval, because most people recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but you can use cut to get rid of the junk, wrap the result in a command substitution, and execute the string with eval:
eval $( cut -d ' ' -f 7- /etc/cron.d/repeatme )
The cut command prints fields 7 onward (everything after the five schedule fields and the user column), delimited by spaces. eval then executes that text as a command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating beforehand.
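If the command does not need to affect the current shell's environment, the extracted text can also simply be piped into a new shell instead of being eval'd; a small sketch:
cut -d ' ' -f 7- /etc/cron.d/repeatme | bash   # runs the job in a child shell instead of the current one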
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion when the eval command line itself is parsed, leaving them to be expanded during the extra pass that eval performs inside the command substitution.
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
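For what it's worth, an eval-free alternative is to generate the argument list with explicit loops, keeping the literal _ placeholder for xargs to fill in; a rough sketch:
xargs -I_ cat $(
  for j in {8..5}; do
    for i in {11..15}; do printf '_/%s/%s.jpg ' "$i" "$j"; done   # column-first order
  done
)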
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via eval, you can pass by reference. The callee can pass back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller is allowed to specify the name(s) of the result variable(s) - see the example below. Error results can be passed back in variables with standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash
isint()
{
re='^[-]?[0-9]+$'
[[ $1 =~ $re ]]
}
#args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
if isint ${2} && isint ${3} ; then
echo "$1=$((${2}+${3}));errno=0"
return 0
else
echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
return 1
fi
}
var=1
echo "[1] var=$var"
eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"
eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought', but that's beyond the scope of this post.
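As a side note, bash 4.3 and newer offer namerefs (declare -n / local -n), which pass results back by reference without eval; a minimal sketch reusing the isint helper from above:
iadd_ref()
{
    local -n _result=$1          # nameref: _result is an alias for the caller's variable
    if isint "$2" && isint "$3" ; then
        _result=$(( $2 + $3 ))
        return 0
    fi
    return 1
}
var=1
iadd_ref var "$var" 1 && echo "var=$var"   # prints: var=2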
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that the files a and tty do not exist. I understood this to mean that tty was not being interpreted before the execution of grep; instead, bash passed tty to grep as a parameter, and grep interpreted it as a file name.
There is also a situation of nested redirection, which ought to be handled by matching parentheses specifying a child process; but at its core bash is a word separator, creating parameters to be sent to a program, so parentheses are not matched first, but interpreted as seen.
I got specific with grep, and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing output from a command as a file, so that i/o piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner. Instead, bash is an old 1, 2, 3 program that has had features appended to it. For the purpose of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to match parentheses first, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to go through your script step by step (pressing Enter for each step). You can do this by trapping DEBUG (a pseudo-signal that bash fires before each simple command) and using eval to run whatever you type at the prompt before the script continues:
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
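A self-contained sketch of that idea (assuming the script is run from an interactive terminal; pressing Enter at each pause just continues):
#!/bin/bash
# Step through the script: before each command, show the line number and
# eval whatever the user types (an empty line simply continues).
trap 'printf "%s :-> " "$LINENO"; read line; eval "$line"' DEBUG
x=1
x=$((x + 1))
echo "x=$x"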
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
echo " > Count: $#"
echo " > Pars : $*"
echo " > par1 : $1"
echo " > par2 : $2"
if [[ $# = 1 && $1 = "par1 par2" ]]; then
echo " > PASS"
else
echo " > FAIL"
return 1
fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval fixes it.
(This example was adapted from another answer, on How can I reference a file for variables using Bash?)
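As an aside, the same effect can be achieved without eval by keeping the parameter list in an array instead of a quoted string; a small sketch:
arr=("par1 par2")     # one array element containing an embedded space
cntpars "${arr[@]}"   # passes exactly one parameter -> PASS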
