Bash call variables in for loop variables - linux

I am curious whether it is possible in bash to run a for loop over a bunch of variable names and access their values within the loop. Example:
a="hello"
b="world"
c="this is bash"
for f in a b c; do {
echo $( $f )
OR
echo $ ( "$f" )
} done
I know this is not working, but can we access the values saved in the a, b and c variables from within the for loop by printing f? I have tried multiple ways but was unable to resolve it.

You need the ! like this:
for f in a b c; do
echo "${!f}"
done
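For completeness, a self-contained version with the OP's variables (`${!f}` is indirect expansion: it substitutes the value of the variable whose name is stored in `f`):

```shell
#!/usr/bin/env bash
a="hello"
b="world"
c="this is bash"
for f in a b c; do
  # ${!f} expands to the value of the variable named by $f
  # (a, then b, then c)
  echo "${!f}"
done
```

This prints `hello`, `world`, and `this is bash` on separate lines.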

You can also use a nameref:
#!/usr/bin/env bash
a="hello"
b="world"
c="this is bash"
declare -n f
for f in a b c; do
printf "%s\n" "$f"
done
From the documentation:
If the control variable in a for loop has the nameref attribute, the list of words can be a list of shell variables, and a name reference will be established for each word in the list, in turn, when the loop is executed.

Notes on the OP's code (scroll to bottom for a corrected version):
for f in a b c; do {
echo $( $f )
} done
Problems:
The purpose of { and } is usually to combine the separate outputs of
separate unpiped commands into one stream. Example of separate
commands:
echo foo; echo bar | tac
Output:
foo
bar
The tac command puts lines of input in reverse order, but in the
code above it only gets one line, so there's nothing to reverse.
But with curly braces:
{ echo foo; echo bar; } | tac
Output:
bar
foo
A do ... done already acts just like curly braces.
So "do {" instead of "do" is redundant; it won't harm anything, but it
has no effect either.
If f=hello and we write:
echo $f
The output will be:
hello
But the code $( $f ) runs $f in a command substitution (a subshell),
which only works if the value of $f is a command. So:
echo $( $f )
...tries to run the command hello, but there probably is no such
command, so the subshell will output to standard error:
hello: command not found
...but no data is sent to standard output, so echo will
print nothing.
To fix:
a="hello"
b="world"
c="this is bash"
for f in "$a" "$b" "$c"; do
echo "$f"
done

Related

Bash: find function in files with the same content

I'm trying to solve a problem that behaves as follows.
Let me describe the situation:
In a directory, I have a few scripts with some content (it doesn't matter what they do)
example1.sh
example2.sh
example3.sh
...etc
Altogether there are 50 scripts
Some of these scripts contain the same function, for example
function foo1
{
echo "Hello"
}
and in some scripts function can be named the same but has other content or modified, for example
function foo1
{
echo "$PWD"
}
or
function foo1
{
echo "Hello"
ls -la
}
I have to find the same function with the same name and the same content in these scripts
For example,
foo1 the same or modified content in example1.sh and example2.sh -> what I want
foo1 other content in example1.sh and example3.sh -> not interested
My question is what is the best idea to solve this problem? What do you think?
My idea was to sort the content from all the scripts and grep the names of repeated functions. I managed to do that, but it's still not what I want, because I then have to check every file containing a given function and compare its content... and that's a pain in the neck because some functions appear in 10 scripts...
I was wondering about extracting content from repeated functions but I don't know how to do it, what do you think? Or maybe you have some other suggestions?
Thank you in advance for your answer!
what is the best idea to solve this problem?
Write a shell language tokenizer and implement enough syntax parsing to extract function definitions from a file. The sources of existing shell implementations can serve as inspiration. Then build a database of file -> function+body and list all files with the same function+body.
For simple enough functions, an awk or perl or python script would be enough to cover most cases. But the best would be a full shell language tokenizer.
Do not use function name {. Instead use name() {. See bash obsolete and deprecated syntax.
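For instance, the OP's example function in the recommended portable form would be:

```shell
# Same function, written with the portable name() { ... } syntax
foo1() {
  echo "Hello"
}
```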
With the following files:
# file1.sh
function foo1
{
echo "Hello"
}
# file2.sh
function foo1
{
echo "Hello"
}
# file3.sh
function foo1
{
echo "$PWD"
}
# file4.sh
function foo1
{
echo "$PWD"
}
The following script:
printf "%s\n" *.sh |
while IFS= read -r file; do
sed -zE '
s/(function[[:space:]]+([[:print:]]+)[[:space:]]*\{|(function[[:space:]]+)?([[:print:]]+)[[:space:]]*\([[:space:]]*\)[[:space:]]*\{)([^}]*)}/\x01\2\4\n\5\x02/g;
/\x01/!d;
s/[^\x01\x02]*\x01([^\x01\x02]*)\x02[^\x01\x02]*/\1\n\x00/g
' "$file" |
sed -z 's~^~'"$file"'\x01~';
done |
awk -v RS='\0' -v FS='\1' '
{cnt[$2]++; a[$2]=a[$2]" "$1}
END{ for (i in cnt) if (cnt[i] > 1) print a[i], i }
'
outputs:
file1.sh file2.sh foo1
echo "Hello"
file3.sh file4.sh foo1
echo "$PWD"
Indicating there is the same function foo1 in file1.sh and file2.sh and the same function foo1 in file3.sh and file4.sh.
Also note that a script can do things like:
if condition; then
func() { echo something; }
else
func() { echo something else; }
fi
A real tokenizer will have to also take that into account.
Create a message digest of the content of each function and use it as a key in an associative array. Add files that contain the same function digest to find duplicates.
You may want to normalize space in the function content and tweak the regex address range.
#!/usr/bin/env bash
# the 1st argument is the function name
func_name="$1"
func_pattern="^function $func_name[[:blank:]]*$"
shift
declare -A dupe_groups
while read -r _ func_dgst file; do # collect results; the first field is openssl's "(stdin)=" tag
dupe_groups[$func_dgst]+="$file "
done < <( # the remaining arguments are scripts
for f in "${@}"; do
if grep --quiet "$func_pattern" "$f"; then
dgst=$( # use an address range in sed to print function contents
sed -n "/$func_pattern/,/^}/p" "$f" | \
# pipe to openssl to create a message digest
openssl dgst -sha1 )
echo "$dgst $f"
fi
done )
# print the results
for key in "${!dupe_groups[@]}"; do
echo "$key ${dupe_groups[$key]}"
done
I tested with your example{1..3}.sh files and added the following example4.sh for a duplicate function.
example4.sh
function foo1
{
echo "Hello"
ls -la
}
function another
{
echo "there"
}
To run
./group-func.sh foo1 example1.sh example2.sh example3.sh example4.sh
Results
155853f813e944a7fcc5ae73ee2d959e300d217a example1.sh
7848af9b8b9d48c5cb643f34b3e5ca26cb5bfbdd example2.sh
4771de27523a765bb0dbf070691ea1cbae841375 example3.sh example4.sh
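The whitespace normalization mentioned above could look something like this (a sketch; `func_digest` is a hypothetical helper, and `cksum` stands in for `openssl dgst -sha1` purely to keep the example dependency-free):

```shell
#!/usr/bin/env bash
# Hypothetical helper: hash a function body after collapsing every run
# of whitespace to a single space, so re-indented copies still match.
func_digest() {
  local file=$1 name=$2
  sed -n "/^function $name[[:blank:]]*$/,/^}/p" "$file" |
    tr -s '[:space:]' ' ' |  # normalize whitespace
    cksum                    # stand-in for: openssl dgst -sha1
}
```

With this, two files whose `foo1` bodies differ only in indentation produce the same digest.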

Define bash variable to be evaluated every time it is used

I want to define bash a variable which will be evaluated every time it is used.
My goal is to define two variables:
A=/home/userA
B=$A/my_file
So whenever I update A, B will be updated with the new value of A
I know how to do it in prompt variables, but, is there a way to do it for regular variables?
If you have Bash 4.4 or newer, you could (ab)use the ${parameter@P} parameter expansion, which expands parameter as if it were a prompt string:
$ A='/home/userA'
$ B='$A/my_file' # Single quotes to suppress expansion
$ echo "${B@P}"
/home/userA/my_file
$ A='/other/path'
$ echo "${B@P}"
/other/path/my_file
However, as pointed out in the comments, it's much simpler and more portable to use a function instead:
$ appendfile() { printf '%s/%s\n' "$1" 'my_file'; }
$ A='/home/user'
$ B=$(appendfile "$A")
$ echo "$B"
/home/user/my_file
$ A='/other/path'
$ B=$(appendfile "$A")
$ echo "$B"
/other/path/my_file
No. Use a simple and robust function instead:
b() {
echo "$a/my_file"
}
a="/home/userA"
echo "b outputs $(b)"
a="/foo/bar"
echo "b outputs $(b)"
Result:
b outputs /home/userA/my_file
b outputs /foo/bar/my_file
That said, here's one ugly way of fighting the system to accomplish your goal verbatim:
# Trigger a re-assignment after every single command
trap 'b="$a/my_file"' DEBUG
a="/home/userA"
echo "b is $b"
a="/foo/bar"
echo "b is $b"
Result:
b is /home/userA/my_file
b is /foo/bar/my_file

How to parse a string which contains spaces as an argument in Bash Script [duplicate]

In bash one can escape arguments that contain whitespace.
foo "a string"
This also works for arguments to a command or function:
bar() {
foo "$@"
}
bar "a string"
So far so good, but what if I want to manipulate the arguments before calling foo?
This does not work:
bar() {
for arg in "$@"
do
args="$args \"prefix $arg\""
done
# Everything looks good ...
echo $args
# ... but it isn't.
foo $args
# foo "$args" would just be silly
}
bar a b c
So how do you build argument lists when the arguments contain whitespace?
There are (at least) two ways to do this:
Use an array and expand it using "${array[@]}":
bar() {
local i=0 args=()
for arg in "$@"
do
args[$i]="prefix $arg"
((++i))
done
foo "${args[@]}"
}
So, what have we learned? "${array[@]}" is to ${array[*]} what "$@" is to $*.
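A quick way to see that difference (a sketch; `printf` prints its format once per remaining argument):

```shell
#!/usr/bin/env bash
arr=("a b" "c")
printf '<%s>\n' "${arr[@]}"  # two arguments: <a b> then <c>
printf '<%s>\n' "${arr[*]}"  # one argument:  <a b c>
```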
Or if you do not want to use arrays you need to use eval:
bar() {
local args=()
for arg in "$@"
do
args="$args \"prefix $arg\""
done
eval foo $args
}
Here is a shorter version which does not require the use of a numeric index:
(example: building arguments to a find command)
dir=$1
shift
for f in "$@" ; do
args+=(-iname "*$f*")
done
find "$dir" "${args[@]}"
Use arrays (one of the hidden features in Bash).
You can use the arrays just as you suggest, with a small detail changed. The line calling foo should read
foo "${args[@]}"
I had a problem with this as well. I was writing a bash script to back up the important files on my Windows computer (cygwin). I tried the array approach too, and still had some issues. I'm not sure exactly how I fixed it, but here are the parts of my code that are important, in case they help you.
WORK="d:\Work Documents\*"
# prompt and 7zip each file
for x in $SVN $WEB1 $WEB2 "$WORK" $GRAPHICS $W_SQL
do
echo "Add $x to archive? (y/n)"
read DO
if [ "$DO" == "y" ]; then
echo "compressing $x"
7zip a $W_OUTPUT "$x"
fi
echo ""
done

How to get the content of a function in a string using bash?

I have already searched about this particular problem, but couldn't find anything helpful.
Let's assume I have following functions defined in my ~/.bashrc (Note: this is pseudo-code!):
ANDROID_PLATFORM_ROOT="/home/simao/xos/src/"
function getPlatformPath() {
echo "$ANDROID_PLATFORM_ROOT"
}
function addCaf() {
# Me doing stuff
echo "blah/$(getPlatformPath)"
}
function addAosp() {
# Me doing stuff
echo "aosp/$(getPlatformPath)"
}
function addXos() {
# Me doing stuff
echo "xos/$(getPlatformPath)"
}
function addAllAll() {
cd $(gettop)
# repo forall -c "addCaf; addAosp; addXos" # Does not work!
repo forall -c # Here is where I need all those commands
}
My problem:
I need to get the functions addCaf, addAosp and addXos in one single line.
Like you can run following in bash (pseudo code):
dothis; dothat; doanotherthing; trythis && succeedsdothis || nosuccessdothis; blah
I would like to run all commands inside the three functions addCaf, addAosp and addXos in just one line.
Any help is appreciated.
What I already tried:
repo forall -c "bash -c \"source ~/.bashrc; addAllAll\""
But that didn't work as well.
Edit:
To clarify what I mean.
I want something like that as a result:
repo forall -c 'function getPlatformPath() { echo "$ANDROID_PLATFORM_ROOT"; }; ANDROID_PLATFORM_ROOT="/home/simao/xos/src/"; echo "blah/$(getPlatformPath)"; echo "aosp/$(getPlatformPath)"; echo "xos/$(getPlatformPath)"'
But I don't want to write that manually. Instead, I want to get those lines from the functions that already exist.
You can use type and then parse its output to do whatever you want to do with the code lines.
$ foo() {
> echo foo
> }
$ type foo
foo is a function
foo ()
{
echo foo
}
Perhaps this example makes things more clear:
#!/bin/bash
foo() {
echo "foo"
}
bar() {
echo "bar"
}
export IFS=$'\n'
for f in foo bar; do
for i in $(type $f | head -n-1 | tail -n+4); do
eval $i
done
done
exit 0
This is how it looks:
$ ./funcs.sh
foo
bar
What the script does is first loop over all the functions you have (in this case only foo and bar). For each function, it loops over the code lines of that function (skipping the useless lines from type's output) and executes them. So in the end it's the same as having this code...
echo "foo"
echo "bar"
...which are exactly the code lines inside the functions, and you are executing them one after the other.
Note that you could also build a string variable containing all the code lines separated by ; if instead of running eval on every line you do something like this:
code_lines=
for f in foo bar; do
for i in $(type $f | head -n-1 | tail -n+4); do
if [ -z "$code_lines" ]; then
code_lines="$i"
else
code_lines="${code_lines}; $i"
fi
done
done
eval $code_lines
Assuming that repo forall -c interprets the next positional argument just as bash -c, try:
foo () {
echo "foo!"
}
boo () {
if true; then
echo "boo!"
fi
}
echo works | bash -c "source "<(typeset -f foo boo)"; foo; boo; cat"
Note:
The difference from the original version is that this no longer interferes with stdin.
The <(...) substitution is unescaped because it must be performed by the original shell, the one where foo and boo are first defined. Its output will be a string of the form /dev/fd/63, which is a file descriptor that is passed open to the second shell, and which contains the forwarded definitions.
Make a dummy function foo(), which just prints "bar":
foo() { echo bar ; }
Now a bash function to print what's in one (or more) functions. Since the contents of a function are indented with 4 spaces, sed removes any lines without 4 leading spaces, then removes the leading spaces as well, and adds a ';' at the end of each function:
# Usage: in_func <function_name1> [ ...<function_name2> ... ]
in_func()
{ while [ "$1" ] ; do \
type $1 | sed -n '/^    /{s/^    //p}' | sed '$s/.*/&;/' ; shift ; \
done ; }
Print what's in foo():
in_func foo
Output:
echo bar;
Assign what's in foo() to the string $baz, then print $baz:
baz="`in_func foo`" ; echo $baz
Output:
echo bar;
Run what's in foo():
eval "$baz"
Output:
bar
Assign what's in foo() to $baz three times, and run it:
baz="`in_func foo foo foo`" ; eval "$baz"
Output:
bar
bar
bar
Shell functions aren't visible to child processes unless they're exported. Perhaps that is the missing ingredient.
export -f addCaf addAosp addXos
repo forall -c "addCaf; addAosp; addXos"
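The effect of `export -f` is easy to check without `repo` (a minimal sketch with a hypothetical `greet` function):

```shell
#!/usr/bin/env bash
greet() { echo "hi from a function"; }

# Without export, a child bash can't see the function
bash -c 'greet' 2>/dev/null || echo "greet: not visible in child"

# After export -f, the definition is passed to children via the environment
export -f greet
bash -c 'greet'  # prints: hi from a function
```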
This should work:
repo forall -c "$(addCaf) $(addAosp) $(addXos)"

How do I indirectly assign a variable in bash to take multi-line data from both Standard In, a File, and the output of execution

I have found many snippets here and in other places that answer parts of this question. I have even managed to do this in many steps in an inefficient manner. If it is possible, I would really like to find single lines of execution that will perform this task, rather than having to assign to a variable and copy it a few times to perform the task.
e.g.
executeToVar ()
{
# Takes Arg1: NAME OF VARIABLE TO STORE IN
# All Remaining Arguments Are Executed
local STORE_INvar="${1}" ; shift
eval ${STORE_INvar}=\""$( "$@" 2>&1 )"\"
}
Overall this does work; e.g. $ executeToVar SOME_VAR ls -l * will actually fill SOME_VAR with the output of the ls -l * command taken from the remaining arguments. However, if the command outputs empty lines at the end (for e.g. echo -e -n '\n\n123\n456\n789\n\n', which should have two newlines at the start and the end), these are stripped by bash's command substitution. I have seen in other posts similar to this that it has been solved by adding a token 'x' to the end of the stream, e.g. turning the substitution into something like:
eval ${STORE_INvar}=\""$( "$@" 2>&1 ; echo -n x )"\" # <-- ( Add echo -n x )
# and then if it wasn't an indirect reference to a var:
STORE_INvar=${STORE_INvar%x}
# However no matter how much I play with:
eval "${STORE_INvar}"=\""${STORE_INvar%x}"\"
# I am unable to indirectly remove the x from the end.
Anyway, I also need two other variants of this: one that assigns the STDIN stream to the var, and one that assigns the contents of a file to the var, which I assume will be variations involving $( cat ${1} ), or maybe $( cat ${1:--} ) to give me a '-' if no filename. But none of that will work until I can sort out the removal of the x that is needed to ensure accurate assignment of multi-line variables.
I have also tried (but to no avail):
IFS='' read -d '' "${STORE_INvar}" <<<"$( $@ ; echo -n x )"
eval \"'${STORE_INvar}=${!STORE_INvar%x}'\"
This is close to optimal -- but drop the eval.
executeToVar() { local varName=$1; shift; printf -v "$varName" %s "$("$@")"; }
The one problem this formulation still has is that $() strips trailing newlines. If you want to prevent that, you need to add your own trailing character inside the subshell, and strip it off yourself.
executeToVar() {
local varName=$1; shift;
local val="$(printf %s x; "$@"; printf %s x)"; val=${val#x}
printf -v "$varName" %s "${val%x}"
}
If you want to read all content from stdin into a variable, this is particularly easy:
# This requires bash 4.1 for automatic fd allocation
readToVar() {
if [[ $2 && $2 != "-" ]]; then
exec {read_in_fd}<"$2" # copy from named file
else
exec {read_in_fd}<&0 # copy from stdin
fi
IFS= read -r -d '' "$1" <&$read_in_fd # read from the FD
exec {read_in_fd}<&- # close that FD
}
...used as:
readToVar var < <( : "run something here to read its output byte-for-byte" )
...or...
readToVar var filename
Testing these:
bash3-3.2$ executeToVar var printf '\n\n123\n456\n789\n\n'
bash3-3.2$ declare -p var
declare -- var="
123
456
789
"
...and...
bash4-4.3$ readToVar var2 < <(printf '\n\n123\n456\n789\n\n')
bash4-4.3$ declare -p var2
declare -- var2="
123
456
789
"
What's wrong with storing it in a file?
$ stuffToFile filename $(stuff)
where "stuffToFile" tests for (a) more than one argument, (b) input on a pipe
$ ... commands ... | stuffToFile filename
and
$ stuffToFile filename < another_file
where "stuffToFile" is a function:
function stuffToFile
{
[[ -f $1 ]] || { echo $1 is not a file; return 1; }
[[ $# -lt 2 ]] && { cat - > $1; return; }
echo "$*" > $1
}
so, if "stuff" has leading and trailing blank lines, then you must:
$ stuff | stuffToFile filename
