ksh script + print argument content in shell script - linux

I want to run script.sh with one argument.
If the first argument is action, then script.sh should print the action parameter - restart machine each 1 min.
My example does not work; please advise what needs fixing in the script so that it prints the $action parameter when the argument is action.
Remark: I do not want to use the following solution - [[ $1 = action ]] && echo action "restart machine each 1 min"
My example script:
#!/bin/ksh
action="restart machine each 1 min"
echo "action" ${$1}
Example how to run the script
./script.sh action
Expected results that I need to get :
action restart machine each 1 min

Well with pdksh this works:
echo "action" `eval echo '$'$1`

You want to use eval:
action="restart machine each 1 min"
eval echo $1 \$$1
Note that doing something like this is a huge security risk. Consider what happens if the user invokes the script with the first argument "; rm -rf /"
You can probably alleviate such problems with:
eval "echo '$1' \"\$$1\""
but really you're just asking for trouble (This last version will struggle if the first argument contains a double-quote, and a $() construct will permit an arbitrary command to be executed). It is much safer to simply use a case statement and check that the argument matches exactly a string that you are looking for. Or, at least check that the argument you are eval'ing does not contain any of the following characters: ;()$"'. It's probably safest to check that it only contains alphanumerics (a-zA-Z0-9)
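A minimal sketch of that case-statement approach, reusing the variable from the question (the error branch is my own addition, not part of the original):
#!/bin/ksh
action="restart machine each 1 min"
case $1 in
action) echo "action $action" ;;
*) echo "unrecognised argument: $1" >&2; exit 1 ;;
esac
Running ./script.sh action prints the expected action restart machine each 1 min, and nothing the caller passes is ever re-parsed as code.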

It's been two years, but here's an example of using nameref (a.k.a. typeset -n).
It includes three consecutive tests for validity of the given argument.
Is an argument given?
Does the argument match a known variable? nameref checks this.
Does the target variable have a value set?
action='This is the value of $action'
word='This is the value of ${word}'
list='This is a list'
lie='This is a lie'
(
typeset name=${1:?Usage: script.sh varname} || exit # 1. an argument was given
nameref arg1=${name} || exit # 2. it names a known variable
: ${arg1:?} || exit # 3. the target variable has a value set
echo "$name $arg1"
)
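Assuming the block above is saved as script.sh, a session might look like this (the exact failure message for the third check varies by ksh version, so treat it as illustrative):
$ ./script.sh action
action This is the value of $action
$ ./script.sh nosuchvar
(the ${arg1:?} check fails and the subshell exits)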

Related

How to pass argument in the custom bash function as part of a dir path?

I want to define a custom bash function, which gets an argument as a part of a dir path.
I'm new to bash scripts. The code examples provided online are somewhat confusing to me, or don't work properly.
For example, the expected bash script looks like:
function my_copy() {
sudo cp ~/workspace/{$1} ~/tmp/{$2}
}
If I type my_copy a b,
then I expect the function executes sudo cp ~/workspace/a ~/tmp/b
in the terminal.
Thanks in advance.
If you have the below function in, say, a copy.sh file, and you source it (source copy.sh or . copy.sh), then the function call my_copy will work as expected.
$1 and $2 are positional parameters.
i.e. when you call my_copy a b, $1 will have the first command-line argument, a, as its value, and $2, the second command-line argument, will have the value b. The function will work as expected.
Also, you have a logical error in the function: you have written {$1} instead of ${1}. It will expand to {a} instead of a, and cp will throw an error that says cp: cannot stat '~/workspace/{a}': No such file or directory when you run it.
Additionally, braces are only required when a positional parameter's number is greater than 9; otherwise you can omit them, e.g. ${10} instead of $10.
function my_copy() {
sudo cp ~/workspace/$1 ~/tmp/$2
}
So above function will execute the statement sudo cp ~/workspace/a ~/tmp/b as expected.
To understand the concept, you can try echo $1, echo ${1}, echo {$1}, echo {$2}, echo ${2} and echo $2 inside the script to see the resulting values. See the shell documentation on the special $ sign variables for more.
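For instance, a hypothetical demo.sh containing those echo lines, invoked as ./demo.sh a b, illustrates each form (results shown as comments):
echo $1 # a
echo ${1} # a - identical; the braces just delimit the parameter name
echo {$1} # {a} - here the braces are literal text
echo ${2} # b
echo {$2} # {b}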
There is a syntax error in your code: you don't reference a variable like {$foo}. If $1 is a and $2 is b, then you execute
sudo cp ~/workspace/{$1} ~/tmp/{$2}
BASH is going to replace $1 with a and $2 with b, so, BASH is going to execute
sudo cp ~/workspace/{a} ~/tmp/{b}
That means that cp is going to fail, because there is no file with a name like {a}.
There are several ways to reference a variable:
echo $foo
echo ${foo}
echo "$foo"
echo "${foo}"
Otherwise, your code looks good and should work.
Take a look at these links, first and second; it's really important to quote your variables. If you want more information about BASH or can't sleep at night, try the official manual; it has everything you need to know about BASH, and it's a good soporific too ;)
PS: I know $1, $2, etc. are positional parameters; I called them variables because you treat them as variables, and my answer applies to both.

Strange behavior with parameter expansion in program arguments

I'm trying to conditionally pass an argument to a bash script only if it has been set in the calling script and I've noticed some odd behavior.
I'm using parameter expansion to facilitate this, outputting an option only if the corresponding variable is set. The aim is to pass an argument from a 'parent' script to a 'child' script.
Consider the following example:
The calling script:
#!/bin/bash
# 1.sh
ONE="TEST_ONE"
TWO="TEST_TWO"
./2.sh \
--one "${ONE}" \
"${TWO:+"--two ${TWO}"}" \
--other
and the called script:
#!/bin/bash
# 2.sh
while [[ $# -gt 0 ]]; do
key="${1}"
case $key in
-o|--one)
ONE="${2}"
echo "ONE: ${ONE}"
shift
shift
;;
-t|--two)
TWO="${2}"
echo "TWO: ${TWO}"
shift
shift
;;
-f|--other)
OTHER=1
echo "OTHER: ${OTHER}"
shift
;;
*)
echo "UNRECOGNISED: ${1}"
shift
;;
esac
done
output:
ONE: TEST_ONE
UNRECOGNISED: --two TEST_TWO
OTHER: 1
Observe the behavior of the option '--two', which will be unrecognised. It looks like it is being expanded correctly, but is not recognised as being two distinct strings.
Can anyone explain why this is happening? I've seen it written in one source that it will not work with positional parameter arguments, but I'm still not understanding why this behaves as it does.
It is because, when you pass the result of the parameter expansion from 1.sh, you are quoting it in a way that makes --two TEST_TWO evaluate as one single argument, so the number of arguments seen by 2.sh is 4 instead of 5.
That said, writing it unquoted as ${TWO:+--two ${TWO}} would solve the problem, but it would word-split the content of $TWO if it contained spaces. You need to use arrays.
As a much more recommended and fail-proof approach use arrays as below on 1.sh as
argsList=(--one "${ONE}" ${TWO:+--two "${TWO}"} --other)
and pass it along as
./2.sh "${argsList[@]}"
or, if you are familiar with how quoting rules work (how and when to quote to prevent word-splitting), use it directly on the command line as below. This ensures that the contents of the variables ONE and TWO are preserved even if they contain spaces.
./2.sh \
--one "${ONE}" \
${TWO:+--two "${TWO}"} \
--other
A few recommended guidelines:
Always use lower-case variable names for user-defined variables, so as not to confuse them with the environment variables maintained by the shell itself.
Use getopts for more robust argument-flag parsing; see the sketch below.
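For illustration, here is a minimal getopts sketch of 2.sh. getopts only understands short options, so the long flags --one/--two/--other become -o/-t/-f here (the option letters are my assumption, not from the original script):
#!/bin/bash
# 2.sh rewritten with getopts; short options only
one='' two='' other=0
while getopts 'o:t:f' opt; do
case $opt in
o) one=$OPTARG ;;
t) two=$OPTARG ;;
f) other=1 ;;
*) echo "usage: $0 [-o one] [-t two] [-f]" >&2; exit 1 ;;
esac
done
echo "one=$one two=$two other=$other"
Called as ./2.sh -o "$ONE" ${TWO:+-t "$TWO"} -f, it recovers all three values without any manual shift bookkeeping.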

The 'eval' command in Bash and its typical uses

After reading the Bash man pages and with respect to this post, I am still having trouble understanding what exactly the eval command does and which would be its typical uses.
For example, if we do:
$ set -- one two three # Sets $1 $2 $3
$ echo $1
one
$ n=1
$ echo ${$n} ## First attempt to echo $1 using brackets fails
bash: ${$n}: bad substitution
$ echo $($n) ## Second attempt to echo $1 using parentheses fails
bash: 1: command not found
$ eval echo \${$n} ## Third attempt to echo $1 using 'eval' succeeds
one
What exactly is happening here and how do the dollar sign and the backslash tie into the problem?
eval takes a string as its argument, and evaluates it as if you'd typed that string on a command line. (If you pass several arguments, they are first joined with spaces between them.)
${$n} is a syntax error in bash. Inside the braces, you can only have a variable name, with some possible prefix and suffixes, but you can't have arbitrary bash syntax and in particular you can't use variable expansion. There is a way of saying “the value of the variable whose name is in this variable”, though:
echo ${!n}
one
$(…) runs the command specified inside the parentheses in a subshell (i.e. in a separate process that inherits all settings such as variable values from the current shell), and gathers its output. So echo $($n) runs $n as a shell command, and displays its output. Since $n evaluates to 1, $($n) attempts to run the command 1, which does not exist.
eval echo \${$n} runs the parameters passed to eval. After expansion, the parameters are echo and ${1}. So eval echo \${$n} runs the command echo ${1}.
Note that most of the time, you must use double quotes around variable substitutions and command substitutions (i.e. anytime there's a $): "$foo", "$(foo)". Always put double quotes around variable and command substitutions, unless you know you need to leave them off. Without the double quotes, the shell performs field splitting (i.e. it splits value of the variable or the output from the command into separate words) and then treats each word as a wildcard pattern. For example:
$ ls
file1 file2 otherfile
$ set -- 'f* *'
$ echo "$1"
f* *
$ echo $1
file1 file2 file1 file2 otherfile
$ n=1
$ eval echo \${$n}
file1 file2 file1 file2 otherfile
$ eval echo \"\${$n}\"
f* *
$ echo "${!n}"
f* *
eval is not used very often. In some shells, the most common use is to obtain the value of a variable whose name is not known until runtime. In bash, this is not necessary thanks to the ${!VAR} syntax. eval is still useful when you need to construct a longer command containing operators, reserved words, etc.
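As a small illustration of that last point (the path /tmp/out is arbitrary), an operator stored in a string stays a literal word until eval re-parses it:
redir='> /tmp/out'
ls $redir # runs ls with two ordinary arguments, '>' and '/tmp/out'
eval "ls $redir" # re-parsed as shell code, so the output really is redirected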
Simply think of eval as "evaluating your expression one additional time before execution"
eval echo \${$n} becomes echo $1 after the first round of evaluation. Three changes to notice:
The \$ became $ (the backslash is needed; without it, bash would try to expand ${$n} itself, which is a bad substitution)
$n was evaluated to 1
The eval disappeared
In the second round, it is basically echo $1 which can be directly executed.
So eval <some command> will first evaluate <some command> (by evaluate here I mean substitute variables, replace escaped characters with the correct ones etc.), and then run the resultant expression once again.
eval is used when you want to dynamically create variables, or to read outputs from programs specifically designed to be read like this. See Eval command and security issues for examples. The link also contains some typical ways in which eval is used, and the risks associated with it.
In my experience, a "typical" use of eval is for running commands that generate shell commands to set environment variables.
Perhaps you have a system that uses a collection of environment variables, and you have a script or program that determines which ones should be set and their values. Whenever you run a script or program, it runs in a forked process, so anything it does directly to environment variables is lost when it exits. But that script or program can send the export commands to standard output.
Without eval, you would need to redirect standard output to a temporary file, source the temporary file, and then delete it. With eval, you can just:
eval "$(script-or-program)"
Note the quotes are important. Take this (contrived) example:
# activate.sh
echo 'I got activated!'
# test.py
print("export foo=bar/baz/womp")
print(". activate.sh")
$ eval $(python test.py)
bash: export: `.': not a valid identifier
bash: export: `activate.sh': not a valid identifier
$ eval "$(python test.py)"
I got activated!
The eval statement tells the shell to take eval's arguments as a command and run them through the command line. It is useful in a situation like the one below:
In your script, if you define a command in a variable and later on want to use that command, then you should use eval:
a="ls | more"
$a
Output:
bash: command not found: ls | more
The above command didn't work, as ls tried to list files named | (pipe) and more, but those files are not there:
eval $a
Output:
file.txt
mailids
remote_cmd.sh
sample.txt
tmp
Update: Some people say one should never use eval. I disagree. I think the risk arises when corrupt input can be passed to eval. However there are many common situations where that is not a risk, and therefore it is worth knowing how to use eval in any case. This stackoverflow answer explains the risks of eval and alternatives to eval. Ultimately it is up to the user to determine if/when eval is safe and efficient to use.
The bash eval statement allows you to execute lines of code calculated or acquired, by your bash script.
Perhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order. That's essentially the same behavior as the bash source statement, which is what one would use, unless it was necessary to perform some kind of transformation (e.g. filtering or substitution) on the content of the imported script.
I rarely have needed eval, but I have found it useful to read or write variables whose names were contained in strings assigned to other variables. For example, to perform actions on sets of variables, while keeping the code footprint small and avoiding redundancy.
eval is conceptually simple. However, the strict syntax of the bash language, and the bash interpreter's parsing order can be nuanced and make eval appear cryptic and difficult to use or understand. Here are the essentials:
The argument passed to eval is a string expression that is calculated at runtime. eval will execute the final parsed result of its argument as an actual line of code in your script.
Syntax and parsing order are stringent. If the result isn't an executable line of bash code, in scope of your script, the program will crash on the eval statement as it tries to execute garbage.
When testing you can replace the eval statement with echo and look at what is displayed. If it is legitimate code in the current context, running it through eval will work.
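Applying that tip to the question's own example:
$ n=1; set -- one two three
$ echo echo \${$n} # preview of what eval would execute
echo ${1}
$ eval echo \${$n} # the preview looked right, so run it
one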
The following examples may help clarify how eval works...
Example 1:
eval statement in front of 'normal' code is a NOP
$ eval a=b
$ eval echo $a
b
In the above example, the first eval statement has no purpose and can be eliminated. eval is pointless in the first line because there is no dynamic aspect to the code: it already parses to its final form, so it behaves identically to a normal statement in the bash script. The second eval is pointless too because, although there is a parsing step converting $a to its literal string equivalent, there is no indirection (e.g. no referencing via the string value of an actual bash variable), so it behaves identically to the same line of code without the eval prefix.
Example 2:
Perform var assignment using var names passed as string values.
$ key="mykey"
$ val="myval"
$ eval $key=$val
$ echo $mykey
myval
If you were to echo $key=$val, the output would be:
mykey=myval
That, being the final result of string parsing, is what will be executed by eval, hence the result of the echo statement at the end...
Example 3:
Adding more indirection to Example 2
$ keyA="keyB"
$ valA="valB"
$ keyB="that"
$ valB="amazing"
$ eval eval \$$keyA=\$$valA
$ echo $that
amazing
The above is a bit more complicated than the previous example, relying more heavily on the parsing-order and peculiarities of bash. The eval line would roughly get parsed internally in the following order (note the following statements are pseudocode, not real code, just to attempt to show how the statement would get broken down into steps internally to arrive at the final result).
eval eval \$$keyA=\$$valA # substitution of $keyA and $valA by interpreter
eval eval \$keyB=\$valB # convert '$' + name-strings to real vars by eval
eval $keyB=$valB # substitution of $keyB and $valB by interpreter
eval that=amazing # execute string literal 'that=amazing' by eval
If the assumed parsing order doesn't explain what eval is doing enough, the third example may describe the parsing in more detail to help clarify what is going on.
Example 4:
Discover whether vars, whose names are contained in strings, themselves contain string values.
a="User-provided"
b="Another user-provided optional value"
c=""
myvarname_a="a"
myvarname_b="b"
myvarname_c="c"
for varname in "$myvarname_a" "$myvarname_b" "$myvarname_c"; do # note the $: we loop over the names stored in these variables
eval varval=\$$varname
if [ -z "$varval" ]; then
read -p "$varname? " $varname
fi
done
In the first iteration:
varname="a"
(that is, the value stored in myvarname_a). Bash parses the argument to eval, and eval sees literally this at runtime:
varval=$a
The following pseudocode attempts to illustrate how bash interprets the above line of real code to arrive at the final value executed by eval (the following lines are descriptive, not exact bash code):
1. eval varval="\$" + "$varname" # This substitution resolved before the eval statement runs
2. .................. "a" # $varname was previously resolved by the for-loop
3. eval "varval=$a" # This requires one more parsing step
4. eval varval="User-provided" # Final result of parsing (eval executes this)
Once all the parsing is done, the result is what is executed, and its effect is obvious, demonstrating there is nothing particularly mysterious about eval itself, and the complexity is in the parsing of its argument.
varval="User-provided"
The remaining code in the example above simply tests to see if the value assigned to $varval is null, and, if so, prompts the user to provide a value.
I intentionally never learned how to use eval, because most people recommend staying away from it like the plague. However, I recently discovered a use case that made me facepalm for not recognizing it sooner.
If you have cron jobs that you want to run interactively to test, you might view the contents of the file with cat, and copy and paste the cron job to run it. Unfortunately, this involves touching the mouse, which is a sin in my book.
Let's say you have a cron job at /etc/cron.d/repeatme with the contents:
*/10 * * * * root program arg1 arg2
You can't execute this as a script with all the junk in front of it, but you can use cut to get rid of the junk, wrap it in a command substitution, and execute the string with eval:
eval $( cut -d ' ' -f 6- /etc/cron.d/repeatme)
The cut command prints fields 6 onward of the file, delimited by spaces. eval then executes that command.
I used a cron job here as an example, but the concept is to format text from stdout, and then evaluate that text.
The use of eval in this case is not insecure, because we know exactly what we will be evaluating beforehand.
I've recently had to use eval to force multiple brace expansions to be evaluated in the order I needed. Bash does multiple brace expansions from left to right, so
xargs -I_ cat _/{11..15}/{8..5}.jpg
expands to
xargs -I_ cat _/11/8.jpg _/11/7.jpg _/11/6.jpg _/11/5.jpg _/12/8.jpg _/12/7.jpg _/12/6.jpg _/12/5.jpg _/13/8.jpg _/13/7.jpg _/13/6.jpg _/13/5.jpg _/14/8.jpg _/14/7.jpg _/14/6.jpg _/14/5.jpg _/15/8.jpg _/15/7.jpg _/15/6.jpg _/15/5.jpg
but I needed the second brace expansion done first, yielding
xargs -I_ cat _/11/8.jpg _/12/8.jpg _/13/8.jpg _/14/8.jpg _/15/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg _/14/7.jpg _/15/7.jpg _/11/6.jpg _/12/6.jpg _/13/6.jpg _/14/6.jpg _/15/6.jpg _/11/5.jpg _/12/5.jpg _/13/5.jpg _/14/5.jpg _/15/5.jpg
The best I could come up with to do that was
xargs -I_ cat $(eval echo _/'{11..15}'/{8..5}.jpg)
This works because the single quotes protect the first set of braces from expansion during the parsing of the eval command line, leaving them to be expanded by the subshell invoked by eval.
There may be some cunning scheme involving nested brace expansions that allows this to happen in one step, but if there is I'm too old and stupid to see it.
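For what it's worth, you can watch the trick at work on a smaller range by previewing the expansion with echo before handing it to xargs:
$ eval echo _/'{11..13}'/{8..7}.jpg
_/11/8.jpg _/12/8.jpg _/13/8.jpg _/11/7.jpg _/12/7.jpg _/13/7.jpg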
You asked about typical uses.
One common complaint about shell scripting is that you (allegedly) can't pass by reference to get values back out of functions.
But actually, via "eval", you can pass by reference. The callee can pass back a list of variable assignments to be evaluated by the caller. It is pass by reference because the caller is allowed to specify the name(s) of the result variable(s) - see the example below. Error results can be passed back in standard names like errno and errstr.
Here is an example of passing by reference in bash:
#!/bin/bash
isint()
{
re='^[-]?[0-9]+$'
[[ $1 =~ $re ]]
}
#args 1: name of result variable, 2: first addend, 3: second addend
iadd()
{
if isint ${2} && isint ${3} ; then
echo "$1=$((${2}+${3}));errno=0"
return 0
else
echo "errstr=\"Error: non-integer argument to iadd $*\" ; errno=329"
return 1
fi
}
var=1
echo "[1] var=$var"
eval $(iadd var A B)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[2] var=$var (unchanged after error)"
eval $(iadd var $var 1)
if [[ $errno -ne 0 ]]; then
echo "errstr=$errstr"
echo "errno=$errno"
fi
echo "[3] var=$var (successfully changed)"
The output looks like this:
[1] var=1
errstr=Error: non-integer argument to iadd var A B
errno=329
[2] var=1 (unchanged after error)
[3] var=2 (successfully changed)
There is almost unlimited bandwidth in that text output! And there are more possibilities if multiple output lines are used: e.g., the first line could be used for variable assignments, the second for a continuous 'stream of thought', but that's beyond the scope of this post.
In the question:
who | grep $(tty | sed s:/dev/::)
outputs errors claiming that files a and tty do not exist. I understood this to mean that tty is not being interpreted before execution of grep, but instead that bash passed tty as a parameter to grep, which interpreted it as a file name.
There is also the situation of nested redirection, which ought to be handled by matched parentheses specifying a child process; but bash is primarily a word separator, creating parameters to be sent to a program, so parentheses are not matched first, but interpreted as seen.
I got specific with grep, and specified the file as a parameter instead of using a pipe. I also simplified the base command, passing output from a command as a file, so that i/o piping would not be nested:
grep $(tty | sed s:/dev/::) <(who)
works well.
who | grep $(echo pts/3)
is not really desired, but eliminates the nested pipe and also works well.
In conclusion, bash does not seem to like nested piping. It is important to understand that bash is not a new-wave program written in a recursive manner. Instead, bash is an old 1-2-3 program, which has been appended with features. For purposes of assuring backward compatibility, the initial manner of interpretation has never been modified. If bash were rewritten to match parentheses first, how many bugs would be introduced into how many bash programs? Many programmers love to be cryptic.
As clearlight has said, "(p)erhaps the most straightforward example would be a bash program that opens another bash script as a text file, reads each line of text, and uses eval to execute them in order". I'm no expert, but the textbook I'm currently reading (Shell-Programmierung by Jürgen Wolf) points to one particular use of this that I think would be a valuable addition to the set of potential use cases collected here.
For debugging purposes, you may want to go through your script line by line (pressing Enter for each step). You can use eval to execute every line by trapping the DEBUG signal (which bash delivers before each simple command):
trap 'printf "$LINENO :-> " ; read line ; eval $line' DEBUG
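Here is a sketch of that trap at the top of a throwaway script (the variables are arbitrary). Each prompt shows the number of the line about to run; pressing Enter steps onward, and typing something like echo $a inspects state before continuing:
#!/bin/bash
trap 'printf "%s :-> " "$LINENO"; read -r line; eval "$line"' DEBUG
a=1
b=2
echo "a+b=$((a + b))"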
I like the "evaluating your expression one additional time before execution" answer, and would like to clarify with another example.
var="\"par1 par2\""
echo $var # prints nicely "par1 par2"
function cntpars() {
echo " > Count: $#"
echo " > Pars : $*"
echo " > par1 : $1"
echo " > par2 : $2"
if [[ $# = 1 && $1 = "par1 par2" ]]; then
echo " > PASS"
else
echo " > FAIL"
return 1
fi
}
# Option 1: Will Pass
echo "eval \"cntpars \$var\""
eval "cntpars $var"
# Option 2: Will Fail, with curious results
echo "cntpars \$var"
cntpars $var
The curious results in option 2 are that we would have passed two parameters as follows:
First parameter: "par1
Second parameter: par2"
How is that for counterintuitive? The additional eval will fix that.
It was adapted from another answer on How can I reference a file for variables using Bash?

How to detect using of wildcard (asterisk *) as parameter for shell script?

In my script, how can I distinguish when the asterisk wildcard character was used instead of explicitly typed parameters?
This
# myscript *
from this
# myscript p1 p2 p3 ... (where parameters are unknown number)
The shell expands the wildcard. By the time a script is run, the wildcard has been expanded, and there is no way a script can tell whether the arguments were a wildcard or an explicit list.
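You can convince yourself of this with a tiny script (the name argv.sh is arbitrary):
#!/bin/bash
# argv.sh - print each received argument on its own line
printf '<%s>\n' "$@"
In a directory containing only a.txt and b.txt, ./argv.sh * and ./argv.sh a.txt b.txt print exactly the same two lines: the argument lists are indistinguishable.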
Which means that your script will need help from something else which is not a script. Specifically, something which is run before command-line processing. That something is an alias. This is your alias
alias myscript='set -f; globstopper /usr/bin/myscript'
What this does is set up an alias called 'myscript', so when someone types 'myscript', this is what gets run. The alias does two things: firstly, it turns off wildcard expansion with set -f, then it runs a function called globstopper, passing in the path to your script, and the rest of the command-line arguments.
So what's the globstopper function? This:
globstopper() {
if [[ "$2" == "*" ]]
then echo "You cannot use a wildcard"
return
fi
set +f
"$@";
}
This function does three things. Firstly, it checks to see if the argument to the script is a wildcard (caveat: it only checks the first argument, and it only checks to see if it's a simple star; extending this to cover more cases is left as an exercise to the reader). Secondly, it switches wildcard expansion back on. Lastly, it runs the original command.
For this to work, you do need to be able to set up the alias and the shell function in the user's shell, and require your users to use the alias, not the script. But if you can do that, it ought to work.
I should add that I am leaning heavily on the resplendent Simon Tatham's essay 'Magic Aliases: A Layering Loophole in the Bourne Shell' here.
I had a similar question, but rather than detecting when the user called the script using a wildcard, I simply wanted to prevent the use of the wildcard, and pass the string pre-expansion.
Tom's solution is great if you want to detect, but I'd rather prevent. In other words, if I had a script called findin that looked like
#!/bin/bash
echo "[${1}]"
and ran it using:
$ findin *
I would expect the output to be simply
[*]
To do this, you could just alias findin by
alias findin='set -f; /path/to/findin'
But then you would have the shell option set for the rest of your session. This will likely break many programs that don't expect this (e.g. ls -lh *.py). You could verify this by typing
echo $-
in console. If you see an f, that option is set.
You could manually clear the option by typing
set +f
after every instance of findin, but that would get tedious and annoying.
Since shell scripts run in child processes, a set +f inside the script cannot clear the flag in your interactive shell, so the solution I came up with was the following:
g(){ /usr/local/bin/findin "$@"; set +f; }
alias findin='set -f; g'
Note: 'g' might not be the best name for the function, so you'd be encouraged to change it.
Finally, you could generalize this by doing something like:
reset_expansion(){ CMD="$1"; shift; $CMD "$@"; set +f; }
alias findin='set -f; reset_expansion /usr/local/bin/findin'
That way another script where you would want expansion disabled would only require an additional alias, e.g.
alias newscript='set -f; reset_expansion /usr/local/bin/newscript'
and not an additional wrapper function.
For a much longer than necessary writeup, see my post here.
You can't.
It is one of the strengths (or, in some eyes, weaknesses) of Unix.
See the diatribe(s) in "The UNIX-HATERS Handbook".
$arg_num == *; // detects * (literally anything, since it is a global wildcard)
$arg_num == *_*; // detects _
here is an example of it working with _
for i in $*
do
if [[ "$i" == *_* ]];
then echo $i;
fi
done
output of ./bash test * test2 _
_
output of ./bash test * test2 when matching against * rather than _:
test
bash
pass.rtf
test2
_
NOTE: the * is so global in bash that it printed out the files matching that description - in my case, the files on my oh-so-unused desktop. I wish I could give you a better answer, but the best choice is to use something other than *, or another scripting language.
Addendum
I found this post while looking for a workaround for my command line calculator:
alias c='set -f; call_clc'
where call_clc is the function:
call_clc() { clc "$*"; set +f; }
and clc is the script:
#!/bin/bash
echo "$*" | sed -e 's/ //g' >&1 | tee /dev/tty | bc
I need set -f and set +f in order to make inputs such as c 4 * 3 work,
i.e. an asterisk with whitespace before and after,
to prevent the shell from globbing it.
Update: the previous variant, alias c='set -f; clc "$*"; set +f;', did not work:
for some reason the correct result was only given after invoking the command c 4 * 4 twice.
Anyone have an idea why this is so?
If this is something you feel you must do, perhaps:
# if the number of parms is not the same as the number of files in cwd
# then user did not use *
dir_contents=(*)
if [[ "${#@}" -ne "${#dir_contents[@]}" ]]; then
used_star=false
else
# if one of the params is not a file in cwd
# then user did not use *
used_star=true
for f; do [[ ! -a "$f" ]] && { used_star=false; break; }; done
fi
unset dir_contents
$used_star && echo "used star" || echo "did not use star"
Pedantically, this will echo "used star" if the user actually used an asterisk or if the user manually entered the directory contents in any order.

What does this bash syntax mean? (Featuring: case, exec)

What is the purpose of this bash script? (It is a portion of a larger script.)
if [ $# -gt 0 ]
then
case $1 in
-*) ;;
*) exec $* ;;
esac
fi
A related question:
https://stackoverflow.com/questions/2046762/problem-with-metamap-inappropriate-ioctl-for-device
In English, line-by-line:
if the number of arguments is greater than 0
then
if the first argument...
starts with '-', do nothing
else, "exec" the arguments (run the entire set of arguments as a command replacing this process, not as a child process)
(end of case)
(end of if)
Not knowing any bash scripting I'd say this
looks for whether the number of arguments is larger than 0
if it is, it looks at the first argument
If it starts with - it does nothing
Otherwise it executes all arguments as a single command line
The case ... esac part is a switch statement. If $1 matches against -* (that is, if it starts with -) the first case will be executed - and will do nothing. Otherwise (if $1 matches *, which in a case statement matches any string) exec $* will be run.
Around that there is an if statement making sure that the switch is only executed if there actually are any parameters to be checked against (the parameter count is greater than zero).
It takes the first argument passed in and executes it with the remaining arguments, e.g.:
./script.sh ls dir1 dir2
would act as if you had typed
ls dir1 dir2
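A side note that goes beyond the snippet being asked about: because it uses the unquoted $*, arguments containing spaces get re-split before exec sees them.
./script.sh ls 'dir with spaces' # exec $* runs ls with three arguments: dir, with, spaces
Using exec "$@" inside the script instead would pass 'dir with spaces' through as a single argument.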
If the first parameter placed on the command line for this script is a command rather than an option (i.e. it does not start with -), the script tries to run it as an executable file or script, replacing itself via exec.
