shell script - "$#" [duplicate]

I have a shell script with the following lines:
set -o nounset
set -o errexit
set -o xtrace
if [ "$#" -ne 0 ]
then
echo 'A message'
exit 1
fi
Can somebody explain these commands, in particular the set lines and the "$#" portion?

$# is the number of arguments passed to the script.
So this
if [ "$#" -ne 0 ]
check ensures that the script prints a message and exits if any arguments are passed,
which implies that the script expects no arguments.
Put this in a simple script called my_script:
#!/bin/bash
echo $#
and run with:
$ ./my_script # prints 0
$ ./my_script a b cde # prints 3
$ ./my_script 1 2 3 4 # prints 4
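Note that the reverse guard is just as common. A minimal sketch (the usage message is illustrative) that insists on at least one argument:
#!/bin/bash
if [ "$#" -eq 0 ]
then
echo "usage: $0 <arg>..." >&2
exit 1
fi
echo "got $# argument(s)"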
The set built-in options:
set -o nounset (equivalent to set -u): Treats unset variables as errors.
set -o errexit (equivalent to set -e): Exits immediately on error.
set -o xtrace (equivalent to set -x): Displays the expanded command. Typically used to debug shell scripts.
Consider a simple script called opt to demonstrate this:
#!/bin/bash
set -e
set -u
set -x
cmd="ps $$"
${cmd}
echo $var # 'var' is unset, so this is an "error". Since we have
          # 'set -o nounset', the script exits here.
echo "won't print this"
outputs something like:
+ cmd='ps 2885'
+ ps 2885
PID TTY STAT TIME COMMAND
2885 pts/1 S+ 0:00 /bin/bash ./opt
./opt: line 9: var: unbound variable
The first two lines in the output (starting with +) are due to set -x.
The next two are the result of running ${cmd}.
The next line is the error, which occurred as a result of set -u.
You can read more about the set built-in options in the bash documentation.

In Bash, $# holds the number of command-line arguments. In your case, the conditional fires only when there are some.
I believe a very similar question was answered here; the second or third answer matches your problem.

Related

Bash discards command line arguments when passing to another bash shell

I have a big script (call it test) that, after stripping out the unrelated parts, comes down to just this, which is enough to explain my question:
#!/bin/bash
bash -c "$#"
This doesn't work as expected. E.g. ./test echo hi executes only the echo and the argument disappears!
Testing with various inputs I can see that only $1 is passed to bash -c ... and the rest are discarded.
But if I use a variable like:
#!/bin/bash
cmd="$#"
bash -c "$cmd"
it works as expected for all inputs.
Questions:
1) I would like to understand why the double quotes don't "pass" the entire command line arguments to bash -c .... What am I missing here (that it works perfectly fine when using an intermediate variable)?
2) Why does bash discard the rest of the arguments (except $1) without any error messages?
For example:
bash -c "ls" -l -a hi hello blah
simply runs ls, and -l -a hi hello blah don't result in any errors at all?
(If possible, please refer to the bash grammar where this behaviour is documented).
1) I would like to understand why the double quotes don't "pass" the entire command line arguments to bash -c .... What am I missing here (that it works perfectly fine when using an intermediate variable)?
From info bash:
@
($@) Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands
to a separate word. That is, "$@" is equivalent to "$1" "$2" ....
Thus, bash -c "$@" is equivalent to bash -c "$1" "$2" .... In the case of ./test echo hi invocation, the expression is expanded to
bash -c "echo" "hi"
2) Why does bash discard the rest of the arguments (except $1) without any error messages?
Bash actually doesn't discard anything. From man bash:
If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to the positional parameters, starting with $0.
Thus, for the command bash -c "echo" "hi", Bash passes "hi" as $0 for the "echo" script.
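A quick way to see this for yourself (a minimal demonstration; the argument words are arbitrary):
$ bash -c 'echo "inside: \$0=$0, \$1=$1"' first second
inside: $0=first, $1=second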
bash -c "ls" -l -a hi hello blah
simply runs ls, and -l -a hi hello blah don't result in any errors at all?
According to the rules mentioned above, Bash executes "ls" script and passes the following positional parameters to this script:
$0: "-l"
$1: "-a"
$2: "hi"
$3: "hello"
$4: "blah"
Thus, the command actually executes ls, and the positional parameters are unused in the script. You can use them by referencing to the positional parameters, e.g.:
$ set -x
$ bash -c "ls \$0 \$1 \$3" -l -a hi hello blah
+ bash -c 'ls $0 $1 $3' -l -a hi hello blah
ls: cannot access hello: No such file or directory
You should be using $* instead of $@ to pass the command line as a string. "$@" expands to multiple quoted arguments while "$*" combines all the arguments into a single argument.
#!/bin/bash
bash -c "$*"
The problem is that with your $@ it executes:
bash -c echo hi
But with $* it executes:
bash -c 'echo hi'
When you use:
cmd="$#"
and use: bash -c "$cmd" it does the same thing for you.
Read: What is the difference between “$@” and “$*” in Bash?
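A minimal sketch making the difference visible (set -- simulates the script's arguments; printf shows the word boundaries):
set -- echo hi              # simulate ./test echo hi
printf '<%s> ' "$@"; echo   # two words: <echo> <hi>
printf '<%s> ' "$*"; echo   # one word: <echo hi>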

use of flags in bash script

Scripts in Linux start with a declaration like:
#!/bin/bash
Correct me if I am wrong: this probably says which shell to use.
I have also seen some scripts which say :
#!/bin/bash -ex
What is the use of the flags -ex?
#!/bin/bash -ex
<=>
#!/bin/bash
set -e -x
Man Page (http://ss64.com/bash/set.html):
-e Exit immediately if a simple command exits with a non-zero status, unless
the command that fails is part of an until or while loop, part of an
if statement, part of a && or || list, or if the command's return status
is being inverted using !. -o errexit
-x Print a trace of simple commands and their arguments
after they are expanded and before they are executed. -o xtrace
UPDATE:
BTW, it is possible to set these switches without modifying the script.
For example we have the script t.sh:
#!/bin/bash
echo "before false"
false
echo "after false"
Say we would like to trace this script: bash -x t.sh
output:
+ echo 'before false'
before false
+ false
+ echo 'after false'
after false
Now suppose we would like to trace the script and stop if some command fails (in our case the command false): bash -ex t.sh
output:
+ echo 'before false'
before false
+ false
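One caveat worth knowing: options in the shebang only take effect when the script is executed directly; naming the interpreter explicitly replaces the whole shebang line:
$ ./t.sh # honours whatever flags the shebang carries, e.g. #!/bin/bash -ex
$ bash t.sh # plain bash: flags in the shebang are ignored
$ bash -ex t.sh # passing them explicitly restores the behaviour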
These are documented under set in the SHELL BUILTIN COMMANDS section of the man page:
-e will cause Bash to exit as soon as a pipeline (or simple line) returns an error
-x will cause Bash to print the commands before executing them
-e
is to quit the script on any error
-x
is the debug mode
Check bash -x command
and What does set -e mean in a bash script?

Execute a find command with expression from a shell script [duplicate]

I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable the output looks fine:
echo ${COMMAND}
screen output:-
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it is better to use a Bash array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )
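The array form works because expanding with [@] inside double quotes preserves each element as a separate word, so the quoted SQL survives as a single argument. A minimal sketch:
args=(-c "SELECT 1;" -o out.txt)
printf '<%s>\n' "${args[@]}"
# <-c>
# <SELECT 1;>
# <-o>
# <out.txt>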
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
# optional, if variables are already defined in global scope
DB_NAME="$1"
SQL="$2"
EXPORT_FILE="$3"
psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
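For instance, the argument-count check mentioned above could look like this (a sketch; the usage message is illustrative):
call_psql() {
(( $# == 3 )) || { echo "usage: call_psql db_name sql export_file" >&2; return 1; }
psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1
}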
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print the commands being executed, including the expanded variables, when the function (or any other command) is called. You can switch this behaviour off using set +x, or if you want it on for the whole duration of the script you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.

Problems of set -e with grep command [duplicate]

I am using the following options
set -o pipefail
set -e
in a bash script to stop execution on error. I have ~100 lines of script executing and I don't want to check the return code of every line.
But for one particular command, I want to ignore the error. How can I do that?
The solution:
particular_script || true
Example:
$ cat /tmp/1.sh
particular_script()
{
false
}
set -e
echo one
particular_script || true
echo two
particular_script
echo three
$ bash /tmp/1.sh
one
two
three will never be printed.
Also, I want to add that when pipefail is on,
the shell treats the entire pipeline as having a non-zero exit code
when any command in the pipeline has a non-zero exit code (with pipefail off, only the last command's status counts).
$ set -o pipefail
$ false | true ; echo $?
1
$ set +o pipefail
$ false | true ; echo $?
0
Just add || true after the command where you want to ignore the error.
Don't stop and also save exit status
Just in case if you want your script not to stop if a particular command fails and you also want to save error code of failed command:
set -e
EXIT_CODE=0
command || EXIT_CODE=$?
echo $EXIT_CODE
More concisely:
! particular_script
From the POSIX specification regarding set -e (emphasis mine):
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
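So a bare ! in front of a command is enough to keep set -e from firing. A minimal demonstration:
set -e
! true # exit status is 1, but the ! prefix means set -e does not trigger
echo "still running" # this line is reached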
Instead of "returning true", you can also use the "noop" or null utility (as referred in the POSIX specs) : and just "do nothing". You'll save a few letters. :)
#!/usr/bin/env bash
set -e
man nonexistentthing || :
echo "It's ok.."
Thanks for the simple solution above:
<particular_script/command> || true
The following construction could be used for additional actions/troubleshooting of script steps and additional flow control options:
if <particular_script/command>
then
echo "<particular_script/command> is fine!"
else
echo "<particular_script/command> failed!"
#exit 1
fi
We can stop further actions and exit 1 if required.
I found another way to solve this:
set +e
find "./csharp/Platform.$REPOSITORY_NAME/obj" -type f -iname "*.cs" -delete
find "./csharp/Platform.$REPOSITORY_NAME.Tests/obj" -type f -iname "*.cs" -delete
set -e
You can turn off failing on errors by set +e this will now ignore all errors after that line. Once you are done, and you want the script to fail again on any error, you can use set -e.
After applying set +e, the find no longer fails the whole script when files are not found. At the same time, error messages
from find are still printed and the whole script continues to execute, so it is easy to debug if that is what causes the problem.
This is useful for CI & CD (for example in GitHub Actions).
If you want to prevent your script failing and collect the return code:
command () {
return 1 # or 0 for success
}
set -e
command && returncode=$? || returncode=$?
echo $returncode
returncode is collected no matter whether command succeeds or fails.
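A runnable sketch of the same pattern, with ls /nonexistent standing in for any failing command:
#!/bin/bash
set -e
ls /nonexistent && returncode=$? || returncode=$?
echo "returncode=$returncode" # non-zero, and the script did not exit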
output=$(command 2>&1) && exit_status=$? || exit_status=$?
echo $output
echo $exit_status
Example of using this to create a log file
log_event(){
timestamp=$(date '+%D %T') #mm/dd/yy HH:MM:SS
echo -e "($timestamp) $event" >> "$log_file"
}
output=$(command 2>&1) && exit_status=$? || exit_status=$?
if [ "$exit_status" = 0 ]
then
event="$output"
log_event
else
event="ERROR $output"
log_event
fi
I have been using the snippet below when working with CLI tools and I want to know whether some resource exists, but I don't care about the output.
# 2>&1 duplicates stderr into the captured stream before >/dev/null discards
# stdout, so the substitution captures only stderr; empty stderr means the file exists
if [ -z "$(cat no_exist 2>&1 >/dev/null)" ]; then
echo "no_exist actually exists!"
fi
While || true is the preferred one, you can also do:
var=$(echo $(exit 1)) # it shouldn't fail
I kind of like this solution :
: `particular_script`
The command/script between the backticks is executed and its output is fed to the command ":" (which is equivalent to "true").
$ false
$ echo $?
1
$ : `false`
$ echo $?
0

Sub-shell differences between bash and ksh

I always believed that a sub-shell was not a child process, but another
shell environment in the same process.
I use a basic set of built-ins:
(echo "Hello";read)
On another terminal:
ps -t pts/0
PID TTY TIME CMD
20104 pts/0 00:00:00 ksh
So, no child process in KornShell (ksh).
Enter bash, it appears to behave differently, given the same command:
PID TTY TIME CMD
3458 pts/0 00:00:00 bash
20067 pts/0 00:00:00 bash
So, a child process in bash.
From reading the man pages for bash, it is obvious that another process is created for a sub-shell;
however it fakes $$, which is sneaky.
Is this difference between bash and ksh expected, or am I reading the symptoms incorrectly?
Edit: additional information:
Running strace -f on bash and ksh on Linux shows that bash calls clone twice for the sample command (it does not call fork). So bash might be using threads (I tried ltrace but it core dumped!).
KornShell calls neither fork, vfork, nor clone.
In ksh, a subshell might or might not result in a new process. I don't know what the conditions are, but the shell was optimized for performance on systems where fork() was more expensive than it typically is on Linux, so it avoids creating a new process whenever it can. The specification says a "new environment", but that environmental separation may be done in-process.
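On the bash side you can watch the fork (and the "faked" $$ the questioner mentions) without ps, because BASHPID, unlike $$, always reports the current process. A minimal demonstration (the PIDs shown are illustrative):
$ echo "parent: $$ $BASHPID"; ( echo "subshell: $$ $BASHPID" )
parent: 12345 12345
subshell: 12345 12346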
Another vaguely-related difference is the use of new processes for pipes. In ksh and zsh, if the last command in a pipeline is a builtin, it runs in the current shell process, so this works:
$ unset x
$ echo foo | read x
$ echo $x
foo
$
In bash, all pipeline commands after the first are run in subshells, so the above doesn't work:
$ unset x
$ echo foo | read x
$ echo $x
$
As @dave-thompson-085 points out, you can get the ksh/zsh behavior in bash versions 4.2 and newer if you turn off job control (set +o monitor) and turn on the lastpipe option (shopt -s lastpipe). But my usual solution is to use process substitution instead:
$ unset x
$ read x < <(echo foo)
$ echo $x
foo
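If you do want the lastpipe route, a minimal sketch (assuming bash 4.2 or newer; job control is already off in a non-interactive shell):
$ bash -c 'shopt -s lastpipe; echo foo | read x; echo "$x"'
foo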
ksh93 works unusually hard to avoid subshells. Part of the reason is the avoidance of stdio and extensive use of sfio, which allows builtins to communicate directly. Another reason is that ksh can in theory have so many builtins. If built with SHOPT_CMDLIB_DIR, all of the cmdlib builtins are included and enabled by default. I can't give a comprehensive list of places where subshells are avoided, but it's typically in situations where only builtins are used, and where there are no redirects.
#!/usr/bin/env ksh
# doCompat arr
# "arr" is an indexed array name to be assigned an index corresponding to the detected shell.
# 0 = Bash, 1 = Ksh93, 2 = mksh
function doCompat {
${1:+:} return 1
if [[ ${BASH_VERSION+_} ]]; then
shopt -s lastpipe extglob
eval "${1}[0]="
else
case "${BASH_VERSINFO[*]-${!KSH_VERSION}}" in
.sh.version)
nameref v=$1
v[1]=
if builtin pids; then
function BASHPID.get { .sh.value=$(pids -f '%(pid)d'); }
elif [[ -r /proc/self/stat ]]; then
function BASHPID.get { read -r .sh.value _ </proc/self/stat; }
else
function BASHPID.get { .sh.value=$(exec sh -c 'echo $PPID'); }
fi 2>/dev/null
;;
KSH_VERSION)
nameref "_${1}=$1"
eval "_${1}[2]="
;&
*)
if [[ ! ${BASHPID+_} ]]; then
echo 'BASHPID requires Bash, ksh93, or mksh >= R41' >&2
return 1
fi
esac
fi
}
function main {
typeset -a myShell
doCompat myShell || exit 1 # stripped-down compat function.
typeset x
print -v .sh.version
x=$(print -nv BASHPID; print -nr " $$"); print -r "$x" # comsubs are free for builtins with no redirections
_=$({ print -nv BASHPID; print -r " $$"; } >&2) # but not with a redirect
_=$({ printf '%s ' "$BASHPID" $$; } >&2); echo # nor for expansions with a redirect
_=$(printf '%s ' "$BASHPID" $$ >&2); echo # but if expansions aren't redirected, they occur in the same process.
_=${ { print -nv BASHPID; print -r " $$"; } >&2; } # However, ${ ;} is always subshell-free (obviously).
( printf '%s ' "$BASHPID" $$ ); echo # Basically the same rules apply to ( )
read -r x _ <<<$(</proc/self/stat); print -r "$x $$" # These are free in {{m,}k,z}sh. Only Bash forks for this.
printf '%s ' "$BASHPID" $$ | cat # Sadly, pipes always fork. It isn't possible to precisely mimic "printf -v".
echo
} 2>&1
main "$#"
out:
Version AJM 93v- 2013-02-22
31732 31732
31735 31732
31736 31732
31732 31732
31732 31732
31732 31732
31732 31732
31738 31732
Another neat consequence of all this internal I/O handling is some buffering issues just go away. Here's a funny example of reading lines with tee and head builtins (don't try this in any other shell).
$ ksh -s <<\EOF
integer -a x
builtin head tee
printf %s\\n {1..10} |
while head -n 1 | [[ ${ { x+=("$(tee /dev/fd/{3,4})"); } 3>&1; } ]] 4>&1; do
print -r -- "${x[@]}"
done
EOF
1
0 1
2
0 1 2
3
0 1 2 3
4
0 1 2 3 4
5
0 1 2 3 4 5
6
0 1 2 3 4 5 6
7
0 1 2 3 4 5 6 7
8
0 1 2 3 4 5 6 7 8
9
0 1 2 3 4 5 6 7 8 9
10
0 1 2 3 4 5 6 7 8 9 10
The bash manpage reads:
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
While this sentence is about pipes, it strongly implies a subshell is a separate process.
Wikipedia's disambiguation page also describes a subshell in child-process terms. A child process is certainly itself a process.
The ksh manpage (at a glance) isn't direct about its own definition of a subshell, so it does not imply one way or the other that a subshell is a different process.
Learning the Korn Shell says that they are different processes.
I'd say you're missing something (or the book is wrong or out of date).
The Korn shell does not necessarily use a subshell for command substitution; these are usually handled in the same process. Exceptions include I/O operations.
To go a bit farther, I had a command giving a variable value that looked like this, in ksh93, from a VERY old script:
my_variable=(`cat ./my_file`)
In other words, parentheses around the backticked command substitution. "my_file" is a list of 4-digit octal numbers, one to a line.
When this is supplied this way in ksh93t and later, the newlines are preserved, and you can step through the numbers in the variable using a counter. For example, the following code would give a 4 digit octal number from the list discussed above, after which, you would increment the counter:
data_I_want=$(echo "${my_variable[$my_counter]}")
In ksh93, the command for the variable can also be done with this:
my_variable=($(cat ./my_file))
and, finally, to eliminate the "useless use of cat",
my_variable=($(<./my_file))
If the command is structured without the outer parentheses, the newlines are stripped (a POSIX standard), and the first use of the variable includes all of the numbers from the file. Subsequent calls to the variable using the counter return null values.
Putting the command inside parentheses forces the use of a subshell in a new process, and skirts the necessity of resetting the default field separator using IFS="".
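To summarize the working pattern, a minimal sketch (assuming ./my_file holds one value per line):
my_variable=($(<./my_file)) # parentheses: an indexed array, one element per word
echo "${#my_variable[@]}" # how many values were read
my_counter=0
data_I_want=${my_variable[$my_counter]} # step through by incrementing my_counter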
Sorry for bumping something so old, but it seemed worthwhile to include this, as I haven't seen this particular behavior discussed elsewhere.
