linux shell test `-d` on empty argument

Here is the code,
x=
if [ -d $x ]; then
echo "it's a dir"
else
echo "not a dir"
fi
The above code gives me "it's a dir", why? $x is empty, isn't it?

x=
if [ -d $x ]; then
is equivalent to:
if [ -d ] ; then
A simpler way to demonstrate what's going on is:
test -d ; echo $?
which prints 0, indicating that the test succeeded ([ is actually a command, equivalent to test except that it takes a terminating ] argument.)
But this:
test -f ; echo $?
does the same thing. Does that mean that the missing argument is both a directory and a plain file?
No, it means that it's not doing those tests.
According to the POSIX specification for the test command, its behavior depends on the number of arguments it receives.
With 0 arguments, it exits with a status of 1, indicating failure.
With 1 argument, it exits with a status of 0 (success) if the argument is not empty, or 1 (failure) if the argument is empty.
With 2 arguments, the result depends on the first argument, which can be either ! (which reverses the 1-argument behavior), or a "unary primary" like -f or -d, or something else; if it's something else, the results are unspecified.
(POSIX also specifies the behavior for more than 2 arguments, but that's not relevant to this question.)
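You can see these rules at a shell prompt (the statuses in the comments are what a POSIX shell should print):
test ; echo $?          # 0 arguments: 1 (failure)
test "" ; echo $?       # 1 empty argument: 1 (failure)
test -d ; echo $?       # 1 non-empty argument ("-d"): 0 (success)
test -d "" ; echo $?    # 2 arguments: an actual directory test on "": 1 (failure)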
So this:
x=
if [ -d $x ]; then echo yes ; else echo no ; fi
prints "yes", not because the missing argument is a directory, but because the single argument -d is not the empty string.
Incidentally, the GNU Coreutils manual doesn't mention this.
So don't do that. If you want to test whether $x is a directory, enclose it in double quotes:
if [ -d "$x" ] ; then ...

The stat system call, which your shell presumably uses to determine whether the argument is a directory, treats null as the current directory.
Try compiling this program and running it with no arguments:
#include <stdio.h>
#include <sys/stat.h>
int main(int argc, char *argv[]) {
    struct stat s;
    /* argv[1] is NULL when the program is run with no arguments. */
    stat(argv[1], &s);
    /* Mask before comparing: != binds tighter than &. */
    if ((s.st_mode & S_IFDIR) != 0) {
        printf("%s is a directory\n", argv[1]);
    }
    return 0;
}
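To try it, something along these lines should work (the file name stat_demo.c is just an example):
cc stat_demo.c -o stat_demo
./stat_demo /tmp      # prints "/tmp is a directory" on most systems
./stat_demo           # no arguments, so argv[1] is NULL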

Related

pass arguments to shell script that has a switch

I am not sure if switch is the proper terminology as I am new to Unix.
I have a shell script that requires what I call a switch to function properly but I also want to pass arguments:
./scriptname -cm
where if I run just ./scriptname it would fail. But I also want to pass various arguments:
./scriptname -cm arg1 arg2 arg3 arg4
This appears to fail due to the -cm. Normally when I do ./scriptname arg1 arg2 arg3 it will work properly but once I add the switch it fails. Suggestions?
Edit1:
Adding some more relevant code:
./scriptname -cm
will call
scriptname
gencmlicense()
{
echo $2
do stuff
}
gentermlicense()
{
do stuff
}
if [ "$1" = "-cm" ] ; then
gencmlicense
elif [ "$1" = "-term" ] ; then
gentermlicense
fi
If I added an argument the echo $2 would not print out the second argument passed.
If you want to pass arguments from the main script to a function unmodified, use
...
if [ "$1" = "-cm" ] ; then
gencmlicense "$#"
elif [ "$1" = "-term" ] ; then
gentermlicense "$#"
fi
The "$#" (with double quotes!) expands to all positional parameters. For more on this, see your shell manual, likely under "Parameter Expansion".
If your functions don't need the first positional parameter, you can shift it away:
if [ "$1" = "-cm" ]; then
shift
gencmlicense "$#"
elif [ "$1" = "-term" ]; then
shift
gentermlicense "$#"
fi
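To see what quoted "$@" does, here is a tiny sketch (demo is a hypothetical function):
demo() { printf 'got: %s\n' "$@"; }
demo -cm "arg with spaces" arg2    # prints each argument on its own line, spaces preserved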
The professional way of handling options, though, is with the getopts builtin, because it is flexible and extensible, yet compact. This is what I use:
#!/bin/sh
MYNAME=${0##*/} # Short program name for diagnostic messages.
VERSION='1.0'
PATH="$(/usr/bin/getconf PATH):/usr/local/bin"
usage () {
cat << EOF
usage: $MYNAME [-hvVx] [-a arg] ...
Perform nifty operations on objects specified by arguments.
Options:
-a arg do something with arg
-h display this help text and exit
-v verbose mode
-V display version and exit
-x debug mode with set -x
EOF
exit $1
}
parse_options () {
opt_verbose=false
while getopts :a:hvVx option; do
case $option in
(a) opt_a=$OPTARG;;
(h) usage 0;;
(v) opt_verbose=true;;
(V) echo "version $VERSION"; exit 0;;
(x) set -x;;
(?) usage 1;;
esac
done
}
#-------------------------------------------------------------#
# Main script #
#-------------------------------------------------------------#
parse_options "$#"
shift $((OPTIND - 1)) # Shift away options and option args.
...rest of script here...
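As an illustration, assuming the skeleton above is saved as myscript, an invocation might look like:
./myscript -v -a foo file1 file2
After parse_options and the shift, opt_verbose is true, opt_a is foo, and $1 and $2 should be file1 and file2.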

What is -z in bash

I am trying to understand the following code:
if [ -z "$1" ] || [ -z "$2" || [ "${3:-}" ]
then
echo "Usage: $0 <username> <password>" >&2
exit 1
fi
I want to understand what we mean by -z "$1" and "${3:-}" in the code.
Please also help me understand >&2 in the code.
1) Your code is not correct; you are missing a closing ] bracket, probably after the [ -z "$2" test.
2) The if statement runs the following command(s) and then executes the block of code enclosed by then .. fi (or then .. else) if the command(s) succeed (their exit code is 0).
3) [ is just an alias for the test command (try man test). This command takes several parameters and evaluates them. For example, with the -z "$something" flag it returns true (0) if $something is not set or is an empty string. Try it:
if [ -z "$variable" ]; then
echo Variable is not set or is an empty string
fi
4) || is a logical OR. The next command is executed only if the previous one failed (returned a non-zero exit status). So in the statement
if [ -z "$variable" ] || [ -z "$variable2" ]; then
echo Variable 1 or variable 2 is not set or is an empty string
fi
the command [ -z "$variable2" ] is executed only if the first test fails, that is, only if $variable is set and non-empty. The same could be achieved with different syntax:
if [ -z "$variable" -o -z "$variable2" ]; then
echo Variable 1 or variable 2 is not set or is an empty string
fi
which needs only a single test invocation. The -o flag means OR, so you can read it as:
If variable is not set/empty OR variable2 is not set/empty... (Note that POSIX marks -a and -o as obsolescent, so the two-command form above is the more portable one.)
5) The statement [ "${3:-}" ] returns true if $3 (the third argument of the script) is set to a non-empty value.
6) >&2 is a stream redirection. Every process has two output streams: standard output and standard error. They are independent and can be redirected (for example) to two different files. >&2 means "redirect standard output to the same location as standard error".
So to sum up: the commands between then .. fi will be executed if the script is run with $1 empty, or $2 empty, or $3 NOT empty. That means the script should be run with exactly two parameters; if not, the echo message is printed to standard error.
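For reference, the test from the question with the missing bracket restored would look like this:
if [ -z "$1" ] || [ -z "$2" ] || [ "${3:-}" ]
then
    echo "Usage: $0 <username> <password>" >&2
    exit 1
fi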
-z STRING means the length of STRING is zero.
${parameter:-word}: if parameter is unset or null, the expansion of word is substituted. In your case, "${3:-}" expands to an empty string when $3 has no value.
>&2 redirects standard output to standard error, so the output of the echo command is sent to stderr instead of stdout.
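A quick way to see ${parameter:-word} in action at a prompt:
unset var;   echo "${var:-default}"    # prints "default" (unset)
var="";      echo "${var:-default}"    # prints "default" (null)
var="value"; echo "${var:-default}"    # prints "value"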

Linux: run multiple commands without losing individual return codes?

I read this question, but my problem is that I have "plenty" of commands to run, and I need a solution that works when the command string comes in through a system call.
We have an exit task that basically triggers a lot of "cleanup" activity within our JVM. The part I am working on has to call a certain script, not once, but n times!
The current implementation on the Java side creates n ProcessBuilder objects; and each one runs a simple bash script.sh parm ... where parm is different on each run.
Now I want to change the Java side to only make one system call (instead of n) using ProcessBuilder.
I could just use the following command:
bash script.sh parm1 ; bash script.sh parm2 ; ... ; bash script.sh parmN
Now the thing is: if one of the runs fails ... I want all other runs to still take place; but I would like to get a "bad" return code in the end.
Is there a simple, elegant way to achieve that, one that works with command strings coming from system calls?
You can build up the return codes in a subshell as you go, then check them at the end using arithmetic evaluation. E.g., on my test system (cygwin), at a bash prompt:
$ ( r=; echo "foo" ; r=$r$?; echo "bar" ; r=$r$? ; echo "baz" ; r=$r$? ; (($r==0)) )
foo
bar
baz
$ echo $?
0 <--- all the commands in the subshell worked OK, so the status is OK
and, making the second echo fail (the 1>&- closes its stdout):
$ ( r=; echo "foo" ; r=$r$?; echo "bar" 1>&- ; r=$r$? ; echo "baz" ; r=$r$? ; (($r==0)) )
foo
-bash: echo: write error: Bad file descriptor
baz
$ echo $?
1 <--- failure code, but all the commands in the subshell still ran.
So, in your case,
(r=; bash script.sh parm1 ; r=$r$?; bash script.sh parm2 ; r=$r$?; ... ; bash script.sh parmN ; r=$r$?; (($r==0)) )
You can also make that slightly shorter with a function s that stashes the return code:
$ (r=;s(){ r=$r$?;}; echo "foo" ; s; echo "bar" 1>&-; s; echo "baz" ; s; (($r==0)) )
foo
-bash: echo: write error: Bad file descriptor
baz
$ echo $?
1
s(){ r=$r$?;} defines a function s that will update r. Then s can be run after each command. The space and semicolon in the definition of s are required.
What's happening?
r= initializes r to an empty string. That will hold our return values as we go.
After each command, r=$r$? tacks that command's exit status onto r. There are no spaces in r, by construction, so I left off the quotes for brevity. See below for a note about negative return values.
At the end, (($r==0)) succeeds if r evaluates to 0. So, if all commands succeeded, r will be 000...0, which equals 0.
The exit status of a subshell is the exit status of its last command, here, the (($r==0)) test. So if r is all zeros, the subshell will report success. If not, the subshell will report failure ($?==1).
Negative exit values
If some of the programs in the subshell may have negative exit values, this will probably still work. For example, if the statuses were 0, -255, 100, and 255, r would be 0-255100255, which is a valid arithmetic expression that is not equal to zero. However, if you had two commands, the first exited with 127, and the second exited with -127, r would be 127-127, which is zero.
To avoid this problem, replace each r=$r$? with r=$r$((! ! $?)). The double logical negation $((! ! $?)) converts 0 to 0 and any other value, positive or negative, to 1. Then r will only contain 0 and 1 values, so the (($r==0)) test will be correct. (You do need spaces after each ! so bash doesn't think you're trying to refer to your command history.)
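A quick check of the double negation at a prompt:
true ; echo $((! ! $?))     # prints 0
false ; echo $((! ! $?))    # prints 1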
A bitwise OR of all exit codes will be zero (0) if and only if every exit code is zero (0).
You could get a running exit code with this simple arithmetic expression:
excode=$((excode | $?))
To run all the parameters, you could use a script ("callcmd") like:
#!/bin/bash
excode=0
for i
do cmd "$i"
excode=$((excode | $?))
done
echo "The final exit code is $excode"
# if [[ $excode -ne 0 ]]; then exit 1; fi # An alternative exit solution.
Where this script ("callcmd") is called from java as:
callcmd parm1 parm2 parm3 … parmi … parmN
The output of each command goes to the usual standard output, and any error messages go to stderr (but they are all interleaved, so "cmd" should identify which parm produced the error).
r=0; for parm in parm1 parm2 ... parmN; do
bash script.sh "$parm" || r=1
done
exit "$r"
You can do it like this
bash script.sh parm1 || echo "FAILED" > some.log ; bash script.sh parm2 || echo "FAILED" > some.log; ... ; bash script.sh parmN|| echo "FAILED" > some.log
Then check if there is some.log file.
|| is the shell's logical OR: the right-hand command is executed only if the exit status of the previous one is non-zero.
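A tiny illustration of that behavior:
true  || echo "not printed"
false || echo "printed, because the previous command returned non-zero"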

Get reason for permission denied due to traversed directory not executable

I have a file /a/b that is readable by a user A. But /a does not provide executable permission by A, and thus the path /a/b cannot traverse through /a. For an arbitrarily long path, how would I determine the cause for not being able to access a given path due to an intermediate path not being accessible by the user?
An alternative to parsing the tree manually, which still pinpoints the error to a single row, is the namei tool.
namei -mo a/b/c/d
f: a/b/c/d
drwxrw-rw- rasjani rasjani a
drw-rwxr-x rasjani rasjani b
c - No such file or directory
This shows the whole tree structure and permissions up until the entry where the permission is denied.
Something along these lines:
#!/bin/bash
PAR=${1}
PAR=${PAR:="."}
if ! [[ "${PAR:0:1}" == / || "${PAR:0:2}" == ~[/a-z] ]]
then
TMP=`pwd`
PAR=$(dirname ${TMP}/${PAR})
fi
cd $PAR 2> /dev/null
if [ $? -eq 1 ]; then
while [ ! -z "$PAR" ]; do
PREV=$(readlink -f ${PAR})
TMP=$(echo ${PAR}|awk -F\/ '{$NF=""}'1|tr ' ' \/)
PAR=${TMP%/}
cd ${PAR} 2>/dev/null
if [ $? -eq 0 ]; then
if [ -e ${PREV} ]; then
ls -ld ${PREV}
fi
exit
fi
done
fi
Ugly but it would get the job done ..
So the idea is basically: take the parameter $1; if it's not an absolute path, expand it into one, then drop the last element of the path and try to cd into it; if that fails, rinse and repeat. When a cd finally works, PREV holds the last directory the user couldn't cd into, so print it out.
Here's what I threw together. I actually didn't look at rasjani's answer before writing this, but it uses the same concept of taking the exit status of a command. Basically it goes through all the directories (starting from the farthest down the chain) and tries to ls them. If the exit status is 0, then the ls succeeded, and it prints out the last dir that it couldn't ls (I'm not sure what would happen in some edge cases, like where you can't access anything):
LAST=/a/b
while [ ! -z "$LAST" ] ; do
NEXT=`echo "$LAST" | sed 's/[^\/]*$//' | sed 's/\/$//'`
ls "$NEXT" 2> /dev/null > /dev/null
if [ $? -eq 0 ] ; then
echo "Can't access: $LAST"
break
fi
LAST="$NEXT"
done
and I like putting stuff like this on one line just for fun:
LAST=/a/b; while [ ! -z "$LAST" ] ; do NEXT=`echo "$LAST" | sed 's/[^\/]*$//' | sed 's/\/$//'`; ls "$NEXT" 2> /dev/null > /dev/null; if [ $? -eq 0 ] ; then echo "Can't access: $LAST"; break; fi; LAST="$NEXT"; done
Below is a C program that does this. The steps are:
Copy and save program as file.c.
Compile program with gcc file.c -o file
Execute it as ./file PATH
Assuming you have a path /a/b/c/d and you do not have permission on 'c', the output will be:
Given Path = /a/b/c/d
No permission on = /a/b/c
For the permission check I rely on the EACCES error. The path length is assumed to be at most 1024.
If you have any questions, please share.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h> /* for chdir() */
#define MAX_LEN 1024
int main(int argc, char *argv[])
{
char path[MAX_LEN] = "/home/sudhansu/Test";
int i = 0;
char parse[MAX_LEN] = "";
if(argc == 2)
{
strcpy(path, argv[1]);
printf("\n\t\t Given Path = %s\n", path);
}
else
{
printf("\n\t\t Usage : ./file PATH\n\n");
return 0;
}
if(path[strlen(path)-1] != '/')
strcat(path, "/");
path[strlen(path)] = '\0';
while(path[i])
{
if(path[i] == '/')
{
strncpy(parse, path, i+1);
if(chdir(parse) < 0)
{
if(errno == EACCES)
{
printf("\t\t No permission on = [%s]\n", parse);
break;
}
}
}
parse[i] = path[i];
i++;
}
printf("\n");
return 0;
}

Bash Shell Script - Check for a flag and grab its value

I am trying to make a shell script which is designed to be run like this:
script.sh -t application
Firstly, in my script I want to check to see if the script has been run with the -t flag. For example if it has been run without the flag like this I want it to error:
script.sh
Secondly, assuming there is a -t flag, I want to grab the value and store it in a variable that I can use in my script for example like this:
FLAG="application"
So far the only progress I've been able to make on any of this is that "$@" grabs all the command line arguments, but I don't know how this relates to flags, or if this is even possible.
You should read this getopts tutorial.
Example with -a switch that requires an argument :
#!/bin/bash
while getopts ":a:" opt; do
case $opt in
a)
echo "-a was triggered, Parameter: $OPTARG" >&2
;;
\?)
echo "Invalid option: -$OPTARG" >&2
exit 1
;;
:)
echo "Option -$OPTARG requires an argument." >&2
exit 1
;;
esac
done
Like greybot said (getopt != getopts):
The external command getopt(1) is never safe to use, unless you know
it is GNU getopt, you call it in a GNU-specific way, and you ensure
that GETOPT_COMPATIBLE is not in the environment. Use getopts (shell
builtin) instead, or simply loop over the positional parameters.
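Applied to the question, a minimal sketch (the option letter and variable names are just examples) that errors out unless -t is given and stores its value could look like this:
#!/bin/bash
FLAG=""
while getopts ":t:" opt; do
  case $opt in
    t) FLAG=$OPTARG ;;
    \?) echo "Invalid option: -$OPTARG" >&2; exit 1 ;;
    :) echo "Option -$OPTARG requires an argument." >&2; exit 1 ;;
  esac
done
if [ -z "$FLAG" ]; then
  echo "Usage: $0 -t <application>" >&2
  exit 1
fi
echo "FLAG is $FLAG"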
Use $# to grab the number of arguments; if it is not equal to 2, the wrong number of arguments was provided:
if [ $# -ne 2 ]; then
usage;
fi
Next, check if $1 equals -t, otherwise an unknown flag was used:
if [ "$1" != "-t" ]; then
usage;
fi
Finally store $2 in FLAG:
FLAG=$2
Note: usage() is some function showing the syntax. For example:
function usage {
cat << EOF
Usage: script.sh -t <application>
Performs some activity
EOF
exit 1
}
Here is a generalized simple command argument interface you can paste to the top of all your scripts.
#!/bin/bash
declare -A flags
declare -A booleans
args=()
while [ "$1" ];
do
arg=$1
if [ "${1:0:1}" == "-" ]
then
shift
rev=$(echo "$arg" | rev)
if [ -z "$1" ] || [ "${1:0:1}" == "-" ] || [ "${rev:0:1}" == ":" ]
then
bool=$(echo ${arg:1} | sed s/://g)
booleans[$bool]=true
echo \"$bool\" is boolean
else
value=$1
flags[${arg:1}]=$value
shift
echo \"$arg\" is flag with value \"$value\"
fi
else
args+=("$arg")
shift
echo \"$arg\" is an arg
fi
done
echo -e "\n"
echo booleans: ${booleans[@]}
echo flags: ${flags[@]}
echo args: ${args[@]}
echo -e "\nBoolean types:\n\tPrecedes Flag(pf): ${booleans[pf]}\n\tFinal Arg(f): ${booleans[f]}\n\tColon Terminated(Ct): ${booleans[Ct]}\n\tNot Mentioned(nm): ${booleans[nm]}"
echo -e "\nFlag: myFlag => ${flags["myFlag"]}"
echo -e "\nArgs: one: ${args[0]}, two: ${args[1]}, three: ${args[2]}"
By running the command:
bashScript.sh firstArg -pf -myFlag "my flag value" secondArg -Ct: thirdArg -f
The output will be this:
"firstArg" is an arg
"pf" is boolean
"-myFlag" is flag with value "my flag value"
"secondArg" is an arg
"Ct" is boolean
"thirdArg" is an arg
"f" is boolean
booleans: true true true
flags: my flag value
args: firstArg secondArg thirdArg
Boolean types:
Precedes Flag(pf): true
Final Arg(f): true
Colon Terminated(Ct): true
Not Mentioned(nm):
Flag: myFlag => my flag value
Args: one => firstArg, two => secondArg, three => thirdArg
Basically, the arguments are divided up into flags, booleans, and generic arguments.
By doing it this way a user can put the flags and booleans anywhere as long as he/she keeps the generic arguments (if there are any) in the specified order.
Allowing me and now you to never deal with bash argument parsing again!
You can view an updated script here
This has been enormously useful over the last year. It can now simulate scope by prefixing the variables with a scope parameter.
Just call the script like
replace() (
source $FUTIL_REL_DIR/commandParser.sh -scope ${FUNCNAME[0]} "$@"
echo ${replaceFlags[f]}
echo ${replaceBooleans[b]}
)
It doesn't look like I implemented argument scope; not sure why, I guess I haven't needed it yet.
Try shFlags -- Advanced command-line flag library for Unix shell scripts.
https://github.com/kward/shflags
It is very good and very flexible.
FLAG TYPES: This is a list of the DEFINE_*'s that you can do. All flags take
a name, default value, help-string, and optional 'short' name (one-letter
name). Some flags have other arguments, which are described with the flag.
DEFINE_string: takes any input, and interprets it as a string.
DEFINE_boolean: typically does not take any argument: say --myflag to set
FLAGS_myflag to true, or --nomyflag to set FLAGS_myflag to false.
Alternately, you can say
--myflag=true or --myflag=t or --myflag=0 or
--myflag=false or --myflag=f or --myflag=1
Passing an option has the same effect as passing the option once.
DEFINE_float: takes an input and interprets it as a floating point number. As
shell does not support floats per-se, the input is merely validated as
being a valid floating point value.
DEFINE_integer: takes an input and interprets it as an integer.
SPECIAL FLAGS: There are a few flags that have special meaning:
--help (or -?) prints a list of all the flags in a human-readable fashion
--flagfile=foo read flags from foo. (not implemented yet)
-- as in getopt(), terminates flag-processing
EXAMPLE USAGE:
-- begin hello.sh --
#! /bin/sh
. ./shflags
DEFINE_string name 'world' "somebody's name" n
FLAGS "$@" || exit $?
eval set -- "${FLAGS_ARGV}"
echo "Hello, ${FLAGS_name}."
-- end hello.sh --
$ ./hello.sh -n Kate
Hello, Kate.
Note: I took this text from shflags documentation
