Using getopt in unix shell - linux

I have to use a Unix script to pass arguments:
./Script.sh -c "abc" -d "def" -k "abc -d -c"
where the argument for:
-c = "abc"
-d = "def"
-k = "abc -d -c"
How can I handle options in a Unix shell script?

Here is some option handling using getopts:
# -F Final version (do not append date to version)
# -s suffix Add '-suffix' after version number
# -V Print version and exit
# -h Print help and exit
# -j jdcfile JDC file for project - required
# -q Quiet operation
# -v Verbose operation
arg0=$(basename "$0" .sh)
usage()
{
echo "Usage: $arg0 [-hqvFV] [-s suffix] -j jdcfile file.msd" 1>&2
exit 1
}
error()
{
echo "$0: $*" 1>&2
exit 1
}
Fflag=
suffix=
jdcfile=
qflag=
vflag=no
while getopts FVhj:qs:v opt
do
case "$opt" in
(F) Fflag="-F";;
(V) echo "Version information";;
(h) echo "Help information";;
(j) jdcfile="$OPTARG";;
(q) qflag="-q";;
(s) suffix="$OPTARG";;
(v) vflag=yes;;
(*) usage;;
esac
done
shift $(($OPTIND - 1))
case $# in
(1) : OK;;
(*) usage;;
esac
if [ -z "$jdcfile" ]
then error "you did not specify which jdcfile to use (-j option)"
fi
The script then continues and does its task based on the options it was given. The shift removes the 'consumed' options, leaving just the file name arguments.
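For example, assuming the script above is saved as mkmsd.sh (the name is hypothetical), a typical invocation and the resulting state would be:
./mkmsd.sh -j project.jdc -s beta -v file.msd
# after the loop and the shift: jdcfile=project.jdc, suffix=beta, vflag=yes
# $1 is now file.msd, so the single-remaining-argument check passes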

The argument can contain whitespace, so either use the getopts built-in shell command or the GNU enhanced version of the external getopt program.
The getopts option is more portable, because not all systems have the GNU enhanced version of getopt. For example, Linux has the GNU enhanced version, but Mac OS X does not. The original version of getopt does not support whitespace. Despite this limitation, there is a reason why you might want to use the GNU enhanced version: it supports long option names, which getopts does not.
This is how to use the GNU enhanced getopt with whitespace. It is important to use "$@" (use $@ instead of $* and make sure the double quotes are around it) and to eval the whole set command so that whitespace is handled properly.
eval set -- "$(getopt --long currdir:,dir:,argval:,verbose -o c:d:k:v -- "$@")"
while [ $# -gt 0 ]
do
case "$1" in
-c | --currdir) CURRDIR="$2"; shift;;
-d | --dir) MYDIR="$2"; shift;;
-k | --argval) ARGVAL="$2"; shift;;
-v | --verbose) VERBOSE=yes;;
esac
shift
done
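To illustrate, assuming that loop lives in a script called test.sh (a hypothetical name), the invocation from the question would leave the variables set like this:
./test.sh -c "abc" --dir "def" -k "abc -d -c"
# CURRDIR=abc  MYDIR=def  ARGVAL='abc -d -c'
# the embedded spaces in ARGVAL survive because "$@" and the eval'd set preserve word boundaries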

There is a built-in command getopts and an external program getopt, though I'm of the opinion that by the time you need this much argument handling, you've outgrown shell scripting.
I'm not actually sure how getopts works, having never actually used it; check your shell's docs.
getopt splits your arguments into 'flags -- rest', as far as I can tell.
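A quick illustration of that splitting, using the classic calling convention (exact output may differ slightly between getopt implementations):
$ getopt ab:c -a -b foo file1 file2
 -a -b foo -- file1 file2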

Related

User defined output redirection not working as expected

I am using a KSH script to execute a binary (program) that has the following syntax to execute correctly:
myprog [-v | --verbose (optional)] [input1] [input2]
The program prints nothing & returns exit code 0 (zero) on success. On failure it prints ERROR messages to STDERR & returns exit status > 0. If -v option is specified it prints verbose details to STDOUT both in case of success and failure.
To make this usable and reduce chances of argument swapping and user controlled logging I used a ksh shell script to invoke this binary. The syntax to run the ksh shell script is as:
myshell.sh [-v (optional)] [-a input1] [-b input2]
If -v option is specified, ksh redirects STDOUT to <execution_date_time>_out.log and STDERR to <execution_date_time>_err.log. My ksh script is as follows:
myshell.sh :
#!/bin/ksh
verbopt=""
log=""
arg1=""
arg2=""
dateTime=`date +%y-%m-%d_%H:%M:%S`
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
log="1>${dateTime}_out.log 2>${dateTime}_err.log"
;;
a) # Input 1
arg1=$OPTARG
;;
b) # Input 2
arg2=$OPTARG
;;
*) # usage
echo "USAGE: myshell.sh [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $arg1 || -z $arg2 ]]
then
echo "Missing arguments"
exit 2
fi
myprog $verbopt $arg1 $arg2 $log
exit $?
The problem here is that all the STDERR & STDOUT output is printed on the screen (i.e., no redirection takes place) and no *.log files are created after either successful or unsuccessful execution (i.e., exit status 0 or >0 respectively).
Can anyone help me out on this?
Thanks.
Rather than trying to monkey patch redirections into the command line, just redirect the streams when you parse the flags. That is:
while getopts "va:b:" arg
do
case $arg in
v) # verbose output
verbopt="-v"
exec 1>${dateTime}_out.log 2>${dateTime}_err.log
;;
...
You need to be a little careful, since you do some error checking after this and you probably don't want your later error messages going to the *_err.log, but that's fairly trivial to fix. (eg, error check sooner, or do a test -n "$verbopt" && exec > ... after the error check, or similar)
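A minimal sketch of that reordering, reusing the variables from the question (only the part after the getopts loop is shown, and the v) branch now just sets verbopt="-v"):
if [[ -z $arg1 || -z $arg2 ]]; then
    echo "Missing arguments" >&2
    exit 2
fi
if [ -n "$verbopt" ]; then
    exec 1>"${dateTime}_out.log" 2>"${dateTime}_err.log"
fi
myprog $verbopt "$arg1" "$arg2"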
The problem is that > is not expanded in the value of $log.
I'm afraid you will need to use a conditional for this, for example:
cmd="myprog $verbopt $arg1 $arg2"
if [ "$log" ]; then
$cmd 1>${dateTime}_out.log 2>${dateTime}_err.log
else
$cmd
fi
I would use the exec redirection idiom, which runs the rest of the script as if the given redirection had been supplied when it was invoked:
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
If you need to restore stdout and stderr afterward for the script to do more unlogged things, you can just run the logging part in a subshell:
(
if need_to_log; then
exec >stdout_file 2>stderr_file
fi
this command will be logged if the above if statement was true
)
this command will not be logged regardless
I would also build the command in an array, so you can add things like -v to it without having to have a separate variable for each possible parameter. If the order in which the -a and -b arguments are supplied to myprog doesn't matter, you can just add those to the array instead of having separate variables as well.
You can see my version below. Besides the above changes, I also don't bother getting the timestamp if not logging, since it's unneeded, and send error messages to standard error instead of standard out using the ksh builtin print.
Here's what I put together:
#!/usr/bin/env ksh
# new array syntax requires ksh93+; for older ksh, use this:
# set -A cmd myprog
cmd=(myprog) # build up the command to run in an array
log_flag=0 # nonzero if the command should be logged
input_a= # the two input filenames
input_b=
while getopts 'va:b:' arg; do
case $arg in
v) # verbose output
# older ksh: set -A cmd "${cmd[@]}" -v
cmd+=(-v)
log_flag=1
;;
a) # Input 1
input_a=$OPTARG
;;
b) # Input 2
input_b=$OPTARG
;;
*) # usage
print -u2 "USAGE: $0 [-v] [-a input1] [-b input2]"
exit 2
;;
esac
done
if [[ -z $input_a || -z $input_b ]]; then
print -u2 "$0: Missing arguments"
exit 2
fi
if (( log_flag )); then
timestamp=$(date +%y-%m-%d_%H:%M:%S)
exec >"${timestamp}_out.log" 2>"${timestamp}_err.log"
fi
"${cmd[#]}" "$input_a" "$input_b"
Your timestamp uses the two-digit year (%y); that and the underscore between the components are the only deviations from the ISO 8601 standard, so I would recommend you go ahead and adopt the standard format. That'd be %Y-%m-%dT%H:%M:%S, or, in C libraries with newer versions of strftime, %FT%T.
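For example, the logging block above could build the timestamp like this:
timestamp=$(date +%Y-%m-%dT%H:%M:%S)   # e.g. 2017-07-23T15:56:01
# or, where strftime supports the shorthand: timestamp=$(date +%FT%T)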
You could also be a little more clever and make log_flag a string that is either empty or -v, pass that to the command, and test it against the empty string to determine whether or not to open the log files, but I find the logic easier to follow with the simple 0/1 value treated as a Boolean.
Take a look at the eval command.
Replace ...
myprog $verbopt $arg1 $arg2 $log
with:
eval myprog $verbopt $arg1 $arg2 $log
I don't know what your myprog does but here's a simple example using eval to run date (valid command) and date xyz (invalid command), redirecting output to log.stdout/log.stderr accordingly:
$ cat logout
log='1>log.stdout 2>log.stderr'
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date ${log}'
eval date ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
'rm' -rf log.std* > /dev/null 2>&1
echo ""
echo 'eval date xyz ${log}'
eval date xyz ${log}
echo ""
echo "++++++++++++ log.stdout"
cat log.stdout
echo "++++++++++++ log.stderr"
cat log.stderr
echo "++++++++++++"
Now run the script:
$ logout
eval date ${log}
++++++++++++ log.stdout
Sun Jul 23 15:56:01 CDT 2017
++++++++++++ log.stderr
++++++++++++
eval date xyz ${log}
++++++++++++ log.stdout
++++++++++++ log.stderr
date: invalid date `xyz'
++++++++++++

How to add to your bash script help option "yourscript --help"

Is there any possibility to add "help" to a bash script you have written in Linux (Debian)? I mean specifically, invoking it as yourscript --help or yourscript -h.
It doesn't have to be harder than this.
case $1 in
-[h?] | --help)
cat <<-____HALP
Usage: ${0##*/} [ --help ]
Outputs a friendly help message if you can figure out how.
____HALP
exit 0;;
esac
If you use getopts for option processing, use that to identify the option; but the action is going to look more or less similar (and IMNSHO getopts doesn't really offer anything over a simple while ... shift loop).
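A minimal while ... shift loop in that spirit might look like this (a generic sketch; the --file option is just a placeholder):
while [ $# -gt 0 ]; do
    case $1 in
        -h|--help) echo "Usage: ${0##*/} [--help] [--file name]"; exit 0;;
        -f|--file) file=$2; shift;;
        --) shift; break;;
        -*) echo "${0##*/}: unknown option: $1" >&2; exit 1;;
        *) break;;
    esac
    shift
done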
getopt
#!/bin/bash
args=$(getopt -n "$(basename "$0")" -o h --longoptions help -- "$@") || exit 1
eval set -- "$args"
while :; do
case $1 in
-h|--help) echo offer help here ; exit ;;
--) shift; break ;;
*) echo "error: $1"; exit 1;;
esac
done
echo "hello world, $*"
There are many ways to do this. Over time I have come to prefer separate usage and help functions. The help is provided in response to a request for either --help or -h and it provides extended help/option information in a heredoc format. The usage function is provided in response to an invalid input. It is short and provides a quick reminder of what the script needs. Both functions take a string as the first argument that allows you to pass an error message to be displayed along with the help or usage. Both also allow you to pass an exit code as the second argument.
The following is an example I pulled from an existing script. You can ignore the contents, but it was left by way of example:
function help {
local ecode=${2:-0}
[[ -n $1 ]] && printf "\n $1\n" >&2
cat >&2 << helpMessage
Usage: ${0##*/} <ofile> <file.c> [ <cflags> ... --log [ \$(<./bldflags)]]
${0##*/} calls 'gcc -Wall -o <ofile> <file.c> <cflags> <\$(<./bldflags)>'
If the file './bldflags' exists in the present directory, its contents are
read into the script as additional flags to pass to gcc. It is intended to
provide a simple way of specifying additional libraries common to the source
files to be built. (e.g. -lssl -lcrypto).
If the --log option is given, then the compile string and compiler output are
written to a log file in ./log/<ofile>_gcc.log
Options:
-h | --help program help (this file)
-l | --log write compile string and compiler output to ./log/<ofile>_gcc.log
helpMessage
exit $ecode
}
function usage {
local ecode=${2:-0}
[[ -n $1 ]] && printf "\n $1\n" >&2
printf "\n Usage: %s <ofile> <file.c> [ <cflags> ... --log [ \$(<./bldflags)]]\n\n" "${0##*/}"
exit $ecode
}
I generally test for help when looking at all arguments, e.g.:
## test for help and log flags and parse remaining args as cflags
for i in $*; do
test "$i" == "-h" || test "$i" == "--help" && help
...
done
Usage is provided in response to an invalid input, e.g.:
[ -f "$1" ] || usage "error: first argument is not a file." 1
They come in handy and I've preferred this approach to getopts.

I keep getting a 'while syntax' error on the output of the at job in unix and I have no idea why

#!/usr/dt/bin/dtksh
while getopts w:m: option
do
case $option in
w) wflag=1
wval="$OPTARG";;
m) mflag=1
mval="$OPTARG";;
?) printf 'BAD\n' $0
exit 2;;
esac
done
if [ ! -z "$wflag" ]; then
printf "W and -w arg is $wval\n"
fi
if [ ! -z "$mflag" ]; then
printf "M and -m arg is $mval\n"
fi
shift $(($OPTIND - 1))
printf "Remaining arguments are: $* \n"
at $wval <<ENDMARKER
echo $* >> Search_List
tr " " "\n" <Search_List >Usr_List
while true; do
if [ -s Usr_List ]; then
for i in $(cat Usr_List); do
if finger -m | grep $i; then
echo '$i is online' | elm user
sed '/$i/d' <Usr_List >tmplist
mv tmplist Usr_List
fi
done
else
break
fi
done
ENDMARKER
Essentially I want to keep searching through until it is empty. Each time an element of the list is found, it is deleted. Once the list is empty quit.
There are no error messages when I first run the command, it only shows up in an email containing the output of the at job.
Thanks in advance for any advice
EDIT: The script uses getopts and takes one argument for -w and one for -m, the w value is set as the time for the at job, the m still has to be used. Any arguments after the one for m are sent to a file called Search_List, Search_List is edited and saved as Usr_List. Then in the while loop, while Usr_List is not empty, the script checks the results of finger -m against the names in Usr_List. If a name is found, it is removed from Usr_List. Once Usr_List is empty, the program should stop.
elm is a way to send an email, so elm user sends an email to user.
The error is :
while: Expression syntax
at uses /bin/sh by default.
at now <<ENDMARKER
<code here>
ENDMARKER
All of this executes under /bin/sh, which on some systems can be Bourne Shell (Solaris for example).
You need to figure out what /bin/sh is for your system, then modify things accordingly. Also, read the guarantees about what is and what is not in your "at" environment. I think the problem lies there. You have both UNIX and Linux tags, so I cannot give a lot more help than that.
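A quick way to check what /bin/sh really is on a given machine (output varies by system; on Debian it is often dash, on Solaris a real Bourne shell binary):
$ file /bin/sh
/bin/sh: symbolic link to dash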
You can enable logging -- the way YOU need it -- of the at code chunk:
exec > /tmp/somefile.log 2>&1
Then write debugging messages to stdout or stderr.
Your HEREDOC is being interpolated. Try quoting the delimiter:
at $wval << 'ENDMARKER'
Although (I haven't looked closely) it appears that you want some interpolation. But you definitely do not want it on the line in which you reference $i, so quote that $ if you do not quote the entire heredoc:
if finger -m | grep \$i; then
You need to pass the -k option to at:
...
at -k $wval <<ENDMARKER
...
at is otherwise defaulting to your login shell which is csh or one of its derivatives.
It turns out that the while command and the if command needed to be combined.
while [[ -s Usr_List ]]; do
......
done

How can I preserve quotes in printing a bash script's arguments

I am making a bash script that will print and pass complex arguments to another external program.
./script -m root@hostname,root@hostname -o -q -- 'uptime ; uname -a'
How do I print the raw arguments as such:
-m root@hostname,root@hostname -o -q -- 'uptime ; uname -a'
Using $@ and $* removes the single quotes around uptime ; uname -a which could cause undesired results. My script does not need to parse each argument. I just need to print / log the argument string and pass them to another program exactly how they are given.
I know I can escape the quotes with something like "'uptime ; uname -a'" but I cannot guarantee the user will do that.
The quotes are removed before the arguments are passed to your script, so it's too late to preserve them. What you can do is preserve their effect when passing the arguments to the inner command, and reconstruct an equivalent quoted/escaped version of the arguments for printing.
For passing the arguments to the inner command, use "$@" -- with the double-quotes, $@ preserves the original word breaks, meaning that the inner command receives exactly the same argument list that your script did.
For printing, you can use the %q format in bash's printf command to reconstruct the quoting. Note that this won't always reconstruct the original quoting, but will construct an equivalent quoted/escaped string. For example, if you passed the argument 'uptime ; uname -a' it might print uptime\ \;\ uname\ -a or "uptime ; uname -a" or any other equivalent (see @William Pursell's answer for similar examples).
Here's an example of using these:
printf "Running command:"
printf " %q" innercmd "$#" # note the space before %q -- this inserts spaces between arguments
printf "\n"
innercmd "$#"
If you have bash version 4.4 or later, you can use the @Q modifier on parameter expansions to add quoting. This tends to prefer using single-quotes (as opposed to printf %q's preference for escapes). You can combine this with $* to get a reasonable result:
echo "Running command: innercmd ${*#Q}"
innercmd "$#"
Note that $* mashes all arguments together into a single string with whitespace between them, which is normally not useful, but in this case each argument is individually quoted so the result is actually what you (probably) want. (Well, unless you changed IFS, in which case the "whitespace" between arguments will be the first character of $IFS, which may not be what you want.)
Use ${@@Q} for a simple solution. To test, put the lines below in a script bigQ.
#!/bin/bash
line="${##Q}"
echo $line
./bigQ 1 a "4 5" b="6 7 8"
'1' 'a' '4 5' 'b=6 7 8'
If the user invokes your command as:
./script 'foo'
the first argument given to the script is the string foo without the quotes. There is no way for your script to differentiate between that and any of the other methods by which it could get foo as an argument (e.g. ./script $(echo foo) or ./script foo or ./script "foo" or ./script \f\o""''""o).
If you want to print the argument list as close as possible to what the user probably entered:
#!/bin/bash
chars='[ !"#$&()*,;<>?\^`{|}]'
for arg
do
if [[ $arg == *"'"* ]]
then
arg=\""$arg"\"
elif [[ $arg == *$chars* ]]
then
arg="'$arg'"
fi
allargs+=("$arg") # ${allargs[#]} is to be used only for printing
done
printf '%s\n' "${allargs[*]}"
It's not perfect. An argument like ''\''"' is more difficult to accommodate than is justified.
As someone else already mentioned, when you access the arguments inside of your script, it's too late to know which arguments were quoted when it was called. However, you can re-quote the arguments that contain spaces or other special characters that would need to be quoted to be passed as parameters.
Here is a Bash implementation based on Python's shlex.quote(s) that does just that:
function quote() {
declare -a params
for param; do
if [[ -z "${param}" || "${param}" =~ [^A-Za-z0-9_#%+=:,./-] ]]; then
params+=("'${param//\'/\'\"\'\"\'}'")
else
params+=("${param}")
fi
done
echo "${params[*]}"
}
Your example slightly changed to show empty arguments:
$ quote -m root@hostname,root@hostname -o -q -- 'uptime ; uname -a' ''
-m root@hostname,root@hostname -o -q -- 'uptime ; uname -a' ''
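If you also need to reconstruct the original argument list later, the quoted string can be fed back to the shell (a sketch):
args_string=$(quote "$@")
echo "logging: $args_string"   # safe to print or write to a log
eval "set -- $args_string"     # restores the original arguments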
In my case, I needed to call the script like script --argument="--arg-inner=1 --arg-inner2".
Unfortunately, none of the solutions above helped in my case.
The definitive solution was:
#!/bin/bash
# Fix given array argument quotation
function quote() {
local QUOTED_ARRAY=()
for ARGUMENT; do
case ${ARGUMENT} in
--*=*)
QUOTED_ARRAY+=( "${ARGUMENT%%=*}=$(printf "%q" "${ARGUMENT#*=}")" )
shift
;;
*)
QUOTED_ARRAY+=( "$(printf " %q" "${ARGUMENT}")" )
;;
esac
done
echo ${QUOTED_ARRAY[@]}
}
ARGUMENTS="$(quote "${#}")"
echo "${ARGUMENTS}"
The result in the case of MacOS is --argument=--arg-inner=1\ --arg-inner2 which is logically the same.
Just separate each argument using quotes, and the nul character:
#! /bin/bash
sender () {
printf '"%s"\0' "$#"
}
receiver () {
readarray -d '' args < <(sender "$@")
}
receiver "$#"
As commented by Charles Duffy.
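To see what was collected, the receiver can simply print the array it read back (a sketch; the printf line is an addition for illustration, and readarray -d needs bash 4.4 or later):
receiver () {
    readarray -d '' args < <(sender "$@")
    printf 'got: %s\n' "${args[@]}"   # each element still carries its surrounding double quotes
}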

issue in getopt, Unix shell script

Hi, can someone fix this issue? I am not able to get the output of -p.
#!/bin/bash
args=`getopt c:m:p $*`
if [ $? != 0 -o $# == 0 ]
then
echo 'Usage: -c <current-dir> -m <my dir> -p <argument>'
exit 1
fi
set -- $args
for i
do
case "$i" in
-c) shift;CURRDIR=$1;shift;shift ;;
-m) MYDIR=$1;shift;;
-p) ARGVAL=$OPTARG;;
esac
done
echo "CURRDIR = $CURRDIR"
echo "MYDIR = $MYDIR"
echo "ARGVAL = $ARGVAL"
./1.sh -c "def" -m "ref" -p "ref -k ref"
Expected output
output -c = "def"
-m ="ref"
-p ="ref -k ref"
getopt
args=`getopt c:m:p $*`
You need to add a colon after the p to indicate that -p takes an argument. Also you should change $* to "$@" for better handling of spaces.
args=`getopt c:m:p: "$@"`
You are also mixing up getopt and getopts. $OPTARG is a getopts feature. With plain getopt and set you should simply use $2 and then shift off the argument.
-p) ARGVAL=$2; shift 2;;
At this point you've done as well as you can with getopt. Unfortunately it doesn't handle the multi-word argument to -p no matter what you do. For that, we need to use getopts.
getopts
From getopt and getopts:
Easier to use and generally better than getopt, though of course not available in csh-like shells. You shouldn't be using those anyway.
This works rather differently than "getopt". First, because it's a built-in, you usually won't find a separate man page for it, though "help getopts" may give you what you need.
The old "getopt" is called once, and it modifies the environment as we saw above. The builtin "getopts" is called each time you want to process an argument, and it doesn't change the original arguments .
Using getopts is a lot simpler. Your entire loop can be simplified to this:
while getopts c:m:p: flag
do
case "$flag" in
c) CURRDIR=$OPTARG;;
m) MYDIR=$OPTARG;;
p) ARGVAL=$OPTARG;;
esac
done
No shifting needed, you just read $OPTARG each time to get each option's value.
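With that loop in place of the original getopt parsing (and the getopt call removed), the invocation from the question prints the expected values:
$ ./1.sh -c "def" -m "ref" -p "ref -k ref"
CURRDIR = def
MYDIR = ref
ARGVAL = ref -k ref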
