RHEL6 getopts doesn't seem to be working - linux

I have a new RHEL6 machine and I'm trying to run a script to generate some output. The script uses getopts, which I've never used before. The script should work on other machines, but this is my first time trying it. Below is the beginning of the script. Is there anything wrong with the syntax? When I try to output the variables, nothing is displayed:
#! /bin/sh
while getopts "h:u:g:o:e:y:bf" c
do
    case "$c" in
        u) USER=$OPTARG;;
        g) GROUP=$OPTARG;;
        o) OUT=$OPTARG;;
        b) BATCH=1;;
        f) FORCE=1;;
        h) FQDN=$OPTARG;;
        e) ENTITYID=$OPTARG;;
        y) YEARS=$OPTARG;;
        \?) echo "keygen [-o output directory (default .)] [-u username to own keypair] [-g owning groupname] [-h hostname for cert] [-y years to issue cert] [-e entityID to embed in cert]"
            exit 1;;
    esac
done
echo $FQDN
The echo displays a blank line.

You can't use a question mark as an option with the bash getopts (you also can't use the colon). getopts sets the value of the variable ($c in your case) to a question mark when the end of options has been encountered. It also uses the question mark and colon as the variable's value when there's an error: ? is used when an invalid option is encountered, or when a required option argument is missing in non-silent mode; the colon is used when a required option argument is missing in silent mode. In the silent-mode error cases, OPTARG contains the offending option character. This is how POSIX getopts works as well.
The ksh getopts behaves differently, but it also excludes ? and : (as well as -, [ and ], and it only allows # as the first option). It does, however, show a usage message when you provide -?. Basically, don't use -? with shell getopts. :)
Typically, I write a small function called "usage" and call it both from *) and by checking $? for a non-zero value immediately after the case statement.
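For example, a minimal sketch of that pattern (the usage text and option letters here are placeholders, not taken from your script):
usage() {
    echo "usage: $0 [-u user] [-g group] [-b]" >&2
    exit 1
}
# The leading ':' turns on silent error reporting, so the ':' and '?' cases can be told apart.
while getopts ":u:g:b" c; do
    case "$c" in
        u) USER=$OPTARG;;
        g) GROUP=$OPTARG;;
        b) BATCH=1;;
        :) echo "missing argument for -$OPTARG" >&2; usage;;
        *) usage;;   # '*' also catches the '?' set for an invalid option
    esac
done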

Related

BASH getopts Multiple Scripts with Same Options

I have a series of BASH scripts.
I am using getopts to parse arguments from the cmd line (although open to alternatives).
There is a series of common options to these scripts; call this option set A,
i.e. queue, ncores etc.
Each script then has a series of extra options, i.e. sets B1, B2, B3.
What I want is for script
1 to be able to take options A+B1,
2 to be able to take options A+B2,
3 to be able to take options A+B3.
But I want to be able to store the code for options A in a central location (library/function) without having to write it out in each script.
What I want is a way to insert generic code in getopts. Or alternatively a way to run getopts twice.
In fact I've done this by having getopts as a function which is sourced.
But the problem is I can't get the unrecognised-option handling to work then.
I guess one way would be to remove the arguments for options A from the string before passing it to a getopts for B1, B2, B3 etc?
Thanks Roger
That's a very nice question; to answer it we need a good understanding of how getopts works.
The key point here is that getopts is designed to iterate over the supplied arguments in a single loop. Thus, the solution is to split that loop between different files rather than running the command twice:
#!/usr/bin/env bash
# File_1
getopts_common() {
    builtin getopts ":ab:${1}" "${2}" "${@:3}" || return 1
    case ${!2} in
        'a')
            echo 'a triggered'
            continue
            ;;
        'b')
            echo "b argument supplied -- ${OPTARG}"
            continue
            ;;
        ':')
            echo "MISSING ARGUMENT for option -- ${OPTARG}" >&2
            exit 1
            ;;
    esac
}
#!/usr/bin/env bash
# File_2
# source "File_1"
while getopts_common 'xy:' OPTKEY "${@}"; do
    case ${OPTKEY} in
        'x')
            echo 'x triggered'
            ;;
        'y')
            echo "y argument supplied -- ${OPTARG}"
            ;;
        '?')
            echo "INVALID OPTION -- ${OPTARG}" >&2
            exit 1
            ;;
        ':')
            echo "MISSING ARGUMENT for option -- ${OPTARG}" >&2
            exit 1
            ;;
        *)
            echo "UNIMPLEMENTED OPTION -- ${OPTKEY}" >&2
            exit 1
            ;;
    esac
done
Implementation notes
We start with File_2 since that's where the execution of the script starts:
Instead of invoking getopts directly, we call it via its proxy, getopts_common, which is responsible for processing all common options.
The getopts_common function is invoked with:
A string that defines which options to expect, and where to expect their arguments. This string only covers options defined in File_2.
The name of the shell-variable to use for option reporting.
A list of the command line arguments. (This simplifies accessing them from inside the getopts_common function.)
Moving on to the sourced file (File_1), we need to bear in mind that the getopts_common function runs inside the while loop defined in File_2:
getopts returns false when there is nothing left to parse; the || return 1 bit ensures that the getopts_common function does the same.
The execution needs to move on to the next iteration of the loop when a valid option is processed. Hence, each valid option match ends with continue.
Silent error reporting (enabled when OPTSPEC starts with :) allows us to distinguish between INVALID OPTION and MISSING ARGUMENT. The latter error is specific to the common options defined in File_1, thus it needs to be trapped there.
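Assuming File_1 is sourced by File_2 and the latter is saved as, say, script_2.sh (a made-up name), an invocation mixing common and script-specific options should be handled in a single pass, printing something like:
$ ./script_2.sh -a -b foo -x -y bar
a triggered
b argument supplied -- foo
x triggered
y argument supplied -- bar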
For more in-depth information on getopts, see Bash Hackers Wiki: Getopts tutorial

getopts: unable to identify arguments

This is the script I tried:
#!/bin/bash
while getopts ":u:p:" option; do
    case $option in
        u) USER=$OPTARG;;
        p) PASS=$OPTARG;;
        \?) echo "Invalid Option: $OPTARG"
            exit 1;;
        :) echo "Please provide an argument for $OPTARG!"
            exit 1;;
    esac
done
echo "Username/Password: $USER/$PASS"
If the command for running the script is:
./test9.sh -u test -p -a
Then I am getting an output:
Username/Password: test/-a
-a is an invalid argument, but the script is taking -a as the password. I would like to display a message "Please enter a password" and exit the script. Please help me fix this.
There are three kinds of parameters: options, option arguments, and positional parameters. If a parameter is an option that requires an argument, then the next parameter will be treated as its argument no matter what. It may start with a dash or even coincide with a valid option; it will still be treated as the option's argument.
If your program wants to reject arguments that start with a dash, you need to program it yourself. Passwords that start with a dash are perfectly legitimate; a program that checks passwords must not reject them.
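If you nevertheless wanted that behaviour, you would have to add the check yourself. A minimal sketch follows (the message wording is just an illustration, and as noted above this will also reject legitimate passwords that start with a dash):
#!/bin/bash
while getopts ":u:p:" option; do
    case $option in
        u) USER=$OPTARG;;
        p)  # hand-rolled check: refuse a "password" that looks like another option
            if [[ $OPTARG == -* ]]; then
                echo "Please enter a password" >&2
                exit 1
            fi
            PASS=$OPTARG;;
        \?) echo "Invalid Option: $OPTARG" >&2
            exit 1;;
        :) echo "Please provide an argument for $OPTARG!" >&2
            exit 1;;
    esac
done
echo "Username/Password: $USER/$PASS"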
Options that accept optional arguments are extremely confusing and non-standard. getopt in general doesn't support them. There's a GNU extension for that, but don't use it.
TL;DR there's nothing to fix, your script is fine.
I haven't tested your script, but I think that if you use getopt instead of getopts you'll get the result you expect, an error because -a is not a valid option.

'less' the file specified by the output of 'which'

The command 'which' shows the path to a command.
The command 'less' opens a file.
How can I 'less' the file given by the output of 'which'?
I don't want to use two commands, like below, to do it.
=>which script
/file/to/script/fiel
=>less /file/to/script/fiel
This is a use case for command substitution:
less -- "$(which commandname)"
That said, if your shell is bash, consider using type -P instead, which (unlike the external command which) is built into the shell:
less -- "$(type -P commandname)"
Note the quotes: These are important for reliable operation. Without them, the command may not work correctly if the filename contains characters inside IFS (by default, whitespace) or can be evaluated as a glob expression.
The double dashes are likewise there for correctness: Any argument after them is treated as positional (as per POSIX Utility Syntax Guidelines), so even if a filename starting with a dash were to be returned (however unlikely this may be), it ensures that less treats that as a filename rather than as the beginning of a sequence of options or flags.
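As an illustration of why the quoting matters, consider a hypothetical filename containing a space (the path below is made up):
file='/opt/my tools/bin/frob'
less -- $file      # word-splits into two arguments: '/opt/my' and 'tools/bin/frob'
less -- "$file"    # one argument: '/opt/my tools/bin/frob'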
You may also wish to consider honoring the user's pager selection via the environment variable $PAGER, and using type without -P to look for aliases, shell functions and builtins:
cmdsource() {
    local sourcefile
    if sourcefile="$(type -P -- "$1")"; then
        "${PAGER:-less}" -- "$sourcefile"
    else
        echo "Unable to find source for $1" >&2
        echo "...checking for a shell builtin:" >&2
        type -- "$1"
    fi
}
This defines a function you can run:
cmdsource commandname
You should be able to just pipe it over; try this:
which script | less

How to prevent execution of command in ZSH?

I wrote a hook for the command line:
# Transforms command 'ls?' to 'man ls'
function question_to_man() {
    if [[ $2 =~ '^\w+\?$' ]]; then
        man ${2[0,-2]}
    fi
}
autoload -Uz add-zsh-hook
add-zsh-hook preexec question_to_man
But when I do:
> ls?
After exiting from man I get:
> zsh: no matches found: ls?
How can I get rid of the message about the wrong command?
? is special to zsh and is the wildcard for a single character. That means that if you type ls?, zsh tries to find matching file names in the current directory (any three-letter name starting with "ls").
There are two ways to work around that:
You can make "?" "unspecial" by quoting it: ls\?, 'ls?' or "ls?".
You can make zsh handle the case where there is no match better:
The default behaviour if no match can be found is to print an error. This can be changed by disabling the NOMATCH option (also NULL_GLOB must not be set):
setopt NO_NOMATCH
setopt NO_NULL_GLOB
This will leave the word untouched, if there is no matching file.
Caution: In the (maybe unlikely) case that there is a file with a matching name, zsh will try to execute a command with the name of the first matching file. That is, if there is a file named "lsx", then ls? will be replaced by lsx and zsh will try to run it. This may or may not fail, but will most likely not be the desired effect.
Both methods have their pros and cons. 1. is probably not exactly what you are looking for, and 2. does not work every time and also changes your shell's behaviour.
Also (as chepner noted in his comment) preexec runs in addition to, not instead of, a command. That means you may get the help for ls, but zsh will still try to run ls? or even lsx (or another matching name).
To avoid that, I would suggest defining a command_not_found_handler function instead of preexec. From the zsh manual:
If no external command is found but a function command_not_found_handler exists the shell executes this function with all command line arguments. The function should return status zero if it successfully handled the command, or non-zero status if it failed. In the latter case the standard handling is applied: ‘command not found’ is printed to standard error and the shell exits with status 127. Note that the handler is executed in a subshell forked to execute an external command, hence changes to directories, shell parameters, etc. have no effect on the main shell.
So this should do the trick:
command_not_found_handler () {
    if [[ $1 =~ '\?$' ]]; then
        man ${1%\?}
        return 0
    else
        return 1
    fi
}
If you have a lot of matching file names but seldom mistype commands (the usual reason for "command not found" errors), you might want to consider using this instead:
command_not_found_handler () {
    man ${1%?}
}
This does not check for "?" at the end, but just cuts away any last character (note the missing "\" in ${1%?}) and tries to run man on the rest. So even if a file name matches, man will be run unless there is indeed a command with the same name as the matched file.
Note: This will interfere with other tools using command_not_found_handler for example the command-not-found tool from Ubuntu (if enabled for zsh).
That all being said, zsh has a widget called run-help which can be bound to a key (in Emacs mode it is by default bound to Alt+H) and then runs man for the current command.
The main advantages of using run-help over the above are:
You can call it any time while typing a longer command, as long as the command name is complete.
After you leave the manpage, the command is still there unchanged, so you can continue writing on it.
You can even bind it to Alt+? to make it more similar: bindkey '^[?' run-help

Bash Shell - The : Command

The colon command is a null command.
The : construct is also useful in the conditional setting of variables. For example,
: ${var:=value}
Without the :, the shell would try to evaluate $var as a command. <=???
I don't quite understand the last sentence in the above statement. Can anyone give me some details?
Thank you
Try
var=badcommand
$var
you will get
bash: badcommand: command not found
Try
var=
${var:=badcommand}
and you will get the same.
The shell (e.g. bash) always tries to run the first word on each command line as a command, even after doing variable expansion.
The only exception to this is
var=value
which the shell treats specially.
The trick in the example you provide is that ${var:=value} works anywhere on a command line, e.g.
# set newvar to somevalue if it isn't already set
echo ${newvar:=somevalue}
# show that newvar has been set by the above command
echo $newvar
But we don't really even want to echo the value, so we want something better than
echo ${newvar:=somevalue}.
The : command lets us do the assignment without any other action.
I suppose what the man page writers meant was
: ${var:=value}
can be used as a shortcut instead of, say,
if [ -z "$var" ]; then
var=value
fi
${var} on its own executes the command stored in $var. Adding substitution parameters does not change this, so you use : to neutralize this.
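For instance (using ls purely as a stand-in for any command):
unset var
${var:=ls}      # assigns "ls" to var, and the expanded word is then run as a command
unset var
: ${var:=ls}    # assigns "ls" to var; the : discards its arguments, so nothing is run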
Try this:
$ help :
:: :
    Null command.
    No effect; the command does nothing.
    Exit Status:
    Always succeeds.
