'less' the file specified by the output of 'which' - linux

The command 'which' shows the path to a command.
The command 'less' opens a file.
How can I 'less' the file that is the output of 'which'?
I don't want to use two commands to do it, like below:
=>which script
/path/to/script/file
=>less /path/to/script/file

This is a use case for command substitution:
less -- "$(which commandname)"
That said, if your shell is bash, consider using type -P instead, which (unlike the external command which) is built into the shell:
less -- "$(type -P commandname)"
Note the quotes: These are important for reliable operation. Without them, the command may not work correctly if the filename contains characters inside IFS (by default, whitespace) or can be evaluated as a glob expression.
The double dashes are likewise there for correctness: Any argument after them is treated as positional (as per POSIX Utility Syntax Guidelines), so even if a filename starting with a dash were to be returned (however unlikely this may be), it ensures that less treats that as a filename rather than as the beginning of a sequence of options or flags.
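For example, here is a quick sketch of the failure mode the quotes prevent (the path containing a space is hypothetical, just for illustration):
# Suppose 'which commandname' prints a path containing a space (hypothetical):
#   /home/user/my scripts/commandname
less $(which commandname)       # unquoted: word-splits into '/home/user/my' and 'scripts/commandname'
less -- "$(which commandname)"  # quoted, with --: the whole path reaches less as a single argument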
You may also wish to consider honoring the user's pager selection via the environment variable $PAGER, and using type without -P to look for aliases, shell functions and builtins:
cmdsource() {
    local sourcefile
    if sourcefile="$(type -P -- "$1")"; then
        "${PAGER:-less}" -- "$sourcefile"
    else
        echo "Unable to find source for $1" >&2
        echo "...checking for a shell builtin:" >&2
        type -- "$1"
    fi
}
This defines a function you can run:
cmdsource commandname

You should be able to just pipe it over, try this:
which script | less

How to store command arguments which contain double quotes in an array?

I have a Bash script which generates, stores and modifies values in an array. These values are later used as arguments for a command.
For an MCVE I thought of an arbitrary command, bash -c 'echo 0="$0" ; echo 1="$1"', which illustrates my problem. I will call my command with two arguments, -option1=withoutspace and -option2="with space". So it would look like this
> bash -c 'echo 0="$0" ; echo 1="$1"' -option1=withoutspace -option2="with space"
if the call to the command were typed directly into the shell. It prints
0=-option1=withoutspace
1=-option2=with space
In my Bash script, the arguments are part of an array. However
#!/bin/bash
ARGUMENTS=()
ARGUMENTS+=('-option1=withoutspace')
ARGUMENTS+=('-option2="with space"')
bash -c 'echo 0="$0" ; echo 1="$1"' "${ARGUMENTS[@]}"
prints
0=-option1=withoutspace
1=-option2="with space"
which still shows the double quotes (because they are interpreted literally?). What works is
#!/bin/bash
ARGUMENTS=()
ARGUMENTS+=('-option1=withoutspace')
ARGUMENTS+=('-option2=with space')
bash -c 'echo 0="$0" ; echo 1="$1"' "${ARGUMENTS[@]}"
which prints again
0=-option1=withoutspace
1=-option2=with space
What do I have to change to make ARGUMENTS+=('-option2="with space"') work as well as ARGUMENTS+=('-option2=with space')?
(Maybe it's even entirely wrong to store arguments for a command in an array? I'm open for suggestions.)
Get rid of the single quotes. Write the options exactly as you would on the command line.
ARGUMENTS+=(-option1=withoutspace)
ARGUMENTS+=(-option2="with space")
Note that this is exactly equivalent to your second option:
ARGUMENTS+=('-option1=withoutspace')
ARGUMENTS+=('-option2=with space')
-option2="with space" and '-option2=with space' both evaluate to the same string. They're two ways of writing the same thing.
(Maybe it's even entirely wrong to store arguments for a command in an array? I'm open for suggestions.)
It's the exact right thing to do. Arrays are perfect for this. Using a flat string would be a mistake.
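If you want to convince yourself that the two spellings really produce identical array elements, here is a quick sketch (printf prints each element on its own line, wrapped in angle brackets):
ARGUMENTS=(-option1=withoutspace -option2="with space")
printf '<%s>\n' "${ARGUMENTS[@]}"
# <-option1=withoutspace>
# <-option2=with space>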

"read" command not executing in "while read line" loop [duplicate]

First post here! I really need help on this one; I looked the issue up on Google, but couldn't find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and that looks kinda like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
    echo $line > line.tmp
    arg=`cut -d ";" -f 1 line.tmp`
    requ=`cut -d ";" -f 2 line.tmp`
    if [ $requ = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command doesn't seem to execute. It just gets skipped, and the argument ends up with no value.
2) Even though the args.conf file contains 3 lines, the loop seems to execute only once. All I see on the screen is "[This argument is required]" a single time, and then the module just launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
    ...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}
if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
    echo "Error creating temp directory" >&2
    exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading a line from the config file, storing it in a temp file, and using cut to split it into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
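As a quick sanity check that the prefix assignment really is local to read, here is a small sketch you can paste into bash:
IFS=";" read -r first rest <<< "alpha;beta;gamma"
echo "$first"           # alpha
echo "$rest"            # beta;gamma (remaining fields land in the last variable)
printf '%q\n' "$IFS"    # still the default $' \t\n' -- the global IFS was never changed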
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
    arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
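Putting the main suggestions together (fd 3, IFS-based field splitting, an array for the arguments, and error checking), a rough sketch of the rewritten loop might look like this; info, succes, $name, $interpreter and $file are assumed to come from your framework as in the original script, and $module_root is the absolute root directory discussed above:
info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave it blank if you don't want to use it]"
    fi
    read -p " $arg=" answer      # reads from the user (stdin), not from fd 3
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
info "Launching module $name..."
"$interpreter" "$module_root/modules/$name/$file" "${arglist[@]}" || {
    echo "Module $name failed" >&2
    exit 1
}
succes "Module $name execution completed."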

How to get the complete calling command of a BASH script from inside the script (not just the arguments)

I have a BASH script that has a long set of arguments and two ways of calling it:
my_script --option1 value --option2 value ... etc
or
my_script val1 val2 val3 ..... valn
This script in turn compiles and runs a large FORTRAN code suite that eventually produces a netcdf file as output. I already have all the metadata in the netcdf output global attributes, but it would be really nice to also include the full run command one used to create that experiment. Thus another user who receives the netcdf file could simply reenter the run command to rerun the experiment, without having to piece together all the options.
So that is a long way of saying, in my BASH script, how do I get the last command entered from the parent shell and put it in a variable? i.e. the script is asking "how was I called?"
I could try to piece it together from the option list, but the very long option list and two interface methods would make this long and arduous, and I am sure there is a simple way.
I found this helpful page:
BASH: echoing the last command run
but this only seems to work to get the last command executed within the script itself. The asker also refers to use of history, but the answers seem to imply that the history will only contain the command after the programme has completed.
Many thanks if any of you have any idea.
You can try the following:
myInvocation="$(printf %q "$BASH_SOURCE")$((($#)) && printf ' %q' "$@")"
$BASH_SOURCE refers to the running script (as invoked), and $@ is the array of arguments; (($#)) && ensures that the following printf command is only executed if at least 1 argument was passed; printf %q is explained below.
While this won't always be a verbatim copy of your command line, it'll be equivalent - the string you get is reusable as a shell command.
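For instance (a purely hypothetical invocation, just to show the quoting at work), calling the script as ./my_script 'a b' c would leave the following in the variable:
echo "$myInvocation"
# ./my_script a\ b c   <- reusable: pasting this runs the script with the same two arguments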
chepner points out in a comment that this approach will only capture what the original arguments were ultimately expanded to:
For instance, if the original command was my_script $USER "$(date +%s)", $myInvocation will not reflect these arguments as-is, but will rather contain what the shell expanded them to; e.g., my_script jdoe 1460644812
chepner also points out that getting the actual raw command line as received by the parent process will be (next to) impossible. Do tell me if you know of a way.
However, if you're prepared to ask users to do extra work when invoking your script or you can get them to invoke your script through an alias you define - which is obviously tricky - there is a solution; see bottom.
Note that use of printf %q is crucial to preserving the boundaries between arguments - if your original arguments had embedded spaces, something like $0 $* would result in a different command.
printf %q also protects against other shell metacharacters (e.g., |) embedded in arguments.
printf %q quotes the given argument for reuse as a single argument in a shell command, applying the necessary quoting; e.g.:
$ printf %q 'a |b'
a\ \|b
a\ \|b is equivalent to single-quoted string 'a |b' from the shell's perspective, but this example shows how the resulting representation is not necessarily the same as the input representation.
Incidentally, ksh and zsh also support printf %q, and ksh actually outputs 'a |b' in this case.
If you're prepared to modify how your script is invoked, you can pass $BASH_COMMAND as an extra argument: $BASH_COMMAND contains the raw[1]
command line of the currently executing command.
For simplicity of processing inside the script, pass it as the first argument (note that the double quotes are required to preserve the value as a single argument):
my_script "$BASH_COMMAND" --option1 value --option2
Inside your script:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
myInvocation=$1 # Save the command line in a variable...
shift # ... and remove it from "$#".
# Now process "$#", as you normally would.
Unfortunately, there are only two options when it comes to ensuring that your script is invoked this way, and they're both suboptimal:
The end user has to invoke the script this way - which is obviously tricky and fragile (you could, however, check in your script whether the first argument contains the script name and error out if it doesn't).
Alternatively, provide an alias that wraps the passing of $BASH_COMMAND as follows:
alias my_script='/path/to/my_script "$BASH_COMMAND"'
The tricky part is that this alias must be defined in all end users' shell initialization files to ensure that it's available.
Also, inside your script, you'd have to do extra work to re-transform the alias-expanded version of the command line into its aliased form:
# The *first* argument is what "$BASH_COMMAND" expanded to,
# i.e., the entire (alias-expanded) command line.
# Here we also re-transform the alias-expanded command line to
# its original aliased form, by replacing everything up to and including
# "$BASH_COMMAND" with the alias name.
myInvocation=$(sed 's/^.* "\$BASH_COMMAND"/my_script/' <<<"$1")
shift # Remove the first argument from "$#".
# Now process "$#", as you normally would.
Sadly, wrapping the invocation via a script or function is not an option, because the $BASH_COMMAND truly only ever reports the current command's command line, which in the case of a script or function wrapper would be the line inside that wrapper.
[1] The only thing that gets expanded are aliases, so if you invoked your script via an alias, you'll still see the underlying script in $BASH_COMMAND, but that's generally desirable, given that aliases are user-specific.
All other arguments and even input/output redirections, including process substitutions <(...), are reflected as-is.
"$0" contains the script's name, "$@" contains the parameters.
Do you mean something like echo $0 $*?

How to prevent execution of command in ZSH?

I wrote a hook for the command line:
# Transforms command 'ls?' to 'man ls'
function question_to_man() {
    if [[ $2 =~ '^\w+\?$' ]]; then
        man ${2[0,-2]}
    fi
}
autoload -Uz add-zsh-hook
add-zsh-hook preexec question_to_man
But when I do:
> ls?
After exiting from man I get:
> zsh: no matches found: ls?
How can I get rid of the message about the wrong command?
? is special to zsh and is the wildcard for a single character. That means that if you type ls? zsh tries to find matching file names in the current directory (any three-letter name starting with "ls").
There are two ways to work around that:
You can make "?" "unspecial" by quoting it: ls\?, 'ls?' or "ls?".
You can make zsh handle the case where nothing matches more gracefully:
The default behaviour if no match can be found is to print an error. This can be changed by disabling the NOMATCH option (also NULL_GLOB must not be set):
setopt NO_NOMATCH
setopt NO_NULL_GLOB
This will leave the word untouched, if there is no matching file.
Caution: In the (maybe unlikely) case that there is a file with a matching name, zsh will try to execute a command with the name of the first matching file. That is, if there is a file named "lsx", then ls? will be replaced by lsx and zsh will try to run it. This may or may not fail, but will most likely not be the desired effect.
Both methods have their pros and cons. 1. is probably not exactly what you are looking for, and 2. does not work in every case and also changes your shell's behaviour.
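As a quick illustration of option 2, here is a sketch of an interactive session (assuming no file in the current directory matches ls?):
% echo ls?
zsh: no matches found: ls?
% setopt NO_NOMATCH
% echo ls?
ls?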
Also (as @chepner noted in his comment) preexec runs in addition to, not instead of, a command. That means you may get the help for ls, but zsh will still try to run ls? or even lsx (or another matching name).
To avoid that, I would suggest defining a command_not_found_handler function instead of preexec. From the zsh manual:
If no external command is found but a function command_not_found_handler exists the shell executes this function with all command line arguments. The function should return status zero if it successfully handled the command, or non-zero status if it failed. In the latter case the standard handling is applied: ‘command not found’ is printed to standard error and the shell exits with status 127. Note that the handler is executed in a subshell forked to execute an external command, hence changes to directories, shell parameters, etc. have no effect on the main shell.
So this should do the trick:
command_not_found_handler () {
    if [[ $1 =~ '\?$' ]]; then
        man ${1%\?}
        return 0
    else
        return 1
    fi
}
If you have a lot of matching file names but seldom mistype commands (the usual cause of "command not found" errors), you might want to consider using this instead:
command_not_found_handler () {
    man ${1%?}
}
This does not check for "?" at the end, but just cuts away any last character (note the missing "\" in ${1%?}) and tries to run man on the rest. So even if a file name matches, man will be run unless there is indeed a command with the same name as the matched file.
Note: This will interfere with other tools using command_not_found_handler for example the command-not-found tool from Ubuntu (if enabled for zsh).
That all being said, zsh has a widget called run-help which can be bound to a key (in Emacs mode it is bound to Alt+H by default) and then runs man for the current command.
The main advantages of using run-help over the above are:
You can call it any time while typing a longer command, as long as the command name is complete.
After you leave the manpage, the command is still there unchanged, so you can continue editing it.
You can even bind it to Alt+? to make it more similar: bindkey '^[?' run-help
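If you want to set that up, a minimal ~/.zshrc sketch might look like this (the unalias/autoload pair switches from the default run-help alias for man to zsh's more capable run-help function; treat it as optional):
unalias run-help 2>/dev/null   # zsh aliases run-help to man by default
autoload -Uz run-help          # use the smarter run-help function instead
bindkey '^[?' run-help         # bind Alt+? as suggested above (Alt+H already works in emacs mode)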

Bash variable defaulting doesn't work if followed by pipe (bash bug?)

I've just discovered a strange behaviour in bash that I don't understand. The expression
${variable:=default}
sets variable to the value default if it isn't already set. Consider the following examples:
#!/bin/bash
file ${foo:=$1}
echo "foo >$foo<"
file ${bar:=$1} | cat
echo "bar >$bar<"
The output is:
$ ./test myfile.txt
myfile.txt: ASCII text
foo >myfile.txt<
myfile.txt: ASCII text
bar ><
You will notice that the variable foo is assigned the value of $1 but the variable bar is not, even though the result of its defaulting is presented to the file command.
If you remove the innocuous pipe into cat from line 4 and re-run it, then both foo and bar get set to the value of $1.
Am I missing something here, or is this potentially a bash bug?
(GNU bash, version 4.3.30)
In the second case, file is a member of a pipeline and, like every pipeline member, runs in its own subshell. When file and its subshell exit, $bar with its new value from $1 no longer exists.
Workaround:
#!/bin/bash
file ${foo:=$1}
echo "foo >$foo<"
: "${bar:=$1}" # Parameter Expansion before subshell
file $bar | cat
echo "bar >$bar<"
It's not a bug. Parameter expansion happens when the command is evaluated, not parsed, but a command that is part of a pipeline is not evaluated until the new process has been started. Changing this, aside from likely breaking some existing code, would require an extra level of expansion before evaluation occurs.
A hypothetical bash session:
> foo=5
> bar='$foo'
> echo "$bar"
$foo
# $bar expands to '$foo' before the subshell is created, but then `$foo` expands to 5
# during the "normal" round of parameter expansion.
> echo "$bar" | cat
5
To avoid that, bash would need some way of marking pieces of text that result from the new first round of pre-evaluation parameter expansion, so that they do not undergo a second
round of evaluation. This type of bookkeeping would quickly lead to unmaintainable code as more corner cases are found to be handled. Far simpler is to just accept that parameter expansions will be deferred until after the subshell starts.
The other alternative is to allow each component to run in the current shell, something that is allowed by the POSIX standard, but is not required, either. bash made the choice long ago to execute each component in a subshell, and reversing that would break too much existing code that relies on the current behavior. (bash 4.2 did introduce the lastpipe option, allowing the last component of a pipeline to execute in the current shell if explicitly enabled.)
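If you want to see lastpipe in action, here is a minimal sketch (it needs bash 4.2+; in an interactive shell you would also have to disable job control with set +m for it to take effect):
#!/bin/bash
shopt -s lastpipe          # run the last component of a pipeline in the current shell
echo "hello" | read line   # read now executes in this shell, not in a subshell
echo "line >$line<"        # prints: line >hello<
Note that lastpipe would not rescue the question's example, since there the assignment happens in the first pipeline component, not the last.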
