Using history expansion in a bash alias or function - linux

I am trying to make a simple thing to make my teammates' lives easier. They are constantly pasting quotes into the command line that are formatted as smart quotes, which breaks the command, i.e.: “test” vs. "test"
It's proved surprisingly annoying to do with:
function damn() { !!:gs/“/" }
or:
alias damn='!!:gs/“/"'
Neither seems to work, and I keep getting either the error
-bash: !!:gs/“/" : No such file or directory
or just:
>
I must be missing something obvious here.

! does not work in functions or aliases. According to the bash manual:
History expansion is performed immediately after a complete line is read, before the shell breaks it into words.
You can use the builtin fc command:
[STEP 100] # echo $BASH_VERSION
4.4.19(1)-release
[STEP 101] # alias damn='fc -s “=\" ”=\" '
[STEP 102] # echo “test”
“test”
[STEP 103] # damn
echo "test"
test
[STEP 104] #
For quick reference, the following is the output of help fc.
fc: fc [-e ename] [-lnr] [first] [last] or fc -s [OLD=NEW] [command]
Display or execute commands from the history list.
fc is used to list or edit and re-execute commands from the history list.
FIRST and LAST can be numbers specifying the range, or FIRST can be a
string, which means the most recent command beginning with that
string.
Options:
-e ENAME select which editor to use. Default is FCEDIT, then EDITOR,
then vi
-l list lines instead of editing
-n omit line numbers when listing
-r reverse the order of the lines (newest listed first)
With the `fc -s [OLD=NEW ...] [command]' format, COMMAND is
re-executed after the substitution OLD=NEW is performed.
A useful alias to use with this is r='fc -s', so that typing `r cc'
runs the last command beginning with `cc' and typing `r' re-executes
the last command.
Exit Status:
Returns success or status of executed command; non-zero if an error occurs.

Here is a slightly more general solution using a bash function to wrap the fc call, if you want to do something to the string beyond substitution.
function damn() {
    # Capture the previous command.
    cmd=$(fc -ln -1)
    # Do whatever you want with cmd here
    cmd=$(echo "$cmd" | sed 's/[“”]/"/g')
    # Re-run the command
    eval "$cmd"
}

Related

Creating a command history pipe, how do I get rid of the line feed passed by ack?

I am manually recreating bash's history expansion for reasons beyond the scope of this question. This is to say, I know that this functionality exists with another bash method, but the way I structure my bash history, each session gets its own session history file, i.e.
HISTFILE="${HOME}/.history/$(date -u +%Y/%m/%d.%H.%M.%S)_${HOSTNAME_SHORT}_$$"
requires that I build my own history search functions.
I have written the following functions:
function historysearch {
    ack "$1" ~/.history
}
function historycopy {
    mycopy=$(historysearch $1 | ack $2 | rev | cut -d: -f1 | rev)
    echo ${mycopy%\\n} | pbcopy
}
Usage goes as follows:
$ historysearch foo
...
~/.history/2015/10/10.14.53.34_user-5_16778
6:hexedit assets/wav/foo_mu.wav
13:hexedit assets/wav/foo_mu.wav
...
Having identified the command I want, I then
$ historycopy foo 778:13
where the second argument is the last three digits of the name of the session history file, followed by : and then the digits associated with the command I want. The above copies the command I want to my system clipboard. Unfortunately, it does so with a trailing line feed, even when I run the string replace command ${mycopy%\\n} within the function. This is the rub ...
If I paste the command into the terminal it immediately executes. I would much prefer to have the command copied to the clipboard so that I would be able to paste it, alter it if necessary and then manually execute.
How do I get rid of the line feed at the end of the string passed to pbcopy?
Update: It appears that my string replace command was removing the line feed from ack but then echo was adding another line feed. Resolved with -n flag.
echo adds a line feed at the end. To avoid this, use echo -n
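For reference, a minimal sketch of the corrected function (assuming the same historysearch helper and macOS pbcopy as above); command substitution already strips the trailing newline, and echo -n avoids adding a new one:
function historycopy {
    local cmd
    # Find the command text for the given session/line pair.
    cmd=$(historysearch "$1" | ack "$2" | rev | cut -d: -f1 | rev)
    # echo -n suppresses the trailing line feed before the text reaches the clipboard.
    echo -n "$cmd" | pbcopy
}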

'less' the file specified by the output of 'which'

The command 'which' shows the path to a command.
The command 'less' opens a file.
How can I 'less' the file given by the output of 'which'?
I don't want to use two commands like below to do it.
=>which script
/file/to/script/file
=>less /file/to/script/file
This is a use case for command substitution:
less -- "$(which commandname)"
That said, if your shell is bash, consider using type -P instead, which (unlike the external command which) is built into the shell:
less -- "$(type -P commandname)"
Note the quotes: These are important for reliable operation. Without them, the command may not work correctly if the filename contains characters inside IFS (by default, whitespace) or can be evaluated as a glob expression.
The double dashes are likewise there for correctness: Any argument after them is treated as positional (as per POSIX Utility Syntax Guidelines), so even if a filename starting with a dash were to be returned (however unlikely this may be), it ensures that less treats that as a filename rather than as the beginning of a sequence of options or flags.
You may also wish to consider honoring the user's pager selection via the environment variable $PAGER, and using type without -P to look for aliases, shell functions and builtins:
cmdsource() {
    local sourcefile
    if sourcefile="$(type -P -- "$1")"; then
        "${PAGER:-less}" -- "$sourcefile"
    else
        echo "Unable to find source for $1" >&2
        echo "...checking for a shell builtin:" >&2
        type -- "$1"
    fi
}
This defines a function you can run:
cmdsource commandname
You should be able to just pipe it over; try this:
which script | less

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of the individual commands in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
$ 0 0 0
This works fine on the command line, but when I put the code in a bash script it shows only one exit status. My default SHELL on the command line is bash.
Could somebody please help me understand why this behaviour changes? And what should I do to get this to work in the script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string to hold a command (see Bash FAQ 050 for more on why doing this is a bad idea); there is a sketch of this approach after this list.
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
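A minimal sketch of what the first choice could look like, reusing the file names from the question (an illustration, not necessarily the exact script the answer had in mind):
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
# Run the pipeline directly, without eval, so PIPESTATUS gets one entry per stage.
find_fun 2>/dev/null | /bin/pax -dwx ustar | /bin/gzip -c > "$backfile.tar.gz"
# Copy PIPESTATUS immediately; any later command would overwrite it.
status=("${PIPESTATUS[@]}")
echo -e "find ${status[0]} \npax ${status[1]} \ncompress ${status[2]}" > "$cmdfile"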
Instead of ${PIPESTATUS[0]} use ${PIPESTATUS[@]}
As with any array in bash, PIPESTATUS[0] contains the exit status of the first command. If you want to get all of them you have to use PIPESTATUS[@], which returns the entire contents of the array.
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.

Pass a full bash script line to another bash function to execute

In the bash code below, the variable ECHO_ALL is a global set to either 'yes' or 'no', based on parsing the input options.
--- begin of ~/scripts/util/util-optout.sh ---
########################################
# @param  $@
# @return the return value from $@
# @brief  A wrapper function to allow
#         for OPTional OUTput of any
#         command w/wo args
########################################
optout()
{
    if [ "${ECHO_ALL}" = 'no' ]; then
        "$@" 1>/dev/null 2>&1
        return $?
    else
        "$@"
        return $?
    fi
}
--- end of file ---
In another bash file I source the above util-optout.sh file and use the optout() function to allow for conditional output; essentially, it allows conditional redirection of any command's output to /dev/null to make scripts silent.
For example, in some other build script I have:
source ~/scripts/util/util-optout.sh
optout pushd ${ZLIB_DIR}
optout rm -vf config.cache
optout CC=${BUILD_TOOL_CC} ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
# ^^^^^^^^^^^^^^^^^^^
# ^ this breaks my optout() command
# my optout() fails when there are prefixed bash env vars set like CC=${...} before ./configure
optout popd
optout make -C ${ZLIB_DIR} ${ZLIB_COMPILER_OPT} all
optout make -C ${ZLIB_DIR} install
For simple commands with any type of parameters after them, like 'pushd' or 'rm', optout() works great.
Even the optout make -C ones work fine.
But it gives me an error for commands that have prefix env vars set, like the optout CC=${...} ./configure one:
utils/util-optout.sh: line 33: CC=gcc: command not found
Is there a way to make my optout() function work for ANY possible valid bash script line?
I know it has something to do with the use of "$@" or "$*" in my optout() function, but I have studied the bash man pages in detail and I can't make it work for all possible bash line cases.
So far the only way to get past this limitation with my optout() is the following 3-line style, which is annoying.
export CC=${BUILD_TOOL_CC}
optout ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
unset CC
Any ideas on how to reduce it all back down to a single optout ... line?
optout is a command like any other, and so must be preceded by any local modifications to the environment. The command that optout runs will inherit that environment.
CC=${BUILD_TOOL_CC} optout ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}
By the way, this is just one of the problems you are likely to encounter with your optout function. You cannot run arbitrary command lines in that fashion, only a simple command followed by zero or more arguments (and I would expect there are some exceptions to even that restricted set, as well).
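If a single line is still wanted, one possible workaround (a sketch, not something claimed by the answer above) is to run the command through env, which turns the assignment into an argument of an ordinary command that optout can execute via "$@":
optout env CC=${BUILD_TOOL_CC} ./configure ${ZLIB_CONFIGURE_OPT} --prefix=${CURR_DIR}/${INSTALL_DIR}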

How to show line number when executing bash script

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e, so the script stops when an error occurs. However, it's still rather difficult for me to locate at which line the execution stopped, in order to find the problem.
Is there a method which can output the line number of the script before each line is executed?
Or output the line number before the command trace generated by set -x?
Any method which can deal with my script line location problem would be a great help.
Thanks.
You mention that you're already using -x. The variable PS4 holds the prompt printed before each command line is echoed when the -x option is set; it defaults to + followed by a space.
You can change PS4 to emit LINENO (the line number in the script or shell function currently executing).
For example, if your script reads:
$ cat script
foo=10
echo ${foo}
echo $((2 + 2))
Executing it thus would print line numbers:
$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4
http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:
export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
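For illustration, a small hypothetical script (the file and function names here are made up) showing how that PS4 can be used; each traced command is then prefixed with the source file, the line number and, inside functions, the function name:
#!/bin/bash
# trace-demo.sh (hypothetical example)
export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
set -x
greet() {
    echo "hello"
}
greet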
In Bash, $LINENO contains the line number of the script line that is currently executing.
If you need to know the line number where the function was called, try $BASH_LINENO. Note that this variable is an array.
For example:
#!/bin/bash
function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}
function foo() {
    log "$@"
}
foo "$@"
See here for details of Bash variables.
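Since the original problem is finding where a set -e script stops, it may also be worth sketching an ERR trap that reports $LINENO (an assumption about the workflow, not part of the answer above):
#!/bin/bash
set -e
# Report the file and line of the failing command before set -e aborts the script.
trap 'echo "Error at ${BASH_SOURCE[0]}:${LINENO}" >&2' ERR
echo "this works"
false            # the trap fires here and prints this line's number
echo "never reached"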
Setting PS4 to a value that uses $LINENO is what you need.
E.g. take the following script (myScript.sh):
#!/bin/bash -xv
PS4='${LINENO}: '
echo "Hello"
echo "World"
Output would be:
./myScript.sh
+echo Hello
3 : Hello
+echo World
4 : World
Workaround for shells without LINENO
In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.
Define a function
echo_line_no () {
    grep -n "$1" $0 | sed "s/echo_line_no//"
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing
} # echo_line_no
Use it with quotes like
echo_line_no "this is a simple comment with a line number"
Output is
16 "this is a simple comment with a line number"
if the number of this line in the source file is 16.
This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO.
Anything more to add?
Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?
Want to know more? Read reflections on debugging
Simple (but powerful) solution: Place an echo around the code you think causes the problem and move it line by line until the message no longer appears on screen, because the script stopped at an error before reaching it.
Even more powerful solution: Install bashdb, the bash debugger, and debug the script line by line.
If you're using $LINENO within a function, it will cache the first occurrence. Instead use ${BASH_LINENO[0]}
