how to redirect command output in bash shell? - linux

I'm having a problem writing a small bash command. Basically I want to echo the wrapper command and redirect the output of the real command to a log file.
Something like this in my .bashrc doesn't work -- the output still gets to the console.
cmd="some_command >& output.log";
echo $cmd;
$cmd;
But the following works -- the output is directed into the log file.
cmd = "some_command";
echo $cmd" >& output.log";
$cmd >& output.log;
What is wrong with the first method? How to fix it?
Thanks!

Using eval works, but is bad practice for security reasons. The Right Thing, when you need to perform redirections inside code stored for reuse, is to define a function:
cmd() { some_command &> output.log; } # define it
declare -f cmd # print it
cmd # run it
If you don't need redirections, then the right thing is an array:
cmd=( something 'with spaces' 'in args' ) # define it
printf '%q ' "${cmd[@]}"; echo # print it
"${cmd[@]}" # run it
This is safer, inasmuch as array contents won't go through a full eval pass. Think about if you did cmd="something-with $filename", and filename contained $(rm -rf /). If you used eval, this would run the rm command!
To provide a more specific example, this would hose your system if run as root:
# !!! I AM DANGEROUS DO NOT RUN ME !!!
evil_filename='/tmp/foo $(rm -rf /)'
cmd="echo $evil_filename" # define it (BROKEN!)
eval "$cmd" # run it (DANGEROUS!)
On the other hand, this would be safe:
evil_filename='/tmp/foo $(rm -rf /)'
cmd=( echo "$evil_filename" ) # define it (OK!)
printf '%q ' "${cmd[@]}"; echo # print it (OK!)
"${cmd[@]}" # run it (OK!)
...and it would still be safe even if you left out some of the quotes -- it would work wrong, but still not break your system:
# I'm broken, but not in a way that damages system security
evil_filename='/tmp/foo $(rm -rf /)'
cmd=( echo $evil_filename ) # define it (BROKEN!)
${cmd[@]} # run it (BROKEN!)
And this would be safe too:
evil_filename='/tmp/foo $(rm -rf /)'
cmd() { echo "$1"; } # define it (OK!)
cmd "$evil_filename" # run it (OK!)
For a more in-depth discussion, see BashFAQ #50 (on properly storing command sequences for reuse), and BashFAQ #48 (on why eval is dangerous).

What is wrong with the first method?
When you include the redirection operators within a variable, the shell doesn't treat those as special. Instead those are considered as arguments to the program in question.
One solution is to make use of eval:
cmd="some command >& output.log";
eval $cmd;
As an aside, the following is wrong:
cmd = "some command";
You cannot have spaces around = in a variable assignment.
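For illustration, a minimal sketch of what the space error looks like in practice: with the spaces, bash parses cmd as a command name and treats = and the string as its arguments, so you typically get something like
$ cmd = "some command"
bash: cmd: command not found
Without the spaces, cmd="some command" is an ordinary variable assignment and sets the variable as intended.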

Related

Passing values to a specific command in a chained command using an alias [duplicate]

I used to use CShell (csh), which lets you make an alias that takes a parameter. The notation was something like
alias junk="mv \\!* ~/.Trash"
In Bash, this does not seem to work. Given that Bash has a multitude of useful features, I would assume that this one has been implemented but I am wondering how.
Bash alias does not directly accept parameters. You will have to create a function.
alias does not accept parameters but a function can be called just like an alias. For example:
myfunction() {
#do things with parameters like $1 such as
mv "$1" "$1.bak"
cp "$2" "$1"
}
myfunction old.conf new.conf #calls `myfunction`
By the way, Bash functions defined in your .bashrc and other files are available as commands within your shell. So for instance you can call the earlier function like this
$ myfunction original.conf my.conf
Refining the answer above, you can get 1-line syntax like you can for aliases, which is more convenient for ad-hoc definitions in a shell or .bashrc files:
bash$ myfunction() { mv "$1" "$1.bak" && cp -i "$2" "$1"; }
bash$ myfunction original.conf my.conf
Don't forget the semicolon before the closing curly brace. Similarly, for the actual question:
csh% alias junk="mv \\!* ~/.Trash"
bash$ junk() { mv "$@" ~/.Trash/; }
Or:
bash$ junk() { for item in "$@" ; do echo "Trashing: $item" ; mv "$item" ~/.Trash/; done; }
The question is simply asked wrong. You don't make an alias that takes parameters because alias just adds a second name for something that already exists. The functionality the OP wants is the function command to create a new function. You do not need to alias the function as the function already has a name.
I think you want something like this :
function trash() { mv "$@" ~/.Trash; }
That's it! You can use parameters $1, $2, $3, etc, or just stuff them all with $@
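For example (assuming ~/.Trash already exists as a directory; the file names are made up):
$ touch old.log notes.txt
$ trash old.log notes.txt
$ ls ~/.Trash
notes.txt  old.log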
TL;DR: Do this instead
It's far easier and more readable to use a function than an alias to put arguments in the middle of a command.
$ wrap_args() { echo "before $@ after"; }
$ wrap_args 1 2 3
before 1 2 3 after
If you read on, you'll learn things that you don't need to know about shell argument processing. Knowledge is dangerous. Just get the outcome you want, before the dark side forever controls your destiny.
Clarification
bash aliases do accept arguments, but only at the end:
$ alias speak=echo
$ speak hello world
hello world
Putting arguments into the middle of a command via an alias is indeed possible, but it gets ugly.
Don't try this at home, kiddies!
If you like circumventing limitations and doing what others say is impossible, here's the recipe. Just don't blame me if your hair gets frazzled and your face ends up covered in soot mad-scientist-style.
The workaround is to pass the arguments that alias accepts only at the end to a wrapper that will insert them in the middle and then execute your command.
Solution 1
If you're really against using a function per se, you can use:
$ alias wrap_args='f(){ echo before "$@" after; unset -f f; }; f'
$ wrap_args x y z
before x y z after
You can replace $@ with $1 if you only want the first argument.
Explanation 1
This creates a temporary function f, which is passed the arguments (note that f is called at the very end). The unset -f removes the function definition as the alias is executed so it doesn't hang around afterwards.
Solution 2
You can also use a subshell:
$ alias wrap_args='sh -c '\''echo before "$@" after'\'' _'
Explanation 2
The alias builds a command like:
sh -c 'echo before "$@" after' _
Comments:
The placeholder _ is required, but it could be anything. It gets set to sh's $0, and is required so that the first of the user-given arguments doesn't get consumed. Demonstration:
sh -c 'echo Consumed: "$0" Printing: "$@"' alcohol drunken babble
Consumed: alcohol Printing: drunken babble
The single-quotes inside single-quotes are required. Here's an example of it not working with double quotes:
$ sh -c "echo Consumed: $0 Printing: $@" alcohol drunken babble
Consumed: -bash Printing:
Here the values of the interactive shell's $0 and $@ are substituted into the double-quoted string before it is passed to sh. Here's proof:
echo "Consumed: $0 Printing: $@"
Consumed: -bash Printing:
The single quotes ensure that these variables are not interpreted by the interactive shell, and are passed literally to sh -c.
You could use double-quotes and \$@, but best practice is to quote your arguments (as they may contain spaces), and \"\$@\" looks even uglier, but may help you win an obfuscation contest where frazzled hair is a prerequisite for entry.
All you have to do is make a function inside an alias:
$ alias mkcd='_mkcd(){ mkdir "$1"; cd "$1";}; _mkcd'
^ * ^ ^ ^ ^ ^
You must put double quotes around "$1" because single quotes will not work. This is because clashing the quotes at the places marked with arrows confuses the system. Also, a space at the place marked with a star is needed for the function.
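A quick usage sketch (the directory name and path are made up):
$ mkcd demo
$ pwd
/home/user/demo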
Once I did a fun project and I'm still using it. It shows some animation while copying files via the cp command, because cp doesn't show anything and that's kind of frustrating. So I've made this alias for cp:
alias cp="~/SCR/spinner cp"
And this is the spinner script
#!/bin/bash
#Set timer
T=$(date +%s)
#Add some color
. ~/SCR/color
#Animation sprites
sprite=( "(* ) ( *)" " (* )( *) " " ( *)(* ) " "( *) (* )" "(* ) ( *)" )
#Print empty line and hide cursor
printf "\n${COF}"
#Exit function
function bye { printf "${CON}"; [ -e /proc/$pid ] && kill -9 $pid; exit; }; trap bye INT
#Run our command and get its pid
"$#" & pid=$!
#Waiting animation
i=0; while [ -e /proc/$pid ]; do sleep 0.1
printf "\r${GRN}Please wait... ${YLW}${sprite[$i]}${DEF}"
((i++)); [[ $i = ${#sprite[@]} ]] && i=0
done
#Print time and exit
T=$(($(date +%s)-$T))
printf "\n\nTime taken: $(date -u -d #${T} +'%T')\n"
bye
It looks like this: (the original post embedded animated GIFs of the spinner here, plus a link to the color script sourced above).
So the answer to the OP's question is to use an intermediate script that can shuffle the args as you wish.
An alternative solution is to use marker, a tool I've created recently that allows you to "bookmark" command templates and easily place the cursor at command place-holders:
I found that most of the time I'm using shell functions so I don't have to write frequently used commands again and again in the command line. The issue with using functions for this use case is adding new terms to my command vocabulary and having to remember what the function's parameters refer to in the real command. Marker's goal is to eliminate that mental burden.
Syntax:
alias shortName="your custom command here"
Example:
alias tlogs='_t_logs() { tail -f ../path/$1/to/project/logs.txt ;}; _t_logs'
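For instance, a hypothetical call with the alias above would expand like this (the project name is made up):
$ tlogs myproject
# runs: tail -f ../path/myproject/to/project/logs.txt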
Bash alias absolutely does accept parameters. I just added an alias to create a new react app which accepts the app name as a parameter. Here's my process:
Open the bash_profile for editing in nano
nano ~/.bash_profile
Add your aliases, one per line:
alias gita='git add .'
alias gitc='git commit -m "$@"'
alias gitpom='git push origin master'
alias creact='npx create-react-app "$@"'
note: the "$@" accepts parameters passed in like "creact my-new-app"
Save and exit nano editor
ctrl+o to write (hit enter); ctrl+x to exit
Tell terminal to use the new aliases in .bash_profile
source ~/.bash_profile
That's it! You can now use your new aliases
Here are three examples of functions I have in my ~/.bashrc that are essentially aliases that accept a parameter:
#Utility required by all below functions.
#https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-bash-variable#comment21953456_3232433
alias trim="sed -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*\$//g'"
.
:<<COMMENT
Alias function for recursive deletion, with are-you-sure prompt.
Example:
srf /home/myusername/django_files/rest_tutorial/rest_venv/
Parameter is required, and must be at least one non-whitespace character.
Short description: Stored in SRF_DESC
With the following setting, this is *not* added to the history:
export HISTIGNORE="*rm -r*:srf *"
- https://superuser.com/questions/232885/can-you-share-wisdom-on-using-histignore-in-bash
See:
- y/n prompt: https://stackoverflow.com/a/3232082/2736496
- Alias w/param: https://stackoverflow.com/a/7131683/2736496
COMMENT
#SRF_DESC: For "aliaf" command (with an 'f'). Must end with a newline.
SRF_DESC="srf [path]: Recursive deletion, with y/n prompt\n"
srf() {
#Exit if no parameter is provided (if it's the empty string)
param=$(echo "$1" | trim)
echo "$param"
if [ -z "$param" ] #http://tldp.org/LDP/abs/html/comparison-ops.html
then
echo "Required parameter missing. Cancelled"; return
fi
#Actual line-breaks required in order to expand the variable.
#- https://stackoverflow.com/a/4296147/2736496
read -r -p "About to
sudo rm -rf \"$param\"
Are you sure? [y/N] " response
response=${response,,} # tolower
if [[ $response =~ ^(yes|y)$ ]]
then
sudo rm -rf "$param"
else
echo "Cancelled."
fi
}
.
:<<COMMENT
Delete item from history based on its line number. No prompt.
Short description: Stored in HX_DESC
Examples
hx 112
hx 3
See:
- https://unix.stackexchange.com/questions/57924/how-to-delete-commands-in-history-matching-a-given-string
COMMENT
#HX_DESC: For "aliaf" command (with an 'f'). Must end with a newline.
HX_DESC="hx [linenum]: Delete history item at line number\n"
hx() {
history -d "$1"
}
.
:<<COMMENT
Deletes all lines from the history that match a search string, with a
prompt. The history file is then reloaded into memory.
Short description: Stored in HXF_DESC
Examples
hxf "rm -rf"
hxf ^source
Parameter is required, and must be at least one non-whitespace character.
With the following setting, this is *not* added to the history:
export HISTIGNORE="*hxf *"
- https://superuser.com/questions/232885/can-you-share-wisdom-on-using-histignore-in-bash
See:
- https://unix.stackexchange.com/questions/57924/how-to-delete-commands-in-history-matching-a-given-string
COMMENT
#HXF_DESC: For "aliaf" command (with an 'f'). Must end with a newline.
HXF_DESC="hxf [searchterm]: Delete all history items matching search term, with y/n prompt\n"
hxf() {
#Exit if no parameter is provided (if it's the empty string)
param=$(echo "$1" | trim)
echo "$param"
if [ -z "$param" ] #http://tldp.org/LDP/abs/html/comparison-ops.html
then
echo "Required parameter missing. Cancelled"; return
fi
read -r -p "About to delete all items from history that match \"$param\". Are you sure? [y/N] " response
response=${response,,} # tolower
if [[ $response =~ ^(yes|y)$ ]]
then
#Delete all matched items from the file, and duplicate it to a temp
#location.
grep -v "$param" "$HISTFILE" > /tmp/history
#Clear all items in the current sessions history (in memory). This
#empties out $HISTFILE.
history -c
#Overwrite the actual history file with the temp one.
mv /tmp/history "$HISTFILE"
#Now reload it.
history -r "$HISTFILE" #Alternative: exec bash
else
echo "Cancelled."
fi
}
References:
Trimming whitespace from strings: How to trim whitespace from a Bash variable?
Actual line breaks: https://stackoverflow.com/a/4296147/2736496
Alias w/param: https://stackoverflow.com/a/7131683/2736496 (another answer in this question)
HISTIGNORE: https://superuser.com/questions/232885/can-you-share-wisdom-on-using-histignore-in-bash
Y/N prompt: https://stackoverflow.com/a/3232082/2736496
Delete all matching items from history: https://unix.stackexchange.com/questions/57924/how-to-delete-commands-in-history-matching-a-given-string
Is string null/empty: http://tldp.org/LDP/abs/html/comparison-ops.html
Respectfully, to all those saying you can't insert a parameter in the middle of an alias: I just tested it and found that it did work.
alias mycommand = "python3 "$1" script.py --folderoutput RESULTS/"
when I then ran mycommand foobar it worked exactly as if I had typed the command out longhand.
NB: In case the idea isn't obvious, it is a bad idea to use aliases for anything but aliases; the first one below being the 'function in an alias' and the second one being the 'hard to read redirect/source'. Also, there are flaws (which I thought would be obvious, but just in case you are confused: I do not mean them to actually be used... anywhere!)
I've answered this before, and it has always been like this in the past:
alias foo='__foo() { unset -f __foo; echo "arg1 for foo=$1"; }; __foo'
which is fine and good, unless you are avoiding the use of functions altogether, in which case you can take advantage of bash's vast ability to redirect text:
alias bar='cat <<< '\''echo arg1 for bar=$1'\'' | source /dev/stdin'
They are both about the same length give or take a few characters.
The real kicker is the time difference, the top being the 'function method' and the bottom being the 'redirect-source' method. To prove this theory, the timing speaks for itself:
arg1 for foo=FOOVALUE
real 0m0.011s user 0m0.004s sys 0m0.008s # <--time spent in foo
real 0m0.000s user 0m0.000s sys 0m0.000s # <--time spent in bar
arg1 for bar=BARVALUE
ubuntu@localhost /usr/bin# time foo FOOVALUE; time bar BARVALUE
arg1 for foo=FOOVALUE
real 0m0.010s user 0m0.004s sys 0m0.004s
real 0m0.000s user 0m0.000s sys 0m0.000s
arg1 for bar=BARVALUE
ubuntu@localhost /usr/bin# time foo FOOVALUE; time bar BARVALUE
arg1 for foo=FOOVALUE
real 0m0.011s user 0m0.000s sys 0m0.012s
real 0m0.000s user 0m0.000s sys 0m0.000s
arg1 for bar=BARVALUE
ubuntu@localhost /usr/bin# time foo FOOVALUE; time bar BARVALUE
arg1 for foo=FOOVALUE
real 0m0.012s user 0m0.004s sys 0m0.004s
real 0m0.000s user 0m0.000s sys 0m0.000s
arg1 for bar=BARVALUE
ubuntu@localhost /usr/bin# time foo FOOVALUE; time bar BARVALUE
arg1 for foo=FOOVALUE
real 0m0.010s user 0m0.008s sys 0m0.004s
real 0m0.000s user 0m0.000s sys 0m0.000s
arg1 for bar=BARVALUE
This is the bottom part of about 200 results, done at random intervals. It seems that function creation/destruction takes more time than redirection. Hopefully this will help future visitors to this question (didn't want to keep it to myself).
If you're looking for a generic way to apply all params to a function, not just one or two or some other hardcoded amount, you can do that this way:
#!/usr/bin/env bash
# you would want to `source` this file, maybe in your .bash_profile?
function runjar_fn(){
java -jar myjar.jar "$@";
}
alias runjar=runjar_fn;
So in the example above, I pass all parameters from when I run runjar to the alias.
For example, if I did runjar hi there it would end up actually running java -jar myjar.jar hi there. If I did runjar one two three it would run java -jar myjar.jar one two three.
I like this $@-based solution because it works with any number of params.
There are legitimate technical reasons to want a generalized solution to the problem of bash aliases not having a mechanism to take and reposition arbitrary arguments. One reason is if the command you wish to execute would be adversely affected by the changes to the environment that result from executing a function. In all other cases, functions should be used.
What recently compelled me to attempt a solution to this is that I wanted to create some abbreviated commands for printing the definitions of variables and functions. So I wrote some functions for that purpose. However, there are certain variables which are (or may be) changed by a function call itself. Among them are:
FUNCNAME
BASH_SOURCE
BASH_LINENO
BASH_ARGC
BASH_ARGV
The basic command I had been using (in a function) to print variable defns. in the form output by the set command was:
sv () { set | grep --color=never -- "^$1=.*"; }
E.g.:
> V=voodoo
sv V
V=voodoo
Problem: This won't print the definitions of the variables mentioned above as they are in the current context, e.g., if in an interactive shell prompt (or not in any function calls), FUNCNAME isn't defined. But my function tells me the wrong information:
> sv FUNCNAME
FUNCNAME=([0]="sv")
One solution I came up with has been mentioned by others in other posts on this topic. For this specific command to print variable defns., and which requires only one argument, I did this:
alias asv='(grep -- "^$(cat -)=.*" <(set)) <<<'
Which gives the correct output (none), and result status (false):
> asv FUNCNAME
> echo $?
1
However, I still felt compelled to find a solution that works for arbitrary numbers of arguments.
A General Solution To Passing Arbitrary Arguments To A Bash Aliased Command:
# (I put this code in a file "alias-arg.sh"):
# cmd [arg1 ...] – an experimental command that optionally takes args,
# which are printed as "cmd(arg1 ...)"
#
# Also sets global variable "CMD_DONE" to "true".
#
cmd () { echo "cmd($@)"; declare -g CMD_DONE=true; }
# Now set up an alias "ac2" that passes to cmd two arguments placed
# after the alias, but passes them to cmd with their order reversed:
#
# ac2 cmd_arg2 cmd_arg1 – calls "cmd" as: "cmd cmd_arg1 cmd_arg2"
#
alias ac2='
# Set up cmd to be execed after f() finishes:
#
trap '\''cmd "${CMD_ARGV[1]}" "${CMD_ARGV[0]}"'\'' SIGUSR1;
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# (^This is the actually execed command^)
#
# f [arg0 arg1 ...] – acquires args and sets up trap to run cmd:
f () {
declare -ag CMD_ARGV=("$@"); # array to give args to cmd
kill -SIGUSR1 $$; # this causes cmd to be run
trap SIGUSR1; # unset the trap for SIGUSR1
unset CMD_ARGV; # clean up env...
unset f; # incl. this function!
};
f' # Finally, exec f, which will receive the args following "ac2".
E.g.:
> . alias-arg.sh
> ac2 one two
cmd(two one)
>
> # Check to see that command run via trap affects this environment:
> asv CMD_DONE
CMD_DONE=true
A nice thing about this solution is that all the special tricks used to handle positional parameters (arguments) to commands will work when composing the trapped command. The only difference is that array syntax must be used.
E.g.,
If you want "$@", use "${CMD_ARGV[@]}".
If you want "$#", use "${#CMD_ARGV[@]}".
Etc.
I will just post my (hopefully okay) solution
(for future readers, and most vitally, editors).
So please edit and improve/remove anything in this post.
In the terminal:
$ alias <name_of_your_alias>_$argname="<command> $argname"
and to use it (notice the space after '_'):
$<name_of_your_alias>_ $argname
for example, an alias to cat a file called hello.txt:
(the alias name is CAT_FILE_,
and $f is the $argname, which is a file in this example)
$ alias CAT_FILE_$f="cat $f"
$ echo " " >> hello.txt
$ echo "hello there!" >> hello.txt
$ echo " " >> hello.txt
$ cat hello.txt
hello there!
Test (notice the space after '_'):
CAT_FILE_ hello.txt
As has already been pointed out by others, using a function should be considered best practice.
However, here is another approach, leveraging xargs:
alias junk="xargs -I "{}" -- mv "{}" "~/.Trash" <<< "
Note that this has side effects regarding redirection of streams.
Solution with subcommands:
d () {
if [ $# -eq 0 ] ; then
docker
return 0
fi
CMD=$1
shift
case $CMD in
p)
docker ps --all $@
;;
r)
docker run --interactive --tty $@
;;
rma)
docker container prune
docker image prune --filter "dangling=true"
;;
*)
docker $CMD $@
;;
esac
return $?
}
Using:
$ d r my_image ...
Called:
docker run --interactive --tty my_image ...
Here's the example:
alias gcommit='function _f() { git add -A; git commit -m "$1"; } ; _f'
Very important:
There is a space after { and before }.
There is a ; after each command in sequence. If you forget this after the last command, you will see a > prompt instead!
The argument is enclosed in quotes as "$1"
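A usage sketch (the commit message is made up):
$ gcommit "Fix typo in README"
# runs: git add -A; git commit -m "Fix typo in README"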
To give a specific answer to the question posed, about creating an alias to move files to a Trash folder instead of deleting them:
alias rm="mv "$1" -t ~/.Trash/"
Of course you have to create the dir ~/.Trash first.
Then just give following command:
$rm <filename>
$rm <dirname>
Here is another approach using read. I am using this for a brute-force search of a file by its name fragment, ignoring the "permission denied" messages.
alias loc0='( IFS= read -r x; find . -iname "*" -print 2>/dev/null | grep $x;) <<<'
A simple example:
$ ( IFS= read -r x; echo "1 $x 2 ";) <<< "a b"
1 a b 2
Note that this converts the argument, as a string, into variable(s). One could use several parameters within the quotes for this, space separated:
$ ( read -r x0 x1; echo "1 ${x0} 2 ${x1} 3 ";) <<< "a b"
1 a 2 b 3
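So loc0 config expands to the subshell with <<< config appended; read puts that here-string into $x and grep filters the find output with it. A hypothetical session (the file names are made up):
$ loc0 config
./etc/app/config.yaml
./backups/old-config.txt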
Functions are indeed almost always the answer as already amply contributed and confirmed by this quote from the man page: "For almost every purpose, aliases are superseded by shell functions."
For completeness and because this can be useful (marginally more lightweight syntax) it could be noted that when the parameter(s) follow the alias, they can still be used (although this wouldn't address the OP's requirement). This is probably easiest to demonstrate with an example:
alias ssh_disc='ssh -O stop'
allows me to type something like ssh_disc myhost, which gets expanded as expected to: ssh -O stop myhost
This can be useful for commands which take complex arguments (my memory isn't what it used to be anymore...)
For taking parameters, you should use functions!
However, $@ gets interpreted when creating the alias instead of during the execution of the alias, and escaping the $ doesn't work either. How do I solve this problem?
You need to use shell function instead of an alias to get rid of this problem. You can define foo as follows:
function foo() { /path/to/command "$@" ;}
OR
foo() { /path/to/command "$@" ;}
Finally, call your foo() using the following syntax:
foo arg1 arg2 argN
Make sure you add your foo() to ~/.bash_profile or ~/.zshrc file.
In your case, this will work
function trash() { mv "$@" ~/.Trash; }
Both functions and aliases can use parameters as others have shown here. Additionally, I would like to point out a couple of other aspects:
1. function runs in its own scope, alias shares scope
It may be useful to know this difference in cases you need to hide or expose something. It also suggests that a function is the better choice for encapsulation.
function tfunc(){
GlobalFromFunc="Global From Func" # Function set global variable by default
local LocalFromFunc="onetwothree from func" # Set a local variable
}
alias talias='local LocalFromAlias="Local from Alias"; GlobalFromAlias="Global From Alias" # Cant hide a variable with local here '
# Test variables set by tfunc
tfunc # call tfunc
echo $GlobalFromFunc # This is visible
echo $LocalFromFunc # This is not visible
# Test variables set by talias
# call talias
talias
echo $GlobalFromAlias # This is visible
echo $LocalFromAlias # This variable is unset and unusable
Output:
bash-3.2$ # Test variables set by tfunc
bash-3.2$ tfunc # call tfunc
bash-3.2$ echo $GlobalFromFunc # This is visible
Global From Func
bash-3.2$ echo $LocalFromFunc # This is not visible
bash-3.2$ # Test variables set by talias
bash-3.2$ # call talias
bash-3.2$ talias
bash: local: can only be used in a function
bash-3.2$ echo $GlobalFromAlias # This is visible
Global From Alias
bash-3.2$ echo $LocalFromAlias # This variable is unset and unusable
2. wrapper script is a better choice
It has happened to me several times that an alias or function cannot be found when logging in via ssh, or when switching usernames, or in a multi-user environment. There are tips and tricks with sourcing dot files, or this interesting one with alias: alias sd='sudo ' lets the subsequent alias alias install='sd apt-get install' work as expected (notice the extra space in sd='sudo '). However, a wrapper script works better than a function or alias in cases like this. The main advantage of a wrapper script is that it is visible/executable under the intended path (e.g. /usr/local/bin/), whereas a function/alias needs to be sourced before it is usable. For example, you put a function in ~/.bash_profile or ~/.bashrc for bash, but later switch to another shell (e.g. zsh) and then the function is not visible anymore.
So, when you are in doubt, a wrapper script is always the most reliable and portable solution.
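As a minimal sketch of the wrapper-script approach applied to the OP's junk example (the path and exact behaviour here are assumptions, adjust as needed), you could save something like this as /usr/local/bin/junk and make it executable:
#!/usr/bin/env bash
# junk - hypothetical wrapper script: move files to ~/.Trash instead of deleting them
mkdir -p "$HOME/.Trash"
mv -- "$@" "$HOME/.Trash"/
Because it lives on PATH rather than in a dot file, it keeps working across shells and ssh sessions without being sourced.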
alias junk="delay-arguments mv _ ~/.Trash"
delay-arguments script:
#!/bin/bash
# Example:
# > delay-arguments echo 1 _ 3 4 2
# 1 2 3 4
# > delay-arguments echo "| o n e" _ "| t h r e e" "| f o u r" "| t w o"
# | o n e | t w o | t h r e e | f o u r
RAW_ARGS=("$@")
ARGS=()
ARG_DELAY_MARKER="_"
SKIPPED_ARGS=0
SKIPPED_ARG_NUM=0
RAW_ARGS_COUNT="$#"
for ARG in "$@"; do
#echo $ARG
if [[ "$ARG" == "$ARG_DELAY_MARKER" ]]; then
SKIPPED_ARGS=$((SKIPPED_ARGS+1))
fi
done
for ((I=0; I<$RAW_ARGS_COUNT-$SKIPPED_ARGS; I++)); do
ARG="${RAW_ARGS[$I]}"
if [[ "$ARG" == "$ARG_DELAY_MARKER" ]]; then
MOVE_SOURCE_ARG_NUM=$(($RAW_ARGS_COUNT-$SKIPPED_ARGS+$SKIPPED_ARG_NUM))
MOVING_ARG="${RAW_ARGS[$MOVE_SOURCE_ARG_NUM]}"
if [[ "$MOVING_ARG" == "$ARG_DELAY_MARKER" ]]; then
echo "Error: Not enough arguments!"
exit 1;
fi
#echo "Moving arg: $MOVING_ARG"
ARGS+=("$MOVING_ARG")
SKIPPED_ARG_NUM=$(($SKIPPED_ARG_NUM+1))
else
ARGS+=("$ARG")
fi
done
#for ARG in "${ARGS[@]}"; do
#echo "ARGN: $ARG"
#done
#echo "RAW_ARGS_COUNT: $RAW_ARGS_COUNT"
#echo "SKIPPED_ARGS: $SKIPPED_ARGS"
#echo "${ARGS[#]}"
QUOTED_ARGS=$(printf ' %q' "${ARGS[#]}")
eval "${QUOTED_ARGS[#]}"

How do I properly use SSH heredoc?

This question is somewhat related to the question I asked here, but it has not been adequately answered. What interests me here is the following:
When I run the command type -t test on a remote computer, I get the answer 'function' because the 'test' is an existing function inside the .bashrc file on the remote computer.
However, when I run this SSH command on the local computer,
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -f test)
echo $VAR
EOSSH
)"
echo $s
I don't get anything printed. The first question would be how do I make this work?
The second question builds on the previous one. That is, my ultimate goal is to define on a local computer which function I want to check on a remote computer and come up with an adequate answer, ie.:
a="test"
s="$(
ssh -T $HOST <<'EOSSH'
VAR=$(type -f $a)
echo $VAR
EOSSH
)"
echo $s
So, I would like the variable s to be equal to 'function'. How to do it?
how do I make this work?
Either load .bashrc (. .bashrc) or start an interactive session (bash -i).
Because your work is non-interactive, if you want .bashrc loaded and it has no protection against non-interactive use, just load it. If not, maybe move your function somewhere else, into something you can source. If not, be prepared that an interactive session may print /etc/motd and /etc/issue and other interactive stuff.
Remove -T - you do not need a tty for non-interactive work.
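A minimal sketch of that first suggestion, assuming the remote login shell is bash and the remote ~/.bashrc defines the function without bailing out for non-interactive shells:
s="$(
ssh "$HOST" <<'EOSSH'
. ~/.bashrc        # load the remote .bashrc so its functions are defined
type -t test
EOSSH
)"
echo "$s"          # prints: function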
I would like the variable s to be equal to 'function'. How to do it?
I recommend using declare to transfer all the work and context that you need, which is flexible, works generically, preserves STDIN, and doesn't require you to deal with the intricacies of escaping inside a here document. Specifically, request a bash shell from the remote and use printf "%q" to properly escape all the data.
functions_to_check=(a b c)
fn_exists() { [[ "$(LC_ALL=C type -t -- "$1" 2>/dev/null)" = function ]]; }
work() {
for f in "${functions_to_check[@]}"; do
if fn_exists "$f"; then
echo "Great - function $f exists!"
else
echo "Och nuu - no function $f!"
fi
done
}
ssh "$host" "$(printf "%q " bash -c "
$(declare -p functions_to_check) # transfer variables
$(declare -f fn_exists work) # transfer functions
work # run the work to do
")"

prompt list of files before execution of rm

I started using "sudo rm -r" to delete files/directories. I even put it as an alias of rm.
I normally know what I am doing and I am quite an experienced linux user.
However, I would like that when I press the "ENTER", before the execution of rm, a list of files will show up on the screen and a prompt at the end to OK the deletion of files.
Options -i, -I, -v do not do what I want. I want only one prompt for all the printed files on screen.
Thank you.
##
# Double-check files to delete.
delcheck() {
printf 'Here are the %d files you said you wanted to delete:\n' "$#"
printf '"%s"\n' "$#"
read -p 'Do you want to delete them? [y/N] ' doit
case "$doit" in
[yY]) rm "$@";;
*) printf 'No files deleted\n';;
esac
}
This is a shell function that (when used properly) will do what you want. However, if you load the function in your current shell then try to use it with sudo, it won't do what you expect because sudo creates a separate shell. So you'd need to make this a shell script…
#!/bin/bash
… same code as above …
# All this script does is create the function and then execute it.
# It's lazy, but functions are nice.
delcheck "$@"
…then make sure sudo can access it. Put it in some place that is in the sudo execution PATH (depending on sudo configuration). Then if you really want to execute it precisely as sudo rm -r * you will still need to name the script rm (which in my opinion is dangerous) and make sure its directory is before /bin in your PATH (also dangerous). But there you go.
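For illustration, assuming the script version is saved as delcheck somewhere on sudo's PATH, a session might look like this (the file names are made up):
$ sudo delcheck old_build/*.o
Here are the 3 files you said you wanted to delete:
"old_build/a.o"
"old_build/b.o"
"old_build/c.o"
Do you want to delete them? [y/N] y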
Here's a nice option
Alias rm to echo | xargs -p rm
The -p option means "interactive" - it will display the entire command (including any expanded file lists) and ask you to confirm
It will NOT ask about the recursively removed files. But it will expand rm * .o to:
rm -rf * .o
rm -rf program.cc program.cc~ program program.o backup?... # NO NO NO NO NO!
Which is much nicer than receiving the error
rm: .o file not found
Edit: corrected the solution based on chepner's comment. My previous solutions had a bug :(
This simple script prompts for a y response before deleting the files specified.
rmc script file:
read -p "ok to delete? " ans
case $ans in
[yY]*) sudo rm "$@" ;;
*) echo "Nothing deleted";;
esac
Invoke thus
./rmc *.tmp
I created a script to do this. The solution is similar to @kojiro's.
Save the script with the filename del. Run the command sudo chmod a=r+w+x del to make the script executable. Then add the script's directory to your path by putting export PATH=$PATH:/path/to/the/del/executable in your ~/.bashrc file and running source ~/.bashrc.
Here, the syntax of rm is preserved, except instead of typing rm ..., type del ... where del is the name of the bash script below.
#! /bin/bash
# Safely delete files
args=("$#") # store all arguments passed to shell
N=$# # number of arguments passed to shell
#echo $#
#echo $#
#echo ${args[@]:0}
echo "Files to delete:"
echo
n=`expr $N - 1`
for i in `seq 0 $n`
do
str=${args[i]}
if [ ${str:0:1} != "-" ]; then
echo $str
fi
done
echo
read -r -p "Delete these files? [y/n] " response
case $response in
[yY][eE][sS]|[yY])
rm ${args[@]:0}
esac

Parameter list with double quotes does not pass through properly in Bash

I have a Bash script that calls another Bash script. The called script does some modification and checking on a few things, shifts, and then passes the rest of the caller's command line through.
In the called script, I have verified that I have everything managed and ready to call. Here's some debug-style code I've put in:
echo $SVN $command $@ > /tmp/shimcmd
bash /tmp/shimcmd
$SVN $command $@
Now, in /tmp/shimcmd you'll see:
svn commit --username=myuser --password=mypass --non-interactive --trust-server-cert -m "Auto Update autocommit Wed Apr 11 17:33:37 CDT 2012"
That is, the built command, all on one line, perfectly fine, including a -m "my string with spaces" portion.
It's perfect. And the "bash /tmp/shimcmd" execution of it works perfectly as well.
But of course I don't want this silly tmp file and such (only used it to debug). The problem is that calling the command directly, instead of via the shim file:
$SVN $command $@
results in the svn command itself NOT receiving the quoted string with spaces--it garbles the '-m "my string with spaces"' parameter and shanks the command as if it was passed as '-m my string with spaces'.
I have tried all manner of crazy escape methods to no avail. Can't believe it's dogging me this badly. Again, by echoing the very same thing ($SVN $command $@) to a file and then executing that file, it's FINE. But calling directly garbles the quoted string. That element alone shanks.
Any ideas?
Dan
Did you try:
eval "$SVN $command $#"
?
Here's a way to demonstrate the problem:
$ args='-m "foo bar"'
$ printf '<%s> ' $args
<-m> <"foo> <bar">
And here's a way to avoid it:
$ args=( -m "foo bar" )
$ printf '<%s> ' "${args[@]}"
<-m> <foo bar>
In this latter case, args is an array, not a quoted string.
Note, by the way, that it has to be "$@", not $@, to get this behavior (in which string-splitting is avoided in favor of respecting the array entries' boundaries).
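Applied to the original svn case, a minimal sketch of the array approach (cmd_args is just an illustrative name, and it assumes $command is a single word like commit with the remaining options arriving via "$@") would be:
cmd_args=( "$SVN" "$command" "$@" )     # e.g. svn commit ... -m "msg with spaces"
printf '<%s> ' "${cmd_args[@]}"; echo   # inspect the exact argument boundaries
"${cmd_args[@]}"                        # run it; the quoted message stays one argument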
Try this:
echo -n -e $SVN \"$command\" > /tmp/shimcmd
for x in "$@"
do
a=$a" "\"$x\"
done
echo -e " " $a >> /tmp/shimcmd
bash /tmp/shimcmd
or simply
$SVN "$command" "$#"

Equivalent of %~dp0 (retrieving source file name) in sh

I'm converting some Windows batch files to Unix scripts using sh. I have problems because some behavior is dependent on the %~dp0 macro available in batch files.
Is there any sh equivalent to this? Any way to obtain the directory where the executing script lives?
The problem (for you) with $0 is that it is set to whatever command line was use to invoke the script, not the location of the script itself. This can make it difficult to get the full path of the directory containing the script which is what you get from %~dp0 in a Windows batch file.
For example, consider the following script, dollar.sh:
#!/bin/bash
echo $0
If you run it you'll get the following output:
# ./dollar.sh
./dollar.sh
# /tmp/dollar.sh
/tmp/dollar.sh
So to get the fully qualified directory name of a script I do the following:
cd `dirname $0`
SCRIPTDIR=`pwd`
cd -
This works as follows:
cd to the directory of the script, using either the relative or absolute path from the command line.
Gets the absolute path of this directory and stores it in SCRIPTDIR.
Goes back to the previous working directory using "cd -".
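An equivalent one-liner, which avoids changing the caller's working directory by doing the cd in a subshell:
SCRIPTDIR=$(cd "$(dirname "$0")" && pwd)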
Yes, you can! It's in the arguments. :)
look at
${0}
combining that with
${var%Pattern}
Remove from $var the shortest part of $Pattern that matches the back end of $var.
what you want is just
${0%/*}
I recommend the Advanced Bash Scripting Guide
(that is also where the above information is from).
Especially the part on Converting DOS Batch Files to Shell Scripts
might be useful for you. :)
If I have misunderstood you, you may have to combine that with the output of "pwd". Since it only contains the path the script was called with!
Try the following script:
#!/bin/bash
called_path=${0%/*}
stripped=${called_path#[^/]*}
real_path=`pwd`$stripped
echo "called path: $called_path"
echo "stripped: $stripped"
echo "pwd: `pwd`"
echo "real path: $real_path
This needs some work though.
I recommend using Dave Webb's approach unless that is impossible.
In bash under linux you can get the full path to the command with:
readlink /proc/$$/fd/255
and to get the directory:
dir=$(dirname $(readlink /proc/$$/fd/255))
It's ugly, but I have yet to find another way.
I was trying to find the path for a script that was being sourced from another script. And that was my problem, when sourcing the text just gets copied into the calling script, so $0 always returns information about the calling script.
I found a workaround that only works in bash: $BASH_SOURCE always has the info about the script in which it is referenced. Even if the script is sourced, it is correctly resolved to the original (sourced) script.
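A minimal sketch using that, which works whether the script is executed or sourced (bash only):
dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)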
The correct answer is this one:
How do I determine the location of my script? I want to read some config files from the same place.
It is important to realize that in the general case, this problem has no solution. Any approach you might have heard of, and any approach that will be detailed below, has flaws and will only work in specific cases. First and foremost, try to avoid the problem entirely by not depending on the location of your script!
Before we dive into solutions, let's clear up some misunderstandings. It is important to understand that:
Your script does not actually have a location! Wherever the bytes end up coming from, there is no "one canonical path" for it. Never.
$0 is NOT the answer to your problem. If you think it is, you can either stop reading and write more bugs, or you can accept this and read on.
...
Try this:
${0%/*}
This should work for bash shell:
dir=$(dirname $(readlink -m $BASH_SOURCE))
Test script:
#!/bin/bash
echo $(dirname $(readlink -m $BASH_SOURCE))
Run test:
$ ./somedir/test.sh
/tmp/somedir
$ source ./somedir/test.sh
/tmp/somedir
$ bash ./somedir/test.sh
/tmp/somedir
$ . ./somedir/test.sh
/tmp/somedir
This is a script that can get the shell file's real path whether executed or sourced.
Tested in bash, zsh, ksh, dash.
BTW: you should clean up the verbose code yourself.
#!/usr/bin/env bash
echo "---------------- GET SELF PATH ----------------"
echo "NOW \$(pwd) >>> $(pwd)"
ORIGINAL_PWD_GETSELFPATHVAR=$(pwd)
echo "NOW \$0 >>> $0"
echo "NOW \$_ >>> $_"
echo "NOW \${0##*/} >>> ${0##*/}"
if test -n "$BASH"; then
echo "RUNNING IN BASH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${BASH_SOURCE[0]}
elif test -n "$ZSH_NAME"; then
echo "RUNNING IN ZSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${(%):-%x}
elif test -n "$KSH_VERSION"; then
echo "RUNNING IN KSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${.sh.file}
else
echo "RUNNING IN DASH OR OTHERS ELSE..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=$(lsof -p $$ -Fn0 | tr -d '\0' | grep "${0##*/}" | tail -1 | sed 's/^[^\/]*//g')
fi
echo "EXECUTING FILE PATH: $SH_FILE_RUN_PATH_GETSELFPATHVAR"
cd "$(dirname "$SH_FILE_RUN_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_RUN_PATH_GETSELFPATHVAR")
# Iterate down a (possible) chain of symlinks as lsof of macOS doesn't have -f option.
while [ -L "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR" ]; do
SH_FILE_REAL_PATH_GETSELFPATHVAR=$(readlink "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR")
cd "$(dirname "$SH_FILE_REAL_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_REAL_PATH_GETSELFPATHVAR")
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
SH_SELF_PATH_DIR_RESULT=$(pwd -P)
SH_FILE_REAL_PATH_GETSELFPATHVAR=$SH_SELF_PATH_DIR_RESULT/$SH_FILE_RUN_BASENAME_GETSELFPATHVAR
echo "EXECUTING REAL PATH: $SH_FILE_REAL_PATH_GETSELFPATHVAR"
echo "EXECUTING FILE DIR: $SH_SELF_PATH_DIR_RESULT"
cd "$ORIGINAL_PWD_GETSELFPATHVAR" || return 1
unset ORIGINAL_PWD_GETSELFPATHVAR
unset SH_FILE_RUN_PATH_GETSELFPATHVAR
unset SH_FILE_RUN_BASENAME_GETSELFPATHVAR
unset SH_FILE_REAL_PATH_GETSELFPATHVAR
echo "---------------- GET SELF PATH ----------------"
# USE $SH_SELF_PATH_DIR_RESULT BELOW
I have tried $0 before, namely:
dirname $0
and it just returns "." even when the script is being sourced by another script:
. ../somedir/somescript.sh
