'command not found' when passing variable into BASH function, am I quoting incorrectly? - linux

So I have a command that basically adds a line to a file, but only if that line isn't already in the file. It uses grep to check the file and if not there then appends to the file.
The purpose of this is because I want to import all my aliases into BASH from an installation script that is likely to be executed more than once (and I don't want to fill ~/.bashrc with duplicate lines of the same alias).
The command works fine by itself, but when I try to abstract it away into a function to reuse elsewhere, I get the error: command not found.
So far I've looked at grep and pattern matching (thinking maybe the & or ~ was throwing it off), parameter expansion versus command substitution, and quoting.
I feel it's the latter, i.e. I'm not quoting the alias string or the file path correctly, and it's trying to execute it as a command instead of treating it as a string.
I've been pulling my hair out for a while on this one, would somebody please be able to point me in the right direction?
Any help appreciated!
# Command (which works)
grep -qxF 'alias gs="clear && git status"' ~/.bashrc || echo 'alias gs="clear && git status"' >> ~/.bashrc
# Wrap command in function so I can reuse and pass in different parameters
function append_unique_line {
grep -qxF $1 $2 || echo $1 >> $2
}
# Doesn't work.. append_unique_line: command not found
append_unique_line 'alias gs="clear && git status"' ~/.bashrc

Try
function append_unique_line() {
grep -qxF "$1" "$2" || echo "$1" >> "$2"
}
append_unique_line 'alias gs="clear && git status"' ~/.bashrc
Always wrap your variables in double quotes (") when you expand them.
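To see what goes wrong without the quotes: an unquoted $1 is split into words before grep ever sees it, so the pattern arrives as several separate arguments. A rough illustration (hypothetical session, not from the original post):
set -- 'alias gs="clear && git status"' ~/.bashrc
printf '<%s> ' $1 ; echo      # unquoted: <alias> <gs="clear> <&&> <git> <status">
printf '<%s> ' "$1" ; echo    # quoted:   <alias gs="clear && git status">
With the unquoted call, grep -qxF receives five arguments instead of one fixed string, so both the check and the append guard misbehave.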

Related

Creating permanent-alias bash command

I am trying to create a "permalias" bash command to be able to easily create permanent aliases without having to directly work on the ~/.bashrc file.
As of now, the only way I've been able to make this work is with this code:
alias permalias="echo alias $1 >> ~/.bashrc"
which allows for an input in this format:
permalias commandname=\"commandbody\"
But I am not satisfied with this because I'd like to maintain a simpler input format, one closer to the original alias command's.
I tried several variants of this code:
alias permalias="echo alias $1=\"$2\" >> ~/.bashrc"
Using this version, the call permalias c "echo test" should add the line alias c="echo test" to the ~/.bashrc file.
But instead this is the result: alias c "echo test", which, of course, does not work.
I'd also be grateful for any advice on how to avoid the need of putting the " around the new command's body.
Thank you
Try this:
#!/bin/bash
permalias()
{
local alias_regex='[A-Za-z_0-9]*'
if
[[ $# = 1 && $1 =~ ($alias_regex)=(.*) ]]
then
printf "%s\n" "${BASH_REMATCH[1]}=\"${BASH_REMATCH[2]}\"" >> ~/.bashrc
else
echo "USAGE: permalias VARNAME=ALIAS_COMMAND"
return 1
fi
}
A nicer version would check for the presence of said alias in .bashrc first, and would then replace it or fail if it is already present.
You can't use arguments in an alias. What you need is a function, something like:
permalias() {
echo "alias ${1}=\"${2}\"" >> ~/.bashrc
}
Make it a function as Olli says, then you could use "$*" to concatenate all the arguments to the function.
permalias() {
n=$1;
shift;
echo "alias $n=\"$*\"" >> ~/.bashrc;
}
This should work with stuff like permalias c echo foo bar, but if you actually want quotes inside the alias, it will get hairy. permalias c echo "foo bar" would not work, you'd need something like permalias c echo "'foo bar'" to counter the additional level of command line processing and get the inside quotes to the file.
For anything complicated, it's better to make a shell function anyway. You can use declare -fp funcname to print the definition of a function, and save it to a file if you like.
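For example, a rough sketch of persisting a function that way (gs is just an example name, not from the original answers):
gs() { clear && git status; }
declare -fp gs >> ~/.bashrc   # append the full function definition to .bashrc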
If you happen to use zsh, drawing on Fred's answer, we can switch $BASH_REMATCH for $match and send the aliases to .zsh_aliases (assuming you have that set up; if not, create .zsh_aliases in your home directory and add this to your .zshrc: source ~/.zsh_aliases).
So, as an example, I added this function to my .zsh_aliases file and it works well.
permalias() {
sauce="unhash -ma "*" ; unhash -mf "*"; source ~/.zshrc"
local alias_regex='[A-Za-z_0-9]*'
if
[[ $# == 1 && $1 =~ ($alias_regex)=(.*) ]]
then
printf "%s\n" "alias ${match[1]}=\"${match[2]}\"" >>~/.zsh_aliases
#uncomment the following line to automatically load your new alias
#eval ${sauce}
else
echo "Usage: permalias ALIAS_NAME=ALIAS_COMMAND"
return 1
fi
}

How to suppress Error printed by shell commands. [duplicate]

How would I validate that a program exists, in a way that will either return an error and exit, or continue with the script?
It seems like it should be easy, but it's been stumping me.
Answer
POSIX compatible:
command -v <the_command>
Example use:
if ! command -v <the_command> &> /dev/null
then
echo "<the_command> could not be found"
exit
fi
For Bash specific environments:
hash <the_command> # For regular commands. Or...
type <the_command> # To check built-ins and keywords
Explanation
Avoid which. Not only is it an external process you're launching for doing very little (meaning builtins like hash, type or command are way cheaper), you can also rely on the builtins to actually do what you want, while the effects of external commands can easily vary from system to system.
Why care?
Many operating systems have a which that doesn't even set an exit status, meaning the if which foo won't even work there and will always report that foo exists, even if it doesn't (note that some POSIX shells appear to do this for hash too).
Many operating systems make which do custom and evil stuff like change the output or even hook into the package manager.
So, don't use which. Instead use one of these:
command -v foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
type foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
hash foo 2>/dev/null || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
(Minor side-note: some will suggest 2>&- is the same 2>/dev/null but shorter – this is untrue. 2>&- closes FD 2 which causes an error in the program when it tries to write to stderr, which is very different from successfully writing to it and discarding the output (and dangerous!))
If your hash bang is /bin/sh then you should care about what POSIX says. type and hash's exit codes aren't terribly well defined by POSIX, and hash is seen to exit successfully when the command doesn't exist (haven't seen this with type yet). command's exit status is well defined by POSIX, so that one is probably the safest to use.
If your script uses bash though, POSIX rules don't really matter anymore and both type and hash become perfectly safe to use. type now has a -P to search just the PATH and hash has the side-effect that the command's location will be hashed (for faster lookup next time you use it), which is usually a good thing since you probably check for its existence in order to actually use it.
As a simple example, here's a function that runs gdate if it exists, otherwise date:
gnudate() {
if hash gdate 2>/dev/null; then
gdate "$@"
else
date "$@"
fi
}
Alternative with a complete feature set
You can use scripts-common to meet this need.
To check if something is installed, you can do:
checkBin <the_command> || errorMessage "This tool requires <the_command>. Install it please, and then run this tool again."
The following is a portable way to check whether a command exists in $PATH and is executable:
[ -x "$(command -v foo)" ]
Example:
if ! [ -x "$(command -v git)" ]; then
echo 'Error: git is not installed.' >&2
exit 1
fi
The executable check is needed because bash returns a non-executable file if no executable file with that name is found in $PATH.
Also note that if a non-executable file with the same name as the executable exists earlier in $PATH, dash returns the former, even though the latter would be executed. This is a bug and is in violation of the POSIX standard. [Bug report] [Standard]
Edit: This seems to be fixed as of dash 0.5.11 (Debian 11).
In addition, this will fail if the command you are looking for has been defined as an alias.
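To illustrate that alias caveat (hypothetical interactive session): command -v prints the alias text rather than an executable path, so the -x test fails:
$ alias ll='ls -l'
$ command -v ll
alias ll='ls -l'
$ [ -x "$(command -v ll)" ] && echo executable || echo 'not an executable path'
not an executable path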
I agree with lhunath about discouraging the use of which, and his solution is perfectly valid for Bash users. However, to be more portable, command -v should be used instead:
$ command -v foo >/dev/null 2>&1 || { echo "I require foo but it's not installed. Aborting." >&2; exit 1; }
The command builtin is POSIX compliant. See here for its specification: command - execute a simple command
Note: type is POSIX compliant, but type -P is not.
It depends on whether you want to know whether it exists in one of the directories in the $PATH variable or whether you know the absolute location of it. If you want to know if it is in the $PATH variable, use
if which programname >/dev/null; then
echo exists
else
echo does not exist
fi
otherwise use
if [ -x /path/to/programname ]; then
echo exists
else
echo does not exist
fi
The redirection to /dev/null in the first example suppresses the output of the which program.
I have a function defined in my .bashrc that makes this easier.
command_exists () {
type "$1" &> /dev/null ;
}
Here's an example of how it's used (from my .bash_profile.)
if command_exists mvim ; then
export VISUAL="mvim --nofork"
fi
Expanding on @lhunath's and @GregV's answers, here's the code for the people who want to easily put that check inside an if statement:
exists()
{
command -v "$1" >/dev/null 2>&1
}
Here's how to use it:
if exists bash; then
echo 'Bash exists!'
else
echo 'Your system does not have Bash'
fi
Try using:
test -x filename
or
[ -x filename ]
From the Bash manpage under Conditional Expressions:
-x file
True if file exists and is executable.
To use hash, as @lhunath suggests, in a Bash script:
hash foo &> /dev/null
if [ $? -eq 1 ]; then
echo >&2 "foo not found."
fi
This script runs hash and then checks if the exit code of the most recent command, the value stored in $?, is equal to 1. If hash doesn't find foo, the exit code will be 1. If foo is present, the exit code will be 0.
&> /dev/null redirects standard error and standard output from hash so that it doesn't appear onscreen and echo >&2 writes the message to standard error.
command -v works fine if the POSIX_BUILTINS option is set for the <command> to test for, but it can fail if not. (It has worked for me for years, but I recently ran into one case where it didn't work.)
I find the following to be more failproof:
test -x "$(which <command>)"
Since it tests for three things: path, existence and execution permission.
There are a ton of options here, but I was surprised there were no quick one-liners. This is what I used at the beginning of my scripts:
[[ "$(command -v mvn)" ]] || { echo "mvn is not installed" 1>&2 ; exit 1; }
[[ "$(command -v java)" ]] || { echo "java is not installed" 1>&2 ; exit 1; }
This is based on the selected answer here and another source.
If you check for program existence, you are probably going to run it later anyway. Why not try to run it in the first place?
if foo --version >/dev/null 2>&1; then
echo Found
else
echo Not found
fi
It's a more trustworthy check that the program runs than merely looking at PATH directories and file permissions.
Plus you can get some useful result from your program, such as its version.
Of course the drawbacks are that some programs can be heavy to start and some don't have a --version option to immediately (and successfully) exit.
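As a minimal sketch of that approach, wrapped in a helper (the name runs_ok is made up here, and it assumes the tool supports --version):
runs_ok() {
    "$1" --version >/dev/null 2>&1
}
if runs_ok pandoc; then
    echo "pandoc is usable"
else
    echo "pandoc missing or broken"
fi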
Check for multiple dependencies and inform status to end users
for cmd in latex pandoc; do
printf '%-10s' "$cmd"
if hash "$cmd" 2>/dev/null; then
echo OK
else
echo missing
fi
done
Sample output:
latex     OK
pandoc    missing
Adjust the 10 to the maximum command length. It is not automatic, because I don't see a non-verbose POSIX way to do it:
How can I align the columns of a space separated table in Bash?
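If you are willing to require Bash rather than strict POSIX sh, here is a sketch that computes the column width from the longest command name (cmds and width are made-up names):
cmds=(latex pandoc)
width=0
for cmd in "${cmds[@]}"; do
    (( ${#cmd} > width )) && width=${#cmd}
done
for cmd in "${cmds[@]}"; do
    printf "%-${width}s " "$cmd"
    if hash "$cmd" 2>/dev/null; then echo OK; else echo missing; fi
done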
Check if some apt packages are installed with dpkg -s and install them otherwise.
See: Check if an apt-get package is installed and then install it if it's not on Linux
It was previously mentioned at: How can I check if a program exists from a Bash script?
I never did get the previous answers to work on the box I have access to. For one, type has been installed (doing what more does). So the builtin directive is needed. This command works for me:
if [ `builtin type -p vim` ]; then echo "TRUE"; else echo "FALSE"; fi
I wanted the same question answered but to run within a Makefile.
install:
	@if [[ ! -x "$(shell command -v ghead)" ]]; then \
echo 'ghead does not exist. Please install it.'; \
exit -1; \
fi
It could be simpler, just:
#!/usr/bin/env bash
set -x
# if local program 'foo' returns 1 (doesn't exist) then...
if ! type -P foo; then
echo 'crap, no foo'
else
echo 'sweet, we have foo!'
fi
Change foo to vi to get the other condition to fire.
hash foo 2>/dev/null: works with Z shell (Zsh), Bash, Dash and ash.
type -p foo: it appears to work with Z shell, Bash and ash (BusyBox), but not Dash (it interprets -p as an argument).
command -v foo: works with Z shell, Bash, Dash, but not ash (BusyBox) (-ash: command: not found).
Also note that builtin is not available with ash and Dash.
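Given those differences, one hedged way to stay portable across all the shells listed above is to try command -v first and fall back to hash (the helper name have is made up):
have() {
    command -v "$1" >/dev/null 2>&1 || hash "$1" 2>/dev/null
}
if have git; then echo "git found"; else echo "git missing"; fi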
zsh only, but very useful for zsh scripting (e.g. when writing completion scripts):
The zsh/parameter module gives access to, among other things, the internal commands hash table. From man zshmodules:
THE ZSH/PARAMETER MODULE
The zsh/parameter module gives access to some of the internal hash tables used by the shell by defining some special parameters.
[...]
commands
This array gives access to the command hash table. The keys are the names of external commands, the values are the pathnames of the files that would be executed when the command would be invoked. Setting a key in this array defines a new entry in this table in the same way as with the hash builtin. Unsetting a key as in `unset "commands[foo]"' removes the entry for the given key from the command hash table.
Although it is a loadable module, it seems to be loaded by default, as long as zsh is not used with --emulate.
example:
martin@martin ~ % echo $commands[zsh]
/usr/bin/zsh
To quickly check whether a certain command is available, just check if the key exists in the hash:
if (( ${+commands[zsh]} ))
then
echo "zsh is available"
fi
Note though that the hash will contain any files in $PATH folders, regardless of whether they are executable or not. To be absolutely sure, you have to spend a stat call on that:
if (( ${+commands[zsh]} )) && [[ -x $commands[zsh] ]]
then
echo "zsh is available"
fi
The which command might be useful. man which
It returns 0 if the executable is found and returns 1 if it's not found or not executable:
NAME
which - locate a command
SYNOPSIS
which [-a] filename ...
DESCRIPTION
which returns the pathnames of the files which would
be executed in the current environment, had its
arguments been given as commands in a strictly
POSIX-conformant shell. It does this by searching
the PATH for executable files matching the names
of the arguments.
OPTIONS
-a print all matching pathnames of each argument
EXIT STATUS
0 if all specified commands are
found and executable
1 if one or more specified commands is nonexistent
or not executable
2 if an invalid option is specified
The nice thing about which is that it figures out if the executable is available in the environment that which is run in - it saves a few problems...
Use the Bash builtin if you can. Instead of the external
which programname
use
type -P programname
For those interested, none of the methodologies in previous answers work if you wish to detect an installed library. I imagine you are left either with physically checking the path (potentially for header files and such), or something like this (if you are on a Debian-based distribution):
dpkg --status libdb-dev | grep -q not-installed
if [ $? -eq 0 ]; then
apt-get install libdb-dev
fi
As you can see from the above, a "0" answer from the query means the package is not installed. This is a function of "grep" - a "0" means a match was found, a "1" means no match was found.
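A shorter sketch of the same idea relies on the exit status of dpkg -s directly instead of grepping its output (assumes a Debian-based system; note that a package that was removed but not purged may still report success):
if ! dpkg -s libdb-dev >/dev/null 2>&1; then
    apt-get install libdb-dev
fi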
This will tell you, based on a known location, whether the program exists or not:
if [ -x /usr/bin/yum ]; then
echo "This is Centos"
fi
I'd say there isn't any portable and 100% reliable way due to dangling aliases. For example:
alias john='ls --color'
alias paul='george -F'
alias george='ls -h'
alias ringo=/
Of course, only the last one is problematic (no offence to Ringo!). But all of them are valid aliases from the point of view of command -v.
In order to reject dangling ones like ringo, we have to parse the output of the shell built-in alias command and recurse into them (command -v isn't a superior to alias here.) There isn't any portable solution for it, and even a Bash-specific solution is rather tedious.
Note that a solution like this will unconditionally reject alias ls='ls -F':
test() { command -v "$1" | grep -qv alias; }
If you can't get the answers here to work and are pulling your hair out, try running the same command using bash -c. Just look at this somnambulant delirium; this is what is really happening when you run $(sub-command):
First. It can give you completely different output.
$ command -v ls
alias ls='ls --color=auto'
$ bash -c "command -v ls"
/bin/ls
Second. It can give you no output at all.
$ command -v nvm
nvm
$ bash -c "command -v nvm"
$ bash -c "nvm --help"
bash: nvm: command not found
#!/bin/bash
a=$(apt-cache show program 2>/dev/null)
if [[ -z $a ]]
then
echo "the program doesn't exist"
else
echo "the program exists"
fi
#program is not literal, you can change it to the program's name you want to check
The hash-variant has one pitfall: On the command line you can for example type in
one_folder/process
to have process executed. For this, one_folder must exist in your current directory (a command name containing a slash is resolved relative to the current directory, not via $PATH). But when you try to hash this command, it will always succeed:
hash one_folder/process; echo $? # will always output '0'
I second the use of "command -v". E.g. like this:
md=$(command -v mkdirhier) ; alias md=${md:=mkdir} # bash
emacs="$(command -v emacs) -nw" || emacs=nano
alias e=$emacs
[[ -z $(command -v jed) ]] && alias jed=$emacs
I had to check if Git was installed as part of deploying our CI server. My final Bash script was as follows (Ubuntu server):
if ! builtin type -p git &>/dev/null; then
sudo apt-get -y install git-core
fi
To mimic Bash's type -P cmd, we can use the POSIX compliant env -i type cmd 1>/dev/null 2>&1.
man env
# "The option '-i' causes env to completely ignore the environment it inherits."
# In other words, there are no aliases or functions to be looked up by the type command.
ls() { echo 'Hello, world!'; }
ls
type ls
env -i type ls
cmd=ls
cmd=lsx
env -i type $cmd 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
If there isn't any external type command available (as taken for granted here), we can use POSIX compliant env -i sh -c 'type cmd 1>/dev/null 2>&1':
# Portable version of Bash's type -P cmd (without output on stdout)
typep() {
command -p env -i PATH="$PATH" sh -c '
export LC_ALL=C LANG=C
cmd="$1"
cmd="`type "$cmd" 2>/dev/null || { echo "error: command $cmd not found; exiting ..." 1>&2; exit 1; }`"
[ $? != 0 ] && exit 1
case "$cmd" in
*\ /*) exit 0;;
*) printf "%s\n" "error: $cmd" 1>&2; exit 1;;
esac
' _ "$1" || exit 1
}
# Get your standard $PATH value
#PATH="$(command -p getconf PATH)"
typep ls
typep builtin
typep ls-temp
At least on Mac OS X v10.6.8 (Snow Leopard) using Bash 4.2.24(2) command -v ls does not match a moved /bin/ls-temp.
My setup for a Debian server:
I had the problem when multiple packages contained the same name.
For example apache2. So this was my solution:
function _apt_install() {
apt-get install -y $1 > /dev/null
}
function _apt_install_norecommends() {
apt-get install -y --no-install-recommends $1 > /dev/null
}
function _apt_available() {
if [ `apt-cache search $1 | grep -o "$1" | uniq | wc -l` = "1" ]; then
echo "Package is available : $1"
PACKAGE_INSTALL="1"
else
echo "Package $1 is NOT available for install"
echo "We can not continue without this package..."
echo "Exitting now.."
exit 0
fi
}
function _package_install {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install $1
sleep 0.5
fi
fi
}
function _package_install_no_recommends {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install_norecommends $1
sleep 0.5
fi
fi
}

Parameter list with double quotes does not pass through properly in Bash

I have a Bash script that calls another Bash script. The called script does some modification and checking on a few things, shifts, and then passes the rest of the caller's command line through.
In the called script, I have verified that I have everything managed and ready to call. Here's some debug-style code I've put in:
echo $SVN $command $@ > /tmp/shimcmd
bash /tmp/shimcmd
$SVN $command $@
Now, in /tmp/shimcmd you'll see:
svn commit --username=myuser --password=mypass --non-interactive --trust-server-cert -m "Auto Update autocommit Wed Apr 11 17:33:37 CDT 2012"
That is, the built command, all on one line, perfectly fine, including a -m "my string with spaces" portion.
It's perfect. And the "bash /tmp/shimcmd" execution of it works perfectly as well.
But of course I don't want this silly tmp file and such (only used it to debug). The problem is that calling the command directly, instead of via the shim file:
$SVN $command $@
results in the svn command itself NOT receiving the quoted string with spaces--it garbles the '-m "my string with spaces"' parameter and shanks the command as if it was passed as '-m my string with spaces'.
I have tried all manner of crazy escape methods to no avail. Can't believe it's dogging me this badly. Again, by echoing the very same thing ($SVN $command $@) to a file and then executing that file, it's FINE. But calling directly garbles the quoted string. That element alone shanks.
Any ideas?
Dan
Did you try:
eval "$SVN $command $#"
?
Here's a way to demonstrate the problem:
$ args='-m "foo bar"'
$ printf '<%s> ' $args
<-m> <"foo> <bar">
And here's a way to avoid it:
$ args=( -m "foo bar" )
$ printf '<%s> ' "${args[@]}"
<-m> <foo bar>
In this latter case, args is an array, not a quoted string.
Note, by the way, that it has to be "$@", not $@, to get this behavior (in which string-splitting is avoided in favor of respecting the array entries' boundaries).
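Applied to the original problem, a rough sketch (svn_args is a made-up name; SVN and command are the question's variables) keeps the options in an array so the -m message survives as a single argument:
svn_args=( "$command" --non-interactive -m "Auto Update autocommit $(date)" )
"$SVN" "${svn_args[@]}" "$@"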
this
echo -n -e $SVN \"$command\" > /tmp/shimcmd
for x in "$#"
do
a=$a" "\"$x\"
done
echo -e " " $a >> /tmp/shimcmd
bash /tmp/shimcmd
or simply
$SVN "$command" "$#"

Equivalent of %~dp0 (retrieving source file name) in sh

I'm converting some Windows batch files to Unix scripts using sh. I have problems because some behavior is dependent on the %~dp0 macro available in batch files.
Is there any sh equivalent to this? Any way to obtain the directory where the executing script lives?
The problem (for you) with $0 is that it is set to whatever command line was use to invoke the script, not the location of the script itself. This can make it difficult to get the full path of the directory containing the script which is what you get from %~dp0 in a Windows batch file.
For example, consider the following script, dollar.sh:
#!/bin/bash
echo $0
If you'd run it you'll get the following output:
# ./dollar.sh
./dollar.sh
# /tmp/dollar.sh
/tmp/dollar.sh
So to get the fully qualified directory name of a script I do the following:
cd `dirname $0`
SCRIPTDIR=`pwd`
cd -
This works as follows:
cd to the directory of the script, using either the relative or absolute path from the command line.
Gets the absolute path of this directory and stores it in SCRIPTDIR.
Goes back to the previous working directory using "cd -".
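A common compact variant of the same idea runs the cd in a subshell, so the caller's working directory is left untouched (a sketch, not from the original answer):
SCRIPTDIR=$(cd "$(dirname "$0")" && pwd)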
Yes, you can! It's in the arguments. :)
look at
${0}
combining that with
${var%Pattern}
Remove from $var the shortest part of $Pattern that matches the back end of $var.
what you want is just
${0%/*}
I recommend the Advanced Bash Scripting Guide
(that is also where the above information is from).
Especially the part on Converting DOS Batch Files to Shell Scripts
might be useful for you. :)
If I have misunderstood you, you may have to combine that with the output of "pwd". Since it only contains the path the script was called with!
Try the following script:
#!/bin/bash
called_path=${0%/*}
stripped=${called_path#[^/]*}
real_path=`pwd`$stripped
echo "called path: $called_path"
echo "stripped: $stripped"
echo "pwd: `pwd`"
echo "real path: $real_path
This needs some work though.
I recommend using Dave Webb's approach unless that is impossible.
In bash under linux you can get the full path to the command with:
readlink /proc/$$/fd/255
and to get the directory:
dir=$(dirname $(readlink /proc/$$/fd/255))
It's ugly, but I have yet to find another way.
I was trying to find the path for a script that was being sourced from another script. And that was my problem, when sourcing the text just gets copied into the calling script, so $0 always returns information about the calling script.
I found a workaround, that only works in bash, $BASH_SOURCE always has the info about the script in which it is referred to. Even if the script is sourced it is correctly resolved to the original (sourced) script.
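Putting that together, a minimal Bash-only sketch that resolves the directory whether the script is executed or sourced (script_dir is a made-up name):
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
echo "this script lives in: $script_dir"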
The correct answer is this one:
How do I determine the location of my script? I want to read some config files from the same place.
It is important to realize that in the general case, this problem has no solution. Any approach you might have heard of, and any approach that will be detailed below, has flaws and will only work in specific cases. First and foremost, try to avoid the problem entirely by not depending on the location of your script!
Before we dive into solutions, let's clear up some misunderstandings. It is important to understand that:
Your script does not actually have a location! Wherever the bytes end up coming from, there is no "one canonical path" for it. Never.
$0 is NOT the answer to your problem. If you think it is, you can either stop reading and write more bugs, or you can accept this and read on.
...
Try this:
${0%/*}
This should work for bash shell:
dir=$(dirname $(readlink -m $BASH_SOURCE))
Test script:
#!/bin/bash
echo $(dirname $(readlink -m $BASH_SOURCE))
Run test:
$ ./somedir/test.sh
/tmp/somedir
$ source ./somedir/test.sh
/tmp/somedir
$ bash ./somedir/test.sh
/tmp/somedir
$ . ./somedir/test.sh
/tmp/somedir
This is a script can get the shell file real path when executed or sourced.
Tested in bash, zsh, ksh, dash.
By the way: you should trim the verbose debug output yourself.
#!/usr/bin/env bash
echo "---------------- GET SELF PATH ----------------"
echo "NOW \$(pwd) >>> $(pwd)"
ORIGINAL_PWD_GETSELFPATHVAR=$(pwd)
echo "NOW \$0 >>> $0"
echo "NOW \$_ >>> $_"
echo "NOW \${0##*/} >>> ${0##*/}"
if test -n "$BASH"; then
echo "RUNNING IN BASH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${BASH_SOURCE[0]}
elif test -n "$ZSH_NAME"; then
echo "RUNNING IN ZSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${(%):-%x}
elif test -n "$KSH_VERSION"; then
echo "RUNNING IN KSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${.sh.file}
else
echo "RUNNING IN DASH OR OTHERS ELSE..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=$(lsof -p $$ -Fn0 | tr -d '\0' | grep "${0##*/}" | tail -1 | sed 's/^[^\/]*//g')
fi
echo "EXECUTING FILE PATH: $SH_FILE_RUN_PATH_GETSELFPATHVAR"
cd "$(dirname "$SH_FILE_RUN_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_RUN_PATH_GETSELFPATHVAR")
# Iterate down a (possible) chain of symlinks as lsof of macOS doesn't have -f option.
while [ -L "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR" ]; do
SH_FILE_REAL_PATH_GETSELFPATHVAR=$(readlink "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR")
cd "$(dirname "$SH_FILE_REAL_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_REAL_PATH_GETSELFPATHVAR")
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
SH_SELF_PATH_DIR_RESULT=$(pwd -P)
SH_FILE_REAL_PATH_GETSELFPATHVAR=$SH_SELF_PATH_DIR_RESULT/$SH_FILE_RUN_BASENAME_GETSELFPATHVAR
echo "EXECUTING REAL PATH: $SH_FILE_REAL_PATH_GETSELFPATHVAR"
echo "EXECUTING FILE DIR: $SH_SELF_PATH_DIR_RESULT"
cd "$ORIGINAL_PWD_GETSELFPATHVAR" || return 1
unset ORIGINAL_PWD_GETSELFPATHVAR
unset SH_FILE_RUN_PATH_GETSELFPATHVAR
unset SH_FILE_RUN_BASENAME_GETSELFPATHVAR
unset SH_FILE_REAL_PATH_GETSELFPATHVAR
echo "---------------- GET SELF PATH ----------------"
# USE $SH_SELF_PATH_DIR_RESULT BELOW
I have tried $0 before, namely:
dirname $0
and it just returns "." even when the script is being sourced by another script:
. ../somedir/somescript.sh

How to properly handle wildcard expansion in a bash shell script?

#!/bin/bash
hello()
{
SRC=$1
DEST=$2
for IP in `cat /opt/ankit/configs/machine.configs` ; do
echo $SRC | grep '*' > /dev/null
if test `echo $?` -eq 0 ; then
for STAR in $SRC ; do
echo -en "$IP"
echo -en "\n\t ARG1=$STAR ARG2=$2\n\n"
done
else
echo -en "$IP"
echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
fi
done
}
hello $1 $2
The above is the shell script to which I provide the source (SRC) and destination (DEST) paths. It worked fine when I did not put in a SRC path with the wildcard '*'. When I run this shell script and give '*.pdf' or '*' as follows:
root@ankit1:~/as_prac# ./test.sh /home/dev/Examples/*.pdf /ankit_test/as
I get the following output:
192.168.1.6
ARG1=/home/dev/Examples/case_Contact.pdf ARG2=/home/dev/Examples/case_howard_county_library.pdf
The DEST is /ankit_test/as, but DEST also gets manipulated due to the '*'. The expected answer is
ARG1=/home/dev/Examples/case_Contact.pdf ARG2=/ankit_test/as
So, if you understand what I am trying to do, please help me out to solve this BUG.
I'll be grateful to you.
Thanks in advance!!!
I need to know exactly how I use '*.pdf' in my program one by one without disturbing DEST.
Your script needs more work.
Even after escaping the wildcard, you won't get your expected answer. You will get:
ARG1=/home/dev/Examples/*.pdf ARG2=/ankit__test/as
Try the following instead:
for IP in `cat /opt/ankit/configs/machine.configs`
do
for i in $SRC
do
echo -en "$IP"
echo -en "\n\t ARG1=$i ARG2=$DEST\n\n"
done
done
Run it like this:
root@ankit1:~/as_prac# ./test.sh "/home/dev/Examples/*.pdf" /ankit__test/as
The shell will expand wildcards unless you escape them, so for example if you have
$ ls
one.pdf two.pdf three.pdf
and run your script as
./test.sh *.pdf /ankit__test/as
it will be the same as
./test.sh one.pdf two.pdf three.pdf /ankit__test/as
which is not what you expect. Doing
./test.sh \*.pdf /ankit__test/as
should work.
If you can, change the order of the parameters passed to your shell script as follows:
./test.sh /ankit_test/as /home/dev/Examples/*.pdf
That would make your life a lot easier since the variable part moves to the end of the line. Then, the following script will do what you want:
#!/bin/bash
hello()
{
SRC=$1
DEST=$2
for IP in `cat /opt/ankit/configs/machine.configs` ; do
echo -en "$IP"
echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
done
}
arg2=$1
shift
while [[ "$1" != "" ]] ; do
hello $1 $arg2
shift
done
You are also missing a final "done" to close your outer for loop.
OK, this appears to do what you want:
#!/bin/bash
hello() {
SRC=$1
DEST=$2
while read IP ; do
for FILE in $SRC; do
echo -e "$IP"
echo -e "\tARG1=$FILE ARG2=$DEST\n"
done
done < /tmp/machine.configs
}
hello "$1" $2
You still need to escape any wildcard characters when you invoke the script
The double quotes are necessary when you invoke the hello function, otherwise the mere fact of evaluating $1 causes the wildcard to be expanded, but we don't want that to happen until $SRC is assigned in the function
Here's what I came up with:
#!/bin/bash
hello()
{
# DEST will contain the last argument
eval DEST=\$$#
while [ $1 != $DEST ]; do
SRC=$1
for IP in `cat /opt/ankit/configs/machine.configs`; do
echo -en "$IP"
echo -en "\n\t ARG1=$SRC ARG2=$DEST\n\n"
done
shift || break
done
}
hello $*
Instead of passing only two parameters to the hello() function, we'll pass in all the arguments that the script got.
Inside the hello() function, we first assign the final argument to the DEST var. Then we loop through all of the arguments, assigning each one to SRC, and run whatever commands we want using the SRC and DEST arguments. Note that you may want to put quotation marks around $SRC and $DEST in case they contain spaces. We stop looping when SRC is the same as DEST because that means we've hit the final argument (the destination).
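If you can rely on Bash specifically, here is a hedged sketch that grabs the last argument without eval, using ${!#} and an array slice (it drops the machine.configs loop for brevity):
hello() {
    local dest=${!#}                 # last positional parameter
    local srcs=( "${@:1:$#-1}" )     # everything before it
    local src
    for src in "${srcs[@]}"; do
        echo -en "\n\t ARG1=$src ARG2=$dest\n\n"
    done
}
hello /home/dev/Examples/*.pdf /ankit_test/as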
For multiple input files using a wildcard such as *.txt, I found this to work perfectly, with no escaping required. It should work just like a native tool such as ls or rm. This wasn't documented anywhere I could find, and since I spent the better part of 3 days figuring it out, I decided to post it for future readers.
Directory contains the following files (output of ls)
file1.txt file2.txt file3.txt
Run script like
$ ./script.sh *.txt
Or even like
$ ./script.sh file{1..3}.txt
The script
#!/bin/bash
# store default IFS, we need to temporarily change this
sfi=$IFS
#set IFS to $'\n' - newline
IFS=$'\n'
if [[ $# -eq 0 ]]
then
echo "Error: Missing required argument"
echo
exit 1
fi
# Put the file glob into an array
file=("$#")
# Now loop through them
for (( i=0 ; i < ${#file[*]} ; i++ ));
do
if [ -w ${file[$i]} ]; then
echo ${file[$i]} " writable"
else
echo ${file[$i]} " NOT writable"
fi
done
# Reset IFS to its default value
IFS=$sfi
The output
file1.txt writable
file2.txt writable
file3.txt writable
The key was switching the IFS (Internal Field Separator) temporarily. You have to be sure to store this before switching and then switch it back when you are done with it as demonstrated above.
Now you have a list of expanded files (with spaces preserved) in the file[] array, which you can then loop through. I like this solution the best: it's the easiest to program for and the easiest for the users.
There's no need to spawn a shell to look at the $? variable, you can evaluate it directly.
It should just be:
if [ $? -eq 0 ]; then
You're running
./test.sh /home/dev/Examples/*.pdf /ankit_test/as
and your interactive shell is expanding the wildcard before the script gets it. You just need to quote the first argument when you launch it, as in
./test.sh "/home/dev/Examples/*.pdf" /ankit_test/as
and then, in your script, quote "$SRC" anywhere where you literally want the things with wildcards (ie, when you do echo $SRC, instead use echo "$SRC") and leave it unquoted when you want the wildcards expanded. Basically, always put quotes around things which might contain shell metacharacters unless you want the metacharacters interpreted. :)
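As a concrete sketch of that rule inside the script (the invocation quotes the pattern; expansion is delayed until the loop wants it):
SRC="$1"            # keep the pattern intact, e.g. /home/dev/Examples/*.pdf
DEST="$2"
for f in $SRC; do   # unquoted on purpose: this is where the glob should expand
    echo -e "\t ARG1=$f ARG2=$DEST\n"
done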
