How can I test if a file could be marked as executable and run? [duplicate] - linux

I am wondering what the easiest way is to check whether a program is executable with bash, without executing it. It should at least check whether the file has execute rights and is of the same architecture as the current system (for example, not a Windows executable or another unsupported architecture, and not 64-bit if the system is 32-bit).

Take a look at the various test operators (this is for the test command itself, but the built-in BASH and TCSH tests are more or less the same).
You'll notice that -x FILE says FILE exists and execute (or search) permission is granted.
BASH, Bourne, Ksh, Zsh Script
if [[ -x "$file" ]]
then
echo "File '$file' is executable"
else
echo "File '$file' is not executable or was not found"
fi
TCSH or CSH Script:
if ( -x "$file" ) then
echo "File '$file' is executable"
else
echo "File '$file' is not executable or was not found"
endif
To determine the type of file it is, try the file command. You can parse the output to see exactly what type of file it is. Word of warning: sometimes file will return more than one line. Here's what happens on my Mac:
$ file /bin/ls
/bin/ls: Mach-O universal binary with 2 architectures
/bin/ls (for architecture x86_64): Mach-O 64-bit executable x86_64
/bin/ls (for architecture i386): Mach-O executable i386
The file command returns different output depending upon the OS. However, the word executable will appear in the output for executable programs, and usually the architecture will appear too.
Compare the above to what I get on my Linux box:
$ file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped
And a Solaris box:
$ file /bin/ls
/bin/ls: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
In all three, you'll see the word executable and the architecture (x86-64, i386, or 32-bit SPARC).
Addendum
Thank you very much, that seems the way to go. Before I mark this as my answer, can you please guide me as to what kind of shell-script check (i.e., what kind of parsing) I would have to perform on the output of file in order to check whether I can execute a program? If such a test is too difficult to make in general, I would at least like to check whether it's a Linux executable or an OS X (Mach-O) one.
Off the top of my head, you could do something like this in BASH:
if [ -x "$file" ] && file "$file" | grep -q "Mach-O"
then
echo "This is an executable Mac file"
elif [ -x "$file" ] && file "$file" | grep -q "GNU/Linux"
then
echo "This is an executable Linux File"
elif [ -x "$file" ] && file "$file" | grep -q "shell script"
then
echo "This is an executable Shell Script"
elif [ -x "$file" ]
then
echo "This file is merely marked executable, but what type is a mystery"
else
echo "This file isn't even marked as being executable"
fi
Basically, I'm running the test, then if that is successful, I do a grep on the output of the file command. The grep -q means don't print any output, but use the exit code of grep to see whether I found the string. If your system's grep doesn't support -q, you can try grep "regex" > /dev/null 2>&1.
Again, the output of the file command may vary from system to system, so you'll have to verify that these checks work on your system. Also, I'm checking the executable bit: if a file is a binary executable but the executable bit isn't set, I'll say it's not executable. This may not be what you want.
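If you also want to compare the reported architecture against the machine you are running on, a rough cross-check can be built from uname -m. This is only a sketch under stated assumptions, not a definitive test: the strings file prints (x86-64, Intel 80386, aarch64, and so on) differ between systems and file versions, so the patterns will likely need adjusting for yours, and arch_matches is a made-up name.
arch_matches() {
  local f="$1" info machine
  [ -x "$f" ] || return 1
  info=$(file -b "$f") || return 1          # -b: omit the leading filename
  machine=$(uname -m)                       # e.g. x86_64, i686, aarch64
  case "$machine" in
    x86_64)  case "$info" in *x86[-_]64*|*"shell script"*) return 0 ;; esac ;;
    i?86)    case "$info" in *80386*|*i386*|*"shell script"*) return 0 ;; esac ;;
    aarch64) case "$info" in *aarch64*|*ARM*|*"shell script"*) return 0 ;; esac ;;
  esac
  return 1
}
Shell scripts are accepted for any architecture here, since they are not architecture-specific.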

It seems nobody noticed that the -x operator does not distinguish a regular file from a directory.
So to precisely check an executable file, you may use
[[ -f SomeFile && -x SomeFile ]]
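For example (illustrative only; /usr/bin is just a handy directory that happens to carry the execute bit):
file=/usr/bin          # a directory: plain -x alone would succeed here
if [[ -f $file && -x $file ]]; then
  echo "executable regular file"
else
  echo "not an executable regular file"   # this branch runs, since it's a directory
fi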

Testing files, directories and symlinks
The solutions given here fail on either directories or symlinks (or both). On Linux, you can test files, directories and symlinks with:
if [[ -f "$file" && -x $(realpath "$file") ]]; then .... fi
On OS X, you should be able to install coreutils with homebrew and use grealpath.
Defining an isexec function
You can define a function for convenience:
isexec() {
if [[ -f "$1" && -x $(realpath "$1") ]]; then
true;
else
false;
fi;
}
Or simply
isexec() { [[ -f "$1" && -x $(realpath "$1") ]]; }
Then you can test using:
if isexec "$file"; then ... fi

It also seems nobody has pointed out how the -x operator behaves with symlinks: a symlink (or chain of symlinks) whose target is a regular file without execute permission fails the test.

First you need to remember that in Unix and Linux, everything is a file, even directories. For a file to be executable as a command, it needs to satisfy three conditions:
It needs to be a regular file
It needs to have read-permissions
It needs to have execute-permissions
So this can be done simply with:
[ -f "${file}" ] && [ -r "${file}" ] && [ -x "${file}" ]
If your file is a symbolic link to a regular file, the test command operates on the target, not the link name. So the above command tells you whether the file can be used as a command or not, and there is no need to pass the file through realpath or readlink or any of those variants first.
If the file can be executed on the current OS, that is a different question. Some answers above already pointed to some possibilities for that, so there is no need to repeat it here.
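As a convenience, the same test can be wrapped in a small function (the function name and the script path below are made up for illustration):
can_run() { [ -f "$1" ] && [ -r "$1" ] && [ -x "$1" ]; }

if can_run ./deploy.sh; then
  ./deploy.sh
else
  echo "./deploy.sh cannot be run from here" >&2
fi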

To test whether a file itself has the ACL_EXECUTE bit set in any of its permission sets (user, group, others), regardless of where it resides (i.e. even on a tmpfs mounted with the noexec option), use stat -c '%A' to get the permission string and then check whether it contains at least a single “x”:
if [[ "$(stat -c '%A' 'my_exec_file')" == *'x'* ]] ; then
echo 'Has executable permission for someone'
fi
The right-hand side of the comparison may be modified to fit more specific cases, such as *x*x*x* to check whether all classes of users would be able to execute the file if it were placed on a volume mounted with the exec option.
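For instance, a sketch of that stricter variant (the filename is a placeholder; note that setuid/setgid files may show s instead of x and would need extra patterns):
perms=$(stat -c '%A' 'my_exec_file')     # e.g. -rwxr-xr-x
if [[ "$perms" == *x*x*x* ]]; then
  echo 'Everyone (user, group, others) has execute permission'
fi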

This might not be so obvious, but sometimes it is necessary to test whether something is a native executable in order to call it appropriately without spawning an external shell process:
function tkl_is_file_os_exec()
{
[[ ! -x "$1" ]] && return 255
local exec_header_bytes
case "$OSTYPE" in
cygwin* | msys* | mingw*)
# CAUTION:
# The bash version 3.2+ might require a file path together with the extension,
# otherwise it will throw the error: `bash: ...: No such file or directory`.
# So we make a guess to avoid the error.
#
{
read -r -n 4 exec_header_bytes 2> /dev/null < "$1" ||
{
[[ -x "${1%.exe}.exe" ]] && read -r -n 4 exec_header_bytes 2> /dev/null < "${1%.exe}.exe"
} ||
{
[[ -x "${1%.com}.com" ]] && read -r -n 4 exec_header_bytes 2> /dev/null < "${1%.com}.com"
}
} &&
if [[ "${exec_header_bytes:0:3}" == $'MZ\x90' ]]; then
# $'MZ\x90\00' for bash version 3.2.42+
# $'MZ\x90\03' for bash version 4.0+
[[ "${exec_header_bytes:3:1}" == $'\x00' || "${exec_header_bytes:3:1}" == $'\x03' ]] && return 0
fi
;;
*)
read -r -n 4 exec_header_bytes < "$1"
[[ "$exec_header_bytes" == $'\x7fELF' ]] && return 0
;;
esac
return 1
}
# executes script in the shell process in case of a shell script, otherwise executes as usual
function tkl_exec_inproc()
{
if tkl_is_file_os_exec "$1"; then
"$@"
else
. "$@"
fi
return $?
}
myscript.sh:
#!/bin/bash
echo 123
return 123
In Cygwin:
> tkl_exec_inproc /cygdrive/c/Windows/system32/cmd.exe /c 'echo 123'
123
> tkl_exec_inproc /cygdrive/c/Windows/system32/chcp.com 65001
Active code page: 65001
> tkl_exec_inproc ./myscript.sh
123
> echo $?
123
In Linux:
> tkl_exec_inproc /bin/bash -c 'echo 123'
123
> tkl_exec_inproc ./myscript.sh
123
> echo $?
123

Related

Checking if package is installed [duplicate]

How would I validate that a program exists, in a way that will either return an error and exit, or continue with the script?
It seems like it should be easy, but it's been stumping me.
Answer
POSIX compatible:
command -v <the_command>
Example use:
if ! command -v <the_command> > /dev/null 2>&1
then
echo "<the_command> could not be found"
exit
fi
For Bash specific environments:
hash <the_command> # For regular commands. Or...
type <the_command> # To check built-ins and keywords
Explanation
Avoid which. Not only is it an external process you're launching for doing very little (meaning builtins like hash, type or command are way cheaper), you can also rely on the builtins to actually do what you want, while the effects of external commands can easily vary from system to system.
Why care?
Many operating systems have a which that doesn't even set an exit status, meaning that if which foo won't even work there and will always report that foo exists, even if it doesn't (note that some POSIX shells appear to do this for hash too).
Many operating systems make which do custom and evil stuff like change the output or even hook into the package manager.
So, don't use which. Instead use one of these:
command -v foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
type foo >/dev/null 2>&1 || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
hash foo 2>/dev/null || { echo >&2 "I require foo but it's not installed. Aborting."; exit 1; }
(Minor side-note: some will suggest 2>&- is the same 2>/dev/null but shorter – this is untrue. 2>&- closes FD 2 which causes an error in the program when it tries to write to stderr, which is very different from successfully writing to it and discarding the output (and dangerous!))
If your hash bang is /bin/sh then you should care about what POSIX says. type and hash's exit codes aren't terribly well defined by POSIX, and hash is seen to exit successfully when the command doesn't exist (haven't seen this with type yet). command's exit status is well defined by POSIX, so that one is probably the safest to use.
If your script uses bash though, POSIX rules don't really matter anymore and both type and hash become perfectly safe to use. type now has a -P to search just the PATH and hash has the side-effect that the command's location will be hashed (for faster lookup next time you use it), which is usually a good thing since you probably check for its existence in order to actually use it.
As a simple example, here's a function that runs gdate if it exists, otherwise date:
gnudate() {
if hash gdate 2>/dev/null; then
gdate "$@"
else
date "$@"
fi
}
Alternative with a complete feature set
You can use scripts-common to meet your need.
To check if something is installed, you can do:
checkBin <the_command> || errorMessage "This tool requires <the_command>. Install it please, and then run this tool again."
The following is a portable way to check whether a command exists in $PATH and is executable:
[ -x "$(command -v foo)" ]
Example:
if ! [ -x "$(command -v git)" ]; then
echo 'Error: git is not installed.' >&2
exit 1
fi
The executable check is needed because bash returns a non-executable file if no executable file with that name is found in $PATH.
Also note that if a non-executable file with the same name as the executable exists earlier in $PATH, dash returns the former, even though the latter would be executed. This is a bug and is in violation of the POSIX standard. [Bug report] [Standard]
Edit: This seems to be fixed as of dash 0.5.11 (Debian 11).
In addition, this will fail if the command you are looking for has been defined as an alias.
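For instance, in an interactive shell (where aliases are expanded; the gs alias is just an illustration), command -v reports the alias text, so the -x test fails even though the underlying command exists:
$ alias gs='git status'
$ command -v gs
alias gs='git status'
$ [ -x "$(command -v gs)" ] && echo runnable || echo 'not a plain executable'
not a plain executable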
I agree with lhunath about discouraging the use of which, and his solution is perfectly valid for Bash users. However, to be more portable, command -v should be used instead:
$ command -v foo >/dev/null 2>&1 || { echo "I require foo but it's not installed. Aborting." >&2; exit 1; }
The command utility is POSIX compliant. See here for its specification: command - execute a simple command
Note: type is POSIX compliant, but type -P is not.
It depends on whether you want to know whether it exists in one of the directories in the $PATH variable or whether you know the absolute location of it. If you want to know if it is in the $PATH variable, use
if which programname >/dev/null; then
echo exists
else
echo does not exist
fi
otherwise use
if [ -x /path/to/programname ]; then
echo exists
else
echo does not exist
fi
The redirection to /dev/null in the first example suppresses the output of the which program.
I have a function defined in my .bashrc that makes this easier.
command_exists () {
type "$1" &> /dev/null ;
}
Here's an example of how it's used (from my .bash_profile.)
if command_exists mvim ; then
export VISUAL="mvim --nofork"
fi
Expanding on @lhunath's and @GregV's answers, here's the code for the people who want to easily put that check inside an if statement:
exists()
{
command -v "$1" >/dev/null 2>&1
}
Here's how to use it:
if exists bash; then
echo 'Bash exists!'
else
echo 'Your system does not have Bash'
fi
Try using:
test -x filename
or
[ -x filename ]
From the Bash manpage under Conditional Expressions:
-x file
True if file exists and is executable.
To use hash, as @lhunath suggests, in a Bash script:
hash foo &> /dev/null
if [ $? -eq 1 ]; then
echo >&2 "foo not found."
fi
This script runs hash and then checks if the exit code of the most recent command, the value stored in $?, is equal to 1. If hash doesn't find foo, the exit code will be 1. If foo is present, the exit code will be 0.
&> /dev/null redirects standard error and standard output from hash so that it doesn't appear onscreen and echo >&2 writes the message to standard error.
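Equivalently, you can branch on hash's exit status directly instead of inspecting $?:
if ! hash foo &> /dev/null; then
  echo >&2 "foo not found."
fi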
Command -v works fine if the POSIX_BUILTINS option is set for the <command> to test for, but it can fail if not. (It has worked for me for years, but I recently ran into one where it didn't work.)
I find the following to be more failproof:
test -x "$(which <command>)"
Since it tests for three things: path, existence and execution permission.
There are a ton of options here, but I was surprised no quick one-liners. This is what I used at the beginning of my scripts:
[[ "$(command -v mvn)" ]] || { echo "mvn is not installed" 1>&2 ; exit 1; }
[[ "$(command -v java)" ]] || { echo "java is not installed" 1>&2 ; exit 1; }
This is based on the selected answer here and another source.
If you check for program existence, you are probably going to run it later anyway. Why not try to run it in the first place?
if foo --version >/dev/null 2>&1; then
echo Found
else
echo Not found
fi
It's a more trustworthy check that the program runs than merely looking at PATH directories and file permissions.
Plus you can get some useful result from your program, such as its version.
Of course the drawbacks are that some programs can be heavy to start and some don't have a --version option to immediately (and successfully) exit.
Check for multiple dependencies and inform status to end users
for cmd in latex pandoc; do
printf '%-10s' "$cmd"
if hash "$cmd" 2>/dev/null; then
echo OK
else
echo missing
fi
done
Sample output:
latex OK
pandoc missing
Adjust the 10 to the maximum command length. It is not automatic, because I don't see a non-verbose POSIX way to do it:
How can I align the columns of a space separated table in Bash?
Check if some apt packages are installed with dpkg -s and install them otherwise.
See: Check if an apt-get package is installed and then install it if it's not on Linux
It was previously mentioned at: How can I check if a program exists from a Bash script?
I never did get the previous answers to work on the box I have access to. For one, an external type command has been installed there (doing what more does), so the builtin directive is needed. This command works for me:
if [ `builtin type -p vim` ]; then echo "TRUE"; else echo "FALSE"; fi
I wanted the same question answered but to run within a Makefile.
install:
@if [[ ! -x "$(shell command -v ghead)" ]]; then \
echo 'ghead does not exist. Please install it.'; \
exit -1; \
fi
It could be simpler, just:
#!/usr/bin/env bash
set -x
# if local program 'foo' returns 1 (doesn't exist) then...
if ! type -P foo; then
echo 'crap, no foo'
else
echo 'sweet, we have foo!'
fi
Change foo to vi to get the other condition to fire.
hash foo 2>/dev/null: works with Z shell (Zsh), Bash, Dash and ash.
type -p foo: it appears to work with Z shell, Bash and ash (BusyBox), but not Dash (it interprets -p as an argument).
command -v foo: works with Z shell, Bash, Dash, but not ash (BusyBox) (-ash: command: not found).
Also note that builtin is not available with ash and Dash.
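Based on those notes, one hedged way to get a single helper that works across those shells is to try command -v first and fall back to hash (the function name is arbitrary):
have() {
  command -v "$1" >/dev/null 2>&1 || hash "$1" 2>/dev/null
}

have curl && echo "curl is available"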
zsh only, but very useful for zsh scripting (e.g. when writing completion scripts):
The zsh/parameter module gives access to, among other things, the internal commands hash table. From man zshmodules:
THE ZSH/PARAMETER MODULE
The zsh/parameter module gives access to some of the internal hash ta‐
bles used by the shell by defining some special parameters.
[...]
commands
This array gives access to the command hash table. The keys are
the names of external commands, the values are the pathnames of
the files that would be executed when the command would be in‐
voked. Setting a key in this array defines a new entry in this
table in the same way as with the hash builtin. Unsetting a key
as in `unset "commands[foo]"' removes the entry for the given
key from the command hash table.
Although it is a loadable module, it seems to be loaded by default, as long as zsh is not used with --emulate.
example:
martin@martin ~ % echo $commands[zsh]
/usr/bin/zsh
To quickly check whether a certain command is available, just check if the key exists in the hash:
if (( ${+commands[zsh]} ))
then
echo "zsh is available"
fi
Note though that the hash will contain any files in $PATH folders, regardless of whether they are executable or not. To be absolutely sure, you have to spend a stat call on that:
if (( ${+commands[zsh]} )) && [[ -x $commands[zsh] ]]
then
echo "zsh is available"
fi
The which command might be useful. man which
It returns 0 if the executable is found and returns 1 if it's not found or not executable:
NAME
which - locate a command
SYNOPSIS
which [-a] filename ...
DESCRIPTION
which returns the pathnames of the files which would
be executed in the current environment, had its
arguments been given as commands in a strictly
POSIX-conformant shell. It does this by searching
the PATH for executable files matching the names
of the arguments.
OPTIONS
-a print all matching pathnames of each argument
EXIT STATUS
0 if all specified commands are
found and executable
1 if one or more specified commands is nonexistent
or not executable
2 if an invalid option is specified
The nice thing about which is that it figures out if the executable is available in the environment that which is run in - it saves a few problems...
Use Bash builtins if you can: rather than the external
which programname
prefer the builtin
type -P programname
For those interested, none of the methodologies in previous answers work if you wish to detect an installed library. I imagine you are left either with physically checking the path (potentially for header files and such), or something like this (if you are on a Debian-based distribution):
dpkg --status libdb-dev | grep -q not-installed
if [ $? -eq 0 ]; then
apt-get install libdb-dev
fi
As you can see from the above, a "0" answer from the query means the package is not installed. This is a function of "grep" - a "0" means a match was found, a "1" means no match was found.
This will tell according to the location if the program exist or not:
if [ -x /usr/bin/yum ]; then
echo "This is Centos"
fi
I'd say there isn't any portable and 100% reliable way due to dangling aliases. For example:
alias john='ls --color'
alias paul='george -F'
alias george='ls -h'
alias ringo=/
Of course, only the last one is problematic (no offence to Ringo!). But all of them are valid aliases from the point of view of command -v.
In order to reject dangling ones like ringo, we have to parse the output of the shell built-in alias command and recurse into them (command -v is not superior to alias here). There isn't any portable solution for this, and even a Bash-specific solution is rather tedious.
Note that a solution like this will unconditionally reject alias ls='ls -F':
test() { command -v "$1" | grep -qv alias; }
If you can't get the things in the answers here to work and are pulling your hair out, try running the same command using bash -c. Just look at this somnambular delirium. This is what's really happening when you run $(sub-command):
First. It can give you completely different output.
$ command -v ls
alias ls='ls --color=auto'
$ bash -c "command -v ls"
/bin/ls
Second. It can give you no output at all.
$ command -v nvm
nvm
$ bash -c "command -v nvm"
$ bash -c "nvm --help"
bash: nvm: command not found
#!/bin/bash
a=$(apt-cache show program 2>/dev/null)
if [[ -z "$a" ]]
then
echo "the program doesn't exist"
else
echo "the program exists"
fi
# "program" is not literal; change it to the name of the program you want to check
The hash-variant has one pitfall: On the command line you can for example type in
one_folder/process
to have process executed. For this the parent folder of one_folder must be in $PATH. But when you try to hash this command, it will always succeed:
hash one_folder/process; echo $? # will always output '0'
I second the use of "command -v". E.g. like this:
md=$(command -v mkdirhier) ; alias md=${md:=mkdir} # bash
emacs="$(command -v emacs) -nw" || emacs=nano
alias e=$emacs
[[ -z $(command -v jed) ]] && alias jed=$emacs
I had to check if Git was installed as part of deploying our CI server. My final Bash script was as follows (Ubuntu server):
if ! builtin type -p git &>/dev/null; then
sudo apt-get -y install git-core
fi
To mimic Bash's type -P cmd, we can use the POSIX compliant env -i type cmd 1>/dev/null 2>&1.
man env
# "The option '-i' causes env to completely ignore the environment it inherits."
# In other words, there are no aliases or functions to be looked up by the type command.
ls() { echo 'Hello, world!'; }
ls
type ls
env -i type ls
cmd=ls
cmd=lsx
env -i type $cmd 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
If there isn't any external type command available (as taken for granted here), we can use POSIX compliant env -i sh -c 'type cmd 1>/dev/null 2>&1':
# Portable version of Bash's type -P cmd (without output on stdout)
typep() {
command -p env -i PATH="$PATH" sh -c '
export LC_ALL=C LANG=C
cmd="$1"
cmd="`type "$cmd" 2>/dev/null || { echo "error: command $cmd not found; exiting ..." 1>&2; exit 1; }`"
[ $? != 0 ] && exit 1
case "$cmd" in
*\ /*) exit 0;;
*) printf "%s\n" "error: $cmd" 1>&2; exit 1;;
esac
' _ "$1" || exit 1
}
# Get your standard $PATH value
#PATH="$(command -p getconf PATH)"
typep ls
typep builtin
typep ls-temp
At least on Mac OS X v10.6.8 (Snow Leopard) using Bash 4.2.24(2) command -v ls does not match a moved /bin/ls-temp.
My setup for a Debian server:
I had the problem when multiple packages contained the same name.
For example apache2. So this was my solution:
function _apt_install() {
apt-get install -y $1 > /dev/null
}
function _apt_install_norecommends() {
apt-get install -y --no-install-recommends $1 > /dev/null
}
function _apt_available() {
if [ `apt-cache search $1 | grep -o "$1" | uniq | wc -l` = "1" ]; then
echo "Package is available : $1"
PACKAGE_INSTALL="1"
else
echo "Package $1 is NOT available for install"
echo "We cannot continue without this package..."
echo "Exiting now..."
exit 1
fi
}
function _package_install {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install $1
sleep 0.5
fi
fi
}
function _package_install_no_recommends {
_apt_available $1
if [ "${PACKAGE_INSTALL}" = "1" ]; then
if [ "$(dpkg-query -l $1 | tail -n1 | cut -c1-2)" = "ii" ]; then
echo "package is already_installed: $1"
else
echo "installing package : $1, please wait.."
_apt_install_norecommends $1
sleep 0.5
fi
fi
}

Expand a possible relative path in bash

As arguments to my script there are some file paths. Those can, of course, be relative (or contain ~). But for the functions I've written I need paths that are absolute, but do not have their symlinks resolved.
Is there any function for this?
MY_PATH=$(readlink -f "$YOUR_ARG") will resolve relative paths like "./" and "../"
Consider this as well (source):
#!/bin/bash
dir_resolve()
{
cd "$1" 2>/dev/null || return $? # cd to desired directory; if fail, quell any error messages but return exit status
echo "`pwd -P`" # output full, link-resolved path
}
# sample usage
if abs_path="`dir_resolve \"$1\"`"
then
echo "$1 resolves to $abs_path"
echo pwd: `pwd` # function forks subshell, so working directory outside function is not affected
else
echo "Could not reach $1"
fi
http://www.linuxquestions.org/questions/programming-9/bash-script-return-full-path-and-filename-680368/page3.html has the following
function abspath {
if [[ -d "$1" ]]
then
pushd "$1" >/dev/null
pwd
popd >/dev/null
elif [[ -e "$1" ]]
then
pushd "$(dirname "$1")" >/dev/null
echo "$(pwd)/$(basename "$1")"
popd >/dev/null
else
echo "$1 does not exist!" >&2
return 127
fi
}
which uses pushd/popd to get into a state where pwd is useful.
Simple one-liner:
function abs_path {
(cd "$(dirname "$1")" &>/dev/null && printf "%s/%s" "$PWD" "${1##*/}")
}
Usage:
function do_something {
local file=$(abs_path "$1")
printf "Absolute path to %s: %s\n" "$1" "$file"
}
do_something $HOME/path/to/some\ where
I am still trying to figure out how I can get it to be completely oblivious to whether the path exists or not (so it can be used when creating files as well).
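A minimal sketch for that case (purely lexical; it assumes you only need an absolute form and do not care about resolving symlinks or collapsing "." and ".." components):
abs_path_lexical() {
  # Note: an unquoted ~ is normally expanded by the calling shell before
  # the script ever sees it, so only relative paths are handled here.
  case "$1" in
    /*) printf '%s\n' "$1" ;;             # already absolute
    *)  printf '%s/%s\n' "$PWD" "$1" ;;   # prepend the current directory
  esac
}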
This does the trick for me on OS X: $(cd SOME_DIRECTORY 2> /dev/null && pwd -P)
It should work anywhere. The other solutions seemed too complicated.
If your OS supports it, use:
realpath -s "./some/dir"
And using it in a variable:
some_path="$(realpath -s "./some/dir")"
Which will expand your path. Tested on Ubuntu and CentOS, might not be available on yours. Some recommend readlink, but documentation for readlink says:
Note realpath(1) is the preferred command to use for canonicalization functionality.
In case people wonder why I quote my variables, it's to preserve spaces in paths. Like doing realpath some path will give you two different path results. But realpath "some path" will return one. Quoted parameters ftw :)
Thanks to NyanPasu64 for the heads up. You'll want to add -s if you don't want it to follow the symlinks.
Use readlink -f <relative-path>, e.g.
export FULLPATH=`readlink -f ./`
Maybe this is more readable and does not use a subshell and does not change the current dir:
dir_resolve() {
local dir=`dirname "$1"`
local file=`basename "$1"`
pushd "$dir" &>/dev/null || return $? # On error, return error code
echo "`pwd -P`/$file" # output full, link-resolved path with filename
popd &> /dev/null
}
on OS X you can use
stat -f "%N" YOUR_PATH
on linux you might have realpath executable. if not, the following might work (not only for links):
readlink -f YOUR_PATH
There's another method: you can embed Python in a bash script to resolve a relative path.
abs_path=$(python3 - <<END
from pathlib import Path
path = str(Path("$1").expanduser().resolve())
print(path)
END
)
self edit, I just noticed the OP said he's not looking for symlinks resolved:
"But for the functions I've written I need paths that are absolute, but do not have their symlinks resolved."
So guess this isn't so apropos to his question after all. :)
Since I've run into this many times over the years, and this time around I needed a pure bash portable version that I could use on OSX and linux, I went ahead and wrote one:
The living version lives here:
https://github.com/keen99/shell-functions/tree/master/resolve_path
but for the sake of SO, here's the current version (I feel it's well tested..but I'm open to feedback!)
Might not be difficult to make it work for plain bourne shell (sh), but I didn't try...I like $FUNCNAME too much. :)
#!/bin/bash
resolve_path() {
#I'm bash only, please!
# usage: resolve_path <a file or directory>
# follows symlinks and relative paths, returns a full real path
#
local owd="$PWD"
#echo "$FUNCNAME for $1" >&2
local opath="$1"
local npath=""
local obase=$(basename "$opath")
local odir=$(dirname "$opath")
if [[ -L "$opath" ]]
then
#it's a link.
#file or directory, we want to cd into its dir
cd "$odir"
#then extract where the link points.
npath=$(readlink "$obase")
#have to -L BEFORE we -f, because -f includes -L :(
if [[ -L $npath ]]
then
#the link points to another symlink, so go follow that.
resolve_path "$npath"
#and finish out early, we're done.
return $?
#done
elif [[ -f $npath ]]
#the link points to a file.
then
#get the dir for the new file
nbase=$(basename "$npath")
npath=$(dirname "$npath")
cd "$npath"
ndir=$(pwd -P)
retval=0
#done
elif [[ -d $npath ]]
then
#the link points to a directory.
cd "$npath"
ndir=$(pwd -P)
retval=0
#done
else
echo "$FUNCNAME: ERROR: unknown condition inside link!!" >&2
echo "opath [[ $opath ]]" >&2
echo "npath [[ $npath ]]" >&2
return 1
fi
else
if ! [[ -e "$opath" ]]
then
echo "$FUNCNAME: $opath: No such file or directory" >&2
return 1
#and break early
elif [[ -d "$opath" ]]
then
cd "$opath"
ndir=$(pwd -P)
retval=0
#done
elif [[ -f "$opath" ]]
then
cd "$odir"
ndir=$(pwd -P)
nbase=$(basename "$opath")
retval=0
#done
else
echo "$FUNCNAME: ERROR: unknown condition outside link!!" >&2
echo "opath [[ $opath ]]" >&2
return 1
fi
fi
#now assemble our output
echo -n "$ndir"
if [[ "x${nbase:=}" != "x" ]]
then
echo "/$nbase"
else
echo
fi
#now return to where we were
cd "$owd"
return $retval
}
here's a classic example, thanks to brew:
%% ls -l `which mvn`
lrwxr-xr-x 1 draistrick 502 29 Dec 17 10:50 /usr/local/bin/mvn@ -> ../Cellar/maven/3.2.3/bin/mvn
use this function and it will return the -real- path:
%% cat test.sh
#!/bin/bash
. resolve_path.inc
echo
echo "relative symlinked path:"
which mvn
echo
echo "and the real path:"
resolve_path `which mvn`
%% test.sh
relative symlinked path:
/usr/local/bin/mvn
and the real path:
/usr/local/Cellar/maven/3.2.3/libexec/bin/mvn
Do you have to use bash exclusively? I needed to do this and got fed up with differences between Linux and OS X. So I used PHP for a quick and dirty solution.
#!/usr/bin/php <-- or wherever
<?php
{
if($argc!=2)
exit();
$fname=$argv[1];
if(!file_exists($fname))
exit();
echo realpath($fname)."\n";
}
?>
I know it's not a very elegant solution but it does work.

Bash: Create a file if it does not exist, otherwise check to see if it is writeable

I have a bash program that will write to an output file. This file may or may not exist, but the script must check permissions and fail early. I can't find an elegant way to make this happen. Here's what I have tried.
set +e
touch $file
set -e
if [ $? -ne 0 ]; then exit;fi
I keep set -e on for this script so it fails if there is ever an error on any line. Is there an easier way to do the above script?
Why complicate things?
file=exists_and_writeable
if [ ! -e "$file" ] ; then
touch "$file"
fi
if [ ! -w "$file" ] ; then
echo cannot write to $file
exit 1
fi
Or, more concisely,
( [ -e "$file" ] || touch "$file" ) && [ ! -w "$file" ] && echo cannot write to $file && exit 1
Rather than check $? on a different line, check the return value immediately like this:
touch file || exit
As long as your umask doesn't restrict the write bit from being set, you can just rely on the return value of touch
You can use -w to check if a file is writable (search for it in the bash man page).
if [[ ! -w $file ]]; then exit; fi
Why must the script fail early? By separating the writable test and the file open() you introduce a race condition. Instead, why not try to open (truncate/append) the file for writing, and deal with the error if it occurs? Something like:
$ echo foo > output.txt
$ if [ $? -ne 0 ]; then echo >&2 "Couldn't echo foo"; exit 1; fi
As others mention, the "noclobber" option might be useful if you want to avoid overwriting existing files.
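For reference, a quick sketch of what noclobber changes (file names are placeholders):
set -o noclobber
echo data > existing.txt     # fails if existing.txt already exists
echo data >| existing.txt    # >| explicitly overrides noclobber
set +o noclobber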
Open the file for writing. In the shell, this is done with an output redirection. You can redirect the shell's standard output by putting the redirection on the exec built-in with no argument.
set -e
exec >shell.out # exit if shell.out can't be opened
echo "This will appear in shell.out"
Make sure you haven't set the noclobber option (which is useful interactively but often unusable in scripts). Use > if you want to truncate the file if it exists, and >> if you want to append instead.
If you only want to test permissions, you can run : >foo.out to create the file (or truncate it if it exists).
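So a fail-early permission probe could look like the following sketch; note that, as described above, this creates the file if it is missing and truncates it if it already exists:
if ! : > "$file" 2>/dev/null; then
  echo "cannot write to $file" >&2
  exit 1
fi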
If you only want some commands to write to the file, open it on some other descriptor, then redirect as needed.
set -e
exec 3>foo.out
echo "This will appear on the standard output"
echo >&3 "This will appear in foo.out"
echo "This will appear both on standard output and in foo.out" | tee /dev/fd/3
(/dev/fd is not supported everywhere; it's available at least on Linux, *BSD, Solaris and Cygwin.)
