A "try" shell script - linux

So, the idea is to have a script that tries to run a command and, if the command fails, shows any warnings/errors. My attempt:
$ cat try.sh
#! /bin/sh
tempfile=`tempfile 2>/dev/null` || tempfile=/tmp/temp$$
trap 'rm -f $tempfile >/dev/null 2>&1' 0
trap 'exit 2' 1 2 3 15
echo "$#"
if ! "$#" >$tempfile 2>&1; then
cat $tempfile;
false;
fi
Do you think that this script is ok (wrt portability and functionality)?

Some changes I would make:
Use "$#" as Steve Emmerson suggested
Don't redirect stdout of tempfile to /dev/null; that's what you're trying to capture in the variable!
Consider mktemp; it is more portable.
Capture and exit with actual exit code of command, so information is not lost.
E.g., without error checks,
tempfile=`mktemp 2>/dev/null || echo /tmp/tempfile$$`
[ -w "$tempfile" ] || { echo "Can't make tempfile" >&2; exit 1; }
"$#" 2> $tempfile
rc=$?
case $rc in
0) ;;
*) cat "$tempfile" >&2 ;;
esac
rm -f "$tempfile"
exit $rc

I would enclose the $@ in double quotes in the "if" statement in order to preserve word boundaries.
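Putting those suggestions together, here is a minimal sketch of the wrapper (untested, and assuming mktemp is available; adjust the fallback path to taste):
#!/bin/sh
# try.sh -- run a command, show its captured output only if it fails,
# and exit with the command's own status.
tempfile=$(mktemp 2>/dev/null) || tempfile=/tmp/try$$
trap 'rm -f "$tempfile"' 0
trap 'exit 2' 1 2 3 15
"$@" >"$tempfile" 2>&1
rc=$?
if [ "$rc" -ne 0 ]; then
    cat "$tempfile" >&2
fi
exit "$rc"
Invoked as ./try.sh make all, it stays silent on success and dumps the captured output to stderr on failure, preserving the command's exit code.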

Related

BASH getopt inside bash function

I would like to put my getopt call into a function so I can make my script a bit tidier. I've read a few guides (e.g. "Using getopts inside a Bash function"), but they seem to be about getopts rather than getopt, and I cannot get my head around it.
I have the following getopt call at the start of my script
#-------------------------------------------------------------------------------
# Main
#-------------------------------------------------------------------------------
getopt_results=$( getopt -s bash -o e:h --long ENVIRONMENT:,HELP:: -- "$@" )
if test $? != 0
then
echo "Failed to parse command line unrecognized option" >&2
Usage
exit 1
fi
eval set -- "$getopt_results"
while true
do
case "$1" in
-e | --ENVIRONMENT)
ENVIRONMENT="$2"
if [ ! -f "../properties/static/build_static.${ENVIRONMENT}.properties" -o ! -f "../properties/dynamic/build_dynamic.${ENVIRONMENT}.properties" ]; then
echo "ERROR: Unable to open properties file for ${ENVIRONMENT}"
echo "Please check they exist or supply a Correct Environment name"
Usage
exit 1
else
declare -A props
readpropsfile "../properties/dynamic/dynamic.${ENVIRONMENT}.properties"
readpropsfile "../properties/static/static.${ENVIRONMENT}.properties"
fi
shift 2
;;
-h | --HELP)
Usage
exit 1
;;
--)
shift
break
;;
*)
echo "$0: unparseable option $1"
Usage
exit 1
;;
esac
done
When I put the whole lot in a function, say called parse_command_line(), and call it with parse_command_line "$@", my script dies because it cannot work out the parameters it was called with. I have tried making OPTIND local as per some of the guides. Any advice? Thanks.
getopt shouldn't be used, but the bash-aware GNU version works fine inside a function, as demonstrated below:
#!/usr/bin/env bash
main() {
    local getopt_results
    getopt_results=$(getopt -s bash -o e:h --long ENVIRONMENT:,HELP:: "$@")
    eval "set -- $getopt_results" # this is less misleading than the original form
    echo "Positional arguments remaining:"
    if (( $# )); then
        printf ' - %q\n' "$@"
    else
        echo " (none)"
    fi
}
main "$@"
...when saved as getopt-test and run as:
./getopt-test -e foo=bar hello cruel world
...properly emits:
Positional arguments remaining:
- -e
- foo=bar
- --
- hello
- cruel
- world
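For the original script, the same idea extended to the full parse loop might look something like the sketch below (Usage is the function from the question and is assumed to be defined elsewhere; this is an illustration, not a drop-in replacement):
#!/usr/bin/env bash
parse_command_line() {
    local getopt_results
    getopt_results=$(getopt -s bash -o e:h --long ENVIRONMENT:,HELP:: -- "$@") || {
        echo "Failed to parse command line: unrecognized option" >&2
        Usage
        exit 1
    }
    eval "set -- $getopt_results"
    while true; do
        case "$1" in
            -e | --ENVIRONMENT)
                ENVIRONMENT="$2"   # not declared local, so the caller can see it
                shift 2
                ;;
            -h | --HELP)
                Usage
                exit 1
                ;;
            --)
                shift
                break
                ;;
            *)
                echo "$0: unparseable option $1" >&2
                Usage
                exit 1
                ;;
        esac
    done
}
parse_command_line "$@"   # forwarding "$@" is the crucial part
The two points that usually trip people up are forwarding "$@" to the function and remembering that the external getopt (unlike the getopts builtin) does not use OPTIND at all, so making OPTIND local has no effect here.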

Can't parse a string with brace expansion operations into a command

I have a problem with a shell script.
In our office we expose only a few commands to devs when they ssh to the server. It is configured with the help of the .ssh/authorized_keys file, and the command available to the user there is a bash script:
#!/bin/sh
if [[ $1 == "--help" ]]; then
cat <<"EOF"
This script has the purpose to let people remote execute certain commands without logging into the system.
For this they NEED to have a homedir on this system and uploaded their RSA public key to .ssh/authorized_keys (via ssh-copy-id)
Then you can alter that file and add some commands in front of their key eg :
command="/usr/bin/dev.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
The user will do the following : ssh testuser@server tail testserver.example.com/2017/01/01/user.log
EOF
exit 0;
fi
# set global variable
set $SSH_ORIGINAL_COMMAND
# set the syslog path where the files can be found
PATH="/opt/syslog/logs"
# strip ; or any other unwanted signs out of the command, this prevents them from breaking out of the setup command
if [[ $1 != "" ]]; then
COMMAND=$1
COMMAND=${COMMAND//[;\`]/}
fi
if [[ $2 != "" ]]; then
ARGU1=$2
ARGU1=${ARGU1//[;\`]/}
fi
if [[ $3 != "" ]]; then
ARGU2=$3
ARGU2=${ARGU2//[;\`]/}
fi
if [[ $4 != "" ]]; then
ARGU3=$4
ARGU3=${ARGU3//[;\`]/}
fi
# checking for the commands
case "$COMMAND" in
less)
ARGU2=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
exit 1;
fi
#echo " --------------------------------- LESS $FILE"
/usr/bin/less $FILE
;;
grep)
if [[ $ARGU2 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
if [[ $ARGU1 == "" ]]; then
echo "Pls give a string to search for"
exit 1
fi
ARGU2=${ARGU2//\.\./}
FILE=$PATH/$ARGU2
/usr/bin/logger -t restricted-command -- "------- $USER Executing grep $ARGU1 \"$ARGU2\" $FILE"
if [ ! -f $FILE ]; then
echo "File doesn't exist"
/usr/bin/logger -t restricted-command -- "$USER Executing $@"
exit 1;
fi
/bin/grep $ARGU1 $FILE
;;
tail)
if [[ $ARGU1 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
ARGU1=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
/usr/bin/logger -t restricted-command -- "$USER Executing $@ ($FILE)"
exit 1;
fi
/usr/bin/tail -f $FILE
;;
cat)
ARGU2=${ARGU1//\.\./}
FILE=$PATH/$ARGU1
if [ ! -f $FILE ]; then
echo "File doesn't exist"
exit 1;
fi
/bin/cat $FILE
;;
help)
/bin/cat <<"EOF"
# less LOGNAME (eg less testserver.example.com/YYYY/MM/DD/logfile.log)
# grep [ARGUMENT] LOGNAME
# tail LOGNAME (eg tail testserver.example.com/YYYY/MM/DD/logfile.log)
# cat LOGNAME (eg cat testserver.example.com/YYYY/MM/DD/logfile.log)
In total the command looks like this : ssh user#testserver.example.com COMMAND [ARGUMENT] LOGFILE
EOF
/usr/bin/logger -t restricted-command -- "$USER HELP requested $@"
exit 1
;;
*)
/usr/bin/logger -s -t restricted-command -- "$USER Invalid command $@"
exit 1
;;
esac
/usr/bin/logger -t restricted-command -- "$USER Executing $@"
The problem is this:
When I try to execute some command, it takes only the first argument; if I refer to several files using brace expansion like {n,n1,n2}, it doesn't work:
[testuser@local ~]$ ssh testuser@syslog.server less srv1838.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
0
[testuser@local ~]$ ssh testuser@syslog.server less srv2010.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
11591
[testuser@local ~]$ ssh testuser@syslog.server less srv{1838,2010}.example.com/2017/02/10/local1.log |grep 'srv2010' | wc -l
0
[testuser@local ~]$ ssh testuser@syslog.server less srv{2010,1838}.example.com/2017/02/21/local1.log |grep 'srv2010' | wc -l
11591
Could someone help me parse/count the command arguments to make this work?
Thank you and have a nice day!
The number of arguments for a bash script would be $#. As a quick example:
#!/bin/bash
narg=$#
typeset -i i
i=1
while [ $i -le $narg ] ; do
    echo " $# $i: $1"
    shift
    i=$i+1
done
gives, for bash tst.sh a b {c,d}
4 1: a
3 2: b
2 3: c
1 4: d
In your script, the command to execute (cat, less, ...) explicitly gets only the second argument to the script. If you want to read all arguments, you should do something like this (note: only a hint; all sorts of checks have been removed):
command="$1"
shift
case $command in
(grep) pattern="$1"
shift
while [ $# -gt 0 ] ; do
grep "$pattern" "$1"
shift
done
;;
esac
Note: I added some quotes as a comment suggested, but, this being only a hint, you should look carefully at quoting and at the checks in your own script.
The less command is working now:
case "$COMMAND" in
less)
if [[ $ARGU1 == "" ]]; then
echo "Pls give a filename"
exit 1
fi
FILES_LIST=${#:2}
FILE=(${FILES_LIST//\.\./})
for v in "${FILE[#]}";do
v=${v//[;\']/}
if [ ! -f $v ]; then
echo "File doesn't exist"
fi
/usr/bin/less $PATH/$v
done;;
The tail command also works with 2 or more files now, but unfortunately I can't execute tail -f on two files.
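As a side note (an assumption on my part, not something from the thread): GNU tail accepts several files in one invocation and prints a header before each, so the tail branch could collect the remaining arguments the same way the less branch now does and hand them all to a single tail -f, roughly like this sketch:
tail)
    if [[ $ARGU1 == "" ]]; then
        echo "Pls give a filename"
        exit 1
    fi
    FILES_LIST=${@:2}
    FILES=()
    for v in ${FILES_LIST//\.\./}; do
        v=${v//[;\']/}
        if [ -f "$PATH/$v" ]; then
            FILES+=("$PATH/$v")
        else
            echo "File $v doesn't exist"
        fi
    done
    # GNU tail prints "==> file <==" headers when following more than one file
    if [ ${#FILES[@]} -gt 0 ]; then
        /usr/bin/tail -f "${FILES[@]}"
    fi
    ;;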

Bash Script not detecting failed exit codes

I can't get my bash script (a command-logging script) to detect any exit code other than 0, so the count of failed commands isn't being incremented, but the success count is incremented regardless of whether the command failed or succeeded.
Here is the code:
#!/bin/bash
#Script for Homework 8
#Created by Greg Kendall on 5/10/2016
file=$$.cmd
signal() {
    rm -f $file
    echo
    echo "User Aborted by Control-C"
    exit
}
trap signal 2
i=0
success=0
fail=0
commands=0
read -p "$(pwd)$" "command"
while [ "$command" != 'exit' ]
do
    $command
    ((i++))
    echo $i: "$command" >> $file
    if [ "$?" -eq 0 ]
    then
        ((success++))
        ((commands++))
    else
        ((fail++))
        ((commands++))
    fi
    read -p "$(pwd)" "command"
done
if [ "$command" == 'exit' ]
then
    rm -f $file
    echo commands:$commands "(successes:$success, failures:$fail)"
fi
Any help would be greatly appreciated. Thanks!
That's because echo $i: "$command" always succeeds.
The exit status $? in if [ "$?" -eq 0 ] is actually the exit status of echo, the command run immediately before the check.
So do the test immediately after the command:
$command
if [ "$?" -eq 0 ]
and use echo elsewhere
Or, if you prefer, you don't need the $? check at all; you can run the command and check its status with if alone:
if $command; then .....; else ....; fi
If you do not want to see its stdout and stderr:
if $command &>/dev/null; then .....; else ....; fi
Note that, as @Charles Duffy mentioned in the comments, you should not run commands from variables.
Your code is correctly counting the number of times that the echo $i: "$command" command fails. I presume that you would prefer to count the number of times that $command fails. In that case, replace:
$command
((i++))
echo $i: "$command" >> $file
if [ "$?" -eq 0 ]
With:
$command
code=$?
((i++))
echo $i: "$command" >> $file
if [ "$code" -eq 0 ]
Since $? captures the exit code of the previous command, it should be placed immediately after the command whose code we want to capture.
Improvement
To make sure that the value of $? is captured before any other command is run, Charles Duffy suggests placing the assignment on the same line as the command like so:
$command; code=$?
((i++))
echo $i: "$command" >> $file
if [ "$code" -eq 0 ]
This should make it less likely that any future changes to the code would separate the command from the capture of the value of $?.
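Slotted into the full loop from the question, a corrected sketch of the body (everything else unchanged) would be:
while [ "$command" != 'exit' ]
do
    $command; code=$?          # capture the status of $command immediately
    ((i++))
    echo $i: "$command" >> $file
    if [ "$code" -eq 0 ]
    then
        ((success++))
    else
        ((fail++))
    fi
    ((commands++))             # was duplicated in both branches, so hoisted here
    read -p "$(pwd)" "command"
done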

Segmentation fault on bash script

I have a bash script that shows "Segment Violation" on the line
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
but when sp-sc-auth is executed from a terminal it works fine.
I set:
set -o pipefail
set -o errexit
set -o xtrace
set -o nounset
and the script continues executing but throws that "Segment Violation" error...
The system is 64-bit Debian.
Thanks in advance. Regards.
The ugly code:
#!/usr/bin/env bash
# Init
set -o pipefail
set -o errexit
#set -o xtrace
set -o nounset
__DIR__="$(cd "$(dirname "${0}")"; echo $(pwd))"
__BASE__="$(basename "${0}")"
__FILE__="${__DIR__}/${__BASE__}"
ARG1="${1:-Undefined}"
display_usage() {
scriptName=$(basename $0)
echo -e "Uso:\n "${scriptName}" [6,7,8,9,10 o 12]"
echo "Sin especificar el canal, búsqueda de retransmisiones"
}
parse_arenavision() {
url="http://www.arenavision.in/agenda"
if ! av=$(curl -s "${url}");then
echo "Sin conexión"
exit 1
fi
started="off"
declare -a _list
element=""
while read line
do
if [[ $line =~ (([0-9][0-9]+/[0-9]+/[0-9]+.*)) ]]; then
element=$(echo "${BASH_REMATCH[0]}" | sed -r 's#CET|AV([^6789]|1[02])##g; s#<br />##g; s#//|&.*;##g; s#/\s*$##g; s#INGLATERRA/PREMIER LEAGUE#PREMIER#g; s#ITALIA/SERIE A#SERIE A#g; s#ITALIA/SERIE A#SERIE A#g;' | tr -dc '[:print:]')
if [[ "${element}" =~ (.*AV[6789]|.*AV10|.*AV12) ]]; then
_list+=("${element}")
fi
started="on"
else
if [[ ${started} == "on" ]]; then
break
fi
fi
done <<< "${av}"
for i in "${_list[#]}"; do
if [[ "${i}" =~ (.*BALONCESTO.*) ]]; then
echo -e "\e[92m${i}\e[0m"
elif [[ "${i}" =~ (.*LIGA BBVA.*) ]]; then
echo -e "\e[37m${i}\e[0m"
else
echo "${i}"
fi
done
}
case $ARG1 in
"Undefined" )
parse_arenavision
exit 0
;;
[6789] )
page="${ARG1}"
;;
10 )
page="${ARG1}"
;;
* )
display_usage
exit 1
;;
esac
# Delete "zombies"
if pgrep -f "sp-sc"
then
kill -9 `pgrep -f "sp-sc-auth"`
fi
url="http://www.arenavision.in/arenavision$page"
# Get url content and url sop
if ! content=$(curl -s "${url}");then
echo "Sin conexión"
fi
if [[ $content =~ (sop://([A-Za-z0-9_]+|\.)+:[0-9]+) ]]; then
sopUrl=${BASH_REMATCH[1]}
else
echo "No se ha encontrado la url"
exit 1
fi
# Connect ArenaVision 1
children=""
trap 'kill $children 1>/dev/null 2>&1; exit 143' EXIT
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
children="$!"
# Check if exists
line='[ ]'
for i in {0..15}
do
replace="${line/ /#}"
line=$replace
echo -ne "Comprobando sopcast ${replace}" \\r
sleep 1
done
echo -ne "\033[2K"
if ! kill -0 "${children}" 1>/dev/null 2>&1; then
echo "Sin emisión"
exit 1
else
echo -ne "Comprobando sopcast [ OK ]" \\r
echo
fi
# Open VLC player
line='[ ]'
for i in {0..25}
do
replace="${line/ /#}"
line=$replace
echo -ne "Cargando reproductor ${replace}" \\r
sleep 1
done
if ! kill -0 "${children}" 1>/dev/null 2>&1; then
echo "Fallo en recepción"
exit 1
else
vlc http://localhost:8908/tv.asf 1>/dev/null 2>&1
echo -ne "\033[2K"
fi
exit 0
errexit cannot work on programs run in the background, so this is unsurprising -- the inline command is simply starting a background process, and that (starting a background process) succeeds, even if the process itself subsequently fails.
If you call wait $! subsequently, then errexit will be able to take effect, as the wait call will exit with the exit status of the program itself. (Of course, if you can call wait $!, then this raises the question of why you were backgrounding the initial program to start with).
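A minimal sketch of that wait-based variant, keeping the rest of the script as-is:
sp-sc-auth "${sopUrl}" 8809 8908 > /dev/null &
children="$!"
# ... progress loop, other work ...
wait "$children"    # returns sp-sc-auth's exit status, so errexit can now react to it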
If you always want to kill the parent script if the child fails, you can do this instead:
(sp-sc-auth "$sopUrl" 8809 8908 >/dev/null || kill $$) &
$$ evaluates to the PID of the parent shell, not the subshell, so this will act accordingly.
As for the segfault itself, "program X segfaults" is a question too vague to be addressed here. To even start debugging that, you'd need to collect the core dump created on its failure (enabling cores if necessary), install debug symbols for sopcast, and use gdb to collect a stack trace from the core file created on failure.
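A rough outline of those steps, assuming a classic core file lands in the current directory (the exact location depends on /proc/sys/kernel/core_pattern, and the debug-symbol package name varies by distribution):
ulimit -c unlimited                                # allow core dumps in this shell
sp-sc-auth "sop://broker.example:1234" 8809 8908   # reproduce the crash (placeholder URL)
gdb "$(command -v sp-sc-auth)" core                # open the binary together with its core dump
# at the (gdb) prompt, run "bt" to print the stack trace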

Linux: Bash: what does mkdir return

I want to write a simple check around running mkdir to create a dir. First it will check whether the dir already exists; if it does, it will just skip it. If the dir doesn't exist, it will run mkdir; if mkdir fails (meaning the script could not create the dir because it does not have sufficient privileges), it will terminate.
This is what I wrote:
if [ ! -d "$FINALPATH" ]; then
    if [[ `mkdir -p "$FINALPATH"` -ne 0 ]]; then
        echo "\nCannot create folder at $FOLDERPATH. Dying ..."
        exit 1
    fi
fi
However, the 2nd if doesn't seem to be working right (I am expecting 0 as the return value of a successful mkdir). So how do I correctly write the 2nd if? And what does mkdir return on success and on failure?
The result of running
`mkdir -p "$FINALPATH"`
isn't the return code, but the output from the program; $? holds the return code. So you could do
if mkdir -p "$FINALPATH" ; then
    # success
else
    echo Failure
fi
or
mkdir -p "$FINALPATH"
if [ $? -ne 0 ] ; then
    echo Failure
fi
The shorter way would be
mkdir -p "$FINALPATH" || echo failure
also idiomatic:
if mkdir -p "$FINALPATH"
then
    # .....
fi
Likewise you can use while ...; do ...; done or until ...; do ...; done.
Just for completeness, you can exit by issuing:
mkdir -p "$FINALPATH" || { echo "Failure, aborting..." ; exit 1 ; }
Braces are necessary, or else exit 1 would execute in both cases.
Or you can create an abort function like:
errormsg()
{
    echo "$1"
    echo Aborting...
    exit 1
}
And then just call it by issuing:
mkdir -p "$FINALPATH" || errormsg "Failure creating $FINALPATH"
Edited: braces, not parentheses, as parentheses would only exit a subshell (thanks @Charles Duffy).
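A tiny illustration of the difference:
mkdir -p "$FINALPATH" || ( echo "Failure, aborting..." ; exit 1 )   # exit only leaves the subshell; the script carries on
mkdir -p "$FINALPATH" || { echo "Failure, aborting..." ; exit 1 ; } # exit terminates the script itself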
