creating bash script that runs for every subsequent command - linux

Suppose I created my own bash script with a command called customcmd
I want it so that if I type customcmd into the terminal, every subsequent command following it will also be run through customcmd
so suppose I do
>customcmd
>param1
>param2
>param3
I want this to be the equivalent of
>customcmd
>customcmd param1
>customcmd param2
>customcmd param3
i.e. I want it so that by executing customcmd once, I won't have to type customcmd again, and every line I type afterwards is automatically parsed as parameters to customcmd...
how do I go about achieving this when writing the bash script?

If I understand your question correctly, I'd do the following:
Create a script, e.g. mycommand.sh:
#!/bin/bash
while [[ 1 ]]; do
read -r _INPUT
echo "$_INPUT"
done
initialize an infinite loop
for each iteration, get the user input (whatever it is) and run it through the command you specify in the while loop (if your script needs to parse multiple arguments, you can swap out echo for a function that can handle that)
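For example, here is a minimal sketch along those lines, assuming customcmd already exists as a command or function; the empty-line exit condition is my own addition:
#!/bin/bash
# Hypothetical wrapper loop: each line typed is split into words and passed
# to customcmd as its arguments; an empty line (or Ctrl-D) ends the loop.
while read -r -p "> " -a args && (( ${#args[@]} > 0 )); do
customcmd "${args[@]}"
done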
Hope that helps!

This could be one form of it:
#!/bin/bash
shopt -s extglob
function customcmd {
# Do something with "$@".
echo "$*"
}
while read -er -p ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
if [[ $INPUT == customcmd ]]; then
customcmd
while read -er -p ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
customcmd "$INPUT"
done
fi
done
Or this:
#!/bin/bash
shopt -s extglob
function customcmd {
if [[ $# -gt 0 ]]; then
# Do something with "$@".
echo "$*"
else
local INPUT
while read -er -p ">" INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
customcmd "$INPUT"
done
fi
}
while read -er -p ">" -a INPUT && [[ $INPUT != *([[:blank:]]) ]]; do
case "$INPUT" in
customcmd)
customcmd "${INPUT[@]:1}"
;;
# ...
esac
done
** With arrays, $INPUT is equivalent to ${INPUT[0]}. Some would object to using the shorter form since it's less self-documenting, but it's one of Bash's traditionally accepted shorthands, much like the accepted hacks in Awk, and no wiki or self-appointed veteran Bash user gets to dictate which form should be standard.

Related

how to run shell script to display command line arguments?

#! /usr/bin/bash
for var in "$@"
do
echo "$var"
done
A simple shell script which displays each of the command-line arguments, one at a time, and stops displaying them when it gets an argument whose value is “stop”?
IIUC, try this:
#!/bin/bash
for var in "$@"
do
if [[ "$var" == 'stop' ]]; then
exit 0
else
echo "$var"
fi
done
Process args with a while loop:
#!/usr/bin/env bash
while test -n "$1"; do
test "$1" != 'stop' && echo "$1" && shift || break
done

Execute a process with the same environment variables as another process [duplicate]

This question already has an answer here:
Why variable values are lost after terminating the loop in bash? [duplicate]
I would like to make a script which allows me to execute a command that inherits environment variables from any PID.
Here is the script I made:
#!/bin/sh
VARS=$(cat -A /proc/1/environ | tr "^@" "\n")
COMMAND=""
# sh compatible loop on a variable containing multiple lines
printf %s "$VARS" | while IFS='\n' read -r var
do
if [ "$var" != "" ]; then
export "$var"
fi
done
exec "$@"
I thought exported variables would be available to the child process (created by exec), but this is obviously not the case, because sh my_script.sh printenv doesn't show the environment variables that are in /proc/1/environ.
I also tried the following script :
#!/bin/sh
VARS=$(cat -A /proc/1/environ | tr "^@" "\n")
COMMAND=""
# sh compatible loop on a variable containing multiple lines
printf %s "$VARS" | while IFS='\n' read -r var
do
if [ "$var" != "" ]; then
# Replace 'VAR=var' by 'VAR="var"' for eval
# sed replaces only the first occurrence of the '=' due to the missing /g parameter
escaped=$(echo $var | sed -e 's/=/="/')\"
COMMAND="${COMMAND} ${escaped}"
fi
done
COMMAND="${COMMAND} $@"
eval $COMMAND
However, it looks like eval doesn't export variables even if the evaluated command looks like VAR=value my_command.
How am I supposed to achieve this?
Thanks in advance
That one should work (tested on RHEL 7)
#!/bin/bash
locPROC=$1
locCMD=$2
if [[ -z $locPROC || -z $locCMD ]]; then
exit
fi
if [[ -r /proc/${locPROC}/environ ]]; then
while IFS= read -r -d '' line; do
#Making sure it's properly quoted
locVar="${line/=/=\"}\""
#You probably don't want to mess with those
if [[ ${locVar:0:1} != "_" && ${locVar} != A__z* ]]; then
eval "$locVar"
eval "export ${locVar%%=*}"
fi
done < "/proc/${locPROC}/environ"
$locCMD
else
echo "Environment file is either nonexistent or unreadable"
fi
EDITED : According to comments (still use eval...got to read more :) )
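If you would rather avoid eval entirely, here is a hedged alternative sketch (not from the answer above): export each NUL-delimited entry directly, then exec the remaining arguments as the command. Entries whose names aren't valid identifiers are simply skipped.
#!/bin/bash
# Hypothetical variant: import the environment of PID $1, then run the rest of
# the arguments as the command. Assumes /proc/<pid>/environ is readable.
pid=$1
shift
while IFS= read -r -d '' entry; do
# Each NUL-delimited entry is already in NAME=value form, so export it as-is;
# skip entries that are not valid assignments instead of aborting.
export "$entry" 2>/dev/null || echo "skipping: ${entry%%=*}" >&2
done < "/proc/${pid}/environ"
exec "$@"
Because the loop reads from a redirection rather than a pipe, the exports happen in the current shell and survive until the exec.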

Read command in if clause in bash

elif [ "$arg" == "--file" ] || [ "$arg" == "-f" ] && [[ read var ]]
then
touch $var
I'm writing a bash script which takes a command-line option, either long-form or short-form, along with a file name, and creates an empty file with the touch command. The above snippet is what I tried, but there's an error: "read: unary operator expected". Please help.
This happens for most commands:
$ [[ echo "hello world" ]]
bash: conditional binary operator expected
bash: syntax error near `world"'
This is because [[ .. ]] should be used to compare values, and not to run commands. To run a command, don't wrap it in anything:
$ echo "hello world"
hello world
Applied to your example:
echo "You are expected to type in a value, but you will receive no prompt."
arg="-f"
if [ "$arg" == "--file" ] || [ "$arg" == "-f" ] && read var
then
echo "You entered: $var"
fi
Bash needs to know that it's running a whole command
To make bash aware that it's running a command you can use the backtick syntax (not recommended) or the preferred $() command substitution syntax. Without this syntax, bash is assuming that you're putting two separate strings inside of that condition.
The error you're getting is saying that you are trying to compare two strings without an operator to do so (e.g. -eq or ==).
Here is an example of how to make it recognize your commands:
elif [[ ... ]] && [[ $(read var) ]]
then
However, this won't work. This will evaluate to false. This is because you haven't printed anything out and as such an empty string ("") is falsey.
echo your variable to test its value
elif [[ ... ]] && [[ $(read var; echo $var) ]]
then
This will read into the variable and then test if the user has typed anything into it. If the user doesn't type anything, it will evaluate to false, otherwise, it will evaluate to true and run the body of the elif statement.

Checking root integrity via a script

Below is my script to check root path integrity, to ensure there is no vulnerability in PATH variable.
#! /bin/bash
if [ ""`echo $PATH | /bin/grep :: `"" != """" ]; then
echo "Empty Directory in PATH (::)"
fi
if [ ""`echo $PATH | /bin/grep :$`"" != """" ]; then echo ""Trailing : in PATH""
fi
p=`echo $PATH | /bin/sed -e 's/::/:/' -e 's/:$//' -e 's/:/ /g'`
set -- $p
while [ ""$1"" != """" ]; do
if [ ""$1"" = ""."" ]; then
echo ""PATH contains ."" shift
continue
fi
if [ -d $1 ]; then
dirperm=`/bin/ls -ldH $1 | /bin/cut -f1 -d"" ""`
if [ `echo $dirperm | /bin/cut -c6 ` != ""-"" ]; then
echo ""Group Write permission set on directory $1""
fi
if [ `echo $dirperm | /bin/cut -c9 ` != ""-"" ]; then
echo ""Other Write permission set on directory $1""
fi
dirown=`ls -ldH $1 | awk '{print $3}'`
if [ ""$dirown"" != ""root"" ] ; then
echo $1 is not owned by root
fi
else
echo $1 is not a directory
fi
shift
done
The script works fine for me and shows all the vulnerable paths defined in the PATH variable. I also want to automate the process of correctly setting the PATH variable based on the above result. Is there any quick method to do that?
For example, on my Linux box, the script gives output as:
/usr/bin/X11 is not a directory
/root/bin is not a directory
whereas my PATH variable has these defined, and so I want a delete mechanism to remove them from root's PATH variable. Lots of lengthy ideas come to mind, but I'm searching for a quick and "not so complex" method, please.
No offense, but your code is completely broken. You're using quotes in a… creative way, yet in a completely wrong way. Your code is unfortunately subject to pathname expansion and word splitting. And it's really a shame to have insecure code to “secure” your PATH.
One strategy is to (safely!) split your PATH variable into an array, and scan each entry. Splitting is done like so:
IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
See my mock which and How to split a string on a delimiter answers.
With this command you'll have a nice array path_ary that contains each fields of PATH.
You can then check whether there's an empty field, or a . field or a relative path in there:
for ((i=0;i<${#path_ary[@]};++i)); do
if [[ ${path_ary[i]} = ?(.) ]]; then
printf 'Warning: the entry %d contains the current dir\n' "$i"
elif [[ ${path_ary[i]} != /* ]]; then
printf 'Warning: the entry %s is not an absolute path\n' "$i"
fi
done
You can add more elif's, e.g., to check whether the entry is not a valid directory:
elif [[ ! -d ${path_ary[i]} ]]; then
printf 'Warning: the entry %s is not a directory\n' "$i"
Now, to check for the permission and ownership, unfortunately, there are no pure Bash ways nor portable ways of proceeding. But parsing ls is very likely not a good idea. stat can work, but is known to have different behaviors on different platforms. So you'll have to experiment with what works for you. Here's an example that works with GNU stat on Linux:
read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}")
You'll want to check that owner_id is 0 (note that it's okay to have a dir path that is not owned by root; for example, I have /home/gniourf/bin and that's fine!). perms is in octal and you can easily check for g+w or o+w with bit tests:
elif [[ $owner_id != 0 ]]; then
printf 'Warning: the entry %s is not owned by root\n' "$i"
elif ((0022&8#$perms)); then
printf 'Warning: the entry %s has group or other write permission\n' "$i"
Note the use of 8#$perms to force Bash to understand perms as an octal number.
Now, to remove them, you can unset path_ary[i] when one of these tests is triggered, and then put all the remaining back in PATH:
else
# In the else statement, the corresponding entry is good
unset_it=false
fi
if $unset_it; then
printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
unset path_ary[i]
fi
of course, you'll have unset_it=true as the first instruction of the loop.
And to put everything back into PATH:
IFS=: eval 'PATH="${path_ary[*]}"'
I know that some will cry out loud that eval is evil, but this is a canonical (and safe!) way to join array elements in Bash (observe the single quotes).
Finally, the corresponding function could look like:
clean_path() {
local path_ary perms owner_id unset_it
IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
for ((i=0;i<${#path_ary[@]};++i)); do
unset_it=true
read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}" 2>/dev/null)
if [[ ${path_ary[i]} = ?(.) ]]; then
printf 'Warning: the entry %d contains the current dir\n' "$i"
elif [[ ${path_ary[i]} != /* ]]; then
printf 'Warning: the entry %s is not an absolute path\n' "$i"
elif [[ ! -d ${path_ary[i]} ]]; then
printf 'Warning: the entry %s is not a directory\n' "$i"
elif [[ $owner_id != 0 ]]; then
printf 'Warning: the entry %s is not owned by root\n' "$i"
elif ((0022 & 8#$perms)); then
printf 'Warning: the entry %s has group or other write permission\n' "$i"
else
# In the else statement, the corresponding entry is good
unset_it=false
fi
if $unset_it; then
printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
unset path_ary[i]
fi
done
IFS=: eval 'PATH="${path_ary[*]}"'
}
This design, with if/elif/.../else/fi is good for this simple task but can get awkward to use for more involved tests. For example, observe that we had to call stat early before the tests so that the information is available later in the tests, before we even checked that we're dealing with a directory.
The design may be changed by using a kind of spaghetti awfulness as follows:
for ((oneblock=1;oneblock--;)); do
# This block is only executed once
# You can exit this block with break at any moment
done
It's usually much better to use a function instead of this, and return from the function. But because in the following I'm also going to check for multiple entries, I'll need to have a lookup table (associative array), and it's weird to have an independent function that uses an associative array that's defined somewhere else…
clean_path() {
local path_ary perms owner_id unset_it oneblock
local -A lookup
IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
for ((i=0;i<${#path_ary[@]};++i)); do
unset_it=true
for ((oneblock=1;oneblock--;)); do
if [[ ${path_ary[i]} = ?(.) ]]; then
printf 'Warning: the entry %d contains the current dir\n' "$i"
break
elif [[ ${path_ary[i]} != /* ]]; then
printf 'Warning: the entry %s is not an absolute path\n' "$i"
break
elif [[ ! -d ${path_ary[i]} ]]; then
printf 'Warning: the entry %s is not a directory\n' "$i"
break
elif [[ ${lookup[${path_ary[i]}]} ]]; then
printf 'Warning: the entry %s appears multiple times\n' "$i"
break
fi
# Here I'm sure I'm dealing with a directory
read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}")
if [[ $owner_id != 0 ]]; then
printf 'Warning: the entry %s is not owned by root\n' "$i"
break
elif ((0022 & 8#$perms)); then
printf 'Warning: the entry %s has group or other write permission\n' "$i"
break
fi
# All tests passed, will keep it
lookup[${path_ary[i]}]=1
unset_it=false
done
if $unset_it; then
printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
unset path_ary[i]
fi
done
IFS=: eval 'PATH="${path_ary[*]}"'
}
All this is really safe regarding spaces and glob characters and newlines inside PATH; the only thing I don't really like is the use of the external (and non-portable) stat command.
I'd recommend you get a good book on Bash shell scripting. It looks like you learned Bash from looking at 30 year old system shell scripts and by hacking away. This isn't a terrible thing. In fact, it shows initiative and great logic skills. Unfortunately, it leads you down to some really bad code.
If statements
In the original Bourne shell the [ was a command. In fact, /bin/[ was a hard link to /bin/test. The test command was a way to test certain aspects of a file. For example, test -e $file would return 0 if $file existed and 1 if it didn't.
The if merely took the command after it, and would run the then clause if that command returned an exit code of zero, or the else clause (if it exists) if the exit code wasn't zero.
These two are the same:
if test -e $file
then
echo "$file exists"
fi
if [ -e $file ]
then
echo "$file exists"
fi
The important idea is that [ is merely a system command. You don't need these with the if:
if grep -q "foo" $file
then
echo "Found 'foo' in $file"
fi
Note that I am simply running grep and if grep is successful, I'm echoing my statement. No [ ... ] are necessary.
A shortcut to the if is to use the list operators && and ||. For example:
grep -q "foo" $file && echo "I found 'foo' in $file"
is the same as the above if statement.
Never parse ls
You should never parse the output of the ls command. Use stat instead; it gives you all the same information, but in an easily parseable form.
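For example, a small sketch using GNU stat (format strings differ on BSD/macOS, as noted below):
# Print octal permissions, owner name and owner uid of a directory (GNU stat).
stat -Lc '%a %U %u' /usr/bin
# typical output: 755 root 0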
[ ... ] vs. [[ ... ]]
As I mentioned earlier, in the original Bourne shell, [ was a system command. In Kornshell, it was an internal command, and Bash carried it over too.
The problem with [ ... ] is that the shell would first interpolate the command before the test was performed. Thus, it was vulnerable to all sorts of shell issues. The Kornshell introduced [[ ... ]] as an alternative to the [ ... ] and Bash uses it too.
The [[ ... ]] allows Kornshell and Bash to evaluate the arguments before the shell interpolates the command. For example:
foo="this is a test"
bar="test this is"
[ $foo = $bar ] && echo "'$foo' and '$bar' are equal."
[[ $foo = $bar ]] && echo "'$foo' and '$bar' are equal."
In the [ ... ] test, the shell interpolates first which means that it becomes [ this is a test = test this is ] and that's not valid. In [[ ... ]] the arguments are evaluated first, thus the shell understands it's a test between $foo and $bar. Then, the values of $foo and $bar are interpolated. That works.
For loops and $IFS
There's a shell variable called $IFS that sets how read and for loops split their arguments. Normally, it's set to space/tab/newline, but you can modify this. Since each PATH entry is separated by :, you can set IFS=":" and use a for loop to parse your $PATH.
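A minimal illustration of the idea, saving and restoring IFS around the loop:
OLDIFS="$IFS"
IFS=":"
for directory in $PATH
do
echo "$directory"
done
IFS="$OLDIFS"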
The <<< Redirection
The <<< allows you to take a shell variable and pass it as STDIN to the command. These both more or less do the same thing:
statement="This contains the word 'foo'"
echo "$statement" | sed 's/foo/bar/'
statement="This contains the word 'foo'"
sed 's/foo/bar/'<<<$statement
Mathematics in the Shell
Using ((...)) allows you to do math, and one of the things it lets you do is masking. I use masks to determine whether certain bits are set in the permission.
For example, if my directory permission is 0755 and I AND it against 0022, I can see whether the group and other write permissions are set. Note the leading zeros. That's important, so that these are interpreted as octal values.
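As a quick illustration (the permission value here is just an example):
mode=0775 # group-writable: the 0020 bit is set
if ((mode & 0022)); then
echo "Group or Other write permission is set."
fi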
Here's your program rewritten using the above:
#! /bin/bash
grep -q "::" <<<"$PATH" && echo "Empty directory in PATH ('::')."
grep -q ":$" <<<"$PATH" && echo "PATH has trailing ':'"
#
# Fix Path Issues
#
path=$(sed -e 's/::/:/g' -e 's/:$//'<<<$PATH);
OLDIFS="$IFS"
IFS=":"
for directory in $PATH
do
[[ $directory == "." ]] && echo "Path contains '.'."
[[ ! -d "$directory" ]] && echo "'$directory' isn't a directory in path."
mode=$(stat -L -f %04Lp "$directory") # Differs from system to system
[[ $(stat -L -f %u "$directory") -eq 0 ]] && echo "Directory '$directory' owned by root"
((mode & 0022)) && echo "Group or Other write permission is set on '$directory'."
done
I'm not 100% sure what you want to do or mean about PATH Vulnerabilities. I don't know why you care whether a directory is owned by root, and if an entry in the $PATH is not a directory, it won't affect the $PATH. However, one thing I would test for is to make sure all directories in your $PATH are absolute paths.
[[ $directory != /* ]] && echo "Directory '$directory' is a relative path"
The following could do the whole work and also removes duplicate entries
export PATH="$(perl -e 'print join(q{:}, grep{ -d && !((stat(_))[2]&022) && !$seen{$_}++ } split/:/, $ENV{PATH})')"
I like @kobame's answer but if you don't like the perl-dependency you can do something similar to:
$ cat path.sh
#!/bin/bash
PATH="/root/bin:/tmp/groupwrite:/tmp/otherwrite:/usr/bin:/usr/sbin"
echo "${PATH}"
OIFS=$IFS
IFS=:
for path in ${PATH}; do
[ -d "${path}" ] || continue
paths=( "${paths[@]}" "${path}" )
done
while read -r stat path; do
[ "${stat:5:1}${stat:8:1}" = '--' ] || continue
newpath="${newpath}:${path}"
done < <(stat -c "%A:%n" "${paths[@]}" 2>/dev/null)
IFS=${OIFS}
PATH=${newpath#:}
echo "${PATH}"
$ ./path.sh
/root/bin:/tmp/groupwrite:/tmp/otherwrite:/usr/bin:/usr/sbin
/usr/bin:/usr/sbin
Note that this is not portable due to stat not being portable but it will work on Linux (and Cygwin). For this to work on BSD systems you will have to adapt the format string, other Unices don't ship with stat at all OOTB (Solaris, for example).
It doesn't remove duplicates or directories not owned by root either but that can easily be added. The latter only requires the loop to be adapted slightly so that stat also returns the owner's username:
while read -r stat owner path; do
[ "${owner}${stat:5:1}${stat:8:1}" = 'root--' ] || continue
newpath="${newpath}:${path}"
done < <(stat -c "%A:%U:%n" "${paths[@]}" 2>/dev/null)

use of special characters in if statement in bash

I'm unfamiliar with bash scripting. I wrote a script to check arguments. The code is:
for (( i=1; i<=4; i++ ))
do
if ! [[ "$"$i =~ .*[^0-9].* ]]; then
echo "bad input was $i"
fi
done
Actually I want to filter out non-numerical arguments, but it seems that "$"$i is wrong because the answer is always true or always false, independent of the arguments.
Can anybody tell me what the mistake is?
You seem to be trying to use indirect parameter expansion.
for (( i=1; i<=4; i++ ))
do
if ! [[ ${!i} =~ .*[^0-9].* ]]; then
echo "bad input was $i"
fi
done
However, it's cleaner to just iterate over the parameters directly, rather than over their position:
for arg in "${@:1:4}"; do
if ! [[ $arg =~ .*[^0-9].* ]]; then
echo "bad input was $arg"
fi
done
If condition should be like this:
if [[ ! "$i" =~ [^0-9] ]]; then
OR remove 2 negatives:
if [[ "$i" =~ [0-9] ]]; then
OR use glob:
if [[ "$i" == *[0-9]* ]]; then
Which means $i contains a digit 0-9
Update: Based on your comments it looks like you are looking for BASH variable indirection like this script check-num.sh:
#!/bin/bash
for (( i=1; i<=$#; i++ )); do
[[ "${!i}" != *[0-9]* ]] && echo "bad input was ${!i}"
done
You can run this script as: ./check-num.sh 1 2 x 4 a
Note how ${!i} syntax is being used here to access the variable's $1, $2, $3 etc that is called BASH variable indirection. You shouldn't use $$i for this purpose.
As per BASH manual:
If the first character of parameter is an exclamation point, a level of variable indirection is introduced. Bash uses the value of the variable formed from
the rest of parameter as the name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the
value of parameter itself.
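A tiny illustration of indirection outside of positional parameters (the variable names here are just examples):
name=HOME
echo "${!name}" # prints the value of $HOME, since $name contains the string HOME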
Use something like this:
for i in "$@"; do
[[ $i =~ .*[^0-9].* ]] || echo "bad input was $i"
done
N.B.: It's not necessary to use double quotes around the variable with the [[ builtin.
