How to implement the tree command in a bash/shell script? - linux

I tried to implement, in bash, some code whose output would be similar to that of the "tree" command in the terminal.
Here it is:
listContent() {
    local file
    for file in "$1"/*; do
        if [ -d $file ]; then
            countSpaces $file
            echo $?"Directory: $file"
            listContent "$file"
        elif [ -f $file ]; then
            countSpaces $file
            echo $?"File: $file"
        fi
    done
}
countSpaces() {
    local space=" "
    for (( i=0; i<${#$1}; i++ )); do
        if [ ${file:$i:1} = "/" ]; then
            space = space + space
            return space
    done
}
listContent "$1"
I run the script like this: ./scriptName.sh directoryName
where scriptName.sh is my script and directoryName is the argument naming the directory from which the code should start.
I would like to see the output like this:
Directory: Bash/Test1Dir
File: Bash/Test1Dir/Test1Doc.txt
Directory: Bash/Test2Dir
Directory: Bash/Test2Dir/InsideTest2DirDir
File: Bash/Test2Dir/insideTest2DirDoc.txt
File: Bash/test.sh
But I am having trouble completing this code. Could someone help me figure out why it isn't working and what I should change?
I would be grateful.

A correct and efficient implementation might look like:
listContent() {
    local dir=${1:-.} whitespacePrefix=$2 file
    for file in "$dir"/*; do
        [ -e "$file" ] || [ -L "$file" ] || continue
        if [ -d "$file" ]; then
            printf '%sDirectory %q\n' "$whitespacePrefix" "${file##*/}"
            listContent "$file" "${whitespacePrefix} "
        else
            printf '%sFile %q\n' "$whitespacePrefix" "${file##*/}"
        fi
    done
}
Note:
Instead of counting spaces, we use the call stack to track the amount of whitespace, appending on each recursive call. This avoids needing to count the number of /s in each name.
We quote all parameter expansions, except in one of the limited number of contexts where string-splitting and glob expansions are implicitly avoided.
We avoid attempts to use $? for anything other than its intended purpose of tracking numeric exit status.
We use printf %q whenever uncontrolled data (such as a filename) is present, to ensure that even malicious names (containing newlines, cursor-control characters, etc) are printed unambiguously.
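For instance, a hypothetical run against the directory layout from the question might look like this (illustrative, not real output):
$ listContent Bash
Directory Test1Dir
 File Test1Doc.txt
Directory Test2Dir
 Directory InsideTest2DirDir
 File insideTest2DirDoc.txt
File test.sh
Note that, unlike the sample output in the question, this prints indented basenames rather than full paths.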

If you want a visual representation without the Directory and File leaders, then the following is a simple one-liner (wrapped in a shell function):
treef() (
    [ -d "$1" ] && { dir="$1"; shift; } || dir='.'
    find "$dir" "$@" | sed -e 's#/#|#g;s/^\.|//;s/[^|][^|]*|/ |/g;/^[. |]*$/d'
)


bash script that checks if file exists [duplicate]

This checks if a file exists:
#!/bin/bash
FILE=$1
if [ -f $FILE ]; then
    echo "File $FILE exists."
else
    echo "File $FILE does not exist."
fi
How do I only check if the file does not exist?
The test command (written as [ here) has a "not" logical operator, ! (exclamation mark):
if [ ! -f /tmp/foo.txt ]; then
    echo "File not found!"
fi
Bash File Testing
-b filename - Block special file
-c filename - Special character file
-d directoryname - Check for directory Existence
-e filename - Check for file existence, regardless of type (node, directory, socket, etc.)
-f filename - Check for regular file existence not a directory
-G filename - Check if file exists and is owned by effective group ID
-g filename - True if file exists and is set-group-id
-k filename - Sticky bit
-L filename - Symbolic link
-O filename - True if file exists and is owned by the effective user id
-r filename - Check if file is readable
-S filename - Check if file is socket
-s filename - Check if file is nonzero size
-u filename - Check if file set-user-id bit is set
-w filename - Check if file is writable
-x filename - Check if file is executable
How to use:
#!/bin/bash
file=./file
if [ -e "$file" ]; then
    echo "File exists"
else
    echo "File does not exist"
fi
A test expression can be negated by using the ! operator
#!/bin/bash
file=./file
if [ ! -e "$file" ]; then
    echo "File does not exist"
else
    echo "File exists"
fi
Negate the expression inside test (for which [ is an alias) using !:
#!/bin/bash
FILE=$1
if [ ! -f "$FILE" ]
then
    echo "File $FILE does not exist"
fi
The relevant man page is man test or, equivalently, man [ -- or help test or help [ for the built-in bash command.
Alternatively (less commonly used) you can negate the result of test using:
if ! [ -f "$FILE" ]
then
    echo "File $FILE does not exist"
fi
That syntax is described in "man 1 bash" in sections "Pipelines" and "Compound Commands".
[[ -f $FILE ]] || printf '%s does not exist!\n' "$FILE"
Also, it's possible that the file is a broken symbolic link, or a non-regular file, like e.g. a socket, device or fifo. For example, to add a check for broken symlinks:
if [[ ! -f $FILE ]]; then
    if [[ -L $FILE ]]; then
        printf '%s is a broken symlink!\n' "$FILE"
    else
        printf '%s does not exist!\n' "$FILE"
    fi
fi
It's worth mentioning that if you need to execute a single command you can abbreviate
if [ ! -f "$file" ]; then
    echo "$file"
fi
to
test -f "$file" || echo "$file"
or
[ -f "$file" ] || echo "$file"
I prefer the following one-liners, in POSIX-shell-compatible format:
$ [ -f "/$DIR/$FILE" ] || echo "$FILE NOT FOUND"
$ [ -f "/$DIR/$FILE" ] && echo "$FILE FOUND"
For a couple of commands, like I would do in a script:
$ [ -f "/$DIR/$FILE" ] || { echo "$FILE NOT FOUND" ; exit 1 ;}
Once I started doing this, I rarely use the fully typed syntax anymore!
To test file existence, the parameter can be any one of the following:
-e: Returns true if file exists (regular file, directory, or symlink)
-f: Returns true if file exists and is a regular file
-d: Returns true if file exists and is a directory
-h: Returns true if file exists and is a symlink
All the tests below apply to regular files, directories, and symlinks:
-r: Returns true if file exists and is readable
-w: Returns true if file exists and is writable
-x: Returns true if file exists and is executable
-s: Returns true if file exists and has a size > 0
Example script:
#!/bin/bash
FILE=$1
if [ -f "$FILE" ]; then
    echo "File $FILE exists"
else
    echo "File $FILE does not exist"
fi
You can do this:
[[ ! -f "$FILE" ]] && echo "File doesn't exist"
or
if [[ ! -f "$FILE" ]]; then
    echo "File doesn't exist"
fi
If you want to check for both files and folders, use the -e option instead of -f. -e returns true for regular files, directories, sockets, character special files, block special files, etc.
You should be careful about running test for an unquoted variable, because it might produce unexpected results:
$ [ -f ]
$ echo $?
0
$ [ -f "" ]
$ echo $?
1
The recommendation is usually to have the tested variable surrounded by double quotation marks:
#!/bin/sh
FILE=$1
if [ ! -f "$FILE" ]
then
    echo "File $FILE does not exist."
fi
In
[ -f "$file" ]
the [ command does a stat() (not lstat()) system call on the path stored in $file and returns true if that system call succeeds and the type of the file as returned by stat() is "regular".
So if [ -f "$file" ] returns true, you can tell the file does exist and is a regular file or a symlink eventually resolving to a regular file (or at least it was at the time of the stat()).
However if it returns false (or if [ ! -f "$file" ] or ! [ -f "$file" ] return true), there are many different possibilities:
the file doesn't exist
the file exists but is not a regular file (could be a device, fifo, directory, socket...)
the file exists but you don't have search permission to the parent directory
the file exists but the path to access it is too long
the file is a symlink to a regular file, but you don't have search permission to some of the directories involved in the resolution of the symlink.
... any other reason why the stat() system call may fail.
In short, it should be:
if [ -f "$file" ]; then
    printf '"%s" is a path to a regular file or symlink to regular file\n' "$file"
elif [ -e "$file" ]; then
    printf '"%s" exists but is not a regular file\n' "$file"
elif [ -L "$file" ]; then
    printf '"%s" exists, is a symlink but I cannot tell if it eventually resolves to an actual file, regular or not\n' "$file"
else
    printf 'I cannot tell if "%s" exists, let alone whether it is a regular file or not\n' "$file"
fi
To know for sure that the file doesn't exist, we'd need the stat() system call to return with an error code of ENOENT (ENOTDIR, which tells us one of the path components is not a directory, is another case where we can tell the file doesn't exist by that path). Unfortunately the [ command doesn't let us know that. It will return false whether the error code is ENOENT, EACCES (permission denied), ENAMETOOLONG or anything else.
The [ -e "$file" ] test can also be done with ls -Ld -- "$file" > /dev/null. In that case, ls will tell you why the stat() failed, though the information can't easily be used programmatically:
$ file=/var/spool/cron/crontabs/root
$ if [ ! -e "$file" ]; then echo does not exist; fi
does not exist
$ if ! ls -Ld -- "$file" > /dev/null; then echo stat failed; fi
ls: cannot access '/var/spool/cron/crontabs/root': Permission denied
stat failed
At least ls tells me it's not because the file doesn't exist that it fails. It's because it can't tell whether the file exists or not. The [ command just ignored the problem.
With the zsh shell, you can query the error code with the $ERRNO special variable after the failing [ command, and decode that number using the $errnos special array in the zsh/system module:
zmodload zsh/system
ERRNO=0
if [ ! -f "$file" ]; then
    err=$ERRNO
    case $errnos[err] in
        ("") echo exists, not a regular file;;
        (ENOENT|ENOTDIR)
            if [ -L "$file" ]; then
                echo broken link
            else
                echo does not exist
            fi;;
        (*) syserror -p "can't tell: " "$err"
    esac
fi
(beware the $errnos support was broken with some versions of zsh when built with recent versions of gcc).
There are three distinct ways to do this:
Negate the exit status with bash (no other answer has said this):
if ! [ -e "$file" ]; then
    echo "file does not exist"
fi
Or:
! [ -e "$file" ] && echo "file does not exist"
Negate the test inside the test command [ (that is the way most answers before have presented):
if [ ! -e "$file" ]; then
    echo "file does not exist"
fi
Or:
[ ! -e "$file" ] && echo "file does not exist"
Act on the result of the test being negative (|| instead of &&):
Only:
[ -e "$file" ] || echo "file does not exist"
This looks silly (IMO), don't use it unless your code has to be portable to the Bourne shell (like the /bin/sh of Solaris 10 or earlier) that lacked the pipeline negation operator (!):
if [ -e "$file" ]; then
    :
else
    echo "file does not exist"
fi
envfile=.env
if [ ! -f "$envfile" ]
then
echo "$envfile does not exist"
exit 1
fi
To reverse a test, use "!".
That is equivalent to the "not" logical operator in other languages. Try this:
if [ ! -f /tmp/foo.txt ]
then
    echo "File not found!"
fi
Or written in a slightly different way:
if [ ! -f /tmp/foo.txt ]
then echo "File not found!"
fi
Or you could use:
if ! [ -f /tmp/foo.txt ]
then echo "File not found!"
fi
Or, putting it all together:
if ! [ -f /tmp/foo.txt ]; then echo "File not found!"; fi
Which may be written (using the "and" operator, &&) as:
[ ! -f /tmp/foo.txt ] && echo "File not found!"
Which looks shorter like this:
[ -f /tmp/foo.txt ] || echo "File not found!"
The test command works too. This worked for me (based on Bash Shell: Check File Exists or Not):
test -e FILENAME && echo "File exists" || echo "File doesn't exist"
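One caveat with the && ... || chain: it is not a true if/else. If the command after && fails, the command after || runs as well. A contrived sketch:
# /etc exists, so test succeeds; the deliberate failure then triggers the || branch anyway
test -e /etc && false || echo "this still prints"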
This code also works:
#!/bin/bash
FILE=$1
if [ -f "$FILE" ]; then
    echo "File '$FILE' exists"
else
    echo "The file '$FILE' does not exist"
fi
The simplest way:
FILE=$1
[ ! -e "${FILE}" ] && echo "does not exist" || echo "exists"
This shell script also works for finding a file in a directory:
echo "enter file"
read -r a
if [ -s /home/trainee02/"$a" ]
then
    echo "yes. file is there."
else
    echo "sorry. file is not there."
fi
Sometimes it may be handy to use the && and || operators.
Like in (if you have the command "test"):
test -e "$FILE" || echo "File not there!"
or
test -e "$FILE" && echo "File there!"
If you want to use test instead of [], then you can use ! to get the negation:
if ! test -f "$FILE"; then
    echo "does not exist"
fi
You can also group multiple commands in a one-liner:
[ -f "filename" ] || ( echo test1 && echo test2 && echo test3 )
or
[ -f "filename" ] || { echo test1 && echo test2 && echo test3 ;}
If filename doesn't exist, the output will be
test1
test2
test3
Note: ( ... ) runs in a subshell, { ... ;} runs in the same shell.
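A quick sketch of that difference: assignments made inside ( ... ) are lost when the subshell exits, while { ... ;} runs in the current shell:
x=before
( x=subshell ); echo "$x"    # prints: before
{ x=group ;};  echo "$x"     # prints: group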

File checking shell script

A program that takes a file as argument 1 and a time in seconds as argument 2, and then checks:
whether the file exists
whether the file has been changed
#!/bin/bash
file=$1
sleeptime=$2
bool=true
if [ -e $file ]; then
    thetime=$(date -r $file "+%s")
    newtime=$(date -r $file "+%s")
    while "$bool" = true
    do
        sleep $sleeptime
        newtime=$(date -r $file "+%s")
        if [ "$thetime" -ne "$newtime" ]; then
            bool=false
            echo "The file $file was changed"
        fi
        if [ ! -e $file ]; then
            bool=false
            echo "The file $file was deleted"
        fi
    done
fi
if [ ! -e $file ]; then
    while "$bool" = true
    do
        sleep $sleeptime
        if [ -e $file ]; then
            bool=false
            echo "The file $file was created"
        fi
    done
fi
Bash is quite hard to get into; to give you a few pointers:
subcommands in conditions are somewhat tricky. You usually want to run them in a subshell with substitution if the expression contains variables, which is done with $(), not () (explained in this SO question)
while and if need to be separated from do and then, either by a semicolon or by a newline
testing conditions in bash is a bit different from other languages; in your case the -f operator checks whether a file is a regular file (see further file-testing operators)
it's usually better to quote variables in conditions, and variables passed to commands, in case they contain special characters (which would be evaluated otherwise)
I'm not sure what the nature of the "file hasn't been changed" check should be, but probably the simplest approach is to test whether its size changed; for a start, the -s operator checks whether the size of a file is more than zero
as pointed out in a comment, application-specific variables should be lower-case by convention
Your code with these adjustments:
#!/bin/bash
file=$1
sleeptime=$2
while sleep "$sleeptime"; do
    if [ ! -f "$file" ]; then
        touch "$file"
        echo "File $file was created."
    elif [ ! -s "$file" ]; then
        rm "$file"
        echo "File $file was deleted."
    fi
done
That would be something like this:
#!/bin/bash
file=$1
sleeptime=$2
while :; do
    sleep "$sleeptime"
    if [ -f "$file" ] ; then
        if [ "$file" -nt ".tag.$file" ] ; then
            echo "Not removed $file, because it was changed"
        else
            rm "$file"
            rm ".tag.$file"
            echo "File $file was deleted."
        fi
    else
        touch "$file"
        touch ".tag.$file"
        echo "File $file was created."
    fi
done
Notes:
cleaned-up some of the code.
see man test.
uppercase variables are usually used for environment variables.
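For reference, here is a minimal demonstration of the -nt ("newer than") test the tag-file approach above relies on:
touch old
sleep 1
touch new
[ new -nt old ] && echo "new is newer than old"    # prints the message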

Checking root integrity via a script

Below is my script to check root path integrity, to ensure there is no vulnerability in the PATH variable.
#! /bin/bash
if [ ""`echo $PATH | /bin/grep :: `"" != """" ]; then
    echo "Empty Directory in PATH (::)"
fi
if [ ""`echo $PATH | /bin/grep :$`"" != """" ]; then
    echo ""Trailing : in PATH""
fi
p=`echo $PATH | /bin/sed -e 's/::/:/' -e 's/:$//' -e 's/:/ /g'`
set -- $p
while [ ""$1"" != """" ]; do
    if [ ""$1"" = ""."" ]; then
        echo ""PATH contains .""
        shift
        continue
    fi
    if [ -d $1 ]; then
        dirperm=`/bin/ls -ldH $1 | /bin/cut -f1 -d"" ""`
        if [ `echo $dirperm | /bin/cut -c6 ` != ""-"" ]; then
            echo ""Group Write permission set on directory $1""
        fi
        if [ `echo $dirperm | /bin/cut -c9 ` != ""-"" ]; then
            echo ""Other Write permission set on directory $1""
        fi
        dirown=`ls -ldH $1 | awk '{print $3}'`
        if [ ""$dirown"" != ""root"" ] ; then
            echo $1 is not owned by root
        fi
    else
        echo $1 is not a directory
    fi
    shift
done
The script works fine for me and shows all the vulnerable paths defined in the PATH variable. I also want to automate the process of correctly setting the PATH variable based on the above result. Is there a quick method to do that?
For example, on my Linux box, the script gives output as:
/usr/bin/X11 is not a directory
/root/bin is not a directory
whereas my PATH variable has these defined, so I want a delete mechanism to remove them from root's PATH variable. Lots of lengthy ideas come to mind, but I am searching for a quick and "not so complex" method, please.
No offense, but your code is completely broken. You're using quotes in a… creative way, yet in a completely wrong way. Your code is unfortunately subject to pathname expansions and word splitting. And it's really a shame to have insecure code to “secure” your PATH.
One strategy is to (safely!) split your PATH variable into an array, and scan each entry. Splitting is done like so:
IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
See my mock which and How to split a string on a delimiter answers.
With this command you'll have a nice array path_ary that contains each field of PATH.
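A quick sketch of what the split produces, using a made-up PATH value:
PATH_DEMO='/usr/bin:/usr/local/bin:.'
IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH_DEMO")
printf '<%s>\n' "${path_ary[@]}"    # </usr/bin>, then </usr/local/bin>, then <.>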
You can then check whether there's an empty field, or a . field or a relative path in there:
for ((i=0;i<${#path_ary[@]};++i)); do
    if [[ ${path_ary[i]} = ?(.) ]]; then
        printf 'Warning: the entry %d contains the current dir\n' "$i"
    elif [[ ${path_ary[i]} != /* ]]; then
        printf 'Warning: the entry %s is not an absolute path\n' "$i"
    fi
done
You can add more elif's, e.g., to check whether the entry is not a valid directory:
elif [[ ! -d ${path_ary[i]} ]]; then
    printf 'Warning: the entry %s is not a directory\n' "$i"
Now, to check for the permission and ownership, unfortunately, there are no pure Bash ways nor portable ways of proceeding. But parsing ls is very likely not a good idea. stat can work, but is known to have different behaviors on different platforms. So you'll have to experiment with what works for you. Here's an example that works with GNU stat on Linux:
read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}")
You'll want to check that owner_id is 0 (note that it's okay to have a dir path that is not owned by root; for example, I have /home/gniourf/bin and that's fine!). perms is in octal and you can easily check for g+w or o+w with bit tests:
elif [[ $owner_id != 0 ]]; then
    printf 'Warning: the entry %s is not owned by root\n' "$i"
elif ((0022&8#$perms)); then
    printf 'Warning: the entry %s has group or other write permission\n' "$i"
Note the use of 8#$perms to force Bash to understand perms as an octal number.
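A couple of hypothetical values illustrate the bit test:
perms=755
((0022&8#$perms)) && echo "group/other writable"    # no output: 0755 has no group/other write bits
perms=775
((0022&8#$perms)) && echo "group/other writable"    # prints: group/other writable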
Now, to remove them, you can unset path_ary[i] when one of these tests is triggered, and then put all the remaining back in PATH:
else
    # In the else statement, the corresponding entry is good
    unset_it=false
fi
if $unset_it; then
    printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
    unset path_ary[i]
fi
Of course, you'll have unset_it=true as the first instruction of the loop.
And to put everything back into PATH:
IFS=: eval 'PATH="${path_ary[*]}"'
I know that some will cry out loud that eval is evil, but this is a canonical (and safe!) way to join array elements in Bash (observe the single quotes).
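A sketch of the join, using a throwaway array:
demo_ary=(/usr/bin /usr/local/bin /opt/bin)
IFS=: eval 'joined="${demo_ary[*]}"'
echo "$joined"    # /usr/bin:/usr/local/bin:/opt/bin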
Finally, the corresponding function could look like:
clean_path() {
    local path_ary perms owner_id unset_it
    IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
    for ((i=0;i<${#path_ary[@]};++i)); do
        unset_it=true
        read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}" 2>/dev/null)
        if [[ ${path_ary[i]} = ?(.) ]]; then
            printf 'Warning: the entry %d contains the current dir\n' "$i"
        elif [[ ${path_ary[i]} != /* ]]; then
            printf 'Warning: the entry %s is not an absolute path\n' "$i"
        elif [[ ! -d ${path_ary[i]} ]]; then
            printf 'Warning: the entry %s is not a directory\n' "$i"
        elif [[ $owner_id != 0 ]]; then
            printf 'Warning: the entry %s is not owned by root\n' "$i"
        elif ((0022 & 8#$perms)); then
            printf 'Warning: the entry %s has group or other write permission\n' "$i"
        else
            # In the else statement, the corresponding entry is good
            unset_it=false
        fi
        if $unset_it; then
            printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
            unset path_ary[i]
        fi
    done
    IFS=: eval 'PATH="${path_ary[*]}"'
}
This design, with if/elif/.../else/fi is good for this simple task but can get awkward to use for more involved tests. For example, observe that we had to call stat early before the tests so that the information is available later in the tests, before we even checked that we're dealing with a directory.
The design may be changed by using a kind of spaghetti awfulness as follows:
for ((oneblock=1;oneblock--;)); do
    # This block is only executed once.
    # You can exit this block with break at any moment.
    :
done
It's usually much better to use a function instead of this, and return from the function. But because in the following I'm also going to check for multiple entries, I'll need to have a lookup table (associative array), and it's weird to have an independent function that uses an associative array that's defined somewhere else…
clean_path() {
    local path_ary perms owner_id unset_it oneblock
    local -A lookup
    IFS=: read -r -d '' -a path_ary < <(printf '%s:\0' "$PATH")
    for ((i=0;i<${#path_ary[@]};++i)); do
        unset_it=true
        for ((oneblock=1;oneblock--;)); do
            if [[ ${path_ary[i]} = ?(.) ]]; then
                printf 'Warning: the entry %d contains the current dir\n' "$i"
                break
            elif [[ ${path_ary[i]} != /* ]]; then
                printf 'Warning: the entry %s is not an absolute path\n' "$i"
                break
            elif [[ ! -d ${path_ary[i]} ]]; then
                printf 'Warning: the entry %s is not a directory\n' "$i"
                break
            elif [[ ${lookup[${path_ary[i]}]} ]]; then
                printf 'Warning: the entry %s appears multiple times\n' "$i"
                break
            fi
            # Here I'm sure I'm dealing with a directory
            read perms owner_id < <(/usr/bin/stat -Lc '%a %u' -- "${path_ary[i]}")
            if [[ $owner_id != 0 ]]; then
                printf 'Warning: the entry %s is not owned by root\n' "$i"
                break
            elif ((0022 & 8#$perms)); then
                printf 'Warning: the entry %s has group or other write permission\n' "$i"
                break
            fi
            # All tests passed, will keep it
            lookup[${path_ary[i]}]=1
            unset_it=false
        done
        if $unset_it; then
            printf 'Unsetting entry %s: %s\n' "$i" "${path_ary[i]}"
            unset path_ary[i]
        fi
    done
    IFS=: eval 'PATH="${path_ary[*]}"'
}
All this is really safe regarding spaces and glob characters and newlines inside PATH; the only thing I don't really like is the use of the external (and non-portable) stat command.
I'd recommend you get a good book on Bash shell scripting. It looks like you learned Bash from looking at 30 year old system shell scripts and by hacking away. This isn't a terrible thing. In fact, it shows initiative and great logic skills. Unfortunately, it leads you down to some really bad code.
If statements
In the original Bourne shell the [ was a command. In fact, /bin/[ was a hard link to /bin/test. The test command was a way to test certain aspects of a file. For example, test -e $file would return 0 if $file existed and 1 if it didn't.
The if merely took the command after it, and would run the then clause if that command returned an exit code of zero, or the else clause (if it exists) if the exit code wasn't zero.
These two are the same:
if test -e $file
then
    echo "$file exists"
fi
if [ -e $file ]
then
    echo "$file exists"
fi
The important idea is that [ is merely a system command. You don't need these with the if:
if grep -q "foo" $file
then
    echo "Found 'foo' in $file"
fi
Note that I am simply running grep and if grep is successful, I'm echoing my statement. No [ ... ] are necessary.
A shortcut to the if is to use the list operators && and ||. For example:
grep -q "foo" $file && echo "I found 'foo' in $file"
is the same as the above if statement.
Never parse ls
You should never parse the ls command. You should use stat instead. stat gets you all the same information, but in an easily parseable form.
[ ... ] vs. [[ ... ]]
As I mentioned earlier, in the original Bourne shell, [ was a system command. In Kornshell, it was an internal command, and Bash carried it over too.
The problem with [ ... ] is that the shell would first interpolate the command before the test was performed. Thus, it was vulnerable to all sorts of shell issues. The Kornshell introduced [[ ... ]] as an alternative to the [ ... ] and Bash uses it too.
The [[ ... ]] allows Kornshell and Bash to evaluate the arguments before the shell interpolates the command. For example:
foo="this is a test"
bar="test this is"
[ $foo = $bar ] && echo "'$foo' and '$bar' are equal."
[[ $foo = $bar ]] && echo "'$foo' and '$bar' are equal."
In the [ ... ] test, the shell interpolates first which means that it becomes [ this is a test = test this is ] and that's not valid. In [[ ... ]] the arguments are evaluated first, thus the shell understands it's a test between $foo and $bar. Then, the values of $foo and $bar are interpolated. That works.
For loops and $IFS
There's a shell variable called $IFS that sets how read and for loops parse their arguments. Normally, it's set to space/tab/newline, but you can modify this. Since each PATH entry is separated by :, you can set IFS=":" and use a for loop to parse your $PATH.
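For example (saving and restoring IFS so the rest of the script is unaffected):
OLDIFS=$IFS
IFS=":"
for dir in $PATH
do
    printf 'PATH entry: %s\n' "$dir"
done
IFS=$OLDIFS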
The <<< Redirection
The <<< allows you to take a shell variable and pass it as STDIN to the command. These both more or less do the same thing:
statement="This contains the word 'foo'"
echo "$statement" | sed 's/foo/bar/'
statement="This contains the word 'foo'"
sed 's/foo/bar/' <<<"$statement"
Mathematics in the Shell
Using ((...)) allows you to do math, and one of the useful operations is bit masking. I use masks to determine whether certain bits are set in a permission.
For example, if my directory permission is 0755 and I AND it against 0022, I can see whether the group and other write bits are set. Note the leading zeros. They're important, so that these are interpreted as octal values.
Here's your program rewritten using the above:
#! /bin/bash
grep -q "::" <<<"$PATH" && echo "Empty directory in PATH ('::')."
grep -q ":$" <<<"$PATH" && echo "PATH has trailing ':'."
#
# Fix Path Issues
#
path=$(sed -e 's/::/:/g' -e 's/:$//' <<<"$PATH")
OLDIFS="$IFS"
IFS=":"
for directory in $PATH
do
    [[ $directory == "." ]] && echo "Path contains '.'."
    [[ ! -d "$directory" ]] && echo "'$directory' isn't a directory in path."
    mode=$(stat -L -f %04Lp "$directory")   # Differs from system to system
    [[ $(stat -L -f %u "$directory") -ne 0 ]] && echo "Directory '$directory' not owned by root"
    ((mode & 0022)) && echo "Group or Other write permission is set on '$directory'."
done
IFS="$OLDIFS"
I'm not 100% sure what you want to do or mean about PATH Vulnerabilities. I don't know why you care whether a directory is owned by root, and if an entry in the $PATH is not a directory, it won't affect the $PATH. However, one thing I would test for is to make sure all directories in your $PATH are absolute paths.
[[ $directory != /* ]] && echo "Directory '$directory' is a relative path"
The following could do the whole job, and it also removes duplicate entries:
export PATH="$(perl -e 'print join(q{:}, grep{ -d && !((stat(_))[2]&022) && !$seen{$_}++ } split/:/, $ENV{PATH})')"
I like @kobame's answer but if you don't like the perl dependency you can do something similar with:
$ cat path.sh
#!/bin/bash
PATH="/root/bin:/tmp/groupwrite:/tmp/otherwrite:/usr/bin:/usr/sbin"
echo "${PATH}"
OIFS=$IFS
IFS=:
for path in ${PATH}; do
    [ -d "${path}" ] || continue
    paths=( "${paths[@]}" "${path}" )
done
while read -r stat path; do
    [ "${stat:5:1}${stat:8:1}" = '--' ] || continue
    newpath="${newpath}:${path}"
done < <(stat -c "%A:%n" "${paths[@]}" 2>/dev/null)
IFS=${OIFS}
PATH=${newpath#:}
echo "${PATH}"
$ ./path.sh
/root/bin:/tmp/groupwrite:/tmp/otherwrite:/usr/bin:/usr/sbin
/usr/bin:/usr/sbin
Note that this is not portable due to stat not being portable but it will work on Linux (and Cygwin). For this to work on BSD systems you will have to adapt the format string, other Unices don't ship with stat at all OOTB (Solaris, for example).
It doesn't remove duplicates or directories not owned by root either but that can easily be added. The latter only requires the loop to be adapted slightly so that stat also returns the owner's username:
while read -r stat owner path; do
    [ "${owner}${stat:5:1}${stat:8:1}" = 'root--' ] || continue
    newpath="${newpath}:${path}"
done < <(stat -c "%A:%U:%n" "${paths[@]}" 2>/dev/null)

Recursive directory listing in shell without using ls

I am looking for a script that recursively lists all files, using export and readlink and not using ls options. I have tried the following code, but it does not fulfill the purpose. Please can you help?
My code:
#!/bin/bash
for i in `find . -print | cut -d"/" -f2`
do
    if [ -d $i ]
    then
        echo "Hello"
    else
        cd $i
        echo *
    fi
done
Here's a simple recursive function which does a directory listing:
list_dir() {
    local i                  # do not use a global variable in our for loop
                             # ...note that 'local' is not POSIX sh, but even ash
                             # and dash support it.
    [[ -n $1 ]] || set -- .  # if no parameter is passed, default to '.'
    for i in "$1"/*; do      # look at directory contents
        if [ -d "$i" ]; then # if our content is a directory...
            list_dir "$i"    # ...then recurse.
        else                 # if our content is not a directory...
            echo "Found a file: $i"  # ...then list it.
        fi
    done
}
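Hypothetical usage, assuming a scratch directory with one nested file:
mkdir -p demo/sub
touch demo/sub/hello.txt
list_dir demo    # prints: Found a file: demo/sub/hello.txt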
Alternately, if by "recurse", you just mean that you want the listing to be recursive, and can accept your code not doing any recursion itself:
#!/bin/bash
# ^-- we use non-POSIX features here, so shebang must not be #!/bin/sh
while IFS='' read -r -d '' filename; do
    if [ -f "$filename" ]; then
        echo "Found a file: $filename"
    fi
done < <(find . -print0)
Doing this safely calls for using -print0, so that names are separated by NULs (the only character which cannot exist in a filename; newlines within names are valid).

Trying to create a file to call another file for a looped search

I'm attempting to write a script that calls another script and uses it either once, or in a loop, depending on the inputs.
I wrote a script that simply searches a file for a pattern and then prints the file name and lists the lines that the search was found on. That script is here:
#!/bin/bash
if [[ $# -lt 2 ]]
then
    echo "error: must provide 2 arguments."
    exit 1
fi
if [[ ! -e $2 ]]
then
    echo "error: second argument must be a file."
    exit 2
fi
echo "------ File = $2 ------"
grep -ne "$1" "$2"
So now I want to write a new script that will call this if the user enters just one file as the second argument, and will loop and search all the files in the directory if they enter a directory.
So if the input is:
./searchscript if testfile
it'll just use the script but if the input is:
./searchscript if Desktop
It'll search all the files in a loop.
My heart runneth over for you all, as always.
Something like this could work:
#!/bin/bash
do_for_file() {
    grep "$1" "$2"
}
do_for_dir() {
    cd "$2" || exit 1
    for file in *
    do
        do_for "$1" "$file"
    done
    cd ..
}
do_for() {
    where="file"
    [[ -d "$2" ]] && where=dir
    do_for_$where "$1" "$2"
}
do_for "$1" "$2"
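Hypothetical usage of this wrapper, assuming it is saved as ./searchscript and made executable:
./searchscript if testfile    # one file: grep runs once on testfile
./searchscript if Desktop     # a directory: grep runs on each file inside Desktop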
How about this:
#!/bin/bash
dirSearch() {
    for file in $(find "$2" -type f)
    do
        ./searchscript "$1" "$file"
    done
}
if [ -d "$2" ]
then
    dirSearch "$1" "$2"
elif [ -e "$2" ]
then
    ./searchscript "$1" "$2"
fi
Alternatively if you don't want to parse the output of find you can do the following:
#!/bin/bash
if [ -d "$2" ]
then
    find "$2" -type f -exec ./searchscript "$1" {} \;
elif [ -e "$2" ]
then
    ./searchscript "$1" "$2"
fi
er... maybe too simple, but what about letting "grep" do all the work?
#myscript
if [ $# -lt 2 ]
then
    echo "error: must provide 2 arguments."
    exit 1
fi
if [ ! -e "$2" ]
then
    echo "error: second argument must be a file."
    exit 2
fi
echo "------ File = $2 ------"
grep -rne "$1" "$2"
I just added "-r" to the grep invocation: if it's a file, there is no recursion; if it's a dir, grep will recurse into it.
You could even get rid of the argument checks and let grep emit the appropriate error messages (keep the quotes or this will fail):
#myscript
grep -rne "$1" "$2"
Assuming you do not want to search recursively:
#!/bin/bash
location=$1
if [[ -d $location ]]
then
    for file in "$location"/*
    do
        your_script "$file"
    done
else
    # Insert a check for whether $location is a real file and exists, if needed
    your_script "$location"
fi
NOTE1: This has a subtle bug: if some files in the directory start with a ".", as far as I recall the "for *" loop will NOT see them, so you need to use "in $location/* $location/.*" instead.
NOTE2: If you want recursive search, instead use find:
in `find $location`
