In my script I want to count how many directories and files I have inside different directories. Inside "assignments" I have a lot of directories named "repo1", "repo2" etc. Here is my code:
ls -1 assignments | while read -r repoDir
do
  find assignments/"$repoDir" | grep -v .git | grep "$repoDir"/ | while read -r aPathOfRepoDir
  do
    BASENAME=`basename "$aPathOfRepoDir"`
    if [[ -d "$aPathOfRepoDir" ]]; then
      totalDirectories=$((totalDirectories+1))
    elif [[ -f "$aPathOfRepoDir" ]] && [[ "$BASENAME" == *".txt" ]]; then
      totalTextFiles=$((totalTextFiles+1))
    else
      totalOtherFiles=$((totalOtherFiles+1))
    fi
  done
  echo "total directories: $totalDirectories"
  echo "total text files: $totalTextFiles"
  echo "total other files: $totalOtherFiles"
  totalDirectories=0
  totalTextFiles=0
  totalOtherFiles=0
done
When the while-loop is finished I lose the values of those 3 variables. I know this is happening because the while-loop runs in a subshell, but I don't know how I can somehow "store" the values of the variables for the parent shell. I thought about printing those messages inside the while-loop when I know it's the last "aPathOfRepoDir", but that's kind of a "cheap" way to do it and won't be efficient. Is there another way?
Thanks in advance
The right-hand side of a pipe runs in a subshell. Changes to variables in the subshell don't propagate to the parent shell. Use process substitution instead:
while read -r aPathOfRepoDir
do
  BASENAME=`basename "$aPathOfRepoDir"`
  if [[ -d "$aPathOfRepoDir" ]]; then
    totalDirectories=$((totalDirectories+1))
  elif [[ -f "$aPathOfRepoDir" ]] && [[ "$BASENAME" == *".txt" ]]; then
    totalTextFiles=$((totalTextFiles+1))
  else
    totalOtherFiles=$((totalOtherFiles+1))
  fi
done < <(find assignments/"$repoDir" | grep -v .git | grep "$repoDir"/ )
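A minimal demonstration of the difference (my example, not part of the original answer):
count=0
printf '%s\n' a b c | while read -r _; do ((count++)); done
echo "$count"   # prints 0: the loop ran in a subshell

count=0
while read -r _; do ((count++)); done < <(printf '%s\n' a b c)
echo "$count"   # prints 3: the loop ran in the current shell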
How about this:
classify() {
  local -A types=([dir]=0 [txt]=0 [other]=0)
  local n=0
  while read -r type path; do
    if [[ $type == d ]]; then
      (( types[dir]++ ))
    elif [[ $type == f && $path == *.txt ]]; then
      (( types[txt]++ ))
    else
      (( types[other]++ ))
    fi
    ((n++))
  done
  if [[ $n -gt 0 ]]; then
    echo "$1"
    echo "total directories: ${types[dir]}"
    echo "total text files: ${types[txt]}"
    echo "total other files: ${types[other]}"
  fi
}
for repoDir in assignments/*; do
  find "$repoDir" \
    \( ! -path "$repoDir" -a ! -path '*/.git' -a ! -path '*/.git/*' \) \
    -printf '%y %p\n' \
    | classify "$repoDir"
done
- find can exclude the files you don't want to see
- find will also emit the file type, which simplifies the classification (see the sample output below)
- the classify function loops over the find output to count the various categories
- no parsing of ls
- the function keeps all the variables localized in a single subshell
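For instance, the find command above might emit lines like these (hypothetical repo contents), where %y is the file type (d for a directory, f for a regular file) and %p is the path:
d assignments/repo1/src
f assignments/repo1/notes.txt
f assignments/repo1/Makefile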
EDIT: I KNOW THIS IS REDUNDANT, IT IS HOMEWORK, I HAVE WRITTEN MY OWN CODE AND NEED HELP TROUBLESHOOTING.
As stated, I must write a BASH script to determine whether an executable file is in the user's PATH,
such that if you type
./findcmd ping
it returns /bin/ping.
I have some code written, but it does not work properly, and I hope someone can help me troubleshoot. When I type ./findcmd ping it just reports that the file does not exist (as it does with any other file I try that I know exists).
#!/bin/bash
#
# Invoke as ./findcmd command
#
# Check for argument
if [[ $# -ne 1 ]]
then
  echo 'usage: ./findcmd command'
  exit 1
fi
#
# Check for one argument
if [[ $# -eq 1 ]]
then
  pathlist=`echo $PATH | tr ':' ' '`
  for d in $pathlist;
  do
    if [[ ! -d $d || ! -x $d || ! -r $d ]]
    then
      echo 'You do not have read and execute permissions!'
      exit 2
    fi
    if [[ $(find $d -name $1 -print | wc -l) -ne 0 ]]
    then
      echo 'The file does not exist in the PATH!'
      exit 0
    fi
  done
fi
exit 0
#
#
No need to use a bash array; tr'ing the ':' to ' ' will work just fine in a for loop.
#!/bin/bash
#
# Invoke as ./findcmd command
#
# Check for argument
if [[ $# -ne 1 ]]
then
  echo 'usage: ./findcmd command'
  exit 1
fi
f=$1
# No need to check $# again; there's at least one arg and the others will be ignored.
# Otherwise you can wrap this in a loop and keep shift'ing args and checking one by one
pathlist=`echo $PATH | tr ':' '\n'`
for d in $pathlist;
do
  # echo command is your friend
  # echo "Checking for $f in $d"
  path="$d/$f"
  if [[ -f "$path" && -x "$path" ]]; then
    # PATH is not recursive, therefore no need to use the find command
    # Simply checking that the file exists and is executable should be enough
    echo "Found $f at '$path'"
    # Note the same filename may be present farther down the PATH
    # Once the first executable is found, exit
    exit 0
  fi
done
# Getting here means the file was not found
echo "$f could not be found"
exit 1
Here are the results:
rbanikaz@lightsaber:~$ ./which.sh grep
Found grep at '/usr/bin/grep'
rbanikaz@lightsaber:~$ ./which.sh foo
foo could not be found
rbanikaz@lightsaber:~$
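One caveat (my note, not from the original answer): splitting $PATH with tr and an unquoted for loop breaks on PATH entries that contain spaces. A sketch of a bash alternative that avoids this:
IFS=: read -ra dirs <<< "$PATH"    # split on colons only, into an array
for d in "${dirs[@]}"; do
  if [[ -f "$d/$f" && -x "$d/$f" ]]; then
    echo "Found $f at '$d/$f'"
    break
  fi
done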
The which command already does this...
Technically this is a solution...
#!/bin/bash
which $1
I probably wouldn't submit it for an assignment though...
Update
Messing around a bit, I think the following code will get you past your current bug:
#!/bin/bash
#
# Invoke as ./findcmd command
#
# Check for argument
if [[ $# -ne 1 ]]
then
  echo 'usage: ./findcmd command'
  exit 1
fi
#
# Split $PATH into an array, one entry per line
i=0
while read line; do
  a4[i++]=$line
done < <(echo "$PATH" | tr ':' '\n')
n=${#a4[@]}
for ((i=0; i < n; i++)); do
  d=${a4[i]}   # was d=$1, which tested the command name instead of each directory
  if [[ ! -d $d || ! -x $d || ! -r $d ]]
  then
    echo 'You do not have read and execute permissions!'
    exit 2
  fi
  if [[ $(find $d -name $1 -print | wc -l) -ne 0 ]]
  then
    echo "Found $1 in $d"
    exit 0
  fi
done
exit 0
#
#
Pretty much, it uses a solution in this SO question to split the $PATH variable into an array and then loops through it, applying the logic you had inside your while statement.
I have a script running that is checking multiple directories and comparing them to expanded tarballs of the same directories elsewhere.
I am using diff -r -q, and what I would like is for diff to stop at the first difference it finds in the recursive run, instead of going through more directories in the same run.
All help appreciated!
Thank you
@bazzargh I did try it like you suggested, or like this:
for file in $(find $dir1 -type f); do
  if [[ $(diff -q $file ${file/#$dir1/$dir2}) ]]; then
    echo differs: $file > /tmp/$runid.tmp 2>&1
    break
  else
    echo same: $file > /dev/null
  fi
done
But this only works with files that exist in both directories. If a file is missing from one of them, I won't get any information about that. Also, the directories I am working with have over 300,000 files, so it seems to be a lot of overhead to do a find for each file and then diff.
I would like something like this to work, with an elif statement that checks whether $runid.tmp contains data and breaks if it does. I added 2> after the first if statement so stderr is sent to the $runid.tmp file.
for file in $(find $dir1 -type f); do
  if [[ $(diff -q $file ${file/#$dir1/$dir2}) ]] 2> /tmp/$runid.tmp; then
    echo differs: $file > /tmp/$runid.tmp 2>&1
    break
  elif [[ -s /tmp/$runid.tmp ]]; then
    echo differs: $file >> /tmp/$runid.tmp 2>&1
    break
  else
    echo same: $file > /dev/null
  fi
done
Would this work?
You can do the loop over files with find and break when they differ, e.g. for dirs foo and bar:
for file in $(find foo -type f); do
  if [[ $(diff -q $file ${file/#foo/bar}) ]]; then
    echo differs: $file
    break
  else
    echo same: $file
  fi
done
NB this will not detect if 'bar' has directories that do not exist in 'foo'.
Edited to add: I just realised I overlooked the really obvious solution:
diff -rq foo bar | head -n1
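If the goal is to stop a larger script at the first difference, the output of that one-liner can be tested directly; head exits after one line, so diff dies of SIGPIPE shortly after the first difference is printed (my sketch, building on the answer):
first_diff=$(diff -rq foo bar | head -n 1)
if [ -n "$first_diff" ]; then
  echo "stopping: $first_diff"
  exit 1
fi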
It's not 'diff', but with 'awk' you can compare two files (or more) and then exit when they have a different line.
Try something like this (sorry, it's a little rough)
awk '{ h[$0] = ! h[$0] } END { for (k in h) if (h[k]) exit }' file1 file2
edit: to break out of the loop when two files have the same line, you may have to do the loop in awk.
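A rough reading of that one-liner (my annotation, not the original author's): h[$0] flips between 1 and 0 each time a line is seen, so after both files are read, any key still at 1 occurred an odd number of times, i.e. the files differ on that line. A commented copy, with a non-zero exit added (my tweak) so the result is testable:
awk '
  { h[$0] = ! h[$0] }    # toggle a flag per distinct line
  END {
    for (k in h)
      if (h[k])          # flag still set: the line is unmatched
        exit 1           # non-zero exit means the files differ
  }
' file1 file2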
You can try the following:
#!/usr/bin/env bash
# Determine directories to compare
d1='./someDir1'
d2='./someDir2'
# Loop over the file lists and diff corresponding files
while IFS= read -r line; do
  # Split the 3-column `comm` output into indiv. variables.
  lineNoTabs=${line//$'\t'}
  numTabs=$(( ${#line} - ${#lineNoTabs} ))
  d1Only='' d2Only='' common=''
  case $numTabs in
    0)
      d1Only=$lineNoTabs
      ;;
    1)
      d2Only=$lineNoTabs
      ;;
    *)
      common=$lineNoTabs
      ;;
  esac
  # If a file exists in both directories, compare them,
  # and exit if they differ; continue otherwise.
  if [[ -n $common ]]; then
    diff -q "$d1/$common" "$d2/$common" || {
      echo "EXITING: Diff found: '$common'" 1>&2;
      exit 1; }
  # Deal with files unique to either directory.
  elif [[ -n $d1Only ]]; then
    echo "File '$d1Only' only in '$d1'."
  else # implies: if [[ -n $d2Only ]]; then
    echo "File '$d2Only' only in '$d2'."
  fi
  # Note: The `comm` command below is CASE-SENSITIVE, which means:
  # - The input directories must be specified case-exact.
  #   To change that, add `I` after the last `|` in _both_ `sed` commands.
  # - The paths and names of the files diffed must match in case too.
  #   To change that, insert `| tr '[:upper:]' '[:lower:]'` before _both_
  #   `sort` commands.
done < <(comm \
  <(find "$d1" -type f | sed 's|'"$d1/"'||' | sort) \
  <(find "$d2" -type f | sed 's|'"$d2/"'||' | sort))
The approach is based on building a list of files (using find) containing relative paths (using sed to remove the root path) for each input directory, sorting the lists, and comparing them with comm, which produces 3-column, tab-separated output indicating which lines (and therefore files) are unique to the first list, which are unique to the second list, and which ones they have in common.
Thus, the values in the 3rd column can be diffed and action taken if they're not identical.
Also, the 1st and 2nd-column values can be used to take action based on unique files.
The somewhat complicated splitting of the 3 column values output by comm into individual variables is necessary, because:
- read will treat multiple tabs in sequence as a single separator
- comm outputs a variable number of tabs; e.g., if there's only a 1st-column value, no tab is output at all
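To illustrate the tab-counting trick on a single hypothetical comm output line (my example, not the script's actual output):
line=$'\t\tsome/common/file.txt'        # two leading tabs = comm's 3rd column
lineNoTabs=${line//$'\t'}               # remove every tab
echo $(( ${#line} - ${#lineNoTabs} ))   # prints 2, so the file is common to both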
I got a solution to this thanks to @bazzargh.
I use this code in my script and now it works perfectly.
for file in $(find ${intfolder} -type f); do
  if [[ $(diff -q $file ${file/#${intfolder}/${EXPANDEDROOT}/${runid}/$(basename ${intfolder})}) ]] 2> ${resultfile}.tmp; then
    echo differs: $file > ${resultfile}.tmp 2>&1
    break
  elif [[ -s ${resultfile}.tmp ]]; then
    echo differs: $file >> ${resultfile}.tmp 2>&1
    break
  else
    echo same: $file > /dev/null
  fi
done
thanks!
I am using Linux ksh to remove some old directories that I don't want.
What I use is this:
#! /bin/ksh
OLD=/opt/backup
DIR_PREFIX="active"
DIRS=$(ls ${OLD} -t | grep ${DIR_PREFIX})
i=0
while [[ $i -lt ${#DIRS[*]} ]]; do
  if [ $i -gt 4 ]
  then
    echo ${DIRS[$i]}
    ((i++))
  else
    ((i++))
  fi
done
What I am trying to do is store a list of all the directories, sorted by time, into a variable. I assumed it would be an array, but somehow its size is 1. Then, in the while loop, if the position of the directory is greater than 4, I print out the directory name.
Any idea of how to fix this?
If all you want is to print all but the first four entries, just pipe it to head or sed:
#!/bin/sh
OLD=/opt/backup
DIR_PREFIX=active
ls $OLD -t | grep $DIR_PREFIX | sed 1,4d | while read DIR; do
  echo $DIR
done
If you are just using echo, the while loop is redundant, but presumably you will have more commands in the loop.
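Equivalently (an alternative I'm adding, not in the original answer), tail can skip the first four lines:
ls $OLD -t | grep $DIR_PREFIX | tail -n +5   # same effect as sed 1,4d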
OLD=/opt/backup
DIR_PREFIX="active"
DIRS_RESULT=$(ls ${OLD} -t | grep ${DIR_PREFIX})
i=0
for DIR in ${DIRS_RESULT}
do
  if [ $i -gt 4 ]
  then
    echo ${DIR}
    rm -rf ${OLD}/${DIR}   # ls prints bare names, so prefix the parent directory
    ((i++))
  else
    ((i++))
  fi
done
this one works for me
I found similar questions, but not for Linux/Bash.
I want my script to create a file with a given name (via user input) but add a number at the end if the filename already exists.
Example:
$ create somefile
Created "somefile.ext"
$ create somefile
Created "somefile-2.ext"
The following script can help you. To avoid a race condition, you should not run several copies of the script at the same time.
name=somefile
if [[ -e $name.ext || -L $name.ext ]] ; then
  i=0
  while [[ -e $name-$i.ext || -L $name-$i.ext ]] ; do
    let i++
  done
  name=$name-$i
fi
touch -- "$name".ext
Easier:
touch file`ls file* | wc -l`.ext
You'll get:
$ ls file*
file0.ext file1.ext file2.ext file3.ext file4.ext file5.ext file6.ext
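The name simply comes from counting the files that match the pattern; e.g., given the listing above (an illustrative note of mine):
ls file* | wc -l   # 7 existing files, so the next created name is file7.ext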
To avoid the race conditions:
name=some-file
n=
set -o noclobber
until
  file=$name${n:+-$n}.ext
  { command exec 3> "$file"; } 2> /dev/null
do
  ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
And in addition, you have the file open for writing on fd 3.
With bash-4.4+, you can make it a function like:
create() { # fd base [suffix [max]]
  local fd="$1" base="$2" suffix="${3-}" max="${4-}"
  local n= file
  local - # ash-style local scoping of options in 4.4+
  set -o noclobber
  REPLY=
  until
    file=$base${n:+-$n}$suffix
    eval 'command exec '"$fd"'> "$file"' 2> /dev/null
  do
    ((n++))
    ((max > 0 && n > max)) && return 1
  done
  REPLY=$file
}
To be used for instance as:
create 3 somefile .ext || exit
printf 'File: "%s"\n' "$REPLY"
echo something >&3
exec 3>&- # close the file
The max value can be used to guard against infinite loops when the files can't be created for a reason other than noclobber.
Note that noclobber only applies to the > operator, not >> nor <>.
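For instance (a quick illustration of that note, not part of the original answer):
set -o noclobber
echo hi > existing-file    # fails with "cannot overwrite existing file"
echo hi >> existing-file   # fine: appending never truncates
echo hi >| existing-file   # fine: >| explicitly overrides noclobber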
Remaining race condition
Actually, noclobber does not remove the race condition in all cases. It only prevents clobbering regular files (not other types of files, so that cmd > /dev/null for instance doesn't fail) and has a race condition itself in most shells.
The shell first does a stat(2) on the file to check if it's a regular file or not (fifo, directory, device...). Only if the file doesn't exist (yet) or is a regular file does 3> "$file" use the O_EXCL flag to guarantee not clobbering the file.
So if there's a fifo or device file by that name, it will be used (provided it can be open in write-only), and a regular file may be clobbered if it gets created as a replacement for a fifo/device/directory... in between that stat(2) and open(2) without O_EXCL!
Changing the
{ command exec 3> "$file"; } 2> /dev/null
to
[ ! -e "$file" ] && { command exec 3> "$file"; } 2> /dev/null
would avoid using an already existing non-regular file, but not address the race condition.
Now, that's only really a concern in the face of a malicious adversary that would want to make you overwrite an arbitrary file on the file system. It does remove the race condition in the normal case of two instances of the same script running at the same time. So, in that, it's better than approaches that only check for file existence beforehand with [ -e "$file" ].
For a working version without race condition at all, you could use the zsh shell instead of bash which has a raw interface to open() as the sysopen builtin in the zsh/system module:
zmodload zsh/system
name=some-file
n=
until
  file=$name${n:+-$n}.ext
  sysopen -w -o excl -u 3 -- "$file" 2> /dev/null
do
  ((n++))
done
printf 'File is "%s"\n' "$file"
echo some text in it >&3
Try something like this
name=somefile
path=$(dirname "$name")
filename=$(basename "$name")
extension="${filename##*.}"
filename="${filename%.*}"
if [[ -e $path/$filename.$extension ]] ; then
  i=2
  while [[ -e $path/$filename-$i.$extension ]] ; do
    let i++
  done
  filename=$filename-$i
fi
target=$path/$filename.$extension
Use touch or whatever you want instead of echo:
echo file$((`ls file* | sed -n 's/file\([0-9]*\)/\1/p' | sort -rh | head -n 1`+1))
Parts of the expression explained:
- list files by pattern: ls file*
- take only the number part of each line: sed -n 's/file\([0-9]*\)/\1/p'
- apply a reverse human-numeric sort: sort -rh
- take only the first line, i.e. the max value: head -n 1
- combine it all in a pipe and increment (full expression above)
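A worked example (hypothetical, assuming extensionless names such as file1, file2, file10 so the arithmetic stays clean):
ls file* | sed -n 's/file\([0-9]*\)/\1/p' | sort -rh | head -n 1   # prints 10
# so the full expression echoes "file11"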
Try something like this (untested, but you get the idea):
filename=$1
# If file doesn't exist, create it
if [[ ! -f $filename ]]; then
  touch $filename
  echo "Created \"$filename\""
  exit 0
fi
# If file already exists, find a similar filename that is not yet taken
digit=1
while true; do
  temp_name=$filename-$digit
  if [[ ! -f $temp_name ]]; then
    touch $temp_name
    echo "Created \"$temp_name\""
    exit 0
  fi
  digit=$(($digit + 1))
done
Depending on what you're doing, replace the calls to touch with whatever code is needed to create the files that you are working with.
This is a much better method I've used for creating directories incrementally.
It could be adjusted for filenames too.
LAST_SOLUTION=$(echo $(ls -d SOLUTION_[[:digit:]][[:digit:]][[:digit:]][[:digit:]] 2> /dev/null) | awk '{ print $(NF) }')
if [ -n "$LAST_SOLUTION" ] ; then
mkdir SOLUTION_$(printf "%04d\n" $(expr ${LAST_SOLUTION: -4} + 1))
else
mkdir SOLUTION_0001
fi
A simple repackaging of choroba's answer as a generalized function:
autoincr() {
  f="$1"
  ext=""
  # Extract the file extension (if any), with preceding '.'
  [[ "$f" == *.* ]] && ext=".${f##*.}"
  if [[ -e "$f" ]] ; then
    i=1
    f="${f%.*}"
    while [[ -e "${f}_${i}${ext}" ]]; do
      let i++
    done
    f="${f}_${i}${ext}"
  fi
  echo "$f"
}
touch "$(autoincr "somefile.ext")"
Without looping, and without using a regex or shell expr:
last=$(ls $1* | tail -n1)
last_wo_ext=$(basename "$last" .ext)   # was: $($last | basename $last .ext), which tried to execute $last
n=$(echo $last_wo_ext | rev | cut -d - -f 1 | rev)
if [ x$n = x ]; then
  n=2
else
  n=$((n + 1))
fi
echo $1-$n.ext
Even simpler, if you skip the extension handling and the special case for a name without a '-n' suffix:
n=$(ls $1* | tail -n1 | rev | cut -d - -f 1 | rev)
n=$((n + 1))
echo $1-$n.ext
I'm trying to write a function that will traverse the directory tree and give me the depth of the deepest directory. I've written the function, and it seems to visit each directory, but my counter doesn't seem to work at all.
dir_depth(){
  local olddir=$PWD
  local dir
  local counter=0
  cd "$1"
  for dir in *
  do
    if [ -d "$dir" ]
    then
      dir_depth "$1/$dir"
      echo "$dir"
      counter=$(( $counter + 1 ))
    fi
  done
  cd "$olddir"
}
What I want it to do is feed the function a directory, say /home, and it'll go down each subdirectory within and find the deepest value. I'm trying to learn recursion better, but I'm not sure what I'm doing wrong.
Obviously, find should be used for this:
find . -type d -exec bash -c 'echo $(tr -cd / <<< "$1"|wc -c):$1' -- {} \; | sort -n | tail -n 1 | awk -F: '{print $1, $2}'
At the end I use awk to just print the output, but if that were the output you wanted it would be better just to echo it that way to begin with.
Not that it helps learn about recursion, of course.
Here's a one-liner that's pretty fast:
find . -type d -printf '%d:%p\n' | sort -n | tail -1
Or as a function:
depth()
{
  find "$1" -type d -printf '%d:%p\n' | sort -n | tail -1
}
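Usage (the output shown is hypothetical):
depth /home/user
# 3:/home/user/projects/foo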
Here is a version that seems to work:
#!/bin/sh
dir_depth() {
  cd "$1"
  maxdepth=0
  for d in */.; do
    [ -d "$d" ] || continue
    depth=`dir_depth "$d"`
    maxdepth=$(($depth > $maxdepth ? $depth : $maxdepth))
  done
  echo $((1 + $maxdepth))
}
dir_depth "$@"
Just a few small changes to your script. I've added several explanatory comments:
dir_depth(){
  # don't need olddir and counter needs to be "global"
  local dir
  cd -- "$1" # the -- protects against dirnames that start with -
  # do this out here because we're counting depth not visits
  ((counter++))
  for dir in *
  do
    if [ -d "$dir" ]
    then
      # we want to descend from where we are rather than where we started from
      dir_depth "$dir"
    fi
  done
  if ((counter > max))
  then
    max=$counter # these are what we're after
    maxdir=$PWD
  fi
  ((counter--)) # decrement and test to see if we're back where we started
  if (( counter == 0 ))
  then
    echo $max $maxdir # ta da!
    unset counter # ready for the next run
  else
    cd .. # go up one level instead of "olddir"
  fi
}
It prints the max depth (including the starting directory as 1) and the first directory name that it finds at that depth. You can change the test if ((counter > max)) to >= and it will print the last directory name it finds at that depth.
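Usage (hypothetical output, assuming the tree is readable):
dir_depth /home
# 5 /home/user/projects/foo/bar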
The AIX (6.1) find command seems to be quite limited (e.g. no -printf option). If you would like to list all directories up to a given depth, try this combination of find and dirname. Save the script code as maxdepth.ksh. In comparison to the Linux find -maxdepth option, AIX find will not stop at the given maximum level, which results in a longer runtime depending on the size/depth of the scanned directory:
#!/usr/bin/ksh
# Param 1: maxdepth
# Param 2: Directoryname
max_depth=0
next_dir=$2
while [[ "$next_dir" != "/" ]] && [[ "$next_dir" != "." ]]; do
  max_depth=$(($max_depth + 1))
  next_dir=$(dirname $next_dir)
done
if [ $1 -lt $max_depth ]; then
  ret=1
else
  ret=0
  ls -d $2
fi
exit $ret
Sample call:
find /usr -type d -exec maxdepth.ksh 2 {} \;
The traditional way to do this is to have dir_depth return the maximum depth too, so you'll return both the name and the depth. You can't return an array, struct, or object in bash, so you can return e.g. a comma-separated string instead:
dir_depth(){
  local dir
  local max_dir="$1"
  local max_depth=0
  for dir in $1/*
  do
    if [ -d "$dir" ]
    then
      cur_ret=$(dir_depth "$dir")
      cur_depth=$(expr "$cur_ret" : '\([^,]*\)')
      cur_dir=$(expr "$cur_ret" : '.*,\(.*\)')
      if [[ "$cur_depth" -gt "$max_depth" ]]; then
        max_depth="$cur_depth"
        max_dir="$cur_dir"
      fi
    fi
  done
  max_depth=$(($max_depth + 1))
  echo "$max_depth,$max_dir"
}
EDIT: Fixed now. It starts with the directory you passed in as level 1, then counts upwards. I removed the cd, as it isn't necessary. Note that this will fail if filenames contain commas.
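To consume the comma-separated return value (my usage sketch, not from the answer):
result=$(dir_depth /home)
echo "max depth: ${result%%,*}"    # everything before the first comma
echo "deepest dir: ${result#*,}"   # everything after the first comma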
You might want to consider using a programming language with more built-in data structures, like Python.