So I'm given a directory in $dir and a Unix command in $1. I need to check how many of the files in the $dir directory the $1 command can be successfully executed on.
for dir in `echo $PATH | tr : '\n'`
do
    for file in `find $dir -type f`
    do
        # Here I would like to check if the command works on the file
        if <the check I'm missing goes here>
        then
            echo " $1 $dir/$file works"
        else
            echo " $1 $dir/$file doesn't work"
        fi
    done
done
It appears that you want to search through all the files in the PATH and, for each file, see if command $1 succeeds or fails with that file as an argument. In that case:
#!/bin/bash
(IFS=:
find $PATH -type f -exec bash -c 'if "$1" "$2"; then echo "$1 $2 works"; else echo "$1 $2 fails"; fi' None "$1" {} \;
)
Or, for greater efficiency:
(IFS=:
find $PATH -type f -exec bash -c 'cmd=$1; shift; for f in "$@"; do if "$cmd" "$f"; then echo "$cmd $f works"; else echo "$cmd $f fails"; fi; done' None "$1" {} +
)
How it works
(
This starts a subshell. This is done so that IFS returns to its original value after the subshell finishes.
IFS=:
This tells the shell to do word splitting on :.
find $PATH -type f -exec bash -c '...' None "$1" {} +
This looks for all regular files underneath directories that are in the PATH and executes the commands in '...' on them.
More specifically, the inline script in '...' receives as positional arguments the name of the command ($1) and one or more (probably many) files to test.
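As a minimal, hypothetical demo of that argument-passing mechanism: the word None only fills $0 of the inner shell (bash uses it in its own error messages), and the remaining arguments become $1, $2, and so on.
bash -c 'echo "\$0=$0 \$1=$1 \$2=$2"' None ls /bin/cat
# prints: $0=None $1=ls $2=/bin/cat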
The commands in '...' are:
cmd=$1
shift
for f in "$@"; do
if "$cmd" "$f"
then echo "$cmd $f works"
else echo "$cmd $f fails"
fi
done
These commands test if the command succeeded and report the results.
)
This closes the subshell.
Silencing the output from the commands
As glenn jackman suggests, you might not want to see the output from each run of the command $1 and instead just keep track of whether it succeeded or failed. In that case, we can redirect the command's output to /dev/null as follows:
#!/bin/bash
(IFS=:; find $PATH -type f -exec bash -c 'if "$1" "$2" >/dev/null 2>&1; then echo "$1 $2 works"; else echo "$1 $2 fails"; fi' None "$1" {} \; )
When this is done, the output may look like:
$ bash scriptname ls
ls /bin/keyctl works
ls /bin/mt-gnu works
ls /bin/uncompress works
ls /bin/nano works
ls /bin/zless works
ls /bin/run-parts works
[...snip...]
Related
I have created a script to zip and move log files from one directory to another to free up space. This is the script:
#!/bin/bash
logsDirectory="/test//logs/"
email=""
backupDirectory="/test/backup"
pid="/data/test/scripts/backup.pid"
usage=$(df | grep /data/logs | awk '{ print $2 }')
space=450000000
getBackup ()
{
if [[ ! -e $pid ]]; then
if [[ $usage -le $space ]]
then
touch $pid
find $backupDirectory -mtime +15 -type f -delete;
for i in $(find $logsDirectory -type f -not -path "*/irws/*")
do
/sbin/fuser $i > /dev/null 2>&1
if [ $? -ne 0 ]
then
gzip $i
mv -v $i.gz $backupDirectory
else
continue
fi
done
[[ ! -z $email ]] && echo "Backup is ready" | mail -s "Backup" $email
rm -f $pid
fi
fi
}
getBackup
I am getting this error:
gzip: /data/logs/log01.log.gz already has .gz suffix -- unchanged
mv: cannot stat `/data/logs/log01.log.gz': No such file or directory
I got the error every time I ran the script in my DEV and PROD environments (CentOS servers). To analyse it, I ran the same script in a VM (Ubuntu) on my laptop, and I don't get the error there.
My questions:
How can I prevent this error?
What have I done wrong in the script?
Your script contains a number of common clumsy or inefficient antipatterns. Here is a refactoring. The only real change is skipping any *.gz files.
#!/bin/bash
logsDirectory="/test//logs/"
email=""
backupDirectory="/test/backup"
pid="/data/test/scripts/backup.pid"
# Avoid useless use of grep -- awk knows how to match a regex
# Better still, run df /data/logs; NR==2 makes awk skip df's header line
usage=$(df /data/logs/ | awk 'NR==2 { print $2 }')
space=450000000
getBackup ()
{
# Quote variables
if [[ ! -e "$pid" ]]; then
if [[ "$usage" -le "$space" ]]; then
touch "$pid"
find "$backupDirectory" -mtime +15 -type f -delete;
# Exclude *.gz files
# This is still not robust against file names with spaces or wildcards in their names
for i in $(find "$logsDirectory" -type f -not -path "*/irws/*" -not -name '*.gz')
do
# Avoid useless use of $?
if ! /sbin/fuser "$i" > /dev/null 2>&1
then
gzip "$i"
mv -v "$i.gz" "$backupDirectory"
# no need for do-nothing else
fi
done
[[ ! -z "$email" ]] &&
echo "Backup is ready" | mas"Backup" "$email"
rm -f "$pid"
fi
fi
}
getBackup
With a slightly more intrusive refactoring, the proper fix to the find loop would perhaps look something like
find "$logsDirectory" -type f \
-not -path "*/irws/*" -not -name '*.gz' \
    -exec sh -c '
        backupDirectory=$1; shift
        for i; do
            if ! /sbin/fuser "$i" > /dev/null 2>&1
            then
                gzip "$i"
                mv -v "$i.gz" "$backupDirectory"
            fi
        done' _ "$backupDirectory" {} +
where the secret sauce is to have find ... -exec + pass the file names to the sh -c script as arguments, without exposing them to the current shell at all; $backupDirectory is handed over the same way, as the inner script's first positional argument.
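A hypothetical side-by-side illustration of -exec ... + versus -exec ... \; (the scratch directory and file names are made up):
mkdir -p /tmp/execdemo && touch /tmp/execdemo/a /tmp/execdemo/b /tmp/execdemo/c
# With +, find hands many file names to a single inner shell:
find /tmp/execdemo -type f -exec sh -c 'echo "one shell, $# args: $*"' _ {} +
# With \;, find starts a fresh inner shell for every file:
find /tmp/execdemo -type f -exec sh -c 'echo "a shell just for: $1"' _ {} \;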
What have I done wrong in the script?
Your script tries to zip every file, but gzip rejects files that are already zipped.
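You can reproduce the message in isolation (hypothetical session):
$ touch demo.log.gz
$ gzip demo.log.gz
gzip: demo.log.gz already has .gz suffix -- unchanged
The subsequent mv then fails because gzip produced no new .gz file.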
How can I prevent this error?
Have the script check whether the file is already zipped, and only gzip it if it is not (option 1). Alternatively, you could force re-compression even if it is already compressed (option 2).
Going with option 1:
getBackup ()
{
if [[ ! -e $pid ]]; then
if [[ $usage -le $space ]]
then
touch $pid
find $backupDirectory -mtime +15 -type f -delete;
for i in $(find $logsDirectory -type f -not -path "*/irws/*")
do
/sbin/fuser $i > /dev/null 2>&1
if [ $? -ne 0 ]
then
if [[ $i =~ \.gz$ ]]; then
# File is already zipped
mv -v $i $backupDirectory
else
gzip $i
mv -v $i.gz $backupDirectory
fi
else
continue
fi
done
[[ ! -z $email ]] && echo "Backup is ready" | mail -s "Backup" $email
rm -f $pid
fi
fi
}
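For completeness, option 2 would be a one-flag change inside the same loop; a sketch, assuming GNU gzip, whose -f flag forces compression even when a file already has a .gz suffix (note that such files end up double-compressed as *.gz.gz):
if [ $? -ne 0 ]
then
    # -f forces gzip to compress even an already-zipped file
    gzip -f $i
    mv -v $i.gz $backupDirectory
fi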
#!/bin/bash
#sh j
find . -name "*first*" && echo "file found" || echo "file not found"
read -p "Run command $foo? [yn]" answer
case "$answer" in
y*) find . -type f -exec rename 's/(.*)\/(.*)first(.*)/$1\/beginning_$2changed$3/' {} + ;;
n*) echo "not renamed" ;;
esac
fi
I want the script to loop through a folder and its subfolders, find files whose names contain a certain string, and then offer the option to rename each file or leave it alone (that is the y/n option); after each selection the script should continue searching.
Also, I get an error that says "syntax error: unexpected token 'fi'".
Try this:
#!/bin/bash
handle_file(){
local file=$1
local pattern=some_pattern
if [[ $(grep -c "${pattern}" "${file}") -gt 0 ]];
then
# ... do anything you want with "${file}" here ...
fi
}
export -f handle_file
find . -type f -exec bash -c 'handle_file "$1"' _ {} \;
handle_file is an exported function; the inner bash invokes it as handle_file <filename>, so the <filename> is available as $1 inside the function (the _ placeholder only fills the inner shell's $0).
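Here is a self-contained sketch of the same export -f pattern with a hypothetical function name, runnable as-is:
#!/bin/bash
# shout must be exported so the child bash started by find can see it
shout() { echo "processing: $1"; }
export -f shout
find . -type f -exec bash -c 'shout "$1"' _ {} \;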
I keep getting these errors running my script and I just cannot work it out. The error that keeps coming up is:
rm: cannot remove ~/my-documents/article:': Is a directory
The directory it is referring to is $2. Here is my script.
#! /bin/sh
SRC=$1
DES=$2
if [ $# -ne 2 ]; then
echo "1. Please enter the source directory"
echo "2. Please enter the destination directory"
echo "thankyou"
exit
fi
if [ ! -d $1 ]; then
echo "$1 is not a directory please enter a valid directory"
echo "thankyou"
exit
fi
#gives the user a error warning the source directory is invalid
if [ -d $2 ]; then
echo "output directory exists"
else
echo "Output directory does not exist, creating directory"
mkdir $2
fi
#creates the destination directory if one doesn't exist
IFILE=$GETFILES;
FINDFILE=$FINDFILE;
find $1 -name "*.doc" > FINDFILE
find $1 -name "*.pdf" > FINDFILE
find $1 -name "*.PDF" > FINDFILE
#finds doc, pdf & PDF files and sends data to findfile.
while read -r line;
do
cp $line $2
done < FINDFILE
#files read and copied to destination directory
IFILE=$2/$GETFILES;
ls -R $1 | egrep -i ".doc | .pdf" > IFILE;
LCOUNT=0
DIFFCOUNT=0
FOUND=0
ARCHIVE=1
BASE="${line%.*}"
EXTENSION="${line##*.}"
COUNT=$COUNT;
ls $2 | grep ${line%%.*} \; | wc -l
if [[ $COUNT -eq 0 ]];
then
cp $1/$line $2;
else
echo "there is already a file in the output so need to compare"
COMP=$2/$line
fi
while [[ $FOUND -eq 0 ]] && [[ $LCOUNT -lt $COUNT ]];
do
echo "diffcount is $DIFFCOUNT"
###compares the file from the input directory to the file in
###the output directory
if [ $DIFFCOUNT -eq 0 ];
then
echo "file has already been archived no action required"
FOUND=$FOUND [ $FOUND+1 ]
else
LCOUNT=$LCOUNT [ $LCOUNT+1 ]
COMP="OUT"/"$BASE"_"$LCOUNT"."$EXTENSION"
echo "line count for next compare is $LCOUNT"
echo "get the next file to compare"
echo "the comparison file is now $COMP"
fi
if [ $LCOUNT -ne $COUNT ]; then
ARCHIVE=$ [ $ARCHIVE+1 ]
else
ARCHIVE=0
fi
if [ $ARCHIVE -eq 0 ];
then
NEWOUT="OUT"/"$BASE"_"$LCOUNT"."$EXTENSION";
echo "newfile name is $NEWOUT"
cp $1/$LINE $NEWOUT
fi
done < $IFILE
rm $IFILE
OFILE=$2/DOCFILES;
ls $2 | grep ".doc" > $OFILE;
while read -r line;
do
BASE=${line%.*}
EXTENSION=${line##*.}
NEWEXTENSION=".pdf"
SEARCHFILE=$BASE$NEWEXTENSION
find $2 -name "$SEARCHFILE" -exec {} \;
done < $OFILE
rm $OFILE
### this will then remove any duplicate files so only
### individual .doc .pdf files will exist
A plain call to rm can only remove files, not directories.
$ touch /tmp/myfile
$ rm /tmp/myfile
$ mkdir /tmp/mydir
$ rm /tmp/mydir
rm: cannot remove ‘/tmp/mydir/’: Is a directory
You can remove directories by specifying the -d (to delete empty directories) or the -r (to delete directories and content recursively) flag:
$ mkdir /tmp/mydir
$ rm -r /tmp/mydir
$
This is well described in man rm.
Apart from that, you seem to ignore quoting:
$ rm $OFILE
might break badly if the value of OFILE contains spaces, use quotes instead:
$ rm "${OFILE}"
And never parse the output of ls:
ls $2 | grep ".doc" > $OFILE
(e.g. if your "$2" is actually "/home/foo/my.doc.files/" it will put all files in this directory into $OFILE).
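A related pitfall: in grep ".doc" the dot is an unescaped regex metacharacter that matches any character, so unrelated names slip through (hypothetical demo):
$ printf '%s\n' report.doc redock.txt | grep ".doc"
report.doc
redock.txt
$ printf '%s\n' report.doc redock.txt | grep '\.doc$'
report.doc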
And then you iterate over the contents of this file?
Instead, just use a loop with file globbing:
for o in "${2}"/*.doc
do
## loop code in here
done
Or just do the filtering with find (and don't forget to call an executable with -exec):
find "$2" -name "$SEARCHFILE" -mindepth 1 -maxdepth 1 -type f -exec convertfile \{\} \;
I am trying to create a script that will find all the files in a folder that contain, for example, the string 'J34567' and process them. Right now my code processes all the files in the folder; it does not restrict itself to files containing the string. In other words, once I run the script, even with a string argument such as ./bashexample 'J37264', it still processes all the files without that string. Here is my code below:
#!/bin/bash
directory=$(cd `dirname .` && pwd)
tag=$1
echo find: $tag on $directory
find $directory . -type f -exec grep -sl "$tag" {} \;
for files in $directory/*$tag*
do
for i in *.std
do
/projects/OPSLIB/BCMTOOLS/sumfmt_linux < $i > $i.sum
done
for j in *.txt
do
egrep "device|Device|\(F\)" $i > $i.fail
done
echo $files
done
Kevin, you could try the following:
#!/bin/bash
directory='/home'
tag=$1
for files in $directory/*$tag*
do
if [ -f "$files" ]
then
#do your stuff
echo "$files"
fi
done
where directory is your directory name (you could pass it as a command-line argument too) and tag is the search term you are looking for in a filename.
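If you would rather pass the directory on the command line as suggested, a sketch (the argument order is my choice):
#!/bin/bash
directory=${1:-/home}   # first argument, defaulting to /home
tag=$2                  # second argument: the search term
for files in "$directory"/*"$tag"*
do
    if [ -f "$files" ]
    then
        echo "$files"
    fi
done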
The following script will give you the list of files that contain the given pattern (inside the file, not in the file name).
#!/bin/bash
directory=`pwd`
tag=$1
for file in $(find "$directory" -type f -exec grep -l "$tag" {} \;); do
echo "$file"
# use $file for further operations
done
What is the relevance of the .std, .txt, .sum and .fail files to the files containing the given pattern?
It's assumed there are no special characters, spaces, etc. in the file names.
If that is the case, the following links should help in working around those limitations:
How can I escape white space in a bash loop list?
Capturing output of find . -print0 into a bash array
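Combining this answer's grep -l approach with the null-delimited technique from those links gives a sketch like the following; it assumes GNU grep, whose -Z option NUL-terminates the file names printed by -l:
#!/bin/bash
directory=$PWD
tag=$1
# read -d '' consumes one NUL-terminated name at a time
while IFS= read -r -d '' file; do
    echo "$file"
    # use "$file" for further operations
done < <(find "$directory" -type f -exec grep -lZ "$tag" {} +)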
There are multiple issues in your script.
The following is not needed just to set the working directory to the current directory:
directory=$(cd `dirname .` && pwd)
find is executed twice over the current directory, because both $directory and . are given as starting points:
find $directory . -type f -exec grep -sl "$tag" {} \;
Also, the output of the above find is not used anywhere by the for loop.
The for loop runs over the files directly under $directory (subdirectories are not considered) whose names contain the given pattern:
for files in $directory/*$tag*
The following for loop runs for all .txt files in the current directory, but produces only one output file because it uses $i from the previous loop:
for j in *.txt
do
egrep "device|Device|\(F\)" $i > $i.fail
done
This is my temporary solution. Please check if it follows your intention.
#!/bin/bash
directory=$(cd `dirname .` && pwd) ## Should this be just directory=$PWD ?
tag=$1
echo "find: $tag on $directory"
find "$directory" . -type f -exec grep -sl "$tag" {} \; ## Shouldn't you add -maxdepth 1 ? Are the files listed here the one that should be processed in the loop below instead?
for file in "$directory"/*"$tag"*; do
if [[ $file == *.std ]]; then
/projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
fi
if [[ $file == *.txt ]]; then
egrep "device|Device|\(F\)" "$file" > "${file}.fail"
fi
echo "$file"
done
Update 1
#!/bin/bash
directory=$PWD ## Change this to another directory if needed.
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
echo "$file"
case "$file" in
*.std)
/projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
;;
*.txt)
egrep "device|Device|\(F\)" "$file" > "${file}.fail"
;;
*)
echo "Unexpected match: $file"
;;
esac
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" \( -name '*.std' -or -name '*.txt' \) -print0) ## Change or remove the maxdepth option as wanted.
Update 2
#!/bin/bash
directory=$PWD
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
echo "$file"
/projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.std' -print0)
while IFS= read -rd $'\0' file; do
echo "$file"
egrep "device|Device|\(F\)" "$file" > "${file}.fail"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.txt' -print0)
I'm attempting to write a script that calls another script and uses it either once, or in a loop, depending on the inputs.
I wrote a script that simply searches a file for a pattern and then prints the file name and lists the lines where the pattern was found. That script is here:
#!/bin/bash
if [[ $# -lt 2 ]]
then
echo "error: must provide 2 arguments."
exit -1
fi
if [[ ! -e $2 ]]
then
echo "error: second argument must be a file."
exit -2
fi
echo "------ File =" $2 "------"
grep -ne $1 $2
So now I want to write a new script that will call this script: if the user enters a single file as the second argument it runs once, and if they give a directory it loops and searches all the files in that directory.
So if the input is:
./searchscript if testfile
it'll just use the script but if the input is:
./searchscript if Desktop
It'll search all the files in a loop.
My heart runneth over for you all, as always.
Something like this could work:
#!/bin/bash
do_for_file() {
grep "$1" "$2"
}
do_for_dir() {
cd "$2" || exit 1
for file in *
do
do_for "$1" "$file"
done
cd ..
}
do_for() {
where="file"
[[ -d "$2" ]] && where=dir
do_for_$where "$1" "$2"
}
do_for "$1" "$2"
How about this:
#!/bin/bash
dirSearch() {
    for file in $(find "$2" -type f)
    do
        ./searchscript "$1" "$file"
    done
}
if [ -d "$2" ]
then
    dirSearch "$1" "$2"
elif [ -e "$2" ]
then
    ./searchscript "$1" "$2"
fi
Alternatively, if you don't want to parse the output of find, you can do the following:
#!/bin/bash
if [ -d "$2" ]
then
    find "$2" -type f -exec ./searchscript "$1" {} \;
elif [ -e "$2" ]
then
    ./searchscript "$1" "$2"
fi
Er... maybe too simple, but what about letting grep do all the work?
#myscript
if [ $# -lt 2 ]
then
echo "error: must provide 2 arguments."
exit -1
fi
if [ -e "$2" ]
then
echo "error: second argument must be a file."
exit -2
fi
echo "------ File =" $2 "------"
grep -rne "$1" "$2"
I just added -r to the grep invocation: if it's a file, there is no recursion; if it's a directory, grep will recurse into it.
You could even get rid of the argument checks and let grep emit the appropriate error messages (keep the quotes or this will fail):
#myscript
grep -rne "$1" "$2"
Assuming you do not want to search recursively:
#!/bin/bash
pattern=$1
location=$2
if [[ -d $location ]]
then
    for file in "$location"/*
    do
        your_script "$pattern" "$file"
    done
else
    # Insert a check for whether $location is a real file and exists, if needed
    your_script "$pattern" "$location"
fi
NOTE1: This has a subtle bug: if some file names in the directory start with a ".", as far as I recall the for loop over "$location"/* will NOT see them, so you would need for file in "$location"/* "$location"/.* instead (see the dotglob sketch below for a cleaner bash-only fix).
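In bash specifically, a tidier fix than adding "$location"/.* (which would also match . and ..) is the dotglob shell option; a sketch:
shopt -s dotglob        # make * match hidden files as well
for file in "$location"/*
do
    your_script "$pattern" "$file"
done
shopt -u dotglob        # restore the default globbing behavior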
NOTE2: If you want a recursive search, use find instead:
for file in $(find "$location" -type f)
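Or, more robustly, with null-delimited names so that whitespace in paths survives (a sketch, reusing the hypothetical your_script from above):
while IFS= read -r -d '' file
do
    your_script "$pattern" "$file"
done < <(find "$location" -type f -print0)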