Search multiple directories and return one result - Linux

I'm trying to bring a piece of code that I had running on an AIX box over to Linux and I just can't get it to work.
[ -f $FolderPath/*/FileName.txt ] && echo 1 || echo 0
The above searches FolderPath, with the * covering all of its subdirectories, looking for FileName.txt. If the file is found (one or more matches), it echoes 1, otherwise 0.
On Linux I get a "too many arguments" error, so I thought switching to [[ ]] would fix this, but [[ ]] doesn't seem to expand the wildcard * at all.
Anyone any ideas?
Thanks

You could try the following command and then query the status code $?:
find "$FolderPath" -name 'FileName.txt' | grep -q .
This returns 1 when there are no files listed by the find command and 0 when there is at least one.
Optionally, if you only want to search down to a specific level and avoid deep searches down the directory tree, you can use the -maxdepth n option.
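For example, to reproduce the 1/0 output of the original one-liner (a minimal sketch; the -maxdepth 2 is an assumption that you only care about FolderPath and its immediate subdirectories):
find "$FolderPath" -maxdepth 2 -name 'FileName.txt' | grep -q .
if [ $? -eq 0 ]; then
    echo 1
else
    echo 0
fi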

How can I list the files in a directory that have zero size/length in the Linux terminal?

I am new to using the Linux terminal, so I'm just starting to learn about the commands I can use. I have figured out how to list the files in a directory using the Linux terminal, and how to list them according to file size. I was wondering if there's a way to list only the files of a specific file size. Right now, I'm trying to list files with zero size, like those that you might create using the touch command. I looked through the flags I could use when I use ls, but I couldn't find exactly what I was looking for. Here's what I have right now:
ls -lsh /mydirectory
The "mydirectory" part is just a placeholder. Is there anything I can add that will only list files that have zero size?
There are a few ways you can go about this; if you want to stick with ls, you could use e.g. awk in a pipeline to do the filtering.
ls -lsh /mydirectory | awk '$6 == 0'
Here, $6 is the sixth field in ls's output, the size (the -s option prepends a block-count column, which shifts the size from the usual fifth field to the sixth).
Another approach would be to use a different tool, find.
find /mydirectory -maxdepth 1 -size 0 -ls
This will also list hidden files, analogous to an ls -la.
The -maxdepth 1 is there so it doesn't traverse the directory tree if you have nested directories.
A simple script can do this.
for file_name in *
do
    if [[ ! -s "$file_name" ]]
    then
        echo "$file_name"
    fi
done
Explanation:
for is a loop; * expands to a list of all files in the current directory.
-s file_name is true if the file exists and has a size greater than 0.
! negates that, so only the empty files are printed.

Find out if a backup ran by searching the newest file

I'd like to write a short and simple script that searches for a file using a specific filter and checks the age of that file. I want to write a short output and an error code. This should be usable by an NRPE server.
The script itself works; I only have a problem when the file does not exist. That happens in this command:
newestfile=$(ls -t $path/$filter | head -1)
When the files exist, everything works as it should. When nothing matches my filter, I get this output (I changed the filter to *.zip to demonstrate):
ls: cannot access '/backup/*.zip': No such file or directory
But I want to get the following output and then just exit the script with code 1:
there are no backups with the filter *.zip in the directory /backup
I am pretty sure this is a very easy problem, but I just don't know what's wrong. By the way, I am still "new" to bash scripts.
Here is my whole code:
#!/bin/bash
# Set the variables
path=/backup
filter=*.tar.gz
# Find the newest file
newestfile=$(ls -t $path/$filter | head -1)
# check if we even have a file
if [ ! -f $newestfile ]; then
    echo "there are no backups with the filter $filter in the directory $path"
    exit 1
fi
# check how old the file is that we found
if [[ $(find "$newestfile" -mtime +1 -print) ]]; then
    echo "File $newestfile is older than 24 hours"
    exit 2
else
    echo "the file $newestfile is younger than 24 hours"
    exit 0
fi
Actually, if the failglob shell option is enabled, your code would also produce the error message bash: no match: /backup/*.zip
UPDATE: Fixed the proposed solution, and the missing quotes in the original solution:
I suggest the following approach:
shopt -u failglob                # Turn off the error from globbing
pathfilter="/backup/*.tar.gz"    # Quotes keep the wildcard from being expanded here already
# First see whether we have any matching files
files=($pathfilter)
if [[ ! -e ${files[0]} ]]
then
    : # .... No matching files
else
    # Now you can safely fetch the newest file
    # Note: This does NOT work if you have filenames
    # containing newlines
    newestfile=$(ls -tA $pathfilter | head -1)
fi
I don't like using ls for this task, but I don't see an easy way in bash to do it better.
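If GNU find is available (an assumption; -printf is not POSIX), a sketch that avoids parsing ls could look like this, using the path and filter from the question:
# newest *.tar.gz directly under /backup, picked by modification time
newestfile=$(find /backup -maxdepth 1 -name '*.tar.gz' -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-)
Like the ls version, this still assumes file names without embedded newlines.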

Linux bash script: Recursively delete files if empty based on file format from user input [duplicate]

So as the title describes, I want to recursively delete all files which match a naming pattern given by the user, but only if the file is empty. Here is my attempt:
#!/bin/bash
_files="$1"
[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }
[ ! -f "$_files" ] && { echo "Error: $0 no files were found which match given naming structure."; exit 2; }
for f in $(find -name $_files)
do
    if [ -s "$f" ]
    then
        echo "$f has some data."
        # do something as file has data
    else
        echo "$f is empty. Deleting file."
        rm $f
    fi
done
Example output:
./remove_blank.sh *.o*
./Disp_variations_higher_0.o1906168 has some data.
./remove_blank.sh *.e*
./Disp_variations_higher_15.e1906183 is empty. Deleting file.
As you can see, the code works, but only for one file at a time. It should be a relatively simple fix to get it to work, but I'm extremely new to bash scripting and can't seem to figure it out. Sorry for the noobish question. I did some research to find an answer but didn't find exactly what I needed. Thanks in advance for any help.
Edit
I have found two different solutions to the problem. Following #David Z's suggestion, one can fix this by first deleting the error-checking part of the script and putting quotes around the $_files variable in the find command. Then the code looks like this:
#!/bin/bash
_files=$1
[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }
for f in $(find -name "$_files")
do
    if [ -s $f ]
    then
        echo "$f has some data."
        # do something as file has data
    else
        echo "$f is empty. Deleting file."
        rm $f
    fi
done
Or, one can simply change the for loop to for f in "$@", which allows the error check to be kept in the script. I am not sure which method is better, but I will update again if I find out.
It looks like the way you're invoking the script, the shell expands the pattern before running your script. For example, in
./remove_blank.sh *.o*
the shell converts *.o* to the list of filenames that match that pattern, and then what it actually runs is something like
./remove_blank.sh Disp_variations_higher_0.o1906168 other_file.o12345 ...
But you only check the first argument in your script, so that's the one that winds up getting deleted.
Solution: quote the pattern when you run the script.
./remove_blank.sh '*.o*'
You will also need to remove the test [ ! -f "$_files" ] ... because $_files is being set to the pattern (such as *.o*), not a filename. In fact, you might want to rename the variable to make that clear. And finally, you need to quote the variable in your find command,
... $(find -name "$_files") ...
so that the pattern makes it all the way through to find without being converted into filenames by the shell.
There are some other issues with the script but you might want to ask about those on Code Review. What I've identified here is just the minimum needed to get it working.
Incidentally, you can accomplish this whole task using find itself, which will at least give cleaner code:
find -name "$_files" -type f -empty -delete
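For illustration, the whole script could then shrink to something like this (a minimal sketch, assuming the pattern is passed quoted, e.g. ./remove_blank.sh '*.o*'):
#!/bin/bash
# delete empty regular files matching the quoted pattern given in $1
[ $# -eq 0 ] && { echo "Usage: $0 pattern"; exit 1; }
find . -name "$1" -type f -empty -delete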

Show where directory exists

I have a bunch of code that's relatively new, i.e. with lots of bugs hiding in it, and it contains a line like this:
if [ -d $DATA_ROOT/$name ]
I've done some research and understand that this tests whether the directory exists, but now I'm trying to print out the directories that do exist in order to fix a problem.
I tried using
echo `First: $DATA_ROOT`
echo `Second: $name`
echo `Last: $DATA_ROOT/$name`
exit 1;
I got "command not found" for all of them. The code below is meant to fix the bug I'm chasing by extracting all the files, but it does not extract everything and ends with the "data extraction failed" error:
num_files=`find $DATA_ROOT/$name -name '*' | wc -l`
if [ ! $num_files -eq $extract_file ] ; then
    echo "Data extraction failed! Extracted $num_files instead of $extract_file"
    exit 1
fi
I just want to extract all the files correctly. How can I do this?
The backticks you are using mean "execute this as a command", so
echo `First: $DATA_ROOT`
echo `Second: $name`
echo `Last: $DATA_ROOT/$name`
would try to execute a command called "First:", which does not exist.
Instead, use double quotes: they allow variable substitution but do not try to execute the text as a command:
echo "First: $DATA_ROOT"
echo "Second: $name"
echo "Last: $DATA_ROOT/$name"
Also
find $DATA_ROOT/$name -name '*'
is probably not what you want; the -name '*' is the default, so you don't need it. As others point out, find will return everything, including directories and special files if you have any of those. find "$DATA_ROOT/$name" -type f is what you want if you only want to list the files, or find "$DATA_ROOT/$name" -type d if you only want to list directories. Also, always use double quotes around "$DATA_ROOT/$name", as that lets you handle file names with spaces; if $name contains a space, the command will otherwise fail.
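Putting that together, the failing check from the question could be written like this (a sketch reusing the question's variable names):
echo "Last: $DATA_ROOT/$name"
num_files=$(find "$DATA_ROOT/$name" -type f | wc -l)
if [ "$num_files" -ne "$extract_file" ]; then
    echo "Data extraction failed! Extracted $num_files instead of $extract_file"
    exit 1
fi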
find reports not only ordinary files, but also directories (including .).
Use find "$DATA_ROOT/$name" -type f.
You are using backticks, so anything inside them is treated as a command to execute, which is why you are getting the "command not found" error. You can use double quotes instead to avoid it:
echo "First: $DATA_ROOT"
echo "Second: $name"
echo "Last: $DATA_ROOT/$name"
You could use find command to list down all directories like:
find $DATA_ROOT/$name -type d
The above command lists all the directories within $DATA_ROOT/$name (that is what the -type d option does; use -type f to list all the files instead), and you can then perform operations on those directories.
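For example, to loop over those directories and run something on each one (a minimal sketch; the echo is just a placeholder for the real operation):
while IFS= read -r dir; do
    echo "Found directory: $dir"
done < <(find "$DATA_ROOT/$name" -type d)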

Bash - find exec return value

I need a way to tell if grep does find something, and ideally pass that return value to an if statement.
Let's say I have a tmp folder (current folder), in that there are several files and sub-folders. I want to search all files named abc for a pattern xyz. The search is assumed to be successful if it finds any occurrence of xyz (it does not matter how many times xyz is found). The search fails if no occurrence of xyz is found.
In bash, it can be done like this:
find . -name "abc" -exec grep "xyz" {} \;
That would show if xyz is found at all. But I'm not sure how to pass the result (successful or not) back to an if statement.
Any help would be appreciated.
You can try
x=`find . -name abc | xargs grep xyz`
echo $x
That is, x contains the matching lines; it is blank when there is no match.
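To feed that into an if statement, you could test whether the output is empty (a minimal sketch):
x=$(find . -name abc | xargs grep xyz)
if [ -n "$x" ]; then
    echo "xyz found"
else
    echo "xyz not found"
fi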
If you want to know that find finds some files abc and that at least one of them contains the string xyz, then you can probably use:
if find . -name 'abc' -type f -exec grep -q xyz {} +
then : All invocations of grep found at least one xyz and nothing else failed
else : One or more invocations of grep failed to find xyz or something else failed
fi
This relies on find returning an exit status for its own operations, and a non-zero exit status if any of the command(s) it executes returns one. The + at the end groups as many file names as find thinks reasonable into a single command line. You need quite a lot of file names (a large number of fairly long names) to make find run grep multiple times. On a Mac running Mac OS X 10.10.4, I got to about 3,000 files, each with about 32 characters in the name, for an argument list of just over 100 KiB, without grep being run multiple times. OTOH, when I had just under 8000 files, I got two runs of grep, with around 130 KiB of argument list for each.
Someone briefly left a comment asking whether the exit status of find is guaranteed. The answer is 'yes' — though I had to modify my description of the exit status a bit. Under the description of -exec, POSIX specifies:
If any invocation [of the 'utility', such as grep in this question] returns a non-zero value as exit status, the find utility shall return a non-zero exit status.
And under the general 'Exit status' it says:
The following exit values shall be returned:
0 — All path operands were traversed successfully.
>0 — An error occurred.
Thus, find will report success as long as none of its own operations fails and as long as none of the grep commands it invokes fails. Zero files found but no failures will be reported as success. (Failures might be lack of permission to search a directory, for example.)
find passes along the output of the -exec'd command, so you can test whether the whole command produced any output by placing it inside an if statement:
if [[ -n $(find . -name "abc" -exec grep "xyz" {} \;) ]]
then
    # do something
fi
The grep command can be given a -q option that will "quiet" its output. Grep will return a success or a failure on the basis of whether it found anything in the file you pointed it at.
If your shell is capable of it, rather than trying to parse the output of a find command, you might want to try using a for loop instead. For example in bash:
shopt -s globstar
cd /some/directory
for file in **/abc; do
    grep -q "xyz" "$file" && echo "Found something in $file"
done
Is this what you're looking for?
You should be able to do this with just grep and its --include option, I believe.
Like this:
if grep -r --include=abc -q xyz; then
    : Found at least one match
else
    : No matching lines found
fi
