How to display true if find is not empty - linux

I am very new to bash. I just started learning last week. I am trying to search for a file name.
How can I display a message if the file is found?
This is what I have, but it keeps saying 'no':
echo ' [Enter] a file name '
read findFile
if [[ -n $(find /$HOME -type f -name "findFile") ]]
then
echo 'yes'
else
echo 'no'
fi

A few issues:
Use var= or read var when defining a variable, but $var when using it.
There is no reason to keep searching after finding a file, so do something like the script below, where find will -quit after finding a single file and print its path via -print.
#!/bin/bash
echo ' [Enter] a file name '
read findFile
if [[ -f $(find "$HOME" -type f -name "$findFile" -print -quit) ]]; then
echo 'yes'
else
echo 'no'
fi
Note that the -quit option works with GNU and FreeBSD find (which covers most cases), but you will need to change it to -exit on NetBSD, for example.
You can see this answer from Unix/Linux StackExchange for details on this option.
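For example (an assumption based on NetBSD's manual, not tested here), the NetBSD version of the test line would be:
if [[ -f $(find "$HOME" -type f -name "$findFile" -print -exit) ]]; then
with the rest of the script unchanged.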
Also note, per Adaephon's comment, that although the / is not needed in front of $HOME, it's not wrong and the files will still be found.

Use wc to count the number of lines in the find output:
if [ $(find "$HOME" -type f -name "$findFile" 2> /dev/null | wc -l) -gt 0 ]; then
echo 'yes'
else
echo 'no'
fi
The 2> /dev/null part hides possible error messages.


Why does checking if a file exists in shell always return false?

I created a cron job using bash to delete files older than 3 days, but when checking the age of the files with -mtime +3 &> /dev/null it is always false. Here's the script:
now=$(date)
# create log file
file_names=('*_takrib_golive.gz' '*_takrib_golive_filestore.tar.gz')
touch /media/nfs/backup/backup_delete.log
echo "Date: $now" >> /media/nfs/backup/backup_delete.log
for filename in "${file_names[@]}";
do
echo $filename
if ls /media/nfs/backup/${filename} &> /dev/null
then
echo "backup files exist"
if find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -ls &> /dev/null
then
echo "The following backup file was deleted" >> /media/nfs/backup/backup_delete.log
find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -delete
else
echo "There are no ${filename} files older than 3 days in /media/nfs/backup" &>> /media/nfs/backup/backup_delete.log
fi
else
echo "No ${filename} files found in /media/nfs/backup" >> /media/backup/backup_delete.log
fi
done
exit 0
The if find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -ls &> /dev/null test always goes to the else branch, even though files older than 3 days are in the directory.
You are not quoting the -name argument, so the shell expands the glob to the name of a file which already exists before find ever sees the pattern.
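A quick way to see the problem is to replace find with echo in a directory that contains a fresh backup file; the shell substitutes the matching file name before the command runs:
$ touch new_takrib_golive.gz
$ echo -name *_takrib_golive.gz
-name new_takrib_golive.gz
find then looks only for that literal (brand-new) name, which can never satisfy -mtime +3, so the test always falls through to else.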
I would refactor this rather extensively anyway. Don't parse ls output and perhaps simplify this by making it more obvious when to quote and when not to.
Untested, but hopefully vaguely useful still:
#!/bin/bash
backup=/media/nfs/backup
backuplog=$backup/backup_delete.log
# no need to touch if you write to the file anyway
date +"Date: %C" >> "$backuplog"
# Avoid using a variable, just loop over the stems
for stem in takrib_golive takrib_golive_filestore.tar
do
# echo $filename
# Avoid parsing ls; instead, loop over matches
for filename in "$backup"/*_"$stem".gz; do
pattern="*_$stem.gz"
if [ -e "$filename" ]; then
echo "backup files exist"
files=$(find "$backup" -maxdepth 1 -mtime +3 -name "$pattern" -print -delete)
if [ -n "$files" ]
then
echo "The following backup file was deleted" >> "$backuplog"
echo "$files" >> "$backuplog"
else
echo "There are no $pattern files older than 3 days in $backup" >> "$backuplog"
fi
else
echo "No $pattern files found in $backup" >> "$backuplog"
fi
# Either way, we can break the loop after one iteration
break
done
done
# no need to explicitly exit 0
The for + if [ -e ... ] arrangement is slightly clumsy, but that's how you check whether a wildcard matched any files. If the wildcard did not match anything, if [ -e checks for a file whose name is literally the wildcard expression itself, and fails.
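To make that concrete, here is what the check does in a scratch directory with no matching files (no nullglob set):
$ for f in *_takrib_golive.gz; do [ -e "$f" ] && echo "match: $f" || echo "literal: $f"; done
literal: *_takrib_golive.gz
The loop runs exactly once with the unexpanded pattern, and -e correctly reports that no such file exists.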

Bash Globbing Pattern Matching for Imagemagick recursive convert to pdf

I have the following 2 scripts that recursively convert folders of images to PDFs for my wife's Japanese manga Kindle, using find and ImageMagick convert:
#!/bin/bash
_d="$(pwd)"
echo "$_d"
find . -type d -exec echo "Will convert in the following order: {}" \;
find . -type d -exec echo "Converting: '{}'" \; -exec convert '{}/*.jpg' "$_d/{}.pdf" \;
and the same for PNG
#!/bin/bash
_d="$(pwd)"
echo "$_d"
find . -type d -exec echo "Will convert in the following order: {}" \;
find . -type d -exec echo "Converting: '{}'" \; -exec convert '{}/*.png' "$_d/{}.pdf" \;
Unfortunately, I am not able to make one universal script that works for all image formats.
How do I make one script that works for both?
I would also need JPG and PNG, as well as jpeg and JPEG.
Thanks in advance
I wouldn't use find at all, just a loop:
#!/usr/bin/env bash
# enable recursive globs
shopt -s globstar
for dir in **/*/; do
printf "Converting jpgs in %s\n" "$dir"
convert "$dir"/*.jpg "$dir/out.pdf"
done
If you want to combine .jpg and .JPG in the same pdf, add nocaseglob to the shopt line. Add .jpeg to the mix? Add extglob and change "$dir"/*.jpg to "$dir"/*.@(jpg|jpeg)
You can do more complicated actions if you turn the find exec into a bash function (or even a standalone script).
#!/bin/bash
do_convert()(
shopt -s nullglob
for dir in "$#"; do
files=("$dir"/*.{jpg,JPG,PNG,jpeg,JPEG})
if [[ -z $files ]]; then
echo 1>&2 "no suitable files in $dir"
continue
fi
echo "Converting $dir"
convert "${files[#]}" "$dir.pdf"
done
)
export -f do_convert
pwd
echo "Will convert in the following order:"
find . -type d
# find . -type d -exec bash -c 'do_convert {}' \;
find . -type d -exec bash -c 'do_convert "$@"' -- {} \+
nullglob makes *.xyz return nothing if there is no match, instead of returning the original string unchanged
p/*.{a,b,c} expands into p/*.a p/*.b p/*.c before the * are expanded
x()(...) instead of the more normal x(){...} uses a subshell so we don't have to remember to unset nullglob again or clean up any variable definitions
export -f x makes function x available in subshells
we skip conversion if there are no suitable files
with the slightly more complicated find command, we can reduce the number of invocations of bash (probably doesn't save a great deal in this particular case)
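A small scratch-directory demo of the nullglob and brace-expansion notes above:
mkdir -p demo && touch demo/a.jpg
shopt -s nullglob
echo demo/*.{jpg,png}   # demo/a.jpg            (unmatched *.png vanishes)
shopt -u nullglob
echo demo/*.{jpg,png}   # demo/a.jpg demo/*.png (unmatched pattern kept literally)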
how about a one-liner
dry-run
find -name \*.jpg -or -name \*.png | xargs -I xxx echo "xxx =>" xxx.pdf
run
find -name \*.jpg -or -name \*.png | xargs -I xxx convert xxx xxx.pdf
help
-name matches the file name
-or logical or => match both jpg and png
xargs maps its input into a name used to execute a command
-I selects a placeholder name; it is like {} in find
NOTE
instead of $(pwd) which is a command substitution you can use variable $PWD
xxx maps into a name, and xxx.pdf still has the matched extension found by find, which means filename.png becomes filename.png.pdf. If this is not desired, you can sed it away (see below)
to run convert command in parallel you can use -P 0 with xargs -- see xargs --help
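For example, a parallel version of the run line (GNU xargs; -P 0 starts as many convert processes as possible):
find -name \*.jpg -or -name \*.png | xargs -P 0 -I xxx convert xxx xxx.pdf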
With sed to remove extensions
dry-run
find -name \*.jpg -or -name \*.png | sed 's/\.\(png\|jpg\)$//' | xargs -I xxx echo "xxx =>" xxx.pdf
@shawn Your solution works; just as I stated in the comments, I am too stupid to name the resulting pdf properly (folder name) and save it in the script caller's directory. Nevertheless, it solves my case-insensitive jpg, jpeg, png problems just fine.
Here is shawn's solution:
#!/bin/bash
# enable recursive globs
shopt -s globstar nocaseglob extglob
for dir in **/*/; do
printf "Converting (jpg|jpeg|png) in %s\n" "$dir"
convert "$dir"/*.#(jpg|jpeg|png) "$dir/out.pdf"
done
@jhnc Your solution works out of the box; it does exactly what I intended, and I really like calling functions, or even standalone scripts, to increase complexity. One drawback is that I cannot Ctrl-C the process; is that because it is threaded, or because it runs in a subshell? I think you were missing an exit statement at the end of the function; it never stopped.
#!/bin/bash
do_convert()(
shopt -s nullglob
for dir in "$#"; do
files=("$dir"/*.{jpg,JPG,png,PNG,jpeg,JPEG})
if [[ -z $files ]]; then
echo 1>&2 "no suitable files in $dir"
continue
fi
echo "Converting $dir"
convert "${files[#]}" "$dir.pdf"
done
exit
)
export -f do_convert
pwd
echo "Will convert in the following order:"
find . -type d
# find . -type d -exec bash -c 'do_convert {}' \;
find . -type d -exec bash -c 'do_convert "$@"' -- {} \+
@everyone else: it's already after midnight again. I guess this is a trivial question for you guys, and I am very grateful for ALL your answers; I didn't have the time to try everything.
I find Linux bash very challenging.
A lot of ways to skin this cat. My thought is:
for F in $(find . -type f -print)
do
TYPE=$(file -b --mime-type "$F")
if [ "$TYPE" = image/png ]
then
## do png conversion here
elif [ "$TYPE" = image/jpeg ]
then
## do jpg conversion here
fi
done

How to find and delete resized Wordpress images if the original image was already deleted?

This question pertains to the situation where
An image was uploaded, say mypicture.jpg
Wordpress created multiple copies of it with different resolutions like mypicture-300x500.jpg and mypicture-600x1000.jpg
You delete the original image only
In this scenario, the remaining photos on the filesystem are mypicture-300x500.jpg and mypicture-600x1000.jpg.
How can you script finding these "dangling" images whose original is missing, and then delete them?
You could use find to find all lower resolution pictures with the -regex test:
find . -type f -regex '.*-[0-9]+x[0-9]+\.jpg'
And this would be much better than trying to parse the ls output which is for humans only, not for automation. A safer (and simpler) bash script could thus be:
#!/usr/bin/env bash
while IFS= read -r -d '' f; do
[[ "$f" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ] &&
echo rm -f "$f"
done < <(find . -type f -regex '.*-[0-9]+x[0-9]+\.jpg' -print0)
(delete the echo once you are convinced that it works as expected).
Note: we use the -print0 action and the empty read delimiter (-d '') to separate the file names with the NUL character instead of the newline character. This is preferable because it works as expected even if you have unusual file names (e.g., with spaces).
Note: as we test the file name inside the loop we could simply search for files (find . -type f -print0). But I suspect that if you have a large number of files the performance would be negatively impacted. So keeping the -regex test is probably better.
Bash loops are OK, but they tend to become really slow as the number of iterations increases. So, let's incorporate our simple bash script into a single find command with the -exec action:
find . -type f -exec bash -c '[[ "$1" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ]' _ {} \; -print
Note: bash -c takes a script to execute as first argument, then the positional parameters to pass to the script, starting with $0. This is why we pass _ (my favourite for don't care), followed by {} (the current file path).
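You can see the parameter handling without find, e.g.:
$ bash -c 'echo "0=$0 1=$1"' _ ./mypicture-300x500.jpg
0=_ 1=./mypicture-300x500.jpg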
Note: -print is normally the default find action but here it is needed because -exec is one of the find actions that inhibit the default behaviour.
This will print a list of files. Check that it is correct and, once you are satisfied, add the -delete action:
find . -type f -exec bash -c '[[ "$1" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ]' _ {} \; -delete -print
See man find and man bash for more explanations.
Demo:
$ touch mypicture.jpg mypicture-300x500.jpg mypicture-600x1000.jpg
$ find . -type f -exec bash -c '[[ "$1" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ]' _ {} \; -print
$ rm -f mypicture.jpg
$ find . -type f -exec bash -c '[[ "$1" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ]' _ {} \; -print
./mypicture-300x500.jpg
./mypicture-600x1000.jpg
$ find . -type f -exec bash -c '[[ "$1" =~ (.*)-[0-9]+x[0-9]+\.jpg ]] &&
! [ -f "${BASH_REMATCH[1]}".jpg ]' _ {} \; -delete -print
./mypicture-300x500.jpg
./mypicture-600x1000.jpg
$ ls *.jpg
ls: cannot access '*.jpg': No such file or directory
One last note: if, by accident, one of your full resolution pictures matches the regular expression for lower resolution pictures (e.g., if you have a balloon-1x1.jpg full resolution picture), it will be deleted. This is unfortunate, but according to your specifications there is no easy way to distinguish it from an orphan lower resolution picture. Be careful...
I've written a Bash script that attempts to find the original filename (i.e. mypicture.jpg) by scraping away the WordPress resolution suffix (i.e. mypicture-300x500.jpg), and if the original is not found, deletes the "dangling" image (i.e. rm -f mypicture-300x500.jpg):
#!/bin/bash
for directory in $(find . -type d)
do
for image in $(ls $directory)
do
echo "The current filename is $image"
resolution=$(echo $image | rev | cut -f 1 -d "-" | rev | xargs)
echo "The resolution is $resolution"
extension=$(echo $resolution | rev| cut -f 1 -d "." | rev | xargs)
echo "The extension is $extension"
resolutiononly=$(echo $resolution | sed "s#.$extension##g")
echo "The resolution only is $resolutiononly"
pattern="[0-9]+x[0-9]+"
if [[ $resolutiononly =~ $pattern ]]; then
echo "The pattern matches"
originalfilename=$(echo $image | sed "s#-$resolution#.$extension#g")
echo "The current filename is $image"
echo "The original filename is $originalfilename"
if [[ -f "$originalfilename" ]]; then
echo "The file exists $originalfilename"
else
rm -f "$directory/$image"
fi
else
continue
fi
done
done

check if find command returns something (in bash script)

I have the following bash script on my server:
today=$(date +"%Y-%m-%d")
find /backups/www -type f -mtime -1|xargs tar uf /daily/backup-$today.tar
As you can see, it creates backups of files modified/created in the last 24h. However, if no files are found, it creates a corrupted tar file. I would like to wrap it in an if..fi statement so it doesn't create empty/corrupted tar files.
Can someone help me modify this script?
Thanks
You can check whether the command succeeded, and then whether the result is empty:
today=$(date +"%Y-%m-%d")
results=$(find /backups/www -type f -mtime -1)
if [[ $? -eq 0 ]] ; then
if [[ -z $results ]] ; then
echo "No files found"
else
tar uf /daily/backup-$today.tar $results
fi
else
echo "Search failed"
fi
find /backups/www -type f -mtime -1 -exec tar uf /daily/backup-$today.tar {} +
Using -exec is preferable to xargs. There's no pipeline needed and it will handle file names with spaces, newlines, and other unusual characters without extra work. The {} at the end is a placeholder for the file names, and + marks the end of the -exec command (in case there were more arguments to find).
As a bonus it won't execute the command if no files are found.
One relatively simple trick would be this:
today=$(date +"%Y-%m-%d")
touch /backups/www/.timestamp
find /backups/www -type f -mtime -1|xargs tar uf /daily/backup-$today.tar
That way you're guaranteed to always find at least one file (and it's minimal in size).
xargs -r does nothing if there is no input.
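Applied to the script from the question (GNU xargs assumed), that would look like:
today=$(date +"%Y-%m-%d")
find /backups/www -type f -mtime -1 | xargs -r tar uf /daily/backup-$today.tar
If find prints nothing, xargs -r never runs tar, so no empty or corrupted archive is created.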

A bash script to run a program for directories that do not have a certain file

I need a bash script to execute a program for all directories that do not have a specific file, and to create the output file in the same directory. This program needs an input file, which exists in every directory with the name *.DNA.fasta. Suppose I have the following directories, which may also contain subdirectories:
dir1/a.protein.fasta
dir2/b.protein.fasta
dir3/anyfile
dir4/x.orf.fasta
I have started by finding the directories that don't have that specific file, whose name matches *.protein.fasta.
In this case I want dir3 and dir4 to be listed (since they do not contain *.protein.fasta).
I have tried this code:
find . -maxdepth 1 -type d \! -exec test -e '{}/*protein.fasta' \; -print
but it seems I missed something; it does not work.
I also do not know how to proceed with the rest of the task.
This is a tricky one.
I can't think of a good solution. But here's a solution, nevertheless. Note that this is guaranteed not to work if your directory or file names contain newlines, and it's not guaranteed to work if they contain other special characters. (I've only tested with the samples in your question.)
Also, I haven't included a -maxdepth because you said you need to search subdirectories too.
#!/bin/bash
# Create an associative array
declare -A excludes
# Build an associative array of directories containing the file
while read line; do
excludes[$(dirname "$line")]=1
echo "excluded: $(dirname "$line")" >&2
done <<EOT
$(find . -name "*protein.fasta" -print)
EOT
# Walk through all directories, print only those not in array
find . -type d \
| while read line ; do
if [[ ! ${excludes[$line]} ]]; then
echo "$line"
fi
done
For me, this returns:
.
./dir3
./dir4
All of which are directories that do not contain a file matching *.protein.fasta. Of course, you can replace the last echo "$line" with whatever you need to do with these directories.
Alternately:
If what you're really looking for is just the list of top-level directories that do not contain the matching file in any subdirectory, the following bash one-liner may be sufficient:
for i in *; do test -d "$i" && ( find "$i" -name '*protein.fasta' | grep -q . || echo "$i" ); done
#!/bin/bash
for dir in *; do
test -d "$dir" && ( find "$dir" -name '*protein.fasta' | grep -q . || Programfoo"$dir/$dir.DNA.fasta");
done
