Linux: check if a filename exists and delete - linux

I have a problem with a shell script.
I have to write a script that finds every file in a directory whose name contains the string "gom". Finding them is done. Next I need to cut that string out of each name and check whether a file with the remaining name exists. If it does, I need to remove the file that contains the string.
Example: there are 5 files: algomb, gomba, alb, algomba, alba.
The filenames containing "gom" are algomb, gomba and algomba.
After cutting out "gom" from those, the 5 names on hand are: alb, ba, alb, alba, alba.
Two of the stripped names already exist as files (alb and alba), so I need to remove the files algomb and algomba.
After that, 3 files remain: gomba, alb, alba.
I can find the files and I can remove them, but I can't compare the filenames.
Here's my code:
#!/bin/bash
sz="gom"
talal=`find . -type f -name "*$sz*" -exec basename {} \;`
ossz=`find . -type f -exec basename {} \;`
c=`echo ${talal%%:*}${talal##*:}`
for c in ossz; do
    if [ ! -d ]; then
        echo "This is a directory"
    else
        if [ -f ]; then
            find . -type f -name "*$sz*" -exec basename {} \;
        else
            echo ${talal%%:*}${talal##*:}
        fi
    fi
done
This partly works: echo ${talal%%:*}${talal##*:} gives back the filename without "gom", but I can't compare those values with the results of find . -type f -exec basename {} \;.
Sorry for my bad English.
Can somebody help me?
Best regards, Richard

I would do it this way, without find.
shopt -s globstar
sz="gom"
for gom_file in **/*"$sz"*; do
    # Skip non-regular files
    [[ -f $gom_file ]] || continue
    # Delete the file if the gom-free filename exists
    [[ -f ${gom_file/"$sz"/} ]] && rm "$gom_file"
done
Using find is slightly less efficient, since it forks a new shell for each file:
find . -type f -name "*gom*" -exec bash -c '[[ -f ${1/gom/} ]] && echo rm -f "$1"' _ {} \;
Run this first to check that it prints the rm commands you want to execute, then run it again with echo removed.
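For reference, once the printed commands look right, the deletion pass is the same command with the echo dropped:
find . -type f -name "*gom*" -exec bash -c '[[ -f ${1/gom/} ]] && rm -f "$1"' _ {} \;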

I think you want to use these bash features:
arrays (declare -a)
associative arrays / hashtables (declare -A)
pattern substitution ( ${a/b/c}, ${a//b/c} )
Here is an example as a general idea:
#!/bin/bash
sz="gom"
declare -a ta      # files whose names contain $sz
declare -A ossz    # set of every basename in the tree
i=0
while read -r -d $'\0' ff; do
    ta[$i]=$ff
    i=$((i+1))
done < <(find . -type f -name "*$sz*" -print0)
while read -r -d $'\0' ff; do
    ossz[${ff##*/}]=1
done < <(find . -type f -print0)
for ff in "${ta[@]}"; do
    subff=${ff/$sz/}      # drop the first occurrence of $sz
    subff=${subff##*/}    # keep only the basename
    if [ _${ossz[$subff]} = _1 ]; then
        echo "$ff"        # replace echo with rm -- "$ff" once verified
    fi
done

Related

Find files with certain date strings

I have a number of files with dates in their names:
file_19990101.txt
file_19990102.txt
...
file_20031231.txt
I want to generate an ASCII file listing all the files up to the date 20010320, and then a second file listing the files from 20010321 to 20031231. How can this be accomplished using bash commands?
My current solution is to do a bunch of find commands:
find . -name "file_1999*" -print > index.txt
find . -name "file_2000*" -print >> index.txt
find . -name "file_200101*" -print >> index.txt
find . -name "file_200102*" -print >> index.txt
find . -name "file_2001030*" -print >> index.txt
find . -name "file_2001031*" -print >> index.txt
find . -name "file_20010320*" -print >> index.txt
etc. But there must be an easier way to accomplish this task!
This might work:
for f in file_*
do
    ymd=${f#file_}
    ymd=${ymd%.txt}
    [[ "${ymd}" < "20010321" ]] && echo "${f}" >>index1 || echo "${f}" >>index2
done
Loop over the glob, strip the prefix and suffix, compare with the cutoff and append to one file or the other.
Edit: I missed the second condition.
#!/bin/bash
for f in file_*
do
    ymd=${f#file_}
    ymd=${ymd%.txt}
    if [[ "${ymd}" < "20010321" ]]
    then
        echo "${f}" >>index1
    elif [[ "${ymd}" > "20031231" ]]
    then
        :
    else
        echo "${f}" >>index2
    fi
done
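The plain string comparison works here only because the dates are zero-padded, fixed-width YYYYMMDD strings, so lexicographic order matches chronological order. A quick sanity check with made-up values:
ymd="20010320"
[[ "$ymd" < "20010321" ]] && echo "goes to index1"   # prints: goes to index1
ymd="20040101"
[[ "$ymd" > "20031231" ]] && echo "out of range"     # prints: out of range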

Recursively find files of type "data" that end with a specific extension, in order to delete them

We want to search for and delete the data files that end with the extension .pppd.
We can find and delete the files with
find $path -type f -name '*.pppd' -delete
but how do we tell the find command to filter only the data files?
Example of how to verify that a file is data (using the file command):
file /data/file.pppd
/data/file.pppd: data
The file command, from its manual page:
NAME
file — determine file type
SYNOPSIS
file [-bchiklLNnprsvz0] [--apple] [--mime-encoding] [--mime-type] [-e testname] [-F separator] [-f namefile] [-m magicfiles] file ...
file -C [-m magicfiles]
file [--help]
You have to launch a shell:
find "${path}" \
    -type f \
    -name '*.pppd' \
    -exec bash -c 'test "$(file -b "${1}")" = "data"' -- {} \; \
    -print
You can use the find command with the -exec option to launch an explicit shell that runs a loop comparing each file's type.
find "$path" -type f \
    -name '*.pppd' \
    -exec bash -c 'for f; do [[ $(file -b "$f") = "data" ]] && echo "$f"; done' _ {} +
This Unix.SE answer explains nicely how -exec bash -c works with the find command. Briefly: the results of the find command, after the filter conditions ( -name, -type and -path ), are passed as positional arguments to the inline script run by -exec bash -c '..'. The loop iterates over that argument list ( for f is shorthand for for f in "$@" ) and prints only the files whose type is data. Instead of parsing the full output of file, use file -b to get the type directly.
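As a quick illustration of how the positional arguments reach the inline script (the file names here are made up):
# "_" becomes $0 inside the inline script; a.pppd and b.pppd become $1 and $2
bash -c 'echo "script: $0"; for f; do echo "arg: $f"; done' _ a.pppd b.pppd
# prints:
#   script: _
#   arg: a.pppd
#   arg: b.pppd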
You can do it like this. Change the empty regex below to any valid Bash regex, for instance ^data, and the extension to whatever you want to search for:
#!/bin/bash
# Read matching paths into an array, one per line
mapfile -t files < <(find . -type f -name '*.pppd')
for file in "${files[@]}"
do
    [[ "$(file -b "$file")" =~ ^empty ]] && echo "$file"
done
If you want to delete the file:
[[ "$(file -b "$file")" =~ ^empty ]] || rm "$file"
Hope it helps!

Run find command from a bash file

Hi people: I'm making an xfe script that takes a given directory as the source, uses zenity to get the output directory, and performs some operations, for example:
#!/bin/bash
OUTPUT_DIR=`zenity --file-selection --directory --filename="$1"`
if [ $? == 0 ]; then
    find . -maxdepth 1 -type f -name "*.wav" -exec bash -c 'oggenc -Q "$0" -q 3 "$OUTPUT_DIR/${0%.wav}.ogg"' {} \;
fi
When the script is invoked, oggenc is not executed...any ideas?
Solution: Based on the answers below, this works as expected:
#!/usr/bin/sh
OUTPUT_DIR=$(zenity --file-selection --directory --filename="$1")
if [ $? = 0 ]; then
    export OUTPUT_DIR
    find "$1" -maxdepth 1 -type f -name "*.wav" -exec sh -c 'oggenc -Q "$0" -q 3 -o "${OUTPUT_DIR}/$(basename "${0/.wav/.ogg}")"' {} \;
fi
zenity --info --text="Done"
To make the variable $OUTPUT_DIR available to the child process, add one line:
OUTPUT_DIR=$(zenity --file-selection --directory --filename="$1")
if [ $? = 0 ]; then
    export OUTPUT_DIR
    find . -maxdepth 1 -type f -name "*.wav" -exec bash -c 'oggenc -Q "$0" -q 3 "$OUTPUT_DIR/${0%.wav}.ogg"' {} \;
fi
Or, slightly simpler:
if OUTPUT_DIR=$(zenity --file-selection --directory --filename="$1"); then
    export OUTPUT_DIR
    find . -maxdepth 1 -type f -name "*.wav" -exec bash -c 'oggenc -Q "$0" -q 3 "$OUTPUT_DIR/${0%.wav}.ogg"' {} \;
fi
Notes:
The command 'oggenc -Q "$0" -q 3 "$OUTPUT_DIR/${0%.wav}.ogg"' appears in single-quotes. This means that the variables are not expanded by the parent shell. They are expanded by the child shell. To make it available to the child shell, a variable must be exported.
[ $? == 0 ] works in bash but [ $? = 0 ] will also work and is more portable.
Command substitution can be done with backticks, and some very old shells only accept backticks. For modern shells, however, the $(...) form has the advantage of improved readability (some fonts don't clearly distinguish backquotes from normal quotes). Also, $(...) can be nested in a clear and sensible way.
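A quick way to see the effect of export on a child shell (FOO is just a throwaway variable for illustration):
FOO=hello
bash -c 'echo "unexported: [$FOO]"'   # prints: unexported: []
export FOO
bash -c 'echo "exported: [$FOO]"'     # prints: exported: [hello]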
I'd prefer to use a while loop rather than spawning a shell for each file. Your code may be rewritten this way:
while IFS= read -r -d '' file; do
    oggenc -Q "${file}" -q 3 "${OUTPUT_DIR}/$(basename "${file/.wav/.ogg}")"
done < <(find . -maxdepth 1 -type f -name "*.wav" -print0)
The reason your code was not working is that the single quotes prevent the parent shell from expanding $OUTPUT_DIR, and since the variable was not exported, the child shell that does expand it sees it as empty.
EDIT
-print0 together with read -d '' splits find's output on NUL bytes instead of newlines, so whitespace in filenames is preserved; IFS= additionally stops read from trimming leading and trailing whitespace.
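To see why the NUL delimiter matters for names containing whitespace, here is a small self-contained test (the directory and file name are invented for the demo):
mkdir -p /tmp/print0-demo && touch "/tmp/print0-demo/my song.wav"
while IFS= read -r -d '' f; do
    printf 'got: [%s]\n' "$f"    # prints the full name, space included
done < <(find /tmp/print0-demo -type f -name '*.wav' -print0)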

Recursive unrar and deletion in directory and all subdirectories

I'm trying to write a script that will crawl my Plex media folder, find any ".r00" header files, extract them in their own directory, and trash the archive files once it's done. I have two options I've been playing around with. Combined they do what I want, but I would like to have it all in one nice little script.
Option 1:
This script opens the "LinRAR" GUI, makes me navigate to a specific directory, finds and extracts any .r00 file in that directory, and successfully deletes all the archive files.
while true; do
    if dir=$(zenity --title="LinRAR by dExIT" --file-selection --directory); then
        if [[ ! -d $dir ]]; then
            echo "$dir: Wrong Directory" >&2
        else
            ( cd "$dir" && for f in *.r00; do [[ -f $f ]] || continue; rar e "$f" && rm "${f%00}"[0-9][0-9]; done )
        fi
    else
        echo "$bold Selection cancelled $bold_off" >&2
        exit 1
    fi
    zenity --title="What else...?" --question --text="More work to be done?" || break
done
Option 2:
This script cd's to my Plex folder, recursively finds any .r00 files, extracts to my /home/user folder, and does not remove the archive files.
(cd '/home/user/Plex');
while [ "`find . -type f -name '*.r00' | wc -l`" -gt 0 ]; do
    find -type f -name "*.r00" -exec rar e -- '{}' \; -exec rm -- '{}' \;
done
I would like to have something that takes the first working script, and applies the recursive find to all folders inside of /Plex instead of only letting me navigate to one folder at a time through the "LinRAR" GUI.
No need to use cd: find takes a starting directory, which is that dot (.) you passed to it.
I've also added another (more sane) alternative for the find & delete loop:
#!/bin/bash
while true; do
    if dir=$(zenity --title="LinRAR by dExIT" --file-selection --directory); then
        if [[ ! -d $dir ]]; then
            echo "$dir: Wrong Directory" >&2
        else
            # Alternative 1 - a little more comfortable, and safe for names with spaces
            while IFS= read -r -d '' file; do
                rar e "$file" && rm "$file"
            done < <(find "${dir}" -type f -name '*.r00' -print0)
            # Alternative 2 - based on your original code
            while [ "`find "${dir}" -type f -name '*.r00' | wc -l`" -gt 0 ]; do
                find "${dir}" -type f -name "*.r00" -exec rar e -- '{}' \; -exec rm -- '{}' \;
            done
        fi
    else
        echo "$bold Selection cancelled $bold_off" >&2
        exit 1
    fi
    zenity --title="What else...?" --question --text="More work to be done?" || break
done
According to the comments, I ran a small example of this code and it works perfectly fine:
#!/bin/bash
if dir=$(zenity --title="LinRAR by dExIT" --file-selection --directory); then
    if [[ ! -d $dir ]]; then
        echo "$dir: Wrong directory" >&2
    else
        find "$dir" -type f
    fi
else
    echo "cancelled"
fi
A directory is successfully picked and all its files are printed. If I choose to cancel in zenity, it prints 'cancelled'.

Bash Script to process data containing input string

I am trying to create a script that will find all the files in a folder that contain, for example, the string 'J34567' and process them. Right now my code processes every file in the folder; the script does not limit itself to files matching the given string. In other words, even when I run the script with a string argument, e.g. ./bashexample 'J37264', it still processes all the files. Here is my code below:
#!/bin/bash
directory=$(cd `dirname .` && pwd)
tag=$1
echo find: $tag on $directory
find $directory . -type f -exec grep -sl "$tag" {} \;
for files in $directory/*$tag*
do
    for i in *.std
    do
        /projects/OPSLIB/BCMTOOLS/sumfmt_linux < $i > $i.sum
    done
    for j in *.txt
    do
        egrep "device|Device|\(F\)" $i > $i.fail
    done
    echo $files
done
Kevin, you could try the following:
#!/bin/bash
directory='/home'
tag=$1
for files in "$directory"/*"$tag"*
do
    if [ -f "$files" ]
    then
        #do your stuff
        echo "$files"
    fi
done
where directory is your directory name (you could pass it as a command-line argument too) and tag is the search term you are looking for in a filename.
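If you do want to pass the directory on the command line as suggested, one possible sketch (treating an optional second argument as the directory, with /home as an arbitrary fallback):
#!/bin/bash
tag=$1
directory=${2:-/home}    # optional second argument, defaulting to /home
for files in "$directory"/*"$tag"*
do
    if [ -f "$files" ]
    then
        #do your stuff
        echo "$files"
    fi
done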
The following script will give you the list of files that contain the given pattern inside the file (not in the file name).
#!/bin/bash
directory=`pwd`
tag=$1
for file in $(find "$directory" -type f -exec grep -l "$tag" {} \;); do
    echo "$file"
    # use $file for further operations
done
What is the relevance of the .std, .txt, .sum and .fail files to the files containing the given pattern?
It is assumed there are no special characters, spaces, etc. in the file names.
If there are, the following links should help work around them (see the sketch after the links):
How can I escape white space in a bash loop list?
Capturing output of find . -print0 into a bash array
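For reference, a minimal sketch of the -print0-into-array approach those links describe (variable names follow the script above):
files=()
while IFS= read -r -d '' f; do
    files+=("$f")    # append each NUL-delimited path safely
done < <(find "$directory" -type f -name "*${tag}*" -print0)
printf '%s\n' "${files[@]}"    # the collected file names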
There are multiple issues in your script.
The following is not required just to set the operating directory to the current directory:
directory=$(cd `dirname .` && pwd)
find traverses the current directory twice, because both $directory and . are passed as starting points:
find $directory . -type f -exec grep -sl "$tag" {} \;
Also, the output of the above find is not used anywhere in the for loop.
The for loop runs over the files in $directory (subdirectories are not considered) whose names contain the given pattern:
for files in $directory/*$tag*
The following loop runs once per .txt file in the current directory, but produces only one output file because it keeps using $i from the previous loop:
for j in *.txt
do
    egrep "device|Device|\(F\)" $i > $i.fail
done
This is my temporary solution. Please check whether it matches your intention.
#!/bin/bash
directory=$(cd `dirname .` && pwd) ## Should this be just directory=$PWD ?
tag=$1
echo "find: $tag on $directory"
find "$directory" . -type f -exec grep -sl "$tag" {} \; ## Shouldn't you add -maxdepth 1 ? Are the files listed here the ones that should be processed in the loop below instead?
for file in "$directory"/*"$tag"*; do
    if [[ $file == *.std ]]; then
        /projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
    fi
    if [[ $file == *.txt ]]; then
        egrep "device|Device|\(F\)" "$file" > "${file}.fail"
    fi
    echo "$file"
done
Update 1
#!/bin/bash
directory=$PWD ## Change this to another directory if needed.
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
    echo "$file"
    case "$file" in
        *.std)
            /projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
            ;;
        *.txt)
            egrep "device|Device|\(F\)" "$file" > "${file}.fail"
            ;;
        *)
            echo "Unexpected match: $file"
            ;;
    esac
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" \( -name '*.std' -or -name '*.txt' \) -print0) ## Change or remove the maxdepth option as wanted.
Update 2
#!/bin/bash
directory=$PWD
tag=$1
echo "find: $tag on $directory"
while IFS= read -rd $'\0' file; do
    echo "$file"
    /projects/OPSLIB/BCMTOOLS/sumfmt_linux < "$file" > "${file}.sum"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.std' -print0)
while IFS= read -rd $'\0' file; do
    echo "$file"
    egrep "device|Device|\(F\)" "$file" > "${file}.fail"
done < <(exec find "$directory" -maxdepth 1 -type f -name "*${tag}*" -name '*.txt' -print0)
