I have a bunch of code that's relatively new (i.e. lots of bugs hiding in it), including a line like this:
if [ -d $DATA_ROOT/$name ]
I've done some research and understand that this tests whether the directory exists, but now I'm trying to print out the directories that exist so I can fix a problem.
Tried using
echo `First: $DATA_ROOT`
echo `Second: $name`
echo `Last: $DATA_ROOT/$name`
exit 1;
I got "command not found" for all three. The script is supposed to extract all files, but it doesn't extract everything and ends up hitting the "data extraction failed" error below:
num_files=`find $DATA_ROOT/$name -name '*' | wc -l`
if [ ! $num_files -eq $extract_file ] ; then
echo "Data extraction failed! Extracted $num_files instead of $extract_file"
exit 1;
fi
I just want to extract all files correctly -- how can I do this?
The backticks you are using mean "execute this as a command":
echo `First: $DATA_ROOT`
echo `Second: $name`
echo `Last: $DATA_ROOT/$name`
would try to execute a command called First:, which does not exist.
Instead, use double quotes: they allow variable substitution but do not try to execute the contents as a command:
echo "First: $DATA_ROOT"
echo "Second: $name"
echo "Last: $DATA_ROOT/$name"
Also
find $DATA_ROOT/$name -name '*'
is probably not what you want: -name '*' is the default, so you don't need it. As others point out, find will return everything, including directories and special files if you have any of those. Use find "$DATA_ROOT/$name" -type f if you only want to list files, or find "$DATA_ROOT/$name" -type d if you only want to list directories. Also, always put double quotes around "$DATA_ROOT/$name" so that file names with spaces are handled correctly -- otherwise the command will break if $name contains a space.
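Putting those fixes together, a minimal sketch of the corrected check (keeping your existing $extract_file variable) could be:
# count only regular files, with the path quoted so spaces survive
num_files=$(find "$DATA_ROOT/$name" -type f | wc -l)
if [ "$num_files" -ne "$extract_file" ]; then
echo "Data extraction failed! Extracted $num_files instead of $extract_file"
exit 1
fi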
find reports not only ordinary files, but also directories (including .).
Use find "$DATA_ROOT/$name" -type f.
You are using backticks, so anything inside them is treated as a command to execute, which is why you get the "command not found" error. Use double quotes instead to avoid the error, like below:
echo "First: $DATA_ROOT"
echo "Second: $name"
echo "Last: $DATA_ROOT/$name"
You could use find command to list down all directories like:
find "$DATA_ROOT/$name" -type d
The above command lists all the directories within $DATA_ROOT/$name (the -type d option selects directories; use -type f to list all the files instead), and then you can perform operations on those directories.
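If you then want to perform an operation on each directory found, a common pattern is a while-read loop. Here is a sketch; the -print0/read -d '' pairing keeps directory names with spaces intact:
# iterate over the directories safely, one NUL-terminated name at a time
find "$DATA_ROOT/$name" -type d -print0 |
while IFS= read -r -d '' dir; do
echo "found directory: $dir"
# ... operate on "$dir" here
done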
So as the title describes, I want to recursively delete all files which match a naming pattern given by the user, but only if the file is empty. Here is my attempt:
#!/bin/bash
_files="$1"
[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }
[ ! -f "$_files" ] && { echo "Error: $0 no files were found which match given naming structure."; exit 2; }
for f in $(find -name $_files)
do
if [ -s "$f" ]
then
echo "$f has some data."
# do something as file has data
else
echo "$f is empty. Deleting file."
rm $f
fi
done
Example output:
./remove_blank.sh *.o*
./Disp_variations_higher_0.o1906168 has some data.
./remove_blank.sh *.e*
./Disp_variations_higher_15.e1906183 is empty. Deleting file.
As you can see, the code works, but only for one file at a time. It should be a relatively simple fix, but I'm extremely new to bash scripting and can't seem to figure it out. Sorry for the noobish question. I did some research but didn't find exactly what I needed. Thanks in advance for any help.
Edit
I have found two different solutions to the problem. Following @David Z's suggestion, one can fix this by first deleting the error-checking line from the script and putting quotes around the $_files variable in the find command. The code then looks like this:
#!/bin/bash
_files=$1
[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }
for f in $(find -name "$_files")
do
if [ -s "$f" ]
then
echo "$f has some data."
# do something as file has data
else
echo "$f is empty. Deleting file."
rm "$f"
fi
done
Or, one can simply change the for loop to for f in "$@", which allows the error check to be kept in the script. I am not sure which method is better but will update again if I find out.
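For reference, a minimal sketch of that second variant: the calling shell has already expanded the pattern into arguments, so the loop just walks "$@" (note it only sees the files the shell's glob matched):
#!/bin/bash
[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; }
for f in "$@"
do
if [ -s "$f" ]
then
echo "$f has some data."
else
echo "$f is empty. Deleting file."
rm "$f"
fi
done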
It looks like the way you're invoking the script, the shell expands the pattern before running your script. For example, in
./remove_blank.sh *.o*
the shell converts *.o* to the list of filenames that match that pattern, and then what it actually runs is something like
./remove_blank.sh Disp_variations_higher_0.o1906168 other_file.o12345 ...
But you only check the first argument in your script, so that's the one that winds up getting deleted.
Solution: quote the pattern when you run the script.
./remove_blank.sh '*.o*'
You will also need to remove the test [ ! -f "$_files" ] ... because $_files is being set to the pattern (such as *.o*), not a filename. In fact, you might want to rename the variable to make that clear. And finally, you need to quote the variable in your find command,
... $(find -name "$_files") ...
so that the pattern makes it all the way through to find without being converted into filenames by the shell.
There are some other issues with the script but you might want to ask about those on Code Review. What I've identified here is just the minimum needed to get it working.
Incidentally, you can accomplish this whole task using find itself, which will at least give cleaner code:
find -name "$_files" -type f -empty -delete
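If you also want the script to report what it deletes, GNU and BSD find both let you chain -print in front of -delete, e.g.:
# print each empty matching file, then delete it
find . -name "$_files" -type f -empty -print -delete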
I need a way to tell if grep does find something, and ideally pass that return value to an if statement.
Let's say I have a tmp folder (current folder), in that there are several files and sub-folders. I want to search all files named abc for a pattern xyz. The search is assumed to be successful if it finds any occurrence of xyz (it does not matter how many times xyz is found). The search fails if no occurrence of xyz is found.
In bash, it can be done like this:
find . -name "abc" -exec grep "xyz" {} \;
That would show if xyz is found at all. But I'm not sure how to pass the result (successful or not) back to an if statement.
Any help would be appreciated.
You can try
x=`find . -name abc | xargs grep xyz`
echo "$x"
That is, x captures the output of grep, not an exit status. It is blank when there is no match, so you can test it with [ -n "$x" ].
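If you only care about the exit status rather than the matched text, you can also test the pipeline directly. A sketch (grep -q stays quiet and exits 0 when it finds a match; note that with many files xargs may run grep in batches, and a batch with no matches makes the overall status non-zero):
if find . -name abc | xargs grep -q xyz; then
echo "xyz found"
else
echo "xyz not found"
fi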
If you want to know that find finds some files abc and that at least one of them contains the string xyz, then you can probably use:
if find . -name 'abc' -type f -exec grep -q xyz {} +
then : All invocations of grep found at least one xyz and nothing else failed
else : One or more invocations of grep failed to find xyz or something else failed
fi
This relies on find returning an exit status for its own operations, and on it passing along a non-zero exit status from any of the command(s) it executes. The + at the end groups as many file names as find thinks reasonable into a single command line. You need quite a lot of file names (a large number of fairly long names) to make find run grep multiple times. On a Mac running Mac OS X 10.10.4, I got to about 3,000 files, each with about 32 characters in the name, for an argument list of just over 100 KiB, without grep being run multiple times. OTOH, when I had just under 8000 files, I got two runs of grep, with around 130 KiB of argument list for each.
Someone briefly left a comment asking whether the exit status of find is guaranteed. The answer is 'yes' — though I had to modify my description of the exit status a bit. Under the description of -exec, POSIX specifies:
If any invocation [of the 'utility', such as grep in this question] returns a non-zero value as exit status, the find utility shall return a non-zero exit status.
And under the general 'Exit status' it says:
The following exit values shall be returned:
0 — All path operands were traversed successfully.
>0 — An error occurred.
Thus, find will report success as long as none of its own operations fails and as long as none of the grep commands it invokes fails. Zero files found but no failures will be reported as success. (Failures might be lack of permission to search a directory, for example.)
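A quick way to observe the difference between the two -exec terminators; a sketch (grep -q keeps the output quiet so only the exit statuses matter):
# with '+', a failing grep makes find itself exit non-zero
find . -name 'abc' -type f -exec grep -q xyz {} +
echo "exit status with +: $?"
# with ';', each grep's status only drives the -exec test,
# so find normally exits 0 regardless of matches
find . -name 'abc' -type f -exec grep -q xyz {} \;
echo "exit status with ;: $?"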
find does not reliably propagate the exit status of a command run via -exec ... \;, but you can capture the command's output and test whether it is non-empty by placing it in an if statement:
if [[ -n $(find . -name "abc" -exec grep "xyz" {} \;) ]]
then
# do something
fi
The grep command can be given a -q option that will "quiet" its output. Grep will return a success or a failure on the basis of whether it found anything in the file you pointed it at.
If your shell is capable of it, rather than trying to parse the output of a find command, you might want to try using a for loop instead. For example in bash:
shopt -s globstar
cd /some/directory
for file in **/abc; do
grep -q "xyz" "$file" && echo "Found something in $file"
done
Is this what you're looking for?
You should be able to do this with just grep, using its --include option, I believe.
Like this:
if grep -r --include=abc -q xyz; then
: Found at least one match
else
: No matching lines found
fi
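--include takes a glob, so the same check can be pointed at a particular directory or widened to a name pattern. A sketch (the /some/dir path is just an example):
# recursive, quiet search restricted to files named abc under /some/dir
if grep -r --include='abc' -q xyz /some/dir; then
echo "found at least one match"
fi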
I need to make a Linux file search for a project, and it has to involve recursion. I got a bit of help writing this, so I only understand parts of it. Could someone explain what it means, and also help me get a user to input a keyword so the function can search for that keyword in the directories? Thank you
#!/bin/bash
lookIn() {
echo $2
for d in $(find $1 -type d)
do
if [ "$d" != "$1" ]
echo "looking in $d"
lookIn $d
fi
done
}
lookIn
You only need find; it traverses the entire directory tree by itself. Assuming $1 points to the folder you want to search:
read -p "Enter file name to find: " KEYWORD
find "$1" -type f -name "$KEYWORD"
If you want to find names that contain the keyword, then use:
find "$1" -type f -name "*${KEYWORD}*"
Try this first, then you can work it into your bigger script (whatever it does); a fuller sketch follows below.
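Putting the two pieces together, a minimal sketch of the whole script might be (defaulting to the current directory when no argument is given is my assumption, not part of your original):
#!/bin/bash
# search root: first argument, or the current directory if none given
start_dir="${1:-.}"
read -p "Enter file name to find: " KEYWORD
# -type f restricts matches to regular files; the quotes keep the
# pattern intact so find, not the shell, expands the wildcards
find "$start_dir" -type f -name "*${KEYWORD}*"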
TL;DR
Don't use recursion. It may work, but it's more work than necessary; Bash doesn't have tail-call optimization, and it's not a functional programming language. Just use find with the right set of arguments.
Parameterized Bash Function to Call Find
find_name() {
starting_path="$1"
filename="$2"
find "$1" -name "$2" 2>&-
}
Example Output
Make sure you quote properly, especially if using globbing characters like * or ?. For example:
$ find_name /etc 'pass?d'
/etc/passwd
/etc/pam.d/passwd
You don't really need find for recursive file search. grep -r (recursive) will work fine.
See below script:
#!/bin/bash
# change dir to base dir where files are stored for search
cd /base/search/dir
# accept input from user
read -p "Enter Search Keyword: " kw
# perform case insensitive recursive search and list matched file
grep -irl "$kw" *
I just downloaded about 600 files from my server and need to remove the last 11 characters from the filename (not including the extension). I use Ubuntu and I am searching for a command to achieve this.
Some examples are as follows:
aarondyne_kh2_13thstruggle_or_1250556383.mus should be renamed to aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 should be renamed to aarondyne_kh2_darknessofunknow.mp3
It seems that some duplicates might exist after I do this, but if the command fails to complete and tells me what the duplicates would be, I can always remove those manually.
Try using the rename command. It allows you to rename files based on a regular expression:
The following line should work out for you:
rename 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
The following changes will occur:
aarondyne_kh2_13thstruggle_or_1250556383.mus renamed as aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 renamed as aarondyne_kh2_darknessofunknow.mp3
You can check the actions rename will take by specifying the -n flag, like this:
rename -n 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
For more information on how to use rename simply open the manpage via: man rename
Not the prettiest, but very simple:
echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!'
You'll still need to write the rest of the script, but it's pretty simple.
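A sketch of "the rest of the script", looping over the files and applying that sed expression (the duplicate check mirrors what the question asked for):
for filename in *; do
new=$(echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!')
# skip files the expression doesn't change
[ "$new" = "$filename" ] && continue
# refuse to overwrite, and report the would-be duplicate
[ -e "$new" ] && { echo "duplicate: $filename -> $new" >&2; continue; }
mv -- "$filename" "$new"
done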
find . -type f -exec sh -c 'mv {} `echo -n {} | sed -E -e "s/[^/]{11}(\\.[^\\.]+)?$/\\1/"`' ";"
One way to go: get a list of your files, one per line (with ls, maybe), then pipe it through awk:
ls ... | awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
this will print the mv a b command
e.g.
kent$ echo "aarondyne_kh2_13thstruggle_or_1250556383.mus"|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
mv aarondyne_kh2_13thstruggle_or_1250556383.mus aarondyne_kh2_13thstruggle_or.mus
to execute, just pipe it to |sh
I assume there are no spaces in your filenames.
This script assumes each file has just one extension. It would, for instance, rename "foo.something.mus" to "foo.mus". To keep all extensions, remove one hash mark (#) from the first line of the loop body. It also assumes that the base of each filename has at least 12 characters, so that removing 11 doesn't leave you with an empty name.
for f in *; do
ext=${f##*.}
base=${f%.$ext}
new_f=${base%???????????}.$ext
if [ -f "$new_f" ]; then
echo "Will not rename $f, $new_f already exists" >&2
else
mv "$f" "$new_f"
fi
done
I'm trying to find all files with a specific extension in a directory and its subdirectories using bash (latest Ubuntu LTS release).
This is what's written in a script file:
#!/bin/bash
directory="/home/flip/Desktop"
suffix="in"
browsefolders ()
for i in "$1"/*;
do
echo "dir :$directory"
echo "filename: $i"
# echo ${i#*.}
extension=`echo "$i" | cut -d'.' -f2`
echo "Erweiterung $extension"
if [ -f "$i" ]; then
if [ $extension == $suffix ]; then
echo "$i ends with $in"
else
echo "$i does NOT end with $in"
fi
elif [ -d "$i" ]; then
browsefolders "$i"
fi
done
}
browsefolders "$directory"
Unfortunately, when I start this script in terminal, it says:
[: 29: in: unexpected operator
(with $extension instead of 'in')
What's going on here, where's the error?
But this:
find "$directory" -type f -name "*.in"
is a bit shorter than that whole thing (and safer - it deals with whitespace in filenames and directory names).
Your script is probably failing for entries that don't have a . in their name, making $extension empty.
find {directory} -type f -name '*.extension'
Example: To find all csv files in the current directory and its sub-directories, use:
find . -type f -name '*.csv'
The syntax I use is a bit different from what @Matt suggested:
find $directory -type f -name \*.in
(it's one less keystroke).
Without using find (note that this breaks on filenames containing whitespace, since awk prints only the second field):
du -a $directory | awk '{print $2}' | grep '\.in$'
Though the find command is useful here, the shell itself provides options to achieve this without any third-party tools. Bash has extended glob support with which you can get the file names under recursive paths that match the extensions you want.
The extended option is extglob, which needs to be set using the shopt builtin as below (options are enabled with the -s flag and disabled with the -u flag). Additionally, a couple more options are useful here: nullglob, with which an unmatched glob is swept away entirely, replaced with a set of zero words, and globstar, which allows recursing through all the directories.
shopt -s extglob nullglob globstar
Now all you need to do is form the glob expression to include files of a certain extension, which you can do as below. We store the glob results in an array because, when the array is quoted properly and expanded, filenames with special characters remain intact and do not get broken by the shell's word-splitting.
For example to list all the *.csv files in the recursive paths
fileList=(**/*.csv)
The option ** is to recurse through the sub-folders and *.csv is glob expansion to include any file of the extensions mentioned. Now for printing the actual files, just do
printf '%s\n' "${fileList[@]}"
Using an array and doing a proper quoted expansion is the right way when used in shell scripts, but for interactive use, you could simply use ls with the glob expression as
ls -1 -- **/*.csv
This can easily be expanded to match multiple extensions (similar to OR-ing multiple -name clauses in a find command). For example, to get all image files recursively, i.e. those with the extensions *.gif, *.png and *.jpg, all you need to do is
ls -1 -- **/+(*.jpg|*.gif|*.png)
This can also be expanded to negate results. With the same syntax, one can use the glob to exclude files of a certain type. Assuming you want to exclude file names with the extensions above, you could do
excludeResults=(**/!(*.jpg|*.gif|*.png))
printf '%s\n' "${excludeResults[@]}"
The construct !() is a negation that matches anything except the patterns listed inside it, and | is an alternation operator, just as in extended regular expressions, OR-ing the globs together.
Note that this extended glob support is not available in the POSIX Bourne shell; it is specific to recent versions of bash. So if you care about scripts being portable across POSIX shells and bash, this option wouldn't be right.
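If a script relies on these options, one defensive pattern is to check that it is actually running under bash before enabling them; a sketch:
# bail out early when not running under bash, since extglob,
# nullglob and globstar are bash-specific
if [ -z "$BASH_VERSION" ]; then
echo "this script requires bash" >&2
exit 1
fi
shopt -s extglob nullglob globstar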
find "$PWD" -type f -name "*.in"
There's a { missing after browsefolders ()
All $in should be $suffix
The line with cut gets you only the middle part of front.middle.extension. You should read up on ${varname%%pattern} and friends in your shell manual.
I assume you do this as an exercise in shell scripting, otherwise the find solution already proposed is the way to go.
To check for proper shell syntax, without running a script, use sh -n scriptname.
To find all the pom.xml files in your current directory and print them, you can use:
find . -name 'pom.xml' -print
find $directory -type f -name "*.in"|grep $substring
for file in "${LOCATION_VAR}"/*.zip
do
echo "$file"
done