I'm trying to go through the current directory and all subdirectories, and add some annotations to each file that ends in .sql.
Here's a snippet of the code:
HEADER="--SQL HEADER"
for f in 'find . -name *.sql';
do
echo $f
echo -e $HEADER > $f.tmp;
FNAME=${f//\//_/};
echo -e "\n\n--MORE ANNOTATIONS ${FNAME%.*}:1" >> $f.tmp;
cat $f >> $f.tmp;
mv $f.tmp $f;
rm $f.tmp
done;
I'm a beginner at bash, so I think some of the errors I'm getting might be due to the find statement combined with the loop.
This is the error I get:
find . -name X.sql A.sql W.sql E.sql S.sql
./annotate.sh: line 6: $f.tmp: ambiguous redirect
./annotate.sh: line 8: $f.tmp: ambiguous redirect
./annotate.sh: line 9: $f.tmp: ambiguous redirect
mv: invalid option -- n
Try `mv --help' for more information.
rm: invalid option -- n
Try `rm --help' for more information.
Any help would be greatly appreciated =)
Here's the problem. Your "echo" gives it away:
echo $f
outputs
find . -name X.sql A.sql W.sql E.sql S.sql
I think the problem is you have straight single quotes ('') in the find command, instead of backquotes (``). So it's not really running find, but simply expanding the wildcards.
You may have to quote the wildcard so it gets passed to find instead of evaluated by the shell:
for f in `find . -name \*.sql`;
However, there are several problems in your script, which you should address if you want to use it more than once. See ormaaj's answer.
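For what it's worth, the $( ) form of command substitution does the same job and is much harder to misread than backquotes. This is only the quoting fix, still subject to the word-splitting problems described in the next answer:

for f in $(find . -name '*.sql'); do
  echo "$f"
done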
The problem, as already pointed out, is that find isn't actually being executed. However, this pattern is very wrong anyway. Iterating with a for loop over the output of a command substitution doesn't work reliably: splitting that output into words requires leaving the expansion unquoted, and that is a problem even if pathname expansion is disabled, because filenames can contain newlines.
Preferably, use -exec. First write this script to a file and chmod u+x scriptname:
#!/usr/bin/env bash
header="--SQL HEADER"
for f in "$#"; do
echo "$f" >&2
fname=${f//\//_}
cat - "$f" <<EOF >"$f.tmp"
$header

--MORE ANNOTATIONS ${fname%.*}:1
EOF
mv "$f.tmp" "$f"
done
Then run find like this:
find . -name '*.sql' -exec scriptname {} +
Alternatively (assuming a recent version of Bash), use globstar and skip find entirely (ksh has a similar feature if you prefer). This may be slower depending upon the job, since the shell must pre-generate the entire list of files.
#!/usr/bin/env bash
shopt -s globstar
for f in ./**/*.sql; do
...
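Filled in with the same body, the globstar version might look like this (a sketch; nullglob is added so the loop simply doesn't run when nothing matches):

#!/usr/bin/env bash
shopt -s globstar nullglob
header="--SQL HEADER"
for f in ./**/*.sql; do
  fname=${f//\//_}
  # prepend the heredoc (stdin, "-") to the file's own contents
  cat - "$f" <<EOF >"$f.tmp"
$header

--MORE ANNOTATIONS ${fname%.*}:1
EOF
  mv "$f.tmp" "$f"
done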
Alternatively, if you have Bash 4 and a system with the necessary GNU utilities, use -print0.
find . -name '*.sql' -print0 | while IFS= read -rd '' f; do
# <body of the above for loop here>
done
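One caveat with the pipe: the while loop runs in a subshell, so any variables you set inside it are lost when it finishes. Feeding find through process substitution instead keeps the loop in the current shell:

while IFS= read -rd '' f; do
  # <body of the above for loop here>; variables set here survive the loop
done < <(find . -name '*.sql' -print0)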
See: http://mywiki.wooledge.org/UsingFind
This might be a very simple thing for a shell scripting programmer, but I'm pretty new to it. I was trying to execute the below command in a shell script and save the output into a variable:
inputfile=$(ls -ltr *.{PDF,pdf} | head -1 | awk '{print $9}')
The command works fine when I run it from the terminal, but fails when executed through a shell script (sh). Why does the command fail? Does it mean that shell scripts don't support the command, or am I doing it wrong? Also, how do I know whether a command will work in a shell script or not?
To give you a glimpse of my requirement: I was trying to get the oldest file from a particular directory (I also want to make sure upper-case and lower-case extensions are handled). Is there any other way to do this?
The above command will work correctly only if BOTH *.pdf and *.PDF files are present in the directory you are currently in.
If you would like to execute it in a directory containing only one of those, consider using e.g.:
inputfiles=$(find . -maxdepth 1 -type f \( -name "*.pdf" -or -name "*.PDF" \) | xargs ls -1tr | head -1 )
NOTE: The above command doesn't work with file names containing newlines, or with a long list of found files.
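With a GNU userland you can make it NUL-safe instead (a sketch; the -z/--zero-terminated flags are GNU extensions to sort, head, and cut):

inputfile=$(find . -maxdepth 1 -type f -iname '*.pdf' -printf '%T@ %p\0' \
  | sort -z -n \
  | head -z -n 1 \
  | cut -z -d' ' -f2- \
  | tr -d '\0')   # drop the trailing NUL so the command substitution is clean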
Parsing ls is always a bad idea. You need another strategy.
How about making a function that gives you the oldest file among those given as arguments? The following works in Bash (adapt to your needs):
get_oldest_file() {
# get oldest file among files given as parameters
# return is in variable get_oldest_file_ret
local oldest f
for f do
[[ -e $f ]] && [[ ! $oldest || $f -ot $oldest ]] && oldest=$f
done
get_oldest_file_ret=$oldest
}
Then just call as:
get_oldest_file *.{PDF,pdf}
echo "oldest file is: $get_oldest_file_ret"
Now, you probably don't want to use brace expansions like this at all. In fact, you very likely want to use the shell options nocaseglob and nullglob:
shopt -s nocaseglob nullglob
get_oldest_file *.pdf
echo "oldest file is: $get_oldest_file_ret"
If you're using a POSIX shell, it's going to be a bit trickier to have the equivalent of nullglob and nocaseglob.
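One approximation in POSIX sh is character-class globs plus an existence test (a sketch; note that the -ot test is a widespread extension rather than strict POSIX):

get_oldest_file() {
  oldest=
  for f do
    [ -e "$f" ] || continue              # skip the unexpanded pattern when nothing matches
    if [ -z "$oldest" ] || [ "$f" -ot "$oldest" ]; then
      oldest=$f
    fi
  done
}

get_oldest_file *.[Pp][Dd][Ff]           # matches .pdf, .PDF, .Pdf, ...
echo "oldest file is: $oldest"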
Is perl an option? It's ubiquitous on Unix.
I would suggest:
perl -e 'print ((sort { -M $b <=> -M $a } glob ( "*.{pdf,PDF}" ))[0]);';
Which:
uses glob to fetch all files matching the pattern.
sorts them using -M, which is the modification age in days; the descending sort puts the oldest file (largest -M) first.
fetches the first element ([0]) off the sort.
prints that.
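Captured into the variable from the question, that becomes:

inputfile=$(perl -e 'print ((sort { -M $b <=> -M $a } glob ( "*.{pdf,PDF}" ))[0]);')
echo "oldest file is: $inputfile"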
As @gniourf_gniourf says, parsing ls is a bad idea: it relies on unquoted globs and generally doesn't account for funny characters in file names.
find is your friend:
#!/bin/sh
get_oldest_pdf() {
#
# echo path of oldest *.pdf (case-insensitive) file in current directory
#
find . -maxdepth 1 -mindepth 1 -iname "*.pdf" -printf '%T@ %p\n' \
| sort -n \
| head -1 \
| cut -d' ' -f2-
}
whatever=$(get_oldest_pdf)
Notes:
find has numerous ways of formatting the output, including
things like access time and/or write time. I used '%T@ %p\n',
where %T@ is the last write time in UNIX epoch format, including the fractional part.
The timestamp never contains a space, so it's safe to use as a separator.
The numeric sort and head pick the oldest item by time,
and cut removes the timestamp from the output.
In my opinion the pipe notation, with the help of \, is much easier to read and maintain.
The shell code should run on any POSIX shell, though find's -printf is a GNU extension.
You could easily adjust the function to parametrize the pattern,
the time used (access/write), the search depth, or the starting directory, as sketched below.
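A parametrized variant might look like this (a sketch; get_oldest is a hypothetical name):

get_oldest() {
  # $1: case-insensitive name pattern, $2: starting directory (defaults to .)
  find "${2:-.}" -maxdepth 1 -mindepth 1 -iname "$1" -printf '%T@ %p\n' \
    | sort -n \
    | head -1 \
    | cut -d' ' -f2-
}

oldest_song=$(get_oldest '*.mp3' ~/Music)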
I wrote a function in a Bash shell script to search a Linux tree for filenames matching a pattern containing a regular expression, with colour highlighting:
function ggrep {
LS_="ls --color {}|sed s~./~~"
[ -n "$1" -a "$1" != "*" ] && NAME_="-iname $1" || NAME_=
[ -n "$2" ] && EXEC_="egrep -q \"$2\" \"{}\" && $LS_ && egrep -n \"$2\" --color=always \"{}\"|sed s~^B~\ B~" || EXEC_=$LS_
FIND_="find . -type f $NAME_ -exec sh -c \"$EXEC_\" \\;"
echo -e \\e[7m $FIND_ \\e[0m
$FIND_
}
e.g. ggrep a* lists all files starting with a under the current directory tree,
and ggrep a* x lists files starting with a and containing x.
When I run it, I get:
find: missing argument to `-exec'
even though I get the correct output when I copy and paste the line output by "echo" into the terminal. Can anyone please tell me what I've done wrong?
Secondly, it would be great if ggrep * x listed all files containing x, but * expands to a list of filenames and I need to use \* or '*' instead. Is there a way around this? Thanks!
Terminate the find command with \; instead of \\; .
find . -type f $NAME_ -exec sh -c \"$EXEC_\" \;
eval $FIND_
in the last line of the function body works fine for me.
Expansions in Bash are generally not recursive, so if you load a command from a variable, you should always use eval to force the expanded variable to be reprocessed as if it were fresh input. Otherwise, quotes are not handled properly within a string that has already been expanded.
As for your second problem, I think there is no satisfactory solution. The shell will always expand * before passing it to anything controlled by you. You can disable this expansion, but that is a global setting. That said, this expansion could actually work in your function's favor; consider rewriting the function in a way that takes advantage of it. (I did not analyze whether the current version is close to that or not.)
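A common way to sidestep eval entirely is to build the command as an array instead of a string; expanding "${cmd[@]}" preserves every argument exactly as it was stored. A minimal sketch of the idea (gfind is a hypothetical helper, not a full rewrite of ggrep):

gfind() {
  # build the command as an array instead of a string
  local cmd=(find . -type f)
  if [ -n "$1" ] && [ "$1" != "*" ]; then
    cmd+=(-iname "$1")                  # each argument stays one word; quoting is preserved
  fi
  printf '\e[7m%s\e[0m\n' "${cmd[*]}"   # display the command, like the echo in ggrep
  "${cmd[@]}"
}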
In Linux shell scripting I am trying to store the output of find in an array, as below:
#!/bin/bash
arr=($(find . -type -f))
but it gives an error saying -type should contain only one character. Can anybody tell me where the issue is?
Thanks
If you are using bash 4, the readarray command can be used along with process substitution.
readarray -t arr < <(find . -type f)
Properly supporting all file names, including those that contain newlines, requires a bit more work, along with a version of find that supports -print0:
while read -d '' -r; do
arr+=( "$REPLY" )
done < <(find . -type f -print0)
I suggest the following script:
#!/bin/bash
listoffiles=$(find . -type f)
nfiles=$(echo "${listoffiles}" | wc -l)
unset myarray
for i in $(seq 1 ${nfiles}) ; do
myarray[$((i-1))]=$(echo "${listoffiles}" | sed -n $i'{p;q}')
done
This is because you cannot rely on Bash's automatic array instantiation through the myarr=( one two three ) syntax: it treats all whitespace (including spaces) inside the parentheses the same way. So you have to handle the resulting multiline variable listoffiles somewhat manually, which is what I do in the above script.
echo without the -n option prints a trailing newline at the very end, which is fine in our case because the variable itself doesn't end with one (command substitution strips it; you may check this with echo -n "${listoffiles}").
And I use sed to extract the relevant i-th line, with $i being interpolated by the shell before being handed to sed as the first part of sed's own script.
I just downloaded about 600 files from my server and need to remove the last 11 characters from each filename (not including the extension). I use Ubuntu and I am searching for a command to achieve this.
Some examples are as follows:
aarondyne_kh2_13thstruggle_or_1250556383.mus should be renamed to aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 should be renamed to aarondyne_kh2_darknessofunknow.mp3
It seems that some duplicates might exist after I do this, but if the command fails to complete and tells me what the duplicates would be, I can always remove those manually.
Try using the rename command. It allows you to rename files based on a regular expression.
The following line should work for you:
rename 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
The following changes will occur:
aarondyne_kh2_13thstruggle_or_1250556383.mus renamed as aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 renamed as aarondyne_kh2_darknessofunknow.mp3
You can check the actions rename will take by specifying the -n flag, like this:
rename -n 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
For more information on how to use rename simply open the manpage via: man rename
Not the prettiest, but very simple:
echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!'
You'll still need to write the rest of the script, but it's pretty simple.
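That rest could be a loop along these lines (a sketch; it skips names the pattern doesn't match and refuses to overwrite existing files):

for filename in *; do
  new=$(echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!')
  [ "$new" = "$filename" ] && continue   # pattern didn't match; leave the file alone
  if [ -e "$new" ]; then
    echo "not renaming $filename: $new already exists" >&2
  else
    mv -- "$filename" "$new"
  fi
done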
find . -type f -exec sh -c 'mv {} `echo -n {} | sed -E -e "s/[^/]{11}(\\.[^\\.]+)?$/\\1/"`' ";"
One way to go:
get a list of your files, one per line (from ls, for example), then:
ls....|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
this will print the mv a b command for each file
e.g.
kent$ echo "aarondyne_kh2_13thstruggle_or_1250556383.mus"|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
mv aarondyne_kh2_13thstruggle_or_1250556383.mus aarondyne_kh2_13thstruggle_or.mus
To execute, just pipe it to sh.
I assume there are no spaces in your filenames.
This script assumes each file has just one extension. It would, for instance, rename "foo.something.mus" to "foo.mus". To keep all extensions, remove one hash mark (#) from the first line of the loop body. It also assumes that the base of each filename has at least 12 characters, so that removing 11 doesn't leave you with an empty name.
for f in *; do
ext=${f##*.}
base=${f%.$ext}
new_f=${base%???????????}.$ext
if [ -f "$new_f" ]; then
echo "Will not rename $f, $new_f already exists" >&2
else
mv "$f" "$new_f"
fi
done
I'm trying to find all files with a specific extension in a directory and its subdirectories with bash (latest Ubuntu LTS release).
This is what's written in a script file:
#!/bin/bash
directory="/home/flip/Desktop"
suffix="in"
browsefolders ()
for i in "$1"/*;
do
echo "dir :$directory"
echo "filename: $i"
# echo ${i#*.}
extension=`echo "$i" | cut -d'.' -f2`
echo "Erweiterung $extension"
if [ -f "$i" ]; then
if [ $extension == $suffix ]; then
echo "$i ends with $in"
else
echo "$i does NOT end with $in"
fi
elif [ -d "$i" ]; then
browsefolders "$i"
fi
done
}
browsefolders "$directory"
Unfortunately, when I start this script in terminal, it says:
[: 29: in: unexpected operator
(with $extension instead of 'in')
What's going on here, where's the error?
But this:
find "$directory" -type f -name "*.in"
is a bit shorter than that whole thing (and safer - deals with whitespace in filenames and directory names).
Your script is probably failing for entries that don't have a . in their name, making $extension empty.
find {directory} -type f -name '*.extension'
Example: To find all csv files in the current directory and its sub-directories, use:
find . -type f -name '*.csv'
The syntax I use is a bit different than what @Matt suggested:
find $directory -type f -name \*.in
(it's one less keystroke).
Without using find:
du -a $directory | awk '{print $2}' | grep '\.in$'
Though the find command can be useful here, the shell itself provides options to achieve this requirement without any third-party tools. The bash shell provides an extended glob support option with which you can get the file names under recursive paths that match the extensions you want.
The extended option is extglob, which needs to be set using shopt as below. Options are enabled with the -s flag and disabled with the -u flag. Additionally you can use a couple more options: nullglob, with which an unmatched glob is swept away entirely, replaced with a set of zero words; and globstar, which allows recursing through all the directories.
shopt -s extglob nullglob globstar
Now all you need to do is form the glob expression to include the files of a certain extension, which you can do as below. We use an array to populate the glob results because, when quoted properly and expanded, filenames with special characters remain intact and do not get broken by the shell's word-splitting.
For example, to list all the *.csv files in the recursive paths:
fileList=(**/*.csv)
The ** pattern recurses through the sub-folders and *.csv is the glob expansion to include any file with the extension mentioned. Now, to print the actual files, just do:
printf '%s\n' "${fileList[#]}"
Using an array and doing a properly quoted expansion is the right way in shell scripts, but for interactive use you can simply use ls with the glob expression:
ls -1 -- **/*.csv
This could very well be expanded to match multiple extensions (similar to adding multiple flags in a find command). For example, to get all recursive image files, i.e. those with extensions *.gif, *.png, and *.jpg, all you need to do is:
ls -1 -- **/+(*.jpg|*.gif|*.png)
This could also be expanded to negate results. With the same syntax, one could use the results of the glob to exclude files of certain types. Assume you want to exclude file names with the extensions above; you could do:
excludeResults=()
excludeResults=(**/!(*.jpg|*.gif|*.png))
printf '%s\n' "${excludeResults[#]}"
The construct !() is a negation operation that excludes any of the file extensions listed inside it, and | is an alternation operator, just as in the Extended Regular Expressions library, to do an OR match of the globs.
Note that this extended glob support is not available in the POSIX Bourne shell; it is specific to recent versions of bash. So if you are considering portability of your scripts across POSIX and bash shells, this option wouldn't be right.
find "$PWD" -type f -name "*.in"
There's a { missing after browsefolders ()
All $in should be $suffix
The line with cut gets you only the middle part of front.middle.extension. You should read up on ${varname%%pattern} and friends in your shell manual.
I assume you are doing this as an exercise in shell scripting; otherwise, the find solution already proposed is the way to go.
To check for proper shell syntax, without running a script, use sh -n scriptname.
To find all the pom.xml files in your current directory and print them, you can use:
find . -name 'pom.xml' -print
find $directory -type f -name "*.in"|grep $substring
for file in "${LOCATION_VAR}"/*.zip
do
echo "$file"
done
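One caveat: if there are no .zip files, the loop body runs once with the literal unexpanded pattern. Guard against that with an existence test (or shopt -s nullglob in bash):

for file in "${LOCATION_VAR}"/*.zip
do
  [ -e "$file" ] || continue   # skip the literal pattern when nothing matched
  echo "$file"
done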