Syntax error while writing a Unix shell script (Bash) [duplicate] - linux

Say I want to copy the contents of a directory excluding files and folders whose names contain the word 'Music'.
cp [exclude-matches] *Music* /target_directory
What should go in place of [exclude-matches] to accomplish this?

In Bash you can do it by enabling the extglob option, like this (replace ls with cp and add the target directory, of course):
~/foobar> shopt extglob
extglob off
~/foobar> ls
abar afoo bbar bfoo
~/foobar> ls !(b*)
-bash: !: event not found
~/foobar> shopt -s extglob # Enables extglob
~/foobar> ls !(b*)
abar afoo
~/foobar> ls !(a*)
bbar bfoo
~/foobar> ls !(*foo)
abar bbar
You can later disable extglob with
shopt -u extglob

The extglob shell option gives you more powerful pattern matching in the command line.
You turn it on with shopt -s extglob, and turn it off with shopt -u extglob.
In your example, you would initially do:
$ shopt -s extglob
$ cp !(*Music*) /target_directory
The full set of available extended globbing operators is as follows (excerpt from man bash):
If the extglob shell option is enabled using the shopt builtin, several extended
pattern matching operators are recognized. A pattern-list is a list of one or more patterns separated by a |. Composite patterns may be formed using one or more of the following sub-patterns:
?(pattern-list)
Matches zero or one occurrence of the given patterns
*(pattern-list)
Matches zero or more occurrences of the given patterns
+(pattern-list)
Matches one or more occurrences of the given patterns
@(pattern-list)
Matches one of the given patterns
!(pattern-list)
Matches anything except one of the given patterns
So, for example, if you wanted to list all the files in the current directory that are not .c or .h files, you would do:
$ ls -d !(*@(.c|.h))
Of course, normal shell globbing works, so the last example could also be written as:
$ ls -d !(*.[ch])
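To get a feel for the other operators, here is a hypothetical session (the file names are invented for illustration):
$ shopt -s extglob
$ ls
1.php  12.php  a.php  ab.c  ab.h
$ ls +([0-9]).php    # one or more digits before .php
1.php  12.php
$ ls ?(a)b.@(c|h)    # optional leading a, then b.c or b.h
ab.c  ab.h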

Not in bash (that I know of), but:
cp `ls | grep -v Music` /target_directory
I know this is not exactly what you were looking for, but it will solve your example.

If you want to avoid the cost of spawning a new process for each file with the -exec action, I believe you can do better with xargs. I think the following is a more efficient alternative to
find foo -type f ! -name '*Music*' -exec cp {} bar \; # new proc for each exec
find . -maxdepth 1 -name '*Music*' -prune -o -print0 | xargs -0 -I {} cp {} dest/

A trick I haven't seen on here yet that doesn't use extglob, find, or grep is to treat two file lists as sets and "diff" them using comm:
comm -23 <(ls) <(ls *Music*)
comm is preferable to diff because its output doesn't include extra cruft.
This returns all elements of set 1, ls, that are not also in set 2, ls *Music*. This requires both sets to be in sorted order to work properly. No problem for ls and glob expansion, but if you're using something like find, be sure to invoke sort.
comm -23 <(find . | sort) <(find . | grep -i '\.jpg$' | sort)
Potentially useful.

You can also use a pretty simple for loop:
for f in `find . -not -name "*Music*"`
do
cp "$f" /target/dir
done
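A caveat: the backtick form above splits on whitespace, so it breaks on file names containing spaces. A safer sketch (assuming GNU find's -print0 and Bash) would be:
find . -not -name "*Music*" -print0 | while IFS= read -rd '' f
do
cp "$f" /target/dir
done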

In bash, an alternative to shopt -s extglob is the GLOBIGNORE variable. It's not really better, but I find it easier to remember.
An example that may be what the original poster wanted:
GLOBIGNORE="*techno*"; cp *Music* /only_good_music/
When done, unset GLOBIGNORE to be able to rm *techno* in the source directory.
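A hypothetical session showing the effect (file names invented for illustration):
$ ls
good.mp3  techno1.mp3  techno2.mp3
$ GLOBIGNORE="*techno*"
$ echo *
good.mp3
$ unset GLOBIGNORE
$ echo *
good.mp3 techno1.mp3 techno2.mp3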

My personal preference is to use grep and a while loop. This allows you to write powerful yet readable scripts, ensuring that you end up doing exactly what you want. Plus, by using an echo command you can perform a dry run before carrying out the actual operation. For example:
ls | grep -v "Music" | while read filename
do
echo "$filename"
done
will print out the files that you will end up copying. If the list is correct the next step is to simply replace the echo command with the copy command as follows:
ls | grep -v "Music" | while read filename
do
cp "$filename" /target_directory
done

One solution for this can be found with find.
$ mkdir foo bar
$ touch foo/a.txt foo/Music.txt
$ find foo -type f ! -name '*Music*' -exec cp {} bar \;
$ ls bar
a.txt
find has quite a few options; you can get pretty specific about what you include and exclude.
Edit: Adam in the comments noted that this is recursive. find's -mindepth and -maxdepth options can be useful in controlling this.
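For example, to make the copy non-recursive, a sketch restricted to the top level would be:
find foo -maxdepth 1 -type f ! -name '*Music*' -exec cp {} bar \;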

The following lists all *.txt files in /some/dir, except those whose names begin with a number.
This works in bash, dash, zsh and all other POSIX compatible shells.
for FILE in /some/dir/*.txt; do # for each *.txt file
case "${FILE##*/}" in # if file basename...
[0-9]*) continue ;; # starts with digit: skip
esac
## otherwise, do stuff with $FILE here
done
In line one the pattern /some/dir/*.txt will cause the for loop to iterate over all files in /some/dir whose name end with .txt.
In line two a case statement is used to weed out undesired files. – The ${FILE##*/} expression strips off any leading dir name component from the filename (here /some/dir/) so that patterns can match against only the basename of the file. (If you're only weeding out filenames based on suffixes, you can shorten this to $FILE instead.)
In line three, all files matching the case pattern [0-9]* will be skipped (the continue statement jumps to the next iteration of the for loop). – If you want to, you can do something more interesting here, e.g. skip all files which do not start with a letter (a–z) using [!a-z]*, or use multiple patterns to skip several kinds of filenames, e.g. [0-9]*|*.bak to skip both files starting with a digit and .bak files.
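A sketch of that multi-pattern variant:
case "${FILE##*/}" in
[0-9]*|*.bak) continue ;; # skip names starting with a digit, and .bak files
esac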

This is zsh syntax (enable it with setopt extendedglob); it would copy everything except an entry named exactly 'Music':
cp -a ^Music /target
And these exclude names matching *?Music or Music?*, respectively:
cp -a ^*?Music /target
cp -a ^Music?* /target

Related

Find files recursively and rename based on their full path

I'm looking to search for files of a specific name, modify the name to the full path and then copy the results to another folder.
Is it possible to update each find result with the full path as the file name; i.e.
./folder/subfolder/my-file.csv
becomes
folder_subfolder_my-file.csv
I am listing the files using the following and would like to script it.
find . -name my-file.csv -exec ls {} \;
Since you're using bash, you can take advantage of globstar and use a for loop:
shopt -s globstar # set globstar option
for csv in **/my-file.csv; do
echo "$csv" "${csv//\//_}"
done
shopt -u globstar # unset the option if you don't want it any more
With globstar enabled, ** does a recursive search (similar to the basic functionality of find).
"${csv//\//_}" is an example of ${var//match/replace}, which does a global replacement of all instances of match (here an escaped /) with replace.
If you're happy with the output, then change the echo to mv.
Just to demonstrate how to do this with find:
find . -type f -exec bash -c '
for file; do
f=${file#./}
cp "$file" "./${f//\//_}"
done' _ {} +
The Bash parameter expansion ${f//x/y} replaces x with y throughout. Because find prefixes each found file with the path where it was found (here, ./), we trim that off in order to avoid producing names like ._dir_file. And because the slash is used in the parameter expansion syntax itself, we need to backslash the slash we want the shell to interpret literally. Finally, because this parameter expansion syntax is a Bash-only extension, we use bash rather than sh.
Obviously, if you want to rename rather than copy, replace cp with mv.
If your find does not support -exec ... + this needs to be refactored somewhat (probably to use xargs); but it should be supported on any reasonably modern platform.
With perl's rename command ...
$ prename
Usage: rename [-v] [-n] [-f] perlexpr [filenames]
... you can rename multiple files by applying a regular expression. rename also accepts file names via stdin:
find ... | rename -n 's#/#_#g'
Check the results and if they are fine, remove -n.

No such file or directory when piping. Each command works separately, but not when piping

I have 2 folders: folder_a & folder_b. In each of these folders there are a bunch of files. I am trying to use sed to move all of these files out of these folders and into my current working directory.
My folder structure looks like this:
mytest:
a:
1.txt
2.txt
3.txt
b:
4.txt
5.txt
The command I am trying to use is:
find . -type d ! -iname '*.*' # find all folders other than root
| sed -r 's/.*/&\/*/' # add '/*' to each of the arguments
| sed -r 'p;s/.*/./' # output: a/* . b/* .
| xargs -n 2 mv # should be creating two commands: 'mv a/* .' and 'mv b/* .'
Unfortunately I get an error:
mv: cannot stat './aaa/*': No such file or directory
I also get the same error when I try this other strategy (using ls instead of mv):
for dir in */; do
ls $dir;
done;
Even if I use sed to replace the spaces in each directory name with '\ ', or surround the directory names with quotes I get the same error.
I'm not sure if these 2 examples are related in my misunderstanding of bash but they both seem to demonstrate my ignorance of how bash translates the output from one command into the input of another command.
Can anyone shed some light on this?
Update: Completely rewritten.
As @EtanReisner and @melpomene have noted, mv */* . or, more specifically, mv a/* b/* . is the most straightforward solution, but you state that this is in part a learning exercise, so the remainder of the answer shows an efficient find-based solution and explains the problem with the original command.
An efficient find-based solution
Generally, if feasible, it's best and most efficient to let find itself do the work, without involving additional tools; find's -exec action is like a built-in xargs, with {} representing the path at hand (with terminator \;) / all paths (with +):
find . -type f -exec echo mv -t . {} +
To be safe, this will just print the mv commands that would be executed; remove the echo to actually execute them.
This will execute a single[1] mv command to which all matching files are passed, and -t . moves them all to the current dir.
[1] If the resulting command line is too long (which is unlikely), it is split up into multiple commands, just as with xargs.
Operating on files (-type f) bypasses the need for globbing, as find will then enumerate all files for you (it also bypasses the need to exclude . explicitly).
Note that this solution works on entire subtrees, not just (immediate) subdirectories.
It's tempting to consider turning on Bash 4's globstar option and using mv */** ., but that won't work, because it will attempt to move directories as well, not just the files in them.
A caveat re -exec with +: it only works if {} - the placeholder for all paths - is the token immediately before the +.
Since you're on Linux, we can satisfy this condition by specifying the target folder for mv with option -t before the {}; on BSD-based systems such as OSX, you could not do that, because mv doesn't support -t there, so you'd have to use terminator \;, which means that mv is called once for every path, which is obviously much slower.
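On such BSD-based systems, a sketch of a workaround is to let a small inline shell supply the target, so that {} can still be the last token before the +:
find . -type f -exec sh -c 'mv "$@" .' _ {} +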
Why your command didn't work:
As @EtanReisner points out in a comment, xargs invokes the command specified without (implicitly) involving a shell, so globbing won't work; you can verify this with the following command:
echo '*' | xargs echo # -> '*' - NO globbing
If we leave the globbing issue aside, additional work would have been necessary to make your xargs command work correctly with folder names with embedded spaces (or other shell metacharacters):
find . -mindepth 1 -type d |
sed -r "s/.*/'&'\/* ./" | # -> '<input-path>'/* . (including single-quotes)
xargs -n 2 echo mv # NOTE: still won't work due to lack of globbing
Note how the (combined) sed command now produces a single output line '<input-path>'/* ., with the input path enclosed in embedded single-quotes, which is required for xargs to recognize <input-path> as a single argument, even if it contains embedded spaces.
(If your filenames contain single-quotes, you'd have to do more work; also note that since now all arguments for a given dir. are on a single line, you could use xargs -L 1 ....)
Also note how -mindepth 1 (only process paths at the subdirectory level or below) is used to skip processing of . itself.
The only way to make globbing happen is to get the shell involved:
find . -mindepth 1 -type d |
sed -r "s/.*/'&'\/* ./" | # -> '<input-path>'/* . (including single-quotes)
xargs -I {} sh -c 'echo mv {}' # works, but is inefficient
Note the use of xargs' -I option to treat each input line as its own argument ({} is a self-chosen placeholder for the input).
sh -c invokes the (default) shell to execute the resulting command, at which globbing does happen.
However, overall, this is quite inefficient:
A pipeline with 3 segments is used.
A shell instance is invoked for every input path, which in turn calls the mv utility.
Compare this to the efficient find-only solution above, which (typically) creates only 2 processes in total.

Remove files not containing a specific string

I want to find the files not containing a specific string (in a directory and its sub-directories) and remove those files. How I can do this?
The following will work:
find . -type f -print0 | xargs --null grep -Z -L 'my string' | xargs --null rm
This will first use find to print the names of all the files in the current directory and any subdirectories. These names are printed with a null terminator rather than the usual newline separator (try piping the output to od -c to see the effect of the -print0 argument).
Then the --null parameter to xargs tells it to accept null-terminated inputs. xargs will then call grep on a list of filenames.
The -Z argument to grep works like the -print0 argument to find, so grep will print out its results null-terminated (which is why the final call to xargs needs a --null option too). The -L argument to grep causes grep to print the filenames of those files on its command line (that xargs has added) which don't match the regular expression:
my string
If you want simple matching without regular expression magic then add the -F option. If you want more powerful regular expressions then give a -E argument. It's a good habit to use single quotes rather than double quotes, as this protects you against any shell magic being applied to the string (such as variable substitution).
Finally you call xargs again to get rid of all the files that you've found with the previous calls.
The problem with calling grep directly from the find command with the -exec argument is that grep then gets invoked once per file rather than once for a whole batch of files as xargs does; batching is much faster if you have lots of files. Also, don't be tempted to do stuff like:
rm $(some command that produces lots of filenames)
It's always better to pass it to xargs as this knows the maximum command-line limits and will call rm multiple times each time with as many arguments as it can.
Note that this solution would have been simpler without the need to cope with files containing white space and new lines.
Alternatively
grep -r -L -Z 'my string' . | xargs --null rm
will work too (and is shorter). The -r argument to grep causes it to read all files in the directory, recursively descending into any subdirectories. Use the find ... approach if you want to do some other tests on the files as well (such as age or permissions).
Note that any of the single letter arguments, with a single dash introducer, can be grouped together (for instance as -rLZ). But note also that find does not use the same conventions and has multi-letter arguments introduced with a single dash. This is for historical reasons and hasn't ever been fixed because it would have broken too many scripts.
GNU grep and bash.
grep -rLZ "$str" . | while IFS= read -rd '' x; do rm "$x"; done
Use a find solution if portability is needed. This is slightly faster.
EDIT: This is how you SHOULD NOT do this! Reason is given here. Thanks to @ormaaj for pointing it out!
find . -type f | grep -v "exclude string" | xargs rm
Note: the grep pattern will match against the full file path from the current directory (see the find . -type f output)
One possibility is
find . -type f '!' -exec grep -q "my string" {} \; -exec echo rm {} \;
You can remove the echo if the output of this preview looks correct.
The equivalent with -delete is
find . -type f '!' -exec grep -q "user_id" {} \; -delete
but then you don't get the nice preview option.
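You can still approximate the preview by substituting -print for -delete first:
find . -type f '!' -exec grep -q "user_id" {} \; -print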
To remove files not containing a specific string:
Bash:
To use extended patterns, enable the extglob shell option as follows:
shopt -s extglob
And just remove all files that don't have the string "fix":
rm !(*fix*)
To remove all files whose names contain neither "fix" nor "class":
rm !(*fix*|*class*)
Zsh:
To use extended patterns in zsh, enable the extended_glob shell option as follows:
setopt extended_glob
Remove all files that don't have the string, in this example "fix":
rm -- ^*fix*
To remove all files whose names contain neither "fix" nor "class":
rm -- ^(*fix*|*class*)
You can also use this for extensions; just change the glob pattern: *.zip, *.doc, etc.
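For example, a sketch of the extension variant (zsh, extended_glob enabled), removing everything except .zip and .doc files:
rm -- ^(*.zip|*.doc)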
Here are the sources:
https://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/
https://codeday.me/es/qa/20190819/1296122.html
I can think of a few ways to approach this. Here's one: find and grep to generate a list of files with no match, and then xargs rm them.
find yourdir -type f -exec grep -F -L 'yourstring' '{}' + | xargs -d '\n' rm
This assumes GNU tools (grep -L and xargs -d are non-portable) and of course no filenames with newlines in them. It has the advantage of not running grep and rm once per file, so it'll be reasonably fast. I recommend testing it with "echo" in place of "rm" just to make sure it picks the right files before you unleash the destruction.
This worked for me; you can remove the -f test if you're okay with deleting directories as well.
myString="keepThis"
for x in `find ./`
do if [[ -f $x && ! $x =~ $myString ]]
then rm "$x"
fi
done
Another solution (although not as fast). The top solution didn't work in my case because the string I needed to use in place of 'my string' has special characters.
find -type f ! -name "*my string*" -exec rm {} \; -print

List only numeric file names in directory

I have a list of files with numeric file names (e.g. #.php, ##.php or ###.php) that I'd like to copy/move in one fell swoop.
Does anyone know of an ls or grep combo command to accomplish this objective?
I do have this much:
ls -al | grep "[0-9].php"
In Bash, you can use extended pattern matching:
shopt -s extglob
ls -l +([0-9]).php
which will find files such as:
123.php
9.php
but not
a.php
2b.php
c3.php
Amend it like this:
ls -al | grep -E '^[0-9]+\.php$'
-E activates the extended regular expressions.
+ requires that at least one occurrence of the preceding group must appear.
\. escape dot otherwise it means "any character."
^ and $ to match the entire filename and not only a part.
Single quotes to prevent variable expansion (it would complain because of the $).
Use find:
$ find . -regex '\./[0-9]+\.php' -exec mv '{}' dest/ ';'
Note that GNU find's -regex matches against the whole path (including the leading ./), not just the base name, which is why the pattern has to account for it. This also assumes that the files are in the same directory (.) as the one you're in when running the command.
The {} trickery in the mv command is replaced by find with the found filename.
Either use find (possibly combined with the -XXXdepth options):
find . -mindepth 1 -maxdepth 1 -regex '\./[0-9]+\.php' -exec mv '{}' dest/ ';'
Or use the builtin regex capabilities:
pattern='^[0-9]+\.php$'
for file in *.php
do
[[ $file =~ $pattern ]] && echo "$file"
done
Don't use ls or grep.
The following worked in my case:
ls -an
You can use a glob pattern when listing files:
ls [0-9]*
This is an easy and minimalistic approach to the above problem (though note it matches any name that merely starts with a digit), but I think a better solution is
ls -al | grep -E '^[0-9]+\.php$'
as UncleZeiv explains in another answer.

How do I rename all folders and files to lowercase on Linux?

I have to rename a complete folder tree recursively so that no uppercase letter appears anywhere (it's C++ source code, but that shouldn't matter).
Bonus points for ignoring CVS and Subversion version control files/folders. The preferred way would be a shell script, since a shell should be available on any Linux box.
There were some valid arguments about details of the file renaming.
I think files with the same lowercase names should be overwritten; it's the user's problem. When checked out on a case-ignoring file system, it would overwrite the first one with the latter, too.
I would consider A-Z characters and transform them to a-z, everything else is just calling for problems (at least with source code).
The script would be needed to run a build on a Linux system, so I think changes to CVS or Subversion version control files should be omitted. After all, it's just a scratch checkout. Maybe an "export" is more appropriate.
Smaller still, I quite like:
rename 'y/A-Z/a-z/' *
On case insensitive filesystems such as OS X's HFS+, you will want to add the -f flag:
rename -f 'y/A-Z/a-z/' *
A concise version using the "rename" command:
find my_root_dir -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;
This avoids problems with directories being renamed before files and trying to move files into non-existing directories (e.g. "A/A" into "a/a").
Or, a more verbose version without using "rename".
for SRC in `find my_root_dir -depth`
do
DST=`dirname "${SRC}"`/`basename "${SRC}" | tr '[A-Z]' '[a-z]'`
if [ "${SRC}" != "${DST}" ]
then
[ ! -e "${DST}" ] && mv -T "${SRC}" "${DST}" || echo "${SRC} was not renamed"
fi
done
P.S.
The latter allows more flexibility with the move command (for example, "svn mv").
for f in `find`; do mv -v "$f" "`echo $f | tr '[A-Z]' '[a-z]'`"; done
If you don't need to care about efficiency, simply try the following:
zip -r foo.zip foo/*
unzip -LL foo.zip
One can simply use the following which is less complicated:
rename 'y/A-Z/a-z/' *
This works on CentOS/Red Hat Linux or other distributions without the rename Perl script:
for i in $( ls | grep [A-Z] ); do mv -i "$i" "`echo $i | tr 'A-Z' 'a-z'`"; done
Source: Rename all file names from uppercase to lowercase characters
(In some distributions the default rename command comes from util-linux, and that is a different, incompatible tool.)
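The util-linux rename takes a literal substring and its replacement instead of a Perl expression; a sketch with hypothetical file names:
rename .HTM .html *.HTM # util-linux syntax: rename <from> <to> <files>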
This works if you already have or set up the rename command (e.g. through brew install in Mac):
rename --lower-case --force somedir/*
The simplest approach I found on Mac OS X was to use the rename package from http://plasmasturm.org/code/rename/:
brew install rename
rename --force --lower-case --nows *
--force Rename even when a file with the destination name already exists.
--lower-case Convert file names to all lower case.
--nows Replace all sequences of whitespace in the filename with single underscore characters.
Most of the answers above are dangerous, because they do not deal with names containing odd characters. Your safest bet for this kind of thing is to use find's -print0 option, which will terminate filenames with ASCII NUL instead of \n.
Here is a script, which only alter files and not directory names so as not to confuse find:
find . -type f -print0 | xargs -0n 1 bash -c \
's=$(dirname "$0")/$(basename "$0");
d=$(dirname "$0")/$(basename "$0"|tr "[A-Z]" "[a-z]"); mv -f "$s" "$d"'
I tested it, and it works with filenames containing spaces, all kinds of quotes, etc. This is important because if you run, as root, one of those other scripts on a tree that includes the file created by
touch \;\ echo\ hacker::0:0:hacker:\$\'\057\'root:\$\'\057\'bin\$\'\057\'bash
... well guess what ...
Here's my suboptimal solution, using a Bash shell script:
#!/bin/bash
# First, rename all folders
for f in `find . -depth ! -name CVS -type d`; do
g=`dirname "$f"`/`basename "$f" | tr '[A-Z]' '[a-z]'`
if [ "xxx$f" != "xxx$g" ]; then
echo "Renaming folder $f"
mv -f "$f" "$g"
fi
done
# Now, rename all files
for f in `find . ! -type d`; do
g=`dirname "$f"`/`basename "$f" | tr '[A-Z]' '[a-z]'`
if [ "xxx$f" != "xxx$g" ]; then
echo "Renaming file $f"
mv -f "$f" "$g"
fi
done
Folders are all renamed correctly, and mv isn't asking questions when permissions don't match, and CVS folders are not renamed (CVS control files inside that folder are still renamed, unfortunately).
Since "find -depth" and "find | sort -r" both return the folder list in a usable order for renaming, I preferred using "-depth" for searching folders.
One-liner:
for F in K*; do NEWNAME=$(echo "$F" | tr '[:upper:]' '[:lower:]'); mv "$F" "$NEWNAME"; done
Or even:
for F in K*; do mv "$F" "${F,,}"; done
Note that this will convert only files/directories starting with letter K, so adjust accordingly.
The original question asked for ignoring SVN and CVS directories, which can be done by adding -prune to the find command. E.g to ignore CVS:
find . -name CVS -prune -o -exec mv '{}' `echo {} | tr '[A-Z]' '[a-z]'` \; -print
[edit] I tried this out, and embedding the lower-case translation inside the find didn't work for reasons I don't actually understand (likely because the shell expands the backquoted echo {} | tr ... once, before find ever runs, so the substitution never sees the actual filenames). So, amend this to:
$> cat > tolower
#!/bin/bash
mv "$1" "$(echo "$1" | tr '[:upper:]' '[:lower:]')"
^D
$> chmod u+x tolower
$> find . -name CVS -prune -o -exec tolower '{}' \;
Ian
Not portable, Zsh only, but pretty concise.
First, make sure zmv is loaded.
autoload -U zmv
Also, make sure extendedglob is on:
setopt extendedglob
Then use:
zmv '(**/)(*)~CVS~**/CVS' '${1}${(L)2}'
To recursively lowercase files and directories where the name is not CVS.
Using Larry Wall's filename fixer:
$op = shift or die $help;
chomp(@ARGV = <STDIN>) unless @ARGV;
for (@ARGV) {
    $was = $_;
    eval $op;
    die $@ if $@;
    rename($was,$_) unless $was eq $_;
}
It's as simple as
find | fix 'tr/A-Z/a-z/'
(where fix is of course the script above)
for f in `find -depth`; do mv "$f" "${f,,}" ; done
find -depth prints each file and directory, with a directory's contents printed before the directory itself. ${f,,} lowercases the file name.
This works nicely on macOS too:
ruby -e "Dir['*'].each { |p| File.rename(p, p.downcase) }"
This is a small shell script that does what you requested:
root_directory="${1:?please specify parent directory}"
do_it () {
awk '{ lc= tolower($0); if (lc != $0) print "mv \"" $0 "\" \"" lc "\"" }' | sh
}
# first the folders
find "$root_directory" -depth -type d | do_it
find "$root_directory" ! -type d | do_it
Note the -depth action in the first find.
Use typeset:
typeset -l new # Always lowercase
find $topPoint | # Not using xargs to make this more readable
while read old
do new="$old" # $new is a lowercase version of $old
mv "$old" "$new" # Quotes for those annoying embedded spaces
done
On Windows, emulations, like Git Bash, may fail because Windows isn't case-sensitive under the hood. For those, add a step that mv's the file to another name first, like "$old.tmp", and then to $new.
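A sketch of that two-step rename (variable names as in the loop above):
mv "$old" "$old.tmp" # sidestep the case-insensitive collision
mv "$old.tmp" "$new"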
With macOS, install the rename package:
brew install rename
Use:
find . -iname "*.py" -type f | xargs -I% rename -c -f "%"
This command find all the files with a *.py extension and converts the filenames to lower case.
-f forces a rename
For example,
$ find . -iname "*.py" -type f
./sample/Sample_File.py
./sample_file.py
$ find . -iname "*.py" -type f | xargs -I% rename -c -f "%"
$ find . -iname "*.py" -type f
./sample/sample_file.py
./sample_file.py
Lengthy But "Works With No Surprises & No Installations"
This script handles filenames with spaces, quotes, other unusual characters and Unicode, works on case insensitive filesystems and most Unix-y environments that have bash and awk installed (i.e. almost all). It also reports collisions if any (leaving the filename in uppercase) and of course renames both files & directories and works recursively. Finally it's highly adaptable: you can tweak the find command to target the files/dirs you wish and you can tweak awk to do other name manipulations. Note that by "handles Unicode" I mean that it will indeed convert their case (not ignore them like answers that use tr).
# adapt the following command _IF_ you want to deal with specific files/dirs
find . -depth -mindepth 1 -exec bash -c '
for file do
# adapt the awk command if you wish to rename to something other than lowercase
newname=$(dirname "$file")/$(basename "$file" | awk "{print tolower(\$0)}")
if [ "$file" != "$newname" ] ; then
# the extra step with the temp filename is for case-insensitive filesystems
if [ ! -e "$newname" ] && [ ! -e "$newname.lcrnm.tmp" ] ; then
mv -T "$file" "$newname.lcrnm.tmp" && mv -T "$newname.lcrnm.tmp" "$newname"
else
echo "ERROR: Name already exists: $newname"
fi
fi
done
' sh {} +
References
My script is based on these excellent answers:
https://unix.stackexchange.com/questions/9496/looping-through-files-with-spaces-in-the-names
How to convert a string to lower case in Bash?
In OS X, mv -f shows "same file" error, so I rename twice:
for i in `find . -name "*" -type f |grep -e "[A-Z]"`; do j=`echo $i | tr '[A-Z]' '[a-z]' | sed s/\-1$//`; mv $i $i-1; mv $i-1 $j; done
I needed to do this on a Cygwin setup on Windows 7 and found that I got syntax errors with the suggestions from above that I tried (though I may have missed a working option). However, this solution straight from Ubuntu forums worked out of the can :-)
ls | while read upName; do loName=`echo "${upName}" | tr '[:upper:]' '[:lower:]'`; mv "$upName" "$loName"; done
(NB: I had previously replaced whitespace with underscores using:
for f in *\ *; do mv "$f" "${f// /_}"; done
)
Slugify Rename (regex)
It is not exactly what the OP asked for, but what I was hoping to find on this page:
A "slugify" version for renaming files so they are similar to URLs (i.e. only include alphanumeric, dots, and dashes):
rename "s/[^a-zA-Z0-9\.]+/-/g" filename
I would reach for Python in this situation, to avoid optimistically assuming paths without spaces or slashes. I've also found that python2 tends to be installed in more places than rename.
#!/usr/bin/env python2
import sys, os

def rename_dir(directory):
    print('DEBUG: rename('+directory+')')
    # Rename current directory if needed
    os.rename(directory, directory.lower())
    directory = directory.lower()
    # Rename children
    for fn in os.listdir(directory):
        path = os.path.join(directory, fn)
        os.rename(path, path.lower())
        path = path.lower()
        # Rename children within, if this child is a directory
        if os.path.isdir(path):
            rename_dir(path)

# Run program, using the first argument passed to this Python script as the name of the folder
rename_dir(sys.argv[1])
If you use Arch Linux, you can install the rename package from the AUR, which provides the renamexm command as the /usr/bin/renamexm executable along with a manual page.
It is a really powerful tool to quickly rename files and directories.
Convert to lowercase
rename -l Developers.mp3 # or --lowcase
Convert to UPPER case
rename -u developers.mp3 # or --upcase, long option
Other options
-R --recursive # directory and its children
-t --test # Dry run, output but don't rename
-o --owner # Change file owner as well to user specified
-v --verbose # Output what file is renamed and its new name
-s/str/str2 # Substitute string on pattern
--yes # Confirm all actions
You can fetch the sample Developers.mp3 file from here, if needed ;)
None of the solutions here worked for me because I was on a system that didn't have access to the perl rename script, plus some of the files included spaces. However, I found a variant that works:
find . -depth -exec sh -c '
t=${0%/*}/$(printf %s "${0##*/}" | tr "[:upper:]" "[:lower:]");
[ "$t" = "$0" ] || mv -i "$0" "$t"
' {} \;
Credit goes to "Gilles 'SO- stop being evil'", see this answer on the similar question "change entire directory tree to lower-case names" on the Unix & Linux StackExchange.
I believe the one-liners can be simplified (this is zsh, where ${f:l} lowercases the expansion and ** recurses):
for f in **/*; do mv "$f" "${f:l}"; done
( find yourdir -type d | sort -r;
find yourdir -type f ) |
grep -v /CVS | grep -v /SVN |
while read f; do mv -v "$f" "`echo $f | tr '[A-Z]' '[a-z]'`"; done
First rename the directories bottom up sort -r (where -depth is not available), then the files.
Then grep -v /CVS instead of find ...-prune because it's simpler.
For large directories, for f in ... can overflow some shell buffers.
Use find ... | while read to avoid that.
And yes, this will clobber files which differ only in case...
find . -depth -name '*[A-Z]*'|sed -n 's/\(.*\/\)\(.*\)/mv -n -v -T \1\2 \1\L\2/p'|sh
I haven't tried the more elaborate scripts mentioned here, but none of the single command-line versions worked for me on my Synology NAS. rename is not available, and many of the variations of find fail because they seem to stick to the older name of the already renamed path (e.g., if it finds ./FOO followed by ./FOO/BAR, renaming ./FOO to ./foo will still continue to list ./FOO/BAR even though that path is no longer valid). The above command worked for me without any issues.
What follows is an explanation of each part of the command:
find . -depth -name '*[A-Z]*'
This will find any file from the current directory (change . to whatever directory you want to process), using a depth-first search (e.g., it will list ./foo/bar before ./foo), but only files that contain an uppercase character. The -name filter only applies to the base file name, not the full path. So this will list ./FOO/BAR but not ./FOO/bar. This is fine, as we don't want to rename ./FOO/bar. We do want to rename ./FOO, but that one is listed later on (this is why -depth is important).
This command by itself is particularly useful for finding the files that you want to rename in the first place. Use it after the complete rename command to search for files that still haven't been renamed because of file name collisions or errors.
sed -n 's/\(.*\/\)\(.*\)/mv -n -v -T \1\2 \1\L\2/p'
This part reads the files output by find and formats them into an mv command using a regular expression. The -n option stops sed from printing the input, and the p command in the search-and-replace regex outputs the replaced text.
The regex itself consists of two captures: the part up until the last / (which is the directory of the file), and the filename itself. The directory is left intact, but the filename is transformed to lowercase. So, if find outputs ./FOO/BAR, it will become mv -n -v -T ./FOO/BAR ./FOO/bar. The -n option of mv makes sure existing lowercase files are not overwritten. The -v option makes mv output every change that it makes (or doesn't make - if ./FOO/bar already exists, it outputs something like ./FOO/BAR -> ./FOO/BAR, noting that no change has been made). The -T is very important here - it treats the target file as a directory. This will make sure that ./FOO/BAR isn't moved into ./FOO/bar if that directory happens to exist.
Use this together with find to generate a list of commands that will be executed (handy to verify what will be done without actually doing it)
sh
This is pretty self-explanatory: it routes all the generated mv commands to the shell interpreter. You can replace it with bash or any shell of your liking.
Using bash, without rename:
find . -exec bash -c 'mv "$0" "${0,,}"' {} \;
