How to loop over directories in Linux?

I am writing a script in bash on Linux and need to go through all subdirectory names in a given directory. How can I loop through these directories (and skip regular files)?
For example:
the given directory is /tmp/
it has the following subdirectories: /tmp/A, /tmp/B, /tmp/C
I want to retrieve A, B, C.

All answers so far use find, so here's one with just the shell. No need for external tools in your case:
for dir in /tmp/*/ # list directories in the form "/tmp/dirname/"
do
dir=${dir%*/} # remove the trailing "/"
echo "${dir##*/}" # print everything after the final "/"
done

cd /tmp
find . -maxdepth 1 -mindepth 1 -type d -printf '%f\n'
A short explanation:
find finds files (quite obviously)
. is the current directory, which after the cd is /tmp (IMHO this is more flexible than having /tmp directly in the find command. You have only one place, the cd, to change, if you want more actions to take place in this folder)
-maxdepth 1 and -mindepth 1 make sure that find only looks in the current directory and doesn't include . itself in the result
-type d looks only for directories
-printf '%f\n' prints only the found folder's name (plus a newline) for each hit.
Et voilà!

You can loop through all directories, including hidden directories (beginning with a dot), with:
for file in */ .*/ ; do echo "$file is a directory"; done
Note: the list */ .*/ works in zsh only if at least one hidden directory exists in the folder. In bash it will also include . and ..
Another possibility for bash to include hidden directories would be to use:
shopt -s dotglob;
for file in */ ; do echo "$file is a directory"; done
If you want to exclude symlinks:
for file in */ ; do
if [[ -d "$file" && ! -L "$file" ]]; then
echo "$file is a directory";
fi;
done
To output only the trailing directory name (A, B, C as asked in the question), use this within the loops of each solution:
file="${file%/}" # strip trailing slash
file="${file##*/}" # strip path and leading slash
echo "$file is the directoryname without slashes"
Example (this also works with directories that contain spaces):
mkdir /tmp/A /tmp/B /tmp/C "/tmp/ dir with spaces"
for file in /tmp/*/ ; do file="${file%/}"; echo "${file##*/}"; done

Works with directories that contain spaces
Inspired by Sorpigal
while IFS= read -d $'\0' -r file ; do
echo "$file"; ls "$file";
done < <(find /path/to/dir/ -mindepth 1 -maxdepth 1 -type d -print0)
Original post (Does not work with spaces)
Inspired by Boldewyn: Example of loop with find command.
for D in $(find /path/to/dir/ -mindepth 1 -maxdepth 1 -type d) ; do
echo $D ;
done

find . -mindepth 1 -maxdepth 1 -type d -printf "%P\n"

The technique I use most often is find | xargs. For example, if you want to make every file in this directory and all of its subdirectories world-readable, you can do:
find . -type f -print0 | xargs -0 chmod go+r
find . -type d -print0 | xargs -0 chmod go+rx
The -print0 option terminates with a NULL character instead of a space. The -0 option splits its input the same way. So this is the combination to use on files with spaces.
You can picture this chain of commands as taking every line output by find and sticking it on the end of a chmod command.
If the command you want to run needs its argument in the middle instead of at the end, you have to be a bit creative. For instance, I needed to change into every subdirectory and run the command latexmk -c. So I used (from Wikipedia):
find . -type d -depth 1 -print0 | \
xargs -0 sh -c 'for dir; do pushd "$dir" && latexmk -c && popd; done' fnord
This has the effect of for dir in $(subdirs); do stuff; done, but is safe for directories with spaces in their names. Also, the separate calls to stuff are made in the same shell, which is why my command has to return to the original directory with popd.
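As a side note, the trailing fnord is just a throwaway word: the first argument after the quoted script becomes $0 inside the sh -c shell, so the directory names supplied by xargs land in $1, $2, ..., which is exactly what the bare for dir iterates over. A minimal standalone sketch of that mechanism (with made-up directory names):
# The word "placeholder" becomes $0 inside sh -c; the null-separated names
# become the positional parameters that the bare "for dir" loops over.
printf '%s\0' "dir A" "dir B" | xargs -0 sh -c 'for dir; do echo "visiting: $dir"; done' placeholder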

A minimal bash loop you can build on (based on ghostdog74's answer):
for dir in directory/*
do
echo "${dir}"
done
to zip a whole bunch of files by directory
for dir in directory/*
do
zip -r "${dir##*/}" "${dir}"
done

If you want to execute multiple commands in a for loop, you can save the result of find with mapfile (bash >= 4) as a variable and go through the array with "${dirlist[@]}". It also works with directories containing spaces.
The find command is based on the answer by Boldewyn. Further information about the find command can be found there.
IFS=""
mapfile -t dirlist < <( find . -maxdepth 1 -mindepth 1 -type d -printf '%f\n' )
for dir in "${dirlist[@]}"; do
echo ">${dir}<"
# more commands can go here ...
done

TL;DR:
(cd /tmp; for d in */; do echo "${d%/}"; done)
Explanation.
There's no need to use external programs. What you need is a shell globbing pattern. To avoid the need of removing /tmp afterward, I'm running it in a subshell, which may or not be suitable for your purposes.
Shell globbing patterns in a nutshell:
* Matches any string, including the empty string (zero or more characters).
? Matches exactly one character.
[...] Matches one character from between the brackets. You can also specify ranges ([a-z], [A-F0-9], etc.) or character classes ([[:digit:]], [[:alpha:]], etc.).
[^...] Matches one character that is not between the brackets.
If no file names match the pattern, the shell leaves the pattern unchanged. Any character or string that is not one of the above represents itself.
Consequently, the pattern */ will match any file name that ends with a /. A trailing / in a file name unambiguously identifies a directory.
The last bit is removing the trailing slash, which is achieved with the variable substitution ${var%PATTERN}, which removes the shortest matching pattern from the end of the string contained in var, and where PATTERN is any valid globbing pattern. So we write ${d%/}, meaning we want to remove the trailing slash from the string represented by d.
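A few illustrative expansions, assuming the /tmp/A, /tmp/B, /tmp/C layout from the question:
# With /tmp/A, /tmp/B and /tmp/C present:
echo /tmp/*/        # /tmp/A/ /tmp/B/ /tmp/C/
echo /tmp/[AB]/     # /tmp/A/ /tmp/B/
echo /tmp/[^A]/     # /tmp/B/ /tmp/C/
d=/tmp/A/
echo "${d%/}"       # /tmp/A  (shortest suffix matching "/" removed)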

find . -type d -maxdepth 1

In short, put the results of find into an array and iterate the array and do what you want. Not the quickest but more organized thinking.
#!/bin/bash
cd /tmp
declare -a results=(`find -type d`)
#Iterate the results
for path in "${results[@]}"
do
echo "Your path is $path"
#Do something with the path..
if [[ $path =~ "/A" ]]; then
echo $path | awk -F / '{print $NF}'
#prints A
elif [[ $path =~ "/B" ]]; then
echo $path | awk -F / '{print $NF}'
#Prints B
elif [[ $path =~ "/C" ]]; then
echo $path | awk -F / '{print $NF}'
#Prints C
fi
done
This can be reduced to:
find -type d | grep "/A" | awk -F / '{print $NF}'   # prints A
find -type d | grep "/B" | awk -F / '{print $NF}'   # prints B
find -type d | grep "/C" | awk -F / '{print $NF}'   # prints C

Related

LINUX Copy the name of the newest folder and paste it in a command [duplicate]

I would like to find the newest sub directory in a directory and save the result to variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d lists the directories themselves rather than their contents
*/ only lists directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command, it already prints the directory listing.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account things like files being written and removed from the directory resulting in the upper directory being returned instead of the newest subdirectory.
The other issue is that this solution assumes that the directory only contains other directories and not files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; |tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T#\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline separated
find . -mindepth 1 -maxdepth 1 -type d -printf "%T#\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (that does not have -printf) you may, if you have it, run stat to get file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
Without stat, a pure-shell solution may be used by replacing the [[ bash extension with [ as in this answer; a sketch of that approach follows below.
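A minimal POSIX-sh sketch of that pure-shell approach (note that the -nt test is a widespread extension to [ rather than strict POSIX, and hidden directories are not handled here):
topdir=/backups
BACKUPDIR=
for file in "$topdir"/*/; do
    file=${file%/}
    # skip symlinks and anything that is not a directory
    [ -d "$file" ] && [ ! -L "$file" ] || continue
    if [ -z "$BACKUPDIR" ] || [ "$file" -nt "$BACKUPDIR" ]; then
        BACKUPDIR=$file
    fi
done
printf 'BACKUPDIR=%s\n' "$BACKUPDIR"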
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I run the line above from GitBash within Windows environment in file called ./something.bash.

How to remove all but a few selected files in a directory?

I want to remove all files in a directory except some through a shell script. The name of files will be passed as command line argument and number of arguments may vary.
Suppose the directory has these 5 files:
1.txt, 2.txt, 3.txt, 4.txt, 5.txt
I want to remove two files from it through a shell script using file name. Also, the number of files may vary.
There are several ways this could be done, but the one that's most robust and highest performance with large directories is probably to construct a find command.
#!/usr/bin/env bash
# first argument is the directory name to search in
dir=$1; shift
# subsequent arguments are filenames to absolve from deletion
find_args=( )
for name; do
find_args+=( -name "$name" -prune -o )
done
if [[ $dry_run ]]; then
exec find "$dir" -mindepth 1 -maxdepth 1 "${find_args[#]}" -print
else
exec find "$dir" -mindepth 1 -maxdepth 1 "${find_args[#]}" -exec rm -f -- '{}' +
fi
Thereafter, to list files which would be deleted (if the above is in a script named delete-except):
dry_run=1 delete-except /path/to/dir 1.txt 2.txt
or, to actually delete those files:
delete-except /path/to/dir 1.txt 2.txt
A simple, straightforward way could be using the GLOBIGNORE variable.
GLOBIGNORE is a colon-separated list of patterns defining the set of filenames to be ignored by pathname expansion. If a filename matched by a pathname expansion pattern also matches one of the patterns in GLOBIGNORE, it is removed from the list of matches.
Thus, the solution is to iterate through the command line args, appending file names to the list. Then call rm *. Don't forget to unset GLOBIGNORE var at the end.
#!/bin/bash
for arg in "$#"
do
if [ $arg = $1 ]
then
GLOBIGNORE=$arg
else
GLOBIGNORE=${GLOBIGNORE}:$arg
fi
done
rm *
unset GLOBIGNORE
*In case you had set GLOBIGNORE before, you can just store the val in a tmp var then reset it at the end.
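A minimal sketch of that save-and-restore, assuming the same positional arguments as the script above (the variable name saved_globignore is illustrative):
#!/bin/bash
# Remember any pre-existing value so it can be restored afterwards.
saved_globignore=$GLOBIGNORE
# Build the ignore list from the command line arguments (the files to keep).
GLOBIGNORE=$(IFS=:; echo "$*")
rm *
# Restore the previous value instead of unconditionally unsetting it.
GLOBIGNORE=$saved_globignore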
We can accomplish this in pure Bash, without the need for any external tools:
#!/usr/bin/env bash
# build an associative array that contains all the filenames to be preserved
declare -A skip_list
for f in "$#"; do
skip_list[$f]=1
done
# walk through all files and build an array of files to be deleted
declare -a rm_list
for f in *; do # loop through all files
[[ -f "$f" ]] || continue # not a regular file
[[ "${skip_list[$f]}" ]] && continue # skip this file
rm_list+=("$f") # now it qualifies for rm
done
# remove the files
printf '%s\0' "${rm_list[#]}" | xargs -0 rm -- # Thanks to Charles' suggestion
This solution will also work for files that have whitespaces or glob characters in them.
Thanks all for your answers; I have figured out my solution. Below is the solution that worked for me:
find /home/mydir -type f | grep -vw "goo" | xargs rm

How to rename directory and subdirectories recursively in linux?

Let's say I have 200 directories with a variable hierarchy of subdirectories. How can I rename a directory and its subdirectories using the mv command with find, or some combination of tools?
for dir in ./*/; do (i=1; cd "$dir" && for dir in ./*; do printf -v dest %s_%02d "$dir" "$((i++))"; echo mv "$dir" "$dest"; done); done
This works for two levels of subdirectories; is there a cleaner way to do it for an arbitrary hierarchy? Any other one-line command suggestions/solutions are welcome.
I had a specific task - to replace non-ASCII symbols and square brackets, in directories and in files as well. It works fine.
First, exactly my case, as a working example:
find . -depth -execdir rename -v 's/([^\x00-\x7F]+)|([\[\]]+)/\_/g' {} \;
or separately non-ascii and brackets:
find . -depth -execdir rename -v 's/[^\x00-\x7F]+/\_/g' {} \;
find . -depth -execdir rename -v 's/[\[\]]+/\_/g' {} \;
If we'd like to work only with directories, add -type d (after the -depth option)
Now, in more generalized view:
find . -depth [-type d] [-type f] -execdir rename [-v] 's/.../.../g' '{}' \;
Here we can control dirs/files and verbosity. Quotes around {} may or may not be needed on your machine (the backslash before ; serves the same purpose and may be replaced with quotes).
You have two options when you want to do recursive operations in files/directories:
Option 1 : Find
while IFS= read -r -d '' subd;do
#do your stuff here with var $subd
done < <(find . -type d -print0)
In this case we use find to return only dirs using -type d
We can ask find to return only files using -type f or not to specify any type and both directories and files will be returned.
We also use find option -print0 to force null separation of the find results and thus to ensure correct names handling in case names include special chars like spaces, etc.
Testing:
$ while IFS= read -r -d '' s;do echo "$s";done < <(find . -type d -print0)
.
./dir1
./dir1/sub1
./dir1/sub1/subsub1
./dir1/sub1/subsub1/subsubsub1
./dir2
./dir2/sub2
Option 2 : Using Bash globstar option
shopt -s globstar
for subd in **/ ; do
#Do your stuff here with $subd directories
done
In this case, the for loop will match all subdirectories under the current working directory (the **/ pattern).
You can also ask bash to return both files and folders using:
for sub in ** ; do
if [[ -d "$sub" ]]; then
#actions for folders
elif [[ -e "$sub" ]]; then
#actions for files
else
#do something else
fi
done
Folders Test:
$ shopt -s globstar
$ for i in **/ ;do echo "$i";done
dir1/
dir1/sub1/
dir1/sub1/subsub1/
dir1/sub1/subsub1/subsubsub1/
dir2/
dir2/sub2/
In your small script, just enabling shopt -s globstar and changing your for to for dir in **/ ; do seems to make it work as you expect.

A bash script to run a program for directories that do not have a certain file

I need a Bash script to execute a program for all directories that do not have a specific file, and to create the output file in the same directory. The program needs an input file, which exists in every directory under the name *.DNA.fasta. Suppose I have the following directories, which may also contain subdirectories:
dir1/a.protein.fasta
dir2/b.protein.fasta
dir3/anyfile
dir4/x.orf.fasta
I have started by finding the directories that don't have that specific file, whose name matches *.protein.fasta.
In this case I want dir3 and dir4 to be listed (since they do not contain *.protein.fasta).
I have tried this code:
find . -maxdepth 1 -type d \! -exec test -e '{}/*protein.fasta' \; -print
but it seems I missed something; it does not work.
I also do not know how to proceed with the rest of the task.
This is a tricky one.
I can't think of a good solution. But here's a solution, nevertheless. Note that this is guaranteed not to work if your directory or file names contain newlines, and it's not guaranteed to work if they contain other special characters. (I've only tested with the samples in your question.)
Also, I haven't included a -maxdepth because you said you need to search subdirectories too.
#!/bin/bash
# Create an associative array
declare -A excludes
# Build an associative array of directories containing the file
while read line; do
excludes[$(dirname "$line")]=1
echo "excluded: $(dirname "$line")" >&2
done <<EOT
$(find . -name "*protein.fasta" -print)
EOT
# Walk through all directories, print only those not in array
find . -type d \
| while read line ; do
if [[ ! ${excludes[$line]} ]]; then
echo "$line"
fi
done
For me, this returns:
.
./dir3
./dir4
All of which are directories that do not contain a file matching *.protein.fasta. Of course, you can replace the last echo "$line" with whatever you need to do with these directories.
Alternately:
If what you're really looking for is just the list of top-level directories that do not contain the matching file in any subdirectory, the following bash one-liner may be sufficient:
for i in *; do test -d "$i" && ( find "$i" -name '*protein.fasta' | grep -q . || echo "$i" ); done
#!/bin/bash
for dir in *; do
test -d "$dir" && ( find "$dir" -name '*protein.fasta' | grep -q . || Programfoo"$dir/$dir.DNA.fasta");
done

How do I rename all folders and files to lowercase on Linux?

I have to rename a complete folder tree recursively so that no uppercase letter appears anywhere (it's C++ source code, but that shouldn't matter).
Bonus points for ignoring CVS and Subversion version control files/folders. The preferred way would be a shell script, since a shell should be available on any Linux box.
There were some valid arguments about details of the file renaming.
I think files with the same lowercase names should be overwritten; it's the user's problem. When checked out on a case-ignoring file system, it would overwrite the first one with the latter, too.
I would consider A-Z characters and transform them to a-z, everything else is just calling for problems (at least with source code).
The script would be needed to run a build on a Linux system, so I think changes to CVS or Subversion version control files should be omitted. After all, it's just a scratch checkout. Maybe an "export" is more appropriate.
Smaller still I quite like:
rename 'y/A-Z/a-z/' *
On case insensitive filesystems such as OS X's HFS+, you will want to add the -f flag:
rename -f 'y/A-Z/a-z/' *
A concise version using the "rename" command:
find my_root_dir -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;
This avoids problems with directories being renamed before files and trying to move files into non-existing directories (e.g. "A/A" into "a/a").
Or, a more verbose version without using "rename".
for SRC in `find my_root_dir -depth`
do
DST=`dirname "${SRC}"`/`basename "${SRC}" | tr '[A-Z]' '[a-z]'`
if [ "${SRC}" != "${DST}" ]
then
[ ! -e "${DST}" ] && mv -T "${SRC}" "${DST}" || echo "${SRC} was not renamed"
fi
done
P.S.
The latter allows more flexibility with the move command (for example, "svn mv").
for f in `find`; do mv -v "$f" "`echo $f | tr '[A-Z]' '[a-z]'`"; done
Just simply try the following if you don't need to care about efficiency.
zip -r foo.zip foo/*
unzip -LL foo.zip
One can simply use the following which is less complicated:
rename 'y/A-Z/a-z/' *
This works on CentOS/Red Hat Linux or other distributions without the rename Perl script:
for i in $( ls | grep [A-Z] ); do mv -i "$i" "`echo $i | tr 'A-Z' 'a-z'`"; done
Source: Rename all file names from uppercase to lowercase characters
(In some distributions the default rename command comes from util-linux, and that is a different, incompatible tool.)
This works if you already have or set up the rename command (e.g. through brew install in Mac):
rename --lower-case --force somedir/*
The simplest approach I found on Mac OS X was to use the rename package from http://plasmasturm.org/code/rename/:
brew install rename
rename --force --lower-case --nows *
--force Rename even when a file with the destination name already exists.
--lower-case Convert file names to all lower case.
--nows Replace all sequences of whitespace in the filename with single underscore characters.
Most of the answers above are dangerous, because they do not deal with names containing odd characters. Your safest bet for this kind of thing is to use find's -print0 option, which will terminate filenames with ASCII NUL instead of \n.
Here is a script that only alters files, not directory names, so as not to confuse find:
find . -type f -print0 | xargs -0n 1 bash -c \
's=$(dirname "$0")/$(basename "$0");
d=$(dirname "$0")/$(basename "$0"|tr "[A-Z]" "[a-z]"); mv -f "$s" "$d"'
I tested it, and it works with filenames containing spaces, all kinds of quotes, etc. This is important because if you run, as root, one of those other scripts on a tree that includes the file created by
touch \;\ echo\ hacker::0:0:hacker:\$\'\057\'root:\$\'\057\'bin\$\'\057\'bash
... well guess what ...
Here's my suboptimal solution, using a Bash shell script:
#!/bin/bash
# First, rename all folders
for f in `find . -depth ! -name CVS -type d`; do
g=`dirname "$f"`/`basename "$f" | tr '[A-Z]' '[a-z]'`
if [ "xxx$f" != "xxx$g" ]; then
echo "Renaming folder $f"
mv -f "$f" "$g"
fi
done
# Now, rename all files
for f in `find . ! -type d`; do
g=`dirname "$f"`/`basename "$f" | tr '[A-Z]' '[a-z]'`
if [ "xxx$f" != "xxx$g" ]; then
echo "Renaming file $f"
mv -f "$f" "$g"
fi
done
Folders are all renamed correctly, and mv isn't asking questions when permissions don't match, and CVS folders are not renamed (CVS control files inside that folder are still renamed, unfortunately).
Since "find -depth" and "find | sort -r" both return the folder list in a usable order for renaming, I preferred using "-depth" for searching folders.
One-liner:
for F in K*; do NEWNAME=$(echo "$F" | tr '[:upper:]' '[:lower:]'); mv "$F" "$NEWNAME"; done
Or even:
for F in K*; do mv "$F" "${F,,}"; done
Note that this will convert only files/directories starting with letter K, so adjust accordingly.
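For example, a sketch covering every entry in the current directory (assuming bash 4+ for the ${F,,} expansion, and skipping names that are already lowercase so mv doesn't complain):
# Lowercase every name in the current directory (non-recursive), skipping unchanged names.
for F in *; do [ "$F" != "${F,,}" ] && mv -- "$F" "${F,,}"; done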
The original question asked for ignoring SVN and CVS directories, which can be done by adding -prune to the find command. E.g to ignore CVS:
find . -name CVS -prune -o -exec mv '{}' `echo {} | tr '[A-Z]' '[a-z]'` \; -print
[edit] I tried this out, and embedding the lower-case translation inside the find didn't work for reasons I don't actually understand. So, amend this to:
$> cat > tolower
#!/bin/bash
mv $1 `echo $1 | tr '[:upper:]' '[:lower:]'`
^D
$> chmod u+x tolower
$> find . -name CVS -prune -o -exec tolower '{}' \;
Ian
Not portable, Zsh only, but pretty concise.
First, make sure zmv is loaded.
autoload -U zmv
Also, make sure extendedglob is on:
setopt extendedglob
Then use:
zmv '(**/)(*)~CVS~**/CVS' '${1}${(L)2}'
To recursively lowercase files and directories where the name is not CVS.
Using Larry Wall's filename fixer:
$op = shift or die $help;
chomp(@ARGV = <STDIN>) unless @ARGV;
for (@ARGV) {
    $was = $_;
    eval $op;
    die $@ if $@;
    rename($was,$_) unless $was eq $_;
}
It's as simple as
find | fix 'tr/A-Z/a-z/'
(where fix is of course the script above)
for f in `find -depth`; do mv ${f} ${f,,} ; done
find -depth prints each file and directory, with a directory's contents printed before the directory itself. ${f,,} lowercases the file name.
This works nicely on macOS too:
ruby -e "Dir['*'].each { |p| File.rename(p, p.downcase) }"
This is a small shell script that does what you requested:
root_directory="${1?-please specify parent directory}"
do_it () {
awk '{ lc= tolower($0); if (lc != $0) print "mv \"" $0 "\" \"" lc "\"" }' | sh
}
# first the folders
find "$root_directory" -depth -type d | do_it
find "$root_directory" ! -type d | do_it
Note the -depth action in the first find.
Use typeset:
typeset -l new # Always lowercase
find $topPoint | # Not using xargs to make this more readable
while read old
do new="$old" # $new is a lowercase version of $old
mv "$old" "$new" # Quotes for those annoying embedded spaces
done
On Windows, emulations like Git Bash may fail because Windows isn't case-sensitive under the hood. For those, add a step that mv's the file to another name first, like "$old.tmp", and then to $new, as sketched below.
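A minimal sketch of that extra step, using the same $old and $new variables as the loop above ("$old.tmp" is just an illustrative intermediate name):
# On a case-insensitive filesystem, renaming "Foo" to "foo" directly can fail
# or silently do nothing, so bounce through a temporary name first.
if [ "$old" != "$new" ]; then
    mv "$old" "$old.tmp"
    mv "$old.tmp" "$new"
fi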
With MacOS,
Install the rename package,
brew install rename
Use,
find . -iname "*.py" -type f | xargs -I% rename -c -f "%"
This command finds all the files with a *.py extension and converts the filenames to lower case.
-f forces a rename
For example,
$ find . -iname "*.py" -type f
./sample/Sample_File.py
./sample_file.py
$ find . -iname "*.py" -type f | xargs -I% rename -c -f "%"
$ find . -iname "*.py" -type f
./sample/sample_file.py
./sample_file.py
Lengthy But "Works With No Surprises & No Installations"
This script handles filenames with spaces, quotes, other unusual characters and Unicode, works on case insensitive filesystems and most Unix-y environments that have bash and awk installed (i.e. almost all). It also reports collisions if any (leaving the filename in uppercase) and of course renames both files & directories and works recursively. Finally it's highly adaptable: you can tweak the find command to target the files/dirs you wish and you can tweak awk to do other name manipulations. Note that by "handles Unicode" I mean that it will indeed convert their case (not ignore them like answers that use tr).
# adapt the following command _IF_ you want to deal with specific files/dirs
find . -depth -mindepth 1 -exec bash -c '
for file do
# adapt the awk command if you wish to rename to something other than lowercase
newname=$(dirname "$file")/$(basename "$file" | awk "{print tolower(\$0)}")
if [ "$file" != "$newname" ] ; then
# the extra step with the temp filename is for case-insensitive filesystems
if [ ! -e "$newname" ] && [ ! -e "$newname.lcrnm.tmp" ] ; then
mv -T "$file" "$newname.lcrnm.tmp" && mv -T "$newname.lcrnm.tmp" "$newname"
else
echo "ERROR: Name already exists: $newname"
fi
fi
done
' sh {} +
References
My script is based on these excellent answers:
https://unix.stackexchange.com/questions/9496/looping-through-files-with-spaces-in-the-names
How to convert a string to lower case in Bash?
In OS X, mv -f shows "same file" error, so I rename twice:
for i in `find . -name "*" -type f |grep -e "[A-Z]"`; do j=`echo $i | tr '[A-Z]' '[a-z]' | sed s/\-1$//`; mv $i $i-1; mv $i-1 $j; done
I needed to do this on a Cygwin setup on Windows 7 and found that I got syntax errors with the suggestions from above that I tried (though I may have missed a working option). However, this solution straight from Ubuntu forums worked out of the can :-)
ls | while read upName; do loName=`echo "${upName}" | tr '[:upper:]' '[:lower:]'`; mv "$upName" "$loName"; done
(NB: I had previously replaced whitespace with underscores using:
for f in *\ *; do mv "$f" "${f// /_}"; done
)
Slugify Rename (regex)
It is not exactly what the OP asked for, but what I was hoping to find on this page:
A "slugify" version for renaming files so they are similar to URLs (i.e. only include alphanumeric, dots, and dashes):
rename "s/[^a-zA-Z0-9\.]+/-/g" filename
I would reach for Python in this situation, to avoid optimistically assuming paths without spaces or slashes. I've also found that python2 tends to be installed in more places than rename.
#!/usr/bin/env python2
import sys, os

def rename_dir(directory):
    print('DEBUG: rename(' + directory + ')')
    # Rename current directory if needed
    os.rename(directory, directory.lower())
    directory = directory.lower()
    # Rename children
    for fn in os.listdir(directory):
        path = os.path.join(directory, fn)
        os.rename(path, path.lower())
        path = path.lower()
        # Rename children within, if this child is a directory
        if os.path.isdir(path):
            rename_dir(path)

# Run program, using the first argument passed to this Python script as the name of the folder
rename_dir(sys.argv[1])
If you use Arch Linux, you can install the rename package from the AUR, which provides the renamexm command as the /usr/bin/renamexm executable along with a manual page.
It is a really powerful tool to quickly rename files and directories.
Convert to lowercase
rename -l Developers.mp3 # or --lowcase
Convert to UPPER case
rename -u developers.mp3 # or --upcase, long option
Other options
-R --recursive # directory and its children
-t --test # Dry run, output but don't rename
-o --owner # Change file owner as well to user specified
-v --verbose # Output what file is renamed and its new name
-s/str/str2 # Substitute string on pattern
--yes # Confirm all actions
You can fetch the sample Developers.mp3 file from here, if needed ;)
None of the solutions here worked for me because I was on a system that didn't have access to the perl rename script, plus some of the files included spaces. However, I found a variant that works:
find . -depth -exec sh -c '
t=${0%/*}/$(printf %s "${0##*/}" | tr "[:upper:]" "[:lower:]");
[ "$t" = "$0" ] || mv -i "$0" "$t"
' {} \;
Credit goes to "Gilles 'SO- stop being evil'", see this answer on the similar question "change entire directory tree to lower-case names" on the Unix & Linux StackExchange.
I believe the one-liners can be simplified (${f:l} here is zsh's lowercase modifier):
for f in **/*; do mv "$f" "${f:l}"; done
( find YOURDIR -type d | sort -r;
find yourdir -type f ) |
grep -v /CVS | grep -v /SVN |
while read f; do mv -v $f `echo $f | tr '[A-Z]' '[a-z]'`; done
First rename the directories bottom-up with sort -r (where -depth is not available), then the files.
Then grep -v /CVS instead of find ...-prune because it's simpler.
For large directories, for f in ... can overflow some shell buffers.
Use find ... | while read to avoid that.
And yes, this will clobber files which differ only in case...
find . -depth -name '*[A-Z]*'|sed -n 's/\(.*\/\)\(.*\)/mv -n -v -T \1\2 \1\L\2/p'|sh
I haven't tried the more elaborate scripts mentioned here, but none of the single command-line versions worked for me on my Synology NAS. rename is not available, and many of the variations of find fail because find seems to stick to the old name of the already renamed path (e.g., if it finds ./FOO followed by ./FOO/BAR, renaming ./FOO to ./foo will still continue to list ./FOO/BAR even though that path is no longer valid). The above command worked for me without any issues.
What follows is an explanation of each part of the command:
find . -depth -name '*[A-Z]*'
This will find any file from the current directory (change . to whatever directory you want to process), using a depth-first search (eg., it will list ./foo/bar before ./foo), but only for files that contain an uppercase character. The -name filter only applies to the base file name, not the full path. So this will list ./FOO/BAR but not ./FOO/bar. This is ok, as we don't want to rename ./FOO/bar. We want to rename ./FOO though, but that one is listed later on (this is why -depth is important).
This command by itself is particularly useful for finding the files that you want to rename in the first place. Use it after the complete rename command to search for files that still haven't been renamed because of file name collisions or errors.
sed -n 's/\(.*\/\)\(.*\)/mv -n -v -T \1\2 \1\L\2/p'
This part reads the files outputted by find and formats them in a mv command using a regular expression. The -n option stops sed from printing the input, and the p command in the search-and-replace regex outputs the replaced text.
The regex itself consists of two captures: the part up until the last / (which is the directory of the file), and the filename itself. The directory is left intact, but the filename is transformed to lowercase. So, if find outputs ./FOO/BAR, it will become mv -n -v -T ./FOO/BAR ./FOO/bar. The -n option of mv makes sure existing lowercase files are not overwritten. The -v option makes mv output every change that it makes (or doesn't make - if ./FOO/bar already exists, it outputs something like ./FOO/BAR -> ./FOO/BAR, noting that no change has been made). The -T is very important here - it treats the target file as a directory. This will make sure that ./FOO/BAR isn't moved into ./FOO/bar if that directory happens to exist.
Use this together with find to generate a list of commands that will be executed (handy to verify what will be done without actually doing it)
sh
This is pretty self-explanatory. It routes all the generated mv commands to the shell interpreter. You can replace it with bash or any shell of your liking.
Using bash, without rename:
find . -exec bash -c 'mv "$0" "${0,,}"' {} \;
