bash script to rename all files in a directory? - linux

I have a bunch of files that need to be renamed:
file1.txt needs to be renamed to file1_file1.txt
file2.avi needs to be renamed to file2_file2.avi
As you can see, I need the base name followed by an underscore and then the original file name.
There are a lot of these files.

So far all the answers given either:
require some non-portable tool,
break horribly with filenames containing spaces or newlines, or
are not recursive, i.e. do not descend into sub-directories.
These two scripts solve all of those problems.
Bash 2.X/3.X
#!/bin/bash
while IFS= read -r -d $'\0' file; do
    dirname="${file%/*}/"
    basename="${file:${#dirname}}"
    echo mv "$file" "$dirname${basename%.*}_$basename"
done < <(find . -type f -print0)
Bash 4.X
#!/bin/bash
shopt -s globstar
for file in ./**; do
    if [[ -f "$file" ]]; then
        dirname="${file%/*}/"
        basename="${file:${#dirname}}"
        echo mv "$file" "$dirname${basename%.*}_$basename"
    fi
done
Be sure to remove the echo from whichever script you choose once you are satisfied with its output, then run it again.
Edit
Fixed a problem in the previous version that did not properly handle path names.
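With the echo still in place, a dry run over the two files from the question would print something like:
mv ./file1.txt ./file1_file1.txt
mv ./file2.avi ./file2_file2.avi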

For your specific case, you want to use mmv as follows:
pax> ll
total 0
drwxr-xr-x+ 2 allachan None 0 Dec 24 09:47 .
drwxrwxrwx+ 5 allachan None 0 Dec 24 09:39 ..
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file1.txt
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file2.avi
pax> mmv '*.*' '#1_#1.#2'
pax> ll
total 0
drwxr-xr-x+ 2 allachan None 0 Dec 24 09:47 .
drwxrwxrwx+ 5 allachan None 0 Dec 24 09:39 ..
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file1_file1.txt
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file2_file2.avi
You need to be aware that the wildcard matching is not greedy. That means that the file a.b.txt will be turned into a_a.b.txt, not a.b_a.b.txt.
The mmv program was installed as part of my Cygwin, but I had to
sudo apt-get install mmv
on my Ubuntu box to get it. If it's not in your standard distribution, whatever package manager you're using will hopefully have it available.
If, for some reason, you're not permitted to install it, you'll have to use one of the other bash for-loop-type solutions shown in the other answers. I prefer the terseness of mmv myself but you may not have the option.

for file in file*.*
do
    [ -f "$file" ] && echo mv "$file" "${file%%.*}_$file"
done
Idea for recursion:
recurse() {
    for file in "$1"/*; do
        if [ -d "$file" ]; then
            recurse "$file"
        else
            # check for relevant files here; ':' is a no-op so the
            # otherwise-empty else branch still parses
            : # echo mv "$file" "${file%%.*}_$file"
        fi
    done
}
recurse /path/to/files

find . -type f | while IFS= read -r FN; do
    BFN=$(basename "$FN")
    NFN=${BFN%.*}_${BFN}
    echo "$BFN -> $NFN"
    mv "$FN" "$(dirname "$FN")/$NFN"
done

I like the Perl Cookbook's rename script for this. It may not be /bin/sh, but you can do regular-expression renames.
The /bin/sh method would be to use sed/cut/awk to alter each filename inside a for loop. If the directory is large you'd need to rely on xargs.
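As a rough sketch of that sed-in-a-loop idea (non-recursive, current directory only; keep the echo until you have checked the output):
for f in *.*
do
    [ -f "$f" ] || continue
    base=$(printf '%s\n' "$f" | sed 's/\..*$//')   # strip from the first dot onward
    echo mv "$f" "${base}_${f}"
done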

One should mention the mmv tool, which is especially made for this.
It's described here: http://tldp.org/LDP/GNU-Linux-Tools-Summary/html/mass-rename.html
...along with alternatives.

I use prename (Perl-based), which is included in various Linux distributions. It works with regular expressions, so to, say, change all img_x.jpg to IMAGE_x.jpg you'd do
prename 's/img_/IMAGE_/' img*jpg
You can use the -n flag to preview changes without making any actual changes.
prename man entry
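Applied to the renaming task in the question, a one-liner along the same lines might be (preview with -n first; the regex is my own, not from the original answer):
prename -n 's/^(.*)\.([^.]+)$/${1}_$1.$2/' *
Note that the .* here is greedy, so a.b.txt would become a.b_a.b.txt, unlike with mmv.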

#!/bin/bash
# Don't do this like I did:
# files=`ls ${1}`
for file in *.*
do
    if [ -f "$file" ]
    then
        newname=${file%%.*}_${file}
        mv "$file" "$newname"
    fi
done
This one won't rename subdirectories, only regular files.

Related

Handle Whitespace and special character in shell script (using gio)

Hi,
I am trying to handle whitespace and special characters like "&" in a shell script which is supposed to set custom directory icons using gio on Ubuntu 18.04.
When directory names consist of only a single word, e.g. MyFolder, the following script works just fine:
for dir in $(find "$PWD" -type d); do
    icon="/.FolderIcon.png"
    iconLocation="$dir$icon"
    if [ -a "$iconLocation" ]; then
        front="file://"
        gio set "$dir" metadata::custom-icon "$front$iconLocation"
    fi
done
However, when the directory is named e.g. "A & B", the above script does not change the icon of the respective directory.
So my question is: is there a way to handle directories named like "A & B" in my script?
First, for var in $(cmd) is generally an antipattern.
In most cases, what you'd probably want is something like what's suggested in https://mywiki.wooledge.org/BashFAQ/020:
while IFS= read -r -d '' dir; do
    # stuff ...
done < <(find "$PWD" -type d -print0)
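Filled in for the gio task specifically, that skeleton might look like this (a sketch, untested):
while IFS= read -r -d '' dir; do
    icon="$dir/.FolderIcon.png"
    if [ -f "$icon" ]; then
        gio set "$dir" metadata::custom-icon "file://$icon"
    fi
done < <(find "$PWD" -type d -print0)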
But for this particular example, you might just use shopt -s globstar.
I made a directory with an A & B subdirectory and ran this test loop:
$: shopt -s globstar
$: for d in **/; do touch "$d.FolderIcon.png"; if [[ -e "$d.FolderIcon.png" ]]; then ls -l "$d.FolderIcon.png"; fi; done
-rw-r--r-- 1 paul 1234567 0 Apr 20 09:25 'A & B/.FolderIcon.png'
**/ has some shortcomings - it won't find hidden directories, for example, or anything beneath them. It is pretty metacharacter-safe as long as you quote your variables, though.
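If hidden directories do matter, dotglob can be enabled alongside globstar (an aside, not part of the original answer):
shopt -s globstar dotglob   # ** now matches dot-directories as well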
Thanks to the answer from Paul Hodges, the following solution finally worked for me:
shopt -s globstar
location="/path/to/location/you/want/to/modify"
prefix="file://"
for d in **/; do
    if [[ -e "$d.FolderIcon.png" ]]; then
        gio set "$d" metadata::custom-icon "$prefix$location/$d.FolderIcon.png"
    fi
done

Differentiation on whether directory exists and permission error

Looking for a very simple way to check whether a file/directory exists while evaluating user permissions, returning a different (error) code for each case:
There is the test command, which checks for permissions but fails to provide a distinct return code for the case where the file does not exist:
$ test -r /tmp/; echo $? # 0
$ test -r /tmp/root/; echo $? # 1
$ test -r /tmp/missing/; echo $? # 1
I am looking for something similar to ls where I get a different message for different errors:
$ ls /tmp/root
ls: root: Permission denied
$ ls /tmp/missing
ls: /tmp/missing: No such file or directory
I like the differentiation, but the error code is 1 in both cases. To properly handle each error, I would have to parse stderr, which is honestly a very inelegant solution.
Isn't there a better and graceful way of doing this?
Something close to a pythonic way looks something like this:
import os
os.listdir("/tmp/root/dir/") # raises PermissionError
os.listdir("/tmp/foo/") # raises FileNotFoundError
Read the manual some more. There's also -d to specifically check whether the target is a directory, and a slew of other predicates to check for symlinks, device nodes, etc.
testthing () {
    if ! [[ -e "$1" ]]; then
        echo "$1: not found" >&2
        return 2
    elif ! [[ -d "$1" ]]; then
        echo "$1: not a directory" >&2
        return 4
    elif ! [[ -r "$1" ]]; then
        echo "$1: permission denied" >&2
        return 8
    fi
    return 0
}
Usage:
testthing "/root/no/such/directory"
Notice that [[ is a Bash built-in which is somewhat more robust and versatile than the legacy [ aka test.
It's hard to predict what the priorities should be, but if you want the comparisons in a different order, by all means go for it. It is unavoidable that the shell cannot correctly tell the precise status of a directory entry when it lacks read access to the parent directory. You could perhaps solve this from the caller by examining the existence and permissions of every entry in the path, starting from the root directory.
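A sketch of that caller-side idea (walkpath is a hypothetical helper; assumes a normalized path without doubled slashes):
walkpath () {
    local cur rest=$1 part
    [[ $rest == /* ]] && { cur=''; rest=${rest#/}; } || cur='.'
    while [[ -n $rest ]]; do
        part=${rest%%/*}                                  # next path component
        [[ $rest == */* ]] && rest=${rest#*/} || rest=''
        cur="$cur/$part"
        if ! [[ -e $cur ]]; then
            echo "$cur: no such file or directory" >&2; return 2
        elif [[ -d $cur && ! -x $cur ]]; then
            echo "$cur: not traversable" >&2; return 8
        fi
    done
    return 0
}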
The shell and standard utilities do not provide a command that does everything you seem to want:
with a single command execution,
terminate with an exit status that reports in detail on the existence and accessibility of a given path,
contextualized for the current user,
accurately even in the event that a directory prior to the last path element is untraversable (note: you cannot have this one no matter what),
(maybe) correctly for both directories and regular files.
The Python os.listdir() doesn't do all of that either, even if you exclude the applicability to regular files and traversing untraversable directories, and reinterpret what "exit status" means. However, os.listdir() and ls both demonstrate a good and useful pattern: attempt the desired operation and deal with any failure that results, instead of trying to predict what would happen if you tried it.
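In shell terms, that attempt-then-diagnose pattern might look like this sketch:
if ! ls -- "$dir" >/dev/null 2>&1; then
    # the attempt failed; now work out why
    if [[ -e $dir ]]; then
        echo "$dir: permission denied" >&2
    else
        echo "$dir: no such file or directory" >&2
    fi
fi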
Moreover, it's unclear why you want what you say you want. The main reason I can think of for wanting information about the reason for a file-access failure is user messaging, but in most cases you get that for free by just trying to perform the wanted access. If you take that out of the picture, then it rarely matters why a given path is inaccessible. Either way, you need to either switch to an alternative or fail.
If you nevertheless really do want something that does as much as possible of the above, then you probably will have to roll your own. Since you expressed concern for efficiency in some of your comments, you'll probably want to do that in C.
Given:
$ ls -l
total 0
-rw-r--r-- 1 andrew wheel 0 Mar 22 12:01 can_read
---xr-x--x 1 andrew wheel 0 Mar 22 12:01 root
drwxr-xr-x 2 andrew wheel 64 Mar 22 13:09 sub
Note that permissions are by user for the first three bits, group for the second three, and other (world) for the last three.
A Permission denied error comes from 1) trying to read or write a file without the appropriate permission bit set for your user or group, 2) trying to navigate to a directory without x set, or 3) trying to execute a file without appropriate permission.
You can test if a file is readable or not for the user with the -r test:
$ [[ -r root ]] && echo 'readable' || echo 'not readable'
not readable
So if you are only concerned with user permissions, the -r, -w and -x tests are what you are looking for.
If you want to test permissions generally, you need to use stat.
Here is a simple example with that same directory:
#!/bin/bash
arr=(can_read root sub missing)
for fn in "${arr[@]}"; do
    if [[ -e "$fn" ]]
    then
        p=( $(stat -f "%SHp %SMp %SLp" "$fn") )
        printf "File:\t%s\nUser:\t%s\nGroup:\t%s\nWorld:\t%s\nType:\t%s\n\n" "$fn" "${p[@]}" "$(stat -f "%HT" "$fn")"
    else
        echo "\"$fn\" does not exist"
    fi
done
Prints:
File: can_read
User: rw-
Group: r--
World: r--
Type: Regular File
File: root
User: --x
Group: r-x
World: --x
Type: Regular File
File: sub
User: rwx
Group: r-x
World: r-x
Type: Directory
"missing" does not exist
Alternatively, you can grab these values directly from the drwxr-xr-x type data with:
for fn in "${arr[#]}"; do
if [[ -e "$fn" ]]
then
p=$(stat -f "%Sp" "$fn")
typ="${p:0:1}"
user="${p:1:3}"
grp="${p:4:3}"
wrld="${p:7:3}"
else
echo "\"$fn\" does not exist"
fi
done
In either case, you can then test the individual permissions with Bash string functions or Bash regex, or get the octal equivalents and use bit masks.
Here is an example:
for fn in "${arr[#]}"; do
if [[ -e "$fn" ]]
then
p=$(stat -f "%Sp" "$fn")
user="${p:1:3}"
ty="$(stat -f "%HT" "$fn")"
printf "%s \"$fn\" is:\n" "$ty"
[[ $user =~ 'r' ]] && echo " readable" || echo " not readable"
[[ $user =~ 'w' ]] && echo " writeable" || echo " not writeable"
[[ $user =~ 'x' ]] && echo " executable" || echo " not executable"
else
echo "\"$fn\" does not exist"
fi
done
Prints:
Regular File "can_read" is:
readable
writeable
not executable
Regular File "root" is:
not readable
not writeable
executable
Directory "sub" is:
readable
writeable
executable
"missing" does not exist
(Note: stat tends to be platform-specific. The above is BSD stat; GNU/Linux stat uses different format strings.)
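The bit-mask route mentioned above, as a minimal sketch (GNU stat shown, since %a is a GNU format; BSD stat exposes the same octal bits through its own -f formats):
perms=0$(stat -c '%a' can_read)   # e.g. 0644; the leading 0 makes bash treat it as octal
(( perms & 0400 )) && echo "user-readable"
(( perms & 0200 )) && echo "user-writeable"
(( perms & 0100 )) && echo "user-executable"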
An example of use.
for d in 1 2 3; do
    if [[ -e $d ]]; then
        printf "%s exists" "$d"
        [[ -r $d ]] && echo " and is readable" || echo " but is not readable"
    else
        echo "$d does not exist"
    fi
    stat -c "%A %n" "$d"
done
1 exists and is readable
drwxr-xr-x 1
2 exists but is not readable
d--------- 2
3 does not exist
stat: cannot stat ‘3’: No such file or directory
If you absolutely have to have it in one step with differentiated exit codes, write a function. (a/b is there and has accessible permissions.)
$: stat -c "%A %n" ? . a/b # note there is no directory named 3
drwxr-xr-x 1
drwxr-xr-x 2
drwxr-xr-x a
drwxrwxrwt .
drwxr-xr-x a/b
$: doubletest() { if [[ -e "$1" ]]; then [[ -r "$1" ]] && return 0 || return 2; else return 1; fi; }
$: result=( "exists and is readable" "does not exist" "exists but is unreadable" ) # EDITED - apologies, these were out of order
$: for d in . a a/b 1 2 3; do doubletest $d; echo "$d ${result[$?]}"; done
. exists and is readable
a exists and is readable
a/b exists and is readable
1 exists and is readable
2 exists and is readable
3 does not exist
$: chmod 0000 a
$: for d in . a a/b 1 2 3; do doubletest $d; echo "$d ${result[$?]}"; done
. exists and is readable
a exists but is unreadable
a/b does not exist
1 exists and is readable
2 exists but is unreadable
3 does not exist
"does not exist" for a/b is because a does not have read permissions, so there is no way for any tool to know what does or does not exist in that directory short of using root privileges.
$ sudo stat -c "%A %n" ? . a/b # sudo shows a/b
drwxr-xr-x 1
drwxr-xr-x 2
d--------- a
drwxrwxrwt .
drwxr-xr-x a/b
In that case your problem isn't the tool, it's that the tool can't do what you are asking it to do.

deal with filename with space in shell [duplicate]

This question already has answers here:
Iterate over a list of files with spaces
(12 answers)
Closed 3 years ago.
I've read the answer here, but I still got it wrong.
In my folder, I only want to deal with *.gz files; Windows 10.tar.gz has a space in its filename.
Assume the folder contains:
Windows 10.tar.gz Windows7.tar.gz otherfile
Here is my shell script; I tried everything to quote with "", but still can't get what I want.
crypt_import_xml.sh
#!/bin/sh
rule_dir=/root/demo/rule
function crypt_import_xml()
{
    rule=$1
    # list the files with absolute paths
    for file in `ls ${rule}/*.gz`; do
        echo "${file}"
        #tar -xf *.gz
        #mv "a b.xml" to ab.xml
    done
}
crypt_import_xml ${rule_dir}
crypt_import_xml ${rule_dir}
Here is what I got:
root#localhost.localdomain:[/root/demo]./crypt_import_xml.sh
/root/demo/rule/Windows
10.tar.gz
/root/demo/rule/Windows7.tar.gz
After tar -xf extracts the *.gz files, the xml filenames still contain spaces. It is a nightmare for me to deal with filenames that contain spaces.
You shouldn't use ls in a for loop.
$ ls directory
file.txt 'file with more spaces.txt' 'file with spaces.txt'
Using ls:
$ for file in `ls ./directory`; do echo "$file"; done
file.txt
file
with
more
spaces.txt
file
with
spaces.txt
Using file globbing:
$ for file in ./directory/*; do echo "$file"; done
./directory/file.txt
./directory/file with more spaces.txt
./directory/file with spaces.txt
So:
for file in "$rule"/*.gz; do
echo "$file"
#tar -xf *.gz
#mv a b.xml to ab.xml
done
You do not need to call that ls command in the for loop; the file globbing will take place in your shell, without running this additional command:
XXX-macbookpro:testDir XXX$ ls -ltra
total 0
drwx------+ 123 XXX XXX 3936 Feb 22 17:15 ..
-rw-r--r-- 1 XXX XXX 0 Feb 22 17:15 abc 123
drwxr-xr-x 3 XXX XXX 96 Feb 22 17:15 .
XXX-macbookpro:testDir XXX$ rule=.
XXX-macbookpro:testDir XXX$ for f in "${rule}"/*; do echo "$f"; done
./abc 123
In your case you can change the "${rule}"/* into:
"${rule}"/*.gz;

Linux - Sum total of files in different directories

How do I calculate the sum total size of multiple files located in different directories?
I have a text file containing the full path and name of the files.
I figure a simple script using while read line and du -h might do the trick...
Example of text file (new2.txt) containing list of files to sum:
/mount/st4000/media/A/amediafile.ext
/mount/st4000/media/B/amediafile.ext
/mount/st4000/media/C/amediafile.ext
/mount/st4000/media/D/amediafile.ext
/mount/st4000/media/E/amediafile.ext
/mount/st4000/media/F/amediafile.ext
/mount/st4000/media/G/amediafile.ext
/mount/st4000/media/H/amediafile.ext
/mount/st4000/media/I/amediafile.ext
/mount/st4000/media/J/amediafile.ext
/mount/st4000/media/K/amediafile.ext
Note: the folder structure is not necessarily consecutive as in A..K
Based on the suggestion from AndreaT, adapting it slightly, I tried:
while read mediafile; do du -b "$mediafile" | cut -f -1 >> subtotals.txt; done < new2.txt
subtotals.txt looks like
733402685
944869798
730564608
213768
13332480
366983168
6122559750
539944960
735039488
1755005744
733478912
To add all the subtotals
sum=0; while read num; do ((sum += num)); done < subtotals.txt; echo $sum
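The summation step can also be done in one line with awk, if you prefer:
awk '{ sum += $1 } END { print sum }' subtotals.txt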
Assuming that file input is like this
/home/administrator/filesum/cliprdr.c
/home/administrator/filesum/cliprdr.h
/home/administrator/filesum/event.c
/home/administrator/filesum/event.h
/home/administrator/filesum/main.c
/home/administrator/filesum/main.h
/home/administrator/filesum/utils.c
/home/administrator/filesum/utils.h
and the result of command ls -l is
-rw-r--r-- 1 administrator administrator 13452 Oct 4 17:56 cliprdr.c
-rw-r--r-- 1 administrator administrator 1240 Oct 4 17:56 cliprdr.h
-rw-r--r-- 1 administrator administrator 8141 Oct 4 17:56 event.c
-rw-r--r-- 1 administrator administrator 2164 Oct 4 17:56 event.h
-rw-r--r-- 1 administrator administrator 32403 Oct 4 17:56 main.c
-rw-r--r-- 1 administrator administrator 1074 Oct 4 17:56 main.h
-rw-r--r-- 1 administrator administrator 5452 Oct 4 17:56 utils.c
-rw-r--r-- 1 administrator administrator 1017 Oct 4 17:56 utils.h
the simplest command to run is (du does not read file names from stdin, so feed the list through xargs):
xargs -d '\n' du -cb < filelist.txt | tail -1 | cut -f -1
with the following output (in bytes):
69370
Keep in mind that without -b, du prints actual disk usage rounded up to a multiple of (usually) 4 KB instead of the logical file size; -b asks GNU du for the apparent size in bytes.
For small files this approximation may not be acceptable.
To sum one directory, you will have to use a while loop and export the result to the parent shell.
I used an echo and a subsequent eval:
eval 'let sum=0$(
    ls -l | tail -n +2 |\
    while read perms link user uid size date day hour name; do
        echo -n "+$size"
    done
)'
It produces a line, directly evaluated, which looks like
let sum=0+205+1201+1201+1530+128+99
You just have to run this command once in each of the two folders.
The du command doesn't have a -b option on the Unix systems I have available, and there are other ways to get file size.
Assuming you like the idea of a while loop in bash, the following might work:
#!/bin/bash
case "$(uname -s)" in
    Linux) stat_opt=(-c '%s') ;;
    *BSD|Darwin) stat_opt=(-f '%z') ;;
    *) printf 'ERROR: I don'\''t know how to run on %s\n' "$(uname -s)"; exit 1 ;;
esac
declare -i total=0
declare -i count=0
declare filename
while IFS= read -r filename; do
    [[ -f "$filename" ]] || continue
    (( total += $(stat "${stat_opt[@]}" "$filename") ))
    (( count++ ))
done
printf 'Total: %d bytes in %d files.\n' "$total" "$count"
This would take your list of files as stdin. You can run it in BSD unix or in Linux -- the options to the stat command (which is not internal to bash) are the bit that are platform specific.
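Usage, assuming the script was saved as sumfiles.sh (the name is arbitrary):
./sumfiles.sh < new2.txt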

rsync prints "skipping non-regular file" for what appears to be a regular directory

I back up my files using rsync. Right after a sync, I ran it expecting to see nothing, but instead it looked like it was skipping directories. I've (obviously) changed names, but I believe I've still captured all the information I could. What's happening here?
$ ls -l /source/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ ls -l /destination/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ file /source/backup/myfiles/foo
/source/backup/myfiles/foo/: directory
Then I sync (expecting no changes):
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
And here's the weird part:
$ echo 'hi' > /source/backup/myfiles/foo/test
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
backup/myfiles/foo
backup/myfiles/foo/test
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
So it worked:
$ ls -l /source/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
$ ls -l /destination/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
but still:
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
Other notes:
My actual directories "foo" and "bar" do have spaces, but no other strange characters. Other directories have spaces and have no problem. I 'stat'-ed and saw no differences between the directories that don't rsync and the ones that do.
If you need more information, just ask.
Are you absolutely sure those individual files are not symbolic links?
Rsync has a few useful flags, such as -l, which will "copy symlinks as symlinks". Adding -l to your command:
rsync -rtvpl /source/backup /destination
I believe symlinks are skipped by default because they can be a security risk. Check the man page or --help for more info on this:
rsync --help | grep link
To verify that these are symbolic links, or proactively to find symbolic links, you can use file or find:
$ file /path/to/file
/path/to/file: symbolic link to `/path/file`
$ find /path -type l
/path/to/file
Are you absolutely sure that it's not a symbolic link to a directory?
try a:
file /source/backup/myfiles/foo
to make sure it's a directory
Also, it could very well be a loopback mount
try
mount
and make sure that /source/backup/myfiles/foo is not listed.
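For example, checking for the directory in the mount table:
mount | grep 'myfiles/foo'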
You should try the command below; most probably it will work for you:
rsync -ravz /source/backup /destination
You can try the following, it will work
rsync -rtvp /source/backup /destination
I personally always use this syntax in my scripts, and it works a treat for backing up the entire system (skipping sys/*, proc/*, and nfs4/*):
sudo rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
Here is my script run by root's cron daily:
#!/bin/bash
#
NFS="/nfs4"
HOSTNAME=`hostname`
TIMESTAMP=`date "+%Y%m%d_%H%M%S"`
EXCLUDE="/home/gcclinux/Backups/root-rsync.excludes"
TARGET="${NFS}/${HOSTNAME}/SYS"
LOGDIR="${NFS}/${HOSTNAME}/SYS-LOG"
CMD=`/usr/bin/stat -f -L -c %T ${NFS}`

## CHECK IF NFS IS MOUNTED...
if [[ ! $CMD == "nfs" ]]; then
    echo "NFS NOT MOUNTED"
    exit 1
fi

## CHECK IF LOG DIRECTORY EXISTS
if [ ! -d "$LOGDIR" ]; then
    /bin/mkdir -p $LOGDIR
fi

## CREATE LOG HEADER
LOG=$LOGDIR/"rsync_result."$TIMESTAMP".txt"
echo "-------------------------------------------------------" | tee -a $LOG
echo `date` | tee -a $LOG
echo "" | tee -a $LOG

## START RUNNING BACKUP
/usr/bin/rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
In some cases, just copy the file to another location (like home) and then try again.
