Bulk rename files in Unix with current date as suffix - linux

I am trying to bulk rename all the files in the current folder with a date suffix:
rename 's/(.*)/$1_$(date +%F)/' *
But that command renames info.txt to info.txt_1000 4 24 27 30 46 113 128 1000date +%F). I want the result to be info.txt_2016-10-13

You want $1 to be passed literally to rename, yet $(date +%F) to be expanded by the shell. The latter won't happen when you use single quotes, only with double quotes. (With single quotes, perl received $( untouched and expanded it as its built-in group-ID list variable, which is where that string of numbers came from.) The solution is to use double quotes and escape $1 so the shell doesn't expand it.
rename "s/(.*)/\$1_$(date +%F)/" *

Portable POSIX shell solution
Since you said "in Unix" and the rename command isn't portable (it's actually part of the perl package), here is a solution that should work in more environments:
for file in *; do mv "$file" "${file}_$( date +%F )"; done
This creates a loop and then moves each individual file to the new name. Like the question, it uses date +%F via shell command substitution to insert the "full date" (YYYY-mm-dd). The command substitution must sit inside double quotes (") and not single quotes (') because single quotes shut off shell interpretations (this goes for variables as well), as noted in John's answer.
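If the directory can also contain subdirectories, a minimal refinement of the same loop skips anything that isn't a regular file:
for file in *; do [ -f "$file" ] || continue; mv "$file" "${file}_$( date +%F )"; done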
Argument list too long
Your shell will complain if the directory has too many files in it, with an error like "Argument list too long." This is because * expands to the contents of the directory, all of which become arguments to the command you're running.
To solve that, create a script and then feed it through xargs as follows:
#!/bin/sh
if [ -z "$1" ]; then set *; fi # default to all files in curent directory
for file in "$#"; do mv "$file" "${file}_$( date +%F )"; done
Using ls to generate a file list isn't always wise for scripts (it can do weird things in certain contexts). Use find instead:
find . -type f -not -name '*_20??-??-??' -print0 |xargs -0 sh /path/to/script.sh
Note that the command is recursive (change that by adding -maxdepth 1 after the dot). find is extremely capable. In this example, it finds all files (-type f) that do not match the shell glob *_20??-??-?? (* matches any number of any characters, ? matches exactly one of any character, so this matches abc_2016-10-14 but not abc_2016-10-14-def). It uses find … -print0 and xargs -0 to ensure spacing is properly preserved (instead of spaces, these use the null character as the delimiter).
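For instance, a non-recursive variant of the same pipeline (GNU find) looks like this:
find . -maxdepth 1 -type f -not -name '*_20??-??-??' -print0 | xargs -0 sh /path/to/script.sh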
You can also do this with perl rename:
find . -type f -not -name '*_20??-??-??' -print0 |xargs -0 \
rename "s/(.*)/\$1_$( date +%F )/"

Related

Is there a utility for creating hard link backup?

I need to create a clone of a directory tree so I can clean up duplicate files.
I don't need copies of the files, I just need the files, so I want to create a matching tree with hard links.
I threw this together in a couple of minutes when I realized my backup was going to take hours.
It just echoes the commands, which I redirect to a file to examine before I run it.
Of course the usual problems, like files and directories containing quotes or commas, have not been addressed (bash scripting sucks for this, doesn't it? That, and files containing leading dashes).
Isn't there some utility that already does this in a robust fashion?
BASEDIR=$1
DESTDIR=$2
for DIR in `find "$BASEDIR" -type d`
do
RELPATH=`echo $DIR | sed "s,$BASEDIR,,"`
DESTPATH=${DESTDIR}/$RELPATH
echo mkdir -p \"$DESTPATH\"
done
for FILE in `find "$BASEDIR" -type f`
do
RELPATH=`echo $FILE | sed "s,$BASEDIR,,"`
DESTPATH=${DESTDIR}/$RELPATH
echo ln \"$FILE\" \"$DESTPATH\"
done
Generally, using find like that is a bad idea: you are relying on separating filenames on whitespace, when in fact all forms of whitespace are valid in filenames on most UNIX systems. find itself has the ability to run a single command on each file found, which is generally the better tool. I would suggest something like this (I'd use a couple of scripts for simplicity; I'm not sure how easy it would be to do it all in one):
main.sh:
BASEDIR="$1" #I tend to quote all variables - good habit to avoid problems with spaces, etc.
DESTDIR="$2"
find "$BASEDIR" -type d -exec ./handle_file.sh \{\} "$BASEDIR" "$DESTDIR" \; # \{\} is replaced with the filename, \; tells find the command is over
find "$BASEDIR" -type f -exec ./handle_file.sh \{\} "$BASEDIR" "$DESTDIR" \;
handle_file.sh:
FILENAME="$1"
BASEDIR="$2"
DESTDIR="$3"
RELPATH="${FILENAME#"$BASEDIR"}" # bash string substitution double quoting, to stop BASEDIR being interpreted as a pattern
DESTPATH="${DESTDIR}/$RELPATH"
if [ -f "$FILENAME" ]; then
echo ln \""$FILENAME"\" \""$DESTPATH"\"
elif [ -d "$FILENAME" ]; then
echo mkdir -p \""$DESTPATH"\"
fi
I've tested this with a simple tree with spaces, asterisks, apostrophes and even a carriage return in filenames and it seems to work.
Obviously remove the escaped quotes and the "echo" (but leave the real quotes) to make it work for real.
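For what it's worth, if you have GNU coreutils, plain cp can do the whole job robustly in one command: -a replicates the directory tree and preserves attributes, and -l makes hard links to the files instead of copies.
cp -al "$BASEDIR" "$DESTDIR"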

problems with source command in shell

Good afternoon. I have the following command to run my 'code.sh' file, to which I pass a parameter '$1'. The problem is that I want to run 'code.sh' with 'source'. This is my command:
find . -name "*.txt" -type f -exec ./code.sh {} \;
And I want to invoke it like this:
source ./code.sh
This is tricky. When you source a script you need to do it in the current shell, not in a sub-shell or child process. Executing source from find won't work because any command find runs is a child process, so changes to environment variables would be lost.
It's rather roundabout, but you can use a loop to parse find's output and run the source commands directly in the top-level shell (using process substitution).
while IFS= read -r -d $'\0' fileName; do
source code.sh "$fileName"
done < <(find . -name "*.txt" -type f -print0)
Now what's with -print0 and -d $'\0', you ask? Using these two flags together is a way of making the script extra safe.† File names in UNIX are allowed to contain lots of oddball characters including spaces, tabs, and even newlines. While newlines are rare, they are indeed legal.
-print0 tells find to use NUL characters (\0) to separate the file names rather than the default newlines (\n). Doing this means file names containing \n won't mess up the loop. Using \0 as a separator works well because \0 is not a legal character in file names.
-d $'\0'‡ does the same thing with read on the other side. It tells read that lines are delimited with \0 instead of \n.
† You may have seen this trick before. It's common to write find ... -print0 | xargs -0 ... to get the same sort of safety when pairing find with xargs.
‡ If you're wondering about $'...': that's Bash ANSI-C quoting syntax for writing string literals containing escape codes. Dollar sign plus single quotes. You can write $'\n' for a newline or $'\t' for a tab or $'\0' for a NUL.
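As a quick sanity check, you can feed a hand-built NUL-separated stream into such a loop (the names here are made up): you'll get exactly two records back, even though one of them contains a newline.
printf 'one two\0three\nfour\0' | while IFS= read -r -d $'\0' item; do echo "[$item]"; done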
You won't be able to use find in this way; it will always execute a command in a separate process, not the current shell. If you are using bash 4, there's a simple alternative to using find:
shopt -s globstar
for f in **/*.txt; do
[[ -f $f ]] && source code.sh "$f"
done

bash script collecting filenames seems to get confused by spaces

I'm trying to build a script that lists all the zip files in a set of directories, with some filters, and spits them out to a file, but when a filename has a space in it, it appears on a new line.
This list will eventually be used as an input to tar to gzip all the zip files. The script is below:
#!/bin/bash
rm -f set1.txt
rm -f set2.txt
for line in $(find /home -type d -name assets ;);
do
echo $line >> set1.txt
for line in $(find $line -type f -name \*.zip -mtime +2 ;);
do
echo \"$line\" >> set2.txt
done;
done
This works as expected until you get a space in a filename then set2.txt contains entries like this:
"/home/xxxxxx/oldwebroot/htdocs/upload/assets/jobbags/rbjbCost"
"in"
"use"
"sept"
"2010.zip"
Does anyone know how I can keep these filenames with spaces on a single line, with the whole lot wrapped in one set of quotes?
Thanks!
The correct way to loop over a set of files located via find is with a while read construct, thus:
while IFS= read -r -d '' line ; do
echo "$line" >> set1.txt
while IFS= read -r -d '' file ; do
printf '"%s"\n' "$file" >> set2.txt
done < <(find "$line" -type f -name \*.zip -mtime +2 -print0)
done < <(find /home -type d -name assets -print0)
For clarity I have given the inner loop variable a different name.
If you didn't have bash you'd have to issue the find command separately and redirect the output to a file, then read the file with while read ; do .. done < filename.
Note that each expansion of each variable is double-quoted. This is necessary.
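To see why, compare what the shell does with an unquoted versus a quoted expansion, using one of the filenames from the question:
f="rbjbCost in use sept 2010.zip"
echo mv $f      # word splitting: mv receives five separate arguments
echo mv "$f"    # quoted: mv receives the filename as a single argument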
Note also, however, that for what you want you can simply use the -printf switch to find, if you have GNU find.
find /home -type f -path '*/assets/*.zip' -mtime +2 -printf '"%p"\n' > set2.txt
Although, as @sarnold notes, this is not safe.
You should probably be executing your tar(1) command through some other mechanism; the find(1) program supports a -print0 option to request ASCII NUL-separated filename output, and the xargs(1) program supports a -0 option to tell it that the input is separated by ASCII NUL characters. (Since NUL is the only character that is not allowed in filenames, this is the only way to get reliable filename handling.)
Simply using the -print0 and -0 options will help but this still leaves the script open to another problem -- xargs(1) might decide to execute the tar(1) command two, three, or more times, depending upon its input. The last execution is the one that will "win", and the data from earlier invocations will be lost for ever. (This is useless as a backup.)
So you should also look into tar(1)'s --append (-r) option, so that each invocation adds files to the archive instead of overwriting it. It might make sense to perform the compression after all the files have been added, via gzip(1) or bzip2(1). (This does mean you need to remove the archive before a "fresh run" of this script.)
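A sketch of that approach, assuming GNU tar (the archive name backup.tar is made up, and per the note above it must not exist before a fresh run):
find /home -type f -path '*/assets/*.zip' -mtime +2 -print0 |
xargs -0 tar --append --file=backup.tar
gzip backup.tar
Each xargs invocation appends to the same archive, so nothing is lost, and the compression happens once at the end.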

Add prefix to all images (recursive)

I have a folder with more than 5000 images, all with the JPG extension.
What I want to do is recursively add the "thumb_" prefix to all the images.
I found a similar question: Rename Files and Directories (Add Prefix), but I only want to add the prefix to files with the JPG extension.
One possible solution:
find . -name '*.jpg' -printf "'%p' '%h/thumb_%f'\n" | xargs -n2 echo mv
The principle: find all the needed files, and prepare the arguments for the standard mv command.
Notes:
the arguments for mv are surrounded by ' to allow spaces in filenames.
The drawback: this will not work with filenames that contain the ' apostrophe itself, as many mp3 files do. If you need to move stranger filenames, see below.
the above command is a dry run (it only shows the mv commands with their arguments). For real work, remove the echo preceding mv.
Renaming ANY filename: in the shell you need a delimiter, and the problem is that the filename (stored in a shell variable) can itself contain the delimiter, so:
mv $file $newfile #will fail, if the filename contains space, TAB or newline
mv "$file" "$newfile" #will fail, if the any of the filenames contains "
the correct solutions are either:
prepare the filename with proper escaping, or
use a scripting language that easily understands ANY filename.
Preparing the correct escaping in bash is possible with its internal printf and the %q formatting directive (= print quoted), but this solution is long and boring.
IMHO, the easiest way is to use perl with NUL-delimited -print0, like the next command.
find . -name \*.jpg -print0 | perl -MFile::Basename -0nle 'rename $_, dirname($_)."/thumb_".basename($_)'
The above uses perl's power to munge the filenames and finally renames the files.
Beware of filenames with spaces in (the for ... in ... expression trips over those), and be aware that the result of a find . ... will always start with ./ (and hence try to give you names like thumb_./file.JPG which isn't quite correct).
This is therefore not a trivial thing to get right under all circumstances. The expression I've found to work correctly (with spaces, subdirs and all that) is:
find . -iname \*.JPG -exec bash -c 'mv "$1" "`echo $1 | sed \"s/\(.*\)\//\1\/thumb_/\"`"' -- '{}' \;
Even that can fall foul of certain names (with quotes in) ...
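A variant that sidesteps sed and the backticks entirely, using only parameter expansion inside the per-file shell (the echo makes this a dry run, as before):
find . -iname '*.jpg' -exec sh -c 'for f; do echo mv "$f" "${f%/*}/thumb_${f##*/}"; done' _ {} +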
In OS X 10.8.5, find does not have the -printf option. The port that contained rename seemed to depend upon a WebkitGTK development package that was taking hours to install.
This one line, recursive file rename script worked for me:
find . -iname "*.jpg" -print | while read name; do cur_dir=$(dirname "$name"); cur_file=$(basename "$name"); mv "$name" "$cur_dir/thumb_$cur_file"; done
I was actually renaming CakePHP view files with an 'admin_' prefix, to move them all to an admin section.
You can use that same answer; just use *.jpg instead of just *.
for file in *.JPG; do mv "$file" "thumb_$file"; done
if it's multiple directory levels under the current one:
for file in $(find . -name '*.JPG'); do mv $file $(dirname $file)/thumb_$(basename $file); done
proof:
jcomeau@intrepid:/tmp$ mkdir test test/a test/a/b test/a/b/c
jcomeau@intrepid:/tmp$ touch test/a/A.JPG test/a/b/B.JPG test/a/b/c/C.JPG
jcomeau@intrepid:/tmp$ cd test
jcomeau@intrepid:/tmp/test$ for file in $(find . -name '*.JPG'); do mv $file $(dirname $file)/thumb_$(basename $file); done
jcomeau@intrepid:/tmp/test$ find .
.
./a
./a/b
./a/b/thumb_B.JPG
./a/b/c
./a/b/c/thumb_C.JPG
./a/thumb_A.JPG
jcomeau@intrepid:/tmp/test$
Use rename for this:
rename 's/([^\/]+\.JPG)$/thumb_$1/' `find . -type f -name '*.JPG'`
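A safer pairing that avoids the backticks (and therefore survives spaces in names); with the perl rename, -n previews the changes without applying them:
find . -type f -name '*.JPG' -exec rename -n 's/([^\/]+\.JPG)$/thumb_$1/' {} +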
For only jpg files in current folder
for f in *.jpg ; do mv "$f" "PRE_$f" ; done

Strip leading dot from filenames bash script

I have some files in a bunch of directories that have a leading dot and thus are hidden. I would like to revert that and strip the leading dot.
I was unsuccessful with the following:
for file in `find files/ -type f`;
do
base=`basename $file`
if [ `$base | cut -c1-2` = "." ];
then newname=`$base | cut -c2-`;
dirs=`dirname $file`;
echo $dirs/$newname;
fi
done
Which fails on the condition statement:
[: =: unary operator expected
Furthermore, some files have a space in their name, and the loop splits those names apart.
Any help would be appreciated.
The easiest way to delete something from the start of a variable is to use ${var#pattern}.
$ FILENAME=.bashrc; echo "${FILENAME#.}"
bashrc
$ FILENAME=/etc/fstab; echo "${FILENAME#.}"
/etc/fstab
See the bash man page:
${parameter#word}
${parameter##word}
The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is
the expanded value of parameter with the shortest matching pattern (the ‘‘#’’ case) or the longest matching pattern (the ‘‘##’’ case) deleted.
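A quick illustration of the shortest/longest distinction:
$ v=archive.tar.gz; echo "${v#*.}"
tar.gz
$ v=archive.tar.gz; echo "${v##*.}"
gz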
By the way, with a more selective find command you don't need to do all the hard work. You can have find only match files with a leading dot:
find files/ -type f -name '.*'
Throwing that all together, then:
find files/ -type f -name '.*' -print0 |
while IFS= read -r -d $'\0' path; do
dir=$(dirname "$path")
file=$(basename "$path")
mv "$dir/$file" "$dir/${file#.}"
done
Additional notes:
To handle file names with spaces properly you need to quote variable names when you reference them. Write "$file" instead of just $file.
For extra robustness the -print0 and read -d $'\0' use NUL characters as delimiters so even file names with embedded newlines '\n' will work.
find files/ -name '.*' -printf '%f\n'|while read f; do
mv "files/$f" "files/${f#.}"
done
This script works with any file you can throw at it, even if the names contain spaces, newlines or other nefarious characters.
It works no matter how many subdirectories deep the hidden file is.
Unlike the other answers thus far, you don't have to change the rest of the script when you change the path given to find.
Note: I included an echo so that you can test it like a dry run. Remove the single echo if you are satisfied with the results.
find . -name '.*' -exec sh -c 'for arg; do d="${arg%/*}"; f="${arg##*/}"; echo mv "$arg" "$d/${f#.}"; done' _ {} +
