A lot of files I download have crap/spam in their filenames, e.g.
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
I've come up with two ways of dealing with them, but they both seem pretty clunky:
with parameter expansion:
if [[ ${base_name} != ${base_name//\[+([^\]])\]} ]]
then
    mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//\[+([^\]])\]}" &&
        base_name="${base_name//\[+([^\]])\]}"
fi
if [[ ${base_name} != ${base_name//www.*.com - /} ]]
then
    mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//www.*.com - /}" &&
        base_name="${base_name//www.*.com - /}"
fi
# more statements like these; one for each frequently-encountered pattern
and then with echo/sed:
tmp=`echo "${base_name}" | sed -e 's/\[[^][]*\]//g' | sed -e 's/\s-\s//g'`
mv "${base_name}" "${tmp}"
I feel like the parameter expansion is the worse of the two, but I like it because I'm able to keep the same variable assigned to the file for further processing after the rename (the above code is used in a script that's called for each file after the download completes).
So anyway, I was hoping there's a better/cleaner way to do the above that someone more knowledgeable could show me, preferably one that would let me easily reassign the original variable to the renamed file.
Thanks
Two answers: one using perl rename and one using pure bash.
As there are some people who dislike perl, I wrote my bash-only version.
Renaming files by using the rename command.
Introduction
Yes, this is a typical job for the rename command, which was designed precisely for this:
man rename | sed -ne '/example/,/^[^ ]/p'
For example, to rename all files matching "*.bak" to strip the
extension, you might say
rename 's/\.bak$//' *.bak
To translate uppercase names to lower, you'd use
rename 'y/A-Z/a-z/' *
More targeted samples
Simply drop all spaces and square brackets:
rename 's/[ \[\]]*//g;' *.ext
Rename all .jpg files, numbering them from 1:
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
Demo:
touch {a..e}.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 e.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 d.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 c.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 b.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 a.jpg
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00005.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00004.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00003.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00002.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00001.JPG
Full syntax for matching this question's filenames, in a safe way
There is a strong and safe way to do this using the rename utility:
As this is a common perl tool, we have to use perl syntax:
rename 'my $o=$_;
    s/[ \[\]]+/-/g;
    s/-+/-/g;
    s/^-//g;
    s/-(\..*|)$/$1/g;
    s/(.*[^\d])(|-(\d+))(\.[a-z0-9]{2,6})$/
        my $i=$3;
        $i=0 unless $i;
        sprintf("%s-%d%s", $1, $i+1, $4)
    /eg while
        $o ne $_ &&
        -f $_;
' *
Testing the rule:
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name.ext
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
www.crap.com-file.name-1.ext
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
www.crap.com-file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
... and so on...
... and it's safe as long as you don't use the -f flag with the rename command: files won't be overwritten, and you will get an error message if something goes wrong.
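The perl rename also supports a dry-run flag, -n, which only prints the renames it would perform without touching anything; it is convenient while developing a rule like the ones above:
rename -n 's/[ \[\]]+/-/g' *.ext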
Renaming files by using bash and so-called bashisms:
I prefer doing this with a dedicated utility, but it could even be done in pure bash (i.e. without any fork).
No binary other than bash is used (no sed, awk, tr or other):
#!/bin/bash
for file; do
    newname=${file//[ \]\[]/.}
    while [ "$newname" != "${newname#.}" ]; do
        newname=${newname#.}
    done
    while [ "$newname" != "${newname//[.-][.-]/.}" ]; do
        newname=${newname//[.-][.-]/-}
    done
    if [ "$file" != "$newname" ]; then
        if [ -f "$newname" ]; then
            ext=${newname##*.}
            basename=${newname%.$ext}
            partname=${basename%%-[0-9]}
            count=${basename#${partname}-}
            [ "$partname" = "$count" ] && count=0
            while printf -v newname "%s-%d.%s" "$partname" $((++count)) "$ext" &&
                  [ -f "$newname" ]; do
                :
            done
        fi
        mv "$file" "$newname"
    fi
done
To be run with the files as arguments, for example:
/path/to/my/script.sh \[*
Replace spaces and square brackets with dots.
Replace sequences of .-, -., -- or .. with a single -.
Test whether the filename differs; if not, there is nothing to do.
Test whether a file with the new name already exists...
Split filename, counter and extension, to build an indexed new name.
Loop while a file with the new name exists.
Finally, rename the file.
Take advantage of the following classical pattern:
job_select /path/to/directory | job_strategy | job_process
where job_select is responsible for selecting the objects of your job, job_strategy prepares a processing plan for these objects and job_process eventually executes the plan.
This assumes that filenames do not contain a vertical bar | nor a newline character.
The job_select function
# job_select PATH
# Produce the list of files to process
job_select()
{
    find "$1" -name 'www.*.com - *' -o -name '[*] - *'
}
The find command can examine all the properties of the file maintained by the file system, like creation time, access time and modification time. It is also possible to control how the filesystem is explored, by telling find not to descend into mounted filesystems or how many recursion levels are allowed. It is common to append pipes to the find command to perform more complicated selections based on the filename.
Avoid the common pitfall of including the contents of hidden directories in the output of the job_select function. For instance, the directories CVS, .svn, .svk and .git are used by the corresponding source control management tools and it is almost always wrong to include their contents in the output of the job_select function. By inadvertently batch processing these files, one can easily make the affected working copy unusable.
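For instance, here is a hedged sketch of a bounded job_select variant that stays on one filesystem, limits recursion and prunes the VCS directories mentioned above (the function name and the depth limit of 3 are arbitrary choices; -maxdepth, -xdev and -prune are standard GNU find options):
# job_select_bounded PATH
# Like job_select, but bounded in depth and pruned of VCS directories (a sketch)
job_select_bounded()
{
    find "$1" -maxdepth 3 -xdev \
        \( -name CVS -o -name .svn -o -name .svk -o -name .git \) -prune -o \
        \( -name 'www.*.com - *' -o -name '[*] - *' \) -print
}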
The job_strategy function
# job_strategy
# Prepare a plan for renaming files
job_strategy()
{
    sed -e '
        h
        s#/www\..*\.com - *#/#
        s#/\[[^]]*\] *#/#
        x
        G
        s/\n/|/
    '
}
This command reads the output of job_select and makes a plan for our renaming job. The plan is represented by text lines with two fields separated by the character |; the first field is the old name of the file and the second is the newly computed name. It looks like:
[ www.crap.com ] file.name.1.ext|file.name.1.ext
www.crap.com - file.name.2.ext|file.name.2.ext
The particular program used to produce the plan is essentially irrelevant, but it is common to use sed (as in the example), awk or perl for this. Let us walk through the sed script used here:
h Replace the contents of the hold space with the contents of the pattern space.
… Edit the contents of the pattern space.
x Swap the contents of the pattern and hold spaces.
G Append a newline character followed by the contents of the hold space to the pattern space.
s/\n/|/ Replace the newline character in the pattern space by a vertical bar.
It can be easier to use several filters to prepare the plan. Another common case is the use of the stat command to add creation times to file names.
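As a hedged sketch of such a filter (the function name is invented here; stat -c and its %y modification time are GNU coreutils features, used since a portable creation time is not generally available), one could append each file's date to the stream:
# job_tag_dates
# Append |YYYY-MM-DD (modification date) to each incoming file name
job_tag_dates()
{
    while IFS='' read -r name; do
        printf '%s|%s\n' "$name" "$(stat -c %y "$name" | cut -d' ' -f1)"
    done
}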
The job_process function
# job_process
# Rename files according to a plan
job_process()
{
    local oldname
    local newname
    while IFS='|' read -r oldname newname; do
        mv "$oldname" "$newname"
    done
}
The input field separator IFS is adjusted to let the function read the output of job_strategy. Declaring oldname and newname as local is useful in large programs but can be omitted in very simple scripts. The job_process function can be adjusted to avoid overwriting existing files and report the problematic items.
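With the three functions defined as above, the whole program reduces to the classical pipeline. A minimal sketch of the main entry point, with a tee inserted so the plan is also saved for later inspection (the plan.txt name is arbitrary):
main()
{
    job_select "$1" | job_strategy | tee plan.txt | job_process
}

main /path/to/directory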
About data structures in shell programs
Note the use of pipes to transfer data from one stage to the other: apprentices often rely on variables to represent such information but it turns out to be a clumsy choice. Instead, it is preferable to represent data as tabular files or as tabular data streams moving from one process to the other, in this form, data can be easily processed by powerful tools like sed, awk, join, paste and sort — only to cite the most common ones.
You can use rnm
rnm -rs '/\[crap\]|\[spam\]//g' *.ext
The above will remove [crap] or [spam] from the filenames.
You can pass multiple regex patterns by terminating them with ; or overloading the -rs option.
rnm -rs '/[\[\]]//g;/\s*\[crap\]//g' -rs '/crap2//' *.ext
The general format of this replace string is /search_part/replace_part/modifier
search_part : regex to search for.
replace_part : string to replace with
modifier : i (case insensitive), g (global replace)
uppercase/lowercase:
A replace string of the form /search_part/\c/modifier will make the part of the filename selected by the regex search_part lowercase, while \C (capital C) in the replace part will make it uppercase.
rnm -rs '/[abcd]/\C/g' *.ext
## this will capitalize all a,b,c,d in the filenames
If you have many regex patterns that need to be dealt with, then put those patterns in a file and pass the file with the -rs/f option.
rnm -rs/f /path/to/regex/pattern/file *.ext
You can find some other examples here.
Note:
rnm uses PCRE2 (revised PCRE) regex.
You can undo an unwanted rename operation by running rnm -u
P.S: I am the author of this tool.
If you are using a Ubuntu/Debian OS, use the rename command to rename multiple files at a time.
If you want to use something that does not depend on perl, you can use the following code (let's call it sanitizeNames.sh). It only handles a few cases, but it's easily extensible using string substitution, tr (and sed too).
#!/bin/bash
ls "$@" | while read -r f; do
   newfname=$(echo "$f" |
              tr -d '\[ ' |   # Remove opening square brackets and spaces
              tr ' \]' '-' |  # Translate closing square brackets to dashes
              tr -s '-' |     # Squeeze multiple dashes
              tr -s '.'       # Squeeze multiple dots
             )
   newfname=${newfname//-./.}
   if [ -f "$newfname" ]; then
      # Some string magic...
      extension=${newfname##*\.}
      basename=${newfname%\.*}
      basename=${basename%\-[1-9]*}
      lastNum=$(ls "$basename"* | wc -l)
      mv "$f" "$basename-$lastNum.$extension"
   else
      mv "$f" "$newfname"
   fi
done
And use it:
$ touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext' '[ www.crap.com ] - file.name.ext' '[www.crap.com ].file.anothername.ext2' '[www.crap.com ].file.name.ext'
$ ls -1 *crap*
[ www.crap.com ] - file.name.ext
[ www.crap.com ] file.name.ext
[www.crap.com ].file.anothername.ext2
[www.crap.com ].file.name.ext
www.crap.com - file.name.ext
$ ./sanitizeNames.sh *crap*
$ ls -1 *crap*
www.crap.com-file.anothername.ext2
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
Related
I would like to clean up a folder with videos. I have a bunch of videos that were downloaded with different resolutions, so each file will start with the same name and then end with "_480p" or "_720p" etc.
I just want to keep the largest file of each such set.
So I am looking for a way to delete files based on:
check whether the name before "_" is identical
if true, then delete all files except the largest one
Thinking of a flexible and fast way to approach the problem, you can gather a list of files ending in "[[:digit:]]+p" and then parse the names quickly by providing them on stdin to awk, letting awk index an array with the file prefix (path + part of name before '_'); the prefix is unique per file set, which lets the resolution size be obtained and stored at that index.
Then it's simply a matter of comparing the stored resolution number against the current file's number and deleting the lesser of the two.
Your find command to locate all the files in the directory below the current one, recursively, could be:
find ./tmp -type f -regex "^.*[0-9]+p$"
What I would do is then pipe the filename output to a short awk script where an array stores the last seen number for a given file prefix. If the current record's (line's) resolution number is bigger than the value stored in the array, a filename using the array number is created and that file is deleted with system() using rm filename. If the current line's resolution number is less than what is already stored in the array for the prefix, you simply delete the current file.
You can do that as:
#!/usr/bin/awk -f

BEGIN { FS = "/" }

{
    num = $NF                                # last field holds the name with number up to 'p'
    prefix = $0                              # prefix is name up to "_[[:digit:]]+p"
    sub (/^.*_/, "", num)                    # isolate number
    sub (/p$/, "", num)                      # remove 'p' at end
    sub (/_[[:digit:]]+p$/, "", prefix)      # isolate path and name prefix
    if (prefix in a) {                       # prefix already in array a[]?
        rmfile = $0                          # default: remove the current file
        if (num + 0 > a[prefix] + 0) {       # current number > stored number
            rmfile = prefix "_" a[prefix] "p"   # form the filename to remove from the stored number
            a[prefix] = num                  # update array with the higher number
        }
        system ("rm " rmfile)                # delete the file
    }
    else
        a[prefix] = num                      # no number stored for prefix yet; store the first
}
(note: the field-separator splits the fields using the directory separator so you have all file components to work with.)
Example Use/Output
With a representative set of files in a tmp/ directory below the current one, e.g.:
$ ls -1 tmp
a_480p
a_720p
b_1080p
b_480p
c_1080p
c_720p
Running the find command piped to the awk script named awkparse.sh would be as follows (don't forget to make the awk script executable):
$ find ./tmp -type f -regex "^.*[0-9]+p$" | ./awkparse.sh
Looking at the directory after piping the results of find to the awk script, the tmp/ directory now only contains the highest resolution (largest) files for any given filename, e.g.
$ ls -1 tmp
a_720p
b_1080p
c_1080p
This would be highly efficient. It could also handle all files in a nested directory structure where multiple directory levels hold files you need to clean out. Look things over and let me know if you have questions.
This shell script might be what you want:
previous_prefix=
for file in *_[0-9]*[0-9]p*; do
    prefix=${file%_*}
    resolution=${file##*_}
    resolution=${resolution%%p*}
    if [ "$prefix" = "$previous_prefix" ]; then
        if [ "$resolution" -gt "$greater_resolution" ]; then
            file_to_be_removed=$greater_file
            greater_file=$file
            greater_resolution=$resolution
        else
            file_to_be_removed=$file
        fi
        echo rm -- "$file_to_be_removed"
    else
        greater_resolution=$resolution
        greater_file=$file
        previous_prefix=$prefix
    fi
done
Drop the echo if the output looks good.
I would try to:
list all non-smallest files (non-480p): *_720p* and *_1080p*
for each of them replace *_720p*/*_1080p* in the name with all possible smaller resolutions
and try to delete those files with rm -f, whether they exist or not
#!/bin/bash -e
shopt -s nullglob
for file in *_1080p*; do
    rm -f -- "${file//_1080p/_720p}"
    rm -f -- "${file//_1080p/_480p}"
done
for file in *_720p*; do
    rm -f -- "${file//_720p/_480p}"
done
And here is a Bash script using nested loops to automate the above:
#!/bin/bash -e
shopt -s nullglob
res=(_1080p _720p _480p _240p)
for r in "${res[@]}"; do
    res=("${res[@]:1}")  # remove the first element of the res array
    for file in *"$r"*; do
        for r2 in "${res[@]}"; do
            rm -f -- "${file//$r/$r2}"
        done
    done
done
I have a folder with over 400K txt files.
With names like
deID.RESUL_12433287659.txt_234323456.txt
deID.RESUL_34534563649.txt_345353567.txt
deID.RESUL_44235345636.txt_537967875.txt
deID.RESUL_35234663456.txt_423452545.txt
Each file has different content.
I want to grab the file name and file content and put them in a CSV.
Something like:
file_name,file_content
deID.RESUL_12433287659.txt_234323456.txt,Content 1
deID.RESUL_34534563649.txt_345353567.txt,Content 2
deID.RESUL_44235345636.txt_537967875.txt,Content 3
deID.RESUL_35234663456.txt_423452545.txt,Content 4
I know how to grab all the file names in a directory into a CSV using:
find * > files.csv
How can I also grab the contents of the file?
find * is somewhat strange: find already scans recursively, so find . is enough to cover everything find * would (well, unless strange shell glob rules come into play).
We need to iterate over the files. It would also be nice to remove trailing newlines.
# create files for an MCVE
while IFS=' ' read -r file content; do echo "$content" > "$file"; done <<EOF
deID.RESUL_12433287659.txt_234323456.txt Content 1
deID.RESUL_34534563649.txt_345353567.txt Content 2
deID.RESUL_44235345636.txt_537967875.txt Content 3
deID.RESUL_35234663456.txt_423452545.txt Content 4
EOF
{
    # I'm using `|` as the separator for columns
    # output header names
    echo 'file_name|file_content';
    # this is the heart of the script:
    # find the files, and for each file execute
    #     `sh -c 'printf "%s|%s\n" "$1" "$(cat "$1")"' -- <filename>`
    # printf - nice printing
    # "$(cat "$1")" - gets file content and also removes trailing empty newlines. Neat.
    find . -type f -name 'deID.*' -exec sh -c 'printf "%s|%s\n" "$1" "$(cat "$1")"' -- {} \;
} |
# nice formatting:
column -t -s'|' -o ' '
will output:
file_name file_content
./deID.RESUL_44235345636.txt_537967875.txt Content 3
./deID.RESUL_35234663456.txt_423452545.txt Content 4
./deID.RESUL_34534563649.txt_345353567.txt Content 2
./deID.RESUL_12433287659.txt_234323456.txt Content 1
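If you need actual CSV rather than the |-separated table above, a hedged variant can quote the content field and double any embedded double quotes (a sketch that ignores further CSV subtleties, such as quoting the name field as well):
{
    echo 'file_name,file_content'
    find . -type f -name 'deID.*' -exec sh -c '
        content=$(cat "$1" | sed "s/\"/\"\"/g")   # double embedded quotes
        printf "%s,\"%s\"\n" "$1" "$content"
    ' -- {} \;
} > files.csv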
I am writing a bash shell script to output the suffixes of filenames.
In this case I use:
sed 's|.*\.||'
So the output is e.g.:
png
exe
c
But what do I do if the file name has no suffix and therefore no dot? My output should be "no suffix", but I don't know how to do this with sed.
EDIT
What I've already tried:
Directory:
abc.x
abc.y
abc
Input:
find . -type f | sed -E 's/^[^.]+$/no suffix/; s/.*\.//'
Output:
x
y
/abc
Use 2 consecutive substitutions:
sed -E 's/^[^.]+$/no suffix/; s/.+\.//'
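Note that when the names come from find, every path starts with ./ and thus always contains a dot; that is why abc came out as /abc in the output above. Stripping the directory part first avoids this; a minimal sketch:
find . -type f | sed -E 's#.*/##; s/^[^.]+$/no suffix/; s/.*\.//'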
One in awk. First some test material that was not provided:
$ cat foo
this.foo
that.bar
nothing
The awk:
$ awk '{n=split($0,a,".");print (n>1?a[n]:"no suffix")}' foo
foo
bar
no suffix
$ cat file
abc.x
abc.y
abc
$ awk -F'.' '{print (NF>1 ? $NF : "no suffix")}' file
x
y
no suffix
How about
sed '/.*\./s///;t;s/.*/no suffix/'
The regex matches lines with a dot. On those lines, we perform a substitution. If a substitution occurred, we are done. Otherwise, perform the other substitution.
The use of an empty regex in the substitution pattern uses the previous pattern. The t command branches if a substitution occurred; without an argument, we branch to the end of the script. (Otherwise, you can set a label with :label and branch to that with tlabel.)
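For illustration, a minimal sketch of the equivalent script with an explicit label (so the t has a visible target):
printf '%s\n' abc.x abc.y abc |
    sed -e '/.*\./s///' -e 't done' -e 's/.*/no suffix/' -e ':done'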
You can accomplish the same with POSIX shell parameter expansions, without invoking separate utilities. For example, to test whether a filename contains a '.', you can simply use test, e.g.:
[ "$i" = "${i%.*}" ]
See Posix Programmer's Manual - Shell Command Language - Parameter Expansion
If it tests TRUE, then no extension is present; otherwise, you can use an additional parameter expansion to obtain the extension itself, e.g.
[ "$i" = "${i%.*}" ] && echo "$i - no suffix" || echo "$i - ${i##*.}"
(note: you would need an additional test to exclude .foo (e.g. dotfiles), but that is left to you)
Wrap that in a loop and exclude directory files, and you can test every file within a directory; or use read within a loop and pipe a list of names to it. For example, looping over the files in a directory results in output like this (a sketch of such a loop follows the sample output):
...
ftlcdfil.c - c
geany-plugin_build_w_gtk+2.patch - patch
geany-plugins.spec - spec
geany.spec - spec
geany_build_w_gtk+2.patch - patch
getfl - no suffix
gtkw_save_test.txt - txt
gtkwrite-master.zip - zip
helloleap - no suffix
helloleap.c - c
jnl - no suffix
messages - no suffix
opensuse_15.0_1s_delay.svg - svg
...
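A minimal sketch of such a loop (assuming the files sit in the current directory; the dotfile test mentioned above is still left out):
for i in *; do
    [ -d "$i" ] && continue          # exclude directory files
    if [ "$i" = "${i%.*}" ]; then
        echo "$i - no suffix"
    else
        echo "$i - ${i##*.}"
    fi
done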
Using Perl
/tmp> ls ab*
abc abc.x abc.y
/tmp> perl -e ' print /\./? "$_\n" : "no suffix\n" for(glob("ab*")) '
no suffix
abc.x
abc.y
/tmp>
I am using Linux and I have a directory structure as follows:
- RootFolder
* DirectoryA
* * Directory(X) [1]
* * Directory Y
* * DirectoryZ
* DirectoryB
Note the special characters such as brackets, square brackets and spaces.
In practice, DirectoryA has a lot more subdirectories, but for the purpose of this example, let's say that I want to copy the 2 most recently created directories (with contents) from DirectoryA to DirectoryB.
My first attempt at this was to cd to RootFolder and run the following command:
ls -lt DirectoryA | head -2 | awk '{print "cp -r " $9 " DirectoryB/"$9}' | sh
This failed because of special characters in various subdirectories and returned:
cp: can't stat Directory
cp: can't stat Directory(X)
Can someone advise on how to modify my command to work with spaces and special characters?
Quoting helps:
ls -t DirectoryA | head -2 | awk '{print "cp -vr \"DirectoryA/" $0 "\" DirectoryB/" }' |sh
I added a -v option to show what it does:
`DirectoryA/DirectoryZ' -> `DirectoryB/DirectoryZ'
`DirectoryA/Directory Y' -> `DirectoryB/Directory Y'
However, you would run into problems if the filenames contain double quotes, or if they contain characters which ls does not represent except as question marks.
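A more robust shell variant (assuming GNU find, sort and coreutils, for the -printf, -z and -0 options) avoids parsing the output of ls entirely by keying NUL-separated records on the modification time:
find DirectoryA -mindepth 1 -maxdepth 1 -type d -printf '%T@\t%p\0' |
    sort -z -rn |
    head -z -n 2 |
    cut -z -f2- |
    xargs -0 -I{} cp -r {} DirectoryB/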
As another alternative, you can do the whole job using perl, with the help of the built-in module File::Spec::Functions, which handles files and their paths, and the external File::Copy::Recursive, which you will need to install from CPAN or similar and which copies directories recursively:
perl -MFile::Spec::Functions=catfile,catdir,splitdir -MFile::Copy::Recursive=rcopy -E '
    $dest = catdir(($ARGV[0], $ARGV[2]));
    $orig = catdir(($ARGV[0], $ARGV[1]));
    opendir $dh, $orig or die;
    for $f ((
        sort { (stat $b)[9] <=> (stat $a)[9] }
        grep { -d $_ and $_ !~ m/\.\.?$/ }
        map { catfile $orig, $_ }
        readdir $dh)[0..1]) {
        rcopy($f, catdir $dest, (splitdir $f)[-1]) or die $!;
    }
' /your/path/to/RootFolder DirectoryA DirectoryB
It accepts three arguments: the first is the path to your RootFolder, the second the from directory, and the last the to directory. The grep filters out non-directory files and the special entries . and .., the sort compares modification times, and the slice [0..1] keeps only the two most recent.
It worked in my test, but if it does not match your case exactly, I hope it is close enough to the finish line that you can push it the rest of the way.
I would like to rename multiple files, but not just appending my string to the end or the beginning of the file. I would like to place it in a specific location. This is the command I am working with right now, but it can only add things at the beginning and the end of the file name.
for f in `ls ~/tmp/*`; do FILE=`basename $f`; echo "Rename:"$f;echo $FILE; mv "$f" "/home/tmp/JC_"${FILE%.*}"_hg19."${FILE#*.}""; done
Let's say the file names are as follows: hell_1.txt (and let's say there are a ton of them, each with a different number, for simplicity). I would like to add an o into the file name so the resulting name would be hello_1.txt. It would be nice if you had a general solution, not just one for this example.
this should work:
for x in ~/tmp/*.txt; do mv "$x" "$(echo "$x" | sed -e 's#hell#hello#')"; done
If I understand correctly, you wish to change any "hell.*_NNN.txt" to "hell.*o_NNN.txt" (keeping the .* between "hell" and "_NNN.txt"; NNN being any number).
then:
for x in ~/tmp/*.txt; do
    mv "$x" "$(echo "$x" | LC_COLLATE=C sed -e 's#\(hell.*\)\(_[0-9]*\.txt$\)#\1o\2#')"
done
I added the LC_COLLATE=C to the sed invocation so you can rely on "[0-9]" matching only the digits '0' through '9'.
(If you wonder why I added LC_COLLATE: with some locales, [A-Z] could match every letter A-Z plus a-y (everything except 'z'!), as in such locales the letters appear in this order: 'A' 'a' 'B' 'b' ... 'Z' 'z'. And with other locales, who knows?)
(Note: you could also replace "[0-9]" with the "[[:digit:]]" notation, but it could be less portable: "old" versions of sed won't know about this notation and will try to match any of '[' or ':' or ... or 't', followed by a ']', zero or more times... That's why I don't like using those special [[:things:]] with sed, tr, etc.: I see them as less portable. Use perl instead if you prefer those?)
How about
rename 's/hell_/hello_/' /tmp/*.txt