Create CSV file using file name and file contents in Linux

I have a folder with over 400K txt files.
With names like
deID.RESUL_12433287659.txt_234323456.txt
deID.RESUL_34534563649.txt_345353567.txt
deID.RESUL_44235345636.txt_537967875.txt
deID.RESUL_35234663456.txt_423452545.txt
Each file has different content
I want to grab file name and file content and put in CSV.
Something like:
file_name,file_content
deID.RESUL_12433287659.txt_234323456.txt,Content 1
deID.RESUL_34534563649.txt_345353567.txt,Content 2
deID.RESUL_44235345636.txt_537967875.txt,Content 3
deID.RESUL_35234663456.txt_423452545.txt,Content 4
I know how to grab all the files in a directory in CSV using:
find * > files.csv
How can I also grab the contents of the file?

find * is somewhat strange: find already scans recursively, so find . is enough to cover everything find * does (and also hidden files, which the * glob skips by default, barring unusual shell glob settings).
We would need to iterate over the files. Also it would be nice to remove newlines.
# create file for a MCVE
while IFS=' ' read -r file content; do echo "$content" > "$file"; done <<EOF
deID.RESUL_12433287659.txt_234323456.txt Content 1
deID.RESUL_34534563649.txt_345353567.txt Content 2
deID.RESUL_44235345636.txt_537967875.txt Content 3
deID.RESUL_35234663456.txt_423452545.txt Content 4
EOF
{
# I'm using `|` as the separator for columns
# output header names
echo 'file_name|file_content';
# this is the heart of the script
# find the files
# for each file execute `sh -c 'printf "%s|%s\n" "$1" "$(cat "$1")"' -- <filename>`
# printf - nice printing
# "$(cat "$1")" - gets file content and also removes trailing empty newlines. Neat.
find . -type f -name 'deID.*' -exec sh -c 'printf "%s|%s\n" "$1" "$(cat "$1")"' -- {} \;
} |
# nice formatting:
column -t -s'|' -o ' '
will output:
file_name file_content
./deID.RESUL_44235345636.txt_537967875.txt Content 3
./deID.RESUL_35234663456.txt_423452545.txt Content 4
./deID.RESUL_34534563649.txt_345353567.txt Content 2
./deID.RESUL_12433287659.txt_234323456.txt Content 1
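The pipe separator above sidesteps quoting; if the result really must be valid CSV (file contents may contain commas, quotes or even newlines), each field should be wrapped in double quotes with embedded quotes doubled (RFC 4180 style). A minimal sketch along the same lines — csv_escape is a helper name invented here, and the deID.* pattern is assumed from the question:

```shell
# Quote one CSV field: wrap in double quotes, doubling embedded quotes.
csv_escape() {
    printf '"%s"' "$(printf '%s' "$1" | sed 's/"/""/g')"
}

{
    echo 'file_name,file_content'
    find . -type f -name 'deID.*' | while IFS= read -r f; do
        printf '%s,%s\n' "$(csv_escape "$f")" "$(csv_escape "$(cat "$f")")"
    done
} > files.csv
```

Most spreadsheet tools and CSV parsers accept this quoting, including for multi-line contents.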

Find string that are common or matched in a column of multiple txt files

I have 336 txt files and each txt file has 4 columns. I need help finding strings that are common or matched in column 2 (Gene) in all txt files and extracting that information into a new txt file.
For example: how many times is “kdpDE beta” present? If it is present, print ‘1’ in the next column of the output txt file; if “kdpDE beta” is absent, print ‘0’.
Thank you for your help.
File_1.txt
Name Gene Family Class
KB2908 kdpE beta aminoglycoside lactamase
KB2908 ugd peptide transferase
File_2.txt
Name Gene Family Class
KB2909 kdpE beta aminoglycoside lactamase
KB2909 ugd peptide transferase
KB2909 PmrF macrolide phosphotransferase
You can use grep with wc to get a count of a certain string within a file. You can loop through it with a script to do this for every file in a directory. The following will loop through the directory, count the number of times <search term> appears in each file, and output it to a file called output.txt.
for FILE in *; do
    echo "$FILE" >> output.txt
    grep -o -i '<search term>' "$FILE" | wc -l >> output.txt
    echo >> output.txt
done
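Building on that loop, the question's present/absent output (1 or 0 per file) can be sketched with grep -q. The term 'kdpE beta' and the File_*.txt glob are assumptions taken from the sample data; adjust both as needed:

```shell
# Print a file's name and 1 if the search term occurs in it, 0 otherwise
# (tab-separated, one line per file).
term_presence() {
    if grep -q -i -- "$1" "$2"; then
        printf '%s\t1\n' "$2"
    else
        printf '%s\t0\n' "$2"
    fi
}

for f in File_*.txt; do
    term_presence 'kdpE beta' "$f"
done > output.txt
```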

How to delete smallest file if names are duplicate

I would like to clean up a folder with videos. I have a bunch of videos that were downloaded with different resolutions, so each file will start with the same name and then end with "_480p" or "_720p" etc.
I just want to keep the largest file of each such set.
So I am looking for a way to delete files based on
check if name before "_" is identical
if true, then delete all files except largest one
Thinking of a flexible and fast way to approach the problem, you can gather a list of files ending in "[[:digit:]]+p" and then parse the names by providing them on stdin to awk, letting awk index an array with the file prefix (path + part of the name before '_'). That prefix is unique per set of files, so the resolution number for each format can be obtained and stored at that index.
Then it's simply a matter of comparing the stored resolution number against the current file's number and deleting the lesser of the two.
Your find command to locate all files in the directory below the current, recursively, could be:
find ./tmp -type f -regex "^.*[0-9]+p$"
What I would do is then pipe the filename output to a short awk script where an array stores the last seen number for a given file prefix. If the current record's (line's) resolution number is bigger than the value stored in the array, a filename using the array number is created and that file is deleted with system() using rm filename. If the current line's resolution number is less than what is already stored in the array for the file, you simply delete the current file.
You can do that as:
#!/usr/bin/awk -f
BEGIN { FS = "/" }
{
num = $NF # last field holds number up to 'p'
prefix = $0 # prefix is name up to "_[[:digit:]]+p"
sub (/^.*_/, "", num) # isolate number
sub (/p$/, "", num) # remove 'p' at end
sub (/_[[:digit:]]+p$/, "", prefix) # isolate path and name prefix
if (prefix in a) { # current file in array a[] ?
rmfile = $0 # set file to remove to current
if (num + 0 > a[prefix] + 0) { # current number > array number
rmfile = prefix "_" a[prefix] "p" # form remove filename from array
a[prefix] = num # update array with higher num
}
system ("rm \"" rmfile "\""); # delete the file (name quoted in case of spaces)
}
else
a[prefix] = num # if no num for prefix in array, store first
}
(Note: the field separator splits the record on the directory separator, so you have all the path components to work with.)
Example Use/Output
With a representative set of files in a tmp/ directory below the current, e.g.
$ ls -1 tmp
a_480p
a_720p
b_1080p
b_480p
c_1080p
c_720p
Running the find command piped to the awk script named awkparse.sh would be as follows (don't forget to make the awk script executable):
$ find ./tmp -type f -regex "^.*[0-9]+p$" | ./awkparse.sh
Looking at the directory after piping the results of find to the awk script, the tmp/ directory now only contains the highest resolution (largest) files for any given filename, e.g.
$ ls -1
a_720p
b_1080p
c_1080p
This would be highly efficient. It could also handle all files in a nested directory structure where multiple directory levels hold files you need to clean out. Look things over and let me know if you have questions.
This shell script might be what you want:
previous_prefix=
for file in *_[0-9]*[0-9]p*; do
prefix=${file%_*}
resolution=${file##*_}
resolution=${resolution%%p*}
if [ "$prefix" = "$previous_prefix" ]; then
if [ "$resolution" -gt "$greater_resolution" ]; then
file_to_be_removed=$greater_file
greater_file=$file
greater_resolution=$resolution
else
file_to_be_removed=$file
fi
echo rm -- "$file_to_be_removed"
else
greater_resolution=$resolution
greater_file=$file
previous_prefix=$prefix
fi
done
Drop the echo if the output looks good.
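The same keep-the-largest policy can also be sketched with GNU sort's version sort (-V): order the names so that, within each prefix, the highest resolution comes first, then delete every later file that repeats a prefix already seen. This assumes GNU sort and no newlines in filenames:

```shell
# Dry run: prints the rm commands; remove the echo to actually delete.
printf '%s\n' *_[0-9]*p* | sort -rV | while IFS= read -r file; do
    prefix=${file%_*}
    if [ "$prefix" = "$seen" ]; then
        echo rm -- "$file"
    else
        seen=$prefix
    fi
done
```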
I would try to:
list all non-smallest files (non-480p): *_720p* and *_1080p*
for each of them replace *_720p*/*_1080p* in the name with all possible smaller resolutions
and try to delete those files with rm -f, whether they exist or not
#!/bin/bash -e
shopt -s nullglob
for file in *_1080p*; do
rm -f -- "${file//_1080p/_720p}"
rm -f -- "${file//_1080p/_480p}"
done
for file in *_720p*; do
rm -f -- "${file//_720p/_480p}"
done
And here is a Bash script using nested loops to automate the above:
#!/bin/bash -e
shopt -s nullglob
res=(_1080p _720p _480p _240p)
for r in "${res[@]}"; do
res=("${res[@]:1}") # remove the first element in res array
for file in *$r*; do
for r2 in "${res[@]}"; do
rm -f -- "${file//$r/$r2}"
done
done
done

Creating multiple files with names matching a pattern from one file

I have a text file such as:
>Tolten.WP_096573835.1
MSSPKSLLIRRARIILPNGELMVGDVLTRDRQIVEVAPEIYTPTPTTEIDAAGLTLLPGVIDPQVHFREPGLEHKEDLFTASCACAKGGVTSFLEMPNTRPLTTN
--
>Trivar.WP_011317016.1
MSSPQSLLIRGARILLPNGEFLLGDVLIRDRHIIEVGTEIVNTTPATEIDAKGLTLLPGVIDPQVHFREPGLEHKEDLFTASCACAKGGVTSFLEMPNTRPLTTS
--
>uniSU2.WP_085434158.1
MTQLLIRHAQILLPNGQFLLGDVLTQDGKILEVASEIAATDLSNIIDATGLTLLPGVIDPQVHFREPGLEHKEDLFTATCACAKGGVTSFLEMPNTRPLTTTQAA
--
>Chlfri.WP_016876644.1
MSETPLLDKVIKNVRVVRPNQHTIEKVDIGIKNGKFAQIAPQISPDQTKEVFDAKNLLGFPGVVDAHMHIGIYQPLAQDAVSESKAAAMGGVTTSLNYIRTGQYY
--
>Noslin.WP_099070767.1
MSEASLLDRVIKNVRVVRPHNDAIELLDLGIKDGKFARIASHISPDTAKEVFDAKNLLGFPGVVDAHMHIGIYQPLDKDAVTESKAAAMGGVTTSLNYIRTGQYY
I want to create multiple text files with the content between each and every "--". The text files would be named after the string starting with ">".
In the end I would have these text files:
Tolten.WP_096573835.1
Trivar.WP_011317016.1
uniSU2.WP_085434158.1
Chlfri.WP_016876644.1
Noslin.WP_099070767.1
With the following content:
Tolten.WP_096573835.1 text file:
>Tolten.WP_096573835.1
MSSPKSLLIRRARIILPNGELMVGDVLTRDRQIVEVAPEIYTPTPTTEIDAAGLTLLPGVIDPQVHFREPGLEHKEDLFTASCACAKGGVTSFLEMPNTRPLTTN
Trivar.WP_011317016.1 text file:
>Trivar.WP_011317016.1
MSSPQSLLIRGARILLPNGEFLLGDVLIRDRHIIEVGTEIVNTTPATEIDAKGLTLLPGVIDPQVHFREPGLEHKEDLFTASCACAKGGVTSFLEMPNTRPLTTS
uniSU2.WP_085434158.1 text file:
>uniSU2.WP_085434158.1
MTQLLIRHAQILLPNGQFLLGDVLTQDGKILEVASEIAATDLSNIIDATGLTLLPGVIDPQVHFREPGLEHKEDLFTATCACAKGGVTSFLEMPNTRPLTTTQAA
Chlfri.WP_016876644.1 text file:
>Chlfri.WP_016876644.1
MSETPLLDKVIKNVRVVRPNQHTIEKVDIGIKNGKFAQIAPQISPDQTKEVFDAKNLLGFPGVVDAHMHIGIYQPLAQDAVSESKAAAMGGVTTSLNYIRTGQYY
Noslin.WP_099070767.1 text file:
>Noslin.WP_099070767.1
MSEASLLDRVIKNVRVVRPHNDAIELLDLGIKDGKFARIASHISPDTAKEVFDAKNLLGFPGVVDAHMHIGIYQPLDKDAVTESKAAAMGGVTTSLNYIRTGQYY
I know csplit works for this sort of thing:
csplit --suppress-matched original_text_file.txt '/^--/' '{*}'
But I can't get it to name the files appropriately.
Does anyone know how to help?
Thanks in advance :)
I'm afraid csplit can't do that directly; you can only change the prefix and suffix of the filenames using a "fixed" format. Nothing stops you from doing the renaming afterwards using a simple loop, e.g.:
$ csplit --suppress-matched original_text_file.txt '/^--$/' '{*}'
129
129
129
129
129
$ for f in xx*; do mv "$f" "`head -n 1 "$f" | cut -c 2-`"; done
$ ls -1
Chlfri.WP_016876644.1
Noslin.WP_099070767.1
original_text_file.txt
Tolten.WP_096573835.1
Trivar.WP_011317016.1
uniSU2.WP_085434158.1
$
you can easily combine csplit and the loop to get a one-liner:
csplit --suppress-matched original_text_file.txt '/^--$/' '{*}' && for f in xx*; do mv "$f" "`head -n 1 "$f" | cut -c 2-`"; done
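csplit plus a rename loop works, but a single awk pass can write each record straight to its final name. A sketch, assuming (as in the sample) that every record starts with a ">" header line and records are separated by lines containing only --:

```shell
# create a small sample in a scratch directory (names shortened from the question)
cd "$(mktemp -d)"
printf '%s\n' '>Tolten.WP_096573835.1' 'MSSPKS' '--' '>Trivar.WP_011317016.1' 'MSSPQS' > original_text_file.txt

# one pass: close the current output at each "--", start a new one at each ">"
awk '
/^--$/ { close(out); next }     # record separator: finish the current file
/^>/   { out = substr($0, 2) }  # header: output name is the line minus ">"
{ print > out }                 # header and sequence lines both go to out
' original_text_file.txt
```

close() keeps the number of simultaneously open files small, which matters once you have many records.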

Removing a prefix from files recursively in ssh

I have a load of folders of images (a lot!) and some of the thumbnails have a 'tn' prefix, while others don't, so in order to be able to write a gallery for all, I'm trying to remove the 'tn' from the beginning of the files that have it recursively in the entire directory.
So, an offending thumbnail would have the files :
tngal001-001.jpg
tngal001-002.jpg
tngal001-003.jpg
etc...
and I need them to be :
gal001-001.jpg
gal001-002.jpg
gal001-003.jpg
or even better still... if I could get the whole tngal001- off, that'd be amazing, so, in the directory gallery I have:
gal001/thumbnails/tngal001-001.jpg
gal001/thumbnails/tngal001-002.jpg
gal001/thumbnails/tngal001-003.jpg
etc...
gal002/thumbnails/tngal002-001.jpg
gal002/thumbnails/tngal002-002.jpg
gal002/thumbnails/tngal002-003.jpg
etc...
gal003/thumbnails/tngal003-001.jpg
gal003/thumbnails/tngal003-002.jpg
gal003/thumbnails/tngal003-003.jpg
etc...
and I'd prefer to have:
gal001/thumbnails/001.jpg
gal001/thumbnails/002.jpg
gal001/thumbnails/003.jpg
etc...
gal002/thumbnails/001.jpg
gal002/thumbnails/002.jpg
gal002/thumbnails/003.jpg
etc...
gal003/thumbnails/001.jpg
gal003/thumbnails/002.jpg
gal003/thumbnails/003.jpg
etc...
I have tried find . -type f -name "tn*" -exec sh -c 'for f; do mv "$f" "{f#tn}"; done' find sh {} +
and find . -type f -exec sh -c 'for file in tn*; do mv "$file" "${file#tn}"; done' findsh {} +
but I'm not getting it quite right. I just want to understand how to strip off the letters/rename recursively, as I'm just getting my head around this stuff. All the other questions I have found seem to be talking about stripping out characters from file names and all the ascii characters and escaping spaces etc are confusing me. I would appreciate it if someone could explain it in plain(ish) english. I'm not stupid, but I am a newbie to linux! I know it's all logical once I understand what's happening.
Thanks in advance, Kirsty
find . -type f -name "tn*" -exec sh -c '
for f; do
fname=${f##*/}
mv -i -- "$f" "${f%/*}/${fname#tn*-}"
done
' sh {} +
You need to split "$f" into the parent path and filename before you start to remove the prefix from the filename. And you forgot to add a $ in your parameter expansion (${f#tn}).
${f##*/} removes the longest prefix */ and leaves the filename, e.g.
gal001/thumbnails/tngal001-001.jpg -> tngal001-001.jpg
(the same result as basename "$f")
${f%/*} removes the shortest suffix /* and leaves the parent path, e.g.
gal001/thumbnails/tngal001-001.jpg -> gal001/thumbnails
(the same result as dirname "$f")
${fname#tn*-} removes the shortest prefix tn*- from the filename, e.g.
tngal001-001.jpg -> 001.jpg
I added the -i option to prompt to overwrite an already existing file.
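If you want to try this safely first, a throwaway copy makes a quick check; the scratch tree below mirrors the layout from the question:

```shell
# Build a scratch copy of the layout from the question
cd "$(mktemp -d)"
mkdir -p gal001/thumbnails
touch gal001/thumbnails/tngal001-001.jpg gal001/thumbnails/tngal001-002.jpg

# Strip everything up to and including the first '-' after the 'tn' prefix
find . -type f -name "tn*" -exec sh -c '
for f; do
fname=${f##*/}
mv -i -- "$f" "${f%/*}/${fname#tn*-}"
done
' sh {} +

ls gal001/thumbnails   # now shows 001.jpg 002.jpg
```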
You can loop over all the folders and files in your gallery and then rename them as following.
Assuming you have your folder structure as
gallery/
gallery/gal001
gallery/gal002
gallery/gal003
...
gallery/gal001/thumbnails/
gallery/gal002/thumbnails/
gallery/gal003/thumbnails/
...
gallery/gal001/thumbnails/tngal001-001.jpg
gallery/gal001/thumbnails/tngal001-002.jpg
gallery/gal001/thumbnails/tngal001-002.jpg
Move to your gallery using cd gallery then run the following code
for j in *; do
    cd "$j/thumbnails"
    for i in *; do
        echo "Renaming $j/thumbnails/$i --> $(echo "$i" | sed "s/tn$j-//1")"
        mv -i "$i" "$(echo "$i" | sed "s/tn$j-//1")"
    done
    cd ../..
done
Explanation
for j in *;
loops over all the folders in gallery, i.e. j takes the values gal001, gal002, gal003, etc.
cd "$j/thumbnails";
moves inside the 'gal001/thumbnails' directory.
for i in *; do
loops over all the files in the directory gal001/thumbnails; the name of the current file is contained in i.
echo "Renaming $j/thumbnails/$i --> $(echo "$i" | sed "s/tn$j-//1")"
prints the file name and what it is being renamed to. (Remove it if you don't want verbose output.)
mv -i "$i" "$(echo "$i" | sed "s/tn$j-//1")"
renames $i (the value of the current file in the loop); the -i flag prompts before overwriting if the new file name already exists.
sed is the stream editor; the filename is piped into it with echo $i.
"s/previous/new/1" replaces the first occurrence of previous with new in the stream. Here it replaces tn + $j (the directory name, e.g. gal001), i.e. tngal001-, with the empty string (nothing between //).
cd ../.. moves back to gallery.

Better way to rename files based on multiple patterns

a lot of files I download have crap/spam in their filenames, e.g.
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
I've come up with two ways for dealing with them but they both seem pretty clunky:
with parameter expansion:
if [[ ${base_name} != ${base_name//\[+([^\]])\]} ]]
then
mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//\[+([^\]])\]}" &&
base_name="${base_name//\[+([^\]])\]}"
fi
if [[ ${base_name} != ${base_name//www.*.com - /} ]]
then
mv -v "${dir_name}/${base_name}" "${dir_name}/${base_name//www.*.com - /}" &&
base_name="${base_name//www.*.com - /}"
fi
# more of these type of statements; one for each type of frequently-encountered pattern
and then with echo/sed:
tmp=`echo "${base_name}" | sed -e 's/\[[^][]*\]//g' | sed -e 's/\s-\s//g'`
mv "${base_name}" "${tmp}"
I feel like the parameter expansion is the worse of the two but I like it because I'm able to keep the same variable assigned to the file for further processing after the rename (the above code is used in a script that's called for each file after the file download is complete).
So anyway I was hoping there's a better/cleaner way to do the above that someone more knowledgeable than myself could show me, preferably in a way that would allow me to easily reassign the old/original variable to the new/renamed file.
Thanks
Two answers: one using perl rename, one using pure bash.
As some people dislike perl, I wrote a bash-only version as well.
Renaming files by using the rename command.
Introduction
Yes, this is a typical job for rename command which was precisely designed for:
man rename | sed -ne '/example/,/^[^ ]/p'
For example, to rename all files matching "*.bak" to strip the
extension, you might say
rename 's/\.bak$//' *.bak
To translate uppercase names to lower, you'd use
rename 'y/A-Z/a-z/' *
More oriented samples
Simply drop all spaces and square brackets:
rename 's/[ \[\]]*//g;' *.ext
Rename all .jpg by numbering from 1:
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
Demo:
touch {a..e}.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 e.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 d.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 c.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 b.jpg
-rw-r--r-- 1 user user 0 sep 6 16:35 a.jpg
rename 's/^.*$/sprintf "IMG_%05d.JPG",++$./e' *.jpg
ls -ltr
total 0
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00005.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00004.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00003.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00002.JPG
-rw-r--r-- 1 user user 0 sep 6 16:35 IMG_00001.JPG
Full syntax for matching the SO question, in a safe way
There is a strong and safe way using rename utility:
As this is perl common tool, we have to use perl syntax:
rename 'my $o=$_;
s/[ \[\]]+/-/g;
s/-+/-/g;
s/^-//g;
s/-(\..*|)$/$1/g;
s/(.*[^\d])(|-(\d+))(\.[a-z0-9]{2,6})$/
my $i=$3;
$i=0 unless $i;
sprintf("%s-%d%s", $1, $i+1, $4)
/eg while
$o ne $_ &&
-f $_;
' *
Testing rule:
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name.ext
touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext'
ls -1
www.crap.com-file.name-1.ext
[ www.crap.com ] file.name.ext
www.crap.com - file.name.ext
www.crap.com-file.name.ext
rename 'my $o=$_; ...
...
...' *
ls -1
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
... and so on...
... and it's safe as long as you don't use the -f flag with the rename command: files won't be overwritten, and you will get an error message if something goes wrong.
Renaming files by using bash and so-called bashisms:
I prefer doing this with a dedicated utility, but it can even be done in pure bash (i.e. without any fork).
There is no use of any binary other than bash (no sed, awk, tr or others):
#!/bin/bash
for file;do
newname=${file//[ \]\[]/.}
while [ "$newname" != "${newname#.}" ] ;do
newname=${newname#.}
done
while [ "$newname" != "${newname//[.-][.-]/-}" ] ;do
newname=${newname//[.-][.-]/-};done
if [ "$file" != "$newname" ] ;then
if [ -f "$newname" ] ;then
ext=${newname##*.}
basename=${newname%.$ext}
partname=${basename%%-[0-9]}
count=${basename#${partname}-}
[ "$partname" = "$count" ] && count=0
while printf -v newname "%s-%d.%s" "$partname" $((++count)) "$ext" &&
[ -f "$newname" ] ;do
:;done
fi
mv "$file" "$newname"
fi
done
To be run with files as arguments, for example:
/path/to/my/script.sh \[*
Replacing spaces and square brackets with a dot
Replacing sequences of .-, -., -- or .. with a single -
If the filename doesn't differ, there is nothing to do.
If a file with the new name already exists...
split the filename into name part, counter and extension, to build an indexed new name
loop while a file with the new name exists
Finally rename the file.
Take advantage of the following classical pattern:
job_select /path/to/directory| job_strategy | job_process
where job_select is responsible for selecting the objects of your job, job_strategy prepares a processing plan for these objects and job_process eventually executes the plan.
This assumes that filenames do not contain a vertical bar | nor a newline character.
The job_select function
# job_select PATH
# Produce the list of files to process
job_select()
{
find "$1" -name 'www.*.com - *' -o -name '[*] - *'
}
The find command can examine all the properties of the file maintained by the file system, like creation time, access time and modification time. It is also possible to control how the filesystem is explored, by telling find not to descend into mounted filesystems or how many recursion levels are allowed. It is common to append pipes to the find command to perform more complicated selections based on the filename.
Avoid the common pitfall of including the contents of hidden directories in the output of the job_select function. For instance, the directories CVS, .svn, .svk and .git are used by the corresponding source control management tools and it is almost always wrong to include their contents in the output of the job_select function. By inadvertently batch processing these files, one can easily make the affected working copy unusable.
The job_strategy function
# job_strategy
# Prepare a plan for renaming files
job_strategy()
{
sed -e '
h
s#/www\..*\.com - *#/#
s#/\[^]]* - *#/#
x
G
s/\n/|/
'
}
This command reads the output of job_select and makes a plan for our renaming job. The plan is represented by text lines having two fields separated by the character |, the first field being the old name of the file and the second being the new computed name of the file; it looks like
[ www.crap.com ] file.name.1.ext|file.name.1.ext
www.crap.com - file.name.2.ext|file.name.2.ext
The particular program used to produce the plan is essentially irrelevant; it is common to use sed, as in the example, but awk or perl would also do. Let us walk through the sed script used here:
h Replace the contents of the hold space with the contents of the pattern space.
… Edit the contents of the pattern space.
x Swap the contents of the pattern and hold spaces.
G Append a newline character followed by the contents of the hold space to the pattern space.
s/\n/|/ Replace the newline character in the pattern space by a vertical bar.
It can be easier to use several filters to prepare the plan. Another common case is the use of the stat command to add creation times to file names.
The job_process function
# job_process
# Rename files according to a plan
job_process()
{
local oldname
local newname
while IFS='|' read oldname newname; do
mv "$oldname" "$newname"
done
}
The input field separator IFS is adjusted to let the function read the output of job_strategy. Declaring oldname and newname as local is useful in large programs but can be omitted in very simple scripts. The job_process function can be adjusted to avoid overwriting existing files and report the problematic items.
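To see what a plan looks like without touching any files, the strategy stage can be exercised on its own. The function below reproduces job_strategy from above so this sketch is self-contained, and feeds it one sample path:

```shell
# Self-contained peek at the planning stage: old|new pairs on stdout.
job_strategy()
{
sed -e '
h
s#/www\..*\.com - *#/#
s#/\[^]]* - *#/#
x
G
s/\n/|/
'
}

printf '%s\n' './www.crap.com - file.name.2.ext' | job_strategy
# prints: ./www.crap.com - file.name.2.ext|./file.name.2.ext
```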
About data structures in shell programs
Note the use of pipes to transfer data from one stage to the other: apprentices often rely on variables to represent such information but it turns out to be a clumsy choice. Instead, it is preferable to represent data as tabular files or as tabular data streams moving from one process to the other, in this form, data can be easily processed by powerful tools like sed, awk, join, paste and sort — only to cite the most common ones.
You can use rnm
rnm -rs '/\[crap\]|\[spam\]//g' *.ext
The above will remove [crap] or [spam] from filename.
You can pass multiple regex pattern by terminating them with ; or overloading the -rs option.
rnm -rs '/[\[\]]//g;/\s*\[crap\]//g' -rs '/crap2//' *.ext
The general format of this replace string is /search_part/replace_part/modifier
search_part : regex to search for.
replace_part : string to replace with
modifier : i (case insensitive), g (global replace)
uppercase/lowercase:
A replace string of the form /search_part/\c/modifier will make the selected part of the filename (by the regex search_part) lowercase while \C (capital \C) in replace part will make it uppercase.
rnm -rs '/[abcd]/\C/g' *.ext
## this will capitalize all a,b,c,d in the filenames
If you have many regex patterns that need to be dealt with, then put those patterns in a file and pass the file with -rs/f option.
rnm -rs/f /path/to/regex/pattern/file *.ext
You can find some other examples here.
Note:
rnm uses PCRE2 (revised PCRE) regex.
You can undo an unwanted rename operation by running rnm -u
P.S: I am the author of this tool.
If you are using an Ubuntu/Debian OS, use the rename command to rename multiple files at a time.
If you want to use something that doesn't depend on perl, you can use the following code (let's call it sanitizeNames.sh). It only covers a few cases, but it's easily extensible using string substitution, tr (and sed too).
#!/bin/bash
for f in "$@"; do
    # Delete opening square brackets and spaces, translate closing square
    # brackets to dashes, then squeeze repeated dashes and dots
    newfname=$(echo "$f" | tr -d '[ ' | tr ']' '-' | tr -s '-' | tr -s '.')
    newfname=${newfname//-./-}
    if [ -f "$newfname" ]; then
        # Some string magic...
        extension=${newfname##*\.}
        basename=${newfname%\.*}
        basename=${basename%\-[1-9]*}
        lastNum=$(ls "$basename"* | wc -l)
        mv "$f" "$basename-$lastNum.$extension"
    else
        mv "$f" "$newfname"
    fi
done
And use it:
$ touch '[ www.crap.com ] file.name.ext' 'www.crap.com - file.name.ext' '[ www.crap.com ] - file.name.ext' '[www.crap.com ].file.anothername.ext2' '[www.crap.com ].file.name.ext'
$ ls -1 *crap*
[ www.crap.com ] - file.name.ext
[ www.crap.com ] file.name.ext
[www.crap.com ].file.anothername.ext2
[www.crap.com ].file.name.ext
www.crap.com - file.name.ext
$ ./sanitizeNames.sh *crap*
$ ls -1 *crap*
www.crap.com-file.anothername.ext2
www.crap.com-file.name-1.ext
www.crap.com-file.name-2.ext
www.crap.com-file.name-3.ext
www.crap.com-file.name.ext
