file name matching, differentiation with numbers and characters - linux

I have a folder with many files.
The names of some files are like file_1 file_10 file_21 file_345;
others are like file_fr file_de file_cn.
I want to move the first type of files into another folder,
something like
mv file_* another_folder
but file_* will match all the files.
Are there any good scripts?
Thanks

Try this:
mv file_[0-9]* another_folder
In response to glenn jackman’s comment:
ls | grep -E '^file_[0-9]+$' | xargs mv -t another_folder
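If you would rather avoid piping ls, a find-based sketch does the same job (assuming GNU find for -regex and GNU mv for -t):
find . -maxdepth 1 -regex './file_[0-9]+' -exec mv -t another_folder {} +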

bash:
shopt -s extglob
mv file_+([0-9]) another_folder
http://www.gnu.org/software/bash/manual/bashref.html#Pattern-Matching
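To preview what the extended glob matches before moving anything, let the shell expand it with echo:
shopt -s extglob
echo file_+([0-9])   # should print only file_1 file_10 file_21 file_345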

Related

script to move files based on extension criteria

I have a number of files that always share the same base name but have different extensions, for example sample.dat, sample.txt, etc.
I would like to create a script that checks whether sample.dat is present and then moves all files named sample*.* into another directory.
I know how to identify them with ls *.dat | sed 's/\(.*\)\..*/\1/', but I would like to combine that with something like mv (the result of the first part)*.* /otherdirectory/.
You can use this bash one-liner:
for f in $(ls | grep YOUR_PATTERN); do mv "$f" "NEW_DESTINATION_DIRECTORY/$f"; done
It iterates over the result of ls | grep, which is the list of files you wish to move, and moves each one to the new destination.
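For the literal case in the question you can skip ls entirely: move everything named sample*.* only when sample.dat exists (a minimal sketch, with /otherdirectory/ as the target from the question):
if [ -e sample.dat ]; then
    mv sample*.* /otherdirectory/
fi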
Something simple like this?
dat_roots=$(ls *.dat | sed 's/\.dat$//')
for i in $dat_roots; do
    echo mv "$i"*.* other-directory
done
This will break for file names containing spaces, so be careful.
Or if spaces are an issue, this will do the job, but is less readable.
ls *.dat | sed 's/\.dat$//' | while read -r root; do
    mv "${root}"*.* other-directory
done
Not tested, but this should do the job:
shopt -s nullglob
for f in *.dat
do
    mv "${f%.dat}".* other-directory
done
Setting the nullglob option ensures that the loop is not executed if no .dat file exists. If you use this code as part of a larger script, you might want to unset it afterwards (shopt -u nullglob).
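A quick way to see what nullglob changes (a sketch):
shopt -s nullglob
for f in *.dat; do echo "would process: $f"; done   # prints nothing if there are no .dat files
shopt -u nullglob   # without nullglob, the loop would run once with the literal string '*.dat'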

rename all files in folder through regular expression

I have a folder with lots of files whose names have the following structure:
01.artist_name - song_name.mp3
I want to go through all of them and rename them using the regexp
/^\d+\./
so that I get only:
artist_name - song_name.mp3
How can I do this in bash?
You can do this in BASH:
for f in [0-9]*.mp3; do
mv "$f" "${f#*.}"
done
Use the Perl rename utility. It might already be installed on your version of Linux, or it is easy to find.
rename -n 's/^\d+\.//' *.mp3
With the -n flag it does a dry run, printing what would be renamed without actually renaming anything. If the output looks good, drop the -n flag.
Use the sed command to do so:
for f in *.mp3; do
    new_name="$(echo "$f" | sed 's/^[^.]*\.//')"
    mv "$f" "$new_name"
done
...in this case, the regular expression ^[^.]*\. matches everything up to and including the first period of the string.
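For example, with the sample name from the question:
echo "01.artist_name - song_name.mp3" | sed 's/^[^.]*\.//'
# prints: artist_name - song_name.mp3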

linux shell command mv many files

I have many files like 1a1, 2a2, 3a3 and I want to rename them to 1b1, 2b2, 3b3, that is, to replace 'a' with 'b' in these file names.
I have tried commands like:
for f in */*; do
mv "$f" "${f/a/b}"
done
ls | xargs -i mv {} ${{}/a/b}
ls | xargs -i mv {} \`echo {}|tr -t 'a' 'b'\`
but none of them works.
I know the command
rename 'a' 'b' *
can do it.
But I still want to figure out how to use for or xargs combined with other commands to do this job. After all, in everyday use they are more general than the simple rename command.
Please help me, thanks.
#!/bin/bash
for old in *
do
    new=$(echo "$old" | sed -e 's/a/b/')
    echo mv "$old" "$new" >&2
    mv "$old" "$new"
done
This example will let you move on to more complex name transformations as you learn how to use the sed(1) command to express them.
The loop walks over every name matched by *; on each iteration it stores the transformed version of the original $old name in the variable new, then executes mv with the old and new values.
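For this particular a-to-b substitution you do not even need sed: the question's own first attempt works once the glob is * instead of */* (which matches entries inside subdirectories rather than the current directory):
for f in *; do
    [[ $f == *a* ]] && mv -- "$f" "${f/a/b}"   # skip names without an 'a'
done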
Just in case you want to know with rename:
rename 's/(.*)a(.*)/$1b$2/' *

Script for renaming files with logic

Someone very kindly helped get me started on a mass rename script for renaming PDF files.
As you can see, I need to add a bit of logic to stop the situation below from happening, something like adding a unique number to a duplicate file name:
rename 's/^(.{5}).*(\..*)$/$1$2/' *
rename -n 's/^(.{5}).*(\..*)$/$1$2/' *
Annexes 123114345234525.pdf renamed as Annex.pdf
Annexes 123114432452352.pdf renamed as Annex.pdf
Hope this makes sense?
Thanks
for i in *
do
    x=''                           # counter
    j="${i:0:2}"                   # new name prefix (first two characters)
    e="${i##*.}"                   # extension
    [ "$i" = "$j.$e" ] && continue # name is already in target form
    while [ -e "$j$x.$e" ]         # try to find a free name
    do
        ((x++))                    # increment the counter
    done
    mv "$i" "$j$x.$e"              # rename
done
before
$ ls
he.pdf hejjj.pdf hello.pdf wo.pdf workd.pdf world.pdf
after
$ ls
he.pdf he1.pdf he2.pdf wo.pdf wo1.pdf wo2.pdf
This should check whether there will be any duplicates:
rename -n [...] | grep -o ' renamed as .*' | sort | uniq -d
If you get any output of the form renamed as [...], then you have a collision.
Of course, this won't work in a couple of corner cases, for example if your file names contain newlines or the literal string renamed as.
As noted in my answer on your previous question:
for f in *.pdf; do
    tmp=$(echo "$f" | sed -r 's/^(.{5}).*(\..*)$/\1\2/')
    mv -b ./"$f" ./"$tmp"
done
That will make backups of deleted or overwritten files. A better alternative would be this script:
#!/bin/bash
for f in "$@"; do
    tar -rvf /tmp/backup.tar "$f"
    tmp=$(echo "$f" | sed -r 's/^(.{5}).*(\..*)$/\1\2/')
    i=1
    while [ -e "$tmp" ]; do
        tmp=$(echo "$tmp" | sed "s/\./-$i./")
        ((i++))
    done
    mv -b ./"$f" ./"$tmp"
done
Run the script like this:
find . -type f -exec ./thescript '{}' +
The find command gives you lots of options for specifying which files to run on, works recursively, and (with the + terminator) passes all the file names to the script at once. The script backs every file up with tar (uncompressed) and then renames it.
This isn't the best script, since it isn't smart enough to avoid the manual loop that checks for colliding file names.
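For example, to restrict the renaming to PDF files under the current directory (using the same hypothetical script name):
find . -type f -name '*.pdf' -exec ./thescript '{}' +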

Unix: How to delete files listed in a file

I have a long text file with a list of file masks I want to delete.
Example:
/tmp/aaa.jpg
/var/www1/*
/var/www/qwerty.php
I need to delete them. I tried rm `cat 1.txt` and it says the list is too long.
I found this command, but when I check folders from the list, some of them still have files:
xargs rm <1.txt
A manual rm call removes files from such folders, so it is not a permissions issue.
This is not very efficient, but will work if you need glob patterns (as in /var/www/*)
for f in $(cat 1.txt) ; do
rm "$f"
done
If you don't have any patterns and are sure your paths in the file do not contain whitespaces or other weird things, you can use xargs like so:
xargs rm < 1.txt
Assuming that the list of files is in the file 1.txt, then do:
xargs rm -r <1.txt
The -r option causes recursion into any directories named in 1.txt.
If any files are read-only, use the -f option to force the deletion:
xargs rm -rf <1.txt
Be cautious with input to any tool that does programmatic deletions. Make certain that the files named in the input file are really to be deleted. Be especially careful about seemingly simple typos. For example, if you enter a space between a file and its suffix, it will appear to be two separate file names:
file .txt
is actually two separate files: file and .txt.
This may not seem so dangerous, but if the typo is something like this:
myoldfiles *
Then instead of deleting all files that begin with myoldfiles, you'll end up deleting myoldfiles and all non-dot-files and directories in the current directory. Probably not what you wanted.
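A cheap sanity check before running any of the deletion commands (a sketch): print every entry that does not resolve to an existing path, so typos surface first. Entries that are intentional glob patterns, such as /var/www1/*, will be flagged too, since they only expand later:
while IFS= read -r f; do
    [ -e "$f" ] || echo "no such file: $f"
done < 1.txt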
Use this:
while IFS= read -r file ; do rm -- "$file" ; done < delete.list
If you need glob expansion you can omit quoting $file:
IFS=""
while read -r file ; do rm -- $file ; done < delete.list
But be warned that file names can contain "problematic" content, and I would not use the unquoted version. Imagine this pattern in the file:
*
*/*
*/*/*
This would delete quite a lot from the current directory! I would encourage you to prepare the delete list in a way that glob patterns aren't required anymore, and then use quoting like in my first example.
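One way to do that preparation: let find build the list itself, NUL-delimited, so neither glob patterns nor quoting are ever a concern (a sketch, assuming find and xargs support -print0 and -0, as the GNU and BSD versions do):
find /var/www1 -type f -print0 > delete.list
xargs -0 rm -- < delete.list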
You can use '\n' to set the newline character as the delimiter:
xargs -d '\n' rm < 1.txt
Be careful with the -rf flags, because they can delete things you don't want if 1.txt contains paths with spaces. That's why the newline delimiter is a bit safer.
On BSD systems, where xargs typically lacks the -d option, you can convert the newlines to NUL characters and use -0 instead:
tr '\n' '\0' < 1.txt | xargs -0 rm
xargs -I{} sh -c 'rm "{}"' < 1.txt should do what you want. Be careful with this command as one incorrect entry in that file could cause a lot of trouble.
This answer was edited after #tdavies pointed out that the original did not do shell expansion.
You can use this one-liner:
cat 1.txt | xargs echo rm | sh
It does shell expansion but executes rm the minimum number of times.
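To preview what will run, leave off the final stage first:
cat 1.txt | xargs echo rm   # prints the rm command(s) instead of executing them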
Just to provide another way, you can also simply use the following command:
$ cat to_remove
/tmp/file1
/tmp/file2
/tmp/file3
$ rm $( cat to_remove )
In this particular case, due to the dangers cited in other answers, I would:
Edit the list in e.g. Vim and run :%s/\s/\\\0/g, escaping all space characters with a backslash.
Then run :%s/^/rm -rf /, prepending the command to every line. With -r you don't have to worry about directories being listed after the files contained in them, and with -f it won't complain about missing files or duplicate entries.
Run all the commands: $ source 1.txt
cat 1.txt | xargs rm -f removes the listed entries as plain files.
cat 1.txt | xargs rm -rf removes them recursively, directories included.
Here's another looping example. This one also contains an if statement, as an example of checking whether the entry is a file (as opposed to a directory, for example):
for f in $(cat 1.txt); do if [ -f "$f" ]; then rm "$f"; fi; done
Here you can delete a set of folders listed in deletelist.txt while sparing some patterns; note this is csh syntax:
foreach f (`cat deletelist.txt`)
    rm -rf `ls -d $f | egrep -v 'needthisfile|\.cpp$|\.h$'`
end
This approach allows file names to contain spaces (reproducible example).
# Select files of interest; here, only text files for example.
find . -type f -exec file {} \; > findresult.txt
grep ": ASCII text$" findresult.txt > textfiles.txt
# Leave only the path to the file, removing the suffix and prefix.
sed -i -e 's/:.*$//' textfiles.txt
sed -i -e 's/^\.\///' textfiles.txt
# Delete the files listed in textfiles.txt.
IFS_backup=$IFS
IFS=$'\n'
for f in $(cat textfiles.txt); do
    rm "$f"
done
IFS=$IFS_backup
# Save the script as "some.sh" and run: bash some.sh
In case somebody prefers sed and removing without wildcard expansion:
sed "s/.*/rm -f -- '&'/" deletelist.txt | /bin/sh
Reminder: use absolute path names in the file, or make sure you are in the right directory.
And for completeness, the same with awk:
awk '{printf "rm -f -- '\''%s'\''\n", $0}' deletelist.txt | /bin/sh
Wildcard expansion will work if the single quotes are removed, but this is dangerous if a file name contains spaces; you would then need to add quotes around everything except the wildcards.
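With either variant you can inspect the generated commands before executing anything by leaving off the final | /bin/sh:
sed "s/.*/rm -f -- '&'/" deletelist.txt   # review the output, then re-run with | /bin/sh appended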
