I have a directory full of sub-directories that look like this:
Track_0000111
Track_0004444
Track_0022222
Track_0333333
Track_5555555
I would like to remove certain directories if they are contained within a list in the file "RemoveFromTop6000_reformatted.txt"
The contents of the text file look like this:
Track_0000111
Track_0022222
Track_0333333
I tried to write a small script to handle this, but it does not seem to work:
#!/bin/bash
for file in cat RemoveFromTop6000_reformatted.txt; do
rm -rfv $file
done
Unfortunately this simply removes the text files, rather than the directories. Any tips?
Thanks!
You forgot backquotes around your call to cat. Without them, rm simply tries to delete the files cat (which probably doesn't exist, but you might not notice because you're using rm -f) and RemoveFromTop6000_reformatted.txt.
Try this:
#!/bin/bash
for file in `cat RemoveFromTop6000_reformatted.txt`; do
rm -rv "$file"
done
or, more simply,
rm -rv `cat RemoveFromTop6000_reformatted.txt`
(but this will only work if the directory names don't contain whitespace).
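If the directory names might contain whitespace, a safer variant (a sketch assuming GNU xargs, which has the -d option; one name per line) is:
xargs -d '\n' rm -rv < RemoveFromTop6000_reformatted.txt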
No need for a for loop; for something like this you can do a while read ...; do ... done < file, like this:
#!/bin/bash
while read file; do
rm -rfv "$file"
done < RemoveFromTop6000_reformatted.txt
You can try the command below.
Command:
sed 's/^/"/g' sample.txt | sed 's/$/"/g' | xargs rm -rfv
Description:
The command will remove the files as well as the directories mentioned in "sample.txt".
NOTE:
In your case, make sure that "RemoveFromTop6000_reformatted.txt" contains only directory names.
The command will also work if the directory names contain whitespace.
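To see why this works (sample.txt stands in for your list file here), the two sed calls wrap each line in double quotes:
$ sed 's/^/"/g' sample.txt | sed 's/$/"/g'
"Track_0000111"
"Track_0022222"
"Track_0333333"
xargs then strips the quotes and treats each quoted name as a single argument, which is why whitespace in the names is safe.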
When I try to run the code below, it gives the error cp: target "Featurespath" is not a directory.
I have tried multiple options but nothing works.
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $(ls $Featurespath);
do
cat $Featurespath | sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" $Featurespath > /permanent/jag/temp.xml
cp -rf /permanent/jag/temp.xml $Featurespath
rm /permanent/jag/temp.xml
done
I want the modified XML to be written back into the same XML file.
The error you received was because of the cp line: bash expands $Featurespath into a list of files. When cp sees more than two arguments, it assumes the last one is a directory, which is not the case here. Here is my suggested fix:
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $Featurespath
do
sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file" > /permanent/jag/temp.xml
mv -f /permanent/jag/temp.xml "$file"
done
Notes
Do not use ls: bash can expand the wildcards just fine
Within the loop, you are now dealing with an individual file, $file, not the list of files $Featurespath
No need to use the cat command; sed can take a file name directly
sed has an in-place editing option, which eliminates the need for the temp file; see the sketch after these notes
Replace the cp/rm combination with mv
Ultimately, like others have said, sed is not the right tool to edit XML contents, but it might work for simple cases
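A minimal sketch of that in-place option (assuming GNU sed, whose -i rewrites each file directly, so the temp file and the mv disappear):
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $Featurespath
do
# edit the file in place instead of writing to a temp file
sed -i "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file"
done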
I'm encountering many files with the same content and the same name on some of my servers. I need to quarantine these files for analysis so I can't just remove the duplicates. The OS is Linux (centos and ubuntu).
I enumerate the file names and locations and put them into a text file.
Then I do a for statement to move the files to quarantine.
for file in $(cat bad-stuff.txt); do mv $file /quarantine ;done
The problem is that they have the same file name and I just need to add something unique to the filename to get it to save properly. I'm sure it's something simple but I'm not good with regex. Thanks for the help.
Since you're using Linux, you can take advantage of GNU mv's --backup.
while read -r file
do
mv --backup=numbered "$file" "/quarantine"
done < "bad-stuff.txt"
Here's an example that shows how it works:
$ cat bad-stuff.txt
./c/foo
./d/foo
./a/foo
./b/foo
$ while read -r file; do mv --backup=numbered "$file" "./quarantine"; done < "bad-stuff.txt"
$ ls quarantine/
foo foo.~1~ foo.~2~ foo.~3~
$
I'd use this:
for file in $(cat bad-stuff.txt); do mv "$file" "/quarantine/$(basename "$file").$(date -u +%s%N)"; done
You'll get every file with a timestamp appended (in nanoseconds).
You can create a new file name composed by the directory and the filename. Thus you can add one more argument in your original code:
for ...; do mv $file /quarantine/$(echo $file | sed 's:/:_:g') ; done
Note that you should pick a separator (the _ here) that is special enough not to collide with characters already in your paths.
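Spelled out against the original loop, that becomes:
for file in $(cat bad-stuff.txt); do mv "$file" "/quarantine/$(echo "$file" | sed 's:/:_:g')"; done
With the example list above, ./a/foo would land in /quarantine as ._a_foo.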
On my computer running Ubuntu, I have a folder full of hundreds files all named "index.html.n" where n starts at one and continues upwards. Some of those files are actual html files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipe with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find to find the matching files, then file to get the file types. sed drops the zip archives and also strips everything but the filenames from the output of file. Lastly, rm deletes what remains:
find -name 'index.html.[0-9]*' | \
xargs file | \
sed -n '/: Zip archive/!s/\([^:]*\):.*/\1/p' | \
xargs rm
I would run:
for f in index.html.*
do
file "$f" | grep -qi zip
[ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option once you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
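The output of file looks something like this (the exact wording varies with the file type and the version of file):
index.html.1: HTML document, ASCII text
index.html.2: PNG image data, 640 x 480, 8-bit/color RGBA, non-interlaced
index.html.3: Zip archive data, at least v2.0 to extract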
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
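After those edits, cleanup.sh should look something like this (the names here are illustrative):
#!/bin/sh
rm index.html.1
rm index.html.2
rm index.html.4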
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
for i in index.html.*
do
type=$(file "$i")
if [[ ! $type =~ "Zip" ]]
then
rm "$i"
fi
done
Change the rm to ls for testing purposes.
I have a series of files named like such:
file 1.jpeg
file 2.jpeg
file 3.jpeg
...
file 40.jpeg
I would like to remove the space from all of their filenames without having to do each one individually. I know it's possible using something like file{1,40}.jpeg or something like that, but I can't remember, and I don't even know how to search for it.
Thanks!
EDIT: linux
http://www.google.es/search?q=shell+rename+similar+files+in+a+directory
The first result is http://www.debian-administration.org/articles/150
Using the perl rename command [...] we can also, for example, strip spaces from filenames with this:
~$ rename 's/ //' *.jpeg
In other posts I've found this kind of commands that do not require perl:
for f in *; do mv "$f" "$(echo "$f" | tr --delete ' ')"; done
I've not tried any of them.
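A pure-bash variant of the same idea (a sketch; the glob *\ * only matches names that contain a space, and ${f// /} deletes every space):
for f in *\ *; do mv "$f" "${f// /}"; done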
I have a long text file with list of file masks I want to delete
Example:
/tmp/aaa.jpg
/var/www1/*
/var/www/qwerty.php
I need delete them. Tried rm `cat 1.txt` and it says the list is too long.
Found this command, but when I check the folders from the list, some of them still have files:
xargs rm <1.txt
A manual rm call removes files from such folders, so it is not a permissions issue.
This is not very efficient, but will work if you need glob patterns (as in /var/www/*)
for f in $(cat 1.txt) ; do
rm "$f"
done
If you don't have any patterns and are sure your paths in the file do not contain whitespace or other weird things, you can use xargs like so:
xargs rm < 1.txt
Assuming that the list of files is in the file 1.txt, then do:
xargs rm -r <1.txt
The -r option causes recursion into any directories named in 1.txt.
If any files are read-only, use the -f option to force the deletion:
xargs rm -rf <1.txt
Be cautious with input to any tool that does programmatic deletions. Make certain that the files named in the input file are really to be deleted. Be especially careful about seemingly simple typos. For example, if you enter a space between a file and its suffix, it will appear to be two separate file names:
file .txt
is actually two separate files: file and .txt.
This may not seem so dangerous, but if the typo is something like this:
myoldfiles *
Then instead of deleting all files that begin with myoldfiles, you'll end up deleting myoldfiles and all non-dot-files and directories in the current directory. Probably not what you wanted.
Use this:
while IFS= read -r file ; do rm -- "$file" ; done < delete.list
If you need glob expansion you can omit quoting $file:
IFS=""
while read -r file ; do rm -- $file ; done < delete.list
But be warned that file names can contain "problematic" content, which is why I would not use the unquoted version. Imagine these patterns in the file:
*
*/*
*/*/*
This would delete quite a lot from the current directory! I would encourage you to prepare the delete list in a way that glob patterns aren't required anymore, and then use quoting like in my first example.
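A sketch of that preparation (assuming the raw patterns live in delete.list: the globs are expanded once into expanded.list, and the actual deletion then runs fully quoted):
IFS=""
while read -r pattern ; do
for f in $pattern ; do printf '%s\n' "$f" ; done
done < delete.list > expanded.list
while IFS= read -r file ; do rm -- "$file" ; done < expanded.list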
You can use '\n' to define the newline character as the delimiter:
xargs -d '\n' rm < 1.txt
Be careful with -rf, because it can delete what you don't want if 1.txt contains paths with spaces. That's why the newline delimiter is a bit safer.
BSD xargs has no -d option, but you can get the same effect by translating the newlines to NUL characters and using -0, like this:
tr '\n' '\0' < 1.txt | xargs -0 rm
xargs -I{} sh -c 'rm {}' < 1.txt should do what you want (leaving {} unquoted inside sh -c is what lets the shell expand glob patterns). Be careful with this command, as one incorrect entry in that file could cause a lot of trouble.
This answer was edited after #tdavies pointed out that the original did not do shell expansion.
You can use this one-liner:
cat 1.txt | xargs echo rm | sh
Which does shell expansion but executes rm the minimum number of times.
Just to provide another way, you can also simply use the following command:
$ cat to_remove
/tmp/file1
/tmp/file2
/tmp/file3
$ rm $( cat to_remove )
In this particular case, due to the dangers cited in other answers, I would
Edit in e.g. Vim and :%s/\s/\\\0/g, escaping all space characters with a backslash.
Then :%s/^/rm -rf /, prepending the command. With -r you don't have to worry about directories being listed after the files contained therein, and with -f it won't complain about missing files or duplicate entries.
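With the example list from this question, 1.txt would then look like this:
rm -rf /tmp/aaa.jpg
rm -rf /var/www1/*
rm -rf /var/www/qwerty.php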
Run all the commands: $ source 1.txt
cat 1.txt | xargs rm -f — running the command will remove the listed files only.
cat 1.txt | xargs rm -rf — running the command will also remove directories recursively.
Here's another looping example. This one also contains an if-statement as an example of checking whether the entry is a file (as opposed to a directory, for example):
for f in $(cat 1.txt); do if [ -f "$f" ]; then rm "$f"; fi; done
Here (in csh) you can remove the set of folders listed in deletelist.txt while excluding some patterns as well:
foreach f (`cat deletelist.txt`)
echo "$f" | egrep -qv 'needthisfile|\.cpp$|\.h$' && rm -rf "$f"
end
This will allow file names to have spaces (reproducible example).
# Select files of interest, here, only text files for ex.
find -type f -exec file {} \; > findresult.txt
grep ": ASCII text$" findresult.txt > textfiles.txt
# leave only the path to the file removing suffix and prefix
sed -i -e 's/:.*$//' textfiles.txt
sed -i -e 's/\.\///' textfiles.txt
#write a script that deletes the files in textfiles.txt
IFS_backup=$IFS
IFS=$'\n'
for f in $(cat textfiles.txt);
do
rm "$f";
done
IFS=$IFS_backup
# save script as "some.sh" and run: sh some.sh
In case somebody prefers sed and removing without wildcard expansion:
sed -e "s/^\(.*\)$/rm -f -- \'\1\'/" deletelist.txt | /bin/sh
Reminder: use absolute pathnames in the file or make sure you are in the right directory.
And for completeness the same with awk:
awk '{printf "rm -f -- '\''%s'\''\n",$0}' deletelist.txt | /bin/sh
Wildcard expansion will work if the single quotes are removed, but this is dangerous if a filename contains spaces; in that case you would need to quote everything except the wildcards.