My code works, but not exactly the way I want it to. It looks through the current directory, finds folders, and examines the files within those folders. If one of the file extensions matches the value of the $Example variable, it should delete all other files that share that file's base name, regardless of extension, and rename the matching file to the same name minus the $Example extension. Here is the code:
#!/bin/sh
set -o errexit
Example=dummy
for d in *; do
    if test "$(ls -A "$d" 2>/dev/null)"; then
        if [ "$(ls -1 "$d"/*.$Example 2>/dev/null | wc -l)" -ge 1 ]; then
            cd "$d"
            for f in *.$Example; do
                fileName="${f%.$Example}"
                mv "$f" "$fileName"
                #tr "\r" "\n" < "$fileName" > "$fileName"
                listToDelete=$(find . -name "$fileName.*")
                for g in $listToDelete; do
                    rm "$g"
                done
            done
            cd ..
        fi
    fi
done
The files were created in Vim, so they should have Unix (LF) line endings rather than Windows (CRLF) endings. For some reason, once the extension has been stripped using this code, the file ends up with \r characters in it and fails to run. I added a comment where my temporary solution is located, but I was wondering whether there is some way to make mv keep the Unix formatting, or whether there is another way to achieve what I want. Thanks
The files were created in Vim, so they should have Unix (LF) line endings rather than Windows (CRLF) endings.
That has no impact on the line separator being used.
But I can think of 3 different possible causes.
Vim substitution
The \r and \n may have been mixed up, or the Windows line separator (\r\n) may have been stripped out incorrectly. If you're just trying to convert Windows line separators at some point, convert the files to Unix line endings with dos2unix instead of sed or Vim magic if possible, and then edit them. If you are adding or replacing line separators in Vim, remember that in Vim line separators are searched for with \n but replaced with \r, since Vim just loads the file data into a buffer, e.g. %s/\n/somestring/ and %s/somestring/\r/g.
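If you suspect stray carriage returns, a quick check-and-fix sketch (myscript.sh is a placeholder name; dos2unix may not be installed by default, and tr works everywhere):
file myscript.sh                        # reports "CRLF line terminators" if \r is present
dos2unix myscript.sh                    # converts the file in place
tr -d '\r' < myscript.sh > clean.sh     # portable fallback if dos2unix is unavailable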
Cygwin
If you're using Cygwin, I'm pretty sure Vim there defaults to Windows line separators, but you can change it to Unix line separators. I don't remember how off the top of my head, though.
Vim default line separators
Not sure how this would have happened if you're on a GNU/Linux system, but the line separator Vim uses for a file can be set to Unix with :e ++ff=unix
This answer does not address the issue of a possible stray \r in the script source code, but rather gives an alternative way of doing what I believe the user wants to achieve:
#!/bin/bash
ext='dummy'
tmpfile=$(mktemp)
find . -type f -name "*.$ext" \
    -execdir bash -x -c \
    'cp "$3" "$2" &&
tee "${3%.$1}"* <"$2" >/dev/null' sh "$ext" "$tmpfile" {} \;
rm -f "$tmpfile"
This finds all files with the given extension and for each of them executes
cp "$3" "$2" && tee "${3%.$1}"* <"$2" >/dev/null
$1 will be the extension without the dot.
$2 will be the name of a temporary file.
$3 will be the found file.
The command will first copy the found file to a temporary file, then feed the temporary file through tee, which duplicates its contents to all files with the same prefix (${3%.$1} strips the extension from the found filename, and ${3%.$1}* expands to all files in the same directory that have the same prefix).
Most modern implementations of find support -execdir, which works like -exec except that the given utility is executed in the directory of the found name. Also, {} will be the pathless basename of the found file.
Given the following files in a directory:
$ ls test/
f1.bar f1.foo f2.bar f2.foo f3.bar f3.foo
f1.dummy f1.txt f2.dummy f2.txt f3.dummy f3.txt
This script does the following:
$ bash -x script.sh
+ ext=dummy
++ mktemp
+ tmpfile=/tmp/tmp.9v5JMAcA12
+ find . -type f -name '*.dummy' -execdir bash -x -c 'cp "$3" "$2" &&
tee "${3%.$1}"* <"$2" >/dev/null' sh dummy /tmp/tmp.9v5JMAcA12 '{}' ';'
+ cp f1.dummy /tmp/tmp.9v5JMAcA12
+ tee f1.bar f1.dummy f1.foo f1.txt
+ cp f2.dummy /tmp/tmp.9v5JMAcA12
+ tee f2.bar f2.dummy f2.foo f2.txt
+ cp f3.dummy /tmp/tmp.9v5JMAcA12
+ tee f3.bar f3.dummy f3.foo f3.txt
+ rm -f /tmp/tmp.9v5JMAcA12
Remove -x from the invocation of bash (and from bash on the command line) to disable tracing.
I used bash rather than sh because my sh can't grok ${3%.$1} properly.
Related
I am looking for a way to make a simple loop in bash over everything my directory contains, i.e. files, directories and links including hidden ones.
I would prefer bash specifically, but it should be as general as possible. Of course, file names (and directory names) can contain whitespace, newlines, and other symbols: everything but "/" and ASCII NUL (0x00), even as the first character. Also, the result should exclude the . and .. directories.
Here is a generator of files that the loop has to deal with:
#!/bin/bash
mkdir -p test
cd test
touch A 1 ! "hello world" \$\"sym.dat .hidden " start with space" $'\n start with a newline'
mkdir -p ". hidden with space" $'My Personal\nDirectory'
So my loop should look like this (but it has to deal with the tricky names above):
for i in * ; do
    echo ">$i<"
done
My closest try, using ls and a bash array, does not work with such names:
IFS=$(echo -en "\n\b")
l=( $(ls -A .) )
for i in "${l[@]}" ; do
    echo ">$i<"
done
unset IFS
Or using bash arrays, but the .. directory is not excluded:
IFS=$(echo -en "\n\b")
l=( [[:print:]]* .[[:print:]]* )
for i in "${l[@]}" ; do
    echo ">$i<"
done
unset IFS
* doesn't match files beginning with ., so you just need to be explicit:
for i in * .[^.]*; do
    echo ">$i<"
done
.[^.]* will match all files and directories starting with ., followed by a non-. character, followed by zero or more characters. In other words, it's like the simpler .*, but excludes . and ... If you need to match something like ..foo, then you might add ..?* to the list of patterns.
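As an aside, a sketch of a bash-only alternative using the dotglob option, which makes * match hidden names as well (globs never match . and .., so those are excluded automatically):
shopt -s dotglob nullglob
for i in *; do
    echo ">$i<"
done
shopt -u dotglob nullglob
nullglob is optional; it just makes the loop run zero times in an empty directory instead of once with a literal *.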
As chepner noted in the comments below, this solution assumes you're running GNU bash along with GNU find and GNU sort...
GNU find can be prevented from recursing into subdirectories with the -maxdepth option. Then use -print0 to end every filename with a 0x00 byte instead of the newline you'd usually get from -print.
The sort -z sorts the filenames between the 0x00 bytes.
Then, you can use sed to get rid of the dot and dot-dot directory entries (although GNU find seems to exclude the .. already).
I also used sed to get rid of the ./ in front of every filename. basename could do that too, but older systems didn't have basename, and you might not trust it to handle the funky characters right.
(These sed commands each required two cases: one for a pattern at the start of the string, and one for the pattern between 0x00 bytes. These were so ugly I split them out into separate functions.)
The read command doesn't have a -z or -0 option like some commands, but you can fake it with -d "" and blanking the IFS environment variable.
The additional -r option prevents a backslash-newline combo from being interpreted as a line continuation. (A file called backslash\\nnewline would otherwise be mangled to backslashnewline.) It might be worth seeing if other backslash-combos get interpreted as escape sequences.
remove_dot_and_dotdot_dirs()
{
    sed \
        -e 's/^[.]\{1,2\}\x00//' \
        -e 's/\x00[.]\{1,2\}\x00/\x00/g'
}

remove_leading_dotslash()
{
    sed \
        -e 's/^[.]\///' \
        -e 's/\x00[.]\//\x00/g'
}

IFS=""
find . -maxdepth 1 -print0 |
    sort -z |
    remove_dot_and_dotdot_dirs |
    remove_leading_dotslash |
    while read -r -d "" filename
    do
        echo "Doing something with file '${filename}'..."
    done
It may not be the most favorable way, but I tried the following:
while read line ; do echo "$line" ; done <<< "$(ls -a | grep -v -w ".")"
Try the find command, something like:
find .
That will list all the files in all directories, recursively.
To output only files, without the leading ./ and without the . and .. entries, try:
find . -type f -printf '%P\n'
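And if you want the original question's semantics (everything in the current directory, hidden names included, no . and .., arbitrary characters in names), a sketch assuming GNU find and bash:
find . -maxdepth 1 -mindepth 1 -printf '%P\0' |
    while IFS= read -r -d '' name; do
        echo ">$name<"
    done
-mindepth 1 drops the . entry itself, and the NUL delimiters survive newlines in file names.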
I just downloaded about 600 files from my server and need to remove the last 11 characters from each filename (not including the extension). I use Ubuntu and am searching for a command to achieve this.
Some examples are as follows:
aarondyne_kh2_13thstruggle_or_1250556383.mus should be renamed to aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 should be renamed to aarondyne_kh2_darknessofunknow.mp3
It seems that some duplicates might exist after I do this, but if the command fails to complete and tells me what the duplicates would be, I can always remove those manually.
Try using the rename command. It allows you to rename files based on a regular expression:
The following line should work out for you:
rename 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
The following changes will occur:
aarondyne_kh2_13thstruggle_or_1250556383.mus renamed as aarondyne_kh2_13thstruggle_or.mus
aarondyne_kh2_darknessofunknow_1250556659.mp3 renamed as aarondyne_kh2_darknessofunknow.mp3
You can check the actions rename will take by specifying the -n flag, like this:
rename -n 's/_\d+(\.[a-z0-9A-Z]+)$/$1/' *
For more information on how to use rename simply open the manpage via: man rename
Not the prettiest, but very simple:
echo "$filename" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!'
You'll still need to write the rest of the script, but it's pretty simple.
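For instance, a sketch of that script (assuming the files really are *.mus and *.mp3 and each base name has exactly 11 surplus characters before the extension):
for f in *.mus *.mp3; do
    new=$(echo "$f" | sed -e 's!\(.*\)...........\(\.[^.]*\)!\1\2!')
    if [ -e "$new" ]; then
        echo "skipping $f: $new already exists" >&2
    else
        mv "$f" "$new"
    fi
done
The existence check covers the duplicates the question worries about.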
find . -type f -exec sh -c 'mv {} `echo -n {} | sed -E -e "s/[^/]{11}(\\.[^\\.]+)?$/\\1/"`' ";"
one way to go:
you get a list of your files, one per line (from ls, maybe), then:
ls....|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
this will print the mv a b command
e.g.
kent$ echo "aarondyne_kh2_13thstruggle_or_1250556383.mus"|awk '{o=$0;sub(/_[^_.]*\./,".",$0);print "mv "o" "$0}'
mv aarondyne_kh2_13thstruggle_or_1250556383.mus aarondyne_kh2_13thstruggle_or.mus
to execute, just pipe it to sh
I assume there are no spaces in your filenames.
This script assumes each file has just one extension. It would, for instance, rename "foo.something.mus" to "foo.mus". To keep all extensions, remove one hash mark (#) from the first line of the loop body. It also assumes that the base of each filename is at least 12 characters long, so that removing 11 doesn't leave you with an empty name.
for f in *; do
    ext=${f##*.}
    base=${f%.$ext}
    new_f=${base%???????????}.$ext
    if [ -f "$new_f" ]; then
        echo "Will not rename $f, $new_f already exists" >&2
    else
        mv "$f" "$new_f"
    fi
done
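To preview the renames first, you can replace mv "$f" "$new_f" with echo mv "$f" "$new_f" and inspect the output before running it for real.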
On my computer running Ubuntu, I have a folder full of hundreds of files all named "index.html.n" where n starts at one and continues upwards. Some of those files are actual html files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipe with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find finds the matching files, then file reports their types. sed eliminates the other file types and also removes everything but the filenames from the output of file. Lastly, rm deletes them:
find . -name 'index.html.[0-9]*' | \
    xargs file | \
    sed -n 's/\([^:]*\): Zip archive.*/\1/p' | \
    xargs rm
I would run:
for f in index.html.*
do
    file "$f" | grep -qi zip
    [ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option if you feel confident enough
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
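For illustration, the edited cleanup.sh might end up looking something like this (the file names here are hypothetical):
#!/bin/sh
rm index.html.1
rm index.html.2
rm index.html.7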
for i in index.html.*
do
    type=$(file "$i")
    if [[ ! $type =~ "Zip" ]]
    then
        rm "$i"
    fi
done
Change the rm to ls for testing purposes.
So far I've been able to find out how to add a line at the beginning of a file but that's not exactly what I want. I'll show it with an example:
File content
some text at the beginning
Result
<added text> some text at the beginning
It's similar but I don't want to create any new line with it...
I would like to do this with sed if possible.
sed can operate on an address:
$ sed -i '1s/^/<added text> /' file
What is this magical 1s you see in every answer here? Line addressing!
Want to add <added text> on the first 10 lines?
$ sed -i '1,10s/^/<added text> /' file
Or you can use Command Grouping:
$ { echo -n '<added text> '; cat file; } >file.new
$ mv file{.new,}
The solution above adds the string to the start of the first line, but it does not insert a new line. If you want to add a whole new line at the beginning of the file, add \n at the end of the string:
sed -i '1s/^/your text\n/' file
If the file is only one line, you can use:
sed 's/^/insert this /' oldfile > newfile
If it's more than one line, use one of:
sed '1s/^/insert this /' oldfile > newfile
sed '1,1s/^/insert this /' oldfile > newfile
I've included the latter so that you know how to do ranges of lines. Both of these "replace" the start-of-line marker on the affected lines with the text you want to insert. You can also (assuming your sed is modern enough) use:
sed -i 'whatever command you choose' filename
to do in-place editing.
Use a subshell:
echo "$(echo -n 'hello'; cat filename)" > filename
Unfortunately, command substitution will remove newlines at the end of the file. To keep them, one can use:
echo -n "hello" | cat - filename > /tmp/filename.tmp
mv /tmp/filename.tmp filename
Neither grouping nor command substitution is needed.
To insert just a newline:
sed '1i\\'
You can use cat -
printf '%s' "some text at the beginning" | cat - filename
To add a line to the top of the file:
sed -i '1iText to add' filename
my two cents:
sed -i '1i /path/of/file.sh' filename
This will work even if the string contains a forward slash "/".
With a trailing newline, so the added text becomes its own line:
sed -i '1s/^/your text\n/' file
Note that on OS X, sed -i <pattern> file fails. However, if you provide a backup extension, sed -i .old <pattern> file, then file is modified in place while file.old is created. You can then delete file.old in your script.
There is a very easy way:
echo "your header" > headerFile.txt
cat yourFile >> headerFile.txt
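If the result should end up under the original name, finish with a move:
mv headerFile.txt yourFile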
PROBLEM: tag a file, at the top of the file, with the base name of the parent directory.
I.e., for
/mnt/Vancouver/Programming/file1
tag the top of file1 with Programming.
SOLUTION 1 -- non-empty files:
bn=${PWD##*/} ## bn: basename
sed -i '1s/^/'"$bn"'\n/' <file>
1s places the text at line 1 of the file.
SOLUTION 2 -- empty or non-empty files:
The sed command, above, fails on empty files. Here is a solution, based on https://superuser.com/questions/246837/how-do-i-add-text-to-the-beginning-of-a-file-in-bash/246841#246841
printf "${PWD##*/}\n" | cat - <file> > temp && mv -f temp <file>
Note that the - in the cat command is required (it reads standard input: see man cat for more information). Here, I believe, it's needed to take the output of the printf statement (arriving on standard input via the pipe) and cat that and the file into temp ... See also the explanation at the bottom of http://www.linfo.org/cat.html.
I also added -f to the mv command, to avoid being asked for confirmations when overwriting files.
To apply this to every file in a directory:
for file in *; do printf "${PWD##*/}\n" | cat - $file > temp && mv -f temp $file; done
Note also that this will break on paths with spaces; there are solutions elsewhere (e.g. file globbing, or find . -type f ... approaches) for those.
ADDENDUM: Re: my last comment, this script will allow you to recurse over directories with spaces in the paths:
#!/bin/bash
## https://stackoverflow.com/questions/4638874/how-to-loop-through-a-directory-recursively-to-delete-files-with-certain-extensi
## To allow spaces in filenames,
## at the top of the script include: IFS=$'\n'; set -f
## at the end of the script include: unset IFS; set +f
IFS=$'\n'; set -f
# ----------------------------------------------------------------------------
# SET PATHS:
IN="/mnt/Vancouver/Programming/data/claws-test/corpus test/"
# https://superuser.com/questions/716001/how-can-i-get-files-with-numeric-names-using-ls-command
# FILES=$(find $IN -type f -regex ".*/[0-9]*") ## recursive; numeric filenames only
FILES=$(find $IN -type f -regex ".*/[0-9 ]*") ## recursive; numeric filenames only (may include spaces)
# echo '$FILES:' ## single-quoted, (literally) prints: $FILES:
# echo "$FILES" ## double-quoted, prints path/, filename (one per line)
# ----------------------------------------------------------------------------
# MAIN LOOP:
for f in $FILES
do
    # Tag top of file with basename of current dir:
    printf "[top] Tag: ${PWD##*/}\n\n" | cat - $f > temp && mv -f temp $f
    # Tag bottom of file with basename of current dir:
    printf "\n[bottom] Tag: ${PWD##*/}\n" >> $f
done
unset IFS; set +f
Just for fun, here is a solution using ed which does not have the problem of not working on an empty file. You can put it into a shell script just like any other answer to this question.
ed Test <<EOF
a

.
0i
<added text>
.
1,+1 j
$ g/^$/d
wq
EOF
The above script adds the text to insert before the first line, and then joins the first and second lines. To avoid ed exiting on error with an invalid join, it first creates a blank line at the end of the file and removes it later if it still exists.
Limitations: This script does not work if <added text> is exactly equal to a single period.
echo -n "text to insert " ;tac filename.txt| tac > newfilename.txt
The first tac emits the file backwards (last line first), so "text to insert" ends up as the last, unterminated line. The second tac reverses everything again, so the inserted text lands at the beginning, joined to the original first line, and the rest of the file is back in its original order.
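A quick illustration with a hypothetical two-line file:
$ printf 'line1\nline2\n' > filename.txt
$ { tac filename.txt ; echo -n "text to insert " ; } | tac
text to insert line1
line2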
The simplest solution I found is:
echo -n "<text to add>" | cat - myFile.txt | tee myFile.txt
Notes:
Remove | tee myFile.txt if you don't want to change the file contents.
Remove the -n parameter if you want to append a full line.
Add &> /dev/null to the end if you don't want to see the output (the generated file).
This can be used to prepend a shebang to the file. Example:
# make it executable (use u+x to allow only current user)
chmod +x cropImage.ts
# append the shebang
echo '#''!'/usr/bin/env ts-node | cat - cropImage.ts | tee cropImage.ts &> /dev/null
# execute it
./cropImage.ts myImage.png
Another solution, using shell functions. Add to your init rc / env file:
addtail () { find . -type f ! -path "./.git/*" -exec sh -c "echo $* >> {}" \; }
addhead () { find . -type f ! -path "./.git/*" -exec sh -c "sed -i '1s/^/$*\n/' {}" \; }
Usage:
addtail "string to add at the beginning of file"
addtail "string to add at the end of file"
With the echo approach, if you are on macOS/BSD like me, lose the -n switch that other people suggest. And I like to define a variable for the text.
So it would be like this:
Header="my complex header that may have difficult chars \"like these quotes\" and line breaks \n\n "
{ echo "$Header"; cat "old.txt"; } > "new.txt"
mv new.txt old.txt
TL;dr -
Consider using ex. Since you want the front of a given line, the syntax is basically the same as what you might find for sed, but the option of "in place editing" is built in.
I cannot imagine an environment where you have sed but not ex/vi, unless it is a MS Windows box with some special "sed.exe", maybe.
sed & grep sort of evolved from ex / vi, so it might be better to say sed syntax is the same as ex.
You can change the line number to something besides #1 or search for a line and change that one.
source=myFile.txt
Front="This goes IN FRONT "
man true > $source
ex -s ${source} <<EOF
1s/^/$Front/
wq
EOF
$ head -n 3 $source
This goes IN FRONT TRUE(1) User Commands TRUE(1)
NAME
Long version, I recommend ex (or ed if you are one of the cool kids).
I like ex because it is portable, extremely powerful, allows me to write in-place, and/or make backups all without needing GNU (or even BSD) extensions.
Additionally, if you know the ex way, then you know how to do it in vi - and probably vim if that is your jam.
Notice that EOF is not quoted when we use "i"nsert, so the embedded echo "$str" command substitution is expanded before ex sees it:
str="+++ TOP +++" && ex -s <<EOF
r!man true
1i
`echo "$str"`
.
"0r!echo "${str}"
wq! true.txt
EOF
0r!echo "${str}" might also be used as shorthand for :0read! or :0r! that you have likely used in vi mode (it is literally the same thing) but the : is optional here and some implementations do not support "r"ead address of zero.
"r"eading directly to the special line #0 (or from line 1) would automatically push everything "down", and then you just :wq to save your changes.
$ head -n 3 true.txt | nl -ba
1 +++ TOP +++
2 TRUE(1) User Commands TRUE(1)
3
Also, most classic sed implementations do not have extensions (like \U&) that ex should have by default.
cat concatenates multiple files. <() sends the output of a command as a file. Combining these two, we can insert lines at the beginning and end of a file:
cat <(echo "line before the file") file.txt <(echo "line after the file")
I have a long text file with a list of file masks I want to delete.
Example:
/tmp/aaa.jpg
/var/www1/*
/var/www/qwerty.php
I need to delete them. I tried rm `cat 1.txt` and it says the list is too long.
I found this command, but when I check folders from the list, some of them still have files:
xargs rm <1.txt
A manual rm call removes files from such folders, so there is no issue with permissions.
This is not very efficient, but will work if you need glob patterns (as in /var/www/*)
for f in $(cat 1.txt) ; do
    rm "$f"
done
If you don't have any patterns and are sure your paths in the file do not contain whitespaces or other weird things, you can use xargs like so:
xargs rm < 1.txt
Assuming that the list of files is in the file 1.txt, then do:
xargs rm -r <1.txt
The -r option causes recursion into any directories named in 1.txt.
If any files are read-only, use the -f option to force the deletion:
xargs rm -rf <1.txt
Be cautious with input to any tool that does programmatic deletions. Make certain that the files named in the input file are really to be deleted. Be especially careful about seemingly simple typos. For example, if you enter a space between a file and its suffix, it will appear to be two separate file names:
file .txt
is actually two separate files: file and .txt.
This may not seem so dangerous, but if the typo is something like this:
myoldfiles *
Then instead of deleting all files that begin with myoldfiles, you'll end up deleting myoldfiles and all non-dot-files and directories in the current directory. Probably not what you wanted.
Use this:
while IFS= read -r file ; do rm -- "$file" ; done < delete.list
If you need glob expansion you can omit quoting $file:
IFS=""
while read -r file ; do rm -- $file ; done < delete.list
But be warned that file names can contain "problematic" content, and I would not use the unquoted version. Imagine this pattern in the file:
*
*/*
*/*/*
This would delete quite a lot from the current directory! I would encourage you to prepare the delete list in a way that glob patterns aren't required anymore, and then use quoting like in my first example.
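One way to do that preparation, as a sketch (delete.list and resolved.list are example names; this still assumes the patterns themselves contain no whitespace): expand the patterns once into a plain list you can inspect, then delete with quoting:
while IFS= read -r pattern ; do
    for path in $pattern ; do    # unquoted on purpose so the glob expands
        [ -e "$path" ] && printf '%s\n' "$path"
    done
done < delete.list > resolved.list
# after inspecting resolved.list:
while IFS= read -r file ; do rm -- "$file" ; done < resolved.list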
You could use '\n' to define the newline character as the delimiter:
xargs -d '\n' rm < 1.txt
Be careful with the -rf option because it can delete what you don't want if 1.txt contains paths with spaces. That's why the newline delimiter is a bit safer.
On BSD systems, where xargs has no -d option, you can convert the newlines to NUL characters and use the -0 option like this:
tr '\n' '\0' < 1.txt | xargs -0 rm
xargs -I{} sh -c 'rm "{}"' < 1.txt should do what you want. Be careful with this command as one incorrect entry in that file could cause a lot of trouble.
This answer was edited after @tdavies pointed out that the original did not do shell expansion.
You can use this one-liner:
cat 1.txt | xargs echo rm | sh
This does shell expansion but executes rm the minimum number of times.
Just to provide another way, you can also simply use the following command:
$ cat to_remove
/tmp/file1
/tmp/file2
/tmp/file3
$ rm $( cat to_remove )
In this particular case, due to the dangers cited in other answers, I would
Edit it in e.g. Vim and run :%s/\s/\\\0/g, escaping all whitespace characters with a backslash.
Then :%s/^/rm -rf /, prepending the command. With -r you don't have to worry to have directories listed after the files contained therein, and with -f it won't complain due to missing files or duplicate entries.
Run all the commands: $ source 1.txt
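For example, a line like /var/www/my file.txt becomes rm -rf /var/www/my\ file.txt after the two substitutions, so the space survives word splitting when the file is sourced.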
cat 1.txt | xargs rm -f removes the listed files only.
cat 1.txt | xargs rm -rf removes them recursively (directories too).
Here's another looping example. This one also contains an 'if-statement' as an example of checking to see if the entry is a 'file' (or a 'directory' for example):
for f in $(cat 1.txt); do if [ -f "$f" ]; then rm "$f"; fi; done
Here you can remove a set of folders listed in deletelist.txt while excluding some patterns as well (csh syntax):
foreach f (`cat deletelist.txt`)
    rm -rf `ls -d $f | egrep -v "needthisfile|\.cpp$|\.h$"`
end
This will allow file names to have spaces (reproducible example).
# Select files of interest, here, only text files for ex.
find . -type f -exec file {} \; > findresult.txt
grep ": ASCII text$" findresult.txt > textfiles.txt
# leave only the path to the file removing suffix and prefix
sed -i -e 's/:.*$//' textfiles.txt
sed -i -e 's/\.\///' textfiles.txt
#write a script that deletes the files in textfiles.txt
IFS_backup=$IFS
IFS=$(echo -en "\n\b")
for f in $(cat textfiles.txt)
do
    rm "$f"
done
IFS=$IFS_backup
# save script as "some.sh" and run: sh some.sh
In case somebody prefers sed and removing without wildcard expansion:
sed -e "s/^\(.*\)$/rm -f -- \'\1\'/" deletelist.txt | /bin/sh
Reminder: use absolute pathnames in the file or make sure you are in the right directory.
And for completeness, the same with awk:
awk '{printf "rm -f -- '\''%s'\''\n",$0}' deletelist.txt | /bin/sh
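For instance, an entry containing a space (hypothetical path) comes out correctly quoted:
$ echo '/tmp/my old file.txt' | awk '{printf "rm -f -- '\''%s'\''\n",$0}'
rm -f -- '/tmp/my old file.txt'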
Wildcard expansion will work if the single quotes are removed, but this is dangerous if a filename contains spaces; you would then need to add the quotes around everything except the wildcards.