How to set the next bash operation's input equal to the output of the previous operation? [duplicate] - linux

This question already has answers here:
Use the output of a command as input of the next command
(4 answers)
How to apply shell command to each line of a command output?
(9 answers)
Closed 8 years ago.
I believe it is really easy, but I couldn't find an answer anywhere. What I'm trying to do is:
echo file.war | sed s/.war// | rm -rf ???
to pass the output of the sed command to rm -rf.
Not sure if this is the right way to do it...
Thanks for your help in advance!

This is what 'xargs' does.
echo file.war | sed 's/\.war$//' | xargs rm -rf
And note my changes to your regex: it needs to be anchored, the '.' needs to be escaped, and the sed script should be quoted so the shell doesn't strip the backslash.
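To see what xargs will actually run, you can put echo in front of rm as a dry run:
echo file.war | sed 's/\.war$//' | xargs echo rm -rf
which prints rm -rf file without deleting anything.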

Other possibilities, in bash:
Use command substitution:
rm -rf $(echo file.war | sed 's/\.war$//')
Use a variable and shell parameter expansion:
file=file.war
rm -rf "${file%.war}"
Instead of the echo | sed, use a here-string (and the command substitution):
rm -rf $(sed 's/\.war$//' <<< "file.war")
This was in fact just to show you the command substitution $(...) construct, avoiding xargs, which will fail miserably if you have file names containing funny symbols like spaces. It also shows that in your case sed is useless (thanks to shell parameter expansion) and that, if you really need sed, the echo | sed can be avoided in bash.
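A quick way to convince yourself the parameter expansion does what you want is to echo it before committing to rm:
$ file=file.war
$ echo "${file%.war}"
file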
I don't know what exactly you're trying to achieve. I could imagine:
you're trying to delete all files file such that file.war exists in the current directory. In this case, I would do:
for file in *.war; do
rm -rf -- "${file%.war}"
done
You have a file called filenames that contains lines like:
file1.war
file2.war
...
filen.war
and you want to delete all files file1, file2, …, filen. Then I would do:
while read -r file; do
[[ ( $file = *.war ) && ( -f $file ) ]] || continue
rm -rf -- "${file%.war}"
done < filenames

An alternative approach is
rm -rf `echo file.war | sed 's/\.war$//'`

Related

How can I delete the oldest n groups of files with the same prefix?

In Linux I use InfluxDB which can make a backup of the database for archival purposes. Each backup comprises a series of files with the same prefix "/tank/Backups/var/Influxdb/20191225T235655Z." and different extensions.
I wanted to write a bash script which first deletes the oldest existing backups, then creates a new one (here I paste only the removal):
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' | \
sort -ru | sed 's/$/.*/' | tail -n +4 | xargs -d '\n' -r rm --
However, when I run the script as "sudo", I get
rm: cannot remove '/tank/Backups/var/Influxdb/20191225T235655Z.*': No such file or directory
When I run the quoted script without the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct. Also, if I manually write
sudo rm /tank/Backups/var/Influxdb/20190930T215357Z.*
the command succeeds.
Why is the script reporting an error?
I'm using Ubuntu 18.04 and the folder "/tank" is a ZFS volume.
Better to do:
find /tank/Backups/var/Influxdb/* -mtime +5 -delete
to remove files older than 5 days.
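To preview what would be removed, replace -delete with -print first as a safe dry run:
find /tank/Backups/var/Influxdb/* -mtime +5 -print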
Then you can run the command that creates the new backup.
Explaining the Error
This answer is only here to explain the error and give a deeper understanding of what is happening. If you are simply looking for an elegant solution search for other answers.
When I run the quoted script without the last part, I get:
/tank/Backups/var/Influxdb/20190930T215357Z.*
/tank/Backups/var/Influxdb/20190930T215352Z.*
which is correct
The listed strings are not what you want. When you pass these paths to rm it sees them just as literal strings, that is, two files whose names end with a literal *. Since you don't have such files you get an error.
When you type rm * manually into your console, bash (not rm!) does the globbing. bash searches for matching files and replaces the * with the list it finds. Only after that does bash execute rm foundFile1 foundFile2 .... rm never sees the *.
Strings inside a pipeline are not processed by bash, but by the commands in the pipeline, in your case rm. rm does not glob.
You could run bash inside your pipeline and let it expand the * you inserted earlier. To this end, replace the last command in your pipeline with xargs -r bash -c 'rm -- $*' --. However, note that your paths are not quoted here; if there are spaces or a literal * in your filenames, the command will break. The lack of quoting is necessary for globbing, as a quoted "*" is not expanded by bash.
To quote your files you have to insert the * glob inside the bash command:
ls -tp /tank/Backups/var/Influxdb/* | grep -v '/$' | sed -E 's/\..+//' |
sort -ru | tail -n +4 | xargs -d\\n -L1 -r bash -c 'rm -- "$0."*'
The above command is only a simple fix for your pipeline. It is neither elegant nor very robust; using a tool like find is strongly recommended.
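For comparison, here is a glob-based bash sketch that avoids parsing ls entirely. It assumes the timestamped prefix naming shown above (so lexicographic order equals chronological order) and keeps the 3 newest backup groups; the count is illustrative:
#!/bin/bash
shopt -s nullglob
declare -A seen
prefixes=()
for f in /tank/Backups/var/Influxdb/*.*; do
    p=${f%%.*}                # strip everything from the first dot on
    [[ ${seen[$p]} ]] && continue
    seen[$p]=1
    prefixes+=("$p")
done
# Newest first; the ISO-like timestamps sort chronologically.
mapfile -t sorted < <(printf '%s\n' "${prefixes[@]}" | sort -r)
for p in "${sorted[@]:3}"; do
    rm -f -- "$p".*           # the glob is expanded by bash, not by rm
done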

linux shell command mv many files

I have many files like 1a1, 2a2, 3a3 and I want to mv the file names to 1b1, 2b2, 3b3. That means replacing 'a' with 'b' in these file names.
I have tried the command like:
for f in */*; do
mv "$f" "${f/a/b}"
done
ls | xargs -i mv {} ${{}/a/b}
ls | xargs -i mv {} \`echo {}|tr -t 'a' 'b'\`
but none works.
I know a command
rename 'a' 'b' *
can work.
But I still want to figure out how to do this with for or xargs combined with other commands. After all, in everyday use they are much more general than the special-purpose rename command.
Please help me, thanks.
#!/bin/bash
for old in *
do
    new=$(echo "$old" | sed -e 's/a/b/')
    echo mv "$old" "$new" >&2
    mv -- "$old" "$new"
done
This example will let you work up to more complex name transformations as you learn how to use the sed(1) command.
The loop walks over every name the glob matches; on each iteration it computes a new name new by transforming the original $old name, then executes mv with the old and new values.
Just in case you want to know with rename:
rename 's/(.*)a(.*)/$1b$2/' *
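For the record, the first loop in the question above was nearly right; the glob just needs to match files in the current directory (* rather than */*). A minimal sketch using bash pattern substitution:
for f in *a*; do
    mv -- "$f" "${f/a/b}"    # replaces the first 'a' in each name
done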

How to remove all non-single files in a Linux directory?

I have duplicated files in directory on Linux machine which are listed like that:
ltulikowski@lukasz-pc:~$ ls -1
abcd
abcd.1
abcd.2
abdc
abdc.1
acbd
I want to remove all files which aren't single, so as a result I should have:
ltulikowski@lukasz-pc:~$ ls -1
acbd
The function uses extglob, so before execution, set extglob: shopt -s extglob
rm_if_dup_exist(){
    arr=()
    for file in *.+([0-9]); do
        base=${file%.*}
        if [[ -e $base ]]; then
            arr+=("$base" "$file")
        fi
    done
    rm -f -- "${arr[@]}"
}
This also supports file names with several digits after the dot, e.g. abcd.250 is acceptable as well.
Usage example with your input:
$ touch abcd abcd.1 abcd.2 abdc abdc.1 acbd
$ rm_if_dup_exist
$ ls
acbd
Please notice that if, for example, abcd.1 exists but abcd does not, it won't delete abcd.1.
Here is one way to do it:
for f in *.[0-9]; do rm -- "${f%.*}"*; done
This may report "No such file or directory" errors since some files will be deleted more than once (abcd in your example). If versions always start with .1 you can restrict the match to that.
You can use:
while read -r f; do
rm "$f"*
done < <(printf "%s\n" * | cut -d. -f1 | uniq -d)
printf, cut and uniq are used to get duplicate entries (part before dot) in current directory.
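To preview which prefixes the pipeline flags before anything is deleted, run just the part inside the process substitution; with the example files it prints:
$ printf "%s\n" * | cut -d. -f1 | uniq -d
abcd
abdc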
The command
rm *.*
should do the trick if I understand you correctly. Use ls *.* first to confirm what would match.

Generating a script to delete a list of files

I have a file containing a list of paths I want to delete.
Adding rm in front of each path (to generate a script that will run these deletions) seems like the obvious approach. How can I do this?
Changing a list of filenames into a shell script by prepending rm to the beginning of each line is dangerous practice: Filenames may not map to themselves when interpreted by a shell, and may even have side effects that include running arbitrary commands. Don't do that.
If you want to delete all files named in a file, just use xargs to directly invoke rm with the filenames passed:
xargs rm -f -- <input-file
Note that this will have xargs attempt to interpret escape characters, quotes, etc. inside the names; if you don't want this, and have GNU xargs:
xargs -d $'\n' rm -f -- <input-file
Similarly, if you have control over your input file's format, you should use a NUL-delimited stream of filenames rather than a newline-delimited list (POSIX filesystems allow newline literals inside filenames). If your input file is NUL-delimited, you can use:
xargs -0 rm -f -- <null-delimited-input-file
If you really want to generate a shell script that will delete a listed set of names, by the way, you can do this in bash, like so:
while IFS= read -r filename; do
printf 'rm -f -- %q\n' "$filename"
done <input-list >output-script
Using printf %q escapes content in such a way that when reread by bash, it will be parsed as its literal contents (thus, putting backslashes before characters like * or $ which might otherwise be interpreted).
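For example, a name containing spaces comes out shell-safe:
$ printf 'rm -f -- %q\n' 'file name with spaces.txt'
rm -f -- file\ name\ with\ spaces.txt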
That said, because this invokes rm once per file, it will be less efficient than xargs (which passes multiple filenames to each rm invocation).
There is actually a middle ground: you can have xargs invoke bash and generate a safely quoted list there, with only a minimal number of invocations:
{
echo "#!/bin/bash"
xargs bash -c 'printf "rm -f -- "; printf "%q " "$@"; printf "\n"' _
} <input-file >output-script
The trailing _ fills in $0, so all the filenames xargs appends land in "$@".
You can use sed:
sed 's/^/rm /' foo.sh > foo2.sh
^ matches the beginning of a line, so rm (plus a space) is prepended to each line.
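For example, with two illustrative paths in foo.sh:
$ cat foo.sh
/tmp/old1.log
/tmp/old2.log
$ sed 's/^/rm /' foo.sh
rm /tmp/old1.log
rm /tmp/old2.log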

Unix: How to delete files listed in a file

I have a long text file with a list of file masks I want to delete.
Example:
/tmp/aaa.jpg
/var/www1/*
/var/www/qwerty.php
I need to delete them. Tried rm `cat 1.txt` and it says the list is too long.
Found this command, but when I check folders from the list, some of them still have files:
xargs rm <1.txt
A manual rm call removes files from those folders, so it is not a permissions issue.
This is not very efficient, but will work if you need glob patterns (as in /var/www/*)
for f in $(cat 1.txt) ; do
rm "$f"
done
If you don't have any patterns and are sure your paths in the file do not contain whitespaces or other weird things, you can use xargs like so:
xargs rm < 1.txt
Assuming that the list of files is in the file 1.txt, then do:
xargs rm -r <1.txt
The -r option causes recursion into any directories named in 1.txt.
If any files are read-only, use the -f option to force the deletion:
xargs rm -rf <1.txt
Be cautious with input to any tool that does programmatic deletions. Make certain that the files named in the input file are really to be deleted. Be especially careful about seemingly simple typos. For example, if you enter a space between a file and its suffix, it will appear to be two separate file names:
file .txt
is actually two separate files: file and .txt.
This may not seem so dangerous, but if the typo is something like this:
myoldfiles *
Then instead of deleting all files that begin with myoldfiles, you'll end up deleting myoldfiles and all non-dot-files and directories in the current directory. Probably not what you wanted.
Use this:
while IFS= read -r file ; do rm -- "$file" ; done < delete.list
If you need glob expansion you can omit quoting $file:
IFS=""
while read -r file ; do rm -- $file ; done < delete.list
But be warned that file names can contain "problematic" content, and I would not use the unquoted version. Imagine this pattern in the file:
*
*/*
*/*/*
This would delete quite a lot from the current directory! I would encourage you to prepare the delete list in a way that glob patterns aren't required anymore, and then use quoting like in my first example.
You could use '\n' to define the newline character as the delimiter:
xargs -d '\n' rm < 1.txt
Be careful with -rf because it can delete what you don't want if 1.txt contains paths with spaces. That's why the newline delimiter is a bit safer.
On BSD systems, xargs has no -d option; there you can translate the newlines to NUL bytes and use the -0 option, which takes NUL as the delimiter:
tr '\n' '\0' < 1.txt | xargs -0 rm
xargs -I{} sh -c 'rm "{}"' < 1.txt should do what you want. Be careful with this command as one incorrect entry in that file could cause a lot of trouble.
This answer was edited after @tdavies pointed out that the original did not do shell expansion.
You can use this one-liner:
cat 1.txt | xargs echo rm | sh
Which does shell expansion but executes rm the minimum number of times.
Just to provide another way, you can also simply use the following command:
$ cat to_remove
/tmp/file1
/tmp/file2
/tmp/file3
$ rm $( cat to_remove )
In this particular case, due to the dangers cited in other answers, I would
Edit in e.g. Vim and :%s/\s/\\\0/g, escaping all space characters with a backslash.
Then :%s/^/rm -rf /, prepending the command. With -r you don't have to worry to have directories listed after the files contained therein, and with -f it won't complain due to missing files or duplicate entries.
Run all the commands: $ source 1.txt
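Applied to a path containing a space, the two substitutions turn, for example,
/var/www/my file.php
into
rm -rf /var/www/my\ file.php
which source then executes as a single command per line.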
cat 1.txt | xargs rm -f
Running the above command deletes the listed files only.
cat 1.txt | xargs rm -rf
Running the above command deletes recursively, so directories are removed as well.
Here's another looping example. This one also contains an if-statement as an example of checking whether the entry is a file (as opposed to, say, a directory):
for f in $(cat 1.txt); do if [ -f "$f" ]; then rm "$f"; fi; done
Here you can use a set of folders from deletelist.txt while sparing some patterns as well (csh syntax):
foreach f ( `cat deletelist.txt` )
    rm -rf `ls | egrep -v 'needthisfile|\.cpp$|\.h$'`
end
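The same idea in bash, iterating over the listed directories and sparing the protected patterns (a sketch assuming GNU xargs; the pattern names come from the answer above):
while read -r d; do
    # delete everything inside $d except the protected names
    ls "$d" | egrep -v 'needthisfile|\.cpp$|\.h$' | xargs -r -I{} rm -rf "$d/{}"
done < deletelist.txt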
This will allow file names to have spaces (reproducible example).
# Select files of interest; here, only text files as an example.
find . -type f -exec file {} \; > findresult.txt
grep ": ASCII text$" findresult.txt > textfiles.txt
# Leave only the path to the file, removing the suffix and prefix.
sed -i -e 's/:.*$//' textfiles.txt
sed -i -e 's/^\.\///' textfiles.txt
# Delete the files listed in textfiles.txt, splitting on newlines only.
IFS_backup=$IFS
IFS=$'\n'
for f in $(cat textfiles.txt); do
    rm -- "$f"
done
IFS=$IFS_backup
# Save the script as "some.sh" and run: bash some.sh
In case somebody prefers sed and removing without wildcard expansion:
sed -e "s/^\(.*\)$/rm -f -- \'\1\'/" deletelist.txt | /bin/sh
Reminder: use absolute pathnames in the file or make sure you are in the right directory.
And for completeness the same with awk:
awk '{printf "rm -f -- '\''%s'\''\n",$1}' deletelist.txt | /bin/sh
Wildcard expansion will work if the single quotes are removed, but this is dangerous if a filename contains spaces; you would then need to add quotes around everything except the wildcards.
