How to use ls command output in rm for a particular directory - linux

I want to delete oldest files in a directory when the number of files is greater than 5. I'm using
(ls -1t | tail -n 3)
to get the oldest 3 files in the directory. This works exactly as I want. Now I want to delete them in a single command with rm. As I'm running these commands on a Linux server, cd'ing into the directory and then deleting is not working, so I need to combine either find or ls with rm to delete the oldest 3 files. Please help out.
Thanks :)

If you want to delete files from some arbitrary directory, then pass the directory name into the ls command. The default is to use the current directory.
Then use $() command substitution to pass the result of tail to rm, like this:
rm $(ls -1t dirname | tail -n 3)
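Note that ls prints bare file names without the directory prefix, so the paths handed to rm have to be valid from wherever you run the command. A small sketch (assuming file names without spaces or newlines, and dirname as a placeholder) that keeps your own working directory unchanged by cd'ing in a subshell:
( cd dirname && rm $(ls -1t | tail -n 3) )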

rm $(ls -1t | tail -n 3) 2> /dev/null

ls may print a "No such file or directory" error message, which may cause rm to run unnecessarily on that output.
With the help of the following answers: find - suppress "No such file or directory" errors and https://unix.stackexchange.com/a/140647/198423
find $dirname -type d -exec ls -1t {} + | tail -n 3 | xargs rm -rf
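A more robust variant (a sketch assuming GNU find and xargs; it still breaks on file names containing newlines) sorts by modification time explicitly instead of parsing ls:
find "$dirname" -maxdepth 1 -type f -printf '%T@\t%p\n' | sort -n | head -n 3 | cut -f2- | xargs -d '\n' rm --
Here %T@ prints the modification time as seconds since the epoch, sort -n puts the oldest first, head -n 3 keeps the three oldest, and cut -f2- strips the timestamp before the paths reach rm.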

Related

Recursively go to directories that start with *TEST* and preserve only the latest 5 folders

Here is my directory structure.
./TEST1/automation
./TEST2_1/automation
./TEST3.4/automation
./general/automation
I want to preserve only the latest 5 sub-folders under all directories that match TEST*/automation.
Currently, my script goes into each directory as below and executes the command:
./TEST1/automation
ls -dt */ | tail -n +5 | xargs rm -rf
./TEST2_1/automation
ls -dt */ | tail -n +5 | xargs rm -rf
./TEST3.4/automation
ls -dt */ | tail -n +5 | xargs rm -rf
Every time we add a new folder that starts with TEST, I have to manually update the script.
Basically, go into all directories that match TEST*/automation and preserve only the latest 5 folders.
Try this one:
find . -regex '.*/TEST.*/automation' -print0 | xargs -0 -I {} bash -c 'cd "{}" && ls -t | tail -n +6 | xargs -I @ echo rm -rf -- "@"'
If the output looks alright (check that it does indeed show "rm ..." for all files/directories you want to get rid of), remove the echo.
Caveat: in the second part of the pipeline, ls does not explicitly look for directories, so it will also list plain files. From your description it's unclear whether your automation directories contain both files and directories or just directories.
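Alternatively, a plain glob loop avoids the find/regex machinery (a sketch; xargs -r is a GNU extension, and directory names containing newlines would still break it). The subshell keeps the outer working directory unchanged, and tail -n +6 preserves the latest 5 sub-folders:
for d in ./TEST*/automation/; do
  ( cd "$d" && ls -dt */ | tail -n +6 | xargs -r rm -rf -- )
done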

Remove older backup from directory using shell command

In my shell script, I am creating a backup of my folder. I am scheduling this activity with a cronjob, and the schedule keeps changing.
It keeps each backup with a timestamp in the name, for example:
cd /tmp/BACKUP_DIR
backup_06-05-2014.tar
backup_06-08-2014.tar
backup_06-10-2014.tar
What I want: whenever I run the script, it should keep only the latest backup and the one taken before it, and delete the remaining backups.
Like if I run the script now, it should keep
backup_06-10-2014.tar
backup_06-18-2014.tar
And delete all the other ones. What rm command should I use?
Try as follows:
cd /tmp/BACKUP_DIR && rm $(ls -1t | tail -n +3)
This lists the file names sorted by modification time (newest first) and removes everything except the two newest.
You could try deleting files older than 7 days using a find command, for example:
find /tmp/BACKUP_DIR -maxdepth 1 -type f -name "backup_*.tar" -mtime +6 -exec rm -f {} \;
Use
rm -rf `ls -lth backup_*.tar | awk '{print $NF}' | tail -n +4`
ls -lth backup_*.tar gives the listing of backup files sorted by modification time (newest at the top)
awk '{print $NF}' prints just the file names (the last column) and passes them to tail
tail -n +4 prints the file names starting from the fourth, so the three newest are kept (use tail -n +3 to keep only the two newest)
Finally, tail's output is fed to rm
Another simplified method
rm -rf `ls -1t backup_*.tar | tail -n +3`

How to delete all files that were recently created in a directory in linux?

I untarred something into a directory that already contained a lot of things. I wanted to untar into a separate directory instead. Now there are too many files to distinguish between. However, the files that I have untarred have been created just now (right?) and the original files haven't been modified for long (at least a day). Is there a way to delete just these untarred files based on their creation information?
Tar usually restores file timestamps, so filtering by time is not likely to work.
If you still have the tar file, you can use it to delete what you unpacked with something like:
tar tf file.tar --quoting-style=shell-always |xargs rm -i
The above will work in most cases, but not all (filenames that have a carriage return in them will break it), so be careful.
You could remove the directories by adding -r to that, but it's probably safer to just remove the toplevel directories manually.
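To see which top-level entries the archive created, so the directories can be removed by hand, something like this sketch works:
tar tf file.tar | cut -d/ -f1 | sort -u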
find . -mtime -1 -type f | xargs rm
but test first with
find . -mtime -1 -type f | xargs echo
There are several different answers to this question in order of increasing complexity.
First, if this is a one off, and in this particular instance you are absolutely sure that there are no weird characters in your filenames (spaces are OK, but not tabs, newlines or other control characters, nor unicode characters) this will work:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | egrep -v '^\./$' | tr '\n' '\0' | xargs -0 rm -r
All that egrepping is to skip out on all the subdirectories of the subdirectories.
Another way to do this that works with funky filenames is this:
mkdir foodir
cd foodir
tar -xf ../file.tar
for file in *; do rm -rf ../"$file"; done
That will create a directory in which your archive has been expanded, but it sounds like you wanted that already anyway. It also will not handle any files whose names start with a dot (.).
To make that method work with files that start with ., do this:
mkdir foodir
cd foodir
tar -xf ../file.tar
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 sh -c 'for file in "$@"; do rm -rf ../"$file"; done' junk
Lastly, taking from Mat's answer, you can do this and it will work for any filename and not require you to untar the directory again:
tar -tf file.tar | egrep '^(\./)?[^/]+(/)?$' | grep -v '^\./$' | tr '\n' '\0' | xargs -0 bash -c 'for fname in "$@"; do fname="$(echo -ne "$fname")"; echo -n "$fname"; echo -ne "\0"; done' junk | xargs -0 rm -r
You can handle files and directories in one pass with:
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort | bash
You can see what is going to happen if you leave off the pipe to 'bash':
tar -tf ../test/bob.tar --quoting-style=shell-always | sed -e "s/^\(.*\/\)'$/rmdir \1'/; t; s/^\(.*\)$/rm \1/;" | sort
To handle filenames with linefeeds you need more processing.

Using find - Deleting all files/directories (in Linux) except any one

If we want to delete all files and directories, we use rm -rf *.
But what if I want all files and directories to be deleted in one shot, except one particular file?
Is there any command for that? rm -rf * gives the ease of deleting everything in one shot, but deletes even my favourite file/directory.
Thanks in advance
find can be a very good friend:
$ ls
a/ b/ c/
$ find * -maxdepth 0 -name 'b' -prune -o -exec rm -rf '{}' ';'
$ ls
b/
$
Explanation:
find * -maxdepth 0: select everything selected by * without descending into any directories
-name 'b' -prune: do not bother (-prune) with anything that matches the condition -name 'b'
-o -exec rm -rf '{}' ';': call rm -rf for everything else
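To preview what would be deleted before running it, the -exec part can be swapped for -print (a quick sketch):
find * -maxdepth 0 -name 'b' -prune -o -print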
By the way, another, possibly simpler, way would be to move or rename your favourite directory so that it is not in the way:
$ ls
a/ b/ c/
$ mv b .b
$ ls
a/ c/
$ rm -rf *
$ mv .b b
$ ls
b/
Short answer
ls | grep -v "z.txt" | xargs rm
Details:
The thought process for the above command is :
List all files (ls)
Ignore one file named "z.txt" (grep -v "z.txt")
Delete the listed files other than z.txt (xargs rm)
Example
Create 5 files as shown below:
echo "a.txt b.txt c.txt d.txt z.txt" | xargs touch
List all files except z.txt
ls|grep -v "z.txt"
a.txt
b.txt
c.txt
d.txt
We can now delete (rm) the listed files by using the xargs utility:
ls|grep -v "z.txt"|xargs rm
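Note that grep -v "z.txt" also drops any name that merely contains z.txt (the dot matches any character), and a bare xargs splits on spaces. A slightly stricter variant (a sketch assuming GNU grep and xargs; file names containing newlines still break it):
ls | grep -vxF "z.txt" | xargs -d '\n' rm --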
You can type it right on the command line or use this one-liner in a script
files=`ls | grep -v "my_favorite_dir"`; for file in $files; do rm -rvf $file; done
P.S. I suggest the -i switch for rm to prevent deletion of important data.
P.P.S. You can write a small script based on this solution and place it in /usr/bin (e.g. /usr/bin/rmf). Then you can use it as an ordinary command:
rmf my_favorite_dir
The script looks like (just a sketch):
#!/bin/bash
if [[ -z $1 ]]; then
    files=`ls`
else
    files=`ls | grep -v "$1"`
fi
for file in $files; do
    rm -rvi "$file"
done
At least in zsh
rm -rf ^filename
could be an option, if you only want to preserve one single file.
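If zsh complains about the ^ pattern, the EXTENDED_GLOB option probably needs to be enabled first (a quick sketch):
setopt extendedglob
rm -rf ^filename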
If it's just one file, one simple way is to move that file to /tmp or something, rm -Rf the directory and then move it back. You could alias this as a simple command.
The other option is to do a find and then grep out what you don't want (using -v, or directly using one of find's predicates) and then rm-ing the remaining files.
For a single file, I'd do the former. For anything more, I'd write something custom similar to what thkala said.
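A quick sketch of the first option (favourite.file is a placeholder; this clears the current directory's contents rather than removing the directory itself, and ./* does not match dot files):
mv favourite.file /tmp/ && rm -rf ./* && mv /tmp/favourite.file .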
In bash you have the !() glob operator, which inverts the matched pattern. So to delete everything except the file my_file_name.txt, try this:
shopt -s extglob
rm -f !(my_file_name.txt)
See this article for more details:
http://karper.wordpress.com/2010/11/17/deleting-all-files-in-a-directory-with-exceptions/
I don't know of such a program, but I have wanted it in the past a few times. The basic syntax would be:
IFS='
'
for f in $(except "*.c" "*.h" -- *); do
    printf '%s\n' "$f"
done
The program I have in mind has three modes:
exact matching (with the option -e)
glob matching (default, like shown in the above example)
regex matching (with the option -r)
It takes the patterns to be excluded from the command line, followed by the separator --, followed by the file names. Alternatively, the file names might be read from stdin (if the option -s is given), each on a line.
Such a program should not be hard to write, in either C or the Shell Command Language. And it makes a good exercise for learning the Unix basics. When you do it as a shell program, you have to watch for filenames containing whitespace and other special characters, of course.
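A minimal sketch of the default glob-matching mode in plain sh (the tool itself is hypothetical; this sketch ignores the -e, -r and -s options and breaks on patterns containing whitespace):
#!/bin/sh
# except: print the file names given after -- that match none of the patterns given before --
pats=
while [ $# -gt 0 ] && [ "$1" != -- ]; do
  pats="$pats $1"
  shift
done
[ $# -gt 0 ] && shift      # drop the --
set -f                     # keep the shell from expanding the patterns themselves
for f in "$@"; do
  keep=yes
  for p in $pats; do
    # the case statement performs glob matching of $f against pattern $p
    case $f in
      $p) keep=no ;;
    esac
  done
  [ "$keep" = yes ] && printf '%s\n' "$f"
done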
I see a lot of long-winded solutions here that work, but with
a/ b/ c/ d/ e/
rm -rf *.* !(b*)
this removes everything except directory b/ and its contents (assuming your file is in b/).
Then just cd b/ and
rm -rf *.* !(filename)
to remove everything else, but the file (named "filename") that you want to keep.
mv subdir/preciousfile ./
rm -rf subdir
mkdir subdir
mv preciousfile subdir/
This looks tedious, but it is rather safe:
avoids complex logic
never use rm -rf *, its results depend on your current directory (which could be / ;-)
never use a globbing *: its expansion is limited by ARG_MAX.
allows you to check the error after each command, and maybe avoid the disaster caused by the next command.
avoids nasty problems caused by space or NL in the filenames.
cd ..
ln trash/useful.file ./
rm -rf trash/*
mv useful.file trash/
You need to use a regular expression for this. Write a regular expression which selects all files except the one you need.
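For example, with GNU find (a sketch; keepme.txt is a placeholder name, and -regex matches against the whole path, hence the leading ./):
find . -mindepth 1 -maxdepth 1 ! -regex './keepme\.txt' -exec rm -rf -- {} +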

How to copy a file to multiple directories using the gnu cp command

Is it possible to copy a single file to multiple directories using the cp command ?
I tried the following, which did not work:
cp file1 /foo/ /bar/
cp file1 {/foo/,/bar}
I know it's possible using a for loop, or find. But is it possible using the gnu cp command?
You can't do this with cp alone but you can combine cp with xargs:
echo dir1 dir2 dir3 | xargs -n 1 cp file1
Will copy file1 to dir1, dir2, and dir3. xargs will call cp 3 times to do this; see the man page for xargs for details.
No, cp can copy multiple sources but will only copy to a single destination. You need to arrange to invoke cp multiple times - once per destination - using, as you say, a loop or some other tool.
Wildcards also work with Robert's code:
echo ./fs*/* | xargs -n 1 cp test
I would use cat and tee based on the answers I saw at https://superuser.com/questions/32630/parallel-file-copy-from-single-source-to-multiple-targets instead of cp.
For example:
cat inputfile | tee outfile1 outfile2 > /dev/null
As far as I can see, you can use the following:
ls | xargs -n 1 cp -i file.dat
The -i option of the cp command means that you will be asked whether to overwrite a file in the current directory with file.dat. Though it is not a completely automatic solution, it worked out for me.
These answers all seem more complicated than the obvious:
for i in /foo /bar; do cp "$file1" "$i"; done
ls -db di*/subdir | xargs -n 1 cp File
-b is for the case where there is a space in a directory name; otherwise it would be split into separate items by xargs. I had this problem with the echo version.
Not using cp per se, but...
This came up for me in the context of copying lots of Gopro footage off of a (slow) SD card to three (slow) USB drives. I wanted to read the data only once, because it took forever. And I wanted it recursive.
$ tar cf - src | tee >( cd dest1 ; tar xf - ) >( cd dest2 ; tar xf - ) | ( cd dest3 ; tar xf - )
(And you can add more of those >() sections if you want more outputs.)
I haven't benchmarked that, but it's definitely a lot faster than cp-in-a-loop (or a bunch of parallel cp invocations).
If you want to do it without a forked command:
tee <inputfile file2 file3 file4 ... >/dev/null
To copy with xargs to directories matched by wildcards on Mac OS, the only solution that worked for me with spaces in the directory name is:
find ./fs*/* -type d -print0 | xargs -0 -n 1 cp test
Where test is the file to copy
And ./fs*/* the directories to copy to
The problem is that xargs treats spaces as argument separators; the solutions that change the delimiter character using -d or -E unfortunately do not work properly on Mac OS.
Essentially equivalent to the xargs answer, but in case you want parallel execution:
parallel -q cp file1 ::: /foo/ /bar/
So, for example, to copy file1 into all subdirectories of current folder (including recursion):
parallel -q cp file1 ::: `find -mindepth 1 -type d`
N.B.: This probably only conveys any noticeable speed gains for very specific use cases, e.g. if each target directory is a distinct disk.
It is also functionally similar to the '-P' argument for xargs.
No - you cannot.
I've found on multiple occasions that I could use this functionality so I've made my own tool to do this for me.
http://github.com/ddavison/branch
pretty simple -
branch myfile dir1 dir2 dir3
ls -d */ | xargs -iA cp file.txt A
Suppose you want to copy fileName.txt to all sub-directories within present working directory.
Get the names of all sub-directories through ls and save them to a temporary file, say allFolders.txt:
ls -d */ > allFolders.txt
Print the list and pass it to the xargs command:
cat allFolders.txt | xargs -n 1 cp fileName.txt
Another way is to use cat and tee as follows:
cat <source file> | tee <destination file 1> | tee <destination file 2> [...] > <last destination file>
I think this would be pretty inefficient though, since the job would be split among several processes (one per destination) and the hard drive would be writing several files at once over different parts of the platter. However if you wanted to write a file out to several different drives, this method would probably be pretty efficient (as all copies could happen concurrently).
Using a bash script
DESTINATIONPATH[0]="xxx/yyy"
DESTINATIONPATH[1]="aaa/bbb"
..
DESTINATIONPATH[5]="MainLine/USER"
NumberOfDestinations=6
for (( i=0; i<NumberOfDestinations; i++ ))
do
    cp SourcePath/fileName.ext "${DESTINATIONPATH[$i]}"
done
exit
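A slightly shorter variant (a sketch reusing the placeholder paths above) iterates over a bash array directly, so no counter variable is needed:
destinationpaths=("xxx/yyy" "aaa/bbb" "MainLine/USER")
for d in "${destinationpaths[@]}"; do
    cp SourcePath/fileName.ext "$d"
done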
If you want to copy multiple folders to multiple folders, you can do something like this:
echo dir1 dir2 dir3 | xargs -n 1 cp -r /path/toyourdir/{subdir1,subdir2,subdir3}
If all your target directories match a path expression — like they're all subdirectories of path/to — then just use find in combination with cp like this:
find ./path/to/* -type d -exec cp [file name] {} \;
That's it.
If you need to be specific about which folders to copy the file into, you can combine find with one or more greps. For example, to replace any occurrences of favicon.ico in any subfolder you can use:
find . | grep favicon\.ico | xargs -n 1 cp -f /root/favicon.ico
This will copy to the immediate sub-directories; if you want to go deeper, adjust the -maxdepth parameter.
find . -mindepth 1 -maxdepth 1 -type d| xargs -n 1 cp -i index.html
If you don't want to copy to all directories, hopefully you can filter out the directories you are not interested in. Example: copying to all folders starting with a:
find . -mindepth 1 -maxdepth 1 -type d| grep \/a |xargs -n 1 cp -i index.html
If copying to an arbitrary/disjoint set of directories you'll need Robert Gamble's suggestion.
I like to copy a file into multiple directories as such:
cp file1 /foo/; cp file1 /bar/; cp file1 /foo2/; cp file1 /bar2/
And copying a directory into other directories:
cp -r dir1/ /foo/; cp -r dir1/ /bar/; cp -r dir1/ /foo2/; cp -r dir1/ /bar2/
I know it's like issuing several commands, but it works well for me when I want to type 1 line and walk away for a while.
For example, if you are in the parent directory of your destination folders you can do:
for i in $(ls); do cp sourcefile $i; done
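Iterating over a glob instead of ls output avoids trouble with directory names containing spaces (a sketch; the trailing / makes the pattern match only directories):
for i in */; do cp sourcefile "$i"; done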
