Hi, so I'm trying to remove old backup files from a subdirectory when the number of files exceeds a maximum, and I found this command to do that:
ls -t | sed -e '1,10d' | xargs -d '\n' rm
And my changes are as follows
ls -t subdirectory | sed -e '1,$f' | xargs -d '\n' rm
Obviously, when I try running the script it gives me an error saying unknown command: f.
My only concern right now is that I'm passing the maximum number of files allowed in as an argument, so I'm storing it in f, but I'm not sure how to use that variable in the command above instead of hard-coding a specific number.
Can anyone give me any pointers? And is there anything else I'm doing wrong?
Thanks!
The title of your question says "based on modification date", so why not simply use find with its -mtime option?
find subdirectory -type f -mtime +5 -exec rm -v {} \;
will delete all files older than 5 days (note that GNU find takes a plain number of days here, not 5d).
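If your find supports it (GNU find does), the same can be done without spawning rm at all:
find subdirectory -type f -mtime +5 -delete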
The problem is that the file list you are passing to xargs does not contain the path information needed to delete the files. When the command is run from the current directory, no path is needed; but if you run ls on subdirectory, you would have to rm subdirectory/file from the current directory. Try it:
ls -t subdirectory # returns files with no path info
What you need to do is change to the subdirectory, run the removal pipeline, then change back. In one line it could be done with (note the double quotes, which let the shell expand $f, and the trailing d, which is sed's delete command):
pushd subdirectory &>/dev/null; ls -t | sed -e "1,${f}d" | xargs -d '\n' rm; popd
Other than doing it in a similar manner, you are probably better off writing a slightly longer, more flexible script that builds the list of files with the find command, to ensure the path information is retained.
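For completeness, here is a minimal sketch tying the pieces together (the script name, the hard-coded subdirectory, and the usage message are illustrative; GNU xargs is assumed for -d):
#!/bin/bash
# keep the $f newest files in subdirectory and delete the rest
f=${1:?usage: $0 max-files}   # max number of files to keep, passed as an argument
pushd subdirectory &>/dev/null || exit 1
ls -t | sed -e "1,${f}d" | xargs -r -d '\n' rm --
popd &>/dev/null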
I have a script that runs every 30 minutes to find files matching a string and automatically hard-link them to another folder. This folder is then uploaded to a backup and removed locally.
My current setup is working, but it inevitably hard-links the file again after it has been removed locally.
I want to implement a way of logging what has already been linked, so that whenever something is matched, it is also checked against a hardlinklog.txt file.
find . -name '*FILE*' -print0 | xargs -0 ln -t ~/media/
That is my current script, with the paths and filter changed.
This would be a job for grep -v -x -f hardlinklog.txt
-v: pass only non-matching lines
-x: match entire lines only
-f <file>: read the lines to check against from <file>
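A hedged sketch of how the filter might slot into the existing pipeline (it assumes hardlinklog.txt holds one previously linked path per line, and that filenames contain no newlines, since grep works line by line):
touch hardlinklog.txt   # make sure the log exists on the first run
find . -name '*FILE*' |
  grep -v -x -f hardlinklog.txt |   # drop anything already logged
  tee -a hardlinklog.txt |          # remember what we are about to link
  xargs -r -d '\n' ln -t ~/media/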
I'm using Ubuntu 16.04.1 LTS
I found a script to delete everything but the 'n' newest files in a directory.
I modified it to this:
sudo rm /home/backup/`ls -t /home/backup/ | awk 'NR>5'`
It deletes only one file. It reports the following message about the rest of the files it should have deleted:
rm: cannot remove 'delete_me_02.tar': No such file or directory
rm: cannot remove 'delete_me_03.tar': No such file or directory
...
I believe that the problem is the path. It's looking for delete_me_02.tar (and subsequent files) in the current directory, and it's somehow lost its reference to the correct directory.
How can I modify my command to keep looking in the /home/backup/ directory for all 'n' files?
Maybe find could help you do what you want:
find /home/backup -type f | xargs ls -t | tail -n +6 | xargs rm
(tail -n +6 prints from the sixth line on, so the five newest files are spared.) But I would first check what the pipeline returns (just remove | xargs rm) and verify what is going to be removed.
The command in the backticks expands to a list of bare file names, with no directory prefix:
ls -t /home/backup/ | awk 'NR>5'
a.txt b.txt c.txt ...
so the full command will now look like this:
sudo rm /home/backup/a.txt b.txt c.txt
which, I believe, makes it pretty obvious why only the first file is removed.
There is also a limit on the number of arguments you can pass to rm, so
you had better modify your script to use xargs instead:
ls -t /home/backup | tail -n +6 | xargs -I{} echo rm /home/backup/'{}'
(just remove the echo once you have verified that it produces the expected result)
After the command substitution expands, your command line looks like
sudo rm /home/backup/delete_me_01.tar delete_me_02.tar delete_me_03.tar etc
/home/backup is not prefixed to each word from the output. (Aside: don't use ls in a script; see http://mywiki.wooledge.org/ParsingLs.)
Frankly, this is something most shells just don't make easy to do properly. (Exception: with zsh, you would just use sudo rm /home/backup/*(Om[1,-6]).) I would use some other language.
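If you want to stay in bash, here is a hedged, NUL-safe sketch built on find and sort instead of ls (it assumes GNU findutils and a coreutils recent enough for the -z flags; the echo is left in so you can inspect the list first):
find /home/backup -maxdepth 1 -type f -printf '%T@ %p\0' |  # mtime and path, NUL-terminated
  sort -z -rn |        # newest first
  tail -z -n +6 |      # drop the five newest
  cut -z -d' ' -f2- |  # strip the timestamp field
  xargs -0 -r echo rm --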
I'm building a script on Linux that will remove files from the disk that aren't currently in use by the OS. I want to use the find command so I can run rm on all the files I find that are not open.
I tried so far this command without success:
find /folderToSearch/ -type f | while read -r filename ; do /sbin/fuser -s "$filename" || echo "$filename" ; done
I found this command on some website; it is supposed to print all the files that are not in use. However, even when I have a file open in vi while find runs, it still prints that filename.
To get the list of open files you can use the lsof command: take the full list of files in the directory from find, subtract the open ones, and remove the rest.
1). Get the list of open files in a directory.
lsof +D /folderToSearch | awk '{print $NF}' | sed -e '1d' | sort | uniq > /tmp/lsof
2). Get the list of files in a directory.
find /folderToSearch -type f -print >/tmp/find
3). Remove the /tmp/lsof list from /tmp/find file and then remove them.
grep -v -x -F -f /tmp/lsof /tmp/find | xargs -r -d '\n' rm -f
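Put together as one hedged sketch (the directory and temp-file paths are illustrative; it assumes file paths contain no spaces or newlines, since the comparison is line-based):
#!/bin/bash
dir=/folderToSearch
lsof +D "$dir" | awk 'NR>1 {print $NF}' | sort -u > /tmp/lsof   # currently open files
find "$dir" -type f > /tmp/find                                 # all files
grep -v -x -F -f /tmp/lsof /tmp/find | xargs -r -d '\n' rm -f   # delete the difference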
Due to an inefficient workflow, I have to copy directories between a Linux machine and a Windows machine. The directories contain symlinks which (after the Linux > Windows > Linux round trip) contain the link target as plain text (e.g. foobar.C contains the text ../../../Foo/Bar/foobar.C).
Is there an efficient way to recreate the symlinks from the contents of the file recursively for a complete directory?
I have tried:
find . | xargs ln -s ??A?? ??B?? && mv ??B?? ??A??
where I really have no idea how to populate the variables, but ??A?? should be the symlink's destination from the file and ??B?? should be the name of the file with the suffix _temp appended.
If you are certain that all the files contain a symlink, it's not very hard.
find . -type f -print0 | xargs -r -0 sh -c '
for f; do ln -s "$(cat "$f")" "${f}_temp" && mv "${f}_temp" "$f"; done' _
The _ dummy argument is necessary because the first argument after the command string given to sh -c is used to populate $0 in the subshell. The shell itself is necessary because you cannot pass a shell loop directly to xargs.
The -print0 and corresponding xargs -0 are a GNU extension to correctly cope with tricky file names. See the find manual for details.
I would perhaps add a simple verification check before proceeding with the symlinking; for example, if grep -c -m2 . on the file returns 2, skip the file (it contains more than one line of text). If you can be more specific (say, all the symlink targets begin with ../), by all means be more specific.
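A hedged sketch combining both checks with the loop above (the one-line test and the ../ prefix test; everything else is unchanged):
find . -type f -print0 | while IFS= read -r -d '' f; do
    [ "$(grep -c -m2 . "$f")" -eq 1 ] || continue    # more than one line of text: skip
    target=$(cat "$f")
    case $target in ../*) ;; *) continue ;; esac     # only accept targets starting with ../
    ln -s "$target" "${f}_temp" && mv "${f}_temp" "$f"
done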
I am trying to take some directories and transfer them from Linux to Windows. The problem is that the names on Linux contain colons, and I need to copy these directories over to names that Windows can use (I cannot rename them in place, since they are needed as they are on the server). For example, the name of a directory on the server might be:
IAPLTR2b-ERVK-LTR_chr9:113137544-113137860_-
while I need it to be:
IAPLTR2b-ERVK-LTR_chr9-113137544-113137860_-
I have about sixty of these directories, and I have collected their names, with absolute paths, in a file I call directories.txt. I need to walk through this file changing the colons to hyphens. Thus far, my attempt is this:
#!/bin/bash
$DIRECTORIES=`cat directories.txt`
for $i in $DIRECTORIES;
do
cp -r "$DIRECTORIES" "`echo $DIRECTORIES | sed 's/:/-/'`"
done
However I get the error:
./my_shellscript.sh: line 10: =/bigpartition1/JKim_Test/test_bs_1/129c-test-biq/IAPLTR1_Mm-ERVK-LTR_chr10:104272652-104273004_+.fasta: No such file or directory ./my_shellscript.sh: line 14: `$i': not a valid identifier
Can anyone here help me identify what I am doing wrong and maybe what I need to do?
Thanks in advance.
This monstrosity will rename the directories in situ:
find tmp -depth -type d -exec sh -c '[ -d "{}" ] && echo mv "{}" "$(echo "{}" | tr ":" "-")"' \;
I use -depth so it descends down into the deepest subdirectories first.
The [ -d "{}" ] is necessary because as soon as the subdirectory is renamed, its parent directory (as found by find) may no longer exist (having been renamed).
Change "echo mv" to "mv" if you're satisfied it will do what you want.