Script to remove all directories older than x days but keep certain ones - linux

I'm trying to write a bash script to remove all directories and their files but keep certain ones.
drwxr-xr-x 20 ubuntu admin 4096 Jan 21 17:58 .
drwxr-xr-x 8 ubuntu admin 4096 Nov 21 16:45 ..
drwxr-xr-x 11 ubuntu admin 4096 Jan 9 13:09 1763
drwxr-xr-x 11 ubuntu admin 4096 Jan 16 16:46 1817
drwxr-xr-x 11 ubuntu admin 4096 Jan 16 17:39 1821
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 10:15 1823
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 11:57 1826
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 14:55 1827
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 21:34 1828
drwxr-xr-x 11 ubuntu admin 4096 Jan 20 13:29 1833
drwxr-xr-x 11 ubuntu admin 4096 Jan 20 16:13 1834
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 10:06 1838
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 12:51 1842
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 15:20 1845
drwxr-xr-x 11 ubuntu admin 4096 Jan 22 13:00 1848
drwxr-xr-x 11 ubuntu admin 4096 Nov 24 16:34 217
drwxr-xr-x 11 ubuntu admin 4096 Dec 2 20:44 219
drwxr-xr-x 11 ubuntu admin 4096 Dec 15 16:42 221
drwxr-xr-x 11 ubuntu admin 4096 Dec 16 12:04 225
drwxr-xr-x 2 ubuntu admin 4096 Jan 20 16:10 app-conf
lrwxrwxrwx 1 ubuntu admin 19 Jan 21 17:58 latest -> /opt/qudiniapp/1848
In the example above we'd want to clear out all non-symlinked folders except the app-conf folder.
The plan is to have this triggered by my Ansible deployment script before deployment, so we can keep our server from filling up with builds.

Provided that all directories to be deleted consist only of numbers, this would be one way to solve it:
cd /tempdir
rm -rf $(find . -type d -name "[0-9]*" | grep -v "$(basename "$(readlink latest)")")
(basename is used so that the relative paths printed by find can be matched against the symlink's absolute target.)
As this is a housekeeping job, you should create a cron job that regularly deletes old directories. The find command would then also check, for example, whether the last modification time is more than a given number of days ago:
rm -rf $(find . -type d -mtime +20 -name "[0-9]*" | grep -v "$(basename "$(readlink latest)")")
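A corresponding crontab entry could look like the following minimal sketch (the /tempdir path, the 03:00 schedule and the 20-day window are assumptions to adapt; -maxdepth 1 is added so only the top-level build directories match):
0 3 * * * cd /tempdir && rm -rf $(find . -maxdepth 1 -type d -mtime +20 -name "[0-9]*" | grep -v "$(basename "$(readlink latest)")")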

bash script:
#!/bin/bash
find /your/path -type d ! \( -path '*app-conf*' -prune \) -mtime +2 -delete
per man find
-P Never follow symbolic links. This is the default behaviour. When find examines or prints information about a file, and the file is a symbolic link, the information used shall be taken from the properties of the symbolic link itself.
-mtime n File's data was last modified n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file modification times.
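One caveat: -delete only removes empty directories, so populated build directories may survive it. A sketch of a variant that removes them with rm -rf instead (the path and the +2-day threshold are placeholders carried over from the script above; the numeric name pattern keeps app-conf out of the match, and the readlink trick from the previous answer can still be added to protect the current release):
find /your/path -maxdepth 1 -type d -name '[0-9]*' -mtime +2 -exec rm -rf {} +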

This is what I use in my Ansible deployments; hope it is helpful for you, as it does almost exactly what you need.
I always remove the oldest release on each deployment if there are >= 5 builds in the "{{ releases_path }}" directory. "{{ releases_path }}" contains directories whose names are basically (long) Git commit hashes.
- name: Find oldest release to remove
  shell: '[[ $(find "{{ releases_path | quote }}" -maxdepth 1 -mindepth 1 -type d | wc -l) -ge 6 ]] && IFS= read -r -d $''\0'' line < <(find "{{ releases_path | quote }}" -maxdepth 1 -mindepth 1 -type d -printf "%T@ %p\0" 2>/dev/null | sort -z -n); file="${line#* }"; echo "$file";'
  args:
    executable: /bin/bash
    chdir: "{{ releases_path }}"
  register: releasetoremove
  changed_when: "releasetoremove.stdout != ''"

- debug: var=releasetoremove

- name: Remove oldest release
  file: path={{ releasetoremove.stdout }} state=absent
  when: releasetoremove|changed
This is what I always have on each server in the releases directory (the last 5 are always kept):
$ ls -lt | cut -c 28-
62 Jan 22 17:42 current -> /srv/releases/2a7b80c82fb1dd658a3356fed7bba9718bc50527
4096 Jan 22 17:41 2a7b80c82fb1dd658a3356fed7bba9718bc50527
4096 Jan 22 15:22 73b1252ab4060833e43849e2e32f57fea6c6cd9b
4096 Jan 22 14:47 9df7f1097909aea69916695194ac41938a0c2e9a
4096 Jan 22 14:16 f6a2862d70f7f26ef75b67168a30fb9ef2202555
4096 Jan 22 13:49 fa89eefc5b2505e153b2e59ed02a23889400c4bf
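If you want to try the selection logic outside Ansible first, here is a plain-bash sketch of the same idea (the /srv/releases path is an assumption; it prints the oldest release directory once six or more exist, just like the task above):
#!/bin/bash
releases_path=/srv/releases   # assumption: adjust to your layout
count=$(find "$releases_path" -maxdepth 1 -mindepth 1 -type d | wc -l)
if [ "$count" -ge 6 ]; then
    # List directories with their mtimes, sort oldest first, keep the first path
    oldest=$(find "$releases_path" -maxdepth 1 -mindepth 1 -type d -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
    echo "$oldest"
fi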

Related

Strange problem with the find command on Ubuntu

I used the 'find' command to look for a file and ran into a strange issue: the file exists, but 'find' can't find it.
I found two docker.sock entries under /run with 'sudo find /run -name docker.sock':
$sudo find /run -name docker.sock
/run/march/docker.sock
/run/docker.sock
but I got nothing when running 'sudo find /var -name docker.sock' or 'sudo find /var/run -name docker.sock':
$sudo find /var -name docker.sock
$sudo find /var/run -name docker.sock
$
Yet in fact there are two docker.sock entries under /var/run/. Any comments?
$ls -al /var/run/docker.sock
srwxrwxrwx+ 1 root docker 0 Oct 18 20:45 /var/run/docker.sock
$ls -al /var/run/march/docker.sock/
total 0
drwxr-xr-x 2 root root 40 Oct 31 20:35 .
drwxr-xr-x 5 root root 100 Oct 31 20:35 ..
$ls -al /var/run/march/
total 0
drwxr-xr-x 5 root root 100 Oct 31 20:35 .
drwxr-xr-x 34 root root 1120 Oct 31 23:45 ..
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker.pid
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker.sock
$
$
BTW it's on Ubuntu 20.04.2 LTS
Thanks in advance
As /var/run is a symbolic link to /run, you have to tell find to follow links:
sudo find -L /var/run -name docker.sock
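A quick way to confirm what find is actually looking at: list the link itself and resolve it (on a stock Ubuntu install, readlink -f prints /run):
ls -ld /var/run
readlink -f /var/run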

Set the permissions of all files copied in a folder the same

I would like to create a folder (in Linux) that can be used as a cloud-like storage location, where all files copied there automatically get g+rw permissions (without any chmod'ing), so that they are readable and writable by people belonging to that specific group.
You can use the command setfacl, e.g.:
setfacl -d -m g::rwx test/
It sets a default ACL on the test/ folder so that new files created in it inherit the group permissions, as the session below shows.
$ touch test/test
$ ls -la test/
total 48
drwxr-xr-x 2 manu manu 4096 Jan 28 08:39 .
drwxrwxrwt 20 root root 40960 Jan 28 08:39 ..
-rw-r--r-- 1 manu manu 0 Jan 28 08:39 test
$ setfacl -d -m g::rwx test/
$ ls -la test/
total 48
drwxr-xr-x+ 2 manu manu 4096 Jan 28 08:39 .
drwxrwxrwt 20 root root 40960 Jan 28 08:39 ..
-rw-r--r-- 1 manu manu 0 Jan 28 08:39 test
$ touch test/test2
$ ls -la test/
total 48
drwxr-xr-x+ 2 manu manu 4096 Jan 28 08:40 .
drwxrwxrwt 20 root root 40960 Jan 28 08:39 ..
-rw-r--r-- 1 manu manu 0 Jan 28 08:39 test
-rw-rw-r-- 1 manu manu 0 Jan 28 08:40 test2
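If the files should be accessible to a specific group rather than the directory's owning group, a sketch along these lines may help (devs and shared/ are hypothetical names; chmod g+s makes new files inherit the directory's group, and the default ACL lets members of that group read and write new files, subject to the creating process's mode/umask):
chgrp devs shared/
chmod g+s shared/
setfacl -d -m g:devs:rwX shared/
setfacl -R -m g:devs:rwX shared/
The last line also grants the group access to files that are already present in the folder.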

How to get the names of the executable files in bash with ls

I'm trying to get the names of the executable files using ls -l.
Then I tried to keep the lines of ls -l that contain an x using grep -w x, but the result is not right: some executable files are missing (the .sh ones).
I just need the names of the executable files, not the path, but I don't know how ...
user@user-K53TA:~/Bureau$ ls -l
total 52
-rwxrwxrwx 1 user user 64 oct. 6 21:07 a.sh
-rw-rw-r-- 1 user user 11 sept. 29 21:51 e.txt
-rwxrwxrwx 1 user user 140 sept. 29 23:42 hi.sh
drwxrwxr-x 8 user user 4096 juil. 30 20:47 nerdtree-master
-rw-rw-r-- 1 user user 492 oct. 6 21:07 okk.txt
-rw-rw-r-- 1 user user 1543 oct. 6 21:07 ok.txt
-rw-rw-r-- 1 user user 119 sept. 29 23:27 oo.txt
-rwxrwxr-x 1 user user 8672 sept. 29 21:20 prog
-rw-rw-rw- 1 user user 405 sept. 29 21:23 prog.c
-rw-rw-r-- 1 user user 0 sept. 29 21:58 rev
drwxrwxr-x 3 user user 4096 sept. 29 20:51 sublime
user@user-K53TA:~/Bureau$ ls -l | grep -w x
drwxrwxr-x 8 user user 4096 juil. 30 20:47 nerdtree-master
-rwxrwxr-x 1 user user 8672 sept. 29 21:20 prog
drwxrwxr-x 3 user user 4096 sept. 29 20:51 sublime
Don't parse ls. This can be done with find.
find . -type f -perm /a+x
This finds files with any of the executable bits set: user, group, or other.
Use find instead:
find -executable
find -maxdepth 1 -type f -executable
find -maxdepth 1 -type f -executable -ls
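Since the question asks for just the names rather than ./name paths, GNU find (the default on Ubuntu) can print the basename directly:
find . -maxdepth 1 -type f -executable -printf '%f\n'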
One can use a for loop with glob expansion for discovering and manipulating file names. Observe:
#!/bin/sh
for i in *
do
    # Only print discoveries that are regular, executable files
    [ -f "$i" ] && [ -x "$i" ] && printf "%s\n" "$i"
done
Since the accepted answer uses no ls at all, here is a way that does use it:
ls -l | grep -e '^...x'
(This keeps the lines whose owner execute bit is set.)

find command + remove older directories according to directory time stamp

I want to delete directories that are older than 180 days.
For example, directories that are older than 180 days:
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
I use this command:
find /var/tmp -depth -mindepth 1 -type d -ctime +180 -exec rm -rf {} \;
After I run the find command, I see that the older directories still exist.
Please advise: what is wrong with my find command?
[root@vm1 /var/tmp]# ls -ltr
total 20
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
drwxr-x--- 2 root root 4096 Dec 3 08:24 1
drwxr-x--- 2 root root 4096 Dec 3 08:41 2
drwxr-x--- 2 root root 4096 Dec 3 08:41 3
[root@vm1 /var/tmp]# find /var/tmp -depth -mindepth 1 -type d -ctime +180 -exec rm -rf {} \;
[root@vm1 /var/tmp]# ls -ltr
total 20
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
drwxr-x--- 2 root root 4096 Dec 3 08:24 1
drwxr-x--- 2 root root 4096 Dec 3 08:41 2
drwxr-x--- 2 root root 4096 Dec 3 08:41 3
I also tried this (but it does not remove the old dirs); the -mtime version only changes the date of the old dirs to the current date:
find /var/tmp -depth -mindepth 1 -type d -mtime +180 -exec rm -rf {} \;
-t in ls sorts by modification time, so the dates you see in the ls -ltr output are mtimes, not ctimes. Try
find /var/tmp -depth -mindepth 1 -type d -mtime +180 -exec rm -rf {} \;
Update: drop the -depth and -mindepth options.
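Whichever time test you use, it is worth previewing the matches first by running the same find without the -exec part, for example:
find /var/tmp -depth -mindepth 1 -type d -mtime +180
Only once that lists the directories you expect, add -exec rm -rf {} \; back.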

Get ONLY sym links to a file

I looked into "symbolic link: find all files that link to this file" and https://stackoverflow.com/questions/6184849/symbolic-link-find-all-files-that-link-to-this-file, but they didn't seem to solve the problem.
If I do find -L -samefile path/to/file,
the result contains hard links as well as symlinks.
I've been trying to come up with a solution to fetch ONLY sym links, but can't seem to figure it out.
I've been trying to combine -samefile and -type l but that got me nowhere.
man find
says you can combine some options into an expression, but I failed to do it properly.
Any help greatly appreciated!
OK, I completely misread the question at first.
To find only symlinks to a certain file, I think it's still a good approach to combine multiple commands.
So you know the file the links point to; let's call it targetfile.txt. We have a directory structure like this:
$ ls -laR
.:
total 24
drwxrwxr-x 4 telorb telorb 4096 Mar 28 09:51 .
drwxrwxr-x 57 telorb telorb 4096 Mar 28 09:49 ..
-rw-rw-r-- 1 telorb telorb 21 Mar 28 09:51 another_file.txt
drwxrwxr-x 2 telorb telorb 4096 Mar 28 09:52 folder1
drwxrwxr-x 2 telorb telorb 4096 Mar 28 09:53 folder2
-rw-rw-r-- 3 telorb telorb 28 Mar 28 09:52 targetfile.txt
./folder1:
total 12
drwxrwxr-x 2 telorb telorb 4096 Mar 28 09:52 .
drwxrwxr-x 4 telorb telorb 4096 Mar 28 09:51 ..
-rw-rw-r-- 3 telorb telorb 28 Mar 28 09:52 hardlink
lrwxrwxrwx 1 telorb telorb 17 Mar 28 09:49 symlink1 -> ../targetfile.txt
./folder2:
total 12
drwxrwxr-x 2 telorb telorb 4096 Mar 28 09:57 .
drwxrwxr-x 4 telorb telorb 4096 Mar 28 09:51 ..
-rw-rw-r-- 3 telorb telorb 28 Mar 28 09:52 hardlink2
lrwxrwxrwx 1 telorb telorb 17 Mar 28 09:57 symlink2_to_targetfile -> ../targetfile.txt
lrwxrwxrwx 1 telorb telorb 19 Mar 28 09:53 symlink_to_anotherfile -> ../another_file.txt
The file folder1/hardlink is a hard link to targetfile.txt, folder1/symlink1 is a symbolic link we are interested in, and the same goes for folder2/symlink2_to_targetfile. There is also another symlink, pointing to a different file, which we are not interested in.
The approach I would take is to first use find . -type l to get the symbolic links recursively from the specified folder (keeping the full path information).
Then pipe that to xargs and ls -l to see which file each link points to, and finally grep for targetfile.txt so that links not pointing to our desired file are dropped. The command in full:
find . -type l | xargs -I % ls -l % | grep targetfile.txt
lrwxrwxrwx 1 telorb telorb 17 Mar 28 09:57 ./folder2/symlink2_to_targetfile -> ../targetfile.txt
lrwxrwxrwx 1 telorb telorb 17 Mar 28 09:49 ./folder1/symlink1 -> ../targetfile.txt
The xargs -I % ls -l % part sometimes confuses people. Basically, with -I % you are telling xargs that the % sign marks the places where you want xargs to put the input it receives. So it will effectively run ls -l output_of_find_command for each link that find prints.
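As an alternative worth knowing about, GNU find's -lname test matches on the symlink's stored target directly, which avoids the xargs/ls/grep pipeline; note that it matches the literal link text as a shell pattern, so relative targets need a wildcard:
find . -type l -lname '*targetfile.txt'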
