find command to remove older directories according to directory timestamp - Linux

I want to delete directories that are older than 180 days.
For example, directories that are older than 180 days:
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
I use this command:
find /var/tmp -depth -mindepth 1 -type d -ctime +180 -exec rm -rf {} \;
After I run the find command, I see that the older directories still exist.
Please advise: what is wrong with my find command?
[root@vm1 /var/tmp]# ls -ltr
total 20
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
drwxr-x--- 2 root root 4096 Dec 3 08:24 1
drwxr-x--- 2 root root 4096 Dec 3 08:41 2
drwxr-x--- 2 root root 4096 Dec 3 08:41 3
[root@vm1 /var/tmp]# find /var/tmp -depth -mindepth 1 -type d -ctime +180 -exec rm -rf {} \;
[root@vm1 /var/tmp]# ls -ltr
total 20
drwxr-xr-x 2 root root 4096 Oct 1 2009 nis
drwxr-xr-x 3 root root 4096 Nov 4 2012 pkgs
drwxr-x--- 2 root root 4096 Dec 3 08:24 1
drwxr-x--- 2 root root 4096 Dec 3 08:41 2
drwxr-x--- 2 root root 4096 Dec 3 08:41 3
I also tried this (but it did not remove the old directories either); the -mtime run only changed the date of the old directories to the current date:
find /var/tmp -depth -mindepth 1 -type d -mtime +180 -exec rm -rf {} \;

-t sorts by modification time.
Try:
find /var/tmp -depth -mindepth 1 -type d -mtime +180 -exec rm -rf {} \;
Update: delete the -depth and -mindepth options.
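The difference between -ctime and -mtime is likely what matters here: ls -ltr shows the modification time, but -ctime tests the inode change time, which is reset whenever a directory is created, copied, or has its metadata touched. A minimal sketch (assuming GNU coreutils/findutils and a throwaway temp directory) showing that mtime can be backdated while ctime cannot:

```shell
#!/bin/sh
# Sketch: -mtime can match a backdated directory, -ctime cannot,
# because ctime (inode change time) is updated by the creation itself.
d=$(mktemp -d)
mkdir "$d/old"
touch -d "200 days ago" "$d/old"            # backdates the *modification* time only
find "$d" -mindepth 1 -type d -mtime +180   # matches "$d/old"
find "$d" -mindepth 1 -type d -ctime +180   # matches nothing: ctime is fresh
rm -rf "$d"
```

So if directories like nis and pkgs were restored or copied onto the box recently, their ctime is recent even though ls shows 2009/2012 dates; comparing the timestamps with stat on one of the surviving directories would confirm this.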

Strange problem with find command on ubuntu

I used the 'find' command to look for a file and encountered a strange issue:
the file exists, but 'find' can't find it.
I found two docker.sock entries in /run with 'sudo find /run -name docker.sock':
$sudo find /run -name docker.sock
/run/march/docker.sock
/run/docker.sock
I got nothing when running 'sudo find /var -name docker.sock' or 'sudo find /var/run -name docker.sock':
$sudo find /var -name docker.sock
$sudo find /var/run -name docker.sock
$
But in fact there are two docker.sock entries in /var/run/. Any comments?
$ls -al /var/run/docker.sock
srwxrwxrwx+ 1 root docker 0 Oct 18 20:45 /var/run/docker.sock
$ls -al /var/run/march/docker.sock/
total 0
drwxr-xr-x 2 root root 40 Oct 31 20:35 .
drwxr-xr-x 5 root root 100 Oct 31 20:35 ..
$ls -al /var/run/march/
total 0
drwxr-xr-x 5 root root 100 Oct 31 20:35 .
drwxr-xr-x 34 root root 1120 Oct 31 23:45 ..
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker.pid
drwxr-xr-x 2 root root 40 Oct 31 20:35 docker.sock
$
$
BTW it's on Ubuntu 20.04.2 LTS
Thanks in advance
As /var/run is a symbolic link to /run, you have to tell find to follow links:
sudo find -L /var/run -name docker.sock
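A minimal reproduction of that behaviour, with a throwaway directory standing in for /run (find defaults to -P, never following symbolic links):

```shell
#!/bin/sh
# find does not follow symbolic links by default (-P behaviour), so a
# starting point that is itself a symlink is never descended into.
d=$(mktemp -d)
mkdir "$d/run"
touch "$d/run/docker.sock"      # stand-in for the real socket
ln -s "$d/run" "$d/var-run"     # like /var/run -> /run
find "$d/var-run" -name docker.sock     # prints nothing
find -L "$d/var-run" -name docker.sock  # follows the link and finds it
rm -rf "$d"
```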

how to get previous date files and pass ls output to array in gawk

I have log files generated like below, and I need to run a script daily which will list them and then do two things:
1- get the previous day's (yesterday's) files and transfer them to x server
2- get files older than one day and transfer them to y server
The files are like below, and I am trying the code below, but it is not working.
How can we pass ls -altr output to gawk? Can we build an associative array like below?
array[index]=ls -altr | awk '{print $6,$7,$8}'
The code I am trying, to retrieve the previous date's files (not working):
previous_dates=$(date -d "-1 days" '+-%d')
ls -altr |gawk '{if ( $7!=previous_dates ) print $9 }'
-r-------- 1 root root 6291563 Jun 22 14:45 audit.log.4
-r-------- 1 root root 6291619 Jun 24 09:11 audit.log.3
drwxr-xr-x. 14 root root 4096 Jun 26 03:47 ..
-r-------- 1 root root 6291462 Jun 26 04:15 audit.log.2
-r-------- 1 root root 6291513 Jun 27 23:05 audit.log.1
drwxr-x---. 2 root root 4096 Jun 27 23:05 .
-rw------- 1 root root 5843020 Jun 29 14:57 audit.log
To select files modified yesterday, you could use
find . -daystart -type f -mtime 1
and to select older files, you could use
find . -daystart -type f -mtime +1
possibly adding a -name test to select only files like audit.log*, for example. You could then use xargs to process the files, e.g.
find . -daystart -type f -mtime 1 | xargs -n 1 -I{} scp {} user@server
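A self-contained sketch of the two selections, assuming GNU find's -daystart and GNU touch's -d date strings (the file names mirror the question's audit logs):

```shell
#!/bin/sh
# -daystart measures ages from midnight, so -mtime 1 means "yesterday"
# and -mtime +1 means "before yesterday".
d=$(mktemp -d)
touch -d "yesterday 12:00" "$d/audit.log.1"   # modified yesterday
touch -d "3 days ago"      "$d/audit.log.2"   # older than one day
touch "$d/audit.log"                          # modified today
find "$d" -daystart -type f -mtime 1    # audit.log.1 (would go to x server)
find "$d" -daystart -type f -mtime +1   # audit.log.2 (would go to y server)
rm -rf "$d"
```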

How to get the name of the executables files in bash with ls

I am trying to get the names of the executable files using ls -l.
I then tried to get the lines of ls -l which contain an x using grep -w x, but the result is not right: some executable files are missing (the .sh ones).
I just need the names of the executable files, not the path, but I don't know how ...
user#user-K53TA:~/Bureau$ ls -l
total 52
-rwxrwxrwx 1 user user 64 oct. 6 21:07 a.sh
-rw-rw-r-- 1 user user 11 sept. 29 21:51 e.txt
-rwxrwxrwx 1 user user 140 sept. 29 23:42 hi.sh
drwxrwxr-x 8 user user 4096 juil. 30 20:47 nerdtree-master
-rw-rw-r-- 1 user user 492 oct. 6 21:07 okk.txt
-rw-rw-r-- 1 user user 1543 oct. 6 21:07 ok.txt
-rw-rw-r-- 1 user user 119 sept. 29 23:27 oo.txt
-rwxrwxr-x 1 user user 8672 sept. 29 21:20 prog
-rw-rw-rw- 1 user user 405 sept. 29 21:23 prog.c
-rw-rw-r-- 1 user user 0 sept. 29 21:58 rev
drwxrwxr-x 3 user user 4096 sept. 29 20:51 sublime
user#user-K53TA:~/Bureau$ ls -l | grep -w x
drwxrwxr-x 8 user user 4096 juil. 30 20:47 nerdtree-master
-rwxrwxr-x 1 user user 8672 sept. 29 21:20 prog
drwxrwxr-x 3 user user 4096 sept. 29 20:51 sublime
Don't parse ls. This can be done with find.
find . -type f -perm /a+x
This finds files with any of the executable bits set: user, group, or other.
Use find instead:
find -executable
find -maxdepth 1 -type f -executable
find -maxdepth 1 -type f -executable -ls
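Since the question asks for just the names without the leading ./ path, GNU find's -printf can strip it; %f prints only the basename. A small sketch in a throwaway directory:

```shell
#!/bin/sh
# Print just the basenames of executable regular files in the current dir.
d=$(mktemp -d)
touch "$d/a.sh" "$d/e.txt"
chmod +x "$d/a.sh"
cd "$d"
find . -maxdepth 1 -type f -executable -printf '%f\n'   # prints: a.sh
cd / && rm -rf "$d"
```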
One can use a for loop with glob expansion for discovering and manipulating file names. Observe:
#!/bin/sh
for i in *
do
    # Only print discoveries that are executable regular files
    [ -f "$i" ] && [ -x "$i" ] && printf "%s\n" "$i"
done
Since the accepted answer uses no ls at all:
ls -l | grep -e '^...x'

Script to remove all directories older than x days but keep certain ones

I'm trying to write a bash script to remove all directories and their files but keep certain ones.
drwxr-xr-x 20 ubuntu admin 4096 Jan 21 17:58 .
drwxr-xr-x 8 ubuntu admin 4096 Nov 21 16:45 ..
drwxr-xr-x 11 ubuntu admin 4096 Jan 9 13:09 1763
drwxr-xr-x 11 ubuntu admin 4096 Jan 16 16:46 1817
drwxr-xr-x 11 ubuntu admin 4096 Jan 16 17:39 1821
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 10:15 1823
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 11:57 1826
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 14:55 1827
drwxr-xr-x 11 ubuntu admin 4096 Jan 19 21:34 1828
drwxr-xr-x 11 ubuntu admin 4096 Jan 20 13:29 1833
drwxr-xr-x 11 ubuntu admin 4096 Jan 20 16:13 1834
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 10:06 1838
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 12:51 1842
drwxr-xr-x 11 ubuntu admin 4096 Jan 21 15:20 1845
drwxr-xr-x 11 ubuntu admin 4096 Jan 22 13:00 1848
drwxr-xr-x 11 ubuntu admin 4096 Nov 24 16:34 217
drwxr-xr-x 11 ubuntu admin 4096 Dec 2 20:44 219
drwxr-xr-x 11 ubuntu admin 4096 Dec 15 16:42 221
drwxr-xr-x 11 ubuntu admin 4096 Dec 16 12:04 225
drwxr-xr-x 2 ubuntu admin 4096 Jan 20 16:10 app-conf
lrwxrwxrwx 1 ubuntu admin 19 Jan 21 17:58 latest -> /opt/qudiniapp/1848
In the example above we'd want to clear out all non-symlinked folders except the app-conf folder.
The plan is to have this triggered by my ansible deployment script before deployment so we can keep our server from filling up with builds.
Provided all the directories to be deleted consist only of numbers, this would be one way to solve it:
cd /tempdir
rm -rf $(find . -type d -name "[0-9]*" | grep -v "$(readlink latest)")
As this is a housekeeping job, you should create a cron job that regularly deletes old directories. The find command would then also check, for example, whether the last modification time is beyond a number of days:
rm -rf $(find . -type d -mtime +20 -name "[0-9]*" | grep -v "$(readlink latest)")
bash script:
#!/bin/bash
find /your/path -type d ! \( -path '*app-conf*' -prune \) -mtime +2 -delete
per man find
-P Never follow symbolic links. This is the default behaviour. When find examines or prints information about a file, and the file is a symbolic link, the information used shall be taken from the properties of the symbolic link itself.
-mtime n File's data was last modified n*24 hours ago. See the comments for -atime to understand how rounding affects the interpretation of file modification times.
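One caveat worth sketching: find's -delete can only remove *empty* directories, so for build directories that still contain files an -exec rm -rf variant of the same prune idea may be needed. A sketch with hypothetical directory names, mirroring the listing above:

```shell
#!/bin/sh
# Prune app-conf from the search; rm -rf any other directory older than 2 days.
# (-prune before -exec also stops find descending into what it just removed.)
d=$(mktemp -d)
mkdir -p "$d/1763" "$d/app-conf"
touch "$d/1763/build.war" "$d/app-conf/app.conf"
touch -d "5 days ago" "$d/1763"    # backdate one build directory
find "$d" -mindepth 1 \
    \( -type d -path '*app-conf*' -prune \) -o \
    \( -type d -mtime +2 -prune -exec rm -rf {} + \)
ls "$d"    # app-conf survives, 1763 is gone
rm -rf "$d"
```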
This is what I use in my Ansible deployments, hope it will be helpful for you as it does almost exactly what you need.
I always remove the oldest release on each deployment if there are >= 5 builds in the "{{ releases_path }}" directory. "{{ releases_path }}" contains directories whose names are basically (long) Git commit hashes.
- name: Find oldest release to remove
shell: '[[ $(find "{{ releases_path | quote }}" -maxdepth 1 -mindepth 1 -type d | wc -l) -ge 6 ]] && IFS= read -r -d $"\0" line < <(find "{{ releases_path | quote }}" -maxdepth 1 -mindepth 1 -type d -printf "%T@ %p\0" 2>/dev/null | sort -z -n); file="${line#* }"; echo "$file";'
args:
executable: /bin/bash
chdir: "{{ releases_path }}"
register: releasetoremove
changed_when: "releasetoremove.stdout != ''"
- debug: var=releasetoremove
- name: Remove oldest release
file: path={{ releasetoremove.stdout }} state=absent
when: releasetoremove|changed
This is what I always have on each server in releases directory (last 5 always kept):
$ ls -lt | cut -c 28-
62 Jan 22 17:42 current -> /srv/releases/2a7b80c82fb1dd658a3356fed7bba9718bc50527
4096 Jan 22 17:41 2a7b80c82fb1dd658a3356fed7bba9718bc50527
4096 Jan 22 15:22 73b1252ab4060833e43849e2e32f57fea6c6cd9b
4096 Jan 22 14:47 9df7f1097909aea69916695194ac41938a0c2e9a
4096 Jan 22 14:16 f6a2862d70f7f26ef75b67168a30fb9ef2202555
4096 Jan 22 13:49 fa89eefc5b2505e153b2e59ed02a23889400c4bf
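The same keep-the-newest-N idea can be sketched in plain shell, outside Ansible (hypothetical release names; assumes GNU find's -printf and GNU touch's -d):

```shell
#!/bin/sh
# Keep the 5 newest release directories, remove the rest.
d=$(mktemp -d)
for i in 1 2 3 4 5 6 7; do
    mkdir "$d/release$i"
    touch -d "$i days ago" "$d/release$i"
done
# Sort by mtime (newest first), skip the first 5, delete the remainder.
find "$d" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' \
    | sort -rn | tail -n +6 | cut -d' ' -f2- | xargs rm -rf
ls "$d"    # the 5 newest remain
rm -rf "$d"
```

Note the cut -d' ' -f2- step strips the epoch timestamp that was only added for sorting; this sketch assumes release paths contain no spaces or newlines.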

How to limit depth for recursive file list?

Is there a way to limit the depth of a recursive file listing in Linux?
The command I'm using at the moment is:
ls -laR > dirlist.txt
But I've got about 200 directories and each of them has tens of subdirectories, so it's just going to take far too long and hog too many system resources.
All I'm really interested in is the ownership and permissions information for the first level subdirectories:
drwxr-xr-x 14 root root 1234 Dec 22 13:19 /var/www/vhosts/domain1.co.uk
drwxr--r-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain1.co.uk/htdocs
drwxr--r-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain1.co.uk/cgi-bin
drwxr-xr-x 14 root root 1234 Dec 22 13:19 /var/www/vhosts/domain2.co.uk
drwxr-xrwx 14 proftp root 1234 Dec 22 13:19 /var/www/vhosts/domain2.co.uk/htdocs
drwxr-xrwx 14 proftp root 1234 Dec 22 13:19 /var/www/vhosts/domain2.co.uk/cgi-bin
drwxr-xr-x 14 root root 1234 Dec 22 13:19 /var/www/vhosts/domain3.co.uk
drwxr-xr-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain3.co.uk/htdocs
drwxr-xr-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain3.co.uk/cgi-bin
drwxr-xr-x 14 root root 1234 Dec 22 13:19 /var/www/vhosts/domain4.co.uk
drwxr-xr-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain4.co.uk/htdocs
drwxr-xr-- 14 jon root 1234 Dec 22 13:19 /var/www/vhosts/domain4.co.uk/cgi-bin
EDIT:
Final choice of command:
find -maxdepth 2 -type d -ls >dirlist
Check out the -maxdepth flag of find:
find . -maxdepth 1 -type d -exec ls -ld "{}" \;
Here I used 1 as the maximum depth; -type d means find only directories, which ls -ld then lists in long format.
Make use of find's options
There is actually no exec of /bin/ls needed;
find has an option that does just that:
find . -maxdepth 2 -type d -ls
To see only the one level of subdirectories you are interested in, add -mindepth to the same level as -maxdepth:
find . -mindepth 2 -maxdepth 2 -type d -ls
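A quick sketch of how the two depth options combine: only entries exactly two levels below the starting point are printed (throwaway directory; depth is counted from the start, which is depth 0).

```shell
#!/bin/sh
# Demonstrate -mindepth/-maxdepth: with both set to 2, find prints only
# the directory exactly two levels down ("a/b"), not "a" or "a/b/c".
d=$(mktemp -d)
mkdir -p "$d/a/b/c"
find "$d" -mindepth 2 -maxdepth 2 -type d   # prints only "$d/a/b"
rm -rf "$d"
```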
Use output formatting
When the details that get shown should be different, -printf can show any detail about a file in a custom format;
to show the symbolic permissions and the owner name of the file, use -printf with %M and %u in the format.
I noticed later that you want the full ownership information, which includes the group. Use %g in the format for the symbolic group name, or %G for the group id (likewise %U for the numeric user id).
find . -mindepth 2 -maxdepth 2 -type d -printf '%M %u %g %p\n'
This should give you just the details you need, for just the right files.
I will give an example that shows actually different values for user and group:
$ sudo find /tmp -mindepth 2 -maxdepth 2 -type d -printf '%M %u %g %p\n'
drwx------ www-data www-data /tmp/user/33
drwx------ octopussy root /tmp/user/126
drwx------ root root /tmp/user/0
drwx------ siegel root /tmp/user/1000
drwxrwxrwt root root /tmp/systemd-[...].service-HRUQmm/tmp
(Edited for readability: indented, shortened last line)
Notes on performance
Although execution time is mostly irrelevant for this kind of command, the increase in performance is large enough here to make it worth pointing out:
not only do we save creating a new process for each name - a huge cost -
the information does not even need to be read, as find already knows it.
tree -L 2 -u -g -p -d
Prints the directory tree in a pretty format up to depth 2 (-L 2).
Print user (-u) and group (-g) and permissions (-p).
Print only directories (-d).
tree has a lot of other useful options.
All I'm really interested in is the ownership and permissions information for the first level subdirectories.
I found an easy solution while playing with fish, which fits your need perfectly.
ll `ls`
or
ls -l $(ls)
