Shell command to delete folders less than N days [closed] - linux

This is a slight variation of this question.
Remove all directories whose creation time is older than N days.
Don't consider the sub-directories/files within each directory.
Ex:
drwxrwxr-x 6 test test 4096 Aug 26 14:42 2.1.6-SNAPSHOT_201408261440_1
drwxrwxr-x 6 test test 4096 Sep 1 05:13 2.1.6-SNAPSHOT_201408281233_1
drwxrwxr-x 6 test test 4096 Sep 1 10:06 2.1.6-SNAPSHOT_201409011001_1
drwxrwxr-x 6 test test 4096 Sep 1 15:58 2.1.6-SNAPSHOT_201409011554_1
drwxrwxr-x 6 test test 4096 Sep 2 15:11 2.2.0-SNAPSHOT_201409021508_1
drwxrwxr-x 6 test test 4096 Sep 2 15:18 2.2.0-SNAPSHOT_201409021515_1
drwxrwxr-x 6 test test 4096 Sep 5 13:05 2.2.0-SNAPSHOT_201409051303_1
drwxrwxr-x 6 test test 4096 Sep 5 15:32 2.1.6-SNAPSHOT_201409051528_1
drwxrwxr-x 6 test test 4096 Sep 8 11:54 2.1.6-SNAPSHOT_201409081152_1
I should be able to delete all folders in this path whose creation time is older than N days. A folder may contain recently updated files or sub-directories; that should not matter.

Assuming you want to delete old directories:
N=4
find . -type d -mtime +$N -exec rm -fr {} +
A depth-first search would ensure that sub-directories are removed before the directories that contain them, but it might also update the modification time of a parent directory before find examines it, so directories that were old would no longer count as old. Conversely, without a depth-first search, rm may be asked to remove directories it has already removed (as part of a parent), but the -f option ensures this does not produce error reports.
You might want to consult Explaining find … -mtime command for information about the meaning of +$N vs -$N vs $N (where N is assumed to hold a number).
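Since the question wants to judge each top-level folder by its own timestamp and ignore whatever is inside it, a minimal variant (a sketch, assuming a find that supports -mindepth/-maxdepth, as GNU and BSD find do) restricts the match to the immediate sub-directories of the current path:
N=4
# only consider directories directly under ., never descend into their contents
find . -mindepth 1 -maxdepth 1 -type d -mtime +"$N" -exec rm -rf {} +
Note that, like the answer above, this keys off the modification time; classic Unix filesystems do not record a creation time at all.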

Related

Linux group 998, what does it mean? [closed]

Ubuntu 20 LTS, installed laradock.
In Ubuntu:
$ pwd
/root/Docker
$ ls
blog laradock
$ rsync -a /media/sf_code/blog . && chmod -R 755 blog
$ cd laradock
$ docker-compose exec --user=root workspace bash
In docker:
> ll
total 20
drwxr-xr-x 4 laradock laradock 4096 Nov 12 06:52 ./
drwxr-xr-x 1 root root 4096 Nov 12 02:30 ../
drwxr-xr-x 12 root 998 4096 Nov 12 03:09 blog/
drwxr-xr-x 74 laradock laradock 4096 Nov 12 06:35 laradock/
what does 998 mean?
The 4th column is the group ID. If there is an entry in /etc/group with this ID, the group name is printed; otherwise the numeric ID is shown.
In your example the group ID of the folder blog is 998, but no group with this ID exists inside the container. Mounting a folder into a docker container does not change its owner or group.
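To confirm this inside the container, a quick check (a sketch, not specific to laradock) is to ask the name service for GID 998:
getent group 998    # prints the matching /etc/group line if such a group exists, nothing otherwise
If nothing is printed, ls falls back to showing the numeric ID, which is exactly what you see for blog/.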

Mac OSX file permissions has '@' - how to remove that '@' [closed]

How can I remove that '@' character from the file permissions?
The '@' in file permissions on Mac/OSX machines shows that an extended attribute is set on the file.
Tried chmod 755 galaxy-ansible.yml, but that didn't help.
Tried echo | chmod -E galaxy-ansible.yml; didn't help (even with sudo).
Tried xattr -d galaxy-ansible.yml; that didn't help either (even with sudo).
I even ran the above operations as the root user, but the '@' character is still not going away from the file's permissions.
[arun@MacBook-Pro-2 ~/aks/anisble] $ ls -l@ galaxy-ansible.yml
-rwxr-xr-x@ 1 arun staff 270 Dec 22 12:31 galaxy-ansible.yml
com.apple.quarantine 67
My ~/aks folder is mapped to a CentOS vagrant box, and on the vagrant box ls -l doesn't show the '@' (as it's not a Mac/OSX machine):
-rwxr-xr-x. 1 vagrant vagrant 270 Dec 22 00:12 galaxy-ansible.yml
On my Mac/OSX machine there are other .yml files that don't have '@' in the file permissions, so I'm trying to remove the '@' from the galaxy-ansible.yml file (on the Mac machine).
Right now the whole roles/.. folder has the '@' character on every file and folder:
-rwxr-xr-x@ 1 arun staff 1132 Dec 21 17:12 README.md
drwxr-xr-x@ 3 arun staff 102 Dec 21 17:12 defaults
drwxr-xr-x@ 3 arun staff 102 Dec 21 17:12 handlers
drwxr-xr-x@ 4 arun staff 136 Dec 21 17:12 meta
drwxr-xr-x@ 5 arun staff 170 Dec 21 17:12 tasks
drwxr-xr-x@ 7 arun staff 238 Dec 21 17:12 templates
The following commands helped in clearing the extended attributes at file level and, recursively, at folder level:
xattr -c <yourfilename>
or
xattr -cr <yourfoldername>
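If only a specific attribute should be removed (for example the com.apple.quarantine attribute shown in the listing above), xattr -d with the attribute name also works; a small sketch:
xattr -l galaxy-ansible.yml                      # list the attributes currently set
xattr -d com.apple.quarantine galaxy-ansible.yml # delete just that attribute
ls -l@ galaxy-ansible.yml                        # the '@' disappears once no attributes remain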

Soft Link redirection in linux [closed]

I have created a soft link as follows:
/bip/etl>ln -s /bip/etl bipet
And I can now see that the soft link has been created:
/bip/etl>ls -lrt |tail
-rw-rw-rw- 1 cdtbipx cduserg 24988174 Jun 19 19:17 227015716_WLR3PSTN_Filtered_06202016_5of6.csv.gz.gpg
-rw-rw-rw- 1 cdtbipx cduserg 23857587 Jun 19 19:17 227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
drwxrwxrwx 1082 prod release 61440 Jul 3 02:51 WSC
drwxrwxrwx 5 oracle oinstall 4096 Jul 4 01:22 dsl
lrwxrwxrwx 1 cdtbipx cduserg 8 Jul 4 08:43 bipet -> /bip/etl
However, I cannot use the soft link when I try to look up a specific file in that folder:
ls -lrt /bipetl/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
ls: /bipetl/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg: No such file or directory
What am I doing wrong here?
You created a link named bipet inside the directory /bip/etl (the current working directory when you ran ln), not a link /bipetl at the root of the filesystem.
You should therefore do:
ls -lrt /bip/etl/bipet/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
Or create the link at the root (assuming you have privileges to write to /):
ln -s /bip/etl /bipetl
And then you can do:
ls -lrt /bipetl/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
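To verify where a link actually points before using it, readlink prints the stored target (readlink -f, which fully resolves the path, is a GNU/Linux extension):
readlink /bip/etl/bipet     # prints /bip/etl, the target stored in the existing link
readlink -f /bipetl         # prints the fully resolved target once the root-level link exists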

Linux memory issue [closed]

I have a problem on my server.
When I try to start my server, it says "no space left on device".
If I run df, I see that one filesystem is full:
/dev/mapper/owegdc_vg-owegdc_logs_lv
10321208 9797004 0 100% /opt/application/owegdc/logs
When I go to the logs directory, here is what I see:
ls -lrta
total 368
drwxr-x--- 2 oweadm grpowe 16384 Jan 15 2014 lost+found
drwxr-x--- 7 oweadm grpowe 4096 Jun 18 11:55 .
drwxr-xr-x 2 oweadm grpowe 12288 Aug 4 10:20 apache
drwxr-xr-x 2 oweadm grpowe 4096 Aug 5 00:56 batches
drwxr-xr-x 2 oweadm grpowe 4096 Sep 10 13:43 expl
drwxr-xr-x 2 oweadm grpowe 327680 Sep 10 13:50 jonas
drwxr-xr-x 11 oweadm grpowe 4096 Sep 10 13:50 ..
du -sk
9642792 .
I tried things like lsof, but it didn't help...
Do you have an idea?
Thx
You could just try something like
du | sort -n -r
That lists the directories under the current directory, ordered by size descending; the first directory in the output is the biggest one.
If you're looking for large single files instead of directories, this answer on Unix & Linux gives useful information, especially this:
find . -type f -print0 | xargs -0 du -h | sort -rh
The output has the same shape, but it lists files instead of directories (human-readable sizes from du -h need sort -h rather than a plain numeric sort).
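In this case du (about 9.6 GB) and df (100 % of a roughly 10 GB filesystem) broadly agree, so the space really is being used by files on that filesystem. When du reports much less than df, the space is often held by deleted log files that a process still keeps open; if lsof is available, it can show them:
lsof +L1 /opt/application/owegdc/logs    # open files on that filesystem with link count 0 (deleted but still open)
Restarting or signalling the process that holds such a file releases the space.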

what's the output format of 'find . -ls'? [closed]

I'm looking at the output of find . -ls. For example, here is a small excerpt for /lib64 on a CentOS system:
163542 28 -rwxr-xr-x 1 root root 28448 Aug 4 2010 ./libvolume_id.so.0.66.0
163423 0 lrwxrwxrwx 1 root root 16 Mar 3 2010 ./libwrap.so.0 -> libwrap.so.0.7.6
163601 0 lrwxrwxrwx 1 root root 11 Nov 9 2010 ./libc.so.6 -> libc-2.5.so
The find(1) man page says "list current file in ls -dils format on standard output". I then tried to figure it out from the ls(1) man page, but I'm stumped on the second column. Any idea?
For reference, the columns (with the values from the first line) are:
inode 163542
??? 28 (what is this? stat on that file doesn't show any field equal to 28)
permissions -rwxr-xr-x
hard-links 1
owner root
group root
size(bytes) 28448
modified Aug 4 2010
name ./libvolume_id.so.0.66.0
(for symbolic links: -> link target)
Doh, a quick check against the sizes reveals that it's the file's disk usage in 1024-byte blocks, rounded up to whole blocks (the same number ls -s shows by default on GNU systems)...
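A quick way to confirm this (a sketch, assuming GNU coreutils): ls -s with a 1024-byte block size reports the same allocated-block count, and stat exposes the raw 512-byte block count it is derived from:
ls -s --block-size=1024 ./libvolume_id.so.0.66.0            # same block count as the second column of find -ls
stat -c '%b blocks of %B bytes' ./libvolume_id.so.0.66.0    # the underlying allocated 512-byte blocks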
