What's the output format of 'find . -ls'? [closed] - linux

I'm looking at the output of find . -ls. For example, here is a small excerpt for /lib64 on a CentOS system:
163542 28 -rwxr-xr-x 1 root root 28448 Aug 4 2010 ./libvolume_id.so.0.66.0
163423 0 lrwxrwxrwx 1 root root 16 Mar 3 2010 ./libwrap.so.0 -> libwrap.so.0.7.6
163601 0 lrwxrwxrwx 1 root root 11 Nov 9 2010 ./libc.so.6 -> libc-2.5.so
The find(1) man page says "list current file in ls -dils format on standard output". I then tried to figure it out from the ls(1) man page, but I'm stumped on the second column. Any idea?
For reference, the columns (with values from the first line) are:
inode 163542
??? 28 (what is this? stat on that file doesn't show any field equal to '28')
permissions -rwxr-xr-x
hard-links 1
owner root
group root
size(bytes) 28448
modified Aug 4 2010
name ./libvolume_id.so.0.66.0
(for symbolic links: -> link target)

Doh, a casual regression against the sizes reveals that it's the number of 1024-byte blocks allocated to the file...
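A quick way to confirm this (a sketch; the file is the one from the excerpt above, so the path may differ on your system): stat reports allocated 512-byte blocks in %b, so halving that should reproduce find's second column, and GNU ls -s prints the same 1024-byte figure.

stat -c '%b' /lib64/libvolume_id.so.0.66.0                      # 56 blocks of 512 bytes
echo $(( $(stat -c '%b' /lib64/libvolume_id.so.0.66.0) / 2 ))   # 28, matching find's column
ls -s /lib64/libvolume_id.so.0.66.0                             # 28, in 1024-byte units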

Related

Linux group 998, what does it mean? [closed]

Ubuntu 20 LTS, installed Laradock.
In Ubuntu:
$ pwd
/root/Docker
$ ls
blog laradock
$ rsync -a /media/sf_code/blog . && chmod -R 755 blog
$ cd laradock
$ docker-compose exec --user=root workspace bash
In the Docker container:
> ll
total 20
drwxr-xr-x 4 laradock laradock 4096 Nov 12 06:52 ./
drwxr-xr-x 1 root root 4096 Nov 12 02:30 ../
drwxr-xr-x 12 root 998 4096 Nov 12 03:09 blog/
drwxr-xr-x 74 laradock laradock 4096 Nov 12 06:35 laradock/
what does 998 mean?
The 4th column is the group id. If there is an entry in /etc/group with this id, then the group name is printed; otherwise the id is shown.
In your example, the group id of the folder blog is 998, but no group with this id exists inside the container. Mapping a folder into a Docker container does not change its owner or group.
Some explanation can be found here
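To check this from inside the container (a sketch; the group name hostgroup is just an illustrative placeholder):

getent group 998             # no output means no group with gid 998 exists here
groupadd -g 998 hostgroup    # optionally give that gid a name
ls -la                       # the listing would then show hostgroup instead of 998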

What does the -s option show? And why does it change with -h? [closed]

What does the first column show when using the -s option with the ls command?
$ ls -als
41 -rw-r--r-- 1 user user 165287 Jul 10 11:18 '.tutorial.term'
1 lrwxrwxrwx 1 user user 18 Jul 1 08:40 .bash_profile -> /home/user/.bashrc
3 -rw-r--r-- 1 user user 2355 Jul 1 08:40 .bashrc
Does it show the number of blocks used for that file, or the size of the blocks used for that file?
If I add the -h option to the mix, which prints sizes in a human-readable format, why does the first column change too? And why does the value differ from the 6th column, which represents the actual size of the file?
$ ls -alsh
41K -rw-r--r-- 1 user user 163K Jul 10 12:34 '.tutorial.term'
512 lrwxrwxrwx 1 user user 18 Jul 1 08:40 .bash_profile -> /home/user/.bashrc
2.5K -rw-r--r-- 1 user user 2.3K Jul 1 08:40 .bashrc
As the ls man page says, -s will print the allocated size of each file, in blocks.
The size of a file and the space it occupies on your hard drive are rarely the same. Disk space is allocated in blocks. If a file is smaller than a block, an entire block is still allocated to it because the file system doesn’t have a smaller unit of real estate to use. reference
Also, when you use the -h option, it converts both the allocated size and the file content size into human-readable units. The allocated size can differ from the file size because the file content often doesn't use all of the allocated space.
If you want to know why ls -l and ls -s give different sizes, read this answer. Basically, -l returns the actual size of the file while -s returns the size in the filesystem. -h makes all sizes human-readable, including the ones for -s and -l.
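You can see the apparent and allocated sizes side by side with stat (a sketch; .bashrc is just the example file from the question's listing):

ls -l .bashrc    # 6th column: apparent size in bytes
ls -s .bashrc    # 1st column: allocated size in blocks (1024-byte units by default)
stat -c '%s bytes apparent, %b blocks of %B bytes allocated' .bashrc
du -h .bashrc    # allocated size, human-readable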

Soft link redirection in Linux [closed]

I have created a soft link as follows:
/bip/etl>ln -s /bip/etl bipet
And now I can see that the soft link has been created as well:
/bip/etl>ls -lrt |tail
-rw-rw-rw- 1 cdtbipx cduserg 24988174 Jun 19 19:17 227015716_WLR3PSTN_Filtered_06202016_5of6.csv.gz.gpg
-rw-rw-rw- 1 cdtbipx cduserg 23857587 Jun 19 19:17 227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
drwxrwxrwx 1082 prod release 61440 Jul 3 02:51 WSC
drwxrwxrwx 5 oracle oinstall 4096 Jul 4 01:22 dsl
lrwxrwxrwx 1 cdtbipx cduserg 8 Jul 4 08:43 bipet -> /bip/etl
However, I cannot refer to the soft link bipet when I try to search for a specific file in the folder concerned.
ls -lrt /bipetl/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
ls: /bipetl/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg: No such file or directory
What am I doing wrong here?
You created the link bipet in the directory /bip/etl (the current working directory when you ran ln), so it lives at /bip/etl/bipet, not at /bipetl.
You should do:
ls -lrt /bip/etl/bipet/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
Or create the link at the root instead (assuming you have privileges to write to /):
ln -s /bip/etl /bipet
And then you can do:
ls -lrt /bipet/227015716_WLR3PSTN_Filtered_06202016_6of6.csv.gz.gpg
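To double-check where an existing link actually points, readlink may help (a sketch using the names from this question):

readlink /bip/etl/bipet      # prints the raw target: /bip/etl
readlink -f /bip/etl/bipet   # prints the fully resolved path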

Linux memory issue [closed]

I have a problem on my server.
When I try to start my server, it says "no space left on device".
If I execute the command df, I see that one filesystem is full:
/dev/mapper/owegdc_vg-owegdc_logs_lv
10321208 9797004 0 100% /opt/application/owegdc/logs
When I get to the logs directory, here is what I see:
ls -lrta
total 368
drwxr-x--- 2 oweadm grpowe 16384 Jan 15 2014 lost+found
drwxr-x--- 7 oweadm grpowe 4096 Jun 18 11:55 .
drwxr-xr-x 2 oweadm grpowe 12288 Aug 4 10:20 apache
drwxr-xr-x 2 oweadm grpowe 4096 Aug 5 00:56 batches
drwxr-xr-x 2 oweadm grpowe 4096 Sep 10 13:43 expl
drwxr-xr-x 2 oweadm grpowe 327680 Sep 10 13:50 jonas
drwxr-xr-x 11 oweadm grpowe 4096 Sep 10 13:50 ..
du -sk
9642792 .
I tried things like lsof but it didn't work...
Do you have an idea?
Thx
You could just try something like
du | sort -rn
That would list the directories on your disk, ordered by their size descending. The first directory in the output is the biggest one.
Better, if you're looking for large single files instead of a directory, this answer on Unix & Linux gives useful information, especially this:
find . -type f -print0 | xargs -0 du -h | sort -rh
The output format is the same, but it lists files instead of directories. (With -h sizes, sort needs -h rather than -n to order the suffixed values correctly, and -print0/-0 keeps filenames with spaces intact.)
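Given the gap between df (100% used) and du (roughly 9.6 GB found) in the question, it may also be worth checking for files that were deleted while a process still holds them open; their space is not freed until the process closes them. A sketch:

lsof +L1    # open files with link count 0, i.e. deleted but still held open
# Restarting or signalling the owning process then releases the space.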

Shell command to delete folders less than N days [closed]

This is a slight variation on this question:
Remove all directories whose creation time is older than N days.
Don't consider the sub-directories/files within the directory.
Ex:
drwxrwxr-x 6 test test 4096 Aug 26 14:42 2.1.6-SNAPSHOT_201408261440_1
drwxrwxr-x 6 test test 4096 Sep 1 05:13 2.1.6-SNAPSHOT_201408281233_1
drwxrwxr-x 6 test test 4096 Sep 1 10:06 2.1.6-SNAPSHOT_201409011001_1
drwxrwxr-x 6 test test 4096 Sep 1 15:58 2.1.6-SNAPSHOT_201409011554_1
drwxrwxr-x 6 test test 4096 Sep 2 15:11 2.2.0-SNAPSHOT_201409021508_1
drwxrwxr-x 6 test test 4096 Sep 2 15:18 2.2.0-SNAPSHOT_201409021515_1
drwxrwxr-x 6 test test 4096 Sep 5 13:05 2.2.0-SNAPSHOT_201409051303_1
drwxrwxr-x 6 test test 4096 Sep 5 15:32 2.1.6-SNAPSHOT_201409051528_1
drwxrwxr-x 6 test test 4096 Sep 8 11:54 2.1.6-SNAPSHOT_201409081152_1
I should be able to delete all folders in this path that are older than N days. The folders might contain updated files/sub-directories that are newer; it doesn't matter.
Assuming you want to delete old directories:
N=4
find . -type d -mtime +$N -exec rm -fr {} +
A depth-first search would ensure that sub-directories are removed before the directories that contain them, but it might alter the modify time on a parent directory before find looks at it, which would mean that directories that were old are no longer counted as old. Conversely, rm may end up trying to remove directories that have already been removed, but the -f option ensures this does not produce error reports.
You might want to consult Explaining find … -mtime command for information about the meaning of +$N vs -$N vs $N (where N is assumed to hold a number).
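Since the question only wants the top-level folders judged by their own timestamps, a narrower variant may fit better (a sketch; N=4 is just an example threshold):

N=4
# Only immediate sub-directories of the current directory; each directory's own
# mtime decides, and rm -r removes its (possibly newer) contents regardless.
find . -mindepth 1 -maxdepth 1 -type d -mtime +$N -exec rm -fr {} +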
