Which file size is most accurate: ll, ls, or --block-size=M/G? - linux

So take the following dir:
4096 dir1
7255937636 dir2
This is what I get with just an ll command. If I do ls -l --block-size=M I end up with:
1M dir1
6920M dir2
Finally if I do ls -l --block-size=G I end up with:
1G dir1
7G dir2
I get that 6920 is easily rounded up to 7G, but it seems like a big stretch to round that 4096 up to 1G. I also don't understand why the second example isn't 7256M or something closer to the byte count. Even more, if we're always rounding up, why isn't the 7256 rounded up to 8G?
I guess I don't fully understand what I'm looking at here, since nothing gives as accurate a value as I'm expecting.

Apparently you are confusing the block size with the unit used for displaying the (correct) size. Try using
ls -lh
to enable auto scaling for human-readable output.
BTW: ll usually is just an alias for ls -l. The exact byte count it prints is also the most accurate value you will get.
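The behaviour is easy to reproduce with two throwaway files of the sizes from the question (the /tmp paths are just illustrative; truncate creates the files sparse, so no real disk space is consumed):

```shell
mkdir -p /tmp/sizedemo
truncate -s 4096 /tmp/sizedemo/small
truncate -s 7255937636 /tmp/sizedemo/big

ls -l /tmp/sizedemo/small /tmp/sizedemo/big
# size column: exact byte counts (4096 and 7255937636)

ls -l --block-size=M /tmp/sizedemo/small /tmp/sizedemo/big
# size column: 1M and 6920M -- every size is rounded UP to a whole MiB

ls -lh /tmp/sizedemo/small /tmp/sizedemo/big
# size column: 4.0K and 6.8G -- a suitable unit is chosen per file
```

So --block-size doesn't change what is measured, only the unit everything is expressed in, with rounding toward plus infinity; -h picks the unit per file instead.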

Related

file created through 'truncate -s' can not get expected size

I created a file using truncate -s 1024M a.test.
I was expecting the size of a.test to be 1024M, but somehow I'm not getting that size.
Below is my code.
$du -sh a.test
4.0K a.test
When using ls -l a.test, it is ok:
$ ll a.test
-rw-rw-r-- 1 work work 1073741824 Jul 12 17:26 a.test
Can someone help me out with this issue?
du tells you how much actual disk space the file uses. Since your file does not have any data in it, the OS stores it as a sparse file, so the actual disk usage is much smaller than the file's apparent size. If you check it with "du --apparent-size -sh a.test", it will report what you expected.
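A quick way to see both numbers side by side (the /tmp path is just an example):

```shell
truncate -s 1024M /tmp/a.test        # sets the length without writing any data

ls -l /tmp/a.test                    # apparent size: 1073741824 bytes
du -sh /tmp/a.test                   # allocated blocks: typically 0 or 4.0K
du --apparent-size -sh /tmp/a.test   # 1.0G -- the length, not the allocation
```

Writing real data into the file (e.g. with dd from /dev/zero) would make the two numbers converge, since the blocks would then actually be allocated.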

'size' vs 'ls -l' to get the size of an executable file

For the same file, I think the output of ls -l xxx is always greater than or equal to the output of size xxx.
But when I type ls -l /bin/ls the output is:
-rwxr-xr-x 1 root root 104508 Jan 14 2015 /bin/ls
For size /bin/ls, the output is:
text data bss dec hex filename
101298 976 3104 105378 19ba2 /bin/ls
Why is ls showing less than size? 104508 < 105378
ls -l is telling you the size of the file, while the size command tells you the size of the executable image stored in the file -- how much memory it will require when loaded. Some segments (such as .bss) are zero-initialized rather than requiring data in the file to initialize them, so the file may well be smaller than the executable image as a result.

How does `ls -lh` round file size?

I'm comparing the rounded file size value displayed by ls -lh to the raw size in bytes (as displayed by ls -l, say). I'm having a hard time figuring out what algorithm it uses to do the conversion from bytes.
My assumption is that it interprets the units K,M,G as either
(a) 10^3, 10^6, 10^9, or
(b) 1024, 1024^2, 1024^3.
On the one hand, I have one file that ls -l reports as 2052 bytes, and ls -lh rounds to 2.1K:
$ ls -l usercount.c
-rw-r--r-- 1 squirrel lsf 2052 May 13 15:41 usercount.c
$ ls -lh usercount.c
-rw-r--r-- 1 squirrel lsf 2.1K May 13 15:41 usercount.c
This would seem to support hypothesis (a), because 2052/1000=2.052 which rounds up to 2.1K but 2052/1024=2.0039 which clearly would display as 2.0K when rounded to one decimal place.
On the other hand, I have another file that ls -l reports as being 7223 bytes, which ls -lh displays as 7.1K:
$ ls -l traverse.readdir_r.c
-rw-r--r-- 1 squirrel lsf 7223 Jul 21 2014 traverse.readdir_r.c
$ ls -lh traverse.readdir_r.c
-rw-r--r-- 1 squirrel lsf 7.1K Jul 21 2014 traverse.readdir_r.c
This confusingly supports hypothesis (b), because 7223/1000=7.223, which should round down to 7.2K, but 7223/1024=7.0537, which rounds up to the displayed 7.1K.
This leads me to conclude that my assumption is wrong and that it does neither (a) nor (b) exclusively. What algorithm does ls use to do this rounding?
GNU ls will by default round up in 1024-based units.
It does not round to nearest, as you've taken for granted.
Here's the formatting flag from gnulib human.h:
/* Round to plus infinity (default). */
human_ceiling = 0,
This is consistent with everything you're seeing:
2052 is 2.0039 KiB which rounds up to 2.1
7223 is 7.0537 KiB which rounds up to 7.1
By default the block size in ls is 1024, so for example a value of 44.203125K will be rounded up and shown as 45K.
You can change the unit too:
ls -lh --block-size=1000
See the ls source code for the implementation.
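The round-to-plus-infinity rule can be sketched in a couple of lines of awk (ceil_kib is a made-up helper name; the real ls does this in C via gnulib's human_readable):

```shell
# Round a byte count up to one decimal place in KiB, mimicking human_ceiling.
ceil_kib() {
  awk -v b="$1" 'BEGIN {
    x = b * 10 / 1024                     # tenths of a KiB
    c = (x == int(x)) ? x : int(x) + 1    # round toward +infinity
    printf "%.1fK\n", c / 10
  }'
}

ceil_kib 2052   # 2.1K (2.0039 KiB, rounded up)
ceil_kib 7223   # 7.1K (7.0537 KiB, rounded up)
```

An exact multiple stays put: 2048 bytes comes out as 2.0K, since there is nothing to round.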

Can cygwin ls show ACLs without providing the DOS path to file?

The commands
cd c:/p4
ls -ld . c:/p4 /cygdrive/c/p4
shows
d---------+ 1 jgunter Domain Users 0 Apr 27 18:41 .
d---------+ 1 jgunter Domain Users ? 0 Apr 27 18:41 /cygdrive/c/p4
drwxr-xr-x 1 jgunter Domain Users ? 0 Apr 27 18:41 c:/p4
ls shows the perms I want to see only for files specified with a C:/ path.
I know about getfacl, but I'm hoping there's some ls option that will show me what I want without requiring I spell out absolute paths.
I can do something like:
ls -ld `cygpath -da $#`
but when I'm in a deeply nested folder, the output is cluttered by full pathnames.
A DOS-style path makes Cygwin treat the file system as if it were mounted without ACL support, so ls is showing the correct information in both cases; the same directory is just being handled as if mounted with different options. Therefore ls doesn't have such an option, and you need a workaround.
https://cygwin.com/cygwin-ug-net/ov-new1.7.html states at 1.7.2:
Handle native DOS paths always as if mounted with "posix=0,noacl"
Besides this, I think that d---------+ is strange. I've tried it on my PC with Cygwin 1.7.31, and it shows drwx------+, which is a bit better. I have experienced other bugs and strange behaviour in Cygwin's ACL handling; I guess there is some confusion and some hacks around this. chmod 777 was a good workaround in my case.

Linux space check

How can I collectively check the space used by files in Linux?
I have more than 100 files and want to check their size collectively.
Edit: What I need is: I have a folder containing 1000 files, and I need to calculate the total size of only the 100 files I care about, not all 1000.
This command will give you the size in kilobytes of all the individual files/directories in the current directory:
du -ks *
This command will give you the combined total size of the current directory:
du -ks .
If you need to recurse and get more detailed info, the find command might help.
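For the edited question -- totaling only a chosen subset of the files -- du -c with an explicit file list and its final "total" line is one way to sketch it (the .log pattern and /tmp paths below are just stand-ins for "the 100 files I need"):

```shell
mkdir -p /tmp/dudemo
head -c 2048 /dev/zero > /tmp/dudemo/a.log
head -c 2048 /dev/zero > /tmp/dudemo/b.log
head -c 2048 /dev/zero > /tmp/dudemo/other.txt

# -c appends a grand total; only the files matching the pattern are counted.
du -ch /tmp/dudemo/*.log | tail -1    # e.g. "8.0K  total"
```

The exact kilobyte figure depends on the filesystem's block size, but other.txt is excluded either way.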
If you want the total size of all files in the current directory (in human-readable format):
du -sh
This is a bit vague... Assuming all you want is to get the total size of a bunch of files, there are any number of solutions.
If the files are all in the same directory, one very easy way is to just use
ls -lh | head -1
This prints a single line showing the "total" number, with a friendly "human-readable" (that's the -h option to ls) unit even.
Note that this does not work with wildcards, since then ls suppresses its "total"-line.
I'm no linux guru, but there should be some switch of the ls command that shows size.
If that fails, look into using du.
Using gdu:
aaa:vim70> gdu
5028 ./doc
4420 ./syntax
.
.
.
176 ./compiler
16 ./macros/hanoi
16 ./macros/life
48 ./macros/maze
20 ./macros/urm
200 ./macros
252 ./keymap
18000 .
You can use --max-depth to limit the depth of the search:
aaa:vim70> gdu --max-depth=1
5028 ./doc
136 ./print
76 ./colors
4420 ./syntax
420 ./indent
628 ./ftplugin
1260 ./autoload
64 ./plugin
800 ./tutor
3348 ./spell
176 ./compiler
200 ./macros
112 ./tools
844 ./lang
252 ./keymap
18000 .
Notice that the subdirectories of macros don't appear.
or even:
aaa:vim70> gdu --max-depth=0
18000 .
The default unit is kilobytes. You can use -h to get it in human readable form:
aaa:vim70> gdu --max-depth=1 -h
5.0M ./doc
136k ./print
76k ./colors
4.4M ./syntax
420k ./indent
628k ./ftplugin
1.3M ./autoload
64k ./plugin
800k ./tutor
3.3M ./spell
176k ./compiler
200k ./macros
112k ./tools
844k ./lang
252k ./keymap
18M .
