File created through 'truncate -s' does not get the expected size - Linux

I created a file using truncate -s 1024M a.test.
I was expecting the size of a.test to be 1024M, but somehow I am not getting the expected size.
Below is the output:
$du -sh a.test
4.0K a.test
When I use ls -l a.test, the size is correct:
$ ll a.test
-rw-rw-r-- 1 work work 1073741824 Jul 12 17:26 a.test
Can someone help me out with this issue?

du tells you how much actual disk space the file uses. Since your file does not contain any data, the OS stores it as a sparse file, so the actual disk usage is much smaller than the logical size of the file. If you check it with du --apparent-size -sh a.test, it will report the size you expected.
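A minimal sketch reproducing this with GNU coreutils (using the file name a.test from the question):

```shell
# Create a 1 GiB sparse file instantly; no data blocks are allocated yet.
truncate -s 1024M a.test

disk=$(du -sh a.test | cut -f1)                      # actual disk usage, e.g. 0 or 4.0K
apparent=$(du --apparent-size -sh a.test | cut -f1)  # logical size: 1.0G

echo "disk=$disk apparent=$apparent"
rm a.test
```

As soon as real data is written into the file, blocks get allocated and the plain du figure grows accordingly.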

Related

How can I make a file with a specific size in a short time?

When I use "truncate -s" to make a file 10 gigabytes in size, I must wait at least 2 minutes until it is created.
Is there any Linux function or bash command to rapidly make a file with a 10 GiB capacity?
Have a look at fallocate; it can be used to allocate files of arbitrary sizes very quickly:
$ fallocate -l 10G ./largefile
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 11:29 largefile
Another method, considered a bit older but available if fallocate fails, is to use dd to seek past the desired end of the file (640K blocks of 16384 bytes = 10 GiB):
$ dd if=/dev/zero of=largefile bs=16384 count=0 seek=640K
0+0 records in
0+0 records out
0 bytes copied, 0.00393638 s, 0.0kB/s
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 12:00 largefile
I found another way to create a file of a given size with dd, using count=0 so that nothing is actually written:
dd if=/dev/zero of=output.dat bs=1G count=0 seek=10
Thanks for the help ("th3ant" and "Basile Starynkevitch").
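As a small sketch (GNU coreutils assumed; 100 MiB is used here instead of 10 GiB so it runs anywhere, scale the sizes up as needed), both truncate and a seek-only dd produce the same apparent size without writing any data:

```shell
# Both commands create a sparse file whose apparent size is 100 MiB.
truncate -s 100M big.trunc
dd if=/dev/zero of=big.dd bs=1 count=0 seek=100M 2>/dev/null

size_trunc=$(stat -c %s big.trunc)
size_dd=$(stat -c %s big.dd)

echo "$size_trunc $size_dd"
rm -f big.trunc big.dd
```

Both run in milliseconds regardless of the target size, because neither writes any data blocks.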

How to check size of a folder in dsefs

What is the command to check the size of a folder in DSEFS?
I tried df -h, but it does not give me the size of a particular folder; it gives me the size of the whole filesystem across all the nodes.
Thank you.
To get the size of each folder, run:
du -sh *
Example:
$ du -sh *
124K bin
91M clients
101M demos
4.0K ds_branch.txt
4.0K ds_timestamp.txt
4.0K ds_version.txt
7.4M javadoc
8.2M lib
412K licenses
28K LICENSE.txt
4.0K README.md
800M resources
100K tools
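A short sketch (hypothetical demo/ directory; GNU sort's -h option assumed) showing how to list folders sorted by size, largest last:

```shell
# Build a tiny hypothetical directory tree to demonstrate with.
mkdir -p demo/bin demo/lib
head -c 4096 /dev/zero > demo/bin/tool
head -c 1048576 /dev/zero > demo/lib/libx.a

# du's human-readable sizes sort correctly with sort -h (human-numeric).
listing=$(cd demo && du -sh -- * | sort -h)
echo "$listing"
rm -rf demo
```

The `--` guards against folder names that start with a dash.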

'size' vs 'ls -l' to get the size of an executable file

For the same file, I think the output of ls -l xxx is always greater than or equal to the output of size xxx.
But when I type ls -l /bin/ls the output is:
-rwxr-xr-x 1 root root 104508 1月 14 2015 /bin/ls
For size /bin/ls, the output is:
text data bss dec hex filename
101298 976 3104 105378 19ba2 /bin/ls
Why is ls showing less than size? 104508 < 105378
ls -l is telling you the size of the file, while the size command tells you the size of the executable image stored in the file -- how much memory it will require when loaded. Some segments (such as .bss) are zero-initialized rather than requiring data in the file to initialize them, so the file may well be smaller than the executable image as a result.

Disk usage - du showing different results [duplicate]

This question already has answers here:
why is the output of `du` often so different from `du -b`
I am confused by the du command because it gives different results for the same files.
[root@gerrh6-05 sathish]# du -s saravana/admin/sqlnet.ora
4 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -h saravana/admin/sqlnet.ora
4.0K saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -b saravana/admin/sqlnet.ora
65 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -bh saravana/admin/sqlnet.ora
65 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# ll -h saravana/admin/sqlnet.ora
-rw-r----- 1 root root 65 May 18 03:47 saravana/admin/sqlnet.ora
The disk usage summary returns a surprising result (-s gives 4 while -b gives 65), whereas bytes (-b) returns the same value as ll.
[root@gerrh6-05 sathish]# du -sh saravana/admin
114M saravana/admin
[root@gerrh6-05 sathish]# du -bh saravana/admin
12K saravana/admin/1/xdb_wallet
7.4K saravana/admin/1/pfile
7.2M saravana/admin/1/test/result/data
7.6M saravana/admin/1/test/result
7.0M saravana/admin/1/test/data
28M saravana/admin/1/test
7.2M saravana/admin/1/adump
4.0K saravana/admin/1/logbook/controlfile_trace
8.0K saravana/admin/1/logbook
4.2K saravana/admin/1/dpdump
35M saravana/admin/1
35M saravana/admin
From the above, which is the correct size of the /admin dir, 35M or 114M?
Which one should I take?
Note: I am working on a Linux machine with no UI. The reason I am looking into this is that I am writing a backup script, and I need to split folders and files based on a 4 GB size limit. Which value should I use for counting, given that the difference is so large?
From man du:
--apparent-size: print apparent sizes, rather than disk usage; although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like
-b, --bytes: equivalent to --apparent-size --block-size=1
So, -b tells you how much data is stored; without it, you get how much disk space is used. Both are a "correct size", for different definitions of "size".
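A short sketch of the sqlnet.ora observation with a hypothetical 65-byte file (a filesystem with 4 KiB blocks is assumed):

```shell
# Create a 65-byte file; it still occupies a whole filesystem block.
printf '%.65d' 0 > tiny.dat

bytes=$(du -b tiny.dat | cut -f1)   # 65: apparent size in bytes
blocks=$(du -k tiny.dat | cut -f1)  # disk usage rounded up to whole blocks, in KiB

echo "bytes=$bytes usage=${blocks}K"
rm tiny.dat
```

For the backup-splitting script in the question, -b matches what a copy of the data will contain, while the block-based figure matches what the source disk is actually using.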

'size' vs 'ls -la' vs 'du -h': which one is the correct size?

I was compiling a custom kernel, and I wanted to test the size of the image file.
These are the results:
ls -la | grep vmlinux
-rwxr-xr-x 1 root root 8167158 May 21 12:14 vmlinux
du -h vmlinux
3.8M vmlinux
size vmlinux
text data bss dec hex filename
2221248 676148 544768 3442164 3485f4 vmlinux
Since all of them show different sizes, which one is closest to the actual image size?
Why are they different?
They are all correct; they just measure different things:
ls shows the size of the file (when you open and read it, that is how many bytes you will get).
du shows actual disk usage, which can be smaller than the file size due to holes.
size shows the size of the runtime image of an object/executable, which is not directly related to the size of the file (.bss occupies no bytes in the file no matter how large it is, the file may contain debugging information that is not part of the runtime image, etc.).
If you want to know how much RAM/ROM an executable will take excluding dynamic memory allocation, size gives you the information you need.
Two distinctions need to be understood:
1 runtime vs storage (this is why size differs)
2 file vs directory depth (this is why du differs)
Look at the below example:
[root@localhost test]# ls -l
total 36
-rw-r--r-- 1 root root 712 May 12 19:50 a.c
-rw-r--r-- 1 root root 3561 May 12 19:42 a.h
-rwxr-xr-x 1 root root 71624 May 12 19:50 a.out
-rw-r--r-- 1 root root 1403 May 8 00:15 b.c
-rw-r--r-- 1 root root 1403 May 8 00:15 c.c
[root@localhost test]# du -abch --max-depth=1
1.4K ./b.c
1.4K ./c.c
3.5K ./a.h
712 ./a.c
70K ./a.out
81K .
81K total
[root@localhost test]# size a.out
text data bss dec hex filename
3655 640 16 4311 10d7 a.out
If you run size on a file that is not an object or executable, it will report an error.
Empirically, the differences happen most often for sparse files and for compressed files, and they can go in both directions.
du < ls
Sparse files record their logical length as metadata, which ls reports, while du counts only the blocks actually allocated. For example:
truncate -s 1m test.dat
creates a sparse file consisting entirely of nulls, with no disk usage: du shows 0 and ls shows 1M.
du > ls
On the other hand, du can indicate, as in your case, files which occupy a lot of space on disk (i.e. they are spread across many blocks) even though not all of those blocks are filled, so their byte size (measured by ls) is smaller than the space reported by du (which counts occupied blocks). I have observed this rather prominently, e.g., for some Python pickle files.
