Linux space check

Collectively check the space used by files in Linux.
I have more than 100 files and want to check their size collectively.
Edit: What I need is: I have a folder containing 1000 files, and I need a way to calculate the total size of only the 100 files I care about, not all 1000.

This command will give you the size in kilobytes of all the individual files/directories in the current directory:
du -ks *
This command will give you the combined total size of the current directory:
du -ks .
If you need to recurse and get more detailed info, the find command might help.
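If the subset of files you want can be matched by name, du can total exactly those. A sketch, assuming the 100 files share a hypothetical pattern such as *.log (the -c flag adds a grand-total line):
# Total a wildcard selection in the current directory
du -ch *.log | tail -n 1
# Or total matches from anywhere under the tree (GNU du)
find . -name '*.log' -print0 | du -ch --files0-from=- | tail -n 1
The second form uses GNU du's --files0-from to read the file list from find, so it also works when there are too many matches to pass as arguments.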

If you want the total size of all files in the current directory (in "human-readable" format):
du -sh

This is a bit vague. Assuming all you want is the total size of a bunch of files, there are any number of solutions.
If the files are all in the same directory, one very easy way is to just use
ls -lh | head -1
This prints a single line showing the "total" figure, with a friendly "human-readable" unit (that's the -h option to ls).
Note that this does not work with wildcards, since ls suppresses its "total" line when given file arguments.
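For instance (illustrative output):
$ ls -lh | head -1
total 18M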

I'm no Linux guru, but there should be some switch for the ls command that shows size.
If that fails, look into using du.

Using gdu:
aaa:vim70> gdu
5028 ./doc
4420 ./syntax
...
176 ./compiler
16 ./macros/hanoi
16 ./macros/life
48 ./macros/maze
20 ./macros/urm
200 ./macros
252 ./keymap
18000 .
You can use --max-depth to limit the depth of the search:
aaa:vim70> gdu --max-depth=1
5028 ./doc
136 ./print
76 ./colors
4420 ./syntax
420 ./indent
628 ./ftplugin
1260 ./autoload
64 ./plugin
800 ./tutor
3348 ./spell
176 ./compiler
200 ./macros
112 ./tools
844 ./lang
252 ./keymap
18000 .
Notice that the subdirectories of macros don't appear.
or even:
aaa:vim70> gdu --max-depth=0
18000 .
The default unit is kilobytes. You can use -h to get it in human readable form:
aaa:vim70> gdu --max-depth=1 -h
5.0M ./doc
136k ./print
76k ./colors
4.4M ./syntax
420k ./indent
628k ./ftplugin
1.3M ./autoload
64k ./plugin
800k ./tutor
3.3M ./spell
176k ./compiler
200k ./macros
112k ./tools
844k ./lang
252k ./keymap
18M .
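To spot the largest subdirectories quickly, you can sort that output; GNU sort's -h flag (where available) understands the human-readable suffixes. A minimal sketch:
gdu --max-depth=1 -h | sort -h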

Related

How can I make a file with a specific size in a short time?

When I use "truncate -s" to make a file with a size of 10 gigabytes, I have to wait at least 2 minutes until it is created.
Is there any Linux function or bash command to rapidly make a file with a 10 GB cap?
Have a look at fallocate, it can be used to allocate files of arbitrary sizes very quickly:
$ fallocate -l 10G ./largefile
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 11:29 largefile
Another method, a bit older but usable where fallocate is unavailable, is dd with a seek past the end of the file (16384-byte blocks x 640K = 10 GiB):
$ dd if=/dev/zero of=largefile bs=16384 count=0 seek=640K
0+0 records in
0+0 records out
0 bytes copied, 0.00393638 s, 0.0kB/s
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 12:00 largefile
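Note that the dd invocation above (count=0 plus seek) creates a sparse file: the apparent size is 10G, but almost no blocks are allocated yet. You can confirm this with du (illustrative output; figures vary by filesystem):
$ du -h ./largefile
0       ./largefile
$ du -h --apparent-size ./largefile
10G     ./largefile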
I found a way to create a file of a chosen size almost instantly: with count=0, dd copies nothing and simply extends the file to the seek offset, producing a sparse file.
dd if=/dev/zero of=output.dat bs=1 count=0 seek=10G
Thanks for helping ("th3ant" and "Basile Starynkevitch").

File created through 'truncate -s' does not show the expected size

I created a file using truncate -s 1024M a.test.
I was expecting the size of a.test to be 1024M, but du does not report that:
$ du -sh a.test
4.0K a.test
When using ls -l a.test, it looks fine:
$ ll a.test
-rw-rw-r-- 1 work work 1073741824 Jul 12 17:26 a.test
Can someone help me out with this issue?
du tells you how much disk space you actually use. Since your file does not have any data in it, the OS stores it as a sparse file, so the actual disk usage is much smaller than the apparent size. If you check it with "du --apparent-size -sh a.test", it will report what you expected.
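For instance (illustrative output):
$ du --apparent-size -sh a.test
1.0G    a.test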

Disk usage - du showing different results [duplicate]

This question already has answers here:
why is the output of `du` often so different from `du -b`
(5 answers)
Closed 6 years ago.
I am confused with du command because it gives different result for files.
[root@gerrh6-05 sathish]# du -s saravana/admin/sqlnet.ora
4 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -h saravana/admin/sqlnet.ora
4.0K saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -b saravana/admin/sqlnet.ora
65 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# du -bh saravana/admin/sqlnet.ora
65 saravana/admin/sqlnet.ora
[root@gerrh6-05 sathish]# ll -h saravana/admin/sqlnet.ora
-rw-r----- 1 root root 65 May 18 03:47 saravana/admin/sqlnet.ora
The disk-usage summary returns a different result (-s gives 4 while -b gives 65); the bytes (-b) figure matches the ll result.
[root@gerrh6-05 sathish]# du -sh saravana/admin
114M saravana/admin
[root@gerrh6-05 sathish]# du -bh saravana/admin
12K saravana/admin/1/xdb_wallet
7.4K saravana/admin/1/pfile
7.2M saravana/admin/1/test/result/data
7.6M saravana/admin/1/test/result
7.0M saravana/admin/1/test/data
28M saravana/admin/1/test
7.2M saravana/admin/1/adump
4.0K saravana/admin/1/logbook/controlfile_trace
8.0K saravana/admin/1/logbook
4.2K saravana/admin/1/dpdump
35M saravana/admin/1
35M saravana/admin
From the above, which is the correct size of the admin directory: 35M or 114M?
Which one should I take?
Note: I am working on a Linux machine with no UI. The reason I ask is that I am writing a backup script that must split folders and files based on a 4 GB size limit. Which figure should I use for the count? The difference is large!
From man du:
--apparent-size: print apparent sizes, rather than disk usage; although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like
-b, --bytes: equivalent to --apparent-size --block-size=1
So, -b tells you how much data is stored; without it, you get how much disk space is used. Both are the "correct size", for different definitions of "size".
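For the backup use case, the apparent size is usually the safer figure, since it bounds how many bytes a backup of the file contents must copy. A quick way to compare both for a directory (a sketch, reusing the paths from the question):
du -sb saravana/admin    # apparent size, in bytes
du -s  saravana/admin    # allocated space, in 1K blocks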

How to find the tasks taking the most RAM in Linux

With the command free -g, I can see the total used and free RAM in Linux. But I want to understand which tasks or processes are taking the most memory, so that I can free some of it up.
             total       used       free     shared    buffers     cached
Mem:           125        121          4          0          6         94
-/+ buffers/cache:          20        105
Swap:           31          0         31
Go for the top command, then press Shift+F to pick the field to sort by (for example, press a for PID information).
Also check:
ps -eo pmem,vsz,pid
See man ps for the pmem, vsz, and pid fields.
Hope it helps, and thanks for the question!
You can use the command below to find running processes sorted by memory use:
ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r | less
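With procps ps you can also let ps itself do the sorting (same idea, slightly shorter):
ps aux --sort=-%mem | head -n 10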

size vs ls -la vs du -h: which one is the correct size?

I was compiling a custom kernel, and I wanted to test the size of the image file.
These are the results:
ls -la | grep vmlinux
-rwxr-xr-x 1 root root 8167158 May 21 12:14 vmlinux
du -h vmlinux
3.8M vmlinux
size vmlinux
   text    data     bss     dec     hex filename
2221248  676148  544768 3442164  3485f4 vmlinux
Since all of them show different sizes, which one is closest to the actual image size?
Why are they different?
They are all correct; they just show different sizes.
ls shows the size of the file: when you open and read it, that's how many bytes you will get.
du shows the actual disk usage, which can be smaller than the file size due to holes.
size shows the size of the runtime image of an object/executable, which is not directly related to the size of the file (bss uses no bytes in the file no matter how large it is, and the file may contain debugging information that is not part of the runtime image, etc.).
If you want to know how much RAM/ROM an executable will take, excluding dynamic memory allocation, size gives you the information you need.
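You can see the bss effect directly. A minimal sketch (hypothetical file name bss_demo.c): a large zero-initialized array lands in bss, so the file on disk stays small while size reports the runtime footprint:
cat > bss_demo.c <<'EOF'
static char big[1 << 20];              /* 1 MiB, zero-initialized: goes to bss */
int main(void) { return big[0]; }
EOF
cc -o bss_demo bss_demo.c
ls -l bss_demo      # file size: well under 1 MiB
size bss_demo       # bss column: about 1048576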
Two distinctions need to be understood:
1. runtime size vs. stored size (this is why size differs)
2. a single file vs. a directory tree (this is why du differs)
Look at the below example:
[root@localhost test]# ls -l
total 36
-rw-r--r-- 1 root root 712 May 12 19:50 a.c
-rw-r--r-- 1 root root 3561 May 12 19:42 a.h
-rwxr-xr-x 1 root root 71624 May 12 19:50 a.out
-rw-r--r-- 1 root root 1403 May 8 00:15 b.c
-rw-r--r-- 1 root root 1403 May 8 00:15 c.c
[root@localhost test]# du -abch --max-depth=1
1.4K ./b.c
1.4K ./c.c
3.5K ./a.h
712 ./a.c
70K ./a.out
81K .
81K total
[root@localhost test]# size a.out
   text    data     bss     dec     hex filename
   3655     640      16    4311    10d7 a.out
If you use size on something that is not an object file or executable, it will report an error.
Empirically, differences happen most often for sparse files and for compressed files, and they can go in both directions.
du < ls
A sparse file records its nominal length as metadata; ls reads and reports that length, while du counts only the blocks actually allocated. For example:
truncate -s 1m test.dat
creates a sparse file consisting entirely of nulls without using disk space, i.e. du shows 0 and ls shows 1M.
du > ls
On the other hand, du can indicate, as in your case, files which occupy a lot of space on disk (i.e. they spread across many blocks) but whose blocks are not completely filled, so their byte size (measured by ls) is smaller than what du reports for the occupied blocks. I observed this rather prominently for some Python pickle files, for example.
