How can I make a file with a specific size in a short time? - linux

When I use "truncate -s" to make a file with a size of 10 gigabytes, I have to wait at least 2 minutes until it is created.
Is there any Linux function or bash command to rapidly make a file with a 10 GB size?

Have a look at fallocate; it can be used to allocate files of arbitrary sizes very quickly:
$ fallocate -l 10G ./largefile
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 11:29 largefile
Another method, which is a bit older but should work if fallocate is unavailable, is to use dd:
$ dd if=/dev/zero of=largefile bs=16384 count=0 seek=640K
0+0 records in
0+0 records out
0 bytes copied, 0.00393638 s, 0.0kB/s
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 12:00 largefile
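To check whether the file you just created actually reserves disk blocks, a quick sanity check (a sketch using the file names above) is to compare the nominal size with the allocated size; fallocate reserves real blocks, while the dd seek trick leaves a sparse file:
$ ls -lh largefile    # nominal size: 10G either way
$ du -h largefile     # allocated blocks: about 10G after fallocate, close to 0 after the dd seek trick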

I found a way to create a file of an arbitrary size:
dd if=/dev/zero of=output.dat bs=10G seek=1 count=0
Thanks for the help ("th3ant" and "Basile Starynkevitch").

Related

File created through 'truncate -s' does not get the expected size

I created a file using truncate -s 1024M a.test.
I was expecting the size of a.test to be 1024M, but somehow I am not getting the expected size from du.
Below is my output.
$ du -sh a.test
4.0K a.test
When using ls -l a.test, it is ok:
$ ll a.test
-rw-rw-r-- 1 work work 1073741824 Jul 12 17:26 a.test
Can someone help me out with this issue?
du tells you how much actual disk space you use. Since your file does not have any data in it, the OS will store it as a sparse file, so actual disk usage is much smaller than the size of the file. If you check it with "du --apparent-size -sh a.test", then that will report what you expected.
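As a minimal sketch using the same a.test file from the question, you can put the two measurements side by side:
$ du -sh a.test                    # allocated blocks only: 4.0K
$ du --apparent-size -sh a.test    # nominal (apparent) size: 1.0G
$ ls -lh a.test                    # also reports the nominal size: 1.0G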

How does `ls -lh` round file size?

I'm comparing the rounded file size value displayed by ls -lh to the raw size in bytes (as displayed by ls -l, say). I'm having a hard time figuring out what algorithm it uses to do the conversion from bytes.
My assumption is that it interprets the units K,M,G as either
(a) 10^3, 10^6, 10^9, or
(b) 1024, 1024^2, 1024^3.
On the one hand, I have one file that ls -l reports as 2052 bytes, and ls -lh rounds to 2.1K:
$ ls -l usercount.c
-rw-r--r-- 1 squirrel lsf 2052 May 13 15:41 usercount.c
$ ls -lh usercount.c
-rw-r--r-- 1 squirrel lsf 2.1K May 13 15:41 usercount.c
This would seem to support hypothesis (a), because 2052/1000=2.052 which rounds up to 2.1K but 2052/1024=2.0039 which clearly would display as 2.0K when rounded to one decimal place.
On the other hand, I have another file that ls -l reports as being 7223 bytes, which ls -lh displays as 7.1K:
$ ls -l traverse.readdir_r.c
-rw-r--r-- 1 squirrel lsf 7223 Jul 21 2014 traverse.readdir_r.c
$ ls -lh traverse.readdir_r.c
-rw-r--r-- 1 squirrel lsf 7.1K Jul 21 2014 traverse.readdir_r.c
This confusingly supports hypothesis (b), because 7223/1000=7.223, which should round down to 7.2K, but 7223/1024=7.0537, which rounds up to the displayed 7.1K.
This leads me to conclude that my assumption is wrong and that it does neither (a) nor (b) exclusively. What algorithm does ls use to do this rounding?
GNU ls will by default round up in 1024-based units.
It does not round to nearest, as you've taken for granted.
Here's the formatting flag from gnulib human.h:
/* Round to plus infinity (default). */
human_ceiling = 0,
This is consistent with everything you're seeing:
2052 is 2.0039 KiB which rounds up to 2.1
7223 is 7.0537 KiB which rounds up to 7.1
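A minimal shell sketch of that round-up rule, using awk for the arithmetic (it only reproduces the sub-10-KiB, one-decimal case from the question, not the full ls logic):
# Round a byte count up ("towards plus infinity") to one decimal place in KiB
human_kib() {
  awk -v b="$1" 'BEGIN { v = b * 10 / 1024; c = int(v); if (c < v) c++; printf "%.1fK\n", c / 10 }'
}
human_kib 2052   # prints 2.1K
human_kib 7223   # prints 7.1K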
By default the block size in ls is 1024; for example, a value of 44.203125K will be rounded up to 45K.
You can change the block size, too:
ls -lh --block-size=1000
And the source code: ls source code

size vs ls -la vs du -h: which one gives the correct size?

I was compiling a custom kernel, and I wanted to test the size of the image file.
These are the results:
ls -la | grep vmlinux
-rwxr-xr-x 1 root root 8167158 May 21 12:14 vmlinux
du -h vmlinux
3.8M vmlinux
size vmlinux
text data bss dec hex filename
2221248 676148 544768 3442164 3485f4 vmlinux
Since all of them show different sizes, which one is closest to the actual image size?
Why are they different?
They are all correct, they just show different sizes.
ls shows size of the file (when you open and read it, that's how many bytes you will get)
du shows actual disk usage which can be smaller than the file size due to holes
size shows the size of the runtime image of an object/executable which is not directly related to the size of the file (bss uses no bytes in the file no matter how large, the file may contain debugging information that is not part of the runtime image, etc.)
If you want to know how much RAM/ROM an executable will take excluding dynamic memory allocation, size gives you the information you need.
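As a small sketch of the bss point (assuming gcc is available; bigbss.c is just a throwaway name): a large zero-initialized array lives in .bss, so it inflates the runtime image reported by size without growing the file reported by ls or du:
cat > bigbss.c <<'EOF'
static char buffer[8 * 1024 * 1024];   /* 8 MiB of zero-initialized data, placed in .bss */
int main(void) { return buffer[0]; }
EOF
gcc -o bigbss bigbss.c
ls -l bigbss    # file size on disk stays small
du -h bigbss    # allocated blocks stay small
size bigbss     # the bss column shows roughly 8 MiB of runtime memory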
Two distinctions need to be understood:
1. runtime size vs. stored size (this is why size differs)
2. a single file vs. a directory tree (this is why du differs)
Look at the below example:
[root@localhost test]# ls -l
total 36
-rw-r--r-- 1 root root 712 May 12 19:50 a.c
-rw-r--r-- 1 root root 3561 May 12 19:42 a.h
-rwxr-xr-x 1 root root 71624 May 12 19:50 a.out
-rw-r--r-- 1 root root 1403 May 8 00:15 b.c
-rw-r--r-- 1 root root 1403 May 8 00:15 c.c
[root@localhost test]# du -abch --max-depth=1
1.4K ./b.c
1.4K ./c.c
3.5K ./a.h
712 ./a.c
70K ./a.out
81K .
81K total
[root@localhost test]# size a.out
text data bss dec hex filename
3655 640 16 4311 10d7 a.out
If you use size on something that is not an object file or executable, it will report an error.
Empirically, differences happen most often for sparse files and for compressed files, and they can go in both directions.
du < ls
Sparse files carry a nominal size that can be larger than the disk space actually allocated; ls reports that nominal size, while du reports only the allocated blocks. For example:
truncate -s 1m test.dat
creates a sparse file consisting entirely of nulls, with no disk usage, i.e. du shows 0 and ls shows 1M.
du > ls
On the other hand, du can report more than ls, as in your case, when a file is spread over many blocks that are not completely filled: its byte size (measured by ls) is smaller than the space taken by its allocated blocks (measured by du). I observed this rather prominently for some Python pickle files, for example.

How to create a file with ANY given size in Linux?

I have read this question:
How to create a file with a given size in Linux?
But I haven't got an answer to my question.
I want to create a file of 372.07 MB,
I tried the following command in Ubuntu 10.08:
dd if=/dev/zero of=output.dat bs=390143672 count=1
dd: memory exhausted
390143672=372.07*1024*1024
Is there any other methods?
Thanks a lot!
Edit:
How do I view a file's size on the Linux command line with decimals? I mean, ls -hl just says '373M', but the file is actually 372.07M.
Sparse file
dd of=output.dat bs=1 seek=390143672 count=0
This has the added benefit of creating the file sparse if the underlying filesystem supports that. This means no space is wasted for pages (blocks) that are never written to, and the file creation is extremely quick.
Non-sparse (opaque) file:
Edit: since people have rightly pointed out that sparse files have characteristics that could be disadvantageous in some scenarios, here is the sweet spot:
You could use fallocate (present in Debian as part of util-linux) instead:
fallocate -l 390143672 output.dat
This still has the benefit of not needing to actually write the blocks, so it is pretty much as quick as creating the sparse file, but it is not sparse. Best Of Both Worlds.
Change your parameters:
dd if=/dev/zero of=output.dat bs=1 count=390143672
otherwise dd tries to create a 370MB buffer in memory.
If you want to do it more efficiently, write the 372MB part first with large-ish blocks (say 1M), then write the tail part with 1 byte blocks by using the seek option to go to the end of the file first.
Example (scaled down here to a file of 1 MiB plus 42 bytes):
dd if=/dev/zero of=./output.dat bs=1M count=1
dd if=/dev/zero of=./output.dat seek=1M bs=1 count=42
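The same two-step approach, sketched at full scale for the 390143672-byte target from the question (372 whole 1 MiB blocks plus a 73400-byte tail; dd preserves the blocks it seeks over, so the second command only writes the tail):
dd if=/dev/zero of=./output.dat bs=1M count=372                   # 372 * 1048576 = 390070272 bytes
dd if=/dev/zero of=./output.dat bs=1 count=73400 seek=390070272   # 390070272 + 73400 = 390143672 bytes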
truncate - shrink or extend the size of a file to the specified size
The following example truncates putty.log from 298 bytes to 235 bytes.
root@ubuntu:~# ls -l putty.log
-rw-r--r-- 1 root root 298 2013-10-11 03:01 putty.log
root@ubuntu:~# truncate putty.log -s 235
root@ubuntu:~# ls -l putty.log
-rw-r--r-- 1 root root 235 2013-10-14 19:07 putty.log
Swap count and bs. A buffer of bs bytes is held in memory, so it can't be that big.

Linux space check

Collectively check the space used by files in Linux...
I have more than 100 files... and want to check their size collectively...
Edit: What I need is: I have a folder containing 1000 files, and I need a way to calculate the total size of only the 100 files I need, not all 1000 files.
This command will give you the size in kilobytes of all the individual files/directories in the current directory:
du -ks *
This command will give you the combined total size of the current directory:
du -ks .
If you need to recurse and get more detailed info, the find command might help.
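For the edited requirement (the total of only the ~100 files you care about rather than the whole directory), a minimal sketch is to pass just those files to du and let -c print a grand total; the *.log pattern and files.txt list here are only placeholders for however you pick your files:
du -ch *.log | tail -n 1               # total of the files matching a pattern
du -ch $(cat files.txt) | tail -n 1    # total of the files listed in files.txt (assumes names without spaces)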
If you want the total size of all files in the current directory (in "human-readable" format):
du -sh
This is a bit vague... Assuming all you want is the total size of a bunch of files, there are any number of solutions.
If the files are all in the same directory, one very easy way is to just use
ls -lh | head -1
This prints a single line showing the "total" number, with a friendly "human-readable" (that's the -h option to ls) unit even.
Note that this does not work with wildcards, since then ls suppresses its "total"-line.
I'm no linux guru, but there should be some switch of the ls command that shows size.
If that fails, look into using du.
Using gdu:
aaa:vim70> gdu
5028 ./doc
4420 ./syntax
.
.
.
176 ./compiler
16 ./macros/hanoi
16 ./macros/life
48 ./macros/maze
20 ./macros/urm
200 ./macros
252 ./keymap
18000 .
You can use --max-depth to limit the depth of the search:
aaa:vim70> gdu --max-depth=1
5028 ./doc
136 ./print
76 ./colors
4420 ./syntax
420 ./indent
628 ./ftplugin
1260 ./autoload
64 ./plugin
800 ./tutor
3348 ./spell
176 ./compiler
200 ./macros
112 ./tools
844 ./lang
252 ./keymap
18000 .
Notice that the subdirectories of macros don't appear.
or even:
aaa:vim70> gdu --max-depth=0
18000 .
The default unit is kilobytes. You can use -h to get it in human readable form:
aaa:vim70> gdu --max-depth=1 -h
5.0M ./doc
136k ./print
76k ./colors
4.4M ./syntax
420k ./indent
628k ./ftplugin
1.3M ./autoload
64k ./plugin
800k ./tutor
3.3M ./spell
176k ./compiler
200k ./macros
112k ./tools
844k ./lang
252k ./keymap
18M .
