fsutil file createnew on Windows vs dd on Linux

As the title says, I wonder how fsutil on Windows can create a really large file so quickly. Does it actually allocate real clusters for the file, or does it just write the file's metadata? Consider the two commands below:
fsutil file createnew testFile <1Tb>
dd if=/dev/zero of=testFile bs=1024M count=1024
So I create a file of 1 TB. The problem is that with fsutil the file is created almost immediately, but with dd it took over an hour to complete.
Therefore, I guess that fsutil only writes the file's metadata, and the real clusters are allocated whenever they are needed. Am I right?

From the Microsoft documentation:
createnew
Creates a file of the specified name and size, with content that consists of zeroes.
From this, you can say that the file must be all zeros ("[...] with content that consists of zeroes").
But if this is true:
[...]the file is nearly created immediately[...]
then I think that you are right: probably fsutil creates the file with its space merely reserved at the time of execution, but doesn't actually write those bytes.
When you use dd like this
dd if=/dev/zero of=testFile bs=1024M count=1024
you are actually writing zeros into every byte of the new file.
You can do this:
fsutil file createnew testFile_fsutil <1Tb> #(on Windows)
dd if=/dev/zero of=testFile_dd bs=1024M count=1024 #(on Linux)
and then you can inspect the contents of testFile_fsutil in any hex editor, looking for non-zero bytes, or, more precisely, from Linux you can do (1099511627776 bytes = 1 TiB):
cmp --bytes=1099511627776 testFile_fsutil testFile_dd
or
cmp --bytes=1099511627776 testFile_fsutil /dev/zero
or even using hashes:
dd if=/dev/zero bs=1024M count=1024 | sha1sum
returns
fea475204f4ad1348f21fad930a7c721a2858971
so,
dd if=testFile_fsutil bs=1024M count=1024 | sha1sum
must return the same.
Note: to prove your point much more quickly, you can use a much smaller file for test.
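For instance, a quick check with a much smaller file (100 MiB here; the file names and size are just examples) might look like this:
fsutil file createnew testFile_fsutil 104857600 #(on Windows, 100 MiB)
dd if=/dev/zero of=testFile_dd bs=1M count=100 #(on Linux, 100 MiB)
cmp --bytes=104857600 testFile_fsutil testFile_dd #(prints nothing if both files are all zeros)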

Related

Linux split big files by chunks

I have a big file (15GB) located in my host.
I want to split this file into chunks of 200MB.
Currently, I do it using:
split -a 3 -d -b 200MB my_big_file /tmp/chunk_
The problem is that for now I have only 10GB of free space. I want to split it by offset: the first step is to read 7GB from the big file, split it with split, remove the resulting chunks, and then split the range from 7GB to 15GB.
How can I do it?
Use dd to read the file, specifying a block size of 1 and a count of exactly half the number of bytes in the file so that only the first half is read, and pipe the output of dd into split, like this
(assumptions: big_file is the name of your 15GB file and its size, in bytes, is exactly 15GB):
# dd if=big_file bs=1 count=8053063680 | split -a 3 -d -b 200MB - /tmp/chunk_
This will split the first half of file in chunks of 200MB.
Note that 8053063680 is half the number of bytes in 15GB (16106127360 bytes).
For the second half:
# dd if=big_file bs=1 skip=8053063680 count=8053063680 | split -a 3 -d -b 200MB - /tmp/chunk_
Again, be sure about the exact size of your file in bytes and, based on that, set the values of count and skip.
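If you would rather not hard-code the byte counts, a sketch along these lines (assuming GNU coreutils; big_file and the chunk prefixes are placeholders) computes the halfway point with stat and uses head/tail, which is much faster than dd with bs=1:
size=$(stat -c %s big_file) # total size in bytes
half=$((size / 2))
head -c "$half" big_file | split -a 3 -d -b 200MB - /tmp/chunk_ # first half
# move or delete /tmp/chunk_* to free space, then:
tail -c +"$((half + 1))" big_file | split -a 3 -d -b 200MB - /tmp/chunk2_ # second half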

FIO: Always write error when direct=1 and use libaio in mergerfs

Does Linux FUSE (Filesystem in Userspace) support O_DIRECT? I am using the fio benchmark to test FUSE, but it always shows errors when I use direct I/O.
My machine runs Ubuntu (kernel 4.4.0-38, x86_64) with fio version 2.14. Below is my config file:
[global]
ioengine=libaio
direct=1
time_based
runtime=60
ramp_time=30
size=64g
group_reporting
[S_100RW_1M_R]
rw=read
numjobs=1
iodepth=32
bs=1m
stonewall
[S_100RW_1M_W]
rw=write
numjobs=1
iodepth=32
bs=1m
stonewall
When I execute sudo fio fio.cfg and it finishes,
it shows the result of the sequential read but no result for the sequential write. Instead it shows:
fio: io_u error on file xxxxx : Invalid argument: write offset=0, buflen=1048576
I tried many times and the results are the same, even when I changed the device under test.
Why does this happen?
Thanks a lot.
That suggests that, for whatever reason, the filesystem "liked" the O_DIRECT on the initial open but went on to refuse the request when it came time to do the 1 megabyte O_DIRECT write. I would guess you will get a similar result using dd with oflag=direct bs=1M etc.
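For example, a quick way to exercise the same code path outside of fio (the path below is just a placeholder for a file on your mergerfs mount):
dd if=/dev/zero of=/mnt/mergerfs/direct_test bs=1M count=1 oflag=direct
If this also fails with "Invalid argument", it is the filesystem, not fio, that is rejecting the 1 MiB O_DIRECT write.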

Linux dd create multiple iso files

I want to create an iso from an external hard drive.
I used this command:
sudo dd if=/dev/sdb of=usb-image.iso
It works; however, the disk is large (700 GB), and I don't have space on my laptop to store that much.
I was thinking about creating multiple iso files (each file 5 GB for example), this way, I can manage them by storing some parts on other drives.
Any help?
Thanks
I'd use the split program to split the output from dd into different files. You can adjust the split size as you see fit (look at the 5000m argument):
dd if=/dev/sdb | split -b 5000m - /tmp/output.gz.
This will yield files like /tmp/output.gz.aa, /tmp/output.gz.ab, etc. (the trailing dot on the prefix keeps the names consistent with the restore command below).
Additionally, to save further space, you can gzip the image midstream, like this:
dd if=/dev/sdb | gzip -c | split -b 5000m - /tmp/output.gz.
Later, when you want to restore, do this:
cat /tmp/output.gz.* | gzip -dc | dd of=/dev/sdb
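If you want to verify the image before (or instead of) restoring, one option (assuming sha1sum is available) is to hash the device and the reassembled stream; note that this reads the full 700 GB twice, so it takes a while:
sudo sha1sum /dev/sdb
cat /tmp/output.gz.* | gzip -dc | sha1sum
The two hashes should match.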

Linux: Outputting DD results to a text file

I'm writing a script that tests the read and write speeds of my HDD. I've been able to output the read speeds using the hdparm command. I'm using this line to test the write speeds:
dd if=/dev/zero of=/tmp/test.data bs=1k count=128k
This outputs to the window:
131072+0 records in
131072+0 records out
134217728 bytes (134 MB) copied, 1.18678 s, 113 MB/s
I tried using >> and > to output the results to the text file and these didn't work. Does anyone know how I can output my results to a text file?
They're output to stderr, so try using 2> instead of >
dd if=/dev/zero of=/tmp/test.data bs=1k count=128k 2> output.txt
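If you also want to see the result on screen while saving it, a common variant (assuming a bash-like shell) is to merge stderr into stdout and pipe through tee:
dd if=/dev/zero of=/tmp/test.data bs=1k count=128k 2>&1 | tee output.txt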
dd's good for sequential writes, but also check into iozone and bonnie. Seeks within a track tend to be far faster than seeks from one track to another, so random I/O can be very different from sequential.

How to create a file with a given size in Linux?

For testing purposes I have to generate a file of a certain size (to test an upload limit).
What is a command to create a file of a certain size on Linux?
For small files:
dd if=/dev/zero of=upload_test bs=file_size count=1
Where file_size is the size of your test file in bytes.
For big files:
dd if=/dev/zero of=upload_test bs=1M count=size_in_megabytes
Please, the modern way is easier and faster. On Linux, pick one:
truncate -s 10G foo
fallocate -l 5G bar
It needs to be stated that, on a filesystem that supports sparse files, truncate will create a sparse file and fallocate will not. A sparse file is one where the allocation units that make up the file are not actually allocated until they are used; the file's metadata still takes up some space, but likely nowhere near the apparent size of the file. You should consult resources about sparse files for more information, as there are advantages and disadvantages to this type of file. A non-sparse file has its blocks (allocation units) allocated ahead of time, which means the space is reserved as far as the filesystem is concerned.
Also, neither fallocate nor truncate writes a specified value into the file the way dd does; on Linux filesystems the not-yet-written blocks simply read back as zeros, which may or may not be what you want. dd is the slowest because it actually writes the value or chunk of data through the entire file stream, as specified by its command-line options.
This behavior could potentially be different - depending on file system used and conformance of that file system to any standard or specification. Therefore it is advised that proper research is done to ensure that the appropriate method is used.
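One quick way to see the difference between the two (assuming GNU coreutils) is to compare apparent size with allocated size:
truncate -s 10G foo
fallocate -l 5G bar
du -h --apparent-size foo bar # reported file sizes: 10G and 5.0G
du -h foo bar # space actually allocated: ~0 for the sparse foo, ~5G for bar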
Just to follow up Tom's post, you can use dd to create sparse files as well:
dd if=/dev/zero of=the_file bs=1 count=0 seek=12345
This will create a file with a "hole" in it on most unixes - the data won't actually be written to disk, or take up any space until something other than zero is written into it.
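To confirm the file really is sparse, you can compare the allocated block count with the apparent size:
ls -ls the_file # the first column (allocated blocks) stays at or near 0, while the size column shows 12345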
Use this command:
dd if=$INPUT_FILE of=$OUTPUT_FILE bs=$BLOCK_SIZE count=$NUM_BLOCKS
To create a big (empty) file, set INPUT_FILE=/dev/zero.
The total size of the file will be BLOCK_SIZE * NUM_BLOCKS.
The new file created will be OUTPUT_FILE.
On OSX (and Solaris, apparently), the mkfile command is available as well:
mkfile 10g big_file
This makes a 10 GB file named "big_file".
You can do it programmatically:
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>

/* example size: 10 MiB; replace with whatever size you need */
#define SIZE_IN_BYTES (10 * 1024 * 1024)

int main(void) {
    /* create (or truncate) the file, then extend it to the desired size */
    int fd = creat("/tmp/foo.txt", 0644);
    if (fd < 0)
        return EXIT_FAILURE;
    ftruncate(fd, SIZE_IN_BYTES);
    close(fd);
    return 0;
}
This approach is especially useful to subsequently mmap the file into memory.
Use the following command to check that the file has the correct size:
# du -B1 --apparent-size /tmp/foo.txt
Be careful:
# du /tmp/foo.txt
will probably print 0 because it is allocated as a sparse file, if supported by your filesystem.
see also: man 2 open and man 2 truncate
Some of these answers have you using /dev/zero as the source of your data. If you're testing network upload speeds, this may not be the best idea if your application does any compression: a file full of zeros compresses really well. Using this command to generate the file
dd if=/dev/zero of=upload_test bs=10000 count=1
I could compress upload_test down to about 200 bytes. So you could put yourself in a situation where you think you're uploading a 10KB file, but it would actually be much less.
What I suggest is using /dev/urandom instead of /dev/zero. I couldn't compress the output of /dev/urandom very much at all.
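You can see the difference for yourself (the 10000-byte size is just the example from above):
dd if=/dev/zero of=zero_test bs=10000 count=1
dd if=/dev/urandom of=random_test bs=10000 count=1
gzip -c zero_test | wc -c # a few dozen bytes
gzip -c random_test | wc -c # close to 10000 bytes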
you could do:
[dsm#localhost:~]$ perl -e 'print "\0" x 100' > filename.ext
Where you replace 100 with the number of bytes you want written.
dd if=/dev/zero of=my_file.txt count=12345
Use fallocate if you don't want to wait for disk.
Example:
fallocate -l 100G BigFile
Usage:
fallocate [options] <filename>
Preallocate space to, or deallocate space from a file.
Options:
-c, --collapse-range remove a range from the file
-d, --dig-holes detect zeroes and replace with holes
-i, --insert-range insert a hole at range, shifting existing data
-l, --length <num> length for range operations, in bytes
-n, --keep-size maintain the apparent size of the file
-o, --offset <num> offset for range operations, in bytes
-p, --punch-hole replace a range with a hole (implies -n)
-z, --zero-range zero and ensure allocation of a range
-x, --posix use posix_fallocate(3) instead of fallocate(2)
-v, --verbose verbose mode
-h, --help display this help
-V, --version display version
This will generate a 4 MB text file with random characters in the current directory, named "4mb.txt".
You can change the parameters to generate different sizes and names.
base64 /dev/urandom | head -c 4000000 > 4mb.txt
There are lots of answers, but none explained nicely what else can be done. Looking into the man page for dd, it is possible to specify the size of a file more precisely.
This is going to create /tmp/zero_big_data_file.bin filled with zeros, that has size of 20 megabytes :
dd if=/dev/zero of=/tmp/zero_big_data_file.bin bs=1M count=20
This is going to create /tmp/zero_1000bytes_data_file.bin filled with zeros, that has size of 1000 bytes :
dd if=/dev/zero of=/tmp/zero_1000bytes_data_file.bin bs=1kB count=1
or
dd if=/dev/zero of=/tmp/zero_1000bytes_data_file.bin bs=1000 count=1
In all examples, bs is block size, and count is number of blocks
BLOCKS and BYTES may be followed by the following multiplicative suffixes: c =1, w =2, b =512, kB =1000, K =1024, MB =1000*1000, M =1024*1024, xM =M, GB =1000*1000*1000, G =1024*1024*1024, and so on for T, P, E, Z, Y.
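For example, the difference between the kB (decimal) and K (binary) suffixes is easy to verify (assuming GNU stat; the file names are placeholders):
dd if=/dev/zero of=si_file bs=1kB count=1 # 1000 bytes
dd if=/dev/zero of=bin_file bs=1K count=1 # 1024 bytes
stat -c '%n %s' si_file bin_file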
As shell command:
< /dev/zero head -c 1048576 > output
Kindly run the command below to quickly create multiple large files of a certain size on Linux:
for i in {1..10};do fallocate -l 2G filename$i;done
Explanation: the above command will create 10 files of 2 GB each in just a few seconds.
