How to create a file with ANY given size in Linux?

I have read this question:
How to create a file with a given size in Linux?
But I haven't got an answer to my question.
I want to create a file of 372.07 MB.
I tried the following command in Ubuntu 10.08:
dd if=/dev/zero of=output.dat bs=390143672 count=1
dd: memory exhausted
390143672 = 372.07 * 1024 * 1024
Are there any other methods?
Thanks a lot!
Edit:
How can I view a file's size on the Linux command line with decimals? ls -hl just says '373M', but the file is actually 372.07 MB.
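One way to display the size with decimals (a sketch; output.dat is the file from above) is to take the exact byte count from ls -l and convert it with awk:
$ ls -l output.dat | awk '{printf "%.2f MB\n", $5 / (1024 * 1024)}'
372.07 MB
Alternatively, stat -c %s output.dat prints the exact size in bytes.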

Sparse file
dd of=output.dat bs=1 seek=390143672 count=0
This has the added benefit of creating the file sparse if the underlying filesystem supports that: no space is wasted until some of the blocks actually get written to, and the file creation is extremely quick.
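You can check the sparseness by comparing the apparent size with the actual disk usage (a sketch, assuming the output.dat created above):
$ ls -l output.dat   # apparent size: 390143672 bytes
$ du -h output.dat   # actual usage: 0 for a fully sparse file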
Non-sparse (opaque) file:
Edit: since people have rightly pointed out that sparse files have characteristics that could be disadvantageous in some scenarios, here is the sweet spot:
You could use fallocate (present on Debian via util-linux) instead:
fallocate -l 390143672 output.dat
This still has the benefit of not needing to actually write the blocks, so it is pretty much as quick as creating the sparse file, but it is not sparse: the best of both worlds.
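Unlike the sparse file, the fallocated file has all of its blocks reserved up front, which du confirms (a sketch):
$ fallocate -l 390143672 output.dat
$ du -h output.dat   # about 373M: the blocks are allocated, just never written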

Change your parameters:
dd if=/dev/zero of=output.dat bs=1 count=390143672
otherwise dd tries to allocate a 372 MB buffer in memory.
If you want to do it more efficiently, write the bulk first with large-ish blocks (say 1M), then write the tail with 1-byte blocks, using the seek option to jump to the end of the file first.
For example, to create a file of 1 MiB plus 42 bytes:
dd if=/dev/zero of=./output.dat bs=1M count=1
dd if=/dev/zero of=./output.dat seek=1M bs=1 count=42
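Applied to the original 390143672-byte target, the same two-step approach would be (a sketch; 372 * 1024 * 1024 = 390070272, leaving a 73400-byte tail):
dd if=/dev/zero of=./output.dat bs=1M count=372
dd if=/dev/zero of=./output.dat seek=390070272 bs=1 count=73400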

truncate - shrink or extend the size of a file to the specified size
The following example truncates putty.log from 298 bytes to 235 bytes.
root@ubuntu:~# ls -l putty.log
-rw-r--r-- 1 root root 298 2013-10-11 03:01 putty.log
root@ubuntu:~# truncate putty.log -s 235
root@ubuntu:~# ls -l putty.log
-rw-r--r-- 1 root root 235 2013-10-14 19:07 putty.log
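Since truncate can also extend a file, it can create the 372.07 MB file from the original question directly (a sketch; like the dd seek method, the extension is sparse on filesystems that support it):
truncate -s 390143672 output.dat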

Swap count and bs: dd keeps a buffer of bs bytes in memory, so bs can't be that big.

Related

Linux create swap using dd: swapon failed: Invalid argument

I have a swap file called /dev/dm-1 with 1G size, and I am trying to increase it to 4G using the steps below:
Turn off swap: swapoff /dev/dm-1
Remove the old swap file:
rm /dev/dm-1
Create the swap file using the dd command: dd if=/dev/zero of=/dev/dm-1 count=4096 bs=1MiB status=progress
Restrict permissions: chmod 600 /dev/dm-1
Set up swap space: mkswap /dev/dm-1
Start swap: swapon /dev/dm-1
After starting swap, it shows the error swapon failed: Invalid argument
I am using SMP Debian 4.19.181-1 (2021-03-19) and the filesystem is ext4.
Can someone help?
Thanks, KamilCuk. I created the swap file in the /srv directory and it all works now.
Is /dev/dm-1 really a file? Isn't it a device? It's very odd to create a file in /dev. Do not do it: /dev should be mounted as devtmpfs, so you could basically be creating a swap file in memory and calling it swap... Do not create regular files in /dev. Create the swap file somewhere else, like in /srv. What does stat /dev/dm-1 output? What does findmnt /dev output? – KamilCuk
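For reference, the same steps with the swap file placed in /srv rather than /dev would look like this (a sketch; the path /srv/swapfile is an assumption):
swapoff -a
dd if=/dev/zero of=/srv/swapfile bs=1MiB count=4096 status=progress
chmod 600 /srv/swapfile
mkswap /srv/swapfile
swapon /srv/swapfile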

How can I make a file with a specific size in a short time?

When I use truncate -s to make a file with a 10 gigabyte size, I must wait at least 2 minutes until it's made.
Is there any Linux function or bash command to rapidly make a 10 GB file?
Have a look at fallocate; it can be used to allocate files of arbitrary sizes very quickly:
$ fallocate -l 10G ./largefile
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 11:29 largefile
Another method, a bit older but supported even where fallocate fails, is to use dd with seek (640K blocks of 16384 bytes is exactly 10 GiB):
$ dd if=/dev/zero of=largefile bs=16384 count=0 seek=640K
0+0 records in
0+0 records out
0 bytes copied, 0.00393638 s, 0.0kB/s
$ ls -lh ./largefile
-rw-r--r-- 1 user group 10G Nov 18 12:00 largefile
I found a way to create a file with an arbitrary size:
dd if=/dev/zero of=output.dat bs=10G seek=1 count=1
Thanks for helping ("th3ant" and "Basile Starynkevitch").
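Note that bs=10G makes dd allocate a 10 GiB buffer in memory, and combined with seek=1 it actually produces a 20 GiB file. A lower-memory variant of the same idea (a sketch) writes a single 1 MiB block just before the 10 GiB mark:
dd if=/dev/zero of=output.dat bs=1M seek=10239 count=1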

SD card benchmarking using the iozone tool

I am trying to measure the performance of an SD card mounted on my board, and I am using the iozone tool to do that, but I am getting strange results:
command:
# mount /dev/mmcblk2p2 /mnt/SD
# cd /mnt/SD
# iozone -a -s 10M -r 5K -w -e
results:
random random bkwd record stride
KB reclen write rewrite read reread read write read rewrite read fwrite frewrite fread freread
10240 5 4283 4136 68681 378738 337652 3871 133905 96074 216912 4122 5013 364024 376181
The results are in KBytes/s, so does that mean the random read speed is over 300 MB/s??
My card is Class 4; normally the write speed is 4 MB/s, and the read speed is not very different from this value??
Yes, your results are in kilobytes/s (KB/s; run iozone without a silent option and it will print "Output is in kBytes/sec"), and yes, that was 380 MB/s for the "reread" speed (and 200 MB/s for read after reread?). But reread may not be the speed of your block device (SD card/HDD/SSD) if your test set (10 MB) is smaller than your RAM (and it is).
Most OSes (Linux included) have a software cache in RAM for filesystems and block devices. When you access some block for the first time (since boot), it will be read from the device and stored in the OS page cache. The next access (read) of this block will be served directly from RAM, not from the device itself (unless the O_DIRECT option was used for the I/O operation; that's the -I option of iozone).
So, your test run is incorrect. Read the iozone man page before use: http://linux.die.net/man/1/iozone and try a bigger test set (gigabytes), or use -I to bypass the page cache.
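A corrected run along those lines might look like this (a sketch; -I requests O_DIRECT so reads bypass the page cache, and the 1 GB test set is more likely to exceed the board's RAM):
# iozone -a -s 1g -r 4k -I -e -w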
Here are the results when I use the -I option:
random random bkwd record stride
KB reclen write rewrite read reread read write read rewrite read fwrite frewrite fread freread
10240 1024 2356 2950 19693 20865 20833 2095 20111 1734 14375 2875 3566 386809 389443
write seq: 2.3 MB/s
read seq: 19.2 MB/s
write rand: 2 MB/s
read rand: 20 MB/s
read blk: 20 MB/s
Why is the read speed still so high?

How to create a loop partition from an already existing partition

I believe /images/backups is using the space in /images?
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       820G  645G  135G  83% /images
/dev/loop0      296G  296G     0 100% /images/backups
I have a similar kind of partition on another machine, /images, which has 500G free, and I want to take out 350G of it for /images/backups. How do I do that?
Is it right that a simple loop mount can give a specified amount of space, or should we create a null file of the required size and mount it? If so, what mount options should be used to specify the size?
You'll need to create the destination with a fixed size, but can use a "sparse file" which doesn't actually have any blocks written to it yet (and which thus doesn't actually consume space until you write to it).
For instance:
dd if=/dev/zero of=file.img bs=1 count=0 seek=20G
will create a sparse file with an apparent size of 20GB. That said, actually writing 20GB of zeros to disk up-front (making the file non-sparse) will be faster on writes and lead to less fragmentation.
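If you do want the non-sparse version, one way (a sketch) is to actually write the zeros in large blocks:
dd if=/dev/zero of=file.img bs=1M count=20480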
The file can then be attached to a loopback device with the losetup command, given a filesystem, and mounted:
losetup /dev/loop1 file.img
mke2fs -j /dev/loop1
mount /dev/loop1 /mnt/somewhere
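To take out 350G for /images/backups as asked, the same recipe applies with seek=350G in the dd step. To undo the setup later, unmount and free the loop device (a sketch):
umount /mnt/somewhere
losetup -d /dev/loop1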
If you want to know whether an existing file is sparse, the following will do the trick (on a system with GNU tools; some of the below is not supported in a pure POSIX environment):
{
  # %b = number of blocks allocated, %B = size in bytes of each block, %s = apparent file size
  read block_count block_size file_size
  if (( block_count * block_size < file_size )); then
    echo "Sparse"
  else
    echo "Non-Sparse"
  fi
} < <(stat --format='%b %B %s' /images/backups.img)

How to compress or write zeros (/dev/zero) to a swap file?

We have a few Linux-based (CentOS) virtual machines which are to be used as distributable virtual appliances. We want to be able to compress them as much as possible for distribution (via tar.gz, zip, etc.).
We've removed all unnecessary files (logs, /tmp/*, /var/log/, etc.) and have written /dev/zero to the free space on the disk.
Is it possible to write zeros via /dev/zero to the swap partitions and files? I know I would need to swapoff -a first. I'm worried about corrupting any internal structures.
Our VM uses both partition swap and file swap.
Also, are there any other strategies for reducing the size of a VM for distribution?
We need to support all of the hypervisor technologies (Xen, VMware, etc.), so although the vendors' tools may be useful, I'm looking for strategies that are cross-platform.
Thanks
You may want to write zeroes and then use mkswap to create an empty swap partition.
$ dd if=/dev/zero of=/path/to/file bs=512 count=1
Adjust the size to what you want your files to be.
sudo swapoff -v /dev/sda2 <== the swap partition
sudo dd if=/dev/zero of=/dev/sda2 bs=512 status=progress
sudo mkswap -U 46c1a133-bfdd-4695-a484-08fcf8286896 /dev/sda2 <== recreate with the original UUID of the swap partition
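For the file-based swap the question also mentions, the equivalent sequence looks like this (a sketch; the path /swapfile and the 1 GiB size are assumptions, so adjust to your setup):
sudo swapoff /swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 status=progress
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile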
