I have a very specific need:
to partially replace the content of a flash and to move MTD partition boundaries.
Current map is:
u-boot 0x000000 0x040000
u-boot-env 0x040000 0x010000
kernel 0x050000 0x230000
initrd 0x280000 0x170000
scripts 0x3f0000 0x010000
filesystem 0x400000 0xbf0000
firmware 0xff0000 0x010000
While desired output is:
u-boot 0x000000 0x040000
u-boot-env 0x040000 0x010000
kernel 0x050000 0x230000
filesystem 0x280000 0xd70000
firmware 0xff0000 0x010000
This means collapsing initrd, scripts and filesystem into a single area while leaving the others alone.
The problem is that this should be achieved from the running system (booted with the "old" configuration), and I should rewrite the kernel and the "new" filesystem before rebooting.
The system is an embedded one, so I have little room for maneuver (I do have an SD card, though).
Of course the rewritten kernel will have the "new" configuration written in its DTB.
The problem is the transition.
Note: I have seen this Question, but it is very old and has the drawback of needing kernel patches, which I would like to avoid.
NOTE 2: this question has been flagged for deletion because it is "not about programming". I beg to disagree: I need to perform said operation on ~14k devices, most of them already sold to customers, so any workable solution should involve, at the very least, scripting.
NOTE 3: if absolutely necessary I can even consider (small) kernel modifications (YES, I have the means to update the kernel remotely).
I will leave the Accepted answer as-is but, for anyone who happens to come here looking for a solution, I want to point out that:
Recent (less than 4 years old) mtd-utils, coupled with 4.0+ kernel support, provide:
Definition of a "master" device (an MTD device representing the full, unpartitioned flash). This is a kernel option.
mtd-utils has a specific mtd-part utility that can add/delete MTD partitions dynamically. NOTE: this utility works if (and only if) the above is enabled in the kernel.
With the above utility it's possible to build multiple, possibly overlapping partitions; use with care!
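For reference, a rough sketch of how that can look. The offsets and sizes are taken from the map above; I am assuming the master device shows up as /dev/mtd0 and the add/del syntax of recent mtd-utils (where the tool is installed as mtdpart) -- verify partition numbering and argument order against your version before trusting it:
# delete the three old partitions on the master device (highest number first,
# in case deletion renumbers the remaining ones)
mtdpart del /dev/mtd0 5   # filesystem
mtdpart del /dev/mtd0 4   # scripts
mtdpart del /dev/mtd0 3   # initrd
# add one combined partition: <master> <name> <offset> <size> (0x280000 and 0xd70000 in decimal)
mtdpart add /dev/mtd0 filesystem 2621440 14090240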
I have three ideas/suggestions:
Instead of moving the partitions, can you just split the "new" filesystem image into chunks and write them to the corresponding "old" MTD partitions? This way you don't really need to change the MTD partition map. After booting into the new kernel, it will see the new contiguous root filesystem. For a JFFS2 filesystem, it should be fairly straightforward to do using split or dd, flash_erase and nandwrite. Something like:
# WARNING: this script assumes that it runs from tmpfs and the old root filesystem is already unmounted.
# Also, it assumes that your shell has arithmetic evaluation, which handles hex (my busybox 1.29 ash does this).
# assuming newrootfs.img is the image of new rootfs
new_rootfs_img="newrootfs.img"
mtd_initrd="/dev/mtd3"
mtd_initrd_size=0x170000
mtd_scripts="/dev/mtd4"
mtd_scripts_size=0x010000
mtd_filesystem="/dev/mtd5"
mtd_filesystem_size=0xbf0000
# prepare chunks of new filesystem image
bs="0x1000"
# note: using arithmetic evaluation $(()) to convert from hex and do the math.
# dd doesn't handle hex numbers ("dd: invalid number '0x1000'") -- $(()) works this around
dd if="${new_rootfs_img}" of="rootfs_initrd" bs=$(( bs )) count=$(( mtd_initrd_size / bs ))
dd if="${new_rootfs_img}" of="rootfs_scripts" bs=$(( bs )) count=$(( mtd_scripts_size / bs )) skip=$(( mtd_initrd_size / bs ))
dd if="${new_rootfs_img}" of="rootfs_filesystem" bs=$(( bs )) count=$(( mtd_filesystem_size / bs )) skip=$(( ( mtd_initrd_size + mtd_scripts_size ) / bs ))
# there's no going back after this point
flash_eraseall -j "${mtd_initrd}"
flash_eraseall -j "${mtd_scripts}"
flash_eraseall -j "${mtd_filesystem}"
nandwrite -p "${mtd_initrd}" rootfs_initrd
nandwrite -p "${mtd_scripts}" rootfs_scripts
nandwrite -p "${mtd_filesystem}" rootfs_filesystem
# don't forget to update the kernel too
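# e.g. something like the following (assumption: the kernel lives in /dev/mtd2 as in
# the map above, and newkernel.img is the image whose DTB describes the new layout):
#   flash_eraseall /dev/mtd2
#   nandwrite -p /dev/mtd2 newkernel.img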
There is kernel support for concatenating MTD devices (which is exactly what you're trying to do). I don't see an easy way to use it, but you could create a kernel module, which concatenates the desired partitions for you into a contiguous MTD device.
In order to combine the 3 MTD partitions into one to write the new filesystem, you could create a dm-linear mapping over the 3 mtdblocks, and then turn it back into an MTD device using block2mtd. (i.e. mtdblock + device mapper linear + block2mtd) But it looks very awkward and I don't know if it'll work well (for say, OOB data).
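For what it's worth, a minimal sketch of that idea (untested; sizes are in 512-byte sectors computed from the partition map above, mtdblock3/4/5 correspond to initrd/scripts/filesystem, and the 128KiB erase size is an assumption that must match the actual flash):
# 0x170000 B = 2944 sectors, 0x010000 B = 128 sectors, 0xbf0000 B = 24448 sectors
dmsetup create newfs <<'EOF'
0 2944 linear /dev/mtdblock3 0
2944 128 linear /dev/mtdblock4 0
3072 24448 linear /dev/mtdblock5 0
EOF
# turn the combined block device back into an MTD device
modprobe block2mtd block2mtd=/dev/mapper/newfs,128KiB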
EDIT1: added a comment explaining use of $(( bs )) -- to convert from hex as dd doesn't handle hex numbers directly (neither coreutils, nor busybox dd).
AFAIK, suggestion 1 in @andrey's answer is wrong.
An MTD partition is made of a sequence of blocks, any of which could be bad or go bad at any time. This is why the simple MTD char abstraction exists: an MTD char device (not the mtdblock one) is read sequentially and skips bad blocks. nandwrite also writes sequentially and skips bad blocks.
An MTD char device acts somewhat like:
a single file in which you cannot do random access, from which you can only read sequentially from the beginning to the end (or to wherever you get bored);
a single file in which you cannot do random access, to which you can only write sequentially from the beginning (or from an erase block where you previously stopped reading) all the way to the end. (That is, you can truncate and append, but you cannot write mid-file.) To write, you need to previously erase all erase blocks from where you start writing to the end of the partition.
This means that the partition size is the maximum theoretical capacity, but typically the capacity will be less due to bad blocks, and it can be effectively reduced every time you rewrite the partition. You can never expect to write the full size of an MTD partition.
This is where @andrey's suggestion 1 is wrong: it breaks up the file to be written into max-sized pieces before writing each piece, but you never know beforehand how much data will fit into an MTD partition without actually writing that data.
Instead, you typically need to write some data and pray there will be enough good blocks to fit it. If at some point there are not, the write fails and the device has reached end-of-life. Needless to say, the larger the fraction of a partition you need, the higher the likelihood that the write will fail (and when that happens, it typically means the device is toast).
To actually implement something akin to suggestion 1, you need to start writing into a partition (skipping bad blocks), and when you run out of erase blocks, you continue writing into the next partition, and so on. The point being: you cannot know where the data boundaries will lie until you actually write the data and fill each partition; there is no other way.
I have been working on some programs that require data to be written/stored onto SDHC cards, a few MB in size; SanDisk Class 4 SDHC and SanDisk Class 10 SDHC 16 GB cards in particular.
The results I have observed seem rather strange: the write speeds of the Class 4 cards vs. the Class 10 cards.
Commands used:
I have used the dd command to write the data; something like:
dd if=file_10mb.img of=/dev/sdc conv=fsync bs=4096 count=2560
Measured the write speeds by:
iostat /dev/sdc 1 -m -t
Few figures:
Writing a 100 MB file:
On the Class 10 card: 53 secs -> avg. write speed = 2.03 MB_wrtn/sec
On the Class 4 card: 31 secs -> avg. write speed = 2.62 MB_wrtn/sec
Writing a 10 MB file:
On the Class 10 card: 5.7 secs -> max. & min. write speed = 1.85 & 1.15 MB_wrtn/sec
On the Class 4 card: 4 secs -> max. & min. write speed = 2.56 & 1.15 MB_wrtn/sec
I expected these results to be exactly the opposite, as Class 10 cards should outperform Class 4 cards.
I've tested this on two different cards to rule out wrong readings due to aged cards. Also, the cards are fairly new.
Please let me know what could explain this strange behaviour. Thanks in advance.
A brief internet search led me to this page: https://www.raspberrypi.org/forums/viewtopic.php?t=11258&p=123670
which talks about "erase blocks", the size of an "erase" operation; this erase block is generally bigger than a sector, which is the minimum size for a write operation. That page shows some examples:
16 GB SanDisk Extreme Pro: erase block size of 4 MB.
8 GB Transcend SDHC 150x: erase block size of 4 MB.
2 GB Transcend SD 150x: erase block size of 8 kB.
Now, the conv=fsync option passed to dd means that the data and metadata are flushed to the device before dd finishes, which could involve rewriting part of the FAT, or some other blocks if no FAT is used.
On a classic spinning magnetic disk, writing in 4 KB chunks would mean that the head travels a lot; on a flash memory there is no head, but an erase operation is very costly. Moreover, flash memories have internal wear-levelling algorithms, so it becomes very difficult to know what really goes on underneath, inside the memory card.
The conclusion is that, as noted in a comment, a 4 KB block size can be too small, and the fsync option slows things down and can be very problematic. Get rid of the fsync option, and run the tests again with different block sizes.
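For example, something along these lines lets you compare block sizes directly, since dd prints its own throughput summary on stderr (the image name is made up, and writing raw to /dev/sdc destroys whatever is on the card):
for bs in 4K 64K 1M 4M; do
    echo "block size: $bs"
    # oflag=direct bypasses the page cache, so the card itself is measured
    dd if=file_100mb.img of=/dev/sdc bs=$bs oflag=direct 2>&1 | tail -n 1
done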
In reality, every card probably has its own preferred set of parameters. One way Class 10 cards can be faster is by choosing a big erase block: the time for erasing a block is more or less independent of its size, so a really big erase block effectively improves speed by erasing more data in the same time. But if blocks are erased too often, speed is reduced instead.
The final answer, from inference, is that your set of parameters seems better suited to a Class 4 card than to a Class 10 card. In my opinion, your parameters are not well suited to anything, but nobody can be perfectly sure: flash memory cards are intricate. For example, I often record TV transmissions on my TV decoder; there are periods when things go smoothly, and other periods when they don't. Four months ago the decoder was often complaining about "slow writing speed", with horrible results. For the past couple of months everything has been fine. I touched nothing, and the USB flash memory is the same. Probably it entered another phase of its life...
I typically end up in the following situation: I have, say, a 650 MB MPEG-2 .avi video file from a camera. Then, I use ffmpeg2theora to convert it into Theora .ogv video file, say some 150 MB in size. Finally, I want to upload this .ogv file to an ssh server.
Let's say, the ffmpeg2theora encoding process takes some 15 minutes on my PC. On the other hand, the upload goes on with a speed of about 60 KB/s, which takes some 45 minutes (for the 150MB .ogv). So: if I first encode, and wait for the encoding process to finish - and then upload, it would take approximately
15 min + 45 min = 1 hr
to complete the operation.
So, I thought it would be better if I could somehow start the upload, in parallel with the encoding operation; then, in principle - as the uploading process is slower (in terms of transferred bytes/sec) than the encoding one (in terms of generated bytes/sec) - the uploading process would always "trail behind" the encoding one, and so the whole operation (enc+upl) would complete in just 45 minutes (that is, just the time of the upload process +/- some minutes depending on actual upload speed situation on wire).
My first idea was to pipe the output of ffmpeg2theora to tee (so as to keep a local copy of the .ogv) and then, pipe the output further to ssh - as in:
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 -o /dev/stdout MVI.AVI | tee MVI.ogv | ssh user@ssh.server.com "cat > ~/myvids/MVI.ogv"
While this command does, indeed, function - one can easily see in the running log from ffmpeg2theora in the terminal that, in this case, ffmpeg2theora calculates a predicted time of completion of 1 hour; that is, there seems to be no benefit in terms of a smaller completion time for enc+upl. (While it is possible that this is due to network congestion, and me getting less network speed at the time - it seems to me that ffmpeg2theora has to wait for an acknowledgment for each little chunk of data it sends through the pipe, and that ACK finally has to come from ssh... Otherwise, ffmpeg2theora would not have been able to provide a completion time estimate. Then again, maybe the estimate is wrong and the operation would indeed complete in 45 mins - dunno, I never had the patience to wait and time the process; I just get pissed at the 1 hr estimate and hit Ctrl-C ;) ...)
My second attempt was to run the encoding process in one terminal window, i.e.:
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 MVI.AVI # MVI.ogv is auto name for output
..., and the uploading process, using scp, in another terminal window (thereby 'forcing' 'parallelization'):
scp MVI.ogv user@ssh.server.com:~/myvids/
The problem here is: let's say, at the time when scp starts, ffmpeg2theora has already encoded 5 MB of the output .ogv file. At this time, scp sees this 5 MB as the entire file size, and starts uploading - and it exits when it encounters the 5 MB mark; while in the meantime, ffmpeg2theora may have produced additional 15 MB, making the .ogv file 20 MB in total size at the time scp has exited (finishing the transfer of the first 5 MB).
Then I learned (joen.dk » Tip: scp Resume) that rsync supports 'resume' of partially completed uploads, as in:
rsync --partial --progress myFile remoteMachine:dirToPutIn/
..., so I tried using rsync instead of scp - but it seems to behave exactly the same as scp in terms of file size, that is: it will only transfer up to the file size read at the beginning of the process, and then it will exit.
So, my question to the community is: Is there a way to parallelize the encoding and uploading process, so as to gain the decrease in total processing time?
I'm guessing there could be several ways, as in:
A command line option (that I haven't seen) that forces scp/rsync to continuously check the file size while it is open for writing by another process - then I could simply run the upload in another terminal window
A bash script, say running rsync --partial in a while loop that runs as long as the .ogv file is open for writing by another process (a rough sketch follows after this list). I don't actually like this solution, since I can hear the hard disk scanning for the resume point every time I run rsync --partial - which, I guess, cannot be good, given that I know the same file is being written to at the same time
A different tool (other than scp/rsync) that does support upload of a "currently generated"/"unfinished" file (the assumption being it can handle only growing files; it would exit if it encounters that the local file is suddenly less in size than the bytes already transferred)
... but it could also be that I'm overlooking something - and 1 hr is as good as it gets (in other words, maybe it is logically impossible to achieve a 45 min total time, even when parallelizing) :)
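To illustrate the second idea, here is a rough (untested) sketch of what I have in mind; it relies on rsync --append, which should be safe here since the local file only ever grows:
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 MVI.AVI &   # encoder writes MVI.ogv
ENC_PID=$!
while kill -0 "$ENC_PID" 2>/dev/null; do               # while the encoder is still running
    rsync --partial --append --progress MVI.ogv user@ssh.server.com:~/myvids/
    sleep 10
done
rsync --partial --append --progress MVI.ogv user@ssh.server.com:~/myvids/   # final pass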
Well, I look forward to comments that would, hopefully, clarify this for me ;)
Thanks in advance,
Cheers!
Maybe you can try sshfs (http://fuse.sourceforge.net/sshfs.html). Being a file system, it should have some optimizations, though I am not very sure.
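For example (mount point and paths are made up; I have not measured whether this actually overlaps encoding and upload any better than the pipe):
mkdir -p ~/remote_vids
sshfs user@ssh.server.com:myvids ~/remote_vids
# tee keeps the local copy on stdout while writing the remote copy through sshfs
./ffmpeg2theora-0.27.linux32.bin -v 8 -a 3 -o /dev/stdout MVI.AVI | tee ~/remote_vids/MVI.ogv > MVI.ogv
fusermount -u ~/remote_vids   # unmount when done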
I have a script that creates a file system in a file on a Linux machine. I see that, to create the file system, it uses dd with the bs=x option, reading from /dev/zero and writing to a file. I think specifying ibs/obs/bs is usually useful when reading from real hardware devices, as they have specific block-size constraints. In this case, however, since it is reading from a virtual device and writing to a file, I don't see any point in using the bs=x option. Is my understanding wrong here?
(Just in case it helps, this file system is later used to boot a qemu VM.)
To understand block sizes, you have to be familiar with tape drives. If you're not interested in tape drives - for example, you don't think you're ever going to use one - then you can go back to sleep now.
Remember the tape drives from films in the 60s, 70s, maybe even 80s? The ones where the reel went spinning around, and so on? Not your Exabyte or even QIC - quarter-inch cartridge - tapes; your good old fashioned reel-to-reel half-inch tape drives? On those, block size mattered.
The data on a tape was written in blocks. Each block was separated from the next by an inter-record gap.
----+-------+-----+-------+-----+----
... | block | IRG | block | IRG | ...
----+-------+-----+-------+-----+----
Depending on the tape drive hardware and software, there were a variety of problems that could happen. For example, if the tape was written with a block size of 5120 bytes and you read the tape with a block size of 512 bytes, then the tape drive might read the first block, return you 512 bytes of it, and then discard the remaining data; the next read would start on the next block. Conversely, if the tape was written with a block size of 512 bytes and you requested blocks of 5120 bytes, you would get short reads; each read would return just 512 bytes, and if your software wasn't paying attention, you'd be reading garbage. There was also the issue that the tape drive had to get up to speed to read the block, and then slow down. The ASCII art suggests that the IRG was smaller than the data blocks; that was not necessarily the case. And it took time to read one block, overshoot the IRG, rewind backwards to get to the next block, and start forwards again. And if the tape drive didn't have the memory to buffer data - the cheaper ones did not - then you could seriously affect your tape drive performance.
War story: work prepared on newer machine with a slightly more modern tape drive. I wrote a tape using tar without a sensible block size (so it defaulted to 512 bytes). It was a large bit of software - must have been, oh, less than 100 MB in total (a long time ago, in other words). The tape wrote nicely because the machine was modern enough, and it took just a few seconds to do so. But, I had to get the material off the tape on a machine with an older tape drive, one that did not have any on-board buffer. So, it read the material, 512 bytes at a time, and the reel rocked forward, reading one block, and then rocked back all but maybe half an inch, and then read forwards to get to the next block, and then rocked back, and ... well, you could see it doing this, and since it took appreciable portions of a second to read each 512 byte block, the total time taken was horrendous. My flight was due to leave...and I needed to get that data across too. (It was long enough ago, and in a land far enough away, that last minute changes to flights weren't much of an option either.) To cut a long story short, it did get read - but if I'd used a sensible block size (such as 5120 bytes instead of the default of 512), I would have been done much, much quicker and with much less danger of missing the plane (but I did actually catch the plane, with maybe 20 minutes to spare, despite a taxi ride across Paris in the rush hour).
With more modern tape drives, there was enough memory on the drive to do buffering and getting a tape drive to stream - write continuously without reversing - was feasible. It used to be that I'd use a block size like 256 KB to get QIC tapes to stream. I've not done much with tape drives recently - let's see, not this millennium and not much for a few years before that, either; certainly not much since CD and DVD became the software distribution mechanisms of choice (when electronic download wasn't used).
But the block size really did matter in the old days. And dd provided good support for it. You could even transfer data from a tape drive that was written with, say, 4 KB block to another that you wanted to write with, say, 16 KB blocks, by specifying the ibs (input block size) separately from the obs (output block size). Darned useful!
Also, the count parameter is in terms of the (input) block size. It was useful to say 'dd bs=1024 count=1024 if=/dev/zero of=/my/file/of/zeroes' to copy 1 MB of zeroes around. Or to copy 1 MB of a file.
The importance of dd is vastly diminished; it was an essential part of the armoury for anybody who worked with tape drives a decade or more ago.
The block size is the number of bytes that are read and written at a time. Presumably there is a count= option, and that is specified in units of the block size. If there is a skip= or seek= option, those will also be in block-size units. If you are reading and writing a regular file, and there are no disk errors, then the block size doesn't really matter as long as you can scale those parameters accordingly and they are still integers. That said, certain sizes may be more efficient than others.
For reading from /dev/zero, it doesn't matter. ibs/obs/bs specify how many bytes will be read at a time. It's helpful to choose a number based on the way bytes are read/written in the operating system. For instance, Linux usually reads from a hard drive in 4096 byte chunks. If you have at least some idea about how the underlying hardware reads/writes, it might be a good idea to specify ibs/obs/bs. By the way, if you specify bs, it will override whatever you specify for ibs and obs.
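For instance, both of the following produce the same 100 MiB file, but the first issues 204800 small writes while the second issues only 100 large ones (the file name is arbitrary):
time dd if=/dev/zero of=fs.img bs=512 count=204800   # 204800 writes of 512 bytes
time dd if=/dev/zero of=fs.img bs=1M count=100       # 100 writes of 1 MiB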
In addition to the great answer by Jonathan Leffler, keep in mind that the bs= option isn't always a substitute for using both ibs= and obs=, in particular for the old [ugly] days of tape drives.
The bs= option reserves the right for dd to write the data as soon as it's read. This can cause you to no longer have identically sized blocks on the output. Here is GNU's take on this, but the behavior dates back as far as I can remember (80's):
(bs=) Set both input and output block sizes to bytes. This makes dd read and write bytes per block, overriding any ‘ibs’ and ‘obs’ settings. In addition, if no data-transforming conv option is specified, input is copied to the output as soon as it’s read, even if it is smaller than the block size.
For instance, back in the QIC days on an old Sun system, if you did this:
tar cvf /dev/rst0c /bla
It would work, but cause an enormous amount of back and forth thrashing while the drive wrote a small block, and tried to backup and read to reposition itself properly for the next write.
If you swapped this with:
tar cvf - /bla | dd ibs=16K obs=16K of=/dev/rst0c
You'd get the QIC drive writing much larger chunks and not thrashing quite so much.
However, if you made the mistake of this:
tar cvf - /bla | dd bs=16K of=/dev/rst0c
You'd run the risk of having precisely the same thrashing you had before depending upon how much data was available at the time of each read.
Specifying both ibs= and obs= precludes this from happening.
How can I quickly create a large file on a Linux (Red Hat Linux) system?
dd will do the job, but reading from /dev/zero and writing to the drive can take a long time when you need a file several hundreds of GBs in size for testing... If you need to do that repeatedly, the time really adds up.
I don't care about the contents of the file, I just want it to be created quickly. How can this be done?
Using a sparse file won't work for this. I need the file to be allocated disk space.
dd from the other answers is a good solution, but it is slow for this purpose. In Linux (and other POSIX systems), we have fallocate, which reserves the desired space without actually having to write to it; it works with most modern disk-based file systems and is very fast:
For example:
fallocate -l 10G gentoo_root.img
This is a common question -- especially in today's world of virtual environments. Unfortunately, the answer is not as straightforward as one might assume.
dd is the obvious first choice, but dd is essentially a copy, and that forces you to write every block of data (thus initializing the file contents)... And that initialization is what takes up so much I/O time. (Want to make it take even longer? Use /dev/random instead of /dev/zero! Then you'll use CPU as well as I/O time!) In the end though, dd is a poor choice (though it is essentially the default used by the VM "create" GUIs). E.g.:
dd if=/dev/zero of=./gentoo_root.img bs=4k iflag=fullblock,count_bytes count=10G
truncate is another choice -- and is likely the fastest... But that is because it creates a "sparse file". Essentially, a sparse file is a section of disk that has a lot of the same data, and the underlying filesystem "cheats" by not really storing all of the data, but just "pretending" that it's all there. Thus, when you use truncate to create a 20 GB drive for your VM, the filesystem doesn't actually allocate 20 GB, but it cheats and says that there are 20 GB of zeros there, even though as little as one track on the disk may actually (really) be in use. E.g.:
truncate -s 10G gentoo_root.img
fallocate is the final -- and best -- choice for use with VM disk allocation, because it essentially "reserves" (or "allocates") all of the space you're seeking, but it doesn't bother to write anything. So, when you use fallocate to create a 20 GB virtual drive space, you really do get a 20 GB file (not a "sparse file"), and you won't have bothered to write anything to it -- which means virtually anything could be in there -- kind of like a brand new disk! E.g.:
fallocate -l 10G gentoo_root.img
Linux & all filesystems
xfs_mkfile 10240m 10Gigfile
Linux & some filesystems (ext4, xfs, btrfs and ocfs2)
fallocate -l 10G 10Gigfile
OS X, Solaris, SunOS and probably other UNIXes
mkfile 10240m 10Gigfile
HP-UX
prealloc 10Gigfile 10737418240
Explanation
Try mkfile <size> myfile as an alternative to dd. With the -n option the size is noted, but disk blocks aren't allocated until data is written to them. Without the -n option, the space is zero-filled, which means writing to the disk, which means taking time.
mkfile is derived from SunOS and is not available everywhere. Most Linux systems have xfs_mkfile, which works exactly the same way, and not just on XFS file systems despite the name. It's included in xfsprogs (on Debian/Ubuntu) or similarly named packages.
Most Linux systems also have fallocate, which only works on certain file systems (such as btrfs, ext4, ocfs2, and xfs), but is the fastest, as it allocates all the file space (creates non-holey files) but does not initialize any of it.
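For example (10 GB chosen arbitrarily; each command only where it exists, as described above):
mkfile -n 10g bigfile        # SunOS/Solaris/OS X: size recorded, blocks allocated lazily
xfs_mkfile 10240m bigfile    # Linux (xfsprogs): zero-filled, works on any filesystem
fallocate -l 10G bigfile     # Linux ext4/xfs/btrfs/ocfs2: space allocated but not written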
truncate -s 10M output.file
will create a 10 M file instantaneously (M stands for 1024*1024 bytes, MB stands for 1000*1000 - same with K, KB, G, GB...)
EDIT: as many have pointed out, this will not physically allocate the file on your device. With this you could actually create an arbitrary large file, regardless of the available space on the device, as it creates a "sparse" file.
For example, notice that no HDD space is consumed by this command:
### BEFORE
$ df -h | grep lvm
/dev/mapper/lvm--raid0-lvm0
7.2T 6.6T 232G 97% /export/lvm-raid0
$ truncate -s 500M 500MB.file
### AFTER
$ df -h | grep lvm
/dev/mapper/lvm--raid0-lvm0
7.2T 6.6T 232G 97% /export/lvm-raid0
So, when doing this, you will be deferring physical allocation until the file is accessed. If you're mapping this file to memory, you may not have the expected performance.
But this is still a useful command to know. For example, when benchmarking transfers using files, the specified size of data will still get moved.
$ rsync -aHAxvP --numeric-ids --delete --info=progress2 \
root@mulder.bub.lan:/export/lvm-raid0/500MB.file \
/export/raid1/
receiving incremental file list
500MB.file
524,288,000 100% 41.40MB/s 0:00:12 (xfr#1, to-chk=0/1)
sent 30 bytes received 524,352,082 bytes 38,840,897.19 bytes/sec
total size is 524,288,000 speedup is 1.00
Where seek is the size of the file you want in bytes minus 1:
dd if=/dev/zero of=filename bs=1 count=1 seek=1048575
Examples where seek is the size of the file you want in bytes (with count=0, nothing is written and the file is simply truncated/extended to that size):
#kilobytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200K
#megabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200M
#gigabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200G
#terabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200T
From the dd manpage:
BLOCKS and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.
To make a 1 GB file:
dd if=/dev/zero of=filename bs=1G count=1
I don't know a whole lot about Linux, but here's the C Code I wrote to fake huge files on DC Share many years ago.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int i;
    FILE *fp;

    fp = fopen("bigfakefile.txt", "w");
    for (i = 0; i < (1024*1024); i++) {
        /* jump ahead ~1 MiB, then write one byte: the result is a huge sparse file */
        fseek(fp, (1024*1024), SEEK_CUR);
        fprintf(fp, "C");
    }
    fclose(fp);
    return 0;
}
You can use "yes" command also. The syntax is fairly simple:
#yes >> myfile
Press "Ctrl + C" to stop this, else it will eat up all your space available.
To clean this file run:
#>myfile
will clean this file.
I don't think you're going to get much faster than dd. The bottleneck is the disk; writing hundreds of GB of data to it is going to take a long time no matter how you do it.
But here's a possibility that might work for your application. If you don't care about the contents of the file, how about creating a "virtual" file whose contents are the dynamic output of a program? Instead of open()ing the file, use popen() to open a pipe to an external program. The external program generates data whenever it's needed. Once the pipe is open, it acts just like a regular file in that the program that opened the pipe can fseek(), rewind(), etc. You'll need to use pclose() instead of close() when you're done with the pipe.
If your application needs the file to be a certain size, it will be up to the external program to keep track of where in the "file" it is and send an eof when the "end" has been reached.
One approach: if you can guarantee unrelated applications won't use the files in a conflicting manner, just create a pool of files of varying sizes in a specific directory, then create links to them when needed.
For example, have a pool of files called:
/home/bigfiles/512M-A
/home/bigfiles/512M-B
/home/bigfiles/1024M-A
/home/bigfiles/1024M-B
Then, if you have an application that needs a 1G file called /home/oracle/logfile, execute a "ln /home/bigfiles/1024M-A /home/oracle/logfile".
If it's on a separate filesystem, you will have to use a symbolic link.
The A/B/etc files can be used to ensure there's no conflicting use between unrelated applications.
The link operation is about as fast as you can get.
The GPL mkfile is just a (ba)sh script wrapper around dd; BSD's mkfile just memsets a buffer with non-zero and writes it repeatedly. I would not expect the former to out-perform dd. The latter might edge out dd if=/dev/zero slightly since it omits the reads, but anything that does significantly better is probably just creating a sparse file.
Absent a system call that actually allocates space for a file without writing data (and Linux and BSD lack this, probably Solaris as well), you might get a small improvement in performance by using ftruncate(2)/truncate(1) to extend the file to the desired size, mmap the file into memory, then write non-zero data to the first bytes of every disk block (use getconf to find the disk block size).
This is the fastest I could do (which is not fast) with the following constraints:
The goal of the large file is to fill a disk, so can't be compressible.
Using ext3 filesystem. (fallocate not available)
This is the gist of it...
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int32_t buf[256];                  /* 1 KiB block */
    for (int i = 0; i < 256; ++i) {
        buf[i] = rand();               /* random so it is not compressible */
    }

    FILE* file = fopen("/file/on/your/system", "wb");
    int blocksToWrite = 1024 * 1024;   /* 1 KiB x 1M blocks = 1 GiB */
    for (int i = 0; i < blocksToWrite; ++i) {
        fwrite(buf, sizeof(int32_t), 256, file);
    }
    fclose(file);
    return 0;
}
In our case this is for an embedded Linux system, and it works well enough, but we would prefer something faster.
FYI, the command dd if=/dev/urandom of=outputfile bs=1024 count=XX was so slow as to be unusable.
Shameless plug: OTFFS provides a file system serving arbitrarily large (well, almost - exabytes is the current limit) files of generated content. It is Linux-only, plain C, and in early alpha.
See https://github.com/s5k6/otffs.
So I wanted to create a large file with repeated ASCII strings. "Why?" you may ask. Because I need to use it for some NFS troubleshooting I'm doing. I need the file to be compressible because I'm sharing a tcpdump of a file copy with the vendor of our NAS. I had originally created a 1 GB file filled with random data from /dev/urandom, but of course, since it's random, it won't compress at all, and I would need to send the full 1 GB of data to the vendor, which is difficult.
So I created a file with all the printable ASCII characters, repeated over and over, up to a limit of 1 GB in size. I was worried it would take a long time. It actually went amazingly quickly, IMHO:
cd /dev/shm
date
time yes $(for ((i=32;i<127;i++)) do printf "\\$(printf %03o "$i")"; done) | head -c 1073741824 > ascii1g_file.txt
date
Wed Apr 20 12:30:13 CDT 2022
real 0m0.773s
user 0m0.060s
sys 0m1.195s
Wed Apr 20 12:30:14 CDT 2022
Copying it from an nfs partition to /dev/shm took just as long as with the random file (which one would expect, I know, but I wanted to be sure):
cp ascii1gfile.txt /home/greygnome/
uptime; free -m; sync; echo 1 > /proc/sys/vm/drop_caches; free -m; date; dd if=/home/greygnome/ascii1gfile.txt of=/dev/shm/outfile bs=16384 2>&1; date; rm -f /dev/shm/outfile
But while doing that I ran a simultaneous tcpdump:
tcpdump -i em1 -w /dev/shm/dump.pcap
I was able to compress the pcap file down to 12M in size! Awesomesauce!
Edit: Before you ding me because the OP said, "I don't care about the contents," know that I posted this answer because it's one of the first replies to "how to create a large file linux" in a Google search. And sometimes, disregarding the contents of a file can have unforeseen side effects.
Edit 2: And fallocate seems to be unavailable on a number of filesystems, and creating a 1GB compressible file in 1.2s seems pretty decent to me (aka, "quickly").
You could use https://github.com/flew-software/trash-dump
You can create a file of any size, filled with random data.
Here's a command you can run after installing trash-dump (it creates a 1 GB file):
$ trash-dump --filename="huge" --seed=1232 --noBytes=1000000000
BTW I created it