I'm not too familiar with Linux so please forgive the ignorance. I'm trying to dump a file to a virtual floppy disk using the command:
dd if=a.out of=image bs=1024 count=1 conv=notrunc
It creates the image file with the exact same size as a.out (1 kB). However, on my friend's laptop, his image file is 1.5 MB. His is correct (according to my professor), but mine isn't. I even tried with his a.out file just to check, and it produced the same wrong result.
Any input on why this is and how to fix it?
It's likely your friend's target file was already 1.5M (or 1.44M for a normal floppy image). The notrunc option means that dd won't truncate the file before writing to it, so if you write 1K to a larger file, only the first 1K is overwritten and the rest remains as it was.
If you want a 1.44M file to start with, you can create it first with:
dd if=/dev/zero of=image bs=1440k count=1
then use your command to overwrite the first bit of it.
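For example, assuming your file really is a.out and you want a standard 1.44M floppy image, the whole sequence would look something like this (just the two commands above combined):
dd if=/dev/zero of=image bs=1440k count=1
dd if=a.out of=image bs=1024 count=1 conv=notrunc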
This dd command will copy exactly 1024 bytes, so you got the expected result. Either the professor is wrong, or you copied the command incorrectly.
I have a pen drive to which I restore an image made with the dd command on another system.
Whenever I restore the image, I always have to run fsck afterwards.
Is it possible that the source system was corrupted when I generated the image, or am I corrupting the pen drive when I restore the image file?
Many thanks, and sorry for my English!
To know whether your data is corrupted, you should compute a checksum (md5sum or sha1) when you create the image.
# dd if=/dev/sdb of=my_image.img # --> here you create your image
# md5sum /dev/sdb # --> here you compute your checksum
e2985322ca0ed3632027a6eb8fe70ae8 /dev/sdb
# md5sum my_image.img # --> check the integrity of the image
e2985322ca0ed3632027a6eb8fe70ae8 my_image.img
Thus, when you flash the image to another device, on another computer or any other system, you are able to check the integrity of the data.
# dd if=my_image.img of=/dev/hdc # --> here you flash your image to a device
# md5sum /dev/hdc # --> check the integrity of the flashed data
e2985322ca0ed3632027a6eb8fe70ae8 /dev/hdc
Of course the hash shown here is just an example, but it is always the same for the same data. If any byte were altered, the checksum would be completely different.
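If you prefer a stronger hash, the same workflow works with sha1sum (mentioned above); a quick sketch reusing the same device and image names:
# sha1sum /dev/sdb my_image.img # --> the two printed hashes should match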
Obviously, if you copy to a device with dd (or from one), make sure it is not mounted: something like mount | grep /dev/hdc should return nothing before you flash the /dev/hdc device.
This is just guessing, since you do not provide much information about how the image was created, but yes, it is possible that the file system was in an unclean state when the image was taken. It is also possible that the file system was simply still mounted at that point in time - always unmount a file system (or mount it read-only) before you take an image of it.
I have been provided with a 3.8 GB SD card image meant to be flashed to a 4 GB SD card for booting a customized version of Raspbian OS on the Raspberry Pi dev board. It has a first primary partition that is FAT32 and holds a bootloader, and another partition of a custom type that holds the OS.
I am able to boot the PI off the SD card with this image deployed on it, modify its contents while the board is running, then shut down the board.
I'd like to create my own disk image after I modify the contents of the card while the board is running. This would involve backing up the MBR, which I would attempt via:
dd if=/dev/sda of=~/Desktop/mbr.raw bs=512 count=1
I could then back up each partition one at a time to a separate file via:
dd if=/dev/sda1 of=~/Desktop/sda1.raw bs=1M
dd if=/dev/sda2 of=~/Desktop/sda2.raw bs=1M
Is there any way to concatenate these files into a single image, or safely script dd to extract all of their contents to a single file in the first place? The size of the bootloader and OS partitions may change in the future, but they will always be contiguous.
Use a subshell like this:
(dd if=/dev/sda1 bs=1M; dd if=/dev/sda2 bs=1M) > ~/Desktop/sda1+2.raw
Or, if you want the 512-byte MBR in there as well (probably not the best idea), you could do:
(dd if=/dev/sda bs=512 count=1; dd if=/dev/sda1 bs=1M; dd if=/dev/sda2 bs=1M) > ~/Desktop/MBR+sda1+2.raw
In the end, I did the following, which worked (a combined sketch follows the steps):
Use fdisk -l /dev/sdc to list all partitions on the SD card. Note the block size (usually 512) and the "count" (i.e., the number of blocks occupied by the first partition).
Define a variable, blks, as count+1.
Issue the command: dd if=/dev/sdc of=~/my_image.img bs=512 count=${blks}
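Putting those steps together as a rough shell sketch (COUNT here is a placeholder for the block count you read off the fdisk output, not something computed automatically):
fdisk -l /dev/sdc # read the block size and count by hand
blks=$((COUNT + 1)) # COUNT comes from the fdisk output above
dd if=/dev/sdc of=~/my_image.img bs=512 count=${blks}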
Is there a script I can use to copy particular sectors of my hard disk?
I actually have two partitions, say A and B, on my hard disk, both of the same size. What I want is to run a program which copies data starting from the first sector of A to the first sector of B, until the last sector of A has been copied to the last sector of B.
Looking for possible solutions...
Thanks a lot
How about using dd? The following copies 1024 blocks (of 512 bytes each, which is usually the sector size), skipping the first 4096 blocks of the input, from the sda1 partition to the sdb1 partition:
dd if=/dev/sda1 of=/dev/sdb1 bs=512 count=1024 skip=4096
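Note that skip= only offsets the read side. If the data should also land at an offset inside the destination partition, the seek= operand handles the output side; a sketch with the same illustrative numbers:
dd if=/dev/sda1 of=/dev/sdb1 bs=512 count=1024 skip=4096 seek=4096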
PS: I also suppose this would be better as a Super User or rather Server Fault question.
If you want to access the hard drive directly, not via partitions, then, well, just do that. Something like
dd if=/dev/sda of=/dev/sda bs=512 count=1024 skip=XX seek=YY
should copy 1024 sectors starting at sector XX to the 1024 sectors starting at sector YY (i.e., YY through YY+1023). Of course, if the sector ranges overlap, the results are probably not going to be pretty.
(Personally, I wouldn't attempt this without first taking a backup of the disk, but YMMV)
I am not sure, but perhaps what you are looking for is a partition copier.
If that is what you mean, try Clonezilla.
(It shows you the exact command it uses, so you can use it to find out how to do the same thing in a script afterwards.)
How can I quickly create a large file on a Linux (Red Hat Linux) system?
dd will do the job, but reading from /dev/zero and writing to the drive can take a long time when you need a file several hundred GB in size for testing... If you need to do that repeatedly, the time really adds up.
I don't care about the contents of the file, I just want it to be created quickly. How can this be done?
Using a sparse file won't work for this. I need the file to be allocated disk space.
dd from the other answers is a good solution, but it is slow for this purpose. In Linux (and other POSIX systems), we have fallocate, which reserves the desired space without having to actually write to it; it works with most modern disk-based file systems and is very fast:
For example:
fallocate -l 10G gentoo_root.img
This is a common question -- especially in today's environment of virtual machines. Unfortunately, the answer is not as straightforward as one might assume.
dd is the obvious first choice, but dd is essentially a copy, and that forces you to write every block of data (thus initializing the file contents)... And that initialization is what takes up so much I/O time. (Want to make it take even longer? Use /dev/random instead of /dev/zero! Then you'll use CPU as well as I/O time!) In the end, though, dd is a poor choice (though it is essentially the default used by the VM "create" GUIs). E.g.:
dd if=/dev/zero of=./gentoo_root.img bs=4k iflag=fullblock,count_bytes count=10G
truncate is another choice -- and is likely the fastest... But that is because it creates a "sparse file". Essentially, a sparse file is a file with large runs of identical (typically zero) data that the underlying filesystem "cheats" on by not really storing; it just "pretends" it's all there. Thus, when you use truncate to create a 20 GB drive for your VM, the filesystem doesn't actually allocate 20 GB, but it cheats and says that there are 20 GB of zeros there, even though as little as one track on the disk may actually (really) be in use. E.g.:
truncate -s 10G gentoo_root.img
fallocate is the final -- and best -- choice for use with VM disk allocation, because it essentially "reserves" (or "allocates") all of the space you're seeking, but it doesn't bother to write anything. So, when you use fallocate to create a 20 GB virtual drive space, you really do get a 20 GB file (not a "sparse file"), and you won't have bothered to write anything to it (on most filesystems the unwritten blocks simply read back as zeros). E.g.:
fallocate -l 10G gentoo_root.img
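If you want to see which of these methods actually consumed disk space, comparing the apparent size with the on-disk usage is a quick check (a sketch, assuming the gentoo_root.img name used above):
ls -lh gentoo_root.img # apparent size: 10G for all three methods
du -h gentoo_root.img # on-disk usage: roughly 10G after dd or fallocate, near zero after truncate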
Linux & all filesystems:
xfs_mkfile 10240m 10Gigfile
Linux & some filesystems (ext4, xfs, btrfs and ocfs2):
fallocate -l 10G 10Gigfile
OS X, Solaris, SunOS and probably other UNIXes:
mkfile 10240m 10Gigfile
HP-UX:
prealloc 10Gigfile 10737418240
Explanation
Try mkfile <size> myfile as an alternative to dd. With the -n option the size is noted, but disk blocks aren't allocated until data is written to them. Without the -n option, the space is zero-filled, which means writing to the disk, which means taking time.
mkfile is derived from SunOS and is not available everywhere. Most Linux systems have xfs_mkfile, which works exactly the same way, and not just on XFS file systems despite the name. It's included in xfsprogs (for Debian/Ubuntu) or similarly named packages.
Most Linux systems also have fallocate, which only works on certain file systems (such as btrfs, ext4, ocfs2, and xfs), but is the fastest, as it allocates all the file space (creates non-holey files) but does not initialize any of it.
truncate -s 10M output.file
will create a 10 M file instantaneously (M stands for 1024*1024 bytes, MB stands for 1000*1000 - same with K, KB, G, GB...)
EDIT: as many have pointed out, this will not physically allocate the file on your device. With this you could actually create an arbitrarily large file, regardless of the available space on the device, as it creates a "sparse" file.
For example, notice that no HDD space is consumed with this command:
### BEFORE
$ df -h | grep lvm
/dev/mapper/lvm--raid0-lvm0
7.2T 6.6T 232G 97% /export/lvm-raid0
$ truncate -s 500M 500MB.file
### AFTER
$ df -h | grep lvm
/dev/mapper/lvm--raid0-lvm0
7.2T 6.6T 232G 97% /export/lvm-raid0
So, when doing this, you will be deferring physical allocation until the file is accessed. If you're mapping this file to memory, you may not have the expected performance.
But this is still a useful command to know. For example, when benchmarking transfers using files, the specified size of the file will still get moved.
$ rsync -aHAxvP --numeric-ids --delete --info=progress2 \
root@mulder.bub.lan:/export/lvm-raid0/500MB.file \
/export/raid1/
receiving incremental file list
500MB.file
524,288,000 100% 41.40MB/s 0:00:12 (xfr#1, to-chk=0/1)
sent 30 bytes received 524,352,082 bytes 38,840,897.19 bytes/sec
total size is 524,288,000 speedup is 1.00
Here, seek is the size of the file you want in bytes minus 1:
dd if=/dev/zero of=filename bs=1 count=1 seek=1048575
Examples where seek is the size of the file you want in bytes (with count=0, nothing is copied; dd simply sets the file to that size):
#kilobytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200K
#megabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200M
#gigabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200G
#terabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200T
From the dd manpage:
BLOCKS and BYTES may be followed by the following multiplicative suffixes: c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.
To make a 1 GB file:
dd if=/dev/zero of=filename bs=1G count=1
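Keep in mind that dd buffers a full block at a time, so bs=1G means a 1 GiB buffer in memory. If that is a concern, the same 1 GB file can be written in smaller blocks, for example:
dd if=/dev/zero of=filename bs=1M count=1024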
I don't know a whole lot about Linux, but here's the C code I wrote to fake huge files on DC Share many years ago.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int i;
    FILE *fp;

    fp = fopen("bigfakefile.txt", "w");
    if (fp == NULL) {
        return 1;
    }
    /* Skip ahead 1 MB and write a single byte, a million times over:
       the result is a sparse file with a huge apparent size. */
    for (i = 0; i < (1024 * 1024); i++) {
        fseek(fp, (1024 * 1024), SEEK_CUR);
        fprintf(fp, "C");
    }
    fclose(fp);
    return 0;
}
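To try it out, a minimal build-and-run sketch (the file name bigfakefile.c is just an assumption for wherever you saved the code):
gcc -o bigfakefile bigfakefile.c
./bigfakefile
ls -lh bigfakefile.txt # huge apparent size; du -h shows far less is actually allocated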
You can use "yes" command also. The syntax is fairly simple:
#yes >> myfile
Press "Ctrl + C" to stop this, else it will eat up all your space available.
To clean this file run:
#>myfile
will clean this file.
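If you'd rather not have to interrupt it by hand, piping yes through head gives you a file of a fixed size; a sketch (100M is just an example, and the size suffix relies on GNU head):
yes | head -c 100M > myfile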
I don't think you're going to get much faster than dd. The bottleneck is the disk; writing hundreds of GB of data to it is going to take a long time no matter how you do it.
But here's a possibility that might work for your application. If you don't care about the contents of the file, how about creating a "virtual" file whose contents are the dynamic output of a program? Instead of open()ing the file, use popen() to open a pipe to an external program. The external program generates data whenever it's needed. Once the pipe is open, it acts much like a regular file for sequential reads (note that a pipe is not seekable, so fseek() and rewind() won't work on it). You'll need to use pclose() instead of close() when you're done with the pipe.
If your application needs the file to be a certain size, it will be up to the external program to keep track of where in the "file" it is and send an eof when the "end" has been reached.
One approach: if you can guarantee unrelated applications won't use the files in a conflicting manner, just create a pool of files of varying sizes in a specific directory, then create links to them when needed.
For example, have a pool of files called:
/home/bigfiles/512M-A
/home/bigfiles/512M-B
/home/bigfiles/1024M-A
/home/bigfiles/1024M-B
Then, if you have an application that needs a 1G file called /home/oracle/logfile, execute a "ln /home/bigfiles/1024M-A /home/oracle/logfile".
If it's on a separate filesystem, you will have to use a symbolic link.
The A/B/etc files can be used to ensure there's no conflicting use between unrelated applications.
The link operation is about as fast as you can get.
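A sketch of how such a pool might be created up front, using fallocate from the other answers (the paths are the ones listed above):
fallocate -l 512M /home/bigfiles/512M-A
fallocate -l 512M /home/bigfiles/512M-B
fallocate -l 1G /home/bigfiles/1024M-A
fallocate -l 1G /home/bigfiles/1024M-B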
The GPL mkfile is just a (ba)sh script wrapper around dd; BSD's mkfile just memsets a buffer with non-zero and writes it repeatedly. I would not expect the former to out-perform dd. The latter might edge out dd if=/dev/zero slightly since it omits the reads, but anything that does significantly better is probably just creating a sparse file.
Absent a system call that actually allocates space for a file without writing data (and Linux and BSD lack this, probably Solaris as well), you might get a small improvement in performance by using ftruncate(2)/truncate(1) to extend the file to the desired size, mmap the file into memory, then write non-zero data to the first bytes of every disk block (use stat(2)'s st_blksize field to find the disk block size).
This is the fastest I could do (which is not fast) with the following constraints:
The goal of the large file is to fill a disk, so it can't be compressible.
Using the ext3 filesystem (fallocate is not available).
This is the gist of it...
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t buf[256];                 /* one 1 KB block */
    for (int i = 0; i < 256; ++i) {
        buf[i] = rand();              /* random so it won't compress */
    }

    FILE *file = fopen("/file/on/your/system", "wb");
    if (file == NULL) {
        return 1;
    }

    int blocksToWrite = 1024 * 1024;  /* 1M blocks of 1 KB = 1 GB */
    for (int i = 0; i < blocksToWrite; ++i) {
        fwrite(buf, sizeof(int32_t), 256, file);
    }
    fclose(file);
    return 0;
}
In our case this is for an embedded Linux system, and this works well enough, but we would prefer something faster.
FYI, the command dd if=/dev/urandom of=outputfile bs=1024 count=XX was so slow as to be unusable.
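Part of that slowness is likely the tiny 1 KB block size; a variant that still reads /dev/urandom but writes in larger blocks (sizes here are just illustrative) is usually much faster:
dd if=/dev/urandom of=outputfile bs=1M count=1024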
Shameless plug: OTFFS provides a file system serving arbitrarily large (well, almost: exabytes is the current limit) files of generated content. It is Linux-only, plain C, and in early alpha.
See https://github.com/s5k6/otffs.
So I wanted to create a large file with repeated ASCII strings. "Why?" you may ask. Because I need to use it for some NFS troubleshooting I'm doing. I need the file to be compressible because I'm sharing a tcpdump of a file copy with the vendor of our NAS. I had originally created a 1g file filled with random data from /dev/urandom, but of course, since it's random, it won't compress at all and I would need to send the full 1g of data to the vendor, which is difficult.
So I created a file with all the printable ascii characters, repeated over and over, to a limit of 1g in size. I was worried it would take a long time. It actually went amazingly quickly, IMHO:
cd /dev/shm
date
time yes $(for ((i=32;i<127;i++)) do printf "\\$(printf %03o "$i")"; done) | head -c 1073741824 > ascii1g_file.txt
date
Wed Apr 20 12:30:13 CDT 2022
real 0m0.773s
user 0m0.060s
sys 0m1.195s
Wed Apr 20 12:30:14 CDT 2022
Copying it from an nfs partition to /dev/shm took just as long as with the random file (which one would expect, I know, but I wanted to be sure):
cp ascii1g_file.txt /home/greygnome/
uptime; free -m; sync; echo 1 > /proc/sys/vm/drop_caches; free -m; date; dd if=/home/greygnome/ascii1g_file.txt of=/dev/shm/outfile bs=16384 2>&1; date; rm -f /dev/shm/outfile
But while doing that I ran a simultaneous tcpdump:
tcpdump -i em1 -w /dev/shm/dump.pcap
I was able to compress the pcap file down to 12M in size! Awesomesauce!
Edit: Before you ding me because the OP said, "I don't care about the contents," know that I posted this answer because it's one of the first replies to "how to create a large file linux" in a Google search. And sometimes, disregarding the contents of a file can have unforeseen side effects.
Edit 2: And fallocate seems to be unavailable on a number of filesystems, and creating a 1GB compressible file in 1.2s seems pretty decent to me (aka, "quickly").
You could use https://github.com/flew-software/trash-dump
You can create a file of any size, filled with random data.
Here's a command you can run after installing trash-dump (it creates a 1 GB file):
$ trash-dump --filename="huge" --seed=1232 --noBytes=1000000000
BTW I created it