I am using the following command, where sda (500 GB) is my laptop HD (unmounted) and sdc (500 GB) is my external USB HD:
dd if=/dev/sda of=/dev/sdc bs=4096
When complete, this returns:
122096647+0 records in
122096646+0 records out
500107862016 bytes (500 GB) copied, 10975.5 s, 45.6 MB/s
This shows records in != records out
fdisk -l
returns
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 718847 358407 7 HPFS/NTFS/exFAT
/dev/sda2 718848 977102847 488192000 7 HPFS/NTFS/exFAT
/dev/sdc1 * 2048 718847 358407 7 HPFS/NTFS/exFAT
/dev/sdc2 718848 977102847 976384000 7 HPFS/NTFS/exFAT
This also shows differences between the block counts.
Another question: is it normal for dd to take 3 hours for a 500 GB copy? (Laptop SSD to a regular non-SSD USB HD.)
My physical sector size on Windows is 4096, whilst the logical sector size is 512.
Is it normal for dd to take 3 hours? Yes. dd can take very long because you are copying everything off the drive bit by bit. You also need to consider how the connection is made from the source (sda) to the destination (sdc). You mention sdc is your external USB hard drive, so what is the maximum transfer speed of that USB link? It is also unlikely that the transfer will always run at that maximum value. If it is USB 2.0, then yes, it can take very long.
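If you do stick with dd, a larger block size and a progress readout at least make the copy easier to watch; a minimal sketch (the bs value is just an example, and status=progress needs a reasonably recent GNU dd):
dd if=/dev/sda of=/dev/sdc bs=1M status=progress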
This slowness is why I hate dd. It is often used when it should not be, and differences between source and destination, such as partition sizes, types, and block sizes, cause problems.
In most cases you are better off using cp -rp or tar.
If you are trying to clone a drive that has a bootable Linux operating system, you do not need to use dd; there are better ways.
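For example, a minimal sketch of a tar-based copy between two mounted filesystems (the mount points are assumptions, run as root; installing the bootloader on the destination, e.g. with grub-install, is a separate step):
# copy everything under /mnt/src into /mnt/dst, preserving permissions and ownership
tar -cpf - --one-file-system --numeric-owner -C /mnt/src . | tar -xpf - --numeric-owner -C /mnt/dst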
On a Linux system, I need to create a large (about 10 GB), uncompressible file.
This file is supposed to reside in a Docker image, and is needed to test performance when transferring and storing large Docker images on a local registry. Therefore, I need the image to be "intrinsically" large (that is: uncompressible), in order to bypass optimization mechanisms.
fallocate (described at Quickly create a large file on a Linux system) works great to create large files very quickly, but the result is a zero-entropy file that is highly compressible. When pushing the large image to the registry, it takes only a few MB.
So, how can a large, uncompressible file be created?
You may try using /dev/urandom or /dev/random to fill your file, for example:
#debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=10M count=1000 ;echo $SECONDS
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB, 9,8 GiB) copied, 171,516 s, 61,1 MB/s
171
Using a bigger bs, a little less time is needed:
#debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=30M count=320 ;echo $SECONDS
320+0 records in
320+0 records out
10066329600 bytes (10 GB, 9,4 GiB) copied, 164,498 s, 61,2 MB/s
165
171 seconds vs. 165 seconds
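As a side note, if you do not want to pick a block size at all, the same kind of file can be produced with head from GNU coreutils (a sketch; the size suffix is GNU-specific):
head -c 10G /dev/urandom > testfile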
Is less than 3 minutes for 4 GiB of real data an acceptable speed?
A "random" data set can be obtained by giving dd existing data instead of generating it.
The easiest way is to use a disk that is filled to a degree greater than the required file size; I used a disk with assorted binaries and video. If you are concerned about data leakage, you can post-process the data with something first.
Everything goes to /dev/shm, because writing to RAM is much faster than writing to disk.
Naturally, there must be enough free space. I had 4 GB, so the file in the example is 4 GB.
My processor is an aged first-generation i7.
% time dd if=/dev/sdb count=40 bs=100M >/dev/shm/zerofil
40+0 records in
40+0 records out
4194304000 bytes (4,2 GB, 3,9 GiB) copied, 163,211 s, 25,7 MB/s
real 2m43.313s
user 0m0.000s
sys 0m6.032s
% ls -lh /dev/shm/zerofil
-rw-r--r-- 1 root root 4,0G mar 5 13:11 zerofil
% more /dev/shm/zerofil
3��؎м
f`f1һ���r�f�F����fa�v
f�fFf��0�r'f�>
^���>b��<
...
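To check that the result really is uncompressible, a quick sanity check on the file from the example above (gzip should barely shrink truly random data):
gzip -c /dev/shm/zerofil | wc -c     # expect a byte count very close to the original 4194304000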
I am working with an NVMe card on Linux (Ubuntu 14.04).
I am finding some performance degradation for an Intel NVMe card when it is formatted with the xfs filesystem at its default sector size (512), or any other sector size less than 4096.
In the experiment I formatted the card with the xfs filesystem using its default options, and ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could only get the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation when using:
a 4k page size
any fio block size smaller than 64k
ext4 with its default configuration
What could be the issue? Is this an alignment issue? What am I missing here? Any help is appreciated.
The issue is that your SSD's native sector size is 4K, so your file system's sector/block size should be set to match, so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span two sectors, and therefore require two sector reads to return one block (instead of one read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly, 4096 is still probably the drive's true sector size anyway, and you'll get the most consistent performance by using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
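As a quick sanity check before formatting (a sketch; the device name nvme0n1 is assumed), you can compare the logical and physical block sizes the kernel reports, then format xfs with a matching sector size as in the question:
cat /sys/block/nvme0n1/queue/logical_block_size
cat /sys/block/nvme0n1/queue/physical_block_size
mkfs.xfs -f -s size=4096 /dev/nvme0n1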
I have been provided with a 3.8 GB SD card image meant to be flashed to a 4 GB SD card for booting a customized version of Raspbian OS on the Raspberry Pi dev board. It has a first primary partition that is FAT32 and holds a bootloader, and another partition of a custom type that holds the OS.
I am able to boot the Pi off the SD card with this image deployed on it, modify its contents while the board is running, then shut down the board.
I'd like to create my own disk image after I modify the contents of the card while the board is running. This would involve backing up the MBR, which I would attempt via:
dd if=/dev/sda of=~/Desktop/mbr.raw bs=512 count=1
I could then back up each partition one at a time to a separate file via:
dd if=/dev/sda1 of=~/Desktop/sda1.raw bs=1M
dd if=/dev/sda2 of=~/Desktop/sda2.raw bs=1M
Is there any way to concatenate these files into a single image, or safely script dd to extract all of their contents to a single file in the first place? The size of the bootloader and OS partitions may change in the future, but they will always be contiguous.
Use a subshell like this:
(dd if=/dev/sda1 bs=1M; dd if=/dev/sda2 bs=1M) > ~/Desktop/sda1+2.raw
Or, if you want the 512 byte MBR in there as well (probably not the best idea), you could do:
(dd if=/dev/sda bs=512 count=1; dd if=/dev/sda1 bs=1M; dd if=/dev/sda2 bs=1M) > ~/Desktop/MBR+sda1+2.raw
In the end, I did the following, which worked:
Use fdisk -l /dev/sdc to list all partitions on the SD card. Note the block size (usually 512) and the "count", i.e. the number of blocks occupied by the first partition.
Define a variable, blks, as count+1.
Issue the command: dd if=/dev/sdc of=~/my_image.img bs=512 count=${blks}
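A minimal sketch of those steps as one script (the count value here is a hypothetical number; use whatever you read off the fdisk -l output for your own card):
blk_count=123456                  # hypothetical: the block count noted from `fdisk -l /dev/sdc`
blks=$((blk_count + 1))
dd if=/dev/sdc of=~/my_image.img bs=512 count=${blks}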
I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But I keep getting annoying messages saying that I have some old filesystems from my previous experimentation, filesystems I have actually nuked. I worry this means that BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x' uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
Total devices 3 FS bytes used 159.20GB
devid 3 size 931.00GB used 134.01GB path /dev/sde
*** Some devices missing
Label: 'root' uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
Total devices 4 FS bytes used 1.13GB
devid 4 size 931.51GB used 1.03GB path /dev/sdd
devid 3 size 931.51GB used 2.00GB path /dev/sdc
devid 2 size 931.51GB used 1.03GB path /dev/sdb
*** Some devices missing
Label: 'root' uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
Total devices 5 FS bytes used 740.93GB
devid 4 size 911.00GB used 293.03GB path /dev/sdd1
devid 5 size 931.51GB used 314.00GB path /dev/sde1
devid 3 size 911.00GB used 293.00GB path /dev/sdc1
devid 2 size 911.00GB used 293.03GB path /dev/sdb1
devid 1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe the stale superblocks from disks that are NOT part of your wanted BTRFS FS, I found:
How to clean up old superblock?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
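Before erasing anything, note that running wipefs with no options only lists the signatures it can see, which is a safe way to confirm the stale btrfs superblock is really there (the device name is a placeholder):
wipefs /dev/sdb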
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition in /proc/partitions. Each device and each partition is examined to see if there is a BTRFS "magic number" and an associated valid root data structure at offset 0x10040 from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my own situation and sure enough there was a BTRFS magic number (which is the ASCII string _BHRfS_M) there from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
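For reference, the same overwrite can be sketched with dd instead of hexedit (the device name is a placeholder; triple-check the offset and the target device before running anything like this):
# zero out the 8-byte btrfs magic at offset 0x10040 (= 65600) on the stale device
dd if=/dev/zero of=/dev/sdX bs=1 seek=65600 count=8 conv=notrunc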
Is there a script I can use to copy some particular sectors of my hard disk?
I actually have two partitions, say A and B, on my hard disk. Both are the same size. What I want is to run a program which copies data from the starting sector of A to the starting sector of B, continuing until the end sector of A has been copied to the end sector of B.
Looking for possible solutions...
Thanks a lot
How about using dd? The following copies 1024 blocks (of 512 bytes each, which is usually the sector size) with a 4096-block input offset from the sda1 partition to the sdb1 partition:
dd if=/dev/sda1 of=/dev/sdb1 bs=512 count=1024 skip=4096
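If the goal is simply to clone all of partition A onto partition B, a whole-partition copy may be enough; a sketch, assuming the partitions are /dev/sda1 and /dev/sda2 and both are unmounted (status=progress needs a reasonably recent GNU dd):
dd if=/dev/sda1 of=/dev/sda2 bs=1M status=progress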
PS: I also suppose this should be a Super User or rather a Server Fault question.
If you want to access the hard drive directly, not via partitions, then, well, just do that. Something like
dd if=/dev/sda of=/dev/sda bs=512 count=1024 skip=XX seek=YY
should copy 1024 sectors starting at sector XX to sectors YY->YY+1024. Of course, if the sector ranges overlap, results are probably not going to be pretty.
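As a usage example with made-up sector numbers (purely hypothetical positions; pick your own from the partition table):
# copy 1024 sectors starting at sector 2048 into the region starting at sector 1050624
dd if=/dev/sda of=/dev/sda bs=512 count=1024 skip=2048 seek=1050624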
(Personally, I wouldn't attempt this without first taking a backup of the disk, but YMMV)
I am not sure if what you are looking for is a partition copier.
If that is what you mean, try Clonezilla.
(It will show you the exact command it uses, so it can also be used to find out how to do that in a script afterwards.)