Problems restoring an image file with dd - Linux

I have a pendrive to which I restore an image made with the dd command on another system.
Whenever I restore the image, I always have to run fsck afterwards.
Is it possible that the source system was corrupted when I generated the image, or am I corrupting the pendrive when I restore the image file?
Many thanks, and sorry for my English!

To know whether your data has been corrupted, you should compute a checksum (md5sum or sha1sum) when you create the image.
# dd if=/dev/sdb of=my_image.img # --> here you create your image
# md5sum /dev/sdb # --> here you compute your checksum
e2985322ca0ed3632027a6eb8fe70ae8 /dev/sdb
# md5sum my_image.img # --> check the integrity of the image
e2985322ca0ed3632027a6eb8fe70ae8 my_image.img
Thus, when you flash the image to another device, on another computer or any other system, you are able to check the integrity of the data.
# dd if=my_image.img of=/dev/hdc # --> here you flash your image to a device
# md5sum /dev/hdc # --> check the integrity of the flashed data
e2985322ca0ed3632027a6eb8fe70ae8 /dev/hdc
Of course the hash obtained here is just an example, but it is always the same for the same data. If any byte is altered, the checksum will be totally different.
Obviously, if you copy with dd to a device (or from one), make sure it is not mounted: something like mount | grep /dev/hdc should return nothing before you flash the /dev/hdc device.
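As a side note, you can compute the checksum while the image is being created, so the device is only read once. A minimal sketch, using the same example device and file names as above:
# dd if=/dev/sdb | tee my_image.img | md5sum # --> image and checksum in one pass
The hash printed here covers exactly the bytes written to my_image.img, so it can be compared directly against the checksum of the flashed device later.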

This is just a guess, since you do not provide much information about the creation of the image, but yes, it is possible that the file system was in an unclean state when the image was taken. It is also possible that the file system was simply still mounted at that point in time - always unmount a file system (or mount it read-only) before you take an image of it.
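A minimal sketch of that precaution, assuming an ext2/3/4 file system on the example partition /dev/sdb1:
umount /dev/sdb1 # fails harmlessly if the partition is not mounted
fsck -f /dev/sdb1 # force a check so the source is known to be clean
dd if=/dev/sdb of=my_image.img
If the image is taken only after a successful fsck, any later fsck complaints point at the restore side rather than the source.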

Related

DD Image larger than source

I created an image file using dd on my disk /dev/sda, which fdisk says is 500107862016 bytes in size. The resulting image file is 500108886016 bytes, which is exactly 1024000 bytes larger.
Why is the image file 1MB larger than my source disk? Is it related to the fact that I specified bs=1M in my dd command?
When I restore the image file onto another identical disk, I get a "dd: error writing ‘/dev/sda’: No space left on device" error. Is this a problem? Will my new disk be corrupted?
conv=noerror makes dd(1) continue after a read error, which is not what you want. Also, conv=sync pads incomplete blocks (mainly the last block) with zeros up to a complete block, so these zeros appended to your last block are probably what makes your file larger than the actual disk size.
You don't need any of the conv options you used. No conversion is going to be made, and dd(1) will write an incomplete last block if the image doesn't end on a full block boundary (which is the case here).
Just retry your command with:
dd if=/dev/sda of=yourfile.img
and then
dd if=yourfile.img of=/dev/sdb
If you plan to use some larger buffer size (not needed, as you are reading a block device and the kernel doesn't impose a block size for reading block devices), just use a multiple of the sector size that is also a divisor of the whole disk size (something like one full track would be absurd, as today's disks' tracks are completely logical and have no relationship with the actual disk geometry).
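As a sketch of that idea (the device name is an example, and the candidate list is arbitrary), pick the largest block size that divides the disk size exactly, so dd never writes a padded or partial final block:
disk=/dev/sda
size=$(blockdev --getsize64 "$disk") # total device size in bytes
for bs in 4194304 1048576 65536 4096 512; do
  [ $((size % bs)) -eq 0 ] && break
done
echo "using bs=$bs ($((size / bs)) full blocks)"
dd if="$disk" of=disk.img bs=$bs
Since any disk size is a multiple of the 512-byte sector, the loop always ends with a valid choice.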

Copy file to disk image (*.img)/analogue to mcopy on Mac

I want to copy a few files to a disk image (*.img extension). On Linux this command is used:
mcopy -i "$target_img" $file ::/
What is the equivalent command on Mac (OS X)? How can I copy files to an image?
The hdiutil command can also be used to create a disk image based upon a folder.
hdiutil create {imagename}.dmg -volname "{Name of volume}" -srcfolder /{path to folder}
Using the above command, we could perform the following:
hdiutil create ~/Desktop/newimage.dmg -volname "New Disk Image" -srcfolder ~/Desktop/myfolder
The name of the disk image and volume name are two different things, the first simply refers to the filename of the disk image created. When the image is mounted or restored to a disk, the volume name is what's referred to. So if I mount this new disk image, you'll see OS X mounts it as "New Disk Image".
Disk images generated using the above methods are read-only and formatted as HFS+ by default, though their size will be just enough to contain all the data needed.
Standard disk images cannot increase in size, so you'll need to specify a particular size if you intend to be adding data to it later. The parameter -size can be used to specify the size of the disk image to create.
If you will need to modify or add more data to the disk image later, use the parameter -format UDRW, which stands for read/write.
Combining all of the above, the command would be:
hdiutil create ~/Desktop/newimage.dmg -volname "New Disk Image" -size 1g -format UDRW -srcfolder ~/Desktop/myfolder
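To actually copy files into the image afterwards (the nearest OS X analogue to mcopy), mount it, copy, and detach. A short sketch with example paths, where the mount point under /Volumes follows the volume name chosen above:
hdiutil attach ~/Desktop/newimage.dmg
cp somefile.txt "/Volumes/New Disk Image/"
hdiutil detach "/Volumes/New Disk Image"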

Check whether an ISO is valid or not

Is there any C# way to check whether an ISO file is valid, i.e. in a valid ISO format? Is any other check possible?
The scenario is this: if a text file (or a file of any other format) is renamed to ISO and handed on for further processing, I want to check whether this ISO file is a valid ISO file. Is there any programmatic way, such as checking a property of the file, the file header, or anything else?
Thanks in advance for any reply.
To quote the wiki gods:
There is no standard definition for ISO image files. ISO disc images
are uncompressed and do not use a particular container format; they
are a sector-by-sector copy of the data on an optical disc, stored
inside a binary file. ISO images are expected to contain the binary
image of an optical media file system (usually ISO 9660 and its
extensions or UDF), including the data in its files in binary format,
copied exactly as they were stored on the disc. The data inside the
ISO image will be structured according to the file system that was
used on the optical disc from which it was created.
reference
So you basically want to detect whether a file is an ISO file at all, rather than check whether it is valid (e.g. incomplete, corrupted, ...)?
There's no easy way to do that, and there certainly is no C# function (that I know of) that can do this.
The best way to approach this is to guess the number of bytes per block stored in the ISO.
Guess, or simply try all the possible situations one by one, unless you have an associated CUE file that actually stores this information. PS: if the ISO is accompanied by a same-name .CUE file, then you can be 99.99% sure that it's an ISO file anyway.
Sizes would be 2048 (user data) or 2352 (raw or audio) bytes per block. Other sizes are possible as well; I just mentioned the two most common ones. In the case of 2352 bytes per block, the user data starts at an offset in this block, usually 16 or 24 depending on the Mode.
Next I would try to detect the CD/DVD file systems. Assume that the image starts at sector 0 (although for safety you could implement a scan that assumes -150 to 16, for instance).
You'll need to look into the specifics of ISO9660 and UDF for that. Sectors 16, 256 etc. will be interesting sectors to check.
Bottom line, it's not an easy task and you will need to familiarize yourself with optical disc layouts and optical disc file systems (ISO9660 and UDF, but possibly also HFS and even FAT on BD).
If you're digging into this, I strongly suggest getting IsoBuster (www.isobuster.com) to help you see what the size per block is, what file systems there are, and to inspect the different key blocks.
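To make the detection idea concrete, here is a minimal shell sketch (the filename is an example) that assumes 2048 bytes per block: ISO9660 places a volume descriptor at sector 16 whose bytes 1-5 contain the magic string "CD001".
iso=file.iso
magic=$(dd if="$iso" bs=1 skip=$((16 * 2048 + 1)) count=5 2>/dev/null)
if [ "$magic" = "CD001" ]; then
  echo "$iso: ISO9660 signature found at sector 16"
else
  echo "$iso: no ISO9660 signature (could still be UDF or a raw 2352-byte-sector image)"
fi
Repeating the same probe with 2352-byte blocks and the Mode-dependent data offsets would cover the raw variants mentioned above.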
In addition to the answers above (and especially @peter's answer): I recently made a very simple Python tool for detecting truncated/incomplete ISO images. It is definitely not validation (which, as @Jake1164 correctly points out, is impossible), but it is possibly useful for some scenarios nevertheless. It also supports ISO images that contain Apple (HFS) partitions. For more details see the following blog post:
Detecting broken ISO images: introducing Isolyzer
And the software's Github repo is here:
Isolyzer
You may run the md5sum command to check the integrity of an image.
For example, here's a list of ISO: http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/
You may run:
md5sum CentOS-5.4-x86_64-LiveCD.iso
The output is supposed to be the same as 1805b320aba665db3e8b1fe5bd5a14cc, which you can find here:
http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/md5sum.txt
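md5sum can also do the comparison itself; assuming md5sum.txt has been downloaded into the same directory as the ISO:
md5sum -c md5sum.txt
This prints OK or FAILED for every file listed, instead of leaving the hash comparison to the eye.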

CIFS/SMB Write Optimization

I am looking at making a write optimization for CIFS/SMB such that the writing of duplicate blocks is suppressed. For example, I read a file from the remote share and modify a portion near the end of the file. When I save the file, I only want to send write requests back to the remote side for the portions of the file that have actually changed. So basically, suppress all writes up to the point at which a non-duplicate write is encountered; at that point the suppression is disabled and the writes are allowed as usual. The problem is I can't find any documentation in MS-SMB/MS-SMB2/MS-CIFS or elsewhere that indicates whether or not this is a valid thing to do. Does anyone know if this would be valid?
Dig deep into the sources of the Linux kernel, there is documentation on CIFS - both in source and text. E.g. http://www.mjmwired.net/kernel/Documentation/filesystems/cifs.txt
If you want to study the behaviour of e.g. the CIFS protocol, you may be able to test it with the Unix command dd. Mount any remote file system via CIFS, e.g. into /media/remote, and change into this folder:
cd /media/remote
Now create a file with some random stuff (e.g. from the kernel's random pool):
dd if=/dev/urandom of=test.bin bs=4M count=5
In this example, you should see some 20MB of traffic. Then create another, smaller file somewhere on your machine, say in your home folder:
dd if=/dev/urandom of=~/test_chunk.bin bs=4M count=1
The interesting thing is what happens if you attempt to write the chunk into a specific position of the remote test file:
dd if=~/test_chunk.bin of=test.bin bs=4M count=1 seek=3 conv=notrunc
Actually, this should only change block #4 out of 5 in the target file.
I guess you can adjust the block size; I did this with 4MB blocks. But it should help to understand what happens on the network.
The CIFS protocol does allow applications to write back specific portions of the file. This is controlled by the parameters DataOffset and DataLength in the SMB WriteAndX packet.
Documentation for the same can be found here:
http://msdn.microsoft.com/en-us/library/ee441954.aspx
The client can use these fields to write a specific length of data to specific offsets within the file.
Similar support exists in more recent versions of the protocol as well ...
The SMB protocol does have such a write optimization. It works with the CIFS append operation: the protocol reads the EOF of the file and starts writing the new data with the offset set to the EOF value and the length set to the number of appended bytes.

DD img different MD5s?

We have a SmartMedia card with a Linux install on it that we need to duplicate. We created an image with dd and then used dd to write the image back to a couple of new SmartMedia cards. We have compared the MD5 checksums of the original and the new copies, and they are different.
Here is what we used:
dd if=/dev/sdb of=myimage.img
dd if=myimage.img of=/dev/sdb
dd if=/dev/sdb of=newimage.img
Anyone have any ideas of why these come out different?
If the cards are different sizes, dd'ing the smaller image to a larger card will not "fill it up", and whatever data was previously at the end of the card will remain there. An image made from this card will be different from the original image.
It's also always possible that data was mis-written, mis-read, or otherwise corrupted in-transit.
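One way to rule out the size difference is to checksum only as many bytes of the card as the original image contains. A small sketch with example names, assuming GNU coreutils:
size=$(stat -c %s myimage.img) # size of the original image in bytes
head -c "$size" /dev/sdb | md5sum # hash only the image-sized prefix of the card
md5sum myimage.img # the two hashes should match
If they do match, the copy itself is fine and only trailing data on the larger card differs.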
Do the card capacities differ?
Running 'ls -l myimage.img newimage.img' might tell you something.
Running 'cmp -l myimage.img newimage.img' might tell you something.
If you mounted /dev/sdb in between, that would be an explanation: if I remember correctly, ext2 and ext3 keep a "mount counter" in the superblock that is updated on every mount, so even a mount with no writes changes the device contents.
