Creating a forensic copy of a drive with multiple partitions of different file types - dd

I have a small 80GB drive with three partitions, two FAT and one NTFS. Using FTK Imager and dd for Windows (http://www.chrysocome.net/dd), I need to create a forensic copy of this drive.
The only dd command I've been taught so far is:
C:\windows\system32>[location of dd.exe] if=[location of raw image dump] of=\\.\[letter path of the copy drive]
This can be used to turn a drive/partition into a forensic copy of a single partition, provided the target drive already uses the same file system as the one it's copying. To do this task with just that command, I'd have to partition the copy drive three times, make three image dumps, and dd each partition-image pair individually. That seems tedious and wouldn't even create a true forensic copy of the original drive.
Is there a way to make a raw image of the original drive as a whole, and a dd command that will turn a second drive into a copy of the first from that image?
Thank you

Nothing easier than that! Just boot a live Linux system, e.g. Ubuntu, Debian or Kali, and open a terminal.
With fdisk -l you can list the disks; find where your 80GB disk is. I'll assume from here on that it is /dev/sdb. Then decide where the dump should go, e.g. an external disk with a partition that has enough space, assumed here to be /dev/sdc1; mount that target first. I'll also assume a third disk, /dev/sdd, which will hold the clone:
mkdir /tmp/mount
mount /dev/sdc1 /tmp/mount
and write the source disk into an image file:
dd if=/dev/sdb of=/tmp/mount/image.dd
Now you can copy the disk to another one, either directly or via image.dd, but the target /dev/sdd (your clone disk) must not be mounted! You can check with "mount" and unmount anything on it first:
dd if=/dev/sdb of=/dev/sdd #Direct copy
dd if=/tmp/mount/image.dd of=/dev/sdd #From the image.dd to a disk
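For a forensic copy you will also want to verify that the image matches the source bit for bit. A minimal sketch of that check, using ordinary files as stand-ins for the devices (with real disks, substitute /dev/sdb and the mounted image path and run as root):

```shell
# Stand-in "disk": 4 MiB of random data instead of /dev/sdb
dd if=/dev/urandom of=source.disk bs=1M count=4 2>/dev/null

# Image it, as in the answer above (bs= just speeds dd up)
dd if=source.disk of=image.dd bs=1M 2>/dev/null

# Hash both; matching digests confirm a bit-for-bit copy
sha256sum source.disk image.dd
```

The same sha256sum comparison works on the clone disk afterwards, which is the usual way to document that a forensic duplicate is exact.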
Hope this helps

Related

How to create an image file for use with QEMU in Linux?

I have been following a tutorial online and have built a 512-Byte bootloader saved as boot.bin.
I also have a second stage bootloader compiled and saved as 2ndstage.bin.
My bootloader is written such that the second stage doesn't have to be located directly after the first stage in memory, as it searches for it by file name.
How would I, in Linux, combine both bin files into some sort of file (maybe an image) that I can use with QEMU in order to run my bootloader?
Create a raw disk image file:
dd if=/dev/zero of=image.raw bs=1M count=50
That will make a 50 megabyte image file out of zeros.
If you want to operate on a block device instead of a file, you can attach image.raw as a loop device (read the losetup man page).
You can partition the file or loop device using the regular fdisk or sfdisk utilities. Then you can use dd with its seek/conv options (read its man page), or other tools, to put your bin files into the right places in the disk image.
After that, undo the loopback device if you made one, and start your qemu / qemu-kvm session using the image.raw file as your disk device. If you did your bootloader correctly the qemu bios will start it.
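Put together, the steps above can be sketched as follows. The seek=1 placement of 2ndstage.bin is only an example, since the original bootloader locates the second stage by file name:

```shell
# Create a 50 MiB blank image out of zeros
dd if=/dev/zero of=image.raw bs=1M count=50

# Write the 512-byte first stage at offset 0 without truncating the image
dd if=boot.bin of=image.raw conv=notrunc

# Example placement: second stage at sector 1 (adjust to your own layout)
dd if=2ndstage.bin of=image.raw seek=1 conv=notrunc

# Boot it; the BIOS loads the first 512 bytes as the boot sector
qemu-system-i386 -drive format=raw,file=image.raw
```

conv=notrunc is the important part: without it, dd would truncate image.raw down to the size of the bin file being written.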

mount a sd-card image - change files on a partition and write back

I want to mount an IMG file (which has more than one partition on it), change some files on one (ext4) partition, and write the result back to the img.
One way would be to write the img to an SD card, change the files there, and make an image again. But I don't have an SD card writer, and that approach is a bit complex anyway. I tried it once on a different computer; it works, but it's very cumbersome and time consuming. Trying with a loopback device didn't work for me.
Can someone tell me how to do this on Ubuntu (for example with a loopback device)?
You have to create a loopback device with:
losetup -P /dev/loop0 file
then it will show all partitions on that file in the form of:
/dev/loop0
/dev/loop0p1
/dev/loop0p2
Here is a quote from man losetup:
-P, --partscan
Force the kernel to scan the partition table on a newly created loop device.
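A full round trip might then look like this (a sketch; run as root, and the partition number p2 is an assumption about which partition of your image is the ext4 one):

```shell
losetup -P /dev/loop0 sd.img     # -P makes the kernel scan the partition table
mkdir -p /mnt/sdimg
mount /dev/loop0p2 /mnt/sdimg    # mount the ext4 partition of the image
# ... edit files under /mnt/sdimg ...
umount /mnt/sdimg
losetup -d /dev/loop0            # detach; the edits are already inside sd.img
```

Note there is no separate "write back" step: the loop device maps directly onto the bytes of sd.img, so changes made through the mount land in the file immediately.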

How to archive logical volume in Linux?

Because of a project requirement, we need to archive a logical volume on CentOS 6.5.
We do not care about the files or the file system in the logical volume; we just want to archive the volume to another place as multiple files. We expect that the archive can be restored into a volume and that the restored volume has the correct data (files and file system).
Is there a way to do it?
If you want your volume up and running while archiving, create a LVM snapshot first. Then, you can use dd:
dd if=/dev/VolGroup00/LogVol00 of=...etc
To split the file, you can use split or 7z.
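As a sketch, assuming the volume group and LV names from the dd line above (the 1G snapshot size and 1 GiB chunk size are arbitrary choices here):

```shell
# Snapshot so the copy is consistent while the LV stays in use
lvcreate -s -n lv_snap -L 1G /dev/VolGroup00/LogVol00

# Stream the snapshot through split into 1 GiB pieces
dd if=/dev/VolGroup00/lv_snap bs=4M | split -b 1G - lv_archive.

# Drop the snapshot once the archive is written
lvremove -f /dev/VolGroup00/lv_snap

# Restore: concatenate the pieces back onto a volume of at least the same size
cat lv_archive.* | dd of=/dev/VolGroup00/LogVol00 bs=4M
```

Because dd copies the raw block device, the file system inside the LV comes back exactly as it was, which matches the requirement of not caring what file system is in the volume.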

Azure Linux remove and add another disk

I had a need to increase the disk space for my Linux Azure VM. We attached a new empty disk and followed the steps here: http://azure.microsoft.com/en-in/documentation/articles/virtual-machines-linux-how-to-attach-disk . The only difference is that the newly added device ID was not found in /var/log/messages.
Now I need to add another disk, and we attached one. The problem is that for the first step,
sudo fdisk /dev/sdc
I have no idea where the new disk is attached; I'm totally clueless. Also, what are the steps if I want to remove a disk altogether? I know umount will unmount a disk, but that doesn't necessarily detach the device from the instance; I want a total detachment.
Finally figured it out. The additional SCSI disks are added starting from /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, and so on. The reason the MS tutorial talks about /dev/sdc is that it's the 3rd disk in the system: first your root volume, second your ephemeral temp storage, and this is your 3rd one. Now, if /dev/sdc is no longer needed and you want to remove it:
Remove entry from /etc/fstab file
umount /datadrive
You can now detach the disk from your Azure console.
Let's say that sdc is still there and you want to add another. Just attach it from the Azure console and follow the same steps as given in http://azure.microsoft.com/en-in/documentation/articles/virtual-machines-linux-how-to-attach-disk/#initializeinlinux . The only difference is that the new disk will be at /dev/sdd; wherever you would make a partition at /dev/sdc1, it becomes /dev/sdd1. That's pretty much it.
References
http://www.yolinux.com/TUTORIALS/LinuxTutorialAdditionalHardDrive.html
http://azure.microsoft.com/en-in/documentation/articles/virtual-machines-linux-how-to-attach-disk/#initializeinlinux
When adding more than one disk, it also becomes important to start using UUIDs in fstab. Search for uuid in the article above for more.
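For example, a UUID-based fstab entry can be built like this (the device name, mount point, and UUID value below are illustrative):

```shell
# Find the newly attached disk; it shows up with no mountpoint
lsblk -o NAME,SIZE,MOUNTPOINT

# After partitioning and formatting, read the partition's UUID
blkid /dev/sdd1

# /etc/fstab line using that UUID, so device renames don't break mounts:
# UUID=1b2a3c4d-aaaa-bbbb-cccc-000011112222  /datadrive2  ext4  defaults  0 2
```

Mounting by UUID matters here precisely because, as described above, the /dev/sdX names depend on attach order and can shift when disks are added or removed.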

How to format sd/mmc partition to cramfs

First, is it possible to format an SD/MMC partition to the cramfs filesystem? If the answer is yes, please show me how to do that.
Note: I am not asking how to create a cramfs image; I have already created such an image for a ramdisk.
There seems to be a bit of confusion about how cramfs images are created. One doesn't typically "format" a partition as cramfs: cramfs is a read-only filesystem, so the entire image, filesystem structure and contents alike, is created in one go.
If the OP is asking how to copy a cramfs image so that it resides within an ext2 filesystem, you'd create the image in the usual fashion, using mkfs.cramfs, then copy the resultant image file into some directory in the ext2 filesystem, then mount that image with "mount -t cramfs /path/to/cramfs/image /mount/point". The image is simply yet another file within the ext2 filesystem.
The more common use of cramfs is storing the image in a partition as the only filesystem within that partition. In that instance, you would create the cramfs image in the same manner as before, then copy the entire image to the partition using the appropriate tools. There is no need to format the partition first, as the cramfs image will overwrite whatever was previously written there. The same "mount -t cramfs /path/to/image /mount/point" approach is used here as well, with the caveat that there is no cramfs partition type as there is with ext2 and others, and /path/to/image will typically reside under /dev.
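A sketch of that second, partition-based workflow (run as root; the source directory, device name, and mount point are assumptions for illustration):

```shell
mkfs.cramfs ./rootfs cram.img             # build structure and contents in one go
dd if=cram.img of=/dev/mmcblk0p2 bs=1M    # raw-copy the image into the partition
mkdir -p /mnt/cram
mount -t cramfs /dev/mmcblk0p2 /mnt/cram  # mount it read-only from the partition
```

So the answer to the original question is: you don't format the partition to cramfs at all; you dd a ready-made cramfs image straight onto it.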
