Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 12 years ago.
Let's assume I have a hard drive with some Linux distribution on it. My task is to set up a similar system (same distro, kernel version, software versions, etc.) on another hard drive. How can I do that if:
Case a: I'm allowed to use any software I want (including software like VirtualBox to make a full image of the system)?
Case b: I'm not allowed to use anything but standard Linux utilities to retrieve all the characteristics I need, and must then install a "fresh" system on the other hard drive manually?
Thanks for reading. It's hard for me to express what I mean; I hope it's clear.
One word: CloneZilla
It can clone partitions and whole disks, and it copies the boot record too. You can boot it from CD, from a USB drive, or even over the network (PXE).
You could go with dd, but it's slow because it copies everything, even the empty space on the disk, and if your partitions are not the same size you can run into various problems, so I don't recommend dd.
You could also boot the system from a live CD like Knoppix, mount the partitions, and copy everything using cp -a, running something like watch df in a second terminal to monitor progress. But even then you need to fix up the bootloader after the copy is done.
I used to use various manual ways to clone Linux systems in the past, until I discovered CloneZilla. Life is much easier since then.
The easiest way is to use dd from the command prompt.
dd if=/dev/sda of=/dev/sdb bs=8192
dd (the "disk duplicator") is used for exactly this purpose. I would check the man page to make sure my block-size argument is sensible, though. The other two arguments are if (input file) and of (output file). The of= hard drive should be the same size as or larger than the if= hard drive.
You can create an exact copy of the system on the first disk with dd or cpio and a live cd.
I have an Acer Aspire R laptop with 260GB SSD, UEFI, Ubuntu and Windows 10 dual boot. How can I backup / clone / image the whole drive to be reinstalled on a new drive if current drive fails?
Clonezilla: Will it back up all partitions (EFI, recovery, Ubuntu, swap, Windows) to an external drive, so I can restore it to a new drive with no problem? Which file system should the external drive have?
GParted: Or should I partition the external drive like the existing drive and copy the partitions with gparted?
You can use Clonezilla to make a bootable copy of the whole existing SSD with all of its partitions including Windows.
The boot menu comes from Grub2 and it gets created from templates in /etc/grub.d and settings from /etc/default/grub.
So, if your Clonezilla ISO file lives at /srv/iso/clonezilla-live-disco-amd64.iso and the /srv directory lives on hard drive 0 in partition 13, then create a new executable file in /etc/grub.d, such as 40_clonezilla, and put the following in it:
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "Clonezilla live" {
set root=(hd0,13)
set isofile="/iso/clonezilla-live-disco-amd64.iso"
loopback loop $isofile
linux (loop)/live/vmlinuz boot=live union=overlay username=user config components quiet noswap nolocales edd=on nomodeset ocs_live_run=\"ocs-live-general\" ocs_live_extra_param=\"\" keyboard-layouts= ocs_live_batch=\"no\" locales= vga=788 ip=frommedia nosplash toram=live,syslinux,EFI findiso=$isofile
initrd (loop)/live/initrd.img
}
Then, run update-grub to regenerate your grub menu.
When you reboot, you will have a new boot option that boots from Clonezilla, and, from there, you can make a bootable copy of the existing hard-drive onto an external drive and overwrite whatever is already on that external drive.
All of this, editing Grub templates and overwriting drives, is quite dangerous, and the penalty for getting something wrong is high.
I am trying to image a hard disk which is failing.
The issue is that the disk routinely drops out during the imaging process, which causes the program to fail, and when it is re-recognised by the system it appears under a different address (/dev/sdb is now /dev/sde).
I have tried imaging each partition independently, but on a 500GB disk I am struggling to get past 100GB a session before the disk drops (I think the head is going, as it clicks).
My question is: using dd, is there a way to image the disk in, say, 50GB parts, so that I can capture the whole disk over a number of images and then consolidate them?
Or better still, is there a way to force the disk to re-identify on the previous location?
I have found little information on this topic so any insight would be useful.
Thanks.
When the device is lost, your stream is lost too. You cannot recover it, even if the device gets the same name assigned again. However, you might want to employ udev rules to get the same name back, just for your convenience.
In dd, you can use four useful parameters:
bs=BYTES the size of a "block"
skip=N number of blocks to skip in input
seek=N number of blocks to skip in output
count=N number of blocks to be copied (we don't need it here)
Also, dd has a somewhat hidden feature of providing progress reports. You can either use status=progress or send a signal to the process. The latter is more complicated, but it allows you to define the frequency of the progress reports. For example, you can do this in another terminal:
for ((;;)); do sleep 1; kill -USR1 `pidof -s dd`; done
Putting all of this together, you can use bs=4M as a reasonable block size. Then run the aforementioned command in a second terminal and start dd, initially with
dd bs=4M seek=0 skip=0 if=/dev/… of=…
After it fails the first time, use the last block number that dd successfully copied as the value for both seek and skip. You can be a bit conservative here (decrease the number a bit) to ensure you don't get any "holes" in your output.
Repeat until the whole disk is done. Good luck!
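The chunked, resumable copy described above can be sketched as a small script. File images stand in for the real devices here; on actual hardware SRC would be something like /dev/sdb, with BS=4M and CHUNK=12800 (roughly 50 GB per session). All names and sizes below are illustrative.

```shell
SRC=source.img
DST=backup.img
BS=1M       # block size
CHUNK=4     # blocks copied per session
TOTAL=10    # total blocks on the "disk"

# Demo source: 10 MiB of random data standing in for the failing disk
dd if=/dev/urandom of="$SRC" bs="$BS" count="$TOTAL" 2>/dev/null

DONE=0      # resume point in blocks; persist this between sessions
while [ "$DONE" -lt "$TOTAL" ]; do
    # skip= moves past already-read input, seek= past already-written
    # output; conv=notrunc keeps earlier chunks in the output intact
    dd if="$SRC" of="$DST" bs="$BS" skip="$DONE" seek="$DONE" \
       count="$CHUNK" conv=notrunc 2>/dev/null
    DONE=$((DONE + CHUNK))
done
```

After a real drop-out you would note the last block dd reported, store that (minus a safety margin) as the new DONE, and rerun; the bookkeeping is the same as the manual seek/skip invocation above.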
I would like to know how to copy a Linux partition (example: /dev/sda1) on a USB stick, and then boot on the USB stick.
I tried to just copy it with the cp command, but when I tried to boot from the USB stick, it booted into the partition I had copied (/dev/sda1) and not the USB.
In short, what I want to do is create a USB stick containing my Linux partition that I can boot on any computer.
Thank you.
cp is great for copying files, but you should consider it too high-level for copying partitions. When you copy a partition, you read from a device file and write to another device file, a normal file, or whatever. With cp, many file attributes might be changed: modification time, owner, permissions, etc. That isn't great for partition copies; e.g., files owned by root should still be owned by root, and ~/.ssh/config should still have permissions 600.
The program for this task is dd, which copies the data byte for byte. You specify an input file and an output file:
dd if=/dev/sda of=/dev/sdf bs=512
This copies the contents of /dev/sda to /dev/sdf while reading 512 bytes at a time (bs = block size). After some time it will finish and report some statistics. To get statistics during copying, you must send the SIGUSR1 signal to the dd process.
Please beware that dd is a dangerous tool if used incorrectly: for example, it won't ask for permission to overwrite your 10,000-picture vacation album. It simply does it. Make sure to specify the correct device files!
You also have to take care that the sizes of source and destination fit: the destination needs to be at least as large as the source. If you have a 500GB hard disk, copying it to a 4GB USB stick won't work.
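That size check can be scripted before invoking dd. A sketch, with ordinary files standing in for the devices (on real hardware you would read the sizes with blockdev --getsize64 /dev/sdX instead of wc -c; all names here are illustrative):

```shell
# Stand-ins for the real source and destination devices
head -c 1048576 /dev/urandom > disk-src.img   # 1 MiB "source disk"
head -c 2097152 /dev/zero    > disk-dst.img   # 2 MiB "destination disk"

src_size=$(wc -c < disk-src.img)
dst_size=$(wc -c < disk-dst.img)

# Refuse to copy when the destination is smaller than the source
if [ "$dst_size" -ge "$src_size" ]; then
    dd if=disk-src.img of=disk-dst.img bs=64K conv=notrunc 2>/dev/null
else
    echo "destination too small" >&2
    exit 1
fi
```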
Copying a whole hard disk also copies the boot loader. One issue with this is that entries in the boot loader configuration may reference the wrong disks. However, starting the boot loader itself should be no problem (provided the architecture matches). If you use GRUB, you even get a command line, which you can use to boot the system manually.
Also change your BIOS settings so that the first boot device is USB.
When our remote Linux box boots up, I want it to check for a newer boot file, and reboot that newer file (after first verifying its integrity).
Is there a Linux function that will reboot Linux from a given boot file?
For example:
reboot_function( filepath );
..Where filepath is the path and filename of a Linux boot file, different from the one previously booted.
PURPOSE:
I am trying to create a way of upgrading Linux software remotely in the field that is 100% tolerant of power interruptions. If power dies at any stage of the upload, then when rebooted, the Linux box needs to fall back to the last working boot file.
What version of Linux do you have? I don't understand very well: do you want to update your whole system, the libraries, or the kernel? Isn't this the responsibility of a packaging tool like apt-get?
Embedded Linux Boot Process
Software components Involved in Embedded Linux Boot Process
Bootloader
kernel Image
root file system - either an initrd image or an NFS location
Boot process for an embedded system
Instead of a BIOS, you run a program from a fixed location in flash.
The components involved in the first steps of the PC boot process are combined into a single piece of "bootstrap firmware", called the "boot loader".
The bootloader also provides additional features useful for development and debugging.
Simply use kexec to execute a new kernel without having to touch your bootloader configuration.
http://en.wikipedia.org/wiki/Kexec
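A minimal sketch of that approach (the kernel and initrd paths are illustrative, the commands need root, and kexec -e jumps into the new kernel immediately without a clean shutdown):

```shell
# Stage the new kernel and initrd, reusing the current kernel command line
kexec -l /boot/vmlinuz-new --initrd=/boot/initrd-new.img \
      --command-line="$(cat /proc/cmdline)"

# Boot the staged kernel right away, bypassing firmware and bootloader
kexec -e
```

On systemd-based systems, systemctl kexec does the same thing after first shutting down services and unmounting filesystems cleanly.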
Field updates for embedded systems need to be thought out just a bit. The most common approach I have run across has the following elements:
A kernel image that contains the kernel and an associated compressed ramdisk, cramfs, or other file system (which holds the running application), combined with the kernel in a single file image.
Two areas (or banks) of flash memory (or other) large enough for this kernel image.
A flash sector (or other) dedicated to "boot parameters".
Flash sectors (or other) dedicated to a "booter".
The basic idea is that the boot parameters record the current boot bank, while the booter boots the kernel image that resides in that bank. During the update process, the backup bank is written with the newer kernel image; then the boot parameters sector is updated. A reset occurs (there are many ways to trigger one), and the booter knows to boot the alternate bank. Note that the order of updating, i.e. of writing to the flash, is critical here: the boot parameters sector must be written last.
There can be two schools of thought here. The first is that one boot bank is never updated, and contains an image solid enough to at least download an update into the other boot bank.
The other idea is that there is a current boot bank and a previous boot bank; the update is always written into the previous bank.
Along with this basic idea, there are several methods to help ensure integrity. For instance, the kernel images can have a checksum or hash stored with the boot parameters. The download process should have similar checking as well. A checksum on the boot parameters sector helps ensure its integrity, too.
This basic idea can be expanded upon to suit your particular needs. Remember, write order is very important.
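The critical write order can be sketched with ordinary files standing in for the flash banks and the boot-parameters sector (all names are illustrative):

```shell
# Demo "new kernel image" to be installed
head -c 4096 /dev/urandom > new-kernel.img

# Current boot parameters say we are running from bank A
echo "bank=A" > bootparams

# 1. Write the new image into the inactive bank (bank B)
cp new-kernel.img bank-B.img

# 2. Verify the copy before switching banks
new_sum=$(sha256sum new-kernel.img | cut -d' ' -f1)
bank_sum=$(sha256sum bank-B.img | cut -d' ' -f1)
[ "$new_sum" = "$bank_sum" ] || { echo "update failed verification" >&2; exit 1; }

# 3. Only now rewrite the boot parameters -- this must be the last
#    write, so a power cut at any earlier point leaves bank A bootable
printf 'bank=B\nsha256=%s\n' "$bank_sum" > bootparams
```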
I recently accidentally ran "rm -rf *" in a directory and deleted some files that I needed. However, I was able to recover most of them using photorec. Apparently, "deleting" a file just removes references to it, and the file is not truly deleted until it is overwritten by something else.
So if I wanted to remove the file completely, couldn't I just execute
mv myfile.txt /temp/myfile.txt
(or move to external storage)
You should consider using the Linux command shred, which overwrites the target file multiple times before deleting it completely, which makes it 'impossible' to recover the file.
You can read a bit about the shred command here.
Just moving the file does not cover you for good: if you move it to external storage, the local version of the file is deleted just as it would be with the rm command.
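For example, with GNU coreutils shred (the filename here is illustrative):

```shell
# Demo file to destroy
echo "secret data" > myfile.txt

# Overwrite it 3 times with random data (-n 3), add a final pass of
# zeros to hide the shredding (-z), then truncate and unlink it (-u)
shred -n 3 -z -u myfile.txt
```

Note the caveat discussed below: on SSDs and on file systems that don't overwrite in place, shred cannot guarantee the old blocks are gone.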
No, that won't help either.
A move between file systems is really just a "copy + rm" internally: the original storage location of the file on the source medium is still there, just marked as available. A move within a file system doesn't touch the file's bytes at all; it just updates the bookkeeping to say "file X is now in location Y".
To truly wipe a file, you must overwrite all of its bytes. And yet again, technology gets in the way: if you're using solid-state storage, there is a very high chance that writing 'garbage' to the file won't touch the actual cells the file is stored in, but will actually be written somewhere completely different.
For magnetic media, repeated overwriting with alternating 0x00, 0xFF, and random bytes will eventually totally nuke the file. For SSD/flash systems, the drive either has to offer a "secure erase" option, or you have to smash the chips into dust. For optical media, it's even more complicated: -R media cannot be erased, only destroyed; for -RW, I don't know how many repeated write cycles are required to truly erase the bits.
No (and not just because moving it somewhere else on your computer does not remove it from the computer). The way to completely remove a file is to completely overwrite the space on the disk where it resided. The Linux command shred will accomplish this.
Basically, no: on most file systems you can't guarantee that a file is overwritten without going very low-level. Removing a file and/or moving it only changes the pointers to the file, not its contents on disk. Even the Linux command shred won't guarantee a file's removal on many file systems, since it assumes files are overwritten in place.
On SSDs it's even more likely that your data stays there for a long time, since even if the file system attempts to overwrite blocks, the SSD will remap the write to a new block (erasing takes a lot of time; if it wrote in place, things would be very slow).
In the end, with modern file systems and disks, the best chance you have to have files stored securely is to keep them encrypted to begin with. If they're stored anywhere in clear text, they can be very hard to remove, and recovering an encrypted file from disk (or a backup for that matter) won't be much use to anyone without the encryption key.