No bootable device error after restoring Debian image to another SSD disk - linux

I created an image of a Debian 7 system on an SSD and later restored it on another computer with exactly the same type of SSD. However, I'm getting the error message No bootable device -- insert boot disk and press key
The image was created using a live OS with the command: dd if=/dev/sda conv=sync,noerror bs=64K | gzip -c > backup.img.gz
And later restored to disk with:
gunzip -c backup.img.gz | dd of=/dev/sda
I've done this plenty of times before on older computers and it usually works fine. These computers have EFI; could this be the issue? Any ideas or workarounds?
Thanks

Do you use MBR or GPT on the SSD?
Possibly you will need to switch between UEFI/Legacy boot mode, as shown here:
https://phoenixts.com/blog/uefi-vs-legacy-bios/
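If you're not sure which partition table and boot mode you have, a quick check from the restored system or a live environment can tell you. A minimal sketch (the device name /dev/sda is an assumption):

# Show the partition table type (gpt vs msdos/MBR)
sudo parted /dev/sda print | grep "Partition Table"

# /sys/firmware/efi only exists when the running kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy/BIOS mode"

If the disk is GPT but the new machine is set to legacy boot (or vice versa), that mismatch alone can produce the "No bootable device" message.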

Rust compilation on AWS fails while succeeding on other machines [duplicate]

I am using openSUSE, specifically the VMware variant from Mono's website. I get this error. Does anyone know how I might fix it?
make[4]: Entering directory `/home/rupert/Desktop/llvm/tools/clang/tools/driver'
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/rupert/Desktop/llvm/Debug+Asserts/bin/clang] Error 1
The full text can be found here
Your virtual machine does not have enough memory to perform the linking phase. Linking is typically the most memory-intensive part of a build, since it's where all the object code comes together and is operated on as a whole.
If you can allocate more RAM to the VM, then do that. Alternatively, you could increase the amount of swap space. I am not that familiar with VMs, but I imagine the virtual hard drive you set up will have a swap partition. If you can make that bigger or allocate a second swap partition, that would help.
Increasing the RAM, if only for the duration of your build, is the easiest thing to do though.
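Signal 9 in collect2: ld terminated with signal 9 [Killed] usually means the kernel's OOM killer stopped the linker. One way to confirm (a sketch) is to check the kernel log right after a failed build:

# If the OOM killer stopped ld, the kernel log records it
dmesg | grep -i -E "out of memory|killed process"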
I also got the same issue and solved it by doing the following steps (it is a memory issue only):
Check the current swap space by running the free command (it needs to be around 10 GB).
Check the swap partition:
sudo fdisk -l
/dev/hda8 none swap sw 0 0
Make swap space and enable it.
sudo swapoff -a
sudo /sbin/mkswap /dev/hda8
sudo swapon -a
If your swap partition is not big enough, you can create a swap file and use that instead.
Create the swap file:
sudo fallocate -l 10g /mnt/10GB.swap
sudo chmod 600 /mnt/10GB.swap
OR
sudo dd if=/dev/zero of=/mnt/10GB.swap bs=1024 count=10485760
sudo chmod 600 /mnt/10GB.swap
Format the swap file:
sudo mkswap /mnt/10GB.swap
Enable the swap file:
sudo swapon /mnt/10GB.swap
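To check that the new swap is active, and to keep the swap file across reboots, something like this should work (a sketch; the fstab line uses the standard format and the /mnt/10GB.swap path from the steps above):

# Verify the swap file is in use
sudo swapon -s
free -m

# Make it permanent by appending the standard fstab entry
echo '/mnt/10GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab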
I tried with make -j1 and it works! But it takes a long time to build.
I had the same problem building on a VirtualBox system. FWIW I was building on a laptop with XP and 2GB RAM. I had to bump the virtual RAM up to 1462MB to get a successful build. Also note the recommended disk size of 8GB is not sufficient to build and install both LLVM and Clang under Ubuntu. I'd recommend at least 16GB.
I would suggest using the -l (--max-load) option instead of limiting -j in this case. See this possibly helpful answer.
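For example (a sketch; the load limit of 4 is an arbitrary choice):

# Only start new jobs while the system load average is below 4,
# instead of hard-capping the job count with -j
make -l 4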

Backup for a linux system via osx

I have an Odroid (Raspberry Pi-like) machine with an Arch Linux system installed. Now I want to move the system from one microSD (A) to another microSD (B). When I tried this, the system became corrupted; information about file attributes was lost:
Copy files from A to the OS X host: cp -R /Volume/microsd_a/* ~/Desktop/backup
Copy files from the OS X host to B: cp -R ~/Desktop/backup/* /Volume/microsd_b
Is it possible to copy a Linux system using an OS X host while preserving its integrity?
Update:
dd. I tried this way, but there is a problem. My SD cards have different sizes, 64 GB and 16 GB, but the system installed on the 64 GB disk takes no more than 8 GB. When I launched the copy, the output image file exceeded 16 GB and I killed the process. Besides, the MBR contains the partition table, which would have to differ (one 64 GB partition vs one 16 GB partition). And note, I do not need to copy the bootloader from the MBR; I can flash the disk's bootloader by other means.
cp. What I wanted to hear as the answer is the list of flags I need for this operation. Reading man cp didn't help me. cp -a does not copy all files because of a Cannot allocate memory error. I tried cp -aX; no attributes were restored after copying the data to the second SD card.
tar. I tried multiple times with various flags, the last being tar -cvpf; tar --same-owner -xpf. But the file attributes were still corrupted.
Again:
- Are you sure it is possible to preserve file attributes when copying ext4 -> APFS -> ext4?
- If it is possible, how does it work, and which command with which flags should I use?
cp -R changes permissions and time stamps and misses hidden files; you can't use that command to create a disk image.
What you need is a disk copy/clone. The command to use is dd.
Check out this webpage:
https://pbxbook.com/other/dd_clone.html
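Since your source card (64 GB) is larger than the target (16 GB), a raw dd of the whole device won't fit. One workaround (a sketch run from a Linux machine; /dev/sdX and the sizes are assumptions) is to shrink the filesystem and partition first, then image only the used span:

# Shrink the ext4 filesystem on the source card to 8 GiB (card must be unmounted)
sudo e2fsck -f /dev/sdX1
sudo resize2fs /dev/sdX1 8G

# Shrink the partition to match; resizepart takes the new END position,
# so leave some headroom past the end of the filesystem
sudo parted /dev/sdX resizepart 1 9GiB

# Image only the first 9 GiB (partition table + shrunken partition)
sudo dd if=/dev/sdX of=backup.img bs=4M count=2304

Restoring that image to the 16 GB card then leaves unallocated space you can reclaim later with resize2fs.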

mdadm: array disappearing on reboot, despite correct mdadm.conf

I'm using Ubuntu 13.10 and trying to create a RAID 5 array across 3 identical disks connected to SATA ports on the motherboard. I've followed every guide and used both the built-in Disks GUI app and mdadm at the command line, and despite everything I cannot get the array to persist after reboot.
I create the array with the following command:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda /dev/sdb /dev/sdd
Then I watch /proc/mdstat for a while as it syncs, until I get this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sda1[0] sdd1[3] sdb1[1]
1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
To update the mdadm config file, I run the following:
root@zapp:~# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
This adds the essential line to my config file:
ARRAY /dev/md/array metadata=1.2 UUID=0ad3753e:f0177930:8362f527:285d76e7 name=zapp:array
Everything seems correct, but when I reboot, the array is gone!
The key to fixing this was to partition the drives first, and create the array from the partitions instead of the raw devices.
Basically, the create command just needed to change to:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1
The rest of the steps were correct, and created everything properly once this change was made. Any further info as to why this was necessary would be helpful. It was certainly not obvious in any of the documentation I found.
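One detail that often matters on Ubuntu: the initramfs keeps its own copy of mdadm.conf, so after regenerating the config it's worth rebuilding the initramfs too. A sketch of the sequence (run as root, like the commands above):

# Rebuild the initramfs so the boot-time copy of mdadm.conf
# matches the new array definition
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
update-initramfs -u

# Sanity-check what will be assembled at boot
mdadm --detail --scan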

Detecting USB Thumb Drive when Ready in Linux Shell Script

I am a Windows admin and dev; I do not generally work with Linux, so forgive me if this is in some way obvious.
I have a not-so-good Linux box running some older version of openSUSE, and I have a script that unmounts the USB thumb drive, formats it, and then waits for the device to become ready again before it runs a script that does a copy with MD5 checksum verification of the source and destination files to ensure the copy was valid. The problem is that on one box the USB thumb drive does not become ready after the format in a consistent way. It takes anywhere from one to two-plus minutes before I can access the drive via /media/LABELNAME.
The direct path is /dev/sdb but, of course, I cannot access it directly via this path to copy the files. Here is my shell script as it stands:
#!/bin/bash
set -e
echo "Starting LABELNAME.\n\nUnmounting /dev/sdb/"
umount /dev/sdb
echo "Formatting /dev/sdb/"
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Waiting on remount..."
sleep 30
echo "Format complete. Running make master."
perl /home/labelname_master.20120830.pl
Any suggestions? How might I wait for the drive to become ready and detect it? I have seen Detecting and Writing to a USB Key / Thumb Drive Automatically but quite frankly I don't even know what that answer means.
It seems that you have some automatic mounting service running which detects the flash disk and mounts the partition. However, you already know what the partition is, so I recommend that you simply mount the disk in your script, choosing a suitable mount point yourself.
mkfs.vfat -I -F32 -n "LABELNAME" /dev/sdb
echo "Format complete, remounting"
mount /dev/sdb $mountpoint #<-- you would choose $mountpoint
echo "Running make master."
perl /home/labelname_master.20120830.pl
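If the device node needs a moment to settle after the format, you could also ask udev to finish processing its event queue before mounting. A sketch (the /mnt/usbstick mount point is hypothetical, and udevadm availability depends on your openSUSE version):

# After the format, wait (up to 30 s) for udev to finish handling the device
udevadm settle --timeout=30
mkdir -p /mnt/usbstick
mount /dev/sdb /mnt/usbstick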

bash command to force file closure on usb drive

I thought doing a sync in my bash script would force the files to be completely written out. When I looked at the thumb drive, it showed all the files I had copied, but after a power supply failure, the USB drive showed 0 files. Do I have to eject the drive manually, or is there something I can do programmatically in my script?
If you want to eject the USB device from your bash script, a simple umount on the device should do the trick. For example:
mount /dev/usb /mnt/usb
# Your copy operations here... then on success:
umount /mnt/usb
You can also try the Linux sync command, which flushes pending writes to disk, especially if your USB key is using a journaled file system.
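Putting it together, a minimal sketch (the device name /dev/sdb1, mount point, and source path are assumptions):

#!/bin/bash
mount /dev/sdb1 /mnt/usb       # mount the stick
cp /path/to/files/* /mnt/usb/  # your copy operations
sync                           # flush dirty buffers to the device
umount /mnt/usb                # umount flushes too and marks the filesystem clean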
