Restore a backup made with 'dd' (possibly a problem with softRAID HDDs) - linux

I have 2 servers, both Ubuntu 18.04. The first is my main server, which I want to back up. The second is a test server, also Ubuntu 18.04 (KS-4 Server - Atom N2800 - 4GB DDR3 1066 MHz - SoftRAID 2x 2TB SATA), where I want to test my backup.
I made the backup with the 'dd' command and then downloaded it to server 2 with wget (490 GB, ~24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changed.
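(For reference, a restore invocation that reports progress and flushes writes to the disk would look like the following sketch; the block size is illustrative, not taken from the question:)
dd if=sdadisk.img of=/dev/sdb bs=4M status=progress conv=fsync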
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is with the disk configuration on server 2, specifically the 'Linux raid' between them. I searched for how to change it and tested commands like 'mdadm ...', but it didn't work as I expected. So I have questions:
How do I change the 'Linux raid' from 2 HDDs to 1 HDD holding the current system, leaving the other HDD clear, so that I can test my backup properly? (See the sketch after this list.)
Is it generally possible to restore a 490 GB backup onto a 1.8 TB disk?
Did I select the best option for a full Linux backup?
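(A minimal sketch for the first question, assuming a RAID1 member on a hypothetical partition /dev/sdY2 is being detached; these device names are placeholders, not taken from the listings above:)
mdadm /dev/md2 --fail /dev/sdY2                  # mark the member as failed
mdadm /dev/md2 --remove /dev/sdY2                # detach it from the array
mdadm --grow /dev/md2 --raid-devices=1 --force   # shrink the array to a single member
mdadm --zero-superblock /dev/sdY2                # wipe RAID metadata so the disk reads as clear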

To touch on question #3, "Did I select the best option for a full Linux backup?":
You are using the correct tool, but not the correct source and target for the dd command.
It appears you created a .img file of your sda drive (which is normally used to create a bootable disk image, not a full backup of the drive).
If you want to create a full backup, for example of /dev/sda, you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case your /dev/sda drive is twice the size of your /dev/sdb drive, so you might want to consider backing up important files using tar, as well as compressing the backup using either gzip or bzip2.
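(A minimal sketch of such a tar backup, run as root from the root of the system; the archive path and exclusion list are illustrative:)
tar --exclude=/backup.tar.gz --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt -czpf /backup.tar.gz /
The exclusions keep tar from archiving pseudo-filesystems and from recursively archiving its own output file.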

Related

Mount docker logical volume

I'm trying to access a logical volume that was previously used by docker. This is the result of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling docker, and checking whether any files are open on that volume using lsof.
Do you have any clue about how I can mount that volume?
Thanks
Uninstalling docker does not really help, as purge and autoremove only delete the installed packages, not the images, containers, volumes and config files.
To delete those you have to delete a bunch of directories contained in /etc, /var/lib, /bin and /var/run.
Clean up the environment (a consolidated sketch follows this list):
Try running docker system prune -a to remove unused containers, images, etc.
Remove the volume with docker volume rm {volumeID}.
Create the volume again with docker volume create docker-docker--pool_tdata.
Kill the process:
Run lsof +D /mnt/test or cat ../docker/../tasks.
This should display the PIDs of live tasks.
Kill the task with kill -9 {PID}.
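(The whole sequence as a sketch; {volumeID} and {PID} are placeholders to fill in from your own system:)
docker system prune -a                             # remove unused containers, images, networks
docker volume rm {volumeID}                        # delete the stale volume
docker volume create docker-docker--pool_tdata    # recreate it
lsof +D /mnt/test                                  # list PIDs holding the mount point busy
kill -9 {PID}                                      # kill the offending task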

Is it possible to create an .IMG image of my disk without another disk?

I only have one disk, and it has lots of unused free space, with only one primary partition. I want to create a .img image as a backup, without the free space.
I did some searching on 'create img image' (without free space); most answers give:
dd if=/dev/sdb | gzip > backup.img.gz
Now I have two questions:
I only have one disk (sda) with all the space in one partition (sda1). Is it possible to create a .img image with the dd command without adding another disk? If yes, what should I do? Will running dd if=/dev/sda | gzip > backup.img.gz in /home/xxx be OK? Will dd copy the .img file itself into the image? (See the sketch after these questions.)
I'd prefer to get a .img file, not a .img.gz file, without free space. Is that possible in my situation (one disk)?
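(One common way to avoid writing the image onto the very disk being copied is to stream it to another machine; the user, host and path below are illustrative assumptions:)
dd if=/dev/sda bs=4M | gzip -c | ssh user@otherhost 'cat > /backups/sda.img.gz'
Running this from a live USB/CD keeps the filesystem quiescent while it is copied.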
Here's the state of my computer (output translated from the system language):
root@act-Precision-T1500:/home/act# df
df: "/mnt/udisk": 输入/输出错误
文件系统 1K-块 已用 可用 已用% 挂载点
udev 1983824 4 1983820 1% /dev
tmpfs 400464 1252 399212 1% /run
/dev/sda1 476502040 64401980 387872068 15% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 2002320 180 2002140 1% /run/shm
none 102400 40 102360 1% /run/user
root@act-Precision-T1500:/home/act# fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d1403
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 968466431 484232192 83 Linux
/dev/sda2 968468478 976771071 4151297 5 Extended
/dev/sda5 968468480 976771071 4151296 82 Linux swap / Solaris
Did I express myself clearly? I'm a beginner with Linux and my English is awful. Thank you for helping me!

How to Resize Root File System Partition Debian Jessie BeagleBone Black

I started a new Debian install on my BeagleBone Black Version C using a 16 GB MicroSD card. The image I wrote via win32DiskImager created only one bootable Linux partition, 3.3 GB in size, holding the root file system and everything else. The rest of the MicroSD card is unused. I want to resize this partition to take advantage of the unused space.
All of the resize tutorials I have read relate to a system that has two partitions, but again, my install has only one. Why, I do not know; I didn't delete any partitions at all, which is why I didn't notice until I decided to check how much room I had left.
Is it possible to resize mmcblk0p1 to make use of the entire microSD card, or should I break it up like a traditional install?
Here is my lsblk
root@beaglebone:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk1boot0 179:16 0 1M 1 disk
mmcblk1boot1 179:24 0 1M 1 disk
mmcblk0 179:0 0 14.7G 0 disk
`-mmcblk0p1 179:1 0 3.3G 0 part /
mmcblk1 179:8 0 3.7G 0 disk
|-mmcblk1p1 179:9 0 96M 0 part
`-mmcblk1p2 179:10 0 3.6G 0 part
Here is my fdisk -l
root@beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 14.7 GiB, 15811477504 bytes, 30881792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa2911fde
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 * 2048 6963199 6961152 3.3G 83 Linux
Disk /dev/mmcblk1: 3.7 GiB, 3925868544 bytes, 7667712 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/mmcblk1p1 * 2048 198655 196608 96M e W95 FAT16 (LBA)
/dev/mmcblk1p2 198656 7667711 7469056 3.6G 83 Linux
Disk /dev/mmcblk1boot1: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk1boot0: 1 MiB, 1048576 bytes, 2048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
nsilent22's suggestion did the trick. I did not need to worry about losing data when I deleted the partition and then created a new one using fdisk. Thank you nsilent22!!
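(For anyone following along, the delete-and-recreate approach generally looks like this sketch; the start sector must match the value shown by fdisk -l above, so double-check before writing:)
fdisk /dev/mmcblk0
# inside fdisk: d (delete partition 1), n (new primary partition 1,
# first sector 2048 as before, last sector = default, i.e. end of card),
# answer No if a newer fdisk offers to remove the ext4 signature, then w (write)
reboot
resize2fs /dev/mmcblk0p1   # grow the ext4 filesystem to fill the enlarged partition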

Using SD card as external storage for Beaglebone Black

After following instructions found here:
http://elinux.org/Beagleboard:MicroSD_As_Extra_Storage
and here:
http://electronicsembedded.blogspot.com/2014/10/beaglebone-black-using-sd-card-as-extra.html?showComment=1434418179676#c2761158033046523777
I am still having trouble. I used the settings they say to use and followed the instructions; I get 3 solid LEDs on the board on power-up with the SD card inserted, and Windows 7 doesn't detect the board at all.
The board works fine without the SD card inserted: I can boot up and log in via SSH, and it is detected by Windows.
The contents of my uEnv.txt are as follows:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
and I also added this line to the fstab file:
/dev/mmcblk0p1 /media/card auto auto,rw,async,user,nofail 0 0
Some results from checking the filesystem; my drive is called 'BBB_Ext'. This is after booting without the SD card inserted, then inserting it after bootup:
root@beaglebone:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 3.5G 1.8G 1.5G 55% /
/dev/root 3.5G 1.8G 1.5G 55% /
devtmpfs 250M 0 250M 0% /dev
tmpfs 250M 4.0K 250M 1% /dev/shm
tmpfs 250M 248K 250M 1% /run
tmpfs 250M 0 250M 0% /sys/fs/cgroup
tmpfs 250M 4.0K 250M 1% /tmp
/dev/mmcblk0p1 70M 54M 16M 78% /media/card
/dev/mmcblk1p1 15G 16K 15G 1% /media/BBB_Ext_
Here are more details from fdisk:
root@beaglebone:~# fdisk -l
Disk /dev/mmcblk0: 3867 MB, 3867148288 bytes, 7553024 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 * 63 144584 72261 c W95 FAT32 (LBA)
/dev/mmcblk0p2 144585 7550549 3702982+ 83 Linux
Disk /dev/mmcblk0boot1: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk0boot0: 2 MB, 2097152 bytes, 4096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mmcblk1: 15.9 GB, 15931539456 bytes, 31116288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)
Any help would be appreciated. I figured since this is Linux-related the question is applicable here; if it's better off on another site, please let me know.
From what I gathered, the uEnv.txt should be (saved on your SD card):
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk1p2 ro
optargs=quiet
And add the line below to /etc/fstab:
/dev/mmcblk0p1 /media/data auto rw 0 0
OK, I believe I resolved the issue (for anyone who cares).
After looking at the fdisk log I realized that I needed to change the fstab line to be:
/dev/mmcblk1p1 /media/card auto rw 0 0
and then I also changed the uEnv.txt to be:
mmcdev=1
bootpart=1:2
mmcroot=/dev/mmcblk0p2 ro
optargs=quiet
From my understanding it was trying to boot off a disk that wasn't there, and that caused the problem. In addition, I hadn't pointed the fstab entry at the correct device for the drive, which can be seen at the bottom of the fdisk check:
Device Boot Start End Blocks Id System
/dev/mmcblk1p1 2048 31115263 15556608 c W95 FAT32 (LBA)
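(A quick way to test the corrected fstab entry without rebooting, as a sketch; the mount point is the one used above:)
mkdir -p /media/card   # make sure the mount point exists
mount -a               # mount everything in /etc/fstab that isn't mounted yet
df -h /media/card      # confirm the SD card is now mounted there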

Enlarging disk space for Linux in VMware

It's maybe a stupid question, but how can I enlarge my Linux machine from 20 to 40 GB? I need to increase my / space. I enlarged the disk in VMware and it says 40 GB now, but if I run:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/zabbix-root 19G 17G 789M 96% /
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 276K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
/dev/sda1 228M 25M 192M 12% /boot
or
fdisk -l
Disk /dev/mapper/zabbix-root: 20.1 GB, 20124270592 bytes
255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/zabbix-root doesn't contain a valid partition table
Disk /dev/mapper/zabbix-swap_1: 1069 MB, 1069547520 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
I still see only 20 GB... How can I use the other 20 GB?
Thanks a lot.
Depending on your distribution, probably something like this (see the sketch after these steps):
Reboot, or run echo "- - -" > /sys/class/scsi_host/host0/rescan so the kernel rescans the disk.
Now that Linux is aware of the disk size change, use pvresize to extend your PV.
Using vgdisplay, make sure that you have free space on your VG.
Extend your LV using lvextend and, finally, use resize2fs to grow your filesystem.
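(A minimal sketch of those steps; the PV device /dev/sda2 is an assumption, so check with pvs first, while the VG/LV names zabbix/root are inferred from the /dev/mapper/zabbix-root name above:)
echo "- - -" > /sys/class/scsi_host/host0/rescan   # rescan after growing the disk in VMware
pvs                                                # identify the physical volume device
pvresize /dev/sda2                                 # grow the PV (assumed device)
vgdisplay zabbix                                   # confirm free extents in the VG
lvextend -l +100%FREE /dev/zabbix/root             # give all free space to the root LV
resize2fs /dev/mapper/zabbix-root                  # grow the ext filesystem online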