How to mount an rsync-copied partition combined from two source partitions - linux

My PC runs Arch Linux and has two hard disks, /dev/sda and /dev/sdb. sda is the source disk and contains all my files; sdb is the destination disk and is currently empty. My goal is to copy sda to sdb and also make sdb another bootable Arch Linux installation.
sda has three partitions: sda1 for /boot, sda2 for /, sda3 for /home. Here is its /etc/fstab:
/dev/sda2 / ext4 rw,relatime,data=ordered 0 1
/dev/sda1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
/dev/sda3 /home ext4 rw,relatime,data=ordered 0 2
I formatted sdb into only two partitions: sdb1 for /boot and sdb2 for /. I used rsync to copy sda1 to sdb1, and both sda2 and sda3 to sdb2. Then I also updated the UEFI boot loader and sdb's /etc/fstab:
/dev/sdb2 / ext4 rw,relatime,data=ordered 0 1
/dev/sdb1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
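For reference, the copy step could look roughly like this. This is a sketch, not the exact commands used; it assumes sdb2 is mounted at /mnt and sdb1 at /mnt/boot, and one reasonable choice of rsync flags:

```shell
# Sketch: clone sda's contents onto the mounted sdb partitions.
mount /dev/sdb2 /mnt
mkdir -p /mnt/boot
mount /dev/sdb1 /mnt/boot

# -a archive, -A ACLs, -X xattrs, -H hard links
rsync -aAXH /boot/ /mnt/boot/

# Copying / without -x also descends into the mounted /home (sda3),
# which is how sda2 and sda3 both end up on sdb2.
rsync -aAXH --exclude=/boot --exclude=/dev --exclude=/proc \
      --exclude=/sys --exclude=/run --exclude=/tmp --exclude=/mnt \
      / /mnt/

# Recreate the excluded mount-point directories on the target.
mkdir -p /mnt/dev /mnt/proc /mnt/sys /mnt/run /mnt/tmp /mnt/mnt
```

After the copy, the boot loader entries and fstab on sdb still need to be pointed at the new partitions.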
The problem is, when I booted from sdb, both sdb1 and sdb2 were automatically mounted, but /home was empty; my personal home directory was not there. Why is that?
Later I booted from sda again, manually mounted sdb2, and confirmed that my home directory was indeed present under its /home.

I figured out the problem. I forgot to update /boot/loader/entries/arch.conf, so the gummiboot boot loader actually mounted /dev/sda2 as root instead of /dev/sdb2. And because sda2 itself contains nothing under /home (on the source disk that is a separate partition, sda3), /home appeared empty.
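For reference, the corrected entry on sdb's ESP might look like this. This is a minimal sketch of a gummiboot/systemd-boot entry; the kernel and initramfs file names are assumed, not taken from the question:

```
title   Arch Linux (sdb)
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/sdb2 rw
```

Using a PARTUUID in `options` instead of /dev/sdb2 avoids the same class of mix-up if device names change between boots.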


Increase size of filesystem /dev/sdb1 after increasing the size of /dev/sdb

For the filesystem:
df -kh /store
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        50G   45G  1.9G  97% /store
I increased the size of /dev/sdb by 20G and then added the space to /dev/sdb1 by deleting the partition and recreating it:
sdb               8:16   0   70G  0 disk
└─sdb1            8:17   0   70G  0 part
However, I am not sure how to make the extra space visible in /store.
pvresize fails with "no physical volume found". It is a cloud VM, and vgs/vgdisplay do not show any VG either.
The fstab entry is:
/dev/sdb1 /store/ ext4 defaults 0 0
From the output you posted, I don't see any indication that the system is using LVM; the filesystem sits directly on /dev/sdb1. So simply try umount /store followed by resize2fs /dev/sdb1 (with no size argument, resize2fs grows the filesystem to fill the partition).
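The same grow-then-resize sequence can be exercised safely on a loopback image before touching the real disk. This is a sketch using a throwaway file in /tmp; the e2fsprogs tools operate on plain files without root:

```shell
# Create a small ext4 image, "grow the disk", then grow the fs to match.
truncate -s 64M /tmp/demo.img
mkfs.ext4 -q -F /tmp/demo.img
truncate -s 128M /tmp/demo.img   # analogous to enlarging /dev/sdb
e2fsck -fp /tmp/demo.img         # resize2fs wants a clean fsck first
resize2fs /tmp/demo.img          # grows to fill the file, like /dev/sdb1
dumpe2fs -h /tmp/demo.img | grep 'Block count'
```

On the real system the equivalent is: umount /store, e2fsck -f /dev/sdb1, resize2fs /dev/sdb1, then mount it again.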

Not able to find /opt /var /tmp in lsblk RHEL 8.1

I am not able to find /opt, /var, and /tmp in the lsblk output on RHEL 8.1. Can you please help me?
[xxx@exxx ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 500M 0 part /boot/efi
├─sda2 8:2 0 500M 0 part /boot
├─sda3 8:3 0 2M 0 part
└─sda4 8:4 0 60G 0 part
  └─rootvg-rootlv 253:5 0 60G 0 lvm /
lsblk
lsblk displays details about block devices (except RAM disks), i.e. the files that represent devices attached to the PC. It queries the /sys virtual filesystem and the udev database, and prints the result in a tree-like structure. The command ships with the util-linux package.
That is why you cannot see /opt, /var and /tmp there: lsblk lists block devices and their mount points, and on this system those three are ordinary directories on the root filesystem (rootvg-rootlv), not separate mounted filesystems.
/opt is for “the installation of add-on application software packages”.
/var is a standard subdirectory of the root directory in Linux and other Unix-like operating systems that contains files to which the system writes data during the course of its operation.
The /tmp directory is a temporary landing place for files.
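A quick way to see where those directories actually live is to ask df (or findmnt) to resolve each path to its backing filesystem; on the system above, all three would map to the rootvg-rootlv mount:

```shell
# Show which mounted filesystem holds each directory.
df -h /opt /var /tmp

# findmnt answers the same question per path (also from util-linux).
findmnt -T /var
```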

EC2: how to add more volume to an existing device

I was trying to add more volume to my device
df -h
I get:
[root@ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all the existing storage to /dev/nvme0n1p1.
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I tried googling around the AWS instructions but am still quite confused, since most of them cover setting up a brand-new instance, while in my case I cannot stop the instance.
I cannot run mkfs. Also, it seems the disk is already mounted?? I guess I may misunderstand the meaning of "mount", since the filesystem is already there.
I just want to use all the existing space. Thanks for the help in advance!!
Your lsblk output shows that you have a 300G disk but nvme0n1p1 is only 8G. You need to first grow your partition to fill the disk and then expand your filesystem to fill the partition:
Snapshot all EBS volumes you care about before doing any resize operations on them.
Install growpart: sudo yum install cloud-utils-growpart
Resize the partition: sudo growpart /dev/nvme0n1 1
Reboot: reboot now
Run lsblk and verify that the partition is now the full disk size
You may still have to run sudo resize2fs /dev/nvme0n1p1 to expand the filesystem (note: the partition, not the whole disk)
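Putting the steps together as one sequence (device names taken from the question; this is destructive territory, so snapshot the volume first):

```shell
sudo yum install -y cloud-utils-growpart
sudo growpart /dev/nvme0n1 1      # grow partition 1 to fill the 300G disk
lsblk                             # verify nvme0n1p1 now spans the disk
sudo resize2fs /dev/nvme0n1p1     # grow the ext4 fs on the partition
df -h /                           # confirm the extra space is visible
```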

My mounted EBS volume is not showing up

Trying to mount a 384G volume from an old instance to a newly configured instance (8G). The attached 384G volume shows up in lsblk, but it doesn't appear in df -h at all. What am I doing wrong?
[ec2-user@ip-10-111-111-111 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 384G 0 disk
xvda1 202:1 0 8G 0 disk /
[ec2-user#ip-10-111-111-111 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.5G 6.4G 19% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
Note: On EC2 instance dashboard it displays
Root device: /dev/sda1
Block devices: /dev/sda1 /dev/sdf
df -k only shows mounted volumes.
You will need to mount your volume first, like this: mount /dev/xvdf /mnt. Then you will be able to access its content from /mnt and see it in df -k.
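Before mounting, it is worth confirming that the volume already carries a filesystem, since mounting does not create one and running mkfs here would erase the old instance's data. A sketch:

```shell
# Identify any existing filesystem on the attached volume.
sudo file -s /dev/xvdf   # e.g. "ext4 filesystem data" vs bare "data"
lsblk -f                 # the FSTYPE column shows the same information

# If a filesystem is present, mount it and check:
sudo mkdir -p /mnt
sudo mount /dev/xvdf /mnt
df -h /mnt
```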
For those landing here after not finding their xvdf devices on AWS EC2 C5 or M5 instances: they are renamed to /dev/nvme... as per the docs:
For C5 and M5 instances, EBS volumes are exposed as NVMe block
devices. The device names that you specify are renamed using NVMe
device names (/dev/nvme[0-26]n1). For more information, see Amazon EBS
and NVMe.
If this is Windows, follow this:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html#recognize-expanded-volume-windows-disk-management

Mount ext2 as totally readonly system from busybox

I'm using an ext2 FS on my embedded device (busybox) in read-only mode. But when I checked whether the FS is truly read-only, I found something strange. When I type cat /proc/mounts:
rootfs / rootfs RW 0 0
/dev/root / ext2 ro,relatime,errors=continue 0 0
...
But in: /boot/grub/menu.lst
kernel=/boot/bzimage root=/dev/sda1 ro
in fstab:
/dev/root / ext2 ro,noatime,nodiratime,errors=remount-ro 0 1
in inittab:
null::sysinit:/bin/mount -a
/bin/mount:
rootfs on / type rootfs (RW)
/dev/root on / type ext2 (ro,relatime,errors=continue)
I can't understand why rootfs is mounted RW (in both /proc/mounts and the /bin/mount output), and why the mount arguments from fstab don't correspond to the arguments shown by /bin/mount.
rootfs is the initial root filesystem at /. It lives in RAM only and becomes unreachable once /dev/root is mounted over it, so its RW flag is harmless. See:
/usr/src/linux/Documentation/filesystems/ramfs-rootfs-initramfs.txt
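You can see the stacking directly: /proc/mounts lists every filesystem mounted on /, in mount order, and only the last entry is the one you can actually reach. A quick check that works under busybox too:

```shell
# Every mount on / appears here; the last line wins.
grep ' / ' /proc/mounts
```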
