Increase size of Amazon EBS volume: "Unknown label" - linux

I am trying to increase the size of the root volume, and ran this command: growpart /dev/nvme0n1p1 83
This is the error I receive:
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1p1
sfdisk: /dev/nvme0n1p1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1p1
How can I get past this error?

Use the resize2fs command instead, e.g.:
$ sudo resize2fs /dev/nvme0n1p1
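Note that resize2fs only grows the filesystem up to the current size of its partition. If the partition itself still has to be enlarged first, growpart takes the disk device and the partition number as separate arguments (as the related answer below also points out). A minimal sketch, assuming the root filesystem is ext4 on partition 1 of /dev/nvme0n1:
$ sudo growpart /dev/nvme0n1 1     # grow partition 1 to fill the disk
$ sudo resize2fs /dev/nvme0n1p1    # then grow the ext4 filesystem to fill the partition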

Related

AWS EC2: error expanding EBS volume partition

I am trying to expand an EBS volume from 120GB to 200GB on a c5d.xlarge EC2 instance running Ubuntu. I am following this guide.
So far, I have created a snapshot of the current EBS volume and then expanded it to 200GB.
sudo lsblk
nvme1n1 259:0 0 93.1G 0 disk
nvme0n1 259:1 0 200G 0 disk
└─nvme0n1p1 259:2 0 120G 0 part /
Following the guide, I have tried to expand the nvme0n1p1 partition to 200GB:
sudo growpart /dev/nvme0n1p1 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1p1
sfdisk: /dev/nvme0n1p1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1p1
It seems the partition is not recognized.
I have also tried the resize2fs command, but it doesn't do anything:
sudo resize2fs /dev/nvme0n1p1
resize2fs 1.44.1 (24-Mar-2018)
The filesystem is already 31457019 (4k) blocks long. Nothing to do!
Any idea how I can make the partition expand to the correct size?
Actually, the growpart command was wrong. It must follow the syntax growpart /path/to/disk_device partition_number, so the right command is:
sudo growpart /dev/nvme0n1 1
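After the partition has been grown, the filesystem still has to be grown to match. A short sketch, assuming the root filesystem is ext4 (for XFS you would use xfs_growfs instead):
sudo growpart /dev/nvme0n1 1     # grow partition 1 to the new 200GB volume size
sudo resize2fs /dev/nvme0n1p1    # grow the ext4 filesystem to fill the partition
df -h /                          # confirm the new size is visible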

Openstack volume creation error in kolla ansible setup

When we create a volume in OpenStack, we get the errors below in the cinder-volume.log file:
Error creating Volume: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: 'sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvcreate -T -V 10g -n volume-2bff4396-a269-4207-8ea0-dea4ff1c4a85 cinder-volumes/cinder-volumes-pool'
Exit code: 5
Stdout: ' Using default stripesize 64.00 KiB.\n'
Stderr: ' device-mapper: message ioctl on (253:4) failed: Operation not supported\n Failed to process thin pool message "delete 143".
Failed to suspend cinder-volumes/cinder-volumes-pool with queued messages.
LVM physical volume details:
PV         VG              Fmt   Attr  PSize     PFree
/dev/sda   cinder-volumes  lvm2  a--   <1.82t    186.08g
/dev/sdb   cinder-volumes  lvm2  a--   <1.82t    0
/dev/sdc1  ubuntu-vg       lvm2  a--   <465.76g  4.00m
Also, when I delete existing volumes, the freed space is not added back to the physical volumes listed above.
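For context, the lvcreate -T -V command in the log creates a thin volume inside the cinder-volumes/cinder-volumes-pool thin pool. A first diagnostic step (not a fix) is to check whether the pool's data or metadata space is exhausted, for example with the standard LVM tools:
sudo vgs cinder-volumes                                                # free space in the volume group
sudo lvs -a -o lv_name,data_percent,metadata_percent cinder-volumes    # thin pool data/metadata usage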

mount already mounted or busy

I have an Amazon EC2 instance (Ubuntu 12.04) to which I have attached two 250 GB volumes. Inadvertently, the volumes got unmounted. When I tried mounting them again with the following command:
sudo mount /dev/xvdg /data
this is the error I get:
mount: /dev/xvdg already mounted or /data busy
Then I tried unmounting it as follows:
umount /dev/xvdg
but it tells me that the volume is not mounted:
umount: /dev/xvdg is not mounted (according to mtab)
I tried lsof to check for any locks, but there weren't any.
The lsblk output is as below:
Any help will be appreciated. What do I need to do to mount the volumes back without losing the data on them?
Ok, figured it out. Thanks @Petesh and @mootmoot for pushing me in the right direction. I was trying to mount single volumes instead of a RAID 0 array. The /dev/md127 device was running, so I stopped it first with the following command:
sudo mdadm --stop /dev/md127
Then I assembled the RAID 0 array:
sudo mdadm --assemble --uuid <RAID array UUID here> /dev/md0
Once the /dev/md0 array became active, I mounted it on /data.
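For reference, a minimal sketch of checking the reassembled array before mounting it (device names as in the commands above; the mount point is assumed to be /data):
cat /proc/mdstat             # the md0 array should show as active
sudo blkid /dev/md0          # confirm the filesystem type on the array
sudo mount /dev/md0 /data    # mount the array, not the individual member devices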
Try umount /dev/xvdg* and umount /data, and then:
mount /dev/xvdg1 /data
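If the device still reports busy after that, it can help to check what is holding it before retrying. A sketch with standard tools (the question already tried lsof):
sudo fuser -vm /data             # processes using the mount point, if any
ls /sys/block/xvdg/holders/      # an md or device-mapper device claiming the disk shows up here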

missing superblock on encrypted filesystem

I have a hard drive with Ubuntu 14 installed. The whole disk is encrypted, and my default user's home directory is encrypted as well. Lately, after a system crash, I am presented with a BusyBox (initramfs) prompt on startup. When I choose to start in recovery mode, I can catch several error messages like "... Failed to read block at offset xyz ...".
I searched and found this Q&A: Boot drops to a (initramfs) prompts/busybox
I booted from a CD and followed the instructions. However, I am only able to do ...
sudo dumpe2fs /dev/sda1
... and then continue to check and repair superblocks on /dev/sda1.
If I try ...
sudo dumpe2fs /dev/sda2
... I get the following error message:
dumpe2fs: Attempted to read block from filesystem resulted
in short read while trying to open /dev/sda2
Couldn't find valid filesystem superblock.
gparted shows the partitioning and file systems of the drive as follows:
partition      file system   size   used   unused   flags
----------------------------------------------------------
/dev/sda1      ext2          243M   210M   32M      boot
/dev/sda2      extended      465G   -      -        -
/dev/sda5  !!  crypt-luks    465G   -      -        -
unallocated    unallocated   1M     -      -        -
The warning (!!) at sda5 says "Linux Unified Key Setup encryption is not yet supported".
If I try ...
sudo dumpe2fs /dev/sda5
... it returns this error message:
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda5
Couldn't find valid filesystem superblock.
Mounting and rw-accessing sda1 works without error.
Any clues as to what the cause is and how I can repair, mount, and decrypt the filesystem to boot normally, or at least to recover the data?
The given solution is missing some commands that you need to decrypt the file system and access it. Here's the full solution:
Boot from Ubuntu USB
cryptsetup luksOpen /dev/rawdevice somename
fsck /dev/mapper/somename
Get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
Fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
Verify:
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
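If the root filesystem lives on LVM inside the LUKS container (the default Ubuntu full-disk-encryption layout), the volume group also has to be activated before /dev/mapper/ubuntu--vg-root appears. A sketch, assuming the encrypted partition is /dev/sda5 as shown in the gparted listing:
sudo cryptsetup luksOpen /dev/sda5 crypt1    # unlock the LUKS container (asks for the passphrase)
sudo vgchange -ay                            # activate the LVM volume groups found inside it
ls /dev/mapper/                              # ubuntu--vg-root should now be listed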
This worked for me:
Boot from Ubuntu USB
Get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
Fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
Verify:
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
I used the following links as sources:
https://askubuntu.com/questions/137655/boot-drops-to-a-initramfs-prompts-busybox
https://serverfault.com/questions/375090/using-fsck-to-check-and-repair-luks-encrypted-disk

XFS grow not working

So I have the following setup:
[ec2-user@ip-172-31-9-177 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 80G 0 disk
├─xvda1 202:1 0 6G 0 part /
└─xvda2 202:2 0 4G 0 part /data
All the tutorials I find say to use xfs_growfs <mountpoint>, but that has no effect, nor does the -d option:
[ec2-user@ip-172-31-9-177 ~]$ sudo xfs_growfs -d /
meta-data=/dev/xvda1 isize=256 agcount=4, agsize=393216 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=1572864, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data size unchanged, skipping
I should add that I am using:
[ec2-user@ip-172-31-9-177 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[ec2-user@ip-172-31-9-177 ~]$ xfs_info -V
xfs_info version 3.2.0-alpha2
[ec2-user@ip-172-31-9-177 ~]$ xfs_growfs -V
xfs_growfs version 3.2.0-alpha2
Before running xfs_growfs, you must resize the partition the filesystem sits on.
Give this one a go:
sudo growpart /dev/xvda 1
As per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
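Growing the partition only moves its end boundary; the XFS filesystem still has to be grown afterwards, which can be done while it is mounted. A short sketch for the layout above, assuming / is the filesystem being enlarged:
sudo growpart /dev/xvda 1    # grow partition 1 of the disk
sudo xfs_growfs -d /         # then grow the mounted XFS filesystem to the new partition size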
You have a 4GB XFS file system on a 4GB partition, so there is no work for xfs_growfs to do.
To fix this, enlarge the partition with parted, then use xfs_growfs to expand the filesystem. You can delete and recreate the partition with parted rm without losing data, as long as the new partition starts at the same sector as the old one.
# umount /data
# parted
GNU Parted 3.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
....
(parted) rm 2
(parted) mkpart
....
(parted) print
(parted) quit
# mount /dev/xvda2 /data
# xfs_growfs /data
Done. No need to update /etc/fstab as the partition numbers are the same.
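The critical detail hidden behind the "...." above is that the new partition must start at the same sector as the old one. A sketch of that step inside parted, assuming an MBR/msdos partition table; the start sector 12584960s is only a hypothetical example, use the value shown by print:
(parted) unit s
(parted) print
(note the Start value of partition 2, e.g. 12584960s)
(parted) rm 2
(parted) mkpart primary xfs 12584960s 100%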
Before running xfs_growfs, please do the following step first:
# growpart <disk_device> <partition_number>
# growpart /dev/xvda 1
CHANGED: partition=1 start=4096 old: size=31453151 end=31457247 new: size=41938911,end=41943007
# xfs_growfs -d /
Many servers won't have the growpart utility by default, so you can follow the steps below.
Install growpart using the package manager for your OS distribution; the example below is for RPM/Fedora-based systems:
yum install cloud-utils-growpart
Run the growpart command on the disk whose partition has to change:
growpart /dev/xvda 1
Finally, run the xfs_growfs command:
xfs_growfs -d /dev/xvda1
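On Debian/Ubuntu-based systems the package name differs; to the best of my knowledge the equivalent steps are:
sudo apt-get install cloud-guest-utils    # provides growpart on Debian/Ubuntu
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /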
