XFS grow not working - Linux

So I have the following setup:
[ec2-user@ip-172-31-9-177 ~]$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  80G  0 disk
├─xvda1 202:1    0   6G  0 part /
└─xvda2 202:2    0   4G  0 part /data
All the tutorials I find say to use xfs_growfs <mountpoint>, but that has no effect, nor does the -d option:
[ec2-user@ip-172-31-9-177 ~]$ sudo xfs_growfs -d /
meta-data=/dev/xvda1             isize=256    agcount=4, agsize=393216 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1572864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping
I should add that I am using:
[ec2-user@ip-172-31-9-177 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[ec2-user@ip-172-31-9-177 ~]$ xfs_info -V
xfs_info version 3.2.0-alpha2
[ec2-user@ip-172-31-9-177 ~]$ xfs_growfs -V
xfs_growfs version 3.2.0-alpha2

Before running xfs_growfs, you must resize the partition the filesystem sits on.
Give this one a go:
sudo growpart /dev/xvda 1
As per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
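Putting the two steps together (a minimal sketch, assuming the goal is to grow the root filesystem on /dev/xvda1 from the lsblk output above): first grow the partition, then grow the mounted XFS filesystem.
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /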

You have a 4GB XFS filesystem on a 4GB partition, so there is no work to do.
To fix this, enlarge the partition with parted, then use xfs_growfs to expand the filesystem. You can use parted rm without losing data.
# umount /data
# parted
GNU Parted 3.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
....
(parted) rm 2
(parted) mkpart
....
(parted) print
(parted) quit
# mount /dev/xvda2 /data
# xfs_growfs /data
Done. No need to update /etc/fstab as the partition numbers are the same.
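On newer parted releases (3.2 and later), resizepart can grow the partition in place without the rm/mkpart step; a sketch under that assumption, for the same /data example:
# umount /data
# parted /dev/xvda resizepart 2 100%
# mount /dev/xvda2 /data
# xfs_growfs /data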

Before running xfs_growfs, please do the following step first:
# growpart <device to be extended> <partition number>
# growpart /dev/xvda 1
CHANGED: partition=1 start=4096 old: size=31453151 end=31457247 new: size=41938911,end=41943007
# xfs_growfs -d /

Many servers won't have the growpart utility by default, so you can follow the steps below.
Install growpart using your distribution's package manager; the command below is for RPM/Fedora-based systems:
yum install cloud-utils-growpart
Run the growpart command on the partition that has to change:
growpart /dev/xvda 1
Finally, run the xfs_growfs command on the mounted filesystem:
xfs_growfs -d /
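To confirm the filesystem actually picked up the new space (a quick check, not part of the original answer):
df -h /
xfs_info /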

Related

AWS EC2: error expanding EBS volume partition

I am trying to expand an EBS volume from 120GB to 200GB on a c5d.xlarge EC2 instance running Ubuntu. I am following this guide.
So far, I have created a snapshot of the current EBS volume and then expanded it to 200GB.
sudo lsblk
nvme1n1     259:0    0 93.1G  0 disk
nvme0n1     259:1    0  200G  0 disk
└─nvme0n1p1 259:2    0  120G  0 part /
Following the guide, I have tried to expand the nvme0n1p1 partition to 200GB:
sudo growpart /dev/nvme0n1p1 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1p1
sfdisk: /dev/nvme0n1p1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1p1
It seems the partition is not recognized.
I have also tried the resize2fs command, but it doesn't do anything:
sudo resize2fs /dev/nvme0n1p1
resize2fs 1.44.1 (24-Mar-2018)
The filesystem is already 31457019 (4k) blocks long. Nothing to do!
Any idea how I can make the partition expand to the correct size?
Actually, the growpart command was wrong. It must follow the syntax growpart path/to/device_name partition_number, so the right command is:
sudo growpart /dev/nvme0n1 1
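growpart only resizes the partition; the filesystem on it still needs to be grown. Assuming the root filesystem is ext4 (as the resize2fs attempt in the question suggests), something like this should finish the job:
sudo resize2fs /dev/nvme0n1p1
df -h /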

mkfs.vfat: unable to open {partition}: No such file or directory (command succeeds, but throws this error and blocks rest of script)

Update: I got this working but am still not 100% sure why. I've appended the fully and consistently working script to the end for reference.
I'm trying to script a series of disk partition commands using sgdisk and mkfs.vfat. I'm working from a Live USB (NixOS 21pre), have a blank 1TB M.2 SSD, and am creating a 1GB (954MiB) EFI boot partition and a 999GB ZFS partition.
Everything works up until I try to create a FAT32 filesystem on the EFI partition, using mkfs.vfat, where I get the error in the title.
However, the odd thing is, the mkfs.vfat command succeeds, but throws that error anyway and blocks the rest of the script. Any idea why it's doing this and how to fix it?
Starting with an unformatted 1TB M.2 SSD:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Error: /dev/sda: unrecognised disk label
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
Script:
$ ls
total 4
drwxr-xr-x 2 nixos users 60 May 18 20:25 .
drwx------ 17 nixos users 360 May 18 15:24 ..
-rwxr-xr-x 1 nixos users 2225 May 18 19:59 partition.sh
$ cat partition.sh
#!/usr/bin/env bash
#make gpt partition table and boot & rpool partitions for ZFS on 1TB M.2 SSD
#error handling on
set -e
#wipe the disk with -Z, then create two partitions, a 1GB (954MiB) EFI boot partition and a ZFS root partition consisting of the rest of the drive, then print the results
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
sgdisk -Z $DISK
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
sgdisk -p /dev/sda
#make a FAT32 filesystem on the EFI partition, then mount it
#mkfs.vfat -F 32 ${DISK}-part1 (troubleshooting with hardcoded version below)
mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkdir -p /mnt/boot
mount ${DISK}-part1 /mnt/boot
Result (everything fine until mkfs.vfat, which throws error and blocks the rest of the script):
$ sudo sh partition.sh
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries in memory.
Setting name!
partNum is 0
The operation has completed successfully.
Setting name!
partNum is 1
The operation has completed successfully.
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Model: WDC WDS100T2B0B-
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77ED6A41-E722-4FFB-92EC-975A37DBCB97
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number  Start (sector)  End (sector)  Size       Code  Name
     1            2048       1955839  954.0 MiB  EF00  efi
     2         1955840    1953525134  930.6 GiB  BF01  zroot
mkfs.fat 4.1 (2017-01-24)
mkfs.vfat: unable to open /dev/disk/by-id/wwn-0x5001b448b94488f8-part1: No such file or directory
Verifying the partitioning and FAT32 creation commands worked:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name   Flags
 1      1049kB  1001MB  1000MB  fat32        efi    boot, esp
 2      1001MB  1000GB  999GB                zroot
FWIW, the same command works on the command line with no error:
$ sudo mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkfs.fat 4.1 (2017-01-24)
Success. But why is there no error on the command line, yet an error in the script?
Update: fully and consistently working script:
#!/usr/bin/env bash
#make UEFI (GPT) partition table and two partitions (FAT32 boot and ZFS rpool) on 1TB M.2 SSD
#error handling on
set -e
#vars
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
POOL='rpool'
#0. if /mnt/boot is mounted, umount it; if any NixOS filesystems are mounted, unmount them
if mount -l | grep -q '/mnt/boot'; then
umount -f /mnt/boot
fi
if mount -l | grep -q '/mnt/nix'; then
umount -fR /mnt
fi
#1. if a zfs pool exists, delete it
if zpool list | grep -q $POOL; then
zfs unmount -a
zpool export $POOL
zpool destroy -f $POOL
fi
#2. wipe the disk
sgdisk -Z $DISK
wipefs -a $DISK
#3. create two partitions, a 1GB (954MiB) EFI boot partition and a ZFS root partition consisting of the rest of the drive, then print the results
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efiboot $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zfsroot $DISK
sgdisk -p /dev/sda
#4. notify the OS of partition updates, and print partition info
partprobe
parted ${DISK} print
#5. make a FAT32 filesystem on the EFI boot partition
mkfs.vfat -F 32 ${DISK}-part1
#6. notify the OS of partition updates, and print new partition info
partprobe
parted ${DISK} print
#mount the partitions in the nixos-zfs-pool-dataset-create.sh script. Make sure to first mount the ZFS root dataset on /mnt before mounting any subdirectories of /mnt.
It may take time for the kernel to be notified about partition changes. Try calling partprobe before mkfs to ask the kernel to re-read the partition table.
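A minimal sketch of that fix applied to the original script; udevadm settle is an extra assumption here, added to wait for the ${DISK}-part1 symlink to appear before mkfs.vfat runs:
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
partprobe $DISK
udevadm settle
mkfs.vfat -F 32 ${DISK}-part1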

About formatting new EBS volume on Amazon AWS

I don't have much experience with Linux and mounting/unmounting things. I'm using Amazon AWS, have booted up an EC2 instance with an Ubuntu image, and have attached a new EBS volume to it. From the dashboard, I can see that the volume is attached as /dev/sda1.
Now, I see from this guide from Amazon that the device name will likely be changed by the kernel, so my /dev/sda1 device will most likely show up as something like /dev/xvda1.
So I logged in using a terminal. I ran ls /dev/ and I indeed see xvda1 there, but I also see xvda. Now I want to format the device, but I don't know whether the unformatted device is xvda1 or xvda. I cannot list the contents of /dev/xvda1 or /dev/xvda (it says ls: cannot access /dev/xvda1/: Not a directory). I guess I have to format it first.
I tried to format using sudo mkfs.ext4 /dev/xvda1. It says: /dev/xvda1 is mounted; will not make a filesystem here!.
I tried to format using sudo mkfs.ext4 /dev/xvda. It says: /dev/xvda is apparently in use by the system; will not make a filesystem here!
How can I format the volume?
EDIT:
The result of the lsblk command:
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   8G  0 part /
I then tried to use the command sudo mkfs -t ext4 /dev/xvda, but the same error message appears: /dev/xvda is apparently in use by the system; will not make a filesystem here!
When I tried to use the command mount /dev/xvda /webserver, an error message appears: mount: /dev/xvda already mounted or /webserver busy. Some websites indicate that this is also probably because of a corrupted or unformatted file system, so I guess I have to format it first before being able to mount it.
First of all, you are trying to format /dev/xvda1, which is the root device. Why?
Second, if you have added a new EBS volume, then follow the steps below.
List block devices
This will give you the list of block devices attached to your EC2 instance, which will look like:
[ec2-user ~]$ lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvdf  202:80   0  100G  0 disk
xvda1 202:1    0    8G  0 disk /
Of these, xvda1 is / (root) and xvdf is the one you need to format and mount (the new EBS volume).
Format Device
sudo mkfs -t ext4 device_name # device_name is xvdf here
Create a Mount Point
sudo mkdir /mount_point
Mount the Volume
sudo mount device_name mount_point # here device_name is /dev/xvdf
Make an entry in /etc/fstab
device_name mount_point file_system_type fs_mntops fs_freq fs_passno
Execute
sudo mount -a
This will read your /etc/fstab file and, if it is OK, mount the EBS volume at mount_point.
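As a concrete example of that /etc/fstab entry, using the device and placeholder mount point from the steps above (nofail is an extra, commonly recommended option so the instance still boots if the volume is ever missing):
/dev/xvdf   /mount_point   ext4   defaults,nofail   0   2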

missing superblock on encrypted filesystem

I have a hard drive with Ubuntu 14 installed. The whole disk is encrypted, and my default user's home directory is encrypted as well. Lately, after a system crash, I am presented with a BusyBox (initramfs) prompt on startup. When I choose to start in recovery mode, I can catch several error messages like "... Failed to read block at offset xyz ...".
I searched and found this Q&A: Boot drops to a (initramfs) prompts/busybox
I booted from a CD and followed the instructions. However, I am only able to do ...
sudo dumpe2fs /dev/sda1
... and then continue to check and repair superblocks on /dev/sda1 .
If I try ...
sudo dumpe2fs /dev/sda2
... I get the following error message:
dumpe2fs: Attempted to read block from filesystem resulted
in short read while trying to open /dev/sda2
Couldn't find valid filesystem superblock.
gparted shows the partitioning and file systems of the drive as follows:
partition        file system   size  used  unused  flags
---------------------------------------------------------
/dev/sda1        ext2          243M  210M  32M     boot
/dev/sda2        extended      465G  -     -       -
  /dev/sda5 !!   crypt-luks    465G  -     -       -
unallocated      unallocated   1M    -     -       -
The warning (!!) at sda5 says "Linux Unified Key Setup encryption is not yet supported".
If I try ...
sudo dumpe2fs /dev/sda5
... it returns this error message:
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda5
Couldn't find valid filesystem superblock.
Mounting and rw-accessing sda1 works without error.
Any clues as to what the cause is and how I can repair, mount, and decrypt the filesystem to boot normally, or at least recover the data?
The given solution misses some commands that you need to decrypt the file system and access it. Here's the full solution:
Boot from Ubuntu USB
cryptsetup luksOpen /dev/rawdevice somename
fsck /dev/mapper/somename
Get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
Fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
Verify:
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
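If /dev/mapper/ubuntu--vg-root does not appear after the luksOpen step, the LVM volume group inside the LUKS container probably still needs activating; a likely extra step on the default Ubuntu LVM-on-LUKS layout (an assumption, not part of the original answer):
sudo vgchange -ay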
This worked for me:
Boot from Ubuntu USB
get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
verify
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
I used the following links as sources:
https://askubuntu.com/questions/137655/boot-drops-to-a-initramfs-prompts-busybox
https://serverfault.com/questions/375090/using-fsck-to-check-and-repair-luks-encrypted-disk

Ubuntu mount -t command

I use the following command to mount "/dev/sdb1" on the "/storage" directory:
mount -t ext3 /dev/sdb1 /storage
After running the above command, I can see it with "df -h":
/dev/sdb1 147G 188M 140G 1% /storage
But after I restart the server, it disappears, and I have to run the mount command again.
Is there a command that can keep the mount even after I restart the server?
Add the following line to your /etc/fstab file:
# device name mount point fs-type options dump-freq pass-num
/dev/sdb1 /storage ext3 defaults 0 0
You can run (as root):
echo "/dev/sdb1 /storage ext3 defaults 0 0" >> /etc/fstab
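To check the new entry without rebooting (not part of the original answer): mount -a mounts everything listed in /etc/fstab that is not already mounted, and df -h confirms the result.
sudo mount -a
df -h /storage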
You need to add relevant information to /etc/fstab.
