I am trying to expand an EBS volume from 120GB to 200GB on a c5d.xlarge EC2 instance running Ubuntu. I am following this guide.
So far, I have created a snapshot of the current EBS volume and then expanded it to 200GB.
sudo lsblk
nvme1n1 259:0 0 93.1G 0 disk
nvme0n1 259:1 0 200G 0 disk
└─nvme0n1p1 259:2 0 120G 0 part /
Following the guide, I have tried to expand the nvme0n1p1 partition to 200GB:
sudo growpart /dev/nvme0n1p1 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1p1
sfdisk: /dev/nvme0n1p1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1p1
It seems the partition is not recognized.
I have also tried the resize2fs command, but it doesn't do anything:
sudo resize2fs /dev/nvme0n1p1
resize2fs 1.44.1 (24-Mar-2018)
The filesystem is already 31457019 (4k) blocks long. Nothing to do!
Any idea how I can expand the partition to the correct size?
Actually, the growpart command was wrong. It must follow the syntax growpart path/to/device_name partition_number, so the right command is:
sudo growpart /dev/nvme0n1 1
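For completeness, a minimal sketch of the whole sequence on this instance, assuming the root filesystem is ext4 on /dev/nvme0n1p1 as shown in the lsblk output above:
# grow partition 1 of the nvme0n1 disk so it fills the enlarged 200G volume
sudo growpart /dev/nvme0n1 1
# grow the ext4 filesystem so it fills the enlarged partition
sudo resize2fs /dev/nvme0n1p1
# verify the new partition and filesystem sizes
lsblk
df -h /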
For the filesystem:
df -kh /store
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 50G 45G 1.9G 97% /store
I have increased the size of /dev/sdb by 20G and then added that space to /dev/sdb1 by deleting the partition and recreating it:
sdb 8:16 0 70G 0 disk
└─sdb1 8:17 0 70G 0 part
However, I am not sure how to make the extra space visible in /store.
pvresize gives the error "no physical volume found".
It is a cloud VM.
vgs and vgdisplay do not show any VG either.
The fstab entry is below:
/dev/sdb1 /store/ ext4 defaults 0 0
From the output you posted, I don't see any indication that the system is using LVM; rather, the filesystem is directly on /dev/sdb1. So simply try umount /store followed by resize2fs /dev/sdb1.
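A rough sketch of that sequence, assuming nothing else is using /store and that the fstab entry shown above is in place:
# unmount the filesystem first, as suggested above
sudo umount /store
# resize2fs may require a forced check before an offline resize
sudo e2fsck -f /dev/sdb1
# grow the ext4 filesystem on /dev/sdb1 to fill the resized partition
sudo resize2fs /dev/sdb1
# remount via the existing /etc/fstab entry and check the new size
sudo mount /store
df -kh /store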
I am trying to increase the size of the root volume by running this command: growpart /dev/nvme0n1p1 83
This is the error I receive
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/nvme0n1p1
sfdisk: /dev/nvme0n1p1: does not contain a recognized partition table
FAILED: failed to dump sfdisk info for /dev/nvme0n1p1
How can I get past this error?
Use the resize2fs command instead, e.g.:
$ sudo resize2fs /dev/nvme0n1p1
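To confirm the filesystem actually grew, a quick before/after check, assuming the root filesystem is mounted at /:
# compare the reported size before and after the resize
df -h /
sudo resize2fs /dev/nvme0n1p1
df -h /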
I am trying to add more storage to my device.
When I run df -h, I get:
[root@ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all the remaining storage to /dev/nvme0n1p1.
When I run lsblk, I get:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I have tried googling for AWS instructions but am still quite confused, since most of them cover setting up a brand new instance, while in my case I cannot stop the instance.
I cannot run mkfs, since the filesystem is already there. Also, it seems the disk is already mounted? I may be misunderstanding what mounting means. I just want to use all of the existing space.
Thanks for the help in advance!
Your lsblk output shows that you have a 300G disk but your nvme0n1p1 partition is only 8G. You need to first grow the partition to fill the disk and then expand the filesystem to fill the partition (see the sketch after these steps):
Snapshot all EBS volumes you care about before doing any resize operations on them.
Install growpart
sudo yum install cloud-utils-growpart
Resize the partition: growpart /dev/nvme0n1 1
Reboot: reboot now
Run lsblk and verify that the partition is now the full disk size
You may still have to run sudo resize2fs /dev/nvme0n1p1 to expand the filesystem.
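Putting those steps together, a minimal sketch for this particular disk, assuming an ext4 root filesystem on /dev/nvme0n1p1 (partition 1 of /dev/nvme0n1):
# install growpart on yum-based distributions such as Amazon Linux
sudo yum install cloud-utils-growpart
# grow partition 1 so it uses the whole 300G disk
sudo growpart /dev/nvme0n1 1
# verify the partition now spans the disk (reboot first if it does not show up)
lsblk
# grow the ext4 filesystem so it fills the partition
sudo resize2fs /dev/nvme0n1p1
# confirm the new size
df -h /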
I created a d2.xlarge EC2 instance on AWS which returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an ext4 filesystem on xvdc:
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem:
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to this point, everything works fine.
Now, I reboot the machine (first stop it and then start it) and then SSH back into it.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I assumed the filesystem would remain even after a reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described above.
You need to read up on EC2 Ephemeral Instance Store volumes. When you stop an instance with this type of volume, the data on the volume is lost. You can reboot by performing a reboot/restart operation, but if you do a stop followed later by a start, the data is lost. A stop followed by a start is not considered a "reboot" on EC2: when you stop an instance it is completely shut down, and when you start it back later it is basically recreated on different backing hardware.
In other words what you describe isn't an issue, it is expected behavior. You need to be very aware of how these volumes work before depending on them.
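Given that, if you still want to use the instance store after a stop/start cycle, the filesystem and mount simply have to be recreated each time, reusing the same commands from the question (the data is gone either way, so repeating mkfs is harmless):
# after each stop/start the instance store volume comes back blank
sudo mkfs -t ext4 /dev/xvdc
sudo mkdir -p /mnt5
sudo mount /dev/xvdc /mnt5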
I don't have much experience with Linux and mounting/unmounting things. I'm using Amazon AWS, have booted up an EC2 instance with an Ubuntu image, and have attached a new EBS volume to it. From the dashboard, I can see that the volume is attached to /dev/sda1.
Now, I see from this guide from Amazon that the device name will likely be changed by the kernel. So it's most likely that my /dev/sda1 device will show up as, maybe, /dev/xvda1.
So I logged in using terminal. I do ls /dev/ and I indeed see xvda1 on there. But I also see xvda. Now I want to format the device. But I don't know if the unformatted device is attached to xvda1 or xvda. I cannot list the content of /dev/xvda1 and /dev/xvda (it says ls: cannot access /dev/xvda1/: Not a directory). I guess I have to format it first.
I tried to format using sudo mkfs.ext4 /dev/xvda1. It says: /dev/xvda1 is mounted; will not make a filesystem here!.
I tried to format using sudo mkfs.ext4 /dev/xvda. It says: /dev/xvda is apparently in use by the system; will not make a filesystem here!
How can I format the volume?
EDIT:
The result of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
I then tried to use the command sudo mkfs -t ext4 /dev/xvda, but the same error message appears: /dev/xvda is apparently in use by the system; will not make a filesystem here!
When I try to use the command mount /dev/xvda /webserver, an error message appears: mount: /dev/xvda already mounted or /webserver busy. Some websites indicate that this is also probably because of a corrupted or unformatted filesystem. So I guess I have to format it before I am able to mount it.
First of all, you are trying to format /dev/xvda1, which is the root device. Why?
Second, if you have added a new EBS volume, then follow the steps below.
List Block Devices
This will give you the list of block devices attached to your EC2 instance, which will look like:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 100G 0 disk
xvda1 202:1 0 8G 0 disk /
Out of these, xvda1 is the / (root) device and xvdf is the new EBS volume that you need to format and mount.
Format Device
sudo mkfs -t ext4 device_name # device_name is xvdf here
Create a Mount Point
sudo mkdir /mount_point
Mount the Volume
sudo mount device_name mount_point # here device_name is /dev/xvdf
Make an entry in /etc/fstab
device_name mount_point file_system_type fs_mntops fs_freq fs_passno
Execute
sudo mount -a
This will read your /etc/fstab file and, if it is OK, mount the EBS volume at mount_point.
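As a concrete illustration of those steps, here is a sketch that assumes the new 100G volume really is /dev/xvdf and uses /mount_point as the mount point (adjust both to your setup):
# format the new EBS volume with ext4
sudo mkfs -t ext4 /dev/xvdf
# create a mount point and mount the volume
sudo mkdir /mount_point
sudo mount /dev/xvdf /mount_point
# add an /etc/fstab entry so the volume is mounted on boot
# (nofail keeps the instance bootable even if the volume is missing)
echo '/dev/xvdf /mount_point ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab
# re-read /etc/fstab to check the entry is valid
sudo mount -a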