About formatting a new EBS volume on Amazon AWS - Linux

I don't have much experience with Linux and mounting/unmounting things. I'm using Amazon AWS: I have booted up an EC2 instance with an Ubuntu image and attached a new EBS volume to it. From the dashboard, I can see that the volume is attached as /dev/sda1.
Now, I see from this guide from Amazon that the device name will likely be changed by the kernel, so my /dev/sda1 device will most likely show up as something like /dev/xvda1.
So I logged in over a terminal. I ran ls /dev/ and I indeed see xvda1 there, but I also see xvda. Now I want to format the device, but I don't know whether the unformatted device is xvda1 or xvda. I cannot list the contents of /dev/xvda1 or /dev/xvda (it says ls: cannot access /dev/xvda1/: Not a directory). I guess I have to format it first.
I tried to format using sudo mkfs.ext4 /dev/xvda1. It says: /dev/xvda1 is mounted; will not make a filesystem here!
I tried to format using sudo mkfs.ext4 /dev/xvda. It says: /dev/xvda is apparently in use by the system; will not make a filesystem here!
How can I format the volume?
EDIT:
The result of the lsblk command:
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
`-xvda1 202:1    0   8G  0 part /
I then tried to use the command sudo mkfs -t ext4 /dev/xvda, but the same error message appears: /dev/xvda is apparently in use by the system; will not make a filesystem here!
When I tried the command mount /dev/xvda /webserver, an error message appears: mount: /dev/xvda already mounted or /webserver busy. Some websites indicate that this is probably also caused by a corrupted or unformatted file system. So I guess I have to format it before I can mount it.

First of all, you are trying to format /dev/xvda1, which is the root device. Why?
Second, if you have added a new EBS volume, then follow the steps below.
List Block Devices
This will give you the list of block devices attached to your EC2 instance, which will look like:
[ec2-user ~]$ lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvdf  202:80   0 100G  0 disk
xvda1 202:1    0   8G  0 disk /
Here xvda1 is / (the root device) and xvdf is the new EBS volume that you need to format and mount.
Format Device
sudo mkfs -t ext4 device_name # here device_name is /dev/xvdf
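A quick sanity check first (a sketch, assuming the new volume really is /dev/xvdf as in the listing above): confirm the device has no filesystem on it yet before formatting.
sudo file -s /dev/xvdf   # prints "/dev/xvdf: data" for a blank volume; an existing filesystem would be named instead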
Create a Mount Point
sudo mkdir /mount_point
Mount the Volume
sudo mount device_name mount_point # here device_name is /dev/xvdf
Make an entry in /etc/fstab
device_name mount_point file_system_type fs_mntops fs_freq fs_passno
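For example, a sketch of one such entry, assuming the device is /dev/xvdf, the mount point is /mount_point, and the filesystem is ext4. Using the UUID reported by sudo blkid /dev/xvdf instead of the device name is safer, since device names can change across reboots, and nofail lets the instance boot even if the volume is missing:
UUID=<uuid-from-blkid>  /mount_point  ext4  defaults,nofail  0  2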
Execute
sudo mount -a
This will read your /etc/fstab file and, if the entry is valid, mount the EBS volume at mount_point.
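To double-check, you can verify the mount afterwards (assuming the same /mount_point as above):
df -h /mount_point   # should show the new volume mounted at /mount_point
lsblk                # MOUNTPOINT for the new device should now show /mount_point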

Related

Changing EC2 Instance Type modified EBS root device UUID and made disk read only. How to resolve?

I had a fully working Amazon Linux 2 instance running on the t2.small instance type. I wanted to try changing the instance to the t2.medium type to test. As I have done in the past, I simply shut down the instance, changed the type, and then restarted the instance.
After the restart, Apache was down and my sites were unreachable. I was able to log in to the instance, and when trying to start Apache I discovered that the root drive was now read-only, which prevented it from starting, among other things. Through some troubleshooting I was able to get the drive remounted and things running as normal, but every time I restart the instance it goes back to read-only and I have to perform the same fix to get it back to normal. I believe it's an issue with the root device UUID in my /etc/fstab not matching the current root device UUID. I never changed any of the attached EBS volumes, so I'm not sure how the change occurred.
Some relevant info:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
To discover the UUID mismatch/fix, I performed the following:
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  50G  0 disk
└─xvda1 202:1    0  50G  0 part /
xvdb    202:16   0  50G  0 disk
xvdf    202:80   0  50G  0 disk
└─xvdf1 202:81   0  50G  0 part
$ sudo blkid
/dev/xvda1: LABEL="/" UUID="2a7884f1-a23b-49a0-8693-ae82c155e5af" TYPE="xfs" PARTLABEL="Linux" PARTUUID="4d1e3134-c9e4-456d-a253-374c91394e99"
/dev/xvdf1: LABEL="/" UUID="a8346192-0f62-444c-9cd0-655ed0d49a8b" TYPE="ext4" PARTLABEL="Linux" PARTUUID="2688b30d-29ef-424f-9196-05ec7e4a0d80"
I had read that a possible fix would be to perform the following:
$ sudo mount -o remount,rw /
mount: /: can't find UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af.
Obviously, that didn't work. So I looked at my /etc/fstab:
#
UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1
/swapfile swap swap defaults 0 0
Seeing this mismatch, I tried:
sudo mount -o remount nouuid /
This worked: it made the root writable, and I was able to get services back up and running.
So this is how I came to believe that the problem has to do with the UUID mismatch in fstab.
My Questions:
Should I change the entry in /etc/fstab to match the current UUID (2a7884f1-a23b-49a0-8693-ae82c155e5af)?
Any idea why this happened and how I can prevent it from happening in the future?
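For reference, I assume the corrected root entry in /etc/fstab would look like the line below if I match it to the UUID that blkid reports for /dev/xvda1, keeping the options already in the file (I'd double-check the UUID on the instance and back up /etc/fstab before editing):
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1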

Determine WWID of LUN from mapped drive on Linux

I am trying to establish whether there is an easier method to determine the WWID of an iSCSI LUN from its Linux filesystem or mount point.
A frequent problem we have is when a user requests a disk expansion on a RHEL system with multiple iSCSI LUNs connected. A user will provide us with the path their LUN is mounted on, and from this we need to establish which LUN they are referring to so that we can make the increase on the storage side.
Currently we run df -h to get the filesystem name, pvdisplay to get the VG name, and then multipath -v4 -ll | grep "^mpath" to get the WWID. This feels messy, long-winded and prone to inconsistent interpretation.
Is there a more concise command we can run to determine the WWID of the device?
Here's one approach. The output format leaves something to be desired - it's more suited to eyeballs than programs.
lsblk understands the mapping of a mounted filesystem down through the LVM and multipath layers to the underlying block devices. In the output below, /dev/sdc is my iSCSI-attached LUN, attached via one path to the target. It contains the volume group vg1 and a logical volume lv1. /mnt/tmp is where I have the filesystem on the LV mounted.
$ sudo lsblk
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdc                                   8:32   0  128M  0 disk
└─360a010a0b43e87ab1962194c4008dc35 253:4    0  128M  0 mpath
  └─vg1-lv1                         253:3    0  124M  0 lvm   /mnt/tmp
At the second level is the SCSI WWN (360a010...), courtesy of multipathd.
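If you want something a little more script-friendly, a sketch (assuming /mnt/tmp is the mount point the user gives you) is to let findmnt resolve the mount point to its source device and feed that to lsblk in inverse mode, which walks back down through the LVM and multipath layers; the mpath line in the output is the WWID-named device.
$ sudo lsblk -s -o NAME,TYPE "$(findmnt -n -o SOURCE /mnt/tmp)"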

Filesystem for a partition goes missing EC2 reboot

I created a d2.xlarge EC2 instance on AWS, which gives the following output:
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
`-xvda1 202:1    0    8G  0 part /
xvdb    202:16   0  1.8T  0 disk
xvdc    202:32   0  1.8T  0 disk
xvdd    202:48   0  1.8T  0 disk
The default /etc/fstab looks like this:
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an ext4 filesystem on xvdc:
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem:
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to that point, things work fine.
Now, I reboot the machine (first stop it and then start it) and then SSH into it.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I assumed a filesystem would persist across the reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed this guide, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html, and it does not seem to work, as described above.
You need to read up on EC2 ephemeral instance store volumes. When you stop an instance with this type of volume, the data on the volume is lost. You can reboot by performing a reboot/restart operation, but if you do a stop followed later by a start, the data is lost. A stop followed by a start is not considered a "reboot" on EC2: when you stop an instance it is completely shut down, and when you start it back later it is basically recreated on different backing hardware.
In other words, what you describe isn't an issue; it is expected behavior. You need to be very aware of how these volumes work before depending on them.
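If you want to confirm which devices are instance store before relying on them, a sketch run from the instance itself is to query the block-device mapping in the instance metadata (shown in its simple IMDSv1 form; with IMDSv2 enforced you would need a session token first). Anything listed as ephemeralN is instance store, so a filesystem created on it will not survive a stop/start.
$ curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/             # lists mappings such as ami, root, ephemeral0, ephemeral1, ...
$ curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0   # shows the device name that ephemeral volume is exposed as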

mount already mounted or busy

I have an Amazon EC2 instance (Ubuntu 12.04) to which I have attached two 250 GB volumes. Inadvertently, the volumes got unmounted. When I tried mounting them again, with the following command,
sudo mount /dev/xvdg /data
this is the error I get:
mount: /dev/xvdg already mounted or /data busy
Then, I tried unmounting it as follows:
umount /dev/xvdg
but it tells me that the volume is not mounted:
umount: /dev/xvdg is not mounted (according to mtab)
I tried lsof to check for any locks but there weren't any.
The lsblk output is as below:
Any help will be appreciated. What do I need to do to mount the volumes back without losing the data on them?
OK, figured it out. Thanks @Petesh and @mootmoot for pushing me in the right direction. I was trying to mount single volumes instead of a RAID 0 array. The /dev/md127 device was running, so I stopped it first with the following command:
sudo mdadm --stop /dev/md127
Then I assembled the RAID 0 array:
sudo mdadm --assemble --uuid <RAID array UUID here> /dev/md0
Once the /dev/md0 array became active, I mounted it on /data.
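In case it helps anyone else, this is roughly how the array UUID to pass to --assemble can be found, by reading the RAID superblocks on the member devices (output formats may differ between mdadm versions):
sudo mdadm --examine /dev/xvdg   # prints the member's superblock, including "Array UUID"
sudo mdadm --examine --scan      # prints ARRAY lines with UUID= for every array found on attached devices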
Try umount /dev/xvdg* and umount /data and then
mount /dev/xvdg1 /data

Amazon EC2: Unable to unmount and remove EBS drive file system

I have created an EBS drive, attached it to the instance, and created a file system using mkfs.ext3.
Now I want to unmount and delete the drive. I've tried many things, but nothing seems to work. Although I am able to detach the drive from the instance and delete it using the EC2 console,
when I check the partitions using df -hk it is still showing the drive.
[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8256952 1075740   7097356  14% /
tmpfs             304368       0    304368   0% /dev/shm
/dev/xvdf       30963708  176196  29214648   1% /media/newdrive
Moreover, when I try to use any other command like "fdisk -l", or try to browse the drive's folders, the PuTTY session hangs.
I am new to the EC2 cloud and also to Linux.
How about this?
You need to run it as:
sudo umount /dev/xvdf
umount -dRf /media/newdrive
umount needs the mount point, not a device name like /dev/xvdf.
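If the unmount still refuses or the session hangs, a sketch for finding out what is keeping the mount point busy before detaching the volume (assuming /media/newdrive is the mount point, as above):
sudo fuser -vm /media/newdrive    # lists processes using the mounted filesystem
sudo lsof +f -- /media/newdrive   # alternative view of open files under the mount
sudo umount /media/newdrive       # unmount by mount point once nothing is using it
Only detach the volume from the EC2 console after the unmount succeeds; detaching a volume that is still mounted and in use is what tends to make commands like df and fdisk hang.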
