I have an Amazon EC2 instance (Ubuntu 12.04) to which I have attached two 250 GB volumes. Inadvertently, the volumes got unmounted. When I tried mounting them again, with the following command,
sudo mount /dev/xvdg /data
this is the error I get:
mount: /dev/xvdg already mounted or /data busy
Then I tried unmounting it as follows:
umount /dev/xvdg
but it tells me that the volume is not mounted:
umount: /dev/xvdg is not mounted (according to mtab)
I tried lsof to check for any locks, but there weren't any.
The lsblk output is as below:
Any help will be appreciated. What do I need to do to mount the volumes back without losing the data on them?
OK, figured it out. Thanks @Petesh and @mootmoot for pushing me in the right direction. I was trying to mount the individual volumes instead of the RAID 0 array they belong to. The /dev/md127 device was running, so I stopped it first with the following command:
sudo mdadm --stop /dev/md127
Then I assembled the RAID 0 array:
sudo mdadm --assemble --uuid <RAID array UUID here> /dev/md0
Once the /dev/md0 array became active, I mounted it on /data.
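For anyone hitting the same "already mounted or busy" error: you can confirm that the individual volumes are really members of a RAID array, and find the array UUID to pass to --assemble, with something like the following (/dev/xvdg is the device name from the question):
cat /proc/mdstat                 # shows any auto-assembled array, e.g. /dev/md127
sudo mdadm --examine /dev/xvdg   # the "Array UUID" line is what --assemble needs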
Try umount /dev/xvdg* and umount /data and then
mount /dev/xvdg1 /data
I created a d2.xlarge EC2 instance on AWS which returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an ext4 filesystem on xvdc:
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem:
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to this point, things work fine.
Now, I reboot the machine (first stop it and then start it) and then SSH into it.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I assumed the filesystem would persist across a reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described.
You need to read up on EC2 ephemeral instance store volumes. When you stop an instance with this type of volume, the data on the volume is lost. Performing a reboot/restart operation preserves the data, but a stop followed later by a start does not; on EC2, stop-then-start is not considered a "reboot". When you stop an instance it is completely shut down, and when you start it again later it is essentially recreated on different backing hardware.
In other words, what you describe isn't a bug; it is expected behavior. You need to be very aware of how these volumes work before depending on them.
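If you still want to use the instance store disks for scratch data, the filesystem has to be recreated after every stop/start. A minimal sketch, reusing the device and mount point from the question:
# after each stop/start the instance store is blank, so rebuild the filesystem
sudo mkfs -t ext4 /dev/xvdc
sudo mkdir -p /mnt5
sudo mount /dev/xvdc /mnt5
Anything that must survive a stop/start belongs on an EBS volume instead.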
I don't have much experience with Linux and mounting/unmounting things. I'm using Amazon AWS, have booted up an EC2 instance from an Ubuntu image, and have attached a new EBS volume to it. From the dashboard, I can see that the volume is attached as /dev/sda1.
Now, I see from this guide from Amazon that the device name will likely be changed by the kernel, so my /dev/sda1 device will most likely show up as, say, /dev/xvda1.
So I logged in using a terminal. I ran ls /dev/ and indeed see xvda1 there, but I also see xvda. Now I want to format the device, but I don't know whether the unformatted device is xvda1 or xvda. I cannot list the contents of /dev/xvda1 or /dev/xvda (it says ls: cannot access /dev/xvda1/: Not a directory). I guess I have to format it first.
I tried to format using sudo mkfs.ext4 /dev/xvda1. It says: /dev/xvda1 is mounted; will not make a filesystem here!.
I tried to format using sudo mkfs.ext4 /dev/xvda. It says: /dev/xvda is apparently in use by the system; will not make a filesystem here!
How can I format the volume?
EDIT:
The result of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
I then tried to use the command sudo mkfs -t ext4 /dev/xvda, but the same error message appears: /dev/xvda is apparently in use by the system; will not make a filesystem here!
When I tried to use the command mount /dev/xvda /webserver, error message appears: mount: /dev/xvda already mounted or /webserver busy. Some website indicate that this also probably because a corrupted or unformatted file system. So I guess I have to be able to format it first before able to mount it.
First of all, you are trying to format /dev/xvda1, which is the root device. Why?
Second, if you have added a new EBS volume, follow the steps below.
List Block Devices
This will give you the list of block devices attached to your EC2 instance, which will look like:
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 100G 0 disk
xvda1 202:1 0 8G 0 disk /
Of these, xvda1 is the / (root) device and xvdf is the new EBS volume that you need to format and mount.
Format Device
sudo mkfs -t ext4 device_name # device_name is /dev/xvdf here
Create a Mount Point
sudo mkdir /mount_point
Mount the Volume
sudo mount device_name mount_point # here device_name is /dev/xvdf
Make an entry in /etc/fstab
device_name mount_point file_system_type fs_mntops fs_freq fs_passno
Execute
sudo mount -a
This will read your /etc/fstab file and, if it is valid, mount the EBS volume at mount_point.
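For example, assuming the xvdf volume from above, formatted as ext4 and mounted at /mount_point, the entry could look like this (referencing the filesystem by UUID is safer than by device name, since device names can change between boots):
/dev/xvdf   /mount_point   ext4   defaults,nofail   0   2
# or, more robustly, by the UUID reported by: sudo blkid /dev/xvdf
UUID=<filesystem UUID>   /mount_point   ext4   defaults,nofail   0   2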
I have a hard drive with Ubuntu 14 installed. The whole disk is encrypted, and my default user's home directory is encrypted as well. Lately, after a system crash, I am presented with a BusyBox (initramfs) prompt on startup. When I choose to start in recovery mode, I can catch several error messages like "... Failed to read block at offset xyz ...".
I searched and found this Q&A: Boot drops to a (initramfs) prompts/busybox
I booted from a CD and followed the instructions. However, I am only able to do ...
sudo dumpe2fs /dev/sda1
... and then continue to check and repair superblocks on /dev/sda1.
If I try ...
sudo dumpe2fs /dev/sda2
... I get the following error message:
dumpe2fs: Attempted to read block from filesystem resulted
in short read while trying to open /dev/sda2
Couldn't find valid filesystem superblock.
gparted shows the partitioning and file systems of the drive as follows:
partition file system size used unused flags
-------------------------------------------------------------
/dev/sda1 ext2 243M 210M 32M boot
/dev/sda2 extended 465G - - -
/dev/sda5 !! crypt-luks 465G - - -
unallocated unallocated 1M - - -
The warning (!!) at sda5 says "Linux Unified Key Setup encryption is not yet supported".
If I try ...
sudo dumpe2fs /dev/sda5
... it returns this error message:
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda5
Couldn't find valid filesystem superblock.
Mounting and rw-accessing sda1 works without error.
Any clues as to what the cause is, and how I can repair, decrypt, and mount the filesystem so it boots normally, or at least recover the data?
The given solution is missing some commands that you need to decrypt the filesystem and access it. Here's the full solution:
Boot from Ubuntu USB
cryptsetup luksOpen /dev/rawdevice somename
fsck /dev/mapper/somename
Get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
Fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
Verify:
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
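Note that on a stock Ubuntu encrypted install the LUKS container holds an LVM volume group, so /dev/mapper/ubuntu--vg-root only shows up once that volume group has been activated. A minimal sketch of the sequence, assuming /dev/sda5 is the LUKS partition as in the gparted output above:
sudo cryptsetup luksOpen /dev/sda5 crypt1   # unlock the LUKS container
sudo vgchange -ay                           # activate the LVM volume group inside it
sudo fsck /dev/mapper/ubuntu--vg-root       # the root logical volume is now visible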
This worked for me:
Boot from Ubuntu USB
get backup superblock:
sudo dumpe2fs /dev/mapper/ubuntu--vg-root | grep superblock
fix:
sudo fsck -b 32768 /dev/mapper/ubuntu--vg-root -y
verify
mkdir /a
sudo mount /dev/mapper/ubuntu--vg-root /a
I used the following links as sources:
https://askubuntu.com/questions/137655/boot-drops-to-a-initramfs-prompts-busybox
https://serverfault.com/questions/375090/using-fsck-to-check-and-repair-luks-encrypted-disk
I have created an EBS volume, attached it to the instance, and created a filesystem on it using mkfs.ext3.
Now I want to unmount and delete the drive. I've tried many things, but nothing seems to work. I am able to detach the drive from the instance and delete it using the EC2 console,
but when I check the partitions using df -hk, it still shows the drive.
[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 1075740 7097356 14% /
tmpfs 304368 0 304368 0% /dev/shm
/dev/xvdf 30963708 176196 29214648 1% /media/newdrive
Moreover, when I try to use any other command like "fdisk -l", or try to browse the drive's folders, the PuTTY session hangs.
I am new to EC2 cloud and also to Linux.
How about this?
You need to run it as:
sudo umount /dev/xvdf
umount -dRf /media/newdrive
umount needs the mount point, not a device name like /dev/xvdf.
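If the volume was detached while it was still mounted, the unmount can also hang; in that case it may help to see what is still using the mount point and, as a last resort, lazy-unmount it. A rough sketch using the mount point from the question:
sudo fuser -vm /media/newdrive   # list processes still using the mount
sudo umount -l /media/newdrive   # lazily detach it once nothing needs it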
I'm partway through an installation of Arch Linux and, following the online instructions, I'm mounting /dev/sdb1/mnt.
When I input
mount /dev/sdb1/mnt
it returns
mount: you must specify the filesystem type
Using both auto and ext4 (my filesystem type, I'm fairly certain)
mount auto /dev/sdb1/mnt
I get
mount: mount point /dev/sdb1/mnt is not a directory
What is going on here?
You are missing a space:
# right here---v
mount /dev/sdb1 /mnt
The mount command wants a device and a directory. /dev/sdb1 is the device, and /mnt is the directory.
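If mount then complains that you must specify the filesystem type, you can pass it explicitly (ext4 here, per the question):
mount -t ext4 /dev/sdb1 /mnt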