Mounting instance storage corrupting ec2 instance [closed] - linux

I'm trying to mount two instance store volumes on my EC2 instance, and before creating an AMI I want to verify that they mount at the right mount points. But as soon as I stop and start the instance after mounting, I'm unable to connect. It looks like it fails to boot, even though the EC2 console shows it as running.
This is what I get right after I create my instance (i2.2xlarge):
[root@xxxxx ec2-user]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 300G 0 disk
└─xvda1 202:1 0 300G 0 part /
xvdb 202:16 0 745.2G 0 disk
xvdc 202:32 0 745.2G 0 disk
Then I format the two volumes and mount them at two different locations.
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdb
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdc
Here is my fstab:
#
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb /media/ephemeral0 ext4 defaults,nofail,comment=cloudconfig 0 2
/dev/xvdc /media/ephemeral1 ext4 defaults,nofail,comment=cloudconfig 0 2
After I mount them, I get the layout I want:
[root@xxxxxx ec2-user]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 493G 1.2G 491G 1% /
devtmpfs 30G 68K 30G 1% /dev
tmpfs 31G 0 31G 0% /dev/shm
/dev/xvdb 734G 69M 697G 1% /media/ephemeral0
/dev/xvdc 734G 69M 697G 1% /media/ephemeral1
At this point, when I stop and start the instance, I'm unable to connect to it. I know those two are ephemeral storage and I don't care about their contents. But I want to create several similar instances, so before creating an AMI I wanted to confirm that the instance keeps the mount configuration after a restart.
What am I doing wrong?

This is a common problem when working with partitioning. The root cause is SELinux refusing the SSH connection.
Here are the steps that should solve your issue:
Step 1: Create the volume in the AWS Console and attach it to the instance. (Assuming you know this already!)
Step 2: By default it is attached as /dev/xvdc. Create the partition using fdisk and confirm with lsblk; the output should look like below:
$ sudo fdisk /dev/xvdc
Use option n to create a new partition (accept the defaults to give one partition the entire volume) and option w to write the partition table to disk.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdc 202:80 0 20G 0 disk
└─xvdc1 202:81 0 20G 0 part
*All the work ahead is done on this xvdc1 partition; make sure you are NOT using the bare /dev/xvdc anywhere.
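If you'd rather script this step, a rough sketch of the same fdisk dialogue is below (prompts vary a bit between fdisk versions, so treat it as a sketch):
$ printf 'n\np\n1\n\n\nw\n' | sudo fdisk /dev/xvdc
The answers stand for: n = new partition, p = primary, 1 = partition number, two blank lines accept the default first/last sectors, w = write the table and exit.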
Step 3: Format the new partition:
$ sudo mkfs -t ext4 /dev/xvdc1
Step 4: Make the entry in fstab as below:
/dev/xvdc1 /var ext4 defaults,noatime,nofail 0 2
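To sanity-check the new entry without rebooting, something like this will surface fstab errors immediately instead of at boot:
$ sudo mount -a
$ df -h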
Hope that helps :)
Here are some links that might help:
Steps to create separate /var partition on EBS volume (AWS)
Create root, swap and LVM partition on EBS volume (AWS)

Related

How to get disk usage from inside docker container

I have started my container using the --privileged flag, so as far as I know all disks should be available from inside the container. That is partly true, but I somehow can't read their sizes.
lsblk on host (Ubuntu):
sda 8:0 1 59,6G 0 disk
└─sda1 8:1 1 59,6G 0 part /media/mauz/ESD-ISO
nvme0n1 259:0 0 953,9G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2 259:2 0 732M 0 part /boot
└─nvme0n1p3 259:3 0 952,7G 0 part
└─nvme0n1p3_crypt 253:0 0 952,6G 0 crypt
├─vgubuntu-root 253:1 0 930,4G 0 lvm /
└─vgubuntu-swap_1 253:2 0 976M 0 lvm [SWAP]
lsblk in container (Alpine):
sda 8:0 1 59.6G 0 disk
└─sda1 8:1 1 59.6G 0 part
nvme0n1 259:0 0 953.9G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
├─nvme0n1p2 259:2 0 732M 0 part
└─nvme0n1p3 259:3 0 952.7G 0 part
Both outputs are stripped of loop devices, but as you can see, the same two drives are recognized in both.
Now, if I run the df command on the host:
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3261580 2564 3259016 1% /run
/dev/mapper/vgubuntu-root 959200352 137078032 773327904 16% /
tmpfs 16307884 215740 16092144 2% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
/dev/nvme0n1p2 721392 364788 304140 55% /boot
/dev/nvme0n1p1 523248 76232 447016 15% /boot/efi
tmpfs 3261576 140 3261436 1% /run/user/1000
/dev/sda1 62519040 23118848 39400192 37% /media/mauz/ESD-ISO
And inside the container:
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 959200352 137078188 773327748 15% /
tmpfs 65536 0 65536 0% /dev
shm 65536 0 65536 0% /dev/shm
/dev/mapper/vgubuntu-root 959200352 137078188 773327748 15% /app
/dev/mapper/vgubuntu-root 959200352 137078188 773327748 15% /etc/os-release
/dev/mapper/vgubuntu-root 959200352 137078188 773327748 15% /etc/resolv.conf
/dev/mapper/vgubuntu-root 959200352 137078188 773327748 15% /etc/hostname
/dev/mapper/vgubuntu-root 959200352 137078188 773327748 15% /etc/hosts
Somehow, it does not show the correct drives in the second df output. Is there any way to make df show the correct output, even inside the container?
Or is there another way to get the correct disk sizes and usages from the host?
There is no "decent" solution that can accomplish what you want, but let me explain why.
You talk about "disk usage", but in reality there is no such thing as disk usage. As far as the disk (i.e. the device itself) is concerned, there is no concept of "usage". What you are looking for is rather filesystem disk usage, which is fundamentally different.
In order to know the "used" and "available" space of a filesystem, you will have to mount it. This allows the kernel to process filesystem metadata that can then be used to determine free and used filesystem blocks. Without mounting the filesystem this information is simply not available to the kernel (and therefore not available to df, for example).
In order for Docker to work, containers run in a different mount namespace than the host. The core reason is that containers cannot, in general, safely share mount points with the host. Think for example what would happen if / on the host and / in the container referred to the same mount point: as soon as the container started, it would likely break your system by touching sensitive files it is not supposed to. So by default Docker "isolates" containers in their own mount namespace, so that they only see the mount points they need, and there is no option to avoid this, for the reason above.
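You can see this isolation directly by comparing mount-namespace IDs; different IDs mean different namespaces (a quick check, using any handy image):
$ sudo readlink /proc/1/ns/mnt                    # the host's mount namespace
$ docker run --rm alpine readlink /proc/1/ns/mnt  # the container's (different) one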
You might be able to get this information by reading raw data from the available block devices (without mounting them) and parsing the filesystem metadata (if any) from userspace within the container using a specialized tool, but this is a finicky solution as it would basically require one tool per filesystem type. See also Free space in unmounted partition at Unix & Linux SE.
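For example, for an ext2/3/4 filesystem, dumpe2fs can read block counts straight from an unmounted device's superblock. A sketch, assuming the e2fsprogs tools are available in the container and using the /boot partition from the host listing above:
$ dumpe2fs -h /dev/nvme0n1p2 | grep -E 'Block count|Free blocks|Block size'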
You could also use bind mounts allowing the host and the container to share mount points, but this would have to be done on a per-mount basis, for example:
docker run --mount readonly,type=bind,source=/media/mauz/ESD-ISO,target=/container/path ...
$ df
...
/dev/sda1 62519040 23118848 39400192 37% /container/path
...
You say that for now you are "passing all mounted volumes manually", so I assume this is no different from whatever you are currently doing. On top of being pretty ugly, this solution also cannot handle changes in devices or mount points on the host (e.g. if a new device is added and mounted).
The only "real" solution I can see here would be to run some application on the host, which periodically extracts the needed information and communicates it to the application running inside the container.
Using nsenter, this command should achieve what you want:
docker run -it --rm --privileged --pid host ubuntu nsenter -t 1 -m -u -n -i bash
# --privileged : run privileged
# --pid host : shares host's process id namespace
# nsenter - run program with namespaces of other processes
# -t 1 : Specify process 1 to get contexts from
# -m -u -n -i : Share mount, UTS, network, IPC namespace.

ec2 how to add more volume to an existing device

I was trying to add more volume to my device
df -h
I get:
[root@ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all the existing storage to /dev/nvme0n1p1.
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I was trying to google around for AWS instructions but am still quite confused, since most of them cover setting up a brand-new instance, while in my case I cannot stop the instance.
I cannot do
mkfs
Also, it seems like the disk is already mounted? I guess I may misunderstand the meaning of mount, since the filesystem is already there.
I just want to use all the existing space.
Thanks for the help in advance!!
Your lsblk output shows that you have a 300G disk but your nvme0n1p1 partition is only 8G. You first need to grow the partition to fill the disk and then expand the filesystem to fill the partition:
Snapshot all EBS volumes you care about before doing any resize operations on them.
Install growpart: sudo yum install cloud-utils-growpart
Resize the partition: sudo growpart /dev/nvme0n1 1
Reboot: sudo reboot
Run lsblk and verify that the partition now fills the disk
Run sudo resize2fs /dev/nvme0n1p1 to expand the filesystem (note: the partition, not the whole disk)
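Put together, the whole sequence looks roughly like this (assuming an ext4 root filesystem; on XFS you would use sudo xfs_growfs -d / instead of resize2fs):
$ sudo yum install cloud-utils-growpart
$ sudo growpart /dev/nvme0n1 1    # grow partition 1 to fill the 300G disk
$ sudo reboot
# after reconnecting:
$ lsblk                           # nvme0n1p1 should now show 300G
$ sudo resize2fs /dev/nvme0n1p1   # grow the filesystem into the partition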

Not able to resize the AWS EC2 volume

I created an AWS EC2 Linux instance with an 8GB root volume. Then I increased the EBS volume to 9GB and it went to the completed state. It's a small volume, so the resize took a couple of minutes to complete.
Now I try to extend the Linux file system after resizing the volume, following the instructions mentioned here. But I get the error message below. I tried the entire process twice, but it's all the same.
The filesystem is already 2096635 (4k) blocks long. Nothing to do!
Can someone help me?
Just reboot the instance; it automatically resizes your root filesystem on boot.
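On Amazon Linux AMIs this is handled by cloud-init's growpart module; its configuration in /etc/cloud/cloud.cfg typically looks something like this (exact contents vary by AMI):
growpart:
  mode: auto
  devices: ['/']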
I tried it myself. Here is the instance with an 8GB volume:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 7.8G 985M 6.7G 13% /
After modifying the EBS Volume:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 9G 0 disk
└─xvda1 202:1 0 8G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 7.8G 985M 6.7G 13% /
After the reboot:
[ec2-user@ip-172-31-15-216 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 9G 0 disk
└─xvda1 202:1 0 9G 0 part /
[ec2-user@ip-172-31-15-216 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 236M 56K 236M 1% /dev
tmpfs 246M 0 246M 0% /dev/shm
/dev/xvda1 8.8G 984M 7.7G 12% /
See also: increase EC2 EBS volume after cloning - resize2fs not working
# http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/storage_expand_partition.html
# Before doing this, detach the root volume and attach it to another instance
# Note:
# 1) Before detaching the volume, make a note of the device name, because we
#    must use the same name when attaching it back; otherwise data will be lost
# Identifying device name which we want to expand
lsblk
# Running parted command on the device
sudo parted /dev/xvdf
# Changing the parted units of measure to sectors.
unit s
# Run the print command to list the partitions on the device
print
# If it shows a warning, choose Fix
# Delete the partition entry for the partition using the number (1) from the previous step
rm 1 # the number 1 will change based on the partition we want to delete
# Create a new partition that extends to the end of the volume
mkpart Linux 4096s 100%
# Run the print command again to verify your partition
print
# Check to see that any flags that were present earlier are still
# present for the partition that you expanded. In some cases the boot
# flag may be lost. If a flag was dropped from the partition when it was expanded,
# add the flag with the following command, substituting your partition number and the flag name.
# For example, the following command adds the boot flag to partition 1
set 1 boot on
#Run the quit command to exit parted.
quit
# Verifying the device
sudo e2fsck -f /dev/xvdf1
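# After re-attaching the volume to the original instance (under the same device
# name) and booting, the filesystem itself still has to be grown into the
# enlarged partition. A sketch, assuming ext4:
sudo resize2fs /dev/xvdf1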

Having losetup read the partition table [closed]

For the purpose of learning, I wanted to create a mini-replica of my hard disk:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 244M 0 part /boot
└─sda3 8:3 0 930.8G 0 part
└─sda3_crypt 254:0 0 930.8G 0 crypt
├─host--vg-root 254:1 0 25G 0 lvm /
├─host--vg-var 254:2 0 2.8G 0 lvm /var
├─host--vg-swap_1 254:3 0 11.9G 0 lvm [SWAP]
├─host--vg-tmp 254:4 0 380M 0 lvm /tmp
└─host--vg-home 254:5 0 890G 0 lvm /home
In my particular case, setting up a new device so it looks like my hard disk requires familiarity with many things: creating partitions, creating LUKS devices, opening them, creating LVM volumes, etc., so I regard this as a worthy exercise, at least for someone who is new to Linux.
So I first needed a new device to play with, without messing up anything else:
$ dd if=/dev/zero of=loopfile bs=1M count=1024
$ sudo losetup /dev/loop1 loopfile
(using loop1 rather than loop0, which is already taken for some other purpose; /dev/zero is good enough for this exercise, so I'm ignoring urandom).
My first objective was to mimic the partitions sda1/sda2/sda3:
$ sudo blkid
/dev/sda1: UUID="08FC-EA23" TYPE="vfat" ...
/dev/sda2: UUID="30b5d595-4986-4f75-962a-7e1f5f03ed4a" TYPE="ext2" ...
/dev/sda3: UUID="a84cc598-9316-48b9-94a9-bb4885e45e9c" TYPE="crypto_LUKS" ...
$ sudo parted /dev/loop1
So I went and created three 'primary' partitions (using 'fat32' for the first one and 'ext2' for the other two; not too sure why, just guessing), with all sizes reduced by a factor of 1000:
(parted) print
Number Start End Size Type File system Flags
1 512B 1000kB 1000kB primary fat32 lba
2 1049kB 2097kB 1049kB primary ext2 lba
3 2097kB 1074MB 1072MB primary ext2 lba
and I then formatted the three devices in line with the previous blkid report:
sudo mkfs -t vfat /dev/loop1p1
sudo mkfs -t ext2 /dev/loop1p2
sudo cryptsetup luksFormat /dev/loop1p3
So at this point, my parted print report looks good as well as lsblk and blkid:
$ lsblk
loop1 7:1 0 1G 0 loop
├─loop1p1 259:0 0 976.5K 0 loop
├─loop1p2 259:1 0 1M 0 loop
└─loop1p3 259:2 0 1022M 0 loop
$ sudo blkid
/dev/loop1p1: SEC_TYPE="msdos" UUID="1CD8-2CA5" TYPE="vfat" ...
/dev/loop1p2: UUID="6532dba9-3101-488e-a6d1-e5e1ef4943f7" TYPE="ext2" ...
/dev/loop1p3: UUID="a0e96a54-6d6a-49c8-80fd-03217b25062f" TYPE="crypto_LUKS" ...
/dev/loop1: PTUUID="1de285f7" PTTYPE="dos"
So I thought I was on the right track. I also thought that my file loopfile, which underlies my loop device, would contain all the necessary metadata, so I did not need to worry about rebooting. As I am only playing with the devices (not mounting them), I assumed there was no need for any /etc/fstab setup...
The issue I have is that some of the setup seems to be lost when I reboot. After re-creating the loop device from loopfile, the parted print report still shows the partitions (albeit with lost type information), but the partitions no longer appear in the lsblk or blkid reports. Is there a way of making my setup persistent? I am on Debian 8, in case this matters.
You need to run losetup -P /dev/loop1 loopfile. What this does is tell the kernel to perform a partition table scan of the newly added file.
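For example, after a reboot you could re-attach the file like this (a sketch; running sudo partprobe /dev/loop1 on an already-attached device also triggers the rescan):
$ sudo losetup -d /dev/loop1            # detach first if the device is still attached
$ sudo losetup -P /dev/loop1 loopfile   # -P scans the partition table
$ lsblk /dev/loop1                      # loop1p1..loop1p3 should reappear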

Linux and Hadoop : Mounting a disk and increasing the cluster capacity [closed]

First of all, I'm a total noob at Hadoop and Linux. I have a cluster of five nodes, which on startup shows each node with a capacity of only 46.6 GB, while each machine has around 500 GB of space that I don't know how to allocate to the nodes.
(1) Do I have to change the datanode and namenode file size (I checked these and they show the same space remaining as in the Datanode information tab)? If so, how should I do that?
(2) Also, this 500 GB disk only shows up when I run lsblk and not when I run df -H. Does that mean it's not mounted? These are the results of the commands. Can someone explain what this means?
[hadoop@hdp1 hadoop]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 49.5G 0 part
  ├─VolGroup-lv_root (dm-0) 253:0 0 47.6G 0 lvm /
  └─VolGroup-lv_swap (dm-1) 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 512G 0 disk
[hadoop@hdp1 hadoop]$ sudo df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 51G 6.7G 41G 15% /
tmpfs 17G 14M 17G 1% /dev/shm
/dev/sda1 500M 163M 311M 35% /boot
Please help. Thanks in advance.
First, can someone help me understand why it shows different disks, what that means, and where each one resides? I can't seem to figure it out.
You are right. Your second disk (sdb) is not mounted anywhere. If you are going to dedicate the whole disk to hadoop data, here is how you should format and mount it:
Format your disk:
mkfs.ext4 -m1 -O dir_index,extent,sparse_super /dev/sdb
For mounting edit the file /etc/fstab. Add this line:
/dev/sdb /hadoop/disk0 ext4 noatime 1 2
After that, create the directory /hadoop/disk0 (it doesn't have to be named like that; you could use a directory of your choice):
mkdir -p /hadoop/disk0
Now you are ready to mount the disk:
mount -a
Finally, you should let Hadoop know that you want to use this disk as Hadoop storage. Your /etc/hadoop/conf/hdfs-site.xml should contain these config parameters:
<property><name>dfs.name.dir</name><value>/hadoop/disk0/nn</value></property>
<property><name>dfs.data.dir</name><value>/hadoop/disk0/dn</value></property>
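After editing hdfs-site.xml, the two subdirectories need to exist and be writable by the user that runs the Hadoop daemons. A sketch (the hdfs:hadoop owner is an assumption; match your installation):
$ sudo mkdir -p /hadoop/disk0/nn /hadoop/disk0/dn
$ sudo chown -R hdfs:hadoop /hadoop/disk0   # owner is an assumption, use your hadoop user
Then restart the HDFS daemons so they pick up the new directories.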
