Not able to find /opt /var /tmp in lsblk RHEL 8.1 - rhel

I am not able to find /opt, /var and /tmp in the lsblk output on RHEL 8.1. Can you please help me?
[xxx@exxx ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 64G 0 disk
├─sda1 8:1 0 500M 0 part /boot/efi
├─sda2 8:2 0 500M 0 part /boot
├─sda3 8:3 0 2M 0 part
└─sda4 8:4 0 60G 0 part
  └─rootvg-rootlv 253:5 0 60G 0 lvm /

lsblk
lsblk displays details about block devices (except RAM disks), i.e. the device files that represent disks and partitions attached to the machine. It queries the /sys virtual filesystem and the udev database for the information it displays and presents the output in a tree-like structure. The command ships with the util-linux package.
That is why you cannot see the directories /opt, /var and /tmp in its output: they are plain directories on the root filesystem, not separate block devices or mount points.
/opt is for “the installation of add-on application software packages”.
/var is a standard subdirectory of the root directory in Linux and other Unix-like operating systems that contains files to which the system writes data during the course of its operation.
The /tmp directory is a temporary landing place for files.
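If you want to confirm where those directories actually live, you can ask for the filesystem that contains them; a minimal check, assuming the layout shown above where / sits on rootvg-rootlv:
df -h /opt /var /tmp   # each line should report /dev/mapper/rootvg-rootlv mounted on /
findmnt -T /var        # shows the mount point and source device that contain /var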

Related

How to get disk usage from inside docker container

I have started my container using the --privileged flag, so as far as I know, all disks should be available from inside the container - and that is partly true, but I somehow can't read their sizes.
lsblk on host (Ubuntu):
sda 8:0 1 59,6G 0 disk
└─sda1 8:1 1 59,6G 0 part /media/mauz/ESD-ISO
nvme0n1 259:0 0 953,9G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2 259:2 0 732M 0 part /boot
└─nvme0n1p3 259:3 0 952,7G 0 part
└─nvme0n1p3_crypt 253:0 0 952,6G 0 crypt
├─vgubuntu-root 253:1 0 930,4G 0 lvm /
└─vgubuntu-swap_1 253:2 0 976M 0 lvm [SWAP]
lsblk in container (Alpine):
sda 8:0 1 59.6G 0 disk
└─sda1 8:1 1 59.6G 0 part
nvme0n1 259:0 0 953.9G 0 disk
├─nvme0n1p1 259:1 0 512M 0 part
├─nvme0n1p2 259:2 0 732M 0 part
└─nvme0n1p3 259:3 0 952.7G 0 part
Both outputs are stripped of loop devices, but as you can see, both recognize the same two drives.
Now, if I run the df command on the host:
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 3261580 2564 3259016 1% /run
/dev/mapper/vgubuntu-root 959200352 137078032 773327904 16% /
tmpfs 16307884 215740 16092144 2% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
/dev/nvme0n1p2 721392 364788 304140 55% /boot
/dev/nvme0n1p1 523248 76232 447016 15% /boot/efi
tmpfs 3261576 140 3261436 1% /run/user/1000
/dev/sda1 62519040 23118848 39400192 37% /media/mauz/ESD-ISO
And inside the container:
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 959200352 137078188 773327748 15% /
tmpfs 65536 0 65536 0% /dev
shm 65536 0 65536 0% /dev/shm
/dev/mapper/vgubuntu-root
959200352 137078188 773327748 15% /app
/dev/mapper/vgubuntu-root
959200352 137078188 773327748 15% /etc/os-release
/dev/mapper/vgubuntu-root
959200352 137078188 773327748 15% /etc/resolv.conf
/dev/mapper/vgubuntu-root
959200352 137078188 773327748 15% /etc/hostname
/dev/mapper/vgubuntu-root
959200352 137078188 773327748 15% /etc/hosts
Somehow, it does not show the correct drives in the second df output. Is there any way to make df show the correct output, even inside the container?
Or is there another way to get the correct disk sizes and usages from the host?
There is no "decent" solution that can accomplish what you want, but let me explain why.
You talk about "disk usage", but in reality there is no such thing as disk usage. As far as the disk (i.e. the device itself) is concerned, there is no concept of "usage". What you are looking for is rather filesystem disk usage, which is fundamentally different.
In order to know the "used" and "available" space of a filesystem, you will have to mount it. This allows the kernel to process filesystem metadata that can then be used to determine free and used filesystem blocks. Without mounting the filesystem this information is simply not available to the kernel (and therefore not available to df, for example).
In order for Docker to work, containers run in a different mount namespace than the host. The core reason is that containers cannot, in general, safely share mount points with the host. Think, for example, what would happen if / on the host and / in the container referred to the same mount point: as soon as the container started, it would likely break your system by touching sensitive files it is not supposed to. So by default Docker "isolates" containers in their own mount namespace, so that they only see the mount points they need, and because of the above there is no option to disable this.
You might be able to get this information by reading raw data from the available block devices (without mounting them) and parsing the filesystem metadata (if any) from userspace within the container using some specialized tool, but this is a finicky solution as it basically requires one tool per possible filesystem type. See also Free space in unmounted partition at Unix & Linux SE.
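As an illustration of that userspace approach (a sketch only, assuming the container is privileged, can see the device nodes, and that /dev/nvme0n1p2 holds an ext4 /boot as in the host output above): dumpe2fs from e2fsprogs reads the superblock without mounting and reports block counts, from which free space can be derived.
apk add e2fsprogs
dumpe2fs -h /dev/nvme0n1p2 2>/dev/null | grep -E 'Block count|Free blocks|Block size'
# free bytes are roughly "Free blocks" times "Block size"; every other filesystem type needs its own tool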
You could also use bind mounts allowing the host and the container to share mount points, but this would have to be done on a per-mount basis, for example:
docker run --mount readonly,type=bind,source=/media/mauz/ESD-ISO,target=/container/path ...
$ df
...
/dev/sda1 62519040 23118848 39400192 37% /container/path
...
You say that for now you are "passing all mounted volumes manually", so I assume this is no different from whatever you are currently doing. On top of being pretty ugly, this solution also has the limitation of not handling changes in devices or mount points on the host (e.g. if a new device is added and mounted).
The only "real" solution I can see here would be to run some application on the host, which periodically extracts the needed information and communicates it to the application running inside the container.
Using nsenter, this command should achieve what you want:
docker run -it --rm --privileged --pid host ubuntu nsenter -t 1 -m -u -n -i bash
# --privileged : run the container with extended privileges
# --pid host : share the host's PID namespace
# nsenter : run a program in the namespaces of other processes
# -t 1 : take the namespaces from PID 1 (the host's init)
# -m -u -n -i : enter its mount, UTS, network and IPC namespaces
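If all you need is a one-shot report rather than an interactive shell, the same trick works non-interactively (a sketch based on the command above; only the mount namespace is strictly required for df):
docker run --rm --privileged --pid host ubuntu nsenter -t 1 -m -- df -h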

Fails to `mkdir /mnt/vzsnap0` for Container Backups with Permission Denied

This is all done as the root user.
The script for backups at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160 - 161 in that script is:
my $rootdir = $default_mount_point;
mkpath $rootdir;
After the installation, before I created any images or did any backups, I set up two things.
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition is as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the relevant filesystem and directory permission details.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I desire to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record the container image is only 8GB in size.
Cloning containers does work and snapshots work.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on Backup now, then select your storage (I have backups and local, and both produce this error), then select the state of the container (Snapshot, Suspend and Stop each produce the same error), then the compression type (none, LZO and gzip each produce the same error). Once all that is set you click Backup and get the following output.
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in over an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM?
A) I don't know. I don't have an extra system to try it on and I need the containers I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try, but I'd really rather not have to just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with id 1001 which has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add root, as well as the user under which the GUI runs, to the group with id 1002.
A) I have no problem writing to the /mnt/backups directory. Just now did a cd /mnt/backups; mkdir test and that was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious the problem is the permissions on the /mnt directory itself.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set on it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
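Note that in the lsattr / output above, /mnt itself only produced "Inappropriate ioctl for device", so the flag may not even be readable that way; a more targeted check (a small sketch) queries the directory entry itself:
lsattr -d /mnt   # -d lists the directory itself rather than its contents
chattr -i /mnt   # only needed if the 'i' flag actually shows up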
References
lsattr output
According to the inode flags (attributes) manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file
contents or metadata (permissions, timestamps, ownership, link
count and so on). (This restriction applies even to the
superuser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can
set or clear this attribute.
As long as the bounty is still up I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of which works. Note, it is very slow.
Since I am able to write to the /mnt/backups directory, which exists on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
The bounty remains for anyone who can get the /mnt directory to work as the mount path so the backup script can successfully mount vzsnap0.
1)
Perhaps the filesystem that would hold "/mnt/vzsnap0" is mounted read-only?
Your fstab hints at this:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means the partition is remounted read-only when filesystem errors are detected. Perhaps that has happened to the filesystem holding /mnt.
Can you try remounting the drive as in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
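For reference, the remount itself is a one-liner (a sketch, assuming it is the root filesystem that went read-only):
mount -o remount,rw /   # remount the root filesystem read-write
mkdir /mnt/vzsnap0      # then retry creating the directory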
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory?
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
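Translated into commands for this case, the checks listed above would look roughly like this (a sketch):
findmnt -T /mnt   # which filesystem /mnt lives on, and its mount options (look for "ro")
lsattr -d /mnt    # extended attributes such as the immutable flag
getfacl /mnt      # POSIX ACLs that might override the mode bits
ls -dZ /mnt       # SELinux security context, if SELinux is enabled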
Great, turns out this is a pretty long-standing issue with Ubuntu Make which is faced by many people.
I saw a workaround mentioned by an Ubuntu Developer in the above link.
Just follow the below steps:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. Works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, one can swap them.
Just change /mnt/backups to /mnt/sshfs/backups and the vzdump will work.

ec2 how to add more volume to an existing device

I was trying to add more volume to my device
df -h
I get:
[root@ip-172-x-x-x ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 44K 3.8G 1% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
/dev/nvme0n1p1 7.8G 3.6G 4.2G 46% /
I want to add all of the existing storage to /dev/nvme0n1p1.
lsblk
I get
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 300G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
I was trying to google around in the AWS instructions, but I'm still quite confused, since most of them cover setting up a brand new instance, while for my use case I cannot stop the instance.
I cannot run mkfs, and the disk also seems to be already mounted?? I guess I may misunderstand the meaning of mount...
The filesystem is already there; I just want to use all of the existing space.
Thanks for the help in advance!!
Your lsblk output shows that you have a 300G disk but your nvme0n1p1 partition is only 8G. You first need to grow your partition to fill the disk and then expand your filesystem to fill your partition (a consolidated sketch follows these steps):
Snapshot all EBS volumes you care about before doing any resize operations on them.
Install growpart
sudo yum install cloud-utils-growpart
Resize the partition: growpart /dev/nvme0n1 1
Reboot: reboot now
Run lsblk and verify that the partition is now the full disk size
You may still have to run sudo resize2fs /dev/nvme0n1p1 to expand the filesystem (resize2fs operates on the partition holding the filesystem, not on the whole disk)
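Put together, the online resize looks roughly like this (a sketch for the ext4-on-nvme layout shown above; if the root filesystem were XFS you would use xfs_growfs / instead of resize2fs):
sudo yum install -y cloud-utils-growpart
sudo growpart /dev/nvme0n1 1    # grow partition 1 to fill the 300G disk
lsblk                           # nvme0n1p1 should now show the full size
sudo resize2fs /dev/nvme0n1p1   # grow the ext4 filesystem to fill the partition
df -h /                         # verify the new size is visible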

Mounting instance storage corrupting ec2 instance [closed]

I'm trying to mount two instance stores in my EC2 instance, and before creating an AMI I just want to verify that they are mounted at the right mount points. But as soon as I stop and start the instance after mounting, I'm unable to connect. It looks like it fails to boot, even though the EC2 console shows it as running.
I get this right after I create my instance(i2.2xlarge):
[root@xxxxx ec2-user]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 300G 0 disk
└─xvda1 202:1 0 300G 0 part /
xvdb 202:16 0 745.2G 0 disk
xvdc 202:32 0 745.2G 0 disk
Then I format and mount those two at two different locations.
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdb
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdc
Here is my fstab:
#
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb /media/ephemeral0 ext4 defaults,nofail,comment=cloudconfig 0 2
/dev/xvdc /media/ephemeral1 ext4 defaults,nofail,comment=cloudconfig 0 2
After I mount them, I get this which I want at the end:
[root@xxxxxx ec2-user]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 493G 1.2G 491G 1% /
devtmpfs 30G 68K 30G 1% /dev
tmpfs 31G 0 31G 0% /dev/shm
/dev/xvdb 734G 69M 697G 1% /media/ephemeral0
/dev/xvdc 734G 69M 697G 1% /media/ephemeral1
At this point, when I stop and start the instance, I'm unable to connect to it. I know those two are ephemeral storage and I don't care about their content. But I want to create several similar instances like this, so before creating an AMI I just wanted to test that the mount configuration survives a restart of this instance.
What am I doing wrong?
This is a common problem when working with partitioning. The root cause of the problem is SELinux, which is refusing the SSH connection.
Here are the steps which should solve your issue:
Step 1 : Create the volume in AWS Console and attach it to instance. (Assuming you know this already!)
Step 2: By default the volume is attached as /dev/xvdc. Create the partition using fdisk and confirm with lsblk; the output should look like below:
$ sudo fdisk /dev/xvdc
Use option n to create a new partition (accept the defaults to create one partition spanning the entire volume) and option w to write the partition table to disk.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdc 202:80 0 20G 0 disk
└─xvdc1 202:81 0 20G 0 part
All the work ahead is done on this /dev/xvdc1 partition; make sure you are NOT using the bare /dev/xvdc anywhere.
Step 3: Format the new partition using
$ sudo mkfs -t ext4 /dev/xvdc1
Step 4: Make the entry in fstab as below:
/dev/xvdc1 /var ext4 defaults,noatime,nofail 0 2
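To check the new entry without a reboot, a quick sketch (note that mounting a fresh filesystem over a non-empty directory such as /var hides its existing contents, so plan that migration separately):
sudo findmnt --verify   # newer util-linux can sanity-check the fstab syntax
sudo mount -a           # mount everything in fstab that is not mounted yet
df -h /var              # the new partition should now show up here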
Hope that helps :)
Here are some links that might help :
STEPS TO CREATE SEPARATE /VAR PARTITION ON EBS VOLUME AWS
CREATE ROOT SWAP AND LVM PARTITION ON EBS VOLUME (AWS)

How to mount an rsync-copied partition combined from two source partitions

My PC is running ArchLinux. My PC has two hard disks, /dev/sda and /dev/sdb. sda is the source disk and contains all my files. sdb is the destination disk and is currently empty. My purpose is to make a copy of sda to sdb, and also make sdb another bootable ArchLinux installation.
sda has three partitions: sda1 for /boot, sda2 for /, sda3 for /home. Here is its /etc/fstab:
/dev/sda2 / ext4 rw,relatime,data=ordered 0 1
/dev/sda1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
/dev/sda3 /home ext4 rw,relatime,data=ordered 0 2
I formatted sdb to two partitions only: sdb1 for /boot and sdb2 for /. I used rsync to copy sda1 to sdb1, as well as sda2 and sda3 to sdb2. And then I also updated the UEFI bootloader and /etc/fstab:
/dev/sdb2 / ext4 rw,relatime,data=ordered 0 1
/dev/sdb1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
The problem is, when I booted from sdb, both sdb1 and sdb2 were automatically mounted, but /home was empty. My personal home directory was not found under /home. Why is that?
Later I rebooted from sda and then manually mounted sdb2 and confirmed that my personal home directory was in /home.
I figured out the problem. I forgot to update /boot/loader/entries/arch.conf, so the gummiboot bootloader actually loaded /dev/sda2 as root instead of /dev/sdb2. And because sda2 itself does not contain my files under /home/ (they live on the separate sda3 partition, which was not mounted), /home/ appeared empty.
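For anyone hitting the same thing, the fix is to point the loader entry on the new ESP at the new root device. A sketch (the kernel and initramfs file names are the standard Arch ones and are assumptions; using a PARTUUID instead of /dev/sdb2 is more robust, since device names can change between boots):
# /boot/loader/entries/arch.conf on sdb1
title Arch Linux (sdb)
linux /vmlinuz-linux
initrd /initramfs-linux.img
options root=/dev/sdb2 rw
After booting, findmnt / confirms which device is actually mounted as root.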
