Linux Docker mount namespace

I use Docker to create a container. When I enter the container, I can't see the overlay-related mounts. This is normal:
root@django-work:~/test# docker run -it ubuntu /bin/bash
root@52110483ac09:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 49G 17G 30G 36% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda3 49G 17G 30G 36% /etc/hosts
tmpfs 3.9G 0 3.9G 0% /proc/asound
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
root@52110483ac09:/#
But why can I see the complete mount information when I create a PID namespace and mount namespace with unshare?
root@django-work:~# df -h
tmpfs 793M 2.0M 791M 1% /run
/dev/sda3 49G 17G 30G 36% /
tmpfs 3.9G 32M 3.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 512M 5.3M 507M 2% /boot/efi
tmpfs 793M 5.8M 787M 1% /run/user/1000
overlay 49G 17G 30G 36% /var/lib/docker/overlay2/dddc4d45086e3b814fe5589af09becc35cfa7cf4cce1a8fc82a930fba94a70ed/merged
root@django-work:~# unshare --pid --fork --mount-proc /bin/bash
root@django-work:~# df -h
/dev/sda3 49G 17G 30G 36% /
tmpfs 3.9G 32M 3.9G 1% /dev/shm
tmpfs 793M 2.0M 791M 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 793M 5.8M 787M 1% /run/user/1000
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 512M 5.3M 507M 2% /boot/efi
overlay 49G 17G 30G 36% /var/lib/docker/overlay2/dddc4d45086e3b814fe5589af09becc35cfa7cf4cce1a8fc82a930fba94a70ed/merged
root@django-work:~#

From the man page of mount_namespaces:
If the namespace is created using unshare(2), the mount list of the new namespace is a copy of the mount list in the caller's previous mount namespace.
You see the same output because when you run unshare you initially get a copy of the caller's mount list; if you then make changes in either namespace, those changes won't be visible in the other. Docker looks different because, besides unsharing the mount namespace, it pivots the container's root onto the image's overlay filesystem, so the host's mounts disappear from the container's view.
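You can watch the copy diverge. A minimal sketch (run the two sides in separate terminals; note that util-linux's unshare sets mount propagation to private by default in the new namespace):
root@django-work:~# unshare --mount /bin/bash
root@django-work:~# mount -t tmpfs tmpfs /mnt
root@django-work:~# df -h /mnt        # shows the new tmpfs inside the namespace
Meanwhile, from a second terminal still in the host's mount namespace:
root@django-work:~# df -h /mnt        # still shows /dev/sda3; the mount did not propagate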

Related

How to use sdb when sda is full

I am a beginner with Linux, but I need to install a blockchain node on an Ubuntu server (4 TB SSD).
sda6 is only 500 GB and sdb is 3.5 TB, so I have to use sdb once sda is full.
root@UK270-1G:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.4M 6.3G 1% /run
/dev/sda6 437G 2.2G 413G 1% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda5 1.9G 81M 1.8G 5% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0
root@UK270-1G:~#
sdb is not mounted yet, but for my problem I want to understand the underlying principle and need detailed instructions, since I am a Linux beginner.
Thanks in advance.
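For reference, the usual approach is to partition, format, and mount sdb at a directory the node can use. A minimal sketch (the /data mount point is an assumption; verify the device with lsblk first, since mkfs destroys any existing data):
root@UK270-1G:~# lsblk                          # confirm sdb is the empty 3.5T disk
root@UK270-1G:~# parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
root@UK270-1G:~# mkfs.ext4 /dev/sdb1
root@UK270-1G:~# mkdir -p /data
root@UK270-1G:~# mount /dev/sdb1 /data
root@UK270-1G:~# echo '/dev/sdb1 /data ext4 defaults 0 2' >> /etc/fstab   # mount at boot
Then point the blockchain node's data directory at /data.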

How to create partitions for directories on a Linux server

Kindly make the partitions as shown below.
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 0 12G 0% /dev
tmpfs 12G 0 12G 0% /dev/shm
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/mapper/ol-root 300G 96G 205G 32% /
/dev/mapper/ol-var 13G 1.7G 11G 14% /var
/dev/mapper/ol-home 40G 12G 29G 29% /home
/dev/mapper/ol-tmp 10G 33M 10G 1% /tmp
/dev/sda1 497M 311M 187M 63% /boot
/dev/mapper/ol-var_log 10G 330M 9.7G 4% /var/log
/dev/mapper/ol-var_log_audit 5.0G 68M 5.0G 2% /var/log/audit
tmpfs 2.4G 0 2.4G 0% /run/user/1000
My new server shows the layout below; I want to partition it the same as above: the root directory should have 220+ GB, home 40 GB, tmp 10 GB, and var 13 GB.
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 0 12G 0% /dev
tmpfs 12G 0 12G 0% /dev/shm
tmpfs 12G 8.7M 12G 1% /run
tmpfs 12G 0 12G 0% /sys/fs/cgroup
/dev/mapper/ol-root 50G 5.3G 45G 11% /
/dev/mapper/ol-home 328G 109M 327G 1% /home
/dev/sda1 1014M 251M 764M 25% /boot
tmpfs 2.4G 0 2.4G 0% /run/user/1003
Please help, thanks in advance.
The steps are (a command sketch follows the list):
1. Log in as root.
2. Unmount and destroy the LV ol-home (back up any files in the home directory first).
3. Extend the LV ol-root and the / filesystem to 300 GB.
4. Create LVs ol-var, ol-home, and ol-tmp with the relevant sizes and create filesystems on them.
5. Edit /etc/fstab to mount the filesystems on startup.
Voilà.
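A hedged sketch of those commands (the volume group name ol is inferred from the /dev/mapper/ol-* device names, and xfs filesystems are assumed; verify with vgs and lvs, and back up /home before starting):
umount /home
lvremove /dev/ol/home
lvextend -r -L 300G /dev/ol/root        # -r also grows the mounted / filesystem
lvcreate -L 13G -n var ol  && mkfs.xfs /dev/ol/var
lvcreate -L 40G -n home ol && mkfs.xfs /dev/ol/home
lvcreate -L 10G -n tmp ol  && mkfs.xfs /dev/ol/tmp
Copy the current /var contents onto the new LV before switching the mount, then add the ol-var, ol-home, and ol-tmp entries to /etc/fstab and run mount -a.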

Temp folder runs out of inodes

I have a LiteSpeed server with a WordPress/WooCommerce website on it.
Every few days my /tmp folder runs out of inodes, which effectively takes the website down.
Here is what I get after running df -i; note the /dev/loop0 line.
root@openlitespeed:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 501653 385 501268 1% /dev
tmpfs 504909 573 504336 1% /run
/dev/vda1 7741440 888567 6852873 12% /
tmpfs 504909 6 504903 1% /dev/shm
tmpfs 504909 3 504906 1% /run/lock
tmpfs 504909 18 504891 1% /sys/fs/cgroup
/dev/vda15 0 0 0 - /boot/efi
/dev/loop0 96000 78235 17765 82% /tmp
tmpfs 504909 11 504898 1% /run/user/0
The output from df -h:
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 616K 394M 1% /run
/dev/vda1 58G 43G 16G 74% /
tmpfs 2.0G 128K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda15 105M 3.6M 101M 4% /boot/efi
/dev/loop0 1.5G 13M 1.4G 1% /tmp
tmpfs 395M 0 395M 0% /run/user/0
The folder is full of "sess_" files.
Right now I constantly monitor the folder and run:
find /tmp/ -mindepth 1 -mtime +5 -delete
which helps somewhat, but is not ideal.
How can I reconfigure things to increase the inode count?
Where should I look?
Edit:
I have Redis, and it's enabled in the LiteSpeed Cache plugin.
It's all on a DigitalOcean VPS with 2 CPUs and 4 GB RAM, running Ubuntu 18.04.
Why does it have only 96k inodes at a size of 1.5 GB, when tmpfs has far more?
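The 96k figure is consistent with ext4's default of one inode per 16 KiB of filesystem size (1.5 GiB / 16 KiB ≈ 98k), whereas tmpfs allocates inodes dynamically against RAM, which is why it shows far more. One fix is to recreate the loop-mounted image with a smaller bytes-per-inode ratio. A sketch, assuming /tmp is backed by an image file (the path below is hypothetical; find the real one with losetup -a, and note that mkfs wipes /tmp's contents):
root@openlitespeed:~# losetup -a                        # shows which file backs /dev/loop0
root@openlitespeed:~# umount /tmp
root@openlitespeed:~# mkfs.ext4 -i 4096 /usr/tmpdisk    # hypothetical backing file; ~4x the default inode count
root@openlitespeed:~# mount -o loop /usr/tmpdisk /tmp
root@openlitespeed:~# chmod 1777 /tmp                   # restore the sticky, world-writable mode
Longer term, pointing PHP's session.save_path somewhere with more inodes, or relying on PHP's session garbage collection, would address the flood of sess_ files at the source.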

How to combine two partitions in CentOS

When I run the df command, it shows:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 38G 135M 100% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdc1 99G 60M 94G 1% /mnt/onapp-disk
tmpfs 395M 0 395M 0% /run/user/0
Now I try to install GNOME Desktop (about 1.5 GB) with the command
yum groupinstall "GNOME DESKTOP"
but it prompts an error:
Disk Requirements:
At least 1496MB more space needed on the / filesystem.
It is not using the vdc1 partition.
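You can't merge vda1 and vdc1 into a single partition without repartitioning, but you can shift a space-hungry tree onto vdc1 (already mounted at /mnt/onapp-disk) and bind it into place. A hedged sketch using /var as the example (copy first and verify; note that actually reclaiming the old blocks on / requires deleting the originals, e.g. from single-user mode):
rsync -a /var/ /mnt/onapp-disk/var/
mount --bind /mnt/onapp-disk/var /var
echo '/mnt/onapp-disk/var /var none bind 0 0' >> /etc/fstab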

How to understand docker container disk space usage?

My host disk space usage is like this:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 50G 31G 20G 61% /
devtmpfs 5.8G 0 5.8G 0% /dev
tmpfs 5.8G 84K 5.8G 1% /dev/shm
tmpfs 5.8G 9.0M 5.8G 1% /run
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/mapper/rhel-home 1.3T 5.4G 1.3T 1% /home
/dev/sda2 497M 212M 285M 43% /boot
/dev/sda1 200M 9.5M 191M 5% /boot/efi
tmpfs 1.2G 16K 1.2G 1% /run/user/42
tmpfs 1.2G 0 1.2G 0% /run/user/0
After starting a docker container, the disk usage inside this container is as follows:
# docker run -it mstormo/suse bash
606759b37afb:/ # df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 99G 231M 94G 1% /
/dev/mapper/docker-253:0-137562709-606759b37afb809fe9224ac2210252ee1da71f9c0b315ff9ef570ad9c0adb16c 99G 231M 94G 1% /
tmpfs 5.8G 0 5.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 5.8G 96K 5.8G 1% /run/secrets
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/resolv.conf
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/hostname
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/hosts
tmpfs 5.8G 0 5.8G 0% /proc/kcore
tmpfs 5.8G 0 5.8G 0% /proc/timer_stats
I have 2 questions about container disk usage:
(1) Do the tmpfs and /dev/mapper/rhel-root mounts in the container share the same disk/memory space with the host directly?
(2) Where does the container's rootfs exist? As printed, it has 99G, so can I use all 99G of disk space?
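Some context, hedged: the /dev/mapper/rhel-root lines are the host's root filesystem bind-mounted over /etc/hosts, /etc/hostname, and /etc/resolv.conf, so those do share the host's disk, and the tmpfs mounts live in the host's RAM. The 99G rootfs is the devicemapper storage driver's thin-provisioned base device size (the dm.basesize option), not a promise of 99G of real free space; writes can fail earlier if the backing pool fills. On a reasonably recent Docker you can inspect where it actually lives:
# docker info
# docker inspect --format '{{json .GraphDriver}}' 606759b37afb
# ls -lh /var/lib/docker/devicemapper/devicemapper/
The first shows the storage driver and its data/metadata files, the second shows the driver state for this container, and the last lists the sparse backing files used in the default loop-lvm mode.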
