How to combine two partitions in CentOS - partition

When I run the command df, it shows:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 38G 135M 100% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdc1 99G 60M 94G 1% /mnt/onapp-disk
tmpfs 395M 0 395M 0% /run/user/0
Now when I try to install GNOME Desktop (about 1.5 GB) using the command
yum groupinstall "GNOME DESKTOP"
it prompts an error:
Disk Requirements:
At least 1496MB more space needed on the / filesystem.
It is not using the vdc1 partition.
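One common workaround, rather than truly combining the two partitions (which would need LVM or repartitioning), is to relocate a space-hungry directory from / onto the larger disk and bind-mount it back, so the install stops filling /dev/vda1. A rough sketch, assuming /mnt/onapp-disk stays mounted; /var/cache/yum is only an illustrative choice of directory to move:
# Move yum's download cache onto the big disk and bind-mount it back in place
mkdir -p /mnt/onapp-disk/var-cache-yum
rsync -a /var/cache/yum/ /mnt/onapp-disk/var-cache-yum/
rm -rf /var/cache/yum/*
mount --bind /mnt/onapp-disk/var-cache-yum /var/cache/yum
# Make the bind mount survive reboots
echo '/mnt/onapp-disk/var-cache-yum /var/cache/yum none bind 0 0' >> /etc/fstab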

Related

How to use SDB when sda is full

I am a beginner with Linux, but I need to install a blockchain node on an Ubuntu server (4T SSD).
SDA6 is only 500G and SDB is 3.5T, so I have to use SDB when sda is full.
root@UK270-1G:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.4M 6.3G 1% /run
/dev/sda6 437G 2.2G 413G 1% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda5 1.9G 81M 1.8G 5% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0
root@UK270-1G:~#
SDB is not mounted yet. For my problem, I want to know the underlying principle and need detailed instructions, since I am a beginner with Linux.
Thanks in advance.
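The usual route is to partition and format sdb, mount it where the node will keep its data, and record the mount in /etc/fstab so it survives reboots. A sketch under those assumptions; the single whole-disk partition and the /data mount point are just examples, and partitioning erases anything already on sdb:
# Create one partition covering the whole disk (destroys existing data on sdb)
parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%
# Format the new partition and mount it
mkfs.ext4 /dev/sdb1
mkdir -p /data
mount /dev/sdb1 /data
# Persist the mount across reboots, keyed by UUID rather than device name
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /data ext4 defaults 0 2" >> /etc/fstab
Then configure the blockchain node to store its data under /data (or symlink its existing data directory there).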

Linux Docker mount namespace

I use Docker to create a container. When I enter the container, I can't see the host's overlay-related mounts. This is normal:
root@django-work:~/test# docker run -it ubuntu /bin/bash
root@52110483ac09:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 49G 17G 30G 36% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/sda3 49G 17G 30G 36% /etc/hosts
tmpfs 3.9G 0 3.9G 0% /proc/asound
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
root@52110483ac09:/#
But why can I see the complete mount information when I create a PID namespace and mount namespace through unshare?
root@django-work:~# df -h
tmpfs 793M 2.0M 791M 1% /run
/dev/sda3 49G 17G 30G 36% /
tmpfs 3.9G 32M 3.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 512M 5.3M 507M 2% /boot/efi
tmpfs 793M 5.8M 787M 1% /run/user/1000
overlay 49G 17G 30G 36% /var/lib/docker/overlay2/dddc4d45086e3b814fe5589af09becc35cfa7cf4cce1a8fc82a930fba94a70ed/merged
root@django-work:~# unshare --pid --fork --mount-proc /bin/bash
root@django-work:~# df -h
/dev/sda3 49G 17G 30G 36% /
tmpfs 3.9G 32M 3.9G 1% /dev/shm
tmpfs 793M 2.0M 791M 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 793M 5.8M 787M 1% /run/user/1000
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 512M 5.3M 507M 2% /boot/efi
overlay 49G 17G 30G 36% /var/lib/docker/overlay2/dddc4d45086e3b814fe5589af09becc35cfa7cf4cce1a8fc82a930fba94a70ed/merged
root@django-work:~#
From the man page of mount_namespaces:
If the namespace is created using unshare(2), the mount list of the new namespace is a copy of the mount list in the caller's previous mount namespace.
You see the same output because when you do unshare you initially get a copy of the mount list. But if you make any changes in either namespace, those changes won't be visible in the other.
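A quick way to see this copy-then-diverge behaviour (a sketch on the same host; the prompts just mirror the question) is to add a mount inside the new namespace and confirm the original namespace never sees it:
# Terminal 1: new PID + mount namespace, then add a mount inside it
root@django-work:~# unshare --pid --fork --mount-proc /bin/bash
root@django-work:~# mount -t tmpfs tmpfs /mnt
root@django-work:~# df -h /mnt    # the tmpfs shows up here
# Terminal 2: the original namespace
root@django-work:~# df -h /mnt    # no tmpfs here - the two mount lists have diverged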

Temp folder runs out of inodes

I have a LiteSpeed server with a WordPress/WooCommerce website on it.
Every few days my /tmp folder runs out of inodes, which effectively disables the website.
Here is what I get after running df -i; please note the /dev/loop0 line.
root@openlitespeed:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 501653 385 501268 1% /dev
tmpfs 504909 573 504336 1% /run
/dev/vda1 7741440 888567 6852873 12% /
tmpfs 504909 6 504903 1% /dev/shm
tmpfs 504909 3 504906 1% /run/lock
tmpfs 504909 18 504891 1% /sys/fs/cgroup
/dev/vda15 0 0 0 - /boot/efi
/dev/loop0 96000 78235 17765 82% /tmp
tmpfs 504909 11 504898 1% /run/user/0
The output from df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 616K 394M 1% /run
/dev/vda1 58G 43G 16G 74% /
tmpfs 2.0G 128K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda15 105M 3.6M 101M 4% /boot/efi
/dev/loop0 1.5G 13M 1.4G 1% /tmp
tmpfs 395M 0 395M 0% /run/user/0
The folder is full of "sess_" files.
Right now I constantly monitor the folder and issue:
find /tmp/ -mindepth 1 -mtime +5 -delete
which helps somewhat, but is not ideal.
How can I reconfigure this to increase the number of inodes?
Where should I look?
Edit:
I have Redis, and it's enabled in the LiteSpeed Cache plugin.
It's all on a DigitalOcean VPS with 2 CPUs and 4 GB RAM, running Ubuntu 18.04.
Why does it have only 96k inodes at a size of 1.5G, when tmpfs has far more?
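The inode count of an ext filesystem is fixed when the filesystem is created, so the 1.5G loop image behind /dev/loop0 got its ~96k inodes at mkfs time; tmpfs sizes its inode table from RAM, which is why it shows far more. One way out is to rebuild the image with an explicit inode count. A sketch only: /tmpdisk is a placeholder path (check losetup -l for the real backing file), and /tmp must not be in use while you do this.
# Find the file backing /dev/loop0; the path used below is only an example
losetup -l
# Recreate /tmp's loop image with many more inodes and remount it
umount /tmp
dd if=/dev/zero of=/tmpdisk bs=1M count=1536
mkfs.ext4 -F -N 500000 /tmpdisk
mount -o loop,noexec,nosuid /tmpdisk /tmp
chmod 1777 /tmp
Since Redis is already running, another option is to stop PHP from writing sess_ files to /tmp at all by setting session.save_handler to redis (requires the phpredis extension), which sidesteps the inode limit entirely.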

How to understand docker container disk space usage?

My host disk space usage is like this:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 50G 31G 20G 61% /
devtmpfs 5.8G 0 5.8G 0% /dev
tmpfs 5.8G 84K 5.8G 1% /dev/shm
tmpfs 5.8G 9.0M 5.8G 1% /run
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/mapper/rhel-home 1.3T 5.4G 1.3T 1% /home
/dev/sda2 497M 212M 285M 43% /boot
/dev/sda1 200M 9.5M 191M 5% /boot/efi
tmpfs 1.2G 16K 1.2G 1% /run/user/42
tmpfs 1.2G 0 1.2G 0% /run/user/0
After starting a Docker container, the disk usage of this container is as follows:
# docker run -it mstormo/suse bash
606759b37afb:/ # df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 99G 231M 94G 1% /
/dev/mapper/docker-253:0-137562709-606759b37afb809fe9224ac2210252ee1da71f9c0b315ff9ef570ad9c0adb16c 99G 231M 94G 1% /
tmpfs 5.8G 0 5.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
tmpfs 5.8G 96K 5.8G 1% /run/secrets
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/resolv.conf
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/hostname
/dev/mapper/rhel-root 50G 31G 20G 61% /etc/hosts
tmpfs 5.8G 0 5.8G 0% /proc/kcore
tmpfs 5.8G 0 5.8G 0% /proc/timer_stats
I have 2 questions about container disk usage:
(1) Do the tmpfs and /dev/mapper/rhel-root mounts in the container share the same disk/memory space with the host directly?
(2) For the container's rootfs, where does this file system exist? As printed, it has 99G, so can I use all 99G of disk space?
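Most of this can be checked from the host. The 99G rootfs is the devicemapper storage driver's thin-provisioned base device, not a separate physical disk, so the full 99G is only usable if the underlying pool really has that much free space; the tmpfs mounts live in host memory, and /etc/hosts, /etc/hostname and /etc/resolv.conf are bind mounts from the host's rhel-root filesystem. A sketch for confirming this (exact output fields vary with the Docker version):
# Storage driver details, including the base device size and thin pool usage
docker info | grep -iA15 'storage driver'
# Disk usage as Docker accounts for it, per image and per container (newer releases)
docker system df -v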

Which value is referenced to show capacity by davfs?

I know there is no way to know the real size of a volume through the WebDAV protocol,
so MS Windows shows the same size as the system drive (usually C:).
Ref: https://support.microsoft.com/en-us/kb/2386902
Then which value is referenced by davfs in Ubuntu 14.04?
In my case:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 46G 22G 22G 50% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 395M 1.5M 394M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 152K 2.0G 1% /run/shm
none 100M 52K 100M 1% /run/user
http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee 26G 13G 13G 50% /home/jin/mount/webdavTest
I use a 50G disk and the root partition (sda1) is 46G, but the total size of the WebDAV mount is 26G with 13G used.
I can't determine what rule was used to compute the WebDAV size, and I couldn't find any documentation about this anywhere.
Does anyone know about this?
There is actually a way to communicate the real available space via the quota properties:
https://www.rfc-editor.org/rfc/rfc4331
It's pretty widely supported, apparently just not by Windows.
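A client that does support RFC 4331 can request those properties explicitly. A sketch using curl against the URL from the question (authentication omitted, and the server has to implement the quota properties for this to return anything useful):
curl -s -X PROPFIND -H 'Depth: 0' -H 'Content-Type: application/xml' \
  --data '<?xml version="1.0"?><D:propfind xmlns:D="DAV:"><D:prop><D:quota-available-bytes/><D:quota-used-bytes/></D:prop></D:propfind>' \
  'http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee/'
The response is a 207 Multi-Status document containing DAV:quota-available-bytes and DAV:quota-used-bytes, which is what a WebDAV filesystem client can use to fill in the sizes shown by df.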
