Which value does davfs reference to show capacity? (Linux)

I know there is no way to determine the real size of a volume through the WebDAV protocol, so MS Windows simply reports the same size as the system drive (usually C:).
ref: https://support.microsoft.com/en-us/kb/2386902
So which value does davfs on Ubuntu 14.04 reference?
In my case:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 46G 22G 22G 50% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 395M 1.5M 394M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 152K 2.0G 1% /run/shm
none 100M 52K 100M 1% /run/user
http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee 26G 13G 13G 50% /home/jin/mount/webdavTest
The disk is 50G and the root partition (sda1) is 46G, but the total size reported for the WebDAV mount is 26G with 13G used.
I can't work out what rule is used to compute the WebDAV size, and I couldn't find any documentation about it.
Does anyone know?

There is actually a way for a server to communicate the real available space, via the quota properties defined in RFC 4331:
https://www.rfc-editor.org/rfc/rfc4331
They are fairly widely supported, apparently just not by Windows.
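If you want to check whether a particular server exposes these properties, here is a minimal sketch with curl (the URL is the same share as in the df output above; DAV:quota-available-bytes and DAV:quota-used-bytes are the two properties RFC 4331 defines):

$ curl -s -X PROPFIND -H "Depth: 0" -H "Content-Type: text/xml" \
    --data '<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:quota-available-bytes/>
    <D:quota-used-bytes/>
  </D:prop>
</D:propfind>' \
    http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee/

If the response contains both properties, a client such as davfs2 has real numbers to base Size/Used/Avail on; if the server omits them, the client has no choice but to invent values.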

Related

How to use SDB when sda is full

I am a beginner with Linux, but I need to install a blockchain node on an Ubuntu server (4T SSD).
sda6 is only 500G and sdb is 3.5T, so I have to use sdb once sda is full.
root@UK270-1G:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 1.4M 6.3G 1% /run
/dev/sda6 437G 2.2G 413G 1% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda5 1.9G 81M 1.8G 5% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0
root@UK270-1G:~#
sdb is not mounted yet. For my problem I would like to understand the underlying principle and get detailed instructions, since I am a Linux beginner.
Thanks in advance.
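A minimal sketch of the usual approach, assuming /dev/sdb is empty and you want its space available at a new mount point such as /data (the /data name and the choice of ext4 are assumptions, not requirements):

# partition the whole disk, create a filesystem, and mount it
parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1
mkdir -p /data
mount /dev/sdb1 /data
# make the mount persistent across reboots
echo '/dev/sdb1 /data ext4 defaults 0 2' >> /etc/fstab

You would then point the blockchain node's data directory at /data from the start, rather than waiting for sda to fill up.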

/dev/mapper/RHELCSB-Home marked as full when it is not after verification

I was trying to copy a 1.5 GiB file from one location to another and was warned that my disk space was full, so I checked with df -h, which gave the following output:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 114M 16G 1% /dev/shm
tmpfs 16G 2.0M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root 50G 11G 40G 21% /
/dev/nvme0n1p2 3.0G 436M 2.6G 15% /boot
/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi
/dev/mapper/RHELCSB-Home 100G 100G 438M 100% /home
tmpfs 3.1G 88K 3.1G 1% /run/user/4204967
where /dev/mapper/RHELCSB-Home seemed to cause the issue. But when running sudo du -xsh /dev/mapper/RHELCSB-Home, I got the following result:
0 /dev/mapper/RHELCSB-Home
and the same for /dev/ and /dev/mapper/. After researching the issue, I suspected it might be caused by undeleted log files in /var/log/, but the total size of the files there is nowhere near 100 GiB. What could be filling my disk?
Additional context: I was running a local PostgreSQL database when this happened, but I can't see how that relates to the issue, as the postgres log files are not taking up much space either.
The issue was solved by deleting podman container volumes in ~/.local/share/containers/
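Two notes on the confusing numbers above. du -xsh /dev/mapper/RHELCSB-Home reports 0 because that path is the device node, not the data; du has to be pointed at the mount point (/home). And since the space turned out to be held by podman, podman can report and reclaim its own storage. A sketch (the --max-depth and head values are just convenient choices):

# find what is actually consuming space on the /home filesystem
sudo du -xh --max-depth=2 /home | sort -rh | head -20
# let podman report its image/container/volume usage and reclaim unused volumes
podman system df
podman volume prune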

Temp folder runs out of inodes

I have a LiteSpeed server with a WordPress/WooCommerce website on it.
Every few days my /tmp folder runs out of inodes, which effectively takes the website down.
Here is what I get after running df -i; please note the /dev/loop0 line.
root@openlitespeed:~# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 501653 385 501268 1% /dev
tmpfs 504909 573 504336 1% /run
/dev/vda1 7741440 888567 6852873 12% /
tmpfs 504909 6 504903 1% /dev/shm
tmpfs 504909 3 504906 1% /run/lock
tmpfs 504909 18 504891 1% /sys/fs/cgroup
/dev/vda15 0 0 0 - /boot/efi
/dev/loop0 96000 78235 17765 82% /tmp
tmpfs 504909 11 504898 1% /run/user/0
The output from df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 616K 394M 1% /run
/dev/vda1 58G 43G 16G 74% /
tmpfs 2.0G 128K 2.0G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda15 105M 3.6M 101M 4% /boot/efi
/dev/loop0 1.5G 13M 1.4G 1% /tmp
tmpfs 395M 0 395M 0% /run/user/0
The folder is full of "sess_" files.
Right now I constantly monitor the folder and run:
find /tmp/ -mindepth 1 -mtime +5 -delete
which helps somewhat, but is not ideal.
How can I reconfigure things to increase the number of inodes?
Where should I look?
Edit:
I have Redis, and it's enabled in the LiteSpeed Cache plugin.
It's all on a DigitalOcean VPS with 2 CPUs and 4 GB RAM, running Ubuntu 18.04.
Why does /tmp have only 96k inodes at a size of 1.5G, when tmpfs has far more?
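On that last point: /tmp here is not tmpfs but an ext filesystem image mounted via /dev/loop0, and ext2/3/4 fix their inode count at mkfs time (by default roughly one inode per 16 KB, which gives about 96k inodes for 1.5G), while tmpfs sizes its inode table from RAM. A sketch of rebuilding the loop filesystem with more inodes; /usr/.tmpdisk is a placeholder for the real backing file, and this wipes /tmp, so do it in a maintenance window:

losetup -a                              # find the file backing /dev/loop0 (placeholder below: /usr/.tmpdisk)
umount /tmp
dd if=/dev/zero of=/usr/.tmpdisk bs=1M count=1536
mkfs.ext4 -N 400000 /usr/.tmpdisk       # request ~400k inodes explicitly
mount -o loop,noexec,nosuid /usr/.tmpdisk /tmp
chmod 1777 /tmp

A less brute-force option, since Redis is already running, is to move PHP sessions out of /tmp entirely, e.g. session.save_handler = redis and session.save_path = "tcp://127.0.0.1:6379" in php.ini (this needs the phpredis extension), so the sess_ files never accumulate there in the first place.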

How to combine two partitions in CentOS

When I run df, it shows:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 38G 135M 100% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 17M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vdc1 99G 60M 94G 1% /mnt/onapp-disk
tmpfs 395M 0 395M 0% /run/user/0
Now when I try to install GNOME Desktop (about 1.5 GB) using:
yum groupinstall "GNOME DESKTOP"
it fails with the error:
Disk Requirements:
At least 1496MB more space needed on the / filesystem.
It is not using the vdc1 partition.
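/ and /mnt/onapp-disk are separate filesystems, so yum can only use space on /; you cannot merge vda1 and vdc1 into one partition without LVM or repartitioning, but you can move data onto vdc1 to free the root filesystem. A sketch, using /home as an example directory to relocate (pick whichever directory is actually consuming space on /):

# copy the directory to the larger disk, then bind-mount it back in place
mkdir -p /mnt/onapp-disk/home
rsync -a /home/ /mnt/onapp-disk/home/
rm -rf /home/*
echo '/mnt/onapp-disk/home /home none bind 0 0' >> /etc/fstab
mount /home

After that, df should show free space on / again and the groupinstall can proceed.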

How do I create an XFS volume out of root volume on EC2?

I've created a new EC2 instance and am setting up a bunch of software on it. MongoDB 3.2's production checklist suggests installing it on an XFS (or ext4) volume. How do I create a volume of, say, 15 GB out of /dev/xvda1, format it as XFS using mkfs, and then mount it? Here's the output of df -h right now:
udev 492M 12K 492M 1% /dev
tmpfs 100M 340K 99M 1% /run
/dev/xvda1 30G 2.5G 26G 9% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
OS is Ubuntu 12.04 LTS
Does it have to be the root partition?
If not, you can simply create a new volume in the AWS EC2 UI and attach it to the instance. It will show up as e.g. /dev/xvdf and you can format and mount it.
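A sketch of the attach-and-format route, assuming the new EBS volume appears as /dev/xvdf and the data will live at /data/db (both names are assumptions; check lsblk for the actual device):

# xfsprogs is not installed by default on Ubuntu 12.04
sudo apt-get install xfsprogs
# format the new volume as XFS and mount it
sudo mkfs.xfs /dev/xvdf
sudo mkdir -p /data/db
sudo mount /dev/xvdf /data/db
# persist the mount across reboots
echo '/dev/xvdf /data/db xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab

Carving 15 GB out of /dev/xvda1 itself would mean shrinking and repartitioning the root volume, which is considerably more work than attaching a second volume.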
