Reserved, used, available, total disk space in Linux

Is there any single Linux command or single system call through which I can get all four pieces of information (total, used, free, reserved)?
I have checked the following:
df: does not give reserved disk space
stat(): does not give reserved disk space
statfs(): gives total and free only
I tried using "tune2fs -l /dev/vda1" for the reserved space, but there is some discrepancy between the outputs of tune2fs and df: the total does not come out as the sum of used, free and reserved.

Do the calculation from the df output:
reserved = total - used - free
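For what it's worth, a single statvfs() call also gives you enough to derive all four numbers, because f_bfree counts all free blocks while f_bavail excludes the blocks reserved for root. A minimal sketch (the path "/" is just an example):
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs vfs;
    if (statvfs("/", &vfs) != 0) {   /* any path on the filesystem works */
        perror("statvfs");
        return 1;
    }
    unsigned long long bs = vfs.f_frsize;              /* fragment size in bytes */
    unsigned long long total     = vfs.f_blocks * bs;
    unsigned long long available = vfs.f_bavail * bs;  /* what df calls "Available" */
    unsigned long long reserved  = (vfs.f_bfree - vfs.f_bavail) * bs;  /* root-only blocks */
    unsigned long long used      = total - vfs.f_bfree * bs;
    printf("total=%llu used=%llu available=%llu reserved=%llu\n",
           total, used, available, reserved);
    return 0;
}
Note that used + available + reserved adds up to total here, which is exactly the relationship the df-based calculation above relies on.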

Related

How is the memory used value derived in check_snmp_mem.pl?

I was configuring Icinga 2 to get memory usage information from a Linux client using the check_snmp_mem.pl script. Any idea how the memory used value is derived in this script?
Here is the free command output:
# free
              total     used      free   shared  buff/cache  available
Mem:         500016    59160     89564     3036      351292     408972
Swap:       1048572     4092   1044480
whereas the performance data shown in the Icinga dashboard is:
Label       Value         Max           Warning     Critical
ram_used    137,700.00    500,016.00    470,015.00  490,016.00
swap_used   4,092.00      1,048,572.00  524,286.00  838,858.00
Looking through the source code, ram_used is set, for example, in this line:
$n_output .= " | ram_used=" . ($$resultat{$nets_ram_total}-$$resultat{$nets_ram_free}-$$resultat{$nets_ram_cache}).";";
This strongly suggests that ram_used is calculated as the total RAM minus the free RAM minus the RAM used for cache. These values are retrieved via the following SNMP OIDs:
my $nets_ram_free = "1.3.6.1.4.1.2021.4.6.0"; # Real memory free
my $nets_ram_total = "1.3.6.1.4.1.2021.4.5.0"; # Real memory total
my $nets_ram_cache = "1.3.6.1.4.1.2021.4.15.0"; # Real memory cached
I don't know how they correlate to the output of free. The difference between the free memory reported by free and the memory reported to Icinga is 48136, so maybe you can find that number somewhere.
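As a purely illustrative sanity check (assuming, hypothetically, that the SNMP agent returned the same numbers that free prints), the formula from that line reproduces free's own "used" column rather than the 137,700 shown in the dashboard, which is consistent with the SNMP values not mapping one-to-one onto free's columns:
#include <stdio.h>

int main(void) {
    /* Values copied from the free output in the question, in kB (illustration
       only; the SNMP agent may well report different numbers). */
    long total = 500016, free_kb = 89564, cache = 351292;

    /* Same arithmetic as the Perl line: total - free - cache. */
    printf("ram_used = %ld kB\n", total - free_kb - cache);  /* prints 59160 */
    return 0;
}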

Linux `top` command: how much process memory is physically stored in swap space?

Let's say I run my program on a 64-bit Linux machine with 64 GB of RAM. In my very small C program, immediately after the start, I do
void *p = sbrk(1024ull * 1024 * 1024 * 120);
thus moving my data segment break forward by 120 GB.
After the above sbrk call top entry for my process shows RES at some low value, VIRT at 120g, and SWAP at 120g.
After this operation I write something into the first 90 GB of the above region:
memset(p, 0xAB, 1024ull * 1024 * 1024 * 90);
This causes some changes in the top entry for my process: VIRT expectedly remains at 120g, RES becomes almost 64g, SWAP drops to around 56g.
The global Swap stats in the header of the top output show that swap file usage increases, which is expected since my program will have to push about 26 GB of memory pages into the swap file.
So, according to the above observations, the SWAP column simply reports my process's non-RES address space, regardless of whether this address space has been "materialized", i.e. regardless of whether I already wrote something into that region of virtual memory.
But is there any way to figure out how much of that SWAP size has actually been "materialized" and backed by something stored in the swap file? I.e. is there any way to make top display that 26 GB value for my process?
The behavior depends on the version of procps you are using. For instance, in version 3.0.5 the SWAP value equals:
task->size - task->resident
which is exactly what you are encountering. The top(1) man page says:
VIRT = SWAP + RES
Procps-ng, however, reads /proc/pid/status and sets SWAP correctly:
https://gitlab.com/procps-ng/procps/blob/master/proc/readproc.c#L383
So you can either update procps or read /proc/pid/status directly.
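If you just want the number without upgrading procps, a minimal sketch that pulls the VmSwap line out of /proc/<pid>/status (using "self" here only as an example; substitute the PID you care about):
#include <stdio.h>
#include <string.h>

int main(void) {
    /* /proc/self/status is used as an example path; any /proc/<pid>/status works */
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* VmSwap is the amount of this process's memory actually swapped out */
        if (strncmp(line, "VmSwap:", 7) == 0) {
            fputs(line, stdout);
            break;
        }
    }
    fclose(f);
    return 0;
}
For the scenario above you would expect this to report roughly the 26 GB that actually got pushed out, rather than the whole non-resident 56 GB.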

Intel NVMe drive Performance degradation with xfs filesystem with sector size other than 4096

I am working with an NVMe card on Linux (Ubuntu 14.04).
I am finding some performance degradation for the Intel NVMe card when it is formatted with the XFS filesystem with its default sector size (512), or any other sector size less than 4096.
In the experiment I formatted the card with the XFS filesystem with default options and ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used:
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could only get the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation in the following cases:
4k page size
Any fio block size smaller than 64k
ext4 with its default configuration
What could be the issue? Is this an alignment issue? What am I missing here? Any help is appreciated.
The issue is your SSD's native sector size is 4K. So your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span 2 sectors, and therefore require 2 sector reads to return 1 block (instead of 1 read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
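If you want to confirm what sector sizes the kernel reports for the drive before formatting, a minimal sketch using the block-device ioctls (the device path is only an example, and it needs read access to the device):
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void) {
    /* /dev/nvme0n1 is an example path; point this at your device */
    int fd = open("/dev/nvme0n1", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;
    unsigned int physical = 0;
    ioctl(fd, BLKSSZGET, &logical);    /* logical sector size in bytes */
    ioctl(fd, BLKPBSZGET, &physical);  /* physical sector size in bytes */

    printf("logical=%d physical=%u\n", logical, physical);
    close(fd);
    return 0;
}
blockdev --getss and blockdev --getpbsz report the same two values from the command line; if the physical size comes back as 4096, matching the filesystem sector size to it (as with mkfs.xfs -s size=4096 above) is exactly the alignment fix described in this answer.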

How to get rid of "Some devices missing" in BTRFS after reuse of devices?

I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But now I have annoying messages referring to some old filesystems from my previous experimentation that I have actually nuked. I worry that this means BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x' uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
Total devices 3 FS bytes used 159.20GB
devid 3 size 931.00GB used 134.01GB path /dev/sde
*** Some devices missing
Label: 'root' uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
Total devices 4 FS bytes used 1.13GB
devid 4 size 931.51GB used 1.03GB path /dev/sdd
devid 3 size 931.51GB used 2.00GB path /dev/sdc
devid 2 size 931.51GB used 1.03GB path /dev/sdb
*** Some devices missing
Label: 'root' uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
Total devices 5 FS bytes used 740.93GB
devid 4 size 911.00GB used 293.03GB path /dev/sdd1
devid 5 size 931.51GB used 314.00GB path /dev/sde1
devid 3 size 911.00GB used 293.00GB path /dev/sdc1
devid 2 size 911.00GB used 293.03GB path /dev/sdb1
devid 1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe the stale signatures from disks that are NOT part of the BTRFS filesystem you want to keep, I found this:
How to clean up old superblock ?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition listed in /proc/partitions. Each device and each partition is examined to see whether there is a BTRFS "magic number" and an associated valid root data structure at offset 0x10040 from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my own situation, and sure enough there was a BTRFS magic number (which is the ASCII string _BHRfS_M) there from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
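For the curious, here is a minimal sketch of the same check the scan performs: read 8 bytes at offset 0x10040 and compare them against the _BHRfS_M magic (the device path is an example, you need read access to the device, and this only reads, so it changes nothing):
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    /* Pass a device or partition, e.g. ./a.out /dev/sda (example path) */
    const char *dev = argc > 1 ? argv[1] : "/dev/sda";
    FILE *f = fopen(dev, "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[8];
    /* The BTRFS magic "_BHRfS_M" sits at offset 0x10040 (superblock at 0x10000, magic 0x40 into it) */
    if (fseek(f, 0x10040, SEEK_SET) == 0 && fread(magic, 1, 8, f) == 8)
        printf("%s: %s\n", dev,
               memcmp(magic, "_BHRfS_M", 8) == 0 ? "btrfs signature found"
                                                 : "no btrfs signature");
    fclose(f);
    return 0;
}
A gentler alternative to hexedit is the wipefs approach quoted above; modern wipefs run without options will list the signatures it finds on a device, so you don't have to hard-code the offset yourself.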

Discrepancy between call to statvfs and df command

When I use the statvfs command on a Linux machine to get the available free space on a mounted file system, the number I get is slightly different than what is reported by df.
For example, on one machine I have with a 500G hard drive, I get the following output from df:
# df --block-size=1 --no-sync
Filesystem 1B-blocks Used Available Use% Mounted on
/dev/md0 492256247808 3422584832 463828406272 1% /
tmpfs 2025721856 0 2025721856 0% /lib/init/rw
varrun 2025721856 114688 2025607168 1% /var/run
varlock 2025721856 4096 2025717760 1% /var/lock
udev 2025721856 147456 2025574400 1% /dev
tmpfs 2025721856 94208 2025627648 1% /dev/shm
A call to statvfs gives me a block size of 4096 and 119344155 free blocks, so that there should be 488,833,658,880 bytes free. Yet, df reports there are 463,828,406,272 bytes free. Why is there a discrepancy here?
Since your discrepancy is close to 5% [1], which is the default percentage reserved for root, there is a possibility that you are comparing the df result with statvfs's ->f_bfree rather than with ->f_bavail, which is what df uses.
[1]: (488833658880 - 463828406272)/492256247808 = 0.0508
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs st;
    if (statvfs("/", &st) != 0) {
        perror("statvfs");
        return 1;
    }
    /* f_bavail is the space available to unprivileged users (what df reports),
       counted in fragments of f_frsize bytes; f_bfree also includes the blocks
       reserved for root. Multiply to get bytes, then divide by 2^30 for GB. */
    unsigned long long avail_bytes = (unsigned long long)st.f_bavail * st.f_frsize;
    printf("total free disk space of the partition: %llu GB\n", avail_bytes >> 30);
    return 0;
}
I do it this way because I prefer the result in GB, but you can change the units by adjusting the exponent. And the first answer is right: as you can see, I use f_bavail here too.
Note that under Linux, df uses stat on the device file, not statvfs; cf. the coreutils source.
However, the basic principle above applies. As for why df is so much faster than du, and whether there are any shortcuts available for du:
The filesystem entry for any specific folder contains information about that folder only: how much space is allocated for the folder itself, and how much space is allocated for the filesystem entries for the files and folders in that folder. It does not contain the total space occupied by that folder and all of its subfolders.
To get that information, du has to list all of the folders in the original folder, all of their folders, and so on, totalling as it goes.
So du will return very quickly for a folder without subfolders, and ever more slowly for folders with increasing numbers of subfolders.
Contrast that with df, with a call to stat(2) against a device file, or with a call to statfs(2) or statvfs(3) against any file on a device, etc., all of which return information about the specific device/filesystem immediately.
du can only rival the speed of df in the case of being called against a single file, where both du and df are making a single system call and doing very little math.
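To make the difference concrete, here is a rough sketch of what du effectively has to do: walk every entry under a path and add up the block counts, as opposed to the single statvfs()-style call that df makes. The path "/usr" is just an example:
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

static unsigned long long total_blocks;

/* Called once per file and directory in the tree; sums 512-byte blocks. */
static int add_entry(const char *path, const struct stat *sb,
                     int typeflag, struct FTW *ftwbuf)
{
    (void)path; (void)typeflag; (void)ftwbuf;
    total_blocks += sb->st_blocks;
    return 0;               /* keep walking */
}

int main(void)
{
    /* "/usr" is an example path; du has to visit every entry below it */
    if (nftw("/usr", add_entry, 16, FTW_PHYS) != 0)
        perror("nftw");
    printf("approx usage: %llu bytes\n", total_blocks * 512ULL);
    return 0;
}
The runtime of this walk grows with the number of entries in the tree, which is why du slows down as the folder count rises while df stays constant.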
