Ext4 "unused inodes" "free inodes" diffrence? - linux

When I use the dumpe2fs command to look at the block groups of an ext4 filesystem, I see both "free inodes" and "unused inodes".
I want to know the difference between them.
Why do they have different values in Group 0?
Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
Checksum 0xd1a1, unused inodes 0
Primary superblock at 0, Group descriptors at 1-3
Reserved GDT blocks at 4-350
Block bitmap at 351 (+351), Inode bitmap at 367 (+367)
Inode table at 383-892 (+383)
12 free blocks, 1 free inodes, 1088 directories
Free blocks: 9564, 12379-12380, 12401-12408, 12411
Free inodes: 168
Group 1: (Blocks 32768-65535) [ITABLE_ZEROED]
Checksum 0x0432, unused inodes 0
Backup superblock at 32768, Group descriptors at 32769-32771
Reserved GDT blocks at 32772-33118
Block bitmap at 352 (+4294934880), Inode bitmap at 368 (+4294934896)
Inode table at 893-1402 (+4294935421)
30 free blocks, 0 free inodes, 420 directories
Free blocks: 37379-37384, 37386-37397, 42822-42823, 42856-42859, 42954-42955, 44946-44947, 45014-45015
Free inodes:

The "unused inodes" reported are inodes at the end of the inode table for each group that have never been used in the lifetime of the filesystem, so e2fsck does not need to scan them during repair. This can speed up e2fsck pass-1 scanning significantly.
The "free inodes" are the current unallocated inodes in the group. This number includes the "unused inodes" number, so that they will still be used if there are many (typically very small) inodes allocated in a single group.
From:
https://unix.stackexchange.com/a/715165/536354
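For illustration, a minimal sketch (not from the linked answer) that reads the two counters straight from group 0's descriptor, under the assumptions that the filesystem uses 4 KiB blocks and classic 32-byte group descriptors (no 64bit feature); /dev/sdXN is a placeholder device:
/* Sketch: print group 0's "free inodes" and "unused inodes" counters
 * directly from the ext4 group descriptor table.
 * Assumptions: 4 KiB blocks, 32-byte descriptors (no 64bit feature),
 * primary superblock in block 0, descriptors starting in block 1.
 * /dev/sdXN is a placeholder; read-only, run as root. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sdXN", O_RDONLY);     /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char sb[1024], gd[32];
    if (pread(fd, sb, sizeof sb, 1024) != (ssize_t)sizeof sb) { perror("pread sb"); return 1; }
    uint16_t magic = sb[56] | (sb[57] << 8);  /* s_magic, little-endian */
    if (magic != 0xEF53) { fprintf(stderr, "not an ext2/3/4 filesystem\n"); return 1; }

    if (pread(fd, gd, sizeof gd, 4096) != (ssize_t)sizeof gd) { perror("pread gd"); return 1; }
    uint16_t free_inodes   = gd[14] | (gd[15] << 8);  /* bg_free_inodes_count_lo */
    uint16_t itable_unused = gd[28] | (gd[29] << 8);  /* bg_itable_unused_lo */

    /* "free" counts every unallocated inode in the group; "unused" counts
       only the never-initialised tail of the inode table, so free >= unused. */
    printf("group 0: free inodes = %u, unused inodes = %u\n", free_inodes, itable_unused);
    close(fd);
    return 0;
}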

Related

How to read/write sectors in range 0 - 2048 under Linux?

I have a small task to read/write sectors in the "free area" between the MBR's sector (LBA 0) and the first sector of the first partition (typically LBA 2048).
I can read and write the first 128 sectors, but past LBA 127 the write operation completes successfully while nothing is actually written to disk.
Is there some kernel limitation?
It turned out to be a logic bug in the C code: an incorrect file offset computation passed as input to the vfs_read() routine. Fixing that resolved the problem.
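From user space the same idea is straightforward; a hedged sketch (the device path and LBA are placeholders) that reads one sector from that gap, with the byte offset computed in a 64-bit type, since an offset computed in a plain int is exactly the kind of logic bug described above:
/* Sketch: read one 512-byte sector from the MBR gap of a block device.
 * /dev/sdX and the LBA are placeholders; run as root.
 * The key detail is computing the byte offset as (off_t)lba * 512 so the
 * multiplication happens in 64 bits. pwrite() with O_RDWR is analogous. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const uint64_t lba = 100;                 /* any sector in 0..2047 */
    unsigned char sector[512];

    int fd = open("/dev/sdX", O_RDONLY);      /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    off_t offset = (off_t)lba * 512;          /* 64-bit offset computation */
    if (pread(fd, sector, sizeof sector, offset) != (ssize_t)sizeof sector) {
        perror("pread"); return 1;
    }
    printf("first bytes of LBA %llu: %02x %02x %02x %02x\n",
           (unsigned long long)lba, sector[0], sector[1], sector[2], sector[3]);
    close(fd);
    return 0;
}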

Intel NVMe drive performance degradation with xfs filesystem with a sector size other than 4096

I am working with an NVMe card on Linux (Ubuntu 14.04).
I am seeing performance degradation with an Intel NVMe card when it is formatted with the xfs filesystem using its default sector size (512), or any other sector size less than 4096.
In the experiment I formatted the card with the xfs filesystem using default options and ran fio with a 64k block size on an arm64 platform with a 64k page size.
This is the command used
fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group_reporting --name=Write_64k_1 --numjobs=1 --runtime=120 --filename=new --size=20G
I could only get the values below:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=281670KB/s, minb=281670KB/s, maxb=281670KB/s, mint=74454msec, maxt=74454msec
Disk stats (read/write):
nvme0n1: ios=326821/8, merge=0/0, ticks=582640/0, in_queue=582370, util=99.93%
I tried formatting as follows:
mkfs.xfs -f -s size=4096 /dev/nvme0n1
then the values were:
Run status group 0 (all jobs):
READ: io=20480MB, aggrb=781149KB/s, minb=781149KB/s, maxb=781149KB/s, mint=26847msec, maxt=26847msec
Disk stats (read/write):
nvme0n1: ios=326748/7, merge=0/0, ticks=200270/0, in_queue=200350, util=99.51%
I find no performance degradation in any of the following cases:
4k page size
any fio block size less than 64k
ext4 with default configuration
What could be the issue? Is this an alignment issue? What am I missing here? Any help is appreciated.
The issue is that your SSD's native sector size is 4K, so your file system's block size should be set to match so that reads and writes are aligned on sector boundaries. Otherwise you'll have blocks that span two sectors and therefore require two sector reads to return one block (instead of one read).
If you have an Intel SSD, the newer ones have a variable sector size you can set using their Intel Solid State Drive DataCenter Tool. But honestly 4096 is still probably the drive's true sector size anyway and you'll get the most consistent performance using it and setting your file system to match.
On ZFS on Linux the setting is ashift=12 for 4K blocks.
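Before reformatting it is worth confirming what the drive actually reports; a minimal sketch using the standard Linux block-device ioctls (the device node is taken from the question, adjust as needed):
/* Sketch: query the logical and physical sector sizes the kernel reports
 * for a block device, e.g. to choose the -s size= value for mkfs.xfs. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKPBSZGET */

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int logical = 0;
    unsigned int physical = 0;
    if (ioctl(fd, BLKSSZGET, &logical) != 0 ||    /* logical sector size (LBA format) */
        ioctl(fd, BLKPBSZGET, &physical) != 0) {  /* physical sector size of the device */
        perror("ioctl"); return 1;
    }
    printf("logical %d bytes, physical %u bytes\n", logical, physical);
    close(fd);
    return 0;
}
If the physical size comes back as 4096 while the logical size is 512, that matches the alignment explanation above.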

How to get rid of "Some devices missing" in BTRFS after reuse of devices?

I have been playing around with BTRFS on a few drives I had lying around. At first I created BTRFS using the entire drive, but eventually I decided I wanted to use GPT partitions on the drives and recreated the filesystem I needed on the partitions that resulted. (This was so I could use a portion of each drive as Linux swap space, FYI.)
When I got this all done, BTRFS worked a treat. But I am left with annoying messages about old filesystems from my previous experimentation that I have since nuked. I worry this means BTRFS is confused about what space on the drives is available, or that some sort of corruption might occur.
The messages look like this:
$ sudo btrfs file show
Label: 'x' uuid: 06fa59c9-f7f6-4b73-81a4-943329516aee
Total devices 3 FS bytes used 159.20GB
devid 3 size 931.00GB used 134.01GB path /dev/sde
*** Some devices missing
Label: 'root' uuid: 5f63d01d-3fde-455c-bc1c-1b9946e9aad0
Total devices 4 FS bytes used 1.13GB
devid 4 size 931.51GB used 1.03GB path /dev/sdd
devid 3 size 931.51GB used 2.00GB path /dev/sdc
devid 2 size 931.51GB used 1.03GB path /dev/sdb
*** Some devices missing
Label: 'root' uuid: e86ff074-d4ac-4508-b287-4099400d0fcf
Total devices 5 FS bytes used 740.93GB
devid 4 size 911.00GB used 293.03GB path /dev/sdd1
devid 5 size 931.51GB used 314.00GB path /dev/sde1
devid 3 size 911.00GB used 293.00GB path /dev/sdc1
devid 2 size 911.00GB used 293.03GB path /dev/sdb1
devid 1 size 911.00GB used 293.00GB path /dev/sda1
As you can see, I have an old filesystem labeled 'x' and an old one labeled 'root', and both of these have "Some devices missing". The real filesystem, the last one shown, is the one that I am now using.
So how do I clean up the old "Some devices missing" filesystems? I'm a little worried, but mostly just OCD and wanting to tidy up this messy output.
Thanks.
To wipe the stale signatures from disks that are NOT part of the BTRFS filesystem you want to keep, I found:
How to clean up old superblock ?
...
To actually remove the filesystem use:
wipefs -o 0x10040 /dev/sda
8 bytes [5f 42 48 52 66 53 5f 4d] erased at offset 0x10040 (btrfs)
from: https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#I_can.27t_mount_my_filesystem.2C_and_I_get_a_kernel_oops.21
I actually figured this out for myself. Maybe it will help someone else.
I poked around in the code to see what was going on. When the btrfs filesystem show command is used to show all filesystems on all devices, it scans every device and partition in /proc/partitions. Each device and each partition is examined to see if there is a BTRFS "magic number" and associated valid root data structure found at 0x10040 offset from the beginning of the device or partition.
I then used hexedit on a disk that was showing up wrong in my own situation and sure enough there was a BTRFS magic number (which is the ASCII string _BHRfS_M) there from my previous experiments.
I simply nailed that magic number by overwriting a couple of the characters of the string with "**", also using hexedit, and the erroneous entries magically disappeared!
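To see whether a stale superblock is what the scan is picking up, here is a minimal sketch that performs the same check the scan does, looking for the magic string at byte offset 0x10040 (the device path is a placeholder; this is read-only and safe to run):
/* Sketch: check a device or partition for the btrfs magic string at the
 * primary superblock offset (0x10040), which is what the filesystem scan
 * keys on. /dev/sdX is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    static const char magic[] = "_BHRfS_M";   /* 8-byte btrfs superblock magic */
    char buf[8];

    int fd = open("/dev/sdX", O_RDONLY);      /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    if (pread(fd, buf, sizeof buf, 0x10040) != (ssize_t)sizeof buf) {
        perror("pread"); return 1;
    }
    printf("btrfs magic %s at 0x10040\n",
           memcmp(buf, magic, sizeof buf) == 0 ? "present" : "absent");
    close(fd);
    return 0;
}
If it is present on a device you no longer use for BTRFS, wipefs -o 0x10040 (as quoted above) is the cleaner way to remove it than editing by hand.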

How does stat command calculate the blocks of a file?

I am wondering how the stat command calculates the number of blocks for a file. I read this article that says:
The value st_blocks gives the size of the file in 512-byte blocks. (This may be smaller than st_size/512 e.g. when the file has holes.) The value st_blksize gives the "preferred" blocksize for efficient file system I/O. (Writing to a file in smaller chunks may cause an inefficient read-modify-rewrite.)
Yet I cannot verify this with my own tests.
My file system is ext3.
The command dumpe2fs -h /dev/sda3 shows:
...
First block: 0
Block size: 4096
Fragment size: 4096
...
Then I run
kent@KentT60:~/Desktop$ stat Email
File: `Email'
Size: 965 Blocks: 8 IO Block: 4096 regular file
Device: 80ah/2058d Inode: 746095 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ kent) Gid: ( 1000/ kent)
Access: 2009-08-11 21:36:36.000000000 +0200
Modify: 2009-08-11 21:36:35.000000000 +0200
Change: 2009-08-11 21:36:35.000000000 +0200
If "Blocks" here means: "how many 512 bytes blocks", the number should be 2, not 8. I thought that the block size of the file system (IO block) is 4k.
If the file system gets the file Email, it will fetch a minimum of 4k from the disk (8 x 512 bytes blocks), which means 965/512 + 6 = 8. I am not sure if this guess is correct.
Another test:
kent@KentT60:~/Desktop$ stat wxPython-demo-2.8.10.1.tar.bz2
File: `wxPython-demo-2.8.10.1.tar.bz2'
Size: 3605257 Blocks: 7056 IO Block: 4096 regular file
Device: 80ah/2058d Inode: 746210 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ kent) Gid: ( 1000/ kent)
Access: 2009-08-12 21:45:45.000000000 +0200
Modify: 2009-08-12 21:43:46.000000000 +0200
Change: 2009-08-12 21:43:46.000000000 +0200
3605257/512 = 7041.xx, rounded up to 7042
Following my guess above, this would be 7042 + 6 = 7048. But the stat result shows 7056.
And another example from the internet, at https://www.computerhope.com/unix/stat.htm; I have pasted the example from the bottom of that page here:
File: `index.htm'
Size: 17137 Blocks: 40 IO Block: 8192 regular file
Device: 8h/8d Inode: 23161443 Links: 1
Access: (0644/-rw-r--r--) Uid: (17433/comphope) Gid: ( 32/ www)
Access: 2007-04-03 09:20:18.000000000 -0600
Modify: 2007-04-01 23:13:05.000000000 -0600
Change: 2007-04-02 16:36:21.000000000 -0600
In this example, the file system block size is 8k, so I would expect the "Blocks" value to be a multiple of 16 (8192/512 = 16), but it is 40. I'm getting lost...
Can anyone explain how stat calculates the "Blocks" value?
Thanks!
The stat command-line tool uses the stat / fstat etc. functions, which return data in the stat structure. The st_blocks member of the stat structure returns:
The total number of physical blocks of size 512 bytes actually allocated on disk. This field is not defined for block special or character special files.
So for your "Email" example, with a size of 965 and a block count of 8, it is indicating that 8*512=4096 bytes are physically allocated on disk. The reason it's not 2 is that the file system on disk does not allocate space in units of 512, it evidently allocates them in units of 4096. (And the unit of allocation may vary depending on file size and filesystem sophistication. E.g. ZFS supports different units of allocation.)
Similarly, for the wxPython example, it indicates that 7056*512 bytes, or 3612672 bytes are physically allocated on disk. You get the idea.
The IO block size is "a hint as to the 'best' unit size for I/O operations" - it's usually the unit of allocation on the physical disk. Don't get confused between the IO block and the block that stat uses to indicate physical size; the blocks for physical size are always 512 bytes.
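For illustration, a small sketch of what the stat tool is reporting, taken directly from stat(2) (the file name is the one from the question; use any path):
/* Sketch: print st_size, st_blocks (always in 512-byte units) and
 * st_blksize for a file, mirroring the Size / Blocks / IO Block fields
 * shown by the stat command. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("Email", &st) != 0) { perror("stat"); return 1; }

    printf("Size: %lld bytes\n", (long long)st.st_size);
    printf("Blocks: %lld (x 512 bytes = %lld bytes allocated on disk)\n",
           (long long)st.st_blocks, (long long)st.st_blocks * 512);
    printf("IO Block: %ld (preferred I/O size, not the unit of st_blocks)\n",
           (long)st.st_blksize);
    return 0;
}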
Update based on comment:
Like I said, st_blocks is how the OS indicates how much space is used by the file on disk. The actual units of allocation on disk are the choice of the file system. For example, ZFS can have allocation blocks of variable size, even in the same file, because of the way it allocates blocks: files start out having a small block size, and the block size keeps on increasing until it reaches a particular point. If the file is later truncated, it will probably keep the old block size. So based on the history of the file, it can have multiple possible block sizes. So given a file size it is not always obvious why it has a particular physical size.
Concrete example: on my Solaris box, with a ZFS file system, I can create a very short file:
$ echo foo > test
$ stat test
Size: 4 Blocks: 2 IO Block: 512 regular file
(irrelevant details omitted)
OK, small file, 2 blocks, physical disk usage is 1024 for this file.
$ dd if=/dev/zero of=test2 bs=8192 count=4
$ stat test2
Size: 32768 Blocks: 65 IO Block: 32768 regular file
OK, now we see physical disk usage of 32.5K, and an IO block size of 32K. I then copied it to test3 and truncated this test3 file in an editor:
$ cp test2 test3
$ joe -hex test3
$ stat test3
Size: 4 Blocks: 65 IO Block: 32768 regular file
Well now, here's a file with 4 bytes in it - just like test - but it's using 32.5K physically on the disk, because of the way the ZFS file system allocates space. Block sizes increase as the file gets larger, but they don't decrease when the file gets smaller. (And yes, this can lead to substantial wasted space depending on the kinds of files and file operations you do on ZFS, which is why it allows you to set the maximum block size on a per-filesystem basis, and change it dynamically.)
Hopefully, you can now appreciate that there isn't necessarily a simple relationship between file size and physical disk usage. Even in the above it's not clear why 32.5K bytes are needed to store a file that's exactly 32K in size - it appears that ZFS generally needs an extra 512 bytes for extra storage of its own. Perhaps it's using that storage for checksums, reference counts, transaction state - file system bookkeeping. By including these extras in the indicated physical file size, it seems like ZFS is trying not to mislead the user as to the physical costs of the file. That doesn't mean it's trivial to reverse-engineer the calculation without knowing intimate details about the underlying file system implementation.

Discrepancy between call to statvfs and df command

When I use the statvfs call on a Linux machine to get the available free space on a mounted file system, the number I get is slightly different from what df reports.
For example, on one machine I have with a 500G hard drive, I get the following output from df:
# df --block-size=1 --no-sync
Filesystem 1B-blocks Used Available Use% Mounted on
/dev/md0 492256247808 3422584832 463828406272 1% /
tmpfs 2025721856 0 2025721856 0% /lib/init/rw
varrun 2025721856 114688 2025607168 1% /var/run
varlock 2025721856 4096 2025717760 1% /var/lock
udev 2025721856 147456 2025574400 1% /dev
tmpfs 2025721856 94208 2025627648 1% /dev/shm
A call to statvfs gives me a block size of 4096 and 119344155 free blocks, so that there should be 488,833,658,880 bytes free. Yet, df reports there are 463,828,406,272 bytes free. Why is there a discrepancy here?
Since your discrepancy is close to 5% [1], which is the default percentage reserved for root, it is likely that you are comparing the df result with statvfs's ->f_bfree rather than with ->f_bavail, which is what df uses.
[1]: (488833658880 - 463828406272)/492256247808 = 0.0508
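A minimal sketch that prints both fields so you can see which one matches df's "Available" column (the mount point "/" is the one from the question):
/* Sketch: compare statvfs's f_bfree (all free blocks) with f_bavail
 * (free blocks available to unprivileged users, which is what df shows
 * as "Available"). Block counts are in units of f_frsize. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    if (statvfs("/", &vfs) != 0) { perror("statvfs"); return 1; }

    unsigned long long frsize = vfs.f_frsize;
    printf("f_bfree : %llu blocks = %llu bytes (includes the root reserve)\n",
           (unsigned long long)vfs.f_bfree, (unsigned long long)vfs.f_bfree * frsize);
    printf("f_bavail: %llu blocks = %llu bytes (what df reports as Available)\n",
           (unsigned long long)vfs.f_bavail, (unsigned long long)vfs.f_bavail * frsize);
    return 0;
}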
#include <stdio.h>
#include <sys/statvfs.h>
int main(void){
    struct statvfs stat;
    if (statvfs("/", &stat) != 0) {
        perror("statvfs");
        return 1;
    }
    /* f_bavail is the number of blocks available to unprivileged users
       (what df shows as "Available"); each block is f_frsize bytes.
       A gigabyte is 2^30 bytes, so dividing the byte count by 2^30
       converts it to gigabytes. */
    printf("total free disk space of the partition: %llu GB\n",
           (unsigned long long)stat.f_bavail * stat.f_frsize / (1ULL << 30));
    return 0;
}
I do it in this form because I prefer the result in GB, but you can change the units by changing the exponent. And the first answer is right: as you can see, I use f_bavail too.
Note that under Linux, df gets its numbers from a statfs()/statvfs() call per mounted filesystem rather than by walking the tree; cf. the coreutils source.
However, the basic principle above applies. As to why df is so much faster, and whether there are any shortcuts available for du:
The filesystem entry for any specific folder contains information about that folder only: how much space is allocated for the folder itself, and how much space is allocated for the file system entries of the files and folders directly inside it. It does not contain the total space occupied by that folder and all of its subfolders.
To get that information, du has to list all of the folders in the original folder, all of their folders, and so on, totalling as it goes.
So du will return very quickly for a folder without subfolders, and ever more slowly for folders with increasing numbers of subfolders.
Contrast that with df, which makes a single statfs(2) or statvfs(3) call against the filesystem and gets back information about the whole device/filesystem immediately.
du can only rival the speed of df in the case of being called against a single file, where both du and df are making a single system call and doing very little math.
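To make the difference concrete, here is a hedged sketch of a du-style total using nftw(3): it has to visit every entry and sum st_blocks, whereas the df numbers above come back from one statvfs call:
/* Sketch: a toy "du" that walks a tree with nftw() and sums st_blocks
 * (512-byte units), illustrating why du gets slower as the number of
 * subdirectories grows while df answers with a single statvfs() call. */
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <sys/stat.h>
#include <ftw.h>

static long long total_blocks;

static int add_entry(const char *path, const struct stat *st,
                     int typeflag, struct FTW *ftwbuf)
{
    (void)path; (void)ftwbuf;
    if (typeflag == FTW_NS)          /* stat failed on this entry; skip it */
        return 0;
    total_blocks += st->st_blocks;   /* every file and directory visited */
    return 0;                        /* keep walking */
}

int main(int argc, char **argv)
{
    const char *dir = argc > 1 ? argv[1] : ".";
    if (nftw(dir, add_entry, 32, FTW_PHYS) != 0) { perror("nftw"); return 1; }
    printf("%s: %lld KiB allocated\n", dir, total_blocks / 2);
    return 0;
}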
