Differences in showing disk usage percentage - linux

I'm using Arch Linux.
When I run the df -T command it gives me this:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/sda3 ext4 930449240 750685092 132430124 86% /home
As can be seen, the "Use%" column displays 86, while echo $(awk "BEGIN {printf \"%.3f\n\", 771793804 * 100 / 930449240}") gives me 82.95.
What is causing the difference in numbers, and which one is more reliable?

Related

df -h giving fake data?

When I run df -h on my instance I get this data:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.7G 0 7.7G 0% /dev
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 408K 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/nvme0n1p1 32G 24G 8.5G 74% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
but when I run sudo du -sh / I get:
11G /
So df -h reports 24G used on /, but du -sh on the same directory reports only 11G.
I'm trying to free up some space on my instance and can't find the files that account for the difference.
What am I missing?
Is df -h really giving fake data?
This question comes up quite often. The filesystem allocates some of its disk blocks to record its own bookkeeping data. This data is referred to as metadata and is not visible to most user-level programs (such as du). Examples of metadata are inodes, disk maps, indirect blocks, and superblocks.
The du command is a user-level program that isn't aware of filesystem metadata, while df looks at the filesystem disk allocation maps and is aware of file system metadata. df obtains true filesystem statistics, whereas du sees only a partial picture.
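To see the two views side by side on a single filesystem, you can run something like the following (a minimal sketch, using the root filesystem as an example):
df -k /            # kernel view via statfs(): counts metadata and blocks still held by deleted-but-open files
sudo du -skx /     # user-level view: sums only the files it can see; -x keeps it on this one filesystem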
There are several reasons why the used or available disk space reported by du and df can differ.
Perhaps the most common cause is deleted files. Files that have been deleted may still be held open by at least one process. The entry for such a file is removed from the associated directory, which makes it inaccessible, so du, which only counts visible files, does not take it into account and comes up with a smaller value. As long as a process still has the deleted file open, however, the associated blocks are not released in the filesystem, so df, which works at the kernel level, correctly displays them as occupied. You can find out if this is the case by running the following:
lsof | grep '(deleted)'
The fix for this issue would be to restart the services that still have those deleted files open.
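If a service cannot be restarted right away, the space can sometimes be released by truncating the deleted file through /proc instead. This is only a sketch with a hypothetical PID (1234) and file descriptor (7) read off the lsof output, and it throws away whatever the process was still writing:
: > /proc/1234/fd/7    # hypothetical PID/fd from lsof; truncates the still-open deleted file to zero bytes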
The second most common cause is a partition or drive mounted on top of a directory that already contains data. For example, if you have a directory under / called backup which contains data, and you then mount a new, empty drive on that directory as /backup, the space used by the hidden files will still show up with the df command even though the du command shows no files.
To determine if there are any files or directories hidden under an active mount point, you can use a bind mount of your / filesystem, which will enable you to inspect underneath the other mount points. Note, this is recommended only for experienced system administrators.
mkdir /tmp/tmpmnt
mount -o bind / /tmp/tmpmnt
du /tmp/tmpmnt
After you have confirmed that this is the issue, the bind mount can be removed by running:
umount /tmp/tmpmnt/
rmdir /tmp/tmpmnt
Another possible cause might be filesystem corruption. If this is suspected, please make sure you have good backups, and at your convenience, please unmount the filesystem and run fsck.
Again, this should be done by experienced system administrators.
You can also check the calculation by running:
strace -e statfs df /
This will give you output similar to:
statfs("/", {f_type=XFS_SB_MAGIC, f_bsize=4096, f_blocks=20968699, f_bfree=17420469,
f_bavail=17420469, f_files=41942464, f_ffree=41509188, f_fsid={val=[64769, 0]},
f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 83874796 14192920 69681876 17% /
+++ exited with 0 +++
Notice the difference between f_bfree and f_bavail? These are the free blocks in the filesystem vs the free blocks available to an unprivileged user. The Used column is simply f_blocks minus f_bfree, and Use% is calculated from Used and f_bavail (Used / (Used + Available), rounded up).
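Plugging in the numbers from the strace output above reproduces df's figures; a small awk sketch (values copied from the statfs call):
awk 'BEGIN {
  bsize = 4096; blocks = 20968699; bfree = 17420469; bavail = 17420469   # f_bsize, f_blocks, f_bfree, f_bavail
  used  = (blocks - bfree) * bsize / 1024     # 14192920 1K-blocks, matching the df Used column
  avail = bavail * bsize / 1024               # 69681876 1K-blocks, matching Available
  printf "Used=%d Avail=%d Use%%=%d%%\n", used, avail, int(used * 100 / (used + avail) + 0.999)   # 17%, df rounds up
}'
The same formula also explains the 86% versus 82.95% in the question at the top: df divides Used by Used + Available, and on ext4 the blocks reserved for root are excluded from Available, so the percentage comes out higher than a plain Used divided by total blocks.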
Hope this makes things clearer. Let me know if you still have any doubts.

Get free space of HDD in linux

Within a bash script I need to get the total disk size and the currently used size of the complete disk.
I know I can get the total disk size without needing to be root with this command:
cat /sys/block/sda/size
This command outputs the number of 512-byte sectors on device sda.
Multiply it by 512 and you get the number of bytes on the device.
That is sufficient for the total disk size.
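Put together, that is a one-liner (assuming the disk really is sda; the value in /sys/block/<dev>/size is always in 512-byte units, regardless of the drive's physical sector size):
echo $(( $(cat /sys/block/sda/size) * 512 ))    # total size of /dev/sda in bytes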
Now for the currently used space. I want to get this value without being root.
I can assume the device name is SDA.
Now there is this command: df
I thought I could use this command, but it seems it only outputs data for the currently mounted partitions.
Is there a way to get the total space used on disk sda without needing to be root and without all partitions being mounted?
Let's assume the following example:
/dev/sda1 80GB Linux partition 20GB Used
/dev/sda2 80GB Linux Home partition 20GB Used
/dev/sda3 100GB Windows partition 30GB Used
Let's assume my sda disk is partitioned like above, but while I'm on Linux my Windows partition (sda3) is not mounted.
The output of df will give me a grand total of 40 GB used, so it doesn't take sda3 into account.
Again the question:
Is there a way, without root, to get a grand total of 70 GB of used space?
I think the stat command should help. You can get the partitions from /proc/partitions.
Sample stat command output:
$ stat -f /dev/sda1
File: "/dev/sda1"
ID: 0 Namelen: 255 Type: tmpfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 237009 Free: 236970 Available: 236970
Inodes: Total: 237009 Free: 236386
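Note that stat -f reports the statistics of the filesystem containing the path you pass it, which is why the sample above shows Type: tmpfs for /dev/sda1 (the tmpfs holding the device node). For a partition's own usage you would point it at the mount point; a sketch, using /home as an example path:
stat -f --format='blocks=%b free=%f avail=%a bsize=%S' /home    # multiply the block counts by %S to get bytes
This only works for mounted filesystems, so it does not cover the unmounted sda3 case from the question.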
You can use df.
df -h --output='used' /home
Used
3.2G
If you combine this with some sed or awk you can extract the value you seek.
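For example (a sketch; --output and -B1 require a reasonably recent GNU coreutils df):
df -B1 --output=used /home | tail -n 1 | tr -d ' '    # used bytes on the filesystem that holds /home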

Backup entire disk in Ubuntu

I would like to make a backup of the entire HDD.
Step-by-step, what I'm trying to do:
1) Check the storage capacity (that is going to be backed up):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 455G 157G 275G 37% /
2) Mount an extra, empty HDD at /mnt/backup/:
/dev/sdb 294G 63M 279G 1% /mnt/backup
3) Run the backup (using lzop as the fastest compressor):
dd if=/dev/sda1 bs=4M conv=noerror iflag=noatime,nofollow | lzop -1 > /mnt/backup/dev-sda1.lzo
But the backup fails with the error: lzop: No space left on device: <stdout>
The extra HDD gets completely filled by dev-sda1.lzo, but the original used size of /dev/sda1, 157G, is obviously less than the 279G available on /dev/sdb, even without compression.
In /etc/fstab, /dev/sda1 is mounted at "/":
UUID=8a49b90e-6115-43a6-9702-7620182bbbf5 / ext4 errors=remount-ro 0 1
Is it possible that dd is doing a recursive copy of the /mnt/backup/ folder and that this causes the failure?
Please advise.
Thanks to Mark Setchell for pointing me in the right direction.
In the end, the solution to create a dump of the whole partition without the free space is:
dump -0a -z1 -f /mnt/hdd1/dev-sda1.dump.gz /dev/sda1
For a partition with 157 G of Ubuntu 14.04 + development files + database files, dump took 45 minutes (on a 7200 rpm HDD) and the resulting file was 80 G (compression level = 1).
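As a quick sanity check on the resulting archive, restore (from the same dump package) can list its contents directly from the compressed file; a later full restore would be run from inside a freshly created and mounted target filesystem. A sketch:
restore -tf /mnt/hdd1/dev-sda1.dump.gz | head    # -t lists the table of contents without extracting
# full restore, roughly: mkfs and mount the target, cd into it, then: restore -rf /mnt/hdd1/dev-sda1.dump.gz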

XFS No space left on device

I have a server set up with an XFS partition on LVM. While copying files to the home partition, "No space left on device" is displayed.
df -h displays sufficient space:
/dev/mapper/prod--vg-home 35G 21G 15G 60% /home
df -i also displays sufficient inodes:
/dev/mapper/prod--vg-home 36700160 379390 36320770 2% /home
I did verify the impact of changing the maximum percentage of space allowed for inodes:
xfs_growfs -m 25 /dev/mapper/prod--vg-home
This amount can easily be decreased and increased.
While experimenting with this setting, I noticed that decreasing it to 3%, increasing it back to 25%, and deleting some files allows me to add a lot more files again.
xfs_info displays:
meta-data=/dev/mapper/prod--vg-home isize=256 agcount=14, agsize=655360 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=9175040, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
I did read about 64-bit inodes, but they seem to be applicable only to large drives (over 1TB).
Is there any other setting which could cause the "No space left on device" message?
Thank you
There is a bug with xfs_growfs which causes inodes to not be properly distributed across a partition. The solution is to simply remount with the inode64 option. For example, if the filesystem were /dev/vda1, you would do the following:
mount -o remount,inode64 /dev/vda1
You can find more information about the bug at the following link:
http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_I_receive_No_space_left_on_device_after_xfs_growfs.3F
You should also check for deleted files that are still held open, and restart the corresponding processes:
lsof -nP +L1
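If the remount fixes it, the option can be made persistent across reboots by adding it to the mount options in /etc/fstab; a sketch based on the device and mount point from the question:
/dev/mapper/prod--vg-home  /home  xfs  defaults,inode64  0  0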

Hudson: returned status code 141: fatal: write error: No space left on device

I copied one of the existing projects and created a new project in Hudson. While running the build it says "returned status code 141: fatal: write error: No space left on device".
As suggested in other forums, I checked the free space and inodes used in the filesystem and nothing seems problematic there. Hudson is running as a service and the Hudson user has been given sudo privileges. The older job can still be run, so there is nothing different in the new cloned job.
Disk Space
bash-4.1$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root
20G 19G 28K 100% /
tmpfs 1.9G 192K 1.9G 1% /dev/shm
/dev/sda1 485M 83M 377M 19% /boot
/dev/mapper/vg_dev-lv_home
73G 26G 44G 38% /home
i-nodes used
bash-4.1$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg_dev-lv_root
1310720 309294 1001426 24% /
tmpfs 490645 4 490641 1% /dev/shm
/dev/sda1 128016 46 127970 1% /boot
/dev/mapper/vg_dev-lv_home
4833280 117851 4715429 3% /home
Hudson build log
bash-4.1$ cat log
Started by user anonymous
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel#1d4ab266
Using strategy: Default
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel#1d4ab266
Fetching changes from the remote Git repository
Fetching upstream changes from ssh://demouser#10.10.10.10:20/home/git-repos/proj.git
ERROR: Problem fetching from origin / origin - could be unavailable. Continuing anyway
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=ERROR: (Underlying report) : Error performing command: git fetch -t ssh://demouser#10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*
Command "git fetch -t ssh://demouser#10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*" returned status code 141: fatal: write error: No space left on device
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=ERROR: Could not fetch from any repository
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=FATAL: Could not fetch from any repository
ha:AAAAWB+LCAAAAAAAAABb85aBtbiIQSmjNKU4P08vOT+vOD8nVc8DzHWtSE4tKMnMz/PLL0ldFVf2c+b/lb5MDAwVRQxSaBqcITRIIQMEMIIUFgAAckCEiWAAAAA=hudson.plugins.git.GitException: Could not fetch from any repository
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:887)
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:845)
at hudson.FilePath.act(FilePath.java:758)
at hudson.FilePath.act(FilePath.java:740)
at hudson.plugins.git.GitSCM.gerRevisionToBuild(GitSCM.java:845)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:622)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1483)
at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:507)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:424)
at hudson.model.Run.run(Run.java:1366)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Your error message is quite clear: There is no space left on device.
This is verified by your df output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root 20G 19G 28K 100% /
This tells you that your root partition / has a total size of 20GB and is 100% used.
20GB is probably a bit small in your case. As this "partition" is managed by LVM (/dev/mapper/vg...), it is possible to extend it to create more space for your data.
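If the volume group still has free extents (check with vgs), growing the root logical volume and its filesystem can be done online along these lines (the +10G is only illustrative):
lvextend -r -L +10G /dev/mapper/vg_dev-lv_root    # -r resizes the filesystem together with the logical volume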
Otherwise you have to check whether there is some "garbage" lying around which can be removed.
You can use something like xdiskusage / to find out what is occupying your precious disk space.
But if you don't understand the concept of a file system, maybe it is easier to find someone else to do it for you.
I had a very similar issue; it turned out to be a 40 GB log file from a "neverending" build which had been running for 8 hours.
