Can you explain why the available space is only 78G, when the difference between 905G and 782G is 123G?
Where are the other 45G?
/dev/md2 905G 782G 78G 92% /var
The file system is ext3.
I checked, and this is normal: by default ext3 reserves 5% of the filesystem for the root user, and 5% of 905G is roughly the missing 45G.
To free that space you can run this command:
tune2fs -m 0 /dev/sda1
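If you want to verify before changing anything, the current reservation is visible in the filesystem superblock. A quick sketch, using the /dev/md2 device from the question:
tune2fs -l /dev/md2 | grep -i 'reserved block count'
Multiplying the reserved block count by the block size (typically 4K) should land close to the missing 45G. If you would rather keep a small safety margin than drop the reservation to zero, tune2fs -m 1 /dev/md2 keeps 1% reserved.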
When I run df -h on my instance I get this data:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.7G 0 7.7G 0% /dev
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 408K 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/nvme0n1p1 32G 24G 8.5G 74% /
tmpfs 1.6G 0 1.6G 0% /run/user/1000
But when I run sudo du -sh / I get:
11G /
So df -h says 24G is used on /, but du -sh on the same directory reports 11G.
I'm trying to free up some space on my instance and can't find the files that account for the difference.
What am I missing? Is df -h really giving wrong data?
This question comes up quite often. The file system allocates disk blocks in the file system to record its data. This data is referred to as metadata which is not visible to most user-level programs (such as du). Examples of metadata are inodes, disk maps, indirect blocks, and superblocks.
The du command is a user-level program that isn't aware of filesystem metadata, while df looks at the filesystem disk allocation maps and is aware of file system metadata. df obtains true filesystem statistics, whereas du sees only a partial picture.
There are many reasons why the disk space reported as used or available by the du and df commands can differ.
Perhaps the most common is deleted files. Files that have been deleted may still be open by at least one process. The entry for such files is removed from the associated directory, which makes the file inaccessible. Therefore du, which only counts files, does not take them into account and comes up with a smaller value. As long as a process still has the deleted file in use, however, the associated blocks are not released in the file system, so df, which works at the kernel level, correctly displays them as occupied. You can find out whether this is the case by running the following:
lsof | grep '(deleted)'
The fix for this issue would be to restart the services that still have those deleted files open.
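For example, if lsof shows a log daemon sitting on a deleted file (rsyslog here is purely a hypothetical example), restarting that one service is enough:
lsof +L1                  # alternative: list open files with a link count of 0, i.e. deleted
systemctl restart rsyslog # hypothetical: the service that was holding the deleted file
df -h /                   # the blocks should now show up as free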
The second most common cause is a partition or drive mounted on top of a directory that already contains data. For example, if you have a directory under / called backup which contains data, and you then mount a new, empty drive on top of that directory as /backup, the hidden data still counts as used space in df even though du sees no files.
To determine if there are any files or directories hidden under an active mount point, you can use a bind mount of your / filesystem, which lets you inspect underneath the other mount points. Note: this is recommended only for experienced system administrators.
mkdir /tmp/tmpmnt
mount -o bind / /tmp/tmpmnt
du /tmp/tmpmnt
After you have confirmed that this is the issue, the bind mount can be removed by running:
umount /tmp/tmpmnt/
rmdir /tmp/tmpmnt
Another possible cause might be filesystem corruption. If this is suspected, please make sure you have good backups, and at your convenience, please unmount the filesystem and run fsck.
Again, this should be done by experienced system administrators.
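A minimal sketch, assuming the affected filesystem is /dev/sdb1 mounted at /data; a mounted root filesystem cannot be checked this way, so boot a rescue environment for /:
umount /data
fsck -f /dev/sdb1   # -f forces a full check even if the filesystem looks clean
mount /data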
You can also check the calculation by running:
strace -e statfs df /
This will give you output similar to:
statfs("/", {f_type=XFS_SB_MAGIC, f_bsize=4096, f_blocks=20968699, f_bfree=17420469,
f_bavail=17420469, f_files=41942464, f_ffree=41509188, f_fsid={val=[64769, 0]},
f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 83874796 14192920 69681876 17% /
+++ exited with 0 +++
Notice the difference between f_bfree and f_bavail? These are the free blocks in the filesystem versus the free blocks available to an unprivileged user; the gap between them is the reserved-blocks percentage. The Used column is calculated as f_blocks minus f_bfree, and the Available column comes straight from f_bavail.
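You can check that arithmetic against the statfs output above (the counts are 4096-byte blocks, which df reports as 1K-blocks, hence the factor of 4):
Used  = (f_blocks - f_bfree) * 4 = (20968699 - 17420469) * 4 = 14192920
Avail =  f_bavail            * 4 =  17420469             * 4 = 69681876
Both match the df columns exactly. In this particular output f_bfree and f_bavail happen to be equal, so Used plus Avail adds up to the full 83874796 1K-blocks; on a filesystem with a reserved-blocks percentage the two would differ by exactly the reserved amount.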
Hope this makes things clear. Let me know if you still have any questions.
I don't actually know whether this is more a classic Linux question or a Docker question, but:
On a VM where some of my Docker containers are running I am seeing something strange. /var/lib/docker is its own partition with 20GB. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I navigate to /var/lib and run du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So the same directory/partition shows two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional meta data: e.g. the disk usage of 1000 empty files (file size = 0) is >0 since their file names and permissions need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file - blocks consisting of only null bytes are not actually written to disk, see sparse files (a quick demonstration follows this list)
automatic file system compression
deduplication through hard links or copy-on-write
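A quick way to see the sparse-file case for yourself (a toy example, assuming GNU coreutils):
truncate -s 1G /tmp/sparse.img   # allocate a 1 GiB file without writing any data blocks
ls -lh /tmp/sparse.img           # reported file size: 1.0G
du -h /tmp/sparse.img            # actual disk usage: 0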
Since Docker uses image layers as a means of deduplication, the latter is most probably the cause of your observation - i.e. the sum of the file sizes is much bigger because most of them are shared/deduplicated through hard links.
du estimates filesystem usage by summing the sizes of all files in it. This does not deal well with overlay2: many directories will contain the same files as another, only overlaid with additional layers, so du shows a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid going into overlays could give the right amount. However, this wouldn't work for other Docker drivers, like btrfs, for example.
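As a sketch, two commands worth trying here (docker system df requires a reasonably recent Docker release):
du -shx /var/lib/docker   # -x stays on one filesystem, so mounted overlays are skipped
docker system df          # Docker's own accounting of images, containers and volumes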
I created a Debian VM on Google Cloud. Below is the information from df -h. What do those filesystems, such as tmpfs or /dev/sda1, mean? Is there any beginner-friendly reference for them? In particular, how much space can I use in my working directory ~, and how much space can I use in /usr/local (for installing software)? Any ideas?
zell@instance-1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 371M 6.4M 365M 2% /run
/dev/sda1 9.8G 1.4G 7.9G 15% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
df -h
shows the amount of disk space used and available on Linux file systems. The command df stands for Disk Free and -h means human-readable form.
You can also see information about a specific filesystem, as follows:
df /dev/sda1
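Since the question asks specifically about ~ and /usr/local: df accepts any path and reports the filesystem that contains it. In the output above both of them live on the root filesystem /dev/sda1, so the roughly 7.9G shown as available is shared between them:
df -h ~ /usr/local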
From Documentation:
Tmpfs is a file system which keeps all files in virtual memory.
Everything in tmpfs is temporary in the sense that no files will be
created on your hard drive. If you unmount a tmpfs instance,
everything stored therein is lost.
They can be mounted on different directories. For example, the tmpfs filesystem mounted at /dev/shm is used to implement POSIX shared memory, an inter-process communication (IPC) mechanism where two or more processes read from and write to a shared memory region, and POSIX semaphores, which let processes and threads synchronize their actions.
From What does /dev/sda mean?:
/dev/ is the part in the unix directory tree that contains all "device" files -- unix traditionally treats just about everything you can access as a file to read from or write to. Therefore, /dev/sda1 means the first partition on the first drive, and /dev/sda9 will mean the ninth partition on the first drive.
Check out the link for more information.
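To see at a glance how devices and partitions map to mount points, lsblk (available on most modern distributions) is often easier to read than df:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT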
To display the amount of disk space used by the specified files and for each subdirectory, you can run the following command:
du -h
where du stands for Disk Usage and -h means human-readable form.
Optionally you can use the following command to display the amount of disk space used by a certain directory:
du -h /usr/local
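If the underlying goal is to find out what is using the space, a common approach (assuming GNU du and sort) is to summarize one directory level at a time and sort the result:
du -h --max-depth=1 / 2>/dev/null | sort -h
The largest directories end up at the bottom of the list; repeat the command on the biggest one to drill down.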
We have an issue where our CentOS 7 server will not generate a kernel dump file in /var/crash upon kernel panic. It appears the crash kernel never boots. We’ve followed the RHEL guide (http://red.ht/1sCztdv) on configuring crash dumps, and at first glance everything appears to be configured correctly. We are triggering a panic like this:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
This causes the system to freeze. We get no messages on the console and the console becomes unresponsive. At this point I would imagine the system would boot a crash kernel and begin writing a dump out to /var/crash. I’ve left it in this frozen state for up to 30 minutes to give it time to complete the entire dump. However after a hard cold reboot /var/crash is empty.
Additionally, I've replicated the configuration in a KVM virtual machine and kdump works as expected. So there is either something wrong with my configuration on the physical system, or something odd about that hardware config that causes the hang rather than the dump.
Our server is an HP G9 with 24 cores and 128GB of memory. Here are some other details:
[user@host]$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=287798f7-fe7a-4172-a35a-6a78051af4d2 ro rd.lvm.lv=vg_sda/lv_root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_sda/lv_swap crashkernel=auto vconsole.keymap=us rhgb nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce processor.max_cstate=0 idle=mwait isolcpus=2-11,14-23
[user@host]$ systemctl is-active kdump
active
[user@host]$ cat /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31 -c
[user@host]$ cat /proc/iomem | grep Crash
2b000000-357fffff : Crash kernel
[user@host]$ dmesg | grep Reserving
[ 0.000000] Reserving 168MB of memory at 688MB for crashkernel (System RAM: 131037MB)
[user@host]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_sda-lv_root 133G 4.7G 128G 4% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 63G 9.1M 63G 1% /run
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda1 492M 175M 318M 36% /boot
/dev/mapper/vg_sdb-lv_data 2.8T 145G 2.6T 6% /data
After modifying the following parameters we were able to reliably get crash dumps:
Changed crashkernel=auto to crashkernel=1G: I'm not sure why we need 1G, as the formula indicated 128M + 64M for every 1TB of RAM.
/etc/sysconfig/kdump: removed everything from KDUMP_COMMANDLINE_APPEND except irqpoll nr_cpus=1, resulting in: KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1"
/etc/kdump.conf: added compression ("-c") to the makedumpfile core_collector line
Not 100% sure why this works, but it does. Would love to know what others think.
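For reference, one way to apply the crashkernel change on CentOS 7 without editing the GRUB configuration by hand is grubby, which ships by default; this is only a sketch, so verify with cat /proc/cmdline after the next reboot:
grubby --update-kernel=ALL --remove-args="crashkernel=auto" --args="crashkernel=1G"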
Eric
Eric,
1G seems a bit large. I've never seen anything larger than 200M for a normal server. Not sure about the sysconfig settings. Compression is a good idea, but I don't think it would affect the issue, since your target is close to total memory and you're only dumping the kernel ring.
Hello, I have a Raspberry Pi with an 8GB SD card on which I have installed Arch Linux.
Now I was curious how much space I have used so far, after installing all the packages I needed for a private dev server. This is the result:
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.7G 899M 690M 57% /
devtmpfs 83M 0 83M 0% /dev
tmpfs 231M 0 231M 0% /dev/shm
tmpfs 231M 256K 231M 1% /run
tmpfs 231M 0 231M 0% /sys/fs/cgroup
tmpfs 231M 176K 231M 1% /tmp
/dev/mmcblk0p1 90M 9.0M 81M 10% /boot
As you can see, it shows me only 1.7G instead of roughly 8G. I think this is because I installed it once on the SD card, and after I messed something up I tried it again. Could it be that the old installation is still on the SD card? How can I see this, and delete it if that is the case? Or is this normal?
Thanks in advance
Beware: Arch has moved to a slightly different partitioning scheme, which will cause the fdisk steps in the answer below to fail.
See this blog post for details, but the short version is that there are now three partitions. p02 is an extended partition containing the p05 logical partition.
So, in fdisk:
d 2              (delete partition 2)
n e 2 <cr><cr>   (recreate 2 as an extended partition, accepting the defaults)
n l <cr><cr>     (create a logical partition inside it, which becomes p5)
w                (write the changes and exit)
Then reboot and resize (p05 instead of p02):
resize2fs /dev/mmcblk0p5
It's possible the rest of the SD card is unpartitioned space.
You can use GParted to view the partitions on the SD card. You can then either create an additional partition or extend your current one.
From GParted you will also be able to see if there are any old installations on other partitions, as you suggest; however, I think this is unlikely.
When installing the Raspbian distro, the first thing you do after booting is fix the partitioning. Since you installed Arch Linux you don't get this guided step, and you have to do it manually as explained above.
This is how I solved it.
As root:
fdisk /dev/mmcblk0
Delete the second partition /dev/mmcblk0p2
d
2
Create a new primary partition and accept the default sizes when prompted. This creates a partition that fills the disk. (The defaults work when the new partition starts at the same sector where the old one began; if fdisk proposes a different first sector, type in the old partition's start sector instead, or the existing filesystem will not line up.)
n
p
2
enter
enter
Save and exit fdisk:
w
Now reboot. Once rebooted:
resize2fs /dev/mmcblk0p2
Your main / partition should be the full size of the disk now.
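To confirm the resize worked (nothing destructive here):
lsblk /dev/mmcblk0   # the second partition should now span the card
df -h /              # / should report close to the card's full size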