I'm having difficulty understanding the disk size of my qcow2 image.
I have a CentOS 6 box running:
# virsh version
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 0.12.1
I run a couple of guests there, and without much activity on the guests I noticed that the backup (I do a manual complete file copy with cp, no qcow2-based snapshots) of one of my guests has grown 4 times. The other guests behave normally and show normal backup size growth.
When I log in to that guest I see this:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 396M 5.5M 391M 2% /run
/dev/mapper/debian9--vg-root 188G 2.7G 176G 2% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 236M 62M 162M 28% /boot
tmpfs 89M 0 89M 0% /run/user/0
but the qcow2 file has grown from 5GB to
# du -h /backups/vm01/20180111/vm01.qcow2
19G /backups/vm01/20180111/vm01.qcow2
I found that the size of the qcow2 disk file grows rapidly, and I tried "qemu-img convert" on the backup file, but that did not solve the problem. When I ran dd if=/dev/zero of=vm01.qcow2 it kept going until I ran out of space on that volume group (more than the 19G). I was expecting the qcow2 file to grow more or less with the size of the internal file system. Any hints on what I may be doing wrong?
Regards,
Pavel
Unless you have TRIM/DISCARD enabled for the host filesystem, QEMU and the guest OS, the qcow2 file will never shrink in size. So the most likely explanation is that something in the guest OS created a very large file for a short time and then deleted it again. The qcow2 image would have grown to hold this file, but once the file was deleted, the qcow2 image won't shrink again without TRIM/DISCARD being available.
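For illustration, here is a hedged sketch of the two usual ways to get the image size back down; the disk definition, device names and image paths below are assumptions, not taken from the question, and discard passthrough needs a newer libvirt/QEMU than the versions shown above.
# 1) Online: pass discards through to the image by adding discard='unmap'
#    to the disk's <driver> element in the libvirt XML, e.g.
#      <driver name='qemu' type='qcow2' discard='unmap'/>
#    then release freed blocks from inside the guest:
fstrim -av
# 2) Offline: zero the free space inside the guest, shut the guest down,
#    then rewrite the image so the all-zero clusters are dropped:
dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync   # inside the guest
qemu-img convert -O qcow2 vm01.qcow2 vm01-compacted.qcow2   # on the host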
I don't actually know whether this is more a classic Linux question or a Docker question, but:
On a VM where some of my Docker containers are running I'm seeing something strange. /var/lib/docker is its own partition with 20 GB. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I navigate to /var/lib and run du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So it's the same directory/partition but two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional metadata: e.g. the disk usage of 1000 empty files (file size = 0) is >0, since their file names and permissions need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file - blocks consisting of only null bytes are not actually written to disk; see sparse files
automatic file system compression
deduplication through hard links or copy-on-write
Since Docker uses image layers as a means of deduplication, the latter is most probably the cause of your observation - i.e. the sum of the file sizes is much bigger because most of the files are shared/deduplicated through hard links.
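A quick way to see the sparse-file and hard-link effects for yourself (the file names here are made up for the demo):
truncate -s 1G sparse.img        # a 1 GiB file that is entirely a hole
ls -lh sparse.img                # apparent size: 1.0G
du -h sparse.img                 # allocated size: 0
ln sparse.img sparse.link        # second name for the same data
du -ch sparse.img sparse.link    # du counts the shared blocks only once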
du estimates filesystem usage by summing the sizes of all files in it. This does not deal well with overlay2: there will be many directories that contain the same files as another directory, just overlaid with additional layers. As such, du will show a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid going into overlays could give the right amount. However, this wouldn't work for other Docker drivers, like btrfs, for example.
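If you want to try that, a minimal sketch (the path is the one from the question):
# stay on the filesystem mounted at /var/lib/docker and skip anything
# mounted below it, such as per-container overlay mounts
du -xsh /var/lib/docker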
I have a server running CentOS 7. This is the result of df -h:
Filesystem Size Used Avail Use% Mounted on
udev 7.4G 0 7.4G 0% /dev
tmpfs 1.5G 139M 1.4G 10% /run
/dev/vda1 46G 44G 0 100% /
tmpfs 7.4G 0 7.4G 0% /dev/shm
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
/dev/vda15 99M 3.6M 95M 4% /boot/efi
/dev/mapper/LVMVolGroup-DATA_VOLUME 138G 17G 114G 13% /mnt/data
tmpfs 1.5G 0 1.5G 0% /run/user/0
Even though there are 2 GB of free space on /, it shows that the filesystem is at 100% usage, and I can't install new packages because it tells me there is no space left on the device.
Besides, if I type sudo du -sh /* | sort -rh | head -15
the result is:
17G /mnt
1.1G /usr
292M /var
208M /root
139M /run
49M /boot
48M /tmp
32M /etc
28K /home
16K /lost+found
12K /anaconda-post.log
4.0K /srv
4.0K /opt
4.0K /media
0 /sys
So it seems that there are no big files filling up the disk, and the sum of the sizes of the directories does not even add up to 44 GB.
Additional info: the only service running on the server is Jenkins, but its home is under /mnt/data/jenkins.
How can I solve the problem?
I found the solution.
The problem was caused by deleted files that Jenkins still held open.
Restarting the service solved the problem.
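For anyone hitting the same thing, a hedged way to confirm the cause before restarting anything is to list files that have been deleted but are still held open by a process:
lsof +L1                # open files with a link count below 1, i.e. unlinked
lsof | grep -i deleted  # rougher alternative: lsof marks such files "(deleted)"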
The problem can also be related to system cache/temp storage. Linux creates cache and temporary files from time to time, especially when a long-running operation such as a database import or a cron job runs, or when the server has been up for a long time.
Restarting the service or the server removes those cache/temp files, which is why the problem went away.
Even on Windows we have seen this kind of performance issue when RAM is low, and restarting the system is the usual first fix.
I'm using a DS4 Azure VM (Ubuntu 14.04). It comes with a 56GB local SSD.
I need to set up a 25GB swapfile in this local SSD. When I do df -h in the VM, I can see that it seems to be mapped to the /mnt/ folder. Following is the entire output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 22G 6.4G 77% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 14G 4.0K 14G 1% /dev
tmpfs 2.8G 472K 2.8G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 14G 0 14G 0% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 56G 97M 56G 1% /mnt
However, if I try to initialize a swapfile in /mnt, it still gets added to the available disk space in /dev/sda1.
What do I need to do to set up my swap file? An illustrative example would be great. Thanks in advance.
I normally use the following commands to set up a swapfile:
sudo fallocate -l 25G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Update:
I went into /etc/waagent.conf and tweaked the following:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=y
# File system on the resource disk
# Typically ext3 or ext4. FreeBSD images should use 'ufs2' here.
ResourceDisk.Filesystem=ext4
# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Size of the swapfile.
ResourceDisk.SwapSizeMB=26000
After this, I resized (and consequently rebooted) my Azure VM from the portal. Currently I can't tell whether the settings have taken effect. Are my settings correct and what's the best way to ensure they've taken effect?
You are right; we should modify /etc/waagent.conf to add a swap file.
By modifying the /etc/waagent.conf file and setting the following three parameters, a swap file will be created in the directory defined by ResourceDisk.MountPoint:
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=26000
Then we should restart walinuxagent:
service walinuxagent restart
Commands to show the new swap space in use after agent restart:
dmesg | grep swap
root@ubuntu:~# swapon -s
Filename Type Size Used Priority
/mnt/swapfile file 26623996 0 -1
root@ubuntu:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 3.4G 12K 3.4G 1% /dev
tmpfs tmpfs 697M 412K 697M 1% /run
/dev/sda1 ext4 29G 869M 27G 4% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 3.5G 0 3.5G 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/dev/sdb1 ext4 99G 26G 68G 28% /mnt
I resized (and consequently rebooted) my Azure VM from the portal
I resized my VM, and the swap file was not lost.
Are my settings correct and what's the best way to ensure they've taken effect?
After modifying /etc/waagent.conf and restarting walinuxagent, we can use swapon -s to check it.
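As a rough checklist after the restart (the 26000 MB figure is just the value used in the question):
grep -E 'ResourceDisk\.(EnableSwap|SwapSizeMB)' /etc/waagent.conf
swapon -s    # should list /mnt/swapfile
free -m      # the Swap: row should show roughly 26000 MB total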
How can I increase the disk space of an instance without using EBS? The root file system is only showing 10 GB. Is there a way to create a bigger file system without EBS?
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 3.3G 6.1G 35% /
tmpfs 874M 0 874M 0% /lib/init/rw
udev 874M 84K 874M 1% /dev
tmpfs 874M 0 874M 0% /dev/shm
/dev/sdb 335G 12G 307G 4% /mnt
As you can see in the output, a much bigger partition is mounted at /mnt. You can move some of the things from the root filesystem there, either by remounting it at the appropriate location or by adding symlinks. There is no other way to add more disk space if you don't want to resort to EBS or a network filesystem.
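An illustrative sketch of the relocate-and-bind-mount approach; /var/lib/mysql is only an example path, and remember that /mnt on an instance-store volume is ephemeral, so anything placed there is lost when the instance is stopped:
sudo service mysql stop
sudo mv /var/lib/mysql /mnt/mysql
sudo mkdir /var/lib/mysql
sudo mount --bind /mnt/mysql /var/lib/mysql
sudo service mysql start
# or, instead of the bind mount, use a symlink:
#   sudo ln -s /mnt/mysql /var/lib/mysql   (remove the empty directory first)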
I have a Linux box with a full partition, and the full partition is stopping SQL from starting. I need to work out which files to delete in order to free up space on that partition. I have tried deleting backup database files from MySQL by hand using rm, and deleting old log files, but this only frees up more space on sda8, which has plenty of space already. Does anyone know how to find out which files are on sda7?
Here is the output of df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 4.6G 1.2G 3.2G 27% /
tmpfs 1.8G 0 1.8G 0% /lib/init/rw
varrun 1.8G 92K 1.8G 1% /var/run
varlock 1.8G 0 1.8G 0% /var/lock
udev 1.8G 168K 1.8G 1% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
lrm 1.8G 2.5M 1.8G 1% /lib/modules/2.6.28-19-generic/volatile
/dev/sda5 76M 20M 53M 27% /boot
/dev/sda8 220G 7.4G 202G 4% /home
/dev/sda7 4.6G 4.4G 0 100% /var
Thanks
/dev/sda7 4.6G 4.4G 0 100% /var
varrun 1.8G 92K 1.8G 1% /var/run
varlock 1.8G 0 1.8G 0% /var/lock
I re-arranged your df -h output a little and trimmed it to the most meaningful lines.
You need to remove content in /var/ that is not in /var/run or /var/lock. A very fast way to free up a large amount of space on Debian-derived systems (including Ubuntu) is to run apt-get autoclean -- this removes old packages from /var/cache/apt/archives/. apt-get clean will free up even more space by removing all packages from that directory. (These packages are kept around so they can be reinstalled without re-downloading.) If you're not sure which to run, apt-get clean is my suggestion -- you'll almost never need those packages anyway.
But that's not a long-term solution to your problem. You should probably store your SQL databases in /home instead. You have 202 gigabytes free there, and you probably have a backup solution of some sort in place for your /home partition -- right? -- that you might not have thought to extend to /var/. Make a new directory in /home/ for your SQL databases, make it owned by the user and group accounts for your SQL server, move your databases, and configure the database server to use the new location.
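To see which directories under /var are actually holding the space before deciding what to move, a simple sketch (sizes are in KiB, largest last):
du -x --max-depth=1 /var | sort -n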