Cannot log in to ownCloud: No space left on device (Linux)

I am currently using the latest version of ownCloud. Since installing it, I cannot log in anymore. A quick look at /var/log/apache2/error.log explains why:
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7826": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "base/17999/pg_internal.init.7827": No space left on device
DETAIL: Continuing anyway, but there's something wrong.
WARNING: could not create relation-cache initialization file "global/pg_internal.init.7828": No space left on device
But I cannot figure out where I am running out of space. If I run df -h as root, everything looks fine to me:
:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 20G 20G 0 100% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 82M 3.8G 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda2 898G 912M 851G 1% /home
tmpfs 788M 0 788M 0% /run/user/0
Except for the first line, which I don't really understand. I installed ownCloud into /home/owncloud, so I would expect everything to be fine.
Any idea?
Edit:
Results of findmnt:
~# findmnt /
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda1
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,errors=remount-ro,data=ordered
~# findmnt /dev/sda2
TARGET SOURCE FSTYPE OPTIONS
/home /dev/sda2 ext4 rw,relatime,data=ordered

Often, these programs store their data under /var. In your case, you don't have a separate mount point for /var, so it is just a directory on your root filesystem /, which is full, and that is why the program is failing.
Before you attempt a resize or anything like that, you should find out what is hogging those 20 GB. du / | sort -n should give you a rough idea of the guilty parties, or you can use a graphical tool like xdiskusage. Clean it up and you'll be good to go.
The other option is to look through ownCloud's config files and make it store its data under your home directory. That will get it working, but you should still clean up /; various things will misbehave if you don't.
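For example, a quick sketch to see where the space on / is actually going (du -x stays on the root filesystem, so /home is not counted; sort -h needs GNU coreutils):
# biggest directories on /, two levels deep
du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20
# or look specifically at the usual suspects under /var
du -xsh /var/* 2>/dev/null | sort -h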

Maybe you are out of inodes: see "No space left on device – running out of Inodes".
Use df -i to check. It happened to me when my backups consisted of millions of small files, so there was disk space left but no free inodes.
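If df -i does report an IUse% of 100% on /, here is a rough sketch for finding the directory that is eating the inodes (assumes a reasonably recent GNU du that supports --inodes):
# inode (file/directory) counts per directory on the root filesystem
du --inodes -x / 2>/dev/null | sort -n | tail -n 20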

Related

Different sizes for /var/lib/docker

I don't actually know whether this is more of a classic Linux question or a Docker question, but:
On a VM where some of my Docker containers are running, I'm seeing something strange. /var/lib/docker is its own 20 GB partition. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I navigate to /var/lib and run du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So it's the same directory/partition, but two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional metadata: e.g. the disk usage of 1000 empty files (file size = 0) is > 0, since their file names and permissions still need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file: blocks consisting of only null bytes are not actually written to disk (see sparse files)
automatic file system compression
deduplication through hard links or copy-on-write
Since Docker uses image layers as a means of deduplication, the latter is most probably the cause of your observation, i.e. the sum of the file sizes is much bigger because most of the files are shared/deduplicated through hard links.
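A quick way to see this effect in isolation, outside of Docker (a scratch-directory sketch): hard-link the same data under two names, then compare what du and df report.
cd "$(mktemp -d)"
# 100 MiB of data reachable under two names via a hard link
dd if=/dev/zero of=demo.bin bs=1M count=100
ln demo.bin demo-link.bin
du -sh demo.bin demo-link.bin   # one invocation: the shared inode is counted only once
du -sh demo-link.bin            # counted a second time when summed separately
df -h .                         # the filesystem itself only lost ~100 MiB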
du estimates filesystem usage by summing the size of all files in it. This does not deal well with overlay2: there will be many directories that contain the same files as another directory, just overlaid with additional layers by overlay2. As such, du will show a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid going into overlays could give the right amount. However, this wouldn't work for other Docker drivers, like btrfs, for example.
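In other words, something along these lines (again untested, and only meaningful while the containers' overlay mounts sit below /var/lib/docker):
du -xsh /var/lib/docker   # -x keeps du from descending into the mounted overlay filesystems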

Filesystems and quotas for the home directory and /usr/local on a Google Cloud VM

I created a Debian VM on Google Cloud. Below is the output of df -h. What do those filesystems, such as tmpfs or /dev/sda1, mean? Is there any beginner-friendly reference for them? In particular, how much space can I use in my working directory ~, and how much space can I use in /usr/local (for installing software)? Any ideas?
zell#instance-1:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 371M 6.4M 365M 2% /run
/dev/sda1 9.8G 1.4G 7.9G 15% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
df -h
shows the amount of disk space used and available on Linux file systems. The command df stands for Disk Free and -h means human-readable form.
You can also see information about a specific filesystem, as follows:
df /dev/sda1
From Documentation:
Tmpfs is a file system which keeps all files in virtual memory.
Everything in tmpfs is temporary in the sense that no files will be
created on your hard drive. If you unmount a tmpfs instance,
everything stored therein is lost.
They can be mounted on different directories. For example, a tmpfs filesystem mounted at /dev/shm is used for the implementation of POSIX shared memory, an inter-process communication (IPC) mechanism where two or more processes may read from and write to a shared memory region, and POSIX semaphores, which allow processes and threads to synchronize their actions.
From What does /dev/sda mean?:
/dev/ is the part in the unix directory tree that contains all "device" files -- unix traditionally treats just about everything you can access as a file to read from or write to. Therefore, /dev/sda1 means the first partition on the first drive, and /dev/sda9 will mean the ninth partition on the first drive.
Check out the link for more information.
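If you want a quick overview of how the disks and partitions on the VM fit together, lsblk prints them as a tree (just an illustration, not something the quoted documentation covers):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT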
To display the amount of disk space used by the specified files and for each subdirectory, you can run the following command:
du -h
WHERE du stands for Disk Usage and -h means human-readable form.
Optionally you can use the following command to display the amount of disk space used by a certain directory:
du -h /usr/local
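To answer the original question directly: the df -h output above shows no separate mount for /home or /usr/local, so both live on /dev/sda1, the 9.8G root filesystem, and share its free space (7.9G available at the time of that listing). You can confirm which filesystem backs a given path by passing the path to df or du, for example:
df -h ~ /usr/local    # the filesystem (and free space) behind each path
du -sh ~ /usr/local   # how much each path currently uses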

qcow2 growing faster than guest filesystem

I'm having difficulty understanding the disk size of my qcow2 image.
I have a CentOS 6 box running:
# virsh version
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 0.12.1
I run a couple of guests there, and without much activity on the guests I noticed that the backup of one of them (I do a manual complete file copy with cp, no qcow2-based snapshots) has grown to four times its size. The other guests behave normally and their backups grow at a normal rate.
When I log in to that guest, I see this:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 396M 5.5M 391M 2% /run
/dev/mapper/debian9--vg-root 188G 2.7G 176G 2% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 236M 62M 162M 28% /boot
tmpfs 89M 0 89M 0% /run/user/0
but the qcow2 file has grown from 5GB to
# du -h /backups/vm01/20180111/vm01.qcow2
19G /backups/vm01/20180111/vm01.qcow2
I found that the size of the qcow2 disk file grows rapidly, so I tried qemu-img convert on the backup file, but that did not solve the problem. When I ran dd if=/dev/zero of=vm01.qcow2 it kept going until I ran out of space on that volume group (more than the 19G). I was expecting the qcow2 file to grow more or less in step with the size of the internal file system. Any hints as to what I may be doing wrong?
Regards,
Pavel
Unless you have TRIM/DISCARD enabled for the host filesystem, QEMU, and the guest OS, the qcow2 file will never shrink in size. So the most likely explanation is that something in the guest OS created a very large file for a short time and then deleted it again. The qcow2 image would have grown to hold this file, but once the file was deleted the image does not shrink back without TRIM/DISCARD being available.
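If enabling TRIM/DISCARD end to end is not practical on a QEMU this old, one common workaround (a sketch, not tested against this exact setup) is to zero the guest's free space and then re-copy the image, since qemu-img convert leaves zero-filled clusters unallocated:
# inside the guest: fill free space with zeros, then delete the filler file
# (dd stopping with "No space left on device" is expected here)
dd if=/dev/zero of=/zerofill bs=1M; rm -f /zerofill; sync
# on the host, with the guest shut down: rewrite the image without the zeroed clusters
qemu-img convert -O qcow2 vm01.qcow2 vm01-compacted.qcow2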

About Linux file system

I just set up a Fedora 22 system on VMware with 60 GB. When I run the df command, it displays this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/fedora-root 38440424 4140700 32324020 12% /
devtmpfs 2009804 0 2009804 0% /dev
tmpfs 2017796 92 2017704 1% /dev/shm
tmpfs 2017796 872 2016924 1% /run
tmpfs 2017796 0 2017796 0% /sys/fs/cgroup
tmpfs 2017796 532 2017264 1% /tmp
/dev/sda1 487652 79147 378809 18% /boot
/dev/mapper/fedora-home 18701036 49464 17678568 1% /home
What is the exact size of each of these 1K-blocks? Does /dev/mapper/fedora-root contain /dev/mapper/fedora-home?
I'm so confused by the df command.
Thanks a lot.
You can see from the df output that /dev/mapper/fedora-home can currently be reached at /home which is its mount point. Because the mount point for /dev/mapper/fedora-root is at / farther up the directory tree, anything that /dev/mapper/fedora-root has in /home is not accessible by normal means until and unless /dev/mapper/fedora-home gets unmounted.
As David Schwartz noted, a 1K-block is one binary kilobyte, i.e. 1024 bytes. Note that GNU df does not report sizes in the device's native block size: it defaults to 1024-byte blocks (or 512-byte blocks when POSIXLY_CORRECT is set, a holdover from older Unix conventions). df -k forces 1K blocks explicitly, and df -h prints human-readable sizes.
What is the exact size of each 1K-blocks?
1,024 bytes.
Does /dev/mapper/fedora-root contain /dev/mapper/fedora-home?
They're separate filesystems; that's why they appear on separate lines in the df output.
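To put concrete numbers on it: multiply the 1K-blocks figure by 1024 to get bytes, so /dev/mapper/fedora-root is 38440424 × 1024 ≈ 39.4 GB (about 36.7 GiB) and /dev/mapper/fedora-home is 18701036 × 1024 ≈ 19.1 GB (about 17.8 GiB). They are two separate pools of space: files under /home only ever consume the second one.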

How do you locate which files are on which partition with Linux (Ubuntu)?

I have a Linux box with a full partition, and that full partition is stopping SQL from starting. I need to work out which files to delete in order to free up space on that partition. I have tried deleting backup database files from MySQL by hand using rm, and deleting old log files, but this just frees up more space on sda8, which already has plenty of space. Does anyone know how to find out which files are on sda7?
Here is the output of df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 4.6G 1.2G 3.2G 27% /
tmpfs 1.8G 0 1.8G 0% /lib/init/rw
varrun 1.8G 92K 1.8G 1% /var/run
varlock 1.8G 0 1.8G 0% /var/lock
udev 1.8G 168K 1.8G 1% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
lrm 1.8G 2.5M 1.8G 1% /lib/modules/2.6.28-19-generic/volatile
/dev/sda5 76M 20M 53M 27% /boot
/dev/sda8 220G 7.4G 202G 4% /home
/dev/sda7 4.6G 4.4G 0 100% /var
Thanks
/dev/sda7 4.6G 4.4G 0 100% /var
varrun 1.8G 92K 1.8G 1% /var/run
varlock 1.8G 0 1.8G 0% /var/lock
I re-arranged your df -h output a little and trimmed it to the most meaningful lines.
You need to remove content in /var/ that is not in /var/run or /var/lock. A very fast way to free up a large amount of space on Debian-derived systems (including Ubuntu) is to run apt-get autoclean -- this will remove old packages from /var/cache/apt/archives/. apt-get clean will free up even more space by removing all packages from that directory. (These packages are only kept around for troubleshooting.) If you're not sure which to run, apt-get clean is my suggestion -- you'll almost never need those cached packages anyway.
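To see how much space that would actually free before you run it (a quick check):
du -sh /var/cache/apt/archives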
But that's not a long-term solution to your problem. You should probably store your SQL databases in /home instead. You have 202 gigabytes free there, and you probably have a backup solution of some sort in place for your /home partition -- right? -- that you might not have thought to extend to /var/. Make a new directory in /home/ for your SQL databases, make it owned by the user and group accounts your SQL server runs as, move your databases, and configure the database server to use the new location.
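If you do move the databases, here is a rough sketch of the steps (paths and service names are illustrative for a MySQL server on Ubuntu; you may also need to update the AppArmor profile for mysqld so it can read the new location):
# stop the server before touching its files
service mysql stop
# copy the data to /home, preserving ownership and permissions
mkdir -p /home/mysql-data
cp -a /var/lib/mysql/. /home/mysql-data/
chown -R mysql:mysql /home/mysql-data
# point MySQL at the new location: set  datadir = /home/mysql-data  under [mysqld] in /etc/mysql/my.cnf
service mysql start
# only once everything works, remove the old copy to actually free space on /var
rm -rf /var/lib/mysql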
