Docker (Linux) Error: no space left on device even if I have enough space

I built a Docker image and while doing that I got the error: no space left on device. It happens at the step where Docker copies the Oracle database files. In my Dockerfile that step is:
ADD resources/oracle/database /data/home/oracle/database/
RUN chown -R oracle:oracle /data/home/oracle/database
This step takes some time during the build, and then it prints this error:
Step 10/32 : ADD resources/tomcat/apache-tomcat-8.0.29.tar.gz /opt/tomcat
---> 092e001b744e
Step 11/32 : ADD resources/oracle/database /data/home/oracle/database/
---> 69218d6278b0
Step 12/32 : RUN chown -R oracle:oracle /data/home/oracle/database
---> Running in 4ae797185eeb
Error processing tar file(exit status 1): write /data/home/oracle/database/javavm/jdk/jdk8/admin/classes.bin: no space left on device
Hmm, it seems to me that I don't have enough space on the Linux server. I then checked the space in Linux, as I was sure that I had plenty. I ran df -h and it looks like this:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 59G 0 59G 0% /dev
tmpfs 59G 0 59G 0% /dev/shm
tmpfs 59G 418M 59G 1% /run
tmpfs 59G 0 59G 0% /sys/fs/cgroup
/dev/sda3 39G 30G 9.0G 77% /
/dev/sda1 200M 8.6M 192M 5% /boot/efi
tmpfs 12G 0 12G 0% /run/user/1000
/dev/sdc 1008G 176G 781G 19% /data
/dev/sdd 1008G 466G 492G 49% /oracle
tmpfs 12G 0 12G 0% /run/user/2003
Before I built the image it looked like this:
davinci#sii-dev-ora19:/opt/davinci]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 59G 0 59G 0% /dev
tmpfs 59G 0 59G 0% /dev/shm
tmpfs 59G 418M 59G 1% /run
tmpfs 59G 0 59G 0% /sys/fs/cgroup
/dev/sda3 39G 16G 23G 42% /
/dev/sda1 200M 8.6M 192M 5% /boot/efi
tmpfs 12G 0 12G 0% /run/user/1000
/dev/sdc 1008G 176G 781G 19% /data
/dev/sdd 1008G 466G 492G 49% /oracle
tmpfs 12G 0 12G 0% /run/user/2003
That means I had 23 GB available on /dev/sda3 before I built the image and only 9 GB available after building it.
Question 1: why does it stop even though I still have 9 GB available on that partition?
Question 2: It seems that Docker doesn't take space from the other partitions. On /dev/sdc, for example, I have 781 GB available. How do I tell Docker to use space from there?
Question 3: It seems that Docker writes files into the folder /data/home/oracle/database/...
But when I go to that folder with WinSCP I can find only the directory /data. Is the folder /data/home/oracle/database/ somehow inside the container, or is it really in the Linux file system?

Question 1: why does it stop even though I still have 9 GB available on that partition?
You didn't have 9 GB free when the error occurred; after the error, Docker deleted the partial filesystem layer it had created, which freed the space again. Note that with overlay filesystems, a recursive chown copies every file in that directory tree into a new layer, doubling the used disk space.
Note that you should also watch for inode exhaustion when you get this error (df -i).
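For example (a sketch, not tested against this exact image): you can check inode usage on the Docker partition, and on Docker 17.09 or newer you can avoid the duplicating layer entirely by setting ownership while copying instead of running a separate chown:

$ df -i /var/lib/docker        # watch IUse% for inode exhaustion

# in the Dockerfile, instead of ADD followed by RUN chown -R:
ADD --chown=oracle:oracle resources/oracle/database /data/home/oracle/database/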
Question 2: It seems that Docker doesn't take space from the other partitions. On /dev/sdc, for example, I have 781 GB available. How do I tell Docker to use space from there?
Docker stores its data in /var/lib/docker. I'd recommend making that its own partition, or relocating it with a symlink, rather than trying to change the location Docker looks at; there are lots of tools out there that assume this directory name.
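A minimal sketch of the symlink approach, assuming you want Docker's storage to live on the large /data partition (the target path is just an example):

$ sudo systemctl stop docker
$ sudo mv /var/lib/docker /data/docker
$ sudo ln -s /data/docker /var/lib/docker
$ sudo systemctl start docker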
Question 3: It seems that Docker writes files into the folder /data/home/oracle/database/... But when I go to that folder with WinSCP I can find only the directory /data. Is the folder /data/home/oracle/database/ somehow inside the container, or is it really in the Linux file system?
Docker uses namespaces, and one of those namespaces is the filesystem. The root directory inside a Docker container is not the root directory on the host; otherwise you'd have no isolation. This is typically implemented as an overlay filesystem under the Docker directory, and each step of the Dockerfile may create a new filesystem layer used by overlay.
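If you want to see where a container's files actually live on the host (with the overlay2 driver), something like this should work; the container name here is just a placeholder:

$ docker inspect --format '{{ .GraphDriver.Name }}' mycontainer
$ docker inspect --format '{{ .GraphDriver.Data.MergedDir }}' mycontainer
# typically prints a path like /var/lib/docker/overlay2/<long id>/merged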

Related

Linux partition not showing full size

I have a Linux system where the disk space shows as only 29 GB, but when I look at the partition with the parted print command it shows as a 64 GB partition. I'm not sure whether the remaining disk space is unallocated, mounted in other folders, or stuck in tmpfs, nor how to add it to the primary partition. This is on Ubuntu 18.04. I would like the full 64 GB to be available at root. I appreciate any help!
When I run df -h, here are the results:
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.2M 3.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29G 25G 2.7G 91% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 976M 81M 829M 9% /boot
/dev/sda1 511M 4.4M 507M 1% /boot/efi
tmpfs 3.2G 0 3.2G 0% /run/user/1000
The results of the parted print command show a 64 GB partition:
Model: ATA MSH-64 (scsi)
Disk /dev/sda: 63.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 1612MB 1074MB ext4
3 1612MB 63.3GB 61.7GB
Results of vgs command:
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <57.50g <28.75g
Results of the lvs command:
(talos-env) pradmin#pradmin:~$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 28.75g
Depending on the installation, the root logical volume (LV) might only use part of the volume group (VG).
Try the commands vgs and lvs to get information about your current setup. I assume that vgs shows about 30 GB of free space. You can enlarge the root volume using lvresize. After this you need to grow the file system; how depends on the file system type you are using. If you use ext2/3/4, you can run resize2fs.
Edit based on the edited question:
Yes, everything can be done when the disk is mounted and in use.
BUT YOU NEED TO GET THE COMMANDS RIGHT YOURSELF! A WRONG COMMAND MIGHT DESTROY YOUR SYSTEM.
PLEASE TAKE YOUR TIME TO MAKE YOURSELF COMFORTABLE WITH LVM BEFORE CHANGING THE SYSTEM.
There are many good tutorials which might help you, e.g.:
http://ryandoyle.net/posts/expanding-a-lvm-partition-to-fill-remaining-drive-space/
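For instance, on a stock Ubuntu LVM layout the resize can be done in one step with lvextend's --resizefs option (a sketch; verify your VG/LV names with vgs and lvs first):

$ sudo lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
# -r / --resizefs also grows the ext4 filesystem right after extending the LV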
The guidance from Andreas proved helpful. I managed to resize the logical volume to the full size of the partition using the following sequence of commands.
Resources that I found helpful:
https://www.redhat.com/sysadmin/resize-lvm-simple
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/ext4grow
root:~# lvs
  LV        VG        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- 28.75g
Here you can see that the logical volume doesn't fill the full partition size
root:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   1   0 wz--n- <57.50g <28.75g
Extend the logical volume to use 100% of the free space; the device path is /dev/{VG from the lvs output}/{LV from the lvs output}:
root:~# lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 28.75 GiB (7360 extents) to <57.50 GiB (14719 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
I checked the disk space and saw that it hadn't changed yet:
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29542388 25311328 2707348 91% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000
Resize the file system to the full size of the logical volume, using the Filesystem name from the df output above. Note this is an ext4 filesystem; you may have to use a different command for a different filesystem.
root:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 8
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 15072256 (4k) blocks long.
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 59211724 25319316 31128948 45% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000
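As a side note (not part of the original walkthrough): if the root filesystem were XFS rather than ext4, the last step would use xfs_growfs instead of resize2fs, for example:

$ sudo xfs_growfs /
# XFS must be mounted to grow, and it cannot be shrunk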

Different sizes for /var/lib/docker

I don't actually know whether this is more of a classic Linux question or a Docker question, but:
On a VM where some of my Docker containers are running I see something strange. /var/lib/docker is its own partition with 20 GB. When I look at the partition with df -h I see this:
eti-gwl1v-dockerapp1 root# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 815M 7.0G 11% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 12G 3.2G 8.0G 29% /
/dev/sda7 3.9G 17M 3.7G 1% /tmp
/dev/sda5 7.8G 6.8G 649M 92% /var
/dev/sdb2 20G 47M 19G 1% /usr2
/dev/sdb1 20G 2.9G 16G 16% /var/lib/docker
So usage is at 16%. But when I navigate to /var/lib and run du -sch docker I see this:
eti-gwl1v-dockerapp1 root# cd /var/lib
eti-gwl1v-dockerapp1 root# du -sch docker
19G docker
19G total
eti-gwl1v-dockerapp1 root#
So the same directory/partition, but two different sizes? How can that be?
This is really a question for unix.stackexchange.com, but there is filesystem overhead that makes the partition larger than the total size of the individual files within it.
du and df show you two different metrics:
du shows you the (estimated) file space usage, i.e. the sum of all file sizes
df shows you the disk space usage, i.e. how much space on the disk is actually used
These are distinct values and can often diverge:
disk usage may be bigger than the mere sum of file sizes due to additional meta data: e.g. the disk usage of 1000 empty files (file size = 0) is >0 since their file names and permissions need to be stored
the space used by one or multiple files may be smaller than their reported file size due to:
holes in the file: blocks consisting of only null bytes are not actually written to disk, see sparse files
automatic file system compression
deduplication through hard links or copy-on-write
Since Docker uses image layers as a means of deduplication, the latter is most probably the cause of your observation, i.e. the sum of the file sizes is much bigger because most of the files are shared/deduplicated through hard links or shared image layers.
du estimates filesystem usage by summing the sizes of all files in it. This does not deal well with overlay2: there will be many directories that contain the same files as another directory, just overlaid with additional layers, so du will show a very inflated number.
I have not tested this since my Docker daemon is not using overlay2, but using du -x to avoid going into overlays could give the right amount. However, this wouldn't work for other Docker drivers, like btrfs, for example.
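One way to cross-check the two numbers (assuming a reasonably recent Docker that has the system df subcommand):

$ du -shx /var/lib/docker   # -x stays on one filesystem, skipping mounted overlay dirs
$ docker system df          # Docker's own accounting of images, containers and volumes
$ df -h /var/lib/docker     # what is actually allocated on the partition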

CentOS: no disk space left while copying a file to /home/ansible-user/, however admin-vol1 is 100 GB

I am trying to copy a tarball to a CentOS 7 server using WinSCP.
After some time the copy throws an error that there is no space left. However, when I check the monitoring GUI it shows that the server has a volume of 100 GB. I am copying to the /home/ansible-user directory, which has only 2 GB of space.
How can I increase the space allocated to the /home/ansible-user directory?
Also, it is not clear from df or df -h where and how the 100 GB of space is getting used up. Here is the output of the command:
3.9G 1.3G 2.5G 34% /
485M 0 485M 0% /dev
496M 0 496M 0% /dev/shm
496M 51M 446M 11% /run
496M 0 496M 0% /sys/fs/cgroup
496M 4.0K 496M 1% /tmp
99G 61M 94G 1% /data
488M 140M 313M 31% /boot
33G 2.7G 29G 9% /var
2.0G 443M 1.5G 24% /home
5.9G 65M 5.6G 2% /var/log
2.0G 3.0M 1.9G 1% /var/tmp
492M 35M 432M 8% /var/log/audit
100M 0 100M 0% /run/user/1000
UPDATE: df -i gives the following output:
262144 43638 218506 17% /
124118 378 123740 1% /dev
126926 1 126925 1% /dev/shm
126926 575 126351 1% /run
126926 16 126910 1% /sys/fs/cgroup
126926 9 126917 1% /tmp
6553600 11 6553589 1% /data
32768 344 32424 2% /boot
2162688 38203 2124485 2% /var
524288 869 523419 1% /home
393216 49 393167 1% /var/log
131072 13 131059 1% /var/tmp
131072 15 131057 1% /var/log/audit
126926 1 126925 1% /run/user/1000
Update: OK, my mistake, I read the output wrong. The /var filesystem has ~29 GB available and /data has 94 GB available.
Use (on your remote Linux server, perhaps through ssh) both df and df -i to check the available space (both for data and for inodes). You might have no more inodes available, even if a lot of data space remains free. See df(1). There could also be disk quotas; see quota(1) and ask your sysadmin.
error that no space left
That could be either data space, or inode space (or some disk quota exceeded). You should use both df and df -i to find out.
How can I increase the space allocated to the home/ansible folder?
That is a question for the sysadmin of your Linux server (who probably is also in charge of installing software). BTW, Linux has directories, not folders (folders are visible in some GUI, and might not be shown).
The amount of space dedicated to inodes and to data is fixed when the file system is created with mkfs(8) (actually with mke2fs(8) for ext4 file systems); usually that happens when installing your Linux distribution. You could consider resizing it, but be sure to back up all the data (on some external storage) before attempting any resize2fs(8) (on an unmounted partition); it is a risky operation, and you might lose the whole disk partition if something goes wrong...
Finally, I recommend copying the *.tar.gz archive to your remote Linux server and using a tar xvf command on that server to extract it. Try tar tvf first to list the contents. See tar(1).
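A small sketch of that workflow; the archive name and target directory are just placeholders:

$ scp myapp.tar.gz ansible-user@server:/data/   # copy to a partition with enough room
$ ssh ansible-user@server
$ tar tvf /data/myapp.tar.gz                    # list the contents first
$ mkdir -p /data/myapp && tar xvf /data/myapp.tar.gz -C /data/myapp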

qcow2 growing faster than guest filesystem

I'm having difficulty understanding the disk size of my qcow2 image.
I have a CentOS 6 box running:
# virsh version
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 0.12.1
I run a couple of guests there, and without much activity on the guests I noticed that the backup (I do a manual complete file copy with cp, no qcow2-based snapshots) of one of my guests has grown to 4 times its size. The other guests seem to behave normally and have normal backup size growth.
When I log in to that guest I see this:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 396M 5.5M 391M 2% /run
/dev/mapper/debian9--vg-root 188G 2.7G 176G 2% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/vda1 236M 62M 162M 28% /boot
tmpfs 89M 0 89M 0% /run/user/0
but the qcow2 file has grown from 5GB to
# du -h /backups/vm01/20180111/vm01.qcow2
19G /backups/vm01/20180111/vm01.qcow2
I found that the size of the qcow2 disk file grows rapidly, and I tried to "qemu-img convert" the backup file, but that did not solve the problem. When I ran dd if=/dev/zero of=vm01.qcow2 it ran until I ran out of space on that volume group (more than the 19 GB). I was expecting the qcow2 file to grow more or less with the size of the internal file system. Any hints on what I may be doing wrong?
Regards,
Pavel
Unless you have TRIM/DISCARD enabled for the host filesystem, QEMU and the guest OS, the qcow2 file will never shrink in size. So the most likely explanation is that something in the guest OS created a very large file for a short time and then deleted it again. The qcow2 image would have grown to hold this file, but once the file was deleted, the qcow2 image won't shrink again without TRIM/DISCARD being available.
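If enabling TRIM/DISCARD is not an option, a common workaround (a sketch only; file names are placeholders, and the guest should be shut down before the convert step) is to zero the free space inside the guest and then rewrite the image, which drops the zeroed clusters:

# inside the guest: fill free space with zeros, then remove the file
$ dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile; sync

# on the host, with the guest shut down: re-create a compact copy of the image
$ qemu-img convert -O qcow2 vm01.qcow2 vm01-compact.qcow2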

Amazon EC2 micro instance - ran out of space?

df -h shows that only 71% of the space is used:
root#ip-xxx-xxx-xxx-xxx:/home/myuser# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 7.9G 5.3G 2.2G 71% /
udev 10M 0 10M 0% /dev
tmpfs 60M 88K 60M 1% /run
/dev/xvda1 7.9G 5.3G 2.2G 71% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 120M 0 120M 0% /run/shm
However, nothing can create a file anymore; even mc does not start:
#mc
Cannot create temporary directory /tmp/mc-root: No space left on device (28)
PHP cannot create files either:
PHP Warning: fopen(/home/.../file.json): failed to open stream: No space left on device in /webdev/www/..../my.php on line 10
What could it be?
I use Debian 7 on Micro instance.
df -h shows you disk free space in human-readable format. But this sounds like an inode table issue, which you can check via df -i. For example, here is my inode usage on my own Amazon EC2 micro instance running Ubuntu 12.04:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 524288 116113 408175 23% /
udev 73475 379 73096 1% /dev
tmpfs 75540 254 75286 1% /run
none 75540 5 75535 1% /run/lock
none 75540 1 75539 1% /run/shm
Depending on the output, I bet your inode table is filled to the brim. The inode table tracks every individual file, not just how much space is used. That means you might have 71% of the space in use, but that 71% can be made up of thousands of files. So if you have tons of small files, you might still technically have free space, yet the inode table is full, and you have to clear it out to get your system fully functional again.
I'm not too clear on the best way to clean this up, but if you know of a directory that has tons of files you can toss away right away, I would recommend removing those first. For what it's worth, this question & answer thread looks like it has some decent ideas.
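To see where the inodes are going, something like this (du --inodes needs GNU coreutils 8.22 or newer) lists the directories holding the most files:

$ sudo du --inodes -x / 2>/dev/null | sort -n | tail -20
# or, portably, count files per directory:
$ sudo find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -rn | head -20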
