How does Docker map host partitions? (linux)

I'm relatively new to docker, and when I started a container (an ubuntu base image), I noticed the following:
On the host,
$ df -h
...
/dev/sdc1 180M 98M 70M 59% /boot
/dev/sdc2 46G 20G 24G 46% /home
/dev/sdc5 37G 7.7G 27G 23% /usr
/dev/sdc6 19G 13G 5.3G 70% /var
$ lsblk
...
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 190M 0 part /boot
├─sdc2 8:34 0 46.6G 0 part /home
├─sdc3 8:35 0 18.6G 0 part /
├─sdc4 8:36 0 1K 0 part
├─sdc5 8:37 0 37.3G 0 part /usr
├─sdc6 8:38 0 18.6G 0 part /var
├─sdc7 8:39 0 29.8G 0 part [SWAP]
└─sdc8 8:40 0 42.8G 0 part
In the container,
$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 19G 13G 5.3G 70% /
none 19G 13G 5.3G 70% /
tmpfs 7.8G 0 7.8G 0% /dev
shm 64M 0 64M 0% /dev/shm
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
tmpfs 7.8G 0 7.8G 0% /proc/kcore
tmpfs 7.8G 0 7.8G 0% /proc/latency_stats
tmpfs 7.8G 0 7.8G 0% /proc/timer_stats
$ lsblk
sdc 8:32 0 232.9G 0 disk
|-sdc1 8:33 0 190M 0 part
|-sdc2 8:34 0 46.6G 0 part
|-sdc3 8:35 0 18.6G 0 part
|-sdc4 8:36 0 1K 0 part
|-sdc5 8:37 0 37.3G 0 part
|-sdc6 8:38 0 18.6G 0 part /var/lib/cassandra
|-sdc7 8:39 0 29.8G 0 part [SWAP]
`-sdc8 8:40 0 42.8G 0 part
Question 1: why is sdc6 mounted on different places between the host and the container?
The contents of the two mount points are different, so I assume Docker must have done some kind of device mapping in the container, and that sdc6 in the container isn't the same as the one on the host. However, the partition capacity and usage are the same, so I'm confused.
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Thanks for any help.
Addition
The Dockerfile has a line
VOLUME /var/lib/cassandra

Question 1: why is sdc6 mounted on different places between the host and the container?
/dev/sdc6 on your host is /var, which is where /var/lib/docker resides and where Docker keeps certain data, such as the hosts file allocated to your container.
The hosts file is exposed as a bind mount inside the container, which is why you see:
/dev/sdc6 19G 13G 5.3G 70% /etc/hosts
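You can see how Docker wired this up by inspecting the container: the host path of the managed hosts file is recorded under HostsPath, and the volume created by the VOLUME /var/lib/cassandra line in your Dockerfile appears under Mounts (a sketch; replace my_container with your container's name or ID):
$ docker inspect --format '{{ .HostsPath }}' my_container
$ docker inspect --format '{{ json .Mounts }}' my_container
Both of the reported host paths live under /var/lib/docker, i.e. on /dev/sdc6, which also explains why the container's lsblk shows sdc6 mounted at /var/lib/cassandra.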
Question 2: why is the container's root dir usage so high? The docker image doesn't have much stuff on it.
Take a look at the df output inside the container:
rootfs 19G 13G 5.3G 70% /
Now look at the df output on your host, and you'll see:
/dev/sdc6 19G 13G 5.3G 70% /var
The df inside the container is reflecting the state of the host filesystem. This suggests that you are using the aufs or overlay storage driver, both of which create "overlay" filesystems for containers on top of the host filesystem. The output from df would look different if you were using the devicemapper storage driver, which relies on device mapper block devices instead of overlay filesystems.
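A quick way to confirm which storage driver the daemon is using (this grep works on both old and new Docker versions):
$ docker info | grep 'Storage Driver'
The output will name aufs, overlay/overlay2, devicemapper, and so on.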

Related

AWS ami cannot execute file from attached volume

I have created an AMI with two volumes attached, as follows:
[ec2-user@ip-192-***** ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 100G 0 disk
├─nvme0n1p1 259:1 0 100G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
nvme2n1 259:3 0 320G 0 disk
├─hardenedpartitions-tmp 253:0 0 25G 0 lvm /var/tmp
├─hardenedpartitions-home 253:1 0 25G 0 lvm /home
├─hardenedpartitions-var 253:2 0 35G 0 lvm /var
├─hardenedpartitions-varlog 253:3 0 25G 0 lvm /var/log
└─hardenedpartitions-varlogaudit 253:4 0 16G 0 lvm /var/log/audit
nvme1n1 259:4 0 320G 0 disk
[root@ip-192-**** ec2-user]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 520K 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 100G 1.7G 99G 2% /
/dev/mapper/hardenedpartitions-home 25G 436M 23G 2% /home
/dev/mapper/hardenedpartitions-var 35G 407M 33G 2% /var
/dev/mapper/hardenedpartitions-tmp 25G 64K 24G 1% /tmp
/dev/mapper/hardenedpartitions-varlog 25G 42M 24G 1% /var/log
/dev/mapper/hardenedpartitions-varlogaudit 16G 880K 15G 1% /var/log/audit
tmpfs 774M 0 774M 0% /run/user/1000
tmpfs 774M 0 774M 0% /run/user/0
I am trying to boot an instance from this AMI in OpsWorks, but it gets stuck booting in OpsWorks (it still shows as started in EC2). After SSHing into the instance and inspecting the logs in /var/logs/aws/opsworks/ I see the following:
[Tue, 28 Jun 2022 14:44:13 +0000] opsworks-init: Starting: Download Installer.
/var/lib/cloud/instance/scripts/part-002: line 433: /tmp/opsworks-agent-downloader.sh: Permission denied
Then doing something like this does not work:
[root@ip-192-**** ec2-user]# chmod 777 /tmp/opsworks-agent-downloader.sh
[root@ip-192-**** ec2-user]# ls -la /tmp/opsworks-agent-downloader.sh
-rwxrwxrwx 1 root root 7045 Jun 28 14:44 /tmp/opsworks-agent-downloader.sh
[root@ip-**** ec2-user]# /tmp/opsworks-agent-downloader.sh
bash: /tmp/opsworks-agent-downloader.sh: Permission denied
Any ideas why I cannot run this file as root from attached volume?
So the problem was with the way the volume had been attached to the instance.
Specifically, the line that had been added to the /etc/fstab file, something like this:
mount /dev/hardenedpartitions/tmp ..... noexec .....
The noexec option specifies that no files can be executed from that filesystem, even if they have the correct permissions. Removing it allowed the instance to boot in OpsWorks.
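To check whether a mount point currently has noexec set, and to clear it on the fly without editing /etc/fstab, something like the following works (adjust /tmp to the mount point in question; the remount lasts only until reboot):
$ findmnt -no OPTIONS /tmp
$ sudo mount -o remount,exec /tmp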

How to check the swap partition in Ubuntu

I found a strange thing, the result of free shows:
> free
total used free shared buff/cache available
Mem: 6060516 1258584 3614828 34340 1187104 4469908
Swap: 2097148 0 2097148
But the result of df:
> df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.9G 0 2.9G 0% /dev
tmpfs 592M 1.9M 591M 1% /run
/dev/sda1 98G 85G 8.8G 91% /
tmpfs 2.9G 40K 2.9G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.9G 0 2.9G 0% /sys/fs/cgroup
/dev/loop0 161M 161M 0 100% /snap/gnome-3-28-1804/116
...
/dev/loop18 2.5M 2.5M 0 100% /snap/gnome-calculator/730
tmpfs 592M 32K 592M 1% /run/user/1000
There is no swap partition ... I used the default config to build Ubuntu 18.04 in VMware
> uname -a
Linux ubuntu 5.3.0-51-generic #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Is the swap partition enabled in the system?
The swap partition is not a file system and, as a consequence, is not displayed by df, which works only on file systems.
Instead you can use swapon:
$ swapon
NAME TYPE SIZE USED PRIO
/swap.img file 2G 0B -2
Besides the swapon command suggested by pifor, you can also check your partitions, swap included, with the lsblk command:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 2G 0 lvm [SWAP]
└─centos-home 253:2 0 27G 0 lvm /home
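Note that swapon above reports TYPE file: Ubuntu 18.04 by default creates a swap file (/swap.img) rather than a swap partition, which is why neither df nor lsblk shows it. /proc/swaps lists active swap areas either way; with the 2 GB swap file from the question it would look roughly like this:
$ cat /proc/swaps
Filename  Type  Size     Used  Priority
/swap.img file  2097148  0     -2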

(linux) gzip fails with No space left on device

I need to compress a directory on my Ubuntu server. The directory is about 3.2 GB, and I have 15 GB free of the 20 GB available on my server.
I'm using the command: tar -zcvf test src_directory
The command fails with the message:
gzip: stdout: No space left on device
tar: test: Wrote only 6144 of 10240 bytes
tar: Child returned status 1
tar: Error is not recoverable: exiting now
Why is it failing when I have enough space on my server? (15 GB should be enough.)
thanks
EDIT
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 463M 0 463M 0% /dev
tmpfs 98M 2.2M 96M 3% /run
/dev/xvda1 20G 17G 2.2G 89% /
tmpfs 490M 0 490M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 490M 0 490M 0% /sys/fs/cgroup
tmpfs 24K 0 24K 0% /var/gandi
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 20G 0 disk
└─xvda1 202:1 0 20G 0 part /
xvdz 202:6400 0 512M 0 disk
├─xvdz1 202:6401 0 502M 0 part [SWAP]
└─xvdz2 202:6402 0 10M 0 part
With df, I can see there is only 2.2G left on the disk. How can I get details about what is taking up so much space? I know that my application files only take 4.6G.
thanks
Check whether you have enough space available on the partition you want the archive to be stored in:
$: df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 476G 300G 176G 63% /
Also check if you have the permissions necessary to archive the desired directory.
If the problem still persists, I suggest a filesystem check:
$: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 698.7G 0 disk
├─sda1 8:1 0 500M 0 part /boot
├─sda2 8:2 0 5.8G 0 part [SWAP]
├─sda3 8:3 0 50G 0 part /
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 642.4G 0 part /home
# Find the device that matches the mountpoint of the directory in question
# Replace X with the appropriate device
# Replace Y with the appropriate partition
$: sudo fsck -vcck /dev/sdXY
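As for the follow-up question about what is taking the space, du can break usage down per directory (a sketch; -x keeps du from crossing into other filesystems, and you can re-run it on whichever subdirectory turns out biggest):
$: sudo du -xh --max-depth=1 / | sort -h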

"No space left on device" when using dd to create a disk image

I am trying to create a disk image of my Raspberry Pi Model 3 B+ onto a USB drive using dd. I know there are easier ways to do this on a Raspberry Pi, but I want to test the procedure on a 'sacrificial' system first, which I then hope to use on another Linux computer running a much larger Ubuntu disk to create a backup. The OS is Raspbian Buster 10.
I have been following a procedure I found on an article here: https://www.makeuseof.com/tag/easily-clone-restore-linux-disk-image-dd/
The USB drive has 64GB capacity and has been formatted, initially as exFAT, but I also tried NTFS thinking maybe that was the issue. The command ended with the same error; however, each time I have tried this the file size transferred has been different, varying from 2-8GB before the error occurred.
This is to identify my drives - the SD card is "mmcblk" and my USB drive is "sda", called "NINJA":
pi@raspberrypi:~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 57.9G 0 disk
└─sda1 8:1 1 57.9G 0 part
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /boot
└─mmcblk0p2 179:2 0 14.6G 0 part /
This my command I tried to use:
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img
and this is the output:
dd: error writing '/media/pi/NINJA/raspibackup.img': No space left on device
605+0 records in
604+0 records out
2535124992 bytes (2.5 GB, 2.4 GiB) copied, 325.617 s, 7.8 MB/s
Check how much disk space is "Avail" on the target device.
Example:
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 41M 456M 9% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 6.2G 6.2G 172K 100% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
Terminology:
df: DiskFree
-h: Human Readable Sizes (Ex: 6.2G instead of 6485900)
In this example, let's say I want to make a backup of the Boot drive (/dev/sda1) and save it in my Local User Home Folder on my Root Drive (/dev/mapper/centos-root).
When I do this, I will get an error that looks like:
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
dd: error writing 'boot.img': No space left on device
1736905+0 records in
1736904+0 records out
889294848 bytes (889 MB) copied, 4.76575 s, 187 MB/s
Terminology:
sudo: Super User Do
dd: Disk Duplicate
if: Input File (source)
of: Output File (destination)
The system is trying to copy ALL of /dev/sda1 (including free space) to boot.img, which is impossible here because /dev/sda1 is 1014M and there is only 172K left on /dev/mapper/centos-root.
That said, the actual size of /dev/sda is 16G in total, which means around 8G of it is not allocated.
My /dev/sda1 is 1G, and my /dev/sda2 (which holds centos-root) is 15G, yet centos-root is currently only 6.2G.
[jack@server1 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 15G 0 part
├─centos-root 253:0 0 6.2G 0 lvm /
└─centos-swap 253:1 0 820M 0 lvm [SWAP]
sr0 11:0 1 552M 0 rom /run/media/jack/CentOS 7 x86_64
This partition can be extended by doing the following:
[jack@server1 ~]$ sudo lvextend -L +8G /dev/mapper/centos-root
[jack@server1 ~]$ sudo xfs_growfs /dev/mapper/centos-root
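xfs_growfs is the right tool here because CentOS 7 uses XFS by default; if the logical volume held an ext4 filesystem instead (an assumption, not what the output above shows), the equivalent grow step would be:
[jack@server1 ~]$ sudo resize2fs /dev/mapper/centos-root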
Now that my partition is extended, I can do another DiskFree command to double check.
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 33M 463M 7% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 15G 7.0G 7.3G 49% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
My root partition is now 15G! Now I can perform my backup of the /dev/sda1 partition!
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.59741 s, 192 MB/s
Mission Complete!
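An aside: if you cannot free up or extend enough space, a common alternative is to compress the image as it is written, since dd otherwise copies every block of the partition, free space included (a sketch, not what was done above):
[jack@server1 ~]$ sudo dd if=/dev/sda1 bs=4M | gzip > boot.img.gz
# restore later with:
[jack@server1 ~]$ gunzip -c boot.img.gz | sudo dd of=/dev/sda1 bs=4M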
sda1 is not mounted at /media/pi/NINJA/, so the image you create is actually stored on the mmcblk0p2 partition.
Since mmcblk0 is by definition larger than mmcblk0p2, you logically run out of space on it.
Solution :
You need to mount sda1 first, using sudo mount /dev/sda1 /media/pi/NINJA/, and then try your dd command again.
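To confirm the mount actually took effect before rerunning dd, either of these will show the USB drive at the target path:
$ findmnt /media/pi/NINJA
$ df -h /media/pi/NINJA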

Disk size for Azure VM on docker-machine

I am creating an Azure VM using docker-machine as follows.
docker-machine create --driver azure --azure-size Standard_DS2_v2 --azure-subscription-id #### --azure-location southeastasia --azure-image canonical:UbuntuServer:14.04.2-LTS:latest --azure-open-port 80 AwesomeMachine
following the instructions here. The Azure VM docs say the max disk size of Standard_DS2_v2 is 100GB,
however, when I log in to the machine (or create a container on this machine), the max available disk size I see is 30GB.
$ docker-machine ssh AwesomeMachine
docker-user@tf:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 6.9G 21G 25% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.4G 12K 3.4G 1% /dev
tmpfs 698M 452K 697M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.5G 1.1M 3.5G 1% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 14G 35M 13G 1% /mnt
What is the meaning of Max. disk size then? Also what is this /dev/sdb1? Is it usable space?
My bad, I didn't look at the documentation carefully.
So when --azure-size is Standard_DS2_v2, Local SSD disk = 14 GB, which is /dev/sdb1, while
--azure-size Standard_D2_v2 gives you Local SSD disk = 100 GB.
Not deleting the question in case somebody else makes the same stupid mistake.
