How to limit disk IO of a Docker container? - linux

I am working with Docker containers and have observed that they tend to generate too much disk IO.
I found the --device-write-bps option, which seems to address my need to limit disk IO.
However, this option expects a path to a device, and the latest Docker storage drivers do not let me determine what to set (the device shows up as overlay with the overlay2 storage driver). Here is what df -h outputs in my case:
Filesystem Size Used Avail Use% Mounted on
overlay 59G 5.3G 51G 10% /
tmpfs 64M 0 64M 0% /dev
tmpfs 994M 0 994M 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/vda1 59G 5.3G 51G 10% /etc/hosts
tmpfs 994M 0 994M 0% /proc/acpi
tmpfs 994M 0 994M 0% /sys/firmware
Is the option compatible with the latest drivers? If so, does anyone know what path to set?
Thanks!

It seems I was misled by the docs. The device to specify for the --device-write-bps option is a device of the host machine, not one inside the container. So the mount command is useful, but it needs to be run on the host ^^
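For reference, a minimal sketch of what this looks like once the host device is known (here /dev/sda is an assumption; check df -h or lsblk on the host itself):

docker run -it --device-write-bps /dev/sda:10mb ubuntu /bin/bash

# inside the container, direct writes backed by that device are capped at ~10 MB/s
dd if=/dev/zero of=/tmp/test.out bs=1M count=100 oflag=direct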

Related

Drone sometimes fails with an error related to "No space left on device"

Sometimes we experience a Drone pipeline failing due to a lack of disk space, even though there is plenty of space available.
drone@drone2:~$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 8.4G 0 8.4G 0% /dev
tmpfs 1.7G 984k 1.7G 1% /run
/dev/vda2 138G 15G 118G 12% /
tmpfs 8.4G 0 8.4G 0% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 8.4G 0 8.4G 0% /sys/fs/cgroup
overlay 138G 15G 118G 12% /var/lib/docker/overlay2/cf15a2d5e215d6624d9239d28c34be2aa4a856485a6ecafb16b38d1480531dfc/merged
overlay 138G 15G 118G 12% /var/lib/docker/overlay2/c4b441b59cba0ded4b82c94ab5f5658b7d8015d6e84bf8c91b86c2a2404f81b2/merged
tmpfs 1.7G 0 1.7G 0% /run/user/1000
We use Docker images for running the Drone infrastructure. The console command:
docker run \
--volume=/var/lib/drone:/data \
--publish=80:80 \
--publish=443:443 \
--restart=always \
--detach=true \
--name=drone \
<drone image>
My assumption is that this may be due to the Docker container's limitations, and that we need to configure it manually somehow.
Any suggestion on how to fix it?
We have managed to identify the problem.
Our cloud provider confirmed that our software had been using the disk heavily, which triggered limitations on disk operations.
They recommended either increasing the disk plan or optimising the disk usage.
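To see which containers are generating the IO in a case like this, something along these lines can help (docker stats exposes a BlockIO column; iostat comes from the sysstat package and is run on the host):

# per-container block IO as reported by Docker
docker stats --no-stream --format 'table {{.Name}}\t{{.BlockIO}}'

# host-level disk throughput, refreshed every 5 seconds
iostat -x 5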

Squid Cannot allocate memory

I was trying to build a Squid container that works as a proxy for a list of users and uses different configuration options, such as login via a custom Python script, SSL bump to block HTTPS URLs, and some ACL rules.
However, most of the time Squid is very slow, takes a long time to become up and ready, and is slow to use.
When I read the cache.log file I always get:
ipcCreate: fork: (12) Cannot allocate memory
I also tried to run the Docker container with the "--memory-swap -1" flag, but the result is the same.
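For reference, --memory-swap only has an effect when --memory is also set; a minimal sketch of how the two are usually combined (the image name is a placeholder):

docker run -d --name squid \
  --memory 1g --memory-swap -1 \
  my-squid-image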
df -h in the Docker container:
Filesystem Size Used Avail Use% Mounted on
none 30G 16G 14G 54% /
tmpfs 496M 0 496M 0% /dev
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/xvda1 30G 16G 14G 54% /etc/hosts
shm 64M 2.1M 62M 4% /dev/shm
tmpfs 496M 0 496M 0% /sys/firmware

Azure Container Instances disk space pricing

This doesn't seem to be listed on the Microsoft portals or answered anywhere else, so I might have misunderstood how this is provisioned and/or calculated in Azure, but:
When creating a container instance in Azure with an image from my Azure Container Registry, how is disk space billed? I understand that memory/CPU are billed by the second, but what about the disk space taken up by the running instance?
The official docs seem to be very vague about this.
When an Azure Container Instance is launched, it has a certain amount of disk space available to the container. You have no control over this and are not explicitly charged for it.
Having just run this command
az container create --image node:8.9.3-alpine -g "Test1" \
-n test1 --command-line 'df -h'
The response shows...
Filesystem Size Used Available Use% Mounted on
overlay 48.4G 2.8G 45.6G 6% /
tmpfs 958.9M 0 958.9M 0% /dev
tmpfs 958.9M 0 958.9M 0% /sys/fs/cgroup
/dev/sda1 48.4G 2.8G 45.6G 6% /dev/termination-log
tmpfs 958.9M 4.0K 958.9M 0% /var/aci_metadata
/dev/sda1 48.4G 2.8G 45.6G 6% /etc/resolv.conf
/dev/sda1 48.4G 2.8G 45.6G 6% /etc/hostname
/dev/sda1 48.4G 2.8G 45.6G 6% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
tmpfs 958.9M 0 958.9M 0% /proc/kcore
tmpfs 958.9M 0 958.9M 0% /proc/timer_list
tmpfs 958.9M 0 958.9M 0% /proc/timer_stats
tmpfs 958.9M 0 958.9M 0% /proc/sched_debug
So you should have about 48 GB of disk space to play with.
(I tried to test this with a Windows image, but got hit by a bug trying to get the information out.)
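If it helps, the output of the df -h run above can usually be retrieved afterwards with the logs command (the resource group and container name match the example):

az container logs -g "Test1" -n test1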

Disk size for Azure VM on docker-machine

I am creating an Azure VM using docker-machine as follows.
docker-machine create --driver azure --azure-size Standard_DS2_v2 --azure-subscription-id #### --azure-location southeastasia --azure-image canonical:UbuntuServer:14.04.2-LTS:latest --azure-open-port 80 AwesomeMachine
following the instructions here. The Azure VM docs say the max. disk size of Standard_DS2_v2 is 100 GB,
however, when I log in to the machine (or create a container on this machine), the maximum available disk size I see is 30 GB.
$ docker-machine ssh AwesomeMachine
docker-user@tf:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 6.9G 21G 25% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.4G 12K 3.4G 1% /dev
tmpfs 698M 452K 697M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.5G 1.1M 3.5G 1% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 14G 35M 13G 1% /mnt
What is the meaning of Max. disk size then? Also what is this /dev/sdb1? Is it usable space?
My bad, I didn't look at the documentation carefully.
So when --azure-size is Standard_DS2_v2, Local SSD disk = 14 GB, which is /dev/sdb1, while
--azure-size Standard_D2_v2 gives you Local SSD disk = 100 GB.
Not deleting the question in case somebody else makes the same stupid mistake.
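In other words, getting the 100 GB local SSD only needs the other size; a sketch reusing the same flags as in the question (subscription id elided as before):

docker-machine create --driver azure \
  --azure-subscription-id #### \
  --azure-size Standard_D2_v2 \
  --azure-location southeastasia \
  --azure-image canonical:UbuntuServer:14.04.2-LTS:latest \
  --azure-open-port 80 \
  AwesomeMachine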

Unable to extend the root volume /dev/root of a Yocto build VM

I have created an emulator Yocto build VM. The provided *.vmdk file creates a space of around 273 MB. It's too small; when a new terminal is opened, the root filesystem gets full.
I can mount a drive, but it remains as an external HDD.
Result of df -h:
Filesystem Size Used Available Use% Mounted on
/dev/root 273.5M 273.5M 0M 100% /
devtmpfs 500.0M 0 500.0M 0% /dev
tmpfs 500.3M 0 500.3M 0% /dev/shm
tmpfs 500.3M 9.4M 490.9M 2% /run
tmpfs 500.3M 0 500.3M 0% /sys/fs/cgroup
tmpfs 500.3M 9.4M 490.9M 2% /etc/machine-id
tmpfs 500.3M 16.0K 500.3M 0% /tmp
I even tried to change the ROOTFS and increase the size of the root directory, but it failed to increase.
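For what it's worth, the usual knobs for this live in conf/local.conf of the Yocto build; a sketch assuming example values (both variables take sizes in KiB):

# request a ~1 GiB root filesystem
IMAGE_ROOTFS_SIZE = "1048576"
# and keep ~512 MiB of extra free space inside it
IMAGE_ROOTFS_EXTRA_SPACE = "524288"

The image (and the *.vmdk) then needs to be rebuilt with bitbake for the change to take effect.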
