I'm having trouble building and deploying new Docker containers on Azure Web App on Linux.
The error logs claim I'm out of space, and when looking at disk usage through Kudu I can see that I am indeed out of space.
df -H gives:
Filesystem Size Used Avail Use% Mounted on
none 29G 28G 0 100% /
/dev/sda1 29G 28G 0 100% /etc/hosts
I have deployed several Docker containers to web apps before and removed them as well, but it seems they are still taking up space.
Creating a new App Service plan without anything deployed gives about 5.7G of free space.
I can't seem to run docker commands from the Kudu terminal, so I'm not able to check how many images there are, and I can't figure out how to clean up space. Also, sudo isn't available.
Does anyone have any ideas about how to free up some space?
Your disk was indeed full of Docker images. I have cleared them off; you should be unblocked.
This is a known issue that we will have a fix for soon. Iterating and deploying new containers is a common scenario, and the goal is that this should be completely abstracted away and you should not have to worry about this.
I believe my coworker and I ran into this issue when pulling images from a repository on Azure. The images would not be cleared after running docker-compose pull, yet they did not appear to be present on the primary node.
We would see the following upon sshing onto that node:
> ssh username@server.eastus.cloudapp.azure.com -A -p 2200
> df -h
Filesystem Size Used Avail Use% Mounted on
# ...
/dev/sda1 29G 2.0G 26G 8% /
We would still encounter space issues. After some debugging, we found that these results differed when attached to a container itself:
> docker-compose exec container_name /bin/bash
> df -h
Filesystem Size Used Avail Use% Mounted on
# ...
/dev/sda1 29G 29G 0G 100% /etc/hosts
The following snippet worked to clear all images not in use without issue:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
Note that --no-trunc is required; without it, docker complains that the images don't actually exist.
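On more recent Docker versions, the same cleanup can also be done with the prune commands, assuming the docker CLI is reachable in your environment (which, as noted above, it is not from the Kudu console):
# Remove dangling (untagged) images only
docker image prune
# Remove every image not referenced by any container (more aggressive)
docker image prune -a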
Related
docker volume create minty
docker run -v minty:/Minty:rw mango
docker run -v minty:/Minty:rw banana
The mango container then creates several empty folders in /Minty and mounts filesystems on them. Unfortunately, the banana container can see the empty folders, but can't see any of the mounted filesystems.
I presume this is to do with Docker running each container in its own namespace or something. Does anybody know how to fix this?
I've seen several answers that claim to fix this, by making use of "data containers" and the --volumes-from option. However, it appears that data containers are a deprecated feature now. Regardless, I tried it, and it doesn't seem to make any difference; the second container sees only the empty mount points, not the newly-mounted filesystems.
Even bind-mounting a folder to the host doesn't allow the host to see the newly-mounted filesystems.
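For what it's worth, the knob that usually matters here is mount propagation on a host bind mount rather than a named volume. A rough sketch, where the host path /mnt/minty and the use of --privileged are my own assumptions; whether this actually resolves the issue depends on the kernel and Docker setup:
# Make the host directory a shared mount so sub-mounts can propagate
sudo mkdir -p /mnt/minty
sudo mount --bind /mnt/minty /mnt/minty
sudo mount --make-rshared /mnt/minty
# mango mounts filesystems under /Minty; rshared lets those mounts propagate out
docker run --privileged -v /mnt/minty:/Minty:rw,rshared mango
# banana should then see the mounts made by mango
docker run -v /mnt/minty:/Minty:rw,rslave banana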
I'm affected by an issue described in moby/moby/27886, meaning loop devices I create in a Docker container do not appear in the container, but they do on the host.
One of the potential workarounds is to mount /dev into the Docker container, so something like:
docker run -v /dev:/dev image:tag
I know that it works, at least on Docker Engine and the Linux kernel that I currently use (20.10.5 and 5.4.0-70-generic respectively), but I'm not sure how portable and safe it is.
In runc's libcontainer/SPEC for filesystems I found some detailed information on /dev mounts for containers, so I'm a bit worried that mounting /dev from the host might cause unwanted side-effects. I even found one problematic use-case that was fixed in 2018...
So, is it safe to mount /dev into a Docker container? Is it guaranteed to work?
I'm not asking whether it works, because it seems it does; I'm interested in whether this is a valid usage, natively supported by Docker Engine.
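For comparison, a narrower variant I've considered (just a sketch, assuming the loop devices in question already exist on the host) is to pass individual device nodes instead of all of /dev:
# Expose only the loop control node and one loop device, not the whole /dev tree
docker run --device /dev/loop-control --device /dev/loop0 image:tag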
I need a shared volume accessible from multiple pods for caching files in RAM on each node.
The problem is that the emptyDir volume provisioner (which supports Memory as its medium) is available in Volume spec but not in PersistentVolume spec.
Is there any way to achieve this, except by creating a tmpfs volume manually on each host and mounting it via local or hostPath provisioner in the PV spec?
Note that Docker itself supports such volumes:
docker volume create --driver local --opt type=tmpfs --opt device=tmpfs \
--opt o=size=100m,uid=1000 foo
I don't see any reason why k8s doesn't. Or maybe it does, but it's not obvious?
I tried playing with local and hostPath PVs with mountOptions but it didn't work.
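For context, this is the single-pod form that does work (a minimal sketch; the pod name, image, and size are placeholders), which is exactly what I can't seem to express as a PersistentVolume:
# RAM-backed emptyDir, shared only by the containers of this one pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi
EOF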
An emptyDir volume is tied to the lifetime of a pod, so it can't be shared across multiple pods.
What you're requesting is an additional feature; if you look at the GitHub discussion below, you will see that you are not the first to ask for it.
consider a tmpfs storage class
Also, regarding your point that Docker supports tmpfs volumes: yes, it does, but you can't share this volume between containers. From the documentation:
Limitations of tmpfs mounts: unlike volumes and bind mounts, you can't share tmpfs mounts between containers.
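For reference, that quoted limitation is from the page on container-scoped tmpfs mounts, i.e. the --tmpfs / --mount type=tmpfs form, for example (the path, size, and image are placeholders):
# A per-container tmpfs mount; it disappears with the container and cannot be shared
docker run --tmpfs /cache:rw,size=100m,uid=1000 nginx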
We have provisioned an 11-node (1 master + 10 core) EMR cluster in AWS, choosing 100 GB of disk space for each node.
When the cluster was provisioned, EMR automatically allocated only 10 GB to the root partition (/dev/xvda1). After a few days the root partition filled up, and because of this we couldn't run any jobs or install basic software such as git using yum.
[hadoop@<<ip address>> ~]$ df -BG
Filesystem 1G-blocks Used Available Use% Mounted on
devtmpfs 79G 1G 79G 1% /dev
tmpfs 79G 0G 79G 0% /dev/shm
/dev/xvda1 10G 10G 0G 100% /
/dev/xvdb1 5G 1G 5G 4% /emr
/dev/xvdb2 95G 12G 84G 12% /mnt
/dev/xvdf 99G 12G 83G 12% /data
Could you please help us resolve this issue?
How can we increase the root partition (/dev/xvda1) disk space to 30 GB?
By default, everything installed with yum or rpm goes to the root partition (/dev/xvda1). How can we direct software installs away from the root partition (/dev/xvda1)?
Whatever the solution, it should not disturb the existing EMR installation.
Help would be much appreciated.
I recently ran into the same issue. Find the corresponding EC2 instance and, in its Description tab, find and click the Root device link. It points to an EBS volume ID; click on it. Under Actions, click Modify Volume and request the total space you need. You might additionally have to run commands such as growpart so that the OS adjusts to the new size.
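For reference, the follow-up usually looks something like the commands below (a sketch assuming the root disk is /dev/xvda with partition 1; use resize2fs for ext4 or xfs_growfs for XFS):
# Grow partition 1 to fill the newly enlarged EBS volume
sudo growpart /dev/xvda 1
# Then grow the filesystem itself
sudo resize2fs /dev/xvda1
# or, on an XFS root: sudo xfs_growfs -d /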
All EMR AMIs come with a fixed root volume of 10 GB, and so will all EC2 instances of your EMR cluster. All applications that you select on EMR are installed on this root volume and are expected to take about 90% of the disk. At the moment, neither the size of this volume nor the application installation behavior can be altered. So you should refrain from using this root volume to install applications, and instead install your custom apps on bigger volumes like /mnt. You can also symlink some root directories to bigger volumes and then install your apps there, as sketched below.
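For example, the symlink approach can look like this (just a sketch; /opt/myapp is a placeholder for whatever you install yourself):
# Move a directory off the small root volume and symlink it back,
# so anything written under /opt/myapp actually lands on the large /mnt volume
sudo mkdir -p /mnt/opt
sudo mv /opt/myapp /mnt/opt/myapp
sudo ln -s /mnt/opt/myapp /opt/myapp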
It seems /var/aws/emr/packages takes up most of the space (about 30%). I don't know whether this folder can safely be removed with rm -rf or should be symlinked to /mnt, but removing it seems to have worked for me.
The EBS root volume size can also be increased at the time the EMR cluster is launched; the default is 10 GB.
Once the EMR cluster is up and running, the root volume can also be increased. Refer to this AWS article: https://aws.amazon.com/premiumsupport/knowledge-center/ebs-volume-size-increase/
I launched an EC2 Spot Instance and unchecked the "Delete On Termination" option for the EBS root volume. I chose the Ubuntu 14.04 64-bit HVM AMI.
At some point the instance got terminated due to the max price, and the EBS volume stayed behind as intended. Eventually, when the Spot Instance is relaunched, it creates a brand-new EBS root volume, and the old EBS root volume is still sitting out there.
Actually I simulated the above events for testing purposes by manually terminating the Spot Instance and launching a new one, but I assume the result would be the same in real usage.
So now, how can I get the old EBS volume re-mounted as the current root volume?
I tried the example from http://linux.die.net/man/8/pivot_root, with a few modifications to get around obvious errors:
# manually attach old EBS to /dev/sdf in the AWS console, then do:
sudo su -
mkdir /new-root
mkdir /new-root/old-root
mount /dev/xvdf1 /new-root
cd /new-root
pivot_root . old-root
exec chroot . sh <dev/console >dev/console 2>&1
umount /old-root
The terminal hangs at the exec chroot command, and the instance won't accept new ssh connections.
I'd really like to get this working, as it provides a convenient mechanism to save money off the On Demand prices for development, test, and batch-oriented EC2 instances without having to re-architect the whole application deployment, and without the commitment of a Reserved Instance.
What am I missing?
The answer is to place the pivot_root call inside of /sbin/init on the initial (ephemeral) EBS root volume.
Here are some scripts that automate the process of launching a new Spot Instance and modifying the /sbin/init on the 1st (ephemeral) EBS volume to chain-load the system from a 2nd (persistent) EBS volume:
https://github.com/atramos/ec2-spotter
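For illustration only, the chain-loading idea boils down to something like the following stand-in for /sbin/init on the ephemeral root volume (a rough sketch of the concept, not the actual ec2-spotter code; the persistent volume is assumed to be attached as /dev/xvdf1):
#!/bin/sh
# Hypothetical replacement for /sbin/init on the ephemeral root volume:
# mount the persistent EBS volume, pivot the root filesystem onto it,
# and hand control to the real init that lives there.
mkdir -p /new-root
mount /dev/xvdf1 /new-root
cd /new-root
mkdir -p old-root
pivot_root . old-root
exec chroot . /sbin/init <dev/console >dev/console 2>&1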