Disk size for Azure VM on docker-machine - azure

I am creating an Azure VM using docker-machine as follows.
docker-machine create --driver azure \
  --azure-size Standard_DS2_v2 \
  --azure-subscription-id #### \
  --azure-location southeastasia \
  --azure-image canonical:UbuntuServer:14.04.2-LTS:latest \
  --azure-open-port 80 \
  AwesomeMachine
following the instructions here. The Azure VM docs say the max disk size of Standard_DS2_v2 is 100 GB,
however when I log in to the machine (or create a container on this machine), the maximum available disk size I see is 30 GB.
$ docker-machine ssh AwesomeMachine
docker-user@tf:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G  6.9G   21G  25% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.4G   12K  3.4G   1% /dev
tmpfs           698M  452K  697M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.5G  1.1M  3.5G   1% /run/shm
none            100M     0  100M   0% /run/user
none             64K     0   64K   0% /etc/network/interfaces.dynamic.d
/dev/sdb1        14G   35M   13G   1% /mnt
What is the meaning of Max. disk size then? Also what is this /dev/sdb1? Is it usable space?

My bad, I didn't look at the documentation carefully.
So when --azure-size is Standard_DS2_v2, the local SSD disk is 14 GB, which is /dev/sdb1, while
--azure-size Standard_D2_v2 gives you a local SSD disk of 100 GB.
Not deleting the question in case somebody else makes the same stupid mistake.
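
If you want to check the temporary disk size before creating a machine, the Azure CLI can list the VM sizes available in a region; the ResourceDiskSizeInMb column is the local SSD. A minimal sketch, assuming the az CLI is installed and you are logged in:

# ResourceDiskSizeInMb is the local (temporary) SSD for each size
az vm list-sizes --location southeastasia --output table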

Related

/dev/mapper/RHELCSB-Home marked as full when it is not after verification

I was trying to copy a 1.5 GiB file from one location to another and was warned that my disk space was full, so I checked with df -h, which gave the following output:
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                   16G     0   16G   0% /dev
tmpfs                      16G  114M   16G   1% /dev/shm
tmpfs                      16G  2.0M   16G   1% /run
tmpfs                      16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root   50G   11G   40G  21% /
/dev/nvme0n1p2            3.0G  436M  2.6G  15% /boot
/dev/nvme0n1p1            200M   17M  184M   9% /boot/efi
/dev/mapper/RHELCSB-Home  100G  100G  438M 100% /home
tmpfs                     3.1G   88K  3.1G   1% /run/user/4204967
where /dev/mapper/RHELCSB-Home seemed to be causing the issue. But when I ran sudo du -xsh /dev/mapper/RHELCSB-Home, I got the following result:
0 /dev/mapper/RHELCSB-Home
and the same for /dev/ and /dev/mapper/. After researching the issue, I thought it might be caused by undeleted log files in /var/log/, but the total size of the files there is nowhere near 100 GiB. What could be causing my disk space to be full?
Additional context: I was running a local PostgreSQL database when this happened, but I can't see how that relates to my issue, as the postgres log files aren't taking up much space either.
The issue was solved by deleting Podman container volumes in ~/.local/share/containers/.
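
As an aside, du -xsh /dev/mapper/RHELCSB-Home measures the device node itself (effectively 0 bytes), not the filesystem mounted from it, which is why it reported nothing. To find what is actually consuming space, point du at the mount point instead. A sketch (the Podman commands assume a reasonably recent Podman):

sudo du -xh /home | sort -rh | head -20   # largest directories under the full mount
podman system df                          # space used by Podman images, containers, volumes
podman volume prune                       # remove unused volumes (asks for confirmation)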

Drone sometimes fails with an error related to "No space left on device"

Sometimes our Drone pipeline fails due to a lack of disk space, even though there is plenty of space available.
drone@drone2:~$ df -H
Filesystem      Size  Used Avail Use% Mounted on
udev            8.4G     0  8.4G   0% /dev
tmpfs           1.7G  984k  1.7G   1% /run
/dev/vda2       138G   15G  118G  12% /
tmpfs           8.4G     0  8.4G   0% /dev/shm
tmpfs           5.3M     0  5.3M   0% /run/lock
tmpfs           8.4G     0  8.4G   0% /sys/fs/cgroup
overlay         138G   15G  118G  12% /var/lib/docker/overlay2/cf15a2d5e215d6624d9239d28c34be2aa4a856485a6ecafb16b38d1480531dfc/merged
overlay         138G   15G  118G  12% /var/lib/docker/overlay2/c4b441b59cba0ded4b82c94ab5f5658b7d8015d6e84bf8c91b86c2a2404f81b2/merged
tmpfs           1.7G     0  1.7G   0% /run/user/1000
We use Docker images to run the Drone infrastructure. The console command:
docker run \
--volume=/var/lib/drone:/data \
--publish=80:80 \
--publish=443:443 \
--restart=always \
--detach=true \
--name=drone \
<drone image>
My assumption is that this may be due to some limitation of the Docker container and that we need to configure it manually somehow.
Any suggestions on how to fix this?
We have managed to identify the problem.
Our cloud provider confirmed that our software was performing very heavy disk I/O, which triggered their limits on disk operations.
They recommended that we either move to a bigger disk plan or optimise our disk usage.
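
For the more common case where the space really is being eaten by Docker itself (images, stopped containers, overlay2 layers), these commands help inspect and reclaim it; note that here the root cause turned out to be provider-side I/O throttling rather than actual usage. A sketch:

docker system df                    # show space used by images, containers, volumes, build cache
docker system prune -a --volumes    # remove all unused images, stopped containers, and unused volumes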

Azure Container Instances disk space pricing

This doesn't seem to be listed on the Microsoft portals, nor answered anywhere else, so I might have misunderstood how this is provisioned and/or calculated in Azure, but:
When I create a container instance in Azure with an image from my Azure Container Registry, how is disk space billed? I understand the memory/CPU billed-by-the-second model, but what about the disk space taken up by the running instance?
The official docs seem to be very vague about this.
When an Azure Container Instance is launched, it has an amount of disk space available to the container. You have no control over this and are not explicitly charged for it.
Having just run this command
az container create --image node:8.9.3-alpine -g "Test1" \
-n test1 --command-line 'df -h'
The response shows...
Filesystem                Size      Used Available Use% Mounted on
overlay                  48.4G      2.8G     45.6G   6% /
tmpfs                   958.9M         0    958.9M   0% /dev
tmpfs                   958.9M         0    958.9M   0% /sys/fs/cgroup
/dev/sda1                48.4G      2.8G     45.6G   6% /dev/termination-log
tmpfs                   958.9M      4.0K    958.9M   0% /var/aci_metadata
/dev/sda1                48.4G      2.8G     45.6G   6% /etc/resolv.conf
/dev/sda1                48.4G      2.8G     45.6G   6% /etc/hostname
/dev/sda1                48.4G      2.8G     45.6G   6% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                   958.9M         0    958.9M   0% /proc/kcore
tmpfs                   958.9M         0    958.9M   0% /proc/timer_list
tmpfs                   958.9M         0    958.9M   0% /proc/timer_stats
tmpfs                   958.9M         0    958.9M   0% /proc/sched_debug
So you should have about 48 GB of disk space to play with.
(I tried to test this with a Windows image, but hit a bug trying to get the information out.)
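
Since the container above just runs df -h and exits, the output can be read back from the container's logs afterwards. A sketch matching the create command above:

az container logs --resource-group Test1 --name test1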

Setting up a swapfile in local SSD (temporary drive) in Azure VM

I'm using a DS4 Azure VM (Ubuntu 14.04). It comes with a 56 GB local SSD.
I need to set up a 25 GB swapfile on this local SSD. When I run df -h in the VM, the SSD appears to be mounted at /mnt. Here is the entire output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        29G   22G  6.4G  77% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev             14G  4.0K   14G   1% /dev
tmpfs           2.8G  472K  2.8G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none             14G     0   14G   0% /run/shm
none            100M     0  100M   0% /run/user
none             64K     0   64K   0% /etc/network/interfaces.dynamic.d
/dev/sdb1        56G   97M   56G   1% /mnt
However, if I try to initialize a swapfile in /mnt, it still gets added to the available disk space in /dev/sda1.
What do I need to do to set up my swap file? An illustrative example would be great. Thanks in advance.
I normally use the following commands to set up a swapfile:
sudo fallocate -l 25G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Update:
I went into /etc/waagent.conf and tweaked the following:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=y
# File system on the resource disk
# Typically ext3 or ext4. FreeBSD images should use 'ufs2' here.
ResourceDisk.Filesystem=ext4
# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Size of the swapfile.
ResourceDisk.SwapSizeMB=26000
After this, I resized (and consequently rebooted) my Azure VM from the portal. Currently I can't tell whether the settings have taken effect. Are my settings correct and what's the best way to ensure they've taken effect?
You are right, we should modify /etc/waagent.conf to add a swap file.
By modifying the /etc/waagent.conf file and setting the following three parameters, a swap file will be created in the directory defined by ResourceDisk.MountPoint:
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=26000
Then we should restart walinuxagent:
service walinuxagent restart
Commands to show the new swap space in use after agent restart:
dmesg | grep swap
root@ubuntu:~# swapon -s
Filename       Type  Size      Used  Priority
/mnt/swapfile  file  26623996  0     -1
root@ubuntu:~# df -Th
Filesystem  Type      Size  Used Avail Use% Mounted on
udev        devtmpfs  3.4G   12K  3.4G   1% /dev
tmpfs       tmpfs     697M  412K  697M   1% /run
/dev/sda1   ext4       29G  869M   27G   4% /
none        tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
none        tmpfs     5.0M     0  5.0M   0% /run/lock
none        tmpfs     3.5G     0  3.5G   0% /run/shm
none        tmpfs     100M     0  100M   0% /run/user
/dev/sdb1   ext4       99G   26G   68G  28% /mnt
I resized (and consequently rebooted) my Azure VM from the portal
I resized my VM as well, and the swap file was not lost.
Are my settings correct and what's the best way to ensure they've taken effect?
After modifying /etc/waagent.conf and restarting walinuxagent, we can use swapon -s to check that the swap file is in use.
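
If you prefer to create the swapfile by hand instead of through the agent, the path just has to live under the resource disk's mount point. A minimal sketch (remember /mnt is ephemeral, so the file disappears on deallocation and must be recreated, which is why the waagent.conf approach is preferable):

sudo fallocate -l 25G /mnt/swapfile   # /mnt is the resource disk on this image
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile
sudo swapon /mnt/swapfile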

How do I create an XFS volume out of root volume on EC2?

I've created a new EC2 instance and am setting up a bunch of software on it. MongoDB 3.2's production checklist suggests installing it on an XFS (or ext4) volume. How do I create a volume of, say, 15 GB out of /dev/xvda1, format it as XFS using mkfs, and then mount it? Here's the output of df -h right now:
udev            492M   12K  492M   1% /dev
tmpfs           100M  340K   99M   1% /run
/dev/xvda1       30G  2.5G   26G   9% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            497M     0  497M   0% /run/shm
none            100M     0  100M   0% /run/user
The OS is Ubuntu 12.04 LTS.
Does it have to be the root partition?
If not, you can simply create a new volume in the AWS EC2 console and attach it to the instance. It will show up as e.g. /dev/xvdf, and you can format and mount it.
Also, this might answer your question.
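
A minimal sketch of formatting and mounting such an attached volume as XFS, assuming it shows up as /dev/xvdf (on Ubuntu 12.04 you may need to install xfsprogs first):

sudo apt-get install -y xfsprogs    # XFS userspace tools
sudo mkfs.xfs /dev/xvdf             # format the new volume (destroys anything on it)
sudo mkdir -p /data
sudo mount -t xfs /dev/xvdf /data
echo '/dev/xvdf /data xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab   # mount on boot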
