Setting up a swapfile on the local SSD (temporary drive) in an Azure VM

I'm using a DS4 Azure VM (Ubuntu 14.04). It comes with a 56GB local SSD.
I need to set up a 25GB swapfile in this local SSD. When I do df -h in the VM, I can see that it seems to be mapped to the /mnt/ folder. Following is the entire output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 22G 6.4G 77% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 14G 4.0K 14G 1% /dev
tmpfs 2.8G 472K 2.8G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 14G 0 14G 0% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 56G 97M 56G 1% /mnt
However, if I try to initialize a swapfile on /mnt, the space it uses still gets counted against /dev/sda1.
What do I need to do to set up my swap file? An illustrative example would be great. Thanks in advance.
I normally use the following commands to set up a swapfile:
sudo fallocate -l 25G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
Update:
I went into /etc/waagent.conf and tweaked the following:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=y
# File system on the resource disk
# Typically ext3 or ext4. FreeBSD images should use 'ufs2' here.
ResourceDisk.Filesystem=ext4
# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Size of the swapfile.
ResourceDisk.SwapSizeMB=26000
After this, I resized (and consequently rebooted) my Azure VM from the portal. Currently I can't tell whether the settings have taken effect. Are my settings correct and what's the best way to ensure they've taken effect?

You are right, we should modify /etc/waagent.conf to add a swap file.
Setting the following three parameters in /etc/waagent.conf makes the agent create a swap file in the directory defined by ResourceDisk.MountPoint:
ResourceDisk.Format=y  
ResourceDisk.EnableSwap=y    
ResourceDisk.SwapSizeMB=26000
Then we should restart walinuxagent:
service walinuxagent restart
Commands to show the new swap space in use after agent restart:
dmesg | grep swap
root@ubuntu:~# swapon -s
Filename Type Size Used Priority
/mnt/swapfile file 26623996 0 -1
root@ubuntu:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 3.4G 12K 3.4G 1% /dev
tmpfs tmpfs 697M 412K 697M 1% /run
/dev/sda1 ext4 29G 869M 27G 4% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 3.5G 0 3.5G 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/dev/sdb1 ext4 99G 26G 68G 28% /mnt
I resized (and consequently rebooted) my Azure VM from the portal
I resized my VM, and the swap file was not lost.
Are my settings correct and what's the best way to ensure they've
taken effect?
After modifying /etc/waagent.conf and restarting walinuxagent, we can use swapon -s to check that the swap file is in use.
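If you prefer to create the swap file by hand instead of through the agent, the same commands from the question work when pointed at the resource disk. A minimal sketch, assuming the resource disk is mounted at /mnt and formatted as ext4 (keep in mind the temporary disk can be wiped when the VM is deallocated or redeployed, so a manually created swap file would have to be recreated afterwards, which is why the waagent approach is preferable):
sudo fallocate -l 25G /mnt/swapfile   # on ext4 fallocate is fine; fall back to dd if swapon rejects the file
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile
sudo swapon /mnt/swapfile
swapon -s                             # verify the new swap space is active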

Related

Podman on RHEL 8 running out of space during import

I am having issues with Podman running out of space when importing. This is happening on a RHEL 8 VM that has been deployed for our group. We do have an 80GB /docker partition available, but I am missing some Podman configuration that tells it to use /docker.
Can you all help me identify what's missing?
Here is part of my /etc/containers/storage.conf:
[storage]
# Default Storage Driver, Must be set for proper operation.
driver = "overlay"
# Temporary storage location
runroot = "/docker/temp"
# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
# graphroot = "/var/lib/containers/storage"
graphroot = "/docker"
We are running SELinux, so I did run these commands:
semanage fcontext -a -e /var/lib/containers/storage /docker
restorecon -R -v /docker
and restarted the podman service. However, if I run
podman import docker.tar
We receive the error:
Getting image source signatures
Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB
Error: writing blob: storing blob to file "/var/tmp/storage2140624383/1": write /var/tmp/storage2140624383/1: no space left on device
df -H shows:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 84K 3.9G 1% /dev/shm
tmpfs 3.9G 9.3M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/rhel_rhel86--svr-root 38G 7.2G 31G 20% /
/dev/mapper/rhel_rhel86--svr-tmp 4.7G 66M 4.6G 2% /tmp
/dev/mapper/rhel_rhel86--svr-home 43G 1.4G 42G 4% /home
/dev/sda2 495M 276M 220M 56% /boot
/dev/sdb1 79G 42G 33G 56% /docker
/dev/sda1 500M 5.9M 494M 2% /boot/efi
/dev/mapper/rhel_rhel86--svr-var 33G 1.6G 32G 5% /var
/dev/mapper/rhel_rhel86--svr-var_log 4.7G 109M 4.6G 3% /var/log
/dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp
/dev/mapper/rhel_rhel86--svr-var_log_audit 9.4G 132M 9.2G 2% /var/log/audit
tmpfs 785M 8.0K 785M 1% /run/user/42
tmpfs 785M 0 785M 0% /run/user/1000
Do you guys know what I'm missing to tell Podman to use /docker instead of /var/tmp/storage2140624383?
Edited December 29:
I was able to change the tmpdir to /docker. However, upon import of this 54GB docker.tar file, it is still telling me I am running out of space. We were able to import a small .tar (around 800MB) successfully, so we know podman is working.
$ podman import docker.tar
Getting image source signatures
Copying blob b45265b317a7 done
Error: writing blob: adding layer with blob "sha256:b45265b317a7897670ff015b177bac7b9d5037b3cfb490d3567da959c7e2cf70": Error processing tar file(exit status 1): write /a65be6ac39ddadfec332b73d772c49d5f1b4fffbe7a3a419d00fd58fcb4bb752/layer.tar: no space left on device
This might be a pretty easy one:
Copying blob 848eb673668a [=>------------------------------------] 1.8GiB / 41.3GiB
vs
/dev/mapper/rhel_rhel86--svr-var_tmp 1.9G 47M 1.9G 3% /var/tmp
As you can see, the image will not fit into the desired temp space directory.
This is somewhat explained in the docs, which state that you can adjust this by setting the TMPDIR environment variable.
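A minimal sketch of that workaround, assuming /docker has enough free space and using /docker/tmp as a hypothetical scratch directory:
mkdir -p /docker/tmp              # scratch directory on the 80GB partition (adjust ownership/permissions as needed)
export TMPDIR=/docker/tmp         # podman stages blobs under $TMPDIR instead of /var/tmp
podman import docker.tar
Afterwards, podman info can be used to confirm that graphRoot and runRoot point at /docker as configured in storage.conf.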

Disk size for Azure VM on docker-machine

I am creating an Azure VM using docker-machine as follows.
docker-machine create --driver azure --azure-size Standard_DS2_v2 --azure-subscription-id #### --azure-location southeastasia --azure-image canonical:UbuntuServer:14.04.2-LTS:latest --azure-open-port 80 AwesomeMachine
following the instructions here. The Azure VM docs say the max disk size of Standard_DS2_v2 is 100GB,
however when I log in to the machine (or create a container on this machine), the maximum available disk size I see is 30GB.
$ docker-machine ssh AwesomeMachine
docker-user@tf:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 6.9G 21G 25% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.4G 12K 3.4G 1% /dev
tmpfs 698M 452K 697M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.5G 1.1M 3.5G 1% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 14G 35M 13G 1% /mnt
What is the meaning of Max. disk size then? Also what is this /dev/sdb1? Is it usable space?
My bad, I didn't look at the documentation carefully.
So when --azure-size is Standard_DS2_v2, Local SSD disk = 14 GB, which is /dev/sdb1, while
--azure-size Standard_D2_v2 gives you Local SSD disk = 100 GB.
Not deleting the question in case somebody else makes the same stupid mistake.

How do I create an XFS volume out of root volume on EC2?

I've created a new EC2 instance and am setting up a bunch of software on it. MongoDB 3.2's production checklist suggests installing it on an XFS (or ext4) volume. How do I create a volume of, say, 15 GB out of /dev/xvda1, format it as XFS using mkfs, and then mount it? Here's the output of df -h right now:
udev 492M 12K 492M 1% /dev
tmpfs 100M 340K 99M 1% /run
/dev/xvda1 30G 2.5G 26G 9% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
OS is Ubuntu 12.04 LTS
Does it have to be the root partition?
If not, you can simply create a new volume in the AWS EC2 UI and attach it to the instance. It will show up as e.g. /dev/xvdf and you can format and mount it.
Also, this might answer your question.
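A minimal sketch under that assumption — the attached volume shows up as /dev/xvdf and /data/db is a hypothetical mount point for MongoDB's data; on Ubuntu 12.04 the XFS tools come from the xfsprogs package:
sudo apt-get install xfsprogs      # provides mkfs.xfs
sudo mkfs.xfs /dev/xvdf            # format the newly attached volume as XFS
sudo mkdir -p /data/db
sudo mount /dev/xvdf /data/db
df -hT /data/db                    # confirm it is mounted with type xfs
Add a matching entry to /etc/fstab if the mount should persist across reboots.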

cronjob : No space left on device

I have attached a new volume to an EC2 instance. The volume was attached successfully. Below is the output of the command:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 32G 8.1G 22G 27% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 12K 2.0G 1% /dev
tmpfs 396M 340K 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 1.0M 0 100% /tmp
When I try to add a new cronjob, it shows an error that there is no space left.
sudo crontab -e
/tmp/crontab.jVOoWT/crontab: No space left on device
Your /tmp directory is full. First remove the files from your temp directory by issuing the command below:
rm -rf /tmp/*
Run your crontab again
sudo crontab -e
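Worth noting: in the df -h output above, /tmp is a 1.0M overflow tmpfs, which Ubuntu mounts at boot when the root filesystem is (nearly) full. After freeing space, you may also need to drop that overflow mount so /tmp is backed by the root filesystem again; a sketch, assuming nothing is holding files open under /tmp:
sudo rm -rf /tmp/*    # clear the tiny overflow tmpfs
sudo umount /tmp      # remove the 1.0M overflow mount so /tmp lives on / again
sudo crontab -e       # retry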
Please execute df -i; maybe the inodes are 100% full. Remove unnecessary files from /var, then run your crontab again:
crontab -e
I had the same issue on AWS, and ultimately the solution was to increase the capacity of the drive, which solved the issue.

Don't know how to mount the file system in Ubuntu

I am new to Ubuntu. Please help me: I need to create /data and mount the file system /dev/sdb on it.
I have no clue how to do it. I read about mount and umount, but I am still unable to create it.
Below is the current structure.
Pdpie@ubuntu:/dev$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 18G 4.0G 13G 24% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 984M 4.0K 984M 1% /dev
tmpfs 199M 1.5M 198M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 994M 152K 994M 1% /run/shm
none 100M 44K 100M 1% /run/user
I want to see something like this below:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 18G 4.0G 13G 24% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 984M 4.0K 984M 1% /dev
tmpfs 199M 1.5M 198M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 994M 152K 994M 1% /run/shm
none 100M 44K 100M 1% /run/user
/dev/sdb 50M /data
Can anyone please help me step by step?
Thanks
You need to create your mount point first:
sudo mkdir /data
Then mount the sdb1 (if sdb1 is what you want) to it:
sudo mount /dev/sdb1 /data
Done
PS: To check which one you want to mount run sudo fdisk -l
Follow these steps:
1. Create the directory you want, i.e. mkdir /data
2. Use the mount command to mount sdb1 on the /data directory, like this: mount /dev/sdb1 /data
3. To have it mounted automatically at boot, add an entry for it to /etc/fstab; mount -a will then mount it immediately
4. To check it, use df -h
What about the following?
# mkdir /data
# mount /dev/sdb1 /data
If you don't have /dev/sdb<number>, you'll have to create partitions with e.g. parted or fdisk.
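If the mount should survive a reboot, an /etc/fstab entry takes care of that. A minimal sketch, assuming /dev/sdb1 is formatted as ext4:
sudo mkdir -p /data
echo '/dev/sdb1  /data  ext4  defaults  0  2' | sudo tee -a /etc/fstab
sudo mount -a      # mounts everything listed in /etc/fstab, which also verifies the new entry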
