How to increase the hard disk size of an Azure VM

I am using an Azure VM running RHEL, sized Standard D3 v2. I see only 32 GB of disk storage available in the VM. How do I increase the size of the hard disk?
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 32G 185M 100% /
devtmpfs 6.9G 0 6.9G 0% /dev
tmpfs 6.9G 0 6.9G 0% /dev/shm
tmpfs 6.9G 8.4M 6.9G 1% /run
tmpfs 6.9G 0 6.9G 0% /sys/fs/cgroup
/dev/sda1 497M 117M 381M 24% /boot
/dev/sdb1 197G 2.1G 185G 2% /mnt/resource
tmpfs 1.4G 0 1.4G 0% /run/user/1000
Note: I am using an unmanaged disk.

If your virtual machine was created with Azure Resource Manager (ARM), you can resize the OS disk or a data disk in the Azure portal:
Navigate to the virtual machine whose disk(s) you want to resize.
Shut down the virtual machine from the Azure portal and wait until it is completely stopped (deallocated).
Select 'Disks' in the Settings blade (as in the image below).
Disks settings blade
Select the OS or data disk that you would like to resize.
On the new blade, enter the new disk size (1023 GB, i.e. 1 TB, max per disk) (as in the image below).
Change disk size
Hit 'Save' at the top.
Start the virtual machine again.
That's it! You can log in to the VM and check that the disk(s) now have the new size.

So basically follow this article: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks
az vm deallocate --resource-group myResourceGroup --name myVM
az disk list \
--resource-group myResourceGroup \
--query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' \
--output table
az disk update \
--resource-group myResourceGroup \
--name myDataDisk \
--size-gb 200
az vm start --resource-group myResourceGroup --name myVM
For unmanaged disks, see:
https://blogs.msdn.microsoft.com/cloud_solution_architect/2016/05/24/step-by-step-how-to-resize-a-linux-vm-os-disk-in-azure-arm/
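After `az vm start`, it is worth confirming from inside the VM that the kernel actually sees the larger disk before touching any partitions. A minimal check is sketched below; it runs on any Linux host and nothing in it is Azure-specific:

```shell
# Print every block device the kernel knows about, with its size.
# Note: the size files under /sys/block are always in 512-byte sectors,
# regardless of the device's real sector size.
for d in /sys/block/*/size; do
  [ -e "$d" ] || continue            # skip if there are no block devices
  dev=${d%/size}; dev=${dev##*/}
  echo "$dev: $(( $(cat "$d") / 2048 )) MiB"
done
```

If the resized disk does not show its new size here, the resize did not take effect and repartitioning would be pointless.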

Based on your description, I tested this in my lab on Red Hat Enterprise Linux Server release 7.3 (Maipo).
Note: before you do this, I strongly suggest you back up your OS VHD. If the operation
fails, you may not be able to start your VM.
1. Stop your VM in the Azure portal.
2. Increase the OS disk size with the Azure CLI:
az vm update -g shui -n shui --set storageProfile.osDisk.diskSizeGB=100
3. Start the VM and SSH in. If you check df -h and fdisk -l, you will see that /dev/sda2 has not grown to 100 GB yet; the partition still has to be expanded with the following commands.
sudo -i
[root@shui ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 62914559 30944256 83 Linux
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First sector (1026048-104857599, default 1026048): 1026048
Last sector, +sectors or +size{K,M,G} (1026048-104857599, default 104857599):
Using default value 104857599
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 104857599 51915776 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
Important: the recreated partition must start at exactly the same sector as the old one (1026048 here), or the filesystem on it will be destroyed. Only the end sector changes.
4. Reboot your VM.
5. SSH back into the VM and resize the filesystem (the argument to xfs_growfs is the mount point of the XFS filesystem):
xfs_growfs /
Now you can check your OS disk with df -h:
[root@shui ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 100G 1.7G 98G 2% /
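xfs_growfs only applies to XFS; on an ext4 root you would use resize2fs instead. A quick way to check which one applies is sketched below (works on any Linux host, nothing Azure-specific):

```shell
# Look up the root filesystem type in /proc/mounts, then print the
# matching grow command. Purely informational - it changes nothing.
fstype=$(awk '$2 == "/" {print $3; exit}' /proc/mounts)
echo "root filesystem: $fstype"
case "$fstype" in
  xfs)            echo "grow with: xfs_growfs /" ;;
  ext2|ext3|ext4) echo "grow with: resize2fs <device>" ;;
  *)              echo "check the documentation for $fstype" ;;
esac
```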

Use the link below to resize the OS disks of Azure Ubuntu and RHEL servers:
9 Easy Steps To Increase Your Root Volume Of AZURE Instance


Restore backup from 'dd' (possibly problem with softRAID HDDs)

I have two servers, both Ubuntu 18.04. The first is my main server, which I want to back up. The second (KS-4 Server - Atom N2800 - 4 GB DDR3 1066 MHz - SoftRAID 2x 2 TB SATA) is a test server where I want to test my backup.
I made the backup with the 'dd' command and then downloaded it to server 2 with wget (490 GB, ~24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changed.
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is the disk configuration on server 2, specifically the 'Linux raid' setup between them. I have searched for how to change it and tested commands like 'mdadm ...', but nothing worked as I expected. So I have questions:
1. How do I change the 'Linux raid' from 2 HDDs to 1 HDD with the current system, leaving the other HDD clear, so that I can test my backup properly?
2. Is it generally possible to restore a 490 GB backup onto a 1.8 TB disk?
3. Did I choose the best option for a full Linux backup?
To touch on question #3, "Did I choose the best option for a full Linux backup?":
You are using the correct tool, but not the correct items to use the dd command on.
It appears you created a .img file for your sda drive (which is normally only used to create a bootable disk, not a full backup of the drive).
If you want to create a full backup of, for example, /dev/sda, you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case your /dev/sda drive is twice the size of your /dev/sdb drive, so you might want to consider backing up the important files using tar, and compressing the backup with either gzip or bzip2.
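A whole-device clone like this is easy to rehearse safely on file-backed images before pointing dd at real disks. A small sketch (file names are invented for the demo):

```shell
# Make a 4 MiB fake "disk", clone it byte-for-byte with dd, and verify
# the clone is identical - the same workflow as dd if=/dev/sda of=/dev/sdb.
dd if=/dev/urandom of=source.img bs=1M count=4 status=none
dd if=source.img of=clone.img bs=1M status=none
cmp -s source.img clone.img && echo "images identical"
rm -f source.img clone.img
```

Verifying the copy with cmp (or checksums) is cheap on images and catches truncated transfers like the 99 MB partial copy shown above.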

Mount docker logical volume

I'm trying to access a logical volume that was previously used by Docker. These are the results of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I have also tried rebooting the machine, uninstalling Docker, and checking with lsof whether any files are open on that volume.
Do you have any clue how I can mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages, not the images, containers, volumes, and config files.
To delete those you have to delete a number of directories under /etc, /var/lib, /bin and /var/run.
Clean up the environment:
try running docker system prune -a to remove unused containers, images, etc.
remove the volume with docker volume rm {volumeID}
create the volume again with docker volume create docker-docker--pool_tdata
Kill the process
run lsof +D /mnt/test or cat ../docker/../tasks
this should display the PIDs of live tasks.
Kill the task with kill -9 {PID}
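lsof essentially scans the /proc/&lt;pid&gt;/fd tables; the same lookup can be done by hand. The sketch below demonstrates the pattern on a harmless temp file held open by the current shell (the file name is invented); the identical approach applies to files under /mnt/test:

```shell
# Hold a file open on fd 9, then find which process has it open by
# reading the /proc/<pid>/fd symlinks - the same data lsof reports.
exec 9>/tmp/held-open.txt
for fd in /proc/$$/fd/*; do
  [ "$(readlink "$fd")" = "/tmp/held-open.txt" ] && echo "held by PID $$"
done
exec 9>&-                     # release the file descriptor
rm -f /tmp/held-open.txt
```

Once the PID is known, killing it (or closing the descriptor) is what actually frees the "busy" device.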

How to increase the hard disk space of thin provisioning vm

I created a thin-provisioned VM (CentOS 7) with a 50 GB hard disk, but it doesn't automatically grow the space when there is a need. Can someone please tell me how to increase the space of the "/" directory?
[oracle@localhost ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 14G 14G 16K 100% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 912M 985M 49% /dev/shm
tmpfs 1.9G 17M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 497M 147M 351M 30% /boot
tmpfs 380M 0 380M 0% /run/user/1001
tmpfs 380M 0 380M 0% /run/user/1002
Below is the output of the pvs command.
[root@inches-rmdev01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 centos lvm2 a-- 15.51g 40.00m
Below is the output of the vgs command.
[root@inches-rmdev01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 1 2 0 wz--n- 15.51g 40.00m
Below is the output of the lvs command.
[root@inches-rmdev01 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- 13.87g
swap centos -wi-ao---- 1.60g
Below is the output of the fdisk command.
[root@inches-rmdev01 ~]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009a61a
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 33554431 16264192 8e Linux LVM
/dev/sda3 33554432 104857599 35651584 8e Linux LVM
Disk /dev/mapper/centos-root: 14.9 GB, 14889779200 bytes, 29081600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In the fdisk -l output you can see a 35 GB partition, /dev/sda3, that is not in use yet. To extend your root volume you can add this partition to LVM (Logical Volume Manager):
pvcreate /dev/sda3
This adds the unused partition /dev/sda3 as a new PV (physical volume) to LVM.
The next step is to extend your root VG (volume group). In your case that is easy, since you have only one VG:
vgextend centos /dev/sda3
Now you have added the 35 GB partition to your VG and can distribute it to your LVs (logical volumes).
Finally, add as much space as you need (up to 35 GB) to your root volume with the lvextend command.
If you want to use the whole 35 GB:
lvextend -l +100%FREE /dev/mapper/centos-root
If you only want to add a certain amount (e.g. 1 GB):
lvextend -L +1G /dev/mapper/centos-root
And finally resize your filesystem (note: on CentOS 7 the root filesystem is XFS by default, in which case use xfs_growfs / instead):
resize2fs /dev/mapper/centos-root
The LVM hierarchy is:
1. Hard disk: fdisk -l
2. Physical volume: pvs
3. Volume group: vgs
4. Logical volume: lvs
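The sizes in these listings all come from simple sector arithmetic, which is worth checking by hand when deciding how much space a partition really adds. For example, the /dev/sda3 line above (start 33554432, end 104857599, 512-byte sectors) works out to 34 GiB, i.e. the "35 GB" mentioned above:

```shell
# Convert an fdisk start/end sector pair into a size in GiB.
# The numbers are the /dev/sda3 values from the fdisk -l output above.
start=33554432
end=104857599
sectors=$((end - start + 1))                        # 71303168 sectors
echo "$((sectors * 512 / 1024 / 1024 / 1024)) GiB"  # prints: 34 GiB
```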

Docker cgroup.procs no space left on device

After some problems with Docker on my dedicated Debian server (the provider ships an OS image without some features Docker needs, so yesterday I recompiled the Linux kernel and enabled the required features, following instructions from a blog),
I was happy to have Docker working, so I tried to create an image... and I got an error.
$ docker run -d -t -i phusion/baseimage /sbin/my_init -- bash -l
Unable to find image 'phusion/baseimage:latest' locally
Pulling repository phusion/baseimage
5a14c1498ff4: Download complete
511136ea3c5a: Download complete
53f858aaaf03: Download complete
837339b91538: Download complete
615c102e2290: Download complete
b39b81afc8ca: Download complete
8254ff58b098: Download complete
ec5f59360a64: Download complete
2ce4ac388730: Download complete
2eccda511755: Download complete
Status: Downloaded newer image for phusion/baseimage:latest
0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8
FATA[0039] Error response from daemon: Cannot start container 0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8: [8] System error: write /sys/fs/cgroup/docker/0bd93f0053140645a930a3411972d8ea9a35385ac9fafd94012c9841562beea8/cgroup.procs: no space left on device
More information:
$ docker info
Containers: 3
Images: 12
Storage Driver: devicemapper
Pool Name: docker-8:1-275423-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 814.4 MB
Data Space Total: 107.4 GB
Data Space Available: 12.22 GB
Metadata Space Used: 1.413 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.19.0-xxxx-std-ipv6-64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 4
Total Memory: 7.691 GiB
Name: ns3289160.ip-5-135-180.eu
ID: JK54:ZD2Q:F75Q:MBD6:7MPA:NGL6:75EP:MLAN:UYVU:QIPI:BTDP:YA2Z
System :
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 788M 456K 788M 1% /run
/dev/sda1 20G 7.8G 11G 43% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.7G 4.0K 1.7G 1% /dev/shm
/dev/sda2 898G 11G 842G 2% /home
Edit: command du -sk /var
# du -sk /var
3927624 /var
Edit: command fdisk -l
# fdisk -l
Disk /dev/loop0:
100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00060a5c
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 40962047 40957952 19.5G 83 Linux
/dev/sda2 40962048 1952471039 1911508992 911.5G 83 Linux
/dev/sda3 1952471040 1953517567 1046528 511M 82 Linux swap / Solaris
Disk /dev/mapper/docker-8:1-275423-pool: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 byte
You should not remove cgroup support from Docker; otherwise you may get a warning like "WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded." when you run a container.
A simple command should do the trick (note that plain sudo echo 1 > file does not work, because the redirection is performed by your unprivileged shell, not by sudo):
echo 1 | sudo tee /sys/fs/cgroup/docker/cgroup.clone_children
If that still does not work, run the commands below and restart the Docker service:
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.mems
echo 0 | sudo tee /sys/fs/cgroup/docker/cpuset.cpus
I installed Docker via docker-lxc from the Debian repos, following a tutorial. I then tried another solution (with success): I updated /etc/apt/sources.list from jessie to sid, removed docker-lxc with a purge, and installed docker.io.
The error changed. It became something like: mkdir -p /sys/... can't create dir: access denied
So I found a comment on a blog and tried its solution, which was to comment out this line, previously added by the tutorial:
## file /etc/fstab
# cgroup /sys/fs/cgroup cgroup defaults 0 0
and reboot the server.
yum install -y libcgroup libcgroup-devel libcgroup-tools
cgclear
service cgconfig restart
mount -t cgroup none /cgroup
vi /etc/fstab
cgroup /sys/fs/cgroup cgroup defaults 0 0

Can't open /dev/sda2 exclusively. Mounted filesystem?

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3999.7 GB, 3999688294400 bytes
255 heads, 63 sectors/track, 486267 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
/dev/sda2 1 2090 16785120 82 Linux swap / Solaris
/dev/sda3 1 218918 1758456029+ 8e Linux LVM
Partition table entries are not in disk order
Above is my "fdisk -l" output. My current problem is that when I try to run "pvcreate /dev/sda2" it gives me "Can't open /dev/sda2 exclusively. Mounted filesystem?". I have been searching Google for a while trying to find a way to fix this; there are definitely things from Google I tried, but none of them ended up working.
You're trying to initialize a partition for use by LVM that is currently in use as swap.
You should rather run
pvcreate /dev/sda3
I updated to a newer kernel and the problem was resolved on RHEL 6. I had upgraded from 2.6.32-131.x to 2.6.32-431.x.
Check that the disks/partitions you are using are not mounted to any directory on your system.
If they are, umount them and try again.
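A quick pre-flight check before pvcreate catches both failure modes mentioned here: a mounted filesystem and active swap. A minimal sketch, assuming a Linux host (the device path is just an example):

```shell
# Report whether a block device is mounted or in use as swap before
# handing it to pvcreate. /dev/sda2 is an example path.
dev=/dev/sda2
if grep -q "^$dev " /proc/mounts; then
  echo "$dev is mounted - umount it first"
elif grep -q "^$dev " /proc/swaps; then
  echo "$dev is active swap - run swapoff $dev first"
else
  echo "$dev is not mounted or swapped - should be free for pvcreate"
fi
```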
