Expand virtual hard disks on a Linux VM with the Azure CLI

I am trying to extend a disk in my Azure VM. I used to do it like this:
sudo umount /dev/sdc1
(sdc1 as an example)
sudo parted /dev/sdc
after typing print, I should see something like this:
GNU Parted 3.2
Using /dev/sdc1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown Msft Virtual Disk (scsi)
Disk /dev/sdc1: 215GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 107GB 107GB ext4
I can't go any further, because in my case, after typing this command, I see:
GNU Parted 3.3
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Msft Virtual Disk (scsi)
Disk /dev/sdc: 550GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
As you can see, there are no partitions, so I can't use the resizepart command.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
sda 1:0:1:0 16G
└─sda1 16G /mnt
sdb 0:0:0:0 30G
├─sdb1 29.9G /
├─sdb14 4M
└─sdb15 106M /boot/efi
sdc 3:0:0:0 512G

You need to format the disk sdc and create a partition, using either the XFS or ext4 file system, before you can resize/expand the disk partition and file system.
Commands to partition and format the disk with the XFS file system:
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
Here we format the disk with the XFS file system and use the partprobe utility to make sure the kernel is aware of the new partition and file system.
For more detail, see the reference documentation on formatting the disk; you can also refer to a blog on how to create an ext4 file system partition in Linux.
We tested this in our local environment by creating a disk partition on a newly attached disk on a Linux machine running an Ubuntu 20.04 image and initializing the partition with the XFS file system.
When we created the new disk and attached it to the virtual machine, lsblk showed the disk as unmounted and without partitions.
After running the disk format and partition commands above, lsblk showed that a new partition, sdc1, had been created.
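With the partition and file system in place, the next time you enlarge the underlying Azure disk you can return to the resize workflow from the question. A minimal sketch, assuming /dev/sdc1 is formatted as XFS and mounted at /datadrive (a hypothetical mount point):
sudo umount /dev/sdc1
sudo parted /dev/sdc resizepart 1 100%   # grow partition 1 to the new end of the disk
sudo partprobe /dev/sdc                  # make the kernel re-read the partition table
sudo mount /dev/sdc1 /datadrive
sudo xfs_growfs /datadrive               # XFS grows online against the mount point; for ext4 use resize2fs /dev/sdc1 instead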

mkfs.vfat: unable to open {partition}: No such file or directory (command succeeds, but throws this error and blocks rest of script)

Update: I got this working but am still not 100% sure why. I've appended the fully and consistently working script to the end for reference.
I'm trying to script a series of disk partition commands using sgdisk and mkfs.vfat. I'm working from a Live USB (NixOS 21pre), have a blank 1TB M.2 SSD, and am creating a 1GB EFI boot partition, and a 999GB ZFS partition.
Everything works up until I try to create a FAT32 filesystem on the EFI partition, using mkfs.vfat, where I get the error in the title.
However, the odd thing is, the mkfs.vfat command succeeds, but throws that error anyway and blocks the rest of the script. Any idea why it's doing this and how to fix it?
Starting with an unformatted 1TB M.2 SSD:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Error: /dev/sda: unrecognised disk label
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
Script:
$ ls -la
total 4
drwxr-xr-x 2 nixos users 60 May 18 20:25 .
drwx------ 17 nixos users 360 May 18 15:24 ..
-rwxr-xr-x 1 nixos users 2225 May 18 19:59 partition.sh
$ cat partition.sh
#!/usr/bin/env bash
#make gpt partition table and boot & rpool partitions for ZFS on 1TB M.2 SSD
#error handling on
set -e
#wipe the disk with -Z, then create two partitions, a 1GB (954MiB) EFI boot partition, and a ZFS root partition consisting of the rest of the drive, then print the results
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
sgdisk -Z $DISK
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
sgdisk -p /dev/sda
#make a FAT32 filesystem on the EFI partition, then mount it
#mkfs.vfat -F 32 ${DISK}-part1 (troubleshooting with hardcoded version below)
mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkdir -p /mnt/boot
mount ${DISK}-part1 /mnt/boot
Result (everything fine until mkfs.vfat, which throws error and blocks the rest of the script):
$ sudo sh partition.sh
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries in memory.
Setting name!
partNum is 0
The operation has completed successfully.
Setting name!
partNum is 1
The operation has completed successfully.
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Model: WDC WDS100T2B0B-
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77ED6A41-E722-4FFB-92EC-975A37DBCB97
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1955839 954.0 MiB EF00 efi
2 1955840 1953525134 930.6 GiB BF01 zroot
mkfs.fat 4.1 (2017-01-24)
mkfs.vfat: unable to open /dev/disk/by-id/wwn-0x5001b448b94488f8-part1: No such file or directory
Verifying the partitioning and FAT32 creation commands worked:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1001MB 1000MB fat32 efi boot, esp
2 1001MB 1000GB 999GB zroot
Fwiw, the same command works on the commandline with no error:
$ sudo mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkfs.fat 4.1 (2017-01-24)
Success. But why no error on the commandline, but an error in the script?
Update: fully and consistently working script:
#!/usr/bin/env bash
#make UEFI (GPT) partition table and two partitions (FAT32 boot and ZFS rpool) on 1TB M.2 SSD
#error handling on
set -e
#vars
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
POOL='rpool'
#0. if /mnt/boot is mounted, umount it; if any NixOS filesystems are mounted, unmount them
if mount -l | grep -q '/mnt/boot'; then
umount -f /mnt/boot
fi
if mount -l | grep -q '/mnt/nix'; then
umount -fR /mnt
fi
#1. if a zfs pool exists, delete it
if zpool list | grep -q $POOL; then
zfs unmount -a
zpool export $POOL
zpool destroy -f $POOL
fi
#2. wipe the disk
sgdisk -Z $DISK
wipefs -a $DISK
#3. create two partitions, a 1GB (954MiB) EFI boot partition, and a ZFS root partition consisting of the rest of the drive, then print the results
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efiboot $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zfsroot $DISK
sgdisk -p /dev/sda
#4. notify the OS of partition updates, and print partition info
partprobe
parted ${DISK} print
#5. make a FAT32 filesystem on the EFI boot partition
mkfs.vfat -F 32 ${DISK}-part1
#6. notify the OS of partition updates, and print new partition info
partprobe
parted ${DISK} print
#mount the partitions in the nixos-zfs-pool-dataset-create.sh script. Make sure to first mount the ZFS root dataset on /mnt before mounting any subdirectories of /mnt.
It may take time for the kernel to be notified about partition changes. Try calling partprobe before mkfs to ask the kernel to re-read the partition table.
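Applied to the failing script, that means adding a wait between the sgdisk calls and mkfs.vfat. A small sketch using the same $DISK variable; udevadm settle is added here as an extra precaution, since the -part1 path is a symlink that udev has to create:
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
partprobe            # ask the kernel to re-read the partition table
udevadm settle       # wait for udev to finish creating /dev/disk/by-id/...-part1
mkfs.vfat -F 32 ${DISK}-part1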

Simulate mounted volume errors to cause read only

A few days ago we encountered an unexpected error where one of the mounted drives on our RedHat Linux machine became read-only. The issue was caused by a network outage in the datacenter.
Now I need to see if I can reproduce the same behavior, where the drive is re-mounted as read-only while the application is running.
I tried to remount it as read-only, but that didn't work because there are files that are open (logs being written).
Is there a way to temporarily force the volume read-only if I have root access to the machine (but no access to the hypervisor)?
That volume is mounted via /etc/fstab. Here is the record:
UUID=abfe2bbb-a8b6-4ae0-b8da-727cc788838f / ext4 defaults 1 1
UUID=8c828be6-bf54-4fe6-b68a-eec863d80133 /opt/sunapp ext4 rw 0 2
Here is the output of a few commands that show details about our mounted drives. I can add more details as needed.
Output of fdisk -l
Disk /dev/vda: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008ba5f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 524287966 262142959+ 83 Linux
Disk /dev/vdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Output of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 80G 0 disk
└─vda1 253:1 0 80G 0 part /
vdb 253:16 0 250G 0 disk /opt/sunup
Output of blkid command:
/dev/vda1: UUID="abfe2bbb-a8b6-4ae0-b8da-727cc788838f" TYPE="ext4"
/dev/sr0: UUID="2017-11-13-13-33-07-00" LABEL="config-2" TYPE="iso9660"
/dev/vdb: UUID="8c828be6-bf54-4fe6-b68a-eec863d80133" TYPE="ext4"
Output of parted -l command:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Error: /dev/sr0: unrecognised disk label
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 461kB
Sector size (logical/physical): 2048B/2048B
Partition Table: unknown
Disk Flags:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 42.9GB 42.9GB ext4
Yes, you can do it. But the method proposed here may cause data loss, so use it only for testing.
Supposing you have /dev/vdb mounted as /opt/sunapp, do this:
First, unmount it; you may need to shut down any applications that are using it.
Configure a loop device to mirror the contents of /dev/vdb:
losetup /dev/loop0 /dev/vdb
Then, mount /dev/loop0 instead of /dev/vdb:
mount /dev/loop0 /opt/sunapp -o rw,errors=remount-ro
Now, you can run your application. When it is time to make /opt/sunapp read-only, use this command:
blockdev --setro /dev/vdb
After that, attempts to write to /dev/loop0 will result in I/O errors. As soon as the file system driver detects this, it will remount the file system as read-only.
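To confirm the simulation worked, you can check the mount flags and attempt a write. A short sketch, where the test file name is arbitrary:
grep /opt/sunapp /proc/mounts    # the options field should now include ro
touch /opt/sunapp/write-test     # expected to fail with "Read-only file system"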
To restore everything back, you will need to unmount /opt/sunapp, detach the loop device, and make /dev/vdb writable again:
umount /opt/sunapp
losetup -d /dev/loop0
blockdev --setrw /dev/vdb
When I had issues like corrupted disks, I used ntfsfix.
Please see if these commands solve the problem.
sudo ntfsfix /dev/vda
sudo ntfsfix /dev/vdb

gcloud instance disk space

I am trying to do some computing in the cloud. For this I created a compute instance and then attached external storage of about 10 TB. But it seems I did something wrong, because only about 200 GB is available to my Datalab. Any comment will be helpful.
To check this I used
df -h
and
sudo lsblk
Thanks.
As I can see from the lsblk output, your datalab-pd disk has the right size,
but you can use only 196 GB.
I think this is because the file system does not occupy the entire disk space.
You need to extend the file system.
As an example, if you have an ext3 file system, you need to do:
- umount /dev/sdb       # Unmount your disk
- e2fsck -f /dev/sdb    # Force a file system check; resize2fs requires a recent one
- resize2fs /dev/sdb    # Grow the file system
Run without a size parameter, resize2fs will extend the file system to use all of the free space on the disk.
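Putting it together, a minimal sketch of the whole flow, assuming the data disk is /dev/sdb with the file system written directly on the device (no partition table) and currently mounted at /mnt/disks/datalab (a hypothetical mount point):
sudo umount /mnt/disks/datalab    # unmount the file system first
sudo e2fsck -f /dev/sdb           # force a check; resize2fs insists on a recent one
sudo resize2fs /dev/sdb           # with no size argument, grow to fill the device
sudo mount /dev/sdb /mnt/disks/datalab
df -h /mnt/disks/datalab          # confirm the new size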
More info: https://access.redhat.com/articles/1196353

How to mount azure datadisk in virtual machine

How can I mount an Azure data disk from a linux virtual machine?
I think it might be something like this
az vm disk attach-existing [virtualmachinename] [datadiskname]
I found the solution. It's confusing because the documentation for creating an Azure disk is hard to separate from the documentation for creating a mount point. This is the relevant documentation:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk#connect-to-the-linux-vm-to-mount-the-new-disk
For an alternative walkthrough, see this blog: https://chrismckee.co.uk/creating-mounting-new-drives-in-ubuntu-azure/. I couldn't identify the disk I wanted to mount using the official Azure docs, and this post helped.
You can attach a disk of any size to an Azure virtual machine:
https://mocktool.com/2020/11/24/attach-managed-disk-to-azure-linux-virtual-machine
Find the disk
Once connected to your VM, you need to find the disk. In this example, we are using lsblk to list the disks.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
The output is similar to the following example:
sda 0:0:0:0 30G
├─sda1 29.9G /
├─sda14 4M
└─sda15 106M /boot/efi
sdb 1:0:1:0 14G
└─sdb1 14G /mnt
sdc 3:0:0:0 50G
Here, sdc is the disk that we want, because it is 50G. If you aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select Disks, and check the LUN number for the disk under Data disks.
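If the disk is brand new it has no partition or file system yet, so /dev/sdc1 will not exist until you create it. A minimal sketch, reusing the XFS commands from the first answer above (mkfs.ext4 works the same way if you prefer ext4):
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%   # label the disk and create one partition
sudo mkfs.xfs /dev/sdc1                                                # put an XFS file system on it
sudo partprobe /dev/sdc1                                               # make the kernel re-read the partition table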
Mount the disk
Now, create a directory to mount the file system using mkdir. The following example creates a directory at /datadrive:
sudo mkdir /datadrive
Then use mount to mount the file system. The following example mounts the /dev/sdc1 partition at the /datadrive mount point:
sudo mount /dev/sdc1 /datadrive
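To keep the mount across reboots, the usual next step is an /etc/fstab entry keyed by UUID. A minimal sketch, where the UUID is a placeholder you must replace with what blkid actually prints (xfs here to match the sketch above; use ext4 if that is what you created):
sudo blkid /dev/sdc1             # note the UUID of the partition
# append a line like this to /etc/fstab (nofail keeps the VM bootable if the disk is detached):
# UUID=<uuid-from-blkid>   /datadrive   xfs   defaults,nofail   1   2
sudo mount -a                    # re-reads /etc/fstab and mounts anything not yet mounted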

Backup entire disk in Ubuntu

I would like to make a backup of the entire HDD disk.
Step by step, this is what I'm trying to do:
1) Check the storage capacity that is going to be backed up:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 455G 157G 275G 37% /
2) Mount an extra, empty HDD at /mnt/backup/:
/dev/sdb 294G 63M 279G 1% /mnt/backup
3) Run the backup (using lzop as the fastest compressor):
dd if=/dev/sda1 bs=4M conv=noerror iflag=noatime,nofollow | lzop -1 > /mnt/backup/dev-sda1.lzo
But the backup fails with error: lzop: No space left on device: <stdout>
The extra HDD gets filled up by dev-sda1.lzo, yet the used space on /dev/sda1 (157G) is obviously less than the 279G available on /dev/sdb, even without compression.
In /etc/fstab /dev/sda1 being mounted to "/":
UUID=8a49b90e-6115-43a6-9702-7620182bbbf5 / ext4 errors=remount-ro 0 1
Is it possible that dd is recursively copying the /mnt/backup/ folder and that this causes the failure?
Please advise.
Thanks to Mark Setchell for pointing me in the right direction. dd reads the raw partition, the full 455G including free space, which does not fit into the 279G available on the backup disk even with light compression; dump only copies the allocated blocks.
The solution to dump the whole partition without the free space is:
dump -0a -z1 -f /mnt/hdd1/dev-sda1.dump.gz /dev/sda1
For a partition with 157G used (Ubuntu 14.04 + development files + database files), dump took 45 minutes on a 7200 rpm HDD, and the resulting file was 80G (compression level 1).
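For completeness, a dump made this way is restored with the companion restore tool. A minimal sketch, assuming the target partition has been re-created with an ext4 file system and is mounted at /mnt/restore (a hypothetical mount point):
sudo mount /dev/sda1 /mnt/restore              # mount the freshly formatted target
cd /mnt/restore
sudo restore -rf /mnt/hdd1/dev-sda1.dump.gz    # replay the level-0 dump into the current directory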
