Need to extend the LVM (centos-home) from attached EBS (150GB) - Amazon

I'm trying to extend the LVM volume (centos-home) onto the attached EBS storage (xvdf, 150GB) without data loss.
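A minimal sketch of one common way to do this, assuming the new disk appears as /dev/xvdf, the volume group is named centos, and /home is XFS (the CentOS 7 defaults); check the actual names with pvs, vgs and lvs first:
~$ sudo pvcreate /dev/xvdf
~$ sudo vgextend centos /dev/xvdf
~$ sudo lvextend -l +100%FREE /dev/centos/home
~$ sudo xfs_growfs /home
Because the disk is added as a new physical volume and the existing logical volume is only grown, the data already on /home stays in place.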

Related

Clonezilla - target partition smaller than source

I made an image of a disk using CLONEZILLA. I checked the image and everything was fine. The image is of a 120GB disk. When I try to restore the image on a 1TB disk or any other disk with a capacity greater than 120GB I always get the message:
Target partition size (2MB) is smaller than source (105MB).
Use option -C to disable size checking (Dangerous).
I never came across this situation.
Any idea how to overcome this problem?
Thank you very much
This happened because CLONEZILLA does not support dynamic volumes.
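If you do restore with size checking disabled, the restored partitions keep their original 120GB layout, so you normally grow the last partition and its file system afterwards. A rough sketch, assuming the target disk shows up as /dev/sdb and the partition to grow is an ext4 /dev/sdb1 (adjust the device and partition names, and use xfs_growfs for XFS):
~$ sudo growpart /dev/sdb 1
~$ sudo resize2fs /dev/sdb1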

How to move disk space from rhel-home to rhel-root

I need to move disk space from the home partition to the root partition on RHEL 8.5. I am sharing the screenshot below.
In the above image I have 421G assigned to the rhel-home partition, but I need to move 350G from rhel-home to rhel-root. Can anybody please provide the steps for moving this space from home to root?
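Because RHEL 8 formats these logical volumes as XFS by default and XFS cannot be shrunk, the usual approach is to back up /home, remove its logical volume, and hand the freed space to root. A rough sketch, assuming the volume group is named rhel, the backup of /home is verified, and you want to keep 71G (421G minus 350G) for the new home volume:
~$ sudo umount /home
~$ sudo lvremove /dev/rhel/home
~$ sudo lvextend -L +350G /dev/rhel/root
~$ sudo xfs_growfs /
~$ sudo lvcreate -L 71G -n home rhel
~$ sudo mkfs.xfs /dev/rhel/home
~$ sudo mount /dev/rhel/home /home
Then restore the backed-up data into /home. Note that lvremove destroys everything on the home volume, so do not skip the backup.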

Increase the root volume (hard disk) of a running EC2 Linux instance without restart - step-by-step process

Problem:
I have an EC2 instance with Linux (Ubuntu) and a root volume of 10 GB. I have consumed about 96% of the space and my application is now responding slowly, so I want to increase the size to 50 GB.
The most important point is that there is already data on the instance and many applications running on it, and I don't want to disturb or stop them.
To check the current space available: ~$ df -hT
Use the ~$ lsblk command to check the partition size.
Here is the solution:
Take a snapshot of your volume which contains valuable data.
Increase the EBS volume using Elastic Volumes
After increasing the size, extend the volume's file system manually.
Details
1. Snapshot Process (AWS Reference)
1) Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2) Choose Snapshots under Elastic Block Store in the navigation pane.
3) Choose Create Snapshot.
4) For Select resource type, choose Volume.
5) For Volume, select the volume.
6) (Optional) Enter a description for the snapshot.
7) (Optional) Choose Add Tag to add tags to your snapshot. For each tag, provide a tag key and a tag value.
8) Choose Create Snapshot.
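The same snapshot can also be created from the AWS CLI; a sketch assuming a placeholder volume ID (replace vol-0123456789abcdef0 with your own):
~$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before resize"
You can check its progress with aws ec2 describe-snapshots.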
2. Increase the EBS volume using Elastic Volumes (AWS Reference)
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, IOPS, and throughput. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter a new value for Size.
To modify the IOPS, if the volume type is gp3, io1, or io2, enter a new value for IOPS.
To modify the throughput, if the volume type is gp3, enter a new value for Throughput.
After you have finished changing the volume settings, choose Modify. When prompted for confirmation, choose Yes.
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
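The resize can also be done from the AWS CLI instead of the console, using the same placeholder volume ID as in the snapshot example:
~$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 50
~$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
The second command lets you watch the modification state until it reaches optimizing or completed.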
3. Extend the volume's file system manually (AWS Reference)
To check whether the volume has a partition that must be extended, use the lsblk command to display information about the block devices attached to your instance.
The root volume, /dev/nvme0n1, has a partition, /dev/nvme0n1p1. While the size of the root volume reflects the new size, 50 GB, the size of the partition reflects the original size, 10 GB, and must be extended before you can extend the file system.
The volume /dev/nvme1n1 has no partitions. The size of the volume reflects the new size, 40 GB.
For volumes that have a partition, such as the root volume shown in the previous step, use the growpart command to extend the partition. Notice that there is a space between the device name and the partition number.
~$ sudo growpart /dev/nvme0n1 1
To extend the file system on each volume, use the correct command for your file system. In my case I have an ext4 filesystem, so I will use the resize2fs command.
~$ sudo resize2fs /dev/nvme0n1p1
Use lsblk to check the partition size.
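If the file system had been XFS rather than ext4 (the default on some distributions), the equivalent step per the same AWS guide would be to grow it by mount point instead of by device, for example:
~$ sudo xfs_growfs -d /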

How can I change disk size for new machine instance for Azure?

I want to create a new NC6 instance (for my GPU and ML work). A few days ago I created my first NC6 instance with a 150 GB standard SSD; that was too much for me, so I tried to change the disk to a cheaper one, but I noticed you cannot shrink or swap the disk to a smaller size or to an HDD. So I ended up deleting the instance and am now trying to create a new one, but there seems to be no way to set the disk size: you can change the "OS disk type" from SSD to HDD, but not the disk size, e.g. from 150 GB to 32 GB. So, how do I specify the disk size for a new machine instance on Azure? Thanks.
The size of the OS disk is determined by the image of the VM that you choose.
When you create a VM you get a copy of the base image, and that base image has a size.
You cannot reduce the size, but you can increase it:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/expand-os-disk
Bouncing off of Shiraz's answer. I was banging my head against this problem. I have a free account with Azure and I'm trying to keep it free by not using options that incur charges. But the default Server 2019 image comes with 128 GB and no option to use a smaller size. The P6 disk that comes with the free account is 64 GB.
I found that if you go into all images, you can search for a [smalldisk] image. It comes at 32 GB. In my case, I created the VM, shut it down, then expanded the disk up to P6. (Why wouldn't I use the full size that comes for free?)
Screenshots for Reference (Imgur)
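If you later need more space, the OS disk can be grown (never shrunk) from the Azure CLI while the VM is deallocated; a sketch with placeholder resource group, VM and disk names (myRG, myVM, myOSDisk):
~$ az vm deallocate --resource-group myRG --name myVM
~$ az disk update --resource-group myRG --name myOSDisk --size-gb 64
~$ az vm start --resource-group myRG --name myVM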

Attaching a cloud disk: how to format the disk if it is larger than 2 TiB

I found the device name of the disk by going to:
ECS Console > Block Storage > Disks > (specific Disk ID) More > Modify Attributes.
Then I ran
fdisk /dev/vdb
to create a new partition. But I do not think this works for disks bigger than 2 TiB, so what is the procedure for those?
You could create the file system directly on the disk, without a partition table:
# mkfs.xfs /dev/vdb
But GPT support was added to fdisk in util-linux in 2012, so this should not be necessary.
As 2 TiB exceeds the limit of MBR, you must use a GPT partition table.
Here you can find a list of partitioning tools: https://wiki.archlinux.org/index.php/partitioning#Partitioning_tools
You can use:
gdisk
parted
GParted
Using parted you can create a partition larger than 2 TiB.
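A sketch of that with parted, assuming the data disk is /dev/vdb and you want one XFS partition spanning the whole disk (this wipes any existing partition table on /dev/vdb):
~$ sudo parted -s /dev/vdb mklabel gpt
~$ sudo parted -s /dev/vdb mkpart primary 1MiB 100%
~$ sudo mkfs.xfs /dev/vdb1
~$ sudo mount /dev/vdb1 /mnt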
