Attaching a cloud disk: how to format the disk if it's larger than 2 TiB - alibaba-cloud-ecs

I found the device name of the disk by going to:
ECS Console > Block Storage > Disks > (specific Disk ID) > More > Modify Attributes.
Then I run
fdisk /dev/vdb
to create a new partition. But I do not think this works for disks bigger than 2 TiB, so what is the procedure for those?

You could create the file system directly on the disk, without a partition table:
# mkfs.xfs /dev/vdb
But GPT support was added to fdisk in util-linux in 2012, so skipping the partition table should not be necessary.
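For reference, a minimal sketch of the GPT route with a recent fdisk, assuming /dev/vdb from the question is the new, empty disk (double-check the device name first; the single-letter keystrokes are entered at the fdisk prompt):
# fdisk /dev/vdb
g        (create a new, empty GPT partition table)
n        (create a new partition; accept the defaults to use the whole disk)
w        (write the table and exit)
# mkfs.xfs /dev/vdb1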

As 2 TiB exceeds the limit of an MBR partition table, you must use a GPT partition table.
Here you can find a list of partitioning tools: https://wiki.archlinux.org/index.php/partitioning#Partitioning_tools
You can use:
gdisk
parted
GParted

Using parted you can create a partition larger than 2 TB.
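For example, a minimal sketch with parted, assuming the disk is /dev/vdb as in the question above and you want a single partition spanning the whole disk (device name and filesystem are placeholders):
# parted -s /dev/vdb mklabel gpt
# parted -s -a optimal /dev/vdb mkpart primary xfs 0% 100%
# mkfs.xfs /dev/vdb1
Using percentage bounds lets parted pick well-aligned start and end sectors on its own.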

Related

Clonezilla - target partition smaller than source

I made an image of a disk using Clonezilla. I checked the image and everything was fine. The image is of a 120 GB disk. When I try to restore the image on a 1 TB disk, or any other disk with a capacity greater than 120 GB, I always get the message:
Target partition size (2MB) is smaller than source (105MB).
Use option -C to disable size checking (Dangerous).
I never came across this situation.
Any idea how to overcome this problem?
Thank you very much
This happened because Clonezilla does not support dynamic volumes.

How can I quickly erase all partition information and data on partitions in Linux?

I'm testing a program to use on Raspberry Pi OS. A good part of what it does is read the partitioning info on the system drive, which is going to be (in this case) /boot and /, with no extra partitions, just those two. I'm using a Python script that calls sfdisk. I do what so many examples show: I get the info from the system drive, read it as output, then use it as input to run the command to partition the target drive.
I'm using Python and doing this with subprocess.run(). The script I'm writing, when it writes the 2nd partition on the target drive, writes it as a small size, then I use parted to extend the partition to the end of the drive. In between tests, to wipe my data so I can start fresh, I've been using sfdisk to make one partition for the full size of the drive. Also, I'm using USB memory sticks for testing at this point; eventually I'll generally be using those as drives, or SD cards.
The problem I'm finding is that the file structure is persistent on the partitions on the target drive. (All this paragraph is about ONLY the target drive.) If I divide it up into 2 partitions (as I need to use, eventually), I find that /boot, the small 1st partition, still has all the files from previous usage of the partition. If I've tried to wipe the information by making only one big partition on the drive, I still see only, in that one partition, the original files for the /boot partition. If I split it into 2 partitions, the locations are going to be the same as when I normally make a Raspbian image and I find the files in both /boot and the system drive are still there.
So repartitioning, with the partitions in the same location, leaves me with the files still intact from the previous incarnation of a partition in the same sectors.
I'd like to, for testing, just wipe out all the information so I start fresh with each test, but I do not want to just use dd and write gigabytes of 0s or 1s to the full drive to wipe out the data.
What can I do to make sure:
The partition table is wiped out between tests
Any directory structure or file information for the partitions is wiped out so there are no files still surviving on any partitions when I start my testing?
A "nice" thing about linux filesystems is that they are separate from partition tables. This has saved me in the past when partition tables have been accidentally deleted or corrupted - recreate the partition table and the filesystem is still there! For your use case, if you want the files to be "gone", you need to destroy the filesystem superblocks. Destroying just the first one is probably sufficient for your use case.
Using dd to overwrite just the first MB of each of your filesystems should get you what you need. So, if you're starting your first partition/FS on block 0, you could do something like
# write 1MB of zeros to wipe out /boot
dd if=/dev/zero of=/dev/path_to_your_device bs=1024 count=1024
That ought to wipe out the /boot file system. From there you'll need to calculate the start of your root volume, and you can write a megabyte of zeros at the start of your root filesystem by offsetting into the output device with dd's seek= option (skip= applies to the input side), as per https://superuser.com/questions/380717/how-to-output-file-from-the-specified-offset-but-not-dd-bs-1-skip-n.
Alternatively, if /boot is small, you can just write sizeof(/boot)+1MB (assuming the root filesystem starts immediately after /boot) and it'll overwrite the primary superblock of the root filesystem too, while saving you some calculations.
Note that the alternate superblocks will still exist, so if you (or someone) later wanted to get back what was there previously, recovery from an alternate superblock might be possible, except that whatever files were present in that first 1 MB of the disk would be corrupt due to the overwrite.
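As an illustration of the offset approach, a rough sketch assuming the target stick is /dev/sdX, /boot is partition 1 at the start of the disk, and the root partition starts at sector 532480 (all names and numbers here are placeholders; check your real layout with sfdisk -l or lsblk first):
# wipe the partition table and the first MiB of the device
dd if=/dev/zero of=/dev/sdX bs=1M count=1
# wipe the first MiB of the root filesystem; seek= moves the write position on the output
# by 532480 sectors of 512 bytes (replace with the real start sector of your root partition)
dd if=/dev/zero of=/dev/sdX bs=512 seek=532480 count=2048
# equivalently, if the kernel still knows about the partition, write to the partition device itself
dd if=/dev/zero of=/dev/sdX2 bs=1M count=1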

Increase the root volume (hard disk) of a running EC2 Linux instance without a restart - step by step process

Problem:
I have an EC2 instance with Linux (Ubuntu) and a root volume of 10 GB. I have consumed about 96% of the space, and now my application is responding slowly, so I want to increase the size to 50 GB.
The most important point is that data is already there and many applications are running on this EC2 instance, and I don't want to disturb or stop them.
To check the current space available: ~$ df -hT
Use the ~$ lsblk command to check the partition sizes.
Here is the solution:
Take a snapshot of your volume which contains valuable data.
Increase the EBS volume using Elastic Volumes
After increasing the size, extend the volume's file system manually.
Details
1. Snapshot Process (AWS Reference)
1) Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2) Choose Snapshots under Elastic Block Store in the navigation pane.
3) Choose Create Snapshot.
4) For Select resource type, choose Volume.
5) For Volume, select the volume.
6) (Optional) Enter a description for the snapshot.
7) (Optional) Choose Add Tag to add tags to your snapshot. For each tag, provide a tag key and a tag value.
8) Choose Create Snapshot.
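If you prefer the command line, the equivalent AWS CLI call looks roughly like this (the volume ID is a placeholder):
~$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before resizing the root volume"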
2. Increase the EBS volume using Elastic Volumes (AWS Reference)
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, IOPS, and throughput. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter a new value for Size.
To modify the IOPS, if the volume type is gp3, io1, or io2, enter a new value for IOPS.
To modify the throughput, if the volume type is gp3, enter a new value for Throughput.
After you have finished changing the volume settings, choose Modify. When prompted for confirmation, choose Yes.
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
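The same change can be made from the AWS CLI, roughly as follows (volume ID and size are placeholders); the second command lets you watch the modification progress:
~$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 50
~$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0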
3. Extend the volume's file system manually (AWS Reference)
To check whether the volume has a partition that must be extended, use the lsblk command to display information about the block devices attached to your instance.
The root volume, /dev/nvme0n1, has a partition, /dev/nvme0n1p1. While the size of the root volume reflects the new size, 50 GB, the size of the partition reflects the original size, 10 GB, and must be extended before you can extend the file system.
The second volume in the example, /dev/nvme1n1, has no partitions. The size of the volume already reflects the new size, 40 GB.
For volumes that have a partition, such as the root volume shown in the previous step, use the growpart command to extend the partition. Notice that there is a space between the device name and the partition number.
~$ sudo growpart /dev/nvme0n1 1
To extend the file system on each volume, use the correct command for your file system. In my case I have an ext4 filesystem, so I will use the resize2fs command.
~$ sudo resize2fs /dev/nvme0n1p1
Use lsblk to check the partition size.
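Putting the whole resize step together, a sketch for an ext4 root partition on a Nitro-based instance (the device and partition names are the ones shown by lsblk above and may differ on your instance):
~$ sudo growpart /dev/nvme0n1 1      # grow partition 1 to fill the resized volume
~$ sudo resize2fs /dev/nvme0n1p1     # grow the ext4 filesystem to fill the partition
~$ df -hT                            # confirm the new size is visible
On an XFS root you would run sudo xfs_growfs / on the mount point instead of resize2fs.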

What exactly does a file system like ext4/XFS contain on a disk partition in Linux, apart from data blocks?

At a low level, what happens when we format a newly created disk using mkfs?
mkfs formats the partition you selected, and you can format it with the filesystem you choose.
Ex: mkfs -t ext3 /dev/sda1
You can also use
mkfs.ext3 /dev/sda1
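To see the on-disk structures mkfs creates besides the data blocks (superblock, block group descriptors, inode tables, the journal), you can dump the filesystem metadata; a sketch assuming the ext filesystem on /dev/sda1 from the example above:
dumpe2fs -h /dev/sda1    # print the superblock: block and inode counts, features, journal
tune2fs -l /dev/sda1     # similar summary of the ext2/3/4 superblock
For XFS, xfs_info run on the mount point prints the equivalent geometry and log (journal) information.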

XenServer 7.2 parted mkpart error

I am attempting to partition my Dell R710 for VM storage. Details:
Newly installed XenServer 7.2. Accepted Defaults.
5x 2TB Drives, Raid 5. Single Virtual Disk. Total storage: 8TB
All I want to do is add two partitions, a 4TB for VM storage, then whatever is left for media storage (~ 3.9TB).
When I run parted to try and create the first partition (4TB), I am receiving an error "Unable to satisfy all constraints on the partition." I have Googled and Googled, but am unable to find anything that seems to get me going in the right direction. Additionally, I get a strange message (see the bottom of the screenshot) suggesting I have an issue with my sectors perhaps (34...2047 available?).
Below is a screenshot that contains pertinent information as well as command output. Here's hoping someone can help. Thanks in advance!
You are attempting to write a partition to an already partitioned space. You will have to delete the LVM partition first.
So, I ended up booting from a Debian live CD and using GParted to tweak the partition size. This worked like a charm. Marking meatball's answer as correct, as this was what led me down this path.
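For reference, a rough sketch of the parted steps the answers describe, assuming the RAID virtual disk is /dev/sda and the LVM partition to reclaim is partition 3 (partition numbers and sizes are placeholders; removing a partition destroys its data, so confirm the layout first):
parted /dev/sda unit GB print free        # note the partition numbers and the free-space boundaries
parted /dev/sda rm 3                      # remove the existing LVM partition
parted -a optimal /dev/sda mkpart vmstore 50GB 4050GB    # 4 TB partition; replace 50GB with the real free-space start
parted -a optimal /dev/sda mkpart media 4050GB 100%      # rest of the disk for media storage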
