gcloud instance disk space - linux

I am trying to do some computing in the cloud. For this I created a compute instance and then attached an external disk of about 10 TB. But it seems I did something wrong, because only about 200 GB are available to my datalab. Any comment will be helpful.
To check this I used
df -h
and
sudo lsblk
Thanks.

As I can see from the lsblk output, your datalab-pd disk has the right size, but only 196 GB of it are usable.
I think this is because the file system does not occupy the entire disk, so you need to extend the file system.
For example, if you have an ext3 file system, you need to do:
- sudo umount /dev/sdb # unmount the disk
- sudo e2fsck -f /dev/sdb # check the file system (the forced check is required before resizing)
- sudo resize2fs /dev/sdb
Run without a size argument, resize2fs extends the file system to use all the free space on the disk.
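After resize2fs finishes, remount the disk and confirm the new size. A minimal sketch, assuming the disk is /dev/sdb and was mounted at /mnt/disks/datalab (the mount point is an assumption, use whatever df -h showed before):
sudo mount /dev/sdb /mnt/disks/datalab  # remount the resized filesystem
df -h /mnt/disks/datalab                # should now report roughly 10 TB instead of 196 GB
sudo lsblk /dev/sdb                     # cross-check against the block device size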
More info: https://access.redhat.com/articles/1196353

Related

Expand virtual hard disks on a Linux VM with the Azure CLI

I am trying to extend a disk in my VM (Azure). I used to do it like this:
sudo umount /dev/sdc1
(sdc1 as an example)
sudo parted /dev/sdc
After typing print, I should see something like this:
GNU Parted 3.2
Using /dev/sdc1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown Msft Virtual Disk (scsi)
Disk /dev/sdc1: 215GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 107GB 107GB ext4
I can't go any further because in my case after typing this command I see:
GNU Parted 3.3
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Msft Virtual Disk (scsi)
Disk /dev/sdc: 550GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
As you can see, there are no partitions, so I can't use the resizepart command.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
sda 1:0:1:0 16G
└─sda1 16G /mnt
sdb 0:0:0:0 30G
├─sdb1 29.9G /
├─sdb14 4M
└─sdb15 106M /boot/efi
sdc 3:0:0:0 512G
You need to partition and format the sdc disk, using either the XFS or ext4 file system, before you can go further and resize/expand the disk partition and file system.
Commands to partition and format the disk with the XFS file system:
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
Here we are formatting the disk with the XFS file system and using the partprobe utility to make sure the kernel is aware of the new partition and filesystem.
See the reference documentation on formatting the disk; you can also refer to this blog on how to create an ext4 file system partition in Linux.
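If you prefer ext4 over XFS, a roughly equivalent sketch would be (the disk name /dev/sdc is assumed, as above):
sudo parted /dev/sdc --script mklabel gpt mkpart ext4part ext4 0% 100%  # GPT label plus one partition spanning the whole disk
sudo mkfs.ext4 /dev/sdc1                                                # create the ext4 filesystem on the new partition
sudo partprobe /dev/sdc                                                 # make the kernel re-read the partition table
Note that the file-system argument to mkpart is only a hint stored in the partition table; it is mkfs that actually creates the filesystem.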
We have tested this in our local environment: we created a partition on a newly attached disk on a Linux machine running an Ubuntu 20.04 image and initialized it with the XFS file system. Right after the new disk was created and attached to the virtual machine, lsblk showed that the disk was not mounted and had no partitions. After running the disk format and partition commands above, lsblk shows the new sdc1 partition.
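Once the disk has a partition and a filesystem, a later expansion (for example after resizing the Azure disk from 215 GB to 550 GB) only needs the partition and the filesystem grown in place. A hedged sketch, assuming the data partition is /dev/sdc1, growpart from the cloud-utils/cloud-guest-utils package is installed, and the filesystem is mounted at /datadrive (the mount point is an assumption):
sudo growpart /dev/sdc 1      # grow partition 1 to fill the enlarged disk
sudo xfs_growfs /datadrive    # XFS: grow the filesystem via its mount point
sudo resize2fs /dev/sdc1      # ext4: grow the filesystem via the partition device instead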

How to mount azure datadisk in virtual machine

How can I mount an Azure data disk from a linux virtual machine?
I think it might be something like this
az vm disk attach-existing [virtualmachinename] [datadiskname]
I found the solution. It's confusing because the documentation for creating an Azure disk is hard to separate from the documentation for creating a mount point. This is the relevant documentation:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk#connect-to-the-linux-vm-to-mount-the-new-disk
For an alternative walkthrough, see this blog: https://chrismckee.co.uk/creating-mounting-new-drives-in-ubuntu-azure/. I couldn't identify the disk I wanted to mount using the official Azure docs, and this post helped.
You can attach a disk of any size to an Azure virtual machine:
https://mocktool.com/2020/11/24/attach-managed-disk-to-azure-linux-virtual-machine
Find the disk
Once connected to your VM, you need to find the disk. In this example, we are using lsblk to list the disks.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
The output is similar to the following example:
sda 0:0:0:0 30G
├─sda1 29.9G /
├─sda14 4M
└─sda15 106M /boot/efi
sdb 1:0:1:0 14G
└─sdb1 14G /mnt
sdc 3:0:0:0 50G
Here, sdc is the disk that we want, because it is 50G. If you aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select Disks, and check the LUN number for the disk under Data disks.
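If the data disk is brand new, as sdc is in the output above, it has no partition or filesystem yet and cannot be mounted directly. The same parted/mkfs/partprobe commands shown earlier on this page apply; a condensed sketch (disk name /dev/sdc assumed):
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%  # one partition covering the whole disk
sudo mkfs.xfs /dev/sdc1                                               # format the new partition
sudo partprobe /dev/sdc                                               # make the kernel aware of the new partition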
Mount the disk
Now, create a directory to mount the file system using mkdir. The following example creates a directory at /datadrive:
sudo mkdir /datadrive
Use mount to then mount the filesystem. The following example mounts the /dev/sdc1 partition to the /datadrive mount point:
sudo mount /dev/sdc1 /datadrive
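A mount done this way does not survive a reboot. To make it persistent, add the partition to /etc/fstab, preferably by UUID rather than device name, since names like /dev/sdc1 can change between boots. A short sketch (the UUID and filesystem type below are placeholders):
sudo blkid /dev/sdc1   # note the UUID of the partition
# append a line like this to /etc/fstab:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /datadrive  xfs  defaults,nofail  1  2
sudo mount -a          # re-read fstab and mount everything listed there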

How To Mount A Hard Disk Of File-System Type "devtmpfs"

I'm trying to recover some data from a hard drive extracted from a broken laptop, and I'm having problems mounting the disk on my current system (Linux Mint). The hard disk I'm recovering from ran Debian. Simply put, I'm confused as to how I can mount the hard drive to access the files; it's not as simple as any other mount I've done. The following details the struggles and information I've encountered.
I get the following outputs when trying to mount the hard drive with different file-system types. I should add that the file-system type isn't automatically detected when using auto, and "sdb" is definitely the correct address for the disk (I took it from dmesg).
$ mount /dev/sdb /mnt/usb -t ntfs
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
The following command returns the same message whichever other common file-system type is used:
$ sudo mount /dev/sdb usb -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The results from these commands led me to believe that there was an issue with the hard disk and its partitions; however, fdisk shows that its partitions do seem to be valid and correct:
$ sudo fdisk /dev/sdb -l
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002da94
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 475920383 237959168 83 Linux
/dev/sdb2 475922430 488396799 6237185 5 Extended
/dev/sdb5 475922432 488396799 6237184 82 Linux swap / Solaris
I then decided to try to verify the file-system type of the hard drive, which seems to be "devtmpfs" according to the following df command:
$ df /dev/sdb -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 1014764 4 1014760 1% /dev
And so finally, I mounted the hard drive using -t devtmpfs, which succeeds, but I'm left with a confusing file system very unlike what I would expect from what was a standard Debian setup.
It contains folders such as "block", "bus", "char", "disk", "dri", "mapper"... and files like "sda1", "sdb", "sdb1", "tty", "vcs".
I'm totally stumped as to how I should progress, and I'm pretty convinced the hard disk isn't broken and that I'm just mounting it incorrectly. How can I successfully mount the disk so I can access my files? Any help would be greatly appreciated.
Ok, you are trying to mount the entire disk instead of an individual partition, which is why you are getting the error. In short, the command you need is:
mount /dev/sdb1 /mnt/usb
The file /dev/sdb references the entire disk as a block file. This includes the partition table at the start, which is why it can't find a filesystem. The file /dev/sdb1 references the first partition, which is where your filesystem will be. From the looks of your fdisk output, this is not an NTFS partition, since NTFS is a Windows filesystem and the partition is marked as Linux (most likely you will have ext4, unless you specifically set up something different).
To add a quick explanation of devtmpfs: this is a special filesystem which contains these block device files, which are managed by udev. You can google both for more information, but by now I'm sure you know it's not what you are looking for.
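For recovery work it is also worth double-checking the filesystem type on the partition itself and then mounting it read-only, so nothing on the old disk can be modified. A small sketch, assuming the Linux partition from the fdisk output is /dev/sdb1 and /mnt/usb already exists:
sudo blkid /dev/sdb1                 # should report something like TYPE="ext4"
sudo mount -o ro /dev/sdb1 /mnt/usb  # mount read-only for recovery
ls /mnt/usb                          # the familiar /bin, /etc, /home tree from the old Debian install should appear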

Amazon EC2: Unable to unmount and remove EBS drive file system

I have created an EBS volume, attached it to the instance, and created a file system on it using mkfs.ext3.
Now I want to unmount and delete the drive. I've tried many things but nothing seems to work. Although I am able to detach the drive from the instance and delete it using the EC2 console, when I check the partitions using df -hk it is still showing the drive.
[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 1075740 7097356 14% /
tmpfs 304368 0 304368 0% /dev/shm
/dev/xvdf 30963708 176196 29214648 1% /media/newdrive
Moreover, when I try to use any other command like "fdisk -l", or try to browse the drive's folders, the PuTTY session hangs.
I am new to EC2 cloud and also to Linux.
How about this?
You need to run it as:
sudo umount /dev/xvdf
umount -dRf /media/newdrive
umount needs the mount point, not a device path like /dev/xvdf.
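A fuller sequence, as a sketch: unmount by mount point, verify, and only then detach the volume from the instance (the volume ID below is a placeholder):
sudo umount /media/newdrive  # unmount using the mount point
df -hk                       # /dev/xvdf should no longer be listed
# then detach the volume, either in the EC2 console or with the AWS CLI:
# aws ec2 detach-volume --volume-id vol-0123456789abcdef0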

Understanding Linux partitions with Amazon EC2

I am relatively new to Linux. In one of our projects, we use an Amazon EC2 instance to process some files and upload them to S3 afterwards. The EC2 instance is booted from an existing AMI.
Recently I got an error that there was no space left on the disk, so processing of the files was halted. I cleaned up some older files and the processing continued.
Now when I look at the available space using df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see that my files affect only /dev/xvda1.
I have the following queries:
What is the use of the other partitions when I can see my files affecting only /dev/xvda1?
It looks like we are only using 10 GB of space effectively and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1, or store files directly in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount one of the partitions in the appropriate place.
If, for example, your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can of course use the other partition instead, if you prefer.
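If you would rather leave /mnt mounted where it is, another option is to keep the data on the large partition and point /var/mydir at it with a bind mount. A sketch with assumed directory names; note that mounting anything over a non-empty /var/mydir hides whatever is already in it:
sudo mkdir -p /mnt/mydir                 # working directory on the large /dev/xvdb filesystem
sudo mount --bind /mnt/mydir /var/mydir  # /var/mydir now shows the same contents
df -h /var/mydir                         # should report the ~414G filesystem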
