Get free space of HDD in Linux

Within a bash script I need to get the total disk size and the currently used size of the complete disk.
I know I can get the total disk size without needing to be root with this command:
cat /sys/block/sda/size
This command outputs the number of blocks on device sda.
Multiply it by 512 and you get the number of bytes on the device.
That covers the total disk size.
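For completeness, a minimal sketch of that calculation in a script (assuming the disk is sda) looks like:
sectors=$(cat /sys/block/sda/size)   # reported in 512-byte sectors, no root needed
total_bytes=$((sectors * 512))
echo "Total size of sda: ${total_bytes} bytes"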
Now for the currently used space: I want to get this value without being root.
I can assume the device name is sda.
Then there is the df command.
I thought I could use it, but it seems df only reports data for the currently mounted partitions.
Is there a way to get the total space used on disk sda without needing to be root and without all partitions having to be mounted?
Let's assume the following example:
/dev/sda1 80GB Linux partition, 20GB used
/dev/sda2 80GB Linux home partition, 20GB used
/dev/sda3 100GB Windows partition, 30GB used
Let's assume my sda disk is partitioned as above, but while I'm on Linux my Windows partition (sda3) is not mounted.
The output of df then gives me a grand total of 40GB used, so it doesn't take sda3 into account.
Again, the question:
Is there a way, without root, to get a grand total of 70GB of used space?

I think the stat command should help. You can get the partitions from /proc/partitions.
Sample stat command output:
$ stat -f /dev/sda1
File: "/dev/sda1"
ID: 0 Namelen: 255 Type: tmpfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 237009 Free: 236970 Available: 236970
Inodes: Total: 237009 Free: 236386
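As a rough, untested sketch (it only reports partition sizes, not used space), you can list the sda partitions and their sizes from /proc/partitions without root:
# /proc/partitions columns: major, minor, size in 1K blocks, name
awk '$4 ~ /^sda[0-9]+$/ { printf "%s: %d bytes\n", $4, $3 * 1024 }' /proc/partitions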

You can use df.
df -h --output='used' /home
Used
3.2G
If you combine this with some sed or awk, you can extract just the value you seek.
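For example (assuming GNU df, which provides --output), strip the header so only the number remains:
df -h --output=used /home | tail -n 1 | tr -d ' '     # prints "3.2G"
# or, in plain bytes:
df -B1 --output=used /home | awk 'NR == 2 { print $1 }'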

Related

Resizing an EXT4 Partition [closed]

I have an Ubuntu 18.04 server in a VM. The VM administrator has allocated more disk space. How do I resize a partition to use that space?
My setup is as follows:
Device Start End Sectors Size Type
/dev/sda1 2048 1050623 1048576 512M EFI System
/dev/sda2 1050624 5244927 4194304 2G Linux filesystem
/dev/sda3 5244928 424675327 419430400 200G Linux filesystem
/dev/sda4 424675328 529532927 104857600 50G Linux filesystem
/dev/sda5 529532928 1578108927 1048576000 500G Linux filesystem
/dev/sda6 1578108928 1745881087 167772160 80G Linux filesystem
(parted) print free space
Model: Msft Virtual Disk (scsi)
Disk /dev/sda: 1397GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 2685MB 2147MB ext4
3 2685MB 217GB 215GB ext4
4 217GB 271GB 53.7GB ext4
5 271GB 808GB 537GB ext4
6 808GB 894GB 85.9GB ext4
894GB 1397GB 503GB Free Space
How can I assign that free space to the fifth partition?
Following the instructions in:
https://geekpeek.net/resize-filesystem-fdisk-resize2fs/
I deleted /dev/sda5 and then recreated it, but it remained the same size.
Similarly, following:
https://www.tecmint.com/parted-command-to-create-resize-rescue-linux-disk-partitions/
resizepart doesn't suggest that there's any available disk space to use and the partition remains the same size.
Do I need to remove /dev/sda6 to get access to the available disk space?
If you want to assign free space to partition /dev/sda5 from inside the VM, you will first have to get rid of the /dev/sda6 partition.
Otherwise, the solution is to run GParted from a live CD while the filesystem is not mounted, because you will have to move partitions (in your case, you would move the /dev/sda6 partition to the end of the disk and then expand /dev/sda5 into the now-adjacent free space).
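If you do remove /dev/sda6 from inside the VM, the rough sequence would be something like the following (a sketch only; it is destructive for /dev/sda6, so back up its data first, and it needs root):
parted /dev/sda rm 6                 # delete the last partition
parted /dev/sda resizepart 5 100%    # grow partition 5 to the end of the disk
resize2fs /dev/sda5                  # grow the ext4 filesystem to fill the partition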

gcloud instance disk space

I am trying to do some computing in the cloud. For this I created a compute instance and then attached external storage of about 10TB. But it seems I did something wrong and I only got 200GB available for my datalab. Any comment will be helpful.
To check this I used
df -h
and
sudo lsblk
Thanks.
As I can see from the lsblk output, your datalab-pd disk has the right size.
But you can only use 196 GB.
This is probably because the file system does not occupy the entire disk.
You need to extend the file system.
As an example, if you have an ext3 filesystem you need to do:
- umount /dev/sdb # unmount your disk
- e2fsck -f /dev/sdb # check the file system on the disk
- resize2fs /dev/sdb
Run without a size parameter, resize2fs extends the filesystem to use all the free space on the disk.
More info: https://access.redhat.com/articles/1196353
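To check whether the filesystem really fills the disk (before and after resizing), you can compare the block device size with what the filesystem reports, for example:
sudo lsblk -b -o NAME,SIZE /dev/sdb    # size of the block device in bytes
df -B1 --output=size /mnt/yourdisk     # size of the mounted filesystem (/mnt/yourdisk is a placeholder)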

Backup entire disk in Ubuntu

I would like to make a backup of the entire HDD disk.
Step by step, what I'm trying to do:
1) Check the storage capacity (that is going to be backed up):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 455G 157G 275G 37% /
2) Mount extra, empty hdd to /mnt/backup/
/dev/sdb 294G 63M 279G 1% /mnt/backup
3) Run backup (using lzop as the fastest compressor)
dd if=/dev/sda1 bs=4M conv=noerror iflag=noatime,nofollow | lzop -1 > /mnt/backup/dev-sda1.lzo
But the backup fails with error: lzop: No space left on device: <stdout>
The extra HDD gets completely filled by dev-sda1.lzo, even though the used size of /dev/sda1 (157G) is obviously less than the space available on /dev/sdb (279G), even without compression.
In /etc/fstab /dev/sda1 being mounted to "/":
UUID=8a49b90e-6115-43a6-9702-7620182bbbf5 / ext4 errors=remount-ro 0 1
Is it possible that dd is recursively copying the /mnt/backup/ folder and that this causes the failure?
Please advise.
Thanks to Mark Setchell for pointing me in the right direction. The problem is that dd reads the raw block device, so it copies the whole 455G partition, free space included, not just the 157G that is actually in use (it does not recurse into /mnt/backup); even compressed, that can overflow the 279G backup disk.
Finally, the solution to create a dump of the whole partition without the free space is:
dump -0a -z1 -f /mnt/hdd1/dev-sda1.dump.gz /dev/sda1
For the 157G partition of Ubuntu 14.04 + development files + database files, dump took 45 minutes (on a 7200 rpm HDD) and the resulting file was 80G (compression level = 1).
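For reference, restoring such a dump later is done from inside the (empty) target filesystem; a sketch, using the dump path from above and a placeholder mount point:
cd /mnt/restored-root                     # placeholder: root of the freshly formatted target filesystem
restore -rf /mnt/hdd1/dev-sda1.dump.gz    # -r restores the whole dump into the current directory (needs root)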

How To Mount A Hard Disk Of File-System Type "devtmpfs"

I'm trying to recover some data from a hard drive extracted from a broken laptop, and I'm having problems mounting the disk on my current system (Linux Mint). The hard disk I'm recovering from ran Debian. Put simply, I'm confused as to how I can mount the hard drive to access the files; it's not as simple as any other mount I've done. The following details the struggles and information I've encountered.
I get the following outputs when trying to mount the hard drive with different file-system types. I should add that the file-system type isn't automatically detected when using auto, and "sdb" is definitely the correct device for the disk (taken from dmesg).
$ mount /dev/sdb /mnt/usb -t ntfs
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
The following returns the same message when all other common file-system tags are used:
$ sudo mount /dev/sdb usb -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The results from these commands led me to believe that there was an issue with the hard disk and its partitions; however, fdisk shows that its partitions do seem to be valid and correct:
$ sudo fdisk /dev/sdb -l
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002da94
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 475920383 237959168 83 Linux
/dev/sdb2 475922430 488396799 6237185 5 Extended
/dev/sdb5 475922432 488396799 6237184 82 Linux swap / Solaris
I then decided to try to verify the file-system type of the hard drive, which seems to be "devtmpfs", based on the following df command:
$ df /dev/sdb -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 1014764 4 1014760 1% /dev
And so finally, I mounted the hard drive using -t devtmpfs, which succeeds, however I'm left with a confusing file system very unlike what I would expect from what was a standard Debian setup.
It contains file folders such as "block","bus","char","disk","dri","mapper"... and files like "sda1","sdb","sdb1","tty","vcs".
I'm totally stumped as to how I should progress, and I'm pretty convinced the hard disk isn't broken and that I'm just mounting it incorrectly. How can I successfully mount the disk so I can access my files? Any help would be greatly appreciated.
OK, you are trying to mount the entire disk instead of an individual partition, which is why you are getting the error. In short, the command you need is:
mount /dev/sdb1 /mnt/usb
The file /dev/sdb references the entire disk as a block device. This includes the partition table at the start, which is why mount can't find a filesystem there. The file /dev/sdb1 references the first partition, which is where your filesystem will be. From the looks of your fdisk output, this is not an NTFS partition, since NTFS is a Windows filesystem and the partition is marked as Linux (most likely you have ext4, unless you specifically set up something different).
To add a quick explanation of devtmpfs: this is a special filesystem that contains the block device files, which are managed by udev. You can google both for more information, but by now I'm sure you know it's not what you are looking for.
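If you want to double-check the filesystem type before mounting, something like the following should report it:
sudo blkid /dev/sdb1    # prints the detected TYPE (e.g. ext4) and UUID
lsblk -f /dev/sdb       # lists each partition with its filesystem type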

Understanding Linux partitions with Amazon EC2

I am relatively new to Linux. In one of our projects, we use an Amazon EC2 instance for processing some files. We upload the files to S3 after processing. The EC2 instance is booted using an existing AMI.
Recently I got a "no space left on disk" error, and hence the processing of files halted. I cleaned up some older files and the processing continued.
Now when I look at available space using df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see my files are affecting only /dev/xvda1.
I have the following queries:
What is the use of the other partitions when I can see my files only affecting /dev/xvda1?
It looks like we are effectively using only 10 GB of space and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1 or directly store files in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount a partition in the appropriate place.
If for example your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can of course use the other partition instead if you prefer.
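If you want the remount to survive a reboot, the usual approach is an /etc/fstab entry along these lines (the filesystem type and options here are assumptions; adjust them to your setup):
# /etc/fstab - mount the second disk at /var/mydir on boot
/dev/xvdb   /var/mydir   ext4   defaults,nofail   0   2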
