Why does my primary partition not reflect the total disk space?

I ssh'd in remotely and resized my primary partition with parted (I also rebooted and updated /etc/fstab) to give it more space.
Why on earth is my primary ext4 partition not reflecting the free space?
~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            985M     0  985M   0% /dev
tmpfs           200M  3.5M  197M   2% /run
/dev/sda1       7.8G  7.5G     0 100% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
tmpfs           200M     0  200M   0% /run/user/0
~# fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x010920b9
Device     Boot Start      End  Sectors Size Id Type
/dev/sda1  *     2048 41943039 41940992  20G 83 Linux
Any ideas? I must have forgotten something really simple.

Execute parted and then print free. This will show the free (unallocated) space on your disk.
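Note that the partition table above already shows sda1 spanning the full 20G while df still reports a 7.8G filesystem, so the step most likely missing is growing the ext4 filesystem itself. A minimal sketch, assuming the root filesystem is on /dev/sda1 (resize2fs can grow a mounted ext4 filesystem online):
~# parted /dev/sda print free    # confirm the partition now spans the whole disk
~# resize2fs /dev/sda1           # grow the ext4 filesystem to fill the partition
~# df -h /                       # Size should now show roughly 20G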

How to check the swap partition in Ubuntu

I found something strange: the output of free shows:
> free
              total        used        free      shared  buff/cache   available
Mem:        6060516     1258584     3614828       34340     1187104     4469908
Swap:       2097148           0     2097148
But the result of df:
> df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.9G     0  2.9G   0% /dev
tmpfs           592M  1.9M  591M   1% /run
/dev/sda1        98G   85G  8.8G  91% /
tmpfs           2.9G   40K  2.9G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           2.9G     0  2.9G   0% /sys/fs/cgroup
/dev/loop0      161M  161M     0 100% /snap/gnome-3-28-1804/116
...
/dev/loop18     2.5M  2.5M     0 100% /snap/gnome-calculator/730
tmpfs           592M   32K  592M   1% /run/user/1000
There is no swap partition... I used the default configuration to install Ubuntu 18.04 in VMware.
> uname -a
Linux ubuntu 5.3.0-51-generic #44~18.04.2-Ubuntu SMP Thu Apr 23 14:27:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Is the swap partition enabled on the system?
Swap space is not a file system, and as a consequence it is not displayed by df, which works only on file systems.
Instead you can use swapon:
$ swapon
NAME       TYPE SIZE USED PRIO
/swap.img  file   2G   0B   -2
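Note that the swap here is a file (/swap.img), not a partition, which is also why it never shows up in fdisk or lsblk output; recent Ubuntu installers default to a swap file rather than a swap partition. The same information swapon prints is exposed in /proc/swaps, which is what it reads; illustrative output for this system:
$ cat /proc/swaps
Filename    Type  Size     Used  Priority
/swap.img   file  2097148  0     -2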
Besides swapon, suggested by pifor, you can also check your partitions, swap included, with the lsblk command:
$ lsblk
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda               8:0    0  80G  0 disk
├─sda1            8:1    0   1G  0 part /boot
└─sda2            8:2    0  79G  0 part
  ├─centos-root 253:0    0  50G  0 lvm  /
  ├─centos-swap 253:1    0   2G  0 lvm  [SWAP]
  └─centos-home 253:2    0  27G  0 lvm  /home

"No space left on device" when using dd to create a disk image

I am trying to create a disk image of my Raspberry Pi Model 3 B+ onto a USB drive using dd. I know there are easier ways to do this on a Raspberry Pi, but I want to test the procedure on a 'sacrificial' system, which I hope to then use on another Linux computer running a much larger Ubuntu disk to create a backup. The OS is Raspbian Buster 10.
I have been following the procedure in this article: https://www.makeuseof.com/tag/easily-clone-restore-linux-disk-image-dd/
The USB drive has a 64GB capacity and has been formatted, initially as exFAT, but I also tried NTFS, thinking maybe that was the issue. The command ended with the same error each time, although the amount transferred before the error occurred varied from 2-8GB.
This is to identify my drives - the SD card is "mmcblk0" and my USB drive is "sda", labelled "NINJA":
pi@raspberrypi:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    1 57.9G  0 disk
└─sda1        8:1    1 57.9G  0 part
mmcblk0     179:0    0 14.9G  0 disk
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 14.6G  0 part /
This is the command I tried to use:
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img
and this is the output:
dd: error writing '/media/pi/NINJA/raspibackup.img': No space left on device
605+0 records in
604+0 records out
2535124992 bytes (2.5 GB, 2.4 GiB) copied, 325.617 s, 7.8 MB/s
Check how much disk space is "Avail" on the target device.
Example:
[jack@server1 ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 484M     0  484M   0% /dev
tmpfs                    496M   41M  456M   9% /dev/shm
tmpfs                    496M  6.9M  489M   2% /run
tmpfs                    496M     0  496M   0% /sys/fs/cgroup
/dev/mapper/centos-root  6.2G  6.2G  172K 100% /
/dev/sda1               1014M  166M  849M  17% /boot
tmpfs                    100M   24K  100M   1% /run/user/1000
/dev/sr0                 552M  552M     0 100% /run/media/jack/CentOS 7 x86_64
Terminology:
df: disk free
-h: human-readable sizes (e.g. 6.2G instead of 6485900)
In this example, let's say I want to make a backup of the boot partition (/dev/sda1) and save it in my local user's home folder on the root filesystem (/dev/mapper/centos-root).
When I do this, I will get an error that looks like:
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
dd: error writing 'boot.img': No space left on device
1736905+0 records in
1736904+0 records out
889294848 bytes (889 MB) copied, 4.76575 s, 187 MB/s
Terminology:
sudo: superuser do
dd: "data duplicator" (historically, the name comes from the Data Definition statement in IBM JCL)
if: input file (source)
of: output file (destination)
The system is trying to copy ALL of /dev/sda1 (including its free space) to boot.img, which is impossible here because /dev/sda1 is 1014M and there is only 172K left on /dev/mapper/centos-root.
That said, /dev/sda is actually 16G in total, and much of it is unused: /dev/sda1 is 1G and /dev/sda2 is 15G, but the centos-root logical volume inside sda2 is currently only 6.2G, leaving roughly 8G of the volume group unallocated.
[jack@server1 ~]$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   16G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   15G  0 part
  ├─centos-root 253:0    0  6.2G  0 lvm  /
  └─centos-swap 253:1    0  820M  0 lvm  [SWAP]
sr0              11:0    1  552M  0 rom  /run/media/jack/CentOS 7 x86_64
This logical volume can be extended by doing the following (the root filesystem here is XFS, hence xfs_growfs):
[jack@server1 ~]$ sudo lvextend -L +8G /dev/mapper/centos-root
[jack@server1 ~]$ sudo xfs_growfs /dev/mapper/centos-root
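As an aside, lvextend can grow the filesystem in the same step via its -r (--resizefs) flag, which picks the appropriate resize tool for the filesystem automatically; a sketch of the equivalent one-liner:
[jack@server1 ~]$ sudo lvextend -r -L +8G /dev/mapper/centos-root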
Now that my logical volume is extended, I can run df again to double-check.
[jack@server1 ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 484M     0  484M   0% /dev
tmpfs                    496M   33M  463M   7% /dev/shm
tmpfs                    496M  6.9M  489M   2% /run
tmpfs                    496M     0  496M   0% /sys/fs/cgroup
/dev/mapper/centos-root   15G  7.0G  7.3G  49% /
/dev/sda1               1014M  166M  849M  17% /boot
tmpfs                    100M   24K  100M   1% /run/user/1000
/dev/sr0                 552M  552M     0 100% /run/media/jack/CentOS 7 x86_64
My root filesystem is now 15G! Now I can perform my backup of the /dev/sda1 partition:
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.59741 s, 192 MB/s
Mission Complete!
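As an aside, when the destination is short on space, the image can also be compressed on the fly; unused blocks in the partition compress to almost nothing. A sketch (the boot.img.gz name is illustrative):
[jack@server1 ~]$ sudo dd if=/dev/sda1 bs=4M | gzip > boot.img.gz
[jack@server1 ~]$ zcat boot.img.gz | sudo dd of=/dev/sda1 bs=4M    # to restore it later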
sda1 is not mounted at /media/pi/NINJA/, so the image you create is actually being written to the mmcblk0p2 partition (the root filesystem of the SD card).
Since mmcblk0 is by definition larger than mmcblk0p2, you are guaranteed to run out of space on it.
Solution:
You need to first mount sda1 using sudo mount /dev/sda1 /media/pi/NINJA/ and then try your dd command again.
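To avoid this trap in the future, it is worth confirming that the target path really is a mounted filesystem before starting the copy; a quick sketch:
pi@raspberrypi:~ $ sudo mount /dev/sda1 /media/pi/NINJA/
pi@raspberrypi:~ $ findmnt /media/pi/NINJA    # prints a mount entry only if something is mounted there
pi@raspberrypi:~ $ df -h /media/pi/NINJA      # should now report the USB drive's ~58G, not the SD card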

How to increase the hard disk space of a thin-provisioned VM

I created a thin-provisioned VM (CentOS 7) with a 50 GB hard disk, but it doesn't automatically grow when more space is needed. Can someone please tell me how to increase the space of the "/" filesystem?
[oracle@localhost ~]$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   14G   14G   16K 100% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G  912M  985M  49% /dev/shm
tmpfs                    1.9G   17M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1                497M  147M  351M  30% /boot
tmpfs                    380M     0  380M   0% /run/user/1001
tmpfs                    380M     0  380M   0% /run/user/1002
Below is the output of the pvs command.
[root@inches-rmdev01 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  15.51g 40.00m
Below is the output of the vgs command.
[root@inches-rmdev01 ~]# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   1   2   0 wz--n- 15.51g 40.00m
Below is the output of the lvs command.
[root@inches-rmdev01 ~]# lvs
  LV   VG     Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  root centos -wi-ao---- 13.87g
  swap centos -wi-ao----  1.60g
Below is the output of the fdisk command.
[root@inches-rmdev01 ~]# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009a61a
   Device Boot      Start       End    Blocks  Id System
/dev/sda1  *         2048   1026047    512000  83 Linux
/dev/sda2         1026048  33554431  16264192  8e Linux LVM
/dev/sda3        33554432 104857599  35651584  8e Linux LVM
Disk /dev/mapper/centos-root: 14.9 GB, 14889779200 bytes, 29081600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
In the fdisk -l output you can see that you have an unused ~35GB partition, /dev/sda3. To extend your root volume you can add this partition to LVM (the Logical Volume Manager):
pvcreate /dev/sda3
This will add /dev/sda3 to LVM as a new PV (physical volume).
The next step is to extend your root VG (volume group). In your case this is easy, since you've got only one VG:
vgextend centos /dev/sda3
Now you have added the ~35GB to your VG and can distribute it among your LVs (logical volumes).
Finally, you can add as much space as you need (up to the full ~35GB) to your root volume with the lvextend command.
If you want to use all of the free space:
lvextend -l +100%FREE /dev/mapper/centos-root
If you only want to add a certain amount (e.g. 1G):
lvextend -L +1G /dev/mapper/centos-root
And finally grow your filesystem. On CentOS 7 the root filesystem is XFS by default, so use xfs_growfs (resize2fs is the equivalent tool for ext2/3/4 filesystems):
xfs_growfs /
The LVM logic, with the command that inspects each layer:
1. Hard disk: fdisk -l
2. Physical volume: pvs
3. Volume group: vgs
4. Logical volume: lvs
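Putting the steps together, a minimal sketch of the whole sequence, assuming /dev/sda3 already exists as an unused LVM-type partition and the root filesystem is XFS (the CentOS 7 default):
pvcreate /dev/sda3                             # register the partition as a physical volume
vgextend centos /dev/sda3                      # add it to the centos volume group
lvextend -l +100%FREE /dev/mapper/centos-root  # hand all free extents to the root LV
xfs_growfs /                                   # grow the XFS filesystem (use resize2fs for ext4)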

VMware: enlarging space for a Linux VM

It's maybe a stupid question, but how can I enlarge my Linux machine from 20 to 40GB? I need to increase my / space. I made the disk bigger in VMware and it says 40GB now, but if I run:
df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/zabbix-root   19G   17G  789M  96% /
udev                     489M  4.0K  489M   1% /dev
tmpfs                    200M  276K  199M   1% /run
none                     5.0M     0  5.0M   0% /run/lock
none                     498M     0  498M   0% /run/shm
/dev/sda1                228M   25M  192M  12% /boot
or
fdisk -l
Disk /dev/mapper/zabbix-root: 20.1 GB, 20124270592 bytes
255 heads, 63 sectors/track, 2446 cylinders, total 39305216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/zabbix-root doesn't contain a valid partition table
Disk /dev/mapper/zabbix-swap_1: 1069 MB, 1069547520 bytes
255 heads, 63 sectors/track, 130 cylinders, total 2088960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
I still see only 20GB... How can I make the other 20GB available?
Thanks a lot.
Depending on your distribution, probably something like this:
Reboot, or rescan the SCSI bus so the kernel notices the new disk size: echo "- - -" > /sys/class/scsi_host/host0/rescan
Now that Linux is aware of the disk size change, use pvresize to extend your PV.
Using vgdisplay, make sure that you now have free space in your VG.
Extend your LV using lvextend and, finally, use resize2fs to grow your filesystem.
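A sketch of the whole sequence under stated assumptions: the PV device is /dev/sda5 (illustrative; check the real name with pvs), its partition already spans the enlarged disk, and the root filesystem is ext4:
echo "- - -" > /sys/class/scsi_host/host0/rescan   # make the kernel re-read the disk size
pvresize /dev/sda5                                 # grow the PV to fill its device
vgdisplay zabbix                                   # "Free PE / Size" should show the new room (VG name assumed from zabbix-root)
lvextend -l +100%FREE /dev/mapper/zabbix-root      # grow the root LV into the free space
resize2fs /dev/mapper/zabbix-root                  # grow the ext4 filesystem online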

Number of inodes in a partition not matching up to the maximum number of inodes the partition should support

We are using Amazon EBS to store a large number of small files (<10KB) in a 3-level directory structure.
~/lists# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       9.9G  3.9G  5.5G  42% /
tmpfs           854M     0  854M   0% /lib/init/rw
varrun          854M   64K  854M   1% /var/run
varlock         854M     0  854M   0% /var/lock
udev            854M   80K  854M   1% /dev
tmpfs           854M     0  854M   0% /dev/shm
/dev/sda2       147G   80G   60G  58% /mnt
/dev/sdj        197G   60G  128G  32% /vol
The partition in question is /vol (size: 200GB)
~/lists# df -i
Filesystem       Inodes    IUsed   IFree IUse% Mounted on
/dev/sda1        655360    26541  628819    5% /
tmpfs            186059        3  186056    1% /lib/init/rw
varrun           186059       31  186028    1% /var/run
varlock          186059        2  186057    1% /var/lock
udev             186059      824  185235    1% /dev
tmpfs            186059        1  186058    1% /dev/shm
/dev/sda2      19546112 17573097 1973015   90% /mnt
/dev/sdj       13107200 13107200       0  100% /vol
~/lists# sudo /sbin/dumpe2fs /dev/sdj | grep "Block size"
dumpe2fs 1.41.4 (27-Jan-2009)
Block size: 4096
The number of inodes on the /vol partition is 13 million+. The block size is 4096. Taking the block size as 4096, the number of inodes the 200GB (ext3) partition should support is 52 million+ (maximum inode calculation: volume size in bytes / 2^12). So why does the partition support only 13 million inodes?
I'm pretty sure that inodes are allocated statically when you create the filesystem (using mkfs.ext3 in this case). For whatever reason, mkfs.ext3 decided to reserve 13 million inodes and now you can't create any more files.
See this 2001 discussion of inodes
The Wikipedia ext3 page has a footnote explaining this more concisely: wiki link
Also, inodes are allocated per file (not per block), which is why there are only 13M of them - mkfs.ext3 must have used a bytes-per-inode ratio of 16 KiB rather than the 4 KiB block size: 200 GiB / 16 KiB = 13,107,200, which matches the df -i figure exactly and accounts for the issue you're seeing.
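The bytes-per-inode ratio is fixed when the filesystem is created and cannot be changed afterwards; to get more inodes, the filesystem has to be recreated. A sketch, assuming the data on /dev/sdj has been copied elsewhere first:
~/lists# dumpe2fs -h /dev/sdj | grep -iE 'inode count|block count|inode size'   # inspect the existing inode geometry
~/lists# mkfs.ext3 -i 4096 /dev/sdj   # one inode per 4 KiB, suited to small files - DESTROYS all data on /dev/sdj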
