Number of inodes in a partition not matching up to the maximum number of inodes the partition should support - linux

We are using Amazon EBS to store a large number of small files (<10KB) in a 3-level directory structure.
~/lists# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 3.9G 5.5G 42% /
tmpfs 854M 0 854M 0% /lib/init/rw
varrun 854M 64K 854M 1% /var/run
varlock 854M 0 854M 0% /var/lock
udev 854M 80K 854M 1% /dev
tmpfs 854M 0 854M 0% /dev/shm
/dev/sda2 147G 80G 60G 58% /mnt
/dev/sdj 197G 60G 128G 32% /vol
The partition in question is /vol (size: 200GB)
~/lists# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 655360 26541 628819 5% /
tmpfs 186059 3 186056 1% /lib/init/rw
varrun 186059 31 186028 1% /var/run
varlock 186059 2 186057 1% /var/lock
udev 186059 824 185235 1% /dev
tmpfs 186059 1 186058 1% /dev/shm
/dev/sda2 19546112 17573097 1973015 90% /mnt
/dev/sdj 13107200 13107200 0 100% /vol
~/lists# sudo /sbin/dumpe2fs /dev/sdj | grep "Block size"
dumpe2fs 1.41.4 (27-Jan-2009)
Block size: 4096
The number of inodes for the /vol partition is 13 million+. The block size is 4096. Taking the block size as 4096, the number of inodes the 200GB partition (ext3) should support is 52 million+ (maximum inode calculation: volume size in bytes / 2^12). So why does the partition only support 13 million inodes?

I'm pretty sure that inodes are allocated statically when you create the file system (using mkfs.ext3 in this case). For whatever reason, mkfs.ext3 decided to reserve 13 million inodes and now you can't create any more files.
See this 2001 discussion of inodes
The Wikipedia ext3 page has a footnote explaining this more concisely: wiki link
Also, inodes are allocated per file (not per block), which is why there are only 13M inodes - mkfs.ext3 must have used its default bytes-per-inode ratio of 16 KB (13,107,200 inodes × 16,384 bytes = 200 GiB), which would account for the issue you're seeing.
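Note that the inode count is fixed when the file system is created; tune2fs cannot add inodes later, so the only fix is to recreate the file system with a smaller bytes-per-inode ratio and reload the data. A hedged sketch, assuming /dev/sdj can be unmounted and rebuilt (this erases the volume, so take a snapshot or copy the data off first):
sudo umount /vol
sudo mkfs.ext3 -i 4096 /dev/sdj    # one inode per 4 KiB of space, roughly 52 million inodes on 200 GB
sudo mount /dev/sdj /vol
df -i /vol                         # confirm the new inode count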

Related

/dev/mapper/RHELCSB-Home marked as full when it is not after verification

I was trying to copy a 1.5GiB file from one location to another and was warned that my disk space was full, so I ran df -h to verify, which gave the following output:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 114M 16G 1% /dev/shm
tmpfs 16G 2.0M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root 50G 11G 40G 21% /
/dev/nvme0n1p2 3.0G 436M 2.6G 15% /boot
/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi
/dev/mapper/RHELCSB-Home 100G 100G 438M 100% /home
tmpfs 3.1G 88K 3.1G 1% /run/user/4204967
where /dev/mapper/RHELCSB-Home seemed to cause the issue. But when running sudo du -xsh /dev/mapper/RHELCSB-Home, I got the following result:
0 /dev/mapper/RHELCSB-Home
and the same thing for /dev/ and /dev/mapper/. After researching this issue, I figured it might have been caused by undeleted log files in /var/log/, but the total size of the files there is nowhere near 100GiB. What could cause my disk space to be full?
Additional context: I was running a local postgresql database when this happened, but I can't see how that relates to my issue, as the postgres log files are not taking up that much space either.
The issue was solved by deleting podman container volumes in ~/.local/share/containers/
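Two details are worth noting: du has to be pointed at the mount point (/home), not at the device node under /dev/mapper, which is why it reported 0; and container storage under the home directory is easy to overlook. A hedged sketch of how to check and reclaim the space (standard podman CLI assumed; adjust paths to your setup):
sudo du -xsh /home                  # measure the mounted filesystem, not the device file
du -sh ~/.local/share/containers    # see how much podman's storage is holding
podman system prune -a --volumes    # remove unused images, containers and volumes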

Linux partition not showing full size [closed]

I have a Linux system where the disk space shows as only 29GB, but when I look at the partition with the parted print command it shows as a 64GB partition. I'm not sure whether the remaining disk space is unallocated, mounted in other folders, or stuck in "tmpfs", or how to add it to the primary partition. This is on Ubuntu 18.04. I would like the full 64 GB to be available at root. I appreciate any help!
When I run df -h, here are the results:
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.2M 3.2G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29G 25G 2.7G 91% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 976M 81M 829M 9% /boot
/dev/sda1 511M 4.4M 507M 1% /boot/efi
tmpfs 3.2G 0 3.2G 0% /run/user/1000
Results of the parted print command show a 64GB partition:
Model: ATA MSH-64 (scsi)
Disk /dev/sda: 63.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 538MB 537MB fat32 boot, esp
2 538MB 1612MB 1074MB ext4
3 1612MB 63.3GB 61.7GB
Results of vgs command:
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- <57.50g <28.75g
Results of the lvs command:
(talos-env) pradmin@pradmin:~$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 28.75g
Depending on the installation, the root file system might only use part of the volume group (VG).
Try the commands vgs and lvs to get information about your current setup. I assume that vgs shows about 30G of free space. You can enlarge the root logical volume using lvresize. After this you need to grow the file system as well; this depends on the file system type you are using. If you use extX, you will want to run resize2fs.
Edit based on the edited question:
Yes, everything can be done when the disk is mounted and in use.
BUT YOU NEED TO TAKE CARE ABOUT THE COMMANDS YOURSELF!!! A WRONG COMMAND MIGHT DESTROY YOUR SYSTEM.
PLEASE TAKE YOUR TIME TO MAKE YOURSELF COMFORTABLE WITH LVS BEFORE CHANGING THE SYSTEM.
There are many good tutorials which might help you, e.g.:
http://ryandoyle.net/posts/expanding-a-lvm-partition-to-fill-remaining-drive-space/
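For reference, a minimal sketch of those two steps, assuming the same VG/LV names that appear in the question and an ext4 root file system (verify with vgs and lvs first):
sudo lvresize -l +100%FREE /dev/ubuntu-vg/ubuntu-lv    # grow the LV into the free space of the VG
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv       # grow the ext4 file system to match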
The guidance from Andreas proved helpful. I managed to resize the logical volume to the full size of the partition using the following commands and sequence.
Resources that I found helpful:
https://www.redhat.com/sysadmin/resize-lvm-simple
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/ext4grow
root:~# lvs
  LV        VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ubuntu-lv ubuntu-vg -wi-ao---- <57.50g
Here you can see that the logical volume doesn't fill the full partition size
root:~# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  ubuntu-vg   1   1   0 wz--n- <57.50g <28.75g
Extend the logical volume to use 100% of the free space, using the path /dev/{VG from the lvs output}/{LV from the lvs output}:
root:~# lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 28.75 GiB (7360 extents) to <57.50 GiB (14719 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
Checked disk space and saw that it hadn't changed yet
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 29542388 25311328 2707348 91% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000
Resize the file system to the full size of the logical volume, using the Filesystem name from the df output above. Note this is an ext4 file system; you may have to use a different command for a different file system.
root:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 4, new_desc_blocks = 8
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 15072256 (4k) blocks long.
root:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 16390292 0 16390292 0% /dev
tmpfs 3284628 1164 3283464 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 59211724 25319316 31128948 45% /
tmpfs 16423128 0 16423128 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16423128 0 16423128 0% /sys/fs/cgroup
/dev/sda2 999320 82552 847956 9% /boot
/dev/sda1 523248 4492 518756 1% /boot/efi
tmpfs 3284624 0 3284624 0% /run/user/1000

"No space left on device" when using dd to create a disk image

I am trying to create a disk image of my Raspberry Pi Model 3 B+ onto a USB drive using dd. I know there are easier ways to do this on a Raspberry Pi, but I want to test the procedure on a 'sacrificial' system, which I then hope to use on another Linux computer running a much larger Ubuntu disk to create a backup. The OS is Raspbian Buster 10.
I have been following a procedure I found on an article here: https://www.makeuseof.com/tag/easily-clone-restore-linux-disk-image-dd/
The USB drive has 64GB capacity and has been formatted, initially as exFAT, but I also tried NTFS thinking maybe that was the issue. The command ended with the same error; however, each time I have tried this the file size transferred has been different, varying from 2-8GB before the error occurred.
This is to identify my drives - the SD card is "mmcblk" and my USB drive is "sda", called "NINJA":
pi@raspberrypi:~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 57.9G 0 disk
└─sda1 8:1 1 57.9G 0 part
mmcblk0 179:0 0 14.9G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /boot
└─mmcblk0p2 179:2 0 14.6G 0 part /
This is the command I tried to use:
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img
and this is the output:
dd: error writing '/media/pi/NINJA/raspibackup.img': No space left on device
605+0 records in
604+0 records out
2535124992 bytes (2.5 GB, 2.4 GiB) copied, 325.617 s, 7.8 MB/s
Check how much disk space is "Avail" on the target device.
Example:
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 41M 456M 9% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 6.2G 6.2G 172K 100% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
Terminology:
df: DiskFree
-h: Human Readable Sizes (Ex: 6.2G instead of 6485900)
In this example, let's say I want to make a backup of the Boot drive (/dev/sda1) and save it in my Local User Home Folder on my Root Drive (/dev/mapper/centos-root).
When I do this, I will get an error that looks like:
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
dd: error writing 'boot.img': No space left on device
1736905+0 records in
1736904+0 records out
889294848 bytes (889 MB) copied, 4.76575 s, 187 MB/s
Terminology:
sudo: Super User Do
dd: Disk Duplicate
if: Input File (source)
of: Output File (destination)
The system is trying to copy ALL of /dev/sda1 (including free space) to boot.img, which is impossible here because /dev/sda1 is 1014M and there is only 172K of space left on /dev/mapper/centos-root.
With that said, the size of /dev/sda is actually 16G total! Which means that about 8G of it is not yet allocated to any logical volume.
My /dev/sda1 should be 1G and my /dev/sda2 (centos-root) should be 15G... yet centos-root is currently only 6.2G.
[jack@server1 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 16G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 15G 0 part
├─centos-root 253:0 0 6.2G 0 lvm /
└─centos-swap 253:1 0 820M 0 lvm [SWAP]
sr0 11:0 1 552M 0 rom /run/media/jack/CentOS 7 x86_64
The centos-root logical volume can be extended by doing the following:
[jack@server1 ~]$ sudo lvextend -L +8G /dev/mapper/centos-root
[jack@server1 ~]$ sudo xfs_growfs /dev/mapper/centos-root
Now that my partition is extended, I can do another DiskFree command to double check.
[jack@server1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 484M 0 484M 0% /dev
tmpfs 496M 33M 463M 7% /dev/shm
tmpfs 496M 6.9M 489M 2% /run
tmpfs 496M 0 496M 0% /sys/fs/cgroup
/dev/mapper/centos-root 15G 7.0G 7.3G 49% /
/dev/sda1 1014M 166M 849M 17% /boot
tmpfs 100M 24K 100M 1% /run/user/1000
/dev/sr0 552M 552M 0 100% /run/media/jack/CentOS 7 x86_64
My root partition is now 15G! Now I can perform my backup of the /dev/sda1 partition!
[jack@server1 ~]$ sudo dd if=/dev/sda1 of=boot.img
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 5.59741 s, 192 MB/s
Mission Complete!
sda1 is not mounted at /media/pi/NINJA/, so the image you create is actually being written to the mmcblk0p2 partition.
Since mmcblk0 is by definition larger than mmcblk0p2, you inevitably run out of space on it.
Solution:
You need to mount sda1 first, using sudo mount /dev/sda1 /media/pi/NINJA/, and then try your dd command again.
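For illustration, a hedged sequence under the same assumptions as the question (the USB partition is /dev/sda1 and the mount point is /media/pi/NINJA; status=progress is optional but shows how far the copy has got):
pi@raspberrypi:~ $ sudo mount /dev/sda1 /media/pi/NINJA/
pi@raspberrypi:~ $ df -h /media/pi/NINJA    # confirm the ~58G USB drive is mounted there
pi@raspberrypi:~ $ sudo dd bs=4M if=/dev/mmcblk0 of=/media/pi/NINJA/raspibackup.img status=progress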

Why does my primary partition not reflect the total disk space?

I remotely ssh'd and resized my primary partition with parted (rebooted as well and updated /etc/fstab) to give it more space.
Why on earth is my primary ext4 partition not reflecting the free space?
~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 985M 0 985M 0% /dev
tmpfs 200M 3.5M 197M 2% /run
/dev/sda1 7.8G 7.5G 0 100% /
tmpfs 999M 0 999M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 999M 0 999M 0% /sys/fs/cgroup
tmpfs 200M 0 200M 0% /run/user/0
~# fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x010920b9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 41943039 41940992 20G 83 Linux
Any ideas? I must have forgotten something really simple.
Execute parted and then run print free. This will show the free space on your disk.
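A hedged illustration of that check, plus the step that is most likely still missing here: if print free shows the partition already spans the whole disk (as the fdisk output above suggests), the ext4 file system itself still has to be grown to match, which resize2fs can do while the file system is mounted:
~# parted /dev/sda
(parted) print free
(parted) quit
~# resize2fs /dev/sda1    # grow the ext4 file system to fill the resized partition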

how to designate Cassandra data storage to certain file-system partition?

I use Cassandra to store my data, on CentOS.
The data always seems to be stored in the root partition, which is too small.
My file system partitions look like this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 25G 26G 49% /
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 17M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/sda2 494M 177M 318M 36% /boot
/dev/sda1 200M 9.8M 191M 5% /boot/efi
/dev/mapper/centos-home 873G 66G 807G 8% /home
tmpfs 1.6G 0 1.6G 0% /run/user/1001
Obviously the root partition (50 GB) is much smaller than the home partition (873GB).
Is there a way to change the setup so that the data is stored on the "/dev/mapper/centos-home" partition?
I need to use the command "sudo service cassandra start" to start Cassandra; without sudo, I don't have the permissions to start it.
Thanks!
Edit the $CASSANDRA_HOME/conf/cassandra.yaml file (sometimes it is located under /etc/cassandra, depending on how you installed Cassandra) and update the following properties:
hints_directory: /var/lib/cassandra/hints                  # put your own directory here (only available since Cassandra 3.x)
data_file_directories:                                     # put a list of directories here
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog          # put your own directory here
saved_caches_directory: /var/lib/cassandra/saved_caches    # put your own directory here
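As a hedged follow-up sketch, one way to put those directories on the large /home partition (example paths; stop Cassandra before moving anything and make sure the cassandra user owns the new location):
sudo service cassandra stop
sudo mkdir -p /home/cassandra/{data,commitlog,saved_caches,hints}
sudo chown -R cassandra:cassandra /home/cassandra
sudo rsync -a /var/lib/cassandra/ /home/cassandra/    # optional: carry over existing data
# point the cassandra.yaml directories above at /home/cassandra/..., then
sudo service cassandra start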
