How to check a disk for partitions for use in a script in Linux?

I'm scripting something in Bash for Linux systems. How would I check a disk for partitions in a robust manner?
I could use grep, awk, or sed to parse the output from fdisk, sfdisk, etc., but this doesn't seem to be an exact science.
I could also check if there are partitions in /dev, but it is also possible that the partitions exist and haven't been probed yet (via partprobe, as an example).
What would you recommend?

I think I figured out a reliable way. I accidentally learned some more features of partprobe while reading the man page:
-d Don’t update the kernel.
-s Show a summary of devices and their partitions.
Used together, I can scan a disk for partitions without updating the kernel and get reliable output to parse. It's still parsing text, but at least the output isn't as "human-oriented" as that of fdisk or sfdisk. The information is also read directly from the disk, so it doesn't rely on the kernel being up to date on the partition status for that disk.
Take a look:
On a disk with no partition table:
# partprobe -d -s /dev/sdb
(no output)
On a disk with a partition table but no partitions:
# partprobe -d -s /dev/sdb
/dev/sdb: msdos partitions
On a disk with a partition table and one partition:
# partprobe -d -s /dev/sdb
/dev/sdb: msdos partitions 1
On a disk with a partition table and multiple partitions:
# partprobe -d -s /dev/sda
/dev/sda: msdos partitions 1 2 3 4 <5 6 7>
It is important to note that the exit status was 0 in every case, regardless of whether a partition table or any partitions existed. I also noticed that the options cannot be grouped together (partprobe -d -s /dev/sdb works, while partprobe -ds /dev/sdb does not).
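Since the exit status is always 0, a script has to branch on the output text instead. A minimal sketch of what that parsing might look like (the device path is just an example):
disk=/dev/sdb
out=$(partprobe -d -s "$disk")
case "$out" in
    "")                    echo "$disk: no partition table" ;;
    *"partitions "[0-9]*)  echo "$disk: partition table with partitions" ;;
    *)                     echo "$disk: partition table, but no partitions" ;;
esac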

Another option is to run:
lsblk
See https://unix.stackexchange.com/a/108951
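For scripting, lsblk can produce machine-readable output, though note that it reflects the kernel's current view of the disk rather than what is actually on disk. A small sketch (device path is an example):
# -r raw output, -n no header, -o select only the TYPE column
if lsblk -rno TYPE /dev/sdb | grep -qx part; then
    echo "kernel sees at least one partition"
fi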

You could also use:
parted /dev/sda print 1 &> /dev/null; echo $?
If the first partition exists, the exit status is 0; otherwise it is non-zero.
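Wrapped in a script-friendly test (adding -s to keep parted non-interactive), that could look like:
if parted -s /dev/sda print 1 &> /dev/null; then
    echo "partition 1 exists"
else
    echo "no partition 1 (or no partition table)"
fi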

mkfs.vfat: unable to open {partition}: No such file or directory (command succeeds, but throws this error and blocks rest of script)

Update: I got this working but am still not 100% sure why. I've appended the fully and consistently working script to the end for reference.
I'm trying to script a series of disk-partitioning commands using sgdisk and mkfs.vfat. I'm working from a live USB (NixOS 21pre) with a blank 1TB M.2 SSD, creating a 1GB EFI boot partition and a 999GB ZFS partition.
Everything works up until I try to create a FAT32 filesystem on the EFI partition using mkfs.vfat, where I get the error in the title.
The odd thing is that the mkfs.vfat command succeeds, but it throws that error anyway and (because of set -e) blocks the rest of the script. Any idea why it's doing this and how to fix it?
Starting with an unformatted 1TB M.2 SSD:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Error: /dev/sda: unrecognised disk label
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
Script:
$ ls -la
total 4
drwxr-xr-x 2 nixos users 60 May 18 20:25 .
drwx------ 17 nixos users 360 May 18 15:24 ..
-rwxr-xr-x 1 nixos users 2225 May 18 19:59 partition.sh
$ cat partition.sh
#!/usr/bin/env bash
#make gpt partition table and boot & rpool partitions for ZFS on 1TB M.2 SSD
#error handling on
set -e
#wipe the disk with -Z, then create two partitions, a ~1GB (954MiB) EFI boot partition and a ZFS root partition consisting of the rest of the drive, then print the results
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
sgdisk -Z $DISK
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
sgdisk -p /dev/sda
#make a FAT32 filesystem on the EFI partition, then mount it
#mkfs.vfat -F 32 ${DISK}-part1 (troubleshooting with hardcoded version below)
mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkdir -p /mnt/boot
mount ${DISK}-part1 /mnt/boot
Result (everything fine until mkfs.vfat, which throws error and blocks the rest of the script):
$ sudo sh partition.sh
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries in memory.
Setting name!
partNum is 0
The operation has completed successfully.
Setting name!
partNum is 1
The operation has completed successfully.
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Model: WDC WDS100T2B0B-
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77ED6A41-E722-4FFB-92EC-975A37DBCB97
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1955839 954.0 MiB EF00 efi
2 1955840 1953525134 930.6 GiB BF01 zroot
mkfs.fat 4.1 (2017-01-24)
mkfs.vfat: unable to open /dev/disk/by-id/wwn-0x5001b448b94488f8-part1: No such file or directory
Verifying the partitioning and FAT32 creation commands worked:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1001MB 1000MB fat32 efi boot, esp
2 1001MB 1000GB 999GB zroot
Fwiw, the same command works on the commandline with no error:
$ sudo mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkfs.fat 4.1 (2017-01-24)
Success. But why is there no error on the command line, yet an error in the script?
Update: fully and consistently working script:
#!/usr/bin/env bash
#make UEFI (GPT) partition table and two partitions (FAT32 boot and ZFS rpool) on 1TB M.2 SSD
#error handling on
set -e
#vars
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
POOL='rpool'
#0. if /mnt/boot is mounted, umount it; if any NixOS filesystems are mounted, unmount them
if mount -l | grep -q '/mnt/boot'; then
umount -f /mnt/boot
fi
if mount -l | grep -q '/mnt/nix'; then
umount -fR /mnt
fi
#1. if a zfs pool exists, delete it
if zpool list | grep -q $POOL; then
zfs unmount -a
zpool export $POOL
zpool destroy -f $POOL
fi
#2. wipe the disk
sgdisk -Z $DISK
wipefs -a $DISK
#3. create two partitions, a ~1GB (954MiB) EFI boot partition and a ZFS root partition consisting of the rest of the drive, then print the results
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efiboot $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zfsroot $DISK
sgdisk -p $DISK
#4. notify the OS of partition updates, and print partition info
partprobe
parted ${DISK} print
#5. make a FAT32 filesystem on the EFI boot partition
mkfs.vfat -F 32 ${DISK}-part1
#6. notify the OS of partition updates, and print new partition info
partprobe
parted ${DISK} print
#mount the partitions in the nixos-zfs-pool-dataset-create.sh script. Make sure to mount the ZFS root dataset on /mnt before mounting any subdirectories of /mnt.
It may take time for the kernel to be notified about partition changes. Try calling partprobe before mkfs to ask the kernel to re-read the partition table.
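In other words, right after sgdisk rewrites the table, the /dev/disk/by-id/...-part1 symlink may not exist yet because udev has not finished processing the change. A sketch of a more robust sequence (udevadm settle waits for the udev event queue to drain; the polling loop is an extra safety net):
partprobe "$DISK"
udevadm settle --timeout=10
until [ -b "${DISK}-part1" ]; do sleep 0.1; done
mkfs.vfat -F 32 "${DISK}-part1"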

gcloud instance disk space

I am trying to do some computing in the cloud. I created a compute instance and attached external storage of about 10TB, but it seems I did something wrong: only 200GB is available to my datalab. Any comments would be helpful.
To check this I used
df -h
and
sudo lsblk
Thanks.
As I can see from the lsblk output, your datalab-pd disk has the right size,
but only 196 GB of it is usable.
This is likely because the filesystem does not occupy the entire disk.
You need to extend the filesystem.
For example, for an ext3 filesystem you would do:
- umount /dev/sdb # unmount the disk
- e2fsck -f /dev/sdb # check the filesystem (resize2fs requires a fresh forced check)
- resize2fs /dev/sdb
Run without a size parameter, resize2fs extends the filesystem to use all free space on the device.
More info: https://access.redhat.com/articles/1196353
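Put together for this case, the sequence might look like this (a sketch, assuming the filesystem sits directly on /dev/sdb and /mnt/datalab is a hypothetical mount point):
sudo umount /dev/sdb
sudo e2fsck -f /dev/sdb     # resize2fs requires a fresh forced check
sudo resize2fs /dev/sdb     # grow the filesystem to fill the device
sudo mount /dev/sdb /mnt/datalab
df -h /mnt/datalab          # verify the new size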

XFS No space left on device

I have a server setup of an XFS partition on LVM. While copying files to the home partition, "No space left on device" is displayed.
df -h displays sufficient space:
/dev/mapper/prod--vg-home 35G 21G 15G 60% /home
df -i also displays sufficient inodes:
/dev/mapper/prod--vg-home 36700160 379390 36320770 2% /home
I tried changing the maximum percentage of space the inodes can occupy:
xfs_growfs -m 25 /dev/mapper/prod--vg-home
This value can easily be decreased and increased.
While experimenting with this setting, I noticed that decreasing it to 3%, increasing it back to 25%, and deleting some files allowed me to add a lot more files again.
xfs_info displays:
meta-data=/dev/mapper/prod--vg-home isize=256 agcount=14, agsize=655360 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=9175040, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
I did read about 64-bit inodes, but it seems to be applicable only for large drives (over 1TB).
Is there any other setting that could cause the "No space left on device" message?
Thank you
There is a bug with xfs_growfs which causes inodes to not be distributed properly across a partition. The solution is simply to remount with the inode64 option. For example, if the device were /dev/vda1, you would do the following:
mount -o remount,inode64 /dev/vda1
You can find more information about the bug at the following link:
http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_I_receive_No_space_left_on_device_after_xfs_growfs.3F
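To make the option persist across reboots, add inode64 to the options column of the filesystem's /etc/fstab entry (a sketch; adjust the device and mount point to your system):
/dev/mapper/prod--vg-home  /home  xfs  defaults,inode64  0  0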
You should also check for "deleted" files that are still held open by running processes, and restart those processes:
lsof -nP +L1

Fill a disk with an ext4 partition in a script

I tried to use parted for scripted partitioning like so:
parted -a optimal /dev/sda mklabel gpt mkpart primary ext4 1 -1
But it complains about -1 not being a recognized option, even though the same subcommand works at the parted prompt. So my question is: how do I use the same options in a script?
Finally found a solution:
parted -s -a optimal /dev/sda mklabel gpt -- mkpart primary ext4 1 -1s
The -- is very important for it to work here: it prevents the following ‘-1s’ last-sector indicator from being interpreted as an invalid command-line option.
You can also use the --script option. In this case, put the script part in single quotes.
Example:
parted --script /dev/sda 'mkpart primary ext4 1 -1'
I guess it's the fault of parted's argument parser. Try parted -a optimal /dev/sda mklabel gpt mkpart primary ext4 1 \-1 or parted -a optimal /dev/sda mklabel gpt mkpart primary ext4 1 \\-1
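For completeness, the accepted approach wrapped in a script (a sketch; the device is an example, and partprobe is used to ask the kernel to re-read the table before mkfs):
#!/usr/bin/env bash
set -e
DISK=/dev/sda                      # example target disk
parted -s -a optimal "$DISK" mklabel gpt -- mkpart primary ext4 1 -1s
partprobe "$DISK"
mkfs.ext4 "${DISK}1"               # e.g. /dev/sda1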

How to create loop partition from already existing partition

I believe /images/backups is using space within /images?
/dev/sdb1 820G 645G 135G 83% /images
/dev/loop0 296G 296G 0 100% /images/backups
I have a similar /images partition on another machine with 500G free, and I want to carve out 350G of it for /images/backups. How do I do that?
Is a simple loop mount enough to provide a specified amount of space, or should I create a file of the required size and mount that? If so, what mount options should be used to specify the size?
You'll need to create the destination with a fixed size, but can use a "sparse file" which doesn't actually have any blocks written to it yet (and which thus doesn't actually consume space until you write to it).
For instance:
dd if=/dev/zero of=file.img bs=1 count=0 seek=20G
will create a sparse file with an apparent size of 20GB. That said, actually writing 20GB of zeros to disk up-front (making the file non-sparse) will give faster writes and less fragmentation.
This can be attached to a loopback device with the losetup command, have a filesystem created, and be mounted:
losetup /dev/loop1 file.img
mke2fs -j /dev/loop1
mount /dev/loop1 /mnt/somewhere
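If /dev/loop1 happens to be in use, losetup can allocate the next free loop device and print its name (a sketch):
LOOPDEV=$(losetup -f --show file.img)
mke2fs -j "$LOOPDEV"
mount "$LOOPDEV" /mnt/somewhere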
If you want to know if an existing file is sparse, the following will do the trick (on a system with GNU tools; some of the below is not supported in a pure POSIX environment):
{
  read block_count block_size file_size
  if (( block_count * block_size < file_size )); then
    echo "Sparse"
  else
    echo "Non-Sparse"
  fi
} < <(stat --format='%b %B %s'$'\n' /images/backups.img)
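A quicker eyeball check is to compare the apparent size with the space actually allocated:
$ ls -lh file.img    # apparent size (e.g. 20G)
$ du -h file.img     # allocated blocks (near 0 for a fresh sparse file)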
