Map LVM volume to physical volume - Linux

lsblk provides output in this format:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 300G 0 disk
sda1 8:1 0 500M 0 part /boot
sda2 8:2 0 299.5G 0 part
vg_data1-lv_root (dm-0) 253:0 0 50G 0 lvm /
vg_data2-lv_swap (dm-1) 253:1 0 7.7G 0 lvm [SWAP]
vg_data3-LogVol04 (dm-2) 253:2 0 46.5G 0 lvm
vg_data4-LogVol03 (dm-3) 253:3 0 97.7G 0 lvm /map1
vg_data5-LogVol02 (dm-4) 253:4 0 97.7G 0 lvm /map2
sdb 8:16 0 50G 0 disk
For a mounted volume, say /map1, how do I directly get the physical volume associated with it? Is there any direct command to fetch this information?

There is no direct command to show that information for a mount. You can run
lvdisplay -m
which will show which physical volumes are currently being used by the logical volume.
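For a mountpoint you first have to resolve which logical volume backs it; a minimal sketch, assuming the /map1 mountpoint from the question and that findmnt is available:
sudo lvdisplay -m "$(findmnt -n -o SOURCE /map1)"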
Remember, though, that there is no such thing as a direct association between a logical volume and a physical volume. Logical volumes are associated with volume groups. Volume groups have a pool of physical volumes over which they can distribute any logical volume. If you always want to know that a given LV is on a given PV, you have to restrict the VG to only having that one PV, which rather misses the point. You can use pvmove to push extents off a PV (sometimes useful for maintenance), but you can't stop new extents being created on it if logical volumes are extended or created.
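For that maintenance case, pvmove might look like this (a sketch; /dev/sdb1 is a hypothetical PV to empty):
pvmove /dev/sdb1              # migrate all allocated extents off /dev/sdb1
pvmove -n lv_root /dev/sdb1   # or move only the extents belonging to lv_root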
As to why there is no such potentially useful command...
LVM is not ZFS. ZFS is a complete storage and filesystem management system, managing both storage (at several levels of abstraction) and the mounting of filesystems. LVM, in contrast, is just one layer of the Linux storage stack. It provides a layer of abstraction on top of physical storage devices and makes no assumptions about how the logical volumes are used.

Leaving the grep/awk/cut/whatever to you, this will show which PVs each LV actually uses:
lvs -o +devices
You'll get a separate line for each PV used by a given LV, so if an LV has extents on three PVs you will see three lines for that LV. The PV device node path is followed by the starting extent (I think) of the data on that PV in parentheses.
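If you only care about one mounted filesystem, you can narrow it down; a sketch assuming the /map1 mountpoint and that findmnt is available:
lv=$(findmnt -n -o SOURCE /map1)                 # e.g. /dev/mapper/vg_data4-LogVol03
sudo lvs --noheadings -o lv_name,devices "$lv"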

I need to emphasize that there is no direct relation between a mountpoint (logical volume) and a physical volume in LVM. This is one of its design goals.
However, you can traverse the associations between the logical volume, the volume group and the physical volumes assigned to that group. This only tells you that the data is stored on one of those physical volumes, not where exactly.
I couldn't find a command which produces the output directly, but you can piece something together using mount, lvdisplay, vgdisplay and awk/sed:
mp=/mnt
vgdisplay -v $(lvdisplay $(mount | awk -v mp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}')
I'm using the shell variable mp to pass the mount point to the command. (You need to execute the command as root or using sudo.)
For my test-scenario it outputs:
...
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
...
VG Size 992.00 MiB
PE Size 4.00 MiB
Total PE 248
Alloc PE / Size 125 / 500.00 MiB
Free PE / Size 123 / 492.00 MiB
VG UUID VfOdHF-UR1K-91Wk-DP4h-zl3A-4UUk-iB90N7
--- Logical volume ---
LV Path /dev/vg1/testlv
LV Name testlv
VG Name vg1
LV UUID P0rgsf-qPcw-diji-YUxx-HvZV-LOe0-Iq0TQz
...
Block device 252:0
--- Physical volumes ---
PV Name /dev/loop0
PV UUID Qwijfr-pxt3-qcQW-jl8q-Q6Uj-em1f-AVXd1L
PV Status allocatable
Total PE / Free PE 124 / 0
PV Name /dev/loop1
PV UUID sWFfXp-lpHv-eoUI-KZhj-gC06-jfwE-pe0oU2
PV Status allocatable
Total PE / Free PE 124 / 123
If you only want to display the physical volumes, you can pipe the result of the above command through sed:
above command | sed -n '/--- Physical volumes ---/,$p'
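Put together, that might look like this (a sketch reusing the hypothetical /mnt mount point from above):
mp=/mnt
vgdisplay -v $(lvdisplay $(mount | awk -v mp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}') | sed -n '/--- Physical volumes ---/,$p'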

# Find the device backing /map1; if it is an LVM (device-mapper) device, list its PVs, otherwise print the device itself:
dev=$(df /map1 | tail -n 1 | awk '{print $1}')
echo "$dev" | grep -q '^/dev/mapper' && lvdisplay -m "$dev" 2>/dev/null | awk '/Physical volume/{print $3}' || echo "$dev"

Related

How do I assign a new disk/LV to the root folder "/", which is already mounted from the sdb2 partition?

[root@my-linux-vm ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 16G 0 disk
└─sda1 8:1 0 16G 0 part
sdb 8:16 0 10G 0 disk
├─sdb1 8:17 0 2M 0 part
└─sdb2 8:18 0 10G 0 part /
sdc 8:32 0 12G 0 disk
└─sdc1 8:33 0 12G 0 part
└─vg_new_root-lv0 252:0 0 11G 0 lvm
sr0 11:0 1 1024M 0 rom
Given the above partition/disk situation,
can I mount the vg_new_root-lv0 LV onto the root ("/") folder in order to extend the root capacity beyond the sdb2 space?
The short answer is no, based on your current configuration.
Because the / root filesystem is not part of LVM, there is no easy way to expand its capacity.
My suggestion would be to run a disk space report to confirm which directory or service is using a significant amount of disk space and then, if possible, move that data onto the new sdc1 drive / vg_new_root-lv0 logical volume. The LV needs to be formatted and mounted before it is ready to use. Once mounted, stop your application and move the data to the new filesystem (e.g. /mnt/data). After you confirm that the data has been moved, start your application again, test, and then remove the data from its original location on the sdb2 / root filesystem to free up space.
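The formatting and mounting steps might look like this (a sketch; ext4 and the /mnt/data mount point are assumptions, adjust to taste):
mkfs.ext4 /dev/vg_new_root/lv0                  # create a filesystem on the new LV
mkdir -p /mnt/data
mount /dev/vg_new_root/lv0 /mnt/data
echo '/dev/vg_new_root/lv0 /mnt/data ext4 defaults 0 2' >> /etc/fstab   # make it persistent across reboots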
Run the one-liner below to get a disk usage report and confirm what you can remove, compress or move.
echo -n "Type Filesystem: ";read FS;NUMRESULTS=20;resize;clear;date;df -h $FS;echo "Largest Directories:"; du -x $FS 2>/dev/null| sort -rnk1| head -n $NUMRESULTS| awk '{printf "%d MB %s\n", $1/1024,$2}';echo "Largest Files:"; nice -n 19 find $FS -mount -type f -ls 2>/dev/null| sort -rnk7| head -n $NUMRESULTS|awk '{printf "%d MB\t%s\n", ($7/1024)/1024,$NF}'

Could not figure out how big one extent is

I got homework from a DevOps school, and one of the LVM questions is below. I am confused about extents. The teacher said it should be calculated like this: 20 x 25 = 500 MB, so the partition should be made about 600 MB to leave some extra space. But from my Google research I found that 1 extent = 4 MB, so 25 x 4 = 100 MB, LV = 80 MB???
Create logical volume "datashare" inside volume group called "datagroup"
Create volume group "datagroup" from partition using 25M extents
Create logical volume with 20 extents /dev/datagroup/datashare
The disk I am using is 20GB in total size.
[root@centos7 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 20G 0 disk
I create my first Physical Volume.
[root@centos7 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
I create my first Volume Group, setting 32 MB as the physical extent (PE) size (the default is 4 MB); this defines how the space is allocated (in chunks of the PE size).
[root@centos7 ~]# vgcreate -s 32M myvg /dev/sdb
Volume group "myvg" successfully created
So my /dev/sdb disk size is 20 GB. I created the PV with pvcreate, and after this I created a new Volume Group with a physical extent (PE) size of 32 MB.
Then I confirm this via the PE Size field.
[root@centos7 ~]# vgdisplay myvg
--- Volume group ---
VG Name myvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.97 GiB
PE Size 32.00 MiB
Total PE 639
Alloc PE / Size 0 / 0
Free PE / Size 639 / <19.97 GiB
VG UUID m3wDvh-i0aH-5Zr2-0ya7-1GaA-mLb2-Umd9x3
So here is the math: convert the 20 GB disk to MB (20 * 1024 = 20480 MB).
Then 20480 MB / 32 MB (the PE size I wanted) = 640. vgdisplay shows Total PE 639 rather than the expected 640 because the LVM metadata at the start of the PV takes a little space, so there is not a full 32 MB left over for the last extent.
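You can sanity-check that figure from the shell:
echo $(( 20 * 1024 / 32 ))    # 640 theoretical extents; vgdisplay reports 639 after metadata overhead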
So if you need to create a new Logical Volume (LV):
You can allocate by extents (each PE is 32 MB and we have 639 available to use). So let's say I want a new LV with 50 extents: 50 * 32 MB (PE size) = 1600 MB, i.e. about 1.6 GB (lvdisplay reports 1.56 GiB). (Please note Current LE 50 in the output below; seen from the LV, the extents are called logical extents, LE.)
[root@centos7 ~]# lvcreate -l 50 -n mylv1 myvg
Logical volume "mylv1" created.
[root@centos7 ~]# lvdisplay /dev/myvg/mylv1
--- Logical volume ---
LV Path /dev/myvg/mylv1
LV Name mylv1
VG Name myvg
LV UUID BuQsPK-UKWL-tdVv-bFkR-X2md-zG3o-xzQIKk
LV Write Access read/write
LV Creation host, time centos7, 2021-10-10 20:38:33 +0000
LV Status available
# open 0
LV Size 1.56 GiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
or
You can ask LVM to create an LV with a specific size, like 950 MB: 950 / 32 MB (PE size) = 29.6875, but LVM cannot use a fraction of an extent, so it rounds up to 30 PE (Current LE 30), giving 30 * 32 MB = 960 MiB.
[root@centos7 ~]# lvcreate -L 950MB -n mylv2 myvg
Rounding up size to full physical extent 960.00 MiB
Logical volume "mylv2" created.
[root@centos7 ~]# lvdisplay /dev/myvg/mylv2
--- Logical volume ---
LV Path /dev/myvg/mylv2
LV Name mylv2
VG Name myvg
LV UUID eJrAY2-Pb1x-VBbq-k8cI-vIlq-Tg3s-CsRFsB
LV Write Access read/write
LV Creation host, time centos7, 2021-10-10 20:40:33 +0000
LV Status available
# open 0
LV Size 960.00 MiB
Current LE 30
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
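Back to the original homework: with 25M extents, 20 extents give 20 * 25 = 500 MB, so the teacher's calculation is the right one (4 MB is only the default PE size). A sketch of the commands, assuming the partition to use is /dev/sdb1 (hypothetical) and that your lvm2 version accepts a non-power-of-two extent size:
pvcreate /dev/sdb1
vgcreate -s 25M datagroup /dev/sdb1      # 25 MB physical extents
lvcreate -l 20 -n datashare datagroup    # 20 extents * 25 MB = 500 MB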

Determine WWID of LUN from mapped drive on Linux

I am trying to establish whether there is an easier method to determine the WWID of an iSCSI LUN associated with a Linux filesystem or mountpoint.
A frequent problem we have is a user requesting a disk expansion on a RHEL system with multiple iSCSI LUNs connected. The user provides us with the path their LUN is mounted on, and from this we need to establish which LUN they are referring to so that we can make the appropriate increase on the storage side.
Currently we run df -h to get the filesystem name, pvdisplay to get the VG name and then multipath -v4 -ll | grep "^mpath" to get the WWID. This feels messy, long-winded and prone to inconsistent interpretation.
Is there a more concise command we can run to determine the WWID of the device?
Here's one approach. The output format leaves something to be desired - it's more suited to eyeballs than programs.
lsblk understands the mapping of a mounted filesystem down through the LVM and multipath layers to the underlying block devices. In the output below, /dev/sdc is my iSCSI-attached LUN, attached via one path to the target. It contains the volume group vg1 and a logical volume lv1. /mnt/tmp is where I have the filesystem on the LV mounted.
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 128M 0 disk
└─360a010a0b43e87ab1962194c4008dc35 253:4 0 128M 0 mpath
└─vg1-lv1 253:3 0 124M 0 lvm /mnt/tmp
At the second level is the SCSI WWID (360a010...), courtesy of multipathd.
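If you want something a bit more script-friendly, this sketch (assuming the same /mnt/tmp mount point) walks from the mounted filesystem down the stack and prints only the mpath device name, which is the WWID:
lsblk -nrs -o NAME,TYPE "$(findmnt -n -o SOURCE /mnt/tmp)" | awk '$2 == "mpath" {print $1}'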

Command lvcreate throwing an invalid flag error?

I'm working on a group project that involves having to use the command lvcreate. When trying to run lvcreate with certain flags, this error message appears (the flags passed in are in the error message):
Error: Error Creating LVM LV for new image: Could not create thin LV named custom_vol0: Failed to run: lvcreate -i 2 -I 64K -Wy --yes --thin -n custom_vol0 --virtualsize 10GB poolA/LXDThinPool: Command does not accept option: --stripes 2.
Command does not accept option: --stripesize 64K
This is the info of the storage pool we are trying to create custom_vol0 on
--- Volume group ---
VG Name poolA
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <27.94 GiB
PE Size 4.00 MiB
Total PE 7152
Alloc PE / Size 7152 / <27.94 GiB
Free PE / Size 0 / 0
VG UUID zJfbN1-0x7E-gteT-r0SV-Grh4-ewtc-fikl38
Can a thinly provisioned logical volume have a greater number of stripes than the number of physical volumes in the volume group? Is the mix of flags above an issue? The error message is not very descriptive.
Any help would be appreciated, thanks!

missing partition in server centos 6.1

I used the command df -h on my CentOS 6.1; here's the output:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 2.3G 45G 5% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 485M 35M 425M 8% /boot
/dev/mapper/VolGroup-lv_home
2.0T 199M 1.9T 1% /home
I found out that the hard disk is two terabytes, but when I used the command cat /proc/partitions | more, here's the output:
[root@localhost sysconfig]# cat /proc/partitions | more
major minor #blocks name
8 0 4293656576 sda
8 1 512000 sda1
8 2 2146970624 sda2
253 0 52428800 dm-0
253 1 14417920 dm-1
253 2 2080120832 dm-2
You can see on the first line that it is 4396.7 GB. Why is it that I can only see 2 TB? How can I find the missing 2 TB and make it a partition?
I also used the command lsblk; here is the output:
[root@localhost ~]# lblsk
-bash: lblsk: command not found
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO MOUNTPOINT
sda 8:0 0 4T 0
├─sda1 8:1 0 500M 0 /boot
└─sda2 8:2 0 2T 0
  ├─VolGroup-lv_root (dm-0) 253:0 0 50G 0 /
  ├─VolGroup-lv_swap (dm-1) 253:1 0 13.8G 0 [SWAP]
  └─VolGroup-lv_home (dm-2) 253:2 0 2T 0 /home
sr0 11:0 1 1024M 0
Using parted /dev/sda I typed the print free command; here's the output:
(parted) print free
Model: DELL PERC 6/i (scsi)
Disk /dev/sda: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 2199GB 2198GB primary lvm
2199GB 4397GB 2198GB Free Space
I was wrong, sorry. As you can see in the parted print free output, you have 2 MBR partitions (boot and lvm) and 2198 GB of free space (last row).
If you want to use all of your space you have to use a GPT partition table. Unlike MBR partitions, which can only address up to 2 TB, GPT partitions can address your whole disk, up to 8 ZiB (zebibytes).
You can try to convert the MBR partition table to GPT (example 1, example 2), though I strongly recommend backing up your data first.
You are using tools that show info from different layers of your system and interpreting the output incorrectly.
df, according to its man page, displays the space available on all currently mounted file systems.
/proc/partitions holds info about the partitions on your drive, the physical device, and shows their sizes as a number of blocks. The #blocks column is in 1024-byte (1 KiB) units, not 512-byte sectors.
So the sda figure of 4293656576 is a size in KiB:
4293656576 KiB = 4094.75 GiB, or about 4396.7 GB, which matches what parted and lsblk report for the whole disk.
(Here 1 GiB = 2^30 bytes and 1 GB = 10^9 bytes.)
If you want to see the size of your disk, use fdisk -l <device name>.
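To cross-check the raw disk size in bytes without worrying about units at all (a sketch):
blockdev --getsize64 /dev/sda    # whole-disk size in bytes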
