Could not figure out how big one extent is - partition

I got this homework from a DevOps school, and one of the LVM questions is below. I am confused about extents. The teacher said it should be calculated like this: 20 x 25 = 500 MB, so the partition should be made around 600 MB to leave extra space! But from my Google research I found that 1 extent = 4 MB, so 25 x 4 = 100 MB, and the LV would be 20 x 4 = 80 MB???
Create logical volume "datashare" inside volume group called "datagroup"
Create volume group "datagroup" from partition using 25M extents
Create logical volume with 20 extents /dev/datagroup/datashare

The disk I am using is 20GB in total size.
[root@centos7 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 20G 0 disk
I create my first Physical Volume.
[root@centos7 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
I create my first Volume Group, setting 32 MB as the Physical Extent (PE) size (the default is 4 MB); this defines how space is allocated (in chunks of the PE size).
[root@centos7 ~]# vgcreate -s 32M myvg /dev/sdb
Volume group "myvg" successfully created
So my /dev/sdb disk size is 20 GB; I created the PV with pvcreate, and after this I created a new Volume Group with a Physical Extent (PE) size of 32 MB.
Then I confirm this by the PE Size field.
[root@centos7 ~]# vgdisplay myvg
--- Volume group ---
VG Name myvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.97 GiB
PE Size 32.00 MiB
Total PE 639
Alloc PE / Size 0 / 0
Free PE / Size 639 / <19.97 GiB
VG UUID m3wDvh-i0aH-5Zr2-0ya7-1GaA-mLb2-Umd9x3
So here is the math: disk 20 GB converted to MB (20 * 1024 = 20480 MB).
Then 20480 MB (the 20 GB disk) / 32 MB (the PE size I wanted) = 640. vgdisplay shows Total PE 639 rather than 640 because LVM reserves a little of the PV for its own metadata, so slightly less than the full 20 GB is available for extents (hence the VG Size of <19.97 GiB).
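If you would rather not read these numbers out of vgdisplay by eye, vgs can report exactly these fields; a usage sketch (vg_extent_size, vg_extent_count and vg_free_count are standard LVM report field names):
vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count myvg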
So if you need to create a new Logical Volume (LV):
You can allocate by PE count (the PE size is 32 MB and we have 639 extents available to use). Say I want a new LV with 50 PE: 50 * 32 MB (PE size) = 1600 MB, i.e. 1.5625 GiB, which lvdisplay rounds to 1.56 GiB. (Please note Current LE 50 below; seen from the LV's side, extents are called Logical Extents (LE) rather than Physical Extents.)
[root@centos7 ~]# lvcreate -l 50 -n mylv1 myvg
Logical volume "mylv1" created.
[root@centos7 ~]# lvdisplay /dev/myvg/mylv1
--- Logical volume ---
LV Path /dev/myvg/mylv1
LV Name mylv1
VG Name myvg
LV UUID BuQsPK-UKWL-tdVv-bFkR-X2md-zG3o-xzQIKk
LV Write Access read/write
LV Creation host, time centos7, 2021-10-10 20:38:33 +0000
LV Status available
# open 0
LV Size 1.56 GiB
Current LE 50
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
or
You can ask LVM to create an LV with a specific size, like 950 MB: 950 / 32 MB (PE size) = 29.6875, but LVM cannot allocate a fractional extent, so it rounds up to 30 PE (Current LE 30), giving 30 * 32 MB = 960 MB.
[root@centos7 ~]# lvcreate -L 950MB -n mylv2 myvg
Rounding up size to full physical extent 960.00 MiB
Logical volume "mylv2" created.
[root@centos7 ~]# lvdisplay /dev/myvg/mylv2
--- Logical volume ---
LV Path /dev/myvg/mylv2
LV Name mylv2
VG Name myvg
LV UUID eJrAY2-Pb1x-VBbq-k8cI-vIlq-Tg3s-CsRFsB
LV Write Access read/write
LV Creation host, time centos7, 2021-10-10 20:40:33 +0000
LV Status available
# open 0
LV Size 960.00 MiB
Current LE 30
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
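Applying the same arithmetic to the homework itself, here is a minimal sketch, assuming the partition for the volume group is /dev/sdb1 (your device will differ):
# create the PV on the partition
pvcreate /dev/sdb1
# create the VG with a 25 MiB physical extent size
vgcreate -s 25M datagroup /dev/sdb1
# create the LV from 20 extents: 20 * 25 MiB = 500 MiB
lvcreate -l 20 -n datashare datagroup
So the teacher's calculation is the right one here: the extent size is whatever the VG was created with (25 MB in the homework), not the 4 MB default you found online.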


Increasing the root filesystem's disk space

This is probably a stupid question, but I have a CentOS VM to which I allocated a 20 GB disk. I've since realized that is way too small for my needs. In VirtualBox I've increased the size of the disk by 100 GB, and I've assigned that space to a new physical volume in the VM. I've added that volume to the same volume group and logical volume as my root file system, but I'm not seeing any change in the size available to the file system.
What do I need to do to allocate more space to the file system?
[root@localhost ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.8G 0 7.8G 0% /dev
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 7.8G 9.5M 7.8G 1% /run
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/centos-root 18G 17G 353M 98% /
/dev/sda1 1014M 270M 745M 27% /boot
tmpfs 1.6G 32K 1.6G 1% /run/user/1000
[root@localhost ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name centos
PV Size 19.04 GiB / not usable 0
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4875
Free PE 0
Allocated PE 4875
PV UUID OyPa3x-9gvv-wn7u-H9V4-GZUr-NvXS-rUyb29
--- Physical volume ---
PV Name /dev/sda3
VG Name centos
PV Size <110.50 GiB / not usable 3.25 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 28287
Free PE 128
Allocated PE 28159
PV UUID gP1ANK-7qVz-91bX-I5e0-Jhhj-P12c-rlyQXm
[root@localhost ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size <129.54 GiB
PE Size 4.00 MiB
Total PE 33162
Alloc PE / Size 33034 / <129.04 GiB
Free PE / Size 128 / 512.00 MiB
VG UUID w3y0SR-KCrW-njIZ-i1yU-Wx9A-c6jJ-iLSqtM
[root@localhost ~]# lvdisplay
--- Logical volume ---
LV Path /dev/centos/swap
LV Name swap
VG Name centos
LV UUID DNZWst-UWUa-pAw5-PJQ1-8UNl-AjfV-zsjTRv
LV Write Access read/write
LV Creation host, time localhost, 2020-09-01 16:00:13 +0100
LV Status available
# open 2
LV Size 2.00 GiB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/centos/root
LV Name root
VG Name centos
LV UUID Yz0o0j-B164-JNi3-y5jd-Q9oo-EFe6-ecMgaM
LV Write Access read/write
LV Creation host, time localhost, 2020-09-01 16:00:14 +0100
LV Status available
# open 1
LV Size <127.04 GiB
Current LE 32521
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
It seems you already resized the logical volume, so the last step is to resize the file system to match. Most current filesystems can be grown on-line while the system is running and the filesystem is mounted. Each has its own tool.
For example, for ext* you would run resize2fs /dev/mapper/centos-root.
This needs no more arguments; by default it grows the filesystem to the size of the partition/volume it's in.
The current default filesystem for CentOS is XFS; the command for that is xfs_growfs.
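For instance, assuming the root LV here is formatted as XFS (the CentOS 7 default), the whole grow operation might look like this sketch; the lvextend step is only needed if the LV itself still has free extents to claim:
# extend the root LV into all remaining free extents in the VG (if needed)
lvextend -l +100%FREE /dev/centos/root
# grow the mounted XFS filesystem to fill the LV; the argument is the mount point
xfs_growfs /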
First tell the kernel about the new partitions using partprobe:
partprobe
Then we need to resize the filesystem on /dev/sda (your root volume). You can first check the filesystem on the partition using the e2fsck command, then resize it:
e2fsck -f /dev/(your root volume)
resize2fs /dev/(your root volume)
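Note that e2fsck and resize2fs only work on ext2/3/4 filesystems; for the XFS default mentioned above you would use xfs_growfs instead. A quick way to check which filesystem the root LV actually uses:
# print the filesystem type detected on the root LV
lsblk -f /dev/mapper/centos-root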

Mount docker logical volume

I'm trying to access a logical volume that was previously used by Docker. This is the result of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling Docker, and checking whether any files are open on that volume using lsof.
Do you have any clue about how I can mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages, not the images, containers, volumes and config files.
To delete those you have to delete a bunch of directories in /etc, /var/lib, /bin and /var/run.
Clean up the env
try running docker system prune -a to remove unused containers, images etc
remove the volume with docker volume rm {volumeID}
create the volume again with docker volume create docker-docker--pool_tdata
Kill the process
run lsof +D /mnt/test or cat ../docker/../tasks
this should display the PIDs of alive tasks.
Kill the task with kill -9 {PID}
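If the device still reports busy after the cleanup, it can also help to see what device-mapper has stacked on top of it, since the _tdata and _tmeta volumes sit underneath the thin pool. A minimal check using the standard device-mapper tool:
# show device-mapper devices as a dependency tree
# (docker-docker--pool should appear above its _tdata and _tmeta devices)
dmsetup ls --tree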

Command lvcreate throwing an invalid flag error?

I'm working on a group project that involves having to use the command lvcreate. When trying to run lvcreate with certain flags, this error message appears (the flags passed in are in the error message):
Error: Error Creating LVM LV for new image: Could not create thin LV named custom_vol0: Failed to run: lvcreate -i 2 -I 64K -Wy --yes --thin -n custom_vol0 --virtualsize 10GB poolA/LXDThinPool: Command does not accept option: --stripes 2.
Command does not accept option: --stripesize 64K
This is the info for the storage pool we are trying to create custom_vol0 on:
--- Volume group ---
VG Name poolA
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <27.94 GiB
PE Size 4.00 MiB
Total PE 7152
Alloc PE / Size 7152 / <27.94 GiB
Free PE / Size 0 / 0
VG UUID zJfbN1-0x7E-gteT-r0SV-Grh4-ewtc-fikl38
Can a thinly provisioned logical volume have a greater number of stripes than the number of physical volumes in the volume group? Is the mix of flags above an issue? The error message is not very descriptive.
Any help would be appreciated, thanks!

pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering)

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them. Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute the command to create the PV with "pvcreate /dev/sdb1", the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed the file /etc/lvm/lvm.conf for filters, but I do not have any filter that would make LVM discard the drive. I have rebooted the computer after creating the PV with pvcreate. I researched the error on Google but found nothing.
Thanks. Any help would be appreciated. Manuel
Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, deal with multipath. In my case lsblk showed that multipath had claimed the device:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 200G 0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2 0 200G 0 mpath
Edit
vi /etc/multipath.conf
and append the following blacklist to multipath.conf:
blacklist {
wwid 3678da6e715b018f01f1abdb887594aae
}
service multipathd restart
It worked for me; I had the same problem when trying this on oVirt, and then:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
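Once the filter and the multipath blacklist are in place, you can confirm that LVM now sees the device; a quick check:
# the new PV should now appear in the list
pvs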

Map lvm volume to Physical volume

lsblk provides output in this format:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 300G 0 disk
sda1 8:1 0 500M 0 part /boot
sda2 8:2 0 299.5G 0 part
vg_data1-lv_root (dm-0) 253:0 0 50G 0 lvm /
vg_data2-lv_swap (dm-1) 253:1 0 7.7G 0 lvm [SWAP]
vg_data3-LogVol04 (dm-2) 253:2 0 46.5G 0 lvm
vg_data4-LogVol03 (dm-3) 253:3 0 97.7G 0 lvm /map1
vg_data5-LogVol02 (dm-4) 253:4 0 97.7G 0 lvm /map2
sdb 8:16 0 50G 0 disk
For a mounted volume, say /map1, how do I directly get the physical volume associated with it? Is there any direct command to fetch that information?
There is no direct command to show that information for a mount. You can run
lvdisplay -m
which will show which physical volumes are currently being used by the logical volume.
Remember, though, that there is no such thing as a direct association between a logical volume and a physical volume. Logical volumes are associated with volume groups. Volume groups have a pool of physical volumes over which they can distribute any logical volume. If you always want to know that a given LV is on a given PV, you have to restrict the VG to only having that one PV, which rather misses the point. You can use pvmove to push extents off a PV (sometimes useful for maintenance), but you can't stop new extents from being created on it when logical volumes are extended or created.
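A sketch of that maintenance operation, assuming you want to empty /dev/sdb before pulling it out of a VG called vg_data (both names are hypothetical here):
# move all allocated extents off /dev/sdb onto the remaining PVs in the VG
pvmove /dev/sdb
# then detach the now-empty PV from the VG
vgreduce vg_data /dev/sdb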
As to why there is no such potentially useful command...
LVM is not ZFS. ZFS is a complete storage and filesystem management system, managing both storage (at several levels of abstraction) and the mounting of filesystems. LVM, in contrast, is just one layer of the Linux storage stack. It provides a layer of abstraction on top of physical storage devices and makes no assumption about how the logical volumes are used.
Leaving the grep/awk/cut/whatever to you, this will show which PVs each LV actually uses:
lvs -o +devices
You'll get a separate line for each PV used by a given LV, so if an LV has extents on three PVs you will see three lines for that LV. The PV device node path is followed by the starting extent (I think) of the data on that PV in parentheses.
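If you want output that is easier to feed to a script, the same report can be trimmed; a sketch (lv_name and devices are standard lvs field names):
# one line per LV segment: LV name plus the PV(s) and starting extents it occupies
lvs --noheadings -o lv_name,devices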
I need to emphasize that there is no direct relation between a mountpoint (logical volume) and a physical volume in LVM. This is one of its design goals.
However, you can traverse the associations between the logical volume, the volume group, and the physical volumes assigned to that group. This only tells you that the data is stored on one of those physical volumes, not where exactly.
I couldn't find a command which produces the output directly, but you can cobble something together using mount, lvdisplay, vgdisplay and awk/sed:
mp=/mnt vgdisplay -v $(lvdisplay $(mount | awk -vmp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}')
I'm using the environment variable mp to pass the mount point to the command. (You need to execute the command as root or using sudo)
For my test-scenario it outputs:
...
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
...
VG Size 992.00 MiB
PE Size 4.00 MiB
Total PE 248
Alloc PE / Size 125 / 500.00 MiB
Free PE / Size 123 / 492.00 MiB
VG UUID VfOdHF-UR1K-91Wk-DP4h-zl3A-4UUk-iB90N7
--- Logical volume ---
LV Path /dev/vg1/testlv
LV Name testlv
VG Name vg1
LV UUID P0rgsf-qPcw-diji-YUxx-HvZV-LOe0-Iq0TQz
...
Block device 252:0
--- Physical volumes ---
PV Name /dev/loop0
PV UUID Qwijfr-pxt3-qcQW-jl8q-Q6Uj-em1f-AVXd1L
PV Status allocatable
Total PE / Free PE 124 / 0
PV Name /dev/loop1
PV UUID sWFfXp-lpHv-eoUI-KZhj-gC06-jfwE-pe0oU2
PV Status allocatable
Total PE / Free PE 124 / 123
If you only want to display the physical volumes you might pipe the results of the above command to sed:
above command | sed -n '/--- Physical volumes ---/,$p'
Another approach: derive the backing device from df and, when it is an LVM device, list the physical volumes it maps to:
dev=$(df /map1 | tail -n 1 | awk '{print $1}')
echo $dev | grep -q ^/dev/mapper && lvdisplay -m $dev 2>/dev/null | awk '/Physical volume/{print $3}' || echo $dev
