pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering) - linux

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p

Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048   586072367   293035160   8e  Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I run "pvcreate /dev/sdb1" to create the PV, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed /etc/lvm/lvm.conf, but I do not have any filter that would make LVM discard the drive. I have also rebooted the machine after trying pvcreate. I searched Google for the error but have no idea what is wrong.
Thanks, any help would be appreciated. Manuel

Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, check whether multipath has claimed the device:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME                                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                     8:16   0  200G  0 disk
└─3678da6e715b018f01f1abdb887594aae   253:2    0  200G  0 mpath
If it has, edit
vi /etc/multipath.conf
and append the following to multipath.conf:
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
Then restart multipathd:
service multipathd restart
This worked for me; I had the same problem when trying this on oVirt, and afterwards:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
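If it is still unclear whether multipath or the LVM filter is rejecting the device, a few quick checks may help (a sketch only; it assumes the stock device-mapper-multipath and LVM2 tools are installed):
multipath -ll     # list multipath maps; after blacklisting, sdb should no longer be claimed
lsblk /dev/sdb    # confirm no mpath device is stacked on top of sdb any more
pvs               # once pvcreate succeeds, the new physical volume should be listed here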

Related

Mount docker logical volume

I'm trying to access a logical volume that was previously used by Docker. Here is the output of various commands:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 80G 0 disk
├─nvme0n1p1 259:3 0 80G 0 part /
└─nvme0n1p128 259:4 0 1M 0 part
nvme1n1 259:0 0 80G 0 disk
└─nvme1n1p1 259:1 0 80G 0 part
├─docker-docker--pool_tdata 253:1 0 79G 0 lvm
│ └─docker-docker--pool 253:2 0 79G 0 lvm
└─docker-docker--pool_tmeta 253:0 0 84M 0 lvm
└─docker-docker--pool 253:2 0 79G 0 lvm
fdisk
Disk /dev/nvme1n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00029c01
Device Boot Start End Blocks Id System
/dev/nvme1n1p1 2048 167772159 83885056 8e Linux LVM
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/nvme0n1: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 358A5F86-3BCA-4FB2-8C00-722B915A71AB
# Start End Size Type Name
1 4096 167772126 80G Linux filesyste Linux
128 2048 4095 1M BIOS boot BIOS Boot Partition
lvdisplay
--- Logical volume ---
LV Name docker-pool
VG Name docker
LV UUID piD2Wx-aDjf-CkpN-b4s4-YXWE-6ERm-GWTcOz
LV Write Access read/write
LV Creation host, time ip-172-31-39-159, 2020-02-16 09:18:57 +0000
LV Pool metadata docker-pool_tmeta
LV Pool data docker-pool_tdata
LV Status available
# open 0
LV Size 79.03 GiB
Allocated pool data 80.07%
Allocated metadata 31.58%
Current LE 20232
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
But when I try to mount the volume docker-docker--pool_tdata I get the following error:
mount /dev/mapper/docker-docker--pool_tdata /mnt/test
mount: /dev/mapper/docker-docker--pool_tdata is already mounted or /mnt/test busy
I've also tried rebooting the machine, uninstalling Docker, and using lsof to see whether any files are open on that volume.
Do you have any clue about how I can mount that volume?
Thanks
Uninstalling Docker does not really help, as purge and autoremove only delete the installed packages, not the images, containers, volumes and config files.
To delete those you have to delete a bunch of directories under /etc, /var/lib, /bin and /var/run.
Clean up the environment (a shell sketch of these steps follows below):
Try running docker system prune -a to remove unused containers, images, etc.
Remove the volume with docker volume rm {volumeID}.
Create the volume again with docker volume create docker-docker--pool_tdata.
Kill the process
Run lsof +D /mnt/test or cat ../docker/../tasks;
this should display the PIDs of the tasks that are still alive.
Kill the task with kill -9 {PID}.
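A shell sketch of the steps above; the {volumeID} and {PID} placeholders come from the listing commands, and docker system prune requires Docker 1.13 or newer:
docker system prune -a                           # remove unused containers, images and networks
docker volume ls                                 # find the name/ID of the stale volume
docker volume rm {volumeID}                      # remove it
docker volume create docker-docker--pool_tdata   # recreate the volume
lsof +D /mnt/test                                # list processes holding files open under /mnt/test
kill -9 {PID}                                    # kill the offending task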

Simulate mounted volume errors to cause read only

A few days ago we encountered an unexpected error where one of the mounted drives on our Red Hat Linux machine became read-only. The issue was caused by a network outage in the datacenter.
Now I need to see if I can reproduce the same behavior, where the drive gets re-mounted read-only while the application is running.
I tried remounting it read-only, but that didn't work because there are open files (logs being written).
Is there a way to temporarily force the read-only condition if I have root access to the machine (but no access to the hypervisor)?
That volume is mounted via /etc/fstab. Here is the record:
UUID=abfe2bbb-a8b6-4ae0-b8da-727cc788838f / ext4 defaults 1 1
UUID=8c828be6-bf54-4fe6-b68a-eec863d80133 /opt/sunapp ext4 rw 0 2
Here is the output of a few commands that show details about our mounted drives. I can add more details as needed.
Output of fdisk -l
Disk /dev/vda: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008ba5f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 524287966 262142959+ 83 Linux
Disk /dev/vdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Output of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 80G 0 disk
└─vda1 253:1 0 80G 0 part /
vdb 253:16 0 250G 0 disk /opt/sunup
Output of blkid command:
/dev/vda1: UUID="abfe2bbb-a8b6-4ae0-b8da-727cc788838f" TYPE="ext4"
/dev/sr0: UUID="2017-11-13-13-33-07-00" LABEL="config-2" TYPE="iso9660"
/dev/vdb: UUID="8c828be6-bf54-4fe6-b68a-eec863d80133" TYPE="ext4"
Output of parted -l command:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Error: /dev/sr0: unrecognised disk label
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 461kB
Sector size (logical/physical): 2048B/2048B
Partition Table: unknown
Disk Flags:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 42.9GB 42.9GB ext4
Yes, you can do it. But the method proposed here may cause data loss, so use it only for testing.
Supposing you have /dev/vdb mounted as /opt/sunapp, do this:
First, unmount it. You may need to shut down any applications that are using it.
Configure a loop device to mirror the contents of /dev/vdb:
losetup /dev/loop0 /dev/vdb
Then, mount /dev/loop0 instead of /dev/vdb:
mount /dev/loop0 /opt/sunapp -o rw,errors=remount-ro
Now, you can run your application. When it is time to make /opt/sunapp read-only, use this command:
blockdev --setro /dev/vdb
After that, attempts to write to /dev/loop0 will result in I/O errors. As soon as the file system driver detects this, it will remount the file system as read-only.
To restore everything back, you will need to unmount /opt/sunapp, detach the loop device, and make /dev/vdb writable again:
umount /opt/sunapp
losetup -d /dev/loop0
blockdev --setrw /dev/vdb
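To verify that the simulated failure really flipped the mount to read-only, something like this should work (using the mount point from above):
touch /opt/sunapp/testfile   # should fail with an I/O error once /dev/vdb is read-only
grep sunapp /proc/mounts     # the mount options should now include "ro" after the remount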
When I had issues like corrupted disks, I used ntfsfix.
Please see if these commands solve the problem.
sudo ntfsfix /dev/vda
sudo ntfsfix /dev/vdb

KVM Virtual Machine: Wrong Disk Size

Ever since I did a yum update and tried to create a new (for example) 10GB-disk KVM VPS, the reported disk space inside the VM has been locked to the initial template size (usually 1GB for a Linux template).
Normally it should be 10GB (fdisk says so, but the df command says otherwise).
[root@localhost ~]# resize2fs /dev/vda1
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vda1 to 262160 (4k) blocks.
The filesystem on /dev/vda1 is now 262160 blocks long.
[root@localhost ~]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/vda1 1008 760 198 80% /
none 246 0 246 0% /dev/shm
[root@localhost ~]# fdisk -l
Disk /dev/vda: 10.7 GB, 10737418240 bytes
4 heads, 32 sectors/track, 163840 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b6106
Device Boot Start End Blocks Id System
/dev/vda1 17 16401 1048640 83 Linux
All of the above commands were run inside the VM.
Below is the disk part of the XML configuration on the host node:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source file='/kvm/v1046-2ogd-j1p2jraixpg1g03y.raw'/>
<target dev='vda' bus='virtio' />
</disk>
A sparse RAW image is used. This was not a problem with older VMs.
du -hs on host node:
650M v1046-2ogd-j1p2jraixpg1g03y.raw
ls -lah on host node:
-rw-r--r-- 1 qemu qemu 10G Dec 21 21:03 v1046-2ogd-j1p2jraixpg1g03y.raw
Any help is really appreciated. Thanks for reading.
Running resize2fs /dev/vda1 online inside the VM did not help, because the partition itself is still only the original size (the fdisk output shows /dev/vda1 is about 1GB). I had to boot GParted and extend the partition manually.
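For what it's worth, a rough alternative sketch that avoids booting GParted, assuming the cloud-utils growpart tool can be installed in the guest (untested on this particular setup):
growpart /dev/vda 1     # grow partition 1 to fill the disk, keeping the same start sector
partprobe /dev/vda      # ask the kernel to re-read the partition table; older kernels may still need a reboot
resize2fs /dev/vda1     # grow the ext file system to the new partition size
df -h /                 # verify the reported size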

Can't open /dev/sda2 exclusively. Mounted filesystem?

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3999.7 GB, 3999688294400 bytes
255 heads, 63 sectors/track, 486267 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
/dev/sda2 1 2090 16785120 82 Linux swap / Solaris
/dev/sda3 1 218918 1758456029+ 8e Linux LVM
Partition table entries are not in disk order
Above is my "fdisk -l" output. My current problem is that when I try to run "pvcreate /dev/sda2" it gives me "Can't open /dev/sda2 exclusively. Mounted filesystem?". I have been searching Google for a while trying to find a way to fix this; there are definitely things I tried from Google, but none of them ended up working.
You're trying to initialize a partition for use by LVM that's currently used by swap.
You should rather run
pvcreate /dev/sda3
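If you really do want to reuse /dev/sda2, a sketch of how to confirm and release the swap device first (only if you can spare that swap space):
cat /proc/swaps      # confirm /dev/sda2 is the active swap device
swapoff /dev/sda2    # release it, and remove the matching entry from /etc/fstab
pvcreate /dev/sda2   # the partition can now be opened exclusively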
I updated to a newer kernel and the problem was resolved in RHEL 6. I had upgraded from 2.6.32-131.x to 2.6.32-431.x.
Check that the disks/partitions you are using are not mounted on any directory on your system.
If they are, umount them and try again (a quick check is sketched below).
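A quick way to check, as a sketch:
grep sda2 /proc/mounts   # see whether the partition is mounted anywhere
umount /dev/sda2         # if it is, unmount it and retry pvcreate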

missing partition in server centos 6.1

I used the command df -h on my CentOS 6.1 server.
Here's the output:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 2.3G 45G 5% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 485M 35M 425M 8% /boot
/dev/mapper/VolGroup-lv_home
2.0T 199M 1.9T 1% /home
From that, it looks like the hard disk is two terabytes. But when I used the command cat /proc/partitions | more,
here's the output:
[root@localhost sysconfig]# cat /proc/partitions | more
major minor #blocks name
8 0 4293656576 sda
8 1 512000 sda1
8 2 2146970624 sda2
253 0 52428800 dm-0
253 1 14417920 dm-1
253 2 2080120832 dm-2
As you can see on the first line, it is 4396.7 GB. Why is it that I can only see 2TB? How can I find my missing 2TB and make it a partition?
I also used the command lsblk;
here is the output:
[root@localhost ~]# lblsk
-bash: lblsk: command not found
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO MOUNTPOINT
sda 8:0 0 4T 0
├─sda1                        8:1    0  500M  0 /boot
└─sda2                        8:2    0    2T  0
  ├─VolGroup-lv_root (dm-0) 253:0    0   50G  0 /
  ├─VolGroup-lv_swap (dm-1) 253:1    0 13.8G  0 [SWAP]
  └─VolGroup-lv_home (dm-2) 253:2    0    2T  0 /home
sr0 11:0 1 1024M 0
Using parted /dev/sda, I typed the print free command;
here's the output:
(parted) print free
Model: DELL PERC 6/i (scsi)
Disk /dev/sda: 4397GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 2199GB 2198GB primary lvm
2199GB 4397GB 2198GB Free Space
I was wrong, sorry. As you can see in the parted print free output, you have 2 MBR partitions - boot and lvm - and 2198GB of free space (last row).
If you want to use all of your space, you have to use GPT partitions. As opposed to MBR partitions, which can only address up to 2TB, GPT partitions can address your whole disk, up to 8 ZiB (zebibytes).
You can try to convert the MBR partition table to GPT (example 1, example 2), though I strongly recommend backing up your data first; a rough sketch follows below.
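A sketch of what the conversion and the new partition could look like with gdisk (the partition number and volume names are taken from the output above; back up first, and note that a BIOS-booted disk converted to GPT may also need a small BIOS boot partition for GRUB):
gdisk /dev/sda                                 # gdisk converts the MSDOS table to GPT when you write with 'w'
                                               # inside gdisk: n = new partition in the free space, t = type 8e00 (Linux LVM), w = write
partprobe /dev/sda                             # re-read the partition table (or reboot)
pvcreate /dev/sda3                             # initialize the new partition for LVM
vgextend VolGroup /dev/sda3                    # add it to the existing volume group
lvextend -l +100%FREE /dev/VolGroup/lv_home    # optionally grow lv_home into the new space
resize2fs /dev/VolGroup/lv_home                # and grow the ext4 file system to match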
You are using tools that show info from different layers of your system, and interpreting it wrong.
df, according to its man page, displays the space available on all currently mounted file systems, so it only sees the ~2TB that is actually partitioned and mounted.
/proc/partitions holds info about the partitions on your drive - the physical device. The #blocks column is in 1 KiB blocks, not 512-byte sectors.
So sda's 4293656576 blocks = 4293656576 KiB = about 4095 GiB, or 4396.7 GB (assuming 1 GiB = 2^30 bytes and 1 GB = 10^9 bytes),
which matches the 4397GB that parted reports. The rest of the disk is simply not partitioned yet, as the other answer explains.
If you want to see the size of your disk, use fdisk -l <device name>.
