Re-scan LUN on Linux

We have expanded an existing LUN on EMC storage and now I want to re-scan it on the host side, but I don't know how to figure out the SCSI ID of that specific LUN. I am new to storage. This is what I am doing, but I don't know whether it is the right way or not:
Pseudo name=emcpowerj
CLARiiON ID=APM00112500570 [Oracle_Cluster]
Logical device ID=200601602E002900B6BCA114C9F8E011 [LUN01]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0;
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
2 qla2xxx sdaj SP A1 active alive 0 1
2 qla2xxx sdaw SP B1 active alive 0 4
1 qla2xxx sdj SP A0 active alive 0 1
1 qla2xxx sdw SP B0 active alive 0 4
Here I am running a find command on the sdX devices to find their SCSI IDs, so that I can do echo 1 > /sys/bus/scsi/devices/X:X:X:X/rescan to re-scan the LUN:
$ find /sys/devices -name "*block*" | grep -e "sdaj" -e "sdaw" -e "sdj" -e "sdw"
/sys/devices/pci0000:00/0000:00:09.0/0000:05:00.1/host2/rport-2:0-1/target2:0:1/2:0:1:8/block:sdaw
/sys/devices/pci0000:00/0000:00:09.0/0000:05:00.1/host2/rport-2:0-0/target2:0:0/2:0:0:8/block:sdaj
/sys/devices/pci0000:00/0000:00:09.0/0000:05:00.0/host1/rport-1:0-1/target1:0:1/1:0:1:8/block:sdw
/sys/devices/pci0000:00/0000:00:09.0/0000:05:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:8/block:sdj
Or is there an alternative or better way to re-scan the LUN?

I like to use the "lsscsi" program, which is probably available for your distribution.
% lsscsi
[0:0:0:0] cd/dvd NECVMWar VMware IDE CDR00 1.00 /dev/sr0
[2:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda
[2:0:1:0] disk VMware, VMware Virtual S 1.0 /dev/sdb
As for rescanning the bus, that's pretty much it.
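To be concrete, here is a minimal sketch of the rescan itself, assuming the H:C:T:L values found above (2:0:0:8, 2:0:1:8, 1:0:0:8, 1:0:1:8); a resize of an already-present LUN is picked up through the per-device rescan file, while the host-wide scan is only needed for brand-new LUNs:
# re-read the capacity of each path device (run as root)
echo 1 > /sys/class/scsi_device/2:0:0:8/device/rescan
echo 1 > /sys/class/scsi_device/2:0:1:8/device/rescan
echo 1 > /sys/class/scsi_device/1:0:0:8/device/rescan
echo 1 > /sys/class/scsi_device/1:0:1:8/device/rescan
# only needed when discovering LUNs that are not yet visible at all
echo "- - -" > /sys/class/scsi_host/host1/scan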

Related

Check if DM Multipath paths had previous errors, like PowerPath

We recently switched storage vendors and are now bound to DM Multipath for multipath management.
Does DM Multipath have a feature to see whether any of the paths had a previous error?
In PowerPath you could see if there were any paths that had errors since the last reboot/cleanup. There is a column where the errors are displayed, like so (the output is from the Windows version, but it does not differ from the RHEL version we are using):
Pseudo name=harddisk1
Unity ID=CK0000000000001 [HOST1]
Logical device ID=123ABC123ABC123 [HOST1]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 4
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 port1\path0\tgt1\lun28 c1t0d0 SP B2 active alive 0 1
1 port1\path0\tgt0\lun28 c1t1d0 SP A2 active alive 0 1
0 port0\path0\tgt1\lun28 c0t1d0 SP B3 active alive 0 1
0 port0\path0\tgt0\lun28 c0t0d0 SP A3 active alive 0 1
Is there a similar output for multipath, or do you need to go through all of the system logging?
I've already been searching the internet this last week, but it seems like there is none.
Thank you in advance.
multipath -ll gives similar output:
# multipath -ll
eui.006a1354d146f94124a9376b00011010 dm-2 NVME, NoName Vendor
size=11G features='3 queue_if_no_path queue_mode mq' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
|- 0:2:2:69648 nvme0n2 259:3 active ready running
|- 1:5:2:69648 nvme1n2 259:12 active ready running
|- 10:4:2:69648 nvme10n2 259:4 active ready running
|- 11:1:2:69648 nvme11n2 259:13 active ready running
...
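If you want the failure history rather than just the current state, two hedged options (exact output varies with the multipath-tools and systemd versions) are to ask the running daemon or to search the logs for path failures:
# query the running daemon about per-path state
multipathd -k'show paths'
# path failure messages logged since boot
journalctl -b -u multipathd | grep -i fail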

Determine WWID of LUN from mapped drive on Linux

I am trying to establish whether there is an easier method to determine the WWID of an iSCSI LUN associated with a Linux filesystem or mount point.
A frequent problem we have is where a user requests a disk expansion on a RHEL system with multiple iSCSI LUNs connected. A user will provide us with the path their LUN is mounted on, and from this we need to establish which LUN they are referring to so that we can make the increase as appropriate on the storage side.
Currently we run df -h to get the filesystem name, pvdisplay to get the VG name, and then multipath -v4 -ll | grep "^mpath" to get the WWID. This feels messy, long-winded and prone to inconsistent interpretation.
Is there a more concise command we can run to determine the WWID of the device?
Here's one approach. The output format leaves something to be desired - it's more suited to eyeballs than programs.
lsblk understands the mapping of a mounted filesystem down through the LVM and multipath layers to the underlying block devices. In the output below, /dev/sdc is my iSCSI-attached LUN, attached via one path to the target. It contains the volume group vg1 and a logical volume lv1. /mnt/tmp is where I have the filesystem on the LV mounted.
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 128M 0 disk
└─360a010a0b43e87ab1962194c4008dc35 253:4 0 128M 0 mpath
└─vg1-lv1 253:3 0 124M 0 lvm /mnt/tmp
At the second level is the SCSI WWN (360a010...), courtesy of multipathd.
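For something more script-friendly, a sketch along these lines (assuming the default multipath naming, where the map name is the WWID, and /mnt/tmp as the mount point) walks back down from the mount to the mpath layer:
# resolve the mount point to its device, then print the multipath map name (the WWID)
lsblk -slno NAME,TYPE "$(findmnt -no SOURCE /mnt/tmp)" | awk '$2 == "mpath" {print $1}'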

I can't unpresent LUN (SAN) devices from the server

I have a 22 TB LUN from SAN storage (Hitachi) on my Linux server (CentOS 6.7).
I configured multipath for this LUN, and now I want to remove it.
The storage team detached the LUN from my server, but when I run "multipath -ll" it still exists.
mpathf (360060e801667af00000167af0000014b) dm-2 HITACHI,OPEN-V*12
size=22T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
|- 3:0:0:3 sdf 8:80 failed faulty running
`- 3:0:1:3 sdn 8:208 failed faulty running
This message stays until I reboot the server, and I can't reboot all of my servers because they are in a production environment.
Does anybody know what I should do?
Thanks
First you need to be sure the mpathf device is not being used:
lsof | grep mpathf
dmsetup info mpathf | grep -i open
In the dmsetup info output, the Open count needs to be equal to 0.
If you are using the LUN with LVM or something else, you need to remove everything from the LUN first.
Now you can delete the underlying path devices with echo 1 > /sys/block/<x>/device/delete
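Putting that together for the paths shown above (a sketch; sdf and sdn come from the multipath -ll output, so substitute your own devices, and only do this once the open count is 0):
# flush the stale multipath map, then remove each underlying path device
multipath -f mpathf
echo 1 > /sys/block/sdf/device/delete
echo 1 > /sys/block/sdn/device/delete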

Map LVM volume to physical volume

lsblk provides output in this format:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 300G 0 disk
sda1 8:1 0 500M 0 part /boot
sda2 8:2 0 299.5G 0 part
vg_data1-lv_root (dm-0) 253:0 0 50G 0 lvm /
vg_data2-lv_swap (dm-1) 253:1 0 7.7G 0 lvm [SWAP]
vg_data3-LogVol04 (dm-2) 253:2 0 46.5G 0 lvm
vg_data4-LogVol03 (dm-3) 253:3 0 97.7G 0 lvm /map1
vg_data5-LogVol02 (dm-4) 253:4 0 97.7G 0 lvm /map2
sdb 8:16 0 50G 0 disk
For a mounted volume, say /map1, how do I directly get the physical volume associated with it? Is there any direct command to fetch this information?
There is no direct command to show that information for a mount. You can run
lvdisplay -m
which will show which physical volumes are currently being used by the logical volume.
Remember, though, that there is no such thing as a direct association between a logical volume and a physical volume. Logical volumes are associated with volume groups. Volume groups have a pool of physical volumes over which they can distribute any logical volume. If you always want to know that a given LV is on a given PV, you have to restrict the VG to only having that one PV, which rather misses the point. You can use pvmove to push extents off a PV (sometimes useful for maintenance), but you can't stop new extents being created on it if logical volumes are extended or created.
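A quick sketch of both commands, using names taken from the question's lsblk output (the LV path /dev/vg_data4/LogVol03 for /map1 is assumed, and the pvmove target is purely illustrative):
# show each segment of the LV and the PV it sits on
lvdisplay -m /dev/vg_data4/LogVol03
# evacuate all extents from /dev/sdb, assuming it is a PV in the same VG
pvmove /dev/sdb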
As to why there is no such potentially useful command...
LVM is not ZFS. ZFS is a complete storage and filesystem management system, managing both storage (at several levels of abstraction) and the mounting of filesystems. LVM, in contrast, is just one layer of the Linux storage stack. It provides a layer of abstraction on top of physical storage devices and makes no assumption about how the logical volumes are used.
Leaving the grep/awk/cut/whatever to you, this will show which PVs each LV actually uses:
lvs -o +devices
You'll get a separate line for each PV used by a given LV, so if an LV has extents on three PVs you will see three lines for that LV. The PV device node path is followed by the starting extent (I think) of the data on that PV in parentheses.
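If you want to go from a mount point straight to its backing PVs in a script, here is a small sketch (the only assumptions are the mount point /map1 and that findmnt is available):
# resolve the mount point to its LV device, then print only the devices column for that LV
lv=$(findmnt -no SOURCE /map1)
lvs --noheadings -o devices "$lv"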
I need to emphasize that there is no direct relation between a mount point (logical volume) and a physical volume in LVM. This is one of its design goals.
However, you can traverse the associations between the logical volume, the volume group and the physical volumes assigned to that group. This only tells you that the data is stored on one of those physical volumes, not exactly where.
I couldn't find a command which produces the output directly, but you can put something together using mount, lvdisplay, vgdisplay and awk/sed:
mp=/mnt
vgdisplay -v $(lvdisplay $(mount | awk -vmp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}')
I'm using the shell variable mp to pass the mount point to the command; set it on its own line first so it is already expanded when the command substitutions run. (You need to execute the command as root or using sudo.)
For my test-scenario it outputs:
...
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
...
VG Size 992.00 MiB
PE Size 4.00 MiB
Total PE 248
Alloc PE / Size 125 / 500.00 MiB
Free PE / Size 123 / 492.00 MiB
VG UUID VfOdHF-UR1K-91Wk-DP4h-zl3A-4UUk-iB90N7
--- Logical volume ---
LV Path /dev/vg1/testlv
LV Name testlv
VG Name vg1
LV UUID P0rgsf-qPcw-diji-YUxx-HvZV-LOe0-Iq0TQz
...
Block device 252:0
--- Physical volumes ---
PV Name /dev/loop0
PV UUID Qwijfr-pxt3-qcQW-jl8q-Q6Uj-em1f-AVXd1L
PV Status allocatable
Total PE / Free PE 124 / 0
PV Name /dev/loop1
PV UUID sWFfXp-lpHv-eoUI-KZhj-gC06-jfwE-pe0oU2
PV Status allocatable
Total PE / Free PE 124 / 123
If you only want to display the physical volumes you might pipe the results of the above command to sed:
above command | sed -n '/--- Physical volumes ---/,$p'
# take the device that backs /map1; if it is an LVM device, list its PVs, otherwise just print the device
dev=$(df /map1 | tail -n 1 | awk '{print $1}')
echo "$dev" | grep -q ^/dev/mapper && lvdisplay -m "$dev" 2>/dev/null | awk '/Physical volume/{print $3}' || echo "$dev"

cgroup blkio files cannot be written

I'm trying to control I/O bandwidth by using the cgroup blkio controller.
The cgroup has been set up and mounted successfully, i.e. calling grep cgroup /proc/mounts
returns:
....
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
...
I then make a new folder in the blkio folder and write to the file blkio.throttle.read_bps_device, as follows:
1. mkdir user1; cd user1
2. echo "8:5 10485760" > blkio.throttle.read_bps_device
----> echo: write error: Invalid argument
My device major:minor numbers are correct, taken from df -h and ls -l /dev/sda5 for the storage device.
I can still write to files that require no device major:minor number, such as blkio.weight (but the same error is thrown for blkio.weight_device).
Any idea why I get that error?
I'm not sure which flavour/version of Linux you are using; on RHEL 6.x kernels this did not work for some reason, but it worked without any issues when I compiled a custom kernel on RHEL and on other Fedora versions.
To check whether it is supported on your kernel, run lssubsys -am | grep blkio, then check that path for the file blkio.throttle.read_bps_device.
Here is an example of how you can do it persistently: set up a cgroup that limits the program to no more than 1 MiB/s.
Get the MAJOR:MINOR device number from /proc/partitions:
`cat /proc/partitions | grep vda`
major minor #blocks name
252 0 12582912 vda --> this is the primary disk (with MAJOR:MINOR -> 252:0)
Now, if you want to limit your program to 1 MiB/s, convert the value to bytes per second: 1 MiB/s = 1024 KiB/s * 1024 B/KiB = 1048576 bytes/s.
Edit /etc/cgconfig.conf and add the following entry:
group ioload {
    blkio {
        blkio.throttle.read_bps_device = "252:0 1048576";
    }
}
Edit /etc/cgrules.conf and add a rule that applies the group to all users:
*    blkio    ioload
Enable and restart the required services:
`chkconfig cgconfig on; chkconfig cgred on`
`service cgconfig restart; service cgred restart`
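For a quick, non-persistent test, the same throttle can be applied directly through the cgroup filesystem. This is a sketch assuming the blkio hierarchy is mounted at /sys/fs/cgroup/blkio as in the question; note that the throttle files generally accept only a whole-device major:minor (252:0 or 8:0, not a partition such as 8:5), which is the usual cause of the "Invalid argument" error:
# create a child cgroup, set the read limit on the whole disk, move the current shell into it
mkdir -p /sys/fs/cgroup/blkio/ioload
echo "252:0 1048576" > /sys/fs/cgroup/blkio/ioload/blkio.throttle.read_bps_device
echo $$ > /sys/fs/cgroup/blkio/ioload/tasks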
Refer to blkio-controller.txt in the kernel documentation.
Hope this helps!
