Determine WWID of LUN from mapped drive on Linux

I am trying to establish whether there is an easier method to determine the WWID of an iSCSI LUN associated with a Linux filesystem or mountpoint.
A frequent problem we have is a user requesting a disk expansion on a RHEL system with multiple iSCSI LUNs connected. The user provides us with the path their LUN is mounted on, and from this we need to establish which LUN they are referring to so that we can make the appropriate increase on the storage side.
Currently we run df -h to get the filesystem name, pvdisplay to get the VG name, and then multipath -v4 -ll | grep "^mpath" to get the WWID. This feels messy, long-winded and prone to inconsistent interpretation.
Is there a more concise command we can run to determine the WWID of the device?

Here's one approach. The output format leaves something to be desired - it's more suited to eyeballs than programs.
lsblk understands the mapping of a mounted filesystem down through the LVM and multipath layers to the underlying block devices. In the output below, /dev/sdc is my iSCSI-attached LUN, attached via one path to the target. It contains the volume group vg1 and a logical volume lv1. /mnt/tmp is where I have the filesystem on the LV mounted.
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 128M 0 disk
└─360a010a0b43e87ab1962194c4008dc35 253:4 0 128M 0 mpath
└─vg1-lv1 253:3 0 124M 0 lvm /mnt/tmp
At the second level is the SCSI WWID (360a010...), courtesy of multipathd.
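You can also walk the stack the other way, from the mountpoint down. A hedged sketch (findmnt and lsblk's -s inverse view are both util-linux features; /mnt/tmp is the example mountpoint from above), which should print the chain bottom-up, something like:
$ dev=$(findmnt -n -o SOURCE /mnt/tmp)   # resolves to /dev/mapper/vg1-lv1
$ sudo lsblk -s -o NAME,TYPE "$dev"
NAME                                  TYPE
vg1-lv1                               lvm
└─360a010a0b43e87ab1962194c4008dc35   mpath
  └─sdc                               disk
The mpath row of the inverted tree carries the WWID you're after, which makes this reasonably scriptable with a single grep.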

How do I change the filesystem of my 64GB USB, from FAT32 to anything which allows me to put a 35GB file from my x86_64 Linux machine onto the USB?

'uname -a' on my machine gives:
Linux ct-lt-966 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
Currently the filesystem of my USB is MS-DOS 'FAT32', which has a 4 GiB maximum size for individual files. I want to change this filesystem to something else which does not have that limit. (I am trying to put a 35GB file onto a 64GB USB, but I believe most USB filesystems do not limit the size of individual files).
I have not found it clear what choices of USB filesystem that I have. I tried to change the filesystem to 'NTFS', but I could not install or locate 'mkfs.ntfs' or even 'ntfsprogs'. (I also tried installing with 'pacman' and 'yum' but apparently 'pacman' requires an aarch architecture and I could not get access to 'yum-config-manager' in order to enable any repos).
So to conclude, with my minimal prowess I am just looking for any way to change the filesystem of my 64GB USB to anything which will accept a 35GB file from my machine.
Thanks
Edit 1: Just planning to use the USB on this Linux machine, not Windows.
If there's nothing on the stick you want, or it's safe to delete, then basically:
delete the current FAT32 partition from the stick
add a new partition, utilising the full size of the device
create an ext4 filesystem on the new partition
PLEASE BE CAREFUL WITH THIS PROCESS: selecting the wrong device can obliterate a disk you need, such as your $HOME or your root OS.
All the following is from memory and untested: I don't have a USB stick available right now to test fully.
Start by plugging in the stick while tailing the syslog in a console to see where it gets mounted (hopefully it automounts, which it should if you're running a desktop-based Linux; possibly not if it's a server).
sudo tail -f /var/log/syslog
(it might be /var/log/messages depending on distro)
then plug in the stick. syslog should show it being allocated a device and a mount point. A file manager window may open, depending on your config, if you are in a GUI. For example, you might see it come up as /dev/sdc1 and mounted at /media/<yourusername>/USBKEY or something.
Confirm by running lsblk and note the device for the key, e.g.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 167.7G 0 disk
├─sda1 8:1 0 69.9G 0 part /
└─sda2 8:2 0 97.9G 0 part /home
sdb 8:16 0 149.1G 0 disk
└─sdb1 8:17 0 149.1G 0 part /mnt/snapshots
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part /storage
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 465.8G 0 part /mnt/backup
sr0 11:0 1 1024M 0 rom
Unmount the stick (if it mounted) but leave it plugged in. Assuming again your device is at /dev/sdc1...
umount /dev/sdc1
Now run cfdisk in a terminal if you have it (friendlier) or fdisk if not, passing it the device related to your USB stick, without the partition number.
man cfdisk
sudo cfdisk /dev/sdc
This should show the current FAT32 partition. Delete it, then create a new partition of type 'Linux', following the defaults for start and end blocks which will be suggested in such a way as to fill the available space.
When done, select the option to Write the changes. Again, DOUBLE AND TRIPLE CHECK you have the right device, or you will probably blow away your main disk.
Once the changes are written, you can create the ext4 filesystem:
sudo mkfs.ext4 /dev/sdc1
And after it completes, you should be able to re-plug your stick and find that it remounts, this time with a file system that can take your large files.
This isn't the only way to achieve this, but it's probably the least fiddly. At the risk of repeating myself: don't make a mistake with the device identifiers. If you're unsure, ask.
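For reference, here's a condensed, non-interactive version of the same steps using parted's scripted mode. This is a sketch under the same assumption that the stick really is /dev/sdc, and it destroys everything on that device:
$ sudo umount /dev/sdc1                                    # unmount if it automounted
$ sudo parted -s /dev/sdc mklabel msdos                    # fresh partition table
$ sudo parted -s /dev/sdc mkpart primary ext4 1MiB 100%    # one partition, full size
$ sudo mkfs.ext4 /dev/sdc1                                 # filesystem on the new partition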

Filesystem for a partition goes missing after EC2 reboot

I created a d2.xlarge EC2 instance on AWS, on which lsblk returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an EXT4 filesystem for xvdc
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to this point, everything works fine.
Now, I reboot the machine (first stop it and then start it) and then SSH into the machine.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I assumed filesystems would remain intact across a reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed this http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described above
You need to read up on EC2 ephemeral instance store volumes. When you stop an instance with this type of volume, the data on the volume is lost. You can reboot by performing a reboot/restart operation, but if you do a stop followed later by a start, the data is lost. A stop followed by a start is not considered a "reboot" on EC2. When you stop an instance it is completely shut down, and when you start it back later it is essentially recreated on different backing hardware.
In other words what you describe isn't an issue, it is expected behavior. You need to be very aware of how these volumes work before depending on them.
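If you want to confirm which of your devices are instance store rather than EBS, one way (a hedged sketch: this uses the classic IMDSv1 metadata endpoint; IMDSv2 additionally requires a session token) is to query the instance's block-device mapping:
$ curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
On a d2.xlarge you should see entries like ephemeral0, ephemeral1 and ephemeral2 alongside ami; anything listed as ephemeralN is instance store and will not survive a stop/start.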

About formatting new EBS volume on Amazon AWS

I don't have much experience with Linux and mounting/unmounting things. I'm using Amazon AWS, have booted up an EC2 instance with an Ubuntu image, and have attached a new EBS volume to it. From the dashboard, I can see that the volume is attached at /dev/sda1.
Now, I see from this guide from Amazon that the device name will likely be changed by the kernel, so it's most likely that my /dev/sda1 device will show up as, maybe, /dev/xvda1.
So I logged in using a terminal. I ran ls /dev/ and I indeed see xvda1 there. But I also see xvda. Now I want to format the device, but I don't know whether the unformatted device is xvda1 or xvda. I cannot list the contents of /dev/xvda1 or /dev/xvda (it says ls: cannot access /dev/xvda1/: Not a directory). I guess I have to format it first.
I tried to format using sudo mkfs.ext4 /dev/xvda1. It says: /dev/xvda1 is mounted; will not make a filesystem here!.
I tried to format using sudo mkfs.ext4 /dev/xvda. It says: /dev/xvda is apparently in use by the system; will not make a filesystem here!
How can I format the volume?
EDIT:
The result of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
I then tried to use the command sudo mkfs -t ext4 /dev/xvda, but the same error message appears: /dev/xvda is apparently in use by the system; will not make a filesystem here!
When I tried the command mount /dev/xvda /webserver, an error message appears: mount: /dev/xvda already mounted or /webserver busy. Some websites indicate that this is also probably because of a corrupted or unformatted file system. So I guess I have to be able to format it before being able to mount it.
First of all, you are trying to format /dev/xvda1, which is the root device. Why?
Second, if you have added a new EBS volume, then follow the steps below.
List Block Devices
This will give you the list of block devices attached to your EC2 instance, which will look like
[ec2-user ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 100G 0 disk
xvda1 202:1 0 8G 0 disk /
Out of these, xvda1 is / (root) and xvdf is the one you need to format and mount (the new EBS volume).
Format Device
sudo mkfs -t ext4 device_name # device_name is xvdf here
Create a Mount Point
sudo mkdir /mount_point
Mount the Volume
sudo mount device_name mount_point # here device_name is /dev/xvdf
Make an entry in /etc/fstab
device_name mount_point file_system_type fs_mntops fs_freq fs_passno
Execute
sudo mount -a
This will read your /etc/fstab file and, if the entry is valid, mount the EBS volume at mount_point.
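A hedged aside: device names like /dev/xvdf can change across stop/start cycles, so it is generally safer to reference the filesystem UUID in /etc/fstab. A sketch, where the UUID placeholder must be replaced with whatever blkid actually reports:
$ sudo blkid /dev/xvdf          # note the UUID it prints
$ echo 'UUID=<uuid-from-blkid>  /mount_point  ext4  defaults,nofail  0  2' | sudo tee -a /etc/fstab
$ sudo mount -a                 # verify the entry mounts cleanly
The nofail option keeps the instance bootable even if the volume is missing at boot.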

Map LVM volume to physical volume

lsblk provides output in this format:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 300G 0 disk
sda1 8:1 0 500M 0 part /boot
sda2 8:2 0 299.5G 0 part
vg_data1-lv_root (dm-0) 253:0 0 50G 0 lvm /
vg_data2-lv_swap (dm-1) 253:1 0 7.7G 0 lvm [SWAP]
vg_data3-LogVol04 (dm-2) 253:2 0 46.5G 0 lvm
vg_data4-LogVol03 (dm-3) 253:3 0 97.7G 0 lvm /map1
vg_data5-LogVol02 (dm-4) 253:4 0 97.7G 0 lvm /map2
sdb 8:16 0 50G 0 disk
For a mounted volume, say /map1, how do I directly get the physical volume associated with it? Is there a direct command to fetch this information?
There is no direct command to show that information for a mount. You can run
lvdisplay -m
which will show which physical volumes are currently being used by the logical volume.
Remember, though, that there is no such thing as a direct association between a logical volume and a physical volume. Logical volumes are associated with volume groups. Volume groups have a pool of physical volumes over which they can distribute any logical volume. If you always want to know that a given LV is on a given PV, you have to restrict the VG to only having that one PV, which rather misses the point. You can use pvmove to push extents off a PV (sometimes useful for maintenance), but you can't stop new extents being created on it when logical volumes are extended or created.
As to why there is no such potentially useful command...
LVM is not ZFS. ZFS is a complete storage and filesystem management system, managing both storage (at several levels of abstraction) and the mounting of filesystems. LVM, in contrast, is just one block-layer abstraction in the Linux storage stack. It provides a layer of abstraction on top of physical storage devices and makes no assumptions about how the logical volumes are used.
Leaving the grep/awk/cut/whatever to you, this will show which PVs each LV actually uses:
lvs -o +devices
You'll get a separate line for each PV used by a given LV, so if an LV has extents on three PVs you will see three lines for that LV. The PV device node path is followed by the starting extent (I think) of the data on that PV in parentheses.
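To go from a mountpoint straight to its PV devices, a hedged sketch along these lines should work (it assumes the mount source is an LVM logical volume; findmnt is part of util-linux):
$ lv=$(findmnt -n -o SOURCE /map1)    # e.g. /dev/mapper/vg_data4-LogVol03
$ sudo lvs --noheadings -o devices "$lv"
This prints each backing PV as /dev/<pv>(<start-extent>); strip the parenthesised offset if you only want the device nodes.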
I need to emphasize that there is no direct relation between a mountpoint (logical volume) and a physical volume in LVM. This is one of its design goals.
However, you can traverse the associations between the logical volume, the volume group and the physical volumes assigned to that group. This only tells you that the data is stored on one of those physical volumes, not where exactly.
I couldn't find a command which can produce the output directly. However, you can cobble something together using mount, lvdisplay, vgdisplay and awk|sed:
mp=/mnt; vgdisplay -v $(lvdisplay $(mount | awk -v mp="$mp" '$3==mp{print $1}') | awk '/VG Name/{print $3}')
I'm using the shell variable mp to pass the mount point to the command. Note that it has to be set as a separate command, as above; a prefix assignment on the same command line would not be visible inside the command substitutions. (You need to execute the command as root or using sudo.)
For my test-scenario it outputs:
...
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
...
VG Size 992.00 MiB
PE Size 4.00 MiB
Total PE 248
Alloc PE / Size 125 / 500.00 MiB
Free PE / Size 123 / 492.00 MiB
VG UUID VfOdHF-UR1K-91Wk-DP4h-zl3A-4UUk-iB90N7
--- Logical volume ---
LV Path /dev/vg1/testlv
LV Name testlv
VG Name vg1
LV UUID P0rgsf-qPcw-diji-YUxx-HvZV-LOe0-Iq0TQz
...
Block device 252:0
--- Physical volumes ---
PV Name /dev/loop0
PV UUID Qwijfr-pxt3-qcQW-jl8q-Q6Uj-em1f-AVXd1L
PV Status allocatable
Total PE / Free PE 124 / 0
PV Name /dev/loop1
PV UUID sWFfXp-lpHv-eoUI-KZhj-gC06-jfwE-pe0oU2
PV Status allocatable
Total PE / Free PE 124 / 123
If you only want to display the physical volumes you might pipe the results of the above command to sed:
above command | sed -n '/--- Physical volumes ---/,$p'
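On reasonably recent LVM versions, a shorter hedged alternative is to ask pvs directly for the PVs in the volume group found above (vg1 is the VG from my test scenario):
$ sudo pvs --noheadings -o pv_name -S vgname=vg1
  /dev/loop0
  /dev/loop1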
# Resolve the mountpoint to its backing device node
dev=$(df /map1 | tail -n 1 | awk '{print $1}')
# If it is an LVM device, print the physical volumes behind it; otherwise print the device itself
echo "$dev" | grep -q '^/dev/mapper' && lvdisplay -m "$dev" 2>/dev/null | awk '/Physical volume/{print $3}' || echo "$dev"

How To Mount A Hard Disk Of File-System Type "devtmpfs"

I'm trying to recover some data from a hard drive extracted from a broken laptop, and I'm having problems mounting the disk on my current system (Linux Mint). The hard disk I'm recovering from ran Debian. Simply put, I'm confused as to how I can mount the hard drive to access the files; it's not as simple as any other mount I've done. The following details the struggles and information I've encountered.
I get the following output when trying to mount the hard drive with different file-system types. I should add that the file-system type isn't automatically detected when using auto, and "sdb" is definitely the correct device for the disk (taken from dmesg).
$ mount /dev/sdb /mnt/usb -t ntfs
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
The following returns the same message when any of the other common file-system types is used:
$ sudo mount /dev/sdb usb -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The results from these commands led me to believe that there was an issue with the hard disk and its partitions; however, fdisk showed that its partitions do seem to be valid and correct:
$ sudo fdisk /dev/sdb -l
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002da94
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 475920383 237959168 83 Linux
/dev/sdb2 475922430 488396799 6237185 5 Extended
/dev/sdb5 475922432 488396799 6237184 82 Linux swap / Solaris
I then decided to try to verify the file-system type of the hard drive, which seems to be "devtmpfs", according to the following df command:
$ df /dev/sdb -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 1014764 4 1014760 1% /dev
And so finally, I mounted the hard drive using -t devtmpfs, which succeeds; however, I'm left with a confusing file system, very unlike what I would expect from what was a standard Debian setup.
It contains file folders such as "block","bus","char","disk","dri","mapper"... and files like "sda1","sdb","sdb1","tty","vcs".
I'm totally stumped as to how I should progress, and I'm pretty convinced the hard disk isn't broken and that I'm just mounting it incorrectly. How can I successfully mount the disk so I can access my files? Any help would be greatly appreciated.
OK, you are trying to mount the entire disk instead of an individual partition, which is why you are getting the error. In short, the command you need is:
mount /dev/sdb1 /mnt/usb
The file /dev/sdb references the entire disk as a block device. This includes the partition table at the start, which is why mount can't find a filesystem there. The file /dev/sdb1 references the first partition, which is where your filesystem will be. From the looks of your fdisk output, this is not an NTFS partition: NTFS is a Windows filesystem, and the partition is marked as Linux (most likely you will have ext4, unless you specifically set up something different).
To add a quick explanation of devtmpfs: this is a special filesystem which contains the block and character device nodes managed by udev. You can google both for more information, but by now I'm sure you know it's not what you are looking for.
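If you want to check what is actually on the partition before mounting, blkid will identify the filesystem type so you don't have to guess (a small hedged addition; /mnt/usb is the mountpoint from your question):
$ sudo blkid /dev/sdb1          # should report TYPE="ext4" or similar
$ sudo mount /dev/sdb1 /mnt/usb # mount can then autodetect the type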
