Can't open /dev/sda2 exclusively. Mounted filesystem? - linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3999.7 GB, 3999688294400 bytes
255 heads, 63 sectors/track, 486267 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
/dev/sda2 1 2090 16785120 82 Linux swap / Solaris
/dev/sda3 1 218918 1758456029+ 8e Linux LVM
Partition table entries are not in disk order
Above is my "fdisk -l" output. My current problem is that when I try to run "pvcreate /dev/sda2" it gives me "Can't open /dev/sda2 exclusively. Mounted filesystem?". I have been searching Google for a while trying to find a way to fix this; I definitely tried several of the suggestions, but none of them worked.

You're trying to initialize a partition for use by LVM that's currently used by swap.
You should instead run
pvcreate /dev/sda3
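If the swap space on /dev/sda2 is genuinely no longer needed and you really want that partition in LVM, a rough sketch (assuming you also remove its entry from /etc/fstab) would be:
swapoff /dev/sda2    # release the partition from active swap
# remove or comment out the corresponding swap line in /etc/fstab
pvcreate /dev/sda2   # the device can now be opened exclusively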

I updated to a newer kernel and the problem was resolved in RHEL 6. I had upgraded from 2.6.32-131.x to 2.6.32-431.x.

Check that the disks/partitions you are using are not mounted on any directory on your system.
If they are, unmount them and try again.
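For example, to check what is still holding the partition before retrying (swap counts too, since an active swap partition is also opened exclusively):
findmnt /dev/sda2    # prints a mountpoint if the partition is mounted
swapon -s            # lists active swap devices
umount /dev/sda2     # only needed if findmnt reported a mountpoint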

Related

Restore backup from 'dd' (possible problem with softRAID HDDs)

I have 2 servers, both Ubuntu 18.04. The first is my main server, which I want to back up. The second is a test server (KS-4 Server - Atom N2800 - 4GB DDR3 1066 MHz - SoftRAID 2x 2TB SATA) where I want to test my backup.
I made the backup with the 'dd' command and then downloaded it (wget) onto server 2 (490 GB, ~24 hours of downloading).
Now I want to test my backup, so I tried:
dd if=sdadisk.img of=/dev/sdb
I get:
193536+0 records in
193536+0 records out
99090432 bytes (99 MB, 94 MiB) copied, 5.06239 s, 19.6 MB/s
But nothing changes.
fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8efed6c9
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 4096 1050623 1046528 511M fd Linux raid autodetect
/dev/sda2 1050624 3905974271 3904923648 1.8T fd Linux raid autodetect
/dev/sda3 3905974272 3907020799 1046528 511M 82 Linux swap / Solaris
GPT PMBR size mismatch (879097967 != 3907029167) will be corrected by w(rite).
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B3B13223-6382-4EB6-84C3-8E66C917D396
Device Start End Sectors Size Type
/dev/sdb1 2048 1048575 1046528 511M EFI System
/dev/sdb2 1048576 2095103 1046528 511M Linux RAID
/dev/sdb3 2095104 878039039 875943936 417.7G Linux RAID
/dev/sdb4 878039040 879085567 1046528 511M Linux swap
Disk /dev/md1: 511 MiB, 535756800 bytes, 1046400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/md2: 1.8 TiB, 1999320842240 bytes, 3904923520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
lsblk -l
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 511M 0 part
│ └─md1 9:1 0 511M 0 raid1 /boot
├─sda2 8:2 0 1.8T 0 part
│ └─md2 9:2 0 1.8T 0 raid1 /
└─sda3 8:3 0 511M 0 part [SWAP]
sdb 8:16 0 1.8T 0 disk
├─sdb1 8:17 0 511M 0 part
├─sdb2 8:18 0 511M 0 part
├─sdb3 8:19 0 417.7G 0 part
└─sdb4 8:20 0 511M 0 part
I think the problem is with the disk configuration on server 2, specifically with the 'Linux raid' setup between them. I have been searching for how to change it and testing commands like 'mdadm ...', but it's not working as I expected. So I have questions:
How do I change the 'Linux raid' setup from 2 HDDs to 1 HDD holding the current system, leaving the other HDD clear, so that I can test my backup properly?
Is it generally possible to restore a 490 GB backup onto a 1.8 TB disk?
Did I select the best option for a full Linux backup?
To touch on question #3, "Did I select the best option for a full Linux backup?":
You are using the correct tool, but not the correct items to use the dd command on.
It appears you created a .img file of your sda drive (which is normally only used to create a bootable disk image, not a full backup of the drive).
If you want to create a full backup, for example of /dev/sda you would run the following:
dd if=/dev/sda of=/dev/sdb
But in your case your /dev/sda drive is twice the size of your /dev/sdb drive, so you might want to consider backing up important files using tar and compressing the backup with either gzip or bzip2.
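A minimal sketch of such a tar backup (the target path and archive name are only examples, and the exclude list would need adjusting for your system):
tar -czpf /mnt/backup/rootfs.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp --exclude=/mnt/backup \
    --one-file-system /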

Simulate mounted volume errors to cause read only

A few days ago we encountered an unexpected error where one of the mounted drives on our RedHat Linux machine became read-only. The issue was caused by a network outage in the datacenter.
Now I need to see if I can reproduce the same behavior, where the drive is re-mounted read-only while the application is running.
I tried to remount it read-only, but that didn't work because there are open files (logs being written).
Is there a way to temporarily force the volume read-only if I have root access to the machine (but no access to the hypervisor)?
That volume is mounted via /etc/fstab. Here is the record:
UUID=abfe2bbb-a8b6-4ae0-b8da-727cc788838f / ext4 defaults 1 1
UUID=8c828be6-bf54-4fe6-b68a-eec863d80133 /opt/sunapp ext4 rw 0 2
Here are the output of few commands that shows details about our mounted drive. I can add more details as needed.
Output of fdisk -l
Disk /dev/vda: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008ba5f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 524287966 262142959+ 83 Linux
Disk /dev/vdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Output of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 80G 0 disk
└─vda1 253:1 0 80G 0 part /
vdb 253:16 0 250G 0 disk /opt/sunup
Output of blkid command:
/dev/vda1: UUID="abfe2bbb-a8b6-4ae0-b8da-727cc788838f" TYPE="ext4"
/dev/sr0: UUID="2017-11-13-13-33-07-00" LABEL="config-2" TYPE="iso9660"
/dev/vdb: UUID="8c828be6-bf54-4fe6-b68a-eec863d80133" TYPE="ext4"
Output of parted -l command:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Error: /dev/sr0: unrecognised disk label
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 461kB
Sector size (logical/physical): 2048B/2048B
Partition Table: unknown
Disk Flags:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 42.9GB 42.9GB ext4
Yes, you can do it. But the method proposed here may cause data loss, so use it only for testing.
Supposing you have /dev/vdb mounted as /opt/sunapp, do this:
First, unmount it. You may need to shut down any applications using it first.
Configure a loop device to mirror the contents of /dev/vdb:
losetup /dev/loop0 /dev/vdb
Then, mount /dev/loop0 instead of /dev/vdb:
mount /dev/loop0 /opt/sunapp -o rw,errors=remount-ro
Now, you can run your application. When it is time to make /opt/sunapp read-only, use this command:
blockdev --setro /dev/vdb
After that, attempts to write to /dev/loop0 will result in I/O errors. As soon as the file system driver detects this, it will remount the file system as read-only.
To restore everything back, you will need to unmount /opt/sunapp, detach the loop device, and make /dev/vdb writable again:
umount /opt/sunapp
losetup -d /dev/loop0
blockdev --setrw /dev/vdb
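To confirm the simulation behaves as expected, something along these lines should produce an I/O error and trigger the automatic read-only remount (testfile is just a hypothetical name):
dd if=/dev/zero of=/opt/sunapp/testfile bs=4k count=1 oflag=direct   # expect an I/O error once /dev/vdb is read-only
mount | grep /opt/sunapp    # the mount options should now include "ro"
dmesg | tail                # the kernel log should show the remount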
When I had some issues like corrupted disks, I used ntfsfix.
Please see if these commands solve the problem.
sudo ntfsfix /dev/vda
sudo ntfsfix /dev/vdb
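Note that the volumes in this question are ext4, not NTFS, so the corresponding check here would be fsck/e2fsck rather than ntfsfix; a sketch, to be run only while the filesystem is unmounted:
umount /opt/sunapp
fsck.ext4 -f /dev/vdb    # equivalent to e2fsck -f /dev/vdb
mount /opt/sunapp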

pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering)

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute "pvcreate /dev/sdb1" to create the PV, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed /etc/lvm/lvm.conf for filters, but I do not have any filter that would make LVM ignore the drive. I rebooted the computer after trying to create the PV with pvcreate. I researched the error on Google but found nothing.
Thanks. Any help would be appreciated. Manuel
Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
I had the same problem when trying this on oVirt; in my case the disk was also being claimed by multipath:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                   8:16   0  200G  0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2    0  200G  0 mpath
So also edit the multipath configuration:
vi /etc/multipath.conf
and append the following blacklist entry:
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
Then restart multipathd:
service multipathd restart
This worked for me:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
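If pvcreate still reports the device as filtered after these edits, it may help to confirm that multipath has actually released the disk and to make LVM rescan its devices (assuming the same device names as above):
multipath -ll       # sdb should no longer appear under an mpath device
pvscan --cache      # re-read devices after the filter change
pvcreate /dev/sdb1  # or /dev/sdb, whichever you intend to use
pvs                 # the new PV should now be listed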

How to increase harddisk size of Azure VM

I am using an Azure VM of the RHEL OS type. Currently I am using a Standard D3 v2 size VM. I see only 32 GB of hard disk storage available in the VM. How do I increase the size of the hard disk?
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 32G 185M 100% /
devtmpfs 6.9G 0 6.9G 0% /dev
tmpfs 6.9G 0 6.9G 0% /dev/shm
tmpfs 6.9G 8.4M 6.9G 1% /run
tmpfs 6.9G 0 6.9G 0% /sys/fs/cgroup
/dev/sda1 497M 117M 381M 24% /boot
/dev/sdb1 197G 2.1G 185G 2% /mnt/resource
tmpfs 1.4G 0 1.4G 0% /run/user/1000
Note: I am using unmanaged disk.
If your Virtual Machine was created using the Azure Resource Manager (ARM) you can resize the OS disk or the data disk within the new Azure Portal.
Navigate to the Azure Resource Manager Virtual Machine whose disk(s) you want to resize.
Shut down the Virtual Machine from the Azure portal. Wait until it's completely shut down (de-allocated).
Select ‘Disks’ in the Settings blade (As in the below image).
Disks settings blade
Select the OS or Data disk that you would like to resize.
On the new blade, enter the new disk size (1023GB or 1TB max per disk) (As in the below image).
Change disk size
Hit ‘Save’ on top.
Start the Virtual Machine again.
That’s it! You can log in to the VM and check that the disk(s) now have the newly selected size.
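Keep in mind that resizing the disk in the portal only grows the underlying VHD; inside a Linux VM the partition and filesystem usually still need to be grown. A rough sketch for a typical RHEL layout (the partition number and filesystem type are assumptions; use resize2fs for ext4 instead of xfs_growfs):
sudo growpart /dev/sda 2   # grow partition 2 to fill the enlarged disk (cloud-utils-growpart package)
sudo xfs_growfs /          # XFS root filesystem; for ext4: sudo resize2fs /dev/sda2
df -h /                    # verify the new size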
So basically follow this article: https://learn.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks
az vm deallocate --resource-group myResourceGroup --name myVM
az disk list \
--resource-group myResourceGroup \
--query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' \
--output table
az disk update \
--resource-group myResourceGroup \
--name myDataDisk \
--size-gb 200
az vm start --resource-group myResourceGroup --name myVM
for unmanaged disks:
https://blogs.msdn.microsoft.com/cloud_solution_architect/2016/05/24/step-by-step-how-to-resize-a-linux-vm-os-disk-in-azure-arm/
Based on your description, I tested this in my lab on Red Hat Enterprise Linux Server release 7.3 (Maipo).
Note: before you do this, I strongly suggest you back up your OS VHD. If the operation fails, you may not be able to start your VM.
1. Stop your VM in the Azure Portal.
2. Increase the OS disk size with the Azure CLI:
az vm update -g shui -n shui --set storageProfile.osDisk.diskSizeGB=100
3. Start your VM and SSH into it. You can check df -h and fdisk -l; /dev/sda2 has not grown to 100 GB yet. You need to run the following commands.
sudo -i
[root@shui ~]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 3917 30944256 83 Linux
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 1026048 62914559 30944256 83 Linux
Command (m for help): d
Partition number (1-4): 1
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (63-104857599, default 63): 64
Last sector, +sectors or +size{K,M,G} (64-1026047, default 1026047):
Using default value 1026047
Command (m for help): p
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 64 1026047 512992 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 1026048 62914559 30944256 83 Linux
Command (m for help): wq
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@shui ~]# fdisk -l /dev/sda
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001461e
Device Boot Start End Blocks Id System
/dev/sda1 1 64 512992 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 3917 30944256 83 Linux
4. Reboot your VM.
5. SSH into your VM and resize the filesystem:
xfs_growfs -d /dev/sda2
Now you can check your OS disk with df -h:
[root@shui ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 100G 1.7G 98G 2% /
Use the link below to resize the OS disks of Azure Ubuntu and RHEL servers.
9 Easy Steps To Increase Your Root Volume Of AZURE Instance

KVM Virtual Machine: Wrong Disk Size

Ever since I did a yum update, whenever I create a new (for example) 10GB-disk KVM VPS, the disk space reported inside the VM is stuck at the initial template size (usually 1GB for a Linux template).
Normally it should be 10GB (fdisk says so, but the df command says otherwise).
[root@localhost ~]# resize2fs /dev/vda1
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vda1 to 262160 (4k) blocks.
The filesystem on /dev/vda1 is now 262160 blocks long.
[root@localhost ~]# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/vda1 1008 760 198 80% /
none 246 0 246 0% /dev/shm
[root@localhost ~]# fdisk -l
Disk /dev/vda: 10.7 GB, 10737418240 bytes
4 heads, 32 sectors/track, 163840 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b6106
Device Boot Start End Blocks Id System
/dev/vda1 17 16401 1048640 83 Linux
All the commands above were run inside the VM.
Below is the disk part of the XML configuration on the host node:
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source file='/kvm/v1046-2ogd-j1p2jraixpg1g03y.raw'/>
<target dev='vda' bus='virtio' />
</disk>
A sparse RAW image is used. This was not a problem with older VMs.
du -hs on host node:
650M v1046-2ogd-j1p2jraixpg1g03y.raw
ls -lah on host node:
-rw-r--r-- 1 qemu qemu 10G Dec 21 21:03 v1046-2ogd-j1p2jraixpg1g03y.raw
Any help is really appreciated. Thanks for reading.
Online resize2fs of /dev/vda1 inside the VM cannot help here, because the partition itself is still only 1GB; I had to load GParted and extend the partition manually.
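If the growpart tool (from cloud-utils) is available in the guest, an alternative to booting GParted might be to grow the partition in place and then resize the filesystem; a sketch, untested on this particular setup, and on older kernels a reboot or partprobe may be needed before the kernel sees the larger partition:
growpart /dev/vda 1    # extend partition 1 to the end of the 10GB disk
resize2fs /dev/vda1    # grow the ext filesystem into the enlarged partition
df -h /                # should now report roughly 10GB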
