XFS filesystem remount doesn't work when modifying quota configuration - linux

1. At the beginning:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
2. Try to modify:
mount -o remount,rw,relatime,attr2,inode64,prjquota /dev/sdb1 /home
3. Check it again:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
It doesn't work.
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Aug 9 15:24:43 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8f1038a3-6c31-4ce1-a9ef-3d7325e10bef / ext4 defaults 1 1
UUID=c687eab8-3ddd-4756-b91e-ad562b522f7c /boot ext4 defaults 1 2
UUID=7ae72a46-1407-49e6-8669-95bb9e592794 /home xfs rw,relatime,attr2,inode64,prjquota 0 0
UUID=3ccea12f-25d0-437b-9c4b-6ad6a9bd724c /tmp xfs defaults 0 0
UUID=b8ab4016-49bd-4f48-9620-5bda76f4d8b1 /var/log xfs defaults 0 0
UUID=8b9a7ada-3f02-4ee5-8010-ad32a5d7461e swap swap defaults 0 0
I can modify /etc/fstab and restart the machine to make it work. But is there any way to change the quota configuration without a reboot?

Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch06s09.html
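In practice this means you don't need a full reboot, only a fresh initial mount. A minimal sketch, assuming nothing else is holding /home open (the prjquota option is already in the /etc/fstab line above):
umount /home
mount /home          # a fresh initial mount picks up prjquota from /etc/fstab
mount | grep home    # should now show prjquota instead of noquota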
BTW, if you need to enable quotas for the root partition, /etc/fstab does not help; you need to tweak the kernel boot options instead.
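For example, on an XFS root you can pass the quota option to the initial root mount via the kernel command line. A sketch assuming a GRUB2-based distribution (paths and commands differ per distro):
# /etc/default/grub: append rootflags=pquota to the existing kernel arguments
GRUB_CMDLINE_LINUX="... rootflags=pquota"
# regenerate the config (update-grub on Debian-based systems), then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg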

Related

How to solve the error Check for integrity of file "/etc/resolv.conf" failed when running ./runcluvfy.sh?

I want to install Oracle Database 12cR1 Real Application Clusters on the Oracle Linux operating system.
I did all the configuration on node 1 and node 2, but during the installation of Grid Infrastructure I got the following errors:
checking DNS response from all servers in "/etc/resolv.conf"
checking response for name "kaash-his-2" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------------------ ------------------------ ----------
checking response for name "kaash-his-1" from each of the name servers specified in "/etc/resolv.conf"
Node Name Source Comment Status
------------------ ------------------------ ----------
Check for integrity of file "/etc/resolv.conf" failed
And this is the output of the resolv.conf file:
[root@KAASH-HIS-1 tmp]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.93.200.222
nameserver 10.93.200.223
[root@KAASH-HIS-2 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 10.93.200.222
nameserver 10.93.200.223
The other error:
Starting check for /dev/shm mounted as temporary file system ...
ERROR:
PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "33554432k" megabytes which is less than the required size of "2048" megabytes on node ""
PRVE-0426 : The size of in-memory file system mounted as /dev/shm is "33554432k" megabytes which is less than the required size of "2048" megabytes on node ""
Check for /dev/shm mounted as temporary file system failed
I already added the /dev/shm entry to /etc/fstab; these are the file's details:
[root@KAASH-HIS-1 tmp]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Dec 27 13:35:34 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol_kaash--his--1-root / xfs defaults 0 0
UUID=8c639649-0a25-48d7-9fe5-9ed62090f457 /boot xfs defaults 0 0
/dev/mapper/ol_kaash--his--1-home /home xfs defaults 0 0
/dev/mapper/ol_kaash--his--1-swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs size=32g 0 0
How can I solve these errors, please?
After days of searching I found the solution for these errors:
1- Edit /etc/resolv.conf and add search and domain entries on all nodes:
[root@KAASH-HIS-1 tmp]# vi /etc/resolv.conf
# Generated by NetworkManager
search "local-domain-name"
domain "local-domain-name"
nameserver 10.93.200.222
nameserver 10.93.200.223
2- About the /dev/shm size error:
You can change the size and remount using the following command as the root user:
# mount -o remount,size=2G /dev/shm
Then confirm the size by using:
# df -h
During the installation you can ignore the warning as long as the mounted size for /dev/shm is greater than 2048 MB.
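To make the size also survive a reboot, the tmpfs line in /etc/fstab can carry the same option (illustrative size; your file already carries size=32g, which is more than enough):
tmpfs /dev/shm tmpfs defaults,size=2g 0 0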

Paramiko exec_command not working with mkfs?

I have an issue executing the following bash with Paramiko:
def format_disk(self, device, size, dformat, mount, name):
    stdin_, stdout_, stderr_ = self.client.exec_command(f"pvcreate {device};" \
        f"vgcreate {name}-vg {device};" \
        f"lvcreate -L {size} --name {name}-lv {name}-vg;" \
        f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;" \
        f"mkdir {mount};" \
        f"echo '/dev/{name}-vg/{name}-lv {mount} {dformat} defaults 0 0' >> /etc/fstab")
    print(f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;")
The print statement outputs: mkfs.ext4 /dev/first_try-vg/first_try-lv; If I copy and paste this exact command on the server, there are no errors and it formats the disk as expected.
Troubleshooting steps
Server before running the Python script:
ls: cannot access /first_try: No such file or directory
[root@localhost ~]# vgs
[root@localhost ~]# lvs
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
Server after running the Python script:
[root@localhost ~]# ls /first_try/
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
first_try-vg 1 1 0 wz--n- <20.00g <15.00g
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
first_try-lv first_try-vg -wi-a----- 5.00g
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
/dev/first_try-vg/first_try-lv /first_try ext4 defaults 0 0
[root@localhost ~]# mount -a
mount: wrong fs type, bad option, bad superblock on /dev/mapper/first_try--vg-first_try--lv,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
The error from mount -a indicates that the disk is not formatted.
If I format the disk manually and run mount -a, it works.
Example:
[root@localhost ~]# mkfs.ext4 /dev/first_try-vg/first_try-lv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 4.7G 14G 27% /
devtmpfs 471M 0 471M 0% /dev
tmpfs 487M 0 487M 0% /dev/shm
tmpfs 487M 8.4M 478M 2% /run
tmpfs 487M 0 487M 0% /sys/fs/cgroup
/dev/sda1 297M 147M 151M 50% /boot
tmpfs 98M 12K 98M 1% /run/user/42
tmpfs 98M 0 98M 0% /run/user/0
/dev/mapper/first_try--vg-first_try--lv 4.8G 20M 4.6G 1% /first_try
Paramiko could not handle the output from mkfs. I changed the command to use the -q (quiet) flag and was able to get the script to run successfully.
New command: mkfs -q -t {dformat} /dev/{name}-vg/{name}-lv
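If you would rather keep mkfs verbose, a more robust pattern is to drain the output streams and wait for the exit status before moving on. A minimal sketch, continuing inside format_disk right after the exec_command call from the question:
    out = stdout_.read().decode()                # drain stdout; blocks until the remote command finishes
    err = stderr_.read().decode()                # drain stderr as well
    status = stdout_.channel.recv_exit_status()  # exit code, safe to fetch once output is consumed
    if status != 0:
        raise RuntimeError(f"remote command failed ({status}): {err}")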

Fails to `mkdir /mnt/vzsnap0` for Container Backups with Permission Denied

This is all done as the root user.
The script for backups at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160-161 in that script are:
my $rootdir = $default_mount_point;
mkpath $rootdir;
After the installation, before I created any images or did any backups, I set up two things.
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition is as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the relevant filesystem and directory-permission files and details.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I want to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record the container image is only 8GB in size.
Cloning containers does work and snapshots work.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on Backup now, then select your storage (I have backups and local, and both produce this error), then select the state of the container (Snapshot, Suspend, and Stop each produce the same error), then the compression type (none, LZO, and gzip each produce the same error). Once all that is set you click Backup and get the following output.
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in with an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM?
A) I don't know. I don't have an extra system to try it on and I need the container's I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try but I'd really rather not have to just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with id 1001 which has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add your root as well as the user under which the GUI runs to the group with id 1002.
A) I have no problem writing to the /mnt/backups directory. I just did cd /mnt/backups; mkdir test and that was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious that the problem is the permissions of the /mnt directory.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
References
lsattr output
According to the inode flags (attributes) manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file contents or metadata (permissions, timestamps, ownership, link count and so on). (This restriction applies even to the superuser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can set or clear this attribute.
As long as the bounty is still up I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of that works. Note: it is very slow.
Since I am able to write to the /mnt/backups directory, which exists on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
The bounty remains for anyone who can get the /mnt directory to work as the mount path so the backup script can successfully mount vzsnap0.
1) Perhaps your "/mnt/vzsnap0" is mounted as read-only?
This may follow from your:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means that on a filesystem error the partition is remounted read-only. Perhaps this setting applies to your mounted filesystem as well.
Can you try remounting the drive as in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
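For reference, the remount from that link is a one-liner (adjust the mount point to the affected filesystem):
mount -o remount,rw /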
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory.[?]
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
It turns out this is a pretty long-standing issue with Ubuntu Make which many people have faced.
I saw a workaround mentioned by an Ubuntu developer in the link above.
Just follow the steps below:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. It works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, one can swap them.
Just change /mnt/backups to /mnt/sshfs/backups, and the vzdump will work.
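For example (illustrative paths, assuming the sshfs mount shown in the question's df output):
umount /mnt/backups
mkdir -p /mnt/sshfs/backups
sshfs 10.0.0.10:/mnt/raid/proxmox_backup /mnt/sshfs/backups
Then point the Proxmox 'backups' storage at the new path.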

mount option stripe not defined in fstab

I have an AWS instance.
Suddenly I can see this new mount option stripe=32736
/dev/xvdb on /var/lib/elasticsearch0 type ext4 (rw,relatime,stripe=32736,data=ordered)
/dev/xvdc on /var/lib/elasticsearch1 type ext4 (rw,relatime,stripe=32736,data=ordered)
But this option does not appear in fstab:
root@thorin:~# cat /etc/fstab
# HEADER: This file was autogenerated at 2017-09-12 16:38:10 +0200
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /var/lib/elasticsearch0 ext4 defaults 0 0
/dev/xvdc /var/lib/elasticsearch1 ext4 defaults 0 0
11.0.0.228://el_backup /srv/backup/el nfs4 tcp,nolock,rsize=32768,wsize=32768,intr,noatime,actimeo=3 0 0
So, I have two questions.
Why has this happened?
How can I fix this?
Some time ago I fixed this on another machine. It was something related to RAID headers. There was some tool to set the stripe to 0, but I can't find it now.
I can answer the second question:
tune2fs -f -E stride=0 /dev/xvdb
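To see where the value comes from and to confirm the change, the RAID layout hints stored in the ext4 superblock can be inspected (illustrative device, same as above; a fresh mount is needed before the stripe= option disappears from the mount output):
tune2fs -l /dev/xvdb | grep -i 'stride\|stripe'
umount /var/lib/elasticsearch0 && mount /var/lib/elasticsearch0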

linux kernel crash dump creation failure

I have a Linux VPX on Xen which is not creating any core dump when a panic occurs.
Which part of the Linux code contains the crash-dump creation program, and how can I debug this?
Please check the server's vmcore configuration. Kindly follow the steps below.
1. /etc/kdump.conf will have the lines mentioned below:
-----------------------------snip-----------------------------
ext4 UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a
path /var/crash/vmcore
-----------------------------snip-----------------------------
2. /etc/fstab will have the UUID and filesystem data:
-----------------------------snip-----------------------------
# /etc/fstab
# Created by anaconda on Wed May 25 16:10:52 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=117b7a8d-0a8b-4fc8-b82b-f3cfda2a02df / ext4 defaults 1 1
UUID=e696757d-0321-4922-8327-3937380d332a /boot ext4 defaults 1 2
UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a /data ext4 defaults 1 2
UUID=d0dc1c92-efdc-454f-a337-dd1cbe24d93d /prd ext4 defaults 1 2
UUID=c8420cde-a816-41b7-93dc-3084f3a7ce21 swap swap defaults 0 0
#/dev/dm-0 /data1 ext4 defaults 0 0
#/dev/mapper/mpathe /data1 ext4 defaults 0 0
/dev/mapper/mpathgp1 /data2 ext4 noatime,data=writeback,errors=remount-ro 0 0
LABEL=/DATA1 /data1 ext4 noatime,data=writeback,errors=remount-ro 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
-----------------------------snip-----------------------------
3. With the above configuration, the vmcore will be generated under the /data/var/crash/vmcore path.
Note: the generated vmcore can be more than 10 GB, hence configure a path with enough space.
Regards,
Jain
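As a quick end-to-end check (standard kdump tooling, not specific to the asker's VPX): confirm that a crash kernel is actually reserved and the kdump service is running, then force a test panic from a console you can afford to lose:
grep crashkernel /proc/cmdline   # a crashkernel=... reservation must be present
systemctl status kdump           # or: service kdump status on older init systems
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger     # triggers a panic; the machine reboots after writing the dump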
