mount option stripe not defined in fstab - linux

I have an AWS instance.
Suddenly I can see this new mount option stripe=32736
/dev/xvdb on /var/lib/elasticsearch0 type ext4 (rw,relatime,stripe=32736,data=ordered)
/dev/xvdc on /var/lib/elasticsearch1 type ext4 (rw,relatime,stripe=32736,data=ordered)
But this option does not appear in fstab:
root@thorin:~# cat /etc/fstab
# HEADER: This file was autogenerated at 2017-09-12 16:38:10 +0200
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /var/lib/elasticsearch0 ext4 defaults 0 0
/dev/xvdc /var/lib/elasticsearch1 ext4 defaults 0 0
11.0.0.228://el_backup /srv/backup/el nfs4 tcp,nolock,rsize=32768,wsize=32768,intr,noatime,actimeo=3 0 0
So, I have two questions.
Why has this happened?
How can I fix this?
Some time ago I fixed this on another machine. It was something related to RAID headers. There was a tool to set the stripe to 0, but I can't find it now.

I can answer the second question:
tune2fs -f -E stride=0 /dev/xvdb
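To double-check that it took effect, a rough verification sketch (assuming the same devices and mount points as above):
sudo tune2fs -l /dev/xvdb | grep -Ei 'stride|stripe'   # the RAID stride field should now be 0 or absent
sudo umount /var/lib/elasticsearch0 && sudo mount /var/lib/elasticsearch0
mount | grep elasticsearch0                            # the stripe= option should be gone
If the RAID stripe width field is still non-zero, it may need zeroing as well (check tune2fs(8) for the matching extended option). Repeat for /dev/xvdc if it shows the same symptom.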

Related

Changing EC2 Instance Type modified EBS root device UUID and made disk read only. How to resolve?

I had a fully working Amazon Linux 2 instance running on the t2.small instance type. I wanted to try changing the instance to a t2.medium type to test. As I have done in the past, I simply shut down the instance, changed the type, and then restarted the instance.
After the restart, Apache was down and my sites were unreachable. I was able to log in to the instance, and when trying to start Apache I discovered that the root drive was now read-only, which prevented services from starting. Through some troubleshooting I was able to get the drive remounted and things running as normal, but every time I restart the instance it goes back to read-only and I have to perform the same fix to get it back to normal. I believe it's an issue with my /etc/fstab root device UUID not matching the current root device UUID. I never changed any of the attached EBS volumes, so I'm not sure how the change occurred.
Some relevant info:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
To discover the UUID mismatch/fix, I performed the following:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 50G 0 part /
xvdb 202:16 0 50G 0 disk
xvdf 202:80 0 50G 0 disk
└─xvdf1 202:81 0 50G 0 part
$ sudo blkid
/dev/xvda1: LABEL="/" UUID="2a7884f1-a23b-49a0-8693-ae82c155e5af" TYPE="xfs" PARTLABEL="Linux" PARTUUID="4d1e3134-c9e4-456d-a253-374c91394e99"
/dev/xvdf1: LABEL="/" UUID="a8346192-0f62-444c-9cd0-655ed0d49a8b" TYPE="ext4" PARTLABEL="Linux" PARTUUID="2688b30d-29ef-424f-9196-05ec7e4a0d80"
I had read that a possible fix would be to perform the following:
$ sudo mount -o remount,rw /
mount: /: can't find UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af.
Obviously, that didn't work. So I looked at my /etc/fstab:
#
UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1
/swapfile swap swap defaults 0 0
Seeing this mismatch, I tried:
sudo mount -o remount,nouuid /
Which worked, made the root writeable and I was able to get services back up and running.
So, this is how I've come to the belief that it has to do with the mismatch of the UUID in fstab.
My Questions:
Should I change the entry in /etc/fstab to match the current UUID: 2a7884f1-a23b-49a0-8693-ae82c155e5af?
Any idea why this happened and how I can prevent it from happening in the future?
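If you do decide to bring /etc/fstab in line with what blkid reports, a minimal sketch (back up the file first; the UUIDs below are the ones from your own output):
$ sudo cp /etc/fstab /etc/fstab.bak
$ sudo sed -i 's/UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af/UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af/' /etc/fstab
$ sudo mount -o remount /    # the remount should now be able to look up the root device without nouuid
Worth verifying with a test reboot before relying on it.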

xfs filesystem remount doesn't work when modifying quota configuration

1. At the beginning:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
2. Try to modify:
mount -o remount,rw,relatime,attr2,inode64,prjquota /dev/sdb1 /home
3. Check it again:
mount | grep home
/dev/sdb1 on /home type xfs (rw,relatime,attr2,inode64,noquota)
It doesn't work.
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Aug 9 15:24:43 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8f1038a3-6c31-4ce1-a9ef-3d7325e10bef / ext4 defaults 1 1
UUID=c687eab8-3ddd-4756-b91e-ad562b522f7c /boot ext4 defaults 1 2
UUID=7ae72a46-1407-49e6-8669-95bb9e592794 /home xfs rw,relatime,attr2,inode64,prjquota 0 0
UUID=3ccea12f-25d0-437b-9c4b-6ad6a9bd724c /tmp xfs defaults 0 0
UUID=b8ab4016-49bd-4f48-9620-5bda76f4d8b1 /var/log xfs defaults 0 0
UUID=8b9a7ada-3f02-4ee5-8010-ad32a5d7461e swap swap defaults 0 0
I can modify /etc/fstab and then restart the machine to make it work. But is there any way I can change the quota configuration without a reboot?
Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial mount for quotas to be in effect.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch06s09.html
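So for a non-root filesystem like /home, the closest you can get to "without reboot" is a full unmount/mount cycle rather than a remount. A sketch, assuming nothing is holding /home open:
umount /home          # fails with "target is busy" if any process still has files open there
mount /home           # a fresh mount picks up rw,relatime,attr2,inode64,prjquota from /etc/fstab
mount | grep home     # should now show prjquota instead of noquota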
BTW, if you need to enable quota for the root partition, /etc/fstab does not help; you need to tweak the kernel boot options instead.
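For the root filesystem that usually means passing the quota flag as a rootflags= kernel parameter. A sketch for a grub2-based RHEL/CentOS 7 box (paths assume BIOS boot):
vi /etc/default/grub                      # append rootflags=pquota to GRUB_CMDLINE_LINUX
grub2-mkconfig -o /boot/grub2/grub.cfg    # regenerate the grub config
reboot                                    # root quota still needs one reboot to take effect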

Mounting instance storage corrupting ec2 instance [closed]

I'm trying to mount two instance storages in my EC2 instance, and before creating an AMI I just want to verify that they get mounted at the right mount points. But as soon as I stop and start my instance after mounting, I'm unable to connect. It looks like it's unable to boot, even though the EC2 console shows it is running.
I get this right after I create my instance (i2.2xlarge):
[root@xxxxx ec2-user]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 300G 0 disk
└─xvda1 202:1 0 300G 0 part /
xvdb 202:16 0 745.2G 0 disk
xvdc 202:32 0 745.2G 0 disk
Then I format and mount those two at two different locations.
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdb
[root@xxxx ec2-user]# mkfs -t ext4 /dev/xvdc
Here is my fstab:
#
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/xvdb /media/ephemeral0 ext4 defaults,nofail,comment=cloudconfig 0 2
/dev/xvdc /media/ephemeral1 ext4 defaults,nofail,comment=cloudconfig 0 2
After I mount them, I get the layout I want:
[root@xxxxxx ec2-user]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 493G 1.2G 491G 1% /
devtmpfs 30G 68K 30G 1% /dev
tmpfs 31G 0 31G 0% /dev/shm
/dev/xvdb 734G 69M 697G 1% /media/ephemeral0
/dev/xvdc 734G 69M 697G 1% /media/ephemeral1
At this point, when I stop and start the instance, I'm unable to connect to it. I know those two are ephemeral storage and I don't care about their contents. But I want to recreate several similar instances like this, so before creating an AMI I just wanted to test whether the mount configuration survives a restart of this instance.
What am I doing wrong?
This issue is a common problem when working with partitioning. The root cause of the problem is SELinux, which is refusing the SSH connection.
Here are the steps which will solve your issue:
Step 1 : Create the volume in AWS Console and attach it to instance. (Assuming you know this already!)
Step 2: By default it always shows up as /dev/xvdc. Create the partition using fdisk and confirm the lsblk output; it should look like below:
$ sudo fdisk /dev/xvdc
Use option n to create a new partition, accept the defaults to create one partition spanning the entire volume, and option w to write the partition table to the disk.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdc 202:80 0 20G 0 disk
└─xvdc1 202:81 0 20G 0 part
*All the work ahead will be done on this /dev/xvdc1 partition; make sure you are NOT using the bare /dev/xvdc anywhere.
Step 3: Format the new partition using:
$ sudo mkfs -t ext4 /dev/xvdc1
Step 4: Make the entry in fstab as below:
/dev/xvdc1 /var ext4 defaults,noatime,nofail 0 2
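A quick way to sanity-check the new entry before a reboot (assuming the mount point directory already exists):
$ sudo mount -a       # mounts everything in fstab that is not yet mounted; errors show up here instead of at boot
$ df -h               # confirm the new filesystem is mounted where you expect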
Hope that helps :)
Here are some links that might help:
STEPS TO CREATE SEPARATE /VAR PARTITION ON EBS VOLUME AWS
CREATE ROOT SWAP AND LVM PARTITION ON EBS VOLUME (AWS)

linux kernel crash dump creation failure

I have a Linux VPX on Xen which is not creating any core dump when a panic occurs.
Which part of the Linux code contains the crash dump creation program, and how can I debug this?
Please check the server's vmcore configuration. Kindly follow the steps below.
1. /etc/kdump.conf will have the lines mentioned below:
-----------------------------snip-----------------------------
ext4 UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a
path /var/crash/vmcore
-----------------------------snip-----------------------------
2. /etc/fstab will have the UUID and filesystem data:
-----------------------------snip-----------------------------
# /etc/fstab
# Created by anaconda on Wed May 25 16:10:52 2011
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=117b7a8d-0a8b-4fc8-b82b-f3cfda2a02df / ext4 defaults 1 1
UUID=e696757d-0321-4922-8327-3937380d332a /boot ext4 defaults 1 2
UUID=6287df75-b1d9-466b-9d1d-e05e6d044b7a /data ext4 defaults 1 2
UUID=d0dc1c92-efdc-454f-a337-dd1cbe24d93d /prd ext4 defaults 1 2
UUID=c8420cde-a816-41b7-93dc-3084f3a7ce21 swap swap defaults 0 0
#/dev/dm-0 /data1 ext4 defaults 0 0
#/dev/mapper/mpathe /data1 ext4 defaults 0 0
/dev/mapper/mpathgp1 /data2 ext4 noatime,data=writeback,errors=remount-ro 0 0
LABEL=/DATA1 /data1 ext4 noatime,data=writeback,errors=remount-ro 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
-----------------------------snip-----------------------------
3. With the above configuration, the vmcore will be generated under /data/var/crash/vmcore.
Note: the vmcore can be larger than 10 GB, so configure a path with enough free space.
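A rough way to confirm kdump is actually armed before relying on it (commands assume a RHEL-style box; the sysrq trigger deliberately crashes the machine, so only run it in a maintenance window):
service kdump status                  # or: systemctl status kdump on systemd systems
cat /sys/kernel/kexec_crash_loaded    # 1 means a crash kernel is loaded
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger          # forces a panic; a vmcore should then appear under the configured path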
Regards,
Jain

Lost tmp Directory on VPS - /scripts/securetmp issues

I have been trying to follow instructions on how to increase the /tmp directory on our VPS from 512 MB to 3 GB. I successfully modified the tmpdsksize variable in securetmp to 3072000 and saved it using the vi editor, and then I entered these lines on the command line:
/etc/init.d/cpanel stop
/etc/init.d/httpd stop
/etc/init.d/lsws stop
/etc/init.d/mysql stop
umount -l /tmp
umount -l /var/tmp
mv /usr/tmpDSK /usr/tmpDSK_back
/scripts/securetmp
/etc/init.d/cpanel start
/etc/init.d/httpd start
/etc/init.d/lsws start
/etc/init.d/mysql start
This is meant to recreate your tmp directory on the VPS.
However, this did not work and I now have no /tmp directory. The VPS is working, and the problem that led me to try to increase the tmp directory size has now been fixed (the original problem was running a large select query on the database). But I am concerned about the lack of the /tmp directory, as this was not my intention. Is it OK to run without one?
The problem with it not creating one seems to come down to running /scripts/securetmp.
Basically when I run this I get errors so my tmp directory is not recreated. The errors I get are these:
root [~]# /scripts/securetmp
/scripts/securetmp: line 1: !/usr/bin/perl: No such file or directory
/scripts/securetmp: line 7: syntax error near unexpected token `}'
/scripts/securetmp: line 7: `BEGIN { unshift @INC, '/usr/local/cpanel'; }'
root [~]# /scripts/securetmp: line 7: syntax error near unexpected token `}'
Any ideas where I am going wrong? I don't have a ton of Linux experience; it's a case of Google and learn. I am accessing the VPS remotely using PuTTY. I have Googled around a lot but can't find much info on /scripts/securetmp errors. Everywhere that talks about increasing the tmp directory size just assumes that running that script will work. I did not modify lines 1 and 7 when changing the tmp directory size.
The VPS is running CentOS 6.3.
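Those errors look like the shell, not perl, is interpreting the script, which is what happens when the #!/usr/bin/perl shebang on line 1 loses its leading #. A quick check, just a guess based on the error text:
head -2 /scripts/securetmp     # line 1 should read exactly: #!/usr/bin/perl
perl /scripts/securetmp        # invoking perl directly sidesteps a broken shebang line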
Running /scripts/securetmp to increase my tmpDSK size didn't work for me either: that script simply deleted the partition, so I was left with no tmpDSK!
This is on an Xen VPS server with WHM/cpanel.
After many hours of persistence, I found this post:
How to increase the size of disk space /tmp (/usr/tmpDSK) partition in linux server
Only thing I had to change was:
1.) Stop MySql service and process kill the tailwatchd process.
[root@server ~]# /etc/init.d/mysqld stop
[root@server ~]# kill -9 2522
To:
1.) Stop MySql service and process kill the tailwatchd process.
[root@server ~]# /etc/init.d/cpanel stop
[root@server ~]# /etc/init.d/mysql stop
(To start these services again when you've finished, change the stop to start.)
Also, at step No. 11:
11.) Edit the fstab and replace the /tmp entry line with:
/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
Here is how to access and edit that pesky /etc/fstab over SSH:
To make sure this partition is mounted automatically after every reboot, edit the /etc/fstab and replace /tmp entry line with the following one.
/usr/temp-disk /tmp ext3 rw,noexec,nosuid,loop 0 0
[root@server ~]# pico -w /etc/fstab
You should see something like this:
code:
/dev/hda3 / ext3 defaults,usrquota 1 1
/dev/hda1 /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
At the bottom add
code:
/usr/temp-disk /tmp ext3 rw,noexec,nosuid,loop 0 0
While we are at it we are going to secure /dev/shm. Look for the mount line for /dev/shm and change it to the following:
none /dev/shm tmpfs noexec,nosuid 0 0
Umount and remount /dev/shm for the changes to take effect.
[root@server ~]# umount /dev/shm
[root@server ~]# mount /dev/shm
Hit: Ctrl + x to exit, y to save
Well I didn't quite do that either.
Here is my /etc/fstab:
/dev/sda1 / ext3 defaults,usrquota,grpquota 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs noexec,nosuid 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda2 swap swap defaults 0 0
/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
/tmp /var/tmp ext3 defaults,bind,noauto 0 0
I already had the /usr/tmpDSK line, so I just replaced that line with the one recommended, leaving the bottom /tmp line intact.
Everything now works great.
My 1G tmpDSK which was 85% full, has now been increased to 2G, and only 7% full.
I also didn't restore the contents of my tmp backup (it was over-full of crud).
Best to check first, though, that everything is still working OK - you might have something in that previous tmp backup that's needed.
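A quick way to confirm the end result on your own box (a sketch):
df -h /tmp                        # should show the new, larger size
mount | grep -E 'tmpDSK|/tmp '    # /usr/tmpDSK should be mounted on /tmp with noexec,nosuid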

Resources