mount: wrong fs type, bad option, bad superblock on - RHEL

I'm facing issues while mounting an XFS file system on Linux (RHEL 6.7).
The error I see is:
[root@XXXXXXXfgd1000 ~]# mount /dev/vg00_hana/lv00_hana /hana
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg00_hana-lv00_hana,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
[root@XXXXXXXfgd1000 ~]# dmesg | tail
XFS (dm-4): bad version
XFS (dm-4): SB validate failed
XFS (dm-4): bad version
XFS (dm-4): SB validate failed
XFS (dm-4): bad version
XFS (dm-4): SB validate failed
XFS (dm-4): bad version
XFS (dm-4): SB validate failed
XFS (dm-4): bad version
XFS (dm-4): SB validate failed
The steps I followed are:
[root@XXXXXXXfgd1000 ~]# mkfs.xfs -f /dev/vg00_hana/lv00_hana
meta-data=/dev/vg00_hana/lv00_hana isize=512 agcount=4, agsize=60293120 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=0
data = bsize=4096 blocks=241172480, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=117760, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
The version of the xfsprogs package is:
[root@XXXXXXXfgd1000 ~]# xfs_repair -V
xfs_repair version 4.3.0
[root@XXXXXXXfgd1000 ~]#
Any suggestions to overcome this issue are much appreciated!

Issue resolved.
I uninstalled the current version of the XFS package (xfsprogs 4.3.0) and subscribed to SAP HANA. We needed the SAP HANA subscription anyway, since we are using this RHEL image for an SAP HANA installation.
The SAP HANA subscription comes with the Scalable File System Add-On (XFS) at no additional charge.
Thanks.
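For anyone hitting the same dmesg messages: "bad version" / "SB validate failed" typically means the running kernel rejects the superblock format that mkfs.xfs wrote. xfsprogs 4.3.0 creates a v5 (CRC-enabled) superblock by default (note crc=1 in the mkfs output above), which the RHEL 6.7 kernel's XFS driver does not understand. If changing the package or subscription is not an option, a minimal sketch of a workaround is below; the device path is taken from the question, and reformatting destroys any data on the volume.
# Inspect the on-disk superblock version field (read-only check):
xfs_db -r -c "sb 0" -c "p versionnum" /dev/vg00_hana/lv00_hana
# Recreate the filesystem with the older v4 superblock (no CRCs, no
# free-inode btree), which older kernels such as RHEL 6.x can mount.
# Destructive: this erases the volume.
mkfs.xfs -f -m crc=0,finobt=0 /dev/vg00_hana/lv00_hana
mount /dev/vg00_hana/lv00_hana /hana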

Related

Oracle Datapump impdp Inappropriate ioctl for device

I had an error while exporting my schema and then importing it into a new database.
My exporting system is Oracle Linux 7 with a 19c database and the importing system is Oracle Linux 8 with 21c XE. When I import the schema with impdp I receive the error:
impdp system/password@localhost/xepdb1 full=y directory=data_pump_dir dumpfile=test.dmp
ORA-39001: invalid argument
ORA-39000: incorrect specification of dump file
ORA-31619: invalid dump file "opt/oracle/admin/XE/dpdump/CC96F85...01/test.dmp"
ORA-27072: File-I/O-Error
Linux-x86_64 Error: 25: Inappropriate ioctl for device
Additional information: 4
Additional information: 1
As stated in the comments:
When you get the error Inappropriate ioctl for device, Oracle is not responsible; the error is coming from Linux.
Most of the time it is due to one of the following:
The Data Pump file is corrupted.
The file is not a valid Data Pump file.
The Data Pump file is empty.
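A quick way to rule out the corrupted/empty-file cases is to compare the dump file's size and checksum on both hosts before importing. The paths below are only examples, not the actual directories from the question.
# On the exporting (Oracle Linux 7 / 19c) host -- example path:
ls -l /u01/app/oracle/admin/ORCL/dpdump/test.dmp
md5sum /u01/app/oracle/admin/ORCL/dpdump/test.dmp
# On the importing (Oracle Linux 8 / 21c XE) host, after copying:
ls -l /opt/oracle/admin/XE/dpdump/test.dmp
md5sum /opt/oracle/admin/XE/dpdump/test.dmp
If the sizes or checksums differ, or the file is 0 bytes, transfer the dump again with a binary-safe tool such as scp before retrying impdp.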

ext4 signature detected on /dev/sdb1 at offset 1080 while creating Gluster cluster

I am getting the attached error while creating a Gluster cluster on Kubernetes. Can someone please advise why this error is coming and how to resolve it?
This is solved by using the command wipefs -a /dev/sdb1. It wipes the signatures present on the newly created partition.
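If you want to see what would be removed before wiping, wipefs without options only lists the detected signatures; a minimal sketch:
# List existing signatures (non-destructive):
wipefs /dev/sdb1
# Erase all detected signatures -- only do this on a device
# that holds no data you still need:
wipefs -a /dev/sdb1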

Centos cgconfig fails to start

I need Docker installed on one of my servers, and whenever I try to start the Docker service, it fails because of cgconfig. cgconfig throws the following error:
Starting cgconfig service: Error: cannot mount cpu to /cgroup/cpu: No such file or directory
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf or /etc/cgconfig.d [FAILED]
I'm running CentOS 6.5 Final with the following /etc/cgconfig.conf file:
mount {
cpuset = /cgroup/cpuset;
cpu = /cgroup/cpu;
cpuacct = /cgroup/cpuacct;
memory = /cgroup/memory;
devices = /cgroup/devices;
freezer = /cgroup/freezer;
net_cls = /cgroup/net_cls;
blkio = /cgroup/blkio;
}
I appreciate any responses.
To use cgroups on newer versions of CentOS you need to install libcgroup as well as libcgroup-tools:
$ sudo yum install libcgroup
$ sudo yum install libcgroup-tools
To create a group, use cgcreate, e.g.:
$ sudo cgcreate -g memory,cpu,blkio,cpuset:userlimited
To verify that /etc/cgconfig.conf is correct, use cgconfigparser:
$ cgconfigparser -l /etc/cgconfig.conf
For details check: https://wiki.archlinux.org/index.php/cgroups
Note: In CentOS 6 and earlier versions, only libcgroup needed to be installed.
This error may occur because the kernel you are using was booted with cgroup_disable=memory while /etc/cgconfig.conf contains "memory = /cgroup/memory". In that case you can solve it by commenting out "memory = /cgroup/memory" in cgconfig.conf.
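A quick way to confirm this, and a sketch of the fix, assuming the stock CentOS 6 cgconfig.conf layout from the question:
# Was the kernel booted with the memory controller disabled?
grep -o 'cgroup_disable=[^ ]*' /proc/cmdline
# Which controllers does the running kernel actually provide?
cat /proc/cgroups
# If memory is disabled, comment out the memory mount line and retry:
sed -i 's|^\( *memory *= */cgroup/memory;\)|# \1|' /etc/cgconfig.conf
service cgconfig restart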

cassandra connection error : Could not connect to localhost:9160

My Cassandra was working well but suddenly it stopped working!
When I use the cqlsh command I get this error:
connection error : Could not connect to localhost:9160
and in the output.log file I see this:
Service exit with a return value of 1
OpenJDK Client VM warning: Insufficient space for shared memory file:
/tmp/hsperfdata_cassandra/10963
Try using the -Djava.io.tmpdir= option to select an alternate temp location.
INFO 12:23:31,307 Logging initialized
log4j:ERROR Failed to flush writer,
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:297)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:220)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:290)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:294)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:276)
at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
at org.apache.log4j.Category.callAppenders(Category.java:206)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.info(Category.java:666)
at org.apache.cassandra.service.CassandraDaemon.initLog4j(CassandraDaemon.java:118)
at org.apache.cassandra.service.CassandraDaemon.<clinit>(CassandraDaemon.java:65)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at java.lang.Class.newInstance0(Class.java:374)
at java.lang.Class.newInstance(Class.java:327)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:190)
INFO 12:23:31,332 32bit JVM detected. It is recommended to run Cassandra on a 64bit JVM for better performance.
INFO 12:23:31,335 JVM vendor/version: OpenJDK Client VM/1.6.0_27
WARN 12:23:31,335 OpenJDK is not recommended. Please upgrade to the newest Oracle Java release
INFO 12:23:31,335 Heap size: 252641280/253689856
INFO 12:23:31,335 Classpath: /usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/avro-1.4.0-fixes.jar:/usr/share/cassandra/lib/avro-1.4.0-sources-fixes.jar$
INFO 12:23:31,691 JNA mlockall successful
INFO 12:23:31,715 Loading settings from filService exit with a return value of 1
OpenJDK Client VM warning: Insufficient space for shared memory file:
Can somebody help me? :(
The /tmp directory on your host isn't large enough for the temp files that Cassandra wants to create. The size of the temp files is related to the amount of data in the system. Because your database is larger now than it was in the past, it started before but does not start now.
Check the status of the /tmp directory with df. Here is my system:
$ df /tmp
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda6 570881944 350121276 191761552 65% /
To change the location used for these temp files, set java.io.tmpdir as the error message suggests.
On my system (Ubuntu Linux) this can be done by editing the end of the file /etc/cassandra/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS -Djava.io.tmpdir=/opt"
Ensure that the new temp directory has sufficient space and that the permissions are correct; allowing read/write for the cassandra user would probably be enough.
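Putting the pieces together, a minimal sketch, assuming Cassandra runs as the cassandra user; the directory path is only an example, and any filesystem with enough free space will do:
# Create a dedicated temp directory for Cassandra:
sudo mkdir -p /var/lib/cassandra/tmp
sudo chown cassandra:cassandra /var/lib/cassandra/tmp
sudo chmod 700 /var/lib/cassandra/tmp
# Point the JVM at it (end of /etc/cassandra/cassandra-env.sh):
JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS -Djava.io.tmpdir=/var/lib/cassandra/tmp"
# Confirm the filesystem really has room before restarting:
df -h /var/lib/cassandra/tmp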

Initramfs built into custom Linux kernel is not running

I am creating a custom initramfs image that I am building into the Linux kernel (3.2) as a CPIO archive.
The issue I am having is that no matter what I try, the kernel does not appear to even attempt to run from the initramfs.
The files I have in my CPIO archive:
cpio -it < initramfs.cpio
.
init
usr
usr/sbin
lib
lib/libcrypt.so.1
lib/libm.so
lib/libc.so.6
lib/libgcc_s.so
lib/libcrypt-2.12.2.so
lib/libgcc_s.so.1
lib/libm-2.12.2.so
lib/libc.so
lib/libc-2.12.2.so
lib/ld-linux.so.3
lib/ld-2.12.2.so
lib/libm.so.6
proc
sbin
mnt
mnt/root
root
etc
bin
bin/sh
bin/mknod
bin/mount
bin/busybox
sys
dev
4468 blocks
The init script is very simple, and should just initialize devices and spawn a shell (for now):
#!/bin/sh
mount -t devtmpfs none /dev
mount -t proc none /proc
mount -t sysfs none /sys
/bin/busybox --install -s
exec /bin/sh
In the kernel .config I have:
CONFIG_INITRAMFS_SOURCE="../initramfs.cpio"
CONFIG_INITRAMFS_ROOT_UID=0
CONFIG_INITRAMFS_ROOT_GID=0
CONFIG_BLK_DEV_INITRD=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=32768
The kernel builds, and the uImage size grows with the size of the initramfs, so I know the image is being packed in. However, I get this output when I boot:
console [netcon0] enabled
netconsole: network logging started
omap_rtc omap_rtc: setting system clock to 2000-01-02 00:48:38 UTC (946774118)
Warning: unable to open an initial console.
Freeing init memory: 1252K
mmc0: host does not support reading read-only switch. assuming write-enable.
mmc0: new high speed SDHC card at address e624
mmcblk0: mmc0:e624 SU08G 7.40 GiB
mmcblk0: p1
Kernel panic - not syncing: Attempted to kill init!
[<c000d518>] (unwind_backtrace+0x0/0xe0) from [<c0315cf8>] (panic+0x58/0x188)
[<c0315cf8>] (panic+0x58/0x188) from [<c0021520>] (do_exit+0x98/0x6c0)
[<c0021520>] (do_exit+0x98/0x6c0) from [<c0021e88>] (do_group_exit+0xb0/0xdc)
[<c0021e88>] (do_group_exit+0xb0/0xdc) from [<c0021ec4>] (sys_exit_group+0x10/0x18)
[<c0021ec4>] (sys_exit_group+0x10/0x18) from [<c00093a0>] (ret_fast_syscall+0x0/0x2c)
From that output, it does not look like the kernel is even trying to extract the CPIO archive as an initramfs. I expect to see this printk output, which is present in the kernel source in init/initramfs.c:
printk(KERN_INFO "Trying to unpack rootfs image as initramfs...\n");
I tried the filesystem once booting was complete (using chroot) and it works fine, so I believe the filesystem/libraries are sane.
Could anyone give me some pointers as to what I may have incorrect? Thanks in advance for any assistance!
I figured it out. I will post the answer in case anyone else has this issue.
I was missing a console device; this line was the clue:
Warning: unable to open an initial console.
After adding printk's so that I better understood the startup sequence, I realized that the console device is opened prior to running the init script. Therefore, the console device must be present in the initramfs filesystem directly; we cannot rely on the devtmpfs mount to create it.
I think that when the init script ran, the shell tried to open the console and failed; that's why the kernel was printing:
Kernel panic - not syncing: Attempted to kill init!
Executing the following commands from within the /dev directory of the initramfs on the kernel build machine will generate the required device nodes:
mknod -m 622 console c 5 1
mknod -m 622 tty0 c 4 0
After re-creating the CPIO archive and rebuilding the kernel, I finally have a working filesystem in the initramfs that the kernel will boot.
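For reference, a sketch of the re-packing step; the rootfs directory name is just a placeholder, creating device nodes normally requires running as root, and the kernel expects the newc cpio format:
# From the top of the initramfs tree:
cd rootfs
mknod -m 622 dev/console c 5 1
mknod -m 622 dev/tty0 c 4 0
find . | cpio -o -H newc > ../initramfs.cpio
Alternatively, if CONFIG_INITRAMFS_SOURCE points at a gen_init_cpio description file instead of a prebuilt archive, a line such as "nod /dev/console 0600 0 0 c 5 1" lets the kernel build system create the node for you without needing root privileges.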
