ext4 signature detected on /dev/sdb1 at offset 1080 while creating Gluster cluster - glusterfs

I am getting the attached error while creating a Gluster cluster on Kubernetes. Can someone please advise why this error occurs and how to resolve it?

This is solved by using the command wipefs -a /dev/sdb1. It wipes the signature present on the newly created partition.
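A minimal sketch of that workflow, assuming the /dev/sdb1 device from the question; running wipefs without -a only lists the signatures, so you can inspect the device before wiping it:

# List existing filesystem signatures without modifying anything
wipefs /dev/sdb1
# Erase all signatures so the partition can be used as a raw Gluster brick
wipefs -a /dev/sdb1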

Related

cassandra service (3.11.5) stops automatically after it starts/restarts on AWS Linux

cassandra service (3.11.5) stops automatically after it starts/restarts on AWS Linux.
I have a fresh installation of Cassandra on a new AWS Linux instance (t3.xlarge). When I run
sudo service cassandra start
or
sudo service cassandra restart
the service stops automatically after 1 or 2 seconds. I looked into the logs and found the messages below.
I am not sure why; I haven't changed any snitch-related configs and it has always been SimpleSnitch. I don't have multiple Cassandra nodes, just a single node on one EC2 instance.
Logs
INFO [main] 2020-02-12 17:40:50,833 ColumnFamilyStore.java:426 - Initializing system.schema_aggregates
INFO [main] 2020-02-12 17:40:50,836 ViewManager.java:137 - Not submitting build tasks for views in keyspace system as storage service is not initialized
INFO [main] 2020-02-12 17:40:51,094 ApproximateTime.java:44 - Scheduling approximate time-check task with a precision of 10 milliseconds
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
Installation steps
sudo curl -OL https://www.apache.org/dist/cassandra/redhat/311x/cassandra-3.11.5-1.noarch.rpm
sudo rpm -i cassandra-3.11.5-1.noarch.rpm
sudo pip install cassandra-driver
export CQLSH_NO_BUNDLED=true
sudo chkconfig --levels 3 cassandra on
The issue is in your log file:
ERROR [main] 2020-02-12 17:40:51,137 CassandraDaemon.java:759 - Cannot start node if snitch's data center (datacenter1) differs from previous data center (dc1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.
It seems that you started the cluster, stopped it, and renamed the datacenter from dc1 to datacenter1.
To fix it:
If no data is stored, delete the data directories (see the sketch below)
If data is stored, rename the datacenter back to dc1 in the config
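For the first option, a rough sketch assuming the default RPM layout under /var/lib/cassandra; confirm data_file_directories, commitlog_directory and saved_caches_directory in cassandra.yaml before deleting anything:

sudo service cassandra stop
# Removes all locally stored state so the node starts fresh (this destroys any local data)
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
sudo service cassandra start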
I had the same problem, where the Cassandra service stopped immediately after it was started.
In the Cassandra configuration file located at /etc/cassandra/cassandra.yaml, change the cluster_name back to the previous one, like this:
...
# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'dc1'
# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
...
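Once the node comes up again, a quick way to confirm what it now reports is nodetool, which ships with the Cassandra package (shown here only as a verification sketch):

# Shows the cluster name and snitch the running node is using
nodetool describecluster
# Shows which datacenter and rack each node is placed in
nodetool status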

Heketi can't provision a volume for Heketi database

I'm trying to set up a glusterfs cluster with Heketi for Kubernetes persistent volumes. I have 3 nodes in the gluster cluster:
heketi-cli node list
Id:242e801e6eeb7ec10acda60a409b5d98 Cluster:fd539c5d13b6229498c6c67ac491163d
Id:439fb090888a745633f9db6ac4d243b8 Cluster:fd539c5d13b6229498c6c67ac491163d
Id:5e9b7e5f3ec33c77c42437e89ca857a3 Cluster:fd539c5d13b6229498c6c67ac491163d
But when I try to provision a volume for Heketi database by using command:
heketi-cli setup-openshift-heketi-storage
I get an error:
Error: No space
But I have enough free space on my devices:
Devices:
Id:931b4f87e3675368a4f737ed6862e0cf Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
Devices:
Id:3a2a30b22ade4efca7949e9cc082b685 Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
Devices:
Id:5d1b5c7b258c52569bff1e1c720015c5 Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
What can be the reason for this strange behavior?
I'm sorry, I have found the reason. It's the number of gluster nodes: it should equal the number of gluster instances in Kubernetes. In my previous attempt I had only 3 gluster nodes but 4 gluster instances in Kubernetes.
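A rough way to compare the two counts; the grep filter is only an assumption about how the glusterfs pods are named, so adjust it to your deployment:

# Number of gluster nodes registered in heketi
heketi-cli node list | wc -l
# Number of glusterfs pods actually running in Kubernetes
kubectl get pods --all-namespaces -o wide | grep -c glusterfs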
There can be a number of problems that lead to this error message. The 2 most common ones are:
You do not have the minimum of 3 nodes in your gluster cluster
The heketi-cli setup-openshift-heketi-storage command needs to create a volume for heketi's database. That volume is now 2GB by default, but it used to be 32GB(!) (see heketi issue #639). So depending on your heketi-cli version it may be trying to create a 32GB volume on your 29GB bricks. Nasty.
I suggest you look at the logs of heketi:
$ kubectl get pod -l name=heketi
NAME READY STATUS RESTARTS AGE
heketi-703226055-7g3hb 1/1 Running 0 18h
$ kubectl logs heketi-703226055-7g3hb -f
Heketi v3.0.0-111-gc5f0f58
[heketi] INFO 2017/02/14 22:17:53 Loaded kubernetes executor
...
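It can also help to look at what heketi thinks it has to work with. These heketi-cli subcommands exist in current releases, though the exact output varies by version; the ids are the ones from the question:

# Dump the full topology heketi knows about (nodes, devices, bricks, free space)
heketi-cli topology info
# Inspect a single node or device using the ids from 'heketi-cli node list'
heketi-cli node info 242e801e6eeb7ec10acda60a409b5d98
heketi-cli device info 931b4f87e3675368a4f737ed6862e0cf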

Error mounting azure vhd to kubernetes pod

On kubernetes v1.4.3 I'm trying to mount an Azure disk (vhd) to a pod using the following configuration:
volumes:
  - name: "data"
    azureDisk:
      diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
      diskName: "test-disk-01"
But it returns the following error while creating the pod:
MountVolume.SetUp failed for volume "kubernetes.io/azure-disk/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480-data" (spec.Name: "data") pod "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480" (UID: "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/falkonry-dev-k8-ampool-locator-01 /var/lib/kubelet/pods/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480/volumes/kubernetes.io~azure-disk/data [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test-disk-01 does not exist
There was a bug in v1.4.3 that caused this problem. It has been fixed in v1.4.7+. Upgrading the Kubernetes cluster to an appropriate version solved the problem.
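A quick way to confirm which versions the cluster is actually running before and after the upgrade (standard kubectl commands; the exact output format varies by release):

# Client and API-server versions
kubectl version
# Kubelet version reported by each node
kubectl get nodes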

What does udev need to startup properly in Linux?

I have an issue with udev startup on my i.MX6 board. udev-182 was cross-built by the Yocto 1.8 BSP for the board. I see the following output on startup:
INIT: version 2.88 booting
Starting udev
udevd[188]: bind failed: No such file or directory
error binding udev control socket
udevd[188]: error binding udev control socket
I believe the error is a result of /run/udev/control not existing, but I am unsure what creates it.
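A diagnostic sketch for checking whether /run is mounted and writable at that point (the udev control socket lives under /run/udev, so its parent directory has to exist and be writable):

# Is /run mounted, and as what filesystem?
grep ' /run ' /proc/mounts
# Does /run/udev exist at all?
ls -ld /run /run/udev
# Can anything be created under /run?
touch /run/.writetest && echo '/run is writable' && rm /run/.writetest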
I noticed this while I was looking into an issue with my touchscreen not working. If I manually restart udev from the command line, everything seems to work fine and my touchscreen begins functioning.
root@nitrogen6x:~# /etc/init.d/udev restart
Stopping udevd
Starting udev
udevd[451]: starting version 182
mxc_v4l_open: Mxc Camera no sensor ipu1/csi0
mxc_cam_select_input: input(0) CSI IC MEM
mxc_v4l_open: Mxc Camera no sensor ipu0/csi0
mxc_v4l_open: Mxc Camera no sensor ipu0/csi1
When I do a restart, /run/udev/control is created.
Any ideas on what could be causing this failure?
Thanks
I had the same issue and managed to resolve it by appending rootwait rw to my bootargs in U-Boot.
For instance, if your bootargs were:
console=ttymxc3,115200 root=/dev/mtdblock4 rootfstype=jffs2 mtdparts=spi0.0:512k(uboot),256k(ubootenv),6144k(kernel),256k(fdt),20m(rootfs),-(data)
Change it to:
console=ttymxc3,115200 root=/dev/mtdblock4 rootfstype=jffs2 rootwait rw mtdparts=spi0.0:512k(uboot),256k(ubootenv),6144k(kernel),256k(fdt),20m(rootfs),-(data)
That's because the kernel mounts the rootfs read-only by default, so no process can create new files at startup.
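To apply this from the U-Boot prompt, a sketch along these lines should work; setenv and saveenv are standard U-Boot commands, and the bootargs string is simply the one from this answer:

setenv bootargs 'console=ttymxc3,115200 root=/dev/mtdblock4 rootfstype=jffs2 rootwait rw mtdparts=spi0.0:512k(uboot),256k(ubootenv),6144k(kernel),256k(fdt),20m(rootfs),-(data)'
saveenv

After booting, cat /proc/cmdline should show the new options and grep ' / ' /proc/mounts should report rw for the root filesystem.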
Comparing the strace output of "udev start by init" and "udev start from console" might give you some idea.
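A minimal sketch of that comparison, assuming strace is available on the target (the trace file names are arbitrary):

# Trace the manual restart, following forked children, and keep the output for comparison
strace -f -o /tmp/udev-manual.trace /etc/init.d/udev restart
# Look for the failing bind() call and the socket path it used
grep -n 'bind(' /tmp/udev-manual.trace

Capturing the boot-time run is more fiddly; one option is to temporarily edit the init script so the udevd invocation is prefixed with strace -f -o /tmp/udev-boot.trace, then diff the two trace files.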

File missing or corrupted on mounting jffs2

I'm facing two problems when mounting jffs2 on NOR flash.
I'm running a board with squashfs as the rootfs, and I tried to mount jffs2 on another mtdblock as below:
mount -t jffs2 /dev/mtdblock6 /tmp/jffs
After that I copy some files into /tmp/jffs, but the system gives this error when the files are larger than 4096 bytes:
cp: write error: Input/output error
Then I unmount the mtdblock and re-mount it, but the files I just copied have disappeared.
I confirmed the flash blocks have been written by dumping /dev/mtd6 or /dev/mtdblock6, but those files cannot be seen after remounting.
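For reference, that dump step can be done along these lines; the 64 KiB read size is only an assumption, so check /proc/mtd for the real erase size of mtd6:

# Show the MTD partitions and their erase sizes
cat /proc/mtd
# Dump the start of the partition and inspect it by hand
dd if=/dev/mtd6 bs=64k count=1 2>/dev/null | hexdump -C | head -n 32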
=====
I enabled the printk log, and the following messages showed up when I put a file into the mounted folder:
jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x00120814: 0x0219 instead
Node totlen on flash (0x0000000c) != totlen from node ref (0x00000044)
and the message below appeared when I tried to re-mount the mtdblock:
JFFS2 notice: (608) jffs2_get_inode_nodes: Node header CRC failed at 0x0e0050. {0000,9600,01e88b11,01000000}
Any suggestion would be very much appreciated.
