0-glusterfs: failed to set volfile server: File exists - glusterfs

My Kafka deployment uses GlusterFS as its storage. When I apply the Kafka YAML, the pod stays stuck in the ContainerCreating state. When I check kubectl describe on the pod, I get the following error:
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.155:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-10840.scope.
[2020-03-14 13:56:14.771098] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:14.782472] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:14.782519] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11012.scope.
[2020-03-14 13:56:15.441030] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:15.452832] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:15.452871] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11236.scope.
[2020-03-14 13:56:16.646525] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:16.658118] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:16.658168] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11732.scope.
How can I solve the problem?

Ensure you have the right name of your volume in the yaml file under path: <the_volume_name>.
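For reference, here is a minimal sketch of where that field lives in a statically defined GlusterFS PersistentVolume; the PV name and endpoints name are placeholders, and the path value is taken from the mount log above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv                    # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster    # placeholder Endpoints object
    path: vol_5fcfa0f585ce3677e573cf97f40191d3   # must match an existing Gluster volume name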
To show all gluster volumes use:
sudo gluster volume status all
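If the PV was provisioned dynamically (as the vol_... name above suggests), you can cross-check the volume name the PV points at against that list; this assumes you have kubectl access and uses the PV name from the events above:
kubectl get pv pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b -o jsonpath='{.spec.glusterfs.path}'
sudo gluster volume list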
Restart the volume (in this case my volume is just called gfs):
gluster volume stop gfs
gluster volume start gfs
Now delete your pod and create it again.
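For a StatefulSet, the controller recreates the pod automatically once it is deleted; assuming the pod is named kafka-0, as the log file name in the events suggests:
kubectl delete pod kafka-0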
Alternatively, try Kadalu.io or Ceph storage.

Related

lubuntu / centos container CrashLoopBackOff error

Whenever I run # kubectl run ubuntu --image=ubuntu (or centos),
I get CrashLoopBackOff. When I check kubectl describe pod, the error below is observed:
Warning Failed 4s (x3 over 22s) kubelet Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "ping": executable file not found in $PATH: unknown
Please suggest how to solve this issue.

./byfn.sh bring down shows warning messages

From the Hyperledger tutorial, byfn.sh is the tool used to bring down the peers of the Hyperledger network:
$ ./byfn.sh -m down
It show following warning messages:
WARNING: Network net_byfn not found.
Removing volume net_peer0.org2.example.com
WARNING: Volume net_peer0.org2.example.com not found.
Removing volume net_peer1.org2.example.com
WARNING: Volume net_peer1.org2.example.com not found.
Removing volume net_peer1.org1.example.com
WARNING: Volume net_peer1.org1.example.com not found.
Removing volume net_peer0.org1.example.com
WARNING: Volume net_peer0.org1.example.com not found.
Removing volume net_orderer.example.com
WARNING: Volume net_orderer.example.com not found.
I wonder whether these messages could lead to an error.
It's not a problem ... there was actually an extra line in byfn.sh which results in calling docker-compose down twice. The issue has been resolved in the master branch but was never backported to the release-1.1 branch.
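You can reproduce the same warnings outside of byfn.sh by running docker-compose down twice in a row; the second run finds nothing left to remove and only prints "not found" warnings (a minimal illustration for any compose project, not the exact lines from byfn.sh):
docker-compose down --volumes
docker-compose down --volumes   # second run: WARNING: Network ... not found, WARNING: Volume ... not found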

Monitoring JHipster error starting jhipster-alerter

I have installed monitoring out of the box according to this link:
http://www.jhipster.tech/monitoring/
When I start it with:
docker-compose up -d
everything starts except Elastalert.
First log:
ERROR: for monitoring_jhipster-alerter_1 Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a direStarting monitoring_jhipster-import-dashboards_1
Second log:
ERROR: for jhipster-alerter Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Using the default docker-compose.yml file that I got with:
curl -O https://raw.githubusercontent.com/jhipster/jhipster-console/master/bootstrap/docker-compose.yml
I'm not sure what this message is telling me?
This is because the volume paths for JHipster Alerter are incorrect. Change
jhipster-alerter:
  image: jhipster/jhipster-alerter:latest
  environment:
    - ES_HOST=jhipster-elasticsearch
    - ES_PORT=9200
  volumes:
    - ../jhipster-alerter/rules/:/opt/elastalert/rules/
    - ../alerts/config.yaml:/opt/elastalert/config.yaml
to
    - ../alerts/rules/:/opt/elastalert/rules/
    - ../jhipster-alerter/config.yaml:/opt/elastalert/config.yaml
As shown in https://github.com/jhipster/jhipster-console/pull/102/commits/fa5bc75ec29ca357477ac1a22203ae6cbe2af2f7.
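Before re-running docker-compose it is worth confirming that the corrected host paths exist and are the expected types (config.yaml a file, rules/ a directory), since that is exactly what the "not a directory" error complains about; a quick check from the directory containing docker-compose.yml:
ls -ld ../jhipster-alerter/config.yaml ../alerts/rules/
docker-compose up -d jhipster-alerter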

Error mounting azure vhd to kubernetes pod

On Kubernetes v1.4.3 I'm trying to mount an Azure disk (VHD) into a pod using the following configuration:
volumes:
  - name: "data"
    azureDisk:
      diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
      diskName: "test-disk-01"
But it returns the following error while creating the pod:
MountVolume.SetUp failed for volume "kubernetes.io/azure-disk/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480-data" (spec.Name: "data") pod "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480" (UID: "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/falkonry-dev-k8-ampool-locator-01 /var/lib/kubelet/pods/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480/volumes/kubernetes.io~azure-disk/data [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test-disk-01 does not exist
There was a bug in v1.4.3 that caused this problem. It has been fixed in v1.4.7+; upgrading the Kubernetes cluster to an appropriate version solved the problem.
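If you are unsure which version the cluster is actually running, a quick way to check before and after the upgrade:
kubectl version
kubectl get nodes   # the VERSION column shows the kubelet version on each node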

Glusterfs denied mount

I'm using GlusterFS 3.3.2. Two servers, a brick on each one. The Volume is "ARCHIVE80"
I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1.
However, if I try to mount the volume on Server1, I have an error:
Mount failed. Please check the log file for more details.
The log gives:
[2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
[2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 24009 (from 0)
[2013-11-11 03:34:03.794182] I [client-handshake.c:1614:select_server_supported_programs] 0-ARCHIVE80-client-0: Using Program GlusterFS 3.3.2, Num (1298437), Version (330)
[2013-11-11 03:34:03.794387] W [client-handshake.c:1320:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to set the volume (Permission denied)
[2013-11-11 03:34:03.794407] W [client-handshake.c:1346:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to get 'process-uuid' from reply dict
[2013-11-11 03:34:03.794418] E [client-handshake.c:1352:client_setvolume_cbk] 0-ARCHIVE80-client-0: SETVOLUME on remote-host failed: Authentication failed
[2013-11-11 03:34:03.794426] I [client-handshake.c:1437:client_setvolume_cbk] 0-ARCHIVE80-client-0: sending AUTH_FAILED event
[2013-11-11 03:34:03.794443] E [fuse-bridge.c:4256:notify] 0-fuse: Server authenication failed. Shutting down.
How come I can mount the volume on one server but not on the other?
It may be a permission problem. Could you check whether it can be resolved by setting auth.allow:
[root@test1 ~]# gluster volume set ARCHIVE80 auth.allow 'SERVER1IPADDRESS'
It works on my side.
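If both servers need to mount the volume, auth.allow accepts a comma-separated list, and gluster volume info shows whether the option took effect (SERVER1IPADDRESS and SERVER2IPADDRESS are placeholders, as above):
gluster volume set ARCHIVE80 auth.allow 'SERVER1IPADDRESS,SERVER2IPADDRESS'
gluster volume info ARCHIVE80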