On Kubernetes v1.4.3 I'm trying to mount an Azure disk (VHD) to a pod using the following configuration:
volumes:
  - name: "data"
    azureDisk:
      diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
      diskName: "test-disk-01"
But it returns the following error while creating the pod:
MountVolume.SetUp failed for volume "kubernetes.io/azure-disk/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480-data" (spec.Name: "data") pod "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480" (UID: "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/falkonry-dev-k8-ampool-locator-01 /var/lib/kubelet/pods/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480/volumes/kubernetes.io~azure-disk/data [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test-disk-01 does not exist
There was a bug in v1.4.3 that caused this problem. It has been fixed in v1.4.7+; upgrading the Kubernetes cluster to an appropriate version solved the problem.
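For reference, a complete azureDisk volume stanza on a fixed version (v1.4.7+) looks roughly like the sketch below; cachingMode, fsType and readOnly are optional, and the disk name/URI are just the values from the question:

volumes:
  - name: "data"
    azureDisk:
      diskName: "test-disk-01"
      diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
      cachingMode: "None"   # optional: None, ReadOnly or ReadWrite
      fsType: "ext4"        # optional
      readOnly: false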
Related
Whenever I run # kubectl run ubuntu --image=ubuntu (or centos),
I get CrashLoopBackOff. When checked with kubectl describe pod, the error below is observed:
Warning Failed 4s (x3 over 22s) kubelet Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "ping": executable file not found in $PATH: unknown
Please suggest how to solve this issue.
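(Note: that error just means the container's command references a binary that isn't present in the image; the stock ubuntu/centos images don't ship ping. A hedged workaround sketch, either giving the pod a command that exists in the image or baking ping into a custom image:)

# run the pod with a command that exists in the stock image
kubectl run ubuntu --image=ubuntu --restart=Never --command -- sleep infinity
# or build your own image that includes ping (Debian/Ubuntu base):
#   FROM ubuntu
#   RUN apt-get update && apt-get install -y iputils-ping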
My Kafka uses GlusterFS as its storage. When I apply the Kafka YAML, the pod stays in the ContainerCreating status, and kubectl describe on the pod shows the following error:
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.155:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-10840.scope.
[2020-03-14 13:56:14.771098] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:14.782472] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:14.782519] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11012.scope.
[2020-03-14 13:56:15.441030] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:15.452832] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:15.452871] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11236.scope.
[2020-03-14 13:56:16.646525] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:16.658118] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:16.658168] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.154:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-11732.scope.
How can I solve the problem?
Ensure you have the right name of your volume in the yaml file under path: <the_volume_name>.
To show all gluster volumes use:
sudo gluster volume status all
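For reference, the path sits in the PersistentVolume's glusterfs block and must match an existing gluster volume name. A minimal sketch (the metadata name and the endpoints object name are placeholders for whatever your cluster uses):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv                    # placeholder
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster    # Endpoints object listing your gluster servers
    path: gfs                       # must match a volume from 'gluster volume status all'
    readOnly: false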
Restart the volume (in this case my volume is just called gfs):
gluster volume stop gfs
gluster volume start gfs
Now delete your pod and create it again.
Alternatively, try Kadalu.io or Ceph storage.
I have installed monitoring out of the box according to this link:
http://www.jhipster.tech/monitoring/
When I start with:
docker-compose up -d
Everything starts but not Elastalert:
First log:
ERROR: for monitoring_jhipster-alerter_1 Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a direStarting monitoring_jhipster-import-dashboards_1
Second log:
ERROR: for jhipster-alerter Cannot start service jhipster-alerter: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\"/Users/john/source/intellij/company/app/myservice/alerts/config.yaml\\" to rootfs \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged\\" at \\"/var/lib/docker/overlay2/5657c6e9e7bb2be5cf4fa9860c04269e34be15641f4e3f0c1449af7cbf82ced5/merged/opt/elastalert/config.yaml\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Using the default docker-compose.yml file that I got with:
curl -O https://raw.githubusercontent.com/jhipster/jhipster-console/master/bootstrap/docker-compose.yml
I'm not sure what this message means.
This is because the volumes path for JHipster Alerter is incorrect. Change
jhipster-alerter:
  image: jhipster/jhipster-alerter:latest
  environment:
    - ES_HOST=jhipster-elasticsearch
    - ES_PORT=9200
  volumes:
    - ../jhipster-alerter/rules/:/opt/elastalert/rules/
    - ../alerts/config.yaml:/opt/elastalert/config.yaml
to
    - ../alerts/rules/:/opt/elastalert/rules/
    - ../jhipster-alerter/config.yaml:/opt/elastalert/config.yaml
As shown in https://github.com/jhipster/jhipster-console/pull/102/commits/fa5bc75ec29ca357477ac1a22203ae6cbe2af2f7.
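After changing the paths, it can help to confirm that each host path exists and is of the expected type before recreating the service, since Docker creates a missing host path as a directory and mounting that directory onto a file is exactly what triggers the "not a directory" error:

# config.yaml must be a regular file, rules/ must be a directory
ls -ld ../jhipster-alerter/config.yaml ../alerts/rules/
# then recreate only the alerter service
docker-compose up -d jhipster-alerter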
I'm trying to set up a GlusterFS cluster with Heketi for Kubernetes persistent volumes. I have 3 nodes in the gluster cluster:
heketi-cli node list
Id:242e801e6eeb7ec10acda60a409b5d98 Cluster:fd539c5d13b6229498c6c67ac491163d
Id:439fb090888a745633f9db6ac4d243b8 Cluster:fd539c5d13b6229498c6c67ac491163d
Id:5e9b7e5f3ec33c77c42437e89ca857a3 Cluster:fd539c5d13b6229498c6c67ac491163d
But when I try to provision a volume for the Heketi database using the command:
heketi-cli setup-openshift-heketi-storage
I get an error:
Error: No space
But I have enough free space on my volumes:
Devices:
Id:931b4f87e3675368a4f737ed6862e0cf Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
Devices:
Id:3a2a30b22ade4efca7949e9cc082b685 Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
Devices:
Id:5d1b5c7b258c52569bff1e1c720015c5 Name:/dev/sdb State:online Size (GiB):29 Used (GiB):0 Free (GiB):29
What can be the reason for this strange behavior?
I'm sorry, I have found the reason: the number of gluster nodes should equal the number of gluster instances in Kubernetes. Previously I had only 3 gluster nodes but 4 gluster instances in Kubernetes.
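A quick sanity check for that mismatch might look like this (the grep pattern is a guess; adjust it to however your gluster pods are named or labelled):

# nodes heketi knows about
heketi-cli node list | wc -l
# gluster pods actually running in Kubernetes
kubectl get pods --all-namespaces | grep -c gluster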
There can be a number of problems that lead to this error message. The 2 most common ones are:
You do not have the minimum of 3 nodes in your gluster cluster
The heketi-cli setup-openshift-heketi-storage command needs to create a volume for heketi's database. That volume is now 2GB by default, but it used to be 32GB(!) (see heketi issue #639). So depending on your heketi-cli version, it may be trying to create a 32GB volume on your 29GB bricks. Nasty.
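To see how much space heketi itself believes each device has before it tries to create that volume, dumping the topology is useful:

heketi-cli topology info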
I suggest you look at the logs of heketi:
$ kubectl get pod -l name=heketi
NAME READY STATUS RESTARTS AGE
heketi-703226055-7g3hb 1/1 Running 0 18h
$ kubectl logs heketi-703226055-7g3hb -f
Heketi v3.0.0-111-gc5f0f58
[heketi] INFO 2017/02/14 22:17:53 Loaded kubernetes executor
...
I'm trying to attach a volume to a Kubernetes pod but am getting the error below:
error validating "test-pod.yaml": error validating data: found invalid
field azureFile for v1.Volume; if you choose to ignore these errors,
turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Kubernetes v1.1.2 doesn't support azureFile, see https://github.com/kubernetes/kubernetes/blob/v1.1.2/pkg/api/v1/types.go#L203.
The earliest version that supports azureFile seems to be v1.2.0: https://github.com/kubernetes/kubernetes/blob/v1.2.0/pkg/api/v1/types.go#L263
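For reference, on v1.2.0+ an azureFile volume is declared roughly like this (the secret and share names below are placeholders):

volumes:
  - name: azure
    azureFile:
      secretName: azure-secret   # Secret holding azurestorageaccountname / azurestorageaccountkey
      shareName: myshare
      readOnly: false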