I'm trying to attach a volume to a Kubernetes pod but am getting the error below:
error validating "test-pod.yaml": error validating data: found invalid
field azureFile for v1.Volume; if you choose to ignore these errors,
turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Kubernetes v1.1.2 doesn't support azureFile, see https://github.com/kubernetes/kubernetes/blob/v1.1.2/pkg/api/v1/types.go#L203.
The earliest version that supports azureFile seems to be v1.2.0: https://github.com/kubernetes/kubernetes/blob/v1.2.0/pkg/api/v1/types.go#L263
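For anyone hitting this on v1.2.0 or later, an azureFile volume is declared roughly like the sketch below (azure-secret and k8stest are placeholder names, not from the question; the referenced secret is expected to hold the storage account name and key):
volumes:
- name: azure
  azureFile:
    secretName: azure-secret
    shareName: k8stest
    readOnly: false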
Kubernetes Version: 1.21
Spark Version: 3.0.0
I am using a container in a Kubernetes pod (the client pod) to invoke Spark Submit, which then starts a Driver pod. The client pod that did the Spark Submit watches the Driver pod via LoggingPodStatusWatcherImpl. After approximately 1 hour, the client pod gets a 401 error:
22/11/03 13:05:44 WARN WatchConnectionManager: Exec Failure: HTTP 401, Status: 401 - Unauthorized
java.net.ProtocolException: Expected HTTP 101 response but was '401 Unauthorized'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
22/11/03 13:05:46 INFO LoggingPodStatusWatcherImpl: Application status for spark-blahblahblah (phase: Running)
I think Spark on Kubernetes usually looks in /var/run/secrets/kubernetes.io/serviceaccount/token, so I would get the warning below when starting the client pod:
22/11/03 13:13:13 WARN Config: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
However, since I provide another OAuth token file via the conf below in the Spark Submit command, the client pod was able to connect to the Kubernetes API and start the Driver pod.
--conf spark.kubernetes.authenticate.submission.oauthTokenFile=/mytokendir/token
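For context, the full spark-submit invocation looked roughly like the sketch below; apart from the oauthTokenFile conf, the image, namespace, class, and jar path are illustrative placeholders rather than my exact command:
$ spark-submit \
    --master k8s://https://kubernetes.default.svc:443 \
    --deploy-mode cluster \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.kubernetes.namespace=my-namespace \
    --conf spark.kubernetes.container.image=some-image \
    --conf spark.kubernetes.authenticate.submission.oauthTokenFile=/mytokendir/token \
    local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar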
The token is provided to the client pod via a projected volume (new in Kubernetes versions 1.20+); the token expiration duration can be specified in the YAML manifest as shown below:
See this doc for reference on how this is implemented:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-bound-service-account-tokens
spec:
  serviceAccountName: my-serviceaccount
  volumes:
  - name: token-vol
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 7200
          path: token
  containers:
  - name: my-container
    image: some-image
    volumeMounts:
    - name: token-vol
      mountPath: /mytokendir
I then exec'd into the client pod to get the JWT token in /mytokendir and decoded it.
It showed the token as valid for 2 hours; however, coming back to the original question, my client pod still gets a 401 error after 1 hour.
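For reference, the expiry check was roughly the following (a sketch assuming base64 and jq are available in the container; lifetime should come out as 7200 for the 2-hour token above):
$ payload=$(cut -d. -f2 /mytokendir/token | tr '_-' '/+')   # JWT payload, base64url -> base64
$ while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done   # restore padding
$ echo "$payload" | base64 -d | jq '{iat, exp, lifetime: (.exp - .iat)}'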
Sometimes I would get this error:
22/11/03 14:10:57 INFO LoggingPodStatusWatcherImpl: Application my-application with submission ID my-namespace:my-driver finished
22/11/03 14:10:57 INFO ShutdownHookManager: Shutdown hook called
22/11/03 14:10:57 INFO ShutdownHookManager: Deleting directory /tmp/spark-blahblah
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Can you please assist? When deploying, we are getting ImagePullBackOff for our pods.
Running kubectl get pod <pod-name> -n <namespace> -o yaml, I am getting the status below.
containerStatuses:
- image: mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644
  imageID: ""
  lastState: {}
  name: dmd-base
  ready: false
  restartCount: 0
  started: false
  state:
    waiting:
      message: Back-off pulling image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644"
      reason: ImagePullBackOff
hostIP: x.x.x.53
phase: Pending
podIP: x.x.x.237
And running kubectl describe pod <pod-name> -n <namespace>, I am getting the event information below.
Normal Scheduled 85m default-scheduler Successfully assigned dmd-int/app-app-base-5b4b75756c-lrcp6 to aks-agentpool-35064155-vmss00000a
Warning Failed 85m kubelet Failed to pull image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
[rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/commpany/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.azurecr.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.azurecr.io on [::1]:53: read udp [::1]:56109->[::1]:53: read: connection refused,
rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.io on [::1]:53: read udp [::1]:60759->[::1]:53: read: connection refused]
From the describe events I can see the issue is connectivity, but I can't tell where the connectivity problem is. We are running our apps in a Kubernetes cluster on Azure.
If anyone has come across this issue, can you please assist? The application has been running successfully throughout the past months; we only got this issue this morning.
There is a known Azure outage affecting multiple regions today: a DNS issue that also affects image pulls.
https://status.azure.com/en-us/status
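To confirm it is DNS rather than anything registry-specific, try resolving the registry host yourself. Image pulls resolve DNS on the node (the errors above show the node falling back to [::1]:53), so a lookup on the node itself is the most direct test; a throwaway pod, as in the sketch below, is only an approximation since it goes through cluster DNS:
$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.35 -- nslookup mycontainer-registry.azurecr.io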
When I trigger a publish on GitLab, it fails. When I check the logs of the gitlab-runner pod, it shows the error below:
kubectl logs -n gitlab-tur prod-gitlab-ci-runner-0
ERROR: Job failed (system failure): Post https://10.96.0.1:443/api/v1/namespaces/gitlab-tur/secrets: dial tcp 10.96.0.1:443: i/o timeout duration=1m0.007641837s job=3044 project=25 runner=DwvfWx49
ERROR: Error cleaning up secrets: resource name may not be empty job=3044 project=25 runner=DwvfWx49
Version Info:
root@ubuntu:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
I have not been able to publish anything to the environments for days. Has anyone experienced such a problem before?
Looks like there is a workaround for that by deleting the gitlab-runner pod in k8s and retrying the process as per this post.
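Concretely, the workaround amounts to something like this (pod name and namespace taken from the logs above; it assumes the runner pod is managed by a controller such as a StatefulSet that recreates it):
$ kubectl delete pod prod-gitlab-ci-runner-0 -n gitlab-tur
$ kubectl get pods -n gitlab-tur -w   # wait for the recreated runner to be Running, then retry the job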
I installed a k8s cluster with Kubespray and Vagrant, using its default Vagrantfile settings, with CentOS as the OS.
After the cluster setup finished, I ran these commands on the master host:
$ kubectl version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0+coreos.0", GitCommit:"1b69a2a6c01194421b0aa17747a8c1a81738a8dd", GitTreeState:"clean", BuildDate:"2017-12-19T02:52:15Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0+coreos.0", GitCommit:"1b69a2a6c01194421b0aa17747a8c1a81738a8dd", GitTreeState:"clean", BuildDate:"2017-12-19T02:52:15Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Downloaded the newest Helm from GitHub.
$ ./helm init
$ ./helm version
Client: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"}
$ ./helm search
...
stable/phpbb 0.6.1 3.2.2 Community forum that supports the notion of use...
...
$ ./helm install stable/phpbb
Error: no available release name found
Why can't Helm find a release name when installing?
Have you tried adding --name my-release?
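For example, with an arbitrary release name:
$ ./helm install --name my-release stable/phpbb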
On Kubernetes v1.4.3 I'm trying to mount an Azure disk (VHD) to a pod using the following configuration:
volumes:
- name: "data"
  azureDisk:
    diskURI: "https://testdevk8disks685.blob.core.windows.net/vhds/test-disk-01.vhd"
    diskName: "test-disk-01"
But it returns the following error while creating the pod:
MountVolume.SetUp failed for volume "kubernetes.io/azure-disk/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480-data" (spec.Name: "data") pod "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480" (UID: "0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480") with: mount failed: exit status 32
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/falkonry-dev-k8-ampool-locator-01 /var/lib/kubelet/pods/0a0e1c0f-9b7a-11e6-8cc5-000d3a32f480/volumes/kubernetes.io~azure-disk/data [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/azure-disk/mounts/test-disk-01 does not exist
There was a bug in v1.4.3 that caused this problem; it has been fixed in v1.4.7+. Upgrading the Kubernetes cluster to an appropriate version resolves the issue.