While installing GitLab CE on OpenShift using the GitLab Operator, I am facing the following issue: "4 pod has unbound immediate PVC". Can anyone help me?

[root@bastion ~]# kubectl describe po gitlab-ui-gitaly-0 -n gitlab-system
Name: gitlab-ui-gitaly-0
Namespace: gitlab-system
Priority: 0
Node:
Labels: app=gitaly
app.kubernetes.io/component=gitaly
app.kubernetes.io/instance=gitlab-ui-gitaly
app.kubernetes.io/managed-by=gitlab-operator
app.kubernetes.io/name=gitlab-ui
app.kubernetes.io/part-of=gitlab
chart=gitaly-5.7.1
controller-revision-hash=gitlab-ui-gitaly-7f87fb98bd
heritage=Helm
release=gitlab-ui
statefulset.kubernetes.io/pod-name=gitlab-ui-gitaly-0
Annotations: checksum/config: acaaa7500c4f82921dc017dbfb173dd7ee4a44f9704b5bd0bceda31702f06d3d
gitlab.com/prometheus_port: 9236
gitlab.com/prometheus_scrape: true
openshift.io/scc: anyuid
prometheus.io/port: 9236
prometheus.io/scrape: true
Status: Pending
IP:
IPs:
Controlled By: StatefulSet/gitlab-ui-gitaly
Init Containers:
certificates:
Image: registry.gitlab.com/gitlab-org/build/cng/alpine-certificates:20191127-r2
Port:
Host Port:
Requests:
cpu: 50m
Environment:
Mounts:
/etc/ssl/certs from etc-ssl-certs (rw)
configure:
Image: registry.gitlab.com/gitlab-org/cloud-native/mirror/images/busybox:latest
Port:
Host Port:
Command:
sh
/config/configure
Requests:
cpu: 50m
Environment:
Mounts:
/config from gitaly-config (ro)
/init-config from init-gitaly-secrets (ro)
/init-secrets from gitaly-secrets (rw)
Containers:
gitaly:
Image: registry.gitlab.com/gitlab-org/build/cng/gitaly:v14.7.1
Ports: 8075/TCP, 9236/TCP
Host Ports: 0/TCP, 0/TCP
Requests:
cpu: 100m
memory: 200Mi
Liveness: exec [/scripts/healthcheck] delay=30s timeout=3s period=10s #success=1 #failure=3
Readiness: exec [/scripts/healthcheck] delay=10s timeout=3s period=10s #success=1 #failure=3
Environment:
CONFIG_TEMPLATE_DIRECTORY: /etc/gitaly/templates
CONFIG_DIRECTORY: /etc/gitaly
GITALY_CONFIG_FILE: /etc/gitaly/config.toml
SSL_CERT_DIR: /etc/ssl/certs
Mounts:
/etc/gitaly/templates from gitaly-config (rw)
/etc/gitlab-secrets from gitaly-secrets (ro)
/etc/ssl/certs/ from etc-ssl-certs (ro)
/home/git/repositories from repo-data (rw)
Conditions:
Type Status
PodScheduled False
Volumes:
repo-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: repo-data-gitlab-ui-gitaly-0
ReadOnly: false
gitaly-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: gitlab-ui-gitaly
Optional: false
gitaly-secrets:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit:
init-gitaly-secrets:
Type: Projected (a volume that contains injected data from multiple sources)
SecretName: gitlab-ui-gitaly-secret
SecretOptionalName:
SecretName: gitlab-ui-gitlab-shell-secret
SecretOptionalName:
etc-ssl-certs:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit:
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 82m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 82m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 66m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 65m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 52m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 4m44s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 6s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 2m1s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 48s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.

The pods are stuck in Pending because of this PVC storage-class issue, even though I followed this guide:
https://docs.gitlab.com/charts/installation/operator.html
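For reference, "pod has unbound immediate PersistentVolumeClaims" means the scheduler cannot place the Gitaly pod until its repo-data PVC binds, which usually comes down to the cluster having no default StorageClass (or the GitLab custom resource not naming one). A minimal sketch, assuming an existing class that you want to mark as the cluster default (the class name and provisioner below are placeholders, not values from this cluster):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2                         # assumption: substitute a class that actually exists in your cluster
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs  # placeholder; use whatever provisioner your platform provides
Checking kubectl get storageclass and kubectl get pvc -n gitlab-system will show whether the claims are Pending and which class, if any, they request; alternatively, point the Gitaly persistence settings in the GitLab custom resource at an existing StorageClass.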

Related

Unable to connect to MongoDB: MongoNetworkError & MongoNetworkError connecting to Kubernetes MongoDB pod with mongoose

I am trying to connect to MongoDB in a microservice-based project using Node.js, Kubernetes, Ingress, and Skaffold.
I got two errors when running skaffold dev:
MongoNetworkError: failed to connect to server [auth-mongo-srv:21017] on first connect [MongoNetworkTimeoutError: connection timed out.
Mongoose default connection error: MongoNetworkError: MongoNetworkError: failed to connect to server [auth-mongo-srv:21017] on first connect [MongoNetworkTimeoutError: connection timed out at connectionFailureError.
My auth-mongo-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
My server.ts
const dbURI: string = "mongodb://auth-mongo-srv:21017/auth"
logger.debug(dbURI)
logger.info('connecting to database...')
// changing {} --> options change nothing!
mongoose.connect(dbURI, {}).then(() => {
  logger.info('Mongoose connection done')
  app.listen(APP_PORT, () => {
    logger.info(`server listening on ${APP_PORT}`)
  })
  console.clear();
}).catch((e) => {
  logger.info('Mongoose connection error')
  logger.error(e)
})
Additional information:
1. pod is created:
rhythm#vivobook:~/Documents/TicketResale/server$ kubectl get pods
NAME READY STATUS RESTARTS AGE
auth-deploy-595c6cbf6d-9wzt9 1/1 Running 0 5m53s
auth-mongo-deploy-6b96b7798c-9726w 1/1 Running 0 5m53s
tickets-deploy-675b7b9b58-f5bzs 1/1 Running 0 5m53s
2. pod description:
kubectl describe pod auth-mongo-deploy-6b96b7798c-9726w
Name: auth-mongo-deploy-694b67f76d-ksw82
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 21 Jun 2022 14:11:47 +0530
Labels: app=auth-mongo
pod-template-hash=694b67f76d
skaffold.dev/run-id=2f5d2142-0f1a-4fa4-b641-3f301f10e65a
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/auth-mongo-deploy-694b67f76d
Containers:
auth-mongo:
Container ID: docker://fa43cd7e03ac32ed63c82419e5f9722deffd2f93206b6a0f2b25ae9be8f6cedf
Image: mongo
Image ID: docker-pullable://mongo@sha256:37e84d3dd30cdfb5472ec42b8a6b4dc6ca7cacd91ebcfa0410a54528bbc5fa6d
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 21 Jun 2022 14:11:52 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw7s9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-zw7s9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 79s default-scheduler Successfully assigned default/auth-mongo-deploy-694b67f76d-ksw82 to minikube
Normal Pulling 79s kubelet Pulling image "mongo"
Normal Pulled 75s kubelet Successfully pulled image "mongo" in 4.429126953s
Normal Created 75s kubelet Created container auth-mongo
Normal Started 75s kubelet Started container auth-mongo
I have also tried:
kubectl describe service auth-mongo-srv
Name: auth-mongo-srv
Namespace: default
Labels: skaffold.dev/run-id=2f5d2142-0f1a-4fa4-b641-3f301f10e65a
Annotations: <none>
Selector: app=auth-mongo
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.42.183
IPs: 10.100.42.183
Port: db 27017/TCP
TargetPort: 27017/TCP
Endpoints: 172.17.0.2:27017
Session Affinity: None
Events: <none>
And then I changed
const dbURI: string = "mongodb://auth-mongo-srv:21017/auth"
to
const dbURI: string = "mongodb://172.17.0.2:27017:21017/auth"
which only generated a different error, MongooseServerSelectionError.
The actual problem was the port typo: the service exposes MongoDB on 27017, while the URI used 21017. The connection works with:
const dbURI: string = "mongodb://auth-mongo-srv:27017/auth"

AWS EKS terraform tutorial (with assumeRole) - k8s dashboard error

I followed the tutorial at https://learn.hashicorp.com/tutorials/terraform/eks.
Everything works fine with a single IAM user with the required permissions as specified at https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/iam-permissions.md
But when I try to assume a role in a cross-AWS-account scenario, I run into the errors/failures below.
I started kubectl proxy as per step 5.
However, when I try to access the k8s dashboard at http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ (after completing steps 1-5), I get the error message as follows -
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
I also got zero pods in READY state for the metrics server deployment in step 3 of the tutorial -
$ kubectl get deployment metrics-server -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 0/1 1 0 21m
My kube-dns (CoreDNS) deployment also has zero pods in READY state; their status is -
kubectl -n kube-system -l=k8s-app=kube-dns get pod
NAME READY STATUS RESTARTS AGE
coredns-55cbf8d6c5-5h8md 0/1 Pending 0 10m
coredns-55cbf8d6c5-n7wp8 0/1 Pending 0 10m
My terraform version info is as below -
$ terraform version
2021/03/06 21:18:18 [WARN] Log levels other than TRACE are currently unreliable, and are supported only for backward compatibility.
Use TF_LOG=TRACE to see Terraform's internal logs.
----
2021/03/06 21:18:18 [INFO] Terraform version: 0.14.7
2021/03/06 21:18:18 [INFO] Go runtime version: go1.15.6
2021/03/06 21:18:18 [INFO] CLI args: []string{"/usr/local/bin/terraform", "version"}
2021/03/06 21:18:18 [DEBUG] Attempting to open CLI config file: /Users/user1/.terraformrc
2021/03/06 21:18:18 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2021/03/06 21:18:18 [DEBUG] ignoring non-existing provider search directory terraform.d/plugins
2021/03/06 21:18:18 [DEBUG] ignoring non-existing provider search directory /Users/user1/.terraform.d/plugins
2021/03/06 21:18:18 [DEBUG] ignoring non-existing provider search directory /Users/user1/Library/Application Support/io.terraform/plugins
2021/03/06 21:18:18 [DEBUG] ignoring non-existing provider search directory /Library/Application Support/io.terraform/plugins
2021/03/06 21:18:18 [INFO] CLI command args: []string{"version"}
Terraform v0.14.7
+ provider registry.terraform.io/hashicorp/aws v3.31.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.2
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
Output of describe pods for kube-system ns is -
$ kubectl describe pods -n kube-system
Name: coredns-7dcf49c5dd-kffzw
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: <none>
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
pod-template-hash=7dcf49c5dd
Annotations: eks.amazonaws.com/compute-type: ec2
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-7dcf49c5dd
Containers:
coredns:
Image: 602401143452.dkr.ecr.ca-central-1.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/tmp from tmp (rw)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-sqv8j (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-sqv8j:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-sqv8j
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 34s (x16 over 15m) default-scheduler no nodes available to schedule pods
Name: coredns-7dcf49c5dd-rdw94
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: <none>
Labels: eks.amazonaws.com/component=coredns
k8s-app=kube-dns
pod-template-hash=7dcf49c5dd
Annotations: eks.amazonaws.com/compute-type: ec2
kubernetes.io/psp: eks.privileged
Status: Pending
IP:
Controlled By: ReplicaSet/coredns-7dcf49c5dd
Containers:
coredns:
Image: 602401143452.dkr.ecr.ca-central-1.amazonaws.com/eks/coredns:v1.8.0-eksbuild.1
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/tmp from tmp (rw)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-sqv8j (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-sqv8j:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-sqv8j
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s (x16 over 15m) default-scheduler no nodes available to schedule pods
Name: metrics-server-5889d4b758-2bmc4
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: k8s-app=metrics-server
pod-template-hash=5889d4b758
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP:
Controlled By: ReplicaSet/metrics-server-5889d4b758
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server-amd64:v0.3.6
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-wsqkn (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
metrics-server-token-wsqkn:
Type: Secret (a volume populated by a Secret)
SecretName: metrics-server-token-wsqkn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6s (x9 over 6m56s) default-scheduler no nodes available to schedule pods
Also,
$ kubectl get nodes
No resources found.
And,
$ kubectl describe nodes
returns nothing
Can someone help me troubleshoot and fix this?
TIA.
Self-documenting my solution:
Given that my AWS setup is as follows
account1:user1:role1
account2:user2:role2
and the role setup is as below -
arn:aws:iam::account2:role/role2
<< trust relationship >>
eks.amazonaws.com
ec2.amazonaws.com
arn:aws:iam::account1:user/user1
arn:aws:sts::account2:assumed-role/role2/user11
Updating the eks-cluster.tf as below -
map_roles = [
  {
    "groups": [ "system:masters" ],
    "rolearn": "arn:aws:iam::account2:role/role2",
    "username": "role2"
  }
]
map_users = [
  {
    "groups": [ "system:masters" ],
    "userarn": "arn:aws:iam::account1:user/user1",
    "username": "user1"
  },
  {
    "groups": [ "system:masters" ],
    "userarn": "arn:aws:sts::account2:assumed-role/role2/user11",
    "username": "user1"
  }
]
p.s.: Yes "user11" is a generated username suffixed with a "1" to the account1 user with a username of "user1"
Makes everything work !
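For context, the map_roles/map_users inputs are what the terraform-aws-eks module renders into the aws-auth ConfigMap in kube-system; the result is roughly equivalent to the following sketch (the module's actual output also includes the node role mappings, which are omitted here):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::account2:role/role2
      username: role2
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::account1:user/user1
      username: user1
      groups:
        - system:masters
    - userarn: arn:aws:sts::account2:assumed-role/role2/user11
      username: user1
      groups:
        - system:masters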

Not able to pull container images from ACR into the Minikube VM

I have created a virtual machine on Azure and installed Minikube on it with VirtualBox. I created the kubectl image pull secret using the instructions in the following link:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-kubernetes
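For reference, that guide creates an image pull secret which the deployment then has to reference through imagePullSecrets. A minimal sketch reconstructed from the pod description below (the secret name acr-secret is a placeholder for whatever name was used when the secret was created):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loginfunctionality
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: loginfunctionality
  template:
    metadata:
      labels:
        io.kompose.service: loginfunctionality
    spec:
      containers:
        - name: loginfunctionality
          image: healthcareakscicdacr.azurecr.io/loginfunctionality:latest
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: acr-secret   # placeholder: the name given to the pull secret created per the linked doc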
I am able to initiate a pull request from ACR in the Azure portal, but the container has been stuck in ContainerCreating for a very long time. Here is the description of the pod in question:
Name: loginfunctionality-84b59c4464-rr5ss
Namespace: default
Priority: 0
Node: minikube/192.168.99.101
Start Time: Mon, 29 Jun 2020 11:42:01 +0000
Labels: io.kompose.service=loginfunctionality
pod-template-hash=84b59c4464
Annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.21.0 (992df58d8)
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/loginfunctionality-84b59c4464
Containers:
loginfunctionality:
Container ID:
Image: healthcareakscicdacr.azurecr.io/loginfunctionality:latest
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
ASPNETCORE_ENVIRONMENT: Development
RedisCacheConnection: rediscache:6379
WebApiBaseUrl: http://20.185.77.158:5018/api/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-f4wfq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-f4wfq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-f4wfq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m3s default-scheduler Successfully assigned default/loginfunctionality-84b59c4464-rr5ss to mini
Normal Pulling 4m35s kubelet, minikube Pulling image "healthcareakscicdacr.azurecr.io/loginfunctionality:latest"
Re-running kubectl describe pod loginfunctionality-84b59c4464-rr5ss at roughly 7, 11, and 16 minutes showed the identical state each time: the pod remained Pending, the container stayed in Waiting/ContainerCreating, and the only events were the initial Scheduled and the Pulling of "healthcareakscicdacr.azurecr.io/loginfunctionality:latest", which never completed.
Please let me know where I am going wrong.
Restarting the VM resolved the issue.

Airflow task: "Task instance state: Task is in the 'queued' state which is not a valid state for execution. The task must be cleared in order to be run."

The Kubernetes pod description below shows that it is using the LocalExecutor instead of the KubernetesExecutor, and that the image is invalid:
kubectl describe pod tablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Name: swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Namespace: default
Priority: 0
Node: 10.73.96.181
Start Time: Mon, 11 May 2020 18:22:15 +0530
Labels: airflow-worker=5888feda-6aee-49c8-a94b-39cbe5758062
airflow_version=1.10.10
dag_id=Swed-schema-tables-creation
execution_date=2020-05-11T12_52_09.829627_plus_00_00
kubernetes_executor=True
task_id=Schema_Tables_Creation
try_number=1
Annotations: <none>
Status: Pending
IP: 172.17.0.46
IPs:
IP: 172.17.0.46
Containers:
base:
Container ID:
Image: :
Image ID:
Port: <none>
Host Port: <none>
Command:
airflow
run
Swed-schema-tables-creation
Schema_Tables_Creation
2020-05-11T12:52:09.829627+00:00
--local
--pool
default_pool
-sd
/root/airflow/dags/User_Creation_dag.py
State: Waiting
Reason: InvalidImageName
Ready: False
Restart Count: 0
Environment:
AIRFLOW__CORE__EXECUTOR: LocalExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql://airflowkube:airflowkube@10.73.96.181:5434/airflowkube
Mounts:
/root/airflow/dags from airflow-dags (ro)
/root/airflow/logs from airflow-logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-64cxg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
airflow-dags:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/dags
HostPathType:
airflow-logs:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/Logs
HostPathType:
default-token-64cxg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-64cxg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc to evblfnclnullnull1538
Warning Failed 2m15s (x12 over 4m28s) kubelet, evblfnclnullnull1538 Error: InvalidImageName
Warning InspectFailed 2m (x13 over 4m28s) kubelet, evblfnclnullnull1538 Failed to apply default image tag ":": couldn't parse image reference ":": invalid reference format
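For reference, the InvalidImageName events come from the worker pod's image being rendered as ":" (empty repository and tag). With Airflow 1.10.x and the KubernetesExecutor this usually means the worker image settings are unset (the LocalExecutor shown inside the worker pod itself is normal; each worker runs its single task with the local executor). A hedged sketch of supplying those settings as environment variables on the scheduler container, with placeholder image values:
env:
  - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
    value: apache/airflow        # placeholder: the repository of your Airflow worker image
  - name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG
    value: 1.10.10               # placeholder: the tag of your Airflow worker image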

Reclaiming data (keyspaces) in a PersistentVolume of a Kubernetes Cassandra cluster

I have created a Cassandra cluster on AWS using Kubernetes. I created the volumes as PersistentVolumes with the reclaim policy set to Retain. But when I delete the pods (all instances) and recreate them, the old data is lost.
Here is the status of my setup:
$kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1bc3f896-c0a5-11e8-84a8-02c7556b5a4a 320Gi RWO Retain Bound default/cassandra-storage-cassandra-1 gp2 21d
pvc-f3ff4203-c0a4-11e8-84a8-02c7556b5a4a 320Gi RWO Retain Bound default/cassandra-storage-cassandra-0 gp2 21d
$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-storage-cassandra-0 Bound pvc-f3ff4203-c0a4-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 21d
cassandra-storage-cassandra-1 Bound pvc-1bc3f896-c0a5-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 21d
$kubectl get pods
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 39s
cassandra-1 1/1 Running 0 27s
$kubectl get statefulsets
NAME DESIRED CURRENT AGE
cassandra 2 2 1m
----
Now if I add some data (keyspaces, tables), then delete the StatefulSet and recreate it, the old data (keyspaces, tables) is missing. Since my reclaim policy is Retain, it should still be there.
Here is my StatefulSet creation YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 2
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 180
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: cassandra
      containers:
        - env:
            - name: MAX_HEAP_SIZE
              value: 1024M
            - name: HEAP_NEWSIZE
              value: 1024M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "CassandraCluster"
            - name: CASSANDRA_DC
              value: "DC1-Cassandra"
            - name: CASSANDRA_RACK
              value: "Rack1-Cassandra"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: library/cassandra
          name: cassandra
          volumeMounts:
            - mountPath: /cassandra-storage
              name: cassandra-storage
  volumeClaimTemplates:
    - metadata:
        name: cassandra-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 320Gi
The configuration of the PVs is as follows:
$kubectl describe pv
Name: pvc-1bc3f896-c0a5-11e8-84a8-02c7556b5a4a
Labels: failure-domain.beta.kubernetes.io/region=us-west-2
failure-domain.beta.kubernetes.io/zone=us-west-2b
Annotations: kubernetes.io/createdby=aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller=yes
pv.kubernetes.io/provisioned-by=kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: default/cassandra-storage-cassandra-1
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 320Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-west-2b/vol-0dceef39c7948c69e
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
Name: pvc-f3ff4203-c0a4-11e8-84a8-02c7556b5a4a
Labels: failure-domain.beta.kubernetes.io/region=us-west-2
failure-domain.beta.kubernetes.io/zone=us-west-2b
Annotations: kubernetes.io/createdby=aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller=yes
pv.kubernetes.io/provisioned-by=kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: default/cassandra-storage-cassandra-0
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 320Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-west-2b/vol-07c16900909f80cd1
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
Am I missing some setting, or is reclaim not possible when the whole StatefulSet is deleted, so that only individual pod deletion/restart can reuse the volume data?
I think the issue is that your Cassandra mountPath "/cassandra-storage" is not right: if you did not change the Cassandra data path, the default data path is "/var/lib/cassandra/data", so you need to change the volume mountPath to "/var/lib/cassandra/data" in your Cassandra YAML file. Otherwise the keyspaces are written to the container filesystem and disappear with the pod, while the retained EBS volume stays empty.
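A minimal sketch of the corrected mount inside the container spec above (only the mountPath changes; the volumeClaimTemplate stays the same):
      containers:
        - name: cassandra
          image: library/cassandra
          volumeMounts:
            - mountPath: /var/lib/cassandra/data   # Cassandra's default data directory
              name: cassandra-storage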
