test image from azure container registry - azure

I created a simple Docker image from a "Hello World" Java application.
This is my Dockerfile:
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
I pushed the image (java-app) to Azure Container Registry.
$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app
I want to deploy it
amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1
deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
service "dockerproject" exposed
When I check the pods, the pod has crashed:
amhg$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dockerproject-b6799d879-pt5rx 0/1 CrashLoopBackOff 8 19m
Is there a way to "test"/run the image from the central registry, and why does it crash?
Here is the output of kubectl describe pod:
amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h
Name: dockerproject-64fbf7649-spc7h
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 11:53:58 +0200
Labels: pod-template-hash=209693205
run=dockerproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
Status: Running
IP: 10.244.0.38
Controlled By: ReplicaSet/dockerproject-64fbf7649
Containers:
dockerproject:
Container ID: docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
Image: acontainerregistry.azurecr.io/java-app:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 43m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
Normal Created 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Started container
Warning FailedSync 8m (x161 over 43m) kubelet, aks-nodepool1-39744669-0 Error syncing pod
Warning BackOff 3m (x184 over 43m) kubelet, aks-nodepool1-39744669-0 Back-off restarting failed container

When you run an application in a Pod, Kubernetes expects it to keep running as a daemon until you stop it somehow.
In your pod details I see this:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
This means that your application exited with code 0 (which means "all is OK") right after it started. So the image was successfully pulled (the registry is fine) and run, but the application exited: a "Hello World" program prints its output and terminates, so there is nothing left for the container to do.
That's why Kubernetes keeps trying to restart the pod.
The only thing I can suggest is to find the reason the application stops and fix it.
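To answer the "test" part: you can pull and run the image straight from the registry on any machine that has Docker and the Azure CLI, and you can read the crashed container's output in the cluster. A minimal sketch, assuming the registry and tag from your own commands and the pod name from the describe output:
$ az acr login --name AContainerRegistry
$ docker pull acontainerregistry.azurecr.io/java-app:v1
$ docker run --rm acontainerregistry.azurecr.io/java-app:v1   # prints "Hello World" and exits, just as it does in the pod
$ kubectl logs dockerproject-64fbf7649-spc7h --previous       # stdout of the last terminated container
If the container exits immediately here as well, the registry is not the problem; the image's CMD simply runs to completion.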

Related

GKE deployment ReactJS app CrashLoopBackoff

I have built the Docker image for my ReactJS app. I ran the image locally and tested it, and it works fine.
I'm using Google Cloud Build, which automatically pushes my container image to the gcr.io container repo.
Then I try to create a deployment from my container image, which resides in gcr.io.
But the deployment does not finish successfully. It gives me:
Pod errors: CrashLoopBackOff
Does not have minimum availability
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
When I look at the pod's container logs, I see my app restarting without any error logs:
I 2020-04-27T15:07:57.505370601Z > react-scripts start
I 2020-04-27T15:07:57.505375001Z
I 2020-04-27T15:07:59.392880949Z ℹ 「wds」: Project is running at http://10.48.0.14/
I 2020-04-27T15:07:59.393329591Z ℹ 「wds」: webpack output is served from
I 2020-04-27T15:07:59.393494921Z ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
I 2020-04-27T15:07:59.393641770Z ℹ 「wds」: 404s will fallback to /
I 2020-04-27T15:07:59.393881277Z Starting the development server...
I suspect that Kubernetes does not wait long enough and tries to restart the application.
I'm not using a deployment.yaml but simply using the GCP console.
There is no health endpoint in my ReactJS app.
This is the output for kubectl describe pod ...
Name: helloworld-gke-7fd977fd94-kvrcj
Namespace: default
Priority: 0
Node: gke-helloworld-gke-default-pool-a23be758-g8q7/10.182.0.2
Start Time: Mon, 27 Apr 2020 17:15:18 +0200
Labels: app=hello
pod-template-hash=7fd977fd94
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container hello-app
Status: Running
IP: 10.48.0.15
IPs: <none>
Controlled By: ReplicaSet/helloworld-gke-7fd977fd94
Containers:
hello-app:
Container ID: docker://389151ed6...
Image: gcr.io/tuition-h...
Image ID: docker-pullable://gcr.io/...
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 27 Apr 2020 18:28:00 +0200
Finished: Mon, 27 Apr 2020 18:28:02 +0200
Ready: False
Restart Count: 19
Requests:
cpu: 100m
Environment:
PORT: 8080
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqpm7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sqpm7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqpm7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m17s (x320 over 74m) kubelet, gke-helloworld-gke-default-pool-... Back-off restarting failed container
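One likely explanation, given the two-second lifetime and exit code 0, is that "react-scripts start" shuts down when it has no interactive stdin/TTY, a common behaviour when the development server runs in a container. A quick local check (the gcr.io path is truncated in the describe output, so the image name below is a placeholder):
$ docker run --rm gcr.io/<project>/<image>:<tag>       # without a TTY the dev server starts, then exits 0 after a couple of seconds
$ docker run --rm -it gcr.io/<project>/<image>:<tag>   # with -it attached the dev server keeps running
If that is the cause, either keep stdin and a TTY attached to the container, or (better for a deployment) run npm run build and serve the static bundle instead of the development server.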

Azure Container Registry image pulls are very slow with image size ~150 MBs

When deploying an image to an AKS instance, the image pull from ACR (Premium SKU) is very slow, even for "small" images of around 150 MB.
Both the AKS resource and the ACR resource are in the Canada East region.
Here is an example:
root@076fff2831b2:/tmp# kubectl describe pod application-service-59bcf96874-pvrmb
Name: application-service-59bcf96874-pvrmb
Namespace: default
Priority: 0
Node: aks-41067869-1/10.255.13.163
Start Time: Tue, 11 Feb 2020 18:15:53 -0500
Labels: app.kubernetes.io/instance=application-service
app.kubernetes.io/name=application-service
pod-template-hash=59bcf96874
Annotations: <none>
Status: Running
IP: 10.255.13.175
IPs: <none>
Controlled By: ReplicaSet/application-service-59bcf96874
Containers:
application-service:
Container ID: docker://0e86526a293d9055d482a09f043f0be68c594244fe4216f8fb190bc2caf6b65b
Image: myacr01.azurecr.io/microservices/application-service:0.0.6
Image ID: docker-pullable://myacr01.azurecr.io/microservices/application-service@sha256:cfbb3ffa7adc52da9cc0b8d7f78376076ea712025b59df8e406c559d369f4085
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 11 Feb 2020 18:35:00 -0500
Finished: Tue, 11 Feb 2020 18:35:00 -0500
Ready: False
Restart Count: 5
Liveness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PORT: 3000
undefined: undefined
Mounts:
/kvmnt from application-service-kv-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from application-service-token-9jk8j (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
application-service-kv-volume:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: azure/kv
FSType:
SecretRef: &LocalObjectReference{Name:kvcreds,}
ReadOnly: false
Options: map[keyvaultname:testIt2 keyvaultobjectnames:APPLICATION-SVC-SQLDB-CS;INGESTION-CONSUMER-EHB-CS;INGESTION-PRODUCER-EHB-CS keyvaultobjecttypes:secret;secret;secret tenantid:REMOVED usepodidentity:false usevmmanagedidentity:false]
application-service-token-9jk8j:
Type: Secret (a volume populated by a Secret)
SecretName: application-service-token-9jk8j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned default/application-service-59bcf96874-pvrmb to aks-41067869-1
Normal Pulling 20m kubelet, aks-41067869-1 Pulling image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Pulled 4m39s kubelet, aks-41067869-1 Successfully pulled image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Started 3m36s (x4 over 4m33s) kubelet, aks-41067869-1 Started container application-service
Warning BackOff 3m4s (x11 over 4m30s) kubelet, aks-41067869-1 Back-off restarting failed container
Normal Pulled 2m52s (x4 over 4m32s) kubelet, aks-41067869-1 Container image "myacr01.azurecr.io/microservices/application-service:0.0.6" already present on machine
Normal Created 2m51s (x5 over 4m33s) kubelet, aks-41067869-1 Created container application-service
Some details were modified/removed for privacy reasons.
However, the thing to note is the ~15m needed to go from a state of "Pulling" to "Pulled" for an image from an ACR.
This issue is occurring daily. The Azure Insights blade of the AKS instance shows a maximum of 26% node CPU and 14.32% node memory utilization over the last 7 days.
How can we go about troubleshooting this further to determine the possible causes of the delays?
Any help is greatly appreciated.
Thanks!
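One way to start narrowing this down (a sketch, not a definitive diagnosis) is to run the registry's built-in health check and to time the same pull from outside the cluster, for example from a VM in the same region; the registry and tag below are the ones shown in the describe output:
$ az acr check-health --name myacr01
$ time docker pull myacr01.azurecr.io/microservices/application-service:0.0.6
If the pull is fast from a VM in the same region but consistently slow from the AKS nodes, the next place to look is the nodes' outbound network path rather than the registry itself.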

kubernetes giving CrashLoopBackOff error while creating pods

I'm creating a pod with a Node container, and it is giving a CrashLoopBackOff error.
kubectl get pods
kubectl describe pod test-node3
Any help would be appreciated.
You can add a command as below so that the pod will remain in a running state:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
Ref: Doc
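To try it out, save the manifest above to a file (the file name here is just an example) and apply it; the pod should stay in the Running state instead of crash looping:
$ kubectl apply -f myapp-pod.yaml
$ kubectl get pod myapp-pod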
Your container does not have a long-running process. The main process in the container is exiting with code 0, which usually means that it terminated successfully. You can see this in the kubectl describe output you have shared:
Reason: Completed
Exit Code: 0
Once a Pod is assigned to a node by the scheduler, the kubelet starts creating its containers using the container runtime. There are three possible container states: Waiting, Running and Terminated.
Terminated: indicates that the container completed its execution and has stopped running.
A container enters this state when it has completed execution successfully or has failed for some reason. Either way, a reason and exit code are displayed, along with the container's start and finish time.
From your screenshot it is clear that the container inside the pod ran to completion with its work done and exited with code 0, as in the snippet below:
State: Terminated
Reason: Completed
Exit Code: 0
You should either add a long-running process to your container or set restartPolicy: Never in the pod definition.
I tested your image with the correct restart policy, and the pod runs to completion with no crash:
kubectl run test --image=abhishekk27/kube-pub:new --restart=Never
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test 0/1 Completed 0 8m12s
The YAML generated:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: test
  name: test
spec:
  containers:
  - image: abhishekk27/kube-pub:new
    name: test
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
Result:
$ kubectl describe pod test
Name: test
Namespace: default
Priority: 0
Node: dlv-k8s-node-1/131.160.200.104
Start Time: Fri, 17 Jan 2020 09:45:00 +0000
Labels: run=test
Annotations: <none>
Status: Succeeded
IP: 10.244.1.12
IPs:
IP: 10.244.1.12
Containers:
test:
Container ID: docker://b335e5fef022dced824f85ba2bfe4c024608c9b5463599eb36591a14d709786d
Image: abhishekk27/kube-pub:new
Image ID: docker-pullable://abhishekk27/kube-pub@sha256:6a696bd733edaa48b9be781960f4ee178d16f1c9aea51e53bd0f54326a3d05b1
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 17 Jan 2020 09:45:48 +0000
Finished: Fri, 17 Jan 2020 09:45:48 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7f4mt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-7f4mt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7f4mt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m50s default-scheduler Successfully assigned default/test to dlv-k8s-node-1
Normal Pulling 6m46s kubelet, dlv-k8s-node-1 Pulling image "abhishekk27/kube-pub:new"
Normal Pulled 5m58s kubelet, dlv-k8s-node-1 Successfully pulled image "abhishekk27/kube-pub:new"
Normal Created 5m58s kubelet, dlv-k8s-node-1 Created container test
Normal Started 5m58s kubelet, dlv-k8s-node-1 Started container test

NodeJs api container crashing in kubernetes

As part of the CI/CD pipeline I deploy my web API to Kubernetes; the most recent branch I'm working on keeps crashing.
I have made sure the app runs locally with all the configurations, and the CI/CD pipeline on the master branch succeeds. I'm assuming some change I introduced is making the app fail, but I can't see any problem in the logs.
This is my Dockerfile:
FROM node:12
WORKDIR /usr/src/app
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 5000
EXPOSE $PORT
CMD [ "npm", "start" ]
This is what I get when I run kubectl describe on the corresponding pod:
Controlled By: ReplicaSet/review-refactor-e-0jmik1-7f75c45779
Containers:
auto-deploy-app:
Container ID: docker://8d6035b8ee0938262ea50e2f74d3ab627761fdf5b1811460b24f94a74f880810
Image: registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751
Image ID: docker-pullable://registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints@sha256:de1e4478867f54a76f1c82374dcebb1d40b3eb0cde24caf936a21a4d16471312
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 27 Jul 2019 19:18:07 +0100
Finished: Sat, 27 Jul 2019 19:18:49 +0100
Ready: False
Restart Count: 7
Liveness: http-get http://:5000/ delay=15s timeout=15s period=10s #success=1 #failure=3
Readiness: http-get http://:5000/ delay=5s timeout=3s period=10s #success=1 #failure=3
Environment Variables from:
review-refactor-e-0jmik1-secret Secret Optional: false
Environment:
DATABASE_URL: postgres://:@review-refactor-e-0jmik1-postgres:5432/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mvvfv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mvvfv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mvvfv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m52s default-scheduler Successfully assigned metadata-service-13359548/review-refactor-e-0jmik1-7f75c45779-jfw22 to gke-qa2-default-pool-4dc045be-g8d9
Normal Pulling 9m51s kubelet, gke-qa2-default-pool-4dc045be-g8d9 pulling image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Normal Pulled 9m45s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Successfully pulled image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Warning Unhealthy 8m58s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: Get http://10.48.1.34:5000/: dial tcp 10.48.1.34:5000: connect: connection refused
Warning Unhealthy 8m28s (x6 over 9m28s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Started 8m23s (x3 over 9m42s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Started container
Warning Unhealthy 8m23s (x6 over 9m23s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Killing container with id docker://auto-deploy-app:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Container image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751" already present on machine
Normal Created 8m23s (x3 over 9m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Created container
Warning BackOff 4m42s (x7 over 5m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Back-off restarting failed container
I expect the app to be deployed to Kubernetes, but instead I see a CrashLoopBackOff error.
I also don't see any application-specific errors in the logs.
I figured it out. I had to add an endpoint mapped to the root URL; apparently it gets pinged as part of the CD (the liveness and readiness probes hit /), and if there is no response the job fails.
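For anyone hitting the same thing: you can check exactly what the probes see by port-forwarding to the pod and requesting the root path yourself. A small sketch using the pod name and namespace from the events above and the probe port from the describe output:
$ kubectl port-forward -n metadata-service-13359548 review-refactor-e-0jmik1-7f75c45779-jfw22 5000:5000
$ curl -i http://localhost:5000/    # anything other than a success status here will fail the liveness/readiness probes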

Node.js app in Kubernetes cluster doesn't stay running - CrashLoopBackOff

I have a small Node.js application for doing tests with Kubernetes, but it seems that the application does not keep running.
I put all the code that I developed for the test on GitHub.
I run kubectl create -f deploy.yaml
It works, but...
[webapp@srvapih ex-node]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7b89bd4755-4lc6k 1/1 Running 0 5s
api-7b89bd4755-7x964 0/1 ContainerCreating 0 5s
api-7b89bd4755-dv299 1/1 Running 0 5s
api-7b89bd4755-w6tzj 0/1 ContainerCreating 0 5s
api-7b89bd4755-xnm8l 0/1 ContainerCreating 0 5s
[webapp@srvapih ex-node]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7b89bd4755-4lc6k 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-7x964 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-dv299 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-w6tzj 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-xnm8l 0/1 CrashLoopBackOff 1 11s
Events from kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 6m48s (x5 over 8m14s) kubelet, srvweb05.beirario.intranet Container image "node:8-alpine" already present on machine
Normal Created 6m48s (x5 over 8m14s) kubelet, srvweb05.beirario.intranet Created container
Normal Started 6m48s (x5 over 8m12s) kubelet, srvweb05.beirario.intranet Started container
Normal Scheduled 6m9s default-scheduler Successfully assigned default/api-7b89bd4755-4lc6k to srvweb05.beirario.intranet
Warning BackOff 3m2s (x28 over 8m8s) kubelet, srvweb05.beirario.intranet Back-off restarting failed container
All I can say here is that you are providing a command that finishes immediately: ["/bin/sh","-c", "node", "servidor.js"]. With sh -c, only the argument right after -c is taken as the command string, so "servidor.js" becomes $0 rather than an argument to node; node starts without a script, reads EOF on stdin and exits with code 0.
Instead you should provide the command in a way that it never completes, for example ["node", "servidor.js"] or ["sh", "-c", "node servidor.js"].
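This is easy to reproduce with plain Docker, using the same node:8-alpine image as in the describe output:
$ docker run --rm node:8-alpine /bin/sh -c node servidor.js   # node never sees servidor.js; it reads EOF on stdin and exits 0 right away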
Describing your pods shows that the container in the pod completed successfully with exit code 0:
Containers:
ex-node:
Container ID: docker://836ffd771b3514fd13ae3e6b8818a7f35807db55cf8f756e962131823a476675
Image: node:8-alpine
Image ID: docker-pullable://node@sha256:8e9987a6d91d783c56980f1bd4b23b4c05f9f6076d513d6350fef8fe09ed01fd
Port: 3000/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
node
servidor.js
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 Mar 2019 14:29:54 +0000
Finished: Fri, 08 Mar 2019 14:29:54 +0000
you may use "process.stdout.write" method in your code ,This will cause the k8s session to be lost. Do not print anything in stdout!
Try using pm2: https://pm2.io/docs/runtime/integration/docker/. It runs your Node.js app under a process manager that keeps the container's main process alive.
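A minimal sketch of the pm2 route from that link, assuming servidor.js is the entrypoint as in the describe output: install pm2 and start the app with pm2-runtime, which stays in the foreground so the container is not considered finished.
$ npm install -g pm2
$ pm2-runtime servidor.js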
