GKE deployment of ReactJS app gives CrashLoopBackOff - node.js

I have built the Docker image for my ReactJS app and tested it locally; it works fine.
I'm using Google Cloud Build, which automatically pushes my container image to the gcr.io container registry.
Then I try to create a deployment from the container image in gcr.io, but the deployment never finishes successfully. It gives me:
Pod errors: CrashLoopBackOff
Does not have minimum availability
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
When I look at the pod's container logs, I see that my app keeps restarting without logging any error:
I 2020-04-27T15:07:57.505370601Z > react-scripts start
I 2020-04-27T15:07:57.505375001Z
I 2020-04-27T15:07:59.392880949Z ℹ 「wds」: Project is running at http://10.48.0.14/
I 2020-04-27T15:07:59.393329591Z ℹ 「wds」: webpack output is served from
I 2020-04-27T15:07:59.393494921Z ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
I 2020-04-27T15:07:59.393641770Z ℹ 「wds」: 404s will fallback to /
I 2020-04-27T15:07:59.393881277Z Starting the development server...
I suspect that Kubernetes does not wait long enough and tries to restart the application.
I'm not using a deployment.yaml; I created the deployment through the GCP console.
There is no health endpoint in my ReactJS app.
This is the output of kubectl describe pod ...:
Name: helloworld-gke-7fd977fd94-kvrcj
Namespace: default
Priority: 0
Node: gke-helloworld-gke-default-pool-a23be758-g8q7/10.182.0.2
Start Time: Mon, 27 Apr 2020 17:15:18 +0200
Labels: app=hello
pod-template-hash=7fd977fd94
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container hello-app
Status: Running
IP: 10.48.0.15
IPs: <none>
Controlled By: ReplicaSet/helloworld-gke-7fd977fd94
Containers:
hello-app:
Container ID: docker://389151ed6...
Image: gcr.io/tuition-h...
Image ID: docker-pullable://gcr.io/...
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 27 Apr 2020 18:28:00 +0200
Finished: Mon, 27 Apr 2020 18:28:02 +0200
Ready: False
Restart Count: 19
Requests:
cpu: 100m
Environment:
PORT: 8080
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqpm7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sqpm7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqpm7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m17s (x320 over 74m) kubelet, gke-helloworld-gke-default-pool-... Back-off restarting failed container

Related

kubernetes giving CrashLoopBackOff error while creating pods

I'm creating a pod from a node container image, and it is giving a CrashLoopBackOff error.
kubectl get pods
kubectl describe pod test-node3
Any help would be appreciated.
You can add a command as below so that the pod will remain in a running state.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
Ref: Doc
Your container does not have a long-running process. The main process in the container is exiting with exit code 0, which usually means that the process terminated successfully. You can see it in the kubectl describe output you have shared:
Reason: Completed
Exit Code: 0
Once a Pod is assigned to a node by the scheduler, the kubelet starts creating its containers using the container runtime. There are three possible container states: Waiting, Running, and Terminated.
Terminated: indicates that the container completed its execution and has stopped running. A container enters this state when it has either completed execution successfully or failed for some reason. In both cases a reason and exit code are displayed, as well as the container's start and finish times.
In your output it is clear that the container inside the pod ran to completion, its work done, exiting with code 0, as in this snippet:
State: Terminated
Reason: Completed
Exit Code: 0
You should either add a long-running process to your container or set restartPolicy: Never in the pod definition.
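For the first option, a minimal sketch of a long-running process, assuming a Node image (the file name server.js and the port are illustrative, not taken from the question):
// server.js - keeps the container alive and doubles as a trivial health endpoint
const http = require('http');
const port = process.env.PORT || 3000;
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
}).listen(port, () => console.log(`listening on ${port}`));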
For the second option, I tested your image with the correct restart policy, and the Pod runs to completion with no crash:
kubectl run test --image=abhishekk27/kube-pub:new --restart=Never
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test 0/1 Completed 0 8m12s
The generated YAML:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: test
  name: test
spec:
  containers:
  - image: abhishekk27/kube-pub:new
    name: test
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
Result:
$ kubectl describe pod test
Name: test
Namespace: default
Priority: 0
Node: dlv-k8s-node-1/131.160.200.104
Start Time: Fri, 17 Jan 2020 09:45:00 +0000
Labels: run=test
Annotations: <none>
Status: Succeeded
IP: 10.244.1.12
IPs:
IP: 10.244.1.12
Containers:
test:
Container ID: docker://b335e5fef022dced824f85ba2bfe4c024608c9b5463599eb36591a14d709786d
Image: abhishekk27/kube-pub:new
Image ID: docker-pullable://abhishekk27/kube-pub@sha256:6a696bd733edaa48b9be781960f4ee178d16f1c9aea51e53bd0f54326a3d05b1
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 17 Jan 2020 09:45:48 +0000
Finished: Fri, 17 Jan 2020 09:45:48 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7f4mt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-7f4mt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7f4mt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m50s default-scheduler Successfully assigned default/test to dlv-k8s-node-1
Normal Pulling 6m46s kubelet, dlv-k8s-node-1 Pulling image "abhishekk27/kube-pub:new"
Normal Pulled 5m58s kubelet, dlv-k8s-node-1 Successfully pulled image "abhishekk27/kube-pub:new"
Normal Created 5m58s kubelet, dlv-k8s-node-1 Created container test
Normal Started 5m58s kubelet, dlv-k8s-node-1 Started container test

NodeJs api container crashing in kubernetes

As part of the CI/CD pipeline I deploy my web API to Kubernetes; the most recent branch I'm working on keeps crashing.
I have made sure the app runs locally with all the configurations, and the CI/CD pipeline on the master branch succeeds. I assume some change I introduced is making the app fail, but I can't see any problem in the logs.
This is my Dockerfile:
FROM node:12
WORKDIR /usr/src/app
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 5000
EXPOSE $PORT
CMD [ "npm", "start" ]
This is what I get when I run kubectl describe on the corresponding pod:
Controlled By: ReplicaSet/review-refactor-e-0jmik1-7f75c45779
Containers:
auto-deploy-app:
Container ID: docker://8d6035b8ee0938262ea50e2f74d3ab627761fdf5b1811460b24f94a74f880810
Image: registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751
Image ID: docker-pullable://registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints@sha256:de1e4478867f54a76f1c82374dcebb1d40b3eb0cde24caf936a21a4d16471312
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 27 Jul 2019 19:18:07 +0100
Finished: Sat, 27 Jul 2019 19:18:49 +0100
Ready: False
Restart Count: 7
Liveness: http-get http://:5000/ delay=15s timeout=15s period=10s #success=1 #failure=3
Readiness: http-get http://:5000/ delay=5s timeout=3s period=10s #success=1 #failure=3
Environment Variables from:
review-refactor-e-0jmik1-secret Secret Optional: false
Environment:
DATABASE_URL: postgres://:@review-refactor-e-0jmik1-postgres:5432/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mvvfv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mvvfv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mvvfv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m52s default-scheduler Successfully assigned metadata-service-13359548/review-refactor-e-0jmik1-7f75c45779-jfw22 to gke-qa2-default-pool-4dc045be-g8d9
Normal Pulling 9m51s kubelet, gke-qa2-default-pool-4dc045be-g8d9 pulling image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Normal Pulled 9m45s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Successfully pulled image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Warning Unhealthy 8m58s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: Get http://10.48.1.34:5000/: dial tcp 10.48.1.34:5000: connect: connection refused
Warning Unhealthy 8m28s (x6 over 9m28s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Started 8m23s (x3 over 9m42s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Started container
Warning Unhealthy 8m23s (x6 over 9m23s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Killing container with id docker://auto-deploy-app:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Container image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751" already present on machine
Normal Created 8m23s (x3 over 9m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Created container
Warning BackOff 4m42s (x7 over 5m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Back-off restarting failed container
I expect the app to get deployed to Kubernetes, but instead I see a CrashLoopBackOff error.
I also don't see any application-specific errors in the logs.
I figured it out: I had to add an endpoint mapped to the root URL. As part of the CD pipeline the root URL gets pinged (the liveness and readiness probes in the output above do an HTTP GET on /), and if there is no response the container is killed and the job fails.
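For reference, a minimal sketch of such a root endpoint, assuming the API uses Express (the handler below is illustrative, not the asker's actual code):
// Respond 200 OK on "/" so the HTTP liveness/readiness probes succeed
const express = require('express');
const app = express();
app.get('/', (req, res) => res.sendStatus(200));
app.listen(process.env.PORT || 5000, () => console.log('listening'));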

how to solve EADDRINUSE in GKE container?

I am new to containers and to GKE. I used to run my node server app with npm run debug, and I am trying to do the same on GKE using a shell in my container. When I log into the shell of the myapp container and do this, I get:
> api_server#0.0.0 start /usr/src/app
> node src/
events.js:167
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8089
Normally I deal with this with something like killall -9 node, but when I do that I am kicked out of my shell and the container is restarted by Kubernetes. It seems node is already using the port:
netstat -tulpn | grep 8089
tcp 0 0 :::8089 :::* LISTEN 23/node
How can I start my server from the shell?
My config files:
Dockerfile:
FROM node:10-alpine
RUN apk add --update \
libc6-compat
WORKDIR /usr/src/app
COPY package*.json ./
COPY templates-mjml/ templates-mjml/
COPY public/ public/
COPY src/ src/
COPY data/ data/
COPY config/ config/
COPY migrations/ migrations/
ENV NODE_ENV 'development'
ENV PORT '8089'
RUN npm install --development
myapp.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 8089
    name: http
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/myproject-224713/firstapp:v4
        ports:
        - containerPort: 8089
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=myproject-224713:europe-west4:mydatabase=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
---
myrouter.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: myapp
      weight: 100
    websocketUpgrade: true
EDIT: I got the following logs (screenshot not reproduced here).
EDIT 2:
After adding a FeathersJS health service I get the following output from describe:
Name: myapp-95df4dcd6-lptnq
Namespace: default
Node: gke-standard-cluster-1-default-pool-59600833-pcj3/10.164.0.3
Start Time: Wed, 02 Jan 2019 22:08:33 +0100
Labels: app=myapp
pod-template-hash=518908782
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myapp; cpu request for container cloudsql-proxy
sidecar.istio.io/status:
{"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 10.44.3.17
Controlled By: ReplicaSet/myapp-95df4dcd6
Init Containers:
istio-init:
Container ID: docker://768b2327c6cfa57b3d25a7029e52ce6a88dec6848e91dd7edcdf9074c91ff270
Image: gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8089,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:34 +0100
Finished: Wed, 02 Jan 2019 22:08:35 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
myapp:
Container ID: docker://5566a3e8242ec6755dc2f26872cfb024fab42d5f64aadc3db1258fcb834f8418
Image: gcr.io/myproject-224713/firstapp:v4
Image ID: docker-pullable://gcr.io/myproject-224713/firstapp@sha256:0cbd4fae0b32fa0da5a8e6eb56cb9b86767568d243d4e01b22d332d568717f41
Port: 8089/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Jan 2019 22:09:19 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:35 +0100
Finished: Wed, 02 Jan 2019 22:09:19 +0100
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Liveness: http-get http://:8089/health delay=15s timeout=20s period=10s #success=1 #failure=3
Readiness: http-get http://:8089/health delay=5s timeout=5s period=10s #success=1 #failure=3
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
cloudsql-proxy:
Container ID: docker://414799a0699abe38c9759f82a77e1a3e06123714576d6d57390eeb07611f9a63
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject-224713:europe-west4:osm=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
istio-proxy:
Container ID: docker://898bc95c6f8bde18814ef01ce499820d545d7ea2d8bf494b0308f06ab419041e
Image: gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846f9b7de6507b54f5f0d0171430fcd3fb6f5132dc
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
myapp
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
default-token-9vtz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9vtz5
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned myapp-95df4dcd6-lptnq to gke-standard-cluster-1-default-pool-59600833-pcj3
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-envoy"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "default-token-9vtz5"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-certs"
Normal Pulled 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" already present on machine
Normal Created 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" already present on machine
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Warning Unhealthy 31s (x4 over 61s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 22s (x2 over 66s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/myproject-224713/firstapp:v4" already present on machine
Warning Unhealthy 22s (x3 over 42s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 22s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Killing container with id docker://myapp:Container failed liveness probe.. Container will be killed and recreated.
This is just how Kubernetes works: as long as your pod has its processes running, it will remain 'up'. The moment you kill one of its processes, Kubernetes will restart the pod because it assumes something crashed or went wrong.
If you really want to debug with npm run debug, consider one of the following:
Build the container with a CMD (at the end) or ENTRYPOINT value in your Dockerfile that runs npm run debug, then run it using a Deployment definition in Kubernetes (a one-line sketch of this follows the YAML below).
Override the command of the myapp container in your Deployment definition with something like:
spec:
  containers:
  - name: myapp
    image: gcr.io/myproject-224713/firstapp:v4
    ports:
    - containerPort: 8089
    command: ["npm", "run", "debug"]
    env:
    - name: POSTGRES_DB_HOST
      value: 127.0.0.1:5432
    - name: POSTGRES_DB_USER
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: POSTGRES_DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
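For the first option, the only Dockerfile change is a start command at the end; note that the Dockerfile in the question currently has no CMD at all, so the image falls back to the base image's default command:
CMD [ "npm", "run", "debug" ]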

test image from azure container registry

I created a simple Docker image from a "Hello World" java application.
This is my Dockerfile
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
I pushed the image (java-app) to Azure Container Registry.
$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app
I want to deploy it:
amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1
deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
service "dockerproject" exposed
When I check the pods, I see the pod has crashed:
amhg$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dockerproject-b6799d879-pt5rx 0/1 CrashLoopBackOff 8 19m
Is there a way to "test"/run the image from the central registry? And why does it crash?
Here is the kubectl describe output for the pod:
amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h
Name: dockerproject-64fbf7649-spc7h
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 11:53:58 +0200
Labels: pod-template-hash=209693205
run=dockerproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
Status: Running
IP: 10.244.0.38
Controlled By: ReplicaSet/dockerproject-64fbf7649
Containers:
dockerproject:
Container ID: docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
Image: acontainerregistry.azurecr.io/java-app:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 43m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
Normal Created 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Started container
Warning FailedSync 8m (x161 over 43m) kubelet, aks-nodepool1-39744669-0 Error syncing pod
Warning BackOff 3m (x184 over 43m) kubelet, aks-nodepool1-39744669-0 Back-off restarting failed container
When you run an application in a Pod, Kubernetes expects it to keep working as a daemon until you stop it somehow.
In your details about the pod I see this:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
This means that your application exited with code 0 (which means "all is OK") right after starting. So the image was successfully downloaded (the registry is fine) and run, but then the application exited.
That's why Kubernetes keeps trying to restart the pod.
The only thing I can suggest is to find the reason why the application stops and fix it.
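If the image really is a run-to-completion program, as a Hello World is, one way to sanity-check it, following the same restartPolicy: Never approach as the earlier answer on this page, is to run it as a one-off Pod instead of a Deployment, so that exit code 0 counts as success (the pod name java-app-test is just an example):
kubectl run java-app-test --image=acontainerregistry.azurecr.io/java-app:v1 --restart=Never
kubectl get pods
The pod should then end up in Completed status rather than CrashLoopBackOff.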

Performance metrics in Kubernetes Dashboard missing in Azure Kubernetes deployment

Update 2: I was able to get the statistics by using Grafana and InfluxDB. However, I find this overkill: I want to see the current status of my cluster, not per se the historical trends. Based on the linked image, it should be possible using the pre-deployed Heapster and the Kubernetes Dashboard.
Update 1:
With the command below, I do see resource information. I guess the remaining part of the question is why it is not showing up (or how I should configure it to show up) in the Kubernetes dashboard, as shown in this image: https://docs.giantswarm.io/img/dashboard-ui.png
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-agentpool0-41204139-0 36m 1% 682Mi 9%
k8s-agentpool0-41204139-1 33m 1% 732Mi 10%
k8s-agentpool0-41204139-10 36m 1% 690Mi 10%
[truncated]
I am trying to monitor performance in my Azure Kubernetes deployment. I noticed it has Heapster running by default. I did not launch it myself, but I do want to leverage it if it is there. My question is: how can I access it, or is there something wrong with it? Here are the details I can think of; let me know if you need more.
$ kubectl cluster-info
Kubernetes master is running at https://[hidden].uksouth.cloudapp.azure.com
Heapster is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
tiller-deploy is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/tiller-deploy:tiller/proxy
I set up a proxy:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Point my browser to
localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default
I see the Kubernetes dashboard, but I notice that the performance graphs displayed at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ are missing. I also do not see the admin section.
I then point my browser to localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy and get
404 page not found
Inspecting the pods:
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-2205950583-43w4b 2/2 Running 0 1d
kube-addon-manager-k8s-master-41204139-0 1/1 Running 0 1d
kube-apiserver-k8s-master-41204139-0 1/1 Running 0 1d
kube-controller-manager-k8s-master-41204139-0 1/1 Running 0 1d
kube-dns-v20-2000462293-1j20h 3/3 Running 0 16h
kube-dns-v20-2000462293-hqwfn 3/3 Running 0 16h
kube-proxy-0kwkf 1/1 Running 0 1d
kube-proxy-13bh5 1/1 Running 0 1d
[truncated]
kube-proxy-zfbb1 1/1 Running 0 1d
kube-scheduler-k8s-master-41204139-0 1/1 Running 0 1d
kubernetes-dashboard-732940207-w7pt2 1/1 Running 0 1d
tiller-deploy-3007245560-4tk78 1/1 Running 0 1d
Checking the log:
$ kubectl logs heapster-2205950583-43w4b heapster --namespace=kube-system
I0309 06:11:21.241752 19 heapster.go:72] /heapster --source=kubernetes.summary_api:""
I0309 06:11:21.241813 19 heapster.go:73] Heapster version v1.4.2
I0309 06:11:21.242310 19 configs.go:61] Using Kubernetes client with master "https://10.0.0.1:443" and version v1
I0309 06:11:21.242331 19 configs.go:62] Using kubelet port 10255
I0309 06:11:21.243557 19 heapster.go:196] Starting with Metric Sink
I0309 06:11:21.344547 19 heapster.go:106] Starting heapster on port 8082
E0309 14:14:05.000293 19 summary.go:389] Node k8s-agentpool0-41204139-32 is not ready
E0309 14:14:05.000331 19 summary.go:389] Node k8s-agentpool0-41204139-56 is not ready
[truncated the other agent pool messages saying not ready]
E0309 14:24:05.000645 19 summary.go:389] Node k8s-master-41204139-0 is not ready
$ kubectl describe pod heapster-2205950583-43w4b --namespace=kube-system
Name: heapster-2205950583-43w4b
Namespace: kube-system
Node: k8s-agentpool0-41204139-54/10.240.0.11
Start Time: Fri, 09 Mar 2018 07:11:15 +0100
Labels: k8s-app=heapster
pod-template-hash=2205950583
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"heapster-2205950583","uid":"ac75e772-2360-11e8-9e1c-00224807...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 10.244.58.2
Controlled By: ReplicaSet/heapster-2205950583
Containers:
heapster:
Container ID: docker://a9205e7ab9070a1d1bdee4a1b93eb47339972ad979c4d35e7d6b59ac15a91817
Image: k8s-gcrio.azureedge.net/heapster-amd64:v1.4.2
Image ID: docker-pullable://k8s-gcrio.azureedge.net/heapster-amd64@sha256:f58ded16b56884eeb73b1ba256bcc489714570bacdeca43d4ba3b91ef9897b20
Port: <none>
Command:
/heapster
--source=kubernetes.summary_api:""
State: Running
Started: Fri, 09 Mar 2018 07:11:20 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 121m
memory: 464Mi
Requests:
cpu: 121m
memory: 464Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from heapster-token-txk8b (ro)
heapster-nanny:
Container ID: docker://68e021532a482f32abec844d6f9ea00a4a8232b8d1004b7df4199d2c7d3a3b4c
Image: k8s-gcrio.azureedge.net/addon-resizer:1.7
Image ID: docker-pullable://k8s-gcrio.azureedge.net/addon-resizer@sha256:dcec9a5c2e20b8df19f3e9eeb87d9054a9e94e71479b935d5cfdbede9ce15895
Port: <none>
Command:
/pod_nanny
--cpu=80m
--extra-cpu=0.5m
--memory=140Mi
--extra-memory=4Mi
--threshold=5
--deployment=heapster
--container=heapster
--poll-period=300000
--estimator=exponential
State: Running
Started: Fri, 09 Mar 2018 07:11:18 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 90Mi
Requests:
cpu: 50m
memory: 90Mi
Environment:
MY_POD_NAME: heapster-2205950583-43w4b (v1:metadata.name)
MY_POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from heapster-token-txk8b (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
heapster-token-txk8b:
Type: Secret (a volume populated by a Secret)
SecretName: heapster-token-txk8b
Optional: false
QoS Class: Guaranteed
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: <none>
Events: <none>
I have seen in the past that if you restart the dashboard pod it starts working. Can you try that real fast and let me know?
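A quick way to do that is to delete the dashboard pod and let its controller recreate it, for example (using the pod name from your kubectl get pods output above):
kubectl delete pod kubernetes-dashboard-732940207-w7pt2 --namespace=kube-system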
