As part of the CI/CD pipeline I deploy my web API to Kubernetes, and the most recent branch I'm working on keeps crashing.
I have made sure the app runs locally for all configurations, and the CI/CD pipeline on the master branch succeeds. I'm assuming some change I introduced is making the app fail, but I can't see any problem in the logs.
This is my Dockerfile:
FROM node:12
WORKDIR /usr/src/app
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 5000
EXPOSE $PORT
CMD [ "npm", "start" ]
This is what I get when I run kubectl describe on the corresponding pod:
Controlled By: ReplicaSet/review-refactor-e-0jmik1-7f75c45779
Containers:
auto-deploy-app:
Container ID: docker://8d6035b8ee0938262ea50e2f74d3ab627761fdf5b1811460b24f94a74f880810
Image: registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751
Image ID: docker-pullable://registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints@sha256:de1e4478867f54a76f1c82374dcebb1d40b3eb0cde24caf936a21a4d16471312
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 27 Jul 2019 19:18:07 +0100
Finished: Sat, 27 Jul 2019 19:18:49 +0100
Ready: False
Restart Count: 7
Liveness: http-get http://:5000/ delay=15s timeout=15s period=10s #success=1 #failure=3
Readiness: http-get http://:5000/ delay=5s timeout=3s period=10s #success=1 #failure=3
Environment Variables from:
review-refactor-e-0jmik1-secret Secret Optional: false
Environment:
DATABASE_URL: postgres://:@review-refactor-e-0jmik1-postgres:5432/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mvvfv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mvvfv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mvvfv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m52s default-scheduler Successfully assigned metadata-service-13359548/review-refactor-e-0jmik1-7f75c45779-jfw22 to gke-qa2-default-pool-4dc045be-g8d9
Normal Pulling 9m51s kubelet, gke-qa2-default-pool-4dc045be-g8d9 pulling image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Normal Pulled 9m45s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Successfully pulled image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Warning Unhealthy 8m58s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: Get http://10.48.1.34:5000/: dial tcp 10.48.1.34:5000: connect: connection refused
Warning Unhealthy 8m28s (x6 over 9m28s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Started 8m23s (x3 over 9m42s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Started container
Warning Unhealthy 8m23s (x6 over 9m23s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Killing container with id docker://auto-deploy-app:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Container image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751" already present on machine
Normal Created 8m23s (x3 over 9m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Created container
Warning BackOff 4m42s (x7 over 5m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Back-off restarting failed container
I expect the app to get deployed to Kubernetes, but instead I see a CrashLoopBackOff error.
I also don't see any application-specific errors in the logs.
I figured it out. I had to add an endpoint mapped to the root URL; apparently, as part of the CD, / gets pinged (the liveness and readiness probes above), and if there is no response the job fails.
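For anyone hitting the same 404s from the probes: a minimal sketch of such a root endpoint, assuming an Express app (the question never names the framework, so the names here are placeholders):

// app.js - sketch only; Express is an assumption
const express = require('express');
const app = express();

// Respond on "/" so the liveness/readiness probes get a 200 instead of a 404
app.get('/', (req, res) => res.status(200).send('OK'));

app.listen(process.env.PORT || 5000);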
Related
I have built a Docker image for my ReactJS app. I ran the image locally and tested it, and it works fine.
I'm using Google Cloud Build, which automatically pushes my container image to the gcr.io container registry.
Then I try to create a deployment from my container image, which resides in gcr.io.
But the deployment does not finish successfully. It gives me:
Pod errors: CrashLoopBackOff
Does not have minimum availability
Here is my Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
EXPOSE 3000
CMD [ "npm", "start" ]
When I look at the pod's container logs, I see my app has been restarting without any error log:
I 2020-04-27T15:07:57.505370601Z > react-scripts start
I 2020-04-27T15:07:57.505375001Z
I 2020-04-27T15:07:59.392880949Z ℹ 「wds」: Project is running at http://10.48.0.14/
I 2020-04-27T15:07:59.393329591Z ℹ 「wds」: webpack output is served from
I 2020-04-27T15:07:59.393494921Z ℹ 「wds」: Content not from webpack is served from /usr/src/app/public
I 2020-04-27T15:07:59.393641770Z ℹ 「wds」: 404s will fallback to /
I 2020-04-27T15:07:59.393881277Z Starting the development server...
I suspect that Kubernetes does not wait long enough and tries to restart the application.
I'm not using a deployment.yaml; I'm simply using the GCP console.
There is no health endpoint in my ReactJS app.
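Worth noting: react-scripts start is a development server. A common way to avoid depending on it in a cluster is to serve a production build instead; a sketch, assuming a standard Create React App layout (the nginx stage is an assumption, any static file server works):

# Build stage: compile the production bundle
FROM node:latest AS build
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . ./
RUN npm run build

# Serve stage: ship only the static files
FROM nginx:alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80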
This is the output for kubectl describe pod ...
Name: helloworld-gke-7fd977fd94-kvrcj
Namespace: default
Priority: 0
Node: gke-helloworld-gke-default-pool-a23be758-g8q7/10.182.0.2
Start Time: Mon, 27 Apr 2020 17:15:18 +0200
Labels: app=hello
pod-template-hash=7fd977fd94
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container hello-app
Status: Running
IP: 10.48.0.15
IPs: <none>
Controlled By: ReplicaSet/helloworld-gke-7fd977fd94
Containers:
hello-app:
Container ID: docker://389151ed6...
Image: gcr.io/tuition-h...
Image ID: docker-pullable://gcr.io/...
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 27 Apr 2020 18:28:00 +0200
Finished: Mon, 27 Apr 2020 18:28:02 +0200
Ready: False
Restart Count: 19
Requests:
cpu: 100m
Environment:
PORT: 8080
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqpm7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sqpm7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqpm7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m17s (x320 over 74m) kubelet, gke-helloworld-gke-default-pool-... Back-off restarting failed container
I am trying to install Fluentd with Elasticsearch and Kibana using the Bitnami Helm chart.
I am following the article mentioned below:
Integrate Logging Kubernetes Kibana ElasticSearch Fluentd
But when I deploy Elasticsearch, its pod goes into a Terminating or Back-off state.
I have been stuck on this for 3 days; any help is appreciated.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 41m (x2 over 41m) default-scheduler error while running "VolumeBinding" filter plugin for pod "elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 41m default-scheduler Successfully assigned default/elasticsearch-master-0 to minikube
Normal Pulling 41m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 41m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 41m kubelet, minikube Created container sysctl
Normal Started 41m kubelet, minikube Started container sysctl
Normal Pulling 41m kubelet, minikube Pulling image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Pulled 39m kubelet, minikube Successfully pulled image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6"
Normal Created 39m kubelet, minikube Created container chown
Normal Started 39m kubelet, minikube Started container chown
Normal Created 38m kubelet, minikube Created container elasticsearch
Normal Started 38m kubelet, minikube Started container elasticsearch
Warning Unhealthy 38m kubelet, minikube Readiness probe failed: Get http://172.17.0.7:9200/_cluster/health?local=true: dial tcp 172.17.0.7:9200: connect: connection refused
Normal Pulled 38m (x2 over 38m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Warning FailedMount 32m kubelet, minikube MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
Normal SandboxChanged 32m kubelet, minikube Pod sandbox changed, it will be killed and re-created.
Normal Pulling 32m kubelet, minikube Pulling image "busybox:latest"
Normal Pulled 32m kubelet, minikube Successfully pulled image "busybox:latest"
Normal Created 32m kubelet, minikube Created container sysctl
Normal Started 32m kubelet, minikube Started container sysctl
Normal Pulled 32m kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m kubelet, minikube Created container chown
Normal Started 32m kubelet, minikube Started container chown
Normal Pulled 32m (x2 over 32m) kubelet, minikube Container image "docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.6" already present on machine
Normal Created 32m (x2 over 32m) kubelet, minikube Created container elasticsearch
Normal Started 32m (x2 over 32m) kubelet, minikube Started container elasticsearch
Warning Unhealthy 32m kubelet, minikube Readiness probe failed: Get http://172.17.0.6:9200/_cluster/health?local=true: dial tcp 172.17.0.6:9200: connect: connection refused
Warning BackOff 32m (x2 over 32m) kubelet, minikube Back-off restarting failed container
The issue here is pod has unbound immediate PersistentVolumeClaims. You can set master.persistence.enabled to false when deploying with Helm. Alternatively, check whether a default storage class exists in the cluster; if it doesn't, create a storage class and make it the default.
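A sketch of both options (release, chart, and StorageClass names are placeholders):

# Option 1: disable persistence so no PVC is requested
helm install elasticsearch bitnami/elasticsearch --set master.persistence.enabled=false

# Option 2: check for a default StorageClass and, if none, mark one as default
kubectl get storageclass
kubectl patch storageclass standard \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'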
Short answer: it crashed. You can check the Pod status object for details such as the exit status and whether it was an OOMKill, and then look at the container logs to see if they show anything.
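For example (the pod name is a placeholder):

# Exit code and reason (e.g. OOMKilled) of the last terminated container
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'

# Logs of the previous, crashed container instance
kubectl logs <pod-name> --previous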
When deploying an image to an AKS instance, the image pull from the ACR (Premium SKU) is very slow, even for "small" images around ~150 MB in size.
Both the AKS resource and the ACR resource are in the Canada East region.
Here is an example:
root@076fff2831b2:/tmp# kubectl describe pod application-service-59bcf96874-pvrmb
Name: application-service-59bcf96874-pvrmb
Namespace: default
Priority: 0
Node: aks-41067869-1/10.255.13.163
Start Time: Tue, 11 Feb 2020 18:15:53 -0500
Labels: app.kubernetes.io/instance=application-service
app.kubernetes.io/name=application-service
pod-template-hash=59bcf96874
Annotations: <none>
Status: Running
IP: 10.255.13.175
IPs: <none>
Controlled By: ReplicaSet/application-service-59bcf96874
Containers:
application-service:
Container ID: docker://0e86526a293d9055d482a09f043f0be68c594244fe4216f8fb190bc2caf6b65b
Image: myacr01.azurecr.io/microservices/application-service:0.0.6
Image ID: docker-pullable://myacr01.azurecr.io/microservices/application-service@sha256:cfbb3ffa7adc52da9cc0b8d7f78376076ea712025b59df8e406c559d369f4085
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 11 Feb 2020 18:35:00 -0500
Finished: Tue, 11 Feb 2020 18:35:00 -0500
Ready: False
Restart Count: 5
Liveness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PORT: 3000
undefined: undefined
Mounts:
/kvmnt from application-service-kv-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from application-service-token-9jk8j (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
application-service-kv-volume:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: azure/kv
FSType:
SecretRef: &LocalObjectReference{Name:kvcreds,}
ReadOnly: false
Options: map[keyvaultname:testIt2 keyvaultobjectnames:APPLICATION-SVC-SQLDB-CS;INGESTION-CONSUMER-EHB-CS;INGESTION-PRODUCER-EHB-CS keyvaultobjecttypes:secret;secret;secret tenantid:REMOVED usepodidentity:false usevmmanagedidentity:false]
application-service-token-9jk8j:
Type: Secret (a volume populated by a Secret)
SecretName: application-service-token-9jk8j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned default/application-service-59bcf96874-pvrmb to aks-41067869-1
Normal Pulling 20m kubelet, aks-41067869-1 Pulling image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Pulled 4m39s kubelet, aks-41067869-1 Successfully pulled image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Started 3m36s (x4 over 4m33s) kubelet, aks-41067869-1 Started container application-service
Warning BackOff 3m4s (x11 over 4m30s) kubelet, aks-41067869-1 Back-off restarting failed container
Normal Pulled 2m52s (x4 over 4m32s) kubelet, aks-41067869-1 Container image "myacr01.azurecr.io/microservices/application-service:0.0.6" already present on machine
Normal Created 2m51s (x5 over 4m33s) kubelet, aks-41067869-1 Created container application-service
Some details were modified/removed for privacy reasons.
However, the thing to note is the ~15 minutes needed to go from "Pulling" to "Pulled" for an image from the ACR.
This issue is occurring daily. The Azure Insights blade of the AKS instance shows a maximum of 26% node CPU and 14.32% node memory utilization over the last 7 days.
How can we go about troubleshooting this further to determine the possible causes of these delays?
Any help is greatly appreciated.
Thanks!
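A few starting points, hedged since the root cause isn't visible from the output above (registry and node names are taken from it):

# Sanity-check registry health and connectivity from a machine with the Azure CLI
az acr check-health --name myacr01

# Inspect event timing and the condition of the affected node
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe node aks-41067869-1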
I am new to containers and to GKE. I used to run my Node server app with npm run debug, and I am trying to do the same on GKE using the shell of my container. When I log into the shell of the myapp container and do this, I get:
> api_server#0.0.0 start /usr/src/app
> node src/
events.js:167
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8089
Normally I deal with this using something like killall -9 node, but when I do that I am kicked out of my shell and the container is restarted by Kubernetes. It seems node is already using the port:
netstat -tulpn | grep 8089
tcp 0 0 :::8089 :::* LISTEN 23/node
How can I start my server from the shell?
My config files:
Dockerfile:
FROM node:10-alpine
RUN apk add --update \
libc6-compat
WORKDIR /usr/src/app
COPY package*.json ./
COPY templates-mjml/ templates-mjml/
COPY public/ public/
COPY src/ src/
COPY data/ data/
COPY config/ config/
COPY migrations/ migrations/
ENV NODE_ENV 'development'
ENV PORT '8089'
RUN npm install --development
myapp.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 8089
    name: http
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/myproject-224713/firstapp:v4
        ports:
        - containerPort: 8089
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=myproject-224713:europe-west4:mydatabase=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        securityContext:
          runAsUser: 2
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
---
myrouter.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: myapp
      weight: 100
    websocketUpgrade: true
EDIT:
I got the following logs:
EDIT 2:
After adding a FeathersJS health service, I get the following output for describe:
Name: myapp-95df4dcd6-lptnq
Namespace: default
Node: gke-standard-cluster-1-default-pool-59600833-pcj3/10.164.0.3
Start Time: Wed, 02 Jan 2019 22:08:33 +0100
Labels: app=myapp
pod-template-hash=518908782
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myapp; cpu request for container cloudsql-proxy
sidecar.istio.io/status:
{"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 10.44.3.17
Controlled By: ReplicaSet/myapp-95df4dcd6
Init Containers:
istio-init:
Container ID: docker://768b2327c6cfa57b3d25a7029e52ce6a88dec6848e91dd7edcdf9074c91ff270
Image: gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8089,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:34 +0100
Finished: Wed, 02 Jan 2019 22:08:35 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
myapp:
Container ID: docker://5566a3e8242ec6755dc2f26872cfb024fab42d5f64aadc3db1258fcb834f8418
Image: gcr.io/myproject-224713/firstapp:v4
Image ID: docker-pullable://gcr.io/myproject-224713/firstapp@sha256:0cbd4fae0b32fa0da5a8e6eb56cb9b86767568d243d4e01b22d332d568717f41
Port: 8089/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Jan 2019 22:09:19 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:35 +0100
Finished: Wed, 02 Jan 2019 22:09:19 +0100
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Liveness: http-get http://:8089/health delay=15s timeout=20s period=10s #success=1 #failure=3
Readiness: http-get http://:8089/health delay=5s timeout=5s period=10s #success=1 #failure=3
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
cloudsql-proxy:
Container ID: docker://414799a0699abe38c9759f82a77e1a3e06123714576d6d57390eeb07611f9a63
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject-224713:europe-west4:osm=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
istio-proxy:
Container ID: docker://898bc95c6f8bde18814ef01ce499820d545d7ea2d8bf494b0308f06ab419041e
Image: gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846f9b7de6507b54f5f0d0171430fcd3fb6f5132dc
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
myapp
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
default-token-9vtz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9vtz5
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned myapp-95df4dcd6-lptnq to gke-standard-cluster-1-default-pool-59600833-pcj3
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-envoy"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "default-token-9vtz5"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-certs"
Normal Pulled 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" already present on machine
Normal Created 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" already present on machine
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Warning Unhealthy 31s (x4 over 61s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 22s (x2 over 66s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/myproject-224713/firstapp:v4" already present on machine
Warning Unhealthy 22s (x3 over 42s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 22s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Killing container with id docker://myapp:Container failed liveness probe.. Container will be killed and recreated.
This is just how Kubernetes works: as long as your pod has processes running, it remains 'up'. The moment you kill one of its processes, Kubernetes restarts the pod because it thinks it crashed or something went wrong.
If you really want to debug with npm run debug, consider either:
Creating a container whose Dockerfile ends with a CMD or ENTRYPOINT of npm run debug, then running it using a Deployment definition in Kubernetes (see the Dockerfile sketch after the YAML below).
Overriding the command of the myapp container in your deployment definition with something like:
spec:
  containers:
  - name: myapp
    image: gcr.io/myproject-224713/firstapp:v4
    ports:
    - containerPort: 8089
    command: ["npm", "run", "debug"]
    env:
    - name: POSTGRES_DB_HOST
      value: 127.0.0.1:5432
    - name: POSTGRES_DB_USER
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: POSTGRES_DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
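For the first option, the change would look roughly like this (a sketch based on the Dockerfile from the question; it assumes package.json defines a debug script):

FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --development
COPY . .
ENV PORT '8089'
# The debugger becomes the container's main process, so Kubernetes keeps it alive
CMD ["npm", "run", "debug"]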
I created a simple Docker image from a "Hello World" Java application.
This is my Dockerfile:
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
I pushed the image (java-app) to Azure Container Registry.
$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app
I want to deploy it:
amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1
deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
service "dockerproject" exposed
and check the pods; the pod has crashed:
amhg$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dockerproject-b6799d879-pt5rx 0/1 CrashLoopBackOff 8 19m
Is there a way to "test"/run the image from the central registry? How come it crashes?
Here is the output of kubectl describe pod:
amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h
Name: dockerproject-64fbf7649-spc7h
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 11:53:58 +0200
Labels: pod-template-hash=209693205
run=dockerproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
Status: Running
IP: 10.244.0.38
Controlled By: ReplicaSet/dockerproject-64fbf7649
Containers:
dockerproject:
Container ID: docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
Image: acontainerregistry.azurecr.io/java-app:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 43m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
Normal Created 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Started container
Warning FailedSync 8m (x161 over 43m) kubelet, aks-nodepool1-39744669-0 Error syncing pod
Warning BackOff 3m (x184 over 43m) kubelet, aks-nodepool1-39744669-0 Back-off restarting failed container
When you run an application in a Pod, Kubernetes expects that it will keep working all the time as a daemon, until you stop it somehow.
In your details about the pod I see this:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
It means that your application exited with code 0 (which means "all is OK") right after starting. So the image was successfully downloaded (the registry is OK) and run, but the application exited immediately.
That's why Kubernetes keeps trying to restart the pod.
The only thing I can suggest: find the reason why the application stops, and fix it.
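For example, here is a HelloWorld that keeps running as a daemon: a minimal HTTP server on port 80 (matching the kubectl expose --port=80 above), using the JDK's built-in com.sun.net.httpserver. A sketch only; any long-running loop would have the same effect:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(80), 0);
        // Answer every request with "Hello World" instead of printing once and exiting
        server.createContext("/", exchange -> {
            byte[] body = "Hello World".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // the server thread is non-daemon, so the JVM stays alive
    }
}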