Kubernetes Node 14 pod restarts and terminates with exit code 0 - node.js

I have an Angular Universal application with the following Dockerfile:
FROM node:14-alpine
WORKDIR /app
COPY package.json /app
COPY dist/webapp /app/dist/webapp
ENV NODE_ENV "production"
ENV PORT 80
EXPOSE 80
CMD ["npm", "run", "serve:ssr"]
I can deploy it to a Kubernetes cluster just fine, but it keeps getting restarted every 10 minutes or so:
NAME                      READY   STATUS    RESTARTS   AGE
api-xxxxxxxxx-xxxxx       1/1     Running   0          48m
webapp-xxxxxxxxxx-xxxxx   1/1     Running   232        5d19h
Pod logs are clean and when I describe the pod I just see:
Last State:   Terminated
  Reason:     Completed
  Exit Code:  0
  Started:    Tue, 22 Sep 2020 15:58:27 -0300
  Finished:   Tue, 22 Sep 2020 16:20:31 -0300
Events:
  Type    Reason   Age                      From                           Message
  ----    ------   ----                     ----                           -------
  Normal  Created  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Created container webapp
  Normal  Started  3m31s (x233 over 5d19h)  kubelet, pool-xxxxxxxxx-xxxxx  Started container webapp
  Normal  Pulled   3m31s (x232 over 5d18h)  kubelet, pool-xxxxxxxxx-xxxxx  Container image "registry.gitlab.com/..." already present on machine
This is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $CI_PROJECT_NAME
  namespace: $KUBE_NAMESPACE
  labels:
    app: webapp
    tier: frontend
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  selector:
    matchLabels:
      app: webapp
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: $CI_PROJECT_NAME
          image: $IMAGE_TAG
          ports:
            - containerPort: 80
How can I tell the reason it keeps restarting? Thanks!
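Since the container exits with code 0 and the logs look clean, a few standard kubectl commands can surface what is driving the restarts. A minimal sketch, reusing the placeholder pod name from the output above:

# Logs of the previous (terminated) container, not the current one
kubectl logs webapp-xxxxxxxxxx-xxxxx --previous
# Cluster events around the restarts (evictions, OOM kills, probe failures)
kubectl get events --sort-by=.metadata.creationTimestamp
# Full pod state, including Last State and restart counts
kubectl describe pod webapp-xxxxxxxxxx-xxxxx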

Related

Nextjs app not running on port after deploying to VM (docker, kubernetes)

I'm trying to spin up a Next.js application by creating a Docker image and running it on a VM with Kubernetes.
After deployment I can see my Next.js pod running fine, and the logs even claim the server has started. However, I can't access the site via a browser, nor curl the URL.
I have tried a few things:
Changing from port 3000 to 5000, to confirm port 3000 is not blocked for whatever reason.
Checking the result of lsof -i :5000; I see nothing showing up.
Checking the result of curl http://localhost:5000; I got curl: (7) Failed to connect to localhost port 5000: Connection refused.
Verified my Next.js pod's log, as below:
> begin-next@0.1.0 dev /usr/src/app
> next dev
ready - started server on 0.0.0.0:5000, url: http://localhost:5000
wait - compiling...
event - compiled client and server successfully in 2.2s (171 modules)
Attention: Next.js now collects completely anonymous telemetry regarding usage.
This information is used to shape Next.js' roadmap and prioritize features.
You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
https://nextjs.org/telemetry
Code reference
Below are the Dockerfile and deployment.yaml used.
Dockerfile
FROM node:12
ENV PORT 5000
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Installing dependencies
COPY package*.json /usr/src/app/
RUN npm install
# Copying source files
COPY . /usr/src/app
# Building app
RUN npm run build
EXPOSE 5000
# Running the app
CMD "npm" "run" "dev"
# Running the app
CMD [ "npm", "start" ]
Deployment Yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-web
spec:
  selector:
    matchLabels:
      app: nextjs-web
  replicas: 1
  template:
    metadata:
      labels:
        app: nextjs-web
    spec:
      containers:
        - name: nextjs
          image: maskedPath/dockerimages/nextjs:1234
          ports:
            - containerPort: 5000
      imagePullSecrets:
        - name: acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nextjs-web
spec:
  selector:
    app: nextjs-web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
Other details
Describe pod
Name:           nextjs-web-c9cfb675d-6thbx
Namespace:      default
Priority:       0
Node:           maskedData
Start Time:     Thu, 21 Apr 2022 15:23:56 +0000
Labels:         app=nextjs-web
                pod-template-hash=c9cfb675d
Annotations:    <none>
Status:         Running
IP:             maskedData
IPs:
  IP:           maskedData
Controlled By:  ReplicaSet/nextjs-web-c9cfb675d
Containers:
  nextjs:
    Container ID:   containerd://c7bbad112c6915af0aea2a7a7e1a7f42a87840e7e22ca1fe49afdc0000000000
    Image:          maskedData/dockerimages/nextjs:1234
    Image ID:       maskedData/dockerimages/nextjs@sha256:000000000036f0e0728c128322f95a9585b6724332d3f853e7f240a8df2b0ad3
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 21 Apr 2022 15:24:09 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n72pn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-n72pn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n72pn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
Finally got it to work. The catch is to expose the app via a Service of type LoadBalancer, so the app becomes accessible to the internet world.
The command below can be used to expose it:
kubectl expose deployment nextjs-web --type=LoadBalancer --name=nextjs-web
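For reference, the declarative equivalent (a sketch) is to set type: LoadBalancer on the Service manifest shown above:

apiVersion: v1
kind: Service
metadata:
  name: nextjs-web
spec:
  type: LoadBalancer   # provisions an external IP instead of the default ClusterIP
  selector:
    app: nextjs-web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000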

How do I keep a Kubernetes pod running with no http endpoint? (prevent back-off)

I have a use case where (at least for now) I need a k8s pod to stay up without an HTTP or TCP endpoint. I tried the following deployment...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-deployment
  labels:
    app: node-app
spec:
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-server
          image: node:17-alpine
          exec:
            command: ["node","-v"]
          livenessProbe:
            exec:
              command: ["node","-v"]
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            exec:
              command: ["node","-v"]
            initialDelaySeconds: 5
            periodSeconds: 5
But after a while it stops the pod with the following error...
Warning BackOff 5s (x10 over 73s) kubelet, minikube Back-off restarting failed container
And the pod status shows...
node-deployment-58445f5649-z6lkz 0/1 CrashLoopBackOff 5 (103s ago) 4m47s
I know it is running because I see the version in kubectl logs <node-name>. How do I get the image to stay up with no long-running process? Is this even possible?
The container's process simply exited; the same happens if you run node -v in your terminal.
Not sure if the use case you provided is your real use case (I can't see any reason to run the Node version as an application), so...
If you really want to print the version, you can change the command to:
["watch","-n1","node","-v"]
so it will print the Node version every second.
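If the goal is simply to keep the pod up with no real workload, a common alternative (a sketch, not from the original answer) is to give the container a command that never exits:

spec:
  containers:
    - name: node-server
      image: node:17-alpine
      # tail -f /dev/null never terminates, so the container stays Running
      command: ["tail", "-f", "/dev/null"]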

Setting up a React application and NodeJS backend in Kubernetes?

I am trying to set up a sample React application wired to a NodeJS backend as two pods in Kubernetes. This is (mostly) the default CRA and a NodeJS application with Express, i.e. npx create-react-app my_app.
Both applications run fine locally through yarn start and node app.js respectively. The React application uses a proxy defined in package.json to communicate with the NodeJS backend.
React package.json
...
"proxy": "http://localhost:3001/"
...
React Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
NodeJS Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD [ "node", "app.js" ]
ui-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-ui
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-ui
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-ui
    spec:
      containers:
        - name: sample-ui
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
server-deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-server
  namespace: my_namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my_namespace
      component: sample-server
  template:
    metadata:
      labels:
        app: my_namespace
        component: sample-server
    spec:
      containers:
        - name: sample-server
          image: xxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
ui-service
apiVersion: v1
kind: Service
metadata:
  name: sample-ui
  namespace: my_namespace
  labels: {app: sample-ui}
spec:
  type: LoadBalancer
  selector:
    component: sample-ui
  ports:
    - name: listen
      protocol: TCP
      port: 3000
server-service
apiVersion: v1
kind: Service
metadata:
  name: sample-server
  namespace: my_namespace
  labels: {app: sample-server}
spec:
  selector:
    component: sample-server
  ports:
    - name: listen
      protocol: TCP
      port: 3001
Both services run fine on my system.
get svc
sample-server   ClusterIP      10.19.255.171   <none>          3001/TCP         26m
sample-ui       LoadBalancer   10.19.242.42    34.82.235.125   3000:31074/TCP   26m
However, my deployment for the CRA crashes multiple times despite indicating it is still running.
get pods
sample-server-598776c5fc-55jsz   1/1   Running   0   42m
sample-ui-c75ccb746-qppk2        1/1   Running   4   2m38s
I suspect that my React Dockerfile is improperly configured, but I'm not sure how to write it so it works with a NodeJS backend in Kubernetes.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
b) How can I setup my docker services and pods such that they communicate?
You will have to use some API gateway in front of your server, or you can use Ambassador for Kubernetes.
Then you can get your client connected to the server.
a) How can I setup my Dockerfile for my CRA such that it will run in a pod?
The React Dockerfile looks good; you need to check why the pod's container is failing.
Use kubectl describe pod <POD name>, or dig further into the logs with the command kubectl logs <pod name>.
How can I setup my docker services and pods such that they communicate?
For this you are on the right track: in Kubernetes, the server and frontend communicate using the Service name.
This might seem weird at first, but Kubernetes DNS takes care of it.
Say you have two Services, a frontend (sample-ui) and a backend (sample-server): sample-ui sends its requests to sample-server by name, and they get connected that way.
You can also try this by going inside the sample-ui pod (container):
kubectl exec -it sample-ui-c75ccb746-qppk2 -- /bin/bash
Now you are inside the sample-ui container; let's send a request to sample-server from there.
If curl doesn't exist, you can install it using apk add curl, apt-get install curl, or yum install curl.
curl http://sample-server:3001
Magic: you should see a response from the server.
So your whole flow goes like this:
user comes in through the frontend LoadBalancer service > which serves sample-ui > and inside the Kubernetes cluster, sample-ui calls sample-server.
Every Service you create inside K8s is accessible by its name.
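As a quick illustration (a sketch; <namespace> is a placeholder, and the Service names are the ones defined above), both the short name and the fully qualified cluster DNS name resolve from inside a pod:

# Same namespace: the short Service name resolves
curl http://sample-server:3001
# Fully qualified form, which also works across namespaces
curl http://sample-server.<namespace>.svc.cluster.local:3001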

GKE not able to pull image from artifactory

I am using GKE, GitLab and JFrog to achieve CI and CD. All my steps work, but my deployment to GKE fails as it's not able to pull my image. The image does exist. I have included the deployment YAML, the gitlab yaml snippet, and the error message below.
Below is my deployment file; I have hardcoded the image to make clear that the image does exist.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: go
  name: hello-world-go
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      app: go
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: go
    spec:
      containers:
        - # image: "<IMAGE_NAME>"
          image: cicd-docker-local.jfrog.io/stage_proj:56914646
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
          name: go
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
      imagePullSecrets:
        - name: my.secret
Below is my gitlab yaml file snippet
deploy to dev:
  stage: Dev-Deploy
  image: dtzar/helm-kubectl
  script:
    - kubectl config set-cluster mycluster --server="$KUBE_URL" --insecure-skip-tls-verify=true
    - kubectl config set-credentials admin --username="$KUBE_USER" --password="$KUBE_PASSWORD"
    - kubectl config set-context default --cluster=mycluster --user=admin
    - kubectl config use-context default; sleep 10
    - kubectl delete secret my.secret
    - kubectl create secret docker-registry my.secret --docker-server=$ARTIFACTORY_DOCKER_REPOSITORY --docker-username=$ARTIFACTORY_USER --docker-password=$ARTIFACTORY_PASS --docker-email="abc@gmail.com"
    - echo ${STAGE_CONTAINER_IMAGE}
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -w "deployment/hello-world-go"
    # - kubectl rollout status -f deployment.yaml
    - kubectl get all,ing -l app='hello-world-go'
  only:
    - master
I get an error like the one below in GKE.
Cannot pull image 'cicd-docker-local.jfrog.io/stage_proj:56914646' from the registry.
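"Cannot pull image" usually means the pull secret is missing, in the wrong namespace, or pointing at a different registry than the image. A couple of standard checks (a sketch, using the names from the manifests above):

# The pod events show the exact pull error from the kubelet
kubectl describe pods -l app=go
# Confirm the secret exists and targets cicd-docker-local.jfrog.io
kubectl get secret my.secret -o yaml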

applying changes to pod code source realtime - npm

I have a ReactJS app running in my pod, and I have mounted the source code from the host machine into the pod. It works fine, and when I change my code on the host machine the pod's source code also changes, but the running site does not pick up the changes. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
        - name: webapp
          image: xxxxxx
          command:
            - npm
          args:
            - run
            - dev
          env:
            - name: environment
              value: dev
            - name: AUTHOR
              value: webapp
          ports:
            - containerPort: 3000
          volumeMounts:
            - mountPath: /code
              name: code
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: code
          hostPath:
            path: /hosthome/xxxx/development/react-app/src
And I know for a fact that npm is not watching my changes; how can I resolve that in the pods?
Basically, you need to reload your application every time you change your code, and your pods don't reload or restart when you change the code under the /code directory. You will have to re-create your pod. Since you are using a Deployment, you can either:
kubectl delete pod <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
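On kubectl 1.15 and later there is a built-in shorthand that performs essentially the same annotation patch as above:

kubectl rollout restart deployment webapp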
What Rico has mentioned is correct: you need to patch or rebuild with every change. But you can avoid that by running minikube without a VM driver, which lets you mount a host path into the pod directly; note this only works on Linux. Here is the command to run minikube without a VM driver. Hope this will help:
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1
