Error when trying to run UI (app) on Kubernetes? - node.js

Basically, when I try to run my app (UI) on Kubernetes using kubectl, the pod fails with an error.
Logs of the pod:
> wootz#0.1.0 start /usr/src/app
> node scripts/start.js
internal/modules/cjs/loader.js:626
throw err;
^
Error: Cannot find module 'react-dev-utils/chalk'
Require stack:
- /usr/src/app/scripts/start.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:623:15)
at Function.Module._load (internal/modules/cjs/loader.js:527:27)
at Module.require (internal/modules/cjs/loader.js:681:19)
at require (internal/modules/cjs/helpers.js:16:16)
at Object.<anonymous> (/usr/src/app/scripts/start.js:19:15)
at Module._compile (internal/modules/cjs/loader.js:774:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:785:10)
at Module.load (internal/modules/cjs/loader.js:641:32)
at Function.Module._load (internal/modules/cjs/loader.js:556:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:837:10) {
code: 'MODULE_NOT_FOUND',
requireStack: [ '/usr/src/app/scripts/start.js' ]
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! wootz#0.1.0 start: `node scripts/start.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the wootz#0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-06-17T15_56_02_649Z-debug.log
uipersistantvolume
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ui-initdb-pv-volume
  labels:
    type: local
    app: ui
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/client"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ui-initdb-pv-claim-one
  labels:
    app: ui
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
uipersistantvolumetwo
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ui-initdb-pv-volume-two
  labels:
    type: local
    app: ui
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/client"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ui-initdb-pv-claim-two
  labels:
    app: ui
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
ui.yaml
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: ui
spec:
  ports:
    - name: myport
      port: 80
      targetPort: 3000
  selector:
    app: ui
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  labels:
    app: ui
spec:
  selector:
    matchLabels:
      app: ui
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ui
        tier: frontend
    spec:
      containers:
        - image: suji165475/devops-sample:updatedclientdockerfile
          name: ui
          ports:
            - containerPort: 80
              name: myport
          volumeMounts:
            - name: ui-persistent-storage-one
              mountPath: /usr/src/app
            - name: ui-persistent-storage-two
              mountPath: /usr/src/app/node_modules
      volumes:
        - name: ui-persistent-storage-one
          persistentVolumeClaim:
            claimName: ui-initdb-pv-claim-one
        - name: ui-persistent-storage-two
          persistentVolumeClaim:
            claimName: ui-initdb-pv-claim-two
The image used in ui.yaml was built with the following Dockerfile:
FROM node:12.4.0-alpine
RUN mkdir -p usr/src/app
WORKDIR /usr/src/app
COPY package.json package.json
RUN npm install && npm cache clean --force
RUN npm install -g webpack-cli
WORKDIR /usr/src/app
COPY . .
WORKDIR /usr/src/app
EXPOSE 3000
RUN npm run build
CMD [ "npm","start" ]
How can I solve the error Cannot find module 'react-dev-utils/chalk'? Is there anything missing from the Dockerfile?

Delete all of the volumes, persistent volumes, and persistent volume claims. Your code is in your image (you COPY . . to get it in) and you should run it from there.
Kubernetes is extremely ill-suited to be a live development environment. Notice here that you’re spending more YAML space on trying to inject your local source code into the container than everything else in your deployment combined; in a real production setup you’d also have to make sure your source code gets on to every single node (and assumes you even have access to the nodes; in many cloud-hosted environments you won’t).
I’d recommend developing your application normally — no Docker, no Kubernetes — and only once it works, worry about packaging it up and deploying it. Things like rolling zero-downtime restarts in Kubernetes are rather different from live code reloads in a development environment.
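Concretely, once the volume plumbing is gone, the Deployment from ui.yaml can be trimmed to something like the sketch below (a sketch, not a drop-in fix: it assumes the suji165475/devops-sample image already contains the built app and its node_modules, which the Dockerfile's COPY and npm install steps provide). The two PersistentVolumes and both PersistentVolumeClaims are simply deleted:

```yaml
# sketch: the same Deployment with all volumeMounts/volumes removed;
# the code baked into the image runs as-is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  labels:
    app: ui
spec:
  selector:
    matchLabels:
      app: ui
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ui
        tier: frontend
    spec:
      containers:
        - image: suji165475/devops-sample:updatedclientdockerfile
          name: ui
          ports:
            - containerPort: 3000   # the app listens on 3000 per the Dockerfile's EXPOSE
              name: myport
```

With the mounts removed, the Service's targetPort: 3000 lines up with the port the Node process actually listens on.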

Related

Containerized application not accessible from browser

I am using an Azure Kubernetes cluster with the Dockerfile below. The container is deployed successfully in a Pod.
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
EXPOSE 80
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
deployment yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: m-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: m-app
  template:
    metadata:
      labels:
        app: m-app
    spec:
      containers:
        - name: metadata-app
          image: >-
            <url>
          imagePullPolicy: Always
          env:
            - name: SECRET_USERNAME
              valueFrom:
                secretKeyRef:
                  name: dockersecret
                  key: username
            - name: SECRET_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dockersecret
                  key: password
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: m-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: m-app
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
I use the above yml file for the deployment and want to access the app via a private IP address. Applying it gives me the service m-app with an external private IP, but the app is not accessible there.
Then I tried NodePort, replacing the LoadBalancer snippet above with the following:
kind: Service
apiVersion: v1
metadata:
  name: m-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 31000
  selector:
    app: m-app
Again I cannot access the app from my browser using the node IP and port 31000.
Could someone please assist? I suspected an issue with the Dockerfile as well and tried a different Dockerfile, but no luck. (Please ignore any yml indentation issues.)
Finally the issue got fixed. I added the snippet below to the Dockerfile:
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
along with:
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
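Put together, the two snippets form one multi-stage Dockerfile: the node stage builds the React bundle, and the httpd stage serves the static files. This is a sketch of the combined file as described above; it assumes a local httpd.conf exists next to the Dockerfile whose DocumentRoot matches the WORKDIR, as in the answer:

```dockerfile
# stage 1: build the React app
FROM node:12 as build-stage
WORKDIR /app
COPY package.json package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build

# stage 2: serve the static build output with Apache httpd
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
```

The key change from the original Dockerfile is that the container now runs a web server on port 80 instead of `sleep infinity`, so the Service's targetPort: 80 actually has a process listening behind it.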

502 Bad gateway on Nodejs application deployed on Kubernetes cluster

I am deploying a Node.js application on Kubernetes. After deployment the pod is up and running, but when I try to access the application through the ingress it gives a 502 Bad Gateway error.
Dockerfile
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 3123
CMD [ "node", "index.js" ]
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "node-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "node-development"
  replicas: 1
  template:
    metadata:
      labels:
        app: "node-development"
    spec:
      containers:
        - name: "node-development"
          image: "xxx"
          imagePullPolicy: "Always"
          env:
            - name: "NODE_ENV"
              value: "development"
          ports:
            - containerPort: 47033
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "node-development-service"
  namespace: "development"
  labels:
    app: "node-development"
spec:
  ports:
    - port: 47033
      targetPort: 3123
  selector:
    app: "node-development"
ingress.yaml
---
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  rules:
    - host: "xxxx"
      http:
        paths:
          - backend:
              service:
                name: "node-development"
                port:
                  number: 47033
            path: "/node-development/(.*)"
            pathType: "ImplementationSpecific"
Neither through the ingress nor via the pod's cluster IP can I access the application; it throws 502 Bad Gateway (nginx).
The issue got resolved: I am using SSL in my application, so it was not redirecting with the given ingress URL.
I needed to add the annotations below to the ingress.yaml file:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
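In context, these annotations sit alongside the existing rewrite-target annotation under metadata in ingress.yaml, roughly as sketched below. One caveat worth knowing from the ingress-nginx documentation: the ssl-passthrough annotation only takes effect if the ingress controller itself was started with the --enable-ssl-passthrough flag.

```yaml
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
  name: "node-development-ingress"
  namespace: "development"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # pass TLS through to the backend instead of terminating at nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # tell nginx to speak HTTPS to the backend pods
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```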

Unable to access local certificate on kubernetes cluster

I have a node application running in a container that works well when I run it locally on docker.
When I try to run it in my k8s cluster, I get the following error:
kubectl -n some-namespace logs --follow my-container-5d7dfbf876-86kv7
> code#1.0.0 my-container /src
> node src/app.js
Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1486:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:921:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:695:12) {
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'
}
This is strange, as the only change is that I run the container with
command: ["npm", "run", "consumer"]
I have also tried adding
npm config set strict-ssl false
to my Dockerfile, as per the recommendation here: npm install error - unable to get local issuer certificate, but it doesn't seem to help.
So it should still be trying to authenticate with the certificate.
I would appreciate any pointers on this.
Here is a copy of my .yaml file for completeness.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: label
  name: label
  namespace: some-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      name: label
  template:
    metadata:
      labels:
        name: label
    spec:
      containers:
        - name: label
          image: some-registry:latest
          resources:
            limits:
              memory: 7000Mi
              cpu: '3'
          ports:
            - containerPort: 80
          command: ["npm", "run", "application"]
          env:
            - name: "DATABASE_URL"
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: DBUri
            - name: "DEBUG"
              value: "*,-babel,-mongo:*,mongo:queries,-http-proxy-agent,-https-proxy-agent,-proxy-agent,-superagent,-superagent-proxy,-sinek*,-kafka*"
            - name: "ENV"
              value: "production"
            - name: "NODE_ENV"
              value: "production"
            - name: "SERVICE"
              value: "consumer"
          volumeMounts:
            - name: certs
              mountPath: /etc/secrets
              readOnly: true
      volumes:
        - name: certs
          secret:
            secretName: certs
            items:
              - key: certificate
                path: certificate
              - key: key
                path: key
It looks like the pod is not mounting the secrets in the right place. Make sure that the container's volumeMounts[].mountPath points to the path the application actually reads its certificates from.
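If the certificate is mounted but Node.js still rejects the issuer, one common fix, rather than disabling strict-ssl, is to point Node at the mounted CA bundle via the NODE_EXTRA_CA_CERTS environment variable, which Node.js supports natively. A sketch, assuming the file mounted at /etc/secrets/certificate is the CA chain the server presents:

```yaml
# hypothetical addition to the container's env list in the Deployment above:
# Node.js appends this file to its built-in trusted root CAs at startup
env:
  - name: NODE_EXTRA_CA_CERTS
    value: /etc/secrets/certificate
```

This keeps TLS verification enabled, unlike `npm config set strict-ssl false`, which only affects npm itself and not the running application.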

My Node.js app is failing to start as part of Kubernetes Deployment

Hi All,
My Node.js app fails while I try to deploy it in Kubernetes using a Docker image. The container in the Kubernetes pod gets created but is immediately terminated after executing the command "npm start". Here is the content of my Dockerfile:
FROM node:13.12.0-alpine
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY nodejs/package.json ./
RUN npm install
RUN npm update
COPY nodejs/ .
COPY . ./
CMD ["npm", "start"]
Here is the content of the yaml file:
kind: Service
apiVersion: v1
metadata:
  name: nodejs-service
spec:
  type: NodePort
  selector:
    app: nodejs
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30016
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nodejs
  name: nodejs-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
        - image: 336319716199.dkr.ecr.ap-south-1.amazonaws.com/ddp/nodejs-frontend:106
          name: frontend-nodejs
          command: ["npm", "start"]
          ports:
            - containerPort: 3000
Any suggestion will be highly appreciated! Thanks in advance!

My container is in Warning BackOff or CrashLoopBackOff

I am creating several microservices on Azure (Kubernetes) and I have the following problem: if I do not put this command inside the container's YAML, the pod shows BackOff or CrashLoopBackOff and never gets out of that state.
The command I place is this:
command: [ "sleep" ]
args: [ "infinity" ]
This is the error shown if I do not put in that code:
Warning BackOff 7s (x4 over 37s) kubelet, aks-agentpool-29153703-2 Back-off restarting the failed container
My Dockerfile for one of these microservices:
FROM node:10
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm start EXPOSE 6060
My YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: permis-deployment
  labels:
    app: permis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: permis
  template:
    metadata:
      labels:
        app: permis
    spec:
      containers:
        - name: permis
          image: myacr.azurecr.io/permission-container:latest
          command: [ "sleep" ]
          args: [ "infinity" ]
          ports:
            - containerPort: 6060
---
apiVersion: v1
kind: Service
metadata:
  name: permis-service
spec:
  selector:
    app: permis
  ports:
    - protocol: TCP
      port: 6060
      targetPort: 6060
  type: LoadBalancer
Can you tell me what I am doing wrong or what is wrong?
Thank you!
If your app runs fine, change the Dockerfile to the one below and rebuild the image. It should work:
FROM node:10
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 6060
CMD ["npm", "start"]
I suggest you do the following:
containers:
  - args:
      - -ec
      - sleep 1000
    command:
      - /bin/sh
Invoking sleep directly didn't work for me.
