Applications containerized with docker will not communicate in a single Kubernetes pod - node.js

My application consists of a UI built in React, an API, an MQTT broker, and a webhook that monitors changes in a database, all built with Node.
The database has not yet been made into a volume and is currently running on my local computer.
Here is my deployment.yml file:
```
# Defining Service
apiVersion: v1
kind: Service
metadata:
  name: factoryforge
spec:
  selector:
    app: factoryforge
  ports:
    - port: 80
      name: api
      targetPort: 3000
    - port: 81
      name: mqtt
      targetPort: 3001
    - port: 82
      name: dbmonitor
      targetPort: 3002
    - port: 83
      name: ui
      targetPort: 3003
  type: LoadBalancer
---
# Defining multi-container pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: factoryforge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factoryforge
  template:
    metadata:
      labels:
        app: factoryforge
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
        - name: mqtt
          image: blueridgetest/mqtt
          ports:
            - containerPort: 3001
        - name: dbmonitor
          image: blueridgetest/dbmonitor
          ports:
            - containerPort: 3002
        - name: ui
          image: blueridgetest/ui
          ports:
            - containerPort: 3003
```
...and the Dockerfiles for the four services:
UI
```
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm i --f
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3000
CMD [ "npm", "start" ]
```
API
```
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3002
EXPOSE 8000
CMD [ "node", "API.js" ]
```
MQTT
```
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 8884
CMD [ "node", "MQTTBroker.js" ]
```
DBMonitor
```
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3001
CMD [ "node", "index.js" ]
```
Any help would be greatly appreciated. Thanks!

I would say that in a standard Kubernetes setup you should run these as four different Pods, created by four different Deployments. Then create four different Services, and you will see that the components can communicate with one another.
With this, each pod is simpler to troubleshoot, and you can scale each component independently.
Maybe I missed something?
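To illustrate, here is a sketch of one of the four Deployment/Service pairs, using the api image and ports from the question; the other three components would follow the same pattern:
```
# Sketch only: one Deployment + one Service per component, shown here for the api.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80          # port other pods use: http://api:80
      targetPort: 3000  # port the container listens on
```
Inside the cluster the other components can then reach the API through the Service name (e.g. http://api:80), and whichever components must be reachable from outside the cluster can keep type: LoadBalancer on their own Service.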

Related

Deployment Kubernetes in Azure - Most NODE_OPTIONs are not supported in packaged apps

I'm trying to deploy an image from Azure Container Registry into a Kubernetes cluster in Azure.
When I run the YAML file I get this error:
[72320:0209/093322.154:ERROR:node_bindings.cc(289)] Most NODE_OPTIONs are not supported in packaged apps. See documentation for more details.
The image is an application developed with Node & Express; it's just a "Hello World".
Here is my Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sharepointbackend-dev
  namespace: dev
  labels:
    app: sharepointbackend-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sharepointbackend-dev
  template:
    metadata:
      labels:
        app: sharepointbackend-dev
    spec:
      containers:
        - name: samplecontainer
          image: sharepointbackendcontainerregister.azurecr.io/sharepointbackend:dev
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
And my Dockerfile:
FROM node:lts-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "tsconfig.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install\
&& npm install typescript -g
COPY . .
RUN tsc
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["node", "dist/app.js"]
Locally, it runs successfully: I build the image in Docker Desktop, run it, and can open the app in a browser.
Any help?
Thanks in advance

Kubernetes Rollout Drop Old Pods When New Pods Are Not Fully Ready

I'm using the kubectl rollout command to update my deployment. But since my project is a NodeJS project, npm run start takes some time (a few seconds) before the application is actually running, while Kubernetes drops the old pods immediately after npm run start is executed.
For example,
kubectl logs -f my-app
> my app start
> nest start
Kubernetes will drop the old pods at this point. However, it takes another 10 seconds until
Application is running on: http://[::1]:5274
which means my service is actually up.
I'd like to know whether there is a way to change this, e.g. to wait some more time before Kubernetes drops the old pods.
My Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
RUN export NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
COPY . .
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]
Spec for my kubernetes yaml file:
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: image
          imagePullPolicy: Always
          resources:
            limits:
              memory: "8Gi"
              cpu: "10"
            requests:
              memory: "8Gi"
              cpu: "10"
          livenessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            initialDelaySeconds: 180
            periodSeconds: 80
            timeoutSeconds: 20
            failureThreshold: 2
          ports:
            - containerPort: 5274
            - containerPort: 5900
Use a startup probe on your container (https://docs.openshift.com/container-platform/4.11/applications/application-health.html). Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.
During a rollout, the deployment controller counts non-ready pods as "unavailable" for things like the maxUnavailable setting of the deployment, so it won't keep shutting down working pods until the new pods are ready for traffic (https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html).
As an additional benefit, services won't route traffic to non-ready pods, so they won't receive any traffic until the containers have passed their startup probes.
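A minimal sketch of how that could look in the container spec above, reusing the /api/Health endpoint and port 5274 from the question; the probe timings are assumptions and should be tuned to the app's real startup time:
```
containers:
  - name: app
    image: image
    startupProbe:
      httpGet:
        path: /api/Health   # same endpoint the livenessProbe already uses
        port: 5274
      periodSeconds: 10
      failureThreshold: 30  # allows up to ~300s of startup before the container is restarted
    readinessProbe:
      httpGet:
        path: /api/Health
        port: 5274
      periodSeconds: 10
```
Until the startup probe succeeds the pod is not ready, so the rollout keeps the old pods running and the Service sends no traffic to the new one.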

Containerized application not accessible from browser

I am using an Azure Kubernetes cluster with the Dockerfile below. The container is deployed successfully in a Pod.
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
EXPOSE 80
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
deployment yml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: m-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: m-app
  template:
    metadata:
      labels:
        app: m-app
    spec:
      containers:
        - name: metadata-app
          image: >-
            <url>
          imagePullPolicy: Always
          env:
            - name: SECRET_USERNAME
              valueFrom:
                secretKeyRef:
                  name: dockersecret
                  key: username
            - name: SECRET_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dockersecret
                  key: password
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: m-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: m-app
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
I use the above yml file for the deployment and want to access the app via a private IP address. Running it gives me the service m-app with an external (private) IP, but it is not accessible.
Then I tried NodePort, replacing the LoadBalancer snippet above with the following:
kind: Service
apiVersion: v1
metadata:
  name: m-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 31000
  selector:
    app: m-app
Again I cannot access the app from my browser with :
Could someone please assist? I suspected an issue with the Dockerfile as well and tried a different Dockerfile, but no luck. (Please ignore yml indentations if any.)
Finally the issue got fixed. I added the snippet below to the Dockerfile:
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
along with:
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build
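Put together, the resulting multi-stage Dockerfile would read roughly as follows (the EXPOSE line is carried over from the original file; dropping the sleep CMD and relying on the httpd:alpine default command is my assumption):
```
# Build stage: compile the React app
FROM node:12 as build-stage
WORKDIR /app
COPY package.json /app/
COPY package-lock.json /app/
RUN npm install
COPY ./ /app/
ARG URI
ENV REACT_APP_URI=$URI
RUN npm run build

# Serve stage: host the static build with Apache httpd
FROM httpd:alpine
WORKDIR /var/www/html
COPY ./httpd.conf /usr/local/apache2/conf/httpd.conf
COPY --from=build-stage /app/build/ .
EXPOSE 80
# No CMD needed: the httpd:alpine base image already runs httpd in the foreground
```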

How to pass variables to Reactjs through kubernetes

I have a simple ReactJS application that I am going to deploy to my Kubernetes cluster.
The Dockerfile for the ReactJS application looks like this:
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And I want to pass two environment variables through kubernetes as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: 19950818/k8s:frontend
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          env:
            - name: REACT_APP_BACKEND_URL
              valueFrom:
                configMapKeyRef:
                  name: backend-configs
                  key: backend.url
            - name: REACT_APP_BACKEND_PORT
              valueFrom:
                configMapKeyRef:
                  name: backend-configs
                  key: backend.port
I am accessing these variables with NodeJS as below:
let url = process.env.REACT_APP_BACKEND_URL
let port = process.env.REACT_APP_BACKEND_PORT
But how can I modify the Dockerfile mentioned above to pass these two variables?
The js you have shown:
let url = process.env.REACT_APP_BACKEND_URL
let port = process.env.REACT_APP_BACKEND_PORT
runs in the user's browser, not in Kubernetes or Docker, so the environment variables you set on the server do not exist at the time the code runs. The Kubernetes/Docker container only serves the JavaScript as a file to the user's browser.
Basically, you cannot use environment variables in this way.
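One common workaround (not part of this answer's original text; the build-arg names below are just the two variables from the question) is to bake the values in at image build time, so that npm run build can inline them into the bundle:
```
# build env
FROM node:13.12.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . ./
# Supplied at build time, e.g.:
#   docker build --build-arg REACT_APP_BACKEND_URL=... --build-arg REACT_APP_BACKEND_PORT=... .
ARG REACT_APP_BACKEND_URL
ARG REACT_APP_BACKEND_PORT
ENV REACT_APP_BACKEND_URL=$REACT_APP_BACKEND_URL \
    REACT_APP_BACKEND_PORT=$REACT_APP_BACKEND_PORT
RUN npm run build

# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```
Note that with this approach the env entries in the Deployment no longer affect the already-built bundle; genuinely runtime configuration needs a different mechanism, such as generating a small config file when the container starts.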

My container is Warning BackOff or Crashloopback

I am creating several microservices on Azure (Kubernetes) and I have the following problem: if I do not put the command below inside the container spec in the YAML, the pod shows BackOff or CrashLoopBackOff and never leaves that state.
The command I place is this:
command: [ "sleep" ]
args: [ "infinity" ]
This is the error shown if I do not include that code:
Warning BackOff 7s (x4 over 37s) kubelet, aks-agentpool-29153703-2 Back-off restarting the failed container
My Dockerfile for one of these microservices:
FROM node:10
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm start EXPOSE 6060
My YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: permis-deployment
  labels:
    app: permis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: permis
  template:
    metadata:
      labels:
        app: permis
    spec:
      containers:
        - name: permis
          image: myacr.azurecr.io/permission-container:latest
          command: [ "sleep" ]
          args: [ "infinity" ]
          ports:
            - containerPort: 6060

apiVersion: v1
kind: Service
metadata:
  name: permis-service
spec:
  selector:
    app: permis
  ports:
    - protocol: TCP
      port: 6060
      targetPort: 6060
  type: LoadBalancer
Can you tell me what I am doing wrong or what is wrong?
Thank you!
If your app runs fine on its own, replace the Dockerfile with this one and re-create the image. It should work:
FROM node:10
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 6060
CMD ["npm", "start"]
I suggest you do the following:
containers:
  - args:
      - -ec
      - sleep 1000
    command:
      - /bin/sh
Invoking sleep directly didn't work for me.
