I'm using the kubectl rollout command to update my deployment. But since my project is a Node.js project, npm run start takes some time (a few seconds) before the application is actually running, while Kubernetes drops the old pods immediately after npm run start is executed.
For example,
kubectl logs -f my-app
> my app start
> nest start
Kubernetes will drop the old pods at this point. However, it takes another 10 seconds until
Application is running on: http://[::1]:5274
which means my service is actually up.
I'd like to know whether there is a way to change this, for example by waiting some more time before Kubernetes drops the old pods.
My Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
# raise the V8 heap limit for the build step
ENV NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
COPY . .
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]
Spec from my Kubernetes YAML file:
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: image
          imagePullPolicy: Always
          resources:
            limits:
              memory: "8Gi"
              cpu: "10"
            requests:
              memory: "8Gi"
              cpu: "10"
          livenessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            initialDelaySeconds: 180
            periodSeconds: 80
            timeoutSeconds: 20
            failureThreshold: 2
          ports:
            - containerPort: 5274
            - containerPort: 5900
Use a startup probe on your container (https://docs.openshift.com/container-platform/4.11/applications/application-health.html). Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.
During a rollout, non-ready pods count as "unavailable" for settings like the deployment's maxUnavailable, so the rollout won't keep shutting down working pods until new pods are ready for traffic (https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html).
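For example, with the four replicas from the question, the rolling-update settings that control this can be made explicit in the Deployment spec; the values below are simply the Kubernetes defaults, shown for illustration:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 1 of the 4 replicas may be non-ready during the rollout
      maxSurge: 25%         # at most 1 extra pod may be created above the desired replica count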
As an additional benefit, Services won't route traffic to non-ready pods, so they won't receive any traffic until their containers have passed their startup probes.
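A minimal sketch of what that could look like on the container from the question, reusing the existing /api/Health endpoint and port 5274 (the probe timings here are only assumptions and should be tuned to the app's real startup time):

      containers:
        - name: app
          image: image
          imagePullPolicy: Always
          ports:
            - containerPort: 5274
            - containerPort: 5900
          startupProbe:
            httpGet:
              path: /api/Health
              port: 5274
            periodSeconds: 5
            failureThreshold: 36      # allow up to ~3 minutes for npm/nest to finish starting
          readinessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            periodSeconds: 10
          livenessProbe:              # only starts running once the startup probe has succeeded
            httpGet:
              path: /api/Health
              port: 5274
            periodSeconds: 80
            timeoutSeconds: 20
            failureThreshold: 2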
Related
I'm trying to deploy an image from Azure Container Registry into a Kubernetes cluster in Azure.
When I run the YAML file I get this error:
[72320:0209/093322.154:ERROR:node_bindings.cc(289)] Most NODE_OPTIONs are not supported in packaged apps. See documentation for more details.
The image is an application developed with Node and Express; it's just a "Hello World".
Here is my Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sharepointbackend-dev
  namespace: dev
  labels:
    app: sharepointbackend-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sharepointbackend-dev
  template:
    metadata:
      labels:
        app: sharepointbackend-dev
    spec:
      containers:
        - name: samplecontainer
          image: sharepointbackendcontainerregister.azurecr.io/sharepointbackend:dev
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
And my Dockerfile:
FROM node:lts-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "tsconfig.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install\
&& npm install typescript -g
COPY . .
RUN tsc
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["node", "dist/app.js"]
Locally, I can run it successfully: I build the image in Docker Desktop, run it, and open it in a browser.
Any help?
Thanks in advance
My application consists of a UI built in React, an API, an MQTT broker, and a webhook to monitor changes in a database, all built with Node.
The database has not yet been made into a volume, but is running on my local computer.
Here is my deployment.yml file
# defining Service
apiVersion: v1
kind: Service
metadata:
  name: factoryforge
spec:
  selector:
    app: factoryforge
  ports:
    - port: 80
      name: api
      targetPort: 3000
    - port: 81
      name: mqtt
      targetPort: 3001
    - port: 82
      name: dbmonitor
      targetPort: 3002
    - port: 83
      name: ui
      targetPort: 3003
  type: LoadBalancer
---
# Defining multi container pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: factoryforge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factoryforge
  template:
    metadata:
      labels:
        app: factoryforge
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
        - name: mqtt
          image: blueridgetest/mqtt
          ports:
            - containerPort: 3001
        - name: dbmonitor
          image: blueridgetest/dbmonitor
          ports:
            - containerPort: 3002
        - name: ui
          image: blueridgetest/ui
          ports:
            - containerPort: 3003
...and the Dockerfiles for the four services
UI
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i --f
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3000
CMD [ "npm", "start" ]
API
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3002
EXPOSE 8000
CMD [ "node", "API.js" ]
MQTT
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 8884
CMD [ "node", "MQTTBroker.js" ]
DBMonitor
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3001
CMD [ "node", "index.js" ]
Any help would be greatly appreciated. Thanks!
I would say that in a standard Kubernetes workload you should run four different Pods via four different Deployments. Then create four different Services and you will see that they can communicate with one another.
With this, your pods will be simpler to troubleshoot, and you can scale each of them individually.
Maybe I missed something?
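To illustrate, here is a minimal sketch of one such Deployment/Service pair for the api container, reusing the image and port from the original manifest; the other three would follow the same pattern, and only the service that needs to be reached from outside (e.g. the UI) would use type: LoadBalancer:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000

With this in place, the other pods can reach the API at http://api:80 inside the cluster.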
I have a Node application running as a pod in a Kubernetes cluster, but it always takes around 8 minutes for the application to start executing.
The application logs only appear at around the 8-minute mark. I don't think it has anything to do with the application itself, as the application doesn't throw any errors at all.
My EKS cluster is at v1.18.
Would appreciate it if anyone can point me to any logs that I could use to investigate this issue.
cronjob.yaml
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: <name>
  namespace: <namespace>
spec:
  schedule: "*/4 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: <name>
        spec:
          restartPolicy: Never
          containers:
            - name: <name>
              image: <container image>
              env:
                <env variables>
              volumeMounts:
                <mounts>
          volumes:
            <PVCs>
Application logs from pod
npm WARN registry Unexpected warning for https://registry.npmjs.org/: Miscellaneous Warning ETIMEDOUT: request to https://registry.npmjs.org/npm failed, reason: connect ETIMEDOUT 104.16.22.35:443
npm WARN registry Using stale data from https://registry.npmjs.org/ due to a request error during revalidation.
> <app name>@1.0.0 start:<env>
> node src/app.js
Application ABC: starting
.
<application logs>
.
Application ABC: completed
Dockerfile
FROM node:15.14.0-alpine3.12
# Create app directory
WORKDIR /app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
#RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "node", "src/app.js"]
I'm trying to get a basic Node.js and Postgres system up and running, but for some reason the Node container keeps receiving a SIGTERM signal and shutting down, only to be started back up due to the restart policy and then shut down again. The cycle goes on and on.
What am I missing here? I ran the same code in non-swarm mode and it worked fine: the container was healthy and stayed up. One more thing I noticed in swarm mode is that in spite of asking Docker to keep 1 replica, docker stack services service_name always returns 0/1 replicas.
Posting my Dockerfile and docker-compose.yml here:
# BASE stage
FROM node:14-alpine as base
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile --prod
FROM node:14-alpine
ENV NODE_ENV development
ENV VERSION V1
WORKDIR /usr/src/app
RUN apk --no-cache add curl
COPY src src/
# Other copy commands
COPY --from=base /usr/src/app/node_modules /usr/src/app/node_modules
# check every 5s to ensure this service returns HTTP 200
HEALTHCHECK --interval=5s --timeout=3s --start-period=10s --retries=3 \
CMD curl -fs http://localhost/health
ENTRYPOINT [ "node", "src/index.js" ]
version: "3.7"
services:
api:
image: demo/hobby:v1
deploy:
replicas: 1
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 5
window: 120s
rollback_config:
parallelism: 1
delay: 20s
order: start-first
update_config:
parallelism: 1
delay: 1s
failure_action: rollback
order: start-first
env_file:
- ./.env
ports:
- target: 9200
published: 80
mode: host
networks:
- verse
postgres:
image: "postgres:12.3-alpine"
container_name: "test-db-dev"
networks:
- verse
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- POSTGRES_USER=${DB_USER}
expose:
- "5432"
ports:
- "5432:5432"
restart: "unless-stopped"
networks:
verse:
driver: overlay
external: false
I assume that your container is being killed every 30 seconds. If this is true, then this is the cause:
The HEALTHCHECK is trying to curl http://localhost/health (defaults to port 80).
Although you expose the app to the host on port 80, the HEALTHCHECK is performed from the container perspective, where no service is listening on that port.
Assuming that your node app is listening on port 9200 and that a GET performed on /health returns status 200, the Dockerfile should be built like this:
[...]
HEALTHCHECK --interval=5s --timeout=3s --start-period=10s --retries=3 \
CMD curl -fs http://localhost:9200/health
[...]
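If you would rather not rebuild the image, the same check can also be declared (or overridden) in the stack file; a minimal sketch, assuming the same port and path:

    healthcheck:
      test: ["CMD", "curl", "-fs", "http://localhost:9200/health"]
      interval: 5s
      timeout: 3s
      start_period: 10s
      retries: 3

Settings defined here take precedence over the HEALTHCHECK baked into the image.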
I have a web application written in Node.js that I'm trying to get into Docker. Running the image with docker run -p 80:80 image works just fine; I'm able to access the webpage it's hosting. However, when I try to run it in a stack, I'm unable to access the page, and Chrome just sits "Waiting for localhost..." forever.
Dockerfile:
FROM readytalk/nodejs
WORKDIR /app
ADD . /app
RUN npm i
EXPOSE 80
CMD []
ENTRYPOINT ["/nodejs/bin/npm", "start"]
docker-compose.yml:
version: "3"
services:
web:
image: image_name
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "80:80"
networks:
- webnet
networks:
webnet:
Any help would be greatly appreciated.
EDIT: Added some logging, and it seems that the HTTP request never actually makes it to the Node.js app. It seems like Docker has received the request but hasn't routed it to the running app.
Sometimes your Docker container runs on your machine's IP address. You can check it by running docker info.
Second option: go to the terminal and run
On Windows:
ipconfig
On Ubuntu:
ifconfig
Note the preferred IP, then access your container with that IP and the published port.
Hope this solves your problem; also try accessing it at 127.0.0.1:port.
You can check this slide, which shows how to build and run a hello-world Node + Docker application:
docker-node-hello-world-application
And I would recommend using this Dockerfile:
Node_DockerFile
FROM alpine
RUN apk update && apk upgrade
RUN apk add nodejs npm
RUN mkdir -p /app
ADD app/package.json /app
WORKDIR /app/
ENV HOME /app
ENV NODE_ENV development
RUN npm install
ADD app /app
EXPOSE 3000
CMD npm start
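If you switch to an image built from this Dockerfile, remember that the published port in the stack file has to match the new container port; a hedged sketch of the relevant part of the compose service (image name kept from the question):

    web:
      image: image_name
      ports:
        - "80:3000"   # host port 80 -> container port 3000 (matches EXPOSE 3000 above)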