I'm trying to deploy an image from Azure Container Registry into a Kubernetes cluster in Azure.
When I run the YAML file I get this error:
[72320:0209/093322.154:ERROR:node_bindings.cc(289)] Most NODE_OPTIONs are not supported in packaged apps. See documentation for more details.
The image is an application developed with Node & Express; it's just a "Hello World".
Here is my Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sharepointbackend-dev
  namespace: dev
  labels:
    app: sharepointbackend-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sharepointbackend-dev
  template:
    metadata:
      labels:
        app: sharepointbackend-dev
    spec:
      containers:
        - name: samplecontainer
          image: sharepointbackendcontainerregister.azurecr.io/sharepointbackend:dev
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
And my Dockerfile:
FROM node:lts-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "tsconfig.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install \
    && npm install typescript -g
COPY . .
RUN tsc
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["node", "dist/app.js"]
Locally, it runs successfully: I build the image in Docker Desktop, run it, and open it in a browser.
Any help?
Thanks in advance
My application consists of a UI built in React, an API, an MQTT broker, and a webhook to monitor changes in a database, all built with Node.
The database has not yet been made into a volume, but is running on my local computer.
Here is my deployment.yml file
# defining Service
apiVersion: v1
kind: Service
metadata:
  name: factoryforge
spec:
  selector:
    app: factoryforge
  ports:
    - port: 80
      name: api
      targetPort: 3000
    - port: 81
      name: mqtt
      targetPort: 3001
    - port: 82
      name: dbmonitor
      targetPort: 3002
    - port: 83
      name: ui
      targetPort: 3003
  type: LoadBalancer
---
# Defining multi container pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: factoryforge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factoryforge
  template:
    metadata:
      labels:
        app: factoryforge
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
        - name: mqtt
          image: blueridgetest/mqtt
          ports:
            - containerPort: 3001
        - name: dbmonitor
          image: blueridgetest/dbmonitor
          ports:
            - containerPort: 3002
        - name: ui
          image: blueridgetest/ui
          ports:
            - containerPort: 3003
...and the Dockerfiles for the four services:
UI
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i --f
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3000
CMD [ "npm", "start" ]
API
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3002
EXPOSE 8000
CMD [ "node", "API.js" ]
MQTT
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 8884
CMD [ "node", "MQTTBroker.js" ]
DBMonitor
FROM node:16
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm i
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
# Ports will have to be mapped to local ports when deployed with kubernetes
EXPOSE 3001
CMD [ "node", "index.js" ]
Any help would be greatly appreciated. Thanks!
I would say that in a standard Kubernetes workload you should run four different Pods with four different Deployments. Then create four different Services and you will see that the pods can communicate with one another.
With this, your pods will be simpler to check, and you can scale each of them independently.
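For example, a minimal sketch of one such Deployment/Service pair for the api container from your manifest (the other three follow the same pattern; names and ports are taken from the question, the Service type is an assumption):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: blueridgetest/api
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer   # or ClusterIP if only the other pods need to reach the API
The other containers can then reach this one through the Service DNS name (http://api inside the same namespace).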
Maybe I missed something?
I'm using the kubectl rollout command to update my deployment. But since my project is a NodeJS project, npm run start takes some time (a few seconds) before the application is actually running, while Kubernetes drops the old pods immediately after npm run start is executed.
For example,
kubectl logs -f my-app
> my app start
> nest start
Kubernetes drops the old pods at this point. However, it takes another 10 seconds until
Application is running on: http://[::1]:5274
which means my service is actually up.
I'd like to know whether there is a way to change this, e.g. waiting some more time before Kubernetes drops the old pods.
My Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
RUN export NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
COPY . .
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]
Spec for my Kubernetes YAML file:
spec:
  replicas: 4
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: image
          imagePullPolicy: Always
          resources:
            limits:
              memory: "8Gi"
              cpu: "10"
            requests:
              memory: "8Gi"
              cpu: "10"
          livenessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            initialDelaySeconds: 180
            periodSeconds: 80
            timeoutSeconds: 20
            failureThreshold: 2
          ports:
            - containerPort: 5274
            - containerPort: 5900
Use a startup probe on your container: https://docs.openshift.com/container-platform/4.11/applications/application-health.html. Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.
And during a rollout, the Deployment controller counts non-ready pods as "unavailable" for things like the maxUnavailable setting of the deployment. Thus it won't keep shutting down working pods until new pods are ready for traffic. (https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html)
As an additional benefit, services won't route traffic to non-ready pods, so they won't receive any traffic until the containers have passed their startup probes.
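A minimal sketch against the spec above (the probe path and port are taken from your liveness probe; the timing values and the rolling-update settings are assumptions to tune):
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod down before a new one is ready
      maxSurge: 1
  template:
    spec:
      containers:
        - name: app
          image: image
          startupProbe:
            httpGet:
              path: /api/Health   # same endpoint as the liveness probe
              port: 5274
            periodSeconds: 5      # assumption: check every 5s while starting
            failureThreshold: 24  # assumption: allow up to ~2 minutes to start
          livenessProbe:
            httpGet:
              path: /api/Health
              port: 5274
            periodSeconds: 80
With the startup probe in place, the liveness probe's long initialDelaySeconds can usually be dropped, since liveness checks only begin once the startup probe has succeeded.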
As stated in the title, I am experiencing the error
Back-off restarting failed container
while creating a service. I've seen similar questions on Stack Overflow but I am still not sure how to resolve it.
This is my deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: book-api
spec:
  replicas: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      name: book-api
      labels:
        app: book-api
    spec:
      containers:
        - name: book-api
          image: newmaster/kubecourse-books:v1
          ports:
            - name: http
              containerPort: 3000
while the Service file is:
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8081
      # Port to forward to inside the pod
      targetPort: 8080
      # Port accessible outside cluster
      nodePort: 30000
  type: LoadBalancer
This is my Dockerfile:
FROM node:alpine
# Create app directory
WORKDIR /src
# Install app dependencies
COPY package.json /src/
COPY package-lock.json /src/
RUN npm install
# Bundle app source
ADD . /src
RUN npm run build
EXPOSE 3000
CMD [ "npm", "run serve" ]
I have no idea how to resolve this issue; I am a newbie in the Kubernetes and DevOps world.
Repo is over here: https://github.com/codemasternode/BookService.Kubecourse.git
I tried to run your deployment locally and this is what the log showed:
kubectl log book-api-8d98bf6d5-zbv4q
Usage: npm <command>
where <command> is one of:
access, adduser, audit, bin, bugs, c, cache, ci, cit,
clean-install, clean-install-test, completion, config,
create, ddp, dedupe, deprecate, dist-tag, docs, doctor,
edit, explore, get, help, help-search, hook, i, init,
install, install-ci-test, install-test, it, link, list, ln,
login, logout, ls, outdated, owner, pack, ping, prefix,
profile, prune, publish, rb, rebuild, repo, restart, root,
run, run-script, s, se, search, set, shrinkwrap, star,
stars, start, stop, t, team, test, token, tst, un,
uninstall, unpublish, unstar, up, update, v, version, view,
whoami
npm <command> -h quick help on <command>
npm -l display full usage info
npm help <term> search for help on <term>
npm help npm involved overview
Specify configs in the ini-formatted file:
/root/.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config
npm@6.5.0-next.0 /usr/local/lib/node_modules/npm
It seems no app command is being run by default with the newmaster/kubecourse-books:v1 image.
I guess if you want it to run npm start, you could use the following deploy config (note the command value):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: book-api
spec:
  replicas: 1
  revisionHistoryLimit: 10
  template:
    metadata:
      name: book-api
      labels:
        app: book-api
    spec:
      containers:
        - name: book-api
          image: newmaster/kubecourse-books:v1
          command: ["npm", "start"]
          ports:
            - name: http
              containerPort: 3000
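Another thing worth checking is the CMD in the Dockerfile from the question: in exec form every argument has to be its own array element, so "run serve" is handed to npm as a single unknown command, which would produce exactly the usage output above. A sketch of the corrected line (assuming the project actually has a serve script):
# split "run" and "serve" into separate array elements
CMD [ "npm", "run", "serve" ]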
I've been messing around with Kubernetes and I'm trying to set up a development environment with minikube, node and nodemon. My image works fine if I run it in a standalone container; however, it crashes with the following error if I put it in my deployment.
yarn run v1.3.2
$ nodemon --legacy-watch --exec babel-node src/index.js
/app/node_modules/.bin/nodemon:2
'use
^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:599:28)
at Object.Module._extensions..js (module.js:646:10)
at Module.load (module.js:554:32)
at tryModuleLoad (module.js:497:12)
at Function.Module._load (module.js:489:3)
at Function.Module.runMain (module.js:676:10)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I have a dev command in my package.json like so:
"dev": "nodemon --legacy-watch --exec babel-node src/index.js",
My image is built with the following Dockerfile:
FROM node:8.9.1-alpine
WORKDIR /app
COPY . /app/
RUN cd /app && yarn install
and my deployment is set up with this
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: nodeapp
  name: nodeapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
        - name: nodeapp
          imagePullPolicy: Never
          image: app:latest
          command:
            - yarn
          args:
            - run
            - dev
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: code
              mountPath: /app
      volumes:
        - name: code
          hostPath:
            path: /Users/adam/Workspaces/scratch/expresssite
---
apiVersion: v1
kind: Service
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
    - name: nodeapp
      port: 8080
      nodePort: 30005
  type: NodePort
---
It's obviously crashing on the 'use strict' in the nodemon binstub, but I have no idea why. It works just fine as a standalone docker container. The goal is to have nodemon restart the node process in each pod when I save changes for development, but I'm really not sure where my mistake is.
EDIT:
I have narrowed it down slightly. It is mounting the node_modules from the host and this is what is causing it to crash. I do have a .dockerignore file set up. Is there a way to either get it to work like this (so if I run npm install it will pick up the changes), or to get it to use the node_modules that were installed with the image?
There are several issues when mounting node_modules from your local computer into a container, e.g.:
1) node_modules contains local symlinks which will not easily be resolvable inside your container.
2) If you have dependencies which rely on native binaries, they will be compiled for the operating system you installed the dependencies on. If you mount them into a different OS, there will be issues running these binaries. Are you running npm install on Win/Mac and mounting it into the Linux-based container built from the image above? Then that is most likely your problem.
We experienced the exact same problems in our team while developing software directly inside Kubernetes pods/containers. That's why we started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Let me know if it works for you or if there is anything you are missing.
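If the goal is simply to stop the host's node_modules from shadowing modules that work inside the container (as asked in the edit), one common workaround outside of DevSpace is sketched below: mount an emptyDir over /app/node_modules and install the dependencies inside the container at startup (the install-on-start command is an assumption):
# fragment of the pod template spec from the deployment above
spec:
  containers:
    - name: nodeapp
      image: app:latest
      imagePullPolicy: Never
      # install inside the Linux container so native binaries match the OS,
      # then start nodemon exactly as before
      command: ["sh", "-c", "yarn install && yarn run dev"]
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: code
          mountPath: /app
        - name: node-modules          # shadows the node_modules coming from the host
          mountPath: /app/node_modules
  volumes:
    - name: code
      hostPath:
        path: /Users/adam/Workspaces/scratch/expresssite
    - name: node-modules
      emptyDir: {}
Note that the emptyDir starts out empty, so the yarn install at startup is what populates it; the node_modules baked into the image are not reused here.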