Nodemon is crashing my Kubernetes deployment (node_modules causing the issue) - node.js

I've been messing around with Kubernetes and I'm trying to set up a development environment with minikube, Node, and nodemon. My image works fine if I run it as a standalone container; however, it crashes with the following error if I put it in my deployment.
yarn run v1.3.2
$ nodemon --legacy-watch --exec babel-node src/index.js
/app/node_modules/.bin/nodemon:2
'use
^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:599:28)
at Object.Module._extensions..js (module.js:646:10)
at Module.load (module.js:554:32)
at tryModuleLoad (module.js:497:12)
at Function.Module._load (module.js:489:3)
at Function.Module.runMain (module.js:676:10)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I have a dev script in my package.json like so:
"dev": "nodemon --legacy-watch --exec babel-node src/index.js",
My image is built with the following Dockerfile:
FROM node:8.9.1-alpine
WORKDIR /app
COPY . /app/
RUN cd /app && yarn install
and my deployment is set up with this
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: nodeapp
  name: nodeapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: nodeapp
        imagePullPolicy: Never
        image: app:latest
        command:
        - yarn
        args:
        - run
        - dev
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: code
          mountPath: /app
      volumes:
      - name: code
        hostPath:
          path: /Users/adam/Workspaces/scratch/expresssite
---
apiVersion: v1
kind: Service
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - name: nodeapp
    port: 8080
    nodePort: 30005
  type: NodePort
---
It's obviously crashing on the 'use strict' in the nodemon binstub, but I have no idea why. It works just fine as a standalone docker container. The goal is to have nodemon restart the node process in each pod when I save changes for development, but I'm really not sure where my mistake is.
EDIT:
I have narrowed it down slightly: it is mounting node_modules from the host, and this is what is causing the crash. I do have a .dockerignore file set up. Is there a way to either get it to work like this (so that if I run npm install it will pick up the changes), or to get it to use the node_modules that were installed with the image?

There are several issues when mounting node_modules from your local computer into a container, e.g.:
1) node_modules may contain local symlinks, which will not easily be resolvable inside your container.
2) If you have dependencies that rely on native binaries, they will be compiled for the operating system you installed the dependencies on. If you mount them into a container running a different OS, those binaries will fail. Are you running npm install on Windows/Mac and mounting the result into the Linux-based container built from the image above? Then that is most likely your problem.
We experienced the exact same problems in our team while developing software directly inside Kubernetes pods/containers. That's why we started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Let me know if it works for you or if there is anything you are missing.
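If you want to keep mounting the source from the host but use dependencies installed inside the container (addressing the question's edit), one common workaround is to mount a second, empty volume over /app/node_modules so the host copy never shadows it, and install dependencies inside the container at startup. A minimal sketch against the deployment above; the `deps` volume and the init container are assumptions, not part of the original manifests:

```yaml
# Sketch: host-mounted source, container-native node_modules.
# The emptyDir masks the host's node_modules; the init container
# installs Linux-native dependencies into it before the app starts.
spec:
  initContainers:
  - name: install          # hypothetical name
    image: app:latest
    workingDir: /app
    command: ["yarn", "install"]
    volumeMounts:
    - name: code
      mountPath: /app
    - name: deps
      mountPath: /app/node_modules
  containers:
  - name: nodeapp
    image: app:latest
    command: ["yarn"]
    args: ["run", "dev"]
    volumeMounts:
    - name: code
      mountPath: /app
    - name: deps           # hides the host's node_modules
      mountPath: /app/node_modules
  volumes:
  - name: code
    hostPath:
      path: /Users/adam/Workspaces/scratch/expresssite
  - name: deps
    emptyDir: {}
```

Note that, unlike Docker's anonymous volumes, a Kubernetes emptyDir starts empty and is not populated from the image, which is why the install step is needed.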

Related

Deployment Kubernetes in Azure - Most NODE_OPTIONs are not supported in packaged apps

I'm trying to deploy an image from Azure Container Registry into a Kubernetes cluster in Azure.
When I apply my YAML file I get this error:
[72320:0209/093322.154:ERROR:node_bindings.cc(289)] Most NODE_OPTIONs are not supported in packaged apps. See documentation for more details.
The image is an application developed with Node & Express; it's just a "Hello World".
Here is my Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sharepointbackend-dev
  namespace: dev
  labels:
    app: sharepointbackend-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sharepointbackend-dev
  template:
    metadata:
      labels:
        app: sharepointbackend-dev
    spec:
      containers:
      - name: samplecontainer
        image: sharepointbackendcontainerregister.azurecr.io/sharepointbackend:dev
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 3000
          protocol: TCP
And my Dockerfile:
FROM node:lts-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "tsconfig.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install\
&& npm install typescript -g
COPY . .
RUN tsc
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["node", "dist/app.js"]
Locally, I can run it successfully: I build the image in Docker Desktop and it runs in a browser.
Any help?
Thanks in advance

Running a NodeJS as a CronJob inside Kubernetes

I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple "run this Bash command" type deals.
package.json:
{
  ...
  "scripts": {
    "start": "node bin/path/to/index.js",
    "compile": "tsc"
  }
}
npm run compile && npm run start works on the command-line. Moving on to the Docker container setup...
Dockerfile:
FROM node:18
WORKDIR /working/dir/
...
RUN npm run compile
CMD [ "npm", "run", "start" ]
When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...
my-cron.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-foo
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-foo
            image: gcr.io/...
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
When I kubectl apply -f my-cron.yaml, sure enough I get pods that run, one per minute; however, they all error out:
% kubectl logs cron-foo-27805019-j8gbp
> mdmp@0.0.1 start
> node bin/path/to/index.js
node:internal/modules/cjs/loader:998
throw err;
^
Error: Cannot find module '/working/dir/bin/path/to/index.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
at Module._load (node:internal/modules/cjs/loader:841:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.11.0
The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...
Any help would be appreciated. Most CronJob examples I've seen have a command: list in the template spec...
The error you show about the path not being found should also have appeared when you ran docker run ... - but it didn't!
So I assume it is related to the imagePullPolicy. You fixed something, checked it locally, and re-pushed the image to the registry your Kubernetes workloads use. If it was re-pushed with the same tag, don't forget to tell Kubernetes to query the registry for the new digest by changing the imagePullPolicy to Always.
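In YAML terms, the suggested change amounts to a one-line edit in the CronJob from the question (a sketch; the image path was elided in the question and is left as-is):

```yaml
# Sketch: force Kubernetes to re-check the registry for this tag's
# digest on every run, instead of reusing a stale cached image.
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-foo
            image: gcr.io/...
            imagePullPolicy: Always   # was IfNotPresent
          restartPolicy: OnFailure
```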

Node application not starting immediately

I have a Node application running as a pod in a Kubernetes cluster, but it always takes around 8 minutes for the application to start executing.
The application logs only appear around the 8-minute mark. I don't think it has anything to do with the application itself, as the application doesn't throw any errors at all.
My EKS cluster is at v1.18.
Would appreciate it if anyone can point me to any logs that I could use to investigate this issue.
cronjob.yaml
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: <name>
  namespace: <namespace>
spec:
  schedule: "*/4 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: <name>
        spec:
          restartPolicy: Never
          containers:
          - name: <name>
            image: <container image>
            env:
              <env variables>
            volumeMounts:
              <mounts>
          volumes:
            <PVCs>
Application logs from pod
npm WARN registry Unexpected warning for https://registry.npmjs.org/: Miscellaneous Warning ETIMEDOUT: request to https://registry.npmjs.org/npm failed, reason: connect ETIMEDOUT 104.16.22.35:443
npm WARN registry Using stale data from https://registry.npmjs.org/ due to a request error during revalidation.
> <app name>@1.0.0 start:<env>
> node src/app.js
Application ABC: starting
.
<application logs>
.
Application ABC: completed
Dockerfile
FROM node:15.14.0-alpine3.12
# Create app directory
WORKDIR /app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
#RUN npm ci --only=production
# Bundle app source
COPY . .
CMD [ "node", "src/app.js"]

error when trying to run ui (app) on kubernetes?

Basically, when I try to run my app (UI) on Kubernetes using kubectl, the pod fails with an error.
Logs of the pod:
> wootz@0.1.0 start /usr/src/app
> node scripts/start.js
internal/modules/cjs/loader.js:626
throw err;
^
Error: Cannot find module 'react-dev-utils/chalk'
Require stack:
- /usr/src/app/scripts/start.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:623:15)
at Function.Module._load (internal/modules/cjs/loader.js:527:27)
at Module.require (internal/modules/cjs/loader.js:681:19)
at require (internal/modules/cjs/helpers.js:16:16)
at Object.<anonymous> (/usr/src/app/scripts/start.js:19:15)
at Module._compile (internal/modules/cjs/loader.js:774:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:785:10)
at Module.load (internal/modules/cjs/loader.js:641:32)
at Function.Module._load (internal/modules/cjs/loader.js:556:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:837:10) {
code: 'MODULE_NOT_FOUND',
requireStack: [ '/usr/src/app/scripts/start.js' ]
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! wootz@0.1.0 start: `node scripts/start.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the wootz@0.1.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-06-17T15_56_02_649Z-debug.log
uipersistantvolume
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ui-initdb-pv-volume
  labels:
    type: local
    app: ui
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/client"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ui-initdb-pv-claim-one
  labels:
    app: ui
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
uipersistantvolumetwo
kind: PersistentVolume
apiVersion: v1
metadata:
  name: ui-initdb-pv-volume-two
  labels:
    type: local
    app: ui
spec:
  storageClassName: manual
  capacity:
    storage: 1Mi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/home/vignesh/pagedesigneryamls/client"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ui-initdb-pv-claim-two
  labels:
    app: ui
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
ui.yaml
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    app: ui
spec:
  ports:
  - name: myport
    port: 80
    targetPort: 3000
  selector:
    app: ui
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  labels:
    app: ui
spec:
  selector:
    matchLabels:
      app: ui
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ui
        tier: frontend
    spec:
      containers:
      - image: suji165475/devops-sample:updatedclientdockerfile
        name: ui
        ports:
        - containerPort: 80
          name: myport
        volumeMounts:
        - name: ui-persistent-storage-one
          mountPath: /usr/src/app
        - name: ui-persistent-storage-two
          mountPath: /usr/src/app/node_modules
      volumes:
      - name: ui-persistent-storage-one
        persistentVolumeClaim:
          claimName: ui-initdb-pv-claim-one
      - name: ui-persistent-storage-two
        persistentVolumeClaim:
          claimName: ui-initdb-pv-claim-two
The image used in ui.yaml was built with the following Dockerfile:
FROM node:12.4.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package.json
RUN npm install && npm cache clean --force
RUN npm install -g webpack-cli
WORKDIR /usr/src/app
COPY . .
WORKDIR /usr/src/app
EXPOSE 3000
RUN npm run build
CMD [ "npm","start" ]
How can I solve the error "Cannot find module 'react-dev-utils/chalk'"? Is there anything missing from the Dockerfile?
Delete all of the volumes, persistent volumes, and persistent volume claims. Your code is in your image (you COPY . . to get it in) and you should run it from there.
Kubernetes is extremely ill-suited to be a live development environment. Notice here that you’re spending more YAML space on trying to inject your local source code into the container than everything else in your deployment combined; in a real production setup you’d also have to make sure your source code gets on to every single node (and assumes you even have access to the nodes; in many cloud-hosted environments you won’t).
I’d recommend developing your application normally — no Docker, no Kubernetes — and only once it works, worry about packaging it up and deploying it. Things like rolling zero-downtime restarts in Kubernetes are rather different from live code reloads in a development environment.
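Concretely, dropping the volumes shrinks the Deployment from the question to something like the sketch below. The containerPort of 3000 is an assumption based on the Dockerfile's EXPOSE 3000 and the Service's targetPort (the original manifest said 80, which does not match):

```yaml
# Sketch: run the code baked into the image (via COPY . .) with no
# volumes, PVs, or PVCs; the Service can keep targeting port 3000.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  labels:
    app: ui
spec:
  selector:
    matchLabels:
      app: ui
      tier: frontend
  template:
    metadata:
      labels:
        app: ui
        tier: frontend
    spec:
      containers:
      - name: ui
        image: suji165475/devops-sample:updatedclientdockerfile
        ports:
        - containerPort: 3000   # matches EXPOSE 3000 in the Dockerfile
          name: myport
```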

Angular 5 app with server rendering with Angular Universal on App Engine

I am a newbie to Node.js and Angular, and I have a simple Angular 5 app with some simple routes.
I also want to support server-side rendering with Angular Universal and host the app on Google Cloud App Engine.
I tried to deploy a starter kit of Angular Universal (https://github.com/gdi2290/angular-starter) on App Engine using Docker, but it fails. Although the deploy succeeds, it gives a 502 Bad Gateway error from nginx. I have tried clearing the cache and every other suggestion available on the net, but I get the same result.
I have also tried this example from Google, which worked, but it is basic: https://codelabs.developers.google.com/codelabs/cloud-cardboard-viewer/index.html?index=..%2F..%2Findex
Please help me create an App Engine deployable version of https://github.com/gdi2290/angular-starter.
Before I go into any detail, let me give you the GitHub link of my Angular Universal seed project with a Dockerfile and Sass. I use it as a starting point for my projects. Since I am a fan of Vagrant, you will find the Vagrantfile in the repository; use it to create the exact same environment for development as well as for testing the Docker container. The Readme file provides all the details on how to work with the project.
Here is the link.
Angular Universal Project Creation
The Angular Universal setup steps are detailed here in the official documentation.
However, I wasted a few hours finding out the following point:
Webpack 3 is not compatible with ts-loader versions higher than 3.5.0. At the time of writing, the latest version of Angular CLI is 1.7.2, which uses Webpack 3.*. Hence, while setting up Angular Universal, install ts-loader@3.5.0.
Dockerfile
My Dockerfile looks like below. As you can see, I am using Docker's multi-stage build feature to first build the project in one container, copy the distribution into a new container, and discard the container used for the build. This allows the Google Cloud build trigger to build the source code and create the Docker image from the distribution.
FROM node:8.10.0 AS ausbuilder
RUN useradd --create-home --shell /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY aus/ /home/aus
RUN mkdir /home/aus/.npm; \
npm config set prefix /home/aus/.npm; \
npm install --quiet --no-progress -g webpack@3.11.0; \
npm install --quiet --no-progress -g @angular/cli@1.7.2; \
npm install --quiet --no-progress;
ENV PATH=/home/aus/.npm/bin:$PATH
RUN npm run build; \
webpack --config webpack.server.config.js --no-progress
FROM keymetrics/pm2:8-alpine
RUN adduser -h /home/aus -s /bin/bash aus; \
chown -R aus /home/aus
USER aus
WORKDIR /home/aus
COPY --from=ausbuilder /home/aus/dist /home/aus/dist
EXPOSE 4000/tcp
ENTRYPOINT ["pm2-runtime","start","/home/aus/dist/server.js"]
Deployment in Kubernetes in Google Cloud
We first need to create a build trigger in Google Cloud, so that as soon as we push code to (say) the master branch, a build and subsequent deployment are triggered. Your code may be hosted in Google Cloud source control, Bitbucket, or GitHub; you need to integrate your source control with the build trigger. While creating the build trigger, you have the option to select either a Dockerfile or a cloudbuild.yaml. If you choose the first option, your code is built and the resulting Docker image is stored in the Google Container Registry. I go for the second option, as it allows me to do more, such as deploying to Kubernetes.
Here is how my cloudbuild.yaml looks like.
A few important points to note here:
While cloning the repository I cannot give an external URL. What happens here is that when you create a build trigger, Google creates another repository in the Google domain which basically points to the external source control (Bitbucket in my case). You can find this in the Google Source Control section.
Secondly, I am creating a latest tag for the container image so that I can refer to it in the Kubernetes deployment descriptor, which I named kubedeployment.yaml. kubedeployment.yaml is referenced in cloudbuild.yaml, as you can see below:
steps:
- name: gcr.io/cloud-builders/git
  args: ['clone', 'https://source.developers.google.com/p/aus2018/r/bitbucket-saptarshibasu-aus']
- name: 'gcr.io/cloud-builders/docker'
  args: ["build", "-t", "gcr.io/aus2018/aus:$REVISION_ID", "."]
- name: 'gcr.io/cloud-builders/docker'
  args: ["tag", "gcr.io/aus2018/aus:$REVISION_ID", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "gcr.io/aus2018/aus:$REVISION_ID"]
- name: 'gcr.io/cloud-builders/docker'
  args: ["push", "gcr.io/aus2018/aus:latest"]
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - 'create'
  - '-f'
  - 'kubedeployment.yaml'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=asia-south1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=aus'
Finally, here is how the kubedeployment.yaml looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aus-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aus
  template:
    metadata:
      labels:
        app: aus
    spec:
      containers:
      - name: aus
        image: gcr.io/aus2018/aus:latest
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: aus-svc
  labels:
    app: aus
spec:
  type: NodePort
  selector:
    app: aus
  ports:
  - protocol: TCP
    port: 80
    targetPort: 4000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aus-ing
spec:
  backend:
    serviceName: aus-svc
    servicePort: 80
Once the deployment completes, after a few minutes you'll get the Ingress URL, and a few minutes after that your app starts showing up at that URL.
You are definitely going to customise this to fit your needs. However, I hope it gives you a starting point.
