I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple "run this Bash command" type deals.
package.json:
{
  ...
  "scripts": {
    "start": "node bin/path/to/index.js",
    "compile": "tsc"
  }
}
npm run compile && npm run start works on the command-line. Moving on to the Docker container setup...
Dockerfile:
FROM node:18
WORKDIR /working/dir/
...
RUN npm run compile
CMD [ "npm", "run", "start" ]
When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...
my-cron.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-foo
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-foo
            image: gcr.io/...
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
When I kubectl apply -f my-cron.yaml, sure enough I get pods that run, one per minute; however, they all error out:
% kubectl logs cron-foo-27805019-j8gbp
> mdmp#0.0.1 start
> node bin/path/to/index.js
node:internal/modules/cjs/loader:998
throw err;
^
Error: Cannot find module '/working/dir/bin/path/to/index.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
at Module._load (node:internal/modules/cjs/loader:841:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.11.0
The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...
Any help would be appreciated. Most CronJob examples I've seen have a command: list in the template spec...
The path-not-found error you show should also have appeared when you ran the image locally with docker run ... - but it didn't!
So I assume it is related to the imagePullPolicy. Something was fixed, checked locally, and then re-pushed to the registry for your Kubernetes workloads to use. If it was re-pushed with the same tag, don't forget to tell Kubernetes to query the registry and download the new digest by changing the imagePullPolicy to Always.
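For instance, here is a sketch of just the relevant part of the CronJob above with the policy changed (surrounding fields elided):

```yaml
# my-cron.yaml (excerpt): force a registry check on every pod start
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job-foo
            image: gcr.io/...
            imagePullPolicy: Always
```

Alternatively, pushing each build under a new tag (or referencing the image by digest) avoids relying on the pull policy at all.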
Related
I am trying to automate unit testing before deploying a node.js container to a local kubernetes cluster. It is not clear to me whether or not I need to configure this in my deployment.yaml, Dockerfile, package.json, or some combination of them. And once configured how to instruct Kubernetes to output any failures and exit before deploying.
Do I need to do as this SO answer suggests and write a shell script and modify environment variables? Or is this something I can automate with the Kubernetes deployment.yaml?
If it's useful, I am using mocha with chai. And this is being deployed from Google Cloud Source Repositories to a local Kubernetes instance.
Since I'm entirely new to Kubernetes, it would be great to have as much detail as possible.
Here is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app-image
        imagePullPolicy: IfNotPresent
Here is my Dockerfile:
# Use base node 18-alpine image from Docker hub
FROM node:18-alpine
WORKDIR /MY_APP
# Copy package.json and install dependencies
COPY package*.json ./
RUN npm install
# Copy rest of the application source code
COPY . .
# Run index.js
ENTRYPOINT ["node", "src/index.js"]
Here is my package.json:
"scripts": {
  "start": "node src/index.js",
  "test": "npm install mocha -g && mocha --timeout 4000 --exit"
}
And here is a basic unit test I'm using to experiment with:
import { expect } from 'chai'

describe('Basic unit test', () => {
  it('Checks if 3*3=9', () => {
    expect(3 * 3).to.equal(9)
  })
})
My issue was a misunderstanding about how the CI/CD pipeline works when it comes to tests.
It goes like this:
Commit > Build (dev) > Deploy (dev) > Run tests > (pass or fail) > (if pass) promote to production commit > ... deploy.
If the dev deploy fails the tests, the change is never promoted, and therefore never moves forward in the pipeline.
In my case, my package.json file needed this:
"scripts": {
  "start": "node src/index.js",
  "test": "npm install mocha -g && mocha --timeout 4000 --exit",
  "build": "docker image build -f ./Dockerfile -t my_project:dev ."
},
My cloudbuild.json file needed this:
- id: "Run unit tests"
  name: node
  entrypoint: npm
  args: ["test"]
- id: "run"
  name: node
  entrypoint: npm
  args: ["start"]
If the tests pass, the deploy proceeds, otherwise the build fails.
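This gating works because Cloud Build executes steps in order and aborts the whole build as soon as any step exits non-zero. A minimal sketch of that ordering (the image-build step here is illustrative, not taken from the original config):

```yaml
steps:
  - id: "Run unit tests"
    name: node
    entrypoint: npm
    args: ["test"]   # a failing test exits non-zero and stops the build here
  - id: "Build image"
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "my_project:dev", "."]   # only reached if tests passed
```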
I'm using Docker Desktop and working with a Nest.js app. When I build my project with docker-compose build everything is fine, since there are no errors in the console; however, when I run docker-compose up -d my container keeps failing because it can't find the build directory of my app. The strange thing is that this works perfectly fine on my Windows computer, but my macOS laptop is the one that's failing:
Error: Cannot find module '/tmp/dist/main'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:885:15)
at Function.Module._load (internal/modules/cjs/loader.js:730:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
This is my dockerfile:
FROM node:14.17.0-alpine
WORKDIR /tmp
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3005
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
# Run it
ENTRYPOINT ["node", "/tmp/dist/main"]
This is my docker-compose file. The folder structure of the project is pretty basic; after I run the npm run build command a dist folder is created at the root of my project.
version: '3'
services:
  my-api:
    build: ./my-api
    container_name: 'my-api'
    restart: always
    environment:
      NODE_ENV: "docker-compose"
      APP_PORT: 3005
    ports:
      - "3005:3005"
      - "9229:9229"
    depends_on:
      - redis
      - mysql
As you set WORKDIR /tmp, you are executing the commands in that directory (https://docs.docker.com/engine/reference/builder/#workdir), and from what you provided there is no other tmp directory inside your current working directory.
Try changing the last command to
ENTRYPOINT ["node", "/dist/main"]
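Another option is to rely on WORKDIR for resolution: node resolves a relative entry path against the working directory, so the following sketch (assuming npm run build really emits dist/ under /tmp) ends up loading /tmp/dist/main:

```dockerfile
WORKDIR /tmp
# ... build steps as in the original Dockerfile ...
# "dist/main" is resolved against WORKDIR, i.e. /tmp/dist/main:
ENTRYPOINT ["node", "dist/main"]
```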
I get Error: Cannot find module '/usr/src/nuxt-app/nuxt' when I try to run the app on the server. I didn't change anything in the Dockerfile or the CircleCI config; it had been working before, and I don't know what happened. The image is built by CircleCI. Locally, without Docker, everything works as it should. What do I do?
Error:
throw err;
^
Error: Cannot find module '/usr/src/nuxt-app/nuxt'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
at Function.Module._load (internal/modules/cjs/loader.js:725:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
RUN apk update && apk upgrade
RUN apk add python make g++
ADD . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 5002
CMD [ "nuxt", "start" ]
circleci config.yml:
version: 2.1
orbs:
  docker: circleci/docker@1.5.0
  node: circleci/node@4.1.0
workflows:
  build-deploy:
    jobs:
      - deploy:
          context:
            - docker
          requires:
            - build
          filters:
            branches:
              only: master
jobs:
  deploy:
    machine: true
    steps:
      - checkout
      - docker/install-docker-tools
      - run:
          name: Login to Docker
          command: docker login -u=$DOCKER_LOGIN -p=$DOCKER_PASSWORD registry.xxx.com
      - docker/build:
          image: yyy
          registry: registry.xxx.com
          tag: latest
      - docker/push:
          image: yyy
          registry: registry.xxx.com
          tag: latest
I've been messing around with Kubernetes and I'm trying to set up a development environment with minikube, Node and nodemon. My image works fine if I run it in a standalone container; however, it crashes with the following error if I put it in my deployment.
yarn run v1.3.2
$ nodemon --legacy-watch --exec babel-node src/index.js
/app/node_modules/.bin/nodemon:2
'use
^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:599:28)
at Object.Module._extensions..js (module.js:646:10)
at Module.load (module.js:554:32)
at tryModuleLoad (module.js:497:12)
at Function.Module._load (module.js:489:3)
at Function.Module.runMain (module.js:676:10)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I have a dev command in my package.json as so
"dev": "nodemon --legacy-watch --exec babel-node src/index.js",
My image is being built with the following docker file
FROM node:8.9.1-alpine
WORKDIR /app
COPY . /app/
RUN cd /app && yarn install
and my deployment is set up with this
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: nodeapp
  name: nodeapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: nodeapp
        imagePullPolicy: Never
        image: app:latest
        command:
        - yarn
        args:
        - run
        - dev
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: code
          mountPath: /app
      volumes:
      - name: code
        hostPath:
          path: /Users/adam/Workspaces/scratch/expresssite
---
apiVersion: v1
kind: Service
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - name: nodeapp
    port: 8080
    nodePort: 30005
  type: NodePort
---
It's obviously crashing on the 'use strict' in the nodemon binstub, but I have no idea why. It works just fine as a standalone docker container. The goal is to have nodemon restart the node process in each pod when I save changes for development, but I'm really not sure where my mistake is.
EDIT:
I have narrowed it down slightly: it is mounting the node_modules from the host, and this is what is causing the crash. I do have a .dockerignore file set up. Is there a way to either get it to work like this (so that if I run npm install it will pick up the changes), or to get it to use the node_modules that were installed with the image?
There are several issues when mounting node_modules from your local computer into a container, e.g.:
1) node_modules contains local symlinks which will not easily be resolvable inside your container.
2) If you have dependencies which rely on native binaries, they will be compiled for the operating system you installed the dependencies on. If you mount them into a container running a different OS, there will be issues executing those binaries. Are you running npm install on Win/Mac and mounting the result into the Linux-based container built from the image above? Then that is most likely your problem.
We experienced the exact same problems in our team while developing software directly inside Kubernetes pods/containers. That's why we started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Let me know if it works for you or if there is anything you are missing.
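If you would rather keep using the node_modules baked into the image, one workaround is to mount only the source directory instead of the whole project root, so the image's /app/node_modules is never shadowed. A sketch against the deployment above (it assumes the watched code lives in a src/ subdirectory):

```yaml
        volumeMounts:
        - name: code
          mountPath: /app/src      # only the sources are shadowed;
                                   # /app/node_modules stays from the image
      volumes:
      - name: code
        hostPath:
          path: /Users/adam/Workspaces/scratch/expresssite/src
```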
I'm working on automating the following build steps:
- building the frontend application with webpack
- running tests on it
I am using Jenkins with the blue-ocean plugin enabled. Here is the Jenkinsfile:
pipeline {
  agent {
    dockerfile {
      filename 'Dockerfile'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm run build'
      }
    }
  }
}
I'm using the following Dockerfile:
FROM node:latest
WORKDIR /app
COPY . /app
RUN npm install webpack -g && npm install
The problem is that when running npm run build it can not find webpack:
> webpack --config webpack-production.config.js --progress --colors
module.js:529
throw err;
^
Error: Cannot find module 'webpack'
at Function.Module._resolveFilename (module.js:527:15)
at Function.Module._load (module.js:476:23)
at Module.require (module.js:568:17)
at require (internal/module.js:11:18)
at Object.<anonymous> (/var/lib/jenkins/workspace/l-ui-webpack-example_master-IXSLD4CQSVAM2DRFHYHOYUANEHJ73R5PUGW4BMYVT5WPGB6ZZKEQ/webpack-production.config.js:1:79)
It looks like the commands are being executed in the host context, not in the container, as running them manually works just fine:
docker build . -t sample
docker run sample npm run build
Here is full jenkins log:Jenkins build log
Here is a link to a repository: Source code
I had exactly the same issue. For some reason, 'RUN npm install' within the Dockerfile didn't take effect in the Jenkins pipeline although it worked well when I built the image manually.
I got the pipeline working by running "npm install" as a step in the pipeline. So add this to your Jenkinsfile before the 'Build' stage:
stage('install app') {
  steps {
    sh "npm install"
  }
}
I don't know why this happens but it might have something to do with how Jenkins sets the context for the Docker build. I hope someone else can elaborate on this.
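Put together, the resulting Jenkinsfile would look roughly like this (a sketch combining the original pipeline with the extra stage):

```groovy
pipeline {
  agent {
    dockerfile {
      filename 'Dockerfile'
    }
  }
  stages {
    // Work around the missing node_modules by installing inside the pipeline
    stage('install app') {
      steps {
        sh 'npm install'
      }
    }
    stage('Build') {
      steps {
        sh 'npm run build'
      }
    }
  }
}
```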