Dockerfile: how to make the start command different between dev and prod? - node.js

I have a TypeScript Node app with dev, build, and start npm scripts:
"dev": "ts-node-dev src/index.ts",
"build": "npm run test:ci && tsc",
"start": "node dist/index"
When developing I watch for changes to the .ts files, and when running in production I want to run the compiled .js files from the dist directory (which is generated by the npm build script).
This is my Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm i --only=prod
COPY . .
CMD ["npm", "run", "dev"]
When it's running in the dev environment this is fine, but in production the CMD should instead be:
CMD ["npm", "start"]
The RUN npm i --only=prod line also needs to change accordingly.
How can I make this adjustable for dev vs prod?

In Kubernetes you can override the default command arguments:
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    spec:
      containers:
        - name: CONTAINER-NAME
          image: IMAGE-NAME
          args: ["npm", "start"]
See the kubernetes documentation.
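The same kind of override also works with plain docker run, since anything placed after the image name replaces the Dockerfile's CMD:
docker run IMAGE-NAME npm start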
The detailed implementation depends on the deployment system you're using:
You can write two different .yaml files, one for the development and one for the production environment.
If you're deploying with helm, you can set this configuration in a value file per environment.
You can also use Kustomize as described in this example.
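Independently of the orchestrator, you can also build separate images per environment from one multi-stage Dockerfile with a target per environment. A minimal sketch, assuming the npm scripts from the question (the myapp image tags below are placeholders):
FROM node:14 AS base
WORKDIR /usr/src/app
COPY package*.json ./

FROM base AS dev
# ts-node-dev lives in devDependencies, so install everything
RUN npm i
COPY . .
CMD ["npm", "run", "dev"]

FROM base AS build
# building needs the TypeScript toolchain, so install everything here too
RUN npm i
COPY . .
RUN npm run build

FROM base AS prod
RUN npm i --only=prod
COPY --from=build /usr/src/app/dist ./dist
CMD ["npm", "start"]
Then pick the stage at build time:
docker build --target dev -t myapp:dev .
docker build --target prod -t myapp:prod .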

Can not start nestjs project on the k8s server

I am working on a NestJS project. It starts and works well when I run:
npm run start:test
However, when I deploy the project to k8s, it shows:
[10:44:59 PM] Starting compilation in watch mode...
[10:45:22 PM] Found 0 errors. Watching for file changes.
and it never shows that it is running.
Here is my Dockerfile:
FROM node:16 AS builder
# Create app directory
WORKDIR /app
ENV NODE_ENV='prod'
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY tsconfig.build.json ./
COPY tsconfig.json ./
COPY ca-cert.pem ./
COPY prisma ./prisma/
COPY protos ./protos/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:16-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/protos ./protos
COPY --from=builder /app/tsconfig.build.json ./
COPY --from=builder /app/tsconfig.json ./
COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/ca-cert.pem ./
EXPOSE 3190
CMD [ "npm", "run", "start:test" ]
Here are the relevant package.json scripts:
"start:test": "nest start --watch",
"start:dev": "set NODE_ENV=dev && nest start --watch",
"start:debug": "nest start --debug --watch",
"start:prod": " nest start",
Am I missing anything?
You should not run in watch mode in Kubernetes; it just creates overhead. Also make sure there are proper health checks and memory requests.
So it is better to change the CMD to
CMD [ "npm", "run", "start" ]
But why build the same package again on every start? Building is a pretty heavy process, so it's better to just consume dist.
nest start simply ensures the project has been built (same as nest build), then invokes the node command in a portable, easy way to execute the compiled application.
NestJS already provides the correct command, start:prod, but I (and Google) assumed the naked start was the one to invoke. Probably others make the same assumption, so do this instead:
{
  "start": "npm run start:prod",
  "start:prod": "node dist/main"
}
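With the scripts fixed this way, the container can bypass npm and watch mode entirely; a minimal sketch of the image's final instruction, assuming the dist/main entry point from the scripts above:
CMD ["node", "dist/main"]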
See also: How to get back to point 0 of memory consumption with NestJS.
If you still want to keep the existing setup, then make sure these resources are set:
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  requests:
    cpu: 1000m
    memory: 2000Mi

docker-compose setup node app - deep dive

I am currently getting a npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json' error with the docker setup below, or an error TS2307: Cannot find module 'Actions' or its corresponding type declarations. I think the paths are not being resolved from tsconfig.json during the build, or I am not COPYing the correct directory/volume in the Dockerfile. I have spent multiple days working through different path configs/setups; any help getting this to build would be greatly appreciated.
I would love to see a Node / TS / Docker / MySQL project example if there are any in the community to share - I have found it difficult to find open-source projects to compare this to for hints.
...
"paths": {
"Actions/*": [
"Actions/*"
],
}
docker-compose.yml
version: '3.8'
services:
  app:
    image: app:latest
    container_name: balanced-money-backend
    build:
      context: .
      dockerfile: Dockerfile
    # TODO investigate uid and gid, how does it get in - from a startup script? Think it needs to be added like user: $UID:$GID if my cmd calls a setup to id on host machine. Needs more investigation.
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
    restart: always
    volumes:
      - .:/var/www/
    command: npm start
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=$MYSQL_HOST
      - DB_USERNAME=$MYSQL_USER
      - DB_PORT=$MYSQL_DOCKER_PORT
      - DB_PASSWORD=$MYSQL_PASSWORD
      - DB_DATABASE=$MYSQL_DATABASE
  db:
    image: mysql:5.7
    restart: always
    container_name: balanced-money-database
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_DATABASE=$MYSQL_DATABASE
    ports:
      - $MYSQL_LOCAL_PORT:$MYSQL_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    healthcheck: # mysql does not start immediately; app needs to wait for mysql to start. Having condition: service_healthy on app plus this healthcheck makes sure db has started before app... I think.
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      timeout: 20s
      retries: 10
volumes:
  db:
Dockerfile
############### Stage 1 - build the project
# use alpine version of node to keep the image size small as possible
FROM node:16-alpine AS build
# node docs recommend this
WORKDIR /usr/src/app
# docker caches per row as it builds, so copy those files which do not change often to the container first and following builds will not need copy as they are already cached by Docker.
COPY package*.json ./
COPY src tsconfig.json ./
RUN npm install
RUN npm run build
# TODO not sure about the stages - can i have a test / dev stage so test / dev is run in docker too.
############### Stage 2 - run the project
FROM build AS prod
EXPOSE 4000
# from stage 1, i.e. build take the code in the dist / package.json and copy to the container
COPY --from=build /usr/src/app/dist ./dist/
COPY --from=build /usr/src/app/package*.json ./
# npm ci will install exact versions from a package-lock file, and --production will only install dependencies, not dev dependencies.
RUN npm ci --production && npm cache clean --force
# make sure user is not root which could have security consequences.
USER node
CMD ["node", "dist/index.js"]
package.json scripts
"scripts": {
"build": "tsc",
"start": "node ./dist/index.js",
"node": "./dist/index.js",
"dev": "NODE_ENV=development DOTENV_CONFIG_PATH=.env.dev nodemon ts-node src/index.ts",
"format:prettier": "prettier --config .prettierrc 'src/**/*.ts' --write",
"lint": "eslint . --ext .ts",
"lint:fix": "eslint . --ext .ts --fix",
"test": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --runInBand",
"test:coverage": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --coverage",
},

Docker Nodemon not reloading on changes even though -L is set

Hi, I'm trying to dockerize an app I'm currently working on. It uses Node.js and MariaDB. I'm having some difficulty figuring out how to make nodemon work.
I tried using --legacy-watch, or -L which is the short form, but it didn't change the result.
NPM installs all dependencies correctly; I even get the nodemon banner, but it doesn't restart the server when I make changes.
Would be glad if anyone could help.
package.json:
{
  "name": "nodejs_mariadb_docker_test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node src/index.js",
    "dev": "nodemon -L src/index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.2",
    "mariadb": "^2.5.5",
    "nodemon": "^2.0.15"
  }
}
Dockerfile for nodejs:
# Specifies the image of your engine
FROM node:16.13.2
# The working directory inside your container
WORKDIR /app
# Get the package.json first to install dependencies
COPY package.json /app
# This will install those dependencies
RUN npm install
# Copy the rest of the app to the working directory
COPY . /app
# Run the container
CMD ["npm", "run", "dev"]
and the docker compose file:
version: "3"
services:
node:
build: .
container_name: express-api
ports:
- "80:8000"
depends_on:
- mysql
mysql:
image: mariadb:latest
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: "password"
volumes:
- mysqldata:/var/lib/mysql
- ./mysql-dump:/docker-entrypoint-initdb.d
volumes:
mysqldata: {}
So the obvious problem is that you do not mount your code into the container. That is why nodemon cannot see any changes and react to them.
Additionally, it may be more straightforward to develop the application locally and only use Docker as a means to package/ship it.
If you still want to go down this route, I would suggest something like this.
services:
  express-api:
    build: ./
    # overwrite the prod command
    command: npm run dev
    ports:
      - "80:8000"
    volumes:
      # mount your code folder into the app folder
      - .:/app
  # mysql stuff ...
In your Dockerfile you can keep the production command, since in development Compose will override it.
FROM node:16.13.2
WORKDIR /app
COPY package.json package-lock.json ./
# use ci to install from the lock file,
# to avoid surprises in prod
RUN npm ci
COPY . ./
# use the prod command
CMD ["npm", "run", "start"]
This will do a bit of redundant work in development, like copying the code, but it should be OK.
Additionally, you may want to use a .dockerignore to ignore the mysqldump for example. Otherwise, it will be copied into the image, which is probably not desirable.
Also note that with npm ci your dependencies are locked and won't update automatically. It will also throw errors if your lock file is not in sync with package.json. This is what you want for production. If you develop locally, you can run npm install locally, or via docker exec, to bump dependencies when required. Then you can check that nothing is broken, and be confident the prod image will be fine, since it installs from the lock file again.
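For example, a minimal .dockerignore sketch for this setup (mysql-dump taken from the compose file above; adjust to your project):
node_modules
mysql-dump
.git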

@prisma/client did not initialize yet. Please run "prisma generate" and try to import it again

I am using Prisma, Postgres, Docker, Kubernetes.
npx prisma migrate dev works,
and npx prisma generate produces the output below:
✔ Generated Prisma Client (2.23.0) to ./node_modules/@prisma/client in 68ms
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
but when I try to use it in my route file it produces the error:
new-route.ts
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
my Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install --only=prod
COPY . .
CMD ["npm", "start"]
I know that this has been marked as solved, but I just wanted to share my setup for anyone interested.
Dockerfile
# Build image
FROM node:16.13-alpine as builder
WORKDIR /app
# Not sure if you will need this
# RUN apk add --update openssl
COPY package*.json ./
RUN npm ci --quiet
COPY ./prisma prisma
COPY ./src src
RUN npm run build
# Production image
FROM node:16.13-alpine
WORKDIR /app
ENV NODE_ENV production
COPY package*.json ./
RUN npm ci --only=production --quiet
COPY --chown=node:node --from=builder /app/prisma /app/prisma
COPY --chown=node:node --from=builder /app/src /app/src
USER node
EXPOSE 8080
CMD ["node", "src/index.js"]
package.json
{
  "name": "example",
  "description": "",
  "version": "0.1.0",
  "scripts": {
    "generate": "npx prisma generate",
    "deploy": "npx prisma migrate deploy",
    "dev": "npm run generate && nodemon --watch \"src/**\" --ext \"js,json\" --exec \"node src/index.js\"",
    "build": "npm run generate",
    "start": "npm run build && node build/index.js"
  },
  "prisma": {
    "schema": "prisma/schema.prisma"
  },
  "dependencies": {
    "@prisma/client": "^3.6.0"
  },
  "devDependencies": {
    "@tsconfig/node16": "^1.0.2",
    "@types/node": "^16.11.12",
    "nodemon": "^2.0.15",
    "prisma": "^3.6.0"
  }
}
I run this in Kubernetes. To make things smooth with the database and migrations, I run an initContainer that performs the prisma migrate deploy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: EXAMPLE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: EXAMPLE
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: EXAMPLE
    spec:
      containers:
        - image: DOCKER_IMAGE
          imagePullPolicy: IfNotPresent
          name: SERVICE_NAME
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
      initContainers:
        - command:
            - npm
            - run
            - deploy
          image: DOCKER_IMAGE
          imagePullPolicy: IfNotPresent
          name: database-migrate-deploy
(This is a live service; I just copied it and stripped away anything non-essential.)
I hope this is of use to someone.
I usually don't use Docker for this while developing, but I hit this issue every time I change something in my schema.prisma and have to run npx prisma generate. The solution for me is to restart the node application by running npm start again. Maybe if you restart your containers it will work.
If you are inside a Kubernetes pod, open a shell in the pod from your terminal and run the generate command:
kubectl exec -it pod_name sh
npx prisma generate
Here is another way to solve this.
Since the .prisma folder is needed by the Prisma client, as described in the docs, another approach is to make sure it ships with your code. You can do this as follows.
Wrong way: ship the generated folder
You might think you can include the generated files in the image by adding an exception rule for the .prisma folder to your .dockerignore (notice the exclamation point):
node_modules/
!node_modules/.prisma
But the query engine used by Prisma is different for each operating system, so you could run into trouble.
Correct way: generate the files with the image
Just add RUN npx prisma generate to your Dockerfile before the build command. This way the files are generated during the image build and you don't have to run the prisma generate command in every container. The downside of this method is that your Docker image will be larger. If this is an issue, try one of the other answers.
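A minimal sketch of such a Dockerfile, loosely based on the single-stage image from the question (note that it installs all dependencies, so the prisma CLI is available for generate):
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
# the schema must be in place before generating the client
COPY prisma ./prisma/
RUN npx prisma generate
COPY . .
CMD ["npm", "start"]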
You forgot to copy the prisma directory, since generating the Prisma Client requires the schema.prisma file. You should copy the whole prisma directory in case you also need the migrations.
Your final Dockerfile should contain the following:
WORKDIR /app
COPY package*.json .
COPY prisma ./prisma/
RUN npm install --only=prod
This error happens because Docker doesn't install devDependencies, which means it doesn't pick up the Prisma CLI.
To remedy this, you can move the prisma CLI package from devDependencies to dependencies (making sure to run npm install afterward to update the package-lock.json).
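A sketch of the resulting dependencies block, assuming the Prisma 3.x versions used elsewhere on this page:
"dependencies": {
  "@prisma/client": "^3.6.0",
  "prisma": "^3.6.0"
}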
For me, I just changed start under scripts in my package.json so that my app generates the Prisma types before running:
"start": "npx prisma generate && nodemon server.ts"
This worked for me – I just wanted to drop this here for anyone who runs into the same issue.

How do I run a webpack build from a docker container?

The app I'm making is written in ES6 and other goodies, and is transpiled by webpack inside a Docker container. At the moment everything works: creating the inner directory, installing dependencies, and creating the compiled bundle file.
When running the container, however, it says that dist/bundle.js does not exist, unless I create the bundle file in the host directory first.
I've tried creating a volume for the dist directory: it works the first time, but after making changes and rebuilding it does not pick up the new changes.
What I'm trying to achieve is having the container build and run the compiled bundle. I'm not sure if the webpack part should be in the Dockerfile as a build step or at runtime, since CMD ["yarn", "start"] crashes but RUN ["yarn", "start"] works.
Any suggestions and help are appreciated. Thanks in advance.
|_src
  |_index.js
|_dist
  |_bundle.js
|_Dockerfile
|_.dockerignore
|_docker-compose.yml
|_webpack.config.js
|_package.json
|_yarn.lock
docker-compose.yml
version: "3.3"
services:
server:
build: .
image: selina-server
volumes:
- ./:/usr/app/selina-server
- /usr/app/selina-server/node_modules
# - /usr/app/selina-server/dist
ports:
- 3000:3000
Dockerfile
FROM node:latest
LABEL version="1.0"
LABEL description="This is the Selina server Docker image."
LABEL maintainer="AJ alvaroo@selina.com"
WORKDIR "/tmp"
COPY ["package.json", "yarn.lock*", "./"]
RUN ["yarn"]
WORKDIR "/usr/app/selina-server"
RUN ["ln", "-s", "/tmp/node_modules"]
COPY [".", "./"]
RUN ["yarn", "run", "build"]
EXPOSE 3000
CMD ["yarn", "start"]
.dockerignore
.git
.gitignore
node_modules
npm-debug.log
dist
package.json
{
  "scripts": {
    "build": "webpack",
    "start": "node dist/bundle.js"
  }
}
I was able to get a docker service in the browser with webpack by adding the following lines to webpack.config.js:
module.exports = {
  //...
  devServer: {
    host: '0.0.0.0',
    port: 3000
  },
};
Docker seems to want the server inside the container to listen on 0.0.0.0 and not localhost, which is the default for webpack. Changing the webpack.config.js and copying it into the container when it is built allowed the app to be reached at http://localhost:3000 on the host machine. It worked for my project; hope it works for yours.
I haven't included my src tree structure, but it's basically identical to yours.
I use the following Docker setup to get it to run, and it's how we dev stuff every day.
In package.json we have
"scripts": {
"start": "npm run lint-ts && npm run lint-scss && webpack-dev-server --inline --progress --port 6868",
}
Dockerfile
FROM node:8.11.3-alpine
WORKDIR /usr/app
COPY package.json .npmrc ./
RUN mkdir -p /home/node/.cache/yarn && \
    chmod -R 0755 /home/node/.cache && \
    chown -R node:node /home/node && \
    apk --no-cache add \
    g++ gcc libgcc libstdc++ make python
COPY . .
EXPOSE 6868
ENTRYPOINT [ "/bin/ash" ]
docker-compose.yml
version: "3"
volumes:
yarn:
services:
web:
user: "1000:1000"
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
container_name: "some-app"
command: -c "npm config set proxy=$http_proxy && npm run start"
volumes:
- .:/usr/app/
ports:
- "6868:6868"
Please note this Dockerfile is not suitable for production; it's for a dev environment, as it runs things as root.
With this Dockerfile there is a gotcha.
Because Alpine is on musl and the host is on glibc, node modules installed on the host have compiled natives that won't work in the Docker container. Once the container is up, if you get an error, we run this to fix it (it's a bit of a sticking plaster right now):
docker-compose exec container_name_goes_here /bin/ash -c "npm rebuild node-sass --force"
Icky, but it works.
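If you do want host and container to share the source tree without sharing native builds, a common pattern is to install dependencies inside the image (with a RUN npm install step, which this Dockerfile currently lacks) and mask the host's node_modules with an anonymous volume; a sketch of the compose service, assuming the paths above:
services:
  web:
    volumes:
      - .:/usr/app/
      # anonymous volume: the container keeps its own musl-built
      # node_modules instead of the host's glibc builds
      - /usr/app/node_modules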
Try changing your start script in package.json to perform the build first; doing this, you won't need the RUN command to perform the build in your Dockerfile, and because the bundle is then created inside the container at startup, the bind mount no longer hides a dist directory that only exists in the image:
{
  "scripts": {
    "build": "webpack",
    "start": "webpack && node dist/bundle.js"
  }
}
