I'm working on a webpage built with Astro. I'm fairly new as a frontend developer, and definitely not an expert in Docker, but my current working folder is 270MB in size, dependencies included, yet when I build the Docker image it comes out at 1.32GB.
This is my package.json in case it helps
{
  "name": "personalsite",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "dev": "astro dev",
    "start": "astro dev",
    "build": "astro build",
    "preview": "astro preview",
    "astro": "astro"
  },
  "dependencies": {
    "@astrojs/image": "^0.5.1",
    "@astrojs/svelte": "^1.0.0",
    "@astrojs/tailwind": "^1.0.0",
    "svelte": "^3.50.1",
    "@fortawesome/free-brands-svg-icons": "^6.2.0",
    "@fortawesome/free-solid-svg-icons": "^6.2.0",
    "@tailwindcss/typography": "^0.5.7",
    "astro": "^1.2.1",
    "autoprefixer": "^10.4.8",
    "daisyui": "^2.25.0",
    "postcss": "^8.4.16",
    "svelte-fa": "^3.0.3"
  }
}
This is my Dockerfile:
FROM node:lts-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN yarn install
COPY . .
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["yarn", "run", "start", "--host"]
I've even used the Alpine image for Node.js, but it still seems far too large to me.
Do you know what might be the issue here?
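For reference, I understand a .dockerignore along these lines is what keeps COPY . . from sending the local node_modules and build output into the image on top of what yarn install already produces (the paths here are assumptions for a typical Astro project):
# .dockerignore (assumed paths, adjust to your project)
node_modules
dist
.git
npm-debug.log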
EDIT: I followed the tips from the users in the comments and switched to a multi-stage Dockerfile, but the image is still fairly large: it's now 654MB.
I know that's a big improvement, but I'm still confused about how it can be so large, since the source code is only 60KB (it's just a small personal portfolio site with a couple of animations).
This is the new, updated Dockerfile. Did I miss something?
FROM node:lts as builder
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN yarn install --silent --production=true --frozen-lockfile
COPY . .
FROM node:lts-slim
# I used slim because there were people online who recommended to not
# mix and match distros and lts-alpine uses a different linux distro
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
EXPOSE 3000
RUN chown -R node /usr/src/app
USER node
CMD ["yarn", "run", "start", "--host"]
I realized that, for my use case, I just wanted the output of astro build, which is a bunch of static HTML files with some JavaScript on certain parts, since I don't have server-side rendering enabled.
I completely changed the Dockerfile to the following, and now it's really small: only 9.81MB.
# ---> Build stage
FROM node:18-bullseye as node-build
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN yarn install --silent --production=true --frozen-lockfile
RUN yarn build --silent
# ---> Serve stage
FROM nginx:stable-alpine
COPY --from=node-build /usr/src/app/dist /usr/share/nginx/html
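For completeness, building and running it looks roughly like this; the official nginx image serves /usr/share/nginx/html on port 80, so that is the container port to publish (the tag name is just an example):
docker build -t personalsite .
docker run --rm -p 8080:80 personalsite
# then open http://localhost:8080 on the host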
Related
I am working on a NestJS project. It starts and works well when I run:
npm run start:test
However, if I deploy the project to k8s, it only shows:
[10:44:59 PM] Starting compilation in watch mode...
[10:45:22 PM] Found 0 errors. Watching for file changes.
and it never shows that it is running.
Here is my Dockerfile:
FROM node:16 AS builder
# Create app directory
WORKDIR /app
ENV NODE_ENV='prod'
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY tsconfig.build.json ./
COPY tsconfig.json ./
COPY ca-cert.pem ./
COPY prisma ./prisma/
COPY protos ./protos/
# Install app dependencies
RUN npm install
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:16-alpine
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/protos ./protos
COPY --from=builder /app/tsconfig.build.json ./
COPY --from=builder /app/tsconfig.json ./
COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/ca-cert.pem ./
EXPOSE 3190
CMD [ "npm", "run", "start:test" ]
Here is the relevant part of package.json:
"start:test": "nest start --watch",
"start:dev": "set NODE_ENV=dev && nest start --watch",
"start:debug": "nest start --debug --watch",
"start:prod": " nest start",
Am I missing anything?
You should not run in watch mode in Kubernetes; it just creates overhead. Also make sure there is a proper healthcheck and memory request.
So it's better to change the CMD to:
CMD [ "npm", "run", "start" ]
But why do we need to build the same package again? Building is a pretty heavy process, so it's better to just consume the dist output.
nest start simply ensures the project has been built (same as nest build), then invokes the node command in a portable, easy way to execute the compiled application.
NestJS already provides the correct command, start:prod, but I (and Google) assumed the naked start was the call to invoke. Probably others make the same assumption. So do this instead:
"scripts": {
  "start": "npm run start:prod",
  "start:prod": "node dist/main"
}
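In the Dockerfile this boils down to running the compiled output directly; a minimal sketch of the final stage's command, assuming the build lands in ./dist as in the question:
# run the compiled bundle with plain node instead of nest --watch
CMD [ "node", "dist/main" ]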
See also: How to get back to point 0 of memory consumption with NestJS.
If you still want to work with the existing setup, then make sure these resources are set:
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  requests:
    cpu: 1000m
    memory: 2000Mi
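And for the healthcheck part mentioned above, probes along these lines (the port 3190 matches the EXPOSE in the Dockerfile; the /health path is an assumption, point it at a real endpoint in your app):
livenessProbe:
  httpGet:
    path: /health      # assumed endpoint
    port: 3190
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health      # assumed endpoint
    port: 3190
  initialDelaySeconds: 10
  periodSeconds: 5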
I have an index.ts, and this is my scripts section:
"scripts": {
"test": "jest",
"start": "ts-node-dev ./index.ts"
},
I tried to dockerize it. What should I do? Do I need to add another command for npm build, or generate a .js file?
My Dockerfile looks like this:
FROM node:10-alpine
WORKDIR /
# copy configs to /app folder
COPY package*.json ./
COPY tsconfig.json ./
COPY . .
# check files list
RUN ls -a
RUN npm install
EXPOSE 3001
CMD [ "npm", "start"]
I can't access localhost:3001 in my browser after I run
docker build -t testApp .
then
docker run -p 80:3001 testApp
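For what it's worth, -p maps host:container, so with -p 80:3001 the app would be reachable at http://localhost:80 on the host (assuming the app really listens on 3001, as the EXPOSE suggests); publishing the same port on both sides, and keeping the tag lowercase as Docker requires, would look like:
docker build -t testapp .
docker run -p 3001:3001 testapp
# then open http://localhost:3001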
I'm trying to make nodemon work with Docker so the server restarts after every change, but I can't seem to make it work.
Dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
package.json
"main": "server.ts",
"scripts": {
"build": "tsc -p .",
"start": "nodemon -L src/server.ts"
},
"dependencies": {
"express": "^4.17.1",
"ts-node": "^8.6.2",
"typescript": "^3.7.5"
},
"devDependencies": {
"#types/node": "^13.7.0",
"eslint": "^6.8.0",
"nodemon": "^2.0.12"
}
If I run the server locally, nodemon works, but through Docker it does not (it just runs once). Do you have any idea how to solve it?
Using nodemon in Docker containers like this doesn't make sense. The reason is that whenever you change some code, you need to run docker build to bake it into an image and then run that image as a container.
So the old container actually stops and a new container starts each time you change the code. It is like stopping node and running it again.
There is a scenario where it can make sense: if you mount your code from the host machine into the container, then running nodemon on the mount point would probably be a fair choice. But as your Dockerfile stands, it isn't.
What is the run command of your docker container?
If you need the Docker container to see the changes you make locally, you should mount the volume from the host (local) into the container like this:
docker run -dp 8080:8080 -v $(pwd):/usr/src/app <your-image>
with $(pwd) being your current working directory on the host and <your-image> the name of the image you built.
Or, if you use docker-compose, mount the volume like this:
services:
  your_app:
    build:
      context: .
    volumes: # mount volumes here
      - ./:/usr/src/app
    # rest config...
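One gotcha with a bind mount like this is that it hides the node_modules that npm install created inside the image; a common workaround is to layer an anonymous volume on top of that path, roughly:
services:
  your_app:
    build:
      context: .
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules   # keep the image's node_modules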
I have created a Node application using TypeScript.
{
  "name": "my-app",
  "version": "1.0.0",
  "main": "index.ts",
  "author": "",
  "license": "MIT",
  "scripts": {
    "start": "node -r ts-node/register index.ts"
  },
  "dependencies": {
    "@types/express": "^4.17.3"
  },
  "devDependencies": {
    "ts-node": "^7.0.1",
    "typescript": "^3.4.5"
  }
}
Currently, I use the following Dockerfile to run my application:
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm i
COPY . .
EXPOSE 1234
CMD ["npm", "run", "start"]
I want to run my application using the node command instead of npm:
FROM node:10
WORKDIR /app
COPY package*.json ./
RUN npm i
COPY . .
EXPOSE 1234
CMD ["node", "-r", "ts-node/register", "index.ts"]
But it throws an error like this:
'egister", "index.ts"]' is not recognized as an internal or external command,
operable program or batch file
ts-node is not registered in the WORKDIR environment; you need to use the relative path:
CMD ["node", "-r", "./node_modules/ts-node/register", "index.ts"]
If you want to run other packages' binaries, you need to register the path like this:
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
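With node_modules/.bin on the PATH, locally installed binaries resolve by name, so (assuming the same layout as above) the CMD could call ts-node directly:
CMD ["ts-node", "index.ts"]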
The app I'm making is written in ES6 (and other goodies) and is transpiled by webpack inside a Docker container. At the moment everything works: creating the inner directory, installing dependencies, and creating the compiled bundle file.
When running the container, however, it says that dist/bundle.js does not exist, except if I create the bundle file in the host directory, in which case it works.
I've tried creating a volume for the dist directory and it works the first time, but after making changes and rebuilding it does not pick up the new changes.
What I'm trying to achieve is having the container build and run the compiled bundle. I'm not sure if the webpack part should be in the Dockerfile as a build step or at runtime, since CMD ["yarn", "start"] crashes but RUN ["yarn", "start"] works.
Any suggestions and help are appreciated. Thanks in advance.
|_src
|  |_index.js
|_dist
|  |_bundle.js
|_Dockerfile
|_.dockerignore
|_docker-compose.yml
|_webpack.config.js
|_package.json
|_yarn.lock
docker-compose.yml
version: "3.3"
services:
server:
build: .
image: selina-server
volumes:
- ./:/usr/app/selina-server
- /usr/app/selina-server/node_modules
# - /usr/app/selina-server/dist
ports:
- 3000:3000
Dockerfile
FROM node:latest
LABEL version="1.0"
LABEL description="This is the Selina server Docker image."
LABEL maintainer="AJ alvaroo@selina.com"
WORKDIR "/tmp"
COPY ["package.json", "yarn.lock*", "./"]
RUN ["yarn"]
WORKDIR "/usr/app/selina-server"
RUN ["ln", "-s", "/tmp/node_modules"]
COPY [".", "./"]
RUN ["yarn", "run", "build"]
EXPOSE 3000
CMD ["yarn", "start"]
.dockerignore
.git
.gitignore
node_modules
npm-debug.log
dist
package.json
{
  "scripts": {
    "build": "webpack",
    "start": "node dist/bundle.js"
  }
}
I was able to get a docker service in the browser with webpack by adding the following lines to webpack.config.js:
module.exports = {
  // ...
  devServer: {
    host: '0.0.0.0',
    port: 3000
  },
};
Inside the container the dev server has to bind to 0.0.0.0 rather than localhost, which is webpack's default. Changing webpack.config.js this way and copying it into the container when it is built allowed the app to be reached at http://localhost:3000 on the host machine. It worked for my project; hope it works for yours.
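For completeness, binding to 0.0.0.0 only makes the dev server reachable from outside the container; the port still has to be published when the container is run (the image name here is taken from the compose file above):
docker run -p 3000:3000 selina-server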
I haven't included my src tree structure, but it's basically identical to yours.
I use the following Docker setup to get it to run, and it's how we dev stuff every day.
In package.json we have:
"scripts": {
"start": "npm run lint-ts && npm run lint-scss && webpack-dev-server --inline --progress --port 6868",
}
Dockerfile
FROM node:8.11.3-alpine
WORKDIR /usr/app
COPY package.json .npmrc ./
RUN mkdir -p /home/node/.cache/yarn && \
chmod -R 0755 /home/node/.cache && \
chown -R node:node /home/node && \
apk --no-cache add \
g++ gcc libgcc libstdc++ make python
COPY . .
EXPOSE 6868
ENTRYPOINT [ "/bin/ash" ]
docker-compose.yml
version: "3"
volumes:
yarn:
services:
web:
user: "1000:1000"
build:
context: .
args:
- http_proxy
- https_proxy
- no_proxy
container_name: "some-app"
command: -c "npm config set proxy=$http_proxy && npm run start"
volumes:
- .:/usr/app/
ports:
- "6868:6868"
Please note this Dockerfile is not suitable for production; it's for a dev environment, as it runs things as root.
With this Dockerfile there is a gotcha.
Because Alpine is on musl and the host is on glibc, if we install node modules on the host, the compiled native modules won't work in the Docker container. Once the container is up, if you get an error, we run this to fix it (it's a bit of a sticking plaster right now):
docker-compose exec container_name_goes_here /bin/ash -c "npm rebuild node-sass --force"
Icky, but it works.
Try changing your start script in package.json to perform the build first (doing this, you won't need the RUN command that performs the build in your Dockerfile):
{
  "scripts": {
    "build": "webpack",
    "start": "webpack && node dist/bundle.js"
  }
}