NodeJS webpack building in docker container

I read this about long-term caching and tried to implement it in my project, but the manifest file generates wrong links to assets when I try to build it in a Docker container, although the generation process itself works well.
Dockerfile.web
FROM node:8.2.1-alpine
WORKDIR /web
ADD /tmp/app.tar.gz /web
# At the end node_modules will be removed because of a bug with npm prune.
# In this case we need to re-install production-only deps to reduce the container weight.
RUN yarn install && \
yarn run build-production && \
rm -rf node_modules && \
yarn install --production && \
rm -rf src
RUN adduser -D mySecretUser
USER mySecretUser
Does anyone know what it could be, and why building in a Docker container is different?
I tried removing all images, switching off Docker build caching, and removing the dist directory before generation – none of it worked.
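For reference, those cleanup steps roughly correspond to commands like the following (a sketch; the exact commands are assumptions for illustration, not taken from the original post):
# remove all local images
docker rmi $(docker images -q)
# rebuild without using the build cache
docker-compose build --no-cache
# clear previously generated output before building again
rm -rf dist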

I found that the problem was introduced by my build script, which uses docker-compose.
Docker-compose.yml
version: '3'
services:
  # Web server responsible for server-side rendering.
  # It also adds additional middleware layers when proxying requests
  # to dependent systems such as the API (for example, enriching them with auth data before sending).
  web:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.web
    ports:
      - "${WEBSERVER_PORT}"
    env_file:
      - ./docker/web.env
    volumes:
      - assets:/web/dist/assets
    command: ["yarn", "run", "_production"]
  # Nginx is used as a proxy wrapper placed in front of the web server.
  # It's responsible for static files and proxying to the web server.
  # It can also be used to proxy websocket connections to a specific game server.
  nginx:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.nginx
    volumes:
      - assets:/www/assets
    ports:
      - "80:${NGINX_PORT}"
    env_file:
      - ./docker/web.env
      - ./nginx/nginx.env
    depends_on:
      - web
    command: ["/bin/sh", "-c", "envsubst < /etc/nginx/templates/default.template > /etc/nginx/sites-enabled/default && nginx -g 'daemon off;'"]
volumes:
  assets:
There I used the shared volume assets and didn't clear it when starting docker-compose again.
I was using this command to build:
rm -rf tmp && mkdir -p tmp && tar -czvf tmp/app.tar.gz src config .babelrc mq.json postcss.config.js webpack.*.js package.json yarn.lock && export $(cat ./docker/web.env | grep -v ^# | xargs) && docker-compose -p cruiserwars build,
where I prepare a tar.gz with the sources, set up environment variables, and then use docker-compose.yml to build my project. But there I forgot to remove the volume that had been created before...
So the solution is to use this command instead:
rm -rf tmp && mkdir -p tmp && tar -czvf tmp/app.tar.gz src config .babelrc mq.json postcss.config.js webpack.*.js package.json yarn.lock && export $(cat ./docker/web.env | grep -v ^# | xargs) && docker-compose down -v && docker-compose -p cruiserwars build,
you can see that I have added docker-compose down -v to stop the containers and remove the volume that was created before.
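Alternatively, the stale volume can be removed on its own instead of taking the whole stack down (a sketch; the volume name assumes Compose's default naming combined with the cruiserwars project name used above):
# find the shared assets volume left over from the previous run
docker volume ls
# remove it so the next build starts with a fresh one
docker volume rm cruiserwars_assets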

Related

Docker compose - my images repository tag is <none> when building

I am a newbie in Docker, and while going through a course online, I stumbled into a problem. While trying to build separate dev, prod and test images, only my dev one seems to build correctly.
My prod and test images build with their tag as "<none>", even though I run the build with the following command:
"sudo docker build -t ultimatenode:test --target test ."
Also my prod image is supposed to be smaller in size, because I removed the original node_modules and ./test folder, but there seems to be some mistake on my part.
Would anyone be kind enough to check the problem in the following Dockerfile:
FROM node:16 as base
EXPOSE 80
WORKDIR /app
COPY package*.json ./
RUN npm config list
RUN npm ci \
&& npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
CMD ["node", "server.js"]
#DEVELOPMENT
FROM base as dev
ENV NODE_ENV=development
# NOTE: these apt dependencies are only needed
# for testing. they shouldn't be in production
RUN apt-get update -qq \
&& apt-get install -qy --no-install-recommends \
bzip2 \
ca-certificates \
curl \
libfontconfig \
&& rm -rf /var/lib/apt/lists/*
RUN npm config list
RUN npm install --only=development \
&& npm cache clean --force
COPY . /app
CMD ["nodemon", "server.js"]
#TEST
FROM dev as test
COPY . .
RUN npm audit
#PREPROD
FROM test as preprod
# Removing unnecessary folders
RUN rm -rf ./tests && rm -rf ./node_modules
FROM base as prod
COPY --from=pre-prod /app /app
WORKDIR /app
HEALTHCHECK CMD curl http://127.0.0.1/ || exit 1
CMD ["node", "server.js"]
Also, here is my docker-compose.yml
version: '2.4'
services:
  redis:
    image: redis:alpine
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - db-data:/var/lib/postgresql/data
  vote:
    image: bretfisher/examplevotingapp_vote
    ports:
      - '5000:80'
    depends_on:
      - redis
  result:
    build:
      context: .
      target: dev
    ports:
      - '5001:80'
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development
    depends_on:
      - db
  worker:
    image: bretfisher/examplevotingapp_worker
    depends_on:
      - redis
      - db
volumes:
  db-data:

Running chown in Dockerfile does nothing

I'm having some trouble setting up a Nuxt and Rails container using Docker. The two containers are separate, but interact with each other.
Currently, I'm having trouble running the dev servers for both the Nuxt and the Rails containers due to insufficient permissions. Looking at the logs for both of the containers, it seems that Docker can't do actions such as mkdir.
EACCESS: Permission Denied: 'mkdir: /usr/src/app/.nuxt' # nuxt
EACCESS: Permission Denied: 'mkdir: /usr/src/app/tmp' # rails
My docker-compose.dev.yml file
version: 3
services:
  backend:
    privileged: true
    image: tablevibes-backend
    build:
      dockerfile: Dockerfile-dev
      context: tablevibes-backend
      args:
        UID: ${UID:-1001}
        BUNDLER_VERSION: 2.0.2
        PG_MAJOR: 10
        mode: development
    tty: true
    stdin_open: true
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
      - gem_data_api:/usr/local/bundle:cached
    ports:
      - "3000:3000"
    depends_on:
      - db
    user: rails
  client-ui:
    image: client-ui
    command: yarn run dev
    build:
      context: client-ui
      dockerfile: Dockerfile-dev
      args:
        UID: ${UID:-1001}
        PORT: 5000
        MODE: DEV
    restart: always
    volumes:
      - ./client-ui:/usr/src/app
      - client_ui_node_modules:/usr/src/app/node_modules:cached
    ports:
      - 5000:5000
    user: client-ui
The 2 Dockerfiles
The Rails Dockerfile-dev
FROM ruby:2.6.3
ARG PG_MAJOR
ARG BUNDLER_VERSION
ARG UID
ARG MODE
RUN adduser rails --uid $UID --disabled-password --gecos ""
# Add POSTGRESQL to the source list using the right version
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& echo 'deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
ENV RAILS_ENV $MODE
RUN apt-get update -qq && apt-get install -y postgresql-client-$PG_MAJOR vim
RUN apt-get -y install sudo
WORKDIR /usr/src/app
CMD chown -R rails /usr/src/app
COPY Gemfile /usr/src/app/Gemfile
COPY Gemfile.lock /usr/src/app/Gemfile.lock
ENV BUNDLER_VERSION $BUNDLER_VERSION
RUN gem install bundler:$BUNDLER_VERSION
RUN bundle install
COPY . /usr/src/app
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
The Nuxt Dockerfile-dev
FROM node:10
ARG UID
ARG MODE=DEV
ARG PORT
RUN adduser client-ui --uid $UID --disabled-password --gecos ""
RUN apt-get update
RUN apt-get -y install sudo
RUN mkdir /usr/src/app
RUN chown -R client-ui /usr/src/app
COPY package.json yarn.lock /usr/src/app
RUN yarn install
COPY . /usr/src/app
ENV API_URL=http://localhost:3000/v1
ENV REVIEW_URL=http://localhost:8000
# expose 5000 on container
EXPOSE $PORT
# set app serving to permissive / assigned
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=$PORT
My problem is that those lines where I do RUN chown ... never seem to take place. If I manually go into the containers with docker exec -u root -it backend bash and run chown -R rails . manually, everything works as expected. Likewise I tried running chmod 777 as a test, however that also had no effect on the permission denied error I keep getting.
What might be causing Docker to ignore my chown command?
This Stack Overflow question seems relevant; however, it doesn't quite apply because I don't have any VOLUME mounts inside my Dockerfiles. A user in the comments of the accepted answer has the same issue as me, though unfortunately there is no solution.
Containers are like ogres, or onions: they have layers.
When using VOLUMEs or MOUNTs, the directory (or file) is not actually IN the container, but only appears to be in it.
Your docker-compose file mounts your host directory over /usr/src/app, which as you probably already know is your ./tablevibes-backend directory on your host computer.
services:
  backend:
    volumes:
      - ./tablevibes-backend:/usr/src/app:Z
When you use a volume or mount in this way, the only thing Docker can do is simple CRUD (create, read, update, delete) operations; it cannot (and should not) modify the ownership metadata, since that would mean modifying your host drive, which could be a security issue.
Try this on the host, in your project directory:
sudo chown -R $USER:$USER .
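If the containers run as the rails and client-ui users created with UID 1001 (the default from the compose args above), another option is to give that UID ownership of the bind-mounted sources on the host; a sketch, with paths and the UID assumed from the files above:
# match the container users' UID (1001 in this setup) on the host-mounted source directories
sudo chown -R 1001:1001 ./tablevibes-backend ./client-ui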

Docker x NodeJS - Issue with node_modules

I'm a web developer who is currently working on a Next.js project (a framework for server-side rendering of ReactJS). I'm using a Docker config on this project and I discovered an issue when I add/remove dependencies. When I add a dependency, build my project, and bring it up with docker-compose, my new dependency isn't added to my Docker image. I have to clean my Docker system with docker system prune to reset everything; then I can build and bring up my project. After that, my dependency is added to my Docker container.
I use a Dockerfile to configure my image and different docker-compose files to set different configurations depending on my environment. Here is my configuration:
Dockerfile
FROM node:10.13.0-alpine
# SET environment variables
ENV NODE_VERSION 10.13.0
ENV YARN_VERSION 1.12.3
# Install Yarn
RUN apk add --no-cache --virtual .build-deps-yarn curl \
&& curl -fSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" \
&& tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
&& rm yarn-v$YARN_VERSION.tar.gz \
&& apk del .build-deps-yarn
# Create app directory
RUN mkdir /website
WORKDIR /website
ADD package*.json /website
# Install app dependencies
RUN yarn install
# Build source files
COPY . /website/
RUN yarn run build
docker-compose.yml (dev env)
version: "3"
services:
  app:
    container_name: website
    build:
      context: .
    ports:
      - "3000:3000"
      - "3332:3332"
      - "9229:9229"
    volumes:
      - /website/node_modules/
      - .:/website
    command: yarn run dev 0.0.0.0 3000
    environment:
      SERVER_URL: https://XXXXXXX.com
Here are my commands to run my Docker environment:
docker-compose build --no-cache
docker-compose up
I suppose that something is wrong in my Docker configuration, but I can't spot it. Do you have any idea how to help me?
Thanks!
Your volumes right now are not set up to do what you intend. The current setup below means that you are overriding the contents of the /website directory in the container with your local . directory.
volumes:
  - /website/node_modules/
  - .:/website
I'm sure your intention is to map your local directory into the container first, and then override node_modules with the original contents of the image's node_modules directory, i.e. /website/node_modules/.
Changing the order of your volumes like below should solve the issue.
volumes:
  - .:/website
  - /website/node_modules/
You are explicitly telling Docker you want this behavior. When you say:
volumes:
  - /website/node_modules/
You are telling Docker you don't want to use the node_modules directory that's baked into the image. Instead, it should create an anonymous volume to hold the node_modules directory (which has some special behavior on its first use) and persist the data there, even if other characteristics like the underlying image change.
That means if you change your package.json and rebuild the image, Docker will keep using the volume version of your node_modules directory. (Similarly, the bind mount of .:/website means everything else in the last half of your Dockerfile is essentially ignored.)
I would remove the volumes: block in this setup to respect the program that's being built in the image. (I'd also suggest moving the command: to a CMD line in the Dockerfile.) Develop and test your application without using Docker, and build and deploy an image once it's essentially working, but not before.
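If you do keep the anonymous node_modules volume, a reasonably recent docker-compose can also recreate anonymous volumes on start instead of reusing their old contents (a sketch of that workaround; check that your compose version supports the flag):
docker-compose build
# -V / --renew-anon-volumes recreates anonymous volumes instead of reusing data from previous containers
docker-compose up -V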

Inside Docker container NetworkingError: connect ECONNREFUSED 127.0.0.1:8002

I'm building a Node.js app which runs in a Docker container, and I'm getting the following error:
NetworkingError: connect ECONNREFUSED 127.0.0.1:8000
And if I try with dynamodb-local:8000, it gives me the following error:
NetworkingError: write EPROTO
140494555330368:error:1408F10B:SSLroutines:ssl3_get_record:wrong
version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:252:
I am using the following docker-compose.yml
version: "3"
services:
  node_app:
    build: .
    container_name: 'node_app'
    restart: 'always'
    command: 'npm run start:local'
    ports:
      - "3146:3146"
    links:
      - dynamodb-local
  dynamodb-local:
    container_name: 'dynamodb-local'
    build: dynamodb-local/
    restart: 'always'
    ports:
      - "8000:8000"
The Node.js Docker configuration (node_app) is as follows:
FROM node:latest
RUN mkdir -p /app/node_app
WORKDIR /app/node_app
# Install app dependencies
COPY package.json /app/node_app
#RUN npm cache clean --force && npm install
RUN npm install
# Bundle app source
COPY . /app/node_app
# Build the built version
EXPOSE 3146
#RUN npm run dev
CMD ["npm", "start"]
The DynamoDB Local Docker configuration (dynamodb-local) is as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8000
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
tar xfz /tmp/dynamodb_local_latest && \
rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8000"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
But when I try to connect to local DynamoDB from outside the Docker container, it works perfectly.
Please help me to sort out this issue.
Inside the Docker container, the DB will be available at the host dynamodb-local:8000.
It might be an SSL issue; please check your Apache configuration if you have used the port for another application.
In that case, you can link DynamoDB on another port as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8004
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
tar xfz /tmp/dynamodb_local_latest && \
rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8004"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
Now, in your Docker container, the DB will be available at the host dynamodb-local:8004.
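To verify that the application container can actually reach DynamoDB Local over the Compose network, a quick check like this can help (an illustration; it assumes curl is available in the node_app image and uses the service names from the docker-compose.yml above):
# any HTTP status code (even an error) proves the port is reachable from node_app;
# "connection refused" means the hostname or port is wrong
docker-compose exec node_app curl -s -o /dev/null -w "%{http_code}\n" http://dynamodb-local:8000
# use 8004 instead if you switched DynamoDB Local to that port as shown above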

How to keep node_modules inside container while using docker-compose and a non-root user?

I'm looking for a way to achieve these goals at the same time:
using a non-root user inside the container
keeping node_modules inside the container (so as not to "pollute" the working directory on the host)
not using a Dockerfile
I'm not sure if these goals are considered "best practice". For example, keeping node_modules inside the container has its disadvantages.
Currently my compose file is like this:
services:
  # ...
  node:
    image: "node:9"
    user: "node"
    working_dir: /home/node/app
    environment:
      # - NODE_ENV=production
      - NPM_CONFIG_PREFIX=/home/node/.npm-global
      - PATH=$PATH:/home/node/.npm-global/bin
    volumes:
      - ./proj/:/home/node/app
      - /home/node/app/node_modules # mark1
    ports:
      - "3001:3001"
    command: >
      bash -c "echo hello
      && ls -lh /home/node/app/
      && npm install
      && npm i -g babel-cli
      && npm i -g flow-bin
      && npm start"
    depends_on:
      - redis
but there's "Error: EACCES: permission denied, access '/home/node/app/node_modules'".
If I comment out the # mark1 line, the container runs; however, node_modules will be written onto the host (since ./proj is mounted).
I have read these two articles on the topic:
https://blog.getjaco.com/jaco-labs-nodejs-docker-missing-manual/
http://jdlm.info/articles/2016/03/06/lessons-building-node-app-docker.html
but neither meets my goal.
Update:
I added a line of ls -lh /home/node/app/ and found that node_modules is owned by root. This could be the problem.
I ended up using a Dockerfile. It's minimal. (I kept some commented-out lines in case anyone finds them useful.)
We need to change the owner of node_modules inside the container. At first it seemed the node:9 image didn't require this and that it was only needed for node:9-alpine. (Update: sorry, I had forgotten to remove the previously built container with docker system prune. Both images need this. Here is a discussion on permissions/ownership of named volumes.)
FROM node:9-alpine
#ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
#ENV PATH=$PATH:/home/node/.npm-global/bin
RUN mkdir -p /home/node/app/node_modules
RUN chown -R node:node /home/node/app
#USER node
#WORKDIR /home/node/app
#RUN npm install --silent --progress=false ; \
# npm i -g babel-cli --silent --progress=false ;\
# npm i -g flow-bin --silent --progress=false
The docker-compose.yml ended up looking like this:
services:
  # ...
  node:
    # image: "node:9-alpine"
    build: ./proj
    user: "node"
    working_dir: /home/node/app
    environment:
      # - NODE_ENV=production
      - NPM_CONFIG_PREFIX=/home/node/.npm-global
      - PATH=$PATH:/home/node/.npm-global/bin
    volumes:
      - ./proj/:/home/node/app
      - /home/node/app/node_modules/
    ports:
      - "3006:3001"
    command: >
      /bin/sh -c "echo hello
      && ls -lh /home/node/app/
      && npm install
      && npm i -g babel-cli
      && npm i -g flow-bin
      && npm start"
    depends_on:
      - redis
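To double-check that the node_modules directory inside the container is now writable by the node user, something like this can be run (an illustration; the service name and paths are taken from the compose file above):
# should list node as the owner if the chown in the Dockerfile took effect
docker-compose run --rm node ls -ld /home/node/app/node_modules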
