So I'm trying to get the environment for my project set up to use Docker. The project structure is as follows.
/client
/server
/nginx
docker-compose.yml
docker-compose.override.yml
docker-compose.prod.yml
In the Dockerfile for each of /client, /server, and /nginx I have a base image that installs my dependencies, then a development image that installs dev dependencies, and a production image that builds or runs the image for the client and server respectively.
ex.
# start from a Node image and install production dependencies
FROM node:14.8.0-alpine as base
WORKDIR /client
COPY package.json package-lock.json ./
RUN npm i --only=prod

# development stage: add dev dependencies and run the dev server
FROM base as development
RUN npm install --only=dev
CMD [ "npm", "run", "start" ]

# production stage: copy the source and build the static assets
FROM base as production
COPY . .
RUN npm run build
So here is where my problem comes in.
In /nginx I want nginx in development to just act as a reverse proxy for create-react-app, but in production I want to take client/build from the production client image and copy it into the nginx image so it is served statically, without the overhead of the entire React build tool chain.
ie.
FROM nginx:stable-alpine as base
FROM base as development
COPY development.conf /etc/nginx/nginx.conf
FROM base as production
COPY production.conf /etc/nginx/nginx.conf
COPY --from=??? /client/build /usr/share/nginx/html
^
what goes here?
If anyone has any clue how to get this to work without having to pull from Docker Hub and push images up to Docker Hub every time a change is made, that would be great.
You can COPY --from= another image by name. Just like docker run, the image needs to be local, and Docker won't contact Docker Hub or another registry server if you already have the image.
# Most basic form; "myapp" is the containing directory name
COPY --from=myapp_client /client/build /usr/share/nginx/html
Compose doesn't directly have a way to specify this build dependency, but running docker-compose build twice should do the trick.
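With the default Compose project naming from above (the containing directory name as the image prefix), that could look roughly like:
docker-compose build   # first pass builds and tags the client image locally (e.g. myapp_client)
docker-compose build   # second pass lets the nginx build COPY --from that now-local image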
If you're planning to deploy this, you probably want some control over the name and tag of the image. In docker-compose.yml you can specify both build: and image:, which tells Compose what name to use when it builds the image. You can also use environment variables almost everywhere in the Compose file, and pass an ARG into the build to configure it. Combining all of these would give you:
version: '3.8'
services:
  client:
    build: ./client
    image: registry.example.com/my/client:${TAG:-latest}
  nginx:
    build:
      context: ./nginx
      args:
        TAG: ${TAG:-latest}
    image: registry.example.com/my/nginx:${TAG:-latest}
and in nginx/Dockerfile:
FROM nginx:stable-alpine
ARG TAG=latest
COPY --from=registry.example.com/my/client:${TAG} /client/build /usr/share/nginx/html
# build twice so the client image exists locally before the nginx image copies from it
TAG=20210113 docker-compose build
TAG=20210113 docker-compose build
TAG=20210113 docker-compose up -d
# TAG=20210113 docker-compose push
So I have been trying to figure this out for a while now. I am working with Node and Next.js to implement WebRTC using Socket.IO. I containerized my project and it runs fine on my local machine. I uploaded it to EC2 by following a YouTube tutorial, and whenever I run the task/container it stops with the logs saying it cannot find the 'pages' directory, which I did map in the compose file.
docker-compose.yml
version: '3'
services:
  app:
    image: webrtc
    build: .
    ports:
      - 3000:3000
    volumes:
      - ./pages:/app/pages
      - ./public:/app/public
      - ./styles:/app/styles
      - ./hooks:/app/hooks
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY next.config.js ./next.config.js
CMD ["yarn", "dev"]
I think you need to COPY the whole directory, including "pages"; currently you're only copying the config and package files.
Instead of COPY next.config.js ./next.config.js, try COPY . . if feasible.
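A minimal sketch of what that Dockerfile could look like, assuming pages/, public/, styles/, and hooks/ all live next to the Dockerfile (and that node_modules is excluded via .dockerignore):
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# copy the whole project, including pages/, public/, styles/, and hooks/
COPY . .
CMD ["yarn", "dev"]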
Otherwise, if it's required to use docker-compose with the volumes, make sure the mapping to EFS is set up correctly: https://docs.docker.com/cloud/ecs-compose-features/#persistent-volumes
This would be a related matter then: How to mount EFS inside a docker container?
I am trying to set up a skeleton project for a web app. Since I have no experience using Docker, I followed this tutorial for a Flask+Vue+Docker setup:
https://www.section.io/engineering-education/how-to-build-a-vue-app-with-flask-sqlite-backend-using-docker/
The backend and frontend run correctly on their own; now I wanted to dockerize the parts as described, with docker-compose and separate containers for backend and frontend. But when I try to connect to localhost:8080 I get this:
"This page isn't working, localhost didn't send any data"
This is my frontend Dockerfile:
# Base image
FROM node:lts-alpine
# Install the serve package
RUN npm i -g serve
# Set the working directory
WORKDIR /app
# Copy the package.json and package-lock.json
COPY package*.json ./
# install project dependencies
RUN npm install
# Copy the project files
COPY . .
# Build the project
RUN npm run build
# Expose a port
EXPOSE 5000
# Executables
CMD [ "serve", "-s", "dist"]
and this is the docker-compose.yml
version: '3.8'
services:
  backend:
    build: ./backend
    ports:
      - 5000:5000
  frontend:
    build: ./frontend
    ports:
      - 8080:5000
In the Docker Desktop GUI for the frontend container I get the log message "Accepting connections at http://localhost:3000", but when I open it in the browser it connects me to port 8080.
During my research I found that many people say I have to make the app serve on 0.0.0.0 for it to work from a Docker container, but I don't know how to configure that. I tried adding
devServer: {
  public: '0.0.0.0:8080'
}
to my vue.config.js, which did not change anything. Others suggested changing the docker run command to incorporate the host change, but I don't use that; I use docker-compose up to start the app.
Sorry for my big confusion, I hope someone can help me out here. I really hope it's something simple I am overlooking.
Thanks in advance to everyone trying to help!
As far as I know, a volume in Docker is persistent data for a container, which can map a local folder to a container folder.
Earlier on, I was facing an Error: Cannot find module 'winston' issue in Docker, which is mentioned in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory that contains node_modules in the container.
After I removed volumes: - ./:/server, the above problem was solved.
However, another problem occurs.
[solved but want explanation] nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason for it.
Question
What is the cause and explanation for the above 2 issues?
What happens between build: and volumes:, and what is the relationship between build: and volumes: in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .                 # <------ it takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server           # <------ how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
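For reference, that pattern is a sketch along these lines (the anonymous volume path has to match the WORKDIR used in the image, /server here):
services:
  test-web:
    build: .
    volumes:
      - ./:/server             # bind mount: host code replaces the image contents
      - /server/node_modules   # anonymous volume: keeps the image's installed node_modules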
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
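For example (registry.example.com/my/app is a placeholder name you'd set via image: alongside build:):
docker-compose build
docker-compose push        # or: docker push registry.example.com/my/app
# on the other machine, whose compose file has only image:, not build:
docker-compose pull
docker-compose up -d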
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build, just bind-mounting the host directory over an unmodified node image. There's not really any benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.
I am trying to host a development environment on my Windows machine which hosts a frontend and backend container. So far I have only been working on the backend. All files are on the C Drive which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
  backend:
    container_name: backend
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/usr/app
    environment:
      - APP_PORT=80
    ports:
      - '5000:5000'
  client:
    container_name: client
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./client:/app
    ports:
      - '80:8080'
For some reason, when I make changes to my local files they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I am having to rebuild the container each time to see the changes take effect.
I have worked with Docker in PHP applications before and have basically done the same thing, so I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious as to why this is not working.
Any help would be appreciated.
The difference between Node and PHP here is that PHP automatically picks up file system changes between requests, but a Node server doesn't.
I think you'll see that the file changes get picked up if you restart Node by bouncing the container with docker-compose down then up (no need to rebuild anything!).
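That is, simply:
docker-compose down
docker-compose up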
If you want Node to pick up file system changes without needing to bounce the server, you can use some of the Node tooling. nodemon is one option: https://www.npmjs.com/package/nodemon. Follow the instructions for local installation and update your start script to use nodemon instead of node.
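For instance, something like this in package.json (index.js is a placeholder for your actual entry file):
"scripts": {
  "start": "nodemon index.js"
}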
Plus, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. Their Dockerfile is below. You missed a step!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# this is the step that was missed
COPY . .
CMD [ "npm", "start" ]
I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial install -- i.e. npm only installs again when package.json has changed
There is a clear method for installing new npm modules -- whether that requires a rebuild or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Install app dependencies (and cache if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "run", "dev" ]
and here is the docker-compose.yml
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: 'exit 0'
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db
It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is copy only the individual code directories and files, e.g.:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
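A minimal .dockerignore for this might look something like:
node_modules
npm-debug.log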