I am using Docker Compose for my local development environment for a Full Stack Javascript project.
Part of my Docker Compose file looks like this:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - ../my-shared-module:/code/node_modules/my-shared-module
I am trying to develop a custom Node module called my-shared-module, which is why I added - ../my-shared-module:/code/node_modules/my-shared-module to the Docker Compose file. The module is hosted in a private Git repo and is defined like this in package.json:
"dependencies": {
  "my-shared-module": "http://gitlab+deploy-token....#gitlab.com/.....git",
My problem is:
When I update my node modules inside the Docker container using npm install, it downloads my-shared-module from my private Git repo into /code/node_modules/my-shared-module, and that overwrites the files on the host in ../my-shared-module, because they are synced.
So my question is: is it possible to have one-way volume sync in Docker?
when host changes, update container
when container changes, don't update host ?
Unfortunately I don't think this is possible in Docker. Mounting a host volume is always two-way, unless you consider a read-only mount to be one-way, but that prevents you from being able to modify the file system with things like npm install.
Your best options here would be either to rebuild the image with the new files each time, or to bake into your CMD a step that copies the mounted files into a new folder outside of the mounted volume. That way any file changes won't be persisted back to the host machine.
You can script something to do this. Mount your host node_modules to another directory inside the container, and in the entrypoint, copy the directory:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - /code/node_modules/my-shared-module
      - ../my-shared-module:/host/node_modules/my-shared-module:ro
Then add an entrypoint script to your Dockerfile with something like:
#!/bin/sh
if [ -d /host/node_modules/my-shared-module ]; then
  cp -r /host/node_modules/my-shared-module/. /code/node_modules/my-shared-module/
fi
exec "$@"
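To wire that in, the script has to be copied into the image and set as the entrypoint; the entrypoint.sh filename and its location below are assumptions about your dev.Dockerfile, not part of the original setup:

```dockerfile
# Hypothetical wiring in dev.Dockerfile; entrypoint.sh is the script above
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# CMD becomes "$@" inside the entrypoint and is exec'd after the copy step
CMD ["npm", "run", "dev"]
```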
Related
As far as I know, a volume in Docker is persistent data for the container, which can map a local folder to a container folder.
Early on, I was facing the Error: Cannot find module 'winston' issue in Docker, which is mentioned in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory that contains node_modules in the container.
After I remove volumes: - ./:/server, the above problem is solved.
However, another problem occurs.
[solved but want explanation] nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason why it works.
Question
What is the cause and explanation for the above 2 issues?
What happens between build and volumes, and what is the relationship between build and volumes in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .    # <-- it takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server    # <-- how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
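As a sketch of that split (the values here are illustrative): anything needed while npm install runs has to be passed as a build argument, with a matching ARG in the Dockerfile, while environment: only affects the running container.

```yaml
services:
  test-web:
    build:
      context: .
      args:
        - NODE_ENV=local   # available during the build (requires ARG NODE_ENV in the Dockerfile)
    environment:
      - NODE_ENV=local     # available only in the running container
```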
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build entirely, just bind-mounting the host directory over an unmodified node image. There's not really any benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.
I have 2 containers, the first ('client') writes files to a volume while the second ('server') needs to read them (this is a simplified version of my requirement). My problem is that I don't know how to access the files from the second container using node.js when it is set with a WORKDIR /app.
(I've seen examples of how to access volume using volumes_from, like this one: https://phoenixnap.com/kb/how-to-share-data-between-docker-containers which works on my tests, but it doesn't demonstrate my settings)
this is my docker-compose file (simplified):
volumes:
  aca_storage:

services:
  server-test:
    image: aca-server:0.0.1
    container_name: aca-server-test
    command: sh -c "npm install && npm run start:dev"
    ports:
      - 8080:8080
    volumes_from:
      - client-test:ro
    environment:
      NODE_ENV: development
  client-test:
    image: aca-client:0.0.1
    container_name: aca-client-test
    ports:
      - 81:80
    volumes:
      - aca_storage:/app/files_to_share
This is the docker file for the aca-server image:
FROM node:alpine
WORKDIR /app
COPY ["./package*.json", "./"]
RUN npm install -g nodemon
RUN npm install --production
COPY . .
CMD [ "node", "main.js"]
on my server's node app I'm trying to read files like this:
fs.readdir(PATH_TO_SHARED_VOLUME, function (err, files) {
  if (err) {
    return console.log('Unable to scan directory: ' + err);
  }
  files.forEach(function (file) {
    console.log(file);
  });
});
but all my tests to fill PATH_TO_SHARED_VOLUME with a valid path failed. For example:
/aca_storage
/aca_storage/_data
/acaproject_aca_storage
/acaproject_aca_storage/_data
(acaproject is the VS Code workspace name which I noticed is being added automatically)
Using the Docker CLI on the 'aca-server-test' container, I'm getting:
/app #
which with ls shows only the files/folders of my node.js app, but doesn't give access to the volume 'aca_storage' as happens with the examples I can find on the internet.
If relevant, my environment is:
Windows 10 Home with WSL2
Docker Desktop set as Linux Containers
I'm a noob with Linux and Docker, so as many details as possible will be appreciated.
You can mount the same storage into different containers on different paths. I'd avoid using volumes_from:, which is inflexible and a little bit opaque.
version: '3.8'
volumes:
  aca_storage:
services:
  client:
    volumes:
      - aca_storage:/app/data
  server:
    volumes:
      - aca_storage:/app/files_to_share
In each container the mount path needs to match what the application code is expecting, but they don't necessarily need to be the same path. With this configuration, in the server code, you'd set PATH_TO_SHARED_VOLUME = '/app/files_to_share', for example.
If VSCode is adding a bind mount to replace the server image's /app directory with your local development code, volumes_from: will also copy this mount into the client container. That could result in odd behavior.
Sharing files between containers adds many complications; it makes it hard to scale the setup and to move it to a clustered setup like Docker Swarm or Kubernetes. A simpler approach could be for the client container to HTTP POST the data to the server container, which can then manage its own (unshared) storage.
I am trying to dockerize an application where I have a PHP backend and a Vue.js frontend. The backend is working as I expect; however, after running npm run build within the frontend container, I need to copy the build files from the dist folder to the nginx container, or to the host and then use a volume to bring those files to the nginx container.
I tried to use named volume
services:
  frontend:
    .....
    volumes:
      - static:/var/www/frontend/dist
  nginx:
    .....
    volumes:
      - static:/var/www/frontend/dist

volumes:
  static:
I also tried the following, as suggested on here, to bring the dist folder back to the host
services:
  frontend:
    .....
    volumes:
      - ./frontend/dist:/var/www/frontend/dist
However, none of the above options is working for me. Below are my docker-compose.yml file and frontend Dockerfile.
version: "3"
services:
  database:
    image: mysql:5.7.22
    .....
  backend:
    build:
      context: ./docker/php
      dockerfile: Dockerfile
    .....
  frontend:
    build:
      context: .
      dockerfile: docker/node/Dockerfile
      target: 'build-stage'
    container_name: frontend
    stdin_open: true
    tty: true
    volumes:
      - ./frontend:/var/www/frontend
  nginx:
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    container_name: nginx
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - ./backend/public:/var/www/backend/public:ro
      - ./frontend/dist:/var/www/frontend/dist:ro
    depends_on:
      - backend
      - frontend
Frontend Dockerfile
# Develop stage
FROM node:lts-alpine as develop-stage
WORKDIR /var/www/frontend
COPY /frontend/package*.json ./
RUN npm install
COPY /frontend .
# Build stage
FROM develop-stage as build-stage
RUN npm run build
You can combine the frontend image and the Nginx image into a single multi-stage build. This basically just involves copying your docker/node/Dockerfile as-is into the start of docker/nginx/Dockerfile, and then COPY --from=build-stage into the final image. You will also need to adjust some paths since you'll need to make the build context be the root of your project.
# Essentially what you had in the question
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend .
RUN npm run build
# And then assemble the Nginx image with content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
Once you've done this, you can completely delete the frontend container from your docker-compose.yml file. Note that it never did anything – the image didn't declare a CMD, and the docker-compose.yml didn't provide a command: to run either – and so this shouldn't really change your application.
You can use a similar technique to copy the static files from your PHP application into the Nginx proxy. When all is said and done, this leaves you with a simpler docker-compose.yml file:
version: "3.8"
services:
  database:
    image: mysql:5.7.22
    .....
  backend:
    build: ./docker/php   # don't need to explicitly name Dockerfile
    # note: will not need volumes: to export files
    .....
  # no frontend container any more
  nginx:
    build:
      context: .   # because you're COPYing files from other components
      dockerfile: docker/nginx/Dockerfile
    restart: unless-stopped
    ports:
      - 80:80
    # no volumes:, everything is built into the image
    depends_on:
      - backend
(There are two practical problems with trying to share content using Docker named volumes. The first is that volumes never update their content once they're created, and in fact hide the content in their original image, so using volumes here actually causes changes in your source code to be ignored in favor of arbitrarily old content. In environments like Kubernetes, the cluster doesn't even provide the copy-on-first-use that Docker has, and there are significant limitations on how files can be shared between containers. This works better if you build self-contained images and don't try to share files.)
I am aiming to configure docker so that when I modify a file on the host the change is propagated inside the container file system.
You can think of this as hot reloading for server side node code.
The nodemon file watcher should restart the server in response to file changes.
However, these file changes on the host volume don't seem to be reflected inside the container: when I inspect the container using docker exec pokerspace_express_1 bash and look at a modified file, the changes are not propagated into the container from the host.
Dockerfile
FROM node:8
MAINTAINER therewillbecode
# Create app directory
WORKDIR src/app
RUN npm install nodemon -g
# Install app dependencies
COPY package.json .
# For npm#5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
CMD [ "npm", "start" ]
docker-compose.yml
version: '2'
services:
  express:
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongo:27017/test
      - SERVER_PORT=3000
    volumes:
      - ./:/src/app
    ports:
      - '3000:3000'
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - '27017:27017'
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
.dockerignore
.git
.gitignore
README.md
docker-compose.yml
How can I ensure that host volume file changes are reflected in the container?
Try something like this in your Dockerfile:
CMD ["nodemon", "-L"]
Some people had a similar issue and were able to resolve it by passing -L (which means "legacy watch") to nodemon.
References:
https://github.com/remy/nodemon/issues/419
http://fostertheweb.com/2016/02/nodemon-inside-docker-container/#why-isnt-nodemon-reloading
Right, so with Docker we need to re-build the image or figure out some clever solution.
You probably do not want to rebuild the image every time you make a change to your source code.
Let's figure out a clever solution. Let's generalize the Dockerfile a bit to solve your problem and also help others.
So this is the boilerplate Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Remember, during the image building process we are creating a temporary container. When we make the copies we are essentially taking a snapshot of the contents of /src and /public. It's a snapshot that is locked in time and, by default, will not be updated by making changes to the code.
So in order to get these changes to the files in /src and /public, we need to abandon doing a straight copy; instead, we are going to adjust the docker run command that we use to start up our container.
We are going to make use of a feature called volume.
With a Docker volume we set up a placeholder inside our Docker container, so instead of copying over our entire /src directory we put a reference to those files, giving us access to the files and folders on the local machine.
We are setting up a mapping from a folder inside the container to a folder outside the container. The command is a bit painful, but once it's documented here you can bookmark this answer.
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
-v $(pwd):/app is used to set up a volume against the present working directory. This is a shortcut: we are saying get the present working directory, get everything inside of it, and map it up into our running container. It's long-winded, I know.
To implement this you will have to first rebuild your docker image by running:
docker build -f Dockerfile.dev .
Then run:
docker run -p 3000:3000 -v $(pwd):/app <image_id>
Then you are going to very quickly get an error message: the react-scripts not found error. You will see that message because I skipped the -v /app/node_modules.
So what's up with that?
The volume command sets up a mapping, and when we do that, we are saying take everything inside of our present working directory and map it up to our /app folder. But the issue is there is no node_modules folder on the host, which is where all our dependencies exist.
So the /node_modules folder got overwritten.
So we are essentially pointing to nothing, and that's why we need that -v /app/node_modules with no colon, because the colon is to map a folder inside the container to a folder outside the container. Without the colon we are saying we want it to be a placeholder; don't map it up against anything.
Now, go ahead and run: docker run -p 3000:3000 -v $(pwd):/app <image_id>
Once done, you can make all the changes you want to your project and see them "hot reload" in your browser. No need to figure out how to implement Nodemon.
So what's happening there is that any changes made to your local file system get propagated into your container; the server inside your container sees the change and updates.
Now, I know it's hard and annoying to remember such a long command. Enter Docker Compose.
We can make use of Docker Compose to dramatically simplify the command we have to run to start up the container.
So to implement that you create a Docker Compose file and inside of it you will include the port setting and the two volumes that you need.
Inside your root project, make a new file called docker-compose.yml.
Inside there you will add this:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
Then run: docker-compose up
Daniel's answer partially worked for me, but the hot reloading still doesn't work. I'm using a Windows host and had to change his docker-compose.yml to
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App
(I changed the volumes arguments from /app/node_modules to /App/node_modules and from .:/app to .:/App. This enables changes to be passed to the container; however, the hot reloading still doesn't work. I have to use docker-compose up --build each time I want to refresh the app.)
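If the remaining problem is that file-change events don't cross the Windows host boundary into the container, a commonly used workaround is to make the watcher poll for changes instead; the CHOKIDAR_USEPOLLING variable is honored by the chokidar-based watcher in react-scripts, though whether it applies depends on your toolchain:

```yaml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - CHOKIDAR_USEPOLLING=true   # watcher polls for changes instead of relying on fs events
    volumes:
      - /app/node_modules
      - .:/app
```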
I'm using docker-compose to set up nginx and node
services:
  nginx:
    container_name: nginx
    build: ./nginx/
    ports:
      - "80:80"
      - "443:443"
    links:
      - node:node
    volumes_from:
      - node
    volumes:
      - /etc/nginx/ssl:/etc/nginx/ssl
  node:
    container_name: node
    build: .
    env_file: .env
    volumes:
      - /usr/src/app
      - ./logs:/usr/src/app/logs
    expose:
      - "8000"
    environment:
      - NODE_ENV=production
    command: npm run package
I have node and nginx share the same volume so that nginx can serve the static content generated by node.
When I update the source code in node, I remove the node container and rebuild it via the below:
docker rm node
docker-compose -f docker-compose.prod.yml up --build -d node
I can see that the new node container has the updated source code with the proper updated static content
docker exec -it node bash
root@e0cd1b990cd2:/usr/src/app# cat public/style.css
This shows the updated content I want to see:
.project_detail .owner{color:#ccc;padding:10px}
However, when i login to the nginx container
docker exec -it nginx bash
root@a459b271e787:/# cat /usr/src/app/public/style.css
.project_detail .owner{padding:10px}
As you can see, nginx is not able to see the newly updated static files served by node, despite the node update. It does work, however, if I restart the nginx container as well.
Am I doing something wrong? Do I have to restart both the nginx and node containers to see the updated content?
Instead of sharing the volume of one container with another, share a common directory on the host with both containers. For example, if the directory is at /home/user/app, then it should appear in the volumes section like:
volumes:
  - /home/user/app:/usr/src/app
This should be done for both the containers.
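For example, assuming the shared content lives at /home/user/app on the host (an illustrative path, not from the original setup), both services mount it from the host, and nginx can take it read-only:

```yaml
services:
  node:
    build: .
    volumes:
      - /home/user/app:/usr/src/app
  nginx:
    build: ./nginx/
    volumes:
      - /home/user/app:/usr/src/app:ro   # nginx only needs to read the generated files
```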