Node.js Docker container not updating to changes in volume

I am trying to set up a development environment on my Windows machine that runs a frontend and a backend container. So far I have only been working on the backend. All files are on the C drive, which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
backend:
container_name: backend
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- ./backend:/usr/app
environment:
- APP_PORT=80
ports:
- '5000:5000'
client:
container_name: client
build:
context: ./client
dockerfile: Dockerfile
volumes:
- ./client:/app
ports:
- '80:8080'
For some reason, when I make changes to my local files they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I have to rebuild the container each time to see the change take effect.
I have worked with Docker on PHP applications before and did essentially the same thing, so I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious.
Any help would be appreciated.

The difference between Node and PHP here is that PHP automatically picks up file system changes between requests, but a Node server doesn't.
I think you'll see that the file changes get picked up if you restart Node by bouncing the container with docker-compose down and then up (no need to rebuild anything!).
If you want Node to pick up file system changes without bouncing the server, you can use some of the Node tooling. nodemon is one option: https://www.npmjs.com/package/nodemon. Follow the instructions for local installation and update your start script to use nodemon instead of node.
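For example, a minimal package.json sketch (the entry point server.js and the nodemon version are assumptions; adjust them to your project):
{
  "scripts": {
    "start": "nodemon server.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.20"
  }
}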
Also, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. The Dockerfile from that article is below. You missed a step!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]

Related

Cannot run a (Node.js, websocket) container on EC2. Log results posted below

So I have been trying to figure this out for a while now. I am working with Node and Next.js to implement WebRTC using socket.io. I containerized my project and it runs fine on my local machine. I uploaded it to EC2 by following a YouTube tutorial, but whenever I run the task/container it stops with the log results below: it says it cannot find the 'pages' directory, which I did mount in the compose file.
docker-compose.yml
version: '3'
services:
  app:
    image: webrtc
    build: .
    ports:
      - 3000:3000
    volumes:
      - ./pages:/app/pages
      - ./public:/app/public
      - ./styles:/app/styles
      - ./hooks:/app/hooks
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY next.config.js ./next.config.js
CMD ["yarn", "dev"]
I think you need to COPY the whole directory, including "pages"; currently you're only copying the config and the package files...
Instead of COPY next.config.js ./next.config.js, try COPY . . if feasible, as sketched below.
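For reference, a sketch of what the corrected Dockerfile could look like (assuming the pages directory sits in the build context and node_modules is listed in .dockerignore):
FROM node:16-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Copy the whole project, including the pages directory Next.js is looking for
COPY . .
CMD ["yarn", "dev"]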
Otherwise, if you need to keep using docker-compose with the volumes, make sure the mapping to EFS is set up correctly: https://docs.docker.com/cloud/ecs-compose-features/#persistent-volumes
This would be a related matter then: How to mount EFS inside a docker container?

Using Docker with Node image to develop a VuejS (NuxtJs) app

The situation
I have to work on a Vue.js (Nuxt.js) SPA, so I'm trying to use Docker with a Node image to avoid installing Node on my PC, but I can't figure out how to make it work.
The project
The source code is in its own application folder, since it is versioned, and at the root level there is the docker-compose.yaml file.
The folder structure
my-project-folder
├ application
| └ ...
└ docker-compose.yaml
The docker-compose.yaml
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
image: node:lts-alpine
working_dir: /app
volumes:
- ./application:/app
The problem
The container starts but quits immediately with exit status 0 (so it executed correctly), and this way I can't use it to work on the project.
There is probably something I'm missing about the Node image or Docker in general; what I would like to do is connect to the container to run npm commands like install, run start, etc., and then check the application in the browser at localhost:3000 or wherever it is served.
I would suggest using a Dockerfile with node as the base image and creating an entrypoint that runs the application. That eliminates the need for volumes, which are used when we want to maintain some state for our containers.
Your Dockerfile may look something like this:
FROM node:lts-alpine
RUN mkdir /app
COPY application/ /app/
EXPOSE 3000
CMD npm start --prefix /app
You can then either run it directly with the docker run command or use a docker-compose.yaml like the following:
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
build:
context: .
ports:
- 3000:3000
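With that in place, building and starting the app is a single command (a sketch, assuming the Dockerfile and docker-compose.yaml above live in the same folder and the dev server listens on port 3000):
# Build the image and start the container
docker-compose up --build
# Then open http://localhost:3000 in the browser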

Docker with nodemon does not reload my api when code changes

I was working with Docker some weeks ago and was able to live with this issue by stopping the containers and starting them again to see the changes I had made in my code, but now it is really annoying: for every single change I have to kill Docker and then run docker-compose up.
Meanwhile, a friend is using the same container on his Apple machine, and when he makes changes to any server-side code he does not have to restart his app.
I can see the changes when I go into the container, but those changes are not reflected live (in the browser).
My Dockerfile
FROM node:8.11.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Copy application files
COPY tools ./tools/
COPY migrations ./migrations/
COPY seeds ./seeds/
# Attempts to copy "build" folder even if it doesn't exist
COPY .env build* ./build/
RUN npm install -g nodemon
RUN git clone https://github.com/vishnubob/wait-for-it.git
EXPOSE 8080
CMD ["nodemon", "-L", "server"]
My docker-compose.yml
api:
  build: ./
  hostname: api
  container_name: api
  ports:
    - "${APP_PORT}:3000"
  volumes:
    - ./:/usr/src/app
  env_file:
    - ".env"
  command: node tools/run.js
Any suggestions?

Docker not propagating file changes from host to container

I am aiming to configure docker so that when I modify a file on the host the change is propagated inside the container file system.
You can think of this as hot reloading for server side node code.
The nodemon file watcher should restart the server in response to file changes.
However, these file changes on the host volume don't seem to be reflected inside the container: when I open a shell with docker exec -it pokerspace_express_1 bash and inspect a modified file, the changes have not propagated from the host.
Dockerfile
FROM node:8
MAINTAINER therewillbecode
# Create app directory
WORKDIR src/app
RUN npm install nodemon -g
# Install app dependencies
COPY package.json .
# For npm#5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
CMD [ "npm", "start" ]
docker-compose.yml
version: '2'
services:
  express:
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongo:27017/test
      - SERVER_PORT=3000
    volumes:
      - ./:/src/app
    ports:
      - '3000:3000'
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - '27017:27017'
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
.dockerignore
.git
.gitignore
README.md
docker-compose.yml
How can I ensure that host volume file changes are reflected in the container?
Try something like this in your Dockerfile:
CMD ["nodemon", "-L"]
Some people had a similar issue and were able to resolve it by passing -L (which means "legacy watch", i.e. polling) to nodemon.
References:
https://github.com/remy/nodemon/issues/419
http://fostertheweb.com/2016/02/nodemon-inside-docker-container/#why-isnt-nodemon-reloading
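If you would rather keep the flag out of the CMD, nodemon can also read it from a nodemon.json file next to your package.json; a minimal sketch (the watch list is an assumption, adjust it to your layout):
{
  "legacyWatch": true,
  "watch": ["src"]
}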
Right, so with Docker we either need to rebuild the image each time or figure out some clever solution.
You probably do not want to rebuild the image every time you make a change to your source code.
Let's figure out a clever solution. Let's generalize the Dockerfile a bit to solve your problem and also help others.
So this is the boilerplate Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Remember, during the image build we are creating a temporary container, and when we make the copies we are essentially taking a snapshot of the contents of /src and /public. It's a snapshot that is locked in time and, by default, will not be updated when you change the code.
So in order to pick up changes to the files in /src and /public, we need to abandon doing a straight copy; instead, we are going to adjust the docker run command that we use to start up our container.
We are going to make use of a feature called a volume.
With a Docker volume we set up a placeholder inside our Docker container, so instead of copying over our entire /src directory we put a reference to those files, which gives us access to the files and folders on the local machine.
We are setting up a mapping from a folder inside the container to a folder outside the container. The command is a bit painful, but once it's documented here you can bookmark this answer.
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
-v $(pwd):/app sets up a volume mapped to the present working directory. This is a shortcut: we are saying take the present working directory, everything inside of it, and map it into our running container. It's long-winded, I know.
To implement this you will have to first rebuild your docker image by running:
docker build -f Dockerfile.dev .
Then run:
docker run -p 3000:3000 -v $(pwd):/app <image_id>
Then you are going to very quickly get an error message: the "react-scripts not found" error. You will see that message because I skipped the -v /app/node_modules part.
So what's up with that?
The volume command sets up a mapping, and when we do that we are saying take everything inside of our present working directory and map it up to our /app folder; the issue is that there is no node_modules folder on the host, which is where all our dependencies exist.
So the /app/node_modules folder got overwritten.
We are essentially pointing at nothing, and that's why we need that -v /app/node_modules with no colon: the colon is for mapping a folder inside the container to a folder outside the container. Without the colon we are saying we want it to be a placeholder; don't map it up against anything.
Now, go ahead and run: docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
Once done, you can make all the changes you want to your project and see them "hot reload" in your browser. No need to figure out how to implement nodemon.
What's happening there is that any change made to your local file system gets propagated into your container; the server inside your container sees the change and updates.
Now, I know it's hard and annoying to remember such a long command; enter Docker Compose.
We can make use of Docker Compose to dramatically simplify the command we have to run to start up the container.
So to implement that you create a Docker Compose file and inside of it you will include the port setting and the two volumes that you need.
Inside your root project, make a new file called docker-compose.yml.
Inside there you will add this:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
Then run: docker-compose up
Daniel's answer partially worked for me, but hot reloading still doesn't work. I'm using a Windows host and had to change his docker-compose.yml to:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App
(I changed the volume arguments from /app/node_modules to /App/node_modules and from .:/app to .:/App. This enables changes to be passed to the container; however, hot reloading still doesn't work, and I have to run docker-compose up --build each time I want to refresh the app.)
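One more thing worth trying on a Windows host: react-scripts watches files with chokidar, which can be switched to polling through an environment variable. A sketch of that tweak on top of the compose file above (this assumes a Create React App dev server; other servers ignore the variable):
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      # Force polling so file changes on a Windows bind mount are noticed
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - /App/node_modules
      - .:/App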

Docker with node bcrypt — invalid ELF header

I've tried every solution from this post and this post
I'm not finding a solution to get rid of the following error when running docker-compose up:
module.js:598
return process.dlopen(module, path._makeLong(filename));
^
Error: /code/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header
Here's my latest attempt docker-compose.yml
version: "2"
services:
app:
build: ./client
ports:
- "3000:3000"
links:
- auth
volumes:
- ./client:/code
auth:
build: ./auth-service
ports:
- "3002:3002"
links:
- db
volumes:
- ./auth-service:/code
db:
...
And my auth service Dockerfile:
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
CMD npm start
After trying each of the solutions from the above two links, I rebuilt the containers, and it always results in the same error.
Also worth noting: the service runs fine locally, when I don't use Docker.
How do I get docker to work with bcrypt?
Update
I was able to get it working by doing the following:
finding the id of the container: docker ps
accessing the container: docker exec -t -i containerId /bin/bash
installing bcrypt: npm install bcrypt
This isn't ideal for portability, though.
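One way to bake that workaround into the image instead is to rebuild bcrypt's native bindings during the build; a sketch against the auth service Dockerfile above (npm rebuild recompiles native addons for the OS inside the container):
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
# Recompile bcrypt for the container's OS in case host-built
# binaries were copied in with the source
RUN npm rebuild bcrypt --build-from-source
CMD npm start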
I spent a few hours trying to solve this, and in the end I came up with the following solution.
My compose file looks like this:
version: "3"
services:
server:
build:
context: ./server
volumes:
- ./server:/usr/src/app
- /usr/src/app/node_modules/
ports:
- 3050:3050
depends_on:
- db
command: ["nodemon", "./bin/www"]
The second volume mount there is the important one, as it gets around the local node_modules issue.
Just for reference, my Dockerfile looks like this:
FROM node
RUN npm install -g nodemon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
EXPOSE 3050
CMD ["nodemon", "./bin/www"]
I struggled with this one a few times. Sadly, the ".dockerignore" solution won't work if you use volumes, since it only affects the COPY command, and only when you build the image.
The only solution I found that I think makes the most sense is to mount as a volume only what you need, which means dividing your source code from the "configuration" files (such as package.json):
- src
-- morecode
-- morecode2
-- server.js
- index.js
- package.json
- jslint.json
- tsconfig.json
- .env
- .dockerignore
- ... etc
and put the volume only on the "src" folder. That way your builds will also be much faster, and your node modules will be built and installed on the correct operating system. Don't forget to add node_modules to .dockerignore to prevent builds from taking unnecessarily long. A sketch of this setup follows below.
Do note that doing so will require a rebuild of the application every time you add a new package, but if you follow npm best practice and split the npm install step in your Dockerfile so it is cached, it will be faster.
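A sketch of what that looks like in a compose file (the service name, paths, and port are illustrative; adjust them to your project):
version: "3"
services:
  server:
    build: .
    ports:
      - "3000:3000"
    volumes:
      # Mount only the source code; node_modules and the config
      # files stay exactly as they were built into the image
      - ./src:/usr/src/app/src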
The reason this error occurs is that the node module bcrypt is first compiled on your original machine (specific to your OS), and when the image is built on Docker it cannot run, since the OS is no longer the same. Solution: create a .dockerignore file in your root folder and add node_modules to it.
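A minimal .dockerignore for that fix:
# Keep host-compiled native modules out of the build context
node_modules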
