I am new to Docker. I am trying to create containers for a React frontend and an Express backend and run both on the same network using Docker Compose.
Below is my Dockerfile for the frontend:
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm","run","start"]
Below is my Dockerfile for the backend:
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN NODE_ENV=development npm install
COPY . .
EXPOSE 5000
CMD ["npm","run","server"]
Below is my docker-compose.yml
version: '3'
services:
  client:
    build:
      context: './frontend'
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    container_name: react_cont
    environment:
      - WATCHPACK_POLLING=true
    networks:
      - mern
    volumes:
      - ./frontend:/app
    depends_on:
      - server
  server:
    build:
      context: './backend'
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    container_name: express_cont
    networks:
      - mern
    volumes:
      - ./backend:/app
networks:
  mern:
The React container is created and runs successfully, but the Express container fails to start with the error:
sh: nodemon: not found
I had installed nodemon as my dev dependency.
Any help is appreciated. Thanks in advance.
Answering my own question.
I had installed nodemon globally on my machine but forgot to install it as a dependency of this project.
Since nodemon was installed globally, running server.js with nodemon never threw an error while I was developing the project locally, prior to moving it into a Docker container.
But because nodemon was neither listed as a dependency in my package.json nor installed separately in the container, it was missing inside the container, which caused the error.
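For anyone hitting the same error, this is roughly what was missing. The script name below is taken from my Dockerfile's CMD ["npm","run","server"], and the version number is only an example:

npm install --save-dev nodemon

package.json (relevant part):

"scripts": {
  "server": "nodemon server.js"
},
"devDependencies": {
  "nodemon": "^2.0.20"
}

After that, rebuild the image so the container's node_modules includes nodemon (the Dockerfile's RUN NODE_ENV=development npm install will then pick it up).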
You can try deleting the node_modules folder in your source code and adding the --production=false flag explicitly to the npm install command. I think it's a caching problem.
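As a rough sketch, applied to the backend Dockerfile from the question, that would look like this (--production=false forces devDependencies such as nodemon to be installed even when npm is configured for production):

FROM node:alpine
WORKDIR /app
COPY package*.json ./
# install devDependencies too, regardless of NODE_ENV / npm production config
RUN npm install --production=false
COPY . .
EXPOSE 5000
CMD ["npm","run","server"]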
You may need to install the nodemon package globally in your Docker image:
RUN NODE_ENV=development npm install && npm --global install nodemon
I am currently setting up a Docker container that will be used to debug a Node.js application. This container needs to support live-reloading (using nodemon) and needs to be a Linux container (my workstation is a Windows machine).
My current setup is the following:
Dockerfile.debug
FROM node:current-alpine
VOLUME /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.debug
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 8000:8000
Everything works fine except the dependencies, because some of them are platform specific. That means it is not possible to simply mount the node_modules directory into the container (like I do with the rest of the codebase). I tried setting up my files so that the dependencies are different for each platform, but I either end up with an empty node_modules directory or with the node_modules directory from the host (the current setup gives me an empty directory). Does anybody know how to fix my problem? I have looked at other solutions (like this one) but they did not work.
I started working with Docker some weeks ago and I was able to work around this issue by stopping the containers and starting them again to see the changes I had made in my code, but now it is really annoying because for every single change I have to kill Docker and then run "docker-compose up".
However, my friend is using the same container on his Apple machine, and when he makes changes to any server-side code he does not have to restart his app.
I can see the changes when I go into the container, but those changes are not reflected live in the browser.
My Dockerfile
FROM node:8.11.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Copy application files
COPY tools ./tools/
COPY migrations ./migrations/
COPY seeds ./seeds/
# Attempts to copy "build" folder even if it doesn't exist
COPY .env build* ./build/
RUN npm install -g nodemon
RUN git clone https://github.com/vishnubob/wait-for-it.git
EXPOSE 8080
CMD ["nodemon", "-L", "server"]
My docker-compose.yml
api:
  build: ./
  hostname: api
  container_name: api
  ports:
    - "${APP_PORT}:3000"
  volumes:
    - ./:/usr/src/app
  env_file:
    - ".env"
  command: node tools/run.js
Any suggestions?
The build folder is ready to be deployed.
You may serve it with a static server:
serve -s build
---> b252a9088991
Removing intermediate container cb5c1e2629c9
Step 16/16 : RUN serve -s build
---> Running in c27b54b31108
serve: Running on port 5000
I created and dockerized a React application using create-react-app and I get the output above when running "docker-compose up",
but nothing is showing on http://0.0.0.0:5000/ or http://localhost:5000/.
version: '3'
services:
  web:
    build: .
    image: react-cli
    container_name: react-cli
    volumes:
      - .:/app
    ports:
      - '3000:3000'
Above is my docker-compose.yml file.
FROM scratch
FROM mhart/alpine-node:6.12.0
RUN npm install -g npm --prefix=/usr/local
RUN ln -s -f /usr/local/bin/npm /usr/bin/npm
CMD [ "/bin/sh" ]
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm install -g serve
CMD serve -s build
EXPOSE 3000
COPY package.json package.json
COPY semantic.json semantic.json
COPY npm-shrinkwrap.json npm-shrinkwrap.json
RUN npm install gulp-header --save-dev
RUN npm install --no-optional
COPY . .
RUN npm run build --production
RUN serve -s build
and this is my Dockerfile
Most likely you are not exposing the container port on the host. Your compose file only publishes port 3000, so you probably need to add
ports:
  - "5000:5000"
to your service definition in the compose file.
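As a rough sketch, the adjusted service could look like this, assuming serve keeps listening on port 5000 inside the container:

version: '3'
services:
  web:
    build: .
    image: react-cli
    container_name: react-cli
    ports:
      - '5000:5000'

With that mapping in place, http://localhost:5000/ should reach the serve process.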
I have an app with the following services:
web/ - holds and runs a Python 3 Flask web server on port 5000. Uses SQLite3.
worker/ - has an index.js file which is a worker for a queue. The web server interacts with this queue using a JSON API over port 9730. The worker uses Redis for storage. The worker also stores data locally in the folder worker/images/
Now this question only concerns the worker.
worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install
COPY . /worker/
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
When I run docker-compose build, everything works as expected and all npm modules are installed in /worker/node_modules as I'd expect.
npm WARN package.json unfold#1.0.0 No README data
> phantomjs#1.9.2-6 install /worker/node_modules/pageres/node_modules/screenshot-stream/node_modules/phantom-bridge/node_modules/phantomjs
> node install.js
<snip>
But when I do docker-compose up, I see this error:
worker_1 | Error: Cannot find module 'async'
worker_1 | at Function.Module._resolveFilename (module.js:336:15)
worker_1 | at Function.Module._load (module.js:278:25)
worker_1 | at Module.require (module.js:365:17)
worker_1 | at require (module.js:384:17)
worker_1 | at Object.<anonymous> (/worker/index.js:1:75)
worker_1 | at Module._compile (module.js:460:26)
worker_1 | at Object.Module._extensions..js (module.js:478:10)
worker_1 | at Module.load (module.js:355:32)
worker_1 | at Function.Module._load (module.js:310:12)
worker_1 | at Function.Module.runMain (module.js:501:10)
Turns out none of the modules are present in /worker/node_modules (on host or in the container).
If on the host, I npm install, then everything works just fine. But I don't want to do that. I want the container to handle dependencies.
What's going wrong here?
(Needless to say, all packages are in package.json.)
This happens because you have added your worker directory as a volume in your docker-compose.yml, and that volume is not mounted during the build.
When docker builds the image, the node_modules directory is created within the worker directory, and all the dependencies are installed there. Then on runtime the worker directory from outside docker is mounted into the docker instance (which does not have the installed node_modules), hiding the node_modules you just installed. You can verify this by removing the mounted volume from your docker-compose.yml.
A workaround is to use a data volume to store all the node_modules, as data volumes copy in the data from the built docker image before the worker directory is mounted. This can be done in the docker-compose.yml like this:
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - ./worker/:/worker/
    - /worker/node_modules
  links:
    - redis
I'm not entirely certain whether this imposes any issues for the portability of the image, but as it seems you are primarily using docker to provide a runtime environment, this should not be an issue.
If you want to read more about volumes, there is a nice user guide available here: https://docs.docker.com/userguide/dockervolumes/
EDIT: Docker has since changed its syntax to require a leading ./ for mounting files relative to the docker-compose.yml file.
The node_modules folder is overwritten by the volume and is no longer accessible in the container. I'm using the native module loading strategy to take the folder out of the volume:
/data/node_modules/ # dependencies installed here
/data/app/ # code base
Dockerfile:
COPY package.json /data/
WORKDIR /data/
RUN npm install
ENV PATH /data/node_modules/.bin:$PATH
COPY . /data/app/
WORKDIR /data/app/
The node_modules directory is not accessible from outside the container because it is included in the image.
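As a sketch of how this pairs with a compose file (the service name and compose version are assumptions), only the code base is bind-mounted, so /data/node_modules stays inside the image:

version: '2'
services:
  app:
    build: .
    volumes:
      # mount only the code base; /data/node_modules keeps the image's modules
      - .:/data/app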
The solution provided by @FrederikNS works, but I prefer to explicitly name my node_modules volume.
My project/docker-compose.yml file (docker-compose version 1.6+) :
version: '2'
services:
  frontend:
    ....
    build: ./worker
    volumes:
      - ./worker:/worker
      - node_modules:/worker/node_modules
    ....
volumes:
  node_modules:
My file structure is:
project/
├── worker/
│   └── Dockerfile
└── docker-compose.yml
It creates a volume named project_node_modules and re-uses it every time I bring my application up.
My docker volume ls output looks like this:
DRIVER VOLUME NAME
local project_mysql
local project_node_modules
local project2_postgresql
local project2_node_modules
I recently had a similar problem. You can install node_modules elsewhere and set the NODE_PATH environment variable.
In the example below I installed node_modules into /install
worker/Dockerfile
FROM node:0.12
RUN ["mkdir", "/install"]
ADD ["./package.json", "/install"]
WORKDIR /install
RUN npm install --verbose
ENV NODE_PATH=/install/node_modules
WORKDIR /worker
COPY . /worker/
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
There's an elegant solution:
Just mount only the app directory, not the whole project directory. This way you won't have trouble with node_modules.
Example:
frontend:
  build:
    context: ./ui_frontend
    dockerfile: Dockerfile.dev
  ports:
    - 3000:3000
  volumes:
    - ./ui_frontend/src:/frontend/src
Dockerfile.dev:
FROM node:7.2.0
#Show colors in docker terminal
ENV COMPOSE_HTTP_TIMEOUT=50000
ENV TERM="xterm-256color"
COPY . /frontend
WORKDIR /frontend
RUN npm install update
RUN npm install --global typescript
RUN npm install --global webpack
RUN npm install --global webpack-dev-server
RUN npm install --global karma protractor
RUN npm install
CMD npm run server:dev
UPDATE: Use the solution provided by @FrederikNS.
I encountered the same problem. When the folder /worker is mounted to the container, all of its content is synchronized (so the node_modules folder will disappear if you don't have it locally).
Due to npm packages that are incompatible across operating systems, I could not just install the modules locally and then launch the container, so...
My solution to this was to wrap the source in a src folder, then link node_modules into that folder, using this index.js file. So, the index.js file is now the starting point of my application.
When I run the container, I mount the /app/src folder to my local src folder.
So the container folder looks something like this:
/app
/node_modules
/src
/node_modules -> ../node_modules
/app.js
/index.js
It is ugly, but it works..
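For completeness, one way the relative link shown in the tree can be created is on the host, so it survives the bind mount; once the local src folder is mounted at /app/src, the link resolves to /app/node_modules inside the container. (This is an assumption about where the link lives; the exact setup is in the linked index.js file.)

# run on the host, from the project root
ln -s ../node_modules src/node_modules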
Due to the way Node.js loads modules, node_modules can be anywhere in the path to your source code. For example, put your source at /worker/src and your package.json in /worker, so /worker/node_modules is where they're installed.
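A minimal sketch of that layout (the entry file name and the compose snippet are assumptions), where only src is bind-mounted so the image's /worker/node_modules is never shadowed:

# worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install
COPY src /worker/src
CMD ["node", "src/index.js"]

# docker-compose.yml (relevant part)
worker:
  build: ./worker
  volumes:
    - ./worker/src:/worker/src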
There is also a simple solution that does not map the node_modules directory into another volume. The idea is to move installing the npm packages into the final CMD command.
Disadvantage of this approach:
npm install runs each time you start the container (switching from npm to yarn might also speed this process up a bit).
worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
COPY . /worker/
CMD /bin/bash -c 'npm install; npm start'
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
Installing node_modules in the container in a folder different from the project folder, and setting NODE_PATH to that node_modules folder, helps me (you need to rebuild the container).
I'm using docker-compose. My project file structure:
-/myproject
--docker-compose.yml
--nodejs/
----Dockerfile
docker-compose.yml:
version: '2'
services:
  nodejs:
    image: myproject/nodejs
    build: ./nodejs/.
    volumes:
      - ./nodejs:/workdir
    ports:
      - "23005:3000"
    command: npm run server
Dockerfile in nodejs folder:
FROM node:argon
RUN mkdir /workdir
COPY ./package.json /workdir/.
RUN mkdir /data
RUN ln -s /workdir/package.json /data/.
WORKDIR /data
RUN npm install
ENV NODE_PATH /data/node_modules/
WORKDIR /workdir
There are two separate requirements I see for Node dev environments: mount your source code INTO the container, and mount the node_modules FROM the container (for your IDE). To accomplish the first, you do the usual mount, but not everything... just the things you need:
volumes:
  - worker/src:/worker/src
  - worker/package.json:/worker/package.json
  - etc...
(The reason not to do - /worker/node_modules is that docker-compose will persist that volume between runs, meaning you can diverge from what is actually in the image, defeating the purpose of not just bind-mounting from your host.)
The second one is actually harder. My solution is a bit hackish, but it works. I have a script to install the node_modules folder on my host machine, and I just have to remember to call it whenever I update package.json (or, add it to the make target that runs docker-compose build locally).
install_node_modules:
	docker build -t building .
	docker run -v `pwd`/node_modules:/app/node_modules building npm install
In my opinion, we should not RUN npm install in the Dockerfile. Instead, we can start a container using bash to install the dependencies before running the actual Node service:
docker run -it -v "$(pwd)/app":/usr/src/app your_node_image_name /bin/bash
root@247543a930d6:/usr/src/app# npm install
You can also ditch your Dockerfile because of its simplicity: just use a base image and specify the command in your compose file:
version: '3.2'
services:
  frontend:
    image: node:12-alpine
    volumes:
      - ./frontend/:/app/
    command: sh -c "cd /app/ && yarn && yarn run start"
    expose: [8080]
    ports:
      - 8080:4200
This is particularly useful for me, because I just need the environment of the image, but operate on my files outside the container and I think this is what you want to do too.
You can just move node_modules up into the / folder.
How it works: Node resolves modules by walking up the directory tree, so a /node_modules folder above /worker is still found even after the bind mount hides /worker/node_modules.
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install \
&& mv node_modules /node_modules
COPY . /worker/
You can try something like this in your Dockerfile:
FROM node:0.12
WORKDIR /worker
CMD bash ./start.sh
Then you should use the Volume like this:
volumes:
  - worker/:/worker:rw
The start script should be part of your worker repository and look like this:
#!/bin/sh
npm install
npm start
So the node_modules are part of your worker volume and get synchronized, and the npm scripts are executed once everything is up.
If you want the node_modules folder available to the host during development, you could install the dependencies when you start the container instead of during build-time. I do this to get syntax highlighting working in my editor.
Dockerfile
# We're using a multi-stage build so that we can install dependencies during build-time only for production.
# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]
# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .
.dockerignore
Add node_modules to .dockerignore to prevent it from being copied when the Dockerfile runs COPY . . (we use volumes to bring in node_modules instead).
**/node_modules
docker-compose.yml
node_app:
  container_name: node_app
  build:
    context: ./node_app
    target: dev-stage # `production-stage` for production
  volumes:
    # For development:
    # If node_modules already exists on the host, they will be copied
    # into the container here. Since `yarn install` runs after the
    # container starts, this volume won't override the node_modules.
    - ./node_app:/usr/src/app
    # For production:
    #
    - ./node_app:/usr/src/app
    - /usr/src/app/node_modules
I tried the most popular answers on this page but ran into an issue: the node_modules directory in my Docker instance would get cached in the named or unnamed mount point, and later would overwrite the node_modules directory that was built as part of the Docker build process. Thus, new modules I added to package.json would not show up in the Docker instance.
Fortunately I found this excellent page which explains what was going on and gives at least 3 ways to work around it:
https://burnedikt.com/dockerized-node-development-and-mounting-node-volumes/
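One common workaround for this caching problem (not necessarily one of the three described on that page) is to recreate anonymous volumes whenever you rebuild the image, so the freshly built node_modules is used instead of the stale copy in the volume:

docker-compose build
docker-compose up --renew-anon-volumes   # short form: docker-compose up -V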
If you don't use docker-compose you can do it like this:
FROM node:10
WORKDIR /usr/src/app
RUN npm install -g @angular/cli
COPY package.json ./
RUN npm install
EXPOSE 5000
CMD ng serve --port 5000 --host 0.0.0.0
Then you build it: docker build -t myname . and you run it by adding two volumes, the second one without source: docker run --rm -it -p 5000:5000 -v "$PWD":/usr/src/app/ -v /usr/src/app/node_modules myname
With Yarn you can move the node_modules outside the volume by setting
# ./.yarnrc
--modules-folder /opt/myproject/node_modules
See https://www.caxy.com/blog/how-set-custom-location-nodemodules-path-yarn