I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran: docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I touch test.txt on my server, I am not able to GET /test.txt online or see it in the container. The only way I can make changes is to rebuild the image, which is quite tedious.
Did I miss something here when binding the volume or something? How can I make it so that the changes I make locally also appear in the container?
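A likely cause, assuming the app serves files from its working directory: the bind mount target (/my-proj-name) does not match the WORKDIR baked into the image (/home/my-username/my-proj-name), so the running app never sees the mounted host files. Mounting over the same path the app runs from would look like:
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app
Note that mounting over the working directory also hides the node_modules installed in the image, so the host directory needs its own npm install.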
I am not sure what I may be doing wrong, but I have the following script.sh file sitting at the root of my project:
script.sh
#!/bin/sh
npm run start
envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf
nginx -g 'daemon off;'
Then I referenced the above script in my Dockerfile as shown below:
Dockerfile
# Build environment
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
COPY script.sh /
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
After building the Docker image successfully, I attempted to run it as a container but all I keep getting back is the following error:
/script.sh: line 2: npm: not found
I expect that the script should be able to pick up the already installed npm from the environment.
What can I do differently to make this work?
You're trying to run two separate programs, so run them in two separate containers.
# Dockerfile.app
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
CMD npm run start
# Dockerfile.nginx
FROM nginx:alpine
COPY nginx.conf /etc/nginx/templates/default.conf.template
You might use a system like Docker Compose to run the two parts together:
# docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - 8080:80
Running docker-compose up -d will start both containers together. In your Nginx configuration, make sure to proxy_pass http://app:8080, using the Compose service name and the port number the service is listening on, to forward requests to the other container.
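A minimal nginx.conf for this setup might look like the following; the app name and port 8080 come from the Compose file above, listen 80 matches the 8080:80 port mapping, and the single catch-all location is an assumption about your routing:
server {
    listen 80;
    location / {
        # forward every request to the Node container via the Compose service name
        proxy_pass http://app:8080;
    }
}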
(The Nginx Dockerfile looks short, but it's correct. The Docker Hub nginx image's entrypoint already runs envsubst for you over any *.template files in /etc/nginx/templates, writing the results into /etc/nginx/conf.d, which replaces the envsubst line from your script; and it has a correct default command already.)
There are two basic problems in the setup you show in the question, both related to trying to run two programs in the same container. The first is that you can't merge images: a second FROM line makes Docker start over from the new base image, so your final image contains only Nginx, not Node or your built application, hence the npm: not found error. The second is that your script starts your application but won't start the Nginx proxy until after the application exits. There are common workarounds for this (like putting one process in the background), but they leave one process or the other unmonitored by Docker, so your application could fail and Docker wouldn't notice it to be able to restart it.
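(For completeness: multi-stage builds can copy files between stages with COPY --from, which is the right tool only when the Node stage produces static assets rather than a running server. A hypothetical sketch, assuming npm run build emits a build/ directory:
# only applicable if the first stage produces static files
FROM node:16.14.2 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build

FROM nginx:alpine
# copy the built assets out of the first stage; the Node runtime is not carried over
COPY --from=build /usr/src/app/build /usr/share/nginx/html
That still runs only Nginx at runtime; it cannot keep a Node server running alongside it.)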
I am new to Docker and created the following files in a large Node project folder:
Dockerfile
# syntax=docker/dockerfile:1
FROM node:16
# Update npm
RUN npm install --global npm
# WORKDIR automatically creates missing folders
WORKDIR /opt/app
# https://stackoverflow.com/a/42019654/15443125
VOLUME /opt/app
RUN useradd --create-home --shell /bin/bash app
COPY . .
RUN chown -R app /opt/app
USER app
ENV NODE_ENV=production
RUN npm install
# RUN npx webpack
CMD [ "sleep", "180" ]
docker-compose.yml
version: "3.9"
services:
app:
build:
context: .
ports:
- "3000:3000"
volumes:
- ./dist/dockerVolume/app:/opt/app
And I run this command:
docker compose up --force-recreate --build
It builds the image, starts a container and I added a sleep to make sure the container stays up for at least 3 minutes. When I open a console for that container and run cd /opt/app && ls, I can verify that there are a lot of files. project/dist/dockerVolume/app gets created by Docker, but nothing is written to it at any point.
There are no errors or warnings or other indications that something isn't set up correctly.
What am I missing?
First you should move the VOLUME declaration to the end of the Dockerfile, because:
If any build steps change the data within the volume after it has been declared, those changes will be discarded. (Documentation)
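For example, the tail of your Dockerfile would become (a sketch; everything above RUN npm install stays unchanged):
ENV NODE_ENV=production
RUN npm install
# RUN npx webpack
# declare the volume only after every build step that writes to /opt/app
VOLUME /opt/app
CMD [ "sleep", "180" ]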
After this you will face the issue of how bind mounts and Docker volumes work. Unfortunately, if you use a bind mount, the contents of the host directory always replace the files that are already in the container. Files will only appear in the host directory if they are created at runtime by the container.
Also see:
Docker docs: bind mounts
Docker docs: volumes
To solve the issue, you could use any of these workarounds, depending on your use case:
Use volumes in your docker-compose.yml file instead of bind mounts (Documentation)
Create the files you want to run on the host instead of in the image, and bind mount them into the container.
Use a shell script in the container that creates the necessary files (if they are missing) when the container is starting (so the bind mount is initialized and the changes persist), and after that starts your processes; see the sketch below.
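A sketch of that last workaround; the script name, the /opt/app-image staging path, and the seed file check are assumptions about your setup:
#!/bin/sh
# docker-entrypoint.sh
# The image would COPY the app to /opt/app-image instead of /opt/app,
# and this script seeds the (initially empty) bind mount from that copy.
if [ ! -f /opt/app/package.json ]; then
  cp -R /opt/app-image/. /opt/app/
fi
exec "$@"
The Dockerfile would then set ENTRYPOINT ["/docker-entrypoint.sh"] and keep its CMD; exec "$@" hands control to that command, so Docker still monitors the main process directly.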
I started to work with Docker and dockerized a simple Node.js app. I'm not able to access my container from the outside world (i.e., from a browser).
Stack:
Node.js app with 4 endpoints (I used a hapi server).
macOS
Docker Desktop Community version 2.0.0.2
Here is my dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps:
I ran from the command line in my working dir:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked via docker container ps that the container is running; it is.
I opened the browser and typed 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000) and got no response, so the running Docker container is not reachable from the browser.
I also went into the container (docker exec -it 81b3d9b17db9 sh) and tried to reach the Node app inside the container via wget/curl; that works, and I get responses from all Node.js endpoints.
Where could the problem be? Maybe my Mac is blocking the connection?
Thanks for the help.
Please check the order of the parameters of the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command; simply moving it to right after docker run solved it for me.
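For reference: everything after the image name is treated as the command to run inside the container, not as options for docker run, so a trailing -p never publishes any port:
# wrong: "-p 8000:8000" is passed to the container as its command
docker container run -d --name rest-api ares-maros -p 8000:8000
# right: all options come before the image name
docker container run -d --name rest-api -p 8000:8000 ares-maros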
Good morning. I am trying to run the Dockerfile below to start my mock API and my UI.
When I run them in individual terminals, I am able to see the UI up and running, but when I run them inside a Docker container the API doesn't start for some reason.
Can you help me with this?
# My Docker file.
FROM node:11
# Set working directory for API
RUN mkdir /usr/src/api
WORKDIR /usr/src/api
COPY ./YYY/. /usr/src/api/.
RUN npm install
RUN npm start &
# set working directory for UI
RUN mkdir /usr/src/app/
WORKDIR /usr/src/app/
COPY ./ZZZ/. /usr/src/app/.
ENV PATH /usr/src/app/node_modules/.bin:$PATH
EXPOSE 3000
RUN npm install
RUN npm start
Thanks,
Ranjith
The command npm start starts a web server that only listens on the loopback interface of the container. To fix this, in package.json, under start, add --host 0.0.0.0. This will allow you to access the app in your browser using the container IP.
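A sketch of what that could look like in package.json, assuming a dev server that accepts a --host flag (webpack-dev-server and Vite do; react-scripts reads a HOST environment variable instead):
{
  "scripts": {
    "start": "webpack serve --host 0.0.0.0"
  }
}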
How do I execute an SSHFS mount to mount a volume from a different server into my Docker image / Docker container?
The Docker container contains a simple NodeJS web server. This web page displays pictures. I have to get those image files from a different server with a different IP.
So far I had this working without a Docker container. For that I had a cron job which executed the SSHFS mount on my system. Then the NodeJS server had the files and I was able to display the pictures.
Now I have to do the same with a Docker container. I'd like to have the volume inside the container, but I don't think that works with docker run -v /path/ [...] because this would require the files to be on the host the container runs on.
Is it possible to add the SSHFS mount to the docker run command or the Dockerfile? Are there any other alternatives?
~ cat Dockerfile
FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 80
CMD [ "npm", "start" ]