Docker Compose: Accessing my webapp from the browser - node.js

I'm finding the docs sorely lacking for this (or, I'm dumb), but here's my setup:
The webapp is running on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?

Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile will simply expose the ports but not open access to your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) will enable access from outside the container (see ports in the docs).
Something like this may help:
ports:
  - "8080:8080"

Related

NPM not found when using npm run start command within shell script from a docker container

I am not sure what I may be doing wrong but I have the following script.sh file sitting at the root of my project:
script.sh
#!/bin/sh
npm run start
envsubst '\$PORT' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf
nginx -g 'daemon off;'
Then I referenced the above script in my Dockerfile as shown below:
Dockerfile
# Build environment
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
COPY script.sh /
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
After building the Docker image successfully, I attempted to run it as a container but all I keep getting back is the following error:
/script.sh: line 2: npm: not found
I expect that the script should be able to pick up the already installed npm from the environment.
What can I do differently to make this work?
You're trying to run two separate programs, so run them in two separate containers.
# Dockerfile.app
FROM node:16.14.2
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
ENV HOST 0.0.0.0
ENV NODE_ENV production
EXPOSE 8080
CMD npm run start
# Dockerfile.nginx
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
You might use a system like Docker Compose to run the two parts together:
# docker-compose.yml
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports:
      - "8080:80"
Running docker-compose up -d will start both containers together. In your Nginx configuration, make sure to proxy_pass http://app:8080, using the Compose service name and the port number the service is listening on, to forward requests to the other container.
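The relevant part of the Nginx config might look like this (a sketch; the server and location blocks are assumptions about your file, and listen 80 matches the 8080:80 mapping above):
server {
    listen 80;
    location / {
        proxy_pass http://app:8080;   # Compose service name + container port
    }
}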
(The Nginx Dockerfile looks short, but it's correct. The Docker Hub nginx image already knows how to run the envsubst line from your script in its own entrypoint script and it has a correct default command already.)
There are two basic problems in the setup you show in the question, both related to trying to run two programs in the same container. The first is that you can't merge images: a second FROM line makes Docker start over from the new base image, so your final image contains only Nginx, not Node or your built application, hence the npm: not found error. The second is that your script starts your application, but won't start the Nginx proxy until after the application exits. There are common workarounds for this (like running one process in the background), but they leave one process or the other unmonitored by Docker, so your application could fail and Docker wouldn't notice it to be able to restart it.

On what PORT is my docker container with a React app running? React's default PORT or EXPOSE from Dockerfile?

I'm a newbie to Docker so please correct me if anything I'm stating is wrong.
I created a React app and wrote a following Dockerfile in the root repository:
# pull official base image
FROM node:latest
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 8080
# Finally runs the application
CMD [ "npm", "start" ]
My goal is to run the docker image in a way, that I can open the React app in my browser (with localhost).
Since in the Dockerfile I'm exposing the app on port 8080, I thought I could run:
docker run -p 8080:8080 -t <name of the docker image>
But apparently the application is accessible on port 3000 inside the container, because when I run:
docker run -p 8080:3000 -t <name of the docker image>
I can access it with localhost:8080.
What's the point of the EXPOSE port in the Dockerfile, when the service running in its container is accessible through a different port?
When containerizing a NodeJS app, do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
EXPOSE is for telling Docker which ports the application inside the container listens on. It doesn't do anything on its own if you don't publish those ports (container -> host).
EXPOSE is very handy when using docker run -P -t <name of the docker image> (-P, capital P) to let Docker automatically publish all the exposed ports to random ports on the host (try it out, then run docker ps or docker inspect <containerId> and check the output).
So if your web server (React app) is running on port 3000 (inside the container), you should EXPOSE 3000 (instead of 8080) to properly integrate with the Docker API.
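For example (a sketch; the image name is a placeholder):
docker run -P -t my-react-image
docker ps    # the PORTS column will show something like 0.0.0.0:32768->3000/tcp
Docker picks the random host port (32768 here) for each container port the image EXPOSEs.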
It's kind of weird.
It's just documentation, in a sense.
https://docs.docker.com/engine/reference/builder/#:~:text=The%20EXPOSE%20instruction%20informs%20Docker,not%20actually%20publish%20the%20port.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
do I always have to make sure that process.env.PORT in my app is the same as the EXPOSE in the Dockerfile?
Yes. You should.
And then you also need to make sure that port actually gets published, whether with the docker run command, in your docker-compose.yml file, or however you plan on running Docker.
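For example, if your app reads process.env.PORT, keep the two in sync (a sketch; the values are illustrative):
version: '3'
services:
  web:
    build: .
    environment:
      - PORT=8080     # what the app listens on (and what the Dockerfile EXPOSEs)
    ports:
      - "8080:8080"   # publish that same container port to the host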
Actually, a React app runs on port 3000 by default, so you must mention ports and expose in docker-compose.yml. Here I'm mapping container port 3000 to host port 8081:
frontend:
  container_name: frontend
  build:
    context: ./frontend/app
    dockerfile: ../Dockerfile
  volumes:
    - ./frontend/app:/home/devops/frontend/app
    - /home/devops/frontend/app/node_modules
  ports:
    - "8081:3000"
  expose:
    - 8081
  command: ["npm", "start"]
  restart: always
  stdin_open: true
And run docker-compose:
$ sudo docker-compose up -d
Then check the running containers to find the published port:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83b970baf16d devops_frontend "docker-entrypoint..." 31 seconds ago Up 30 seconds 8081/tcp, 0.0.0.0:8081->3000/tcp frontend
It's resolved. Check your public port:
$ curl 'http://0.0.0.0:8081'

Run node Docker without port mapping

I am very new to Docker so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
CMD ["npm", "start"]
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app on my browser on localhost:8000
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
But you can use docker-compose to set up the config and run your docker images easily.
First, install docker-compose: Install Docker Compose
Then create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your docker with this command
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) to docker-compose up, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, using that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.
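For example (the container name and IP are illustrative; this works from a Linux host, while Docker Desktop on Mac/Windows does not route traffic to container IPs):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapp
# -> 172.17.0.2 (for example)
curl http://172.17.0.2:8000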

Dockerizing a React App: The app starts inside the container, but it is not accessible from the exposed port

This is a question specifically for the tutorial at: http://mherman.org/blog/2017/12/07/dockerizing-a-react-app/#.Wv3u23WUthF by Michael Herman
Problem: The app starts inside the container, but it is not accessible from the port I just exposed with -p 3000:3000. When I browse to localhost:3000 I get a "This site can't be reached" error.
docker-compose.yaml
version: '3.5'
services:
  sample-app:
    container_name: sample-app
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=development
Dockerfile
# base image
FROM node:9.6.1
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
# RUN npm install react-scripts@1.1.1 -g --silent # Uncomment to silence logs
RUN npm install react-scripts@1.1.1 -g
# start app
CMD ["npm", "start"]
#CMD tail -f /usr/src/app/README.md
###################################
# To Run sample app:
# docker run -it -v ${PWD}:/usr/src/app -v /usr/src/app/node_modules -p 3000:3000 --rm sample-app
Docker logs : https://docs.google.com/document/d/14LRCgjMLAkmdMiuedxAW2GWUAtxmWeJQCNQB2ezdYXs/edit
After running either the compose file or the single container, it shows a successful startup, but nothing thereafter.
When I docker exec into the container, $ curl localhost:3000 returns the proper index.html page
I start up the container with either:
$ docker run -it -v ${PWD}:/usr/src/app -v /usr/src/app/node_modules -p 3000:3000 --rm sample-app
<- (The image sample-app exists )
or
$ docker-compose up
After eliminating all other factors I assume that your application is listening on localhost. Localhost is scoped to the container itself. Therefore to be able to connect to it, you would have to be inside the container.
To fix this, you need to get your application to listen on 0.0.0.0 instead.
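For a Node/Express app, that typically means binding the host explicitly (a sketch; the port is whatever your app actually uses):
// app.js
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('ok'));
// Bind to 0.0.0.0 so the server is reachable from outside the container
app.listen(3000, '0.0.0.0', () => console.log('listening on 0.0.0.0:3000'));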
The problem is you are exposing the app strictly on localhost.
You have to modify your package.json to change that:
"start": "http-server -a localhost -p 3000"
into:
"start": "http-server -a 0.0.0.0 -p 3000"
If the contents of your package.json differ greatly from what's above, the important part is the -a option: it has to point to 0.0.0.0, which means the http-server will listen for all incoming connections.
If you are not sure what to change, just post the essential part of package.json here in your question so we can check it.
I had the same issue when I was using create-react-app, but I managed to solve it by not exposing the port in the Dockerfile and running the following command: docker container run -p ANY_PORT_YOU_WANT:3000 -d image_name. It so happens that by default the React dev server runs on port 3000 in the docker container, and by mapping any container port other than 3000, that connection flow is killed. The start script uses 0.0.0.0 by default. See Issue

How can I set up nodejs and express as a docker container on digital ocean?

I have tried to get this working but I am struggling to expose the node app on port 80. Also I want to be sure everything else is secure.
UPDATE:
Trying to be more clear...
I am using this Dockerfile
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8888
CMD [ "node", "index.js" ]
Then I use this command to start the container
$ docker run -p 8888:80 christmedical/christ-medical-server
From my Docker public IP I get nothing.
In docker run reference documentation, in the expose port section says:
-p=[] : Publish a container's port or a range of ports to the host
format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
If you want to access it on port 80 of your host, this should be your command:
docker run -p 80:8888 christmedical/christ-medical-server
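You can then verify the mapping and the app from the droplet itself (a sketch; the PORTS value is what docker ps typically prints for this mapping):
docker ps    # PORTS should show 0.0.0.0:80->8888/tcp
curl http://localhost:80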
