running docker container is not reachable by browser - node.js

I started working with Docker and dockerized a simple node.js app. I'm not able to access my container from the outside world (i.e., from the browser).
Stack:
node.js app with 4 endpoints (I used hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did following steps:
I run from command line from my working dir:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked whether the container is running via docker container ps.
The result: the container is running.
I opened the browser and typed 0.0.0.0:8000 (I also tried 127.0.0.1:8000 and localhost:8000).
The result: the running Docker container is not reachable from the browser.
I also went into the container by typing docker exec -it 81b3d9b17db9 sh and tried to reach my node app inside the container via wget/curl, and that works: I get responses from all node.js endpoints.
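For reference, the in-container check looked roughly like this (the container ID is the one from docker container ps above; the endpoint paths aren't shown in the question, so only the root URL is used here):
# open a shell inside the running container
docker exec -it 81b3d9b17db9 sh
# inside the container, the app answers on port 8000 (busybox wget is available in the alpine image)
wget -qO- http://localhost:8000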
Where could the problem be? Could my Mac be blocking the connection?
Thanks for the help.

Please check the order of the parameters of the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command; simply moving it to right after docker run solved it for me.
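For context, docker run treats everything after the image name as the command to run inside the container, so a -p placed there is never parsed as a publish flag. Using the image and port from the question above:
# wrong: -p comes after the image name, so it is handed to the container as arguments
docker container run -d --name rest-api ares-maros -p 8000:8000
# right: -p comes before the image name, so Docker publishes the port
docker container run -d --name rest-api -p 8000:8000 ares-maros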

Related

Can not connect to node app running in Docker container from browser

I am running a Node.js application in a Docker container. The application is hosted on a Bluehost CentOS VPS to which I connect using SSH. I use the following command to run the app in the container: sudo docker run -p 80:8080 -d skepticalbonobo/dandakou-nodeapp. Then I check that the container is running using sudo docker ps, and sure enough it is. But when I try to access the app from Chrome using the domain name or IP address I get: "This site can’t be reached". I have noticed, however, that in the output of sudo docker ps, under COMMAND I get docker-entrypoint... as opposed to node app.js, and I do not know how to fix it. You can pull the image using docker pull skepticalbonobo/dandakou-nodeapp. Here is the content of my Dockerfile:
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY . .
USER root
RUN chown -R node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Thank you!
The default port for a Node.js app is 3000.
Run the following command and check which port the node app is actually running on:
sudo docker run -ti skepticalbonobo/dandakou-nodeapp /bin/sh
EXPOSE in a Dockerfile is just for documentation purposes.
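For example, once inside that shell you could check where the app actually binds (a rough sketch, assuming the working directory and app.js from the Dockerfile above):
cd /home/node/app
# look for the listen() call to see which port (and host) the app uses
grep -n "listen" app.js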

docker port mapping ignored when adding volumes to the run command

When I start my docker container with:
docker run -it -d -p 8081:8080 --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
I can access my web application just fine in my browser on: localhost:8081
But if I instead run it with the two volumes below:
docker run -it -d -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
The port mapping is ignored - I cannot access it at localhost:8081 but I can access it at localhost:8080.
My dockerfile has:
FROM node:8-alpine
RUN apk update && apk add bash
RUN npm install -g http-server
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Why does adding the volumes to the second docker run command ignore the port mapping from 8081 to 8080?
As suggested below, running without -d (but with volumes):
docker run -it -p 8081:8080 -v ${PWD}:/app -v /app/node_modules --name ${APP_CONTAINER_NAME} ${APP_IMAGE}
gives:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
But I cannot access it on localhost:8080 or localhost:8081 even though the container is indeed running:
$ docker ps
CONTAINER ID   IMAGE       COMMAND              CREATED          STATUS         PORTS                    NAMES
603b1bf02d58   app-image   "http-server dist"   11 seconds ago   Up 5 seconds   0.0.0.0:8081->8080/tcp   app-container
When I instead run it without volumes but still map to 8081 it works:
Starting up http-server, serving dist
Available on:
http://127.0.0.1:8080
http://172.17.0.2:8080
Hit CTRL-C to stop the server
and I can access it on localhost:8081. So something in the application must be getting messed up when the volumes are added; I'm just not sure what. I have also tried running:
docker volume prune
before starting the container but it has no effect. Any ideas why creating the volumes prevents the application from being accessed?
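One way to see what the bind mounts actually put into /app is to shell into the running container (app-container is the name from the docker ps output above) and list the served directory, for example:
# check whether the dist directory built into the image is still there under the mounts
docker exec -it app-container ls /app
docker exec -it app-container ls /app/dist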

Run node Docker without port mapping

I am very new to Docker, so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following:
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm start
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app in my browser at localhost:8000.
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
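For example, to reach the app described above on localhost:8000 you do need to publish the port explicitly (imageName is the placeholder from the question):
# map port 8000 on the host to port 8000 inside the container
docker run -p 8000:8000 imageName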
Alternatively, you can use docker-compose to configure and run your Docker images easily.
First, install docker-compose (see Install Docker Compose).
Second, create a docker-compose.yml next to the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode): docker-compose up -d. Then use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First, get the IP of the Docker container by running:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser, or using curl, with that IP and the exposed port.
Note that you will not be able to access the container on 0.0.0.0 because the port is not mapped.
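For example (172.17.0.2 here is just a typical Docker bridge address; substitute whatever the inspect command printed, plus the port your app exposes):
curl -i http://172.17.0.2:8000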

Docker expose not working

question:
I've created a simple site using Docker and Aurelia. The site runs in Docker, but is not accessible from my localhost. What I did:
create container
docker build -t randy/node-web-app .
docker run -p 9000:9000 -d randy/node-web-app
97f57c3d0da5d03f53b4ba893fdb866ca528e10e6c4a1b310726e514d8957650
see if the scripts ran:
docker logs 97f57c3d0da5
Application Available At: http://localhost:9000
Going into docker container terminal to see if the site is up:
docker exec -it 97f57c3d0da5 /bin/bash
See if it runs:
curl -i localhost:9000
summary:
HTTP/1.1 200 OK
<!DOCTYPE html>
(I actually see the HTML that it should return, but that's too big to post here.)
return to host terminal:
exit
curl -i localhost:9000
curl: (7) Failed to connect to localhost port 9000: Connection refused
How can I make sure I can access that site from my PC? I mapped the port with 9000:9000 in the docker run command, so that shouldn't be a problem.
Dockerfile:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install -g aurelia-cli
RUN npm install
EXPOSE 9000
CMD ["npm", "start"]
I used Kitematic instead of the normal version of Docker on the machine where I deployed this image. Kitematic maps the Docker container to an (internal) IP address instead of the localhost of the host machine.
My solution was to use 192.168.1.67 as the IP instead of 127.0.0.1.
If you install Docker without Kitematic, this should not be an issue.
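If you're not sure which internal IP the Docker VM got, docker-machine can print it (a small sketch; default is the usual machine name and may differ on your setup):
docker-machine ip default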

Docker Compose: Accessing my webapp from the browser

I'm finding the docs sorely lacking for this (or, I'm dumb), but here's my setup:
The webapp is running on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install;
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?
Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile will simply expose the ports but not open access to your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) will enable access from outside the container (see ports in the docs).
Something like this may help:
ports:
  - "8080:8080"
