I've cloned the following dockerized MEVN app and would like to access it from another PC on the local network.
The box that Docker is running on has an IP of 192.168.0.111, but going to http://192.168.0.111:8080/ from another PC just says it can't be reached. I run other services, like Plex and a Minecraft server, that can be reached at this IP, so I assume it is a Docker config issue. I am pretty new to Docker.
Here is the Dockerfile for the portal. I made a slight change from the repo, adding -p 8080:8080, because I read elsewhere that it would open it up to LAN access.
FROM node:16.15.0
RUN mkdir -p /usr/src/www && \
    apt-get -y update && \
    npm install -g http-server
COPY . /usr/src/vue
WORKDIR /usr/src/vue
RUN npm install
RUN npm run build
RUN cp -r /usr/src/vue/dist/* /usr/src/www
WORKDIR /usr/src/www
EXPOSE 8080
CMD http-server -p 8080:8080 --log-ip
Don't put -p 8080:8080 in the Dockerfile!
You should first build your Docker image using the docker build command:
docker build -t myapp .
Once you've built the image, and confirmed it with docker images, you can run it using the docker run command:
docker run -p 8080:8080 myapp
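Also note that http-server's own -p flag takes a single port number, not a host:container mapping, so the last line of the Dockerfile should be something like:

CMD ["http-server", "-p", "8080", "--log-ip"]

The 8080:8080 host-to-container mapping belongs only in the docker run command.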
Docker listens on the 0.0.0.0 IP address, so other machines on the same network can reach your website at your IP address on whichever port you published. For example, if you use 8080, you are actually listening on 0.0.0.0:8080, and the other machines can reach that website at http://192.168.0.111:8080/. Even without Docker, you can listen on 0.0.0.0 to share your app on the network.
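To illustrate with a minimal Node/Express sketch (the app and handler here are placeholders, not the code from the question):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('reachable from the LAN'));

// '0.0.0.0' accepts connections on all interfaces; 'localhost' or
// '127.0.0.1' would only accept connections from the same machine.
app.listen(8080, '0.0.0.0', () => {
  console.log('Listening on 0.0.0.0:8080');
});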
The box that docker is running on
What do you mean by "box"? Is it some kind of virtual machine, or an actual computer running Linux, Windows, or macOS?
Have you checked that particular box's firewall? (You may need to set up NAT through the firewall to the service running in the box for incoming requests from outside of it.)
I'll be happy to help you out if you provide more detailed information about your environment...
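For example, on an Ubuntu box using ufw, a quick check and fix would look like this (assuming 8080 is the published port):

sudo ufw status verbose   # is the firewall filtering the port?
sudo ufw allow 8080/tcp   # open it if so

On firewalld systems the equivalents are firewall-cmd --list-ports and firewall-cmd --add-port=8080/tcp.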
I'm posting for a friend. He asked for my help and we couldn't figure out what's going on.
My situation is: my application works perfectly on Ubuntu 18.04 when it's not inside a container, but the customer required the use of containers, so I created a Dockerfile so it could be started by a Docker container.
Here's the content of my Dockerfile:
FROM node:8.9.4
ENV HOME=/home/backend
RUN apt-get update
RUN apt-get install -y build-essential libssl-dev
RUN apt-get install -y npm
COPY . $HOME/
WORKDIR $HOME/
RUN npm rebuild node-sass
RUN npm install --global babel-cli
USER root
EXPOSE 6543
CMD ["babel-node", "index.js"]
After building the image, I execute the following Docker run command:
sudo docker run --name backend-api -p 6543:6543 -d backend/backendapi1.0
Taking a look at the log output, I can conclude that the application works properly.
I’ve created a rule in my nginx to redirect from port 90 to 6543 (before using containers it used to work)
server {
    listen 90;
    listen [::]:90;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://localhost:6543;
    }
}
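A quick way to test each hop separately (ports taken from the config above):

curl -v http://localhost:6543/   # hits the container's published port directly
curl -v http://localhost:90/     # goes through the nginx proxy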
P.S.: I've tried to change from localhost to the container's IP and it doesn't work either.
The fun fact is that when I try an internal telnet on 6543, it accepts the connection and closes it immediately.
P.S.: all ports are open on the firewall.
The application works normally outside the container (using port 6543 and redirecting in nginx).
I'd appreciate it if someone could help us find out why it's happening. We don't have much experience creating containers.
Thanks a lot!
Edit: it's an AWS VM, but this is the output when we run the curl command:
We found the solution!!
It was an internal container routing problem...
The following Docker run command solved the problem:
sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip -p 6543:6543 my_dockerhub_image_name
Thanks a lot!!
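One note on this fix: with --network="host" the container shares the host's network stack, so Docker ignores the -p 6543:6543 mapping entirely and the command can be shortened to:

sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip my_dockerhub_image_name

The bridge-network alternative would be to keep -p and make the app inside the container listen on 0.0.0.0 instead of localhost.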
I have a NodeJS/Vue app that I can run fine until I try to put it in a Docker container. I am using a project structure like:
When I do npm run dev I get the output:
listmymeds#1.0.0 dev /Users/.../projects/myproject
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js
and then it builds many modules before giving me the message:
DONE Compiled successfully in 8119ms
I Your application is running here: http://localhost:8080
then I am able to connect via browser at localhost:8080
Here is my Dockerfile:
FROM node:9.11.2-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm run dev
EXPOSE 8080
I then create a docker image with docker build -t myproject . and see the image listed via docker images
I then run docker run -p 8080:8080 myproject and get a message that my application is running here: localhost:8080
However, when I either use a browser or Postman to GET localhost:8080 there is no response.
Also, when I run the container from the command line, it appears to lock up so I have to close the terminal. Not sure if that is related or not though...
UPDATE:
I tried following the Docker logs with
docker logs --follow
and there is nothing other than the last line that my application is running on localhost:8080
This would seem to indicate that my HTTP requests are never making it into my container, right?
I also tried the suggestion to
CMD node_modules/.bin/webpack-dev-server --host 0.0.0.0
but that failed to even start.
It occurred to me that perhaps there is a Docker network issue, possibly left over from an earlier attempt at learning the Kong API. So I ran docker network ls and saw:
NETWORK ID     NAME     DRIVER   SCOPE
1f11e97987db   bridge   bridge   local
73e3a7ce36eb   host     host     local
423ab7feaa3c   none     null     local
I have been unable to stop, disconnect or remove any of these networks. I think the 'bridge' might be one Kong created, but it won't let me whack it. There are no other containers running, and I have deleted all images other than the one I am using here.
Answer
It turns out that I had this in my config/index.js:
module.exports = {
  dev: {
    // Various Dev Server settings
    host: 'localhost',
    port: 8080,
    // ...
  }
}
Per Joachim Schirrmacher's excellent help, I changed host from 'localhost' to '0.0.0.0', and that allowed the container to receive requests from the host.
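For reference, the corrected block looks like this (only the host line changes; the remaining settings are elided):

module.exports = {
  dev: {
    // Various Dev Server settings
    host: '0.0.0.0', // accept connections from outside the container
    port: 8080,
    // ...
  }
}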
With a plain vanilla express.js setup this works as expected. So, it must have something to do with your Vue application.
Try the following steps to find the source of the problem:
Check if the container is started or if it exits immediately (docker ps)
If the container runs, check if the port mapping is set up correctly. It needs to be 0.0.0.0:8080->8080/tcp
Check the logs of the container (docker logs <container_name>)
Connect to the container (docker exec -it <container_name> sh) and check if node_modules exists and contains all the expected modules. Run together, these checks look like the sketch below.
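A consolidated pass over those steps, assuming the container is named myproject (yours will have a generated name unless you pass --name):

docker ps                      # 1. is the container running at all?
docker port myproject          # 2. the mapping should read 0.0.0.0:8080->8080/tcp
docker logs myproject          # 3. anything suspicious in the app output?
docker exec -it myproject sh   # 4. then, inside the container:
ls node_modules | head         #    are the dependencies actually there?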
EDIT
Seeing your last change of your question, I recommend starting the container with the -dit options: docker run -dit -p 8080:8080 myproject to make it go to the background, so that you don't need to hard-stop it by closing the terminal.
Make sure that only one container of your image runs by inspecting docker ps.
EDIT2
After discussing the problem in chat, we found that in the Vue.js configuration there was a restriction to 'localhost'. After changing it to '0.0.0.0', connections from the container's host system are accepted as well.
With Docker version 18.03 and above it is also possible to set the host to 'host.docker.internal' to prevent connections other than from the host system.
I'm pretty new to Docker, so I'm trying to take a node web app that I've written and Docker-ize it. The app is open source, so you can find it and the Dockerfile here: Paw-Wars
So you don't have to click through, the Dockerfile is here:
FROM mhart/alpine-node
WORKDIR /src
ADD . .
RUN npm install
EXPOSE 5050
COPY config.json /src/config.json
CMD npm run docker
So I open up the Docker Quickstart Terminal (I'm on Mac OS X), go to my path, and build it:
docker build -t paw-wars .
After it builds, I run it:
docker run paw-wars
And it spins up just fine and says it's listening on port 5050. I get the IP from docker-machine ip default and try to connect to it on port 5050, but the connection is refused. Most searches I've done while trying to solve this tell me to make sure I'm using the correct IP, and I'm almost positive I am. I'm not really sure what I'm doing wrong. It's not in the repo, but I've also tried binding to 0.0.0.0 in my app (index.js), and that didn't work either.
Thanks!
The issue is that you MUST specify a port in your docker run command. I thought that using EXPOSE in your Dockerfile was sufficient, but it's not; that just exposes the port internally.
docker run -p 5050:5050 paw-wars worked great.
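If you'd rather not repeat the port number, -P (capital P) publishes every EXPOSEd port onto a random high port on the host, which you can then look up:

docker run -d -P --name paw-wars paw-wars
docker port paw-wars 5050   # prints the host side, e.g. 0.0.0.0:32768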
I'm attempting to run a node.js application in debug mode in one Docker container, and attach a debugger from another container onto the application running in the first container.
As such, I'm trying to open up port 5858 to the outside world. However, when I --link another container to the first container (with alias firstContainer), and run nmap -p 5858 firstContainer, I find that port 5858 is closed. The first container has told me that the node.js application is listening on port 5858, I've exposed the port in the Dockerfile, and I've also bound the ports to the corresponding port on my machine (although, I'm not certain that's necessary). When I run nmap on port 8080, all is successful.
How can I open up port 5858 on a Docker container such that I can attach a debugger to this port?
The Dockerfile is:
FROM openshift/base-centos7
# This image provides a Node.JS environment you can use to run your Node.JS
# applications.
MAINTAINER SoftwareCollections.org <sclorg@redhat.com>
EXPOSE 8080 5858
ENV NODEJS_VERSION 0.10
LABEL io.k8s.description="Platform for building and running Node.js 0.10 applications" \
io.k8s.display-name="Node.js 0.10" \
io.openshift.expose-services="8080:http" \
io.openshift.tags="builder,nodejs,nodejs010"
RUN yum install -y \
https://www.softwarecollections.org/en/scls/rhscl/v8314/epel-7-x86_64/download/rhscl-v8314-epel-7-x86_64.noarch.rpm \
https://www.softwarecollections.org/en/scls/rhscl/nodejs010/epel-7-x86_64/download/rhscl-nodejs010-epel-7-x86_64.noarch.rpm && \
yum install -y --setopt=tsflags=nodocs nodejs010 && \
yum clean all -y
# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH
COPY ./s2i/bin/ $STI_SCRIPTS_PATH
# Each language image can have a 'contrib' directory with extra files
# needed to run and build the applications.
COPY ./contrib/ /opt/app-root
# Drop the root user and make the content of /opt/app-root owned by user 1001
RUN chown -R 1001:0 /opt/app-root
USER 1001
# Set the default CMD to print the usage of the language image
CMD $STI_SCRIPTS_PATH/usage
Run with:
docker run -P -p 5858:5858 -p 8080:8080 --name=firstContainer nodejs-sample-app
Taken from/built with instructions from here.
Thanks.
-P automagically maps any exposed ports within a container to random ports on the host machine, while -p allows explicit mapping of ports. Using the --link flag allows two Docker containers to communicate with each other, but does nothing to expose the ports to the outside world (outside the Docker private network).
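Given that, one more thing worth ruling out is the address the debugger is bound to inside the first container: a process bound to 127.0.0.1:5858 is unreachable even from a linked container. A quick check, assuming net-tools is available in the image:

docker exec -it firstContainer bash
netstat -tln | grep 5858   # want 0.0.0.0:5858, not 127.0.0.1:5858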
I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) The VM is all good after using docker-machine create --driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (then adding endpoints in Azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would also help if someone could explain Azure's setup with the VIP vs. the internal IP vs. Docker's EXPOSE.
So at the end of all this, every port that I try to hit at Azure's
AzureVirtualIP:ALL_THE_PORT
just always gives back ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I see its console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID   IMAGE          COMMAND              CREATED         STATUS         PORTS              NAMES
64f4d98b9c75   laslo:latest   node src/server.js   5 seconds ago   Up 5 seconds   0.0.0.0:80->8080   something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you run other containers, don't have the right privileges or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. Curl might not be installed, so maybe you have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080 inside the container, and you need to check the setup.
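A minimal src/server.js that passes all three checks might look like this (the question doesn't show the real file, so this is only a sketch, and it assumes Express is installed):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

// 8080 matches the container side of -p 80:8080; '0.0.0.0' ensures
// connections from outside the container are accepted.
app.listen(8080, '0.0.0.0', () => console.log('listening on 8080'));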