"This page isn’t working localhost didn’t send any data." when navigating to localhost port bound to docker container [duplicate] - node.js

I set up a simple Node server in Docker.
Dockerfile
FROM node:latest
RUN apt-get -y update
ADD example.js .
EXPOSE 1337
CMD node example.js
example.js
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n'+new Date);
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
Now build the image
$ docker build -t node_server .
Now run in container
$ docker run -p 1337:1337 -d node_server
$ 5909e87302ab7520884060437e19ef543ffafc568419c04630abffe6ff731f70
Verify the container is running and ports are mapped:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5909e87302ab node_server "/bin/sh -c 'node exa" 7 seconds ago Up 6 seconds 0.0.0.0:1337->1337/tcp grave_goldberg
Now let's attach to the container and verify the server is running inside:
$ docker exec -it 5909e87302ab7520884060437e19ef543ffafc568419c04630abffe6ff731f70 /bin/bash
And in the container command line type:
root@5909e87302ab:/# curl http://localhost:1337
Hello World
Mon Feb 15 2016 16:28:38 GMT+0000 (UTC)
Looks good right?
The problem
When I execute the same curl command on the host (or navigate with my browser to http://localhost:1337) I see nothing.
Any idea why the port mapping between container and host doesn't work?
Things I already tried:
Running with the --expose 1337 flag

Your ports are being published correctly, but your server is only listening for connections on 127.0.0.1 inside your container:
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n'+new Date);
}).listen(1337, '127.0.0.1');
You need to run your server like this:
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n'+new Date);
}).listen(1337, '0.0.0.0');
Note the 0.0.0.0 instead of 127.0.0.1.

Adding EXPOSE 1337 to the docker file
EXPOSE by itself does not make the port reachable from the host; it mainly documents which ports the image serves.
As BMitch comments:
Expose isn't needed to publish a port or to connect container to container over a shared docker network.
It's metadata for publishing all ports with -P and inspecting the image/container.
So:
Running with the --expose 1337 flag
Not exactly: you need to docker run it with -p 1337:1337
You need either:
build an image with the EXPOSE directive in it (used by -P)
or run it with the port published on the host -p 1337:1337
The test curl http://localhost:1337 was done from within the container, where no EXPOSE or publish is needed.
If you want it to work from the Linux host, you need either EXPOSE plus -P, or -p 1337:1337.
Declaring EXPOSE alone is good for documenting intent, but by itself it does not publish anything.
For instance, in the figure from the original answer (not reproduced here), port 8080 is EXPOSEd and published to the Linux host as 8888.
And if that Linux host is not the actual host, that same port needs to be forwarded to the actual host. See "How to access tomcat running in docker container from browser?".
If localhost does not work from the Linux host, try its IP address:
CID=$(docker run -p 1337:1337 -d node_server)
CIP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CID})
curl http://${CIP}:1337
Or, as mentioned above, make your server listen for connections coming from any IP by binding to 0.0.0.0, the wildcard address that means "all interfaces".

Related

Cannot access a Node.js server when hostname set to 127.0.0.1 but works for 0.0.0.0 [duplicate]

This question already has answers here:
Docker app server ip address 127.0.0.1 difference of 0.0.0.0 ip
(2 answers)
Closed 8 months ago.
I am trying to dockerize a Node.js server.
I use the following index.js and Dockerfile files:
const http = require('http');
const hostname = '127.0.0.1';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm i
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Then I run docker build . -t webapp and docker run -p 8080:3000 webapp, but when I try to open the app in the browser I see "The webpage at http://localhost:8080/ might be temporarily down or it may have moved permanently to a new web address."
Then when I change the hostname in the index.js to 0.0.0.0, it seems to work fine and I can access my app in the browser under http://localhost:8080/.
What is the difference between running an app in a container on the localhost vs. on 0.0.0.0?
In a container, localhost is the container itself. So when you bind to 127.0.0.1 your program will only accept connections coming from inside the container.
You need to bind to 0.0.0.0 for it to accept connections from outside the container.

Docker container does not respond to http request

I'm trying to send an HTTP request through axios, from my localhost (Node server) to a Docker container (which also contains a simple Node server) that belongs to a Docker network and is identified by a specific IP.
I have used Postman, XMLHttpRequests, and axios, but nothing seems to work. I have also tried GET and POST requests, but none of them get any answer from the container side.
Do you have any idea what I am doing wrong?
The .sh file that I'm running to launch the container is:
docker build -t connectimg .
docker network create --subnet=119.18.0.0/16 mynet
docker run -d --name instance2 -p 4002:4000 --net mynet --ip 119.18.0.2 connectimg
and the docker logs result for the instance post-launch is:
{
lo: [
{
address: '127.0.0.1',
netmask: '255.0.0.0',
family: 'IPv4',
mac: '00:00:00:00:00:00',
internal: true,
cidr: '127.0.0.1/8'
}
],
eth0: [
{
address: '119.18.0.2',
netmask: '255.255.0.0',
family: 'IPv4',
mac: '02:42:77:12:00:02',
internal: false,
cidr: '119.18.0.2/16'
}
]
}
Example app listening on port 3000
My Docker Instance Node app code is:
const express = require('express')
const app = express()
const port = 3000
const cors = require('cors')
var os = require('os');
app.use(cors());
app.use(express.json());
app.listen(port, () => {
console.log(`Example app listening on port ${port}`)
})
app.get('/listen', (req,res) => {
console.log('got it');
})
var networkInterfaces = os.networkInterfaces();
console.log(networkInterfaces);
And my Node server piece of code responsible of sending the get request to the instance is:
const connect = (req,res) => {
axios.get('http://119.18.0.2:3000/listen').then(resp => {
console.log(resp.data);
});
}
and the error I keep getting is:
ETIMEDOUT 119.18.0.2:3000
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1159:16)
Firstly, your URI http://119.18.0.2:3000/listen is incorrect. The docker network cannot be accessed directly as it is not a network that the host knows of.
The option -p 4002:4000 is what exposes your Docker container to the host (and the network you're connected to). 4002 is the port exposed to the host, and port 4000 is the port your container exposes INSIDE the Docker network.
To access the container from the host your URI would become http://localhost:4002/listen
To access the container from a different machine on the same network the URI would become http://<ip-address-of-this-machine>:4002/listen. You can find your IP using ipconfig in command prompt on Windows, or ifconfig in terminal on Linux based systems.
Secondly, your port allocations are mismatched. You set the port in your node app using const port = 3000 and exposed port 4000 of the container using -p 4002:4000 in your docker run command.
Either change your node application to expose port 4000 using const port = 4000
OR
Change your docker run command to expose port 3000 of the container by using -p 4002:3000.
Docker networks can be a bit confusing at first. Read up on them or check the documentation (hella useful); it will serve you well in future development. :)
EDIT: You can properly containerize your Node application using a Dockerfile similar to this:
FROM node:lts-alpine3.15
LABEL maintainer="MyDevName"
WORKDIR /usr/app
COPY ./myNodeApp ./
RUN npm install
CMD ["npm", "start"]
So that your node app runs automatically on start.
the .sh file that Im running to launch the container is:
docker build -t connectimg .
docker network create --subnet=119.18.0.0/16 mynet
docker run -d --name instance2 -p 4002:4000 --net mynet --ip 119.18.0.2 connectimg
If you leverage docker-compose, you might not need the script.
I'm trying to send an http request through axios, from my localhost (node server) to a docker container (which contains a simple server in node too) which belongs to a docker network, and identified by an specific IP.
Seems like two things need to be tweaked:
bind the dockerized server to 0.0.0.0 (with this app, 0.0.0.0:3000) instead of a specific address.
verify that the network you associate with the container is reachable from your host operating system. If the Docker network is not that important, you can just ditch it.

pm2 works fine on port 80, but no other ports are accessible from another computer in local network (connection refused)

I started pm2 server on my server running ubuntu 20.
To simplify the problem, I created a hello world app using node, like this (/var/www/html/pip/exampleserver.js):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(3000, '0.0.0.0');
console.log('Server running at http://0.0.0.0:3000/');
After starting the server with 'pm2 start exampleserver.js', the server is started, and it works fine when I access it from localhost :
wget -qO- localhost:3000
Result : "Hello World"
But I get ERR_CONNECTION_REFUSED when accessing the page from another computer on the same network (from http://10.200.96.23:3000, either from browser or command line)
However, when I change the script to listen to port 80 and restart pm2, it works just fine from the other computer (from http://10.200.96.23). It shows the result "Hello World".
Here is the result of netstat, and the status of ufw (which I had disabled in advance):
netstat and ufw status
Screenshot of error connection refused :
ERR_CONNECTION_REFUSED
I tried other ports, but nothing works other than 80. I don't want to run this server on port 80. What could be the problem?
Port 80 is open by default, but other ports are not.
@aRvi is correct: you may need to open port 3000 for it to be accessible.
Also, pm2 monitor will help you see code errors and monitor your application.
This has saved my day many times when debugging my code in PM2.
First run
pm2 list
and delete the processes with
pm2 delete [ID]
Define a new ufw rule
ufw allow 3000/tcp
and run your application again.


Docker node.js app not listening on port

I'm working on a node.js web application and use localhost:8080 to test it by sending requests from Postman. Whenever I run the application (npm start) without using Docker, the app works fine and listens on port 8080.
When I run the app using Docker, The app seems to be running correctly (it displays that it is running and listening on port 8080), however it is unreachable using Postman (I get the error: Could not get any response). Do you know what the reason for this could be?
Dockerfile:
FROM node:8
WORKDIR /opt/service
COPY package.json .
COPY package-lock.json .
RUN npm i
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
I build and run the application in Docker using:
docker build -t my-app .
docker run my-app
I have tried binding the port as described below, but I also wasn't able to reach the server on port 8181.
docker run -p 8181:8080 my-app
In my application, the server listens in the following way (using Express):
app.listen(8080, () => {
console.log('listening on port 8080');
})
I have also tried using:
app.listen(8080, '0.0.0.0', () => {
console.log('listening on port 8080');
})
The docker port command returns:
8080/tcp -> 0.0.0.0:8181
Do you guys have any idea what the reason for this could be?
UPDATE: Using the IP I obtained from the docker-machine (192.168.99.100:8181) I was able to reach the app. However, I want to be able to reach it from localhost:8181.
The way you have your port assignment set up requires you to use the Docker machine's IP address, not your localhost. You can find your Docker machine's IP using:
docker-machine ip dev
If you want to map the container port to your localhost, specify the localhost IP before the ports like this:
docker run -p 127.0.0.1:8181:8080 my-app
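As a side note, the docker port line quoted in the question ("8080/tcp -> 0.0.0.0:8181") can be read mechanically when scripting checks; here is a small sketch (the regex is my own and only covers simple IPv4 lines):

```javascript
// Parse one line of `docker port` output, e.g. "8080/tcp -> 0.0.0.0:8181".
// Returns null for lines it does not recognize.
function parsePortMapping(line) {
  const m = line.match(/^(\d+)\/(tcp|udp)\s*->\s*([\d.]+):(\d+)$/);
  if (!m) return null;
  return {
    containerPort: Number(m[1]),
    proto: m[2],
    hostIp: m[3],
    hostPort: Number(m[4]),
  };
}
```

Reading the hostIp field also shows at a glance whether the port was published on 0.0.0.0 or restricted to 127.0.0.1.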