Port problems to access a service inside a container - node.js

I'm posting for a friend. He asked for my help and we couldn't figure out what's going on.
My situation is: my application works perfectly on Ubuntu 18.04 when it's not inside a container, but the customer required the use of containers, so I created a Dockerfile so it could be started by a Docker container.
Here's the content of my Dockerfile:
FROM node:8.9.4
ENV HOME=/home/backend
RUN apt-get update
RUN apt-get install -y build-essential libssl-dev
RUN apt-get install -y npm
COPY . $HOME/
WORKDIR $HOME/
RUN npm rebuild node-sass
RUN npm install --global babel-cli
USER root
EXPOSE 6543
CMD ["babel-node", "index.js"]
After building the image, I execute the following Docker run command:
sudo docker run --name backend-api -p 6543:6543 -d backend/backendapi1.0
Looking at the log output, I can conclude that the application works properly.
I've created a rule in my nginx to redirect from port 90 to 6543 (before using containers this used to work):
server {
    listen 90;
    listen [::]:90;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://localhost:6543;
    }
}
P.S.: I've tried changing localhost to the container's IP and it doesn't work either.
The fun fact is that when I try an internal telnet on 6543, it accepts the connection and closes it immediately.
P.S.: all ports are open on the firewall.
The application works normally outside the container (using port 6543 and redirecting in nginx).
I'd appreciate it if someone could help us find out why this is happening. We don't have much experience creating containers.
Thanks a lot!
Edit: it's an AWS VM, and this is what we get when we run curl:

We found the solution!!
It was an internal container routing problem...
The following Docker run command solved the problem:
sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip -p 6543:6543 my_dockerhub_image_name
Thanks a lot!!

Related

Accessing A Web App Running In Docker From Another Machine

I've cloned the following dockerized MEVN app and would like to access it from another PC on the local network.
The box that Docker is running on has an IP of 192.168.0.111, but going to http://192.168.0.111:8080/ from another PC just says it can't be reached. I run other services, like Plex and a Minecraft server, that can be reached with this IP, so I assume it's a Docker config issue. I am pretty new to Docker.
Here is the Dockerfile for the portal. I made a slight change from the repo, adding -p 8080:8080, because I read elsewhere that it would open it up to LAN access.
FROM node:16.15.0
RUN mkdir -p /usr/src/www && \
    apt-get -y update && \
    npm install -g http-server
COPY . /usr/src/vue
WORKDIR /usr/src/vue
RUN npm install
RUN npm run build
RUN cp -r /usr/src/vue/dist/* /usr/src/www
WORKDIR /usr/src/www
EXPOSE 8080
CMD http-server -p 8080:8080 --log-ip
Don't put -p 8080:8080 in the Dockerfile! (http-server's -p flag takes just a port number, not a host:container mapping; publishing ports is docker run's job.)
You should first build your Docker image using the docker build command:
docker build -t myapp .
Once you've built the image (and confirmed it with docker images), run it using the docker run command:
docker run -p 8080:8080 myapp
Docker listens on 0.0.0.0, so other machines on the same network can use your IP address to reach your website on whichever port you published. For example, if you use 8080, you are actually listening on 0.0.0.0:8080, and the other machines can reach that website at http://192.168.0.111:8080/. Even without Docker, you can listen on 0.0.0.0 to share your app on the network.
The box that docker is running on
What do you mean by "box"? Is it some kind of virtual machine, or an actual computer running Linux, Windows, or maybe macOS?
Have you checked that box's firewall? (You may need to set up NAT through the firewall to the service running in the box for incoming requests from outside the box.)
I'll be happy to help you out if you provide more detailed information about your environment...

Couchbase Server Container, inside of a Linux(Centos) container fails to connect

I have created a Linux Centos container through the following Dockerfile:
FROM centos
RUN cd /etc/yum.repos.d/
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
RUN yum update -y
RUN yum makecache --refresh
RUN yum -y install gdb
Inside that Linux container, I am running a script that first initializes a Couchbase docker image, at http://localhost:8091. After initializing it, I try to make an http.request to the Couchbase server that should be up and running:
const http = require('http');
const req = http.request({
  host: host, // the Couchbase container's host
  port: 8091,
  path: '/'
}, (res) => console.log(res.statusCode));
req.on('error', (err) => console.error(err));
req.end(); // without end() the request is never sent
but that doesn't seem to work. Do you have any ideas as to why this might be happening? Could it be that I haven't exposed port 8091 in my Linux container? Or could it be that I am not setting up the Couchbase server right? The issue is that I am trying to run a container inside another container, and then make a call from the outer container (Linux) to the inner one (Couchbase).
Thank you in advance.

AWS Linux (Ubuntu) hosted application is not accessible from public ip

I hosted a Node.js application (an Express hello-world app) on AWS Linux (Ubuntu 16.04) on the free tier. When I run wget http://localhost:8080 it succeeds and saves the output to index.html.
But when I do the same thing with the public IP of my instance (wget http://35.154.40.189:8080), it says:
Connecting to 35.154.40.189:8080... failed: No route to host.
I also followed the steps given in http://www.lauradhamilton.com/how-to-set-up-a-nodejs-web-server-on-amazon-ec2 to forward all IPv4 traffic to my application, but it doesn't work.
I also enabled port 8080 from the AWS console.
netstat -atn says:
netstat -ntlp says:
I've tried everything I could find on the internet but am unable to resolve the issue, and by now I'm very frustrated. Any help would be highly appreciated.
Make your instance in AWS first:
Enable the inbound rule as you mention in the picture.
Enable the user group after the SSH connection with the AWS Ubuntu instance.
Once the instance is running, install Node properly:
sudo apt-get update
sudo apt-get install libssl-dev g++ make
Download the Node source code (node.tar.gz) from the web with wget:
wget https://nodejs.org/dist/v6.9.1/node-v6.9.1.tar.gz
tar -xvf node-v6.9.1.tar.gz
Now go into the node directory after unpacking the .tar.gz and run:
./configure && make && sudo make install
Boom, your Node server is ready on the new AWS instance.
or watch this https://www.youtube.com/watch?v=WxhFq64FQzA&t=1693s

How to map ports with - Express + Docker + Azure

I am completely stuck on the following.
Trying to set up an Express app in Docker on an Azure VM.
1) VM is all good after using docker-machine create --driver azure ...
2) Build image all good after:
//Dockerfile
FROM iojs:onbuild
ADD package.json package.json
ADD src src
RUN npm install
EXPOSE 8080
CMD ["node", "src/server.js"]
Here's where I'm stuck:
I have tried all of the following plus many more:
• docker run -P (Then adding end points in azure)
• docker run -p 80:8080
• docker run -p 80:2756 (2756, the port created during docker-machine create)
• docker run -p 8080:80
It would help if someone could explain Azure's setup with the VIP vs. the internal IP vs. Docker's EXPOSE.
So at the end of all this, every port that I try to hit with Azure's:
AzureVirtualIP:ALL_THE_PORT
I just always get back an ERR_CONNECTION_REFUSED.
The Express app is definitely running, because I can see its console log output.
Any ideas?
Thanks
Starting from the outside and working your way in, debugging:
Outside Azure
<start your container on the Azure VM, then>
$ curl $yourhost:80
On the VM
$ docker run -p 80:8080 -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f4d98b9c75 laslo:latest node src/server.js 5 seconds ago up 5 seconds 0.0.0.0:80->8080 something_funny
$ curl localhost:80
That 0.0.0.0:80->8080 shows you that your port forwarding is in effect. If you run other containers, don't have the right privileges or have other networking problems, Docker might give you a container without forwarding the ports.
If this works but the first test didn't, then you didn't open the ports to your VM correctly. It could be that you need to set up the Azure endpoint, or that you've got a firewall running on the VM.
In the container
$ docker run -p 80:8080 --name=test -d laslo
882a5e774d7004183ab264237aa5e217972ace19ac2d8dd9e9d02a94b221f236
$ docker exec -it test bash
# curl localhost:8080
In this last one, we get inside the container itself. curl might not be installed, so you may have to apt-get install curl first.
If this doesn't work, then your Express server isn't listening on port 8080, and you need to check the setup.

Docker container with Node and Express on Mac, not showing in browser?

I'm delving into Docker and I'm trying to rack my brains on why this isn't working for me. I've read many articles and tutorials on how to set this up and I seem to have everything in place, but my actual app just isn't showing up in the browser (localhost:3001). I'm using the latest version of Docker on my Mac, running Mavericks, using boot2docker. boot2docker is definitely running, as the docker commands work fine and I get no errors that seem related.
The super simple project looks like this:
src/
..index.js
..package.json
Dockerfile
The src/index.js file looks like this:
var express = require('express'),
    app = express();

app.get('/', function(req, res){
  res.send('Hello world!');
});

app.listen(3001);
The src/package.json file looks like this:
{
  "name": "node-docker-example",
  "version": "0.0.1",
  "description": "A NodeJS webserver to run inside a docker container",
  "main": "index.js",
  "dependencies": {
    "express": "*"
  }
}
The Dockerfile file looks like this:
FROM ubuntu:14.04
# make sure apt is up to date
RUN apt-get update
# install nodejs and npm
RUN apt-get install -y nodejs npm git git-core
# add source files
ADD /src /srv
# set the working directory to run commands from
WORKDIR /srv
# install the dependencies
RUN npm install
# expose the port so host can have access
EXPOSE 3001
# run this after the container has been instantiated
CMD ["nodejs", "index.js"]
With all of this in place, I then build it just locally:
$ docker build -t me/foo .
No problems... Then I've tried several ways to run it, but none of them work, and I see no response when viewing localhost:3001 in my browser:
$ docker run -i -t me/foo
$ docker run -i -t -p 3001:3001 me/foo
$ docker run -i -t -p 127.0.0.1:3001:3001 me/foo
Nothing seems to work and no errors come up... well, apart from the fact that localhost:3001 in the browser does absolutely nothing.
Please help me! I love the idea of docker, but I can't get the simplest thing running. Thanks!
boot2docker has an extra network
There's one extra layer of networking getting in the way. Remember that boot2docker has its own OS and an additional network IP, so try url="http://$(boot2docker ip):3001"; curl -v "${url}" from a terminal on your Mac and see if that returns HTML from your Express app. If so, you can browse to your app with open "${url}".
I was able to take your files (thank you for posting full files!) and build and run your image locally.
Build, run, and test it like this
docker build -t foo .
docker run -i -t -p 3001:3001 foo
I think the key thing to note is that for docker build the -t argument means "tag" but for docker run it means "allocate a tty".
Test it like this (in a separate terminal from where it's running interactively)
curl -s "$(boot2docker ip):3001"
Here's where you went wrong
Or at least my guesses:
$ docker run -i -t me/foo
doesn't map any ports
$ docker run -i -t -p 3001:3001 me/foo
I think in theory this variant should work. If not, I'm pretty sure it's a boot2docker-specific networking issue at the IP layer.
$ docker run -i -t -p 127.0.0.1:3001:3001 me/foo
This tells Docker to bind to loopback on the Docker server (the boot2docker VM), not your Mac, so you'll never be able to connect to it from your Mac.