Nginx with Docker does not work with a different port mapping (linux)

I have created a basic static website and I'm trying to deploy it on an nginx server with Docker.
It works fine when I run docker run -d -p 80:80 -v /path/to/site:/usr/share/nginx/html nginx
But when I map it to a different port, like -p 8080:80, I get a bad gateway error.
The initial homepage loads fine at http://localhost:8080, but when I try to navigate within the website, for example to a blog post, it redirects to http://localhost/blog rather than http://localhost:8080/blog. I have not changed any configuration whatsoever; I left it at the default.
Initially I thought the problem must be with my site, but it works fine with httpd and Docker. I want to run it on an nginx server, though.
Please help!
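For what it's worth, this symptom is typical of nginx's absolute redirects: when a request for a directory such as /blog triggers the implicit 301 to /blog/, nginx builds the Location header from its own listen port (80), not the published one (8080), so the port disappears from the URL. A minimal sketch of the usual fix, assuming the stock nginx image and nginx >= 1.11.8 (paths are placeholders):

# Override the image's default server block with relative redirects
cat > default.conf <<'EOF'
server {
    listen 80;
    absolute_redirect off;   # emit relative redirects so the client keeps :8080
    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
EOF

docker run -d -p 8080:80 \
  -v /path/to/site:/usr/share/nginx/html \
  -v "$PWD/default.conf":/etc/nginx/conf.d/default.conf:ro \
  nginx

Alternatively, mapping the same port on both sides (e.g. -p 8080:8080 with a matching listen 8080) avoids the mismatch without any redirect tweak.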

Related

Two docker containers (nginx and a web app) not working together (linux)

I built both containers using a Dockerfile (one for each). I have the NGINX container pointing (proxy_pass http://localhost:8080) at the port that the web app is exposed on (via -p 8080:80). I am able to get it to work when I just install NGINX on the Linux machine, but when I use a dockerized NGINX, I just get the default NGINX index.html. Do I have to build both containers with a docker-compose.yml file (as opposed to a Dockerfile) to get the containers working together? Sorry I didn't include any code, but at this point I just want to know whether I'm taking the correct approach (Dockerfile or Docker Compose).
The Nginx proxy needs access to the host (!) network for this to work, e.g.:
docker container run ... --net=host ... nginx
Without it, localhost refers to the proxy container itself, which likely has nothing listening on :8080 and certainly not your web app.
Alternatively, if the proxy's container (!) can resolve and reach the host, then processes in the container can refer to host-accessible ports using the host's DNS name or IP.
Docker Compose (conventionally) solves this by putting the containers onto a new virtual network. The difference then would be that, rather than mapping everything onto host ports, each container (called a service) gets a unique name and a container called proxy could refer to a container called web on port 8080 as http://web:8080.
You may achieve similar results with Docker alone by creating a network and then running containers on it, e.g.:
docker network create ${NETWORK}
docker container run ... --net=${NETWORK} --name=proxy ...
...
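Filled out, the Docker-only variant might look like this (the network name, container names, and web image are placeholders; the app is assumed to listen on :8080 inside its container):

docker network create appnet
docker container run -d --net=appnet --name=web my-web-app    # hypothetical app image listening on :8080
docker container run -d --net=appnet --name=proxy -p 80:80 nginx
# the proxy's config can then use: proxy_pass http://web:8080;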

Follow Docker port mapping from inside a container

I have a cloud PC with a static external IP, e.g. 162.243.100.100.
Inside it I installed Docker with nginx, and mapped port 80 like this:
docker run -it -p 80:80 nginx
I'm able to access the nginx demo page with curl 162.243.100.100 from the host machine.
I'm able to access the nginx demo page with curl localhost from inside said container.
But I want to be able to access the nginx demo page with curl 162.243.100.100 from inside said nginx container.
This doesn't seem to follow the port mapping, and just gives me a timeout error.
I think I need to do something with the network settings, but I don't know what.
The short answer is "don't do that."
The default iptables rules and routing tables from Docker aren't set up to route traffic from a container out to the host and back into the same container through the docker-proxy. Considering how much of an anti-pattern this is, I don't expect changing that behavior to be a priority. It's much easier to work with the tool: use Docker networks and the container name when talking from container to container.
Create a network and start your container in that network:
docker network create --subnet 172.16.0.0/16 dockernet
docker run -it -p 80:80 --net=dockernet nginx
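Building on that: give the container a name, and any other container on dockernet can then reach it by that name instead of the external IP (the busybox image here is just a throwaway test client; names are illustrative):

docker run -d -p 80:80 --net=dockernet --name=web nginx
docker run --rm --net=dockernet busybox wget -qO- http://web/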

Can't see my server on an Azure VM

I have an Azure VM with Ubuntu 12 and I want to run an Angular 2 server with angular-cli. When I run ng serve --port 80 I just can't see the page in a browser. When I check with curl from the VM I can see the HTML properly, but I get an error when I try to reach it from the internet.
The strangest thing is that anything else I try works perfectly. I also have an old Node server on the VM, and when I run that I can see it normally with curl and from the internet.
Okay, so apparently I needed to specify the host: sudo ng serve --host 0.0.0.0 --port 80. By default the dev server binds only to localhost, so connections from outside the VM are refused.
github.com/angular/angular-cli/issues/2375
Thanks to evilSnobu.

How to connect to a docker container's link alias from the host

I have 3 separate pieces to my dockerized application:
nodeapp: A node:latest Docker container running an expressjs app that returns a JSON object when accessed from /api. This server is also CORS-enabled according to this site.
nginxserver: An nginx:latest static server that simply hosts an index.html file with a button which makes an XMLHttpRequest to the node server above.
My host machine
The node:latest container has its port published to the host via 3000:80.
The nginx:latest container has its port published to the host via 8080:80.
From the host I am able to access both nodeapp and nginxserver individually: I can make requests and see the JSON object returned from the node server using curl from the command line, and the button (index.html) is visible on the screen when I hit localhost:8080.
However, when I try clicking the button, the call xhr.open('GET', 'http://nodeapp/api', true) fails without apparently hitting the nodeapp server (no log entry appears). I'm assuming this is because the host does not understand http://nodeapp/api.
Is there a way to tell Docker, while a container is running, to add its container-link alias to my hosts file?
I don't know if what I'm asking for is the proper solution to my problem. It looks as though I'm getting a CORS error back, but I don't think the request ever hits my server. Does this have to do with accessing the application from my host machine?
Here is a link to an example repo
Edit: I've noticed that when using the stack, clicking the button gets a response from my nginx container. I'm confused as to why the request routes through that server when nodeapp is in my hosts file, so the correlation should be recognized there?
Problem:
nodeapp exists on an internal network that is visible to your nginxserver only; you can check this by entering the nginxserver container:
docker exec -it nginxserver bash
# cat /etc/hosts
Most important, your service setup is not correct: nginxserver should act as a reverse proxy in front of nodeapp:
host (client) -> nginxserver -> nodeapp
Dirty Quick Solution:
If you really, really want your client (the host) to access the internal application nodeapp, then simply change the code below:
xhr.open('GET', 'http://nodeapp/api', true)
to
xhr.open('GET', 'http://localhost:3000/api', true)
since in your docker-compose.yml, nodeapp's container port 80 is published to the host as 3000 and can be accessed directly.
Better solution
You need to redesign your service stack to make nginxserver the frontend node; see one sample: http://schempy.com/2015/08/25/docker_nginx_nodejs/
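A minimal sketch of such a frontend config, assuming both services sit on the same Compose network and nodeapp listens on port 80 internally (this is illustrative, not the linked sample):

# /etc/nginx/conf.d/default.conf inside nginxserver
server {
    listen 80;
    location / {
        root  /usr/share/nginx/html;   # the static index.html with the button
        index index.html;
    }
    location /api {
        proxy_pass http://nodeapp:80;  # Compose DNS resolves the service name
    }
}

The page can then request the relative URL /api, so the browser and the API share one origin and the CORS problem disappears.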

Can't get docker to accept requests over the internet

So, I'm trying to get Jenkins working inside of Docker as an exercise to gain experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container with just -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
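To find that IP (assuming the container runs on the default bridge network; the container name is a placeholder):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-jenkins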
The Dockerfile the Jenkins container is built from already exposes port 8080.
The Docker site has a great deal of information on container networking.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P or --publish-all=true|false flag for containers to accept incoming connections.
The below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In that case the problem is that whatever machine receives the outside connection (normally your router or modem) does not know which machine the request should be forwarded to.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine running Docker. This can normally be done on your internet modem/router.
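Once the forwarding rule is in place, a quick sanity check from a machine outside your LAN (the address is a placeholder):

curl -I http://your-external-ip:8080/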
