My express https server works locally but not in a docker container

I currently have two docker containers running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. I can run the server via HTTP locally, so I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it is to do with port 443, as I killed my client and tried to just run the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4

If you publish port 443 for a Docker container, Docker starts a small (admittedly suboptimal) port-forwarding helper, the docker-proxy, which relays TCP requests arriving at your host's port 443 into the container.
If you want two containers to use port 443, Docker would have to start this port forwarder twice on the same port. As your docker output shows, that can only happen once. Digging (deeply) into the rather sparse Docker logs, you may also find the relevant error message.
The problem you've found is not Docker-specific; it is the same problem you would face in a container-less environment: you simply can't start multiple service processes listening on the same TCP port.
The solution also comes from the world before containers.
You need a central proxy service that listens on your port 443 and forwards each request, depending on the requested virtual host, to the corresponding container.
Look around among the available Docker images; it is nearly certain that such an HTTPS reverse proxy already exists. This third container will forward the requests where you want. Of course you will need to configure it.
From that moment on, you don't even need HTTPS inside your containers (although you can keep it if you want), which helps a lot in production environments with properly certified SSL. Only your proxy needs the certificates. So:
           /---> container A (tcp:80)
tcp:443 -- proxy
           \---> container B (tcp:80)
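A minimal sketch of such a proxy with nginx; the container names (app-client, app-server), the domains, and the certificate paths are placeholders you would replace with your own, and the proxy is assumed to share a Docker network with both app containers so the names resolve:

# proxy.conf -- TLS terminates at the proxy; the containers stay plain HTTP
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    location / {
        proxy_pass http://app-client:80;   # container A
    }
}
server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
    location / {
        proxy_pass http://app-server:80;   # container B; for the express server in the question this would be :3001
    }
}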

Related

Have a Lando web server listen on localhost:8000

Long story, but due to previous configurations of 3rd-party services, it'll be much easier if I can have Lando listen on port 8000 instead of the assigned Docker port (which is different every time). I've tried overrides such as
overrides:
  ports:
    - '8000'
Is it possible to configure Lando so that my apache server listens on port 8000?
This is not a direct answer to your question, but if you use the Lando proxy you no longer have to care about the changing appserver port: you can point your HTTP requests at a hostname of your choosing instead.
Example of .lando.yml:
name: test
proxy:
  appserver:
    - test.localhost
    - my.local-domain.test
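With that in place, Lando's built-in proxy (Traefik, listening on the standard ports 80/443 by default) routes requests for those hostnames to the appserver, regardless of the random port Docker assigned to it. A quick check, assuming the default proxy setup:

curl http://test.localhost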

Docker containers work on port 80 only

I tried this on multiple machines (Windows 10 and Server 2016) with the same result,
using this tutorial:
https://docs.docker.com/docker-for-windows/#set-up-tab-completion-in-powershell
This works
docker run -d -p 80:80 --name webserver nginx
Any other port, fails with
docker run -d -p 8099:8099 --name webserver nginx --> ERR_EMPTY_RESPONSE
It looks like Docker/nginx is listening on this port, but failing. Telnetting to the port shows that the connection goes through but is dropped right away, which is different from a port that nothing is listening on at all.
There are two ports in that -p option. The first is the port Docker publishes on the host for you to connect to remotely. The second is where that traffic is sent inside the container. Docker doesn't modify the application, so the application itself needs to be listening on that second port. By default, nginx listens on port 80. Therefore, you could run:
docker run -d -p 8099:80 --name webserver nginx
To publish on port 8099 and send that traffic to an app inside the container listening on port 80.
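Alternatively, if you want nginx itself to listen on 8099 inside the container, you can mount your own config over the image's default (a sketch; default.conf is a local file you would create, containing a server block with "listen 8099;"):

docker run -d -p 8099:8099 \
  -v $(pwd)/default.conf:/etc/nginx/conf.d/default.conf:ro \
  --name webserver nginx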

Amazon Linux cannot access nginx on port 80

I installed nginx on my Amazon Linux instance via yum:
sudo yum install nginx
Then I opened all ports in the instance's security group:
All traffic - All - All - 0.0.0.0/0
Then I started nginx:
sudo service nginx start
Then I tried to access the nginx web service at http://public-ip, but I could not reach it.
I checked the connection from inside the server:
ssh my_account@my_ip
And then,
wget http://localhost -O-
And it worked fine.
I couldn't figure out the root cause, so I changed the nginx port from 80 to 8081 and restarted the server. Then I tried again, and it worked fine:
http://public-ip:8081
I don't understand what is going on. Could you tell me what the problem is?
I see a few possibilities:
You are blocking the connections with a firewall on the host.
Security Group rules disallow this access
You are in a VPC and have not set up an Internet Gateway or route to host
Your nginx configuration is set to explicitly listen on host and port combinations such that it responds on "localhost" but not on the public IP or hostname.
You could post your nginx configs and be more specific about how it fails when you try remotely. Is it timing out? Not resolving? Returning an HTTP response, just not the one you expected?
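A quick way to check the firewall and listener possibilities from inside the instance (a sketch; ss and iptables ship with Amazon Linux):

sudo ss -tlnp | grep nginx   # 127.0.0.1:80 would explain "localhost works, public IP doesn't";
                             # 0.0.0.0:80 or *:80 means nginx accepts connections on all interfaces
sudo iptables -L -n          # host firewall rules that might drop remote traffic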

Debug a NodeJS application inside Docker

I'm moving my NodeJS application to docker, and the last problem that I have encountered is debugging the application.
My setup: OS X, boot2docker, Docker (CentOS-based image), WebStorm as IDE and debugger.
Here's what I have by now:
Forward 5858 from docker to boot2docker:
docker run -p 5858:5858 ...
Forward 5858 port from boot2docker to host:
VBoxManage controlvm boot2docker-vm natpf1 "boot2docker5858,tcp,127.0.0.1,5858,,5858"
This same setup works to forward my application ports to the host machine.
Port 5858, on the other hand, doesn't seem to respond when accessed from outside the Docker container.
Inside the docker container it works just fine.
Any idea what can be done to make this work?
Well, I have finally figured it out.
As it turns out, node listens only on 127.0.0.1:5858.
To make the debug port reachable on all interfaces, I installed HAProxy in the container, which forwards requests from 0.0.0.0:5859 to 127.0.0.1:5858.
Here's the HAProxy configuration if anybody ever needs:
listen l1 0.0.0.0:5859
    mode tcp
    timeout client 180000
    timeout server 180000
    timeout connect 4000
    server srv1 127.0.0.1:5858
And then add to your Dockerfile (the config is copied to /, where the start command expects it):
COPY haproxy.conf /haproxy.conf
RUN haproxy -D -f /haproxy.conf
Note that RUN only executes at image build time; in practice you should launch haproxy from your container's entrypoint or CMD so it is actually running alongside node in the final container.
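A side note: on newer Node versions (8+), the inspector protocol replaces the legacy debugger and can bind to all interfaces itself, making the proxy unnecessary (a sketch; app.js stands in for your entry point):

node --inspect=0.0.0.0:9229 app.js
docker run -p 9229:9229 ...

WebStorm can then attach directly to localhost:9229.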

My websites running in docker containers, how to implement virtual host?

I am running two websites in two docker containers respectively in a vps.
e.g. www.myblog.com and www.mybusiness.com
How can I implement virtual hosts on the VPS so that both websites can use port 80?
I asked this question somewhere else, and was suggested to take a look at: https://github.com/hipache/hipache and https://www.tutum.co/
They seem a bit roundabout; I am trying to find out whether there is a more straightforward way to achieve this. Thanks!
In addition, forgot to mention my vps is a Ubuntu 14.04 box.
Take a look at jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your Docker containers: you don't need to edit the proxy config file every time you restart a container or start a new one. It all happens automatically via docker-gen, which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com:
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
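For example (the hostname and port here are placeholders), a container whose app listens on 3001 would be started as:
$ docker run -e VIRTUAL_HOST=api.bar.com -e VIRTUAL_PORT=3001 ...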
You need a reverse proxy. We use nginx and haproxy; they both work well and are easy to run from a Docker container. A nice way to run the entire setup is to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, plus a haproxy container with links to both website containers. The entire combination then exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one or the other container based on the hostname of the request.
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work
blog:
  build: blog
work:
  build: work
Then a haproxy config such as,
global
    log 127.0.0.1 local0
    maxconn 2000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option dontlognull
    option redispatch
    retries 3
    timeout connect 5000
    timeout client 1200000
    timeout server 1200000

### HTTP frontend
frontend http_proxy
    mode http
    bind *:80
    option forwardfor except 127.0.0.0/8
    option httplog
    option http-server-close

    # match on the end of the Host header so www.myblog.com etc. are caught
    acl blog_url hdr_end(host) -i myblog.com
    use_backend blog if blog_url
    acl work_url hdr_end(host) -i mybusiness.com
    use_backend work if work_url

### HTTP backends
backend blog
    mode http
    server blog1 blog:80 check

backend work
    mode http
    server work1 work:80 check
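Once both domain names point at the VPS, you can sanity-check the routing straight from the box by sending an explicit Host header (127.0.0.1 here assumes you test on the VPS itself):

curl -H 'Host: www.myblog.com' http://127.0.0.1/
curl -H 'Host: www.mybusiness.com' http://127.0.0.1/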
