Dockerized Node on AWS Elastic Beanstalk: 502 Bad Gateway error - node.js

I have a Node app (an isomorphic React app) dockerized and deployed to AWS Elastic Beanstalk. All the details are below, but if you want a TL;DR: how do you configure port forwarding between the host and a container in AWS Elastic Beanstalk, i.e. 5000:3000?
I want my application to work like this (the numbers are ports):
End User --80--> EC2 Instance / Nginx --5000--> Container --3000--> Application
I used the Dockerfile to EXPOSE 5000. I know EXPOSE is normally just a hint, but as far as I know Amazon uses it to expose the port on the Docker container, since there is no docker-compose.yml. The app itself runs on port 3000. The Node code that selects the port:
process.env.PORT || 3000
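For context, here is a minimal sketch of how that line is typically wired into the server (assuming an Express-style app; the real application may differ):

const express = require('express');
const app = express();

// Falls back to 3000 when the PORT environment variable is not set,
// which is why the process ends up listening on 3000 inside the container.
const port = process.env.PORT || 3000;

app.get('/', (req, res) => res.send('ok'));

app.listen(port, () => console.log(`Listening on ${port}`));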
When trying to access the site using port 80, I'm getting a 502 Bad Gateway error.
I SSHed into the EC2 instance hosting my docker container. The nginx config for elasticbeanstalk looks like this:
upstream docker {
    server [CONTAINER_IP]:5000;
    keepalive 256;
}
The IP is the correct IP of the container (I checked using docker inspect). When I run: sudo docker ps -a
I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
REDACTED REDACTED "node server.js" 34 hours ago Up 34 hours 5000/tcp jolly_williams
If I run netstat on the EC2 host instance I see that port 80 is open:
tcp 0 0 0.0.0.0:80
When I look at the logs on the EC2 host instance I see this error multiple times for different resources:
request: "GET / HTTP/1.1", upstream: "http://[CONTAINER_IP]:5000/", host: [Beanstalk URL] connection refused
Now here is the kicker: it works when I run:
sudo curl [ContainerIP]:3000 (from the EC2 host instance)
-or-
sudo docker exec -ti [CONTAINER_NAME] curl http://localhost:3000
Does anyone know why Node is listening on 3000 and not 5000? What can I do?

I talked to AWS Support. They're saying that the exposed Docker port has to be the same port the application is listening on, i.e. no port mapping.
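In other words, on Elastic Beanstalk's single-container Docker platform the EXPOSEd port is the one nginx proxies to, with no host-to-container remapping, so it has to match the port the app actually listens on. A minimal sketch of a Dockerfile that lines these up (the base image, file layout and the optional PORT override are assumptions, not the poster's actual setup):

FROM node
WORKDIR /usr/src/app
COPY . .
RUN npm install

# Option 1: expose the port the app already listens on
EXPOSE 3000

# Option 2 (instead of the above): keep EXPOSE 5000 and set PORT,
# so process.env.PORT || 3000 resolves to 5000
# ENV PORT 5000
# EXPOSE 5000

CMD ["node", "server.js"]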

Related

Nginx Load Balancer High availability running in Docker

I am running an nginx load balancer container for my backend web servers. Since I have only one container running as the nginx load balancer, if that container dies or crashes, clients can no longer reach the web servers.
Below is the nginx.conf
events {}

http {
    upstream backend {
        server 1.18.0.2;
        server 1.18.0.3;
    }

    # This server accepts all traffic to port 80 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
Below is the Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
I am running the nginx container like below:
docker run -d -it -p 8082:80 nginx-ls
I am accessing the above server at http://serve-ip:8082
The container may crash or die at any time, and in that case my application is not reachable. So I tried to run another container as below, and I get the error below, since I am using the same port and obviously we cannot reuse the same port on the same host.
docker run -d -it -p 8082:80 nginx-ls
06a20239bd303fab2bbe161255fadffd5b468098424d652d1292974b1bcc71f8
docker: Error response from daemon: driver failed programming external connectivity on endpoint
suspicious_darwin (58eeb43d88510e4f67f618aaa2ba06ceaaa44db3ccfb0f7335a739206e12a366): Bind for
0.0.0.0:8082 failed: port is already allocated.
So I ran it on a different port, and it works fine:
docker run -d -it -p 8083:80 nginx-ls
But how do we tell/configure clients to use the container on port 8083 when the container on 8082 is down?
Or is there a better way to achieve a highly available nginx load balancer?
Note: for various reasons, I cannot use docker-compose.

Cannot Access Google App Engine Instance Externally

I'm running a Node.js app on Google Cloud using the Cloud Shell. I've deployed using gcloud app deploy, and everything reports as a success. If I use gcloud app logs tail -s default I can see the logs; it says my app is listening on port 3000, which is the first debug message I see from my app.
When I invoke the endpoint without the port on the end, i.e.
https://myapp.appspot.com/myendpoint
I get an error,
"GET /myendpoint" 502
If I try with port 3000, i.e.
https://myapp.appspot.com:3000/myendpoint
The request just times out and I get no log messages from the shell.
I have port 3000 opened on the firewall, and my app.yaml is,
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Update 1:
I've also tried adding a forwarding port to my app.yaml,
network:
  forwarded_ports:
    - 3000/tcp
And allowed port 3000 in the VPC Firewall, but this seems to make no difference.
Update 2:
I can SSH into the instance and access the endpoint using a wget http://127.0.0.1:3000/myendpoint command but still no external access.
Update 3:
I've also tried port 443, listening on 0.0.0.0, but it seems to bind to the IPv6 wildcard address instead and the port changes to 8443 (somehow). This is just insane...
I resolved the issue by binding my service to port 8080 and removing the "service" field from my app.yaml. External calls are all routed to port 8080 by default.
External calls have no port specified in the URL.
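For reference, a minimal sketch of a listener that matches this setup (App Engine flexible provides the port in the PORT environment variable, which defaults to 8080; the Express handler here is illustrative only):

const express = require('express');
const app = express();

app.get('/myendpoint', (req, res) => res.json({ status: 'ok' }));

// App Engine routes external traffic to the port given in process.env.PORT
// (8080 by default), so listen there instead of on a hard-coded 3000.
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on ${port}`));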

Docker containers work on port 80 only

I tried this on multiple machines (Windows 10 and Server 2016), with the same result.
using this tutorial:
https://docs.docker.com/docker-for-windows/#set-up-tab-completion-in-powershell
This works
docker run -d -p 80:80 --name webserver nginx
Any other port fails with:
docker run -d -p 8099:8099 --name webserver nginx --> ERR_EMPTY_RESPONSE
Looks like Docker/nginx is listening on this port, but failing. Telnetting to this port shows that the connection goes through but then disconnects right away. This is different from when a port is not being listened on at all.
There are two ports in that -p value. The first is the port Docker publishes on the host for you to connect to remotely. The second is the port inside the container to which that traffic is sent. Docker doesn't modify the application, so the application itself needs to be listening on that second port. By default, nginx listens on port 80. Therefore, you could run:
docker run -d -p 8099:80 --name webserver nginx
To publish on port 8099 and send that traffic to an app inside the container listening on port 80.
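A quick way to verify the mapping from the host, assuming the container from the command above is running and curl is available:

docker port webserver           # shows 80/tcp -> 0.0.0.0:8099
curl -I http://localhost:8099   # nginx answers: host 8099 -> container 80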

docker connection refused nodejs app

I launch the Docker container:
docker run --name node-arasaac -p 3000:3000 juanda/arasaac
And my node.js app works ok.
If I want to change the host port:
docker run --name node-arasaac -p 8080:3000 juanda/arasaac
The web page does not load; logs from the browser console:
Failed to load resource: net::ERR_CONNECTION_REFUSED
http://localhost:3000/app.318b21e9156114a4d93f.js Failed to load resource: net::ERR_CONNECTION_REFUSED
Do I need to use the same port on both the host and the container? The browser resolves http://localhost:8080 and loads my website, but internal links in the web page go to port 3000, which is not so good :-(
When you run your node.js app in a Docker container, it will only expose externally the ports you designate with the -p (lowercase) flag. The first instance, "-p 3000:3000", maps host port 3000 to port 3000 exposed from within your Docker container. This is a 1-to-1 mapping, so any client trying to connect to your node.js service can do so through the HOST port of 3000.
When you do "-p 8080:3000", docker maps the host port of 8080 to the node.js container port of 3000. This means any client making calls to your node.js app through the host (meaning not within the same container as your node.js app or not from a linked or networked docker container) will have to do so through the HOST port of 8080.
So if you have external services that expect to reach your node.js app on port 3000, they won't be able to.
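For example, with the second command above running, a quick check from the host shows the difference (assuming curl is available):

curl -I http://localhost:8080   # reaches the app: host 8080 -> container 3000
curl -I http://localhost:3000   # connection refused: nothing is published on host port 3000
# assets that hard-code http://localhost:3000/... in the page fail the same way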

Debug a NodeJS application inside Docker

I'm moving my Node.js application to Docker, and the last problem I have encountered is debugging the application.
My setup: OS X, boot2docker, Docker (CentOS-based container), WebStorm as the IDE and debugger.
Here's what I have by now:
Forward 5858 from docker to boot2docker:
docker run -p 5858:5858 ...
Forward port 5858 from boot2docker to the host:
VBoxManage controlvm boot2docker-vm natpf1 "boot2docker5858,tcp,127.0.0.1,5858,,5858"
This same setup works to forward my application ports to the host machine.
Port 5858, on the other hand, doesn't seem to respond when accessed from outside the Docker container.
Inside the docker container it works just fine.
Any idea what can be done to make this work?
Well, I have finally figured it out.
As it turns out, node's debugger listens only on 127.0.0.1:5858.
To make it reachable on all interfaces, I installed HAProxy in the container, which forwards requests from 0.0.0.0:5859 to 127.0.0.1:5858.
Here's the HAProxy configuration if anybody ever needs:
listen l1 0.0.0.0:5859
    mode tcp
    timeout client 180000
    timeout server 180000
    timeout connect 4000
    server srv1 127.0.0.1:5858
And then add to your Dockerfile:
COPY haproxy.conf /haproxy.conf
RUN haproxy -D -f /haproxy.conf
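As a side note, on newer Node versions you may be able to skip the proxy entirely by binding the debugger to all interfaces yourself, e.g. with the inspector protocol (this assumes a Node version that supports --inspect, which was not part of the original setup; server.js stands in for the app's entry point):

# bind the debug port to 0.0.0.0 so it is reachable from outside the container
node --inspect=0.0.0.0:5858 server.js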
