Use a RabbitMQ cluster with an Amazon ELB - node.js

I have set up two EC2 instances that form a RabbitMQ cluster. I want to reach the cluster via an ELB. For testing purposes I opened all TCP ports and the HTTP port in the security group of both instances.
On the ELB I listen on HTTP 80 and TCP 5672 (the same terminations as on the instances). In the ELB's security group all TCP ports and HTTP 80 are open.
The health check is made via TCP on port 5672.
I use the amqp.node (amqplib) node.js client.
To connect to the ELB I use this call:
require('amqplib/callback_api').connect('amqp://<endpoint_of_elb>', ...)
But the connection isn't established, and I can't see why.
(Notes:
- the RabbitMQ EC2 instances are in the same AZ as the ELB, in the ELB's VPC
- I also can't see why the health check fails if I remove the HTTP 80 listener on the ELB)
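
For reference, a minimal connection sketch using amqplib's callback API with error logging (the URL, credentials and queue name below are placeholders, not taken from the question); at minimum this surfaces the reason the connection fails:

    const amqp = require('amqplib/callback_api');

    // Placeholder URL: replace user, password and the ELB endpoint with real values.
    amqp.connect('amqp://user:password@<endpoint_of_elb>:5672', (err, conn) => {
      if (err) {
        // A timeout here usually points at security groups / listeners,
        // an authentication error at the RabbitMQ user configuration.
        console.error('AMQP connection failed:', err.message);
        return;
      }
      conn.createChannel((chErr, ch) => {
        if (chErr) {
          console.error('Channel creation failed:', chErr.message);
          return;
        }
        ch.assertQueue('test', { durable: false });
        ch.sendToQueue('test', Buffer.from('hello'));
        console.log('Connected through the ELB and published a message');
      });
    });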

Related

AWS EC2 - how to set up a load balancer to reach a Docker container on an EC2 instance

Background
I have an EC2 instance with a Docker container running on port 3030.
In my Docker container there is a Node.js server exposing a REST API.
I just created an Application Load Balancer with a target group (HTTP: 80) which points to the above EC2 instance, in order to set up a public HTTP endpoint to send API requests to.
The DNS name of the Load Balancer is my-docker-test-server-dev-123456789.ap-southeast-1.elb.amazonaws.com.
Problem
I tried to send the HTTP request POST https://my-docker-test-server-dev-123456789.ap-southeast-1.elb.amazonaws.com/login
in order to try the login API in Postman,
but an error occurs:
HTTP 504: Gateway timeout
Update
I am using the default security group for my load balancer.
Inbound Rule
Type Protocol Port range Source Description - optional
All traffic All All 0.0.0.0/0 –
All traffic All All ::/0 –
All traffic All All sg-d987a2bc / default –
Update 2
I have now updated the target group to point to HTTP:3030 as suggested in the comments, but I still get the same error.
Health Check for the group:
unhealthy
Request timed out
Update 3
EC2 Instance > Security
Inbound Rule
Port range Protocol Source Security groups
22 TCP 0.0.0.0/0 launch-wizard-9
Based on the comments:
The issue was caused by an incorrectly configured security group (SG) on the instance and an incorrect target group (TG) port. In the first case, since the Docker application is exposed on port 3030 on the instance, the SG must allow inbound traffic on that port; that inbound SG rule was missing.
In the TG case, the original traffic port was 80. However, since the Docker container listens on port 3030, the TG port needed to be changed to 3030.
So the traffic flow looks as follows:
Client ---(HTTP:80)---> ALB ---> TG ---(HTTP:3030)---> Instance with Docker on port 3030
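
For completeness, a minimal sketch of the kind of server the container is assumed to run (the framework and route names are assumptions, not shown in the question); the important parts are that it binds to 0.0.0.0 and to port 3030, the port the SG and TG now allow:

    const express = require('express'); // assumed framework

    const app = express();
    app.use(express.json());

    // Hypothetical login route, standing in for the API the question mentions.
    app.post('/login', (req, res) => res.json({ ok: true }));

    // Simple endpoint the target group health check can hit.
    app.get('/health', (req, res) => res.send('ok'));

    // Bind to 0.0.0.0:3030 so traffic forwarded by the ALB reaches the container.
    app.listen(3030, '0.0.0.0', () => console.log('API listening on 3030'));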

Elastic Beanstalk listening for TCP connections

I have a Node.js app which listens for HTTP connections on port 3000 and also listens for TCP connections on port 5001 using Node's net library.
I am hosting the app using AWS Elastic Beanstalk and its classic load balancer.
My Elastic Beanstalk classic load balancer (CLB) listeners: (screenshot omitted)
Port 5001 is also enabled in the EC2 instance and load balancer security groups.
When trying to send a TCP packet to the load balancer's DNS name it goes through, but it never reaches the EC2 instance. Is there something else I have to configure for this to be possible?
The problem was that I had the net.listen host set to localhost instead of 0.0.0.0.
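
A minimal sketch of the fix, assuming a plain net server (the actual application code is not shown in the question):

    const net = require('net');

    const server = net.createServer((socket) => {
      socket.on('data', (chunk) => console.log('received:', chunk.toString()));
    });

    // Bind to 0.0.0.0 so connections forwarded by the load balancer are accepted;
    // binding to 'localhost' only accepts connections originating on the instance itself.
    server.listen(5001, '0.0.0.0', () => console.log('TCP server listening on 0.0.0.0:5001'));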

Building a secure HTTPs web server with Fargate + ACM + ALB

I am trying, with the simplest possible deployment, to get an HTTPS web server up and running in Fargate.
I have used AWS Certificate Manager (ACM) to create a public certificate.
I have an Application Load Balancer that is talking to the Fargate container on two ports:
80 for http and
443 for https
This is the problem: when I run my webserver on port 80 (http) and connect via the ALB, it works fine (not secure, but it serves up the html).
When I run my webserver on port 443 with TLS enabled, it does not connect via the ALB.
Another point is that when running my webserver with TLS enabled on port 443, I do not have the certificate or the certificate key, so I am confused about how to get those from Amazon.
Another question I have is: does it make sense for me to say that the ELB will communicate with the client over HTTPS but that the ELB can communicate with the container via HTTP? Is this secure?
My networking knowledge is very rusty.
does it make sense for me to say that the ELB will communicate with the client over HTTPS but that the ELB can communicate with the container via HTTP?
Yes. You should make sure your web server is accepting traffic from the ALB on port 80. This is done at the application level on the web server, and with your target group, which is what the ALB uses to determine how it routes traffic to your web server. This is the way it typically works:
client --(443)--> ALB --(80)--> web server
Some things to check:
Target group is configured to send traffic to your FG web server on port 80
Target group health check is configured to check port 80
FG task security group has ingress from ALB on port 80
Web server is configured to listen on port 80
Sidenote: You can configure your target group to send traffic to the target (web server in Fargate) on 443, but as you said, without the proper certificate setup in the container, you won't be able to properly terminate SSL and it just wouldn't work. You would need to upload your own cert to ACM for this to work, which sends you down a security rabbit hole, namely how to avoid baking your private key into your Docker image.
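
To make the recommended setup concrete, here is a minimal sketch of a container web server that listens on plain HTTP port 80 and lets the ALB terminate TLS with the ACM certificate (the response body and header handling are illustrative assumptions):

    const http = require('http');

    // The ALB terminates TLS using the ACM certificate, then forwards plain HTTP
    // to the task, so the container only needs to listen on port 80.
    const server = http.createServer((req, res) => {
      // x-forwarded-proto shows whether the client originally connected over https.
      const clientScheme = req.headers['x-forwarded-proto'] || 'http';
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end(`hello from Fargate (client used ${clientScheme})\n`);
    });

    server.listen(80, '0.0.0.0', () => console.log('web server listening on port 80'));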

Azure App hosted docker port 25

When creating an Azure App Service with a Docker image, is it possible to listen on ports other than 80 and 443 from the Docker image?
My requirement is that TCP port 25 from the Docker image is externally reachable.
As the Azure Web App sandbox documentation states under Networking Restrictions/Considerations:
Network endpoint listening
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
However, applications may create a socket which can listen for connections from within the sandbox. For example, two processes within the same app may communicate with one another via TCP sockets; connection attempts incoming from outside the sandbox, albeit they be on the same machine, will fail. See the next topic for additional detail.

Multiple HTTP routes on an Elastic Beanstalk load balancer for a Node app with multiple servers

I am having difficulty phrasing my question, so I could not find much information on it; I will explain:
I have a Node.js app that hosts a restify/express API on port 8081.
The same app also hosts a websocket server on port 8083.
All this works wonderfully on localhost by specifying the ports, but in a hosted environment it needs to run on port 80 (HTTP; omitting 443 for simplicity).
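
For reference, a minimal sketch of that setup, assuming express for the API and the ws package for the websocket server (the question only names restify/express, so the websocket library is an assumption):

    const express = require('express');   // assumed for the REST API
    const WebSocket = require('ws');       // assumed websocket library

    // REST API on port 8081.
    const app = express();
    app.get('/health', (req, res) => res.send('ok'));
    app.listen(8081, () => console.log('API listening on 8081'));

    // WebSocket server on port 8083.
    const wss = new WebSocket.Server({ port: 8083 });
    wss.on('connection', (socket) => {
      socket.on('message', (msg) => socket.send(`echo: ${msg}`));
    });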
I am using AWS Elastic Beanstalk (nginx server). When I deployed my app, it created an EC2 instance and an ELB (load balancer). The ELB has a public DNS name which I use to access the API on port 80. There are no special listeners configured (only 80 and 443), so I am not sure how it gets to the API on port 8081. The EC2 instance also only allows 80 and 443.
The API works fine when accessed via the ELB's public DNS name on port 80.
Now I have added the websocket server in there.
My problem is: I need another public DNS name on port 80 that goes to the socket server on port 8083 of the same Beanstalk app. How would I approach this?
I would appreciate any thoughts and ideas.
It appears that:
Elastic Beanstalk creates a Classic load balancer which does not support websockets.
Default nginx setup on AWS does not allow Upgrade headers.
However, I got it working in the following manner:
Default EB setup (with classic ELB) serves the API as it normally did.
Then I created an ALB (Application load balancer) from the EC2 dashboard.
I added a target that routes to my EC2 instance (the one EB created) on port 8083 (my websocket listener); my API runs on port 8081. Then I added the target to the new ALB on the Listeners tab.
This will allow traffic that hits the new ALB on port 80 to route to port 8083 of the server where my application is hosted.
In my .ebextensions file in the project, I added the following that will update nginx settings to allow the Upgrade header that is needed for websockets:
Add to .ebextensions
container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
        proxy_set_header Upgrade $http_upgrade;\
        proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
So basically I have two load balancers. The default one that routes 80 to 8081, and another (ALB) that routes 80 to 8083.
This is by no means perfect. Auto scaling/load balancing would probably not work. But for now it serves the API and websocket server from the same application.

Resources