Route53 + Elastic Load Balancer not responding, but individual instances are working - node.js

This is a confusing situation, but let me do my best to present my problem.
I am trying my hand at setting up an aws app architecture by following this blueprint: https://s3.amazonaws.com/awsmedia/architecturecenter/AWS_ac_ra_web_01.pdf
I don't require the web server part of it, so the components that I am trying to set up are Route53 -> Elastic Load Balancer -> (A subnet containing two ec2 instances that contain my nodejs app)
I have created a hosted zone on Route53, and in it I created a record set with an alias pointing to the ELB.
At first, I did not set up NGINX on my EC2 instances, and in my ELB configuration I registered my EC2 instances on the port on which the application runs, i.e. 9000. At this point, if I tried to access my app via the domain name, the page was unreachable, and if I tried to access it via the ELB DNS name, it returned a 504.
Then I set up Nginx on my instances and registered the instances with the ELB on port 80. This time the ELB returned a 503; however, the page is still unreachable via Route53.
I am using an application load balancer with HTTPS protocol.
So, at this point, I can't access my app via Route53, and I get a 503 when I access it via the ELB DNS name; however, if I point my browser at the public DNS of either of my EC2 instances, I get responses from my APIs.
Can anyone help me with this?

Related

AWS Route 53 blocking APIs

I have a NodeJS server up and running on AWS EC2. It is a t2.micro with no load balancers and no auto-scaling. On the security groups, I have:
Next, I have my domains set up on Route 53, with the nameservers configured at name.com, and a wildcard SSL certificate from GoDaddy.com which I have imported into ACM. I have created a record on Route 53 pointing to the EC2 Elastic IP. Everything is set up as it should be.
Now here comes the problem. Whenever I try to hit my server endpoint using the Route 53 endpoint, it fails after two or three calls, but the Elastic IP works fine. The same thing happens if I point the Route 53 URL at another resource such as CloudFront.
Normally, this wouldn't bother me if I had built this server for mobile apps only. But the issue is that the web front end and the server won't communicate until they are both on HTTPS.
So, any ideas?
I'll provide more information if needed.

Adding domain name to ECS application with AWS ELB

I have an application running on an AWS ECS cluster with 2 instances. I'm using the EC2 launch type for ECS. I also have an application load balancer attached to this ECS cluster, which uses dynamic port mapping. Right now, the application works fine with the load balancer's domain name.
I'm planning to add SSL for the load balancer and a domain name for my application. For simplicity, I'm planning to use AWS ACM to create SSL certificates for HTTPS connectivity. But I'm not very familiar with domain name registration.
So I'm not sure what to attach this domain to if I go for a new domain registration. What IP do I use for domain registration? Or, if I already have a domain name, can I attach it to my application?
But still, I'm not sure where to point it. Any help regarding attaching a domain to an app with ECS and an AWS ALB will be appreciated.
Thanks in advance.
Basically, you have to create an A record in your DNS pointing to the ELB.
Amazon has Route53 for registering domains; if your domain is registered with Route53, it's as easy as selecting the ELB from the list in the Route53 console (an alias A record).
If you host your domain with a different registrar (e.g. GoDaddy), then make sure your ELB is publicly available and create a CNAME record pointing at the ELB's DNS name; don't use a raw A record with an IP address, because the ELB's IP addresses change.
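As a sketch, the alias record described above corresponds to a Route 53 change batch like the following, whether you submit it via the console, the CLI, or an SDK's ChangeResourceRecordSets call. The domain, ELB DNS name, and zone ID below are placeholders; note that the AliasTarget's HostedZoneId is the ELB's own canonical zone ID (shown on the ELB's description page), not the ID of your hosted zone.

```javascript
// Build the change batch for an alias A record pointing at an ELB.
// All identifiers below are placeholders.
function aliasRecordChangeBatch(domain, elbDnsName, elbHostedZoneId) {
  return {
    Changes: [
      {
        Action: 'UPSERT',
        ResourceRecordSet: {
          Name: domain,
          Type: 'A',
          AliasTarget: {
            DNSName: elbDnsName,
            // The ELB's canonical hosted zone ID, NOT your own zone's ID.
            HostedZoneId: elbHostedZoneId,
            EvaluateTargetHealth: false,
          },
        },
      },
    ],
  };
}

const batch = aliasRecordChangeBatch(
  'www.example.com.',
  'my-elb-1234567890.us-east-1.elb.amazonaws.com.',
  'ZEXAMPLE12345' // placeholder zone ID
);
console.log(JSON.stringify(batch, null, 2));
```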

How to configure Nodejs that is in AWS EC2 Instance to accept HTTPS request from client side

I would like to ask how to configure Node.js (backend/server) to accept HTTPS requests from the client side (front end).
What we did:
Registered a domain name in AWS.
Requested an SSL certificate in ACM.
Created a bucket in S3 and stored our front-end code (Angular 5) in it.
Created a distribution in CloudFront, attached our custom SSL certificate there, and connected it to the bucket in S3.
We also set up an EC2 instance and stored our back-end code (Node.js) there.
In our front-end code we connect to the IP of our EC2 instance so that we can reach the backend.
The problem:
The front end can't access the backend on the EC2 instance because the front end is served over HTTPS and the backend over HTTP (we don't know how to go from HTTP to HTTPS on the AWS EC2 instance).
Do you know how to set up a web app in AWS where the front-end and back-end code are separated?
What did we miss?
If I understand you correctly, you have a Cloudfront distribution serving angular which is then attempting to connect to an EC2 instance - I presume the IP address or public DNS entry for the EC2 is hard-coded into the angular code.
This is not a good arrangement - if your EC2 goes down or the IP address changes you will need to push a new site to S3 - and then this change will take time to propagate through Cloudfront.
What you should rather be doing is this.
Create an application load balancer.
Create a target group and add your EC2 to that target group.
Add a listener on the ALB, listening on the port your web app connects on, with a rule that forwards to the HTTP port of the back-end EC2.
Add a Route 53 DNS alias record for the ALB (because ALBs do sometimes go away or change their IP addresses).
Change your front-end code to point at the Route 53 alias record.
(This is an incredibly simplistic way of doing things that leaves your EC2 open to the internet etc etc).
You should also give serious thought to putting your EC2 into an autoscaling group that spans at least two availability zones, and to setting its minimum size to ensure at least two servers are running at any one time.
AWS EC2 instances can go away at any time, and when they do your app goes down with them.
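The last step above, pointing the front end at the alias record instead of a hard-coded instance address, can be as small as building every API URL from one stable base. In this sketch, api.example.com is a placeholder for your own Route 53 record:

```javascript
// Single stable base for all back-end calls; only this constant changes
// when the environment does, not every request site in the code.
const API_BASE = 'https://api.example.com'; // placeholder Route 53 alias

function apiUrl(path) {
  return new URL(path, API_BASE).toString();
}

// Example call site (fetch is available in browsers and in Node 18+).
async function getUsers() {
  const resp = await fetch(apiUrl('/users'));
  if (!resp.ok) throw new Error(`API error: ${resp.status}`);
  return resp.json();
}
```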

How to setup SSL for instance inside the ELB and communicating with a node instance outside the ELB

I have created an architecture on AWS (I hope it is not wrong) using an ELB, autoscaling, RDS, and one Node.js EC2 instance outside the ELB. Now I cannot figure out how to implement SSL on this architecture.
Let me explain this in brief:
I have created one Classic Load Balancer.
Created an autoscaling group.
Assigned instances to the autoscaling group.
And lastly, I have created one instance that I am using for Node.js, and this one is outside the load balancer and autoscaling group.
Now that I have implemented SSL on my load balancer, the instances behind it communicate with the Node instance over HTTP, and because the Node instance is outside the load balancer, the request is getting blocked.
Can someone please help me implement SSL for this architecture?
Sorry if my architecture is confusing; if there is any better architecture possible, please let me know and I can change mine.
Thanks,
When you have static content, your best bet is to serve it from Cloudfront using an S3 bucket as its origin.
About SSL, you could set up SSL at the ELB level; follow the documentation.
Your ELB listens on two ports, 80 and 443, and communicates with your ASG instances only over their open port 80.
So when secure requests come to the ELB, it forwards them to your server (the EC2 in the ASG). Your server, listening on port 80, receives the request; if the request carries the header X-Forwarded-Proto: https, the server serves it as-is; otherwise it redirects/rewrites the URL to the secure one and the process restarts.
I hope this helps; be careful of ERR_TOO_MANY_REDIRECTS.
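That check can be sketched as a small framework-agnostic helper. The guard on X-Forwarded-Proto is exactly what avoids ERR_TOO_MANY_REDIRECTS: without it, the server would also redirect the plain-HTTP traffic the ELB forwards to it, forever.

```javascript
// Decide whether a request that arrived over plain HTTP from the ELB
// originally came in as HTTPS. Returns the HTTPS URL to redirect to,
// or null if the request was already secure and should be served as-is.
function httpsRedirectTarget(headers, host, url) {
  if (headers['x-forwarded-proto'] === 'https') {
    return null; // already secure at the ELB; serve normally
  }
  return `https://${host}${url}`;
}
```

In an Express app, for instance, this logic would typically live in a middleware that calls res.redirect() when the function returns a URL and next() when it returns null.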
Have you considered using an Application Load Balancer with two target groups and a listener rule?
If the single EC2 instance is just hosting static content, and is serving content on a common path (e.g. /static), then everything can sit behind a shared load balancer with one common certificate that you can configure with ACM.
"because the node instance is outside the load balancer so the request is getting blocked."
If they're in the same VPC you should check the security group that you've assigned to your instances. Specifically you're going to want to allow connections coming in to the ports 443 and/or 80 on the stand-alone instance to be accessible from the security group assigned to the load balancer instances - let's call those 'sg-load_balancer' (check your AWS Console to see what the actual security group id is).
To check this - select the security group for the stand-alone instance, notice the tabs at the bottom of the page. Click on the 'Inbound' tab. You should see a set of rules... You'll want to make sure there's one for HTTP and/or HTTPS and in the 'Source' instead of putting the IP address put the security group for the load balancer instances -- it'll start with sg- and the console will give you a dropdown to show you valid entries.
If you don't see the security group for the load balancer instances, there's a good chance they're not in the same VPC. To check, bring up the console and look for the VPC ID on each node; it'll start with vpc-. These should be the same. If not, you'll have to set up rules and routing tables to allow traffic between them. That's a bit more involved; take a look at a similar problem to get some ideas on how to solve it: Allowing Amazon VPC A to get to a new private subnet on VPC B?
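As a sketch, the inbound rule described above (allow the stand-alone instance's port from the load balancer instances' security group rather than from an IP address) has this shape when expressed as parameters for EC2's AuthorizeSecurityGroupIngress action. Both group IDs below are placeholders; read the real ones from your console.

```javascript
// Parameters for an ingress rule that lets the load balancer instances'
// security group reach the stand-alone instance on one port.
function elbIngressRule(standaloneSgId, loadBalancerSgId, port) {
  return {
    GroupId: standaloneSgId,
    IpPermissions: [
      {
        IpProtocol: 'tcp',
        FromPort: port,
        ToPort: port,
        // The source is a security group, not a CIDR block.
        UserIdGroupPairs: [{ GroupId: loadBalancerSgId }],
      },
    ],
  };
}

const rule = elbIngressRule('sg-0standalone000', 'sg-0loadbalancer0', 443);
console.log(JSON.stringify(rule, null, 2));
```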

Forward from AWS ELB to insecure port on the EC2 instance

I fear that this might be a programming question, but I am also hopeful that it is common enough that you might have some suggestions.
I am moving to a fail-over environment using AWS elastic load balancers to direct the traffic to the EC2 instances. Currently, I have set up the ELB with a single EC2 instance behind it. You will see why in a moment. This is still in test mode, although it is delivering content to my customers using this ELB -> EC2 path.
In each of my production environments (I have two) I have an AWS certificate on the load balancer and a privately acquired security certificate on the EC2 instance. The load balancer listeners are configured to send traffic received on port 443 to the secure port (443) on the EC2 instance. This is working; however, as I scale up to more EC2 instances behind the load balancer, I have to buy a security certificate for each of these EC2 instances.
Using a recommendation that was proposed to me, I have set up a test environment with a new load balancer and its configured EC2 server. This ELB server sends messages received on its port 443 to port 80 on the EC2 system. I am told that this is the way it should be done - limit encryption/decryption to the load balancer and use unencrypted communication between the load balancer and its instances.
Finally, here is my problem. The HTML pages being served by this application use relative references to the embedded scripts and other artifacts within each page. When the request reaches the EC2 instance (the application server), it has been demoted to HTTP, regardless of what it was originally. This means that the references to these embedded artifacts are rendered as insecure (HTTP). Because the original page reference was secure (HTTPS), the browser refuses to load these insecure resources.
I am already using the header X-Forwarded-Proto within the application to determine if the original request at the load balancer was HTTP or HTTPS. I am hoping against hope that there is some parameter in the EC2 instance that tells it to render relative reference in accordance to the received X-Forwarded-Proto header. Barring that, do you have any ideas about how others have solved this problem?
Thank you for your time and consideration.
First of all, it is the right way to go: terminate SSL at the ELB/ALB and give the EC2 a security group that only accepts traffic from the ELB/ALB.
However, responding with HTTPS URLs based on the X-Forwarded-Proto request header, or based on custom configuration, needs to be handled in your application code or web server.
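A minimal sketch of that application-side handling: build any absolute URL the page emits from the scheme the load balancer reports in X-Forwarded-Proto, falling back to plain HTTP when the header is absent (for example, when hitting the instance directly).

```javascript
// Build an absolute URL using the scheme the client originally used,
// as reported by the load balancer in X-Forwarded-Proto.
function absoluteUrl(headers, host, path) {
  const proto = headers['x-forwarded-proto'] || 'http';
  return `${proto}://${host}${path}`;
}
```

In Express specifically, setting app.set('trust proxy', true) makes req.protocol reflect X-Forwarded-Proto, so templates can use req.protocol instead of reading the header by hand.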
