AWS EC2 load balancing with SSL and Node.js - where am I going wrong?

I am fairly new to all this (being an app/mobile web developer).
I have set up an instance on EC2 which runs perfectly under HTTP.
I want to add HTTPS support because I want to write a service worker, which requires a secure origin.
I have used Amazon's Certificate Manager (ACM) to obtain a certificate.
I have created an ELB and added a listener on port 443 for HTTPS.
I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions, I attempted to create a CNAME record in my Route 53 setup pointing to the ELB's DNS name, but it would not accept it.
My understanding is that, if they are connected, my HTTP Node.js instance should now automatically support HTTPS.
This is currently not the case. My Node.js code is unchanged (it still only creates an HTTP server listening on port 3002).
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "site can't be reached" error.
This leads me to believe that the ELB and the EC2 instance are not associated. Can anyone suggest where I may have gone wrong? I have hunted the internet for three days and not found any step-by-step instructions for this.

You need to focus on this part of your question:
"I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions, I attempted to create a CNAME record in my Route 53 setup pointing to the ELB's DNS name, but it would not accept it."
Why are you not sure they are connected? You should be able to look at the health check section in the load balancer UI and see that the server instance is "connected" and healthy. If it isn't, then that is the first thing you need to fix.
Regarding the CNAME in Route53, what do you mean it wouldn't accept it? What are the details of that issue? (One common cause: if example.com is the zone apex, Route 53 will not accept a CNAME there; use an alias A record pointing at the ELB instead.) Until you have your DNS pointing to the load balancer you won't actually be using the load balancer, so that's another issue you need to fix.
"When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a 'site can't be reached' error."
If you had an error setting up the DNS then of course this isn't going to work; hold off on testing until the DNS is configured. Note also that https://example.com:3002 would bypass the load balancer entirely: the ELB terminates TLS on port 443, so the HTTPS test URL should be https://example.com/ (no port), while the instance itself keeps speaking plain HTTP on 3002.
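For reference, here is a minimal sketch of what the instance-side code can look like once the DNS and listener are sorted out; the port (3002) matches the question and the handler is purely illustrative. The ELB terminates TLS and forwards plain HTTP to the instance, so the Node.js code stays an ordinary HTTP server:

    // Minimal sketch: the Node.js app keeps serving plain HTTP; the ELB
    // terminates TLS on port 443 and forwards to this port over HTTP.
    const http = require('http');

    http.createServer((req, res) => {
      // The ELB reports the original scheme in X-Forwarded-Proto.
      const proto = req.headers['x-forwarded-proto'] || 'http';
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Served over ' + proto + ' via the load balancer\n');
    }).listen(3002);

The ELB listener then needs to forward HTTPS on port 443 to HTTP on instance port 3002 (instance protocol HTTP).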

Related

GCP: Allowing Public Ingress Web Traffic from the Load Balancer ONLY

Disclaimer: I come from an AWS background but am relatively new to GCP. I know there are a number of existing similar questions (e.g. here and here), but I still cannot get this to work since exact, detailed instructions are still missing. So please bear with me while I ask this again.
My simple design:
Public HTTP/S Traffic (Ingress) >> GCP Load Balancer >> GCP Servers
The GCP Load Balancer holds the SSL cert, and it uses port 80 for downstream connections to the servers. Traffic from the LB to the servers is therefore plain HTTP.
My question:
How do I prevent incoming public HTTP/S traffic from reaching the GCP servers directly, and instead allow only the load balancer (as well as its health-check traffic)?
What I tried so far:
I went into Firewall Rules, removed the existing rule allowing ingress to ports 80/443 from 0.0.0.0/0, and then added a rule allowing the external IP address of the load balancer.
At this point I expected public traffic to be rejected while the load balancer's traffic got through. In reality both were rejected: nothing reached the servers anymore, and the load balancer's external IP did not seem to be recognised.
Later I noticed that the health checks were no longer recognised either; they could not reach the servers and failed, so the instances were dropped by the load balancer.
Please also note that I cannot simply remove the external IPs from the servers (although many people say this would work). We still want direct SSH access to the servers (without using a bastion instance), so I still need an external IP on each and every web server.
Any clear (and kind) instructions will be very much appreciated. Thank you all.
You're able to set up HTTPS connectivity between your load balancer and your back-end servers while using the HTTP(S) load balancer. To achieve this, install HTTPS certificates on your back-end servers and configure the web servers to use them. If you decide to completely switch to HTTPS and disable HTTP on your back-end servers, you should also switch your health check from HTTP to HTTPS.
To make the health checks work again after removing the default firewall rule that allowed connections from 0.0.0.0/0 to ports 80 and 443, you need to whitelist the subnets 35.191.0.0/16 and 130.211.0.0/22, which are the source IP ranges for Google's health checks. You can find step-by-step instructions in the documentation. After that, access to your web servers will still be restricted, but your load balancer will be able to use its health checks and serve your customers. (This is also why allowing the load balancer's external IP did not help: proxied traffic and health checks reach the backends from these Google-internal ranges, not from the LB's public address.)
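A minimal gcloud sketch of such a rule, assuming a default network and a web-server target tag on the instances (both are placeholders for your own values):

    # Allow health checks and proxied LB traffic to reach the backends.
    # 35.191.0.0/16 and 130.211.0.0/22 are Google's documented source ranges.
    gcloud compute firewall-rules create allow-lb-and-health-checks \
        --network=default \
        --direction=INGRESS \
        --allow=tcp:80,tcp:443 \
        --source-ranges=35.191.0.0/16,130.211.0.0/22 \
        --target-tags=web-server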

How to set up SSL for instances inside the ELB communicating with a node instance outside the ELB

I have created an architecture on AWS (I hope it is not wrong) using an ELB, autoscaling, RDS, and one Node.js EC2 instance outside the ELB. Now I cannot work out how to implement SSL on this architecture.
Let me explain this in brief:
I have created one Classic Load Balancer.
Created one autoscaling group.
Assigned instances to the autoscaling group.
And lastly I have created one instance that I am using for Node.js, and this is outside the load balancer and autoscaling group.
Now that I have implemented SSL on my load balancer, the inner instances communicate with the node instance over plain HTTP, and because the node instance is outside the load balancer the requests are getting blocked.
Can someone please help me implement SSL for this architecture?
Sorry if my architecture is confusing; if there is a better architecture possible then please let me know and I can change it.
Thanks,
When you have static content, your best bet is to serve it from CloudFront using an S3 bucket as its origin.
As for SSL, you can terminate SSL at the ELB level; follow the documentation.
Your ELB listens on two ports, 80 and 443, and communicates with your ASG instances only over their open port 80.
So when secure requests come to the ELB, it forwards them to your server (an EC2 instance in the ASG). Your server, listening on port 80, receives the request; if the request has the X-Forwarded-Proto: https header, the server does nothing special, otherwise it redirects the client to the secure URL and the process restarts.
I hope this helps, and be careful of ERR_TOO_MANY_REDIRECTS.
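A minimal Node.js sketch of that redirect logic, using only the built-in http module (the port and response body are illustrative):

    // Minimal sketch: redirect plain-HTTP requests to HTTPS based on the
    // X-Forwarded-Proto header the ELB adds. Redirecting only when the
    // header is exactly 'http' avoids ERR_TOO_MANY_REDIRECTS loops.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.headers['x-forwarded-proto'] === 'http') {
        res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
        res.end();
        return;
      }
      // Already HTTPS at the ELB, or a direct request (e.g. a health check
      // with no X-Forwarded-Proto header): serve it normally.
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('ok');
    }).listen(80);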
Have you considered using an Application Load Balancer with two target groups and a listener rule?
If the single EC2 instance is just hosting static content, and is serving content on a common path (e.g. /static), then everything can sit behind a shared load balancer with one common certificate that you can configure with ACM.
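As a hedged sketch, such a listener rule could be created with the AWS CLI roughly like this; the ARNs, priority, and the /static path are placeholders:

    # Hypothetical sketch: send /static/* to the stand-alone instance's target
    # group; all other paths fall through to the default (ASG) target group.
    aws elbv2 create-rule \
        --listener-arn <https-listener-arn> \
        --priority 10 \
        --conditions Field=path-pattern,Values='/static/*' \
        --actions Type=forward,TargetGroupArn=<static-target-group-arn>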
"because the node instance is outside the load balancer so the request
is getting blocked."
If they're in the same VPC you should check the security group that you've assigned to your instances. Specifically, you want connections coming in to ports 443 and/or 80 on the stand-alone instance to be allowed from the security group assigned to the load balancer instances; let's call that 'sg-load_balancer' (check your AWS Console to see what the actual security group ID is).
To check this, select the security group for the stand-alone instance and notice the tabs at the bottom of the page. Click on the 'Inbound' tab and you should see a set of rules. You'll want to make sure there's one for HTTP and/or HTTPS, and in the 'Source' field, instead of putting an IP address, put the security group of the load balancer instances; it'll start with sg- and the console will give you a dropdown of valid entries.
If you don't see the security group for the load balancer instances, there's a good chance they're not in the same VPC. To check, bring up the console and look for the VPC ID on each node (it starts with vpc-). These should be the same. If not, you'll have to set up rules and routing tables to allow traffic between them. That's a bit more involved; take a look at a similar problem for ideas on how to solve it: Allowing Amazon VPC A to get to a new private subnet on VPC B?
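A minimal AWS CLI sketch of that inbound rule, with placeholder group IDs:

    # Hypothetical sketch: allow HTTPS to the stand-alone instance only from
    # members of the load balancer's security group (repeat for port 80).
    aws ec2 authorize-security-group-ingress \
        --group-id <standalone-instance-sg-id> \
        --protocol tcp \
        --port 443 \
        --source-group <load-balancer-sg-id>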

Forward from AWS ELB to insecure port on the EC2 instance

I fear that this might be a programming question, but I am also hopeful that it is common enough that you might have some suggestions.
I am moving to a fail-over environment using AWS elastic load balancers to direct the traffic to the EC2 instances. Currently, I have set up the ELB with a single EC2 instance behind it. You will see why in a moment. This is still in test mode, although it is delivering content to my customers using this ELB -> EC2 path.
In each of my production environments (I have two) I have an AWS certificate on the load balancer and a privately acquired security certificate on the EC2 instance. The load balancer listeners are configured to send traffic received on port 443 to the secure port (443) on the EC2 instance. This is working; however, as I scale up to more EC2 instances behind the load balancer, I have to buy a security certificate for each of these EC2 instances.
Using a recommendation that was proposed to me, I have set up a test environment with a new load balancer and its configured EC2 server. This ELB forwards traffic received on its port 443 to port 80 on the EC2 instance. I am told that this is the way it should be done - limit encryption/decryption to the load balancer and use unencrypted communication between the load balancer and its instances.
Finally, here is my problem. The HTML pages served by this application use relative references to the embedded scripts and other artifacts within each page. When the request reaches the EC2 instance (the application server) it has been demoted to HTTP, regardless of what it was originally. This means that the references to these embedded artifacts are rendered as insecure (HTTP), and because the original page reference was secure (HTTPS), the browser refuses to load these insecure resources.
I am already using the X-Forwarded-Proto header within the application to determine whether the original request at the load balancer was HTTP or HTTPS. I am hoping against hope that there is some parameter on the EC2 instance that tells it to render relative references in accordance with the received X-Forwarded-Proto header. Barring that, do you have any ideas about how others have solved this problem?
Thank you for your time and consideration.
First of all, it is the right way to go: terminate SSL at the ELB/ALB and assign the EC2 instances a security group that only accepts traffic from the ELB/ALB.
However, responding with https URLs based on the X-Forwarded-Proto request header (or based on custom configuration) needs to be handled in your application code or web server.
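A minimal Node.js sketch of that idea, deriving the external scheme from X-Forwarded-Proto when building absolute URLs (the page content is illustrative):

    // Minimal sketch: build absolute URLs from the scheme the load balancer
    // saw, so HTTPS pages never embed plain-HTTP references.
    const http = require('http');

    function externalBaseUrl(req) {
      // Prefer the original scheme; fall back to http for direct requests.
      const proto = req.headers['x-forwarded-proto'] || 'http';
      return proto + '://' + req.headers.host;
    }

    http.createServer((req, res) => {
      const base = externalBaseUrl(req);
      res.writeHead(200, { 'Content-Type': 'text/html' });
      // Anything emitted as an absolute URL should be built from base;
      // better still, emit relative or scheme-relative references.
      res.end('<script src="' + base + '/js/app.js"></script>');
    }).listen(80);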

DNS and Google Cloud Platform load balancer

I am trying to use google cloud platform for the first time and seem a bit hung up on something that I would think should be easy.
I created an instance group and am trying to create a load balancer to point both HTTP and HTTPS traffic at that instance group. When I configured the front end for the load balancer I added both HTTP and HTTPS; however, doing so created two IP addresses, and I can only point the DNS at one of these records. I am assuming I am just missing a simple step, as I am used to working with AWS.
Any help would be much appreciated.
I think I answered my own question. I just pointed the DNS at the HTTPS IP address and both HTTP and HTTPS requests ended up working. (Each frontend gets its own ephemeral IP by default; reserving a single static external IP and selecting it for both the HTTP and HTTPS frontends avoids ending up with two addresses at all.)
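As a hedged gcloud sketch, reserving one static IP and attaching it to both forwarding rules avoids the two-address problem entirely; the rule and proxy names are placeholders:

    # Hypothetical sketch: one global static IP shared by the HTTP and HTTPS
    # frontends, so DNS needs only a single A record.
    gcloud compute addresses create lb-ip --global

    gcloud compute forwarding-rules create http-rule \
        --global --address=lb-ip --ports=80 --target-http-proxy=my-http-proxy

    gcloud compute forwarding-rules create https-rule \
        --global --address=lb-ip --ports=443 --target-https-proxy=my-https-proxy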

Can CloudFlare perform automatic failover to a different backend?

I am looking for an easy way to fail over to a different DC quickly. Does CloudFlare offer anything special in this regard, such as health checks, or is it just like a standard DNS service?
Update: CloudFlare started a closed beta for the Traffic Manager feature, which allows exactly this kind of failover:
https://www.cloudflare.com/traffic-manager/
AWS Failover:
The following solution seems to work well when you are hosting your backend system on AWS:
I set up an AWS Route 53 zone with a separate domain (e.g. failover-example.com). Route 53 allows you to set up health checks on the backend server (e.g. the load balancer) with DNS failover. AWS will remove the unhealthy backend system from the DNS record list.
In CloudFlare I set up a CNAME for the example.com record pointing to failover-example.com and activate the CloudFlare proxy on example.com.
The result is that the browser resolves example.com to a CloudFlare IP address. CloudFlare queries the AWS DNS server to look up failover-example.com, fetches the content from the resolved IP address, and returns the content to the browser.
In my tests the switch to the other backend system occurred after about 20 seconds.
The separate domain is required because CloudFlare does not route the traffic through the proxy when the CNAME points to a subdomain of example.com.
In theory this failover works with any DNS-failover-capable service, not only Route 53: the browser always connects to CloudFlare, so a DNS failover of the backend system never affects the user's browser.
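A hedged sketch of the Route 53 side, creating the primary half of a failover record pair via the AWS CLI (the zone ID, health check ID, and IP are placeholders; a matching SECONDARY record points at the other DC):

    # Hypothetical sketch: PRIMARY failover record for failover-example.com,
    # withdrawn automatically when its health check fails.
    aws route53 change-resource-record-sets \
        --hosted-zone-id <hosted-zone-id> \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "failover-example.com",
              "Type": "A",
              "SetIdentifier": "primary",
              "Failover": "PRIMARY",
              "TTL": 60,
              "ResourceRecords": [{"Value": "203.0.113.10"}],
              "HealthCheckId": "<health-check-id>"
            }
          }]
        }'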
We don't have automatic failover at this time (something we're looking at). We can support the additional DNS entries in your zone file, of course, but you would currently have to make the change manually in that circumstance.
To add: in the meantime, I'd recommend looking at https://runbook.io
Several other DIY options:
http://blog.booru.org/?p=12
https://vpsboard.com/topic/3341-running-your-own-failover-dns-setup/
https://github.com/marccerrato/python-dns-failover
You'd want to decide if these are the right options for you, of course.
