How to use HTTPS on localhost:3000 - node.js

My application requirement:
"The URL of your web form, to be displayed in a frame in the Worker's web browser. This URL must use the HTTPS protocol."
My AWS EC2 instance has Node.js running on it. For some reason I am having issues running it as a production build with serve -s build.
But when I run npm start in my project folder, it starts a development server on port 3000 and I can access it via http://ec2----------.compute-1.amazonaws.com:3000/
But this does not work with HTTPS. Is there a way I can access the same URL using HTTPS? Something like:
https://ec2----------.compute-1.amazonaws.com:3000/
The approaches I have looked at so far are a reverse proxy and Nginx, but I could not understand them well.

If you use an Elastic Load Balancer in front of the EC2 instance, AWS provides a very easy way to get HTTPS working. If you want to access the instance directly, you will need to configure HTTPS in your Node.js app or use an HTTPS-capable service to proxy the traffic to your Node.js app.
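If you do want to terminate HTTPS in Node.js itself instead (no load balancer), a minimal sketch looks like the following. The key and cert paths are placeholders for wherever your private key and certificate live (for testing, a self-signed pair works, but browsers will warn about it):

// Minimal sketch: serving HTTPS directly from Node.js on port 3000.
// The key/cert paths are hypothetical; point them at your own files.
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('/etc/ssl/private/my-key.pem'),    // placeholder path
  cert: fs.readFileSync('/etc/ssl/certs/my-cert.pem'),    // placeholder path
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello over HTTPS\n');
}).listen(3000, () => {
  console.log('HTTPS server listening on port 3000');
});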

Step 1 : Choosing the Load Balancer
There are two choices when you create a load balancer:
Application Load Balancer : a good option if your application runs on particular ports, runs in dev mode, or needs path-based routing, because the routing decisions are made at the application layer. It can only listen for HTTP and HTTPS.
Classic Load Balancer : choose this if you need to make routing decisions at the transport layer.
I will continue with the Application Load Balancer, although most of the steps are the same for both.
Step 2 : Configuring the Load Balancer
A simple and quick configuration:
Name : name your load balancer.
Scheme :
internet-facing : choose this if you want to accept requests from clients over the internet.
internal : choose this if you want to accept requests from clients using a private IP address.
IP Address Type : ipv4
Listener
A listener is a process that checks for connection requests, using the protocol and port that you configured.
The two listeners you will typically configure on the Application Load Balancer are:
HTTP on port 80
HTTPS on port 443
Availability Zones
A load balancer's main job is to distribute traffic across different locations within a region. There are multiple Availability Zones in one region; these can be imagined as separate data centers within us-east-1. Each Availability Zone has its own subnets, but only one subnet can be selected per zone for the load balancer.
You need to select at least 2 such Availability Zones with distinct subnets. This basically lets the load balancer spread the load across at least 2 servers.
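If you prefer to script the load balancer creation instead of clicking through the console, here is a rough sketch using the AWS SDK for JavaScript; the name, subnets, and security group are placeholders you would replace with your own values:

// Sketch: create an internet-facing Application Load Balancer (AWS SDK for JavaScript v2).
// All names and IDs below are placeholders.
const AWS = require('aws-sdk');
const elbv2 = new AWS.ELBv2({ region: 'us-east-1' });

elbv2.createLoadBalancer({
  Name: 'my-app-alb',
  Scheme: 'internet-facing',
  IpAddressType: 'ipv4',
  Type: 'application',
  Subnets: ['subnet-aaaa1111', 'subnet-bbbb2222'],   // subnets in two different Availability Zones
  SecurityGroups: ['sg-0123456789abcdef0'],
}).promise()
  .then(data => console.log(data.LoadBalancers[0].DNSName))
  .catch(console.error);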
Step 3 : Configure Security Settings and Add Instance
Configuring security settings consists of specifying the certificate, since you selected the HTTPS listener in the previous step. You can learn how to get a certificate from AWS Certificate Manager. Here you have to select:
Certificate Type : choose an existing certificate from AWS Certificate Manager (ACM).
Certificate Name : pick your certificate from the drop-down list.
Security Policy : select the latest policy, e.g. ELBSecurityPolicy-2016-08.
Then select the existing security group made for your instance.
Step 4 : Target Groups
Create a target group. Name it according to what it listens for and where it sends traffic.
You have to specify the path and the port that the listener forwards traffic to (port 3000 for the dev server above).
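As a hedged sketch of Steps 3 and 4 with the AWS SDK for JavaScript (the ARNs, VPC ID, and the port 3000 target are assumptions for this example):

// Sketch: create a target group for the Node app on port 3000, then an HTTPS listener
// on port 443 that uses an ACM certificate and forwards traffic to that target group.
// The ARNs, VPC ID, and names are placeholders.
const AWS = require('aws-sdk');
const elbv2 = new AWS.ELBv2({ region: 'us-east-1' });

async function createHttpsListener() {
  const tg = await elbv2.createTargetGroup({
    Name: 'node-app-3000',
    Protocol: 'HTTP',
    Port: 3000,
    VpcId: 'vpc-0123456789abcdef0',
    HealthCheckPath: '/',
  }).promise();

  await elbv2.createListener({
    LoadBalancerArn: 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-app-alb/abc123',
    Protocol: 'HTTPS',
    Port: 443,
    SslPolicy: 'ELBSecurityPolicy-2016-08',
    Certificates: [{ CertificateArn: 'arn:aws:acm:us-east-1:123456789012:certificate/abc-123' }],
    DefaultActions: [{ Type: 'forward', TargetGroupArn: tg.TargetGroups[0].TargetGroupArn }],
  }).promise();
}

createHttpsListener().catch(console.error);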
Step 5 : Deploy
After you review the settings, deploy and create your load balancer. This takes care of all the housekeeping and management; it is like hiring a manager for your server traffic. You can go and meditate for some time now.
The load balancer will take about a minute to come up. Once it is active, copy the DNS name of the load balancer from the main load balancer dashboard, since we will need it in the next step. It will look something like this:
load-balancer-name-xxxxxxxxxx.us-east-x.xxx.amazonaws.com (A Record)
Step 6 : Map your domain name to the Load Balancer
Route 53 provides a reliable and cost-effective way to route visitors to websites by translating domain names (such as www.example.com) into the numeric IP addresses (such as 192.0.2.1) that computers use to connect to each other. AWS assigns URLs to your resources, such as load balancers. However, you might want a URL that is easy for users to remember. For example, you can map your domain name to a load balancer.
Go to Route 53 and select the hosted zone and the record set for your domain name.
You need to create a new record :
1) Leave the domain name blank.
2) Select Yes for Alias.
3) Paste the DNS link for the Load Balancer in the Alias Target.
4) Create.
This step routes your domain name to the DNS name of the load balancer, which solves our problem of handling traffic. The rest of the job is handled by the ELB, which turns its statistics into health reports, based on which you can create and replace instances.
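The same alias record can be created programmatically; here is a hedged sketch with the AWS SDK for JavaScript, where the hosted zone IDs, domain, and DNS name are placeholders (note that the AliasTarget hosted zone ID is the load balancer's canonical zone, not your domain's):

// Sketch: point a domain at the load balancer with an alias A record in Route 53.
// Zone IDs, domain, and DNS name are placeholders.
const AWS = require('aws-sdk');
const route53 = new AWS.Route53();

route53.changeResourceRecordSets({
  HostedZoneId: 'Z1EXAMPLE',                  // your domain's hosted zone
  ChangeBatch: {
    Changes: [{
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: 'www.example.com',
        Type: 'A',
        AliasTarget: {
          HostedZoneId: 'Z2EXAMPLE',          // the load balancer's canonical hosted zone ID
          DNSName: 'load-balancer-name-xxxxxxxxxx.us-east-x.elb.amazonaws.com',
          EvaluateTargetHealth: false,
        },
      },
    }],
  },
}).promise().then(console.log).catch(console.error);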
Have a great one!
Citation : https://sites.google.com/gwmail.gwu.edu/aws-tools/aws-elastic-load-balancer?authuser=0

Related

GCP: Allowing Public Ingress Web Traffic from the Load Balancer ONLY

Disclaimers: I come from an AWS background but am relatively new to GCP. I know there are a number of existing similar questions (e.g. here and here), but I still cannot get it to work, since the exact/detailed instructions are still missing. So please bear with me as I ask this again.
My simple design:
Public HTTP/S Traffic (Ingress) >> GCP Load Balancer >> GCP Servers
The GCP Load Balancer holds the SSL certificate and then uses port 80 for downstream connections to the servers. Therefore, the traffic from the LB to the servers is just HTTP.
My question:
How do I prevent the incoming HTTP/S public traffic from reaching the GCP servers directly, and instead only allow the Load Balancer (as well as its health check traffic)?
What I tried so far:
I went into Firewall Rules and removed the previous rule allowing ports 80/443 (ingress traffic) from 0.0.0.0/0. I then added (allowed) the external IP address of the Load Balancer.
At this point, I simply expected public traffic to be rejected but the Load Balancer's to be allowed. In reality, both seemed to be rejected; nothing reached the servers anymore. The Load Balancer's external IP didn't seem to be recognised.
Later I also noticed the health checks were no longer recognised, so they couldn't reach the servers and failed. Hence the instances were dropped by the Load Balancer.
Please also note that I cannot pursue the approach of simply removing the external IPs from the servers (although many people say this would work), because we still want to maintain direct SSH access to the servers (without using a bastion instance). Therefore I still need the external IPs on each and every web server.
Any clear (and kind) instructions will be very much appreciated. Thank you all.
You're able to set up HTTPS connectivity between your load balancer and your back-end servers while using the HTTP(S) load balancer. To achieve this you should install HTTPS certificates on your back-end servers and configure the web servers to use them. If you decide to completely switch to HTTPS and disable HTTP on your back-end servers, you should also switch your health check from HTTP to HTTPS.
To make health checks work again after removing the default firewall rule that allows connections from 0.0.0.0/0 to ports 80 and 443, you need to whitelist the subnets 35.191.0.0/16 and 130.211.0.0/22, which are the source IP ranges for health checks. You can find step-by-step instructions in the documentation. After that, access to your web servers will still be restricted, but your load balancer will be able to run health checks and serve your customers.
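As a rough sketch (not from the original answer), one way to whitelist those health-check ranges with the googleapis Node.js client is shown below; the project, network, and rule name are assumptions, and the gcloud CLI or the Cloud Console can of course do the same job:

// Sketch: allow the GCP health-check source ranges (35.191.0.0/16 and 130.211.0.0/22)
// to reach ports 80/443 on the backends, using the googleapis Node.js client.
// The project, network, and rule name are placeholders; assumes application default credentials.
const { google } = require('googleapis');

async function allowHealthChecks() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const authClient = await auth.getClient();
  const compute = google.compute({ version: 'v1', auth: authClient });

  await compute.firewalls.insert({
    project: 'my-gcp-project',
    requestBody: {
      name: 'allow-lb-health-checks',
      network: 'global/networks/default',
      direction: 'INGRESS',
      sourceRanges: ['35.191.0.0/16', '130.211.0.0/22'],
      allowed: [{ IPProtocol: 'tcp', ports: ['80', '443'] }],
    },
  });
}

allowHealthChecks().catch(console.error);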

Aws load balancer for Server Sent Events or Websockets

I'm trying to load balance a Node.js Server-Sent Events backend and I need to know if there is a way to distribute new connections to the instances with the fewest connected clients. The problem I have is that when scaling up, the routing keeps sending new connections to the already saturated instance, and since the connections are long-lived this simply won't work.
What options do I have for horizontal scaling long lived connections?
It looks like you want a Load Balancer that can provide both "sticky sessions" and use the "least connection" policy instead of "round-robin". Unfortunately, NGINX cannot provide this.
HAProxy (High Availability Proxy) allows for this:
backend bk_myapp
cookie MyAPP insert indirect nocache
balance leastconn
server srv1 10.0.0.1:80 check cookie srv1
server srv2 10.0.0.2:80 check cookie srv2
If you need ELB functionality and want to roll it all manually, take a look at this guide.
You might also want to make sure the classic AWS ELB "sticky session" configuration or the newer ALB "sticky session" option does not already meet your needs. The ELB normally sends connections to the upstream server with the least "load", and combined with sticky sessions that might be enough.
Since you are using AWS, I'd recommend Elastic Beanstalk for your Node.js application deployment. The official documentation provides good examples, like this one. Note that Beanstalk will automatically create an Elastic Load Balancer for you, which is what you're looking for.
By default, Elastic Beanstalk creates an Application Load Balancer for your environment when you enable load balancing with the Elastic Beanstalk console or the EB CLI. It configures the load balancer to listen for HTTP traffic on port 80 and forward this traffic to instances on the same port.
[...]
Note: Your environment must be in a VPC with subnets in at least two Availability Zones to create an Application Load Balancer. All new AWS accounts include default VPCs that meet this requirement. If your environment is in a VPC with subnets in only one Availability Zone, it defaults to a Classic Load Balancer. If you don't have any subnets, you can't enable load balancing.
Note that the configuration of a proper health check path is key to balancing requests properly, as you mentioned in your question.
In a load balanced environment, Elastic Load Balancing sends a request to each instance in an environment every 10 seconds to confirm that instances are healthy. By default, the load balancer is configured to open a TCP connection on port 80. If the instance acknowledges the connection, it is considered healthy.
You can choose to override this setting by specifying an existing resource in your application. If you specify a path, such as /health, the health check URL is set to HTTP:80/health. The health check URL should be set to a path that is always served by your application. If it is set to a static page that is served or cached by the web server in front of your application, health checks will not reveal issues with the application server or web container.
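A minimal sketch of such an always-served health path in the Node.js application itself (the /health path and port 80 simply mirror the defaults discussed above):

// Sketch: a /health path served directly by the Node.js app, so the load balancer's
// health check exercises the application rather than a static or cached page.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/health') {
    // Optionally verify database connections etc. here before answering.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end('OK');
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the app');
}).listen(80);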
EDIT: If you're looking for sticky sessions, as I described in the comments, follow the steps provided in this guide:
To enable sticky sessions using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
On the Description tab, choose Edit attributes.
On the Edit attributes page, do the following:
a. Select Enable load balancer generated cookie stickiness.
b. For Stickiness duration, specify a value between 1 second and 7 days.
c. Choose Save.
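The same stickiness settings can also be applied programmatically. Here is a hedged sketch with the AWS SDK for JavaScript; the target group ARN and the one-day duration are placeholders:

// Sketch: enable load-balancer-generated cookie stickiness on a target group.
// The target group ARN is a placeholder.
const AWS = require('aws-sdk');
const elbv2 = new AWS.ELBv2({ region: 'us-east-1' });

elbv2.modifyTargetGroupAttributes({
  TargetGroupArn: 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-sse-backend/abc123',
  Attributes: [
    { Key: 'stickiness.enabled', Value: 'true' },
    { Key: 'stickiness.lb_cookie.duration_seconds', Value: '86400' },  // 1 day
  ],
}).promise().then(console.log).catch(console.error);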

Azure Load balancing to Multiple Sites with Disaster Recovery

I am trying to configure applications on 2 different Azure sites, each with its own local load balancing capabilities. I can use Traffic Manager to distribute the traffic and use weighted routing to force everything to my primary site.
But I want this to occur automatically: I want to map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running to decide where to forward the traffic. This would save me from manually reconfiguring Traffic Manager in case of disaster.
Note: the services are hosted on IIS on IaaS VMs. ILB1 and ILB2 are the respective load balancers for Site1 and Site2.
Any help is appreciated!
Thanks
As far as I know, we can't add an internal load balancer as a Traffic Manager endpoint.
But I want this to occur automatically where I can map a service pointing to the internal load balancers at both sites and evaluate whether the sites are up and running or not to decide where to forward the traffic.
By default, we can set up multiple sites around the world with Traffic Manager; Traffic Manager will probe the health of all sites and forward network traffic to the right site.
We can use a Traffic Manager profile to manage network traffic. Traffic Manager profiles use traffic-routing methods to control the distribution of traffic to your cloud services or website endpoints.
For example, we create website 1 on site 1 (the primary site) and website 2 on site 2. If we use the weighted method, network traffic will go to site 1. When site 1 is down, Traffic Manager will notice that and route network traffic to site 2.
Traffic Manager works as a DNS-level load balancer; it will route traffic to an available site by default.
We can modify the Traffic Manager probe settings via the Azure portal.
By the way, if you want to use Traffic Manager, you can add a public IP address as a Traffic Manager endpoint.
Update:
As a workaround, we can deploy a site-to-site VPN between the two locations and use HAProxy as the load balancer, then add the two VMs to a public load balancer.
We can use HAProxy to set the primary website. For more information about HAProxy, please refer to this link.

How to setup SSL for instance inside the ELB and communicating with a node instance outside the ELB

I have created an architecture on AWS (I hope it is not wrong) using an ELB, Auto Scaling, RDS, and one Node EC2 instance outside the ELB. Now I can't figure out how to implement SSL on this architecture.
Let me explain this in brief:
I have created one Classic Load Balancer.
Created an Auto Scaling group.
Assigned instances to the Auto Scaling group.
And lastly I have created one instance that I am using for the node, and this is outside the Load Balancer and Auto Scaling group.
Now that I have implemented SSL on my Load Balancer, the inner instances communicate with the node instance over HTTP, and because the node instance is outside the load balancer, the request is getting blocked.
Can someone please help me implement SSL for this architecture?
Sorry if my architecture is confusing; if there is a better architecture, please let me know and I can change it.
Thanks,
When you have static content, your best bet is to serve it from Cloudfront using an S3 bucket as its origin.
About SSL, you could terminate SSL at the ELB level; follow the documentation.
Your ELB listens on two ports, 80 and 443, and communicates with your ASG instances using only their open port 80.
So when secure requests come to the ELB, it forwards them to your server (an EC2 instance in the ASG). Your server, listening on port 80, receives the request; if the request has the X-Forwarded-Proto: https header, the server does nothing special, otherwise it redirects/rewrites the URL to the secure one and the process restarts.
I hope this helps, and be careful of ERR_TOO_MANY_REDIRECTS.
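A minimal sketch of that check in a Node.js/Express app sitting behind the ELB (the header name is the one the ELB sets; Express and the 301 redirect are choices made for this example). The ERR_TOO_MANY_REDIRECTS warning applies here: only redirect when the forwarded protocol is http, never unconditionally:

// Sketch: redirect plain-HTTP requests that arrive via the ELB to HTTPS,
// based on the X-Forwarded-Proto header the ELB adds. Assumes Express.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // The ELB terminates TLS and forwards to port 80, so check the original scheme.
  if (req.headers['x-forwarded-proto'] === 'http') {
    return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
  }
  next();  // already HTTPS, or a direct request without the header
});

app.get('/', (req, res) => res.send('Hello over HTTPS'));
app.listen(80);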
Have you considered using an Application Load Balancer with two target groups and a listener rule?
If the single EC2 instance is just hosting static content, and is serving content on a common path (e.g. /static), then everything can sit behind a shared load balancer with one common certificate that you can configure with ACM.
"because the node instance is outside the load balancer so the request
is getting blocked."
If they're in the same VPC, you should check the security groups that you've assigned to your instances. Specifically, you're going to want to allow connections coming in to ports 443 and/or 80 on the stand-alone instance from the security group assigned to the load-balanced instances - let's call it 'sg-load_balancer' (check your AWS Console to see what the actual security group id is).
To check this, select the security group for the stand-alone instance and notice the tabs at the bottom of the page. Click on the 'Inbound' tab. You should see a set of rules. You'll want to make sure there's one for HTTP and/or HTTPS, and in the 'Source' field, instead of putting an IP address, put the security group for the load-balanced instances -- it'll start with sg- and the console will give you a dropdown showing valid entries.
If you don't see the security group for the load-balanced instances, there's a good chance they're not in the same VPC. To check, bring up the console and look for the VPC Id on each node; it'll start with vpc-. These should be the same. If not, you'll have to set up rules and routing tables to allow traffic between them. That's a bit more involved; take a look at a similar problem to get some ideas on how to solve it: Allowing Amazon VPC A to get to a new private subnet on VPC B?
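For completeness, the same inbound rule can be added with the AWS SDK for JavaScript; both security group IDs below are placeholders for the stand-alone instance's group and the load-balanced instances' group:

// Sketch: allow ports 80 and 443 on the stand-alone instance's security group,
// but only from the security group used by the load-balanced instances.
// Both group IDs are placeholders.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-0standalone0000000',                            // stand-alone node instance's group
  IpPermissions: [80, 443].map(port => ({
    IpProtocol: 'tcp',
    FromPort: port,
    ToPort: port,
    UserIdGroupPairs: [{ GroupId: 'sg-0loadbalancer000000' }], // group of the load-balanced instances
  })),
}).promise().then(console.log).catch(console.error);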

Azure Multiple Public IPs on a Virtual Machine Scale Set with Resource Manager

We are trying to migrate our platform from classic IIS hosting to a Service Fabric microservice architecture. So far we have learned that a Service Fabric cluster lives in a virtual machine scale set and uses a load balancer to communicate with the outside world.
The problem we are now facing is that we have different access points to our application, like one for the browser and one for the mobile app. Both use the standard HTTPS port, but they are different applications.
In IIS we could use host headers to direct traffic to one or the other application, but with Service Fabric we can't. The easiest way for us would be multiple public IPs; with those we could handle it via DNS.
We considered a couple of solutions with no success:
Load balancer with multiple public IPs. Problem: it looks like that only works with Cloud Services, and we need to work with the new Resource Manager world, where it seems not to be possible to have multiple public IPs.
Multiple public load balancers. Problem: scale sets accept only one load balancer instance per load balancer type.
Application Gateway. It seems not to support multiple public IPs or host header mapping.
Path mapping. Problem: we have the same path in different applications.
My questions are:
Is there any solution to use multiple IPs and map the traffic internally to different ports?
Is there any option to use host header mapping with Service Fabric?
Any suggestions on how I can solve my problem?
Piling on some Service Fabric-specific info to Eli's answer: yes, you can do all of this by using an http.sys-based self-hosted web server, such as Katana or WebListener in ASP.NET Core 1, to host multiple sites using different host names on a single VIP.
The piece of this that is currently missing in Service Fabric is a way to configure the hostname in your endpoint definition in ServiceManifest.xml. Service Fabric services run under Network Service by default on Windows, which means the service will not have access to create a URL ACL for the URL it wants to open an endpoint on. To help with that, when you specify an HTTP endpoint in an endpoint definition in ServiceManifest.xml, Service Fabric automatically creates the URL ACL for you. But currently there is no place to specify a hostname, so Service Fabric uses "+", which is the strong wildcard that matches everything.
For now, this is merely an inconvenience, because you'll have to create a setup entry point for your service that runs under elevated privileges and runs netsh to set up the URL ACL manually.
We do plan on adding a hostname field in ServiceManifest.xml to make this easier.
It's definitely possible to use ARM templates to deploy a Service Fabric cluster with multiple IPs. You'll just have to tweak the template a bit:
Create multiple IP address resources (e.g. using copy) - make sure you review all the resources using the IP and modify them appropriately
In the load balancer:
Add multiple frontendIPConfigurations, each tied to its own IP
Add loadBalancingRules for each port you want to redirect to the VMs from a specific frontend IP configuration
Add probes
As for host header mapping, this is handled by the Windows HTTP Server API (see this article). All you have to do is use a specific host name (or even a URL path) when configuring an HTTP listener URL (in OWIN/ASP.NET Core).
