NodeJS/ExpressJS TLS configuration on load balancer

I have 3 app servers running a NodeJS/ExpressJS app and a load balancer in front of them that routes the incoming requests (round robin). This setup is currently HTTP based and we would like to install a TLS certificate to make it HTTPS. Our devops guy has left the company and we have a huge gap in our understanding and maintenance of this setup. I am pretty sure it does not have Apache or nginx in front of the app servers. So how does load balancing work without nginx or Apache? Does the load balancer have to run on a server by itself? If so, is that where we need to install the TLS/SSL certificate? All servers we use are Linux based.

Generally, HTTPS termination happens on the load balancer, and a plain HTTP connection is made between the load balancer and the servers. If you do this, make sure the servers are not publicly exposed (i.e., not accessible directly from the internet, only through the load balancer).
Install the certificate on the load balancer and configure it to make HTTP requests from the load balancer to the servers. I am assuming you are using some IaaS (such as AWS, GCP, etc.); these settings are readily covered in their documentation.
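As an illustration, a Node/Express app server behind such a load balancer keeps serving plain HTTP; it only needs to be told it sits behind a TLS-terminating proxy. A minimal sketch (the port is an assumption):

    const express = require('express');
    const app = express();

    // Tell Express it sits behind a TLS-terminating load balancer, so
    // req.protocol and req.secure reflect the X-Forwarded-Proto header
    // the load balancer adds to each request.
    app.set('trust proxy', true);

    app.get('/', (req, res) => {
      // req.secure is true when the client reached the load balancer over HTTPS
      res.send(`secure: ${req.secure}, protocol: ${req.protocol}`);
    });

    // Plain HTTP between the load balancer and this app server
    app.listen(8080);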

Typically in AWS, the AWS Application Load Balancer is used for load balancing; it is a managed service where you pay per hour and based on traffic. It can also terminate TLS/SSL at the load balancer level (where it's also possible to use free SSL certificates from AWS Certificate Manager).
Since this load balancer is a managed service, high availability and fault tolerance are taken care of by AWS.

Related

GCP: Allowing Public Ingress Web Traffic from the Load Balancer ONLY

Disclaimer: I come from an AWS background but am relatively new to GCP. I know there are a number of existing similar questions (e.g., here and here), but I still cannot get this to work since the exact/detailed instructions are still missing. So please bear with me asking this again.
My simple design:
Public HTTP/S Traffic (Ingress) >> GCP Load Balancer >> GCP Servers
The GCP Load Balancer holds the SSL cert and uses port 80 for downstream connections to the servers. Therefore, LB-to-server traffic is just HTTP.
My question:
How do I prevent the incoming public HTTP/S traffic from reaching the GCP servers directly, and instead only allow the load balancer (as well as its health-check traffic)?
What I tried so far:
I went into Firewall Rules and removed the previous rule allowing ingress traffic to ports 80/443 from 0.0.0.0/0. Then I added (allowed) the external IP address of the load balancer.
At this point, I expected public traffic to be rejected but the load balancer's to get through. In reality, both seemed to be rejected; nothing reached the servers anymore. The load balancer's external IP didn't seem to be recognised.
Later I noticed the health checks were no longer recognised either, so they couldn't reach the servers and failed. Hence the instances were dropped by the load balancer.
Please also note: I cannot pursue the approach of simply removing the external IPs from the servers (although many people say this would work), because we still want to maintain direct SSH access to the servers (without using a bastion instance). Therefore I still need the external IPs on each and every web server.
Any clear (and kind) instructions will be very much appreciated. Thank you all.
You are able to set up HTTPS connectivity between your load balancer and your back-end servers while using the HTTP(S) load balancer. To achieve this, install HTTPS certificates on your back-end servers and configure the web servers to use them. If you decide to switch completely to HTTPS and disable HTTP on your back-end servers, you should also switch your health check from HTTP to HTTPS.
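For a Node-based back end, that could look like the following minimal sketch (the certificate paths and health-check path are placeholders):

    const fs = require('fs');
    const https = require('https');
    const express = require('express');

    const app = express();

    // Hypothetical health-check path; match it in the health check config.
    app.get('/healthz', (req, res) => res.send('ok'));

    // Placeholder paths: point these at the certificate and key installed
    // on this back-end server.
    const options = {
      key: fs.readFileSync('/etc/ssl/private/server.key'),
      cert: fs.readFileSync('/etc/ssl/certs/server.crt'),
    };

    // Serve HTTPS on 443; if HTTP on port 80 is disabled, the load
    // balancer's health check must be switched to HTTPS as well.
    https.createServer(options, app).listen(443);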
To make health checks work again after removing the default firewall rule that allows connections from 0.0.0.0/0 to ports 80 and 443, you need to whitelist the subnets 35.191.0.0/16 and 130.211.0.0/22, which are the source IP ranges for GCP health checks. You can find step-by-step instructions in the documentation. After that, access to your web servers will still be restricted, but your load balancer will be able to run health checks and serve your customers.

Can Azure Application Gateway distribute request to specific URL?

I have a use case where my cluster has 3 VMs working as head nodes in HPC Pack and a bunch of other VMs working as compute nodes.
So basically, after creating this cluster, I must install a special HPC client; from this client, I type the DNS name of each VM to access the HPC management interface.
For example: https://head-node-1.azure.com
Of course, if I access this DNS name from Chrome, I only see the IIS page.
I want to create a load balancer with its own DNS name, say https://load-balancer.azure.com
So from my client, every time I access the load balancer's DNS name, I see the management interface, not the IIS page.
How can I do that?
Not sure I'm understanding you correctly. Basically, Azure Application Gateway supports URL path-based routing rules.
Actually, Application Gateway supports web-based (layer 7) traffic load balancing, while Azure Load Balancer supports stream-based traffic. If you want to listen at the HTTP or HTTPS protocol level, you can use Application Gateway. Per your description, you cannot access the HPC management interface from a web browser, so you could use layer-4 load balancing based on TCP/UDP instead.
So you could create a public-facing load balancer and add the head node VMs as the backend pool. Create a health probe and load-balancing rules to specify the ports you want to listen on for the HPC management interface on each of the VMs.
Hope this helps, let me know if you have any concerns.

Forward from AWS ELB to insecure port on the EC2 instance

I fear that this might be a programming question, but I am also hopeful that it is common enough that you might have some suggestions.
I am moving to a fail-over environment using AWS elastic load balancers to direct the traffic to the EC2 instances. Currently, I have set up the ELB with a single EC2 instance behind it. You will see why in a moment. This is still in test mode, although it is delivering content to my customers using this ELB -> EC2 path.
In each of my production environments (I have two) I have an AWS certificate on the load balancer and a privately acquired security certificate on the EC2 instance. The load balancer listeners are configured to send traffic received on port 443 to the secure port (443) on the EC2 instance. This is working; however, as I scale up to more EC2 instances behind the load balancer, I have to buy a security certificate for each of these EC2 instances.
Using a recommendation that was proposed to me, I have set up a test environment with a new load balancer and its configured EC2 instance. This ELB sends traffic received on its port 443 to port 80 on the EC2 system. I am told that this is the way it should be done: limit encryption/decryption to the load balancer and use unencrypted communication between the load balancer and its instances.
Finally, here is my problem. The HTML pages served by this application use relative references to the embedded scripts and other artifacts within each page. By the time a request reaches the EC2 instance (the application server), it has been demoted to HTTP, regardless of what it was originally. This means the references to these embedded artifacts are rendered as insecure (HTTP). Because the original page request was secure (HTTPS), the browser refuses to load these insecure resources.
I am already using the X-Forwarded-Proto header within the application to determine whether the original request at the load balancer was HTTP or HTTPS. I am hoping against hope that there is some parameter on the EC2 instance that tells it to render relative references in accordance with the received X-Forwarded-Proto header. Barring that, do you have any ideas about how others have solved this problem?
Thank you for your time and consideration.
First of all, it is the right way to go: terminate SSL at the ELB/ALB and assign the EC2 instances a security group that only accepts traffic from the ELB/ALB.
However, responding with HTTPS URLs based on the X-Forwarded-Proto request header (or on custom configuration) needs to be handled in your application code or web server.
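For an Express app, one common approach is a small redirect middleware keyed off X-Forwarded-Proto; a sketch, assuming the app serves plain HTTP behind the ELB:

    const express = require('express');
    const app = express();

    // Trust the ELB so req.secure reflects the X-Forwarded-Proto header
    app.set('trust proxy', true);

    // Any request that originally arrived at the load balancer over HTTP
    // is redirected to the HTTPS version of the same URL.
    app.use((req, res, next) => {
      if (!req.secure) {
        return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
      }
      next();
    });

    // ... routes go here ...

    app.listen(80);

Once every page is forced onto HTTPS this way, relative references resolve against the HTTPS page URL, so the browser no longer blocks them as mixed content.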

How to provide SSL to APIs?

I used a self-signed OpenSSL certificate for my APIs, but when they are called client-side, an error message is shown for the secured response. How do I provide a proper SSL cert? I'm using Elastic Beanstalk in AWS to host the APIs, and I have come across ACM, which is integrated with Elastic Load Balancing and Amazon CloudFront. Which of those two should I use? If I use either of them, will that be enough in production, or should I use something else?
You can set up a certificate with ACM that matches your DNS record. Then point that DNS record to your Elastic Beanstalk environment's DNS record, which will be something like ENV-name.76p5XXXX22.us-east-1.elasticbeanstalk.com
AWS has a document you can follow here.
Let's begin.
For development purposes, a self-signed certificate is okay. You can set NODE_TLS_REJECT_UNAUTHORIZED=0 in the environment variables.
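For instance, in a local development script (the endpoint below is hypothetical):

    // Development only: disable TLS certificate validation so Node.js
    // accepts the self-signed certificate. Never do this in production.
    process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

    const https = require('https');

    // Hypothetical local endpoint served with the self-signed certificate
    https.get('https://localhost:3000/health', (res) => {
      console.log('status:', res.statusCode);
    }).on('error', (err) => console.error(err));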
For AWS Elastic Beanstalk behind a load balancer, there are two ways:
One-way encrypted - you add a certificate on your load balancer only. Client-to-load-balancer traffic is encrypted, and load-balancer-to-instance traffic is unencrypted. This is safe, and it's what I use: I don't need any certificates on my instances and can run a plain HTTP server on them. You can choose whether to allow only HTTPS from the load balancer settings.
End-to-end encrypted - you use a certificate on your instances as well, and you can either forward encrypted traffic directly from the load balancer to your instances, or decrypt and re-encrypt the traffic before sending it to the instances. I don't have any experience with this; the first option is suitable for most cases. Refer to this: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html

Use of SSL in Azure internal load balancer scenario

We have an internal load balancer deployed in Azure, with 4 VMs currently in the same load balancer set. We have software deployed as IaaS; it essentially runs as a Windows service taking traffic on a pre-configured port (not 443).
I am trying to figure out how this will work. To my understanding, the internal load balancer does not offload SSL, so my call will be end-to-end from the client to the VM (could be any of the 4). I can configure the software to listen on a secure socket on the same load balancer ports, but how should I configure my client to call the 4 servers, and which certificate should it use in this case?
And what if more VMs are added to the picture?
Azure Load Balancer (including the Internal one) operates at the network layer, so it does not do SSL offloading or things like cookie-based affinity. If that's what you need, you may look into something like Azure Application Gateway or third-party layer 7 load balancers (Nginx Plus, Barracuda WAF, etc).
In your case, with the standard ILB, all requests will be routed to one of the 4 VMs, and all of them will need to have the SSL certificate installed (the same one in all VMs). SSL certificates, indeed, are bound to a specific hostname, but not a specific machine: if you need to load balance, you're free to re-use the same certificate (and private key) on every instance, as long as they all respond to the same hostname publicly.
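The client in this question is a Windows service, but to illustrate the idea in Node.js terms (the hostname and port below are made up): the client always dials the load balancer's DNS name, and standard certificate validation succeeds because every VM presents the same certificate issued for that name.

    const tls = require('tls');

    // Hypothetical ILB hostname and port: all 4 VMs behind it present the
    // same certificate for this name, so it doesn't matter which one answers.
    const socket = tls.connect({
      host: 'myservice.internal.contoso.com',
      port: 8443,
      servername: 'myservice.internal.contoso.com', // SNI / hostname check
    }, () => {
      console.log('certificate validated:', socket.authorized);
      socket.end();
    });
    socket.on('error', (err) => console.error(err));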
Azure Load Balancer does not provide SSL offloading. You could leverage KEMP LoadMaster for Azure and configure SSL offloading by uploading the certificate to the LoadMaster and allowing non-SSL or SSL traffic to the 4 internal VMs. You can find the details at the link below:
https://kemptechnologies.com/solutions/microsoft-load-balancing/loadmaster-azure/
Regards,
Krishna
