Redirection with Elastic Load Balancer based routing - Amazon

I need help with this scenario:
My EC2 instance is receiving requests, and I don't want the client to have to change the server path for certain requests (especially chat).
I want my first EC2 instance to forward requests matching certain path patterns to a second instance I have created (in other words, to redirect traffic from the first to the second).
Is there any way to fulfill this scenario?

You can use AWS CloudFront as a proxy for your use case: plug the two EC2 instances in behind CloudFront as origins and add behavior (path) rules to switch traffic to one or the other.
Your clients will send requests only to the CloudFront URL (or a DNS name mapped through Route 53) and won't know about the EC2 instances behind it. This approach works if your EC2 instances are publicly accessible, and it can be cost-effective and reduce the load on your services if you cache content.
An alternative approach is to use an Application Load Balancer with a path-based routing configuration.
The following tutorial will guide you through the steps:
Tutorial: Use Path-Based Routing with Your Application Load Balancer
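To make the path rule concrete, here is a minimal sketch in plain JavaScript of the decision an ALB path rule or CloudFront behavior makes for each request. The pattern /chat/* and the target names are assumptions for illustration, not AWS API calls:

```javascript
// Sketch: a path rule maps request paths to a target (origin / target group).
// In a real ALB you configure this in listener rules, not in code.
function targetFor(path) {
  const rules = [
    // requests for the chat feature go to the second instance
    { pattern: /^\/chat(\/|$)/, target: 'chat-instance' },
  ];
  for (const rule of rules) {
    if (rule.pattern.test(path)) return rule.target;
  }
  return 'default-instance'; // everything else stays on the first instance
}
```

The client keeps using one hostname; the load balancer (or CloudFront) applies this decision and forwards the request, so no client-side path change is needed.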

Related

AWS load balancer for Server-Sent Events or WebSockets

I'm trying to load balance a Node.js Server-Sent Events backend, and I need to know if there is a way to distribute new connections to the instances with the fewest connected clients. The problem I have is that when scaling up, the routing keeps sending new connections to the already saturated instance, and since the connections are long-lived this simply won't work.
What options do I have for horizontal scaling long lived connections?
It looks like you want a load balancer that can provide both "sticky sessions" and a "least connections" policy instead of "round robin". Unfortunately, NGINX cannot provide this combination.
HAProxy (High Availability Proxy) allows for this:
    backend bk_myapp
        cookie MyAPP insert indirect nocache    # sticky-session cookie
        balance leastconn                       # least-connections policy
        server srv1 10.0.0.1:80 check cookie srv1
        server srv2 10.0.0.2:80 check cookie srv2
If you need ELB functionality and want to roll it all manually, take a look at this guide.
You might also want to make sure that neither the classic AWS ELB "sticky session" configuration nor the newer ALB "sticky session" option meets your needs. ELB normally sends connections to the upstream server with the least "load", and combined with sticky sessions that might be enough.
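For illustration, the combination of "leastconn" and a sticky cookie boils down to a selection rule like the following plain-JavaScript sketch (server names and connection counts are hypothetical; HAProxy implements this internally):

```javascript
// Sketch of "leastconn" + sticky sessions: honor an existing sticky cookie,
// otherwise pick the backend with the fewest active connections.
function pickBackend(connections, stickyId) {
  // connections: map of server id -> current active connection count
  if (stickyId && stickyId in connections) return stickyId; // sticky wins
  let best = null;
  for (const [id, count] of Object.entries(connections)) {
    if (best === null || count < connections[best]) best = id;
  }
  return best;
}
```

This is exactly why least-connections suits long-lived SSE streams: a freshly scaled-up instance starts with zero active connections, so new clients land there instead of on the saturated one.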
Since you are using AWS, I'd recommend Elastic Beanstalk for your Node.js application deployment. The official documentation provides good examples, like this one. Note that Beanstalk will automatically create an Elastic Load Balancer for you, which is what you're looking for.
By default, Elastic Beanstalk creates an Application Load Balancer for your environment when you enable load balancing with the Elastic Beanstalk console or the EB CLI. It configures the load balancer to listen for HTTP traffic on port 80 and forward this traffic to instances on the same port.
[...]
Note: Your environment must be in a VPC with subnets in at least two Availability Zones to create an Application Load Balancer. All new AWS accounts include default VPCs that meet this requirement. If your environment is in a VPC with subnets in only one Availability Zone, it defaults to a Classic Load Balancer. If you don't have any subnets, you can't enable load balancing.
Note that configuring a proper health check path is key to balancing requests properly, as you mentioned in your question.
In a load balanced environment, Elastic Load Balancing sends a request to each instance in an environment every 10 seconds to confirm that instances are healthy. By default, the load balancer is configured to open a TCP connection on port 80. If the instance acknowledges the connection, it is considered healthy.
You can choose to override this setting by specifying an existing resource in your application. If you specify a path, such as /health, the health check URL is set to HTTP:80/health. The health check URL should be set to a path that is always served by your application. If it is set to a static page that is served or cached by the web server in front of your application, health checks will not reveal issues with the application server or web container.
EDIT: If you're looking for sticky sessions, as I described in the comments, follow the steps provided in this guide:
To enable sticky sessions using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
On the Description tab, choose Edit attributes.
On the Edit attributes page, do the following:
a. Select Enable load balancer generated cookie stickiness.
b. For Stickiness duration, specify a value between 1 second and 7 days.
c. Choose Save.

How to set up SSL for instances inside the ELB communicating with a Node.js instance outside the ELB

I have created an architecture on AWS (I hope it is not wrong) using an ELB, Auto Scaling, RDS, and one Node.js EC2 instance outside the ELB. Now I cannot figure out how to implement SSL on this architecture.
Let me explain briefly:
I have created one Classic Load Balancer.
Created an Auto Scaling group.
Assigned instances to the Auto Scaling group.
And lastly, I have created one instance that I am using for Node.js; it sits outside the load balancer and the Auto Scaling group.
Now that I have implemented SSL on my load balancer, the inner instances communicate with the Node.js instance over HTTP, and because the Node.js instance is outside the load balancer, the request is getting blocked.
Can someone please help me implement SSL for this architecture?
Sorry if you got confused by my architecture; if there is a better architecture possible, please let me know and I can change mine.
Thanks,
When you have static content, your best bet is to serve it from CloudFront using an S3 bucket as its origin.
As for SSL, you can terminate it at the ELB level; follow the documentation.
Your ELB listens on two ports, 80 and 443, and communicates with your ASG instances using only their open port 80.
So when secure requests come to the ELB, it forwards them to your server (an EC2 instance in the ASG). Your server, listening on port 80, receives the request; if the request carries the header X-Forwarded-Proto: https, the server does nothing special, otherwise it rewrites the URL to the secure one, redirects, and the process restarts.
I hope this helps; be careful of ERR_TOO_MANY_REDIRECTS.
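A minimal sketch of that redirect logic for a Node.js server sitting behind an ELB that terminates TLS (the function name is made up; X-Forwarded-Proto and Host are the standard headers the ELB sets). Returning null means the original request was already HTTPS and should be served normally, which is what avoids the redirect loop:

```javascript
// Decide whether to redirect a request to HTTPS, based on the
// X-Forwarded-Proto header set by the load balancer.
function httpsRedirectTarget(req) {
  const proto = (req.headers['x-forwarded-proto'] || 'http').toLowerCase();
  if (proto === 'https') return null; // already secure: serve the request
  // original request was plain HTTP: redirect once to the secure URL
  return 'https://' + req.headers.host + req.url;
}
```

In a request handler you would send a 301/302 to the returned URL when it is non-null; because the check looks at the original protocol (not the port the instance listens on), each client is redirected at most once.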
Have you considered using an Application Load Balancer with two target groups and a listener rule?
If the single EC2 instance is just hosting static content, and is serving content on a common path (e.g. /static), then everything can sit behind a shared load balancer with one common certificate that you can configure with ACM.
"because the node instance is outside the load balancer so the request is getting blocked."
If they're in the same VPC, you should check the security group assigned to your instances. Specifically, you want to allow connections to ports 443 and/or 80 on the stand-alone instance from the security group assigned to the load balancer instances; let's call it 'sg-load_balancer' (check your AWS Console to see the actual security group ID).
To check this, select the security group for the stand-alone instance and notice the tabs at the bottom of the page. Click the 'Inbound' tab. You should see a set of rules. Make sure there's one for HTTP and/or HTTPS, and in the 'Source' field, instead of an IP address, put the security group of the load balancer instances: it will start with sg- and the console will give you a dropdown of valid entries.
If you don't see the security group of the load balancer instances there, there's a good chance they're not in the same VPC. To check, bring up the console and look at the VPC ID on each node (it starts with vpc-). These should be the same. If not, you'll have to set up rules and routing tables to allow traffic between them. That's a bit more involved; take a look at a similar problem for ideas on how to solve it: Allowing Amazon VPC A to get to a new private subnet on VPC B?

Forward from AWS ELB to insecure port on the EC2 instance

I fear that this might be a programming question, but I am also hopeful that it is common enough that you might have some suggestions.
I am moving to a fail-over environment using AWS elastic load balancers to direct the traffic to the EC2 instances. Currently, I have set up the ELB with a single EC2 instance behind it. You will see why in a moment. This is still in test mode, although it is delivering content to my customers using this ELB -> EC2 path.
In each of my production environments (I have two) I have an AWS certificate on the load balancer and a privately acquired security certificate on the EC2 instance. The load balancer listeners are configured to send traffic received on port 443 to the secure port (443) on the EC2 instance. This is working; however, as I scale up to more EC2 instances behind the load balancer, I have to buy a security certificate for each of these EC2 instances.
Using a recommendation that was proposed to me, I have set up a test environment with a new load balancer and its configured EC2 server. This ELB server sends messages received on its port 443 to port 80 on the EC2 system. I am told that this is the way it should be done - limit encryption/decryption to the load balancer and use unencrypted communication between the load balancer and its instances.
Finally, here is my problem. The HTML pages served by this application use relative references to the embedded scripts and other artifacts within each page. When the request reaches the EC2 instance (the application server), it has been demoted to HTTP, regardless of what it was originally. This means that the references to these embedded artifacts are rendered as insecure (HTTP). Because the original page reference was secure (HTTPS), the browser refuses to load these insecure resources.
I am already using the header X-Forwarded-Proto within the application to determine if the original request at the load balancer was HTTP or HTTPS. I am hoping against hope that there is some parameter in the EC2 instance that tells it to render relative reference in accordance to the received X-Forwarded-Proto header. Barring that, do you have any ideas about how others have solved this problem?
Thank you for your time and consideration.
First of all, it is the right way to go: terminate SSL at the ELB/ALB and assign the EC2 instances a security group that only accepts traffic from the ELB/ALB.
However, responding with https URLs based on the X-Forwarded-Proto request header (or on custom configuration) needs to be handled in your application code or web server.
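As a sketch of handling this in application code (assuming a plain Node.js request object; the helper name is made up), you can derive the external scheme from X-Forwarded-Proto whenever you render an absolute URL, or simply emit root-relative references like /js/app.js so the browser reuses the page's own scheme:

```javascript
// Build an absolute URL using the scheme the client originally used,
// as reported by the load balancer in X-Forwarded-Proto.
function externalUrl(req, path) {
  const raw = req.headers['x-forwarded-proto'] || 'http';
  // the header may contain a list if several proxies are chained
  const proto = raw.split(',')[0].trim();
  return proto + '://' + req.headers.host + path;
}
```

In Express you can get the same effect with app.set('trust proxy', true), after which req.protocol reflects X-Forwarded-Proto; the point either way is that the scheme decision belongs in the application, since the instance itself only ever sees HTTP.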

How to use HTTPS on localhost:3000

My application requirement:
"The URL of your web form, to be displayed in a frame in the Worker's web browser. This URL must use the HTTPS protocol."
My AWS EC2 instance has Node.js running on it. For some reason I am having issues running it as a production server with serve -s build.
But when I run npm start in my project folder, it runs a development server on port 3000, and I can access it via http://ec2----------.compute-1.amazonaws.com:3000/
But this does not work with HTTPS. Is there a way I can access the same URL using HTTPS? Something like:
https://ec2----------.compute-1.amazonaws.com:3000/
The approaches I have looked at so far are a reverse proxy and Nginx, but I could not understand them well.
If you use an Elastic Load Balancer in front of the EC2 instance, then AWS provides a very easy way to get HTTPS working. If you want to access the instance directly, you will need to configure HTTPS in your Node.js app or use an HTTPS-capable service to proxy traffic to it.
Step 1: Choosing the Load Balancer
The two choices when you create a load balancer:
Application Load Balancer: choose this if your application runs on particular ports, runs in dev mode, or needs path-based routing. It is a good option because the routing decisions are made at the application layer. It can only listen for HTTP and HTTPS.
Classic Load Balancer: choose this if you need to make routing decisions right at the transport layer.
I will continue with the Application Load Balancer, although most of the stages are the same.
Step 2: Configuring the Load Balancer
Simple and quick configuration:
Name: name your load balancer.
Scheme:
internet-facing: choose this if you want requests from clients over the internet.
internal: choose this if you want requests from clients using a private IP address.
IP Address Type: ipv4
Listener
A listener is a process that checks for connection requests, using the protocol and port that you configured.
There can be only two listeners on the Application Load Balancer, which are:
HTTP on port 80
HTTPS on port 443
Availability Zones
A load balancer's main job is to distribute traffic across different locations. There are multiple Availability Zones in one region; these can be imagined as multiple servers placed in us-east. Each Availability Zone has a separate subnet, and only one subnet can be selected for a particular zone.
You need to select at least two such Availability Zones with distinct subnets. This basically lets the load balancer balance the load across at least two servers.
Step 3: Configure Security Settings and Add Instances
Configuring security settings consists of specifying certificates if you chose an HTTPS listener in the previous step. Since you selected the HTTPS listener, AWS needs a certificate to use; you can learn how to get one from AWS Certificate Manager. Here you have to select:
Certificate Type: choose an existing certificate from AWS Certificate Manager (ACM)
Certificate Name: pick the certificate name from the drop-down list.
Select the latest security policy.
Security Policy: ELBSecurityPolicy-2016-08
Select the existing security group made for your instance.
Step 4: Target Groups
Create a target group. Name it according to what it listens for and where it targets.
You have to specify the path and the port to which the listener targets the traffic.
Step 5: Deploy
After you review the settings, deploy and create your load balancer. It will do all the housekeeping and management; it is like hiring a manager for your server traffic. You can go and meditate now for some time.
The load balancer will take about a minute to come up. After it is active, copy the DNS name of the load balancer from the main load balancer dashboard, since we will need it in the next step. It will look something like this:
load-balancer-name-xxxxxxxxxx.us-east-x.xxx.amazonaws.com (A Record)
Step 6: Map your domain name to the Load Balancer
Route 53 provides a reliable and cost-effective way to route visitors to websites by translating domain names (such as www.example.com) into the numeric IP addresses (such as 192.0.2.1) that computers use to connect to each other. AWS assigns URLs to your resources, such as load balancers; however, you might want a URL that is easy for users to remember. For example, you can map your domain name to a load balancer.
Go to Route 53 and select the hosted zone and the record set for your domain name.
You need to create a new record :
1) Leave the domain name blank.
2) Select Yes for Alias.
3) Paste the DNS link for the Load Balancer in the Alias Target.
4) Create.
This step routes the domain name to the DNS name of the load balancer, which solves our purpose of handling traffic. The rest of the job is handled by the ELB, which turns its statistics into health reports, based on which you can create and replace instances.
Have a great one!
Citation : https://sites.google.com/gwmail.gwu.edu/aws-tools/aws-elastic-load-balancer?authuser=0

Azure Multiple Public IPs on a Virtual Machine Scale Set with Resource Manager

We are trying to migrate our platform from classic IIS hosting to a Service Fabric microservice architecture. So far we have learned that a Service Fabric cluster lives in a virtual machine scale set and uses a load balancer to communicate with the outside world.
The problem we are now facing is that we have different access points to our application, like one for the browser and one for the mobile app. Both use the standard HTTPS port but are different applications.
In IIS we could use host headers to direct traffic to one application or the other, but with Service Fabric we can't. The easiest way for us would be multiple public IPs; with those we could handle it with DNS.
We considered a couple of solutions, with no success:
Load balancer with multiple public IPs. Problem: it looks like that only works with Cloud Services, and we need to work in the new Resource Manager world, where it seems impossible to have multiple public IPs.
Multiple public load balancers. Problem: scale sets accept only one load balancer instance per load balancer type.
Application Gateway. Problem: it seems not to support multiple public IPs or host header mapping.
Path mapping. Problem: we have the same path in different applications.
My questions are:
Is there any solution to use multiple IP’s and map the traffic internally to different ports?
Is there any option to use host header mapping with service fabric?
Any suggestions on how I can solve my problem?
Piling on some Service Fabric-specific info to Eli's answer: Yes you can do all of this and use an http.sys-based self-hosted web server to host multiple sites using different host names on a single VIP, such as Katana or WebListener in ASP.NET Core 1.
The piece to this that is currently missing in Service Fabric is a way to configure the hostname in your endpoint definition in ServiceManifest.xml. Service Fabric services run under Network Service by default on Windows, which means the service will not have access to create a URL ACL for the URL it wants to open an endpoint on. To help with that, when you specify an HTTP endpoint in an endpoint definition in ServiceManifest.xml, Service Fabric automatically creates the URL ACL for you. But currently, there is no place to specify a hostname, so Service Fabric uses "+", which is the strong wildcard that matches everything.
For now, this is merely an inconvenience: you'll have to create a setup entry point for your service that runs under elevated privileges and runs netsh to set up the URL ACL manually.
We do plan on adding a hostname field in ServiceManifest.xml to make this easier.
It's definitely possible to use ARM templates to deploy a Service Fabric cluster with multiple IPs. You'll just have to tweak the template a bit:
Create multiple IP address resources (e.g. using copy) - make sure you review all the resources using the IP and modify them appropriately
In the load balancer:
Add multiple frontendIPConfigurations, each tied to its own IP
Add loadBalancingRules for each port you want to redirect to the VMs from a specific frontend IP configuration
Add probes
As for host header mapping, this is handled by the Windows HTTP Server API (see this article). All you have to do is use a specific host name (or even a URL path) when configuring an HTTP listener URL (in OWIN/ASP.NET Core).
