AWS ELB terminates instances when using app_offline.htm for IIS

We are currently using an AWS ELB with multiple instances to host our IIS website(s).
When we use app_offline.htm to display a maintenance message, the ELB health check terminates all the instances. This is because app_offline.htm makes IIS return a 503 response, and the ELB then marks the host as unhealthy.
Is there a way to solve this gracefully, without modifying the health check (timing) parameters in the AWS ELB and without deploying a separate "maintenance site"?
Thanks in advance.

Yes, use EC2 health checks instead of ELB health checks in your Auto Scaling group, at least while in maintenance. Otherwise, your instance will be terminated when the ELB health check fails due to the 503.
You can also remove the instance under maintenance from your AutoScaling group.
In the AutoScaling docs, see:
Temporarily Removing Instances
Health Checks
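For illustration, here is a hedged sketch of both options using the AWS SDK for JavaScript v3; the group name and instance ID are placeholders, not values from the question:

import {
  AutoScalingClient,
  UpdateAutoScalingGroupCommand,
  EnterStandbyCommand,
} from "@aws-sdk/client-auto-scaling";

const asg = new AutoScalingClient({ region: "us-east-1" });

// Option 1: switch the group to EC2 status checks for the maintenance
// window, so the 503 from app_offline.htm no longer triggers replacement.
await asg.send(new UpdateAutoScalingGroupCommand({
  AutoScalingGroupName: "my-iis-asg", // placeholder
  HealthCheckType: "EC2",
}));

// Option 2: move the instance under maintenance into Standby
// ("Temporarily Removing Instances" in the Auto Scaling docs).
await asg.send(new EnterStandbyCommand({
  AutoScalingGroupName: "my-iis-asg", // placeholder
  InstanceIds: ["i-0123456789abcdef0"], // placeholder
  ShouldDecrementDesiredCapacity: true,
}));

Remember to switch back to ELB health checks (or exit Standby) once maintenance is done, or unhealthy instances will no longer be replaced.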

Related

AWS load balancer for Server-Sent Events or WebSockets

I'm trying to load-balance a Node.js Server-Sent Events backend, and I need to know if there is a way to distribute new connections to the instances with the fewest connected clients. The problem I have is that when scaling up, the routing keeps sending new connections to the already saturated instance, and since the connections are long-lived, this simply won't work.
What options do I have for horizontally scaling long-lived connections?
It looks like you want a load balancer that can provide both "sticky sessions" and a "least connection" policy instead of "round-robin". Unfortunately, open-source NGINX cannot combine the two: cookie-based persistence (the sticky directive) is only available in the commercial NGINX Plus.
HAProxy (High Availability Proxy) allows for this:
backend bk_myapp
    # insert a cookie named MyAPP so each client sticks to one server
    cookie MyAPP insert indirect nocache
    # send new connections to the server with the fewest active connections
    balance leastconn
    server srv1 10.0.0.1:80 check cookie srv1
    server srv2 10.0.0.2:80 check cookie srv2
If you need ELB functionality and want to roll it all manually, take a look at this guide.
You might also want to make sure the classic AWS ELB "sticky session" configuration or the newer ALB "sticky session" option does not already meet your needs. The ELB normally sends connections to the upstream server with the least "load", and combined with sticky sessions that might be enough.
Since you are using AWS, I'd recommend Elastic Beanstalk for your Node.js application deployment. The official documentation provides good examples, like this one. Note that Beanstalk will automatically create an Elastic Load Balancer for you, which is what you're looking for.
By default, Elastic Beanstalk creates an Application Load Balancer for
your environment when you enable load balancing with the Elastic
Beanstalk console or the EB CLI. It configures the load balancer to
listen for HTTP traffic on port 80 and forward this traffic to
instances on the same port.
[...]
Note:
Your environment must be in a VPC with subnets in at least two
Availability Zones to create an Application Load Balancer. All new AWS
accounts include default VPCs that meet this requirement. If your
environment is in a VPC with subnets in only one Availability Zone, it
defaults to a Classic Load Balancer. If you don't have any subnets,
you can't enable load balancing.
Note that the configuration of a proper health check path is key to properly balance requests, as you mentioned in your question.
In a load balanced environment, Elastic Load Balancing sends a request
to each instance in an environment every 10 seconds to confirm that
instances are healthy. By default, the load balancer is configured to
open a TCP connection on port 80. If the instance acknowledges the
connection, it is considered healthy.
You can choose to override this setting by specifying an existing
resource in your application. If you specify a path, such as /health,
the health check URL is set to HTTP:80/health. The health check URL
should be set to a path that is always served by your application. If
it is set to a static page that is served or cached by the web server
in front of your application, health checks will not reveal issues
with the application server or web container.
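To make that concrete, here is a minimal Node.js (TypeScript) sketch of a /health route served by the application process itself rather than by a static page; the route name and port are assumptions:

import http from "node:http";

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // Served by the app process itself, so a 200 here actually
    // proves the application server is up, not just the web server.
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("OK");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("application response");
});

// Port 80 matches the default Beanstalk load balancer configuration
// described above.
server.listen(80);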
EDIT: If you're looking for sticky sessions, as I described in the comments, follow the steps provided in this guide:
To enable sticky sessions using the console
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. On the navigation pane, under LOAD BALANCING, choose Target Groups.
3. Select the target group.
4. On the Description tab, choose Edit attributes.
5. On the Edit attributes page, do the following:
a. Select Enable load balancer generated cookie stickiness.
b. For Stickiness duration, specify a value between 1 second and 7 days.
c. Choose Save.
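If you'd rather script this than click through the console, a hedged sketch with the AWS SDK for JavaScript v3 (the target group ARN and duration are placeholders):

import {
  ElasticLoadBalancingV2Client,
  ModifyTargetGroupAttributesCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbv2 = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Enable load-balancer-generated cookie stickiness on the target group.
await elbv2.send(new ModifyTargetGroupAttributesCommand({
  TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123", // placeholder
  Attributes: [
    { Key: "stickiness.enabled", Value: "true" },
    { Key: "stickiness.type", Value: "lb_cookie" },
    { Key: "stickiness.lb_cookie.duration_seconds", Value: "86400" }, // 1 day, within the 1 second - 7 day range
  ],
}));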

Routing requests away from an unhealthy (503ing) instance?

We have a Web App hosted on multiple (scaled-out) Premium Dv2 instances using Azure App Service.
Occasionally our application fails to start up after a restart. This results in a 503 Service Unavailable response for requests to that instance. But when this happens, requests still get routed evenly between this instance and the healthy instances.
Shouldn't the load-balancer rather route requests away from this instance? Can this be achieved?
NOTE: We are not using API Management or App Service Environment.
Shouldn't the load-balancer rather route requests away from this instance?
Azure Load Balancer can probe the health of the various server instances. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy instances.
AFAIK, until the probe actually marks the instance as down, requests still get routed to it, which is why you see the 503 errors in the meantime.
But when this happens, requests still get routed evenly between this instance and the healthy instances.
Here are some scenarios in which requests still get routed to an instance even though it is unhealthy:
1. The timeout and frequency values set in SuccessFailCount determine whether an instance is confirmed to be running or not running. In the Azure portal, the timeout is set to two times the value of the frequency.
2. The HTTP server doesn't respond at all after the timeout period. Depending on the timeout value that is set, multiple probe requests might go unanswered before the probe gets marked as not running.
3. If you have web roles that use w3wp.exe, you also get automatic monitoring of your website; failures in your website code return a non-200 status to the load balancer probe. Until enough consecutive probes have failed, the load balancer doesn't take that instance out of rotation.
4. The TCP server doesn't respond at all after the timeout period. When the probe is marked as not running depends on the number of failed probe requests that were configured to go unanswered before marking the probe as not running.
For more detail, you could refer to this article.

How to set up SSL for instances inside the ELB that communicate with a node instance outside the ELB

I have created an architecture on AWS (I hope it is not wrong) using an ELB, Auto Scaling, RDS, and one Node EC2 instance outside the ELB. Now I can't figure out how to implement SSL on this architecture.
Let me explain it briefly:
I have created one Classic Load Balancer.
Created an Auto Scaling group.
Assigned instances to the Auto Scaling group.
Lastly, I created one instance that I am using for Node; it is outside the Load Balancer and Auto Scaling group.
Now that I have implemented SSL on my Load Balancer, the inner instances communicate with the node instance over HTTP, and because the node instance is outside the load balancer, the request is getting blocked.
Can someone please help me implement SSL for this architecture?
Sorry if my architecture is confusing; if there is a better architecture, please let me know and I can change mine.
Thanks,
When you have static content, your best bet is to serve it from CloudFront using an S3 bucket as its origin.
As for SSL, you could terminate SSL at the ELB level; follow the documentation.
Your ELB listens on two ports, 80 and 443, and communicates with your ASG instances only over their open port 80.
So when secure requests come to the ELB, it forwards them to your server (an EC2 instance in the ASG). Your server, listening on port 80, receives the request; if the request has the X-Forwarded-Proto: https header, the server does nothing special, otherwise it redirects/rewrites the URL to the secure one and the process restarts.
I hope this helps, and be careful of ERR_TOO_MANY_REDIRECTS.
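As a concrete illustration, a minimal Node.js (TypeScript) sketch of that check; the port and responses are assumptions:

import http from "node:http";

const server = http.createServer((req, res) => {
  // Node exposes incoming header names in lower case.
  if (req.headers["x-forwarded-proto"] !== "https") {
    // Bounce plain-HTTP requests back through the ELB over HTTPS.
    // Checking the header first is what prevents ERR_TOO_MANY_REDIRECTS,
    // since the redirected request also arrives here on port 80.
    res.writeHead(301, { Location: `https://${req.headers.host}${req.url ?? "/"}` });
    res.end();
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("served over HTTPS via the ELB");
});

server.listen(80); // the ELB forwards both its listeners to port 80 here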
Have you considered using an Application Load Balancer with two target groups and a listener rule?
If the single EC2 instance is just hosting static content, and is serving content on a common path (e.g. /static), then everything can sit behind a shared load balancer with one common certificate that you can configure with ACM.
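A hedged sketch of such a listener rule with the AWS SDK for JavaScript v3 (both ARNs and the /static path are placeholders):

import {
  ElasticLoadBalancingV2Client,
  CreateRuleCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbv2 = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Forward /static/* to the target group containing the stand-alone node
// instance; everything else falls through to the listener's default action.
await elbv2.send(new CreateRuleCommand({
  ListenerArn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def", // placeholder
  Priority: 10,
  Conditions: [{ Field: "path-pattern", Values: ["/static/*"] }],
  Actions: [{
    Type: "forward",
    TargetGroupArn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/static-tg/abc123", // placeholder
  }],
}));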
"because the node instance is outside the load balancer so the request
is getting blocked."
If they're in the same VPC, you should check the security groups that you've assigned to your instances. Specifically, you'll want to allow connections coming in to ports 443 and/or 80 on the stand-alone instance from the security group assigned to the load balancer instances - let's call it 'sg-load_balancer' (check your AWS Console to see what the actual security group ID is).
To check this, select the security group for the stand-alone instance and notice the tabs at the bottom of the page. Click on the 'Inbound' tab. You should see a set of rules... You'll want to make sure there's one for HTTP and/or HTTPS, and in the 'Source', instead of putting an IP address, put the security group of the load balancer instances -- it'll start with sg- and the console will give you a dropdown showing valid entries.
If you don't see the security group for the load balancer instances, there's a good chance they're not in the same VPC. To check, bring up the console and look for the VPC ID on each node; it'll start with vpc-. These should be the same. If not, you'll have to set up rules and routing tables to allow traffic between them... That's a bit more involved; take a look at a similar problem to get some ideas on how to solve it: Allowing Amazon VPC A to get to a new private subnet on VPC B?
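For reference, the same ingress rule expressed with the AWS SDK for JavaScript v3 (both group IDs are placeholders for the ones in your console):

import {
  EC2Client,
  AuthorizeSecurityGroupIngressCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

// Allow HTTP and HTTPS into the stand-alone instance's security group,
// sourced from the load-balanced instances' security group rather than an IP.
await ec2.send(new AuthorizeSecurityGroupIngressCommand({
  GroupId: "sg-0aaaabbbbccccdddd", // stand-alone instance's SG (placeholder)
  IpPermissions: [80, 443].map((port) => ({
    IpProtocol: "tcp",
    FromPort: port,
    ToPort: port,
    UserIdGroupPairs: [{ GroupId: "sg-0eeeeffff00001111" }], // 'sg-load_balancer' (placeholder)
  })),
}));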

AWS EC2 load balancing with SSL and Node.js - where am I going wrong?

I am fairly new to all this (being an app/mobile web developer).
I have set up an instance on EC2 which runs perfectly over HTTP.
I want to add HTTPS support, as I want to write a service worker.
I have used Amazon Certificate Manager to obtain a certificate.
I have created an ELB and added a listener on 443 for HTTPS.
I am not entirely sure whether my ELB and EC2 instance are connected. Following some instructions, I attempted to create a CNAME record in my Route 53 setup, but it would not accept it (pointing to the ELB DNS name).
My understanding is that, if they are connected, my HTTP Node.js instance should now automatically support HTTPS.
This is currently not the case. My Node.js code is unchanged (it still only creates an HTTP server listening on port 3002).
When I make an HTTP call to the domain (http://example.com:3002) it works, but an HTTPS call (https://example.com:3002) fails with a "site can not be reached" error.
This leads me to believe that the ELB and the EC2 instance are not associated. Can anyone suggest where I may have gone wrong? I have hunted the internet for 3 days and not found any step-by-step instructions for this.
You need to focus on this part of your question:
I am not entirely sure whether my ELB and EC2 instance are connected.
Following some instructions I attempted to create a CNAME rule in my
Route53 setup but it would not accept it (pointing to the ELB DNS).
Why are you not sure they are connected? You should be able to look at the health check section in the load balancer UI and see that the server instance is "connected" and healthy. If it isn't, then that is the first thing you need to fix.
Regarding the CNAME in Route 53, what do you mean it wouldn't accept it? What are the details of that issue? (A common cause is trying to create a CNAME at the zone apex, e.g. example.com itself, which Route 53 rejects; use an alias A record pointing at the ELB instead, as in the sketch below.) Until you have your DNS pointing to the load balancer, you won't actually be using the load balancer, so that's another issue you need to fix.
When I do a http call to the domain (http://example.com:3002) it works
but a https call (https://example.com:3002) does not with a Site can
not be reached failure.
If you had an error setting up the DNS, then of course this isn't going to work. You shouldn't even be attempting to test HTTPS until the DNS is configured. Also note that your HTTPS listener is on port 443, so once DNS resolves to the ELB you would test https://example.com, not https://example.com:3002.
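Once the instance shows healthy, a hedged sketch of the alias record with the AWS SDK for JavaScript v3 (zone IDs and DNS names are placeholders; the ELB's own hosted zone ID is shown on its description page):

import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const r53 = new Route53Client({});

// UPSERT an alias A record pointing the apex domain at the ELB.
// Unlike a CNAME, an alias record is allowed at the zone apex.
await r53.send(new ChangeResourceRecordSetsCommand({
  HostedZoneId: "Z0EXAMPLE12345", // your Route 53 hosted zone (placeholder)
  ChangeBatch: {
    Changes: [{
      Action: "UPSERT",
      ResourceRecordSet: {
        Name: "example.com",
        Type: "A",
        AliasTarget: {
          HostedZoneId: "Z0ELBZONE67890", // the ELB's hosted zone ID (placeholder)
          DNSName: "my-elb-1234567890.us-east-1.elb.amazonaws.com", // placeholder
          EvaluateTargetHealth: false,
        },
      },
    }],
  },
}));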

Delete Azure VM Instance from load balanced Cloud Service

I have 2 Azure VMs (Linux) being load balanced by a public Azure Cloud Service. Both instances show in the Azure Management Portal under the same cloud service. I want to take down one instance and perform some maintenance. However, since the instance still shows even though the VM has been shut down, the Cloud Service is still directing traffic to it. How do I delete an instance from the Cloud Service, or stop the Cloud Service from directing traffic to a particular VM instance? And afterwards, how does one re-associate an existing VM with that service (i.e. change from one Cloud Service to another)?
Note: SSH into the VM works, but other ports used by the VM are not working; it acts as if traffic is trying to go to the other VM, even though the correct endpoints are created for the active VM.
The purpose of a port probe in a load-balanced set is for the load balancer to be able to detect whether or not a VM is able to accept traffic. When configuring the load-balanced endpoint you can specify a webpage or a TCP endpoint for the probe - and this should be present on each instance. Traffic will be directed to the VM as long as the webpage returns 200 OK or the TCP endpoint accepts the connection when the load balancer probes. You can specify the time interval between probes and the number of probes that must fail before the endpoint is deemed dead and should be taken out of rotation (defaults are every 15 seconds and 2 probes).
You can take a VM out of load-balancer rotation by ensuring that the configured probe page returns something other than 200 OK and then bring it back into rotation by having it once again send a 200 OK.
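As a rough illustration, a Node.js (TypeScript) sketch of a probe page that can be flipped into maintenance mode; the route names and port are assumptions:

import http from "node:http";

let maintenance = false;

const server = http.createServer((req, res) => {
  if (req.url === "/probe") {
    // While in maintenance, fail the probe so the load balancer
    // takes this VM out of rotation after the configured failure count.
    res.writeHead(maintenance ? 503 : 200);
    res.end(maintenance ? "MAINTENANCE" : "OK");
    return;
  }
  if (req.url === "/drain") {
    // In practice this endpoint should be restricted to operators.
    maintenance = true;
    res.writeHead(200);
    res.end("draining");
    return;
  }
  res.writeHead(200);
  res.end("application response");
});

server.listen(80);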
When I have needed to keep my web service running and returning a status of 200, I have had to resort to removing the endpoint from the load-balanced set. It is pretty simple to do, but it usually takes a minute for the web portal to remove the endpoint, and again once you recreate the endpoint to put it back in the set.
