Schedule-based start/stop of EC2 instances in Auto Scaling groups - python-3.x

Our requirement: we have TIBCO BW components running on Amazon EC2 instances, and we need to start and stop the instances at times provided by the business. Please note that all EC2 instances are within Auto Scaling groups.
I was able to start and stop the EC2 instances when there was no Auto Scaling group involved. I had built a Lambda function and was triggering it from CloudWatch, which was working fine. But I am not sure how to extend that to EC2 instances that are in Auto Scaling groups.
The expected result is that the applications on the EC2 instances will be stopped according to the schedule provided by the business. All the EC2 instances are within Auto Scaling groups.
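For context, the non-Auto-Scaling version can be as simple as a Lambda handler invoked by a scheduled CloudWatch Events rule; a minimal sketch (the instance IDs and the event's "action" field are assumptions):

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder IDs; in practice these would come from tags or configuration.
    INSTANCE_IDS = ["i-0123456789abcdef0"]

    def lambda_handler(event, context):
        # The scheduled rule passes {"action": "start"} or {"action": "stop"}.
        if event.get("action") == "start":
            ec2.start_instances(InstanceIds=INSTANCE_IDS)
        else:
            ec2.stop_instances(InstanceIds=INSTANCE_IDS)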

You can use Scheduled Scaling to modify an Auto Scaling group so that it adds/removes instances.
You can configure it to change one of three variables:
The Minimum number of instances. For example, increasing the minimum might launch additional instances.
The Maximum number of instances, which might cause instances to be terminated.
The Desired number of instances, which will set the quantity 'now', but the quantity might change later based upon other rules you have in place (eg when things get busy).
It is quite common for companies to increase the minimum quantity at the start of the day to provide more instances before things get busy. Similarly, it is common to decrease the minimum number of instances at night or on weekends to allow instances to scale-in if there are scaling rules in place to detect idle capacity.
Please note that Auto Scaling will either Launch new instances or Terminate existing instances. It does not start or stop instances.
See: Scheduled Scaling for Amazon EC2 Auto Scaling
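As a rough boto3 sketch (the group name, sizes, and cron expressions are placeholders), two scheduled actions could take the group to zero overnight and bring it back in the morning:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Evening: scale the group to zero so all instances are terminated.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="my-tibco-asg",      # placeholder
        ScheduledActionName="stop-for-the-night",
        Recurrence="0 20 * * MON-FRI",            # cron expression, in UTC
        MinSize=0,
        MaxSize=0,
        DesiredCapacity=0,
    )

    # Morning: restore the working-hours capacity.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="my-tibco-asg",
        ScheduledActionName="start-for-the-day",
        Recurrence="0 6 * * MON-FRI",
        MinSize=2,
        MaxSize=4,
        DesiredCapacity=2,
    )

Since scaling out launches fresh instances rather than restarting stopped ones, the TIBCO components must be installed at boot (for example via the AMI or user data).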

Related

Is it a recommended practice to scale down the AKS user nodepools to 0 during non-work hours?

We are using AKS 1.19.1 in our sandbox environment and have separate system and user nodepools. We have multiple applications running in our user nodepools, along with Istio as the service mesh. The current node count of the user nodepool is 24 and autoscaling is enabled.
Now, as part of cost optimisation, we are planning to scale down the user nodepool to zero during non-working hours (say, after office hours or during the night).
Reference:- https://learn.microsoft.com/en-us/azure/aks/scale-cluster
Is this a recommended way to scale down to zero nodes for such a cluster, with a nodepool size of 25 nodes?
If yes:
When we re-enable the autoscaling property (every day, after setting the count to zero at night), will this automatically increase the node count and auto-start the application pods, or do we need to roll out a restart of the pods separately?
What factors does returning to the normal running state depend on, and how long might it take?
Is there any way to schedule this scaling down to zero at night and then automatically re-enable the autoscaling property in the morning?
Just use KEDA and let it manage the scale-up/scale-down for you:
https://learn.microsoft.com/en-us/learn/modules/aks-app-scale-keda/
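If you would rather schedule it yourself than adopt KEDA, a rough sketch with the Azure SDK for Python (azure-mgmt-containerservice) follows; the resource names are placeholders, and you would run it from whatever scheduler you already have (an Automation runbook, cron, etc.):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerservice import ContainerServiceClient

    credential = DefaultAzureCredential()
    client = ContainerServiceClient(credential, "<subscription-id>")

    RG, CLUSTER, POOL = "my-rg", "my-aks", "usernp"  # placeholders

    def scale_user_pool(count, enable_autoscaler, min_count=None, max_count=None):
        # Fetch the current agent pool, adjust its size settings, push the update.
        pool = client.agent_pools.get(RG, CLUSTER, POOL)
        pool.enable_auto_scaling = enable_autoscaler
        pool.count = count
        pool.min_count = min_count
        pool.max_count = max_count
        client.agent_pools.begin_create_or_update(RG, CLUSTER, POOL, pool).result()

    # Night: disable the cluster autoscaler and force the user pool to zero.
    scale_user_pool(0, enable_autoscaler=False)

    # Morning: re-enable the autoscaler with its usual bounds, e.g.
    # scale_user_pool(1, enable_autoscaler=True, min_count=1, max_count=24)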

Delete Azure Virtual Machine Scale Set instances using a Runbook

First, I want to know if it is possible to delete VMSS instances based on the CPU performance of the instances, but not by using autoscaling.
I have a scale set in which the instances have different average CPU usage, and I want to remove only the instances with the lowest CPU usage, let's say instances with less than 20% CPU.
The idea is to loop through all the instances with a condition that selects all the VMSS instances with less than 20% CPU, and inside that condition to delete the selected instances.
Why not use autoscale? It's the best and simplest way to scale a VMSS. If you use a Runbook, you need to fetch the CPU performance yourself at every interval, and I don't know a simple way to get it; you can only easily see the CPU performance in the Azure portal. Use autoscale; that's the way.
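That said, the loop from the question can be sketched with the Azure SDK for Python, pulling each instance's "Percentage CPU" metric and deleting the quiet ones; the package choice, resource names, threshold, and timespan below are all assumptions, not a tested Runbook:

    from datetime import datetime, timedelta
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.monitor import MonitorManagementClient

    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, "<subscription-id>")
    monitor = MonitorManagementClient(credential, "<subscription-id>")

    RG, VMSS = "my-rg", "my-vmss"  # placeholders
    THRESHOLD = 20.0               # delete instances averaging below 20% CPU

    end = datetime.utcnow()
    start = end - timedelta(hours=1)
    timespan = f"{start.isoformat()}Z/{end.isoformat()}Z"

    for vm in compute.virtual_machine_scale_set_vms.list(RG, VMSS):
        # Average "Percentage CPU" for this instance over the last hour.
        metrics = monitor.metrics.list(
            vm.id,
            timespan=timespan,
            interval="PT5M",
            metricnames="Percentage CPU",
            aggregation="Average",
        )
        points = [
            d.average
            for m in metrics.value
            for ts in m.timeseries
            for d in ts.data
            if d.average is not None
        ]
        if points and sum(points) / len(points) < THRESHOLD:
            print(f"Deleting instance {vm.instance_id}")
            compute.virtual_machine_scale_set_vms.begin_delete(RG, VMSS, vm.instance_id)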

What happens if a spot instance isn't available for an AWS autoscaling group?

If I have an autoscaling group that consists of on-demand and spot instances with a minimum of 4 on-demand instances, and the extra capacity consisting of spot instances, what happens if it needs to scale up with a spot instance, and there isn't an available spot instance (because I've been outbid, or if there aren't any spare instances to fulfil the spot request)?
Will it still scale up using an on-demand instance?
Will the autoscaling group fail to scale up?
Other info:
I'm using a "Lowest Price" Spot Allocation Strategy
The max_spot_price is capped at the on-demand price.
My Google-fu seems to be failing me, as I can't find any answers on the web. I would appreciate it if anybody could shed some light on this issue.
Thanks in advance!
An Auto Scaling group in AWS will not fail over to on-demand if there's no spot capacity. This is essentially the trade-off you're getting for the lower price of spot instances. To work around this, try adding more AZs and/or instance types (not as much of an issue now that weights are supported and an ALB can route based on Least Outstanding Requests).
If you have multiple instance types and AZs set up in the ASG, this happens after your on-demand base is met:
1) It tries to launch the spot instance(s) based on your allocation strategy and number of spot pools.
2) If the desired instance type(s) aren't available, it tries all the other types in that AZ.
3) If no spot instances are available in that AZ, that launch request fails and it tries again in another enabled AZ.
4) If there are no spot instances of any of the types you have selected, in any of the AZs on the ASG, then nothing will launch and the ASG will periodically retry until it reaches the desired capacity.
Think of it like this: there are only so many servers in their data centers. If spot evictions are happening because AWS needs capacity for on-demand instances, and everyone running spot failed over to on-demand for that instance type, there would probably also be a sudden on-demand capacity issue in that AZ.
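For reference, the "on-demand base plus spot" split the question describes is configured through a MixedInstancesPolicy; a hedged boto3 sketch (group name, template, instance types, and subnets are placeholders) might look like:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="my-mixed-asg",          # placeholder
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=4,
        VPCZoneIdentifier="subnet-aaa,subnet-bbb",    # more AZs = more spot pools
        MixedInstancesPolicy={
            "LaunchTemplate": {
                "LaunchTemplateSpecification": {
                    "LaunchTemplateName": "my-template",  # placeholder
                    "Version": "$Latest",
                },
                # More instance types give the ASG more pools to retry against.
                "Overrides": [
                    {"InstanceType": "m5.large"},
                    {"InstanceType": "m5a.large"},
                    {"InstanceType": "m4.large"},
                ],
            },
            "InstancesDistribution": {
                "OnDemandBaseCapacity": 4,                 # always on-demand
                "OnDemandPercentageAboveBaseCapacity": 0,  # everything above is spot
                "SpotAllocationStrategy": "lowest-price",
                "SpotInstancePools": 2,
                # An empty string caps the spot price at the on-demand price.
                "SpotMaxPrice": "",
            },
        },
    )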

Azure auto scale-in kicks out a second user who is connected to an instance that is automatically terminated by scale-in

I have a question related to Azure autoscaling. When load/stress increases, instances scale out. If, after the scale-out, another user connects to one of the instances in the scale set and, a few minutes later, the load decreases, scale-in will start, and the second user may be kicked out, since they were connected to one of the instances that was scaled in/destroyed automatically.
Is there any way to stop the scale-in for the particular instance a user is connected to and performing their task on?
I think there is no way to stop the scale-in once it has started. But as far as I know, the scale operation is ordered: when scaling in, the instances with the highest IDs are removed first. For more details, see the Scale Set FAQ and the question "If I reduce my scale set capacity from 20 to 15, which VMs are removed?".
So I suggest that the second user use the instance with the smallest ID. This will avoid being kicked out as much as possible.
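To see which instances are "safest" under that ordering, you can list them sorted by instance ID; a small sketch with azure-mgmt-compute (resource names are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, "<subscription-id>")

    RG, VMSS = "my-rg", "my-vmss"  # placeholders

    # Lowest instance IDs are removed last during scale-in, so connect to those.
    vms = sorted(
        compute.virtual_machine_scale_set_vms.list(RG, VMSS),
        key=lambda vm: int(vm.instance_id),
    )
    for vm in vms:
        print(vm.instance_id, vm.name)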

Amazon EC2 boot time

Our web app performs a variable number of tasks for a user-initiated action. We have built a small system where a master server calculates the number of worker servers needed to complete the tasks, and the same number of EC2 instances are "turned on" to pick up and perform them.
"Turned on" because the time taken to spin up an instance from an AMI is extremely high. So the idea is to have a pool of worker instances and start and stop them as required.
This also takes into account how Amazon charges when you start an instance (you are billed for one hour every time you turn an instance on). The workers, once started, stay active for an hour and accept other tasks during this period.
We have managed to get this architecture up and running; however, the boot time still bothers us, as it fluctuates between 40 and 80 seconds. Is there some way we can reduce it?
Below is the stack running on the worker instances:
Ubuntu AMI
Node.js (using forever-service for auto-startup on boot)
Docker (the tasks are performed inside individual Docker containers)
Have you taken a look at AWS Lambda (https://aws.amazon.com/lambda)?
Lambda supports Node.js and will automatically manage the scaling of the required worker infrastructure depending on the number of requests. This avoids your "one-hour bill" problem: you pay only for the processing time you use.
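As a sketch of the shape this takes (shown in Python to match the question's tag, though Lambda supports Node.js as well; the function name, payload shape, and task logic are placeholders):

    import json
    import boto3

    # Worker side: a Lambda handler that performs one task per invocation.
    def lambda_handler(event, context):
        task = event["task"]  # assumed payload shape
        return {"status": "done", "result": f"processed {task}"}  # stand-in logic

    # Master side: fan out one async invocation per task, no instances to boot.
    lambda_client = boto3.client("lambda")
    for task in ["task-1", "task-2"]:
        lambda_client.invoke(
            FunctionName="worker",       # placeholder function name
            InvocationType="Event",      # asynchronous invocation
            Payload=json.dumps({"task": task}),
        )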
