Terraform destroy Azure load balancer

I've been trying to create an Azure virtual machine scale set with Terraform, and it creates fine, but when I run terraform destroy I receive the message below. Any ideas on how I could solve this issue?
Error: Error waiting for completion of Load Balancer "vmss-see-d-01-LB" (Resource Group "RG-VMSS-D-SEE-01"):
Code="Canceled"
Message="Operation was canceled."
Details=[{
"code":"CanceledAndSupersededDueToAnotherOperation",
"message":"Operation PutLoadBalancerOperation (81ab2118-37e3-4552-a2f7-e1e12bccb1e5) was canceled and superseded by operation InternalOperation (1d4e2e27-f457-4941-b3b8-e6352f84ddd1)."
}]

As the error shows, the virtual machine scale set sits behind a Load Balancer. When the VMSS is in the load balancer's backend pool and a NAT rule or load-balancing rule also references it, there are dependencies between the VMSS and the Load Balancer: the load balancer's rules and pool reference the VMSS. So if you try to delete the VMSS directly, or two operations hit the load balancer at the same time, this error occurs.
So the right sequence for deleting the VMSS is: first delete the NAT rule or load-balancing rule associated with the VMSS, then remove the VMSS from the load balancer's backend pool, and only after those steps delete the VMSS itself.
Hope this helps you understand why the error happens.

Related

Azure AKS stuck in a bad state with error VM scale set and load balancer must belong to same virtual network

While migrating a cluster, we moved the vnet used by the AKS from one resource group (the one with the nodepool created by the AKS) to a different RG where we created the AKS cluster. This, however, resulted in an unexpected state: the original vnet in the nodepool resource group stayed as-is, while a copy of the vnet with the same ID was created in the AKS RG. So now we have two vnets with the same name in two different resource groups. Afterwards, when we tried to create a new nodepool, we received the following error:
Code="VMScaleSetMustBelongToSameVnetAsLB" Message="VM scale set
references virtual network
/subscriptions/12345/resourceGroups/project-test-k8s-mc-rg/providers/Microsoft.Network/virtualNetworks/AKS-VNET-931
which is different than load balancer virtual network
/subscriptions/12345/resourceGroups/project-test-k8s-rg/providers/Microsoft.Network/virtualNetworks/AKS-VNET-931. VM scale set and load balancer must belong to same virtual network."
The cluster was created with a managed vnet.
We tried searching for ways to change the load balancer created by AKS to use a different vnet, but we do not see any options. We cannot afford to recreate the cluster at this stage. So do we have any other options to fix this issue?
There is no direct option to change the load balancer created by AKS to use a different VNet. If the load balancer uses an IP address in a different subnet, ensure the AKS cluster identity also has read access to that subnet. The VM scale set and load balancer must always belong to the same virtual network.
Only the address space and subnets can be modified. There is a blog by "Ajay Kumar"; refer to that tutorial for more information.

Load balancing ACIs inside a VNet

I have Azure Container Instances inside a vnet and I want to implement load balancing but cannot think of a workable solution. For context, it will be a set of VMs contacting the load balancing resource which would direct the request to one of the ACIs.
Things I have tried so far are Azure Load Balancer (does not work with ACI) and Azure Traffic Manager (cannot be inside a VNet). I don't think an application gateway is a feasible solution either. I want to know if anyone has faced this scenario before and how they overcame it, or if someone has a potential solution that I can test out?
Well, to access the ACI inside a VNet through a Load Balancer, you just need to create a Load Balancer and add a backend pool containing the private IP address of the ACI.
Then create a health probe and a load-balancing rule for the port you need. When everything is in place, you can access the ACI inside the VNet through the load balancer's public IP address.

Azure - Can't create load balancer for the ScaleSet

I created a Scale Set (using a template) with an existing virtual network.
This existing virtual network has already a Load Balancer (with a public IP) with specific VMs.
Now I can't connect to the VMs in the scale set. There's no option to add the scale set to the Load Balancer, or to add the scale set's VMs to the Load Balancer. Creating a new Load Balancer doesn't help.
It seems that the only option for adding a backend pool is using an availability set or a single VM (which is not in the Scale Set).
Is there any way to solve this? to somehow add the Scale Set to the Load Balancer or to connect to it?
The goal was to create the scale set to be in the existing Load Balancer (in the network with the other VMs), but unfortunately it didn't work.
It is not possible to add VMs in different availability sets to the same load balancer, and a VMSS has its own availability set by design, so this is not possible.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ccf69a9c-0a6a-47bc-afca-561cf66cdebd/multiple-availability-sets-on-single-load-balancer?forum=WAVirtualMachinesVirtualNetwork
You can work around this by creating a VM in the network that acts as a load balancer, but that's obviously not a PaaS solution.
The goal was to create the scale set to be in the existing Load Balancer (in the network with the other VMs), but unfortunately it didn't work.
It is not possible, and there is no need. Please refer to this official document: Azure VMSS instances sit behind their own load balancer, and a VMSS instance cannot be added to an existing load balancer.
Now, I can't connect to the VMs in the scale set.
Did you create inbound NAT rules for your instances? Alternatively, you could create a jump VM in the same VNet and log in to an instance from there. See this question.
If you cannot log in to the VM from a jump VM, it is not a VMSS issue and you should check the instance itself. If you haven't changed anything on your instances, you can open a support ticket with Azure.

Multiple vmss behind single Azure Load Balancer

We have multiple background worker VMSS that do not need a public IP to work.
I want to be able to connect to an arbitrary VM (e.g. to troubleshoot via RDP, or to collect snapshots using a remote profiler).
When there's only one VMSS per load balancer, everything works like a charm. I've set up NAT pools for each port used on the VMs and all works fine.
Now, when I try to add one more VMSS to the same load balancer (using its own NAT/backend pools), the deployment fails with this message:
Virtual Machine /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/|providers|Microsoft.Compute|virtualMachineScaleSets|...|virtualMachines|0 is using different Availability Set than other Virtual Machines connected to the Load Balancer(s) ...
As far as I know, there's no way to set the availability set for a VMSS. Are there any options other than keeping a separate load balancer/public IP for each VMSS?
Update: I've found a similar scheme for a VM + Availability Set setup (see the ILB endpoint section).
Something like this for VMSS?
You are right, we can't change the availability set for a VMSS.
if I try to add one more VMSS to the same load balancer
As we know, we can't add different availability sets to a single load balancer, so we can't add a second VMSS to the same load balancer.
Are there any options other than keeping a separate load balancer/public IP for each VMSS?
We have multiple background worker VMSS that do not need a public IP to work.
Are those VMSS in the same VNet? If yes, we can deploy a new VM in the same VNet, connect to that VM, and then use it to reach the VMSS instances over their internal IP addresses. In this way the new VM works as a jumpbox, and we can use it to troubleshoot; see the sketch below.
Update:
Is it possible then to have multiple VMSS in the same VNet and assign a separate public IP/load balancer to each of them?
Yes. We can create a new Azure VM with a public IP, install HAProxy on it, and make that VM work as a load balancer: add all the VMSS instances in the same VNet to the HAProxy backend pool, and then connect to a VMSS instance through this VM's public IP address plus your NAT port.

Cannot migrate Azure VMs from Classic to ARM: RoleStateUnknown

I have successfully used this recipe in the past to migrate a virtual network including attached VMs: https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-cli-migration-classic-resource-manager
But today, and also when I tried last week, no matter the number of reboots, I get this error (Azure CLI):
> azure network vnet prepare-migration "<RG-NAME>"
info: Executing command network vnet prepare-migration
error: BadRequest : Migration is not allowed for HostedService
<CLOUD-SERVICE> because it has VM <VM-NAME> in State :
RoleStateUnknown. Migration is allowed only when the VM is in one of the
following states - Running, Stopped, Stopped Deallocated.
The VM is in fact running smoothly, and so says the Azure portal.
So any ideas how to get out of this mess?
Have you edited the NSG or local firewall of the VM? Please do not restrict outbound traffic from the VM; doing so may break the VM Agent.
Also, please check if the VM Agent is running properly. If the VM Agent is not reachable, this issue may occur.
The only issue is that I don't seem to be able to move the Reserved IP to my new load balancer.
If we migrate the cloud service with a preserved public IP address, this public IP address will be migrated to ARM and be assigned to a load balancer automatically. (The load balancer is auto-created.) Then, you are able to re-assign this static public IP address to your load balancer.
(Screenshots of the lab before and after the migration omitted.)
I can re-associate the IP with the new load balancer after I delete the auto-created one.
