I have successfully used this recipe in the past to migrate a virtual network including attached VMs: https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-cli-migration-classic-resource-manager
But today, and also when I tried last week, no matter how many times I reboot, I get this error (Azure CLI):
> azure network vnet prepare-migration "<RG-NAME>"
info: Executing command network vnet prepare-migration
error: BadRequest : Migration is not allowed for HostedService
<CLOUD-SERVICE> because it has VM <VM-NAME> in State :
RoleStateUnknown. Migration is allowed only when the VM is in one of the
following states - Running, Stopped, Stopped Deallocated.
The VM is in fact running smoothly, and so says the Azure portal.
So any ideas how to get out of this mess?
Have you edited the NSG or the local firewall of the VM? Please do not restrict outbound traffic from the VM; doing so may break the VM Agent.
Also, please check whether the VM Agent is running properly. If the VM Agent is not reachable, this issue can occur.
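For example, on an Ubuntu VM you can verify the agent over SSH and then retry the migration. This is only a sketch; the agent service name varies by distro (walinuxagent on Ubuntu, waagent on some others):

    # On the VM: check (and if needed restart) the Azure Linux agent.
    sudo systemctl status walinuxagent
    sudo systemctl restart walinuxagent

    # From your workstation: retry the prepare step with the classic CLI.
    azure config mode asm
    azure network vnet prepare-migration "<RG-NAME>"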
==============================================================================
The only issue is that I don't seem to be able to move the Reserved IP to my new load balancer.
If you migrate the cloud service with a reserved public IP address, that public IP address will be migrated to ARM and automatically assigned to a load balancer. (The load balancer is auto-created.) You can then re-assign this static public IP address to your own load balancer.
Here are the screenshots of my lab:
[Screenshot: before the migration]
[Screenshot: after the migration]
I can re-associate the IP with the new load balancer after I delete the auto-created one.
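With the current Azure CLI, that re-association is roughly as follows; the resource-group, load-balancer, and frontend names below are placeholders, not values from the migration:

    # Delete the auto-created load balancer to free the migrated public IP,
    # then attach it to the frontend configuration of your own load balancer.
    az network lb delete --resource-group <rg> --name <auto-created-lb>
    az network lb frontend-ip update --resource-group <rg> --lb-name <my-lb> --name <frontend-config> --public-ip-address <migrated-public-ip>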
==============================================================================
SETUP:
I have 2 Ubuntu VMs sitting behind an internet-facing Standard load balancer. The LB is zone-redundant, and the 2 VMs are set up for HA in zones 1 and 2.
The VMs are spun up with a Virtual Machine Scale Set, and the entire infrastructure is deployed with Terraform.
Applications running in containers on the VMs are exposed on port 5050.
Inbound rules are set to allow traffic on ports 80 and 5050.
The VMs are in the LB backend pool.
PROBLEM:
When the VMs are up and running and I access the console, the VMs are unable to connect to the Ubuntu repo or download any external package.
Deleting and scaling out VMs - same issue.
[Screenshot: load balancer rules]
[Screenshot: load balancer health probe]
However, when I delete the LB rules and the LB probe and recreate them, I am immediately able to download packages from the Ubuntu repo or any other external link.
I also deleted one VM and scaled out a new VM (after recreating the LB rules and probe), and the Ubuntu and Docker packages installed successfully.
This is driving me crazy, has anyone come across this?
I cannot reproduce this issue in the same scenario when I deploy the entire infrastructure via the Azure portal.
According to control outbound connectivity for Standard Load Balancer:
If you want to establish outbound connectivity to a destination
outside of your virtual network, you have two options:
assign a Standard SKU public IP address as an Instance-Level Public IP address to the virtual machine resource or
place the virtual machine resource in the backend pool of a public Standard Load Balancer.
Both will allow outbound connectivity from the virtual network to outside of the virtual network.
So, this issue may happen because the load balancer rules had not taken effect at initial deployment time, were not configured correctly, or the public-facing load-balancing frontend IP had not been provisioned yet. You may also check whether any firewall or other restriction blocks outbound traffic from your VMSS instances.
When I provisioned these resources, I had to associate an NSG that whitelists the allowed traffic with the subnet of the VMSS instances. This triggers the Standard LB to begin receiving incoming traffic. Also, I changed the upgrade policy to Automatic.
Hope this information helps you.
I had the same issue. Once I added a load balancing rule, my VMs had internet access.
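If it helps, here is roughly how such a rule looks with the Azure CLI; every name below is a placeholder, and exact flag spellings vary a little between CLI versions:

    az network lb rule create \
        --resource-group <rg> \
        --lb-name <lb-name> \
        --name http-to-app \
        --protocol Tcp \
        --frontend-port 80 \
        --backend-port 5050 \
        --frontend-ip-name <frontend-name> \
        --backend-pool-name <pool-name> \
        --probe-name <probe-name> \
        --disable-outbound-snat false

With no explicit outbound rules configured, a Standard LB only provides outbound SNAT for a backend instance once a load-balancing rule like this is in place, which matches the behavior described above.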
==============================================================================
I am trying to set up a custom build agent on a Windows VM in Azure. I installed the build agent from Azure Pipelines. The VM shows up in the agent pool, but is offline. For this VM I used the default settings, so it automatically created a virtual network, public IP, and network security group. The network security group is modified to allow RDP traffic from my IP address only, and to allow HTTPS traffic. I am assuming something in this setup is preventing Azure Pipelines from sending data to the VM.
My first question: how do I get this setup to work? What am I missing?
My second question: how do I make this work more securely by removing the default link between the public IP and the VM, and ultimately blocking direct access to the VM with a firewall?
The VM only needs outbound HTTPS access to Azure DevOps.
You don't need a public IP for the agent VM:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with
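For the first question: the agent only makes outbound HTTPS calls to Azure DevOps, so an agent that registers but shows as offline usually means the local agent service is not running or cannot reach dev.azure.com. As a sketch, configuring the agent to run as a Windows service looks like this (the organization, PAT, pool, and agent names are placeholders):

    .\config.cmd --url https://dev.azure.com/<your-organization> --auth pat --token <your-PAT> --pool <pool-name> --agent <agent-name> --runAsService

For the second question: because the connection is outbound-only, you can remove the public IP from the VM's NIC entirely, and the agent will still reach Azure DevOps as long as outbound HTTPS (443) is allowed.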
==============================================================================
We have a single cloud service (within the same resource group) in Azure that contains two VMs, and we would like each of the VMs to host some external-facing websites on port 80.
Take for example:
VM1 = www.domain1.com:80, www.domain2.com:80
VM2 = www.domain3.com:80, www.domain4.com:80
It seems that with Azure, all resources within a cloud service share the same VIP, DNS name, and endpoint settings; in particular, the VIP points to the cloud service and not to any individual VM. Since all the settings are shared, we cannot create a DNS record pointing directly to the individual VMs.
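For illustration, endpoints under the shared VIP are distinguished only by their public port, so two VMs cannot both listen on public port 80; with the classic PowerShell cmdlets, the second VM would need a different public port (the service and VM names here are placeholders):

    Get-AzureVM -ServiceName "MyCloudService" -Name "VM1" |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 80 -LocalPort 80 |
        Update-AzureVM
    Get-AzureVM -ServiceName "MyCloudService" -Name "VM2" |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -PublicPort 8080 -LocalPort 80 |
        Update-AzureVM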
Is this a limitation of Azure or setup issue?
We've tried giving each VM its own instance IP address, but only 1 can be assigned, as we receive an error message on the 2nd VM:
Failed to update IP address in the virtual machine 'XXXX'. Error: The operation '08faed40bf2fad76a67fac50be475a33' failed: 'The server encountered an internal error. Please retry the request.'
I did notice that the Virtual IP address assignment = Dynamic - but I'm not sure if that's related to the above error?
==============================================================================
I am using the Azure PowerShell cmdlets to reserve the current IP of my running Windows VM instance. To test it, I created a new VM, spun it up, then ran New-AzureReservedIP with the -ServiceName parameter so it knows to reserve the current IP. It worked a charm! I shut down the machine and it reallocated the same IP address on startup.
Now if I do the exact same thing on an actual production Windows VM, I get the error below. Could it be that this machine has been running for about 3 years and is not supported for some reason? The syntax is correct, as in my initial test, and it's in the same location etc.
BadRequest : Cannot reserve the ip of deployment xxxxxxxx
Has anyone else had this problem?
If this cloud service is 3 years old, it is quite possible that you are using an affinity-group VNet to host the VM. It is not possible to reserve an IP address from such a VNet. The following is documented:
Reserved IPs are only supported in regional VNets. It is not supported for VNets that are associated with affinity groups. For more information about associating a VNet with a region or an affinity group, see About Regional VNets and Affinity Groups.
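If you want to check, the classic Azure PowerShell module can show whether the VNet is tied to an affinity group rather than a region; the VNet name is a placeholder, and property names may differ slightly by module version:

    Get-AzureVNetSite -VNetName "<vnet-name>" | Select-Object Name, AffinityGroup, Location

An affinity-group VNet reports an AffinityGroup value instead of a Location, and such a VNet cannot have a reserved IP.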
==============================================================================
I had a need to add additional public IP addresses to an Azure VM and found a working solution here:
Azure VM: More than one Public IP
Essentially this creates a reserved IP in Azure and then adds the reserved IP to a cloud service. Once it's bound to a cloud service it can be mapped to a VM endpoint.
This all works great, but there is one bit I don't understand: the IP address of the reserved IP and the resulting VM endpoint don't match. I have to set up DNS to point to the IP address of the endpoint to make this work. Is there something I am not doing right, or is this just the way reserved IPs work?
It looks like this unanswered question is the same issue:
azure reserved IP for VM is diffrent than the given
Thanks!
The "Azure Cloud Service" is a container that provides internet connectivity to "Azure VMs". Thus, you assign the Internet facing Public IP to the Cloud Service. This article is relatively good at explaining the relationship: Azure Cloud Services
From above link:
Here’s a definition of an Azure IaaS cloud service that will make it easy for you to understand what it is in the context of Azure Infrastructure Services:
A cloud service is a network container where you can place virtual machines.
All virtual machines in that container can communicate with each other directly through Azure (and therefore don’t have to go out to the Internet to communicate with each other).
This container is also assigned a DNS name that is reachable from the Internet.
A rudimentary DNS server is created and can provide name resolution for all virtual machines within the same cloud service container (note that name resolution provided by the DNS server is only available to the virtual machines that are located within the cloud service).
One or more Virtual IP Addresses (VIPs) are assigned to the container and these IP addresses can be used to allow inbound connections from the Internet to the virtual machines.
Certain services (like FTP) may require your VM to have a public IP: Azure VM Public IP
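As a rough sketch of the recipe from the linked answer, using the classic (ASM) PowerShell cmdlets; all the names below are placeholders:

    # Create a reserved IP and associate it with the cloud service; this becomes the VIP.
    New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West US"
    Set-AzureReservedIPAssociation -ReservedIPName "MyReservedIP" -ServiceName "MyCloudService"
    # An instance-level public IP (PIP) gives the VM its own, additional address.
    Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM" |
        Set-AzurePublicIP -PublicIPName "MyPIP" |
        Update-AzureVM

The distinction matters here: the reserved IP becomes the cloud service VIP, while a PIP is a separate, additional address on the VM itself.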
(IaaS v1) An Azure cloud service comes with a permanent DNS name - something.cloudapp.net - and has a single VIP allocated whenever there are VMs deployed in it OR whenever a reserved IP address is associated with it. Traffic is either load balanced or NATted (port forwarded) to the VM from the Azure Load Balancer sitting on the VIP. You can also associate a public instance-level IP address (PIP) with a VM, which gives it an additional IP address. The VIP always has a DNS name (something.cloudapp.net) while the PIP has one only if you specifically add it, I did a post which goes into these differences.
(IaaS v2) VMs are not deployed into cloud services and only have a public IP address if one is specifically added - either by configuring a PIP on the NIC of the VM (and optionally giving it a cloudapp.azure.com DNS name) or by configuring a load balancer and either load balancing or NATting traffic to it. This load balancer is configured with a public IP address and can optionally have a cloudapp.azure.com DNS name associated with it. (Ignoring internal load balancers in this discussion.)
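For completeness, the IaaS v2 configuration described above looks roughly like this with the current Azure CLI; the resource names are placeholders, and ipconfig1 is only the default ip-config name:

    az network public-ip create --resource-group <rg> --name MyPip --dns-name <dns-label>
    az network nic ip-config update --resource-group <rg> --nic-name <vm-nic> --name ipconfig1 --public-ip-address MyPip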