Windows container - virtual switch issue

I have created an Azure VM with Windows Server 2016 TP5.
On this server I was able to install the Docker engine and create images and containers.
My issue is as follows:
at first everything worked properly, but after I shut down the VM through the Azure portal and started it again the next day, I am not able to start any container created on that VM. The error is below:
PS C:\> docker start iisdemo
Error response from daemon: failed to create endpoint iisdemo on network nat: HNS failed with error : Failed to create endpoint
Error: failed to start containers: iisdemo
Virtual switch details are below.
Before shutting down the VM, the virtual switch type was Internal (PowerShell command → Get-VMSwitch):
Name SwitchType NetAdapterInterfaceDescription
---- ---------- ------------------------------
Nat Internal
After starting the VM, the switch type had changed to blank (no switch type assigned to the VM switch):
Name SwitchType NetAdapterInterfaceDescription
---- ---------- ------------------------------
Nat
Please help me resolve this issue.

Ran into the same issue with an Azure TP5 VM. Rebooting didn't work and removing static mappings didn't work, but redeploying to a new Azure host got it working.
It's not a solution, but it is at least a workaround.

Thank you for posting your question.
When you shut down a VM through the Azure portal, the virtual machine goes into the deallocated state. This means that you lose your IP settings and are dynamically assigned a new IP address when the VM starts again.
You can find more information about IP addresses and deployment in the Azure documentation.
Hope this information is helpful for you.
Best Regards,
#Jamesvandenberg
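A workaround often reported for TP5 when the NAT network is broken after deallocation is to recreate the container network. A minimal sketch, assuming the TP5 Containers PowerShell module is present — verify that these cmdlets exist on your build before running:

```powershell
# Recreate the container NAT network after the virtual switch loses its type.
# Cmdlet names are from the TP5 Containers module; confirm availability on your build.
Stop-Service docker
Get-ContainerNetwork | Remove-ContainerNetwork -Force   # drop the broken "nat" network
Start-Service docker                                    # Docker recreates the NAT network on start
```

After the service restarts, `docker start iisdemo` should get a fresh endpoint on the recreated network.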

Related

Azure VM Nic With Loopback Address and No Connectivity

We have a VM in Azure, but we're not able to connect to it. I checked the serial log under boot diagnostics and it has the following error:
======== Microsoft Azure VM Health Report - Start 2020-02-19T20:50:47.2786733Z ========
{"reportTime":"2020-02-19T20:47:44.9492172Z","networkAdapters":[{"name":"Loopback Pseudo-Interface 1","status":"Up","macAddress":"","ipProperties":[{"protocolVersion":4,"address":"127.0.0.1","isDhcpEnabled":false},{"protocolVersion":6,"address":"::1","isDhcpEnabled":false}]}],"remoteAccess":null,"accounts":{"windows":{"adminAccountPasswordExpired":false,"adminAccountDisabled":false}},"services":[{"errorControl":"Normal","exitCode":0,"name":"TermService","processId":2708,"serviceType":"Share Process","startMode":"Auto","startName":"NT Authority\NetworkService","state":"Running","status":"OK"}
The key part, I think, is "name":"Loopback Pseudo-Interface 1". It has no MAC address, as above, and uses the IP 127.0.0.1. Has anyone come across this before and know how to get the NIC to be recognised? I've changed the NIC and changed the IP, but cannot seem to resolve this.
By default, 127.0.0.1 is assigned to the loopback interface; it represents the localhost address. The captured health report on its own is not enough to identify why you are unable to connect to the Azure VM.
First, verify that the VM status is Running on the overview blade of the virtual machine in the portal. Then check whether any ports are blocked by the NSG under the VM's networking settings, and try to RDP or SSH to the VM. You can get more details under Diagnose and solve problems on the overview blade.
If you still cannot connect, try to resize or redeploy the Azure VM. If you have important data, please back up the OS and data disks before you redeploy. You can also find more details on how to use boot diagnostics to troubleshoot virtual machines in Azure.
Hope this could help you.
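As noted above, moving the VM to a new Azure host sometimes clears host-side networking problems. A redeploy can be triggered from the Azure CLI; a sketch with placeholder resource names (requires a logged-in `az` session):

```shell
# Redeploy moves the VM to a new Azure host (names are placeholders):
az vm redeploy --resource-group MyResourceGroup --name MyVM

# The serial/boot log quoted above can also be pulled from the CLI:
az vm boot-diagnostics get-boot-log --resource-group MyResourceGroup --name MyVM
```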

Cannot connect to Azure VM; gives the error "An internal error occurred"

I tried the redeploy option on the Azure VM.
I tried to reset the configuration, but it gave the error "failed to reset RDP configuration". I stopped and restarted the VM 3-4 times, but the issue remains.
I see that the Azure VM agent is also unresponsive.
In boot diagnostics I see an exclamation mark in the image (see the boot diagnostics screenshot). Please help.
There must be some network configuration issue. Try resetting the VM's network configuration from the portal, and cross-check with your VNet too.
I just changed the IP address of my VM.
I was able to connect after that.
Thanks for your replies.
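For reference, the fix reported here — assigning a new private IP to the VM — can also be done with the ARM CLI. A sketch; the resource names, ipconfig name, and address are placeholders:

```shell
# Assign a new static private IP to the NIC's IP configuration (placeholder names):
az network nic ip-config update \
  --resource-group MyResourceGroup \
  --nic-name MyVMNic \
  --name ipconfig1 \
  --private-ip-address 10.0.0.5
```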

Cannot SSH to Azure VM after persistently mounting a file share (storage account)

I've got an issue: after I create a persistent mount point following the instructions in https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux and reboot my VM, I cannot SSH to the VM. Any idea how to fix this?
This could be one of two issues: a VM credential problem, or a problem on the node you are using to connect to the VM. This article covers how to troubleshoot the VM side on Azure: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/troubleshoot-rdp-connection
TLDR from the link:
Quick troubleshooting steps
After each troubleshooting step, try reconnecting to the VM:
Reset Remote Desktop configuration.
Check Network Security Group rules / Cloud Services endpoints.
Review VM console logs.
Reset the NIC for the VM.
Check the VM Resource Health.
Reset your VM password.
Restart your VM.
Redeploy your VM.
I'd also suggest trying to SSH from/to different nodes to check where the problem lies: the VM or the machine you are connecting from.
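One common cause in exactly this scenario is an /etc/fstab entry that hangs the boot when the share is unreachable, which leaves SSH down even though the VM shows as running. The `nofail` and `_netdev` mount options keep the VM booting when the share cannot be mounted. An illustrative entry — the storage account, share name, and credentials path are placeholders:

```
# /etc/fstab - "nofail" and "_netdev" prevent a boot hang if the share is unreachable
//mystorageacct.file.core.windows.net/myshare  /mnt/myshare  cifs  nofail,_netdev,credentials=/etc/smbcredentials/mystorageacct.cred,dir_mode=0777,file_mode=0777,serverino  0  0
```

If the VM is already stuck, the serial console or a rescue VM can be used to edit the fstab entry.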

Cannot migrate Azure VMs from Classic to ARM: RoleStateUnknown

I have successfully used this recipe in the past to migrate a virtual network including attached VMs: https://learn.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-cli-migration-classic-resource-manager
But today, as when I tried last week, no matter how many reboots, I get this error (Azure CLI):
> azure network vnet prepare-migration "<RG-NAME>"
info: Executing command network vnet prepare-migration
error: BadRequest : Migration is not allowed for HostedService
<CLOUD-SERVICE> because it has VM <VM-NAME> in State :
RoleStateUnknown. Migration is allowed only when the VM is in one of the
following states - Running, Stopped, Stopped Deallocated.
The VM is in fact running smoothly, and so says the Azure portal.
So any ideas how to get out of this mess?
Have you edited the NSG or the local firewall of the VM? Please do not restrict outbound traffic from the VM; that may break the VM Agent.
Also, please check whether the VM Agent is running properly. If the VM Agent is not reachable, this issue can occur.
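The role state the migration engine sees can be checked with the same classic (ASM) CLI used in the question; it should report ReadyRole before prepare-migration will succeed. A sketch — verify the output fields against your CLI version:

```shell
# Classic (ASM) Azure CLI, as used above; shows the role's reported status:
azure vm show "<VM-NAME>"
```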
==============================================================================
The only issue is that I don't seem to be able to move the reserved IP to my new load balancer.
If you migrate a cloud service with a preserved public IP address, that public IP address is migrated to ARM and assigned to an automatically created load balancer. You can then re-assign this static public IP address to your own load balancer.
Here are screenshots from my lab, before and after the migration.
I can re-associate the IP with new load balancer after I delete the auto-created one.

Unable to start VM - DeploymentVNetAddressAllocationFailure

After running for about 3 months, my VM has stopped working properly. According to the Azure console the VM is "Running", but I'm unable to SSH into it; the server seems unresponsive.
I tried to stop/start it again and now I'm getting the following message:
error: Networking.DeploymentVNetAddressAllocationFailure : Unable to allocate the required address spaces for the deployment in a new or predefined subnet that is contained within the specified virtual network.
info: Error information has been recorded to azure.err
error: vm start command failed
Since the VM was configured to use a static IP address, I removed the static IP and also changed the IP address to something else, but without much success. According to the console the server is "Running", but I'm still unable to SSH into the machine.
Any help would be greatly appreciated.
I understand that you are unable to SSH into the Azure virtual machine although it shows "Running" in the management portal. I suggest you try the steps below:
• Restart the virtual machine from the management portal.
• Resize the virtual machine from the portal, but be sure to back up the data on the temporary drive (D:), since that data is lost during a resize.
• If the issue is still not fixed, recreate the VM, retaining the attached disks and attaching them to the new VM.
• If the issue still persists, you can recover the VM by attaching the OS disk to another VM. Please refer to the blog below:
http://blogs.msdn.com/b/mast/archive/2014/11/20/recovering-azure-vm-by-attaching-os-disk-to-another-azure-vm.aspx
Hope this helps!
Regards,
Sowmya
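The recovery flow in the last bullet can be sketched with the classic (ASM) CLI of the same era as the linked blog. The commands and their flags below are illustrative only — verify each against `azure vm --help` on your CLI version, and follow the blog for the full procedure:

```shell
# Illustrative classic (ASM) CLI recovery flow; verify flags before running.
azure vm shutdown <broken-vm>                # stop the broken VM
azure vm delete <broken-vm>                  # delete the VM while keeping its disks
azure vm disk attach <rescue-vm> <os-disk>   # attach the OS disk to a rescue VM as a data disk
# Repair the disk from the rescue VM, then recreate the VM from the repaired disk.
```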
