Hyper-V isolation not activated in Windows Server version 1803 - Azure

I am trying to run Windows Server 2016 containers on a Windows Server version 1803 Service Fabric cluster in Hyper-V isolation mode, but it fails with:
No hypervisor is present on this system.
It seems that the Docker daemon is not configured for Hyper-V isolation and needs to be activated. How can I activate Hyper-V on the Windows Server image (Datacenter-Core-1803-with-Containers-smalldisk)?
DETAILS
HOST OS on ServiceFabric node
Publisher: MicrosoftWindowsServer
Offer: WindowsServerSemiAnnual
SKU: Datacenter-Core-1803-with-Containers-smalldisk
Version: 1803.0.20181017
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion
BuildLabEx REG_SZ 17134.1.amd64fre.rs4_release.180410-1804
CONTAINER OS
Windows Server 2016, build 14393 (Long-Term Servicing Channel)
Docker Command
docker run --isolation=hyperv -it mcr.microsoft.com/windows/servercore:ltsc2016 cmd
Error response from daemon: container
0499ef6e3f17843644323fa62b822fd30b89cc8f4ac2ab7d05396fec51252ac7
encountered an error during CreateContainer: failure in a Windows
system call: No hypervisor is present on this system.
EDIT
Hyper-V is installed; I checked that with the following command:
Get-WindowsFeature -ComputerName xxx
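Note that Get-WindowsFeature only confirms the role is installed; whether a hypervisor is actually exposed to the VM can be checked with something like the following (a sketch using built-in tooling):
# True only if a hypervisor is actually running underneath the OS
(Get-ComputerInfo -Property HyperVisorPresent).HyperVisorPresent
# Or, without PowerShell, look at the Hyper-V requirements line
systeminfo | findstr /i "hypervisor"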

You can only do nested virtualization on the Dv3 and Ev3 VM sizes. The Dv3 and Ev3 sizes are also some of the first VMs to run on Windows Server 2016 hosts. Windows Server 2016 hosts enable nested virtualization and Hyper-V containers for these new VM sizes. Nested virtualization allows you to run a Hyper-V server on an Azure virtual machine. With nested virtualization you can run a Hyper-V container in a virtualized container host, set up a Hyper-V lab in a virtualized environment, or test multi-machine scenarios. You can find more information on nested virtualization on Azure.
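One way to confirm whether the cluster nodes are on a size that supports nested virtualization is to check the scale set backing the node type with the Azure CLI (a sketch; the resource group and scale set names are placeholders):
# Show the size of the scale set backing the Service Fabric node type
az vmss show --resource-group myResourceGroup --name myNodeType --query "sku.name"
# Sizes such as Standard_D4s_v3 or Standard_E4s_v3 expose virtualization extensions;
# older sizes do not, which produces the "No hypervisor is present" error.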

Related

Connect to Docker on a different machine on Windows

Related to this but not exactly the same question: how do you run Docker on Windows when the "actual" Docker engine runs on a different machine?
Say you have a Windows 10 guest VM as a development environment with the Docker client on it, but since it is a guest VM and nested virtualization is still in question, the actual Docker "server" runs on a different guest VM (perhaps a Linux server or something).
So the question is:
How to set up a remote Docker?
How do you connect to a remote Docker?
How do you make it behave as if the remote Docker were running on your local machine, that is, commands work and you can map ports, etc.?
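The usual approach is to point the local client at the remote daemon via DOCKER_HOST or -H. A minimal sketch, assuming a reasonably recent Docker client and daemon (18.09 or later for the ssh:// transport) and using placeholder names:
# Point the local Docker client at a remote daemon over SSH (needs key-based SSH access)
$env:DOCKER_HOST = "ssh://docker-user@remote-docker-vm"
docker ps   # now lists containers running on the remote VM

# Or per command, without an environment variable
docker -H ssh://docker-user@remote-docker-vm run -d -p 8080:80 nginx
# Published ports open on the remote VM, not locally, so the container is
# reachable at http://remote-docker-vm:8080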

Is SSL3 protocol supported in Azure Kubernetes Service

We are running the Windows Server 2019 Server Core OS image in our container. By default the container supports the following protocols:
TLS
TLS1.1
TLS1.2
My application uses SSL3 for some requests. I want to enable SSL3 in the container.
I tried editing the registry in the container, but Windows requires a restart to recognize registry changes related to SSL3. Since there is no concept of restarting Windows inside a Docker container, how can I enable the SSL3 protocol on the Windows Server 2019 Docker image?
Can I use init containers to change the Windows registry? If so, how?
I am also open to any other options.
Extra Information
I am using the following command to verify that SSL3 is not available on the Windows Server 2019 OS container.
nmap --script ssl-enum-ciphers -p 443 mysite.com --unprivileged
I am getting the following output:
SSL3 is not supported on Azure Containers
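If the registry route is still worth trying, the usual pattern for Windows containers is to bake the change into the image at build time rather than change a running container, so the keys are already in place when the container starts and no in-container reboot is involved. A sketch using the standard Schannel keys (whether the Server 2019 base image still honours SSL 3.0 at all is a separate question, given the answer above):
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Hypothetical layer: flip the Schannel keys that normally control server-side SSL 3.0
RUN reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 1 /f
RUN reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f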

Error running Microsoft Azure computer vision Cognitive Services Read Text container on-prem - Illegal Instruction

I'm trying to run the preview version of a Computer Vision Docker container on a Red Hat Enterprise Linux Server release 7.5 on-premises.
I've pulled the Docker container containerpreview.azurecr.io/microsoft/cognitive-services-read:latest and run it like this:
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8
containerpreview.azurecr.io/microsoft/cognitive-services-read
Eula=accept Billing={ENDPOINT} ApiKey={API_KEY}
The service is up, the Swagger UI is visible, and the status endpoint returns OK.
However, when I try to use the /vision/v2.0/read/core/Analyze endpoint, the machine gives me this log:
Initialize on-prem Read 2.0 GA...
/var/tmp/scleXV71Y: line 8: 10 Illegal instruction (core dumped) dotnet Microsoft.CloudAI.Containers.OneOcr.2.0.dll SecurityPrototype=true $ARGS
Searching similar issues, this seems to be an error related to the machine's AVX support.
If I check the AVX support on the machine with the command
grep avx /proc/cpuinfo
it seems to support AVX but not AVX2.
However, I executed the same steps on a Windows 10 machine that also supports AVX but not AVX2, and it works fine.
@Gabriella Esposito: Azure Cognitive Services containers require the host CPU to support AVX2 on Linux systems. Please check the host computer requirements here.
You can try to enable this and run the container again.
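A quick way to tell whether the CPU actually exposes AVX2 is to look for the flag directly (a sketch; if the flag is missing it simply will not appear in the output):
# Counts cores reporting avx2; 0 means the instruction set is not available
grep -c avx2 /proc/cpuinfo
# Compare with plain AVX, which the host above does report
grep -c ' avx ' /proc/cpuinfo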

Installing docker on azure virtual machine windows 10

I am getting an error when installing Docker on an Azure virtual machine.
Machine configuration: Azure VM, Windows 10 Enterprise, Intel 2.4 GHz, 7 GB RAM, 64-bit operating system, x64-based processor.
I went through a few blogs, and they asked me to enable nested virtualization on the Azure VM as follows:
Set-VMProcessor -VMName MobyLinuxVM -ExposeVirtualizationExtensions $true
But this also didn't help, and the virtual machine MobyLinuxVM failed to start.
I have installed the Hyper-V and Containers components from Windows features. But the error says "because one of the Hyper-V components is not running", whereas all of the Hyper-V components are running.
I checked the Task Manager performance tab and I don't see the virtualization option there. I can't modify the virtualization settings in the BIOS as I am installing Docker on an Azure VM. I also tried disabling the Windows firewall, but that didn't help.
So how do I run Docker on an Azure Windows 10 Enterprise virtual machine?
Here is a solution if you are getting this error on an Azure Windows 10 VM where you have installed Docker:
Ensure the Windows Hyper-V features are enabled by running the PowerShell cmdlet:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -Verbose
Ensure the Windows Containers feature is enabled by running the PowerShell cmdlet:
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All -Verbose
Ensure the hypervisor is set to auto-start in the Boot Configuration Database (BCD) by running the following command in an elevated command prompt:
bcdedit /set hypervisorlaunchtype Auto
After running all of the above and restarting the Azure VM, Docker should start normally.
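After the restart, it can be verified that the pieces Docker depends on actually came up before retrying (a sketch using built-in tools):
# The Hyper-V and container services Docker for Windows depends on; both should be Running
Get-Service vmms, vmcompute | Select-Object Name, Status
# The hypervisor launch type set above should now read "Auto"
bcdedit /enum {current} | findstr /i hypervisorlaunchtype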
Azure doesn't allow nested virtualization except on certain VM sizes. You need to use Dsv3 or Ev3 instances for that; then just use Docker like you normally would.
Microsoft offers images with Docker Enterprise preinstalled. This works even on a B2s VM. Just select one of the "Windows Server 2019/2016 Datacenter with Containers" images while creating the VM.
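If the existing VM is on a size without nested virtualization, it can be moved to a v3 size from the Azure CLI (a sketch; the resource group and VM names are placeholders, and the resize restarts the VM):
# List sizes available for this VM, then move it to a nested-virtualization-capable size
az vm list-vm-resize-options --resource-group myResourceGroup --name myWin10Vm --output table
az vm resize --resource-group myResourceGroup --name myWin10Vm --size Standard_D4s_v3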

docker build fails on a cloud VM

I have Ubuntu 16.04 (Xenial) running inside an Azure VM. I have followed the instructions to install Docker, and all seems fine and dandy.
One of the things that I need to do when I trigger docker run is to pass --net=host, which allows me to run apt-get update and other internet-dependent commands within the container.
The problem comes in when I try to trigger docker build based on an existing Ubuntu image. It fails:
The problem here is that there is no way to pass --net=host to the build command. I see that there are issues open on the Docker GitHub (#20987, #10324) but no clear resolution.
There is an existing answer on Stack Overflow that covers the scenario I want, but that doesn't work within a cloud VM.
Any thoughts on what might be happening?
UPDATE 1:
Here is the docker version output:
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 22:11:10 2016
OS/Arch: linux/amd64
UPDATE 2:
Here is the output from docker network ls:
NETWORK ID NAME DRIVER SCOPE
aa69fa066700 bridge bridge local
1bd082a62ab3 host host local
629eacc3b77e none null local
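One thing worth checking before re-provisioning: later Docker releases (well after the 1.12 shown above) added a --network flag to docker build, which covers the host-network case directly. A sketch, assuming the engine has been upgraded:
# Only works on newer Docker engines; 1.12 does not support --network on build
docker build --network=host -t myimage .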
Another approach would be to let docker-machine provision the VM for you and see if that works. There is a provider for Azure, so you should be able to set your subscription ID on a local Docker client (Windows or Linux) and follow the instructions to get a new VM provisioned with Docker; it will also set up your local environment variables to communicate with the Docker VM instance remotely. After it is set up, running docker ps or docker run locally runs the commands as if you were running them on the VM. Example:
#Name at end should be all lower case or it will fail.
docker-machine create --driver azure --azure-subscription-id <omitted> --azure-image canonical:ubuntuserver:16.04.0-LTS:16.04.201608150 --azure-size Standard_A0 azureubuntu
#Partial output, see docker-machine resource group in Azure portal
Running pre-create checks...
(azureubuntu) Completed machine pre-create checks.
Creating machine...
(azureubuntu) Querying existing resource group. name="docker-machine"
(azureubuntu) Resource group "docker-machine" already exists.
(azureubuntu) Configuring availability set. name="docker-machine"
(azureubuntu) Configuring network security group. location="westus" name="azureubuntu-firewall"
(azureubuntu) Querying if virtual network already exists. name="docker-machine-vnet" location="westus"
(azureubuntu) Configuring subnet. vnet="docker-machine-vnet" cidr="192.168.0.0/16" name="docker-machine"
(azureubuntu) Creating public IP address. name="azureubuntu-ip" static=false
(azureubuntu) Creating network interface. name="azureubuntu-nic"
(azureubuntu) Creating virtual machine. osImage="canonical:ubuntuserver:16.04.0-LTS:16.04.201608150" name="azureubuntu" location="westus" size="Standard_A0" username="docker-user"
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env azureubuntu
#Set environment using PowerShell (or login to the new VM) and see containers on remote host
docker-machine env azureubuntu | Invoke-Expression
docker info
docker network inspect bridge
#Build a local docker project using the remote VM
docker build MyProject
docker images
#To clean up the Azure resources for a machine (you can create multiple, also check docker-machine resource group in Azure portal)
docker-machine rm azureubuntu
Best I can tell, that is working fine. I was able to build a debian:wheezy Dockerfile that uses apt-get on the Azure VM without any issues. This should also allow the containers to run using the default bridged network instead of the host network.
According to "I can't get Docker containers to access the internet?", using sudo systemctl restart docker might help, or enabling net.ipv4.ip_forward = 1, or disabling the firewall.
Also, you may need to update the DNS servers in /etc/resolv.conf on the VM.
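A sketch of those two checks on the VM (168.63.129.16 is Azure's virtual DNS IP; the second resolver is just an example, and the tee overwrites any existing daemon.json):
# Make sure the kernel forwards traffic from the docker0 bridge
sudo sysctl -w net.ipv4.ip_forward=1
sudo systemctl restart docker

# If DNS inside containers is the problem, pin resolvers for the daemon
echo '{ "dns": ["168.63.129.16", "8.8.8.8"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker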
