I want to add a Windows Server container to an Azure Kubernetes Service (AKS) cluster. Currently, I am using the Azure REST API to manage the cluster, but it is returning the following error:
{
"code": "AzureCNIOnlyForWindows",
"message": "Windows agent pools can only be added to AKS clusters using Azure-CNI."
}
The request body I am sending is:
{
"location": "location1",
"tags": {
"tier": "production",
"archv2": ""
},
"properties": {
"kubernetesVersion": "",
"dnsPrefix": "dnsprefix1",
"agentPoolProfiles": [
{
"name": "nodepool1",
"count": 3,
"vmSize": "Standard_DS1_v2",
"osType": "Linux"
}
],
"linuxProfile": {
"adminUsername": "*******",
"ssh": {
"publicKeys": [
{
"keyData": "keydata"
}
]
}
},
"networkProfile": {
"loadBalancerSku": "basic"
},
"windowsProfile": {
"adminUsername": "********",
"adminPassword": "************************"
},
"servicePrincipalProfile": {
"clientId": "clientid",
"secret": "secret"
},
"addonProfiles": {},
"enableRBAC": true,
"enablePodSecurityPolicy": true
}
}
From your question, I assume you want to add a Windows node pool to your AKS cluster. The error means your cluster does not use the Azure CNI network plugin. For Windows node pools, the documentation says:
In order to run an AKS cluster that supports node pools for Windows
Server containers, your cluster needs to use a network policy that
uses Azure CNI (advanced) network plugin.
So the solution is to create a new AKS cluster that uses the Azure CNI network plugin, and then add the Windows node pool to it. Take a look at the steps in Create AKS cluster for Windows node pool through Azure CLI. In the REST API, you need to set networkPlugin under properties.networkProfile to the value azure. See NetworkPlugin.
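For reference, a rough sketch of the equivalent Azure CLI flow; all resource names, sizes, and credentials below are placeholders, not values from your environment:

# Create a new AKS cluster that uses the Azure CNI network plugin.
# Windows node pools also require Virtual Machine Scale Sets and, typically, the standard load balancer SKU.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --network-plugin azure \
    --vm-set-type VirtualMachineScaleSets \
    --load-balancer-sku standard \
    --windows-admin-username azureuser \
    --windows-admin-password 'ReplaceWithAStrongPassword!' \
    --generate-ssh-keys

# Add the Windows node pool to the new cluster (Windows pool names are limited to 6 characters).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 1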
Related
I have a container that runs in ACI and uses mounted storage account file shares. In the template I have a section:
"type": "Microsoft.ContainerInstance/containerGroups",
...
"volumeMounts": [
{
"mountPath": "/aci/sra/
"name": "acisra",
"readOnly": false
},
{
"mountPath": "/aci/fastq
"name": "acifastq",
"readOnly": false
},
{
"mountPath": "/aci/cache
"name": "acicache",
"readOnly": false
}
],
...
"volumes": [
{
"name": "acisra",
"azureFile": {
"readOnly": false,
"shareName": "acisra",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acicache",
"azureFile": {
"readOnly": false,
"shareName": "acicache",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acifastq",
"azureFile": {
"readOnly": false,
"shareName": "acifastq",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
}
],
This allows me to run bash scripts referencing paths such as:
/aci/cache/$ID
Unfortunately, I am finding that ACI has poor networking performance. I am also running an Azure Function that makes use of the same file shares on an always-on App Service plan, and the performance there is excellent.
I found this article explaining how to run a container on an App Service plan. This works great; the Log Stream shows my container is working.
https://learn.microsoft.com/en-us/azure/app-service/quickstart-custom-container?tabs=dotnet&pivots=container-linux-vscode
However, my container is not able to access the file shares.
Is there a way to tell App Service to make the file share connection available to the container, in a manner similar to the "Microsoft.ContainerInstance/containerGroups" template?
Is there a way to tell App Service to run n instances of the container?
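In case it helps, here is a hedged sketch of one way to do this with the Azure CLI: App Service can mount an Azure Files share into a Linux container through a storage path mapping, and the number of instances is controlled by scaling out the App Service plan. All names below are placeholders.

# Mount an Azure Files share into the container at /aci/cache (path mapping / "bring your own storage").
az webapp config storage-account add \
    --resource-group myResourceGroup \
    --name my-container-webapp \
    --custom-id acicache \
    --storage-type AzureFiles \
    --account-name mystorageaccount \
    --share-name acicache \
    --access-key "<storage-account-key>" \
    --mount-path /aci/cache

# Run n instances of the container by scaling out the underlying App Service plan.
az appservice plan update \
    --resource-group myResourceGroup \
    --name my-app-service-plan \
    --number-of-workers 3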
I created a Container Instance in Azure and used a Hello World image.
However, I do not get any IP to access the webserver!
TL;DR. You might want to use the CLI (or other methods) to deploy ACI.
Be sure to set either --ip-address Public or --dns-name-label <blah> in order to create a public IP address.
Example:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
Ref: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart
Investigation details:
I was able to reproduce the same issue when deploying ACI via the Azure portal. My guess is that there is a bug in the Azure portal: it is not sending the public IP / DNS label to the Azure control plane, so the public IP address was not created.
In my repro I also provided the DNS name label and set the IP address to Public, and then checked the ARM template the portal uses.
The ARM template looks like the following, where you can see that dnsNameLabel is defined as a parameter but is never referenced or applied in the resource definition (a sketch of the missing ipAddress block follows the template):
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string"
},
"containerName": {
"type": "string"
},
"imageType": {
"type": "string",
"allowedValues": [
"Public",
"Private"
]
},
"imageName": {
"type": "string"
},
"osType": {
"type": "string",
"allowedValues": [
"Linux",
"Windows"
]
},
"numberCpuCores": {
"type": "string"
},
"memory": {
"type": "string"
},
"restartPolicy": {
"type": "string",
"allowedValues": [
"OnFailure",
"Always",
"Never"
]
},
"ports": {
"type": "array"
},
"dnsNameLabel": {
"type": "string"
}
},
"resources": [
{
"location": "[parameters('location')]",
"name": "[parameters('containerName')]",
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2021-07-01",
"properties": {
"containers": [
{
"name": "[parameters('containerName')]",
"properties": {
"image": "[parameters('imageName')]",
"resources": {
"requests": {
"cpu": "[int(parameters('numberCpuCores'))]",
"memoryInGB": "[float(parameters('memory'))]"
}
},
"ports": "[parameters('ports')]"
}
}
],
"restartPolicy": "[parameters('restartPolicy')]",
"osType": "[parameters('osType')]"
},
"tags": {}
}
]
}
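For comparison, here is a sketch of the ipAddress block that would need to sit under properties (next to containers, restartPolicy, and osType) for the DNS label and public IP to actually be applied; the port values are illustrative:
"ipAddress": {
    "type": "Public",
    "dnsNameLabel": "[parameters('dnsNameLabel')]",
    "ports": [
        {
            "protocol": "TCP",
            "port": 80
        }
    ]
}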
I have tested in my environment.
I followed this document, Quickstart - Deploy Docker container to container instance - Portal - Azure Container Instances | Microsoft Docs, to create a container instance via the Azure portal.
But the issue with creating the container from the portal is that the IP address and FQDN are blank after the container instance is created. There might be some issue with creating a container instance via the portal.
I created another container instance using the CLI by following this document: Quickstart - Deploy Docker container to container instance - Azure CLI - Azure Container Instances | Microsoft Docs.
The container instance was successfully created, with an IP address and FQDN.
So, as a workaround, we can create the container instance with the CLI.
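For completeness, the same workaround with an explicit public IP instead of a DNS name label (the image and names are the same quickstart placeholders):
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --ip-address Public --ports 80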
I am trying to update osProfile after a virtual machine has been migrated from on-premises to Azure, because I need to set provisionVMAgent under osProfile.
I am using this API version, per the reference at https://learn.microsoft.com/en-us/rest/api/compute/virtualmachines/createorupdate#request-body:
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}?api-version=2019-03-01
{
"location": "westus",
"properties": {
"hardwareProfile": {
"vmSize": "Standard_D1_v2"
},
"storageProfile": {
"osDisk": {
"name": "myVMosdisk",
"image": {
"uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-name}/{existing-generalized-os-image-blob-name}.vhd"
},
"osType": "Windows",
"createOption": "FromImage",
"caching": "ReadWrite",
"vhd": {
"uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-name}/myDisk.vhd"
}
}
},
"osProfile": {
"adminUsername": "{your-username}",
"computerName": "myVM",
"adminPassword": "{your-password}"
},
"networkProfile": {
"networkInterfaces": [
{
"id": "/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/{existing-nic-name}",
"properties": {
"primary": true
}
}
]
}
}
}
Response in Postman:
{
"error": {
"code": "PropertyChangeNotAllowed",
"message": "Changing property 'osProfile' is not allowed.",
"target": "osProfile"
}
}
Is it possible to update osProfile after the VM migration? Or
can I install the VM Agent (provisionVMAgent) in the virtual machine after the VM has been migrated?
The provisionVMAgent setting only applies at VM provisioning time, so it is not possible to update it after the VM has been created.
In this case, where you have created a custom-image VM from an unmanaged generalized OS image, you can manually install the Windows VM Agent. The VM Agent is supported on Windows Server 2008 R2 and later.
The VM Agent can be installed by double-clicking the Windows installer file. For an automated or unattended installation of the VM agent, run the following command:
msiexec.exe /i WindowsAzureVmAgent.2.7.1198.778.rd_art_stable.160617-1120.fre /quiet
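Once the agent is installed and running, you can check whether Azure detects it by looking at the VM's instance view (a hedged example; the resource names are placeholders):

# Show the VM agent status as reported in the VM's instance view.
az vm get-instance-view --resource-group myResourceGroup --name myVM --query "instanceView.vmAgent"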
Hope this could help you.
This definition clearly mentions that you can use the networkPolicy property as part of networkProfile and set it to calico, but that doesn't work. AKS creation just times out, with all the nodes stuck in a NotReady state.
You need to enable the underlying provider feature:
az feature list --query "[?contains(name, 'Container')].{name:name, type:type}" # example to list all features
az feature register --name EnableNetworkPolicy --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService
After that you can just use the REST API / ARM template to create the AKS cluster:
{
"location": "location1",
"tags": {
"tier": "production",
"archv2": ""
},
"properties": {
"kubernetesVersion": "1.12.4", // has to be 1.12.x, 1.11.x doesnt support calico AFAIK
"dnsPrefix": "dnsprefix1",
"agentPoolProfiles": [
{
"name": "nodepool1",
"count": 3,
"vmSize": "Standard_DS1_v2",
"osType": "Linux"
}
],
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "keydata"
}
]
}
},
"servicePrincipalProfile": {
"clientId": "clientid",
"secret": "secret"
},
"addonProfiles": {},
"enableRBAC": false,
"networkProfile": {
"networkPlugin": "azure",
"networkPolicy": "calico", // set policy here
"serviceCidr": "xxx",
"dnsServiceIP": "yyy",
"dockerBridgeCidr": "zzz"
}
}
}
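If you prefer the CLI over the raw REST call, a roughly equivalent (hedged) command is shown below; the names and CIDR values are placeholders, and the Kubernetes version mirrors the REST body above:

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --kubernetes-version 1.12.4 \
    --network-plugin azure \
    --network-policy calico \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --generate-ssh-keys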
PS:
Unfortunately, Helm doesn't seem to work at the time of writing (I suspect this is because kubectl port-forward, which Helm relies on, doesn't work either).
I have been fighting with this for a couple of hours, and I can't work out how to get Kubernetes to configure an Azure load balancer when it's an IaaS Kubernetes cluster.
This works out of the box with AKS, as you'd expect.
Obviously, I am missing where I can input the Service Principal to allow Kubernetes to configure resources in Azure.
The officially recommended way is acs-engine. Describing it fully is a bit too much for an answer, but when you define your cluster you are supposed to provide Azure credentials to it:
{
"apiVersion": "vlabs",
"properties": {
"orchestratorProfile": {
"orchestratorType": "Kubernetes"
},
"masterProfile": {
"count": 1,
"dnsPrefix": "",
"vmSize": "Standard_D2_v2"
},
"agentPoolProfiles": [
{
"name": "agentpool1",
"count": 3,
"vmSize": "Standard_D2_v2",
"availabilityProfile": "AvailabilitySet"
}
],
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": ""
}
]
}
},
"servicePrincipalProfile": {
"clientId": "CLIENT_ID_GOES_HERE",
"secret": "CLIENT_SECRET_GOES_HERE"
}
}
}
After you provision a cluster with the proper input, it will work out of the box. Walkthrough
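If you built the cluster by hand instead of with acs-engine, the place where the Service Principal ends up is the Azure cloud provider config that kubelet and the controller manager read (typically via --cloud-provider=azure --cloud-config=/etc/kubernetes/azure.json). A minimal, hedged sketch of that file; the field values are placeholders, and acs-engine generates a more complete version for you:
{
    "cloud": "AzurePublicCloud",
    "tenantId": "<tenant-id>",
    "subscriptionId": "<subscription-id>",
    "aadClientId": "CLIENT_ID_GOES_HERE",
    "aadClientSecret": "CLIENT_SECRET_GOES_HERE",
    "resourceGroup": "<cluster-resource-group>",
    "location": "<azure-region>",
    "vnetName": "<vnet-name>",
    "subnetName": "<subnet-name>",
    "securityGroupName": "<nsg-name>",
    "routeTableName": "<route-table-name>"
}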