Azure Load Balancer on Kubernetes IaaS Cluster

I have been fighting with this for a couple of hours, and I can't work out how to get Kubernetes to configure an Azure Load Balancer when it's an IaaS Kubernetes cluster.
This works out of the box with AKS, as you'd expect.
Obviously, I am missing where I can input the Service Principal that allows Kubernetes to configure resources in Azure.

The officially recommended way is acs-engine. Describing it fully is a bit too much for an answer, but when you define your cluster you are supposed to provide Azure credentials to it:
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": ""
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "CLIENT_ID_GOES_HERE",
      "secret": "CLIENT_SECRET_GOES_HERE"
    }
  }
}
After you provision a cluster with the proper input, it will work out of the box. See the acs-engine Kubernetes walkthrough for details.
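To show where those credentials actually land: acs-engine writes them into the Azure cloud provider config on every node, /etc/kubernetes/azure.json, which the Kubernetes Azure cloud provider reads (through the --cloud-provider=azure and --cloud-config flags) when it creates load balancers and other Azure resources. A minimal sketch with placeholder values, assuming a default cluster definition (the real file contains more fields):
{
  "cloud": "AzurePublicCloud",
  "tenantId": "TENANT_ID_GOES_HERE",
  "subscriptionId": "SUBSCRIPTION_ID_GOES_HERE",
  "aadClientId": "CLIENT_ID_GOES_HERE", // the Service Principal from servicePrincipalProfile
  "aadClientSecret": "CLIENT_SECRET_GOES_HERE",
  "resourceGroup": "RESOURCE_GROUP_OF_THE_NODES",
  "location": "westus2",
  "vnetName": "VNET_NAME",
  "subnetName": "SUBNET_NAME",
  "securityGroupName": "NSG_NAME",
  "routeTableName": "ROUTE_TABLE_NAME"
}
With that in place, creating a Service of type LoadBalancer is what triggers Kubernetes to provision the Azure Load Balancer.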

Related

How to mount StorageAccount Fileshares in containers hosted by Azure App Service

I have a container that runs in ACI and uses mounted storage account file shares. In the template I have this section:
"type": "Microsoft.ContainerInstance/containerGroups",
...
"volumeMounts": [
{
"mountPath": "/aci/sra/
"name": "acisra",
"readOnly": false
},
{
"mountPath": "/aci/fastq
"name": "acifastq",
"readOnly": false
},
{
"mountPath": "/aci/cache
"name": "acicache",
"readOnly": false
}
],
...
"volumes": [
{
"name": "acisra",
"azureFile": {
"readOnly": false,
"shareName": "acisra",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acicache",
"azureFile": {
"readOnly": false,
"shareName": "acicache",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
},
{
"name": "acifastq",
"azureFile": {
"readOnly": false,
"shareName": "acifastq",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountName": "[parameters('storageAccountName')]"
}
}
],
This allows me to run bash scripts referencing paths such as:
/aci/cache/$ID
Unfortunately, I am finding that ACI has poor networking performance. I am also running an Azure Function that makes use of the same file shares on an always-on App Service Plan, and its performance is excellent.
I found this article explaining how to run a container on an App Service Plan. This works great; the Log Stream shows my container is working.
https://learn.microsoft.com/en-us/azure/app-service/quickstart-custom-container?tabs=dotnet&pivots=container-linux-vscode
However, my container is not able to access the file shares.
Is there a way to tell the App Service to make the file share connection available to the container, in a manner similar to the "Microsoft.ContainerInstance/containerGroups" template?
Is there a way to tell the App Service to run n instances of the container?
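Not part of the original post, but for the first question: App Service for Containers can mount Azure Files shares through path mappings, which is roughly the equivalent of the volumes section above. A hedged CLI sketch with placeholder resource names (one command per share):
# Mount the 'acicache' file share into the container at /aci/cache (names are placeholders)
az webapp config storage-account add \
  --resource-group myResourceGroup \
  --name my-webapp \
  --custom-id acicache \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name acicache \
  --access-key "<storage-account-key>" \
  --mount-path /aci/cache
For the second question, scaling out the App Service plan (for example, az appservice plan update --number-of-workers 3) runs multiple instances of the same container, which is the closest equivalent to n ACI container groups.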

Azure Container instances no IP

I created a Container Instance in Azure and used a Hello World image.
However, I do not get any IP to access the webserver!
TL;DR: You might want to use the CLI (or another method) to deploy ACI.
Be sure to set either --ip-address Public or --dns-name-label <blah> in order to create a public IP address.
Example:
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo --ports 80
Ref: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart
Investigation details:
I was able to reproduce the same issue when deploying ACI via the Azure portal. My guess is that there is a bug in the Azure portal: it is not sending the public IP / DNS label to the Azure control plane, so the public IP address was not created.
Even after providing the DNS name label and setting the IP address to Public in the portal, the issue occurs; we can check the ARM template the portal generates.
The ARM template looks like the following, where you can see that dnsNameLabel is defined as a parameter but is not referenced or applied in the resource creation section:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "type": "string"
    },
    "containerName": {
      "type": "string"
    },
    "imageType": {
      "type": "string",
      "allowedValues": [
        "Public",
        "Private"
      ]
    },
    "imageName": {
      "type": "string"
    },
    "osType": {
      "type": "string",
      "allowedValues": [
        "Linux",
        "Windows"
      ]
    },
    "numberCpuCores": {
      "type": "string"
    },
    "memory": {
      "type": "string"
    },
    "restartPolicy": {
      "type": "string",
      "allowedValues": [
        "OnFailure",
        "Always",
        "Never"
      ]
    },
    "ports": {
      "type": "array"
    },
    "dnsNameLabel": {
      "type": "string"
    }
  },
  "resources": [
    {
      "location": "[parameters('location')]",
      "name": "[parameters('containerName')]",
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2021-07-01",
      "properties": {
        "containers": [
          {
            "name": "[parameters('containerName')]",
            "properties": {
              "image": "[parameters('imageName')]",
              "resources": {
                "requests": {
                  "cpu": "[int(parameters('numberCpuCores'))]",
                  "memoryInGB": "[float(parameters('memory'))]"
                }
              },
              "ports": "[parameters('ports')]"
            }
          }
        ],
        "restartPolicy": "[parameters('restartPolicy')]",
        "osType": "[parameters('osType')]"
      },
      "tags": {}
    }
  ]
}
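For comparison, a template that does create a public endpoint carries those parameters into an ipAddress block on the container group. A hedged sketch of roughly what is missing from the portal-generated template (port and protocol shown literally for illustration):
"ipAddress": {
  "type": "Public",
  "ports": [
    {
      "protocol": "TCP",
      "port": 80 // must match the ports exposed by the container
    }
  ],
  "dnsNameLabel": "[parameters('dnsNameLabel')]"
}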
I have tested in my environment.
I followed this document, Quickstart - Deploy Docker container to container instance - Portal - Azure Container Instances | Microsoft Docs, to create a container instance via the Azure portal.
But the issue with creating the container from the portal is that the IP address and FQDN are blank after the container instance is created. There might be some issue with creating container instances via the portal.
I created another container instance using the CLI by following this document: Quickstart - Deploy Docker container to container instance - Azure CLI - Azure Container Instances | Microsoft Docs.
The container instance was successfully created, with an IP address and FQDN.
So, as a workaround, we can create the container instance with the CLI.
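Once the instance is created with the CLI, you can confirm that the public IP and FQDN were actually assigned; an example using the placeholder names from the quickstart:
# Show the FQDN, IP, and provisioning state of the container group
az container show \
  --resource-group myResourceGroup \
  --name mycontainer \
  --query "{FQDN:ipAddress.fqdn,IP:ipAddress.ip,ProvisioningState:provisioningState}" \
  --out table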

Unable to Add Windows Node Pool to Cluster Using REST API

I want to add a Windows Server container node pool to an Azure Kubernetes cluster. Currently, I am using the Azure REST API to manage the cluster, but it is showing the following error:
{
  "code": "AzureCNIOnlyForWindows",
  "message": "Windows agent pools can only be added to AKS clusters using Azure-CNI."
}
The request body I am sending is:
{
  "location": "location1",
  "tags": {
    "tier": "production",
    "archv2": ""
  },
  "properties": {
    "kubernetesVersion": "",
    "dnsPrefix": "dnsprefix1",
    "agentPoolProfiles": [
      {
        "name": "nodepool1",
        "count": 3,
        "vmSize": "Standard_DS1_v2",
        "osType": "Linux"
      }
    ],
    "linuxProfile": {
      "adminUsername": "*******",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "keydata"
          }
        ]
      }
    },
    "networkProfile": {
      "loadBalancerSku": "basic"
    },
    "windowsProfile": {
      "adminUsername": "********",
      "adminPassword": "************************"
    },
    "servicePrincipalProfile": {
      "clientId": "clientid",
      "secret": "secret"
    },
    "addonProfiles": {},
    "enableRBAC": true,
    "enablePodSecurityPolicy": true
  }
}
From your question, I assume that you want to add a Windows node pool to the AKS cluster. The error means your existing AKS cluster does not use the Azure CNI network plugin. For Windows node pools, see below:
In order to run an AKS cluster that supports node pools for Windows
Server containers, your cluster needs to use a network policy that
uses Azure CNI (advanced) network plugin.
So the solution is to create a new AKS cluster with the Azure CNI network plugin and then add the Windows node pool to it. Take a look at the steps in Create AKS cluster for Windows node pool through Azure CLI. In the REST API, you need to set networkPlugin in properties.networkProfile to the value azure. See NetworkPlugin.
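As an illustration (not from the original answer), the equivalent Azure CLI flow is to create the cluster with the Azure CNI plugin and then add the Windows pool; resource names, sizes, and counts below are placeholders:
# Create an AKS cluster with the Azure CNI network plugin (placeholder names)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vm-set-type VirtualMachineScaleSets \
  --load-balancer-sku standard \
  --windows-admin-username azureuser \
  --windows-admin-password "<password>" \
  --node-count 2 \
  --generate-ssh-keys

# Add the Windows node pool to that cluster
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npwin \
  --os-type Windows \
  --node-count 1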

How to install AKS with Calico enabled

This definition clearly mentions that you can use the networkPolicy property as part of the networkProfile and set it to Calico, but that doesn't work. AKS creation just times out, with all the nodes in a Not Ready state.
You need to enable the underlying provider feature:
az feature list --query "[?contains(name, 'Container')].{name:name, type:type}" # example to list all features
az feature register --name EnableNetworkPolicy --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService
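Feature registration can take a while; a hedged way to check that it has reached the Registered state before re-registering the provider (the feature and namespace names come from the commands above):
az feature show --namespace Microsoft.ContainerService --name EnableNetworkPolicy --query properties.state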
After that you can just use the REST API / ARM template to create AKS:
{
  "location": "location1",
  "tags": {
    "tier": "production",
    "archv2": ""
  },
  "properties": {
    "kubernetesVersion": "1.12.4", // has to be 1.12.x; 1.11.x doesn't support Calico AFAIK
    "dnsPrefix": "dnsprefix1",
    "agentPoolProfiles": [
      {
        "name": "nodepool1",
        "count": 3,
        "vmSize": "Standard_DS1_v2",
        "osType": "Linux"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "keydata"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "clientid",
      "secret": "secret"
    },
    "addonProfiles": {},
    "enableRBAC": false,
    "networkProfile": {
      "networkPlugin": "azure",
      "networkPolicy": "calico", // set the policy here
      "serviceCidr": "xxx",
      "dnsServiceIP": "yyy",
      "dockerBridgeCidr": "zzz"
    }
  }
}
P.S.
Unfortunately, Helm doesn't seem to work at the time of writing (I suspect this is because kubectl port-forward, which Helm relies on, doesn't work either).
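For reference, and as an assumption on my part rather than part of the original answer, newer Azure CLI versions expose the same setting directly, so a Calico-enabled cluster can be created without hand-writing the template:
# Placeholder names; network plugin and policy are set at creation time
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico \
  --generate-ssh-keys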

VMs lose outbound connectivity after subsequent template deploy

I'm using an ARM template to deploy a virtual network, VPN gateway, and a number of virtual machines (I've tried a standalone VM, and a VMSS). I'm also deploying a PowerShell DSC module to each VM which copies over some code and installs it as a service.
There is a recurring issue where, on subsequent deployments, the deployment script fails probably half of the time because deployment of the DSC extension fails: the VM has no network connectivity and cannot resolve the hostname of the storage account where the code is hosted.
When I connect to the vnet VPN and remote into the VM in question, there is no outbound network connectivity at all. If I compare the ipconfig /all settings with the other VMs, the settings are identical (apart from a slightly different local IP). However, that one VM is unable to ping any public IPs or resolve any hostnames. Even starting an nslookup session fails immediately to connect to the DNS server, even though the other VMs are using that same DNS server just fine.
Usually just restarting the VM in question fixes the issue.
My vnet setup is pretty basic and I haven't specified my own DNS, so I'm just using "Azure DNS".
Currently my VMs in the template are configured as dependent on the virtual network. I'm not sure if I'm also supposed to make them dependent on the gateway as well.
Here's the config I'm using on the VMs:
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "[concat(variables('scalesetName'), '-nic')]",
"properties": {
"primary": "true",
"ipConfigurations": [
{
"name": "[concat(variables('scalesetName'), '-ipconfig')]",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'), '/subnets/', variables('subnetName'))]"
}
}
}
]
}
}
]
},
And the configuration of the virtual network:
{
  "type": "Microsoft.Network/virtualNetworks",
  "name": "[variables('virtualNetworkName')]",
  "location": "[resourceGroup().location]",
  "apiVersion": "[variables('networkApiVersion')]",
  "tags": {
    "displayName": "VirtualNetwork"
  },
  "properties": {
    "addressSpace": {
      "addressPrefixes": [
        "[variables('addressPrefix')]"
      ]
    },
    "subnets": [
      {
        "name": "[variables('subnetName')]",
        "properties": {
          "addressPrefix": "[variables('subnetPrefix')]"
        }
      },
      {
        "name": "GatewaySubnet",
        "properties": {
          "addressPrefix": "[variables('gatewaySubnetPrefix')]"
        }
      }
    ]
  }
},
{
  "type": "Microsoft.Network/virtualNetworkGateways",
  "name": "[variables('gatewayName')]",
  "apiVersion": "[variables('networkApiVersion')]",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "VpnGateway"
  },
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/', variables('gatewayPublicIPName'))]",
    "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
  ],
  "properties": {
    "gatewayType": "Vpn",
    "vpnType": "RouteBased",
    "enableBgp": "false",
    "sku": {
      "name": "[variables('gatewaySku')]",
      "tier": "[variables('gatewaySku')]"
    },
    "ipConfigurations": [
      {
        "name": "vnetGatewayConfig",
        "properties": {
          "privateIPAllocationMethod": "Dynamic",
          "subnet": {
            "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName')),'/subnets/', 'GatewaySubnet')]"
          },
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('gatewayPublicIPName'))]"
          }
        }
      }
    ],
    "vpnClientConfiguration": {
      "vpnClientAddressPool": {
        "addressPrefixes": [
          "[variables('vpnClientAddressPoolPrefix')]"
        ]
      },
      "vpnClientRootCertificates": [
        {
          "name": "RootCertificate",
          "properties": {
            "PublicCertData": "<snip>"
          }
        }
      ]
    }
  }
},
Any ideas why deploying is randomly breaking outbound VM traffic until I restart the VM?
Please note that Azure blocks 'ping' (ICMP) to/from the Azure network.
Please answer the following:
a. Do you have any UDR rules on your subnet that force-route your traffic to the VPN gateway?
b. What is the storage account configuration for the VM? Is it using premium or standard storage?
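As an illustration only (not part of this exchange), the extra dependency the question considers, making the VM/VMSS resource wait for the gateway as well as the virtual network, would look roughly like the following; whether it actually prevents the intermittent loss of outbound connectivity is untested:
"dependsOn": [
  "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]",
  "[concat('Microsoft.Network/virtualNetworkGateways/', variables('gatewayName'))]" // hypothetical extra dependency on the VPN gateway
],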
