I've set up Kubernetes in Azure using Azure ACS and the Azure CLI.
az account list
az account set --subscription foobar
az group create --name foobar --location westus
az acs create --orchestrator-type=kubernetes --resource-group foobar --master-count 1 --name=foobar --dns-prefix=foobar
I want to be able to set up a site-to-site VPN so that Kubernetes can reach internal services in my datacenter.
Unfortunately, Azure ACS sets up Kubernetes on a 10.0.0.0 network, which overlaps with other resources in Azure and in my datacenter.
I can't find any way to change which subnet Kubernetes runs on in ACS. Is there a way to change the preferred network?
There does not appear to be a way to choose a network from the az acs create command:
az acs create --name
--resource-group
[--admin-password]
[--admin-username]
[--agent-count]
[--agent-vm-size]
[--client-secret]
[--dns-prefix]
[--generate-ssh-keys]
[--location]
[--master-count]
[--no-wait]
[--orchestrator-type {Custom, DCOS, Kubernetes, Swarm}]
[--service-principal]
[--ssh-key-value]
[--tags]
[--validate]
[--windows]
No, there's no way of doing that. There might be a way to deploy a new Kubernetes cluster into an existing VNet, but I'm not aware of one.
Your other option would be to delete all the VMs and recreate them in the new VNet, though there's no guarantee that would work.
With ACS, through its CLI, you can specify a subnet ID so the cluster is created in a particular VNet. However, this is only available in certain regions.
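For what it's worth, later Azure CLI releases added subnet flags to az acs create that don't appear in the help output above. A minimal sketch, assuming a newer CLI and a pre-created VNet/subnet (the flag names and their availability are an assumption; verify with az acs create --help on your version):
# Assumption: a newer azure-cli where acs create accepts master/agent subnet IDs
SUBNET_ID=/subscriptions/<sub-id>/resourceGroups/foobar/providers/Microsoft.Network/virtualNetworks/foobar-vnet/subnets/k8s-snet
az acs create --orchestrator-type=kubernetes --resource-group foobar --name=foobar \
  --master-vnet-subnet-id $SUBNET_ID \
  --agent-vnet-subnet-id $SUBNET_ID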
I created a Databricks workspace using the Azure CLI:
az databricks workspace create \
--name myprj-t-dbx \
--location canadacentral \
--resource-group rg-myprj-t \
--managed-resource-group myprj-t-dbx-mrg \
--sku Premium \
--private-subnet /subscriptions/2208da08-xxxxxxxxxxx27/resourceGroups/rg-da-t-vnet/providers/Microsoft.Network/virtualNetworks/da-t-vnet/subnets/myprj-dbx-priv-t-snet \
--public-subnet /subscriptions/2208da08-xxxxxxxxxxx27/resourceGroups/rg-da-t-vnet/providers/Microsoft.Network/virtualNetworks/da-t-vnet/subnets/myprj-dbx-publ-t-snet
The subnets are created in advance by our network engineers.
They want me to use private endpoints on the VNet to connect to the workspace.
When I try to create it (using a third subnet):
az network private-endpoint create \
--name myprj-t-dbx-pep \
--connection-name myprj-t-dbx-pepc \
--private-connection-resource-id /subscriptions/2208da08xxxxxxxxxx27/resourceGroups/rg-myprj-t/providers/Microsoft.Databricks/workspaces/myprj-t-dbx \
--subnet /subscriptions/2208da08-xxxxxxxxxxx27/resourceGroups/rg-da-t-vnet/providers/Microsoft.Network/virtualNetworks/da-t-vnet/subnets/myprj-t-snet \
--group-id web \
--resource-group rg-myprj-t
I get:
ERROR: (NonVNetInjectedWorkspaceNotSupported) Call to Microsoft.Databricks/workspaces failed.
Error message: The workspace 'myprj-t-dbx' is not custom VNet injected.
Currently only custom VNet injected workspaces can create private endpoint connection
I think you're missing the --vnet argument to az databricks workspace create. You need to provide the name of the VNet as well.
P.S. I would also recommend passing --enable-no-public-ip to avoid having public IPs on the cluster nodes.
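For example, a VNet-injected create might look like the following sketch. It assumes the VNet referenced by your subnet IDs (da-t-vnet in rg-da-t-vnet) and that, when --vnet is given, the subnet flags take subnet names; exact flag behavior can vary by azure-cli databricks extension version:
az databricks workspace create \
  --name myprj-t-dbx \
  --location canadacentral \
  --resource-group rg-myprj-t \
  --managed-resource-group myprj-t-dbx-mrg \
  --sku Premium \
  --vnet /subscriptions/2208da08-xxxxxxxxxxx27/resourceGroups/rg-da-t-vnet/providers/Microsoft.Network/virtualNetworks/da-t-vnet \
  --private-subnet myprj-dbx-priv-t-snet \
  --public-subnet myprj-dbx-publ-t-snet \
  --enable-no-public-ip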
I've changed the way I am creating the Databricks workspace.
Instead of the regular Azure CLI command that creates the workspace,
az databricks workspace create...
I've used a template with VNet injection from https://rajanieshkaushikk.com/2020/12/05/how-to-deploy-databricks-in-your-private-vnet-without-exposing-public-ip-address-vnet-injection/,
also via the CLI:
az deployment group create \
--name DatabriksVNetInj \
--resource-group rg-myprj-test \
--template-file ./Databricks-ARM/azuredeploy.json \
--parameters workspaceName=myproj-t-dbx...
As I understand from the documentation, if you use the Azure portal to create an AKS cluster, you can't use the Basic load balancer, which is free in my current subscription. So how can I use the Basic load balancer with AKS?
You must use the CLI to create an AKS cluster with a Basic load balancer.
az aks create -g MyRG -n MyCluster --load-balancer-sku basic
It's clearly stated in the infobox in the Portal.
I'm attempting to create a new AKS cluster using Kubernetes version 1.19.7 and virtual machine scale sets, and to connect it to an existing on-prem VNet. On my first attempt, everything succeeded except for the creation of the actual ACI in Azure. The aci-connector node got created in Kubernetes but remained in a CrashLoopBackOff state, each time with the following error in the Kubernetes logs:
Error: error initializing provider azure: error setting up network
profile: unable to delegate subnet 'xxxxxxxxx' to Azure Container
Instance since it references the route table
'/subscriptions/yyyyyyyy/resourceGroups/zzzzzzzz/providers/Microsoft.Network/routeTables/rrrrrrr'.
I tried recreating the cluster differently, according to limitations buried in the MS documentation (using a service principal, with an empty subnet containing no other resources, and with proper role permissions applied to the service account). Still no luck. I tried a few other tweaks on the networking side as well, but to no avail.
Here are the Azure CLI commands I used (names obfuscated) with/without service principal:
Using managed identity
az aks create -g yyyyyyyyy -n zzzzzzzz --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000 --aci-subnet-name myAciSubnet --assign-identity /subscriptions/xxxxxxx/resourcegroups/yyyyyyy/providers/Microsoft.ManagedIdentity/userAssignedIdentities/k8s-admin-qa --docker-bridge-address 172.17.0.1/16 --dns-service-ip 10.2.0.10 --enable-aad --enable-addons virtual-node --enable-managed-identity --generate-ssh-keys --kubernetes-version 1.19.7 --location eastus2 --network-plugin azure --service-cidr 10.2.0.0/16 --subscription xxxxxxx --vnet-subnet-id /subscriptions/xxxxxxx/resourceGroups/myNetworkResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet
Using Service Principal
az aks create -g yyyyyyy -n zzzzzzz --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000 --aci-subnet-name myAciSubnet --docker-bridge-address 172.17.0.1/16 --dns-service-ip 10.2.0.10 --enable-aad --enable-addons virtual-node --generate-ssh-keys --kubernetes-version 1.19.7 --location eastus2 --network-plugin azure --service-cidr 10.2.0.0/16 --subscription xxxxxxx --vnet-subnet-id /subscriptions/xxxxxxx/resourceGroups/myNetworkResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet --service-principal ppppppppp --client-secret SSSSSSSSSSS
If anyone out there has been able to successfully deploy/configure an AKS cluster using ACI with virtual machine scale sets, connected to an on-prem network, or can otherwise assist in troubleshooting or configuration, I'd love to hear from you!
The subnet for the ACI should contain no other resources besides the ACI and should have no route table attached, because Azure will attach a container group network profile to it. The error shows that the subnet you want to use for the ACI already has a route table attached. So you can either create a new, empty subnet, or disassociate the route table from the existing subnet. See the sketch below.
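A sketch of both options (resource names match the obfuscated ones from the question; the address prefix is a placeholder that must not overlap your service CIDR or other ranges):
# Option 1: create a fresh, empty subnet dedicated to the ACI connector
az network vnet subnet create -g myNetworkResourceGroup --vnet-name myVnet \
  -n myAciSubnet --address-prefixes 10.3.0.0/24
# Option 2: detach the route table from the existing subnet
# (--remove is the generic update mechanism; routeTable is the property being cleared)
az network vnet subnet update -g myNetworkResourceGroup --vnet-name myVnet \
  -n myAciSubnet --remove routeTable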
Assuming I have access to an Azure subscription with a fully configured Azure Kubernetes Service, via
az login
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
az aks browse --resource-group somegroup --name somecluster
I can get access to the Kubernetes Dashboard.
Is there a way to give temporary access to the Kubernetes Dashboard to someone who does not have access to the Azure subscription the AKS cluster is associated with?
Yes, just create an appropriate kubeconfig for the cluster (so the user can port-forward the dashboard pod), and then the user will be able to connect to the dashboard.
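A minimal sketch, assuming the dashboard is exposed as the kubernetes-dashboard service in kube-system (the default add-on location on older AKS clusters) and that the person received a kubeconfig file from you:
# As the external user, using the kubeconfig you handed over
export KUBECONFIG=./temp-kubeconfig
kubectl -n kube-system port-forward svc/kubernetes-dashboard 8001:80
# The dashboard is then reachable at http://localhost:8001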
I'm trying to follow this guide to setting up a K8s cluster with external-dns' Azure DNS provider.
The guide states that:
When your Kubernetes cluster is created by ACS, a file named /etc/kubernetes/azure.json is created to store the Azure credentials for API access. Kubernetes uses this file for the Azure cloud provider.
When I create a cluster using AKS (e.g. az aks create --resource-group myResourceGroup --name myK8sCluster --node-count 1 --generate-ssh-keys), this file doesn't exist.
Where do the API credentials get stored when using AKS?
Essentially I'm trying to work out where to point this command:
kubectl create secret generic azure-config-file --from-file=/etc/kubernetes/azure.json
From what I can see, when using AKS the /etc/kubernetes/azure.json file doesn't get created. As an alternative, I followed the instructions for use with non-Azure-hosted sites and created a service principal (https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md#optional-create-service-principal).
Creating the service principal produces some JSON that contains most of the details. This can be used to manually create the azure.json file, and the secret can then be created from it.
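A sketch of that workflow, using the field names from the external-dns Azure tutorial (all values are placeholders taken from the az ad sp create-for-rbac output and your DNS setup):
# Write azure.json locally from the service principal details
cat > azure.json <<EOF
{
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "resourceGroup": "<dns-zone-resource-group>",
  "aadClientId": "<service-principal-app-id>",
  "aadClientSecret": "<service-principal-password>"
}
EOF
# Create the secret from the local file instead of /etc/kubernetes/azure.json
kubectl create secret generic azure-config-file --from-file=azure.json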
Use this command to get credentials:
az aks get-credentials --resource-group myResourceGroup --name myK8sCluster
Source:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough
Did you try this command?
cat ~/.kube/config
It provided everything I needed for my CI to connect to the Kubernetes cluster and use the API.