I need to deploy an Azure Container Instance (ACI) across different resource groups: one resource group holds only the ACI, and another resource group holds the VNet.
Is this possible? I suspect it is not possible by design.
It's possible.
You can create an Azure container instance in a virtual network that is in a different resource group from the container instance's resource group.
Suppose you have created a VNet myvnet with a subnet aci-subnet in the resource group myvnetRG for your ACI. Then you could use the following deployment example.
VnetId=$(az network vnet show -g myvnetRG -n myvnet --query 'id' -o tsv)
az container create -n appcontainer -g containerRG --image mcr.microsoft.com/azuredocs/aci-helloworld --vnet $VnetId --subnet aci-subnet
You can also deploy a container group into an existing virtual network by using a YAML file; in that case you specify a few additional properties, such as the network profile and its ID, in the YAML.
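For completeness, a minimal sketch of the YAML-based route (the file name aci-vnet.yaml and the verification step at the end are assumptions, not from the original answer):

# Deploy the container group from a YAML definition that carries the image
# plus the subnet/network settings for the existing virtual network
az container create -g containerRG --file aci-vnet.yaml

# Check the private IP the container group received inside the VNet
az container show -g containerRG -n appcontainer --query "ipAddress.ip" -o tsv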
It is possible by design, but why would you want to do that? It is not a recommended design, though.
If your resource groups are in different regions, you could also configure a VNet-to-VNet connection. For your reference:
https://learn.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal
Related
I created an AKS cluster using the az CLI with minimal parameters, specifying a node count and autoscaling. This automatically created a node pool, a VMSS, etc., along with an accompanying VNet and subnet.
How do I find the created VNet and subnet using the az CLI?
az aks nodepool list --cluster-name aks -g rg-aks
reports vnetSubnetId and podSubnetId as null.
Using
az vmss list
does show the subnet, but I haven't found any property of the VMSS linking it to the node pool or AKS cluster that would let me find it.
The autogenerated name is something like:
aks-nodepool1-15343534-vmss
which I guess I could filter for, along the lines of aks-nodepool1-*-vmss, but that seems dodgy and flaky.
I have tested this in my environment.
The VNet is created along with the VMSS in a different resource group whose name starts with MC_.
To get the subnet ID, you can use the script below:
$CLUSTER_RESOURCE_GROUP = az aks show --resource-group RGName --name AKSClusterName --query nodeResourceGroup -o tsv
$VMSS_NAME = az vmss list -g $CLUSTER_RESOURCE_GROUP --query "[0].name" -o tsv
az vmss show -g $CLUSTER_RESOURCE_GROUP -n $VMSS_NAME --query "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].subnet.id" -o tsv
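If you prefer plain bash over PowerShell, an equivalent sketch (the cluster name aks and resource group rg-aks are taken from the question; the parsing of the subnet ID at the end is my own addition):

# Node resource group (MC_...) that AKS created for the cluster
NODE_RG=$(az aks show --resource-group rg-aks --name aks --query nodeResourceGroup -o tsv)

# First VMSS in that resource group (the autogenerated aks-nodepool1-...-vmss)
VMSS_NAME=$(az vmss list -g "$NODE_RG" --query "[0].name" -o tsv)

# Subnet ID from the VMSS network profile
SUBNET_ID=$(az vmss show -g "$NODE_RG" -n "$VMSS_NAME" \
  --query "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].subnet.id" \
  -o tsv)

# The VNet and subnet names are the path segments around /subnets/
echo "$SUBNET_ID" | awk -F'/' '{print "vnet: " $(NF-2) "  subnet: " $NF}'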
I'm having some trouble attaching a NIC (in resource group A) to a subnet belonging to a VNet and NSG in a different resource group (say B). I have the Contributor role in resource group A, but only the Reader role in resource group B. Is this possible? If so, what am I doing wrong? Here's what it looks like (with IDs shortened).
% az network nic create --resource-group A --name bastion-nic --vnet-name VN-B --subnet SubnetB
(InvalidResourceReference) Resource /subscriptions/40ef-b75f-c05a034bf2ff/resourceGroups/A/providers/Microsoft.Network/virtualNetworks/VN-B/subnets/SubnetB referenced by resource /subscriptions/b75f-c05a034bf2ff/resourceGroups/A/providers/Microsoft.Network/networkInterfaces/bastion-nic was not found. Please make sure that the referenced resource exists, and that both resources are in the same region.
Code: InvalidResourceReference
I tested the same scenario in my environment.
Scenario: I created a user and two resource groups: contributorTest, where the user has Contributor access, and readerTest, where the user has Reader access.
If I use the command you are using, I get the same error message as you. To describe the issue: when you pass only --vnet-name, the command assumes the VNet is in the same resource group that is specified in the command.
az network nic create --resource-group contributorTest --name bastion-nic --vnet-name ansumantest-vnet --subnet default
So, for example, in the command above the resource group is contributorTest and we have provided only the VNet name and subnet name, which the CLI assumes are in that same group, so it throws the same error.
As a solution, you can use the command below to create the NIC when the VNet is in a different resource group:
az network nic create --resource-group contributorTest --name bastion-nic --subnet /subscriptions/subID/resourceGroups/readerTest/providers/Microsoft.Network/virtualNetworks/ansumantest-vnet/subnets/default
In this command we are not providing the VNet name and subnet name separately; as an alternative, we provide the resource ID of the subnet.
Note: The above solution works only if you have Contributor access on both resource groups; in your case you will still get an authorization error.
To describe the issue here: creating a NIC requires joining it to the subnet you specified, but since you only have Reader access on the VNet's resource group, you are not allowed to join the NIC to that subnet.
So the final solution can be one of the following:
Either have the VNet and subnet in the same resource group where you are creating the NIC, with Contributor access on it, and use the command you are already using.
Or grant the user Contributor access on the second resource group and use the second command mentioned above as the solution.
After granting Contributor access on both resource groups, the second command completes successfully and returns the new NIC's properties.
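For reference, a minimal sketch of that second option (the user principal is a placeholder, and the role assignment command is my addition, not from the original answer):

# Grant the user Contributor on the resource group that holds the VNet
az role assignment create \
  --assignee user@contoso.com \
  --role "Contributor" \
  --scope /subscriptions/subID/resourceGroups/readerTest

# Then create the NIC by referencing the subnet's full resource ID
az network nic create \
  --resource-group contributorTest \
  --name bastion-nic \
  --subnet /subscriptions/subID/resourceGroups/readerTest/providers/Microsoft.Network/virtualNetworks/ansumantest-vnet/subnets/default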
I have spot instance nodes in an Azure Kubernetes Service (AKS) cluster. I want to simulate the eviction of a node so I can debug my code, but I haven't been able to. All I could find in the Azure docs is that you can simulate eviction for a single spot VM, using the following:
az vm simulate-eviction --resource-group test-eastus --name test-vm-26
However, I need to simulate the eviction of a spot node pool or a spot node in an AKS cluster.
There is no AKS REST API or Azure CLI command for simulating evictions, because evictions of the underlying infrastructure are not handled by the AKS resource provider (RP).
Only during creation of the AKS cluster can the AKS RP set an eviction policy on the underlying infrastructure, by instructing the Azure Compute RP to do so.
Instead, to simulate the eviction of the node infrastructure, you can use the az vmss simulate-eviction command or the corresponding REST API.
az vmss simulate-eviction --instance-id
                          --name
                          --resource-group
                          [--subscription]
Reference Documents:
https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_simulate_eviction
https://learn.microsoft.com/en-us/rest/api/compute/virtual-machine-scale-set-vms/simulate-eviction
Use the following commands to get the name of the VMSS that backs a node pool (a combined sketch follows the steps below):
1. az aks nodepool list -g $ClusterRG --cluster-name $ClusterName -o table
Get the desired node pool name from the output.
2. CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group YOUR_Resource_Group --name YOUR_AKS_Cluster --query nodeResourceGroup -o tsv)
az vmss list -g $CLUSTER_RESOURCE_GROUP --query "[?tags.poolName == '<NODE_POOL_NAME>'].{VMSS_Name:name}" -o tsv
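Putting the two steps together, a minimal end-to-end sketch (the cluster name myAKS, resource group myRG, and spot node pool name spotpool are assumptions; adjust to your environment):

# Node resource group that holds the VMSS backing the node pools
NODE_RG=$(az aks show -g myRG -n myAKS --query nodeResourceGroup -o tsv)

# VMSS whose poolName tag matches the spot node pool
VMSS_NAME=$(az vmss list -g "$NODE_RG" --query "[?tags.poolName == 'spotpool'].name" -o tsv)

# Pick one instance to evict
INSTANCE_ID=$(az vmss list-instances -g "$NODE_RG" -n "$VMSS_NAME" --query "[0].instanceId" -o tsv)

# Simulate the spot eviction for that instance
az vmss simulate-eviction -g "$NODE_RG" -n "$VMSS_NAME" --instance-id "$INSTANCE_ID"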
References:
https://louisshih.gitbooks.io/kubernetes/content/chapter1.html
https://ystatit.medium.com/azure-ssh-into-aks-nodes-471c07ad91ef
https://learn.microsoft.com/en-us/cli/azure/vmss?view=azure-cli-latest#az_vmss_list_instances
(You may create a VMSS if you don't have one configured. Refer to: create a VMSS.)
I'm attempting to create a new AKS cluster using Kubernetes version 1.19.7 and virtual machine scale sets, and connect it to an existing on-prem-connected VNet. On my first attempt, everything succeeded except for the creation of the actual ACI in Azure. The aci-connector node got created in Kubernetes but remained in a CrashLoopBackOff state, each time with the following error in the Kubernetes logs:
Error: error initializing provider azure: error setting up network
profile: unable to delegate subnet 'xxxxxxxxx' to Azure Container
Instance since it references the route table
'/subscriptions/yyyyyyyy/resourceGroups/zzzzzzzz/providers/Microsoft.Network/routeTables/rrrrrrr'.
I tried recreating the cluster differently, following limitations buried in the MS documentation (using a service principal, with an empty subnet containing no other resources, and with the proper role permissions applied to the service account). Still no luck. I tried a few other tweaks on the networking side as well, but to no avail.
Here are the Azure CLI commands I used (names obfuscated) with/without service principal:
Using managed identity
az aks create -g yyyyyyyyy -n zzzzzzzz --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000 --aci-subnet-name myAciSubnet --assign-identity /subscriptions/xxxxxxx/resourcegroups/yyyyyyy/providers/Microsoft.ManagedIdentity/userAssignedIdentities/k8s-admin-qa --docker-bridge-address 172.17.0.1/16 --dns-service-ip 10.2.0.10 --enable-aad --enable-addons virtual-node --enable-managed-identity --generate-ssh-keys --kubernetes-version 1.19.7 --location eastus2 --network-plugin azure --service-cidr 10.2.0.0/16 --subscription xxxxxxx --vnet-subnet-id /subscriptions/xxxxxxx/resourceGroups/myNetworkResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet
Using Service Principal
az aks create -g yyyyyyy -n zzzzzzz --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000 --aci-subnet-name myAciSubnet --docker-bridge-address 172.17.0.1/16 --dns-service-ip 10.2.0.10 --enable-aad --enable-addons virtual-node --generate-ssh-keys --kubernetes-version 1.19.7 --location eastus2 --network-plugin azure --service-cidr 10.2.0.0/16 --subscription xxxxxxx --vnet-subnet-id /subscriptions/xxxxxxx/resourceGroups/myNetworkResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet --service-principal ppppppppp --client-secret SSSSSSSSSSS
If anyone out there has been able to successfully deploy/configure an AKS cluster using ACI with virtual machine scale sets, connected to an on-prem network, or can otherwise assist in troubleshooting or configuration, I'd love to hear from you!
The subnet used for ACI must contain no resources other than the ACI and must not have a route table attached, because Azure will attach a network profile for the container group to it. The error shows that the subnet you want to use for ACI already has a route table associated with it. So you can either create a new, empty subnet, or disassociate the route table from the existing subnet.
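A minimal sketch of both options with the az CLI (the resource group, VNet, and subnet names are taken from the obfuscated commands above, and the new subnet name and address prefix are assumptions):

# Option 1: detach the route table from the existing subnet
az network vnet subnet update \
  --resource-group myNetworkResourceGroup \
  --vnet-name myVnet \
  --name myAciSubnet \
  --remove routeTable

# Option 2: create a fresh, empty subnet dedicated to the virtual node / ACI
az network vnet subnet create \
  --resource-group myNetworkResourceGroup \
  --vnet-name myVnet \
  --name myAciSubnet2 \
  --address-prefixes 10.1.2.0/24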
I've set up Kubernetes in Azure using Azure ACS and the Azure CLI.
az account list
az account set --subscription foobar
az group create --name foobar --location westus
az acs create --orchestrator-type=kubernetes --resource-group foobar --master-count 1 --name=foobar --dns-prefix=foobar
I want to be able to setup a site to site vpn, so that kubernetes can reach internal services in my datacenter.
Unfortunately, Azure ACS sets up Kubernetes on a 10.0.0.0 network, which overlaps with other resources in Azure and in my datacenter.
I can't find any way to change which subnet Kubernetes runs on in ACS. Is there a way to change the preferred network?
There does not appear to be a way to choose the network from the acs create command:
az acs create --name
--resource-group
[--admin-password]
[--admin-username]
[--agent-count]
[--agent-vm-size]
[--client-secret]
[--dns-prefix]
[--generate-ssh-keys]
[--location]
[--master-count]
[--no-wait]
[--orchestrator-type {Custom, DCOS, Kubernetes, Swarm}]
[--service-principal]
[--ssh-key-value]
[--tags]
[--validate]
[--windows]
No, there's no way of doing that. There might be a way to attach a new Kubernetes cluster to an existing VNet, but I'm not aware of one.
Another option would be to delete all the VMs and recreate them in the new VNet, though there's no guarantee that would work.
With ACS, through its CLI, you can specify a subnet ID so the ACS cluster is created in a particular VNet. However, this is only available in certain regions.