I am currently working on deploying a VM on Azure using Terraform. The VM deployed correctly when using client_id, subscription_id, client_secret and tenant_id in the AzureRM provider block. However, I want to make use of managed identities so I don't have to expose the client_secret.
Things I tried:
For this, I followed this guide. I included the azuread provider block and set use_msi = true to indicate that managed identities should be used. I also added the azurerm_subscription and azurerm_client_config data blocks, as well as a resource definition, and then added the role assignment for the VM.
Code:
terraform {
  required_providers {
    azuread = {
      source = "hashicorp/azuread"
    }
  }
}

provider "azurerm" {
  features {}
  //client_id       = "XXXXXXXXXXXXXX"
  //client_secret   = "XXXXXXXXXXXXXX"
  //subscription_id = "XXXXXXXXXXXXXX"
  tenant_id = "TENANT_ID"
  //use_msi = true
}

provider "azuread" {
  use_msi   = true
  tenant_id = "TENANT_ID"
}
#Resource group definition
resource "azurerm_resource_group" "myVMachineRG" {
  name     = "testnew-resources"
  location = "westus2"
}

resource "azurerm_virtual_network" "myVNet" {
  name                = "testnew-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.myVMachineRG.location
  resource_group_name = azurerm_resource_group.myVMachineRG.name
}

resource "azurerm_subnet" "mySubnet" {
  name                 = "testnew-internal-subnet"
  resource_group_name  = azurerm_resource_group.myVMachineRG.name
  virtual_network_name = azurerm_virtual_network.myVNet.name
  #256 total IPs
  address_prefixes = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "myNIC" {
  name                = "testnew-nic"
  location            = azurerm_resource_group.myVMachineRG.location
  resource_group_name = azurerm_resource_group.myVMachineRG.name

  ip_configuration {
    name                          = "testconfiguration1"
    subnet_id                     = azurerm_subnet.mySubnet.id
    private_ip_address_allocation = "Dynamic"
  }
}
#ADDED HERE:
data "azurerm_subscription" "current" {}

data "azurerm_client_config" "example" {}

resource "azurerm_virtual_machine" "example" {
  name                  = "testnew-vm"
  location              = azurerm_resource_group.myVMachineRG.location
  resource_group_name   = azurerm_resource_group.myVMachineRG.name
  network_interface_ids = [azurerm_network_interface.myNIC.id]
  vm_size               = "Standard_F2"

  #Option to delete disks when terraform destroy is performed.
  #This is to ensure that we don't keep wasting balance
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name              = "OSDISK"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  #Just for testing purposes, would be better to use a Key Vault reference here instead.
  os_profile {
    computer_name  = "XXXXXXXXXXXXXX"
    admin_username = "XXXXXXXXXXXXXX"
    admin_password = "XXXXXXXXXXXXXX"
  }

  #Force password authentication
  os_profile_linux_config {
    disable_password_authentication = false
  }

  identity {
    type = "SystemAssigned"
  }
}

data "azurerm_role_definition" "contributor" {
  name = "Contributor"
}

resource "azurerm_role_assignment" "example" {
  //name               = azurerm_virtual_machine.example.name
  scope                = data.azurerm_subscription.current.id
  role_definition_name = "Contributor"
  //role_definition_id = "${data.azurerm_subscription.current.id}${data.azurerm_role_definition.contributor.id}"
  //principal_id       = azurerm_virtual_machine.example.identity[0].principal_id
  principal_id = data.azurerm_client_config.example.object_id
}
Error:
Error: building AzureRM Client: obtain subscription() from Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: Please run 'az login' to setup account.
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 9, in provider "azurerm":
│ 9: provider "azurerm" {
I don't understand why it's still asking me to run az login when I am trying to authenticate with a managed identity.
Redacted the tenant ID for security purposes.
Any help would be greatly appreciated :)
I tried to reproduce the same requirement in my environment and was able to deploy it successfully.
You need to provide the name of the managed identity when authenticating via managed identities in Terraform.
Add msi_name under the azuread provider.
Note: as you noted, make sure the managed identity has enough permissions (the Contributor role) to authenticate and create resources; otherwise the deployment will fail.
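For example, the role can be granted up front with the Azure CLI; the placeholders below are illustrative:

az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"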
main.tf
data "azurerm_subscription" "current" {}
variable "subscription_id" {
default = "xxxxxxxxxxxx"
}
provider "azurerm"{
features{}
subscription_id = var.subscription_id
}
provider "azuread"{
features{}
use_msi = true
msi-name = "jahnaviidentity" //Give Name of the Managed identity
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "main" {
name = "main-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.main.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "main-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "<configurationname>"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "main-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "osdisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "<computername>"
admin_username = "<admin/username>"
admin_password = "xxxxxx"
}
os_profile_linux_config {
disable_password_authentication = false
}
identity {
type = "SystemAssigned"
}
}
Output: terraform init, terraform plan, and terraform apply all ran successfully, and the VM shows as deployed in the Azure portal (screenshots omitted).
Your provider block has use_msi commented out for azurerm (the one that's failing). Is that just a mistake in transferring the code to this question? I would have put this in a comment, but my reputation is not high enough.
It looks like azurerm also needs the subscription ID (unlike azuread):
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/managed_service_identity
The use_msi property should be set in the azurerm provider as well. From the above link:
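The screenshot from that guide did not survive; per that page, an azurerm provider block using a system-assigned managed identity looks roughly like this, with placeholder IDs:

provider "azurerm" {
  features {}

  use_msi         = true
  subscription_id = "00000000-0000-0000-0000-000000000000"
  tenant_id       = "00000000-0000-0000-0000-000000000000"
}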
Also, just to be sure: you've already configured the managed identity to use for this purpose, right?
Related
When deploying a custom script extension for a VM in Azure, it times out after 15 minutes, even though the timeouts block is set to 2 hours. I cannot figure out why it keeps timing out. Could anyone point me in the right direction please? Thanks.
Resource to deploy (https://i.stack.imgur.com/lIfKj.png)
Error (https://i.stack.imgur.com/GFYRL.png)
In Azure, each resource takes a particular amount of time to provision. For virtual network gateways and virtual machines, the timeout can be up to 2 hours, as mentioned in the Terraform timeouts documentation.
Therefore, the timeouts block we provide for a virtual machine has to be less than two hours (2h).
I tried creating a replica of the Azure VM extension resource using the Terraform code below, and it deployed successfully.
timeouts block:
timeouts {
  create = "1h30m"
  delete = "20m"
}
Azure VM extension:
resource "azurerm_virtual_machine_extension" "xxxxx" {
name = "xxxxname"
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
"commandToExecute": "hostname && uptime"
}
SETTINGS
tags = {
environment = "Production"
}
timeouts {
create = "1h30m"
delete = "20m"
}
}
I created a virtual machine by adding the required configuration under a resource group.
main.tf:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  # referenced below as azurerm_resource_group.example
  name     = "xxxxx-RG"
  location = "xxxxxx"
}

resource "azurerm_virtual_network" "example" {
  name                = "xxxxx"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "xxxxx"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "example" {
  name                = "xxxxxx"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "xxxxconfiguration"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_storage_account" "example" {
  name                     = "xxxxx"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "staging"
  }
}

resource "azurerm_storage_container" "example" {
  name                  = "xxxxxx"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

resource "azurerm_virtual_machine" "example" {
  name                  = "xxxxxxVM"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.example.id]
  vm_size               = "Standard_F2"

  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04-LTS"
    version   = "latest"
  }

  storage_os_disk {
    name          = "xxxxx"
    vhd_uri       = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/myosdisk1.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }

  os_profile {
    computer_name  = "xxxxxname"
    admin_username = "xxxx"
    admin_password = "xxxxxx"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    environment = "staging"
  }
}

resource "azurerm_virtual_machine_extension" "example" {
  name                 = "hostname"
  virtual_machine_id   = azurerm_virtual_machine.example.id
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  settings = <<SETTINGS
    {
      "commandToExecute": "hostname && uptime"
    }
SETTINGS

  tags = {
    environment = "Production"
  }

  timeouts {
    create = "1h30m"
    delete = "20m"
  }
}
Executed: terraform init, terraform plan, and terraform apply all ran successfully, and the extension was added after deployment (screenshots omitted).
You can also upgrade the extension's handler version later if you want to keep using extensions.
I resolved the issue by changing the type_handler_version to 1.9.
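If you are unsure which handler versions actually exist for an extension, you can list them with the Azure CLI; the location below is an example, and the publisher/type mirror the configuration above:

az vm extension image list --location westeurope --publisher Microsoft.Azure.Extensions --name CustomScript --output table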
I'm trying to create a Terraform project to create everything I need in an Azure subscription: resource groups, VNets, subnets, and VMs.
However, when I've run this once and run it again, it states that it cannot delete a subnet that is in use, even though I haven't changed anything about the subnet or the VM connected to it.
Error: creating/updating Virtual Network: (Name "" / Resource Group ""): network.VirtualNetworksClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeDeleted" Message="Subnet build-agent is in use by /subscriptions/mysub/resourceGroups/myrg/providers/Microsoft.Network/networkInterfaces/mynic/ipConfigurations/internal and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
terraform {
  required_version = ">= 1.1.0"

  backend "azurerm" {}

  required_providers {
    azurerm = {
      version = "=3.5.0"
      source  = "hashicorp/azurerm" # https://registry.terraform.io/providers/hashicorp/azurerm/latest
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

locals {
  name_suffix = "<mysuffix>"
}

resource "azurerm_resource_group" "rg-infra" {
  name     = "rg-${local.name_suffix}"
  location = "UK South"
}

resource "azurerm_virtual_network" "vnet-mgmt" {
  name                = "vnet-${local.name_suffix}"
  location            = azurerm_resource_group.rg-infra.location
  resource_group_name = azurerm_resource_group.rg-infra.name
  address_space       = ["<myiprange>"]

  subnet {
    name           = "virtual-machines"
    address_prefix = "<myiprange>"
  }

  subnet {
    name           = "databases"
    address_prefix = "<myiprange>"
  }
}

data "azurerm_virtual_network" "network" {
  name                = "vnet-${local.name_suffix}"
  resource_group_name = azurerm_resource_group.rg-infra.name
}

resource "azurerm_subnet" "sb-ansible" {
  name                 = "build-agent"
  resource_group_name  = azurerm_resource_group.rg-infra.name
  virtual_network_name = data.azurerm_virtual_network.network.name
  address_prefixes     = ["<myiprange>"]

  depends_on = [azurerm_virtual_network.vnet-mgmt]
}

data "azurerm_subnet" "prd-subnet" {
  name                 = "build-agent"
  virtual_network_name = data.azurerm_virtual_network.network.name
  resource_group_name  = azurerm_resource_group.rg-infra.name

  depends_on = [azurerm_subnet.sb-ansible]
}

resource "azurerm_network_interface" "ni-ansible" {
  name                = "nic-ansible-${local.name_suffix}"
  location            = azurerm_resource_group.rg-infra.location
  resource_group_name = azurerm_resource_group.rg-infra.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.azurerm_subnet.prd-subnet.id
    private_ip_address_allocation = "Dynamic"
  }

  lifecycle {
    ignore_changes = [ip_configuration]
  }

  depends_on = [azurerm_subnet.sb-ansible]
}

resource "azurerm_linux_virtual_machine" "ansible-vm" {
  name                = "ansible-build-agent"
  resource_group_name = azurerm_resource_group.rg-infra.name
  location            = azurerm_resource_group.rg-infra.location
  size                = "Standard_D2as_v4"
  admin_username      = "myadminuser"
  network_interface_ids = [
    azurerm_network_interface.ni-ansible.id,
  ]

  admin_ssh_key {
    username   = "myadminuser"
    public_key = ""
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  lifecycle {
    ignore_changes = [source_image_reference]
  }

  depends_on = [azurerm_network_interface.ni-ansible]
}
Any help on why it's behaving like this, or a workaround, would be greatly appreciated!
Many thanks
Turns out you can't mix inline subnet blocks in the azurerm_virtual_network resource with an explicitly defined azurerm_subnet resource: on the next apply, the vnet resource tries to reconcile its inline subnets and attempts to delete the standalone one.
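A sketch of the fix (address ranges are placeholders): drop the inline subnet blocks from azurerm_virtual_network and define every subnet, including build-agent, as its own azurerm_subnet resource, which also removes the need for the intermediate data sources:

resource "azurerm_virtual_network" "vnet-mgmt" {
  name                = "vnet-${local.name_suffix}"
  location            = azurerm_resource_group.rg-infra.location
  resource_group_name = azurerm_resource_group.rg-infra.name
  address_space       = ["10.0.0.0/16"]
  # no inline subnet blocks here
}

resource "azurerm_subnet" "virtual-machines" {
  name                 = "virtual-machines"
  resource_group_name  = azurerm_resource_group.rg-infra.name
  virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_subnet" "sb-ansible" {
  name                 = "build-agent"
  resource_group_name  = azurerm_resource_group.rg-infra.name
  virtual_network_name = azurerm_virtual_network.vnet-mgmt.name
  address_prefixes     = ["10.0.2.0/24"]
}

The NIC can then reference azurerm_subnet.sb-ansible.id directly.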
I want to create AKS and ACR resources in my Azure environment. The script is able to create the two resources, and I am able to connect to each of them. But the AKS node cannot pull images from the ACR. After some research, I found I need to create a Private Endpoint between the AKS and ACR.
The strange thing is that if I create the PE using Terraform, the AKS and ACR still cannot communicate. If I create the PE manually, they can communicate. I compared the parameters of the two PEs in the UI and they look the same.
Could someone help me define the PE using the following script? Or let me know what I did wrong?
Thanks!
Full TF script without the Private Endpoint
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.97.0"
    }
  }

  required_version = ">= 1.1.7"
}

provider "azurerm" {
  features {}
  subscription_id = "xxx"
}

resource "azurerm_resource_group" "rg" {
  name     = "aks-rg"
  location = "East US"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "myaks"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_B2s"
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_container_registry" "acr" {
  name                = "my-aks-acr-123"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Premium"
  admin_enabled       = true

  network_rule_set {
    default_action = "Deny"
  }
}

resource "azurerm_role_assignment" "acrpull" {
  principal_id                     = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = azurerm_container_registry.acr.id
  skip_service_principal_aad_check = true
}
Then you need to create a VNET and a subnet (not part of this code; a sketch follows the DNS zone block below), plus a private DNS zone:
Private DNS zone:
resource "azurerm_private_dns_zone" "example" {
name = "mydomain.com"
resource_group_name = azurerm_resource_group.example.name
}
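The VNET and subnet could look like this minimal sketch (the names and address ranges are my assumptions, not from the original answer). Note that in azurerm 2.x the subnet must allow private endpoints, and the DNS zone needs a virtual network link so the AKS nodes can resolve the registry's private IP:

resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_subnet" "endpoints" {
  name                 = "endpoints"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.10.1.0/24"]

  # azurerm 2.x: must be true so the subnet accepts private endpoints
  enforce_private_link_endpoint_network_policies = true
}

resource "azurerm_private_dns_zone_virtual_network_link" "example" {
  name                  = "acr-dns-link"
  resource_group_name   = azurerm_resource_group.example.name
  private_dns_zone_name = azurerm_private_dns_zone.example.name
  virtual_network_id    = azurerm_virtual_network.example.id
}

When you create the endpoint manually, the portal sets up the DNS zone and this link for you, which may be why the manual endpoint worked while the Terraform one did not.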
AKS Part:
resource "azurerm_kubernetes_cluster" "aks" {
name = "my-aks"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
dns_prefix = "myaks"
private_cluster_enabled = true
default_node_pool {
name = "default"
node_count = 2
vm_size = "Standard_B2s"
}
identity {
type = "SystemAssigned"
}
}
You need to create the ACR and a private endpoint for the ACR:
resource "azurerm_container_registry" "acr" {
name = "my-aks-acr-123"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
public_network_access_enabled = false
sku = "Premium"
admin_enabled = true
}
resource "azurerm_private_endpoint" "acr" {
name = "pvep-acr"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
subnet_id = YOUR_SUBNET
private_service_connection {
name = "example-acr"
private_connection_resource_id = azurerm_container_registry.acr.id
is_manual_connection = false
subresource_names = ["registry"]
}
private_dns_zone_group {
name = data.azurerm_private_dns_zone.example.name
private_dns_zone_ids = [data.azurerm_private_dns_zone.example.id]
}
}
resource "azurerm_role_assignment" "acrpull" {
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
role_definition_name = "AcrPull"
scope = azurerm_container_registry.acr.id
skip_service_principal_aad_check = true
}
I am new to the DevOps and Terraform domain, and I would like to ask the following. I have already created a VNET (using the portal) called "myVNET" in the resource group "Networks". I am trying to deploy an AKS cluster using Terraform. My main.tf file is below.
provider "azurerm" {
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
features {}
}
resource "azurerm_resource_group" "MyPlatform" {
name = var.resourcename
location = var.location
}
resource "azurerm_kubernetes_cluster" "aks-cluster" {
name = var.clustername
location = azurerm_resource_group.MyPlatform.location
resource_group_name = azurerm_resource_group.MyPlatform.name
dns_prefix = var.dnspreffix
default_node_pool {
name = "default"
node_count = var.agentnode
vm_size = var.size
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
network_profile {
network_plugin = "azure"
load_balancer_sku = "standard"
network_policy = "calico"
}
}
My question is the following: how can I attach my cluster to my VNET?
You do that by assigning the subnet ID to the node pool's vnet_subnet_id.
data "azurerm_subnet" "subnet" {
name = "<name of the subnet to run in>"
virtual_network_name = "MyVNET"
resource_group_name = "Networks"
}
...
resource "azurerm_kubernetes_cluster" "aks-cluster" {
...
default_node_pool {
name = "default"
...
vnet_subnet_id = data.azurerm_subnet.subnet.id
}
...
You can reference this existing module to build your own, or use it directly.
My Terraform script is giving the error below:
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
on terra.tf line 106, in resource "azurerm_container_registry" "acr":
106: resource "azurerm_container_registry" "acr" {
The whole script is below.
I'm a beginner at Terraform and tried different combinations, but they didn't work. I'm not sure what the problem can be; is it possible to help?
variable "prefix" {
default = "tfvmex"
}
provider "azurerm" {
version = "=1.28.0"
}
resource "azurerm_resource_group" "rg" {
name = "aks-cluster"
location = "West Europe"
}
resource "azurerm_virtual_network" "network" {
name = "aks-vnet"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_subnet" "subnet" {
name = "aks-subnet"
resource_group_name = azurerm_resource_group.rg.name
address_prefix = "10.1.1.0/24"
virtual_network_name = azurerm_virtual_network.network.name
}
resource "azurerm_kubernetes_cluster" "cluster" {
name = "aks"
location = azurerm_resource_group.rg.location
dns_prefix = "aks"
resource_group_name = azurerm_resource_group.rg.name
kubernetes_version = "1.17.3"
agent_pool_profile {
name = "aks"
count = 1
vm_size = "Standard_D2s_v3"
os_type = "Linux"
vnet_subnet_id = azurerm_subnet.subnet.id
}
service_principal {
client_id = "dxxxx"
client_secret = "xxxx"
}
network_profile {
network_plugin = "azure"
}
}
resource "azurerm_network_interface" "rg" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.subnet.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "rg" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
network_interface_ids = [azurerm_network_interface.rg.id]
vm_size = "Standard_DS1_v2"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "testtest"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
resource "azurerm_container_registry" "acr" {
name = "containerRegistry1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
resource "azurerm_network_security_group" "example" {
name = "acceptanceTestSecurityGroup1"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
security_rule {
name = "test123"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
environment = "Test"
}
}
Thanks!
How do I create a Container Registry on Azure with a Terraform resource?
From your error message, it appears that you are using a non-unique ACR name in the Terraform resource declaration pasted below:
resource "azurerm_container_registry" "acr" {
**name = "containerRegistry1"**
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "Premium"
admin_enabled = false
georeplication_locations = ["West Europe"]
}
The Azure CLI has az acr check-name to verify that an ACR name is globally unique.
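For example, with the name from the error (it reports whether the name is available and, if not, the reason):

az acr check-name --name containerregistry1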
I think your issue is that your ACR name is already taken globally, not just within your subscription.
This is because ACR needs a URL to be accessed, and that URL needs to be unique. Some other Azure account has already taken that name.
Error: Error creating Container Registry "containerRegistry1" (Resource Group "aks-cluster"): containerregistry.RegistriesClient#Create: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="AlreadyInUse" Message="The registry DNS name containerregistry1.azurecr.io is already in use. You can check if the name is already claimed using following API: https://learn.microsoft.com/en-us/rest/api/containerregistry/registries/checknameavailability"
containerregistry1.azurecr.io: this URL is what needs to be globally unique.
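One common way to avoid collisions (a sketch, not from the original answer) is to append a random suffix using the hashicorp/random provider:

resource "random_string" "acr_suffix" {
  length  = 8
  upper   = false
  special = false
}

resource "azurerm_container_registry" "acr" {
  # ACR names must be 5-50 alphanumeric characters and globally unique
  name                = "containerregistry${random_string.acr_suffix.result}"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Premium"
  admin_enabled       = false
}

Since the suffix is random per state, the resulting registry URL is very unlikely to collide with names in other accounts.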