I know how to create an Azure Data Factory (ADF) and a storage account using Terraform. After that, I want to give the ADF managed identity access to the storage account. I can do this using PowerShell, but that approach has idempotency issues. Is it possible to implement the access grant with Terraform itself, without using PowerShell?
You can create an azurerm_role_assignment to grant ADF access to the Azure Storage account. The example below assigns the Storage Blob Data Reader role to the data factory's system-assigned identity.
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_data_factory" "example" {
name = "example524657"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
identity {
type = "SystemAssigned"
}
}
resource "azurerm_storage_account" "example" {
name = "examplestr524657"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "RAGRS"
}
resource "azurerm_role_assignment" "example" {
scope = azurerm_storage_account.example.id
role_definition_name = "Storage Blob Data Reader"
principal_id = azurerm_data_factory.example.identity[0].principal_id
}
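If the goal is for pipelines to actually use that identity at runtime, the blob linked service can point at the blob endpoint with managed-identity authentication. A minimal sketch, assuming a v3+ azurerm provider (where data_factory_id is the expected argument) and the resource names above:

resource "azurerm_data_factory_linked_service_azure_blob_storage" "example" {
  name            = "example-blob-link"
  data_factory_id = azurerm_data_factory.example.id

  # Authenticate with the factory's system-assigned identity instead of an account key
  service_endpoint     = azurerm_storage_account.example.primary_blob_endpoint
  use_managed_identity = true
}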
I have an environment in Azure with a VNet configuration and resources. I want to automate the deployment of an Azure Windows VM with the existing VNet configuration using Terraform.
First of all, if you declare the existing VNet configuration as a resource in the Terraform code, it will throw the error "Resource already exists". To resolve this, run terraform import to bring the resource into the Terraform state file so Terraform can manage it.
The import command, in the format documented in the Terraform Registry:
terraform import azurerm_virtual_network.existing /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Network/virtualNetworks/<vnet>
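On Terraform 1.5 and later, the same import can also be declared in configuration, so terraform plan/apply performs it instead of a one-off CLI call. A sketch using the same placeholder ID:

import {
  to = azurerm_virtual_network.existing
  id = "/subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Network/virtualNetworks/<vnet>"
}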
To reference existing resources, you can also include a data block. The complete script below covers your scenario and deployed successfully:
data "azurerm_virtual_network" "existing"{
name = "jahnavivnet"
resource_group_name = "example-resources"
}
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "existing" {
name = "jahnavivnet"
address_space = ["10.0.0.0/16"]
resource_group_name = "example-resources"
location = "West Europe"
}
resource "azurerm_subnet" "existing" {
name = "default"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.existing.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "NICxxx"
location = "West Europe"
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "default"
subnet_id = azurerm_subnet.existing.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_windows_virtual_machine" "example" {
name = "xxxxxnameofVM"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "user"
admin_password = "xxxx"
network_interface_ids = [
azurerm_network_interface.example.id,
]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
}
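One hardening note: rather than hard-coding admin_password, a sensitive input variable keeps the secret out of the configuration and out of plan output. A small sketch (the variable name is my own choice):

variable "admin_password" {
  type      = string
  sensitive = true # redacts the value in plan/apply output
}

# Then, inside azurerm_windows_virtual_machine.example:
#   admin_password = var.admin_password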
Executed terraform init, terraform plan, and terraform apply --auto-approve, and the virtual machine deployed with the existing virtual network configuration. For more detail, refer to the Terraform Registry documentation for these resources.
I have an issue creating a VM on Azure using Terraform.
We have a policy that restricts certain VM sizes for our subscription, but we created an exemption for a specific resource group.
I can create a VM with the desired size using my service principal and the following commands:
$ az login --service-principal -u ... -p ... --tenant ...
$ az vm create --resource-group ... --name ... --image ... --admin-username ... --generate-ssh-keys --location ... --size ...
The VM is created successfully with the desired size.
But when I try to create the VM using Terraform with the same VM size, I get the following error:
level=error msg=Error: creating Linux Virtual Machine "..." (Resource Group "..."): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status= Code="SkuNotAvailable" Message="The requested size for resource '/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/...' is currently not available in location '...' zones '...' for subscription '...'. Please try another size or deploy to a different location or zones. See https://aka.ms/azureskunotavailable for details."
After running
az vm list-skus --location ... --size ... --all --output table
The output for that size is:
restrictions
---
NotAvailableForSubscription, type: Zone, locations: ..., zones: 1,2,3
It looks like the size is unavailable, but using the CLI or the Azure portal I am able to create a VM with this size.
Terraform runs with the same service principal as the CLI command, in the same subscription, tenant, and resource group.
Do you have an idea what could cause this problem when creating the VM with Terraform?
Thanks
Here is a Terraform script that creates a VM with the specified configuration:
location = "East US"
vm_size  = "Standard_NC12s_v3"
Note that the restriction in your az vm list-skus output is of type Zone: the SKU is blocked in specific zones for your subscription. A deployment that does not pin a zone, like your az vm create command or the script below, can still succeed.
Step 1: Copy the code below into your main.tf file.
provider "azurerm" {
features {}
}
variable "prefix" {
default = "rg_swarna"
}
resource "azurerm_resource_group" "example" {
name = "${var.prefix}-resources"
location = "East US"
}
resource "azurerm_virtual_network" "main" {
name = "${var.prefix}-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "internal" {
name = "internal"
resource_group_name = "rg_swarna-resources"//azurerm_resource_group.example.name
virtual_network_name = "rg_swarna-network"//azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "main" {
name = "${var.prefix}-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "testconfiguration1"
subnet_id = azurerm_subnet.internal.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "main" {
name = "${var.prefix}-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.main.id]
//vm_size = "Standard_DS1_v2"
vm_size = "Standard_NC12s_v3"
# Uncomment this line to delete the OS disk automatically when deleting the VM
# delete_os_disk_on_termination = true
# Uncomment this line to delete the data disks automatically when deleting the VM
# delete_data_disks_on_termination = true
storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "testadmin"
admin_password = "Password1234!"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags = {
environment = "staging"
}
}
Step 2: Run the commands below.
terraform plan
terraform apply -auto-approve
Step 3: Verify the results in the Azure portal.
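Because the restriction is zonal, the same consideration carries over if you rewrite the legacy azurerm_virtual_machine block with the newer azurerm_linux_virtual_machine resource: leave its optional zone argument unset so the platform can make a regional (unzoned) placement. A sketch reusing the resources above (the Ubuntu 22.04 image reference is my assumption; substitute any available image):

resource "azurerm_linux_virtual_machine" "main" {
  name                            = "${var.prefix}-vm"
  resource_group_name             = azurerm_resource_group.example.name
  location                        = azurerm_resource_group.example.location
  size                            = "Standard_NC12s_v3"
  admin_username                  = "testadmin"
  admin_password                  = "Password1234!"
  disable_password_authentication = false
  network_interface_ids           = [azurerm_network_interface.main.id]

  # zone is deliberately left unset: the SKU restriction above is per-zone,
  # so a regional deployment can still succeed

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}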
I want to assign a user-assigned managed identity to the VMSS created in the MC resource group, so that all pods created in Kubernetes have access to the associated Key Vault.
I have done it through a PowerShell script:
$aksNodeVmss = Get-AzVmss -ResourceGroupName "$aksMcRg"
Update-AzVmss -ResourceGroupName $aksMcRg -Name $aksNodeVmss.Name -IdentityType UserAssigned -IdentityID $id
But I want to do it in Terraform, and I'm unable to find a solution.
The VMSS identity is the kubelet identity of your node pool. AKS now supports "bring your own" kubelet identity at cluster creation, so there is no need to update the identities afterwards.
resource "azurerm_user_assigned_identity" "kubelet" {
name = "uai-kubelet"
location = <YOUR_LOCATION>
resource_group_name = <YOUR_RG>
}
resource "azurerm_user_assigned_identity" "aks" {
name = "uai-aks"
location = <YOUR_LOCATION>
resource_group_name = <YOUR_RG>
}
# This can be also a custom role with Microsoft.ManagedIdentity/userAssignedIdentities/assign/action allowed
resource "azurerm_role_assignment" "this" {
scope = <YOUR_RG>
role_definition_name = "Managed Identity Operator"
principal_id = azurerm_user_assigned_identity.aks.principal_id
}
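If you want a tighter scope than the whole resource group, the same assignment can target just the kubelet identity itself; a sketch reusing the resources above:

resource "azurerm_role_assignment" "kubelet_operator" {
  # Scope down to the single identity the cluster needs to assign
  scope                = azurerm_user_assigned_identity.kubelet.id
  role_definition_name = "Managed Identity Operator"
  principal_id         = azurerm_user_assigned_identity.aks.principal_id
}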
Then assign the identity to the kubelet:
resource "azurerm_kubernetes_cluster" "aks" {
...
identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.aks.id]
}
kubelet_identity {
client_id = azurerm_user_assigned_identity.kubelet.client_id
object_id = azurerm_user_assigned_identity.kubelet.principal_id
user_assigned_identity_id = azurerm_user_assigned_identity.kubelet.id
}
}
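One ordering caveat: AKS validates the kubelet-identity permission at create time, so the role assignment has to exist before the cluster. An explicit depends_on on the cluster resource is a simple way to guarantee that (a fragment, reusing the names above):

resource "azurerm_kubernetes_cluster" "aks" {
  # ... identity and kubelet_identity blocks as above ...
  depends_on = [azurerm_role_assignment.this]
}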
I am trying to create an ACR and integrate it with an existing AKS cluster.
Below are the resource blocks that do the role assignment (user-assigned managed identity) for the AKS node pool, plus a data block to fetch the existing AKS details.
# Create resource group
resource "azurerm_resource_group" "acr_rg" {
  location = var.location
  name     = "${var.global-prefix}-${var.repo-id}-rg"
}

# Create ACR registry for Powerme
resource "azurerm_container_registry" "acr" {
  name                = var.repo-id
  resource_group_name = azurerm_resource_group.acr_rg.name
  location            = azurerm_resource_group.acr_rg.location
  sku                 = "Premium"
  admin_enabled       = true
}

# Fetch AKS details for integration with ACR
data "azurerm_kubernetes_cluster" "aks_cluster" {
  resource_group_name = var.aks_rg
  name                = var.k8s_cluster
}

# Role assignment so the VMSS (kubelet identity) can pull from ACR
resource "azurerm_role_assignment" "acrpull_role" {
  scope                = azurerm_container_registry.acr.id
  role_definition_name = "AcrPull"
  principal_id         = data.azurerm_kubernetes_cluster.aks_cluster.kubelet_identity[0].object_id
}
The error:
$ terraform plan
╷
│ Error: Error: Managed Kubernetes Cluster "mycluster" was not found in Resource Group "myresource"
│
│ with data.azurerm_kubernetes_cluster.aks_cluster,
│ on acr.tf line 17, in data "azurerm_kubernetes_cluster" "aks_cluster":
│ 17: data "azurerm_kubernetes_cluster" "aks_cluster" {
╵
If you have access to multiple subscriptions, please ensure you have set the subscription that contains the AKS cluster and its resource group:
az account set --subscription "the subscription you want to use"
After the subscription is set, the same code will successfully find the AKS cluster and resource group.
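A Terraform-native alternative is to pin the subscription in the provider block, so the data source resolves correctly regardless of the current CLI context; a sketch with a placeholder ID:

provider "azurerm" {
  features {}

  # Pin the subscription that contains the existing AKS cluster
  subscription_id = "<subscription-id-that-contains-the-aks-cluster>"
}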
I am deploying resources to Azure with Terraform. I want to assign roles to AD users by their email address, but the azurerm_role_assignment resource only accepts the object ID of the user; trying it with the email fails, as expected.
resource "azurerm_role_assignment" "example" {
scope = data.azurerm_subscription.primary.id
role_definition_name = "Reader"
principal_id = data.azurerm_client_config.example.object_id
}
With Az PowerShell, the role can be assigned by the user's sign-in name: New-AzRoleAssignment -SignInName <userupn>.
Is there a way to do it with Terraform?
I have found the answer. The azuread_users data source can be used as a solution:
data "azuread_users" "users" {
user_principal_names = ["kat#hashicorp.com"]
}
resource "azurerm_role_assignment" "rbac_wvd" {
scope = data.azurerm_subscription.primary.id
role_definition_name = "Reader"
principal_id = data.azuread_users.wvd_user.object_ids[0]
}
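If you only need a single user, the singular azuread_user data source is a slightly simpler alternative; a sketch with the same UPN:

data "azuread_user" "example" {
  user_principal_name = "kat@hashicorp.com"
}

resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_subscription.primary.id
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.example.object_id
}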