I have two subscriptions and a VM in my Azure account. I have assigned two Managed Service Identities (MSIs) to the VM, where each MSI is associated with one subscription. I want my Terraform script to use both of them in my providers block. How do I proceed in this situation?
I tried to provide the client_id of the MSI within the provider block, but Terraform seems to treat one MSI as the default and goes along with it.
You can define multiple provider blocks in the Terraform script, each with an alias, and have each one authenticate with its own MSI. You can then choose which provider to use with the provider property when you create resources.
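A minimal sketch, assuming two user-assigned identities (all IDs are placeholders):

provider "azurerm" {
  alias           = "sub1"
  use_msi         = true
  client_id       = "<client-id-of-first-msi>"   # selects the first user-assigned MSI
  subscription_id = "<subscription-1-id>"
  features {}
}

provider "azurerm" {
  alias           = "sub2"
  use_msi         = true
  client_id       = "<client-id-of-second-msi>"  # selects the second user-assigned MSI
  subscription_id = "<subscription-2-id>"
  features {}
}

# Pick the provider (and therefore the MSI and subscription) per resource:
resource "azurerm_resource_group" "example" {
  provider = azurerm.sub2
  name     = "example-rg"
  location = "westeurope"
}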
Input: client_id, subscription_id, resource-group-name.
Manual / command line steps:
Approving at
https://login.microsoftonline.com/<tenant-id>/oauth2/authorize?client_id=<client_id>&response_type=code
Creating a new role (az role definition create --output none --role-definition)
Creating a role assignment (az role assignment create).
Steps 2-3 are pretty easy since I could leverage the azurerm TF Provider and, more specifically, its azurerm_role_definition and azurerm_role_assignment resources, but I'm kinda confused about step #1.
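For reference, a rough Terraform sketch of what I have in mind for steps 2-3 (the role name, permissions, and the principal's object ID variable are illustrative):

data "azurerm_subscription" "current" {}

resource "azurerm_role_definition" "custom" {
  name  = "my-custom-role"   # illustrative name
  scope = data.azurerm_subscription.current.id

  permissions {
    actions     = ["Microsoft.Resources/subscriptions/resourceGroups/read"]
    not_actions = []
  }

  assignable_scopes = [data.azurerm_subscription.current.id]
}

resource "azurerm_role_assignment" "custom" {
  scope              = data.azurerm_subscription.current.id
  role_definition_id = azurerm_role_definition.custom.role_definition_resource_id
  principal_id       = var.principal_object_id   # hypothetical variable: object ID of the service principal behind client_id
}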
Update: after googling, it seems like step #1 is very similar to Enable Azure Active Directory in your App Service app, if that helps.
Before you can even get Terraform to interact with Azure/Azure AD resources, you need to get Terraform to authenticate to them.
If you're running your Terraform code locally, the process is generally to authenticate using the Azure CLI: run az login and then enter the code shown by the CLI on the authentication page.
If you want to do this non-interactively, the best practice is to run the Terraform code on a machine that has a Managed Identity enabled, either a System-Assigned or a User-Assigned one.
Another possible but less direct approach would be to use a Service Principal with a Client Secret for Terraform to authenticate with. This is kinda like the link you provided for the App Service.
Try to follow the steps in those two links above, as they are from Terraform and have all the required steps to ensure you set it up right.
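For the Service Principal option, a minimal sketch of the provider block, assuming the credentials are passed in as variables:

provider "azurerm" {
  client_id       = var.client_id        # application (client) ID of the service principal
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id
  subscription_id = var.subscription_id
  features {}
}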
I want my Terraform scripts to be able to authenticate against multiple Azure subscriptions using multiple service principals.
Here is what I think:
Create a service principal (App registration).
Deploy the Terraform scripts in Azure Container Instances.
Give the "Contributor" role to my service principal on subscription (x).
Configure the Terraform scripts with environment variables to select the right credentials when I want to create resources in this subscription:
$ export ARM_SUBSCRIPTION_ID=159f2485-xxxx-xxxx-xxxx-xxxxxxxxxxxx # Client subscription
$ export ARM_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx # client_id of the service principal
$ export ARM_CLIENT_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
$ export ARM_TENANT_ID=72f988bf-xxxx-xxxx-xxxx-xxxxxxxxxxxx # the same tenant for all clients
Is this correct?
Do you have a more secure way to authenticate against multiple subscriptions when using Terraform Cloud (ideally without a client_secret)?
If the container instance can run the Terraform scripts, then there is no problem with those steps. You give the service principal permission and change the environment variable ARM_SUBSCRIPTION_ID for each subscription, and the Terraform scripts will work against the different subscriptions.
A safer way is to authenticate with the Azure CLI. If you switch between subscriptions with the CLI command:
az account set --subscription="SUBSCRIPTION_ID"
then the Terraform scripts will also work against the different subscriptions. This way you don't need to set the client secret as an environment variable.
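For reference, with the ARM_* variables exported (or after az account set), the provider block itself can stay minimal; a sketch:

# Credentials and target subscription are picked up from the ARM_* environment
# variables, or from the Azure CLI session, so no secrets live in the code:
provider "azurerm" {
  features {}
}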
Problem
I have an Azure Pipelines YAML file. It is able to run through a service connection, which accesses a service principal with all the proper permissions.
But I am now trying to clean up the code; we have multiple service principals running on multiple subscriptions and resource groups. They need to create storage accounts, whose names need to be unique.
So I am trying to build the storage account name partially from the subscription and resource group associated with the service principal creating it.
Example Solution
For the subscription, it is fairly easy. I can do something like this, from within a PowerShell script called inside the pipeline:
$subscriptionId = $(az account show --query 'id' -o tsv)
Write-Output "##vso[task.setvariable variable=AZURE_SUBSCRIPTION_ID;isoutput=true;issecret=true]$subscriptionId"
Now I have the variables $subscriptionId and AZURE_SUBSCRIPTION_ID set, and can access the subscription information within the pipeline itself.
Question
But how can I do something similar with resource groups?
There is no equivalent to az account show for resource groups, without knowing the resource group name itself. (E.g., I'd have to type az group show --name <RG-name>, but it is precisely the name that I am trying to get.)
Again, to be clear, I am running inside a particular resource group and subscription; it is those that are associated with the service connection. Now I simply want that information available to the pipeline.
I'm not sure I completely understand what you are trying to accomplish, but I suspect the options below might help.
Get role assignments
If you created separate service connections for each individual resource group, you can simply check the role assignments for the SPN and determine the scope of the service connection.
If you, for example, use the Azure PowerShell task, you have configured it with a service connection, so when the task starts, it has the context of the service principal. You can then run Get-AzRoleAssignment, which should output the resource groups it is authorized for. Again, this is only useful if you use a service connection per RG, as you otherwise get results for multiple RGs (or for subscriptions and management groups, if you also assigned a role at those scopes).
Use the Azure DevOps API
You can use the Get Service Endpoint request of the Azure DevOps API to get the service connections. The JSON output will contain information regarding the scope of the service connection.
If you find working with the API directly a bit hard, you can try the PSDevOps PowerShell module to interact with the Azure DevOps API. It has the Get-ADOServiceEndpoint command that allows you to get the available service endpoints.
I'm an owner of an Azure resource group but don't have permissions on the subscription or on the management group.
When configuring the "azurerm" provider inside my .tf file, I've added the subscription ID and tenant ID (I'm not the owner of that subscription).
--------------------- UPDATE ---------------------
I'm trying to deploy a Linux virtual machine using Terraform but I'm having authorization issues while planning the .tf file.
I've listed all my accounts using the Azure CLI (I want to connect to the second subscription in the output):
I've succeeded in authenticating to the subscription using the Azure CLI with this command:
az account set --subscription="SUBSCRIPTION_ID"
It's my default and current subscription:
Also, I was able to create and manage resources inside my resource group in that subscription using Azure CLI.
However, I added the exact tenant ID and the exact subscription ID inside my .tf file and still got the same credentials errors during terraform plan.
Using the Azure CLI or the Azure portal I am able to create and manage resources inside the resource group's scope, but using Terraform I'm facing problems.
Thank you :)
According to your description, you only set the tenant ID and subscription ID in the azurerm provider, so it seems you are authenticating via the Azure CLI. Whether you have a user account or a service principal, the Owner role on the resource group is enough to create a virtual machine in that resource group. With this approach, you need to log in to the Azure CLI first, as shown in the link I provided.
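Assuming you authenticate via the Azure CLI (az login, then az account set as above), a minimal sketch of the provider block for this scenario, with placeholder IDs:

provider "azurerm" {
  subscription_id = "<subscription-id>"  # the subscription that contains your resource group
  tenant_id       = "<tenant-id>"
  features {}
}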
We are building a solution in Azure Government, and we will be using Terraform to deploy it. It seems the preferred method is to create a service principal for Terraform, with the service principal having the Contributor role scoped to the subscription.
The one issue we are looking to solve is that this gives the service principal management-plane access to the Key Vault, since it is in the subscription. With the Contributor role, the service principal is not able to create new access policies (assign itself or others permissions to the data plane), but we are looking for a way to remove the service principal's management-plane permissions entirely.
We have tried putting a ReadOnly lock on the Key Vault before creating the service principal, but the lock does not stop the service principal from getting Contributor permissions on the Key Vault.
Outside of creating a new role that has Contributor for everything EXCEPT for Key Vault, does anyone have any creative ideas that might help achieve this?
Yes, the root cause of all the security issues is that the Service Principal's Contributor role assignment is at the subscription level/scope, which enables it to do quite a lot of damage, especially if multiple applications are deployed to the same subscription (e.g. it can delete any resource group).
One approach would be:
Provision one resource group for the Azure Key Vault specific to the application and region (the latter in case of geo-distributed applications).
Provision the Azure Key Vault on the resource group created on the previous step.
In our case, the Security Office was responsible for the first 2 steps, where they had monitoring (e.g. email, text-messages, etc.) for any change in the Azure Key Vault (e.g. new keys/secrets/certificates added/deleted/changed, permission changes, etc.).
Provision a second resource group, which will serve as a container for the application components (e.g. Azure Function, Azure SQL Server/Database, Azure Service Bus Namespace/Queue, etc.).
Create the Service Principal and assign it the Contributor role on the application resource group only (see the Terraform sketch after this list), for example:
scope = /subscriptions/{Subscription Id}/resourceGroups/{Resource Group Name}
Find a sample PS script to provision a Service Principal with custom scope at https://github.com/evandropaula/Azure/blob/master/ServicePrincipal/PS/Create-ServicePrincipal.ps1.
Give the Service Principal appropriate permissions in the Azure Key Vault (also covered in the sketch below). In our case, we decided to have separate Service Principal accounts for deployment (Read-Write permissions on keys/secrets/certificates) and runtime (Read-Only permissions on keys/secrets/certificates).
Find a sample PS script to set Service Principal permission on an Azure Key Vault at https://github.com/evandropaula/Azure/blob/master/KeyVault/PS/Set-ServicePrincipalPermissions.ps1.
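As a rough Terraform sketch of the two permission steps above (all IDs are placeholders and the permission lists are only illustrative):

# Contributor on the application resource group only, not on the subscription:
resource "azurerm_role_assignment" "app_rg_contributor" {
  scope                = "/subscriptions/{Subscription Id}/resourceGroups/{Resource Group Name}"
  role_definition_name = "Contributor"
  principal_id         = "<deployment-sp-object-id>"
}

# Read-Write access policy for the deployment Service Principal:
resource "azurerm_key_vault_access_policy" "deploy" {
  key_vault_id       = "<key-vault-resource-id>"
  tenant_id          = "<tenant-id>"
  object_id          = "<deployment-sp-object-id>"
  key_permissions    = ["Get", "List", "Create", "Delete"]
  secret_permissions = ["Get", "List", "Set", "Delete"]
}

# Read-Only access policy for the runtime Service Principal:
resource "azurerm_key_vault_access_policy" "runtime" {
  key_vault_id       = "<key-vault-resource-id>"
  tenant_id          = "<tenant-id>"
  object_id          = "<runtime-sp-object-id>"
  key_permissions    = ["Get", "List"]
  secret_permissions = ["Get", "List"]
}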
Having that said, there are lots of inconveniences with this approach, such as:
The process (e.g. via runbook) to provision the Azure Key Vault (including its resource group) and the application resource group will be outside of the main Terraform template responsible for the application components, which requires coordination with different teams/processes/tools/etc.
Live-site incidents involving connectivity often require coordination among multiple teams to ensure RTO (Recovery Time Objective) and MTTM (Mean Time To Mitigate) goals are achieved.
The Service Principal will be able to delete the application-specific resource group when terraform destroy is executed, but it will then fail to recreate it when running terraform apply, due to lack of permission at the subscription level/scope. Here is the error:
provider.azurerm: Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: resources.ProvidersClient#List: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '' with object id '' does not have authorization to perform action 'Microsoft.Resources/subscriptions/providers/read' over scope '/subscriptions/{Subscription Id}'.".
Yeah, I know, this is a long answer, but the topic usually requires lots of cross-team discussion/brainstorming to make sure the security controls established by the Security Office are met, developer productivity is not affected to the point that it impacts release schedules, and RTO/MTTM goals are achieved. I hope this helps a bit!