Terraform - Deploy to multiple Azure subscriptions

I have been trying to use the same Terraform stack to deploy resources into multiple Azure subscriptions, and I also need to pass parameters between resources in the different subscriptions. I tried to use multiple provider blocks, but that is not supported:
Error: provider.azurerm: multiple configurations present; only one configuration is allowed per provider
If you have a way or an idea on how to accomplish this please let me know.

You can use multiple provider configurations by giving them an alias (see the docs).
# The default provider configuration
provider "azurerm" {
  subscription_id = "xxxxxxxxxx"
}

# Additional provider configuration for the second subscription
provider "azurerm" {
  alias           = "y"
  subscription_id = "yyyyyyyyyyy"
}
Then reference the alias wherever you want to use the alternative provider configuration:
resource "azurerm_resource_group" "network_x" {
  name     = "production"
  location = "West US"
}

resource "azurerm_resource_group" "network_y" {
  provider = azurerm.y

  name     = "production"
  location = "West US"
}
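Note that on azurerm 2.x and later each provider block also needs an empty features {} block, and you can pass values between subscriptions by reading data through one provider and referencing it under the other. A minimal sketch, with illustrative names:
# Read a value from subscription y...
data "azurerm_resource_group" "shared" {
  provider = azurerm.y
  name     = "production"
}

# ...and use it for a resource in the default subscription
resource "azurerm_resource_group" "mirror" {
  name     = "production-mirror" # hypothetical name
  location = data.azurerm_resource_group.shared.location
}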

Markus's answer is correct, but it is only the right solution if you need to access more than one subscription from the same set of Terraform sources.
If your purpose is to use one subscription as a sandbox and the other for real, you should simply move the provider information out of the Terraform scripts. There is more than one way to manage this:
Workspaces
Backend configuration
A wrapper script in bash/PowerShell/Python, Terragrunt-style
Symbolic links can also be used to share files in multiple folders
I use a combination of the last three as workspaces are too rigid for our needs.
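As an example of the backend-configuration approach, a partial backend block plus a per-environment settings file keeps everything subscription-specific out of the sources (file names and values below are illustrative):
# backend.tf - deliberately left empty, filled in at init time
terraform {
  backend "azurerm" {}
}

# sandbox.backend.tfvars
resource_group_name  = "rg-tfstate-sandbox"
storage_account_name = "tfstatesandbox"
container_name       = "tfstate"
key                  = "stack.tfstate"
Then select the sandbox state and subscription when running Terraform:
terraform init -backend-config=sandbox.backend.tfvars
ARM_SUBSCRIPTION_ID="xxxxxxxxxx" terraform apply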

I got this error code for a silly reason as a Terraform beginner, maybe someone here has the same problem:
I saved a backup of my main.tf file as something like mymainbackup1.tf, and Terraform interpreted it as a real .tf file even though it wasn't main.tf. Since Terraform loads every .tf file in the directory, it thought I had more than one provider configured.
I changed the file to the .txt extension, Terraform stopped reading it, and the error went away.

Related

Terraform Import - Is the Resource Label critical?

Our Terraform remote state file in Azure has been completely destroyed, and we're now faced with the challenge of recreating the state file from scratch. My plan is to use the terraform import command, which has the following simple syntax:
terraform import <Terraform Resource Name>.<Resource Label> <Azure Resource ID>
To import the existing resource group, for example, I will create the following configuration in a main.tf file:
provider "azurerm" {
version="1.39.0"
}
# create resource group
resource "azurerm_resource_group" "rg"{
name = "rg-terraform"
location = "uksouth"
}
Now, the problem I have is as follows:
When the existing Azure resources were originally created, they were assigned names that used an extremely complex naming convention, with some characters even being randomly generated. To further compound matters, they were all unique and there are hundreds of them. All would have been rosy if they were assigned a simplistic name like "main", as is used commonly in a lot of Terraform examples, but unfortunately, that's not the case.
My question therefore, is this:
When putting together my main.tf configuration file to be used for the Import, is it an absolute requirement that my "Resource Label" (given in my Import command) has to match the original "Resource Label" name from when the resource was created?
If it is a mandatory requirement, is there any way I could retrieve the original "Resource Label" from Azure in the same way that I can for instance obtain the "Azure Resource ID" from the Azure Portal or even an Az CLI query?
How can I ensure any child resources such as Subnets are included in the Import, without having to trawl manually through the Azure Portal to identify each one of them?
No, absolutely not. Choose whatever label you want.
No. Azure generally does not know about this label; it is something internal to Terraform.
Unfortunately, you need to import each and every resource manually and separately.
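For example, with the configuration above, any label works as long as the address in the import command matches it (the subscription ID is a placeholder):
terraform import azurerm_resource_group.rg /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-terraform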
Have you made absolutely sure the current state file is lost? Was the storage location not versioned? Does no developer still have a local copy of the state file lying around?

How to automatically import resource to Terraform state?

I want to develop a single Terraform module to deploy my resources, with the resources being stored in separate YAML files. For example:
# resource_group_a.yml
name: "ResourceGroupA"
location: "westus"
# resource_group_b.yml
name: "ResourceGroupB"
location: "norwayeast"
And the following Terraform module:
# deploy/main.tf
variable "source_file" {
  type = string # Path to a YAML file
}

locals {
  rg = yamldecode(file(var.source_file))
}

resource "azurerm_resource_group" "rg" {
  name     = local.rg.name
  location = local.rg.location
}
I can deploy the resource groups with:
terraform apply -var="source_file=resource_group_a.yml"
terraform apply -var="source_file=resource_group_b.yml"
But then I run into 2 problems, due to the state that Terraform keeps about my infrastructure:
If I deploy Resource Group A, it deletes Resource Group B and vice-versa.
If I manually remove the .tfstate file prior to running apply, and the resource group already exists, I get an error:
A resource with the ID "/..." already exists - to be managed via Terraform
this resource needs to be imported into the State.
with azurerm_resource_group.rg,
on main.tf line 8 in resource "azurerm_resource_group" "rg"
I can import the resource into my state with:
terraform import azurerm_resource_group.rg "/..."
But that ID is long, and there may be multiple resources that I need to import.
So my questions are:
How to keep the state separate between the two resource groups?
How to automatically import existing resources when I run terraform apply?
How to keep the state separate between the two resource groups?
I recommend using Terraform Workspaces for this, which will give you separate state files, each with an associated workspace name.
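A minimal sketch of that flow, with illustrative workspace names:
terraform workspace new rg-a
terraform apply -var="source_file=resource_group_a.yml"

terraform workspace new rg-b
terraform apply -var="source_file=resource_group_b.yml"

# switch back to an existing workspace later
terraform workspace select rg-a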
How to automatically import existing resources when I run terraform apply?
That's not currently possible. There are some third-party tools out there like Terraformer for accomplishing automated imports, but in my experience they don't work very well, or they never support all the resource types you need. Even then they wouldn't import resources automatically every time you run terraform apply.
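Note that Terraform 1.5 and later add config-driven import via import blocks, which are processed during plan/apply and get close to what you're asking (the resource ID below is a placeholder):
import {
  to = azurerm_resource_group.rg
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ResourceGroupA"
}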

Using databricks workspace in the same configuration as the databricks provider

I'm having some trouble getting the azurerm & databricks provider to work together.
With the azurerm provider, I set up my workspace:
resource "azurerm_databricks_workspace" "ws" {
name = var.workspace_name
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
sku = "premium"
managed_resource_group_name = "${azurerm_resource_group.rg.name}-mng-rg"
custom_parameters {
virtual_network_id = data.azurerm_virtual_network.vnet.id
public_subnet_name = var.public_subnet
private_subnet_name = var.private_subnet
}
}
No matter how I structure this, I can't seem to get azurerm_databricks_workspace.ws.id to work in the provider statement for databricks in the same configuration. If it did work, the workspace above would be defined in the same configuration and I'd have a provider statement that looks like this:
provider "databricks" {
azure_workspace_resource_id = azurerm_databricks_workspace.ws.id
}
Error:
I have my ARM_* environment variables set to identify as a Service Principal with Contributor on the subscription.
I've tried in the same configuration & in a module and consuming outputs. The only way I can get it to work is by running one configuration for the workspace and a second configuration to consume the workspace.
This is super suboptimal in that I have a fair amount of repeated values across those configurations, and it would be ideal to have just one.
Has anyone been able to do this?
Thank you :)
I've had the exact same issue with a non-working databricks provider because I was working with modules. I separated the Databricks infrastructure (azurerm provider) from the Databricks application (databricks provider).
In my databricks module I added the following code at the top; otherwise it would use my Azure setup:
terraform {
  required_providers {
    databricks = {
      source  = "databrickslabs/databricks"
      version = "0.3.1"
    }
  }
}
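Note that the provider has since moved to the databricks/databricks registry namespace, so on current Terraform versions the block would instead reference:
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}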
In my normal provider setup I have the following settings for databricks:
provider "databricks" {
azure_workspace_resource_id = module.databricks_infra.databricks_workspace_id
azure_client_id = var.ARM_CLIENT_ID
azure_client_secret = var.ARM_CLIENT_SECRET
azure_tenant_id = var.ARM_TENANT_ID
}
And of course I have the azure one. Let me know if it worked :)
If you experience technical difficulties with rolling out resources in this example, please make sure that environment variables don't conflict with other provider block attributes. When in doubt, run TF_LOG=DEBUG terraform apply to enable debug mode through the TF_LOG environment variable, and look specifically for "Explicit and implicit attributes" lines, which indicate the authentication attributes in use. The other common cause of trouble is a missing alias attribute in provider "databricks" {} blocks, or a missing provider attribute in resource "databricks_..." {} blocks. Please make sure to read the "alias: Multiple Provider Configurations" documentation article.
From the error message, it looks like authentication is not configured for the provider. Could you please configure it through one of the options mentioned above?
For more details, refer to Databricks provider - Authentication.
For passing the custom_parameters, you may check out the SO thread which addresses a similar issue.
In case you need more help on this issue, I would suggest opening an issue here: https://github.com/terraform-providers/terraform-provider-azurerm/issues

How to share Terraform variables across workspaces/modules?

Terraform Cloud Workspaces allow me to define variables, but I'm unable to find a way to share variables across more than one workspace.
In my example I have, let's say, two workspaces:
Database
Application
In both cases I'll be using the same AzureRM credentials for connectivity. The following are common values used by the workspaces to connect to my Azure subscription:
provider "azurerm" {
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "00000000000000000000000000000000"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It wouldn't make sense to duplicate values (in my case I'll have probably 10 workspaces).
Is there a way to do this?
Or the correct approach is to define "database" and "application" as a Module, and then use Workspaces (DEV, QA, PROD) to orchestrate them?
In Terraform Cloud, the Workspace object is currently the least granular location where you can specify variable values directly. There is no built in mechanism to share variable values between workspaces.
However, one way to approach this would be to manage Terraform Cloud with Terraform itself. The tfe provider (named after Terraform Enterprise for historical reasons, since it was built before Terraform Cloud launched) will allow Terraform to manage Terraform Cloud workspaces and their associated variables.
variable "workspaces" {
type = set(string)
}
variable "common_environment_variables" {
type = map(string)
}
provider "tfe" {
hostname = "app.terraform.io" # Terraform Cloud
}
resource "tfe_workspace" "example" {
for_each = var.workspaces
organization = "your-organization-name"
name = each.key
}
resource "tfe_variable" "example" {
# We'll need one tfe_variable instance for each
# combination of workspace and environment variable,
# so this one has a more complicated for_each expression.
for_each = {
for pair in setproduct(var.workspaces, keys(var.common_environment_variables)) : "${pair[0]}/${pair[1]}" => {
workspace_name = pair[0]
workspace_id = tfe_workspace.example[pair[0]].id
name = pair[1]
value = var.common_environment_variables[pair[1]]
}
}
workspace_id = each.value.workspace_id
category = "env"
key = each.value.name
value = each.value.value
sensitive = true
}
With the above configuration, you can set var.workspaces to contain the names of the workspaces you want Terraform to manage and var.common_environment_variables to the environment variables you want to set for all of them.
Note that for setting credentials on a provider the recommended approach is to set them in environment variables rather than Terraform variables, because that then makes the Terraform configuration itself agnostic to how those credentials are obtained. You could potentially apply the same Terraform configuration locally (outside of Terraform Cloud) using the integration with Azure CLI auth, while the Terraform Cloud execution environment would often use a service principal.
Therefore to provide the credentials in the Terraform Cloud environment you'd put the following environment variables in var.common_environment_variables:
ARM_CLIENT_ID
ARM_TENANT_ID
ARM_SUBSCRIPTION_ID
ARM_CLIENT_SECRET
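For illustration, the corresponding variable values could look like this (all IDs are placeholders; as noted below, you'd normally set these as sensitive variables rather than committing them to a tfvars file):
workspaces = ["Database", "Application"]

common_environment_variables = {
  ARM_CLIENT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_TENANT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
  ARM_CLIENT_SECRET   = "00000000000000000000000000000000"
}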
If you use Terraform Cloud itself to run operations on this workspace managing Terraform Cloud (naturally, you'd need to set this one up manually to bootstrap, rather than having it self-manage) then you can configure var.common_environment_variables as a sensitive variable on that workspace.
If you instead set it via Terraform variables passed into the provider "azurerm" block (as you indicated in your example) then you force any person or system running the configuration to directly populate those variables, forcing them to use a service principal vs. one of the other mechanisms and preventing Terraform from automatically picking up credentials set using az login. The Terraform configuration should generally only describe what Terraform is managing, not settings related to who is running Terraform or where Terraform is being run.
Note though that the state for the Terraform Cloud self-management workspace will include a copy of those credentials, as is normal for objects Terraform is managing, so the permissions on this workspace should be set appropriately to restrict access to it.
You can now use variable sets to reuse variables across multiple workspaces.
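A minimal sketch using the tfe provider's variable-set resources, reusing the tfe_workspace.example resources from the answer above (names and IDs are illustrative):
resource "tfe_variable_set" "azure_credentials" {
  name         = "azure-credentials"
  organization = "your-organization-name"
}

resource "tfe_variable" "arm_client_id" {
  key             = "ARM_CLIENT_ID"
  value           = "00000000-0000-0000-0000-000000000000"
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.azure_credentials.id
}

# Attach the set to a workspace; repeat (or use for_each) for the others
resource "tfe_workspace_variable_set" "database" {
  variable_set_id = tfe_variable_set.azure_credentials.id
  workspace_id    = tfe_workspace.example["Database"].id
}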

Adding tags to child resources created by Terraform

Terraform v0.11.9
+ provider.aws v1.41.0
I want to know if there is a way to update a resource that is not directly created in the plan, but is created by a resource in the plan. The example is creating a managed Active Directory with aws_directory_service_directory. This process creates a security group, and I want to add tags to that security group. Here is the snippet I'm using to create the resource:
resource "aws_directory_service_directory" "NewDS" {
name = "${local.DSFQDN}"
password = "${var.ADPassword}"
size = "Large"
type = "MicrosoftAD"
short_name = "${local.DSShortName}"
vpc_settings {
vpc_id = "${aws_vpc.this.id}"
subnet_ids = ["${aws_subnet.private.0.id}",
"${aws_subnet.private.1.id}",
]
}
tags = "${merge(var.tags, var.ds_tags, map("Name", format("%s", local.VPCname)))}"
}
I can reference the newly created security group using
"${aws_directory_service_directory.NewDS.security_group_id}"
But I can't use that to update the resource. I want to add all of the tags I have on the directory to the security group, as well as updating the Name tag. I've tried using a local-exec provisioner, but the results have not been consistent, and getting the map of tags into the command without hard-coding it has not worked.
Thanks
I moved the local-exec provisioner out of the directory service resource and into a dummy resource.
resource "null_resource" "ManagedADTags"
{
provisioner "local-exec"
{
command = "aws --profile ${var.profile} --region ${var.region} ec2 create-tags --
resources ${aws_directory_service_directory.NewDS.security_group_id} --tags
Key=Name,Value=${format("${local.security_group_prefix}-%s","ManagedAD")}"
}
}
(The command = is a single line)
Using the format command allowed me to send the entire list of tags to the resource. Terraform doesn't "manage" it, but it does allow me to update it as part of the plan.
You can also leverage the aws_ec2_tag resource, which works on non-EC2 resources as well, in conjunction with the provider attribute ignore_tags. Please refer to another answer I made on the topic for more detail.
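A minimal sketch of that approach (it requires a much newer AWS provider than the v1.41.0 shown above; the tag value is illustrative):
# Tag the security group created by the directory service
resource "aws_ec2_tag" "managed_ad_sg_name" {
  resource_id = aws_directory_service_directory.NewDS.security_group_id
  key         = "Name"
  value       = "ManagedAD"
}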
AWS already exposes an API for that, where you can tag multiple resources at once, not just a single resource. Not sure why Terraform doesn't implement that.
Just hit this as well. It turns out the tags propagate from the directory service: if you tag your directory appropriately, the Name tag from your directory will be applied to the security group.
