Enterprise Scale Landing Zone - CAF - Apply/Create Policies - Azure

I am using the link below to create/assign standard Azure policies for my Management Groups. The problem is that my org has already created Management Groups and Subscriptions manually in the Azure Portal. Can I still create/apply policies using the ESLZ TF code and apply them to these manually created Management Groups using TF code:
https://github.com/Azure/terraform-azurerm-caf-enterprise-scale
When I look at the code, the archetype (policy) seems very tightly coupled to MG creation. Is that right?
locals.management_groups.tf:
"${local.root_id}-landing-zones" = {
archetype_id = "es_landing_zones"
parameters = local.empty_map
access_control = local.empty_map
}
archetype_id is the policy.

The enterprise scale example follows what they call a "supermodule approach".
The example from locals.management_groups.tf that you posted is just the configuration layer of the module. If you want CAF to adopt an existing management group instead of creating a new one, you have to terraform import it. That requires you to know the resource address, which you can derive from the actual resource definition.
So you're looking for resource "azurerm_management_group", and that's in resources.management_groups.tf. That file declares multiple management group resources, level_1 to level_6, and uses for_each to generate the resources from the module's configuration model.
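For orientation, here is a rough sketch of that pattern; it is not the module's exact code, and the local name is an assumption:

resource "azurerm_management_group" "level_3" {
  for_each = local.azurerm_management_group_level_3 # assumed name for the level-3 slice of the configuration model

  name                       = each.key
  display_name               = each.value.display_name
  parent_management_group_id = each.value.parent_management_group_id
  subscription_ids           = each.value.subscription_ids
}

The map key (for example "my-root-demo-corp") becomes the instance key that you later reference in the import address.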
So after you've somehow traced your way through the module's configuration model and determined where you want your existing management group to end up in the CAF model, you can run an import as described in Terraform's docs and the azurerm provider's docs, like this:
terraform import 'azurerm_management_group.level_3["my-root-demo-corp"]' /providers/Microsoft.Management/managementGroups/group1
However, even though all of that is technically possible, I'm not sure this is a good approach. The enterprise scale supermodule configuration model is super idiosyncratic to trace through - its stated goal is to replace "infrastructure as code" with "infrastructure as data". You may find one of the approaches described in Azure Landing Zone for existing Azure infrastructure more practical for your purposes.

You could import the deployed environment into the TF state and proceed from there.
However, do you see an absolute necessity? IMO, setting up the LZ is a one-time activity. That said, one can argue that ensuring compliance across active & DR LZs is a good use case for a TF-based LZ deployment.

Related

Migrate a data block to resource block in Terraform

Initially, resources in our authentication provider were created manually through the provider's web console. It worked, and things went to production this way. The problem is that the configuration is increasing in complexity and I'd like to manage it through Terraform files instead of continuing through the provider's web console (no backup, no way to recreate everything easily, etc.).
I initially thought of modelling my configuration with data blocks for the existing resources and using new resource blocks for the new resources we need to create. Then I wanted to migrate from the data blocks to Terraform-managed resources (aka resource blocks). Is this possible through a moved block or something else? Is it possible without having to create a new managed resource and destroy the data resource, which is most likely to cause downtime or disruption for the end user?
In order to manage resources which were initially created manually or otherwise out of Terraform's scope, the Terraform CLI offers import as the native solution from HashiCorp.
Every resource has its own import syntax (starting with terraform import), which you can find at the bottom of that resource's documentation page.
As an example:
Azurerm windows_virtual_machine Import
terraform import azurerm_windows_virtual_machine.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachines/machine1
Downside of native import: you have to import all resources one by one, and sometimes, just for one resource (solution), you have to make multiple import calls.
As an example, for a Windows virtual machine you might import the following (see the sketch after this list):
azurerm_virtual_machine_extension
azurerm_managed_disk
azurerm_virtual_machine_data_disk_attachment
as separate imports. It strongly depends on how you would like to manage them in the end.
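As a hedged sketch of what that looks like, each piece gets its own import call; every address, resource name and ID below is a hypothetical placeholder:

terraform import azurerm_windows_virtual_machine.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachines/machine1
terraform import azurerm_managed_disk.data /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/disks/machine1-data
terraform import azurerm_virtual_machine_data_disk_attachment.data /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachines/machine1/dataDisks/machine1-data
terraform import azurerm_virtual_machine_extension.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Compute/virtualMachines/machine1/extensions/extension1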
BUT
There are a few open-source tools available that help if you have lots of resources you want to bring under Terraform management in an easier and faster way.
If you are working with Azure resources, then aztfy is the recommended tool, as it comes natively from Azure.
It generates the Terraform code; additionally, it has a feature where you can point it at an Azure resource group, and it automatically imports and generates config for the resources that the resource group holds. On top of that, the tool gives you a nice terminal-based UI experience.
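For illustration, invoking it looks roughly like this; the exact syntax differs between aztfy releases, so treat the commands below as an assumption and check aztfy --help:

# newer releases use subcommands; import everything in a resource group:
aztfy rg my-resource-group
# older releases took the resource group name directly:
aztfy my-resource-group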
For other hyperscalers, there are two choices.
terracognita: can generate modules too, as per their docs.
terraformer: developed by Google people, but not an official product (a rough usage sketch follows below).
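As a rough, unverified sketch of the Terraformer workflow against Azure (the resource type names and flags below are assumptions; check the project's README for the exact usage and supported resources):

# assumes Azure credentials are already exported as ARM_* environment variables
terraformer import azure --resources=resource_group,virtual_network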

How to import a remote resource while performing an apply in Terraform?

I'm using Terraform to create some resources. One of the side effects of creating the resource is the creation of another resource (let's call this B). The issue is that I can't access B to edit it in Terraform, because Terraform considers it "out of the state". I also can't import B into the state before terraform apply is started, because B does not exist yet.
Is there any solution to add (import) a remote resource to the state while running the apply command?
I'm thinking about this as a general question, if there was no solution I can also share the details of the resources I'm creating.
More details:
When I create a "Storage Account" on Azure using Terraform and enable static_website, Azure automatically creates a storage_container named $web. I need to edit one of the attributes of the $web container, but Terraform tells me it is not in the current state and needs to be imported. The Storage Account is A, the container is B.
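For context, a minimal sketch of the setup described above, with hypothetical names; the static_website block is what makes Azure create the $web container:

resource "azurerm_storage_account" "example" {
  name                     = "examplestaticsa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  static_website {
    index_document = "index.html"
  }
}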
Unfortunately I do not have an answer to your specific question of importing a resource during an apply. The fundamental premise of Terraform is that it manages resources from creation. Therefore, you need to have a resource (in this case, azurerm_storage_container) declared before you can import the current state of that resource into your state.
In an ideal world you would be able to explicitly create the container first and specify that the storage account uses it, but a quick look in the docs does not suggest that is an option (and I think it is something you have already tried). If it is not exposed in Terraform, that is likely because it is not exposed by the Azure API (disclaimer: not an Azure user).
The only (bad) answer I can think to suggest is that you define an azurerm_storage_container data resource in your code, dependent on the azurerm_storage_account resource, which will be able to pull back the details of the created container. You could then potentially have a null_resource that calls a local-exec provisioner to fire a CLI command, using the params taken from the data resource, so that the Azure CLI tools can edit the container for you.
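A minimal sketch of that workaround, assuming the azurerm_storage_container data source and the Azure CLI are available; the resource names and the metadata change are hypothetical:

data "azurerm_storage_container" "web" {
  name                 = "$web"
  storage_account_name = azurerm_storage_account.example.name

  depends_on = [azurerm_storage_account.example]
}

resource "null_resource" "patch_web_container" {
  # re-run the provisioner whenever the container id changes
  triggers = {
    container_id = data.azurerm_storage_container.web.id
  }

  provisioner "local-exec" {
    # hypothetical edit: set metadata on the $web container via the Azure CLI
    command = "az storage container metadata update --name '$web' --account-name ${azurerm_storage_account.example.name} --metadata purpose=static-site"
  }
}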
I really hope someone else can come along with a better answer tho :|

How to have terraform import all of the "already exists" resources automatically?

When I run terraform apply -auto-approve I get the following error:
Error: A resource with the ID "/subscriptions/.../resourceGroups/RG-SCUSTFStorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_resource_group" for more information.
I understand that I need to run terraform import to import the resource into my workspace. The problem is that I need to specify the resource ID for each of the missing resources one at a time.
Is there any way to have terraform import pick up all of the "already exists" resources automatically, without entering the resource IDs one at a time?
Unfortunately, you can only import the existing resources one by one, with their resource IDs, manually:
The import command doesn't automatically generate the configuration to manage the infrastructure, though. Because of this, importing existing infrastructure into Terraform is a multi-step process.
More details here. I would suggest you use remote state storage for all of your Terraform scripts before deployment. If you do not have the state file that contains all the deployed resources, then you can only import them one by one.
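For the specific error above, the one-by-one import would look something like this; the resource address and subscription ID are placeholders you need to match to your own configuration:

terraform import azurerm_resource_group.example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/RG-SCUSTFStorage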
If you are looking to import Azure resources, then aztfy is the recommended tool, as it comes natively from Azure.
It generates the Terraform code; additionally, it has a feature where you can point it at an Azure resource group, and it automatically imports and generates config for the resources that the resource group holds.
On top of that, the tool gives you a nice terminal-based UI experience.
For other hyperscalers, there are two choices.
terracognita: can generate modules too, as per their docs.
terraformer: developed by Google people, but not an official product.
There isn't a native way in Terraform to import already-existing resources automatically. However, there are a couple of tools available that allow you not only to import the resources but also to generate Terraform code for them if it doesn't already exist.
For Azure, the best tool to use is the Azure-built aztfy, a tool to bring your existing Azure resources under the management of Terraform.
Another tool that can be used to import Azure resources is Terraformer, developed by Google engineers, which also supports Azure.

How can I get existing Azure resources inside subscription in Terraform?

I would like to know how to get the existing resources at the subscription level in Terraform. As far as I understand, azurerm_resources provides them on a resource-group basis.
In principle this is the same as in How can I get active address space of tagged Azure VNets inside Terraform?, but on subscription level.
EDIT:
I think it turns out to be a problem with using
type = "Microsoft.Resources/ResourceGroups"
which somehow does not seem to be a valid type for the data source. When I changed the type back to
type = "Microsoft.Network/virtualNetworks"
the logic actually worked.
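A minimal sketch of that working query, assuming azurerm_resources is filtered by type only (no resource_group_name), which makes it search the whole subscription:

data "azurerm_resources" "vnets" {
  type = "Microsoft.Network/virtualNetworks"
}

output "vnet_names" {
  value = [for r in data.azurerm_resources.vnets.resources : r.name]
}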
the same way:
terraform import resource_type_name.example resourceId
example from official docs:
terraform import azurerm_policy_assignment.assignment1 /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyAssignments/assignment1
https://www.terraform.io/docs/providers/azurerm/r/policy_assignment.html

Update WadCfg "only" of existing Azure Service Fabric cluster?

I want to monitor performance metrics of an existing Service Fabric cluster.
Here is the link for the performance metrics:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-generation-perf
I went through this Microsoft documentation -
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-perf-wad
My problem is that the ARM template I downloaded at Service Fabric creation time is quite big and contains a lot of parameters, and I don't have the template-parameters file. I think it is possible to build the parameters file, but it would be time-consuming.
Is it possible to download the template and template-parameters file of an existing Service Fabric cluster?
If not, is it possible to just update the "WadCfg" section to add new performance counters?
You can export your entire resource group with all definitions and parameters; there you can find all parameters (as default parameters) for the resources deployed in the resource group. I've never done it for an SF cluster, but from a quick look at an existing resource group of mine, I could see the cluster definition included.
This link explains how: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-export-template
In Summary:
Find the resource group where your cluster is deployed
Open the resource group and navigate to 'Automation Scripts'
Click 'Download' on top bar
Open the ARM template with all definitions
Make the modifications and save
Publish the updates
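The same flow can also be scripted with the Azure CLI instead of the portal; a hedged sketch, with placeholder resource group and file names:

# export the current template of the resource group
az group export --name my-sf-resource-group > exported-template.json
# ...edit the WadCfg section in exported-template.json...
# redeploy the modified template to the same resource group
az deployment group create --resource-group my-sf-resource-group --template-file exported-template.json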
You could also add it to a library and deploy from there, as guided in the link above.
From the docs: Not all resource types support the export template function. To resolve this issue, manually add the missing resources back into your template.
To be honest, I've never deployed this way other than for test environments, so I am not sure if it is safe for production.
