Hashicorp Terraform Remote State and Azure

I realize that Terraform supports Azure, and I've actually been able to get Terraform working with Azure by doing the following:
Created a storage account
Created a blob container
Plugged in the access key
Created a file named backend.tfvars with resource_group_name, storage_account_name, container_name, access_key, and key values.
Added the following to main.tf:
main.tf
terraform {
  backend "azurerm" {
  }
}
I ran terraform init -backend-config="backend.tfvars"
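For reference, a backend.tfvars along those lines might look something like this (all values here are placeholders):

resource_group_name  = "my-tfstate-rg"
storage_account_name = "mytfstatestorage"
container_name       = "tfstate"
key                  = "myapp.tfstate"
access_key           = "<storage-account-access-key>"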
When I look in the blob container, I see the myapp.tfstate file, which means that I've been successful, right?
What exactly does this allow me to do? I understand that my state file is now saved in Azure, but... how does that help me? I've looked around for documentation explaining this, but for some reason haven't been able to find anything.

Charles is right that the state is "just" being stored in another place, but he is wrong that there is no difference. There is a difference, and the main one shows up when you have a team of people working with Terraform.
You see, the backend is not only used to store state, but also to signal that an operation is currently in progress. This is called locking. With centralized storage, none of your teammates can accidentally try to change resources while somebody else is already doing so.
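The azurerm backend implements this by taking a lease on the state blob. If a run crashes and leaves a stale lock behind, it can be cleared manually using the lock ID Terraform reports (the ID below is a placeholder):

terraform force-unlock <LOCK_ID>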

Actually, storing the Terraform state in an Azure Storage Account is, I think, not that different from storing it locally; it just changes where the file lives. But according to the description in the documentation:
By default, data stored in an Azure Blob is encrypted before being persisted to the storage infrastructure. When Terraform needs state, it is retrieved from the backend and stored in memory on your development system. In this configuration, the state is secured in Azure Storage and not written to your local disk.
So it seems there is still some small benefit for the security of the data.

Related

Restore database on another server in azure with terraform

How do you restore an azure sql database using terraform on another server from a backup?
Terraform docs talk about a create mode "RestoreExternalBackup". How could one use that?
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_database
I researched this issue and found it to be a discrepancy between the Terraform and Azure documentation. I also opened an issue on the GitHub repo with this info.
RestoreExternalBackup is listed as a possible value for CreateMode in the Azure API documentation for databases. However, the create mode documentation doesn't describe how to use it, and this option should not actually be available for regular databases.
Looking at the Managed Database documentation, it clearly defines how to use the RestoreExternalBackup option. Oddly enough, the Terraform documentation doesn't list any create modes for managed databases. https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_managed_database
When trying to use RestoreExternalBackup to create a database, the error message "Missing storage container URI" indicates that the option requires a pointer to a storage account. Storage account information is not a valid property when creating a database resource, only for a managed database resource.
Database = https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP
Managed Database Docs = https://learn.microsoft.com/en-us/rest/api/sql/2022-05-01-preview/managed-databases/create-or-update?tabs=HTTP

How to import a remote resource while performing an apply in Terraform?

I'm using Terraform to create some resources. One of the side effects of creating the resource is the creation of another resource (let's call this B). The issue is that I can't access B to edit it in Terraform, because Terraform considers it "out of the state". I also can't import B into the state before terraform apply starts, because B does not exist yet.
Is there any solution to add (import) a remote resource to the state while running the apply command?
I'm thinking about this as a general question; if there is no general solution, I can also share the details of the resources I'm creating.
More details:
When I create a "Storage Account" on Azure using Terraform and enable static_website, Azure automatically creates a storage_container named $web. I need to edit one of the attributes of the $web container but Terraform tells me it is not in the current state and needs to be imported. Storage Account is A, Container is B
Unfortunately I do not have an answer to your specific question of importing a resource during an apply. The fundamental premise of Terraform is that it manages resources from creation. Therefore, you need to have a resource (in this case, an azurerm_storage_container) declared before you can import the current state of that resource into your state.
In an ideal world you would be able to explicitly create the container first and specify that the storage account uses it, but a quick look at the docs does not suggest that is an option (and I think it is something you have already tried). If it is not exposed in Terraform, that is likely because it is not exposed by the Azure API (disclaimer: I am not an Azure user).
The only (bad) answer I can think to suggest is that you define an azurerm_storage_container data source in your code, dependent on the azurerm_storage_account resource, that will be able to pull back the details of the created container. You could then potentially have a null_resource that calls a local-exec provisioner to fire a CLI command, using parameters taken from the data source, so that you can use the Azure CLI tools to edit the container.
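A very rough sketch of that idea, assuming a provider version that offers the azurerm_storage_container data source and that static_website is already enabled on the account (the CLI command and the attribute being changed are purely illustrative, so adjust them to whatever you actually need to edit):

data "azurerm_storage_container" "web" {
  name                 = "$web"
  storage_account_name = azurerm_storage_account.example.name

  # Make sure the account (and therefore the $web container) exists first.
  depends_on = [azurerm_storage_account.example]
}

resource "null_resource" "patch_web_container" {
  triggers = {
    container_id = data.azurerm_storage_container.web.id
  }

  provisioner "local-exec" {
    # Illustrative only: tweak the container with the Azure CLI, e.g. its public access level.
    command = "az storage container set-permission --name '$web' --account-name ${azurerm_storage_account.example.name} --public-access blob"
  }
}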
I really hope someone else can come along with a better answer tho :|

Does terraform support AWS BACKUP restore feature?

Does Terraform support the AWS Backup feature for restoring an image from a vault (https://www.terraform.io/docs/providers/aws/r/backup_plan.html)?
As I read the documentation, I can see that it supports creating a backup plan, assigning resources and a policy, and creating a vault, but it does not support restoring an image or EBS volume.
How do I add the restore block in my Terraform template?
Terraform's execution model is designed for translating declarative descriptions of an intended state into imperative actions to reach that state automatically, and so its model doesn't really support "exceptional" processes like restoring backups.
However, you can develop a process for restoring backups alongside Terraform whereby the main restore action is done using the AWS Console, AWS CLI, or API in your own automation, and then you inform Terraform after the fact that it should use the restored object via its state manipulation commands.
For example, if you have an EBS volume managed by Terraform using an aws_ebs_volume resource, you might also use Terraform to configure an AWS Backup plan for that volume, and then backups will be created automatically as per your plan.
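As a rough sketch, that combination might look like the following (resource names, the schedule, and the IAM role are placeholders; the backup role is assumed to be defined elsewhere):

resource "aws_ebs_volume" "example" {
  availability_zone = "us-west-2a"
  size              = 100
}

resource "aws_backup_vault" "example" {
  name = "example-backup-vault"
}

resource "aws_backup_plan" "example" {
  name = "example-backup-plan"

  rule {
    rule_name         = "daily"
    target_vault_name = aws_backup_vault.example.name
    schedule          = "cron(0 5 * * ? *)"
  }
}

resource "aws_backup_selection" "example" {
  name         = "example-selection"
  plan_id      = aws_backup_plan.example.id
  iam_role_arn = aws_iam_role.backup.arn # assumed to be defined elsewhere

  resources = [aws_ebs_volume.example.arn]
}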
In the exceptional situation where your existing volume is lost or corrupted and you want to restore the backup, the person responding to the incident can follow this process:
Create an AWS Backup restore job either using the AWS Console, the AWS CLI, or some software of your own design using the AWS Backup API.
Once the backup job is complete, consult the CreatedResourceARN to find the id of the new object that was created by restoring the backup. In the case of an EBS volume, this will be the final part of the ARN, after the :volume/ separator (see the CLI example after these steps).
Tell Terraform to "forget" the existing EBS volume object that is now destroyed or damaged:
terraform state rm aws_ebs_volume.example
Tell Terraform to import the object created by restoring the backup as the new remote object associated with the Terraform resource:
terraform import aws_ebs_volume.example vol-049df61146c4d7901
If your old EBS volume is still present but corrupted or otherwise damaged, the final step would be to locate and manually destroy the remnant of it, because Terraform is no longer managing it and it would otherwise be left in place forever.
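One way to look up the CreatedResourceARN once the restore job finishes is the AWS CLI (the job ID below is a placeholder for the one returned when you started the restore):

aws backup describe-restore-job --restore-job-id <restore-job-id>

The output should include the CreatedResourceArn field used in the import step above.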
After this process is complete, Terraform will consider the new object to be the one managed by that resource, and you can use Terraform as normal with that resource moving forward. The same principle applies to any of the object types supported by AWS Backup, as long as they have a resource type in the AWS provider that supports terraform import.

Accessing Azure Storage Blob from an AKS cluster

A little context: I'm having to migrate a project from AWS, where I'm currently using ECS, to Azure, where I'll be using AKS since their ACS (ECS equivalent) is deprecated.
This is a regular Django app whose configuration variables are fetched from a server-config.json hosted in a private S3 bucket; the EC2 instance has the correct role with S3FullAccess.
I've been looking into reproducing that same behavior but with Azure Blob Storage instead, having achieved no success whatsoever :-(.
I tried using the Service Principal concept and adding it to the AKS cluster with the Storage Blob Data Owner role, but that doesn't seem to work. Overall it's been quite the frustrating experience; maybe I'm just having a hard time grasping the right way to use the permissions/scopes. The fact that the AKS cluster creates its own resource group is something unfathomable, but I've attempted attaching the policies to it as well, to no avail. I then moved on to a solution indicated by Microsoft.
I managed to bind my AKS pods with the correct User Managed Identity through their indicated solution aad-pod-identity, but I feel like I'm missing something. I assigned Storage Blob Data Owner/Contributor to the identity, but still, when I enter the pods and try to access a Blob (using the python sdk), I get a resource not found message.
Is what I'm trying to achieve possible at all? Or will I have to change to a solution using Azure Keyvault/something along those lines?
First of all, you can use AKS Engine, which is more or less ACS for Kubernetes now.
As for access to the blob storage, you don't have to use Managed Service Identity; you can just use the account name/key (which is a bit less secure, but a lot less error prone, and more examples exist). The fact that you are getting a resource not found error most likely means your auth part is fine and you just don't have access to the resource. According to this, Storage Blob Data Contributor should be fine if you assigned it at the proper scope. For this to work 100%, just give your identity Contributor access at the subscription level; that way it's guaranteed to work.
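If you are managing this with Terraform anyway, a hedged sketch of such a role assignment at the storage account scope could look like this (the identity and storage account resource names are assumptions):

resource "azurerm_role_assignment" "aks_blob_access" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = azurerm_user_assigned_identity.aks_pods.principal_id
}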
I've found an example of using Python with MSI (here). You should start with that (and grant your identity Contributor access) and verify you can list resource groups. Once that works, getting blob reads working should be trivial.

Make Azure storage account and container before running terraform init?

Correct me if I'm wrong, but when you run terraform init you are asked to name a storage account and container for the Terraform state.
Can these also be created automatically with Terraform?
Edit: I'm using Azure.
I usually split my terraform configurations into two parts.
One that creates a storage account with a container, with a specific tag (tf=backend, for example). The second one creates all other resources. I share a backend.tfvars between the two, and in the second one I get the storage account key using the Azure CLI and the previously set tag (that way I don't have to fetch the key and pass it manually to my second script).
You could even migrate the state of the first terraform configuration once deployed, if you don't want to rely on a local state
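A rough sketch of that first configuration might look like the following (resource names, location, and the storage account name are placeholders; storage account names must be globally unique):

resource "azurerm_resource_group" "tfstate" {
  name     = "tfstate-rg"
  location = "westeurope"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "mytfstatestorage"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    tf = "backend"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}

The second configuration can then look up the account key with the Azure CLI (for example az storage account keys list) before running terraform init with the shared backend.tfvars.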
Yes, absolutely (this answer uses AWS and S3, but the same pattern applies to an Azure storage account and container). You would in general want an S3 bucket for each of your environments, although it's also possible to have a bucket shared across all environments and then set up access controls using bucket policies. Don't create this bucket as part of provisioning other resources, as their lifecycles will likely be different (you would want to retain the bucket for a long time and would be unlikely to want to destroy it).
What you do is you define this bucket in Terraform using local state first. After it is created, you add a remote backend pointing to this bucket.
terraform {
  required_version = ">= 0.11.7"

  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "s3_state_bucket"
    region  = "us-west-2"
    encrypt = "true"
  }
}
After you run terraform init, Terraform will ask if you want to migrate the local state file to S3. Answer yes, and after this completes you can delete the local state file, as it's no longer used.
This approach allows you to break out of this chicken-and-egg situation and still manage all of your infrastructure as code, rather than creating it manually using the web console or bash scripts.
