I have a Terraform Cloud account connected to a Git repo (a VCS-backed workspace), so I can only use the VCS-driven run workflow. I have a VM with an attached volume, and I would like to recreate the volume from scratch (yes, losing all data). I have read about the -replace plan option, but it cannot be used in my workspace.
So, what is the best option for re-creating a volume with a Terraform VCS-backed workspace?
By the way, I'm using OpenStack as the cloud infrastructure and the official terraform-provider-openstack/openstack provider.
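For context, if the workspace were CLI-driven, this is roughly the command I would use (the resource address is just an example, assuming the volume is called data in the configuration):

terraform apply -replace="openstack_blockstorage_volume_v3.data"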
I would like to know if there is any way I can create Azure Databricks mount points using the Azure Databricks Resource Provider. Various Azure Service Principals are used to give access to various mount points in ADLS Gen2.
So can these mount points be set up in Databricks with the right Service Principal access? Can this be done using Terraform, or what is the best way to do this?
Thanks
You can't do it with the azurerm provider, as it works only with Azure-level objects, and a DBFS mount is specific to Databricks. But the Databricks Terraform provider has the databricks_mount resource that is designed for exactly that task. Just take into account that, because there is no such thing as a "mount API", mounting is performed by spinning up a small cluster and running the dbutils.fs.mount command inside it.
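A minimal sketch of such a mount for ADLS Gen2 looks roughly like this (the mount name, storage account, container, and secret scope names below are placeholders; the client secret is read from a Databricks secret scope):

resource "databricks_mount" "adls" {
  name = "raw-data"  # placeholder, the mount appears under /mnt/raw-data
  abfs {
    storage_account_name   = "mystorageaccount"  # placeholder
    container_name         = "raw"               # placeholder
    tenant_id              = var.tenant_id
    client_id              = var.client_id        # service principal used for the mount
    client_secret_scope    = "sp-secrets"         # placeholder secret scope
    client_secret_key      = "sp-client-secret"   # placeholder key inside that scope
    initialize_file_system = true
  }
}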
P.S. Mounts are really not recommended anymore, because all users of the workspace will have access to the mount's content using the permissions of the service principal that was used for mounting.
I initially saved the Terraform state for multiple workspaces (one per environment) locally, as that part of the infrastructure was intended to be updated very rarely. However, I now want to store the state for each workspace in a separate Azure storage container (one per environment).
What is the best way to move the Terraform state (per workspace) into its own remote storage, i.e. an Azure storage container?
Options I tried:
Using terraform init --migrate-state with the backend pointing at remote storage seems to move the state files for all workspaces into the same storage container (e.g. dev); see the sketch after this list. After this, using terraform workspace select would not let me re-run terraform init to point to the remote storage for the next workspace/environment.
Trying to use multiple backend files, one per environment. Since we cannot have multiple backend blocks in the same configuration, this is not an option.
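For reference, this is roughly the backend block and command from the first attempt (all names are placeholders); with this, every workspace's state ended up as a separate blob in the same dev container:

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"  # placeholder
    storage_account_name = "sttfstate"           # placeholder
    container_name       = "tfstate-dev"         # placeholder, the single container everything landed in
    key                  = "infra.tfstate"
  }
}

terraform init --migrate-state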
Can anyone suggest any other solutions to this? I want to avoid duplicating the code for each environment into separate folders and then reconfiguring the terraform state to point to the respective storage container.
Thanks!
I'm new to Terraform and am trying to use it to create and configure an entire project from scratch. We're currently treating one Google project as one environment.
It seems reasonable to store the Terraform remote state inside a bucket in the project that it is configuring, i.e. have Terraform create the Google Cloud project, create a bucket, and then store its own remote state in the bucket it just created. That also seems very advanced and potentially chicken-and-egg.
Is it possible to store a Terraform configuration's remote state in the project that the configuration itself is creating?
You could use Terraform to create the project and bucket and then migrate the state into that bucket. But this is a chicken-and-egg scenario that raises the question: what happens if you need to delete or rebuild the bucket containing the state?
A more sensible approach would be to manually create a master project and remote state bucket. From there you would have a base for a project vending machine to spin up new projects and baseline config.
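If you do want the first route, a minimal sketch of the two-step bootstrap looks like this (the bucket name is a placeholder): create the bucket with local state first, then add the backend block and migrate.

resource "google_storage_bucket" "tf_state" {
  name     = "my-org-terraform-state"  # placeholder, bucket names are globally unique
  location = "US"
  versioning {
    enabled = true
  }
}

# Once the bucket exists, add this block and run `terraform init -migrate-state`:
terraform {
  backend "gcs" {
    bucket = "my-org-terraform-state"  # same placeholder name as above
    prefix = "bootstrap"
  }
}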
No - you cannot do this. After days of research, this isn't something that's possible.
As part of the IaC workflow we are implementing through Terraform, for some of the common resources we provision for users, we want to create a centralized remote state store. We are using the Azure cloud, so the default choice is Azure Blob Storage. We were initially thinking of creating one storage container per pipeline and storing the state there. But then there was another thought: create one container with a directory structure per pipeline and store the state there. I understand blob storage is a flat namespace by default, but Azure Storage also gives the option to enable a hierarchical file structure with ADLS Gen2. Has anyone attempted to store Terraform state with the hierarchical file system structure enabled in Azure? Is that a valid option at all? Also, can anyone suggest what the recommended approach would be in my scenario?
Thanks
Tintu
I have never tried ADLS Gen2's hierarchical feature. But since your requirement is to save the state files in the same container but within different folders, you can try specifying a different folder structure while configuring the backend in backend.tf:
terraform init -backend-config="key=$somePath/<tfstate-file-name>.tfstate"
And pass different somePath values from different backend.tfvars files.
I hope this answers your question!
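For example, with a partial backend configuration (all names below are placeholders), each pipeline supplies its own path at init time:

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"  # placeholder
    storage_account_name = "sttfstate"           # placeholder
    container_name       = "tfstate"             # single shared container
    # key is supplied per pipeline at init time
  }
}

terraform init -backend-config="key=pipeline-a/network.tfstate"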
I just installed a DC/OS Cluster on Azure using Terraform. Now I was wondering if it's possible to automatically mount Data Disks of agent nodes under /dcos/volume<N>. As far as I understood the docs, this is a manual task. Wouldn't it be possible to automate this step with Terraform? I was looking through the DC/OS docs and Terraform docs but I couldn't find anything related to auto mounting.
It seems you can only mount the data disks to the AKS nodes manually as a volume. It's a Kubernetes task, not Azure's; Azure can only manage the data disks for you.
What you can do through Terraform is attach the data disk to the AKS node itself as a disk, not as a volume in AKS. The volume itself can only be created through Kubernetes, not Azure, so Terraform cannot automate that part for you either.
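For the Terraform part (attaching a data disk to an agent node), a minimal sketch looks like this; the resource names, sizes, and the reference to the agent VM are placeholders, and formatting/mounting the disk inside the OS still has to happen outside Terraform:

resource "azurerm_managed_disk" "agent_data" {
  name                 = "agent-data-disk"  # placeholder
  location             = azurerm_resource_group.example.location  # placeholder resource group
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 128
}

resource "azurerm_virtual_machine_data_disk_attachment" "agent_data" {
  managed_disk_id    = azurerm_managed_disk.agent_data.id
  virtual_machine_id = azurerm_virtual_machine.agent.id  # placeholder reference to the agent node VM
  lun                = 0
  caching            = "ReadWrite"
}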