I wrote infrastructure as code using Terraform and applied it successfully on Azure. Now I have added another 3 VMs using the same networking file and variables file already used in the previous IaC. How can I apply only these 3 VMs, without generating new errors or "already exists" messages, so they are created in the same subscription with the same variable/networking configuration?
Thanks
If I understand correctly, you can use
terraform apply -target=<resource address>
Be aware that the -target option respects dependencies.
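For example, assuming the three new VMs are azurerm_linux_virtual_machine resources named vm4 through vm6 (hypothetical addresses — substitute your own resource names):

```shell
# Apply only the three new VM resources; Terraform will also include
# anything they depend on (subnet, NIC, etc.) but leave the rest alone.
terraform apply \
  -target=azurerm_linux_virtual_machine.vm4 \
  -target=azurerm_linux_virtual_machine.vm5 \
  -target=azurerm_linux_virtual_machine.vm6
```

Note that if the earlier resources are still tracked in the same state file, a plain terraform apply would also only create the three new VMs, since the existing ones already match the state.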
Related
I have created some resources in Azure using Terraform, such as VNets, VMs, NSGs, etc. Let's assume I manually create another VM in the same VNet that was created by Terraform. If I rerun the Terraform script, will the manually created VM get destroyed, since it is not in the state file?
No, Terraform does not interfere with resources that are created outside of Terraform. It only manages resources that are included in its state file.
However, if you make manual changes to resources that you created through Terraform (for example, the VNet in your case), Terraform will reset them to what is declared in the Terraform code on the next run.
Background
I was kind of dropped into an IaC project that uses Packer => Terraform => Ansible to create RHEL virtual machines on an on-prem VMware vSphere cluster.
Our VMware module registers output variables that we use once the VMs are created; those variables feed a local_file resource template to build an Ansible inventory with the VM names and some other variables.
Ansible is then run using a local-exec provisioner with the inventory created above to perform configuration actions and run scripts, both on the newly deployed VMs and against some external management applications, for example to join the VMs to a domain (FreeIPA; sadly, no good Terraform provider is available).
Issue Description
The issue I have been wrestling with is that when we run a terraform destroy (or an apply with VM count changes that destroys a VM resource), we would like to be able to repeat the process in reverse:
Capture the names of the VMs to be destroyed (the output variables from resource creation) so they can be removed from the IPA domain and given some general cleanup.
We've tried different approaches with destroy-time provisioners, and it seems like making that work would require a fundamental change to the approach outlined above.
Question
I'm wondering if there is a way to get an output variable on destroy that could be used to populate a list of the VMs that will be removed.
So far my search has turned up nothing. Thanks for your time.
In general, it is good to plan first, even when destroying:
terraform plan -destroy -out tfplan
Then you can proceed with the destroy:
terraform apply tfplan
At this point (before the actual destroy), you have a plan of what will be destroyed, and you can do any analysis or automation on it. Example:
terraform show -json tfplan | jq > tfplan.json
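From that JSON you can pull out exactly the resources the plan will destroy. A sketch, assuming jq is installed:

```shell
# Each entry in .resource_changes records the planned actions for one
# resource; filter for the ones that include "delete" and print the address.
terraform show -json tfplan \
  | jq -r '.resource_changes[]
           | select(.change.actions | index("delete"))
           | .address'
```

You could feed that list to your IPA cleanup scripts before running terraform apply tfplan.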
Source:
https://learn.hashicorp.com/tutorials/terraform/plan
Learning Terraform, and in one of the tutorials for Terraform with Azure a requirement was to log in with the az client. My understanding is that this was to create a Service Principal.
I was trying this with GitHub Actions, and my assumption was that the properties obtained for the Service Principal would be enough to authenticate. When I ran terraform plan, everything worked out fine.
However, when I tried terraform apply, it failed until I explicitly added an az login step to the GitHub workflow job.
What am I missing here? Does terraform plan only compare the new configuration file against the state file, not the actual account? Or does it verify the state against the resource group/subscription in Azure?
I was a little confused by the documentation on terraform plan.
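For reference, a common way to avoid the az login step is to pass the service principal credentials to the azurerm provider through environment variables; a sketch, where the placeholder values are assumptions to be replaced with your own:

```shell
# The azurerm provider reads these ARM_* variables for service principal
# authentication, so both plan and apply can run without an az login step.
export ARM_CLIENT_ID="<appId of the service principal>"
export ARM_CLIENT_SECRET="<client secret>"
export ARM_SUBSCRIPTION_ID="<subscription id>"
export ARM_TENANT_ID="<tenant id>"
terraform plan
```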
I am using Terraform scripts to create Azure services. I have some doubts regarding Terraform:
1) If I have one environment, let's say dev, in Azure with some resources, how can I copy all the resources to a new environment, let's say prod, using a Terraform script?
2) What is the impact of re-running the Terraform files with additional Azure resources? What will it do?
3) What if I want to create an app service from the Terraform script with the same name as one already present in Azure? Will it update the resource, or do nothing, after the Terraform execution completes?
Please feel free to answer the questions; it will be a great help.
To answer your questions:
You could create a new workspace with terraform workspace new and copy all configuration files (.tf) to the new environment, then run terraform init, plan, and apply.
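A minimal sketch of that flow, assuming the new environment is called prod:

```shell
terraform workspace new prod   # creates and switches to the "prod" workspace
terraform init
terraform plan -out tfplan     # review what will be created in prod
terraform apply tfplan
```

Each workspace keeps its own state file, so the prod resources are tracked separately from dev.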
Terraform will compare the contents of your current state file with your configuration files, then update changed attributes or create the new resources, rather than re-creating the existing resources.
You could run terraform import to bring existing infrastructure under Terraform management. To reference existing resources in the portal, you can use data sources.
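For example, importing an existing app service into state (the resource address and resource ID below are hypothetical — use your own):

```shell
# Maps the existing Azure resource to the azurerm_app_service.example
# address in your configuration; Terraform manages it from then on.
terraform import azurerm_app_service.example \
  "/subscriptions/<subscription-id>/resourceGroups/example-rg/providers/Microsoft.Web/sites/example-app"
```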
My deployment workflow is to first create an AMI with Packer, then deploy it using Terraform.
I have an EC2-Classic account, which was created before 2013, so there's no default VPC configured.
When I run packer build packer.json, the tool complains:
amazon-ebs: Adding tag: "Name": "Packer Builder"
==> amazon-ebs: Error launching source instance: VPCResourceNotSpecified: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
==> amazon-ebs: status code: 400, request id: 35ca5736-f808-4bb9-9a34-3dca24b59259
I was planning to create the VPC with Terraform. So the question is, what is the order of execution? Run Terraform first, then Packer? Or in reverse order? Or should we split out the network configuration (VPC), use Terraform to deploy it once, then run Packer, and then Terraform the rest of the servers?
Update:
If I use the strategy:
run Network module (mostly static things), followed by Packer, and then run "Frequently changing things" module, how do I share state between Terraform and Packer? Meaning, once I created a new VPC, how do I let Packer know about this new vpc_id? Do I need to modify every Packer file?
The general advice is to split the Terraform configuration into reasonably sized parts.
For a small setup it's reasonable to split it into mostly static things (VPC, subnets, routes, etc.) and frequently changing things (EC2 instances, security groups, etc.). This would also solve your dependency cycle.
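One common way to share that state with Packer is to expose the network IDs as Terraform outputs and feed them to Packer as user variables. A sketch, with assumed output and variable names (vpc_id, subnet_id) and an assumed network/ directory for the static module:

```shell
# Read outputs from the already-applied network configuration...
VPC_ID=$(terraform -chdir=network output -raw vpc_id)
SUBNET_ID=$(terraform -chdir=network output -raw subnet_id)

# ...and pass them to Packer, where packer.json would reference them
# as {{user `vpc_id`}} and {{user `subnet_id`}} in the amazon-ebs builder.
packer build \
  -var "vpc_id=${VPC_ID}" \
  -var "subnet_id=${SUBNET_ID}" \
  packer.json
```

With this approach the Packer templates stay generic, and only the wrapper script needs to know where the network state lives.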