How To Capture information about Virtual Machine resources that will be destroyed? - terraform

Background
I was kind of dropped into an IaC project that uses Packer => Terraform => Ansible to create RHEL virtual machines on an on-prem VMware vSphere cluster.
Our VMware module registers output variables that we use once the VMs are created; those variables feed a local_file resource template to build an Ansible inventory with the VM names and some other variables.
Ansible is then run via local-exec with the inventory created above to perform configuration actions and run scripts, both on the newly deployed VMs and against some external management applications, for example to join the VM to a domain (FreeIPA, sadly no good TF provider is available).
Issue Description
The issue I have been wrestling with: when we run a terraform destroy (or an apply with VM count changes that destroys a VM resource), we would like to be able to repeat the process in reverse.
That is, capture the names of the VMs to be destroyed (the output vars from resource creation) so they can be removed from the IPA domain and some general cleanup can be done.
We've tried different approaches with destroy-time provisioners, and it seems like making that work would require a fundamental change to the approach outlined above.
Question
I'm wondering if there is a way to get an output variable on destroy that could be used to populate a list of the VMs that would be removed.
So far my search has turned up nothing. Thanks for your time.

In general, it is good to plan first, even when destroying:
terraform plan -destroy -out tfplan
Then you can proceed with the destroy:
terraform apply tfplan
At this point (before the actual destroy), you have a plan of what will be destroyed, and you can run any analysis or automation on it. Example:
terraform show -json tfplan | jq '.' > tfplan.json
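For example, to pull out just the VM names that are planned for deletion, you can filter the resource_changes list in the JSON plan. A sketch; the resource type and its name attribute assume the vSphere provider from the question:
terraform show -json tfplan | \
  jq -r '.resource_changes[]
         | select(.change.actions | index("delete"))
         | select(.type == "vsphere_virtual_machine")
         | .change.before.name'
The resulting list can then be fed to a cleanup script or an Ansible playbook that removes the hosts from FreeIPA before you run terraform apply tfplan.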
Source:
https://learn.hashicorp.com/tutorials/terraform/plan

Related

Is Terraform Destroying Manually created resources?

I have created some resources in Azure using Terraform, such as VNETs, VMs, NSGs, etc. Let's assume I manually create another VM in the same VNET that was created by Terraform. If I rerun the Terraform script, will the manually created VM get destroyed, since it is not in the state file?
No, Terraform does not interfere with resources that are created outside of Terraform. It only manages resources that are included in its state file.
However, if you make manual changes to resources that you created through Terraform (for example the VNET in your case), Terraform would reset them to what is declared in the Terraform code on the next run/execution.
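If you want to double-check which resources Terraform actually manages (and will therefore consider for changes or destruction), you can list what is in the state; and if you later decide that the manually created VM should be managed by Terraform, it can be brought into the state with terraform import.
terraform state list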

How to connect Terraform and Ansible together?

I have written a Terraform script to spin up infrastructure on Azure. I have also written an Ansible script to patch the VMs launched on Azure with the latest updates. But I am not able to automate the process of patching the VMs once they get launched.
You can use provisioners in Terraform to execute Ansible playbooks on the provisioned VM. I'm not sure about your Terraform version, but the code below might help. Keep in mind that provisioners are to be used as a last resort.
provisioner "local-exec" {
command = "ansible-playbook -u user -i '${self.public_ip},' --private-key ${var.ssh_key_private} provision.yml"
}
https://www.terraform.io/docs/language/resources/provisioners/syntax.html
To have end-to-end automation in which Ansible is run when the instances are launched (and/or at every restart), you can pass cloud-init configuration in from Terraform. This is nice because that config may reference other parts of your infrastructure, which Terraform's dependency resolution will sort out. You would do this by providing a Terraform cloudinit_config data source to the custom_data argument of the Azure VM in Terraform.
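A minimal sketch of that wiring (the resource names, the playbook location, and the run command are illustrative; the point is only the cloudinit_config to custom_data hand-off):
data "cloudinit_config" "ansible" {
  gzip          = false
  base64_encode = true  # azurerm custom_data expects base64-encoded input

  part {
    content_type = "text/cloud-config"
    content      = <<-EOT
      #cloud-config
      runcmd:
        - ansible-pull -U https://example.com/ansible-repo.git site.yml
    EOT
  }
}

resource "azurerm_linux_virtual_machine" "example" {
  # ... name, size, image, network_interface_ids, admin credentials, etc.
  custom_data = data.cloudinit_config.ansible.rendered
}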
On the Ansible side you can also use the Azure dynamic inventory. With this dynamic inventory you add tags to your resources in Terraform in such a way that they can be filtered and grouped into the Ansible inventory when Ansible is run. This is helpful if the Ansible tasks need to gather facts from hosts.
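On the Terraform side, the tagging half of that setup might look like this sketch (the tag keys and values are illustrative; the azure_rm dynamic inventory plugin on the Ansible side can then filter and group hosts by them):
resource "azurerm_linux_virtual_machine" "example" {
  # ... other arguments as above ...

  tags = {
    ansible_group = "web"
    patch_window  = "sunday"
  }
}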

How can I run specific VMs using terraform

I wrote infrastructure as code using Terraform and applied it successfully on Azure. Now I have added another 3 VMs using the same networking file and variable file that were already used in the previous IaC. How can I apply only these 3 VMs, without generating new errors or "already exists" messages, on the same subscription and the same variable/networking configuration?
Thanks
If I understand correctly, you can use resource targeting:
terraform apply -target=RESOURCE_ADDRESS
Note that -target respects dependencies.
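For example, assuming the three new VMs are instances of a counted resource (the address azurerm_virtual_machine.extra is illustrative):
terraform apply \
  -target='azurerm_virtual_machine.extra[0]' \
  -target='azurerm_virtual_machine.extra[1]' \
  -target='azurerm_virtual_machine.extra[2]'
Terraform will then only plan those three VMs plus anything they depend on, and leave the rest of the existing infrastructure untouched.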

Delete instance from scaleset using terraform

I am trying to remove a particular instance from my scaleset using terraform. I know there is a REST API for this:
https://learn.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets/deleteinstances
However, the page for the Azure Terraform provider doesn't really mention this anywhere.
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html
How do I do this with Terraform?
When managing a virtual machine scale set with Terraform, Terraform does not interact with the individual instances at all. Instead, it updates the settings of the scale set to match what you've written in configuration and then lets the scale set itself respond to that new configuration appropriately.
For example, if you wish to have fewer instances of a particular SKU then you might edit your Terraform configuration to have a lower value for the capacity argument for that SKU and run terraform apply. If you accept that plan, Terraform will update the scale set to have a lower capacity and then the remote scale set system will decide how to respond to that.
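In configuration terms, that change is just the capacity value in the sku block of the scale set resource; a sketch with illustrative names and sizes:
resource "azurerm_virtual_machine_scale_set" "example" {
  # ... name, resource_group_name, location, upgrade_policy_mode, profiles, etc.

  sku {
    name     = "Standard_F2"
    tier     = "Standard"
    capacity = 2  # lowered from a previous value; the scale set decides which instance to remove
  }
}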
To delete something Terraform is managing, like the scale set itself, we would remove it from the configuration and run terraform apply. Because Terraform is not managing the individual instances in this scale set, we can't tell Terraform to delete them directly. If you need that sort of control then you'd need to either manage the virtual machines directly with Terraform (not using a scale set at all) or use a separate tool (outside of Terraform) to interact with the API you mentioned.

Cyclic dependency between Packer and Terraform for non-default VPC

My deployment workflow is to first create an AMI with Packer, then deploy it using Terraform.
I have an EC2-Classic account, which was created before 2013, so there's no default VPC configured.
When I run packer build packer.json, the tool complains that
amazon-ebs: Adding tag: "Name": "Packer Builder"
==> amazon-ebs: Error launching source instance: VPCResourceNotSpecified: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.
==> amazon-ebs: status code: 400, request id: 35ca5736-f808-4bb9-9a34-3dca24b59259
I was planning to create the VPC with Terraform. So the question is, what is the order of execution? Run Terraform first, then Packer? Or run in the reverse order? Or do we split out the network configuration (VPC), use Terraform to deploy it once, then follow with Packer, and then Terraform the rest of the servers?
Update:
If I use the strategy of running the network module (mostly static things), followed by Packer, and then running the "frequently changing things" module, how do I share state between Terraform and Packer? Meaning, once I have created a new VPC, how do I let Packer know about this new vpc_id? Do I need to modify every Packer file?
The general advice is to split the Terraform configuration into reasonably sized parts.
For a small setup it's reasonable to split it into mostly static things (VPC, subnets, routes, etc.) and frequently changing things (EC2, SGs, etc.). This would also solve your dependency cycle.
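As for letting Packer know about the new vpc_id, one common pattern is to expose it (and the subnet ID) as outputs of the network configuration and pass them to Packer on the command line, so no Packer file has to be edited. A sketch, assuming the output names and the Packer user variables vpc_id/subnet_id are defined on your side (terraform output -raw requires Terraform 0.15 or newer):
# run from the network configuration's directory after terraform apply
packer build \
  -var "vpc_id=$(terraform output -raw vpc_id)" \
  -var "subnet_id=$(terraform output -raw subnet_id)" \
  ../packer.json
The amazon-ebs builder can then use those user variables for its vpc_id and subnet_id settings.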
