Delete instance from scaleset using terraform - azure

I am trying to remove a particular instance from my scaleset using terraform. I know there is a REST API for this:
https://learn.microsoft.com/en-us/rest/api/compute/virtualmachinescalesets/deleteinstances
However, the documentation for the Terraform azurerm provider doesn't really mention this anywhere.
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine_scale_set.html
How do I do this with Terraform?

When managing a virtual machine scale set with Terraform, Terraform does not interact with the individual instances at all. Instead, it can update the settings of the scale set to match what you've written in configuration and then let the scale set itself respond to that new configuration appropriately.
For example, if you wish to have fewer instances of a particular SKU then you might edit your Terraform configuration to have a lower value for the capacity argument for that SKU and run terraform apply. If you accept that plan, Terraform will update the scale set to have a lower capacity and then the remote scale set system will decide how to respond to that.
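As a sketch of that, here is a hypothetical fragment of an azurerm_virtual_machine_scale_set resource (all names and values are made up, and the other required blocks are omitted); only the capacity argument is the point:

```hcl
# Hypothetical scale set; only the sku block is relevant here.
resource "azurerm_virtual_machine_scale_set" "example" {
  name                = "example-vmss"
  location            = "westeurope"
  resource_group_name = "example-rg"
  upgrade_policy_mode = "Manual"

  sku {
    name     = "Standard_F2"
    tier     = "Standard"
    capacity = 2 # was 3; lowering this and running `terraform apply`
                 # lets the scale set decide which instance to remove
  }

  # ... os_profile, storage and network blocks omitted ...
}
```

Note that Terraform only tells the scale set its new desired capacity; which instance actually gets removed is decided by the scale set, not by Terraform.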
To delete something Terraform is managing, like the scale set itself, we would remove it from the configuration and run terraform apply. Because Terraform is not managing the individual instances in this scale set, we can't tell Terraform to delete them directly. If you need that sort of control then you'd need to either manage the virtual machines directly with Terraform (not using a scale set at all) or use a separate tool (outside of Terraform) to interact with the API you mentioned.

Related

Does terraform guarantee that if no changes were reported by plan, it will be able to recreate resources the same way they currently are?

I have a lot of resources in my Azure subscription. I want to manage them with Terraform, so I want to import them using terraform import. I import every resource manually, one by one. Then I run terraform plan and check that no changes are reported, i.e. that the current infrastructure matches the configuration.
Does this mean that if I were to manually delete some of the resources via the Azure portal or CLI, I would be able to recreate them with terraform apply perfectly, so that they would have exactly the same configuration as before and would operate in exactly the same way?
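The import-then-verify workflow described above might look like this (the resource address and ID are placeholders, not real values):

```shell
# Import one existing object into Terraform state, then repeat per resource.
terraform import azurerm_resource_group.example \
  /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg

# Verify that configuration and reality now agree.
terraform plan
```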
In general Terraform cannot guarantee that destroying an object and recreating it will produce an exactly equivalent object.
It is possible for that to work, but it requires a number of things to be true, including:
1) Your configuration specifies the values for resource arguments exactly as they are in the remote API. For example, if a particular resource type has a case-insensitive (but case-preserving) name, a provider will typically ignore differences in case when planning changes, but on creation it will use exactly the case you wrote in the configuration, potentially producing a name with different casing than before.
2) The resource type does not include any "write-only" arguments. Some resource types have arguments that are used only by the provider itself, and so they don't get saved as part of the object in the remote API even though they are saved in the Terraform state. terraform import therefore cannot re-populate those into the state, because there is nowhere else to read them from except the Terraform state.
3) The provider doesn't have any situations where it treats an omitted argument as "ignore the value in the remote system" instead of "unset the value in the remote system". Some providers make special exceptions for certain arguments where leaving them unset allows them to "drift" in the remote API without Terraform repairing them, but if you are using any resource types which behave in that way then the value stored in the remote system will be lost when you delete the remote object, and Terraform won't be able to restore that value because it's not represented in your Terraform configuration.
The hashicorp/azurerm provider in particular has many examples of situation 3 in the above list. For example, if you have an azurerm_virtual_network resource which does not include any subnet blocks then the provider will not propose to delete any existing subnets, even though the configuration says that there should be no subnets. However, if you delete the virtual network and then ask Terraform to recreate it then the Terraform configuration has no record of what subnets were supposed to exist and so it will propose to create a network with no subnets at all.
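A sketch of that azurerm_virtual_network situation (names and addresses are made up for illustration):

```hcl
# With no inline subnet blocks, the azurerm provider leaves existing
# subnets alone rather than planning to delete them.
resource "azurerm_virtual_network" "example" {
  name                = "example-vnet"
  location            = "westeurope"
  resource_group_name = "example-rg"
  address_space       = ["10.0.0.0/16"]

  # No subnet blocks here: subnets created outside this configuration
  # (e.g. in the portal) are tolerated while the network exists, but
  # they would not be recreated if the network were destroyed, because
  # nothing in this configuration records them.
}
```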

How To Capture information about Virtual Machine resources that will be destroyed?

Background
I was kind of dropped into an IaC project that uses Packer => Terraform => Ansible to create RHEL virtual machines on an on-prem VMware vSphere cluster.
Our vmware module registers output variables that we use once the VMs are created, those variables feed a local_file resource template to build an Ansible inventory with the vm names and some other variables.
Ansible is then run using local-exec with the above-created inventory to do configuration actions and run scripts, both on the newly deployed VMs and against some external management applications, for example to join the VM to a domain (FreeIPA; sadly, no good TF provider is available).
Issue Description
The issue that I have been wrestling with is when we run a terraform destroy (or apply with some VM count changes that destroy a VM resource), we would like to be able to repeat the process in reverse.
Capture the names of the VMs to be destroyed (output vars from resource creation) so they can be removed from the IPA domain and have some general cleanup done.
We've tried different approaches with Destroy Time Provisioners and it just seems like it would require a fundamental change in the approach outlined above to make that work.
Question
I'm wondering if there is a way to get an output variable on destroy that could be used to populate a list of the VMs that would be removed.
So far my search has turned up nothing. Thanks for your time.
In general, it is good to plan first, even when destroying:
terraform plan -destroy -out tfplan
Then you can proceed with the destroy:
terraform apply tfplan
But at this point (before the actual destroy), you have a plan of what will be destroyed, and you can do any analysis or automation on it. Example:
terraform show -json tfplan | jq > tfplan.json
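Building on that, one way (assuming jq is installed) to pull out just the addresses of the resources the plan would destroy. The JSON below is a hypothetical, heavily trimmed stand-in for real `terraform show -json` output, kept only so the example is self-contained:

```shell
# Hypothetical minimal plan JSON, standing in for real
# `terraform show -json tfplan` output.
cat > tfplan.json <<'EOF'
{
  "resource_changes": [
    {"address": "vsphere_virtual_machine.vm[0]", "change": {"actions": ["delete"]}},
    {"address": "local_file.inventory",          "change": {"actions": ["no-op"]}}
  ]
}
EOF

# List the addresses of resources the plan would destroy.
jq -r '.resource_changes[]
       | select(.change.actions | index("delete"))
       | .address' tfplan.json
```

The resulting list of addresses (VM names, in the question's case) can then be fed to whatever cleanup automation removes them from the IPA domain.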
Source:
https://learn.hashicorp.com/tutorials/terraform/plan

How to add a new resource to an existing resource group in Terraform

This would appear to be a fairly simple and basic scenario but I'm frankly at a loss on how to get around this using Terraform and would appreciate any suggestions.
The issue is this. In Azure, I have a number of resource groups, each containing a number of resources, including virtual networks, subnets, storage accounts, etc. What I would now like to do is add new resources to one or two of the resource groups. Typical example, I would like to provision a new virtual machine in each of the resource groups.
Now, so far all of the documentation and blogs I seem to come across only provide guidance on how to create resources whereby you also create a new resource group, vnet, subnet, from scratch. This is definitely not what I wish to do.
All I'm looking to do is get Terraform to add a single virtual machine to an existing resource group, going on to configure it to connect to existing networking resources such as a VNet, Subnet, etc. Any ideas?
I tested this for ECS by destroying the launch configuration:
terraform destroy -target module.ecs.module.ec2_alb.aws_launch_configuration.launchcfg
I recreated the launch configuration and it worked:
terraform plan -target=module.ecs.module.ec2_alb.aws_launch_configuration
terraform apply -target=module.ecs.module.ec2_alb.aws_launch_configuration
Also, you can go read more on Terraform target here: https://learn.hashicorp.com/tutorials/terraform/resource-targeting
If you just want to be able to reference your existing resources in your TF script, you normally would use data sources in TF to fetch their information.
So for resource group, you would use data source azurerm_resource_group, for vnet there is azurerm_virtual_network and so forth.
These data sources only allow you to reference and get details of existing resources, not to manage them in your TF script. Thus, if you would like to actually manage these resources with TF (modify, delete, etc.), you would have to import them into TF first.
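A minimal sketch of that pattern (all names here are assumptions, not taken from the question):

```hcl
# Look up the existing resource group and subnet without managing them.
data "azurerm_resource_group" "existing" {
  name = "my-existing-rg"
}

data "azurerm_subnet" "existing" {
  name                 = "default"
  virtual_network_name = "my-existing-vnet"
  resource_group_name  = data.azurerm_resource_group.existing.name
}

# New resources reference the data sources, so the VM's NIC lands in
# the existing resource group and connects to the existing subnet.
resource "azurerm_network_interface" "vm_nic" {
  name                = "new-vm-nic"
  location            = data.azurerm_resource_group.existing.location
  resource_group_name = data.azurerm_resource_group.existing.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.azurerm_subnet.existing.id
    private_ip_address_allocation = "Dynamic"
  }
}
```

Only the new NIC (and the VM that would use it) is managed by Terraform; the resource group, VNet, and subnet stay untouched.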

Using Packer to Spin a VM and extract the image in an availability set

We have a corporate requirement (due to pricing and whitelisting) to have availability sets in our Azure subscription, and resources like compute should be spun up inside that particular availability set. Since Packer, while creating the image, spins up a temporary VM inside a temporary resource group, I am confused (since I did not find any documentation around it) about whether we can configure Packer to spin up the temporary VM inside the whitelisted availability set.
One possible way I can think of is to spin up the VM in the resource group which we created for the availability set (since everything in Azure needs to be inside a resource group); that way I am guessing it will be tracked as part of billing, but I am still not sure if the temporary VM will be part of the availability set.
Please help and suggest if there is an alternate way to do the same.

Create multiple server using Terraform on Azure

I am creating multiple servers on Azure using a Terraform template in the same Azure resource group. However, when I try to run the template for individual servers each time, it deletes the previous server while creating the next one.
Any idea how I can reuse the same template for creating multiple servers in the same resource group?
Thanks.
Terraform is intended to be idempotent, meaning that reapplying the same template makes no changes. If you edit the template, Terraform will edit the environment to reflect any changes or deletions.
If you need multiple VMs, you have at least two options:
Define multiple VM resources in your template.
Define a VM scale set and simply specify the number of VMs that you need.
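Option 1 doesn't require copy-pasting resource blocks; count can stamp out several VMs from one block. A hypothetical fragment (not the asker's actual template; the referenced NIC resource and all names are assumptions):

```hcl
variable "vm_count" {
  default = 3
}

# One resource block creates vm_count VMs; each needs a unique name.
resource "azurerm_virtual_machine" "server" {
  count                 = var.vm_count
  name                  = "server-${count.index}"
  location              = "westeurope"
  resource_group_name   = "shared-rg"
  network_interface_ids = [azurerm_network_interface.server[count.index].id]
  vm_size               = "Standard_DS1_v2"

  # ... storage_os_disk, os_profile, etc. omitted ...
}
```

Re-running apply with a higher vm_count then adds servers instead of replacing the existing one.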
I was able to achieve this. Here is what I did.
I created 2 separate .tf files under different folders.
1) For creating Resource group, NSG, Storage account, Vnet
2) For creating public ip, network interface and VM itself.
So I could use the second configuration file to create multiple servers by just changing the values through parameters.
