Most of my Azure infrastructure is managed with Terraform.
However, I am quickly finding that a lot of the small details are missing. For example:
- Client secrets aren't fully supported: https://github.com/terraform-providers/terraform-provider-azuread/issues/95
- It doesn't seem possible to add an Active Directory provider to APIM (How Do I Add Active Directory To APIM Using Terraform?)
- Creating the APIM instance leaves demo products on it that can't be removed (How Can I Remove Demo Products From APIM Created With Terraform?)
And so on.
Solutions to these seem to involve using the CLI, e.g. https://learn.microsoft.com/en-us/cli/azure/ad/app/permission?view=azure-cli-latest#az-ad-app-permission-add
Or falling back to the REST API, e.g. https://learn.microsoft.com/en-us/rest/api/apimanagement/2019-01-01/apis/delete
How can I mix terraform with the CLI and REST API?
Can they be embedded in terraform?
Or do I just run them as separate commands after terraform has finished?
Is there a way to do these commands in a cross platform way?
Will running the CLI and REST API after terraform cause the state to be wrong and likely cause problems the next time terraform is run?
How can I mix terraform with the CLI and REST API?
You can use the Terraform provisioners local-exec or remote-exec. Either way, you can run a script containing CLI commands or REST API calls. For more details, see local-exec and remote-exec. Bear in mind, however, that these provisioners only run the script and display its output; they do not capture that output for use elsewhere in the configuration.
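For example, here is a minimal sketch of deleting one of the APIM demo products by calling the REST API through az rest from a local-exec provisioner (the resource name "example" and the product ID "starter" are assumptions for illustration):

```hcl
resource "null_resource" "remove_demo_product" {
  # Re-run if the APIM instance is recreated (hypothetical resource name)
  triggers = {
    apim_id = azurerm_api_management.example.id
  }

  provisioner "local-exec" {
    # az rest signs the request with your current az login credentials
    command = "az rest --method delete --url https://management.azure.com${azurerm_api_management.example.id}/products/starter?api-version=2019-01-01"
  }
}
```

Note that this delete lives outside Terraform's state, so Terraform will not know about it or re-create the product if someone adds it back manually.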
If you want to use the result of the script for other resources in the same Terraform configuration, you need to use the Terraform external data source; see the details here.
Update:
Here is an example.
Bash script file vmTags.sh:
#!/bin/bash
# The external data source expects a JSON object of string values on stdout;
# the tags map returned by --query tags fits that shape.
az vm show -d -g myGroup -n myVM --query tags
Terraform external data source:
data "external" "test" {
  program = ["/bin/bash", "./vmTags.sh"]
}

output "value" {
  value = data.external.test.result
}
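The result map can also be indexed directly, e.g. to read a single tag (the tag name "environment" is just an example):

```hcl
output "environment_tag" {
  value = data.external.test.result["environment"]
}
```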
Background
I was kind of dropped into an IaC project that uses Packer => Terraform => Ansible to create RHEL virtual machines on an on-prem VMware vSphere cluster.
Our vmware module registers output variables that we use once the VMs are created; those variables feed a local_file resource template to build an Ansible inventory with the VM names and some other variables.
Ansible is then run using local_exec with the above-created inventory to do configuration actions and run scripts, both on the newly deployed VMs and against some external management applications, for example to join the VMs to a domain (FreeIPA; sadly, no good Terraform provider is available).
Issue Description
The issue I have been wrestling with is that when we run a terraform destroy (or an apply with VM count changes that destroys a VM resource), we would like to repeat the process in reverse: capture the names of the VMs to be destroyed (output vars from resource creation) so they can be removed from the IPA domain, along with some general cleanup.
We've tried different approaches with Destroy Time Provisioners and it just seems like it would require a fundamental change in the approach outlined above to make that work.
Question
I'm wondering if there is a way to get an output variable on destroy that could be used to populate a list of the VMs that will be removed.
So far my search has turned up nothing. Thanks for your time.
In general, it is good to plan first, even when destroying:
terraform plan -destroy -out tfplan
Then, you can proceed with the destroy:
terraform apply tfplan
But at this moment (before the actual destroy), you have a plan of what will be destroyed, and you can run any analysis or automation on it. Example:
terraform show -json tfplan | jq > tfplan.json
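From the plan JSON you can then pull out, for example, the addresses of resources that will be deleted. A minimal sketch (the embedded plan snippet is made up for illustration; in practice tfplan.json comes from the terraform show command above):

```shell
# Fake, abbreviated plan JSON standing in for a real `terraform show -json` output
cat > tfplan.json <<'EOF'
{"resource_changes":[
  {"address":"vsphere_virtual_machine.vm[0]","change":{"actions":["delete"]}},
  {"address":"vsphere_virtual_machine.vm[1]","change":{"actions":["no-op"]}}
]}
EOF

# Keep only resources whose planned actions include "delete"
jq -r '.resource_changes[]
       | select(.change.actions | index("delete"))
       | .address' tfplan.json
```

The resulting list of addresses (or VM names pulled from the planned values) can then feed your IPA cleanup step before you apply the destroy plan.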
Source:
https://learn.hashicorp.com/tutorials/terraform/plan
I am looking for an option in the Terraform CLI to provide a run name in Terraform Cloud, ideally when executing it with the Terraform API-driven approach.
Here you can see that you can add a reason for starting a run. Is there an option to provide the same in the CLI?
Learning Terraform, and in one of the tutorials for Terraform with Azure, a requirement was to log in with the az client. My understanding is that this was to create a Service Principal.
I was trying this with GitHub Actions, and my assumption was that the properties obtained for the Service Principal would be enough. When I tried running terraform plan, everything worked out fine.
However, when I tried to do terraform apply, it failed until I explicitly added an az login step to the GitHub workflow job.
What am I missing here? Does terraform plan only compare the new configuration file against the state file, not the actual account? Or does it verify the state against the resource group/subscription in Azure?
I was a little confused by the documentation on terraform plan.
We are developing an Azure Function which should run Terraform commands like init, plan, and apply.
When we run the above commands in PowerShell, we get the error below:
Error checking configuration: <nil>: Failed to read module directory; Module directory C:\home\site\wwwroot\databricks-user-sync-modules does not exist or cannot be read
My run.ps1 file includes the snippet below:
write-output (terraform --version)
Write-Output ((Get-ChildItem).Name)
Get-Content -Path main.tf
write-output (terraform init)
terraform plan -var-file dev.tfvars
How can we run Terraform in Azure Functions?
The error message you are receiving may or may not be related to your attempt to use terraform in Azure Functions.
I'd initially ask whether Azure Functions is the ideal solution to the problem you're trying to solve. The Functions runtime doesn't include the terraform binary, so it can't run Terraform CLI commands directly. You may have more success using Azure DevOps Pipelines or GitHub Actions to deploy your Terraform code.
Both Azure DevOps and GitHub can trigger CI/CD operations via a webhook:
https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#repository_dispatch
and
https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/pipelines/sprint-172-update
Unless I am missing something obvious, you may need to provide more context around why you're using Azure Functions for this scenario.
I have written a Terraform script to spin up infrastructure on Azure. I have also written an Ansible script to patch the VMs launched on Azure with the latest updates. But I am not able to automate the process of patching the VMs once they are launched.
You can use provisioners in Terraform to execute Ansible playbooks on a provisioned VM. I'm not sure about your Terraform version, but the code below might help. Keep in mind that provisioners are to be used as a last resort.
provisioner "local-exec" {
  command = "ansible-playbook -u user -i '${self.public_ip},' --private-key ${var.ssh_key_private} provision.yml"
}
https://www.terraform.io/docs/language/resources/provisioners/syntax.html
To have end-to-end automation, in which Ansible runs when the instances are launched (and/or at every restart), you can pass cloud-init configuration from Terraform. This is nice because that config may reference other parts of your infrastructure, which Terraform's dependency resolution sorts out. You would do this by providing Terraform's cloudinit_config to the custom_data argument of the Azure VM in Terraform.
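As a rough sketch (the resource names, playbook URL, and cloud-init payload are all assumptions, not a definitive setup), this wires a cloud-init file into the VM's custom_data:

```hcl
data "cloudinit_config" "ansible" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    # Hypothetical payload: install Ansible and pull a playbook at first boot
    content = <<-EOT
      #cloud-config
      packages:
        - ansible
      runcmd:
        - ansible-pull -U https://example.com/repo.git site.yml
      EOT
  }
}

resource "azurerm_linux_virtual_machine" "example" {
  # ... other required arguments ...
  custom_data = data.cloudinit_config.ansible.rendered
}
```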
On the Ansible side you can also use the Azure dynamic inventory. With this dynamic inventory you add tags to your resources in Terraform in such a way that they can be filtered and grouped into the Ansible inventory when Ansible is run. This is helpful if the Ansible tasks need to gather facts from hosts.
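For the dynamic inventory side, a sketch (the resource group name and tag key are assumptions) of an azure_rm inventory file that groups hosts by a tag you set from Terraform:

```yaml
# myazure_rm.yml -- the filename must end in azure_rm.yml or azure_rm.yaml
plugin: azure_rm
include_vm_resource_groups:
  - myResourceGroup
auth_source: auto
keyed_groups:
  # Builds groups from VM tags, e.g. a tag role=web yields group tag_role_web
  - prefix: tag
    key: tags
```

Running ansible-inventory -i myazure_rm.yml --graph will then show the tag-derived groups, which your playbooks can target directly.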