Azure DevOps and Terraform Extension - Debug

I'm using an Azure DevOps pipeline with the Terraform Azure extension from Peter Groenewegen, version 2.4.0 (latest).
My question is about setting the TF_LOG=DEBUG variable using this extension, and about troubleshooting in general. I'm getting a vague error message, "##[error] Terraform failed to execute. Error:", and wanted to see the debug logs for troubleshooting, but I haven't been able to get them yet. I've tried using a separate task with export TF_LOG=DEBUG, adding it to the global variables section, putting it in a tfvars file, and passing it directly on the terraform apply command (##[command]"terraform" apply -auto-confirm -var 'TF_LOG=DEBUG' -input=false -no-color) with the -var 'TF_LOG=DEBUG' switch. How can I enable debug logging using this extension? Or perhaps there is a better way to debug issues like this one? Thanks for any response.

Not sure if this will work for the task you're using, but it's worth a shot:
For the Microsoft Terraform task for Azure Pipelines, this can be achieved by adding the TF_LOG variable to the pipeline with an appropriate value.
If you go to your pipeline, click 'Edit', then go to 'Variables', you can add it like so:
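If you're defining the pipeline in YAML rather than the classic editor, a minimal sketch of the same idea looks like this (step layout is illustrative; the important part is that TF_LOG is an environment variable Terraform reads, not a Terraform input variable, which is why -var 'TF_LOG=DEBUG' has no effect):

variables:
  TF_LOG: DEBUG

steps:
  - script: |
      terraform init -input=false
      terraform apply -auto-approve -input=false
    displayName: Terraform apply with debug logging
    env:
      TF_LOG: $(TF_LOG)   # expose the pipeline variable to Terraform as an environment variable

Pipeline variables are also exposed to script steps as environment variables automatically, so the explicit env mapping is mostly there for readability.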

Related

Using Terraform workspaces in an automation pipeline with TF_WORKSPACE: Currently selected workspace “X” does not exist

I am working on an open source project with Terraform that will allow me to set up ad-hoc environments through GitHub Actions. Each ad-hoc environment will correspond to a terraform workspace. I'm setting the workspace by exporting TF_WORKSPACE before running terraform init, plan and apply. This works the first time around. For example, I'm able to create an ad-hoc environment called alpha. In my S3 backend I can see that the state file is saved under the alpha folder. The issue is that when I run the same pipeline to create another ad-hoc environment called beta, I get the following message:
Initializing the backend...
╷
│Error: Currently selected workspace "beta" does not exist
│
│
╵
Error: Process completed with exit code 1.
Here is the section of my GitHub action that is failing: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L142
I have been over this article: https://support.hashicorp.com/hc/en-us/articles/360043550953-Selecting-a-workspace-when-running-Terraform-in-automation but I'm still not sure what I'm doing wrong in my automation pipeline.
The alpha workspace did not exist, but it seemed to be able to create it and use it as the workspace in my first run. I'm not sure why other workspaces are not able to be created using the same pipeline.
I got some help from @apparentlymart on the Terraform community forum. Here's the reply: https://discuss.hashicorp.com/t/help-using-terraform-workspaces-in-an-automation-pipeline-with-tf-workspace-currently-selected-workspace-x-does-not-exist/40676/2
In order to make the pipeline work I had to use terraform commands in the following order:
terraform init ...
terraform workspace new ${WORKSPACE} || echo "Workspace ${WORKSPACE} already exists or cannot be created"
export TF_WORKSPACE=$WORKSPACE
terraform apply ...
terraform output ...
This allows me to create multiple ad hoc environments without any errors. The code in my example project has also been updated with this fix: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/ad_hoc_env_create_update.yml#L110-L146
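For reference, a rough sketch of what this looks like as GitHub Actions steps (the step names and workspace value are illustrative; the key point is that TF_WORKSPACE is only exported once the workspace actually exists):

  - name: Terraform init and create workspace
    env:
      WORKSPACE: alpha
    run: |
      terraform init -input=false
      # Create the workspace if it doesn't exist yet; ignore the error if it already does
      terraform workspace new "$WORKSPACE" || echo "Workspace $WORKSPACE already exists"
      # Export TF_WORKSPACE for the remaining steps of this job
      echo "TF_WORKSPACE=$WORKSPACE" >> "$GITHUB_ENV"

  - name: Terraform apply
    run: terraform apply -auto-approve -input=false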

How to destroy resources created using Terraform in an Azure DevOps pipeline by using PowerShell

I have a project where I'm using Terraform in an Azure DevOps pipeline to create infrastructure, but I want to destroy the infrastructure from a PowerShell script running locally.
So the PowerShell command that I want to run is this:
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD
But I get the following output:
No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "config" but a value was
│ found in file
│ "C:/Users/mahera.erum.baloch/source/repos/PCFA-CloudMigration/On-Prem-Env/IaC/Terraform/config.json".
│ If you meant to use this value, add a "variable" block to the
│ configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide
│ certain "global" settings to all configurations in your organization. To
│ reduce the verbosity of these warnings, use the -compact-warnings option.
╵

Destroy complete! Resources: 0 destroyed.
I know this is probably because I created the resources through the pipeline and not from a local repository, but is there a way to do this?
Any help would be appreciated.
P.S. The state file is saved in Azure Storage.
I'm going to assume that your code is kept in a repo that you have access to, since you mentioned that it's being deployed from Terraform running in an Azure DevOps Pipeline.
As others mentioned, the state file AND your Terraform code are your source of truth. Hence, you'd need both the PowerShell script and the pipeline to be referring to the same state file and code to achieve what you're trying to do.
For the terraform destroy to run, it would need access to both your Terraform code and the state file so that it can compare what needs to be destroyed.
Unless your setup is very different from this, you could have your PowerShell script simply git clone or git pull the repo, depending on your requirements, and then execute a terraform destroy on that version of the code, as in the sketch below. Your state file will then be updated accordingly.
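A minimal PowerShell sketch of that approach (the repository URL, backend settings and var-file path below are placeholders, and it assumes you're already authenticated to Azure, e.g. via az login or ARM_* environment variables):

# Get the same code the pipeline deploys from
git clone https://dev.azure.com/your-org/your-project/_git/your-repo C:\temp\infra
Set-Location C:\temp\infra

# Initialise against the same remote state in Azure Storage that the pipeline uses
terraform init `
  -backend-config="resource_group_name=rg-tfstate" `
  -backend-config="storage_account_name=sttfstate" `
  -backend-config="container_name=tfstate" `
  -backend-config="key=terraform.tfstate"

# Destroy using the same variable values the pipeline applied with
terraform destroy -var-file="C:/Users/Documents/Terraform/config.json"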
I've just run into the problem of keeping Terraform state from an Azure Pipelines build. Repeated builds of the pipeline fail because the resource group already exists, but the Terraform state is not kept by the build pipeline. And I can find no way to execute terraform destroy on the pipeline even if I had the state.
One approach I found in chapter 2 of this book is storing terraform.tfstate in a remote backend. This looks like it will keep .tfstate across multiple builds of the pipeline, and from elsewhere too.
I don't know yet if it will allow a terraform destroy.
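If it helps, this is roughly what a remote backend pointing at Azure Storage looks like (the resource group, storage account and container names below are placeholders):

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

With the state kept in a shared backend like this, a later terraform destroy run locally or from another pipeline can see what the earlier apply created.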

Convert visual build pipeline to a YAML file

I'm working with Azure DevOps pipeline and I'm using the visual designer.
But there is also the YAML file. I would like to export my build pipeline into a YAML file. It seems to be possible, as mentioned in this GitHub issue (https://github.com/MicrosoftDocs/vsts-docs/issues/2504), using the View YAML button.
But this button is disabled in my project (I cannot click on it):
I don't know how to enable it. The preview feature New YAML pipeline creation experience is enabled. I'm using some tasks that are tagged as Preview. Could that be the reason?
Does someone know why it is disabled and how to enable it?
I also have the same problem for several projects at the pipeline level:
Try checking at the agent level instead. It may be available there:
FYI, you will shortly be able to export the entire pipeline to YAML (the ability to export individual tasks will be removed):
https://devblogs.microsoft.com/devops/replacing-view-yaml/

Variable substitution in build pipeline

There are tons of resources online on how to replace values in JSON configuration files in a release pipeline, like this one. I configured this and it works. However, we have multiple integration tests which reach the database too, and these tests are run at build time. I haven't seen any option yet to replace config values in the build pipeline. Does it exist? Or do I really have to use this custom task (see screenshot below)?
There is now an out-of-the-box task from Microsoft. It's called File Transform. It's currently in preview, but it works really well! I haven't had any issues whatsoever with it, and it works the same way you would configure it in the release pipeline. I would recommend it any day!
Below you can see my configuration.
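Since the screenshot isn't reproduced here, this is roughly the equivalent in YAML (paths and file names are illustrative):

steps:
  - task: FileTransform@1
    displayName: Substitute config values for integration tests
    inputs:
      folderPath: '$(System.DefaultWorkingDirectory)'
      fileType: 'json'
      targetFiles: '**/appsettings.json'

The values come from pipeline variables whose names match the JSON paths, e.g. a variable named ConnectionStrings.Default overrides that key in appsettings.json.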
There is no out-of-the-box task solely for replacing tokens/values in files (also, in the release pipeline it is the Azure App Service Deploy task that does this, not a standalone task just for replacing JSON configuration).
You need to use an external extension from here or write a PowerShell script for that; a sketch of the script approach is below.
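If you go the script route, a very small PowerShell sketch of token replacement looks like this (the #{Name}# token format and the file path are just conventions for illustration; pipeline variables are available to the script as environment variables):

$path = "appsettings.json"
$content = Get-Content $path -Raw

# Replace every #{TOKEN}# placeholder with the matching environment variable's value
foreach ($match in [regex]::Matches($content, '#\{(\w+)\}#')) {
    $name  = $match.Groups[1].Value
    $value = [Environment]::GetEnvironmentVariable($name)
    if ($null -ne $value) {
        $content = $content.Replace($match.Value, $value)
    }
}

Set-Content -Path $path -Value $content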

Can Terraform mask variables in the console output?

I wanted to post this as a feature request, but I wanted to see if anyone else has found some clever way of doing this before I post. Or maybe someone from HashiCorp can tell me this will be a feature in a coming release.
I have looked high and low for some way to mask variables in the console output when running terraform apply/show, preferably including variables passed to a script through a local-exec provisioner.
A tool called Terrahelp is the only thing I can find that will do this, but it only applies to variables in a tfvars file, which doesn't allow interpolations. This doesn't help, since we are trying to use Vault to keep secrets out of the Terraform files.
Current Versions
Terraform v0.11.7
provider.null v1.0.0
provider.template v1.0.0
provider.vault v1.3.1
provider.vsphere v1.8.1
Use Case
provisioner "local-exec" {
command = "&'${path.module}\\scripts\\script.ps1' -name ${var.node_name} -pass '${var.pass}' -user ${var.user} -server ${var.server}"
interpreter = ["Powershell", "-Command"]
}
Attempted Solutions
I'm using Vault to keep secrets out of the Terraform files, so I am using the vault provider and reading data from it. I have tried creating a module that outputs the secrets with sensitive = true and then calling that module to use the secrets; however, the values still show in the console.
Proposal
Allow some kind of sensitive setting for variables within Terraform, much like outputs have, so that when scripts like the one above are called, the console won't show sensitive variable information.
References
https://github.com/hashicorp/terraform/issues/16114
https://github.com/hashicorp/terraform/issues/16643
Terraform 0.13 was released since this question was asked, and allows variables to be marked as sensitive and not shown in the console.
https://www.terraform.io/docs/configuration/outputs.html#sensitive-suppressing-values-in-cli-output
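A minimal sketch of both (the variable and output names are made up; note that the sensitive argument on input variables arrived in a later release than sensitive outputs, so check the version you're on):

variable "pass" {
  type      = string
  sensitive = true   # shown as (sensitive value) in plan/apply output
}

output "pass_through" {
  value     = var.pass
  sensitive = true   # required here because the value derives from a sensitive variable
}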
Thanks for the feedback. The passwords cannot be set for one-time use, as some of these are service accounts in AD that do other things, and those applications cannot handle constant password changes.
We did find a solution through another product, which is Azure/Azure DevOps. We are storing credentials in key vaults in Azure, which Azure DevOps has access to, and using Azure DevOps pipelines to send Terraform code to our build server. Azure DevOps seems to act as a shell which hides any secrets from the console, and it works pretty well. I would recommend it to anyone who is also looking to hide secrets from Terraform files/the command line.
Here is how I do it for a few on-premise services:
1 - var.password doesn't actually store a password. Rather, it stores the name of an environment variable.
2 - My scripts get passwords from those environment variables.
3 - I have a small program that loads secrets to environment and clears them for terraform apply.
So in the end I just bypass Terraform for secrets used by scripts. Not ideal, but I also couldn't find a better solution.
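A rough sketch of that pattern (the variable name, script and environment variable are made up for illustration): Terraform only ever sees the name of the environment variable, and the script looks up the real value at run time.

variable "pass_env_name" {
  default = "SVC_ACCOUNT_PASSWORD"   # the *name* of the environment variable, not the secret
}

provisioner "local-exec" {
  # The script receives the variable name and resolves the secret itself, e.g. in PowerShell:
  #   $pass = [Environment]::GetEnvironmentVariable($PassEnvName)
  command     = "&'${path.module}\\scripts\\script.ps1' -PassEnvName '${var.pass_env_name}'"
  interpreter = ["Powershell", "-Command"]
}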
I think https://github.com/cloudposse/tfmask might be what you're looking for:
Command line utility to mask sensitive output from a terraform plan or terraform apply.
You first set an environment variable to filter the masked keys (admittedly there's some manual work involved here):
export TFMASK_VALUES_REGEX="(?i)^.*(secret|password|oauth|token|key).*$"
And then pipe terraform commands through tfmask, resulting in masked output:
terraform plan | tfmask
This is self-advertising, but I created a tool called terramask that works with Terraform 0.12.
Issues are welcome 🙂
I can't quite tell if this is your use case or not, but one strategy that we've used with sensitive variables is to make use of Terraform's default behavior of reading TF_VAR_* environment variables to set variables, e.g.:
variable "sensitive_variable" {
type = "string"
}
sensitive_var=$(curl url/with/remote/value)
export TF_VAR_sensitive_variable=$sensitive_var
terraform apply
