Can Terraform mask variables in the console output?

I wanted to post this as a feature request, but I wanted to see if anyone else has found some clever way of doing this before I post. Or maybe someone from HashiCorp can tell me this will be a feature in an upcoming release.
I have looked high and low for some way to mask variables in the console when running terraform apply/show, ideally so that variables passed to a script through a local-exec provisioner are masked.
A tool called Terrahelp is the only thing I can find that will do this, but it only applies to variables in a tfvars file, which doesn't allow interpolation. That doesn't help, since we are trying to use Vault to keep secrets out of the Terraform files.
Current Versions
Terraform v0.11.7
provider.null v1.0.0
provider.template v1.0.0
provider.vault v1.3.1
provider.vsphere v1.8.1
Use Case
provisioner "local-exec" {
command = "&'${path.module}\\scripts\\script.ps1' -name ${var.node_name} -pass '${var.pass}' -user ${var.user} -server ${var.server}"
interpreter = ["Powershell", "-Command"]
}
Attempted Solutions
I'm using Vault to keep secrets out of the Terraform files, so I am using the vault provider and reading data from it. I have also tried creating a module that outputs the secrets with sensitive = true and then calling that module to use the secrets; however, the values still show up in the console.
Proposal
Allow variables within Terraform to accept some kind of sensitive flag, much like outputs do, so that when scripts like the one above are called, the console won't show sensitive variable values.
References
https://github.com/hashicorp/terraform/issues/16114
https://github.com/hashicorp/terraform/issues/16643

Terraform 0.13 was released since this question was asked, and newer versions (0.14+) allow input variables to be marked as sensitive so they are not shown in the console.
https://www.terraform.io/docs/configuration/outputs.html#sensitive-suppressing-values-in-cli-output
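For reference, a minimal sketch of both flags (the variable-level sensitive argument assumes Terraform 0.14+; the output-level one is what the link above describes):

variable "pass" {
  type      = string
  sensitive = true # redacted as (sensitive value) in plan/apply output
}

output "db_password" {
  value     = var.pass
  sensitive = true # suppressed in CLI output, though still stored in state
}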

Thanks for the feedback. The passwords cannot be set for one-time use, as some of these are service accounts in AD that do other things, and those applications cannot handle constant password changes.
We did find a solution through another product: Azure/Azure DevOps. We are storing credentials in key vaults in Azure, which Azure DevOps has access to, and using Azure DevOps pipelines to send Terraform code to our build server. Azure DevOps acts as a shell that hides any secrets from the console, and it works pretty well. I would recommend it to anyone who is also looking to keep secrets out of Terraform files and the command line.

Here is how I do it for a few on-premise services:
1 - var.password doesn't actually store a password. Rather, it stores the name of an environment variable.
2 - My scripts get passwords from those environment variables.
3 - I have a small program that loads secrets into the environment for terraform apply and clears them afterwards.
So in the end I just bypass Terraform for secrets used by scripts. Not ideal, but I also couldn't find a better solution.
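A minimal sketch of that indirection (hypothetical names; the null_resource wrapper and the PowerShell lookup are only illustrative):

variable "password" {
  default = "SVC_ACCOUNT_PASS" # the name of an environment variable, not the secret
}

resource "null_resource" "configure" {
  provisioner "local-exec" {
    # Only the variable *name* reaches the console; the script reads the
    # actual secret from its own environment at run time, e.g. in
    # PowerShell: (Get-Item "env:$passVar").Value
    command     = "&'${path.module}\\scripts\\script.ps1' -passVar '${var.password}'"
    interpreter = ["PowerShell", "-Command"]
  }
}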

I think https://github.com/cloudposse/tfmask might be what you're looking for:
Command-line utility to mask sensitive output from a terraform plan or terraform apply.
You first set an environment variable to filter the masked keys (admittedly there's some manual work involved here):
export TFMASK_VALUES_REGEX="(?i)^.*(secret|password|oauth|token|key).*$"
And then pipe terraform commands through tfmask, resulting in masked output:
terraform plan | tfmask

This is self-advertising, but I created a tool called terramask that works with Terraform 0.12.
Issues are welcome 🙂

I can't quite tell if this is your use case or not, but one strategy that we've used with sensitive variables is to make use of Terraform's default behavior of reading TF_VAR_-prefixed environment variables to set variables, e.g.,
variable "sensitive_variable" {
type = "string"
}
sensitive_var=$(curl url/with/remote/value)
export TF_VAR_sensitive_variable=$sensitive_var
terraform apply

Related

Including Terraform Configuration from Another GitLab Project

I have a couple of apps that use the same GCP project. There are dev, stage, and prod projects, but they're basically the same apart from project IDs and project numbers. I would like to have a repo in GitLab, like config, where I keep these IDs in dev.tfvars, stage.tfvars, and prod.tfvars. Currently each app's repo has a config/{env}.tfvars directory, which is really repetitive.
Googling for importing or including Terraform resources just gets me results about Terraform state, so it hasn't been fruitful.
I've considered:
Using a group-level GitLab variable as a key=val env file, having my gitlab-ci YAML source the correct environment's file, and then passing what I need using -var="key=value" in my plan and apply commands.
Creating a Terraform module that either uses TF_WORKSPACE or an input variable to return the correct values (see the sketch after this question). I think this may be possible, but I'm new to TF, so I'm not sure how to return data back from a module, or whether this kind of "side-effects only" solution is an abusive workaround for something there's a better way to achieve.
Is there any way to include terraform variables from another Gitlab project?
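For what it's worth, returning data from a module is just a matter of declaring outputs. A minimal sketch with hypothetical names and placeholder IDs:

# modules/env-config/main.tf, in the shared config repo
variable "env" {
  type = string
}

locals {
  projects = {
    dev   = { id = "my-app-dev",   number = "111111111111" } # placeholders
    stage = { id = "my-app-stage", number = "222222222222" }
    prod  = { id = "my-app-prod",  number = "333333333333" }
  }
}

output "project_id" {
  value = local.projects[var.env].id
}

# In an app repo, pull the module straight from the other GitLab project
# and read module.cfg.project_id wherever needed:
module "cfg" {
  source = "git::https://gitlab.com/my-group/config.git//modules/env-config" # placeholder URL
  env    = "dev"
}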

How to destroy resources created using Terraform in an Azure DevOps pipeline by using PowerShell

I have a project where I'm using Terraform in an Azure DevOps pipeline to create infrastructure, but I want to destroy that infrastructure from a PowerShell script running locally.
So the PowerShell command that I want to run is this:
$TerraCMD = "terraform destroy -var-file C:/Users/Documents/Terraform/config.json"
Invoke-Expression -Command $TerraCMD
But I get the following output:
No changes. No objects need to be destroyed.

Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.

Warning: Value for undeclared variable

The root module does not declare a variable named "config" but a value was
found in file
"C:/Users/mahera.erum.baloch/source/repos/PCFA-CloudMigration/On-Prem-Env/IaC/Terraform/config.json".
If you meant to use this value, add a "variable" block to the configuration.

To silence these warnings, use TF_VAR_... environment variables to provide
certain "global" settings to all configurations in your organization. To
reduce the verbosity of these warnings, use the -compact-warnings option.

Destroy complete! Resources: 0 destroyed.
I know this is probably because I created the resources through the pipeline and not from a local repository, but is there a way to do this?
Any help would be appreciated.
P.S. The State file is saved in the Azure Storage.
I'm going to assume that your code is kept in a repo that you have access to, since you mentioned that it's being deployed from Terraform running in an Azure DevOps Pipeline.
As others mentioned, the state file AND your Terraform code are your source of truth. Hence, both the PowerShell script and the pipeline need to refer to the same state file and code to achieve what you're trying to do.
For the terraform destroy to run, it would need access to both your Terraform code and the state file so that it can compare what needs to be destroyed.
Unless your setup is very different from this, you could have your PowerShell script just git clone or git pull the repo, depending on your requirements, and then execute a terraform destroy on that version of the code. Your state file will then be updated accordingly.
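A minimal sketch of that approach in PowerShell (placeholder repo URL and path; it assumes the Terraform code declares an azurerm backend pointing at the shared state in Azure Storage):

# Fetch the same code the pipeline deploys, attach to the shared state, destroy.
git clone https://dev.azure.com/my-org/my-project/_git/infra C:\temp\infra # placeholder URL
Set-Location C:\temp\infra
terraform init # connects to the remote state in Azure Storage
terraform destroy -var-file "C:/Users/Documents/Terraform/config.json"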
I've just run into the problem of keeping Terraform state from an Azure Pipelines build. Repeated builds of the pipeline fail because the resource group already exists, but the Terraform state is not kept by the build pipeline, and I can find no way to execute terraform destroy on the pipeline even if I had the state.
One approach I found in chapter 2 of this book is storing terraform.tfstate in a remote backend. This looks like it will keep .tfstate across multiple builds of the pipeline, and make it available from elsewhere too.
I don't know yet if it will allow a terraform destroy.
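For reference, a minimal sketch of such a backend using Azure Storage (placeholder names throughout). Any terraform init run against this code, from the pipeline or locally, attaches to the same state, which is exactly what terraform destroy needs:

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}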

Is there a way to list all the input variables for a terraform resource?

I want to get a list of all the variables (optional or not), and ideally their defaults, defined in a resource. Say, in the AWS provider, but it could be any provider. I know I can look at the documentation (which doesn't list everything I want) or the raw provider code and find all this, but is there a utility that would list it for me?
The schemas-extractor from the HashiCorp Terraform / HCL Language plugin source builds a machine-readable schema document for all providers in the terraform-providers org on GitHub.
The Terraform LSP is another likely source of this information.
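For reference, Terraform 0.12+ can also dump the full provider schema itself from an initialized working directory; a minimal sketch (the jq query assumes the 0.13+ provider address format):

terraform providers schema -json > schema.json

# e.g. list every attribute of aws_instance, including optional/required/computed flags
jq '.provider_schemas["registry.terraform.io/hashicorp/aws"].resource_schemas["aws_instance"].block.attributes' schema.json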

Can't create multiple Cosmos MongoDB collections at the same time

Trying to create two Collections at the same time throws me this error:
The specified type system value 'TypedJsonBson' is invalid.
Judging by the response log, and the fact that the error is occurring at the apply phase, I suspect it is something with the API.
Samples:
Configuration to simulate the problem: main.tf
Terraform logs: run-pAXmLixNWWimiHNs-apply-log.txt
Workaround
It is possible to avoid this problem by creating one Collection at a time.
depends_on = [
  azurerm_cosmosdb_mongo_collection.example
]
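In context, a minimal sketch with two collections (placeholder names and arguments), so the second is only created once the first exists:

resource "azurerm_cosmosdb_mongo_collection" "example" {
  name                = "collection-one"
  resource_group_name = "example-rg" # placeholders throughout
  account_name        = "example-cosmos"
  database_name       = "example-db"
}

resource "azurerm_cosmosdb_mongo_collection" "example2" {
  name                = "collection-two"
  resource_group_name = "example-rg"
  account_name        = "example-cosmos"
  database_name       = "example-db"

  # Serialize creation; creating both in parallel triggers the
  # 'TypedJsonBson' error above.
  depends_on = [
    azurerm_cosmosdb_mongo_collection.example
  ]
}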
I tried your Terraform main.tf files in my local PowerShell and they work fine, so the Terraform configuration file should be correct.
I would suggest running terraform apply in the Azure Cloud Shell. You could also remove the old terraform.tfstate file and .terraform folder, re-run terraform init locally, or look for other causes in your working environment.
Yes, if Terraform has a means to specify that parent resources need to exist before child resources can be created, then you should use it, because ARM requires this for any resource to be created.

Terraform Apply has different "plan" than Terraform Plan

I sometimes see that terraform apply has a different "plan" than terraform plan.
For instance, today one of the TF files I ran terraform apply on resulted in only 1 "change" and 1 "add", while terraform plan reported 3 "add", 1 "change" and 3 "destroy".
I have been using Terraform for just two months. Is this intended behavior in Terraform?
Could anyone give an explanation for this behavior? Thanks!
Terraform version: 0.11.13
This is unexpected behaviour, but the best practice is to run:
terraform plan -out deploy.tfplan
which saves the plan in the deploy.tfplan file.
Now run terraform apply deploy.tfplan.
This ensures that the plan you reviewed is exactly what gets executed, every time.
This is not intended behaviour of Terraform unless something is wrong somewhere; I have never seen this kind of issue until now. Did you ever edit or delete your .tfstate state file after running terraform plan? If you keep observing this issue, you could open an issue with the product owner. But I don't think this is a bug, and you will probably not face it again.
Try to follow these steps when performing a terraform apply:
First make sure the changes to the Terraform file have been saved.
Run terraform plan before running terraform apply.
It sounds like some of the changes made to the files were not saved in the current Terraform file.
Can you explain the full scenario? Normally, in my experience, they are the same.
The only differences I can see: either you are using a variable file with plan and apply and some variables cause different resources, or you are using a remote location for state and some other job/person is also updating the state.
If you are running everything locally, it should not happen like this.
Terraform builds a graph of all the resources. It then creates the non-dependent resources in parallel to make resource creation slightly more efficient. If any resource creation fails, it leaves Terraform in a partially applied state, which gets recorded in the tfstate file. After fixing the issue with the resource, when you reapply the .tf files it shows you only the remaining resources to be changed. In your case, I think it has more to do with the fact that some resources have a "destroy before create" policy, which shows up in the result: when you apply a change to one such resource, it ends up showing 1 resource destroyed and 1 created. When this occurs alongside some non-"destroy before create" resources, you end up with output like what you mentioned above. A sketch of the related lifecycle setting follows.
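As an illustration, the replacement order can be flipped per resource with a lifecycle block; a minimal sketch on a hypothetical resource:

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t2.micro"

  lifecycle {
    # Changes that force replacement otherwise destroy first and create
    # second, which the plan reports as 1 to destroy and 1 to add.
    create_before_destroy = true
  }
}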
Did you comment out any of the resources in the Terraform file before triggering terraform apply?
If yes, please check that, as commenting out resources in an existing Terraform file will result in Terraform destroying those resources.
I have been using Terraform for quite a long time, and this is not intended behaviour. It looks like something changed in between plan and apply.
But what you can do is save the plan in a file using
terraform plan -out plan.tfplan
and then deploy using the same file:
terraform apply plan.tfplan
