Is it possible to keep secrets out of state? - terraform

For example, could I reference the password as an environment variable? Even if I did that, it would still be stored in state, right?
# Configure the MySQL provider
provider "mysql" {
  endpoint = "my-database.example.com:3306"
  username = "app-user"
  password = "app-password"
}

State snapshots include only the results of resource, data, and output blocks, so that Terraform can compare these with the configuration when creating a plan.
The arguments inside a provider block are not saved in state snapshots, because Terraform only needs the current arguments for the provider configuration, and never needs to compare with the previous.
Even though the provider arguments are not included in the state, it's best to keep specific credentials out of your configuration. Providers tend to offer arguments for credentials as a last resort for unusual situations, but should also offer other ways to provide credentials. For some providers there is some existing standard way to pass credentials, such as the AWS provider using the same credentials mechanisms as the AWS CLI. Other providers define their own mechanisms, such as environment variables.
For the MySQL provider in particular, we should set endpoint in the configuration because that describes what Terraform is managing, but we should use environment variables to specify who is running Terraform. We can use the MYSQL_USERNAME and MYSQL_PASSWORD environment variables to specify the credentials for the individual or system that is running Terraform.
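For example, a minimal sketch (the values here are placeholders), leaving only the endpoint in the provider block:
$ export MYSQL_USERNAME="app-user"
$ export MYSQL_PASSWORD="app-password"
$ terraform plan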
A special exception to this is when Terraform itself is the one responsible for managing the credentials. In that case, the resource that provisioned the credentials will have its data (including the password) stored in the state. There is no way to avoid that because otherwise it would not be possible to use the password elsewhere in the configuration.
Terraform configurations that manage credentials (rather than just using them) should ideally be separated from other Terraform configurations, with their state snapshots stored in a location where they can be encrypted at rest and accessible only to the individuals or systems that will run Terraform against those configurations. In that case, treat the state snapshot itself as a secret.

No, it's not possible. Your best option is to use a safe, encrypted remote backend such as S3 + DynamoDB to keep your state files. I've also read about people using git-crypt, but I've never tried it myself.
That said, you can keep secrets out of your source code by using environment variables for inputs.
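As a rough sketch of that backend setup (the bucket and table names are placeholders), the s3 backend supports encryption at rest and DynamoDB-based state locking:
terraform {
  backend "s3" {
    bucket         = "my-state-bucket"       # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-locks"       # placeholder table used for state locking
  }
}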

Related

How to hide Terraform "artifactory" backend credentials?

Terraform 1.0.x
It's my first time using an artifactory backend to store my state files. In my case it's a Nexus repository, and I followed this article to set up the repository.
I have the following
terraform {
  backend "artifactory" {
    # URL of Nexus-OSS repository
    url = "http://x.x.x:8081/repository/"
    # Repository name (must be terraform)
    repo = "terraform"
    # Unique path for this particular plan
    subpath = "exa-v30-01"
    # Nexus-OSS creds (must have r/w privs)
    username = "user"
    password = "password"
  }
}
Since the backend configuration does not accept variables for the username and password key/value pairs, how can I hide the credentials so they're not in plain sight when I store my files in our Git repo?
Check out the "Partial Configuration" section of the Backend Configuration documentation. You have three options:
1. Specify the credentials in a backend config file that isn't kept in version control, and pass the -backend-config=PATH option when you run terraform init (see the sketch after this list).
2. Specify the credentials on the command line using the -backend-config="KEY=VALUE" option when you run terraform init (in this case, you would run terraform init -backend-config="username=user" -backend-config="password=password").
3. Specify them interactively. If you simply don't include them in the backend block and don't provide a file or CLI option for them, Terraform should prompt you to type them in on the command line.
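For the first option, a minimal sketch (the file name and values here are placeholders):
# backend.hcl -- kept out of version control
username = "user"
password = "password"

$ terraform init -backend-config=backend.hcl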
For settings related to authentication or identifying the current user running Terraform, it's typically best to leave those unconfigured in the Terraform configuration and use the relevant system's normal out-of-band mechanisms for passing credentials.
For example, the s3 backend supports all of the same credentials sources that the AWS CLI does, so typically we just configure the AWS CLI with suitable credentials and let Terraform's backend pick up the same settings.
For systems that don't have a standard way to configure credentials out of band, the backends usually support environment variables as a Terraform-specific replacement. In the case of the artifactory backend it seems that it supports ARTIFACTORY_USERNAME and ARTIFACTORY_PASSWORD environment variables as the out-of-band credentials source, and so I would suggest setting those environment variables and then omitting username and password altogether in your backend configuration.
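For example (the values are placeholders):
$ export ARTIFACTORY_USERNAME="user"
$ export ARTIFACTORY_PASSWORD="password"
$ terraform init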
Note that this out-of-band credentials strategy is subtly different than using partial backend configuration. Anything you set as part of the backend configuration -- whether in a backend block in configuration or on the command line -- will be saved by Terraform into a local cache of your backend settings and into every plan file Terraform saves.
Partial backend configuration is therefore better suited to situations where the location of the state is configured systematically by some automation wrapper, and thus it's easier to set it on the command line than to generate a configuration file. In that case, it's beneficial to write out the location to the cached backend configuration so that you can be sure all future Terraform commands in that directory will use the same settings. It's not good for credentials and other sensitive information, because those can sometimes vary over time during your session and should ideally only be known temporarily in memory rather than saved as part of artifacts like the plan file.
Out-of-band mechanisms like environment variables and credentials files are handled directly by the backend itself and are not recorded directly by Terraform, and so they are a good fit for anything which is describing who is currently running Terraform, as opposed to where state snapshots will be saved.

Can I define terraform cloud credentials in environment variables instead of .terraformrc file?

I understand that I need to define terraform cloud credentials in the .terraformrc file, as explained here:
https://www.terraform.io/docs/commands/cli-config.html#credentials-1
Is there any way to avoid the .terraformrc file and set the credentials and token in environment variables instead?
PS:
Just a side question: do we have a Stack Overflow tag for Terraform Enterprise or Cloud?
The answer to this is both yes and no.
If the question is authenticating the TFE provider with environment variables, then the answer is yes. That change was made in this PR to enable TFE_TOKEN and TFE_HOSTNAME for authenticating the TFE provider as an alternative to the Terraform CLI config file. You can then interact with your TFE/Terraform Cloud with that provider and authenticating with environment variables.
If the question is authenticating TFE interactions via the Terraform CLI with environment variables, then the answer is no. TFE authentication is not among the listed environment variables for the Terraform CLI. I have also verified in a quick test that the provider authentication environment variables do not similarly function for the CLI. For that, you must use a terraform.rc, .terraformrc, or credentials.tfrc.json.
The lookup of credentials in the CLI configuration is the default way Terraform handles credentials, but you can override that behavior by configuring a credentials helper, which is an external program Terraform will run in order to obtain credentials, instead of consulting the configuration files directly.
Credentials helpers are arbitrary programs that happen to support a particular protocol over their stdin/stdout, and so they can in principle do anything, including checking environment variables. I previously wrote terraform-credentials-env as a credentials helper which does exactly that, and so configuring that helper might be sufficient to get what you needed here, or if not you could potentially use it as an example to write your own credentials helper.
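For example, a minimal sketch of wiring up that helper (this assumes the terraform-credentials-env executable is installed where Terraform discovers credentials helpers, per that project's own instructions):
# ~/.terraformrc
credentials_helper "env" {}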
Note that Terraform's model of credentials is host-oriented rather than service-oriented, so in setting this up we're configuring Terraform to use the given credentials for all services on app.terraform.io. That includes both the Terraform Cloud/Enterprise-specific remote backend and the other services that Terraform Cloud is just one implementation of, like the module registry protocol.

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file, which has AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by Terraform commands like plan or apply when they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in project configuration. It's a manual step, because as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting environment variables, all you need to do is run terraform with the secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or environment variables. If you configure the action following one of those guides, Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend, it will also automatically use the standard AWS environment variables.
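For example (a sketch; in GitHub Actions these values would come from the repository's secret store rather than being typed literally):
$ export AWS_ACCESS_KEY_ID="AKIA..."
$ export AWS_SECRET_ACCESS_KEY="..."
$ terraform plan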
If your system includes multiple AWS accounts then you may wish to review the Terraform documentation guide Multi-account AWS Architecture for some ideas on how to model that. The summary of what that guide recommends is to have a special account set aside only for your AWS users and their associated credentials, and then configure your other accounts to allow cross-account access via roles, and then you can use a single set of credentials to run Terraform but configure each instance of the AWS provider to assume the appropriate role for whatever account that provider instance should interact with.
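A rough sketch of that cross-account pattern (the alias, region, and role ARN are made-up placeholders):
# One set of user credentials; each provider instance assumes a role
# in the account it should manage.
provider "aws" {
  alias  = "production"
  region = "us-west-2"

  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/Terraform"  # placeholder ARN
  }
}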

Terraform: Add resource-specific secrets

I know that you can pass general secrets to a resource through terraform variables. Is there a way to configure secrets which change at the resource level?
Specifically, I'm using terraform as a back-end to an app which allows users to set up a server with a password. That password is different for each server. Is there some way to set something like self.password for a single instance so that it:
Is not visible in the github repo where I track the terraform files
and
Can be changed for each individual instance
Right now I'm just going to be creating terraform files like password=var.{unique_id}_password, but it feels like there should be a better way.
More detail on the use-case:
I have a web application to provision servers for users running another web app. The password for that server is set-up by my application. The password is configured right now using a set-up script that I would like to port to terraform.
The passwords change for each server because a user can set the password for their server only, and that variable should not affect other resources.
Here's a super-simplified version of the expected output when a user tries to provision a server
# new-server.tf
resource "digitalocean_droplet" "new_server" {
  name     = "new_server"
  password = "${var.get_the_password_somehow}"

  provisioner "remote-exec" {
    inline = [
      "set-password ${self.password}"
    ]
  }
}
You can use the random_password resource (from the random provider) to generate a random string.
Reference: https://www.terraform.io/docs/providers/random/r/password.html
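A minimal sketch (the resource name and length are arbitrary choices):
resource "random_password" "server" {
  length  = 24
  special = true
}

# Reference it elsewhere as random_password.server.result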
Not sure if your use case requires management or storage of the password, but that is also possible depending on your needs. I see that you are using DO for provisioning resources.
Maybe you can put HashiCorp Vault in place to manage the randomly generated passwords. I'm an AWS guy, so I would stick the password in Secrets Manager.

Make Azure storage account and container before running terraform init?

Correct me if I'm wrong: when you run terraform init, you are asked to name a storage account and container for the Terraform state.
Can these also be created automatically with Terraform?
Edit: I'm using Azure.
I usually split my terraform configurations into two parts.
The first creates a storage account with a container, tagged with something specific (tf=backend, for example). The second creates all other resources. I share a backend.tfvars between the two, and in the second one I fetch the storage account key using the Azure CLI and the previously set tag (that way I don't have to get the key and pass it manually to my second script).
You could even migrate the state of the first terraform configuration once deployed, if you don't want to rely on a local state.
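A rough sketch of that first configuration (all names here are placeholders):
resource "azurerm_resource_group" "state" {
  name     = "tfstate-rg"
  location = "West Europe"
}

resource "azurerm_storage_account" "state" {
  name                     = "tfstatestorage123"   # must be globally unique
  resource_group_name      = "${azurerm_resource_group.state.name}"
  location                 = "${azurerm_resource_group.state.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    tf = "backend"
  }
}

resource "azurerm_storage_container" "state" {
  name                 = "tfstate"
  storage_account_name = "${azurerm_storage_account.state.name}"
}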
Yes, absolutely. You would in general want an S3 bucket for each of your environments, although it's also possible to have a bucket shared across all environments and then set up access controls using bucket policies. Don't create this bucket as part of provisioning other resources, as their lifecycles will likely be different (you would want to retain the bucket for a long time and would be unlikely to want to destroy it).
What you do is define this bucket in Terraform using local state first. After it is created, you add a remote backend pointing to this bucket.
terraform {
  required_version = ">= 0.11.7"

  backend "s3" {
    bucket  = "my-state-bucket"
    key     = "s3_state_bucket"
    region  = "us-west-2"
    encrypt = "true"
  }
}
After you run terraform init, Terraform will ask if you want to migrate the local state file to S3. Answer yes, and after this completes you can delete the local state file, as it's no longer used.
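Roughly, the sequence looks like this (a sketch of the workflow described above):
$ terraform apply    # creates the state bucket using a local state file
# ...then add the backend "s3" block shown above to the configuration...
$ terraform init     # Terraform offers to migrate the local state into the bucket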
This approach allows you to break out of this chicken-and-egg situation and still manage all of your infrastructure as code, rather than creating it manually using the web console or bash scripts.
